augment.ml_model_generalized_linear_regression | Tidying methods for Spark ML linear models |
augment.ml_model_linear_regression | Tidying methods for Spark ML linear models |
checkpoint_directory | Set/Get Spark checkpoint directory |
compile_package_jars | Compile Scala sources into a Java Archive (jar) |
connection_config | Read configuration values for a connection |
copy_to.spark_connection | Copy an R Data Frame to Spark |
download_scalac | Downloads default Scala Compilers |
ensure | Enforce Specific Structure for R Objects |
ensure_scalar_boolean | Enforce Specific Structure for R Objects |
ensure_scalar_character | Enforce Specific Structure for R Objects |
ensure_scalar_double | Enforce Specific Structure for R Objects |
ensure_scalar_integer | Enforce Specific Structure for R Objects |
find_scalac | Discover the Scala Compiler |
ft_binarizer | Feature Transformation - Binarizer |
ft_bucketizer | Feature Transformation - Bucketizer |
ft_count_vectorizer | Feature Transformation - CountVectorizer |
ft_discrete_cosine_transform | Feature Transformation - Discrete Cosine Transform (DCT) |
ft_elementwise_product | Feature Transformation - ElementwiseProduct |
ft_index_to_string | Feature Transformation - IndexToString |
ft_one_hot_encoder | Feature Transformation - OneHotEncoder |
ft_quantile_discretizer | Feature Transformation - QuantileDiscretizer |
ft_regex_tokenizer | Feature Transformation - RegexTokenizer |
ft_sql_transformer | Feature Transformation - SQLTransformer |
ft_stop_words_remover | Feature Transformation - StopWordsRemover |
ft_string_indexer | Feature Transformation - StringIndexer |
ft_tokenizer | Feature Transformation - Tokenizer |
ft_vector_assembler | Feature Transformation - VectorAssembler |
glance.ml_model_generalized_linear_regression | Tidying methods for Spark ML linear models |
glance.ml_model_linear_regression | Tidying methods for Spark ML linear models |
hive_context | Access the Spark API |
hive_context_config | Runtime configuration interface for Hive |
invoke | Invoke a Method on a JVM Object |
invoke_new | Invoke a Method on a JVM Object |
invoke_static | Invoke a Method on a JVM Object |
java_context | Access the Spark API |
livy_config | Create a Spark Configuration for Livy |
livy_service_start | Start Livy |
livy_service_stop | Stop Livy |
ml_als_factorization | Spark ML - Alternating Least Squares (ALS) Matrix Factorization |
ml_binary_classification_eval | Spark ML - Binary Classification Evaluator |
ml_classification_eval | Spark ML - Classification Evaluator |
ml_create_dummy_variables | Create Dummy Variables |
ml_decision_tree | Spark ML - Decision Trees |
ml_generalized_linear_regression | Spark ML - Generalized Linear Regression |
ml_glm_tidiers | Tidying methods for Spark ML linear models |
ml_gradient_boosted_trees | Spark ML - Gradient-Boosted Tree |
ml_kmeans | Spark ML - K-Means Clustering |
ml_lda | Spark ML - Latent Dirichlet Allocation |
ml_linear_regression | Spark ML - Linear Regression |
ml_load | Save / Load a Spark ML Model Fit |
ml_logistic_regression | Spark ML - Logistic Regression |
ml_model | Create an ML Model Object |
ml_model_data | Extracts data associated with a Spark ML model |
ml_multilayer_perceptron | Spark ML - Multilayer Perceptron |
ml_naive_bayes | Spark ML - Naive-Bayes |
ml_one_vs_rest | Spark ML - One vs Rest |
ml_options | Options for Spark ML Routines |
ml_pca | Spark ML - Principal Components Analysis |
ml_prepare_dataframe | Prepare a Spark DataFrame for Spark ML Routines |
ml_prepare_features | Pre-process the Inputs to a Spark ML Routine |
ml_prepare_inputs | Pre-process the Inputs to a Spark ML Routine |
ml_prepare_response_features_intercept | Pre-process the Inputs to a Spark ML Routine |
ml_random_forest | Spark ML - Random Forests |
ml_save | Save / Load a Spark ML Model Fit |
ml_saveload | Save / Load a Spark ML Model Fit |
ml_survival_regression | Spark ML - Survival Regression |
ml_tree_feature_importance | Spark ML - Feature Importance for Tree Models |
na.replace | Replace Missing Values in Objects |
registered_extensions | Register a Package that Implements a Spark Extension |
register_extension | Register a Package that Implements a Spark Extension |
sdf-saveload | Save / Load a Spark DataFrame |
sdf_along | Create DataFrame for along Object |
sdf_bind | Bind multiple Spark DataFrames by row and column |
sdf_bind_cols | Bind multiple Spark DataFrames by row and column |
sdf_bind_rows | Bind multiple Spark DataFrames by row and column |
sdf_broadcast | Broadcast hint |
sdf_checkpoint | Checkpoint a Spark DataFrame |
sdf_coalesce | Coalesces a Spark DataFrame |
sdf_copy_to | Copy an Object into Spark |
sdf_dim | Support for Dimension Operations |
sdf_import | Copy an Object into Spark |
sdf_last_index | Returns the last index of a Spark DataFrame |
sdf_len | Create DataFrame for Length |
sdf_load_parquet | Save / Load a Spark DataFrame |
sdf_load_table | Save / Load a Spark DataFrame |
sdf_mutate | Mutate a Spark DataFrame |
sdf_mutate_ | Mutate a Spark DataFrame |
sdf_ncol | Support for Dimension Operations |
sdf_nrow | Support for Dimension Operations |
sdf_num_partitions | Gets number of partitions of a Spark DataFrame |
sdf_partition | Partition a Spark DataFrame |
sdf_persist | Persist a Spark DataFrame |
sdf_pivot | Pivot a Spark DataFrame |
sdf_predict | Model Predictions with Spark DataFrames |
sdf_project | Project features onto principal components |
sdf_quantile | Compute (Approximate) Quantiles with a Spark DataFrame |
sdf_read_column | Read a Column from a Spark DataFrame |
sdf_register | Register a Spark DataFrame |
sdf_repartition | Repartition a Spark DataFrame |
sdf_residuals | Model Residuals |
sdf_residuals.ml_model_generalized_linear_regression | Model Residuals |
sdf_residuals.ml_model_linear_regression | Model Residuals |
sdf_sample | Randomly Sample Rows from a Spark DataFrame |
sdf_save_parquet | Save / Load a Spark DataFrame |
sdf_save_table | Save / Load a Spark DataFrame |
sdf_schema | Read the Schema of a Spark DataFrame |
sdf_separate_column | Separate a Vector Column into Scalar Columns |
sdf_seq | Create DataFrame for Range |
sdf_sort | Sort a Spark DataFrame |
sdf_with_sequential_id | Add a Sequential ID Column to a Spark DataFrame |
sdf_with_unique_id | Add a Unique ID Column to a Spark DataFrame |
spark-api | Access the Spark API |
spark-connections | Manage Spark Connections |
spark_apply | Apply an R Function in Spark |
spark_apply_log | Log Writer for Spark Apply |
spark_compilation_spec | Define a Spark Compilation Specification |
spark_config | Read Spark Configuration |
spark_connect | Manage Spark Connections |
spark_connection | Retrieve the Spark Connection Associated with an R Object |
spark_connection_is_open | Manage Spark Connections |
spark_context | Access the Spark API |
spark_context_config | Runtime configuration interface for Spark |
spark_dataframe | Retrieve a Spark DataFrame |
spark_default_compilation_spec | Default Compilation Specification for Spark Extensions |
spark_dependency | Define a Spark dependency |
spark_disconnect | Manage Spark Connections |
spark_disconnect_all | Manage Spark Connections |
spark_get_checkpoint_dir | Set/Get Spark checkpoint directory |
spark_home_set | Set the SPARK_HOME environment variable |
spark_install_sync | Helper Function to Sync the sparkinstall Project to sparklyr |
spark_jobj | Retrieve a Spark JVM Object Reference |
spark_load_table | Reads from a Spark Table into a Spark DataFrame |
spark_log | View Entries in the Spark Log |
spark_read_csv | Read a CSV file into a Spark DataFrame |
spark_read_jdbc | Read from a JDBC connection into a Spark DataFrame |
spark_read_json | Read a JSON file into a Spark DataFrame |
spark_read_parquet | Read a Parquet file into a Spark DataFrame |
spark_read_source | Read from a generic source into a Spark DataFrame |
spark_read_table | Reads from a Spark Table into a Spark DataFrame |
spark_read_text | Read a Text file into a Spark DataFrame |
spark_save_table | Saves a Spark DataFrame as a Spark table |
spark_session | Access the Spark API |
spark_set_checkpoint_dir | Set/Get Spark checkpoint directory |
spark_table_name | Generate a Table Name from Expression |
spark_version | Get the Spark Version Associated with a Spark Connection |
spark_version_from_home | Get the Spark Version Associated with a Spark Installation |
spark_web | Open the Spark web interface |
spark_write_csv | Write a Spark DataFrame to a CSV |
spark_write_jdbc | Writes a Spark DataFrame into a JDBC table |
spark_write_json | Write a Spark DataFrame to a JSON file |
spark_write_parquet | Write a Spark DataFrame to a Parquet file |
spark_write_source | Writes a Spark DataFrame into a generic source |
spark_write_table | Writes a Spark DataFrame into a Spark table |
spark_write_text | Write a Spark DataFrame to a Text file |
src_databases | Show database list |
tbl_cache | Cache a Spark Table |
tbl_change_db | Use specific database |
tbl_uncache | Uncache a Spark Table |
tidy.ml_model_generalized_linear_regression | Tidying methods for Spark ML linear models |
tidy.ml_model_linear_regression | Tidying methods for Spark ML linear models |