spark_read_jdbc {sparklyr} | R Documentation
Read from a JDBC connection into a Spark DataFrame.
spark_read_jdbc(sc, name, options = list(), repartition = 0,
  memory = TRUE, overwrite = TRUE, columns = NULL, ...)
sc
    A spark_connection.

name
    The name to assign to the newly generated table.

options
    A list of strings with additional options. See http://spark.apache.org/docs/latest/sql-programming-guide.html#configuration.

repartition
    The number of partitions used to distribute the generated table. Use 0 (the default) to avoid partitioning.

memory
    Boolean; should the data be loaded eagerly into memory? (That is, should the table be cached?)

overwrite
    Boolean; overwrite the table with the given name if it already exists?

...
    Optional arguments; currently unused.

columns
    A vector of column names or a named vector of column types.
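A minimal usage sketch follows, assuming a PostgreSQL database; the driver jar path, JDBC URL, table name, and credentials are illustrative placeholders, not values prescribed by sparklyr.

library(sparklyr)

# Put the JDBC driver on Spark's driver classpath (jar path is hypothetical).
config <- spark_config()
config$`sparklyr.shell.driver-class-path` <- "/path/to/postgresql-42.7.3.jar"

sc <- spark_connect(master = "local", config = config)

# Read the (hypothetical) "flights" table over JDBC and cache it in memory.
flights_tbl <- spark_read_jdbc(
  sc,
  name = "flights",
  options = list(
    url      = "jdbc:postgresql://localhost:5432/mydb",
    driver   = "org.postgresql.Driver",
    dbtable  = "flights",
    user     = "user",
    password = "password"
  ),
  memory = TRUE
)

spark_disconnect(sc)

Set memory = FALSE to defer caching, or pass repartition = n to distribute the result across n partitions.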
Other Spark serialization routines: spark_load_table, spark_read_csv, spark_read_json, spark_read_parquet, spark_read_source, spark_read_table, spark_read_text, spark_save_table, spark_write_csv, spark_write_jdbc, spark_write_json, spark_write_parquet, spark_write_source, spark_write_table, spark_write_text