sdf_copy_to {sparklyr} | R Documentation
Copy an object into Spark, and return an R object wrapping the copied object (typically, a Spark DataFrame).
sdf_copy_to(sc, x, name, memory, repartition, overwrite, ...)

sdf_import(x, sc, name, memory, repartition, overwrite, ...)
sc
    The associated Spark connection.

x
    An R object from which a Spark DataFrame can be generated.

name
    The name to assign to the copied table in Spark.

memory
    Boolean; should the table be cached into memory?

repartition
    The number of partitions to use when distributing the table across the Spark cluster. The default (0) can be used to avoid partitioning.

overwrite
    Boolean; should a pre-existing table with this name be overwritten?

...
    Optional arguments, passed to implementing methods.
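As a sketch of how these arguments combine (the connection master and table name below are illustrative, not prescribed by this page):

```r
library(sparklyr)

# Assumes a local Spark installation; connection details are illustrative.
sc <- spark_connect(master = "local")

# Copy mtcars into Spark as "mtcars_tbl", cache it in memory,
# distribute it across 4 partitions, and replace any existing
# table of the same name.
mtcars_tbl <- sdf_copy_to(
  sc, mtcars,
  name        = "mtcars_tbl",
  memory      = TRUE,
  repartition = 4,
  overwrite   = TRUE
)
```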
sdf_copy_to is an S3 generic that, by default, dispatches to sdf_import. Package authors who would like to implement sdf_copy_to for a custom object type can accomplish this by implementing the associated method on sdf_import.
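A minimal sketch of that extension mechanism, assuming a hypothetical custom class "my_matrix" (the class name and coercion step are illustrative, not part of sparklyr):

```r
# Because sdf_copy_to() dispatches to sdf_import() by default, a package
# author only needs to provide an sdf_import() method for the custom class.
sdf_import.my_matrix <- function(x, sc, name, memory, repartition,
                                 overwrite, ...) {
  # Coerce the custom object to a plain data frame, then delegate to the
  # generic so the default import machinery handles the actual copy.
  df <- as.data.frame(unclass(x))
  sdf_import(df, sc, name, memory, repartition, overwrite, ...)
}
```

With that method registered, `sdf_copy_to(sc, obj, "tbl")` works on any object of class "my_matrix" without further changes.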
Other Spark data frames: sdf_partition, sdf_register, sdf_sample, sdf_sort
sc <- spark_connect(master = "spark://HOST:PORT")
sdf_copy_to(sc, iris)