How to save a spark DataFrame as csv or txt on disk?

Before Spark 2.x, Apache Spark did not support writing CSV to disk natively.

You have 4 available solutions though:

1. You can convert your DataFrame into an RDD:


import org.apache.spark.sql.Row

// Turn each Row into one line of text, e.g. r.mkString(",") for comma-separated fields
def convertToReadableString(r: Row): String = ???

df.rdd.map{ convertToReadableString }.saveAsTextFile(filepath)

This will create a folder at filepath. Under that path you will find one part file per partition (e.g. part-00000).

What I usually do if I want to append all the partitions into a big CSV is

cat filepath/part* > mycsvfile.csv

Some will use coalesce(1, false) to create one partition from the RDD. It's usually a bad practice, since it funnels all of the data into a single partition, which can overwhelm the one executor task that has to process it.
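A minimal sketch of that approach, reusing the convertToReadableString helper from above (use with care on large data, since every row passes through one task):

df.rdd.coalesce(1, shuffle = false).map(convertToReadableString).saveAsTextFile(filepath)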

Note that df.rdd will return an RDD[Row].

2. With Spark < 2, you can use the Databricks spark-csv library:

Spark 1.4+:


df.write.format("com.databricks.spark.csv").save(filepath)
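If you also want a header row, spark-csv accepts it as a write option; a small sketch:

df.write.format("com.databricks.spark.csv").option("header", "true").save(filepath)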

Spark 1.3:


df.save(filepath, "com.databricks.spark.csv")

3. With Spark 2.x the spark-csv package is no longer needed, as CSV support is built into Spark:


df.write.format("csv").save(filepath)
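The built-in writer takes the usual options, and DataFrameWriter also offers a csv shorthand; for example:

df.write.option("header", "true").mode("overwrite").csv(filepath)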

4. You can convert to a local pandas DataFrame and use its to_csv method (PySpark only).
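A minimal PySpark sketch of that approach; note that toPandas() collects the whole DataFrame to the driver, so it only works for data that fits in driver memory:

# pulls all rows to the driver, then writes a single local CSV
df.toPandas().to_csv("mycsvfile.csv", index=False)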

Note: Solutions 1, 2 and 3 will result in CSV format files (part-*) generated by the underlying Hadoop API that Spark calls when you invoke save. You will have one part- file per partition.

Saving as a txt file


bank.rdd.repartition(1).saveAsTextFile("/tmp/df2.txt")

Note: bank is a DataFrame.
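Also note that saveAsTextFile calls toString on each Row, which yields bracketed output like [value1,value2]. A small sketch, assuming you want delimiter-separated fields instead:

bank.rdd.map(_.mkString("\t")).repartition(1).saveAsTextFile("/tmp/df2.txt")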

Original sources:

https://stackoverflow.com/questions/33174443/how-to-save-a-spark-dataframe-as-csv-on-disk

https://community.hortonworks.com/questions/42838/storage-dataframe-as-textfile-in-hdfs.html
