BigData / Apache Parquet Interview Questions
How do you read and write Parquet files in PySpark?
Spark provides first-class Parquet support via the DataFrameReader and DataFrameWriter APIs.
Read:
# Read a single file or directory of Parquet files
df = spark.read.parquet("s3://my-bucket/events/")
# With options, e.g. mergeSchema to reconcile files whose schemas differ
df = (spark.read
      .option("mergeSchema", "true")
      .parquet("hdfs:///datalake/transactions/"))
Write:
# Overwrite with Snappy compression (default)
df.write.mode("overwrite").parquet("s3://my-bucket/output/")
# Partition by date and region, use ZSTD
(df.write
   .partitionBy("date", "region")
   .option("compression", "zstd")
   .mode("append")
   .parquet("s3://my-bucket/partitioned/"))
Register as temp view for SQL:
df.createOrReplaceTempView("events")
spark.sql("SELECT date, SUM(revenue) FROM events GROUP BY date").show()
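Spark SQL can also query Parquet files in place, without registering a view first, using the parquet.`path` syntax; the path here reuses the earlier read example:

# Run SQL directly against the files; no temp view needed
spark.sql("SELECT COUNT(*) FROM parquet.`s3://my-bucket/events/`").show()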
