Big Data / Apache Parquet Interview Questions

What is the small file problem in Parquet-based data lakes and how do you solve it?

The small file problem occurs when a Parquet dataset accumulates thousands of tiny files (often from streaming writes or over-partitioning). Each file requires a separate HDFS/S3 metadata operation and a separate footer read, so per-file overhead starts to dominate actual data I/O. Parquet performs best with relatively large files (commonly 128 MB to 1 GB), where each file holds a few sizeable row groups.

Effects:

  • Slow query planning — the Spark driver or HDFS NameNode must enumerate and open every file.
  • Weak statistics-based skipping — tiny row groups mean min/max statistics cover little data, so predicate pushdown prunes poorly.
  • High object-store API costs (S3 LIST + GET per file).
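
To make the overhead concrete, here is a minimal diagnostic sketch using PyArrow; the dataset path is a placeholder, and the "healthy" size threshold is a rule of thumb, not something the Parquet format mandates:

    # Minimal diagnostic sketch (PyArrow); the path is a placeholder.
    import pyarrow.fs as pafs

    fs = pafs.LocalFileSystem()  # or pafs.S3FileSystem() for an S3 data lake
    selector = pafs.FileSelector("/data/events", recursive=True)
    files = [info for info in fs.get_file_info(selector)
             if info.is_file and info.path.endswith(".parquet")]
    total_mb = sum(info.size for info in files) / 1e6
    print(f"{len(files)} files, {total_mb / max(len(files), 1):.1f} MB average")
    # Thousands of files averaging a few MB each means every query pays
    # thousands of LIST/GET and footer-read round trips before any data I/O.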

Solutions:

  1. Compaction job — periodically coalesce small files into larger ones (a fuller sketch follows this list). Avoid reading and overwriting the same path in a single job, since Spark's lazy read can race the delete; write to a staging path and swap it in:
    df = spark.read.parquet(path)
    df.coalesce(N).write.mode("overwrite").parquet(staging_path)
  2. Table-format compaction — Delta Lake's OPTIMIZE (Iceberg exposes the equivalent rewrite_data_files procedure):
    OPTIMIZE my_table;
  3. Streaming micro-batch tuning — increase the trigger interval so each micro-batch accumulates more data and writes fewer, larger files (see the trigger example after this list). Note that maxFilesPerTrigger caps how many files are read per batch, not how many are written.
  4. Hudi compaction — for merge-on-read (MOR) tables, compact the row-based log files into columnar base Parquet files.
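
A fuller version of the compaction job in item 1, as a PySpark sketch. The paths, the fixed output file count, and the staging-then-swap step are illustrative assumptions; in practice you would derive the file count from total input bytes divided by a target size such as 256 MB:

    # Compaction sketch (PySpark); paths and n_files are assumptions.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("parquet-compaction").getOrCreate()

    src = "s3://bucket/events/"          # hypothetical source dataset
    staging = "s3://bucket/events_tmp/"  # write here first, then swap into place
    n_files = 64  # assumed; in practice ~ total_input_bytes / target_file_bytes

    df = spark.read.parquet(src)
    # coalesce avoids a shuffle; use repartition(n_files) instead if you
    # need evenly sized output files.
    df.coalesce(n_files).write.mode("overwrite").parquet(staging)
    # After validating the staging output, swap it into place (rename the
    # directory, or repoint the table location in the metastore).

And the trigger tuning from item 3, assuming an existing Structured Streaming DataFrame that sinks to Parquet; the ten-minute interval is illustrative:

    # Streaming sink sketch; df_stream is an existing streaming DataFrame,
    # and the paths and interval are assumptions.
    query = (
        df_stream.writeStream
        .format("parquet")
        .option("path", "s3://bucket/events/")
        .option("checkpointLocation", "s3://bucket/_checkpoints/events/")
        .trigger(processingTime="10 minutes")  # fewer, larger micro-batch writes
        .start()
    )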

Quick-check questions:

  • What is the primary overhead caused by too many small Parquet files in a data lake?
  • Which Spark transformation consolidates many small partitions into fewer output files?


More related questions...

  • What is Apache Parquet and why is it used?
  • What are the advantages of Parquet over CSV?
  • How are Parquet files structured (Row Groups, Column Chunks, Pages)?
  • What is Schema Evolution in Parquet?
  • What is Column Pruning and Projection Pushdown in Parquet?
  • When would you choose Avro over Parquet?
  • How does Parquet handle compression and encoding?
  • What is the Vectorized Reader in Spark and how does it improve Parquet performance?
  • How do you handle schema mismatches when merging multiple Parquet files?
  • If a Spark query on Parquet is slow, what optimisation steps would you take?
  • How do you load Parquet files into Snowflake?
  • What are the supported data types in Parquet?
  • How do you read and write Parquet files in PySpark?
  • How do you read and write Parquet files in Python with PyArrow?
  • What is partitioning in Parquet and how does it improve query performance?
  • What are Bloom Filters in Parquet and when should you use them?
  • What is the difference between Parquet, ORC, and Avro?
  • What is Z-ordering (Z-order clustering) and how does it help Parquet queries?
  • What is Apache Iceberg and how does it use Parquet?
  • How does DuckDB query Parquet files and what makes it fast?
  • What is the Parquet file footer and why does the reader fetch it first?
  • How does Parquet support nested data (structs, lists, maps)?
  • What is the small file problem in Parquet-based data lakes and how do you solve it?
  • What is the difference between repartition and coalesce when writing Parquet files?
  • How does AWS Athena query Parquet files in S3?
  • What is predicate pushdown in Parquet and how does it work end-to-end?
  • What are best practices for writing Parquet files in production?
  • How does Google BigQuery use Parquet-style columnar storage internally?
  • What is Delta Lake and how does it extend Parquet for ACID transactions?
  • How do you perform upserts (MERGE INTO) on Parquet-based tables in Delta Lake?
