
BigData / Apache Parquet Interview Questions

What is predicate pushdown in Parquet and how does it work end-to-end?

Predicate pushdown is the process of evaluating filter conditions as early as possible — before data reaches the compute layer — using metadata stored inside Parquet files.

End-to-end flow for WHERE amount > 1000:

  1. Reader fetches the Parquet footer and reads per-row-group statistics: min_amount, max_amount.
  2. For each row group, if max_amount ≤ 1000, the entire row group is skipped — no decompression, no I/O.
  3. For surviving row groups, the amount column chunk is decompressed and values are evaluated against > 1000 at the page level (with page-level indexes in Parquet 2.x).
  4. Only matching rows are returned to the engine.

Parquet 2.0 introduced column indexes (min/max per page, not just per row group), enabling finer-grained skipping within a row group.

Combined with partition pruning (directory skipping) and Bloom filters (equality skipping), predicate pushdown is the single most important performance feature of the Parquet format.

At what granularity does Parquet 1.x predicate pushdown (row group skipping) operate?
What feature introduced in Parquet 2.0 enables finer-grained skipping within a row group?


More Related questions...

  - What is Apache Parquet and why is it used?
  - What are the advantages of Parquet over CSV?
  - How are Parquet files structured? (Row Groups, Column Chunks, Pages)
  - What is Schema Evolution in Parquet?
  - What is Column Pruning and Projection Pushdown in Parquet?
  - When would you choose Avro over Parquet?
  - How does Parquet handle compression and encoding?
  - What is the Vectorized Reader in Spark and how does it improve Parquet performance?
  - How do you handle schema mismatches when merging multiple Parquet files?
  - If a Spark query on Parquet is slow, what optimisation steps would you take?
  - How do you load Parquet files into Snowflake?
  - What are the supported data types in Parquet?
  - How do you read and write Parquet files in PySpark?
  - How do you read and write Parquet files in Python with PyArrow?
  - What is partitioning in Parquet and how does it improve query performance?
  - What are Bloom Filters in Parquet and when should you use them?
  - What is the difference between Parquet, ORC, and Avro?
  - What is Z-ordering (Z-order clustering) and how does it help Parquet queries?
  - What is Apache Iceberg and how does it use Parquet?
  - How does DuckDB query Parquet files and what makes it fast?
  - What is the Parquet file footer and why does the reader fetch it first?
  - How does Parquet support nested data (structs, lists, maps)?
  - What is the small file problem in Parquet-based data lakes and how do you solve it?
  - What is the difference between repartition and coalesce when writing Parquet files?
  - How does AWS Athena query Parquet files in S3?
  - What is predicate pushdown in Parquet and how does it work end-to-end?
  - What are best practices for writing Parquet files in production?
  - How does Google BigQuery use Parquet-style columnar storage internally?
  - What is Delta Lake and how does it extend Parquet for ACID transactions?
  - How do you perform upserts (MERGE INTO) on Parquet-based tables in Delta Lake?