With the amount of data growing exponentially over the last couple of years, one of the biggest challenges has become finding the most optimal way to store various data flavors. Unlike in the (not so distant) past, when relational databases were considered the only way to go, organizations now want to perform analysis over raw data – think of social media sentiment analysis, audio/video files, and so on – which usually can't be stored in a traditional (relational) way, or storing it traditionally would require significant time and effort, which increases the overall time-to-analysis.
Another challenge was to somehow stick to a traditional approach of keeping data stored in a structured way, but without the need to design complex and time-consuming ETL workloads to move that data into the enterprise data warehouse. Additionally, what if half of the data professionals in your organization are proficient with, let's say, Python (data scientists, data engineers), and the other half (data engineers, data analysts) with SQL? Would you insist that the "Pythonists" learn SQL? Or vice versa?
Or, would you prefer a storage option that can play to the strengths of your entire data team? I have good news for you – something like this has existed since 2013, and it's called Apache Parquet!
Parquet file format in a nutshell
Before I show you the ins and outs of the Parquet file format, there are (at least) five main reasons why Parquet is considered a de facto standard for storing data nowadays:
- Data compression – by applying various encoding and compression algorithms, a Parquet file provides reduced memory consumption
- Columnar storage – this is of paramount importance in analytic workloads, where fast data read operations are the key requirement. But more on that later in the article…
- Language agnostic – as already mentioned, developers may use different programming languages to manipulate the data in a Parquet file (see the short sketch right after this list)
- Open-source format – meaning, you are not locked in with a specific vendor
- Support for complex data types
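To make the language-agnostic and compression points concrete, here is a minimal sketch of writing and reading a Parquet file from Python with pandas (the file name and sample values are made up for illustration):

```python
import pandas as pd

# Hypothetical sales data, matching the examples used throughout this article
df = pd.DataFrame({
    "Date": ["January 1st", "January 1st", "January 2nd",
             "January 2nd", "January 3rd", "January 3rd"],
    "Customer": ["Maria Adams", "Jane Smith", "John Doe",
                 "Maria Adams", "Jane Smith", "John Doe"],
    "Product": ["T-Shirt", "Ball", "T-Shirt", "Ball", "Socks", "Socks"],
    "Country": ["USA", "Canada", "USA", "USA", "Canada", "UK"],
    "Quantity": [2, 1, 1, 3, 2, 1],
})

# Write to Parquet - compression is applied out of the box (snappy by default)
df.to_parquet("sales.parquet", engine="pyarrow", compression="snappy")

# Read it back - any Parquet-aware engine (Spark, SQL engines, R, ...) can do the same
print(pd.read_parquet("sales.parquet"))
```

Nothing in this file is Python-specific – the very same sales.parquet could be queried from Spark, a SQL engine, or R.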
Row-store vs Column-store
We've already mentioned that Parquet is a column-based storage format. However, to understand the benefits of using the Parquet file format, we first need to draw the line between the row-based and column-based ways of storing data.
In traditional, row-based storage, the data is stored as a sequence of rows. Something like this:
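For illustration, imagine a simple Sales table along these lines (the values are hypothetical, chosen to match the questions below):

| Date | Customer | Product | Country | Quantity |
|------|----------|---------|---------|----------|
| January 1st | Maria Adams | T-Shirt | USA | 2 |
| January 1st | Jane Smith | Ball | Canada | 1 |
| January 2nd | John Doe | T-Shirt | USA | 1 |
| January 2nd | Maria Adams | Ball | USA | 3 |
| January 3rd | Jane Smith | Socks | Canada | 2 |
| January 3rd | John Doe | Socks | UK | 1 |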
Now, when we are talking about OLAP scenarios, some of the common questions that your users may ask are:
- How many balls did we sell?
- How many users from the USA bought a T-shirt?
- What is the total amount spent by customer Maria Adams?
- How many sales did we have on January 2nd?
To be able to answer any of these questions, the engine must scan each row from the beginning to the very end! So, to answer the question of how many users from the USA bought a T-shirt, the engine has to do something like this:

Essentially, we need the information from just two columns: Product (T-Shirts) and Country (USA), but the engine will scan all 5 columns! This is not the most efficient solution – I think we can agree on that…
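As a toy sketch of that row-by-row work (plain Python, using the hypothetical rows from the table above), the engine effectively does this:

```python
# Row store: every full row is touched, even though only
# the Product and Country fields actually matter for this question
rows = [
    ("January 1st", "Maria Adams", "T-Shirt", "USA", 2),
    ("January 1st", "Jane Smith", "Ball", "Canada", 1),
    ("January 2nd", "John Doe", "T-Shirt", "USA", 1),
    ("January 2nd", "Maria Adams", "Ball", "USA", 3),
    ("January 3rd", "Jane Smith", "Socks", "Canada", 2),
    ("January 3rd", "John Doe", "Socks", "UK", 1),
]

count = sum(
    1
    for (date, customer, product, country, quantity) in rows
    if product == "T-Shirt" and country == "USA"
)
print(count)  # 2
```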
Column store
Let's now examine how the column store works. As you may assume, the approach is 180 degrees different:

In this case, each column is a separate entity – meaning, each column is physically separated from the other columns! Going back to our previous business question: the engine can now scan only those columns that are needed by the query (Product and Country), while skipping the unnecessary columns. And, in most cases, this should improve the performance of analytical queries.
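With Parquet specifically, you can see this column pruning in action by asking the reader for only the columns you need – a sketch, reusing the hypothetical sales.parquet file from the earlier example:

```python
import pandas as pd

# Only the Product and Country column chunks are read from disk;
# the Date, Customer, and Quantity columns are never touched
df = pd.read_parquet("sales.parquet", columns=["Product", "Country"])
print(df)
```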
Okay, that's nice, but the column store existed before Parquet, and it still exists outside of Parquet as well. So, what is so special about the Parquet format?
Parquet is a columnar format that stores the data in row groups
Wait, what?! Wasn't it complicated enough even before this? Don't worry, it's much easier than it sounds 🙂
Let's go back to our previous example and depict how Parquet will store this same chunk of data:

Let's stop for a moment and explain the illustration above, as this is exactly the structure of the Parquet file (some additional things were deliberately omitted, but we will get to them soon). Columns are still stored as separate units, but Parquet introduces an additional structure, called a row group.
Why is this additional structure super important?
You'll need to wait a bit for the answer :). In OLAP scenarios, we are mainly concerned with two concepts: projection and predicate(s). Projection refers to the SELECT statement in the SQL language – which columns are needed by the query. Back to our previous example, we need only the Product and Country columns, so the engine can skip scanning the remaining ones.
Predicate(s) refer to the WHERE clause in the SQL language – which rows satisfy the criteria defined in the query. In our case, we are interested in T-Shirts only, so the engine can completely skip scanning Row group 2, where all the values in the Product column equal Socks!
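Here is a sketch of both concepts with PyArrow, again using the hypothetical sales.parquet file: the columns argument is the projection, and the filters argument is the predicate, which the reader can push down and combine with row-group statistics to skip entire row groups:

```python
import pyarrow.parquet as pq

# Projection: read only the Product and Country columns.
# Predicate: PyArrow can use row-group statistics (min/max) to skip
# row groups that cannot possibly contain "T-Shirt" at all.
table = pq.read_table(
    "sales.parquet",
    columns=["Product", "Country"],
    filters=[("Product", "==", "T-Shirt"), ("Country", "==", "USA")],
)
print(table.to_pandas())
```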

Let's quickly stop here, as I want you to realize the difference between the various types of storage in terms of the work that needs to be performed by the engine:
- Row store – the engine needs to scan all 5 columns and all 6 rows
- Column store – the engine needs to scan 2 columns and all 6 rows
- Column store with row groups – the engine needs to scan 2 columns and 4 rows
Obviously, this is an oversimplified example, with only 6 rows and 5 columns, where you will definitely not see any difference in performance between these three storage options. However, in real life, when you're dealing with much larger amounts of data, the difference becomes more evident.
Now, the fair question would be: how does Parquet "know" which row group to skip or scan?
Parquet files contain metadata
That's right – every Parquet file contains "data about data" – information such as the minimum and maximum values in a specific column within a certain row group. Furthermore, every Parquet file contains a footer, which keeps the information about the format version, schema information, column metadata, and so on. You can find more details about Parquet metadata types here.
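You can peek at this metadata yourself – here is a sketch using PyArrow against the hypothetical sales.parquet file:

```python
import pyarrow.parquet as pq

parquet_file = pq.ParquetFile("sales.parquet")

# The footer: format version, schema, number of row groups, etc.
print(parquet_file.metadata)
print(parquet_file.schema)

# Per-row-group, per-column statistics - the "data about data"
# the engine uses to decide which row groups it can skip
for i in range(parquet_file.metadata.num_row_groups):
    row_group = parquet_file.metadata.row_group(i)
    for j in range(row_group.num_columns):
        column = row_group.column(j)
        stats = column.statistics
        if stats is not None:
            print(column.path_in_schema, stats.min, stats.max)
```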
Important: In order to optimize performance and eliminate unnecessary data structures (row groups and columns), the engine first needs to "get acquainted" with the data, so it starts by reading the metadata. This is not a slow operation, but it still requires a certain amount of time. Therefore, if you're querying data from multiple small Parquet files, query performance can degrade, because the engine will have to read the metadata from each file. So, you'd be better off merging multiple smaller files into one bigger file (but still not too big :)…
I hear you, I hear you: Nikola, what is "small" and what is "big"? Unfortunately, there is no single "golden" number here, but for example, Microsoft Azure Synapse Analytics recommends that an individual Parquet file should be at least a few hundred MBs in size.
What else is in there?
Here’s a simplified, high-level illustration of the Parquet file format:

Can it be better than this? Yes, with data compression
Okay, we've explained how skipping the scan of unnecessary data structures (row groups and columns) may benefit your queries and improve overall performance. But it's not only about that – remember when I told you at the very beginning that one of the main advantages of the Parquet format is the reduced memory footprint of the file? This is achieved by applying various compression algorithms.
I've already written about the various data compression types in Power BI (and the Tabular model in general) here, so it might be a good idea to start by reading that article.
There are two main encoding types that enable Parquet to compress the data and achieve astonishing savings in space:
- Dictionary encoding – Parquet creates a dictionary of the distinct values in the column, and afterwards replaces the "real" values with index values from the dictionary. Going back to our example, this process looks something like this:

You might think: why all this overhead when product names are quite short, right? Okay, but now imagine that you store the detailed description of the product, such as "Long arm T-Shirt with application on the neck". And now imagine that you have this product sold a million times… Yeah, instead of repeating the "Long arm…bla bla" value a million times, Parquet will store only the index value (an integer instead of text).
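As a toy sketch of the idea (plain Python, not Parquet's actual implementation):

```python
# Hypothetical Product column values
values = ["T-Shirt", "Ball", "T-Shirt", "Ball", "Socks", "Socks"]

dictionary = []   # distinct values, in order of first appearance
indexes = []      # each value replaced by its position in the dictionary

for value in values:
    if value not in dictionary:
        dictionary.append(value)
    indexes.append(dictionary.index(value))

print(dictionary)  # ['T-Shirt', 'Ball', 'Socks']
print(indexes)     # [0, 1, 0, 1, 2, 2]
```

Each long string is stored once in the dictionary, while the column itself holds only small integers. The other main encoding, run-length encoding (RLE), pushes this even further by collapsing runs of repeated values – such as a long stretch of identical indexes – into a single value/count pair.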
Can it be better than THIS?! Yes, with the Delta Lake file format
Okay, what the heck is a Delta Lake format now?! This is an article about Parquet, right?
So, to put it in plain English: Delta Lake is nothing else but the Parquet format "on steroids". When I say "steroids", the main one is the versioning of Parquet files. It also stores a transaction log to enable tracking of all changes applied to the Parquet files. This is also known as ACID-compliant transactions.
Since it supports not only ACID transactions, but also time travel (rollbacks, audit trails, and so on) and DML (Data Manipulation Language) statements such as INSERT, UPDATE, and DELETE, you won't be wrong if you think of Delta Lake as a "data warehouse on the data lake" (who said: Lakehouse 😉😉😉). Analyzing the pros and cons of the Lakehouse concept is out of the scope of this article, but if you're curious to go deeper into it, I suggest you read this article from Databricks.
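To make the versioning point tangible, here is a minimal sketch using the open-source deltalake Python package (delta-rs) – the package choice and the table path are assumptions for illustration; Spark's Delta Lake APIs expose the same capabilities:

```python
import pandas as pd
from deltalake import DeltaTable, write_deltalake

df = pd.DataFrame({"Product": ["T-Shirt", "Ball"], "Quantity": [2, 1]})

# Version 0: initial write - Parquet files plus a transaction log
write_deltalake("sales_delta", df)

# Version 1: append more data; the change is recorded in the log
more = pd.DataFrame({"Product": ["Socks"], "Quantity": [3]})
write_deltalake("sales_delta", more, mode="append")

# Time travel: read the table as of any earlier version
print(DeltaTable("sales_delta", version=0).to_pandas())  # only version 0 rows
print(DeltaTable("sales_delta").history())               # full audit trail
```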
Conclusion
We evolve! And, same as us, data is evolving too. So, new flavors of data have required new ways of storing it. The Parquet file format is one of the most efficient storage options in the current data landscape, as it provides multiple benefits – both in terms of memory consumption, by leveraging various compression algorithms, and in terms of fast query processing, by enabling the engine to skip scanning unnecessary data.
Thanks for reading!