How-to: Use Parquet with Impala, Hive, Pig, and MapReduce

Source: Cloudera Blog
The CDH software stack lets you use your tool of choice with the Parquet file format, offering the benefits of columnar storage at each phase of data processing.
An open source project co-founded by Twitter and Cloudera, Parquet was designed from the ground up as a state-of-the-art, general-purpose, columnar file format for the Apache Hadoop ecosystem. In particular, Parquet has several features that make it highly suited to use with Cloudera Impala for data warehouse-style operations:
  • Columnar storage layout: A query can examine and perform calculations on all values for a column while reading only a small fraction of the data from a data file or table.
  • Flexible compression options: The data can be compressed with any of several codecs. Different data files can be compressed differently. The compression is transparent to applications that read the data files.
  • Innovative encoding schemes: Sequences of identical, similar, or related data values can be represented in ways that save disk space and memory, yet require little effort to decode. The encoding schemes provide an extra level of space savings beyond the overall compression for each data file.
  • Large file size: The layout of Parquet data files is optimized for queries that process large volumes of data, with individual files in the multi-megabyte or even gigabyte range.
Impala can create Parquet tables, insert data into them, convert data from other file formats to Parquet, and then perform SQL queries on the resulting data files. Parquet tables created by Impala can be accessed by Apache Hive, and vice versa.
That said, the CDH software stack lets you use the tool of your choice with the Parquet file format, for each phase of data processing. For example, you can read and write Parquet files using Apache Pig and MapReduce jobs. You can convert, transform, and query Parquet tables through Impala and Hive. And you can interchange data files between all of those components — including ones external to CDH, such as Cascading and Apache Tajo.
In this blog post, you will learn the most important principles involved.

Using Parquet Tables with Impala

Impala can create tables that use Parquet data files; insert data into those tables, converting the data into Parquet format; and query Parquet data files produced by Impala or by other components. The only syntax required is the STORED AS PARQUET clause on the CREATE TABLE statement. After that, all SELECT, INSERT, and other statements recognize the Parquet format automatically. For example, a session in the impala-shell interpreter might look as follows:
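A minimal sketch of such a session (the table, column, and host names here are illustrative):

```
[localhost:21000] > create table parquet_table (x int, y string) stored as parquet;
[localhost:21000] > insert into parquet_table select x, y from text_table;
[localhost:21000] > select y from parquet_table where x between 70 and 100;
```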

Once you create a Parquet table this way in Impala, you can query it or insert into it through either Impala or Apache Hive.
Remember that Parquet format is optimized for working with large data files, typically 1GB each. Avoid using the INSERT ... VALUES syntax, or partitioning the table at too granular a level, if that would produce a large number of small files that cannot take advantage of the Parquet optimizations for large data chunks.
Inserting data into a partitioned Impala table can be a memory-intensive operation, because each data file requires a 1GB memory buffer to hold the data before being written. Such inserts can also exceed HDFS limits on simultaneous open files, because each node could potentially write to a separate data file for each partition, all at the same time. Consider splitting up such insert operations into one INSERT statement per partition.
For complete instructions and examples, see the Parquet section in the Impala documentation.

Using Parquet Tables in Hive

To create a table named PARQUET_TABLE that uses the Parquet format, you would use a command like the following, substituting your own table name, column names, and data types:
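A sketch with illustrative names; note that the STORED AS PARQUET shorthand shown here is supported in newer Hive releases, while older Hive versions require spelling out the Parquet SerDe and input/output format classes explicitly:

```
hive> CREATE TABLE parquet_table (x INT, y STRING)
    > STORED AS PARQUET;
```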

Note: Once you create a Parquet table this way in Hive, you can query it or insert into it through either Impala or Hive. Before the first time you access a newly created Hive table through Impala, issue a one-time INVALIDATE METADATA statement in the impala-shell interpreter to make Impala aware of the new table.
If the table will be populated with data files generated outside of Impala and Hive, it is often useful to create the table as an external table pointing to the location where the files will be created:
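For example, with an illustrative HDFS location:

```
hive> CREATE EXTERNAL TABLE parquet_table (x INT, y STRING)
    > STORED AS PARQUET
    > LOCATION '/user/etl/parquet_data';
```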

To populate the table with an INSERT statement, and to read the table with a SELECT statement, see the Impala documentation for Parquet.
Select the compression to use when writing data with the parquet.compression property, for example:
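For example, to write Snappy-compressed files (table names are illustrative):

```
hive> SET parquet.compression=SNAPPY;
hive> INSERT OVERWRITE TABLE parquet_table SELECT * FROM text_table;
```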

The valid options for compression are:
  • UNCOMPRESSED
  • GZIP
  • SNAPPY

Using Parquet Files in Pig

Reading Parquet Files in Pig
Assuming the external table was created and populated with Impala or Hive as described above, the Pig instruction to read the data is:
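A sketch using the parquet.pig.ParquetLoader class, with an illustrative HDFS path (the AS clause is optional, since the loader can infer the schema from the files):

```
grunt> A = LOAD '/user/etl/parquet_data' USING parquet.pig.ParquetLoader AS (x: int, y: chararray);
grunt> DUMP A;
```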

Writing Parquet Files in Pig
Create and populate a Parquet file with the ParquetStorer class:
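For example (the output path is illustrative):

```
grunt> STORE A INTO '/user/etl/parquet_out' USING parquet.pig.ParquetStorer;
```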

There are three compression options: uncompressed, snappy, and gzip. The default is snappy. You can specify one of them once before the first store instruction in a Pig script:
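For example, to switch to gzip:

```
grunt> SET parquet.compression gzip;
```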

Note that with CDH 4.5, you must add Thrift to Pig’s additional JAR files:
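A sketch, assuming the shell variable THRIFTJAR already holds the path to the Thrift JAR:

```shell
# Make the Thrift JAR visible to Pig via its additional JARs list
export PIG_OPTS="-Dpig.additional.jars=$THRIFTJAR"
```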

You can find Thrift as follows:
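One way to locate it from the shell, checking for a parcel-based install first and falling back to a package-based layout (paths can differ across CDH installations):

```shell
# Pick the CDH base directory depending on the install type
if [ -e /opt/cloudera/parcels/CDH ]; then
  CDH_BASE=/opt/cloudera/parcels/CDH   # parcel-based install
else
  CDH_BASE=/usr                        # package-based install
fi
# Take the first matching Thrift JAR shipped with Hive
THRIFTJAR=$(ls "$CDH_BASE"/lib/hive/lib/libthrift*jar 2>/dev/null | head -1)
```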

To use a Pig action involving Parquet files with Apache Oozie, add Thrift to the Oozie sharelib:
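For example, assuming the default sharelib layout under the Oozie user's HDFS home directory:

```
hdfs dfs -put $THRIFTJAR share/lib/pig
```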

Using Parquet Files in MapReduce

MapReduce needs Thrift in its CLASSPATH and in libjars to access Parquet files. It also needs parquet-format in libjars. Perform the following setup before running MapReduce jobs that access Parquet data files:
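A sketch of that setup, assuming THRIFTJAR was located as in the Pig section and using an illustrative path for the parquet-format JAR:

```shell
# Collect the JARs the job must ship, then mirror them onto the local classpath
export LIBJARS=$THRIFTJAR,$(ls /usr/lib/parquet/lib/parquet-format*.jar 2>/dev/null | head -1)
export HADOOP_CLASSPATH=$(echo "$LIBJARS" | sed 's/,/:/g')
# Then pass the JARs to each job, e.g. (JAR, class, and path names illustrative):
#   hadoop jar my-job.jar MyJobClass -libjars $LIBJARS input_path output_path
```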

Reading Parquet Files in MapReduce
Taking advantage of the Example helper classes in the Parquet JAR files, a simple map-only MapReduce job that reads Parquet files can use the ExampleInputFormat class and the Group value class. There is nothing special about the reduce phase when using Parquet files; the following example demonstrates how to read a Parquet file in a MapReduce job. (Lines pertaining to Parquet are highlighted.)
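A condensed sketch of such a map-only job; the ExampleInputFormat and Group classes come from the parquet-mr example package, while the job and class names here are otherwise illustrative:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import parquet.example.data.Group;
import parquet.hadoop.example.ExampleInputFormat;

public class TestReadParquet {
  // Map-only job: each input record arrives as a Parquet Group; emit it as text
  public static class ReadRequestMap
      extends Mapper<LongWritable, Group, Text, Text> {
    @Override
    public void map(LongWritable key, Group value, Context context)
        throws java.io.IOException, InterruptedException {
      context.write(null, new Text(value.toString()));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = new Job(new Configuration());
    job.setJarByClass(TestReadParquet.class);
    job.setMapperClass(ReadRequestMap.class);
    job.setNumReduceTasks(0);                           // map-only, no reduce phase
    job.setInputFormatClass(ExampleInputFormat.class);  // Parquet-specific input
    job.setOutputFormatClass(TextOutputFormat.class);
    ExampleInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```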

Writing Parquet Files in MapReduce
When writing Parquet files you will need to provide a schema. The schema can be specified in the run method of the job before submitting it; for example:
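A sketch (the message type definition is illustrative):

```java
// In run()/main(), before job submission.
// Assumes: import parquet.hadoop.example.ExampleOutputFormat;
//          import parquet.schema.MessageTypeParser;
String writeSchema = "message example {\n" +
    "  required int32 x;\n" +
    "  required binary y;\n" +
    "}";
ExampleOutputFormat.setSchema(job,
    MessageTypeParser.parseMessageType(writeSchema));
```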

or it can be extracted from the input file(s) if they are in Parquet format:
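A sketch that reads the schema out of the footer of an existing Parquet file:

```java
// Assumes: import org.apache.hadoop.fs.Path;
//          import parquet.hadoop.ParquetFileReader;
//          import parquet.hadoop.metadata.ParquetMetadata;
//          import parquet.schema.MessageType;
ParquetMetadata footer =
    ParquetFileReader.readFooter(getConf(), new Path(args[0]));
MessageType schema = footer.getFileMetaData().getSchema();
ExampleOutputFormat.setSchema(job, schema);
```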

Records can then be written in the mapper by composing a Group as value using the Example classes and no key:
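A sketch of such a mapper; the field names must match the schema registered on the job, and the values appended here are illustrative:

```java
public static class WriteMapper
    extends Mapper<LongWritable, Text, Void, Group> {
  private SimpleGroupFactory factory;

  @Override
  protected void setup(Context context) {
    // Build Groups against the schema stored in the job configuration
    factory = new SimpleGroupFactory(
        GroupWriteSupport.getSchema(context.getConfiguration()));
  }

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws java.io.IOException, InterruptedException {
    Group group = factory.newGroup()
        .append("x", 1)
        .append("y", value.toString());
    context.write(null, group);   // no key, only the Group value
  }
}
```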

You can set the compression before submitting the job with:
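For example (where codec is one of the CompressionCodecName constants listed below):

```java
ExampleOutputFormat.setCompression(job, codec);
```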

…using one of the following codecs:
  • CompressionCodecName.UNCOMPRESSED
  • CompressionCodecName.SNAPPY
  • CompressionCodecName.GZIP

Parquet File Interoperability

Impala has included Parquet support from the beginning, using its own high-performance code written in C++ to read and write the Parquet files. The Parquet JARs for use with Hive, Pig, and MapReduce are available with CDH 4.5 and higher. Using the Java-based Parquet implementation on a CDH release prior to CDH 4.5 is not supported.
A Parquet table created by Hive can typically be accessed by Impala 1.1.1 and higher with no changes, and vice versa. Prior to Impala 1.1.1, when Hive support for Parquet was not available, Impala wrote a dummy SerDes class name into each data file. These older Impala data files require a one-time ALTER TABLE statement to update the metadata for the SerDes class name before they can be used with Hive. (See the Impala Release Notes for details.)
A Parquet file written by Hive, Impala, Pig, or MapReduce can be read by any of the others. Different defaults for file and block sizes, compression and encoding settings, and so on might cause performance differences depending on which component writes or reads the data files. For example, Impala typically sets the HDFS block size to 1GB and divides the data files into 1GB chunks, so that each I/O request reads an entire data file.
There may be limitations in a particular release. The following are known limitations in CDH 4:
  • The TIMESTAMP data type in Parquet files is not supported in Hive, Pig, or MapReduce in CDH 4. Attempting to read a Parquet table created with Impala that includes a TIMESTAMP column will fail.
  • At the time of writing, Parquet had not been tested with HCatalog. Without HCatalog, Pig cannot correctly read dynamically partitioned tables; that is true for all file formats.
  • Currently, Impala does not support table columns using nested data types or composite data types such as maps, structs, or arrays. Any Parquet data files that include such types cannot be queried through Impala.


You can find full examples of Java code at the Cloudera Parquet examples GitHub repository:
  • One example demonstrates the “identity” transform: it reads any Parquet data file and writes a new file with exactly the same content.
  • Another example reads a Parquet data file and produces a new text file in CSV format with the same content.

