
Getting started with NextGen MapReduce (single node) in easy steps

Without going into a detailed explanation of what-is-what (that is due for another blog entry), here are simple steps to get started with MRv2 (next-generation MapReduce) on a single node. More details about MRv2 can be found here. So, here are the steps:

1) Download the Hadoop 2.x release here.

2) Extract it to a folder (let's call it $HADOOP_HOME).

3) Add the following to .bashrc in the home folder.

export HADOOP_HOME=/home/bigdata/Installations/hadoop-2.7.2
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
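
After saving .bashrc, reload it in the current shell and run a quick sanity check (assuming the install path above matches your actual extraction folder):

source ~/.bashrc
hadoop version   # should print the Hadoop 2.x version you just extracted
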
4) Create the namenode and datanode folders under the $HADOOP_HOME folder.
mkdir -p $HADOOP_HOME/yarn/yarn_data/hdfs/namenode
mkdir -p $HADOOP_HOME/yarn/yarn_data/hdfs/datanode
5) Create the following configuration files in the $HADOOP_HOME/etc/hadoop folder (the properties shown for each file go inside its <configuration> root element).
etc/hadoop/yarn-site.xml

   <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
   </property>
   <property>
      <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
   </property>

etc/hadoop/core-site.xml
   <property>
       <name>fs.defaultFS</name>
       <value>hdfs://localhost:8020</value>
   </property>
etc/hadoop/hdfs-site.xml
   <property>
      <name>dfs.replication</name>
      <value>1</value>
   </property>
   <property>
      <name>dfs.namenode.name.dir</name>
      <value>file:/home/bigdata/Installations/hadoop-2.7.2/yarn/yarn_data/hdfs/namenode</value>
   </property>
   <property>
      <name>dfs.datanode.data.dir</name>
      <value>file:/home/bigdata/Installations/hadoop-2.7.2/yarn/yarn_data/hdfs/datanode</value>
   </property>
etc/hadoop/mapred-site.xml
   <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
   </property>
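
For example, a complete mapred-site.xml, including the <configuration> wrapper, could be written from the shell like this (just a sketch; editing the file directly works equally well):

cat > $HADOOP_HOME/etc/hadoop/mapred-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
   <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
   </property>
</configuration>
EOF
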
6) Format the NameNode
hdfs namenode -format   # "hadoop namenode -format" still works in 2.x but is deprecated
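
If the format succeeds, the configured namenode folder should now contain a current/ subdirectory holding the fsimage and VERSION files; a quick way to confirm:

ls $HADOOP_HOME/yarn/yarn_data/hdfs/namenode/current
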
7) Start the Hadoop daemons
hadoop-daemon.sh start namenode
hadoop-daemon.sh start datanode
hadoop-daemon.sh start secondarynamenode
yarn-daemon.sh start resourcemanager
yarn-daemon.sh start nodemanager
mr-jobhistory-daemon.sh start historyserver
8) Time to check whether the installation has been successful.

    a) Check the log files in the $HADOOP_HOME/logs folder for any errors.

    b) The following web consoles should come up (see also the reachability check sketched after the jps output below).

http://localhost:50070/ for NameNode
http://localhost:8088/cluster for ResourceManager
http://localhost:19888/jobhistory for Job History Server
    c) Run the jps command to make sure that the daemons are running (the process IDs will differ on your machine).
2234 Jps
1989 ResourceManager
2023 NodeManager
1856 DataNode
2060 JobHistoryServer
1793 NameNode
2049 SecondaryNameNode
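
As a complement to the web consoles listed in (b), here is a quick reachability check from the command line (a sketch, assuming curl is installed):

# Each request should print 200 if the corresponding daemon is up
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:50070/
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8088/cluster
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:19888/jobhistory
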
9) Create a file and copy it to HDFS
mkdir in
vi in/file                        # put the two sample lines below into the file
Hadoop is fast
Hadoop is cool
hdfs dfs -copyFromLocal in/ /in   # "hadoop dfs" is deprecated in 2.x; use hdfs dfs
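
A quick check that the folder and file actually landed in HDFS:

hdfs dfs -ls /in
hdfs dfs -cat /in/file
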
10) Run the example job.
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /in /out
11) Verify through the NameNode web console (http://localhost:50070/dfshealth.jsp) that the /out folder has been created with the proper contents.
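
The result can also be read straight from the command line; with a single reducer, wordcount typically writes its output to a part-r-00000 file under the output folder (adjust the name if your run differs):

hdfs dfs -cat /out/part-r-00000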

12) Stop the daemons once the job has completed successfully.

hadoop-daemon.sh stop namenode
hadoop-daemon.sh stop datanode
hadoop-daemon.sh stop secondarynamenode
yarn-daemon.sh stop resourcemanager
yarn-daemon.sh stop nodemanager
mr-jobhistory-daemon.sh stop historyserver
Note: It is easier to put all of the Hadoop start/stop commands into a shell script and run that script instead.
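
A minimal sketch of such a wrapper script (the file name hadoop-daemons.sh is just an example); pass start or stop as the only argument:

#!/bin/bash
# hadoop-daemons.sh - start or stop all single-node daemons in one go
ACTION=${1:?usage: $0 start|stop}
hadoop-daemon.sh "$ACTION" namenode
hadoop-daemon.sh "$ACTION" datanode
hadoop-daemon.sh "$ACTION" secondarynamenode
yarn-daemon.sh "$ACTION" resourcemanager
yarn-daemon.sh "$ACTION" nodemanager
mr-jobhistory-daemon.sh "$ACTION" historyserver

Make it executable with chmod +x hadoop-daemons.sh, then run ./hadoop-daemons.sh start or ./hadoop-daemons.sh stop.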
