Install Hadoop on Ubuntu Single Node Cluster

Hadoop can be installed on a cluster of many machines in fully distributed mode or on a single machine in pseudo distributed mode.

Apart from these two modes, there is one more mode of running Hadoop: standalone (or local) mode. In standalone mode, no daemons run and everything executes in a single JVM. Standalone mode makes it easy to run and test MapReduce programs during development.

In pseudo-distributed mode, all the Hadoop daemons run on a local machine, simulating a cluster on a small scale. It is easy to set up on a single machine and is sufficient to run all components of Hadoop.

So, we will learn how to install a stable Hadoop release from the Apache Software Foundation on a single-node cluster. The installation below is done on an Ubuntu machine.

Hadoop Installation Prerequisites:

  • As Hadoop is written in Java, JDK 1.6 or later is required to install Hadoop.

If Java is not already installed on Ubuntu, please refer to the post How to Install Java on Ubuntu for installation instructions.
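A quick way to check whether a suitable JDK is already present:

```shell
$ java -version
```

If this prints version 1.6 or later, Java is ready.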

Hadoop Installation:

  • Download the latest stable version of Hadoop from the Apache release mirrors. In this example installation, the Hadoop-2.3.0 release is downloaded from here. Download the gzipped binary tarball, the hadoop-2.3.0.tar.gz file.
  • Copy the gzipped binary file into your preferred directory location for the Hadoop installation, generally /usr/lib/hadoop. If this directory does not exist, follow the instructions below to create the directory, copy the file, and extract it.
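The steps can be sketched as below (the tarball name and download location are assumptions based on the example above; adjust them to your setup):

```shell
$ sudo mkdir -p /usr/lib/hadoop                           # create the installation directory
$ sudo chown $USER:$USER /usr/lib/hadoop                  # allow the hadoop user to edit it
$ cp $HOME/Downloads/hadoop-2.3.0.tar.gz /usr/lib/hadoop  # copy the downloaded tarball
$ cd /usr/lib/hadoop
$ tar -xzf hadoop-2.3.0.tar.gz                            # extract the gzipped archive
```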

With the above commands, we have created the directory /usr/lib/hadoop and changed its ownership so it can be edited by the hadoop user, copied the downloaded file from $HOME/Downloads (the default download directory on Ubuntu) into /usr/lib/hadoop, and extracted the contents of the gzipped archive.

Browse through the directory created, hadoop-2.3.0, under the /usr/lib/hadoop folder. Below are the details of each directory under hadoop-2.3.0.

bin, sbin     — These two folders contain binary executables stored as ‘*.sh’ files, so they need to be added to the list of directories in the PATH environment variable.

etc/hadoop — Configuration directory. It contains all the config files that need to be modified for our installation.

include      —  This folder contains the ‘.h’ & ‘.hh’ header files needed by the C and C++ APIs.

lib, libexec  — These two are library folders that contain the necessary library files.

share        — This folder contains documentation and source code for current hadoop release.

Now, we need to set up Hadoop environment variables through the .bashrc profile file, so that they are picked up automatically whenever a terminal is started. Follow the instructions below to set the required environment variables.

In the .bashrc file, add the lines below at the bottom, adjusted to your Hadoop installation directory.
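As a sketch, assuming Hadoop was extracted to /usr/lib/hadoop/hadoop-2.3.0 as above, the lines appended to .bashrc might look like:

```shell
# Assumed installation path; adjust to your extraction directory
export HADOOP_HOME=/usr/lib/hadoop/hadoop-2.3.0
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
# Add both bin and sbin to PATH so the hadoop scripts run from any directory
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```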

As shown above, include both the bin & sbin directories in the list of directories in the PATH environment variable. With this setting, we can run all the .sh files from any terminal instead of changing into the Hadoop directory every time. Save & close the .bashrc file.

To verify the installation, close the terminal, open a new terminal, and enter the command below.
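One common check here is the version command:

```shell
$ hadoop version
```

If the PATH is set up correctly, this prints the Hadoop version and build details.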

If you get a message similar to the one above, then your installation is successful.

Configuring Hadoop:

  • Configuring SSH: Since Hadoop runs multiple processes on one or more machines, we need to ensure that the hadoop user can connect to each host without requiring a password. This is achieved with secure shell (SSH). Please refer to this post for SSH setup for Hadoop.
  • Set the JAVA_HOME environment variable in the ‘hadoop-env.sh’ file in Hadoop’s configuration directory etc/hadoop; the remaining environment variables in that file can remain as they are.
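For a single-node setup, the passwordless SSH configuration referenced above typically amounts to a key pair with an empty passphrase (a sketch; see the linked post for details):

```shell
$ ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa         # generate a key pair with an empty passphrase
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys  # authorize the key for localhost logins
$ chmod 600 ~/.ssh/authorized_keys
$ ssh localhost                                    # should log in without a password prompt
```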

Most of the Hadoop properties are configured through the four XML files below.

core-site.xml  — Properties common to HDFS, YARN, MapReduce, etc. are stored in this file.

hdfs-site.xml  — HDFS specific properties are stored in this file.

mapred-site.xml — MapReduce-specific properties are stored in this file. If this file is not present in HADOOP_CONF_DIR, create it by renaming the ‘mapred-site.xml.template‘ file to mapred-site.xml.

yarn-site.xml — YARN properties will be stored in this file.

All of the above are site-specific properties, applied only to a single site, but Hadoop provides default configurations as well. These are present in core-default.xml, hdfs-default.xml, mapred-default.xml and yarn-default.xml, and can be referenced from the share directory.

Update core-site.xml:

We need to set below properties in core-site.xml configuration file.
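For a pseudo-distributed setup, a minimal core-site.xml points the default file system at a local HDFS URI (port 9000 is a common choice, not mandated):

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```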

Update hdfs-site.xml:

At the minimum, we need to set below properties in hdfs-site.xml configuration file. Replace ${HADOOP_HOME} with Hadoop installation directory full path like /usr/lib/hadoop/hadoop-2.3.0.
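A minimal hdfs-site.xml for a single node sets the replication factor to 1 and the name node/data node storage directories; the subdirectory names below are assumptions, and ${HADOOP_HOME} should be replaced as described above:

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:${HADOOP_HOME}/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:${HADOOP_HOME}/hdfs/datanode</value>
  </property>
</configuration>
```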

Update mapred-site.xml:

We need to set the properties below in the mapred-site.xml configuration file.
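For MapReduce to run on YARN, the key property is the framework name:

```xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```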

Update yarn-site.xml:

We need to set the properties below in the yarn-site.xml configuration file.
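A minimal yarn-site.xml enables the MapReduce shuffle service on the Node Manager:

```xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
```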

Format Name Node:

Now it’s time to format the Name Node, to start with a fresh copy of the HDFS file system. Use the command below to format the Name Node.
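With the bin directory on the PATH, the Hadoop 2.x format command is:

```shell
$ hdfs namenode -format
```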

A successful run of the command will end with a result like the one below:

Name Node Format


Running Hadoop Daemons:

1. Run the start-dfs.sh command from a terminal to start the HDFS daemons. This will start the Name Node, Data Node and Secondary Name Node daemons in the background.

2. Run the jps command from the terminal and verify the Java processes running in the background.

3. Run the start-yarn.sh command to start the YARN daemons. This will start the Resource Manager and Node Manager services.

4. Run the jps command again from the terminal and verify that the new Java processes are running in the background.

Instead of the start-dfs.sh and start-yarn.sh commands, we can use start-all.sh to start both the HDFS and YARN daemons. Similarly, the stop-dfs.sh, stop-yarn.sh and stop-all.sh commands are available to stop these daemons.
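The four steps above, consolidated (the scripts live in sbin, which is already on the PATH):

```shell
$ start-dfs.sh    # NameNode, DataNode, SecondaryNameNode
$ jps             # verify the HDFS daemons
$ start-yarn.sh   # ResourceManager, NodeManager
$ jps             # verify the YARN daemons
```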

Successful execution of the four commands above will result in messages like the ones below:

If all the daemons started successfully, as shown in the jps output in the screen above, then the installation is successful.

Congratulations, your Hadoop & YARN installation is successful and the daemons are running fine.



About Siva

Senior Hadoop developer with 4 years of experience in designing and architecture solutions for the Big Data domain and has been involved with several complex engagements. Technical strengths include Hadoop, YARN, Mapreduce, Hive, Sqoop, Flume, Pig, HBase, Phoenix, Oozie, Falcon, Kafka, Storm, Spark, MySQL and Java.