Understanding Hadoop 2.x Architecture and its Daemons

Before learning the concepts of Hadoop 2.x architecture, I strongly recommend reading my post on Hadoop Core Components and the internals and limitations of the Hadoop 1.x architecture. It will give you a good idea of why the Hadoop 2.x architecture was needed. We have already learnt about the basic Hadoop daemons: NameNode, Secondary NameNode, DataNode, JobTracker and TaskTracker. Continue reading “Understanding Hadoop 2.x Architecture and its Daemons”
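
By the way, a quick way to see which of these daemons are running on a node is the jps tool that ships with the JDK. The following is just a sketch; the exact list you see depends on the node's role and your Hadoop version.

```
# List the Hadoop daemon JVMs running on this node (jps ships with the JDK).
jps

# On a Hadoop 1.x master node you would typically expect NameNode,
# SecondaryNameNode and JobTracker; on a worker node, DataNode and
# TaskTracker. Hadoop 2.x replaces JobTracker/TaskTracker with the
# YARN daemons ResourceManager and NodeManager.
```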

7 Steps to Install Apache Hive with Hadoop on CentOS

Before we learn to install Apache Hive on CentOS, let me give you a brief introduction to it. Hive is basically a data warehouse tool for storing and processing the structured data residing on HDFS. Hive was originally developed by Facebook and later moved to the Apache Software Foundation, where it became the open source Apache Hive project. Continue reading “7 Steps to Install Apache Hive with Hadoop on CentOS”
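
Once the installation is done, a minimal smoke test might look like the sketch below. It assumes the hive command is on your PATH and HDFS is running; the table name pageviews is just an illustrative placeholder.

```
# Create a simple managed table and confirm Hive can see it.
hive -e "CREATE TABLE pageviews (url STRING, hits INT)
         ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';"
hive -e "SHOW TABLES;"

# Hive keeps managed table data as plain files on HDFS,
# by default under /user/hive/warehouse.
hdfs dfs -ls /user/hive/warehouse
```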

Simple explanation of Hadoop Core Components: HDFS and MapReduce

In the previous post we discussed what Hadoop is and what kinds of problems it solves. Now let’s dive into the various components of Hadoop. The Hadoop distribution as a whole provides just two core components: HDFS (the Hadoop Distributed File System) and MapReduce (a distributed batch processing framework). A complete set of machines running HDFS and MapReduce together is known as a Hadoop Cluster.

As you add more nodes to a Hadoop Cluster, the capacity and performance of the cluster increase, which means that Hadoop is horizontally scalable. Continue reading “Simple explanation of Hadoop Core Components: HDFS and MapReduce”
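
To make the two components concrete, here is a small sketch that stores a file in HDFS and runs the word-count example bundled with Hadoop over it. It assumes the Hadoop binaries are on your PATH; the file and directory names are placeholders, and the examples jar path varies between Hadoop releases.

```
# Copy a local file into the distributed file system (HDFS).
hdfs dfs -mkdir -p /user/demo/input
hdfs dfs -put access.log /user/demo/input/

# Run the bundled word-count MapReduce job over it; the jar path
# below is the usual Hadoop 2.x layout and may differ on your setup.
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
  wordcount /user/demo/input /user/demo/output

# Inspect the reducer output.
hdfs dfs -cat /user/demo/output/part-r-00000
```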

Hadoop Core Concepts: The Bazics

In my earlier post, Brief Introduction of Hadoop, we looked at what Hadoop is and what kinds of problems it solves. The next step is to understand the Hadoop core concepts, which cover:

  • Distributed system design
  • How data is distributed across multiple systems (see the sketch below)
  • The different components involved and how they communicate with each other

Continue reading “Hadoop Core Concepts: The Bazics”
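
As a small teaser for the data-distribution point above, HDFS itself can show you where the blocks of a file physically live. This sketch assumes a running Hadoop 2.x cluster and a file already stored at the placeholder path /user/demo/input/access.log.

```
# Show how a file is split into blocks and which DataNodes hold
# each replica: "data distributed across multiple systems" in action.
hdfs fsck /user/demo/input/access.log -files -blocks -locations

# A cluster-wide view: live nodes, capacity and usage per DataNode.
hdfs dfsadmin -report
```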

6 Steps to Setup Apache Spark 1.0.1 (Multi Node Cluster) on CentOS

Before we move ahead, let’s learn a bit about Apache Spark.

So, what is Apache Spark?

Apache Spark is a fast, near-real-time and extremely expressive computing system which executes jobs in a distributed (clustered) environment.

It is largely compatible with Apache Hadoop, and is roughly 10x faster than Hadoop MapReduce for on-disk computation and up to 100x faster for in-memory computation. It provides rich APIs in Java, Scala and Python, along with functional programming capabilities. Continue reading “6 Steps to Setup Apache Spark 1.0.1 (Multi Node Cluster) on CentOS”
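
Once the cluster is up, a quick way to verify it is to submit the SparkPi example that ships with the Spark 1.x binary distribution. This is a sketch only: the master host name spark-master is a placeholder, and the examples jar location below matches the prebuilt 1.x packages but may differ in your build.

```
# Submit the bundled SparkPi example to the standalone cluster master.
$SPARK_HOME/bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://spark-master:7077 \
  $SPARK_HOME/lib/spark-examples-*.jar 10

# For interactive use, the Scala shell connects to the same master:
$SPARK_HOME/bin/spark-shell --master spark://spark-master:7077
```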