RDD stands for Resilient Distributed Dataset. An Apache Spark RDD is an abstract representation of data that is divided into partitions and distributed across the cluster. If you are familiar with the collections framework in Java, you can think of an RDD as similar to a Java collection object, except that it is split into smaller pieces (referred to as partitions) that are distributed across multiple nodes.
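To make the idea concrete, here is a minimal Scala sketch, assuming a SparkContext named `sc` is already available (as it is inside `spark-shell`), that turns an ordinary local collection into an RDD split across several partitions:

```scala
// Assumes a SparkContext `sc` is already available, e.g. inside spark-shell.
// parallelize() turns a local Scala collection into an RDD,
// splitting it into the requested number of partitions.
val numbers = 1 to 100
val rdd = sc.parallelize(numbers, numSlices = 4)

// Each partition lives on some node of the cluster and is processed in parallel.
println(s"Number of partitions: ${rdd.getNumPartitions}")
println(s"Sum computed across partitions: ${rdd.sum()}")
```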
Apache Spark's architecture enables us to write computation applications that are almost 10x faster than traditional Hadoop MapReduce applications. We have already discussed the features of Apache Spark in the introductory post.
Before an introduction to Apache Spark, it is necessary to understand the actual need for it. So let's rewind to the earlier architecture of distributed data processing for big data analytics. The most famous framework for large-scale data processing is Hadoop MapReduce. Hadoop MapReduce solves certain problems of distributed computation, but it has its own limitations when it comes to data scale and processing time.
Before we learn to install Apache Hive on CentOS, let me give you an introduction to it. Hive is basically a data warehouse tool used to store and process structured data residing on HDFS. Hive was developed by Facebook and was later moved to the Apache Software Foundation, where it became the open-source project Apache Hive.
Normally we create a Spark application JAR using Scala and SBT (Scala Build Tool). In my previous post on Creating a Multi-node Spark Cluster, we executed a word count example using the Spark shell. As an extension to that, we'll learn how to create a Spark application JAR file with Scala and SBT, and how to execute it as a Spark job on a Spark cluster.
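As a rough sketch of what that involves (the Spark and Scala versions below are illustrative assumptions; match them to your cluster), a minimal sbt project might look like this:

```scala
// build.sbt -- minimal project definition (versions are illustrative)
name := "wordcount"
version := "0.1"
scalaVersion := "2.12.18"
// "provided" because the cluster supplies Spark at runtime
libraryDependencies += "org.apache.spark" %% "spark-core" % "3.4.1" % "provided"
```

```scala
// src/main/scala/WordCount.scala -- a simple word count job
import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WordCount")
    val sc = new SparkContext(conf)

    // args(0) = input path (e.g. on HDFS), args(1) = output directory
    sc.textFile(args(0))
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
      .saveAsTextFile(args(1))

    sc.stop()
  }
}
```

Running `sbt package` then produces a JAR under `target/scala-2.12/`, which can be submitted to the cluster with `spark-submit --class WordCount`, passing the input and output paths as arguments.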
Hadoop 1.x architecture is history now, because most Hadoop applications use the Hadoop 2.x architecture. But understanding the Hadoop 1.x architecture will still give us insight into how Hadoop has evolved over time.
In order to access files from HDFS, one can use various Hadoop commands from the UNIX shell. Additionally, Hadoop also provides powerful Java APIs with which a programmer can write code to access files on HDFS. Before we go into more detail, let's understand the terminology for accessing files from HDFS.
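Here is a minimal sketch of reading a file through the Hadoop FileSystem API, written in Scala for consistency with the Spark examples; the namenode URI and file path are placeholders for your own cluster:

```scala
// Minimal sketch of reading a file via the HDFS FileSystem API.
// The namenode URI and file path below are placeholders.
import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import scala.io.Source

object HdfsReadExample {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    // Point the client at the cluster's namenode (placeholder host/port).
    val fs = FileSystem.get(new URI("hdfs://namenode:9000"), conf)

    // Open an input stream on a file stored in HDFS and print its contents.
    val in = fs.open(new Path("/user/hadoop/sample.txt"))
    try {
      Source.fromInputStream(in).getLines().foreach(println)
    } finally {
      in.close()
      fs.close()
    }
  }
}
```

The same file could be listed from the shell with `hadoop fs -ls /user/hadoop`.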
In the previous post we discussed what Hadoop is and what kinds of issues it solves. Now let's dive deeper into the various components of Hadoop. Hadoop as a distribution provides two core components: HDFS (the Hadoop Distributed File System) and MapReduce (a distributed batch processing framework). A complete set of machines running HDFS and MapReduce is known as a Hadoop cluster.
In my earlier post, Brief Introduction of Hadoop, we understood what Hadoop is and what kinds of problems it solves. The next step is to understand the Hadoop Core Concepts, which cover:
- Distributed system design
- How data is distributed across multiple systems
- What the different components involved are and how they communicate with each other
Apache Hadoop solves a very different kind of problem in the Big Data world. So before we get an introduction to Hadoop, it is necessary to understand the core problems of large-scale computation. After that, we'll try to understand how Hadoop solves these problems.