This post will give you hands-on experience creating NumPy arrays. By the end of the post, you will have clarity on the different ways of creating NumPy arrays, with helpful visualizations. If you are a beginner in the Data Analytics or Data Science field, you need an in-depth understanding of Python's NumPy package.
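As a taste of what the post covers, here is a minimal sketch of a few common NumPy array-creation routines:

```python
import numpy as np

a = np.array([1, 2, 3])        # from a Python list
z = np.zeros((2, 3))           # 2x3 array filled with zeros
r = np.arange(0, 10, 2)        # evenly stepped values: 0, 2, 4, 6, 8
l = np.linspace(0.0, 1.0, 5)   # 5 values evenly spaced between 0 and 1

print(a.shape, z.shape, r.tolist(), l.tolist())
```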
In this Spark aggregateByKey example post, we will discover how the aggregateByKey transformation can be a better alternative to groupByKey when an aggregation operation is involved. One of the most common problems while working with key-value pairs is grouping values and aggregating them with respect to a common key, and Spark's aggregateByKey transformation addresses this problem in a very intuitive way.
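To preview the idea, here is a plain-Python sketch of aggregateByKey's semantics (the helper name `aggregate_by_key` is illustrative, not Spark's API): a zero value, a `seq_op` that folds each value into a per-key accumulator, and a `comb_op` that merges accumulators across partitions.

```python
from collections import defaultdict

def aggregate_by_key(pairs, zero, seq_op, comb_op):
    """Plain-Python sketch of Spark's aggregateByKey semantics.
    seq_op folds each value into the per-key accumulator; comb_op
    would merge accumulators across partitions (a single partition
    is assumed here, so comb_op goes unused)."""
    acc = defaultdict(lambda: zero)
    for k, v in pairs:
        acc[k] = seq_op(acc[k], v)
    return dict(acc)

# Per-key (sum, count), e.g. as a first step toward a per-key average.
pairs = [("a", 3), ("b", 5), ("a", 7)]
result = aggregate_by_key(
    pairs,
    zero=(0, 0),
    seq_op=lambda acc, v: (acc[0] + v, acc[1] + 1),
    comb_op=lambda x, y: (x[0] + y[0], x[1] + y[1]),
)
print(result)  # {'a': (10, 2), 'b': (5, 1)}
```

Note that the zero value lets the accumulator have a different type than the values, which is exactly what groupByKey cannot express.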
The Apache Spark groupByKey example is quite similar to reduceByKey. It is again a transformation operation, and also a wider operation because it demands a data shuffle. The spark groupByKey function takes key-value pairs (K, V) as input and produces an RDD with each key and the list of its values. Let's try to understand the function in detail. At the end of this post we'll also compare it with reduceByKey with respect to optimization.
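The core semantics can be sketched in plain Python (the helper name `group_by_key` is illustrative; Spark's actual groupByKey returns an RDD of (K, Iterable[V]) pairs):

```python
from collections import defaultdict

def group_by_key(pairs):
    """Sketch of groupByKey: (K, V) pairs in, key -> list of values out."""
    grouped = defaultdict(list)
    for k, v in pairs:
        grouped[k].append(v)
    return dict(grouped)

pairs = [("apple", 1), ("banana", 1), ("apple", 1)]
print(group_by_key(pairs))  # {'apple': [1, 1], 'banana': [1]}
```

The key point for the later comparison: every value travels across the shuffle unreduced, which is why groupByKey can be more expensive than reduceByKey.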
The Spark groupBy example can also be compared with the GROUP BY clause of SQL. In Spark, groupBy is a transformation operation. Let's have a quick overview first, then we'll understand this operation through examples in the Scala, Java and Python languages. Continue reading “Apache Spark groupBy Example”
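Unlike groupByKey, groupBy derives the key from each element via a function. A plain-Python analogue of the semantics (the helper name `group_by` is illustrative):

```python
from collections import defaultdict

def group_by(elements, key_fn):
    """Sketch of Spark's groupBy: key_fn assigns each element a key,
    and elements sharing a key are collected together."""
    grouped = defaultdict(list)
    for e in elements:
        grouped[key_fn(e)].append(e)
    return dict(grouped)

words = ["spark", "scala", "java", "python"]
print(group_by(words, key_fn=lambda w: w[0]))
# {'s': ['spark', 'scala'], 'j': ['java'], 'p': ['python']}
```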
Looking at the spark reduceByKey example, we can say that reduceByKey is one step ahead of the reduce function in Spark, with the distinction that it is a transformation operation. Let's understand this operation through examples in the Scala, Java and Python languages. Continue reading “Apache Spark reduceByKey Example”
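Before the full example, here is a plain-Python sketch of reduceByKey's semantics (the helper name `reduce_by_key` is illustrative): values that share a key are combined pairwise with the supplied function, which should be associative and commutative since Spark also applies it map-side before the shuffle.

```python
def reduce_by_key(pairs, fn):
    """Sketch of reduceByKey: combine values per key with fn."""
    acc = {}
    for k, v in pairs:
        acc[k] = fn(acc[k], v) if k in acc else v
    return acc

pairs = [("a", 1), ("b", 2), ("a", 3)]
print(reduce_by_key(pairs, lambda x, y: x + y))  # {'a': 4, 'b': 2}
```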
Here in the spark reduce example, we'll understand how the reduce operation works in Spark, with examples in languages like Scala, Java and Python. Spark's reduce operation is an action, and it triggers a full DAG execution of all the lined-up lazy instructions. Continue reading “Apache Spark reduce Example”
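The semantics mirror Python's own `functools.reduce`: fold all elements of a collection into a single value with a binary function, much like `rdd.reduce(lambda x, y: x + y)` would.

```python
from functools import reduce

nums = [1, 2, 3, 4, 5]
# Analogous to rdd.reduce(lambda x, y: x + y): repeatedly combine
# pairs of elements until a single value remains.
total = reduce(lambda x, y: x + y, nums)
print(total)  # 15
```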
In the spark filter example, we'll explore the filter method of Spark's RDD class in all three languages: Scala, Java and Python. Spark's filter operation is a transformation, so its evaluation is lazy. Let's dig a bit deeper. Continue reading “Apache Spark filter Example”
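The operation behaves like Python's built-in `filter`: keep only the elements for which a predicate returns True, much as `rdd.filter(lambda n: n % 2 == 0)` would.

```python
nums = [1, 2, 3, 4, 5, 6]
# Analogous to rdd.filter(lambda n: n % 2 == 0): keep elements
# satisfying the predicate.
evens = list(filter(lambda n: n % 2 == 0, nums))
print(evens)  # [2, 4, 6]
```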
In the Apache Spark map example, we'll learn all the ins and outs of the map function. map is defined in the abstract class RDD in Spark, and it is a transformation, which means it is a lazy operation. Let's explore it in detail. Continue reading “Apache Spark map Example”
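Its semantics mirror Python's built-in `map`: apply a function to every element, producing a new collection of the same length, much as `rdd.map(lambda n: n * n)` would.

```python
nums = [1, 2, 3, 4]
# Analogous to rdd.map(lambda n: n * n): transform each element.
squares = list(map(lambda n: n * n, nums))
print(squares)  # [1, 4, 9, 16]
```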
We discussed a high-level view of the YARN architecture in my post on Understanding Hadoop 2.x Architecture, but YARN itself is a wide subject to understand. Keeping that in mind, we'll discuss the YARN architecture, its components and its advantages in this post. Continue reading “YARN Architecture and Components”