Apache Hadoop


Apache Hadoop is a collection of open-source software utilities that uses a network of many computers to solve problems involving massive amounts of data and computation. It provides a software framework for distributed storage and distributed processing of big data using the MapReduce programming model.

What is Apache Hadoop and what is it used for?

Apache Hadoop can efficiently store and process datasets ranging from gigabytes to petabytes. Instead of using one large computer to store and process the data, Hadoop clusters multiple computers so that massive datasets can be analyzed in parallel, which is much faster. This makes it easy to use all of the storage and processing capacity of the cluster's servers and to run distributed processing over huge amounts of data. Hadoop provides the building blocks on which other applications and services can be built.

Apache Hadoop Structure

Apache Hadoop consists of a storage part, the Hadoop Distributed File System (HDFS), and a processing part, MapReduce. Files in Hadoop are split into large blocks and distributed across the nodes of the cluster, and Hadoop then transfers packaged code to those nodes to process the data. By exploiting data locality, nodes manipulate the data they already hold, so the dataset is processed more quickly and efficiently than in a conventional supercomputer architecture built on a parallel file system, where data and computation must travel over high-speed networks.
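To make the storage side concrete, here is a minimal sketch of writing and reading a file through the HDFS Java API. The NameNode address (hdfs://namenode:8020) and the file path are placeholder assumptions for illustration, not values from this article.

```java
// Minimal sketch: writing and reading a file through the HDFS Java API.
// The NameNode endpoint and file path below are placeholders.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsRoundTrip {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020"); // assumed NameNode address

        try (FileSystem fs = FileSystem.get(conf)) {
            Path file = new Path("/data/sample.txt");

            // HDFS splits the file into blocks and replicates them across DataNodes.
            try (FSDataOutputStream out = fs.create(file, true)) {
                out.writeBytes("hello hadoop\n");
            }

            // Read the file back; the client fetches blocks from nearby replicas.
            try (FSDataInputStream in = fs.open(file)) {
                byte[] buf = new byte[64];
                int read = in.read(buf);
                System.out.println(new String(buf, 0, read));
            }
        }
    }
}
```

Once the file is written, HDFS handles block placement and replication transparently, which is what makes the data-locality processing described above possible.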

Why run Apache Hadoop on Amazon EMR?

With Amazon EMR, you can process and analyze huge datasets using the latest versions of big data processing frameworks such as Apache Hadoop, Spark, HBase, and Presto.
Amazon EMR's pricing is low and simple, with an hourly rate for every instance: you pay for each hour you use and can leverage Spot Instances for greater savings.
With Amazon EMR, you can provision thousands of compute instances to process data at any scale.
It is also easy to use: you can launch an Amazon EMR cluster in a few minutes without node provisioning, cluster setup, Hadoop configuration, or cluster tuning.
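As a rough illustration of how little setup is involved, the following sketch launches a small Hadoop cluster with the AWS SDK for Java v2 via the RunJobFlow API. The release label, instance types, log bucket, and IAM role names are assumptions chosen for the example, not values from this article.

```java
// Hedged sketch: creating a small EMR cluster with the AWS SDK for Java v2.
// Release label, instance types, log bucket, and role names are assumptions.
import software.amazon.awssdk.services.emr.EmrClient;
import software.amazon.awssdk.services.emr.model.Application;
import software.amazon.awssdk.services.emr.model.JobFlowInstancesConfig;
import software.amazon.awssdk.services.emr.model.RunJobFlowRequest;
import software.amazon.awssdk.services.emr.model.RunJobFlowResponse;

public class LaunchEmrCluster {
    public static void main(String[] args) {
        try (EmrClient emr = EmrClient.create()) {
            RunJobFlowRequest request = RunJobFlowRequest.builder()
                    .name("hadoop-demo-cluster")
                    .releaseLabel("emr-6.15.0")              // assumed EMR release
                    .applications(Application.builder().name("Hadoop").build())
                    .instances(JobFlowInstancesConfig.builder()
                            .masterInstanceType("m5.xlarge") // assumed instance types
                            .slaveInstanceType("m5.xlarge")
                            .instanceCount(3)
                            .keepJobFlowAliveWhenNoSteps(true)
                            .build())
                    .jobFlowRole("EMR_EC2_DefaultRole")      // default EMR roles
                    .serviceRole("EMR_DefaultRole")
                    .logUri("s3://my-bucket/emr-logs/")      // placeholder bucket
                    .build();

            RunJobFlowResponse response = emr.runJobFlow(request);
            System.out.println("Started cluster: " + response.jobFlowId());
        }
    }
}
```

The same cluster can be created from the AWS console or CLI; the point of the sketch is that provisioning is a single API call rather than a manual setup exercise.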

Amazon EMR is also secure. It uses the security features of other AWS services, including Identity and Access Management (IAM), security groups, encryption, and AWS CloudTrail.

What are the different modules of Apache Hadoop?

Hadoop consists of four main modules:

1. Hadoop Distributed File System (HDFS) is a distributed file system that runs on standard or low-end hardware. It provides better data throughput than traditional file systems, along with high fault tolerance and native support for large datasets.

2. Hadoop Common provides the common Java libraries that can be used across all modules.

3. Yet Another Resource Negotiator (YARN) manages and monitors cluster nodes and resource usage.

4. MapReduce lets programs compute data in parallel. In the map task, input data is converted into a dataset that can be computed over as key-value pairs. A reduce task consumes the output of the map tasks, aggregates it, and produces the desired result; a minimal example appears after this list.

5. EMRFS lets Amazon EMR clusters run directly against HDFS-style data stored in Amazon S3. You can shut down a cluster when the job is finished and still have the data saved in Amazon S3, paying only for the compute time during which the cluster was running.
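To illustrate the map and reduce tasks from item 4, here is a minimal sketch of the classic WordCount job using the Hadoop MapReduce Java API. The input and output paths are placeholders; on Amazon EMR with EMRFS they could just as well be s3:// URIs.

```java
// Minimal sketch of the classic WordCount job: the map task emits (word, 1) pairs,
// the reduce task sums them per word. Paths are supplied on the command line.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map task: turn each input line into (word, 1) key-value pairs.
    public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce task: sum the counts emitted for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. hdfs://... or s3://...
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```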

