<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>Introduction to Hadoop and its MapReduce</title>
<link rel="stylesheet" href="../css/style.css">
</head>
<body>
<main>
<p>Hadoop is a free, open-source, Java-based programming framework that helps process large datasets in a distributed environment and tackle the problems that arise when trying to harness knowledge from Big Data. It is capable of running on thousands of nodes and dealing with petabytes of data. It is based on the Google File System (GFS) and originated from the work on Nutch, an open-source search engine project.</p>
<div class="date-created-modified">Created 2020-03-30<br>
Modified 2020-04-01</div>
<p>Hadoop also offers a distributed filesystem (HDFS) that enables fast transfers among nodes, and a programming model, MapReduce.</p>
<p>It aims to address the 4 V’s: Volume, Variety, Veracity and Velocity. As for veracity, it offers a secure environment that can be trusted.</p>
<h2 class="title" id="milestones"><a class="anchor" href="#milestones">¶</a>Milestones</h2>
<p>The creators of Hadoop are Doug Cutting and Mike Cafarella, who originally just wanted to build a search engine, Nutch, and quickly ran into the problems of dealing with large amounts of data. They found their solution in the papers Google published.</p>
<p>The name comes from a plush toy belonging to Cutting’s child, a yellow elephant.</p>
<ul>
<li>In July 2005, Nutch used GFS to perform MapReduce operations.</li>
<li>In February 2006, Nutch started a Lucene subproject, which led to Hadoop.</li>
<li>In April 2007, Yahoo used Hadoop in a 1 000-node cluster.</li>
<li>In January 2008, Apache took over and made Hadoop a top-level project.</li>
<li>In July 2008, Apache tested a 4 000-node cluster. Its performance was the fastest among comparable technologies that year.</li>
<li>In May 2009, Hadoop sorted a petabyte of data in 17 hours.</li>
<li>In December 2011, Hadoop reached version 1.0.</li>
<li>In May 2012, Hadoop 2.0 was released with the addition of YARN (Yet Another Resource Negotiator) on top of HDFS, splitting MapReduce and other processes into separate components and greatly improving fault tolerance.</li>
</ul>
<p>From here onwards, many other alternatives have been born around the Hadoop ecosystem, such as Spark, Hive &amp; Drill, Kafka and HBase.</p>
<p>As of 2017, Amazon has clusters of between 1 and 100 nodes, Yahoo has over 100 000 CPUs running Hadoop, AOL has clusters of 50 machines, and Facebook has a 320-machine cluster (2 560 cores) with 1.3 PB of raw storage.</p>
<h2 id="why_not_use_rdbms_"><a class="anchor" href="#why_not_use_rdbms_">¶</a>Why not use RDBMS?</h2>
<p>Relational database management systems simply cannot scale horizontally, and vertical scaling requires very expensive servers. Similar to RDBMS, Hadoop has a notion of jobs (analogous to transactions), but without ACID guarantees or concurrency control. Hadoop supports any form of data (unstructured or semi-structured) in read-only mode, and while failures are common, it provides simple yet efficient fault tolerance.</p>
<p>So what problem does Hadoop solve? It changes the way we think about problems and how to distribute them, which is key to doing anything related to Big Data nowadays. We start working with clusters of nodes and coordinating the jobs between them, and Hadoop’s API makes this really easy.</p>
<p>Hadoop also takes data loss very seriously: blocks are replicated, and if a node goes down, its work is moved to a different node.</p>
<h2 id="major_components"><a class="anchor" href="#major_components">¶</a>Major components</h2>
<p>The previously-mentioned HDFS runs on commodity machines, which are cost-friendly. It is very fault-tolerant and efficient enough to process huge amounts of data, because it splits large files into smaller chunks (or blocks) that can be handled more easily. Multiple nodes can work on multiple chunks at the same time.</p>
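<p>To give a concrete idea of the numbers involved: the block size and replication factor are plain HDFS settings, and the values below are the usual defaults. With them, a 1 GB file would be stored as eight 128 MB blocks, each copied to three DataNodes:</p>
<pre><code>&lt;!-- hdfs-site.xml: block size and replication factor (the values shown are the defaults) --&gt;
&lt;configuration&gt;
  &lt;property&gt;
    &lt;name&gt;dfs.blocksize&lt;/name&gt;
    &lt;value&gt;134217728&lt;/value&gt; &lt;!-- 128 MB per block --&gt;
  &lt;/property&gt;
  &lt;property&gt;
    &lt;name&gt;dfs.replication&lt;/name&gt;
    &lt;value&gt;3&lt;/value&gt; &lt;!-- each block is copied to 3 DataNodes --&gt;
  &lt;/property&gt;
&lt;/configuration&gt;
</code></pre>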
<p>NameNode stores the metadata of the various data blocks (the map of blocks) along with their location. It is the brain and the master in Hadoop’s master-slave architecture, also known as the namespace, and it makes use of the DataNodes.</p>
<p>A secondary NameNode is a replica that can be used if the first NameNode dies, so that Hadoop doesn’t shut down and can restart.</p>
<p>DataNodes store the blocks of data and are the slaves in the architecture. The data they hold is split into one or more files, and their only job is to manage access to it. They are often distributed among racks to avoid data loss.</p>
<p>JobTracker creates and schedules jobs from the clients for either map or reduce operations.</p>
<p>TaskTracker runs the MapReduce tasks assigned to the current DataNode.</p>
<p>When clients need data, they first interact with the NameNode, which replies with the location of the data in the correct DataNodes. The client then interacts with those DataNodes directly.</p>
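<p>A minimal sketch of that interaction with Hadoop’s Java filesystem API could look like the following (the file path is just an example): the client asks the NameNode for the file’s metadata and block locations, then streams the actual bytes from the DataNodes that hold them.</p>
<pre><code>// Sketch only: assumes a reachable HDFS cluster configured via core-site.xml / hdfs-site.xml.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListBlocks {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();      // reads the cluster configuration
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/data/example.log");     // hypothetical HDFS file

        // Metadata lookup goes to the NameNode: file size and the DataNodes holding each block.
        FileStatus status = fs.getFileStatus(file);
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation block : blocks) {
            System.out.println(block.getOffset() + ": " + String.join(", ", block.getHosts()));
        }

        // The bytes themselves are then streamed directly from the DataNodes.
        try (FSDataInputStream in = fs.open(file)) {
            byte[] buffer = new byte[4096];
            int read = in.read(buffer);
            System.out.println("Read " + read + " bytes from the first block");
        }
        fs.close();
    }
}
</code></pre>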
<h2 id="mapreduce"><a class="anchor" href="#mapreduce">¶</a>MapReduce</h2>
<p>MapReduce, as the name implies, is split into two steps: the map and the reduce. The map stage is the «divide and conquer» strategy, while the reduce part is about combining and reducing the results.</p>
<p>The mapper has to process the input data (normally a file or directory), commonly line by line, and produce one or more outputs. The reducer takes all the results from the mapper as its input and produces a new output file.</p>
<p><img src="bitmap.png" alt="" /></p>
<p>When reading the data, some of it may be junk that we can choose to ignore. If it is valid data, however, we label it with a particular type that can be useful for the upcoming process. Hadoop is responsible for splitting the data across the many nodes available, to execute this process in parallel.</p>
<p>There is another part to MapReduce, known as Shuffle-and-Sort. In this part, types or categories from one node get moved to a different node. This happens with all nodes, so that every node can work on a complete category. These categories are known as «keys», and they allow Hadoop to scale linearly.</p>
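<p>The canonical example of this model is counting words. A minimal sketch with Hadoop’s Java MapReduce API could look like this: the mapper emits each word labelled with a count of one, Shuffle-and-Sort groups identical words (the keys) together, and the reducer sums the counts for each word.</p>
<pre><code>// Sketch of the classic word-count job using Hadoop's MapReduce API.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map stage: read one line at a time and emit a (word, 1) pair per word.
    public static class TokenizerMapper
            extends Mapper&lt;LongWritable, Text, Text, IntWritable&gt; {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokens = new StringTokenizer(line.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                context.write(word, ONE); // the word acts as the key used by Shuffle-and-Sort
            }
        }
    }

    // Reduce stage: all counts for the same word arrive together; sum them up.
    public static class SumReducer
            extends Reducer&lt;Text, IntWritable, Text, IntWritable&gt; {
        @Override
        protected void reduce(Text word, Iterable&lt;IntWritable&gt; counts, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable count : counts) {
                sum += count.get();
            }
            context.write(word, new IntWritable(sum));
        }
    }

    // Driver: configure and submit the job; Hadoop handles the distribution.
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // input directory in HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory in HDFS
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
</code></pre>
<p>Packaged into a jar, it would be launched with something along the lines of <code>hadoop jar wordcount.jar WordCount /input /output</code>, and Hadoop takes care of splitting the input, scheduling the map and reduce tasks, and moving the keys between nodes.</p>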
<h2 id="references"><a class="anchor" href="#references">¶</a>References</h2>
<ul>
<li><a href="https://youtu.be/oT7kczq5A-0">YouTube – Hadoop Tutorial For Beginners | What Is Hadoop? | Hadoop Tutorial | Hadoop Training | Simplilearn</a></li>
<li><a href="https://youtu.be/bcjSe0xCHbE">YouTube – Learn MapReduce with Playing Cards</a></li>
<li><a href="https://youtu.be/j8ehT1_G5AY?list=PLi4tp-TF_qjM_ed4lIzn03w7OnEh0D8Xi">YouTube – Video Post #2: Hadoop para torpes (I)-¿Qué es y para qué sirve?</a></li>
<li><a href="https://youtu.be/NQ8mjVPCDvk?list=PLi4tp-TF_qjM_ed4lIzn03w7OnEh0D8Xi">YouTube – Video Post #3: Hadoop para torpes (II)-¿Cómo funciona? HDFS y MapReduce</a></li>
<li><a href="https://hadoop.apache.org/old/releases.html">Apache Hadoop Releases</a></li>
<li><a href="https://youtu.be/20qWx2KYqYg?list=PLi4tp-TF_qjM_ed4lIzn03w7OnEh0D8Xi">YouTube – Video Post #4: Hadoop para torpes (III y fin)-Ecosistema y distribuciones</a></li>
<li><a href="http://www.hadoopbook.com/">Chapter 2 – Hadoop: The Definitive Guide, Fourth Edition</a> (<a href="http://grut-computing.com/HadoopBook.pdf">pdf</a>, <a href="http://www.hadoopbook.com/code.html">code</a>)</li>
</ul>
</main>
</body>
</html>