<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>A practical example with Hadoop</title>
<link rel="stylesheet" href="../css/style.css">
</head>
<body>
<main>
<p>In our <a href="/blog/ribw/introduction-to-hadoop-and-its-mapreduce/">previous Hadoop post</a>, we learnt what it is, how it originated, and how it works, from a theoretical standpoint. Here we will instead focus on a more practical example with Hadoop.</p>
<div class="date-created-modified">Created 2020-04-01<br>
Modified 2020-04-03</div>
<p>This post will showcase my own implementation of a word counter for any plain text document that you want to analyze.</p>
<h2 class="title" id="installation"><a class="anchor" href="#installation">¶</a>Installation</h2>
<p>Before we can run any piece of software, its executable code must first be downloaded onto our computer. Head over to <a href="http://hadoop.apache.org/releases.html">Apache Hadoop’s releases</a> and download the <a href="https://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-3.2.1/hadoop-3.2.1.tar.gz">latest binary version</a> at the time of writing (3.2.1).</p>
<p>We will be using the <a href="https://linuxmint.com/">Linux Mint</a> distribution because I love its simplicity, although the process shown here should work just fine on any similar Linux distribution such as <a href="https://ubuntu.com/">Ubuntu</a>.</p>
<p>Once the archive download is complete, extract it with any tool of your choice (graphical or from the terminal) and check that it runs. Make sure you have a version of Java installed, such as <a href="https://openjdk.java.net/">OpenJDK</a>, since Hadoop needs it.</p>
<p>Here are all three steps on the command line:</p>
<pre><code>wget https://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-3.2.1/hadoop-3.2.1.tar.gz
tar xf hadoop-3.2.1.tar.gz
hadoop-3.2.1/bin/hadoop version
</code></pre>
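<p>If that last command complains that <code>JAVA_HOME</code> is not set, point it at your Java installation first. The path below is an assumption based on how Mint and Ubuntu package OpenJDK; adjust it to wherever your JDK actually lives:</p>
<pre><code>export JAVA_HOME=/usr/lib/jvm/default-java
hadoop-3.2.1/bin/hadoop version
</code></pre>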
<h2 id="processing_data"><a class="anchor" href="#processing_data">¶</a>Processing data</h2>
<p>To take advantage of Hadoop, we have to design our code around the MapReduce model. Both the map and the reduce phase take key-value pairs as input and produce key-value pairs as output, and each runs a programmer-defined function.</p>
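<p>As a rough sketch of what this means for a word counter (the input lines here are made up, purely to illustrate the shape of the data), the pairs flow through the phases like this:</p>
<pre><code>input:   (0, &quot;the quick fox&quot;), (14, &quot;the lazy dog&quot;)   (byte offset, line)
map:     (&quot;the&quot;, 1), (&quot;quick&quot;, 1), (&quot;fox&quot;, 1),
         (&quot;the&quot;, 1), (&quot;lazy&quot;, 1), (&quot;dog&quot;, 1)
shuffle: (&quot;the&quot;, [1, 1]), (&quot;quick&quot;, [1]), ...          grouped by key
reduce:  (&quot;the&quot;, 2), (&quot;quick&quot;, 1), ...                 one pair per key
</code></pre>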
<p>We will use Java because it’s a dependency that we already have anyway, so we might as well.</p>
<p>Our map function needs to split each line it receives as input into words, and we will also convert them to lowercase, thus preparing the data for later use (counting words). There won’t be bad records, so we don’t have to worry about that.</p>
<p>Copy or reproduce the following code in a file called <code>WordCountMapper.java</code>, using any text editor of your choice:</p>
<pre><code>import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper&lt;LongWritable, Text, Text, IntWritable&gt; {
    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // The key is the byte offset of the line within the file; we only
        // need the line itself. Emit every lowercased word with a count of 1.
        for (String word : value.toString().split(&quot;\\W&quot;)) {
            context.write(new Text(word.toLowerCase()), new IntWritable(1));
        }
    }
}
</code></pre>
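<p>One caveat worth knowing about: <code>split</code> keeps the empty strings that appear when a line starts with, or contains consecutive, non-word characters, so this mapper will also emit a few empty “words”. We will meet them again in the results.</p>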
<p>Now, let’s create the <code>WordCountReducer.java</code> file. Its job is to reduce the many values for a given key into just one; here, we do that by summing all the partial counts for each word:</p>
<pre><code>import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer&lt;Text, IntWritable, Text, IntWritable&gt; {
    @Override
    public void reduce(Text key, Iterable&lt;IntWritable&gt; values, Context context)
            throws IOException, InterruptedException {
        // Add up every 1 the mappers emitted for this word.
        int count = 0;
        for (IntWritable value : values) {
            count += value.get();
        }
        context.write(key, new IntWritable(count));
    }
}
</code></pre>
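<p>Between the two phases, Hadoop sorts the map output and groups its values by key, which is why a single <code>reduce</code> call receives every 1 that was emitted for a given word.</p>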
<p>Let’s just take a moment to appreciate how absolutely tiny this code is, and it’s Java! Hadoop’s API is really awesome and lets us write such concise code to achieve what we need.</p>
<p>Lastly, let’s write the <code>main</code> method, or else we won’t be able to run it. In our new file <code>WordCount.java</code>:</p>
<pre><code>import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    public static void main(String[] args) throws Exception {
        if (args.length != 2) {
            System.err.println(&quot;usage: java WordCount &lt;input path&gt; &lt;output path&gt;&quot;);
            System.exit(-1);
        }

        // Configure the job: its name, where to find our classes,
        // and the types of the output key-value pairs.
        Job job = Job.getInstance();

        job.setJobName(&quot;Word count&quot;);
        job.setJarByClass(WordCount.class);
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // First argument is the input path; the second is the output
        // directory, which must not exist yet or the job will fail.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // Run the job and block until it finishes.
        boolean result = job.waitForCompletion(true);

        System.exit(result ? 0 : 1);
    }
}
</code></pre>
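<p>As an aside: because our reducer merely sums, it could also be registered as a combiner, letting each mapper pre-aggregate its own counts before anything travels over the network. That’s optional for an example this size, but it would be a single extra line in the job setup:</p>
<pre><code>job.setCombinerClass(WordCountReducer.class);
</code></pre>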
<p>Now, compile by including the required <code>.jar</code> dependencies in Java’s classpath with the <code>-cp</code> switch:</p>
<pre><code>javac -cp &quot;hadoop-3.2.1/share/hadoop/common/*:hadoop-3.2.1/share/hadoop/mapreduce/*&quot; *.java
</code></pre>
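<p>As a side note, the <code>hadoop</code> binary can print the full classpath it knows about, which saves typing those glob patterns by hand; something along these lines should compile just the same:</p>
<pre><code>javac -cp &quot;$(hadoop-3.2.1/bin/hadoop classpath)&quot; *.java
</code></pre>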
<p>At last, we can run it (also specifying the dependencies in the classpath; this one’s a mouthful). Let’s run it on the same <code>WordCount.java</code> source file we wrote:</p>
<pre><code>java -cp &quot;.:hadoop-3.2.1/share/hadoop/common/*:hadoop-3.2.1/share/hadoop/common/lib/*:hadoop-3.2.1/share/hadoop/mapreduce/*:hadoop-3.2.1/share/hadoop/mapreduce/lib/*:hadoop-3.2.1/share/hadoop/yarn/*:hadoop-3.2.1/share/hadoop/yarn/lib/*:hadoop-3.2.1/share/hadoop/hdfs/*:hadoop-3.2.1/share/hadoop/hdfs/lib/*&quot; WordCount WordCount.java results
</code></pre>
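<p>Alternatively, the more conventional route is to bundle the classes into a <code>.jar</code> and let the <code>hadoop</code> launcher assemble the classpath for us. Here’s a sketch of that approach (note that Hadoop refuses to reuse an existing output directory, so delete <code>results/</code> first when re-running):</p>
<pre><code>jar cf wordcount.jar WordCount*.class
hadoop-3.2.1/bin/hadoop jar wordcount.jar WordCount WordCount.java results
</code></pre>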
<p>Hooray! Either way, we should have a new <code>results/</code> folder along with the following files:</p>
<pre><code>$ ls results
part-r-00000  _SUCCESS
$ cat results/part-r-00000
	154
0	2
1	3
2	1
addinputpath	1
apache	6
args	4
boolean	1
class	6
count	1
err	1
exception	1
-snip- (output cut for clarity)
</code></pre>
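<p>By the way, the nameless entry at the very top is the empty string we talked about earlier: the mapper emitted it 154 times because of the <code>\W</code> split, and the lone digits appear as “words” of their own for the same reason (they come from things like <code>args[0]</code> and <code>-1</code> in the source).</p>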
<p>It worked! This example was obviously tiny, but hopefully it’s enough to demonstrate how to get the basics running on real-world data.</p>
</main>
</body>
</html>