blog/mdad/atom.xml

   1<feed xmlns="http://www.w3.org/2005/Atom"><title>pagong</title><id>pagong</id><updated>2020-07-02T22:00:00+00:00</updated><entry><title>Data Mining and Data Warehousing</title><id>dist/index/index.html</id><updated>2020-07-02T22:00:00+00:00</updated><published>2020-07-02T22:00:00+00:00</published><summary>During 2020 at university, this subject ("Minería de Datos y Almacenes de Datos") had us write</summary><content type="html" src="dist/index/index.html">&lt;!DOCTYPE html&gt;
   2&lt;html&gt;
   3&lt;head&gt;
   4&lt;meta charset=&quot;utf-8&quot; /&gt;
   5&lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1&quot; /&gt;
   6&lt;title&gt;Data Mining and Data Warehousing&lt;/title&gt;
   7&lt;link rel=&quot;stylesheet&quot; href=&quot;../css/style.css&quot;&gt;
   8&lt;/head&gt;
   9&lt;body&gt;
  10&lt;main&gt;
  11&lt;h1 class=&quot;title&quot; id=&quot;data_mining_and_data_warehousing&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#data_mining_and_data_warehousing&quot;&gt;¶&lt;/a&gt;Data Mining and Data Warehousing&lt;/h1&gt;
  12&lt;div class=&quot;date-created-modified&quot;&gt;2020-07-03&lt;/div&gt;
   13&lt;p&gt;During 2020 at university, this subject (&amp;quot;Minería de Datos y Almacenes de Datos&amp;quot;) had us write
   14blog posts as assignments. I thought it was really fun, and I wanted to preserve that work
   15here in the hope that it's interesting to someone.&lt;/p&gt;
  16&lt;p&gt;The posts were auto-generated from the original HTML files and manually anonymized later.&lt;/p&gt;
  17&lt;/main&gt;
  18&lt;/body&gt;
  19&lt;/html&gt;
   20 </content></entry><entry><title>Private: Final NoSQL evaluation</title><id>dist/final-nosql-evaluation/index.html</id><updated>2020-05-13T22:00:00+00:00</updated><published>2020-05-12T22:00:00+00:00</published><summary>This evaluation is a bit different from my </summary><content type="html" src="dist/final-nosql-evaluation/index.html">&lt;!DOCTYPE html&gt;
  21&lt;html&gt;
  22&lt;head&gt;
  23&lt;meta charset=&quot;utf-8&quot; /&gt;
  24&lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1&quot; /&gt;
   25&lt;title&gt;Private: Final NoSQL evaluation&lt;/title&gt;
  26&lt;link rel=&quot;stylesheet&quot; href=&quot;../css/style.css&quot;&gt;
  27&lt;/head&gt;
  28&lt;body&gt;
  29&lt;main&gt;
   30&lt;p&gt;This evaluation is a bit different from my &lt;a href=&quot;/blog/mdad/nosql-evaluation/&quot;&gt;previous one&lt;/a&gt; because this time I have been tasked with evaluating student &lt;code&gt;a(i - 2)&lt;/code&gt;, and because I am &lt;code&gt;i = 11&lt;/code&gt;, that happens to be &lt;code&gt;a(9) =&lt;/code&gt; a classmate.&lt;/p&gt;
  31&lt;div class=&quot;date-created-modified&quot;&gt;Created 2020-05-13&lt;br&gt;
  32Modified 2020-05-14&lt;/div&gt;
  33&lt;h2 class=&quot;title&quot; id=&quot;classmate_s_evaluation&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#classmate_s_evaluation&quot;&gt;¶&lt;/a&gt;Classmate’s Evaluation&lt;/h2&gt;
  34&lt;p&gt;&lt;strong&gt;Grading: A.&lt;/strong&gt;&lt;/p&gt;
  35&lt;p&gt;The post I have evaluated is Trabajo en grupo – Bases de datos NoSQL, 3ª entrada: Aplicación con una Base de datos NoSQL seleccionada.&lt;/p&gt;
   36&lt;p&gt;It starts with a very brief introduction covering who wrote the post, what data they will be using, and which database they have chosen.&lt;/p&gt;
  37&lt;p&gt;They properly describe their objective, how they will do it and what library will be used.&lt;/p&gt;
  38&lt;p&gt;They also explain where they obtain the data from, and what other things the site can do, which is a nice bonus.&lt;/p&gt;
   39&lt;p&gt;The post continues by listing and briefly explaining all the tools used and what they are for, including the commands to execute.&lt;/p&gt;
   40&lt;p&gt;Finally, they list the files their project uses and what each does, and include a showcase of images that lets the reader know what the application does.&lt;/p&gt;
  41&lt;p&gt;All in all, in my opinion, it’s clear they have put work into this entry and I have not noticed any major flaws, so they deserve the highest grade.&lt;/p&gt;
  42&lt;/main&gt;
  43&lt;/body&gt;
  44&lt;/html&gt;
  45 </content></entry><entry><title>A practical example with Hadoop</title><id>dist/a-practical-example-with-hadoop/index.html</id><updated>2020-04-17T22:00:00+00:00</updated><published>2020-03-29T22:00:00+00:00</published><summary>In our </summary><content type="html" src="dist/a-practical-example-with-hadoop/index.html">&lt;!DOCTYPE html&gt;
  46&lt;html&gt;
  47&lt;head&gt;
  48&lt;meta charset=&quot;utf-8&quot; /&gt;
  49&lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1&quot; /&gt;
  50&lt;title&gt;A practical example with Hadoop&lt;/title&gt;
  51&lt;link rel=&quot;stylesheet&quot; href=&quot;../css/style.css&quot;&gt;
  52&lt;/head&gt;
  53&lt;body&gt;
  54&lt;main&gt;
  55&lt;p&gt;In our &lt;a href=&quot;/blog/mdad/introduction-to-hadoop-and-its-mapreduce/&quot;&gt;previous Hadoop post&lt;/a&gt;, we learnt what it is, how it originated, and how it works, from a theoretical standpoint. Here we will instead focus on a more practical example with Hadoop.&lt;/p&gt;
  56&lt;div class=&quot;date-created-modified&quot;&gt;Created 2020-03-30&lt;br&gt;
  57Modified 2020-04-18&lt;/div&gt;
   58&lt;p&gt;This post will reproduce the example in Chapter 2 of the book &lt;a href=&quot;http://www.hadoopbook.com/&quot;&gt;Hadoop: The Definitive Guide, Fourth Edition&lt;/a&gt; (&lt;a href=&quot;http://grut-computing.com/HadoopBook.pdf&quot;&gt;pdf,&lt;/a&gt;&lt;a href=&quot;http://www.hadoopbook.com/code.html&quot;&gt;code&lt;/a&gt;): finding the maximum worldwide temperature for each year in a weather dataset.&lt;/p&gt;
  59&lt;h2 class=&quot;title&quot; id=&quot;installation&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#installation&quot;&gt;¶&lt;/a&gt;Installation&lt;/h2&gt;
   60&lt;p&gt;Before we can run any piece of software, we must first download it. Head over to &lt;a href=&quot;http://hadoop.apache.org/releases.html&quot;&gt;Apache Hadoop’s releases&lt;/a&gt; and download the &lt;a href=&quot;https://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-3.2.1/hadoop-3.2.1.tar.gz&quot;&gt;latest binary version&lt;/a&gt; at the time of writing (3.2.1).&lt;/p&gt;
  61&lt;p&gt;We will be using the &lt;a href=&quot;https://linuxmint.com/&quot;&gt;Linux Mint&lt;/a&gt; distribution because I love its simplicity, although the process shown here should work just fine on any similar Linux distribution such as &lt;a href=&quot;https://ubuntu.com/&quot;&gt;Ubuntu&lt;/a&gt;.&lt;/p&gt;
   62&lt;p&gt;Once the archive download is complete, extract it with any tool of your choice (graphical or from the terminal) and run the bundled &lt;code&gt;hadoop version&lt;/code&gt; command to verify everything works. Make sure you have a version of Java installed, such as &lt;a href=&quot;https://openjdk.java.net/&quot;&gt;OpenJDK&lt;/a&gt;.&lt;/p&gt;
   63&lt;p&gt;Here are all three steps on the command line:&lt;/p&gt;
  64&lt;pre&gt;&lt;code&gt;wget https://apache.brunneis.com/hadoop/common/hadoop-3.2.1/hadoop-3.2.1.tar.gz
  65tar xf hadoop-3.2.1.tar.gz
  66hadoop-3.2.1/bin/hadoop version
  67&lt;/code&gt;&lt;/pre&gt;
  68&lt;p&gt;We will be using the two example data files that they provide in &lt;a href=&quot;https://github.com/tomwhite/hadoop-book/tree/master/input/ncdc/all&quot;&gt;their GitHub repository&lt;/a&gt;, although the full dataset is offered by the &lt;a href=&quot;https://www.ncdc.noaa.gov/&quot;&gt;National Climatic Data Center&lt;/a&gt; (NCDC).&lt;/p&gt;
  69&lt;p&gt;We will also unzip and concatenate both files into a single text file, to make it easier to work with. As a single command pipeline:&lt;/p&gt;
  70&lt;pre&gt;&lt;code&gt;curl https://raw.githubusercontent.com/tomwhite/hadoop-book/master/input/ncdc/all/190{1,2}.gz | gunzip &amp;gt; 190x
  71&lt;/code&gt;&lt;/pre&gt;
  72&lt;p&gt;This should create a &lt;code&gt;190x&lt;/code&gt; text file in the current directory, which will be our input data.&lt;/p&gt;
  73&lt;h2 id=&quot;processing_data&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#processing_data&quot;&gt;¶&lt;/a&gt;Processing data&lt;/h2&gt;
   74&lt;p&gt;To take advantage of Hadoop, we have to design our code to work in the MapReduce model. Both the map and the reduce phases work on key-value pairs as input and output, and both have a programmer-defined function.&lt;/p&gt;
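     &lt;p&gt;Conceptually, the two phases can be summarized by their signatures, commonly written as follows (the notation the Hadoop book itself uses):&lt;/p&gt;
     &lt;pre&gt;&lt;code&gt;map:    (k1, v1)       → list(k2, v2)
     reduce: (k2, list(v2)) → list(k3, v3)
     &lt;/code&gt;&lt;/pre&gt;
     &lt;p&gt;In our case, &lt;code&gt;k2&lt;/code&gt; will be the year and &lt;code&gt;v2&lt;/code&gt; the air temperature.&lt;/p&gt;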
  75&lt;p&gt;We will use Java, because it’s a dependency that we already have anyway, so might as well.&lt;/p&gt;
  76&lt;p&gt;Our map function needs to extract the year and air temperature, which will prepare the data for later use (finding the maximum temperature for each year). We will also drop bad records here (if the temperature is missing, suspect or erroneous).&lt;/p&gt;
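     &lt;p&gt;For reference, these are the fixed-width fields of each NCDC record that the mapper relies on (0-based character offsets, matching the &lt;code&gt;substring&lt;/code&gt; calls below):&lt;/p&gt;
     &lt;pre&gt;&lt;code&gt;characters 15-18: year, e.g. 1901
     characters 87-91: signed air temperature in tenths of a degree, e.g. +0317
     character  92:    quality code (0, 1, 4, 5 and 9 mean a good reading)
     &lt;/code&gt;&lt;/pre&gt;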
  77&lt;p&gt;Copy or reproduce the following code in a file called &lt;code&gt;MaxTempMapper.java&lt;/code&gt;, using any text editor of your choice:&lt;/p&gt;
  78&lt;pre&gt;&lt;code&gt;import java.io.IOException;
  79
  80import org.apache.hadoop.io.IntWritable;
  81import org.apache.hadoop.io.LongWritable;
  82import org.apache.hadoop.io.Text;
  83import org.apache.hadoop.mapreduce.Mapper;
  84
  85public class MaxTempMapper extends Mapper&amp;lt;LongWritable, Text, Text, IntWritable&amp;gt; {
  86    private static final int TEMP_MISSING = 9999;
  87    private static final String GOOD_QUALITY_RE = &amp;quot;[01459]&amp;quot;;
  88
  89    @Override
  90    public void map(LongWritable key, Text value, Context context)
  91            throws IOException, InterruptedException {
  92        String line = value.toString();
  93        String year = line.substring(15, 19);
  94        String temp = line.substring(87, 92).replaceAll(&amp;quot;^\\+&amp;quot;, &amp;quot;&amp;quot;);
  95        String quality = line.substring(92, 93);
  96
  97        int airTemperature = Integer.parseInt(temp);
  98        if (airTemperature != TEMP_MISSING &amp;amp;&amp;amp; quality.matches(GOOD_QUALITY_RE)) {
  99            context.write(new Text(year), new IntWritable(airTemperature));
 100        }
 101    }
 102}
 103&lt;/code&gt;&lt;/pre&gt;
 104&lt;p&gt;Now, let’s create the &lt;code&gt;MaxTempReducer.java&lt;/code&gt; file. Its job is to reduce the data from multiple values into just one. We do that by keeping the maximum out of all the values we receive:&lt;/p&gt;
 105&lt;pre&gt;&lt;code&gt;import java.io.IOException;
 106import java.util.Iterator;
 107
 108import org.apache.hadoop.io.IntWritable;
 109import org.apache.hadoop.io.Text;
 110import org.apache.hadoop.mapreduce.Reducer;
 111
 112public class MaxTempReducer extends Reducer&amp;lt;Text, IntWritable, Text, IntWritable&amp;gt; {
 113    @Override
 114    public void reduce(Text key, Iterable&amp;lt;IntWritable&amp;gt; values, Context context)
 115            throws IOException, InterruptedException {
 116        Iterator&amp;lt;IntWritable&amp;gt; iter = values.iterator();
 117        if (iter.hasNext()) {
 118            int maxValue = iter.next().get();
 119            while (iter.hasNext()) {
 120                maxValue = Math.max(maxValue, iter.next().get());
 121            }
 122            context.write(key, new IntWritable(maxValue));
 123        }
 124    }
 125}
 126&lt;/code&gt;&lt;/pre&gt;
 127&lt;p&gt;Except for some Java weirdness (…why can’t we just iterate over an &lt;code&gt;Iterator&lt;/code&gt;? Or why can’t we just manually call &lt;code&gt;next()&lt;/code&gt; on an &lt;code&gt;Iterable&lt;/code&gt;?), our code is correct. There can’t be a maximum if there are no elements, and we want to avoid dummy values such as &lt;code&gt;Integer.MIN_VALUE&lt;/code&gt;.&lt;/p&gt;
 128&lt;p&gt;We can also take a moment to appreciate how absolutely tiny this code is, and it’s Java! Hadoop’s API is really awesome and lets us write such concise code to achieve what we need.&lt;/p&gt;
 129&lt;p&gt;Last, let’s write the &lt;code&gt;main&lt;/code&gt; method, or else we won’t be able to run it. In our new file &lt;code&gt;MaxTemp.java&lt;/code&gt;:&lt;/p&gt;
 130&lt;pre&gt;&lt;code&gt;import org.apache.hadoop.fs.Path;
 131import org.apache.hadoop.io.IntWritable;
 132import org.apache.hadoop.io.Text;
 133import org.apache.hadoop.mapreduce.Job;
 134import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
 135import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
 136
 137public class MaxTemp {
 138    public static void main(String[] args) throws Exception {
 139        if (args.length != 2) {
 140            System.err.println(&amp;quot;usage: java MaxTemp &amp;lt;input path&amp;gt; &amp;lt;output path&amp;gt;&amp;quot;);
 141            System.exit(-1);
 142        }
 143
 144        Job job = Job.getInstance();
 145
 146        job.setJobName(&amp;quot;Max temperature&amp;quot;);
 147        job.setJarByClass(MaxTemp.class);
 148        job.setMapperClass(MaxTempMapper.class);
 149        job.setReducerClass(MaxTempReducer.class);
 150        job.setOutputKeyClass(Text.class);
 151        job.setOutputValueClass(IntWritable.class);
 152
 153        FileInputFormat.addInputPath(job, new Path(args[0]));
 154        FileOutputFormat.setOutputPath(job, new Path(args[1]));
 155
 156        boolean result = job.waitForCompletion(true);
 157
 158        System.exit(result ? 0 : 1);
 159    }
 160}
 161&lt;/code&gt;&lt;/pre&gt;
 162&lt;p&gt;And compile by including the required &lt;code&gt;.jar&lt;/code&gt; dependencies in Java’s classpath with the &lt;code&gt;-cp&lt;/code&gt; switch:&lt;/p&gt;
 163&lt;pre&gt;&lt;code&gt;javac -cp &amp;quot;hadoop-3.2.1/share/hadoop/common/*:hadoop-3.2.1/share/hadoop/mapreduce/*&amp;quot; *.java
 164&lt;/code&gt;&lt;/pre&gt;
 165&lt;p&gt;At last, we can run it (also specifying the dependencies in the classpath, this one’s a mouthful):&lt;/p&gt;
 166&lt;pre&gt;&lt;code&gt;java -cp &amp;quot;.:hadoop-3.2.1/share/hadoop/common/*:hadoop-3.2.1/share/hadoop/common/lib/*:hadoop-3.2.1/share/hadoop/mapreduce/*:hadoop-3.2.1/share/hadoop/mapreduce/lib/*:hadoop-3.2.1/share/hadoop/yarn/*:hadoop-3.2.1/share/hadoop/yarn/lib/*:hadoop-3.2.1/share/hadoop/hdfs/*:hadoop-3.2.1/share/hadoop/hdfs/lib/*&amp;quot; MaxTemp 190x results
 167&lt;/code&gt;&lt;/pre&gt;
 168&lt;p&gt;Hooray! We should have a new &lt;code&gt;results/&lt;/code&gt; folder along with the following files:&lt;/p&gt;
 169&lt;pre&gt;&lt;code&gt;$ ls results
 170part-r-00000  _SUCCESS
 171$ cat results/part-r-00000 
 1721901	317
 1731902	244
 174&lt;/code&gt;&lt;/pre&gt;
  175&lt;p&gt;It worked! Now, this example was obviously tiny, but it is hopefully enough to demonstrate how to get the basics running on real-world data.&lt;/p&gt;
 176&lt;/main&gt;
 177&lt;/body&gt;
 178&lt;/html&gt;
 179 </content></entry><entry><title>Developing a Python application for Cassandra</title><id>dist/developing-a-python-application-for-cassandra/index.html</id><updated>2020-04-15T22:00:00+00:00</updated><published>2020-03-22T23:00:00+00:00</published><summary>Warning</summary><content type="html" src="dist/developing-a-python-application-for-cassandra/index.html">&lt;!DOCTYPE html&gt;
 180&lt;html&gt;
 181&lt;head&gt;
 182&lt;meta charset=&quot;utf-8&quot; /&gt;
 183&lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1&quot; /&gt;
 184&lt;title&gt;Developing a Python application for Cassandra&lt;/title&gt;
 185&lt;link rel=&quot;stylesheet&quot; href=&quot;../css/style.css&quot;&gt;
 186&lt;/head&gt;
 187&lt;body&gt;
 188&lt;main&gt;
  189&lt;p&gt;&lt;em&gt;&lt;strong&gt;Warning&lt;/strong&gt;: this post is, in fact, a shameless self-plug for my own library. If you continue reading, you accept that you are okay with this. Otherwise, please close the tab, shut down your computer, and set it on fire. (Also, that was a joke. Please don’t do that.)&lt;/em&gt;&lt;/p&gt;
 190&lt;div class=&quot;date-created-modified&quot;&gt;Created 2020-03-23&lt;br&gt;
 191Modified 2020-04-16&lt;/div&gt;
 192&lt;p&gt;Let’s do some programming! Today we will be making a tiny CLI application in &lt;a href=&quot;http://python.org/&quot;&gt;Python&lt;/a&gt; that queries &lt;a href=&quot;https://core.telegram.org/api&quot;&gt;Telegram’s API&lt;/a&gt; and stores the data in &lt;a href=&quot;http://cassandra.apache.org/&quot;&gt;Cassandra&lt;/a&gt;.&lt;/p&gt;
 193&lt;h2 class=&quot;title&quot; id=&quot;our_goal&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#our_goal&quot;&gt;¶&lt;/a&gt;Our goal&lt;/h2&gt;
 194&lt;p&gt;Our goal is to make a Python console application. This application will connect to &lt;a href=&quot;https://telegram.org/&quot;&gt;Telegram&lt;/a&gt;, and ask for your account credentials. Once you have logged in, the application will fetch all of your open conversations and we will store these in Cassandra.&lt;/p&gt;
  195&lt;p&gt;With the data saved in Cassandra, we can then very efficiently query information about your conversations offline, given their identifiers (no need to query Telegram anymore).&lt;/p&gt;
 196&lt;p&gt;&lt;strong&gt;In short&lt;/strong&gt;, we are making an application that performs efficient offline queries to Cassandra to print out information about your Telegram conversations given the ID you want to query.&lt;/p&gt;
 197&lt;h2 id=&quot;data_model&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#data_model&quot;&gt;¶&lt;/a&gt;Data model&lt;/h2&gt;
  198&lt;p&gt;The application itself is really simple, and we only need one table to store all the relevant information we will be needing. This table called &lt;code&gt;users&lt;/code&gt; will contain the following columns:&lt;/p&gt;
 199&lt;ul&gt;
  200&lt;li&gt;&lt;code&gt;id&lt;/code&gt;, of type &lt;code&gt;int&lt;/code&gt;. This will also be the &lt;code&gt;primary key&lt;/code&gt; and we’ll use it to query the database later on.&lt;/li&gt;
  201&lt;li&gt;&lt;code&gt;first_name&lt;/code&gt;, of type &lt;code&gt;varchar&lt;/code&gt;. This field contains the first name of the stored user.&lt;/li&gt;
  202&lt;li&gt;&lt;code&gt;last_name&lt;/code&gt;, of type &lt;code&gt;varchar&lt;/code&gt;. This field contains the last name of the stored user.&lt;/li&gt;
  203&lt;li&gt;&lt;code&gt;username&lt;/code&gt;, of type &lt;code&gt;varchar&lt;/code&gt;. This field contains the username of the stored user.&lt;/li&gt;
  204&lt;/ul&gt;
  205&lt;p&gt;Because Cassandra uses a &lt;a href=&quot;https://cassandra.apache.org/doc/latest/architecture/overview.html&quot;&gt;wide column storage model&lt;/a&gt;, direct access through a key is the most efficient way to query the database. In our case, the key is the primary key of the &lt;code&gt;users&lt;/code&gt; table, using the &lt;code&gt;id&lt;/code&gt; column. The index for the primary key is ready to be used as soon as we create the table, so we don’t need to create it on our own.&lt;/p&gt;
 206&lt;h2 id=&quot;dependencies&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#dependencies&quot;&gt;¶&lt;/a&gt;Dependencies&lt;/h2&gt;
 207&lt;p&gt;Because we will program it in Python, you need Python installed. You can install it using a package manager of your choice or heading over to the &lt;a href=&quot;https://www.python.org/downloads/&quot;&gt;Python downloads section&lt;/a&gt;, but if you’re on Linux, chances are you have it installed already.&lt;/p&gt;
 208&lt;p&gt;Once Python 3.5 or above is installed, get a copy of the Cassandra driver for Python and Telethon through &lt;code&gt;pip&lt;/code&gt;:&lt;/p&gt;
 209&lt;pre&gt;&lt;code&gt;pip install cassandra-driver telethon
 210&lt;/code&gt;&lt;/pre&gt;
 211&lt;p&gt;For more details on that, see the &lt;a href=&quot;https://docs.datastax.com/en/developer/python-driver/3.22/installation/&quot;&gt;installation guide for &lt;code&gt;cassandra-driver&lt;/code&gt;&lt;/a&gt;, or the &lt;a href=&quot;https://docs.telethon.dev/en/latest/basic/installation.html&quot;&gt;installation guide for &lt;code&gt;telethon&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
  212&lt;p&gt;As we did in our &lt;a href=&quot;/blog/mdad/cassandra-operaciones-basicas-y-arquitectura/&quot;&gt;previous post&lt;/a&gt;, we will set up a new keyspace for this application with &lt;code&gt;cqlsh&lt;/code&gt;. We will also create a table to store the users in. This could all be automated in the Python code, but because it’s a one-time thing, we prefer to use &lt;code&gt;cqlsh&lt;/code&gt;.&lt;/p&gt;
 213&lt;p&gt;Make sure that Cassandra is running in the background. We can’t make queries to it if it’s not running.&lt;/p&gt;
 214&lt;pre&gt;&lt;code&gt;$ bin/cqlsh
 215Connected to Test Cluster at 127.0.0.1:9042.
 216[cqlsh 5.0.1 | Cassandra 3.11.6 | CQL spec 3.4.4 | Native protocol v4]
 217Use HELP for help.
 218cqlsh&amp;gt; create keyspace mdad with replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
 219cqlsh&amp;gt; use mdad;
 220cqlsh:mdad&amp;gt; create table users(id int primary key, first_name varchar, last_name varchar, username varchar);
 221&lt;/code&gt;&lt;/pre&gt;
 222&lt;p&gt;Python installed? Check. Python dependencies? Check. Cassandra ready? Check.&lt;/p&gt;
 223&lt;h2 id=&quot;the_code&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#the_code&quot;&gt;¶&lt;/a&gt;The code&lt;/h2&gt;
 224&lt;h3 id=&quot;getting_users&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#getting_users&quot;&gt;¶&lt;/a&gt;Getting users&lt;/h3&gt;
 225&lt;p&gt;The first step is connecting to &lt;a href=&quot;https://core.telegram.org/api&quot;&gt;Telegram’s API&lt;/a&gt;, for which we’ll use &lt;a href=&quot;https://telethon.dev/&quot;&gt;Telethon&lt;/a&gt;, a wonderful (wink, wink) Python library to interface with it.&lt;/p&gt;
 226&lt;p&gt;As with most APIs, we need to supply &lt;a href=&quot;https://my.telegram.org/&quot;&gt;our API key&lt;/a&gt; in order to use it (here &lt;code&gt;API_ID&lt;/code&gt; and &lt;code&gt;API_HASH&lt;/code&gt;). We will refer to them as constants. At the end, you may download the entire code and use my own key for this example. But please don’t use those values for your other applications!&lt;/p&gt;
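     &lt;p&gt;So that the snippets below are self-contained, here is a sketch of the constants they assume, along with the &lt;code&gt;sys&lt;/code&gt; and &lt;code&gt;asyncio&lt;/code&gt; imports the later snippets use (every value here is a placeholder; substitute your own API key and cluster details):&lt;/p&gt;
     &lt;pre&gt;&lt;code&gt;import sys
     import asyncio

     SESSION = 'telegram'  # name for Telethon's local session file
     API_ID = 12345        # placeholder; get yours from https://my.telegram.org/
     API_HASH = '0123456789abcdef0123456789abcdef'  # placeholder
     CLUSTER_NODES = ['127.0.0.1']  # Cassandra contact points
     KEYSPACE = 'mdad'     # the keyspace we created with cqlsh
     &lt;/code&gt;&lt;/pre&gt;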
 227&lt;p&gt;It’s pretty simple: we create a client, and for every dialog (that is, open conversation) we have, do some checks:&lt;/p&gt;
 228&lt;ul&gt;
  229&lt;li&gt;If it’s a user, we just store that in a dictionary mapping &lt;code&gt;ID → User&lt;/code&gt;.&lt;/li&gt;
 230&lt;li&gt;Else if it’s a group, we iterate over the participants and store those users instead.&lt;/li&gt;
 231&lt;/ul&gt;
 232&lt;pre&gt;&lt;code&gt;async def load_users():
 233    from telethon import TelegramClient
 234
 235    users = {}
 236
 237    async with TelegramClient(SESSION, API_ID, API_HASH) as client:
 238        async for dialog in client.iter_dialogs():
 239            if dialog.is_user:
 240                user = dialog.entity
 241                users[user.id] = user
 242                print('found user:', user.id, file=sys.stderr)
 243
 244            elif dialog.is_group:
 245                async for user in client.iter_participants(dialog):
 246                    users[user.id] = user
 247                    print('found member:', user.id, file=sys.stderr)
 248
 249    return list(users.values())
 250&lt;/code&gt;&lt;/pre&gt;
  251&lt;p&gt;With this we have a mapping from ID to user, which guarantees we won’t have duplicates. We simply return the list of user values, because that’s all we care about.&lt;/p&gt;
 252&lt;h3 id=&quot;saving_users&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#saving_users&quot;&gt;¶&lt;/a&gt;Saving users&lt;/h3&gt;
 253&lt;p&gt;Inserting users into Cassandra is pretty straightforward. We take the list of &lt;code&gt;User&lt;/code&gt; objects as input, and prepare a new &lt;code&gt;INSERT&lt;/code&gt; statement that we can reuse (because we will be using it in a loop, this is the best way to do it).&lt;/p&gt;
 254&lt;p&gt;For each user, execute the statement with the user data as input parameters. Simple as that.&lt;/p&gt;
 255&lt;pre&gt;&lt;code&gt;def save_users(session, users):
 256    insert_stmt = session.prepare(
 257        'INSERT INTO users (id, first_name, last_name, username) ' 
 258        'VALUES (?, ?, ?, ?)')
 259
 260    for user in users:
 261        row = (user.id, user.first_name, user.last_name, user.username)
 262        session.execute(insert_stmt, row)
 263&lt;/code&gt;&lt;/pre&gt;
 264&lt;h3 id=&quot;fetching_users&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#fetching_users&quot;&gt;¶&lt;/a&gt;Fetching users&lt;/h3&gt;
  265&lt;p&gt;Given a list of user IDs, yield the corresponding rows from the database. Similar to before, we prepare a &lt;code&gt;SELECT&lt;/code&gt; statement and just execute it repeatedly over the input user IDs.&lt;/p&gt;
 266&lt;pre&gt;&lt;code&gt;def fetch_users(session, users):
 267    select_stmt = session.prepare('SELECT * FROM users WHERE id = ?')
 268
 269    for user_id in users:
 270        yield session.execute(select_stmt, (user_id,)).one()
 271&lt;/code&gt;&lt;/pre&gt;
 272&lt;h3 id=&quot;parsing_arguments&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#parsing_arguments&quot;&gt;¶&lt;/a&gt;Parsing arguments&lt;/h3&gt;
 273&lt;p&gt;We’ll be making a little CLI application, so we need to parse console arguments. It won’t be anything fancy, though. For that we’ll be using &lt;a href=&quot;https://docs.python.org/3/library/argparse.html&quot;&gt;Python’s &lt;code&gt;argparse&lt;/code&gt; module&lt;/a&gt;:&lt;/p&gt;
 274&lt;pre&gt;&lt;code&gt;def parse_args():
 275    import argparse
 276
 277    parser = argparse.ArgumentParser(
 278        description='Dump and query Telegram users')
 279
 280    parser.add_argument('users', type=int, nargs='*',
 281        help='one or more user IDs to query for')
 282
 283    parser.add_argument('--load-users', action='store_true',
 284        help='load users from Telegram (do this first run)')
 285
 286    return parser.parse_args()
 287&lt;/code&gt;&lt;/pre&gt;
 288&lt;h3 id=&quot;all_together&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#all_together&quot;&gt;¶&lt;/a&gt;All together&lt;/h3&gt;
  289&lt;p&gt;Last, the entry point. We import Cassandra’s &lt;code&gt;Cluster&lt;/code&gt; and connect to our keyspace (we called it &lt;code&gt;mdad&lt;/code&gt; earlier).&lt;/p&gt;
 290&lt;p&gt;If the user wants to load the users into the database, we’ll do just that first.&lt;/p&gt;
 291&lt;p&gt;Then, for each user we fetch from the database, we print it. Last names and usernames are optional, so don’t print those if they’re missing (&lt;code&gt;None&lt;/code&gt;).&lt;/p&gt;
 292&lt;pre&gt;&lt;code&gt;async def main(args):
 293    from cassandra.cluster import Cluster
 294
 295    cluster = Cluster(CLUSTER_NODES)
 296    session = cluster.connect(KEYSPACE)
 297
 298    if args.load_users:
 299        users = await load_users()
 300        save_users(session, users)
 301
 302    for user in fetch_users(session, args.users):
 303        print('User', user.id, ':')
 304        print('  First name:', user.first_name)
 305        if user.last_name:
 306            print('  Last name:', user.last_name)
 307        if user.username:
 308            print('  Username:', user.username)
 309
 310        print()
 311
 312if __name__ == '__main__':
 313    asyncio.run(main(parse_args()))
 314&lt;/code&gt;&lt;/pre&gt;
  315&lt;p&gt;Because Telethon is an &lt;a href=&quot;https://docs.python.org/3/library/asyncio.html&quot;&gt;&lt;code&gt;asyncio&lt;/code&gt;&lt;/a&gt; library, we define it as &lt;code&gt;async def main(...)&lt;/code&gt; and run it with &lt;code&gt;asyncio.run(main(...))&lt;/code&gt;.&lt;/p&gt;
 316&lt;p&gt;Here’s what it looks like in action:&lt;/p&gt;
 317&lt;pre&gt;&lt;code&gt;$ python data.py --help
 318usage: data.py [-h] [--load-users] [users [users ...]]
 319
 320Dump and query Telegram users
 321
 322positional arguments:
 323  users         one or more user IDs to query for
 324
 325optional arguments:
 326  -h, --help    show this help message and exit
 327  --load-users  load users from Telegram (do this first run)
 328
 329$ python data.py --load-users
 330found user: 487158
 331found member: 59794114
 332found member: 487158
 333found member: 191045991
 334(...a lot more output)
 335
 336$ python data.py 487158 59794114
 337User 487158 :
 338  First name: Rick
 339  Last name: Pickle
 340
 341User 59794114 :
  342  First name: Peter
 343  Username: pete
 344&lt;/code&gt;&lt;/pre&gt;
 345&lt;p&gt;Telegram’s data now persists in Cassandra, and we can efficiently query it whenever we need to! I would’ve shown a video presenting its usage, but I’m afraid that would leak some of the data I want to keep private :-).&lt;/p&gt;
 346&lt;p&gt;Feel free to download the code and try it yourself:&lt;/p&gt;
 347&lt;p&gt;&lt;em&gt;download removed&lt;/em&gt;&lt;/p&gt;
 348&lt;h2 id=&quot;references&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#references&quot;&gt;¶&lt;/a&gt;References&lt;/h2&gt;
 349&lt;ul&gt;
 350&lt;li&gt;&lt;a href=&quot;https://docs.datastax.com/en/developer/python-driver/3.22/getting_started/&quot;&gt;DataStax Python Driver for Apache Cassandra – Getting Started&lt;/a&gt;&lt;/li&gt;
 351&lt;li&gt;&lt;a href=&quot;https://docs.telethon.dev/en/latest/&quot;&gt;Telethon’s Documentation&lt;/a&gt;&lt;/li&gt;
 352&lt;/ul&gt;
 353&lt;/main&gt;
 354&lt;/body&gt;
 355&lt;/html&gt;
  356 </content></entry><entry><title>Introduction to Hadoop and its MapReduce</title><id>dist/introduction-to-hadoop-and-its-mapreduce/index.html</id><updated>2020-03-31T22:00:00+00:00</updated><published>2020-03-29T22:00:00+00:00</published><summary>Hadoop is a free, open-source, Java-based programming framework that helps process large datasets in a distributed environment, tackling the problems that arise when trying to harness the knowledge in Big Data. It is capable of running on thousands of nodes and dealing with petabytes of data. It is based on the Google File System (GFS) and originated from the work on Nutch, an open-source search engine project.</summary><content type="html" src="dist/introduction-to-hadoop-and-its-mapreduce/index.html">&lt;!DOCTYPE html&gt;
 357&lt;html&gt;
 358&lt;head&gt;
 359&lt;meta charset=&quot;utf-8&quot; /&gt;
 360&lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1&quot; /&gt;
 361&lt;title&gt;Introduction to Hadoop and its MapReduce&lt;/title&gt;
 362&lt;link rel=&quot;stylesheet&quot; href=&quot;../css/style.css&quot;&gt;
 363&lt;/head&gt;
 364&lt;body&gt;
 365&lt;main&gt;
  366&lt;p&gt;Hadoop is a free, open-source, Java-based programming framework that helps process large datasets in a distributed environment, tackling the problems that arise when trying to harness the knowledge in Big Data. It is capable of running on thousands of nodes and dealing with petabytes of data. It is based on the Google File System (GFS) and originated from the work on Nutch, an open-source search engine project.&lt;/p&gt;
 367&lt;div class=&quot;date-created-modified&quot;&gt;Created 2020-03-30&lt;br&gt;
 368Modified 2020-04-01&lt;/div&gt;
 369&lt;p&gt;Hadoop also offers a distributed filesystem (HDFS) enabling for fast transfer among nodes, and a way to program with MapReduce.&lt;/p&gt;
  370&lt;p&gt;It strives to handle the four V’s: Volume, Variety, Veracity and Velocity. As for veracity, it provides a secure environment that can be trusted.&lt;/p&gt;
 371&lt;h2 class=&quot;title&quot; id=&quot;milestones&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#milestones&quot;&gt;¶&lt;/a&gt;Milestones&lt;/h2&gt;
  372&lt;p&gt;The creators of Hadoop are Doug Cutting and Mike Cafarella, who originally just wanted to design a search engine, Nutch, and quickly ran into the problems of dealing with large amounts of data. They found their solution in the papers Google published.&lt;/p&gt;
  373&lt;p&gt;The name comes from a plush toy of Cutting’s child, a yellow elephant.&lt;/p&gt;
 374&lt;ul&gt;
 375&lt;li&gt;In July 2005, Nutch used GFS to perform MapReduce operations.&lt;/li&gt;
 376&lt;li&gt;In February 2006, Nutch started a Lucene subproject which led to Hadoop.&lt;/li&gt;
 377&lt;li&gt;In April 2007, Yahoo used Hadoop in a 1 000-node cluster.&lt;/li&gt;
 378&lt;li&gt;In January 2008, Apache took over and made Hadoop a top-level project.&lt;/li&gt;
  379&lt;li&gt;In July 2008, Apache tested a 4000-node cluster. Its performance was the fastest among comparable technologies that year.&lt;/li&gt;
 380&lt;li&gt;In May 2009, Hadoop sorted a petabyte of data in 17 hours.&lt;/li&gt;
 381&lt;li&gt;In December 2011, Hadoop reached 1.0.&lt;/li&gt;
  382&lt;li&gt;In May 2012, Hadoop 2.0 was released with the addition of YARN (Yet Another Resource Negotiator) on top of HDFS, splitting MapReduce and other processes into separate components, greatly improving the fault tolerance.&lt;/li&gt;
 383&lt;/ul&gt;
  384&lt;p&gt;From here onwards, many other alternatives have been born around the Hadoop ecosystem, like Spark, Hive &amp;amp; Drill, Kafka, and HBase.&lt;/p&gt;
  385&lt;p&gt;As of 2017, Amazon has clusters of between 1 and 100 nodes, Yahoo has over 100 000 CPUs running Hadoop, AOL has clusters with 50 machines, and Facebook has a 320-machine cluster (2 560 cores) with 1.3 PB of raw storage.&lt;/p&gt;
 386&lt;h2 id=&quot;why_not_use_rdbms_&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#why_not_use_rdbms_&quot;&gt;¶&lt;/a&gt;Why not use RDBMS?&lt;/h2&gt;
 387&lt;p&gt;Relational database management systems simply cannot scale horizontally, and vertical scaling will require very expensive servers. Similar to RDBMS, Hadoop has a notion of jobs (analogous to transactions), but without ACID or concurrency control. Hadoop supports any form of data (unstructured or semi-structured) in read-only mode, and failures are common but there’s a simple yet efficient fault tolerance.&lt;/p&gt;
  388&lt;p&gt;So what problems does Hadoop solve? It changes the way we should think about problems and how to distribute them, which is key to doing anything related to Big Data nowadays. We start working with clusters of nodes and coordinating the jobs between them, and Hadoop’s API makes this really easy.&lt;/p&gt;
  389&lt;p&gt;Hadoop also takes data loss very seriously, guarding against it with replication: if a node goes down, its work is moved to a different node.&lt;/p&gt;
 390&lt;h2 id=&quot;major_components&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#major_components&quot;&gt;¶&lt;/a&gt;Major components&lt;/h2&gt;
  391&lt;p&gt;The previously-mentioned HDFS runs on commodity machines, which are cost-friendly. It is very fault-tolerant and efficient enough to process huge amounts of data, because it splits large files into smaller chunks (or blocks) that can be more easily handled. Multiple nodes can work on multiple chunks at the same time.&lt;/p&gt;
  392&lt;p&gt;The NameNode stores the metadata of the various data blocks (the map of blocks) along with their location. It is the brain and the master in Hadoop’s master-slave architecture (it is also known as the namespace), and it makes use of the DataNodes.&lt;/p&gt;
  393&lt;p&gt;A secondary NameNode is a replica that can be used if the first NameNode dies, so that Hadoop doesn’t shut down and can restart.&lt;/p&gt;
  394&lt;p&gt;DataNodes store the blocks of data and are the slaves in the architecture. This data is split into one or more files, and their only job is to manage access to it. They are often distributed among racks to avoid data loss.&lt;/p&gt;
 395&lt;p&gt;JobTracker creates and schedules jobs from the clients for either map or reduce operations.&lt;/p&gt;
 396&lt;p&gt;TaskTracker runs MapReduce tasks assigned to the current data node.&lt;/p&gt;
  397&lt;p&gt;When clients need data, they first interact with the NameNode, which replies with the location of the data in the correct DataNode. The client then proceeds to interact with the DataNode directly.&lt;/p&gt;
 398&lt;h2 id=&quot;mapreduce&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#mapreduce&quot;&gt;¶&lt;/a&gt;MapReduce&lt;/h2&gt;
 399&lt;p&gt;MapReduce, as the name implies, is split into two steps: the map and the reduce. The map stage is the «divide and conquer» strategy, while the reduce part is about combining and reducing the results.&lt;/p&gt;
 400&lt;p&gt;The mapper has to process the input data (normally a file or directory), commonly line-by-line, and produce one or more outputs. The reducer uses all the results from the mapper as its input to produce a new output file itself.&lt;/p&gt;
 401&lt;p&gt;&lt;img src=&quot;bitmap.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
  402&lt;p&gt;When reading the data, some of it may be junk that we can choose to ignore. If it is valid data, however, we label it with a particular type that can be useful for the upcoming process. Hadoop is responsible for splitting the data across the many nodes available to execute this process in parallel.&lt;/p&gt;
  403&lt;p&gt;There is another part to MapReduce, known as Shuffle-and-Sort. In this part, types or categories from one node get moved to a different node. This happens with all nodes, so that every node can work on a complete category. These categories are known as «keys», and this is what allows Hadoop to scale linearly.&lt;/p&gt;
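     &lt;p&gt;As a tiny end-to-end trace of the whole pipeline, here is roughly what the year/temperature example from the practical Hadoop post looks like at each stage (intermediate values are made up):&lt;/p&gt;
     &lt;pre&gt;&lt;code&gt;map:     (0, &amp;quot;...1901...+0317...&amp;quot;) → (&amp;quot;1901&amp;quot;, 317)   one pair per valid line
     shuffle: (&amp;quot;1901&amp;quot;, [317, 241, 198, ...])              values grouped by key
     reduce:  (&amp;quot;1901&amp;quot;, 317)                               the maximum per key
     &lt;/code&gt;&lt;/pre&gt;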
 404&lt;h2 id=&quot;references&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#references&quot;&gt;¶&lt;/a&gt;References&lt;/h2&gt;
 405&lt;ul&gt;
 406&lt;li&gt;&lt;a href=&quot;https://youtu.be/oT7kczq5A-0&quot;&gt;YouTube – Hadoop Tutorial For Beginners | What Is Hadoop? | Hadoop Tutorial | Hadoop Training | Simplilearn&lt;/a&gt;&lt;/li&gt;
 407&lt;li&gt;&lt;a href=&quot;https://youtu.be/bcjSe0xCHbE&quot;&gt;YouTube – Learn MapReduce with Playing Cards&lt;/a&gt;&lt;/li&gt;
 408&lt;li&gt;&lt;a href=&quot;https://youtu.be/j8ehT1_G5AY?list=PLi4tp-TF_qjM_ed4lIzn03w7OnEh0D8Xi&quot;&gt;YouTube – Video Post #2: Hadoop para torpes (I)-¿Qué es y para qué sirve?&lt;/a&gt;&lt;/li&gt;
 409&lt;li&gt;&lt;a href=&quot;https://youtu.be/NQ8mjVPCDvk?list=PLi4tp-TF_qjM_ed4lIzn03w7OnEh0D8Xi&quot;&gt;Video Post #3: Hadoop para torpes (II)-¿Cómo funciona? HDFS y MapReduce&lt;/a&gt;&lt;/li&gt;
 410&lt;li&gt;&lt;a href=&quot;https://hadoop.apache.org/old/releases.html&quot;&gt;Apache Hadoop Releases&lt;/a&gt;&lt;/li&gt;
 411&lt;li&gt;&lt;a href=&quot;https://youtu.be/20qWx2KYqYg?list=PLi4tp-TF_qjM_ed4lIzn03w7OnEh0D8Xi&quot;&gt;Video Post #4: Hadoop para torpes (III y fin)- Ecosistema y distribuciones&lt;/a&gt;&lt;/li&gt;
 412&lt;li&gt;&lt;a href=&quot;http://www.hadoopbook.com/&quot;&gt;Chapter 2 – Hadoop: The Definitive Guide, Fourth Edition&lt;/a&gt; (&lt;a href=&quot;http://grut-computing.com/HadoopBook.pdf&quot;&gt;pdf,&lt;/a&gt;&lt;a href=&quot;http://www.hadoopbook.com/code.html&quot;&gt;code&lt;/a&gt;)&lt;/li&gt;
 413&lt;/ul&gt;
 414&lt;/main&gt;
 415&lt;/body&gt;
 416&lt;/html&gt;
 417 </content></entry><entry><title>Data Warehousing and OLAP</title><id>dist/data-warehousing-and-olap/index.html</id><updated>2020-03-31T22:00:00+00:00</updated><published>2020-03-22T23:00:00+00:00</published><summary>Business intelligence (BI) refers to systems used to gain insights from data, traditionally taken from relational databases and being used to build a data warehouse. Performance and scalability are key aspects of BI systems.</summary><content type="html" src="dist/data-warehousing-and-olap/index.html">&lt;!DOCTYPE html&gt;
 418&lt;html&gt;
 419&lt;head&gt;
 420&lt;meta charset=&quot;utf-8&quot; /&gt;
 421&lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1&quot; /&gt;
 422&lt;title&gt;Data Warehousing and OLAP&lt;/title&gt;
 423&lt;link rel=&quot;stylesheet&quot; href=&quot;../css/style.css&quot;&gt;
 424&lt;/head&gt;
 425&lt;body&gt;
 426&lt;main&gt;
 427&lt;p&gt;Business intelligence (BI) refers to systems used to gain insights from data, traditionally taken from relational databases and being used to build a data warehouse. Performance and scalability are key aspects of BI systems.&lt;/p&gt;
 428&lt;div class=&quot;date-created-modified&quot;&gt;Created 2020-03-23&lt;br&gt;
 429Modified 2020-04-01&lt;/div&gt;
 430&lt;p&gt;Commonly, the data in the warehouse is a transformation of the original, operational data into a form better suited for reporting and analysis.&lt;/p&gt;
  431&lt;p&gt;This whole process is known as Online Analytical Processing (OLAP), and it differs from the approach taken by relational databases, known as Online Transaction Processing (OLTP), which is optimized for individual transactions. By its very nature, OLAP is based on multidimensional databases.&lt;/p&gt;
 432&lt;p&gt;The Business Intelligence Semantic Model (BISM) refers to the different semantics in which data can be accessed and queried.&lt;/p&gt;
 433&lt;p&gt;On the one hand, MDX is the language used for Microsoft’s BISM of multidimensional mode, and on the other, DAX is the language of tabular mode, based on Excel’s formula language and designed to be easy to use by those familiar with Excel.&lt;/p&gt;
 434&lt;h2 class=&quot;title&quot; id=&quot;types_of_data&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#types_of_data&quot;&gt;¶&lt;/a&gt;Types of data&lt;/h2&gt;
  435&lt;p&gt;The business data, often called detail data or &lt;em&gt;fact&lt;/em&gt; data, goes in a de-normalized table called the fact table. The term «facts» literally refers to the facts, such as number of products sold and amount received for products sold. Different tables will often represent different dimensions of the data, where «dimensions» simply means different ways to look at the data.&lt;/p&gt;
  436&lt;p&gt;Data can also be referred to as measures, because most of it consists of numbers that are subject to aggregation.&lt;/p&gt;
 437&lt;p&gt;Multidimensional databases are formed with separate fact and dimension tables, grouped to create a «cube» with both facts and dimensions.&lt;/p&gt;
 438&lt;h2 id=&quot;places_to_store_data&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#places_to_store_data&quot;&gt;¶&lt;/a&gt;Places to store data&lt;/h2&gt;
 439&lt;p&gt;Three different terms are often heard when talking about the places where data is stored: data lakes, data warehouses, and data marts. All of these have different target users, cost, size and growth.&lt;/p&gt;
 440&lt;p&gt;The data lake contains &lt;strong&gt;all&lt;/strong&gt; the data generated by your business. Nothing is filtered out, not even cancelled or invalid transactions. If there are future plans to use the data, or a need to analyze it in various ways, a data lake is often necessary.&lt;/p&gt;
 441&lt;p&gt;The data warehouse contains &lt;strong&gt;structured&lt;/strong&gt; data, or has already been modelled. It’s also multi-purpose, but often of a lot smaller scale. Operational users are able to easily evaluate reports or analyze performance here, since it is built for their needs.&lt;/p&gt;
 442&lt;p&gt;The data mart contains a &lt;strong&gt;small portion&lt;/strong&gt; of the data, and is often part of data warehouses themselves. It can be seen as a subsection built for specific departments, and as a benefit, users get isolated security and performance. The data here is clean, and subject-oriented.&lt;/p&gt;
 443&lt;h2 id=&quot;ways_to_store_data&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#ways_to_store_data&quot;&gt;¶&lt;/a&gt;Ways to store data&lt;/h2&gt;
 444&lt;p&gt;Data is often stored de-normalized, because it would not be feasible to store otherwise.&lt;/p&gt;
  445&lt;p&gt;There are two main techniques to implement data warehouses, known as the Inmon approach and the Kimball approach. They are named after Bill Inmon &lt;em&gt;et al.&lt;/em&gt; for their work on «Corporate Information Factory», and Ralph Kimball &lt;em&gt;et al.&lt;/em&gt; for their work on «The Data Warehouse Lifecycle Toolkit», respectively.&lt;/p&gt;
 446&lt;p&gt;When several independent systems identify and store data in different ways, we face what’s known as the problem of the stovepipe. Something as simple as trying to connect these systems or use their data in a warehouse results in an overly complicated system.&lt;/p&gt;
 447&lt;p&gt;To tackle this issue, Kimball advocates the use of «conformed dimensions», that is, some dimensions will be «of interest», and have the same attributes and rollups (or at least a subset) in different data marts. This way, warehouses contain dimensional databases to ease analysis in the data marts it is composed of, and users query the warehouse.&lt;/p&gt;
 448&lt;p&gt;The Inmon approach on the other hand has the warehouse laid out in third normal form, and users query the data marts, not the warehouse (so the data marts are dimensional in nature).&lt;/p&gt;
 449&lt;h2 id=&quot;key_takeaways&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#key_takeaways&quot;&gt;¶&lt;/a&gt;Key takeaways&lt;/h2&gt;
 450&lt;ul&gt;
  451&lt;li&gt;«BI» stands for «Business Intelligence» and refers to the systems that &lt;em&gt;perform&lt;/em&gt; data analysis.&lt;/li&gt;
 452&lt;li&gt;«BISM» stands for «Business Intelligence Semantic Model», and Microsoft has two languages to query data: MDX and DAX.&lt;/li&gt;
 453&lt;li&gt;«OLAP» stands for «Online Analytical Processing», and «OLTP» for «Online Transaction Processing».&lt;/li&gt;
 454&lt;li&gt;Data mart, warehouse and lake refer to places at different scales and with different needs to store data.&lt;/li&gt;
  455&lt;li&gt;Inmon and Kimball are different ways to implement data warehouses.&lt;/li&gt;
  456&lt;li&gt;Fact data contains various measures arranged into different dimensions, which together form a data cube.&lt;/li&gt;
 457&lt;/ul&gt;
 458&lt;h2 id=&quot;references&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#references&quot;&gt;¶&lt;/a&gt;References&lt;/h2&gt;
 459&lt;ul&gt;
 460&lt;li&gt;&lt;a href=&quot;https://media.wiley.com/product_data/excerpt/03/11181011/1118101103-157.pdf&quot;&gt;Chapter 1 – Professional Microsoft SQL Server 2012 Analysis Services with MDX and DAX (Harinath et al., 2012)&lt;/a&gt;&lt;/li&gt;
 461&lt;li&gt;&lt;a href=&quot;https://youtu.be/m_DzhW-2pWI&quot;&gt;YouTube – Data Mining in SQL Server Analysis Services&lt;/a&gt;&lt;/li&gt;
 462&lt;li&gt;Almacenes de Datos y Procesamiento Analítico On-Line (Félix R.)&lt;/li&gt;
 463&lt;li&gt;&lt;a href=&quot;https://youtu.be/qkJOace9FZg&quot;&gt;YouTube – What are Dimensions and Measures?&lt;/a&gt;&lt;/li&gt;
 464&lt;li&gt;&lt;a href=&quot;https://www.holistics.io/blog/data-lake-vs-data-warehouse-vs-data-mart/&quot;&gt;Data Lake vs Data Warehouse vs Data Mart&lt;/a&gt;&lt;/li&gt;
 465&lt;/ul&gt;
 466&lt;/main&gt;
 467&lt;/body&gt;
 468&lt;/html&gt;
  469 </content></entry><entry><title>Cassandra: Introduction</title><id>dist/cassandra-introduccion/index.html</id><updated>2020-03-29T22:00:00+00:00</updated><published>2020-03-04T23:00:00+00:00</published><summary>This is the first post in the series on Cassandra, in which we will introduce this NoSQL database and go over its features and installation.</summary><content type="html" src="dist/cassandra-introduccion/index.html">&lt;!DOCTYPE html&gt;
 470&lt;html&gt;
 471&lt;head&gt;
 472&lt;meta charset=&quot;utf-8&quot; /&gt;
 473&lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1&quot; /&gt;
  474&lt;title&gt;Cassandra: Introduction&lt;/title&gt;
 475&lt;link rel=&quot;stylesheet&quot; href=&quot;../css/style.css&quot;&gt;
 476&lt;/head&gt;
 477&lt;body&gt;
 478&lt;main&gt;
 479&lt;p&gt;&lt;img src=&quot;1200px-Cassandra_logo.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
 480&lt;div class=&quot;date-created-modified&quot;&gt;Created 2020-03-05&lt;br&gt;
 481Modified 2020-03-30&lt;/div&gt;
  482&lt;p&gt;This is the first post in the series on Cassandra, in which we will introduce this NoSQL database and go over its features and installation.&lt;/p&gt;
  483&lt;p&gt;Other posts in this series:&lt;/p&gt;
  484&lt;ul&gt;
  485&lt;li&gt;&lt;a href=&quot;/blog/mdad/cassandra-introduccion/&quot;&gt;Cassandra: Introduction&lt;/a&gt; (this post)&lt;/li&gt;
  486&lt;li&gt;&lt;a href=&quot;/blog/mdad/cassandra-operaciones-basicas-y-arquitectura/&quot;&gt;Cassandra: Basic Operations and Architecture&lt;/a&gt;&lt;/li&gt;
  487&lt;/ul&gt;
  488&lt;p&gt;This post was made in collaboration with a classmate.&lt;/p&gt;
 489&lt;hr /&gt;
  490&lt;h2 class=&quot;title&quot; id=&quot;finalidad_de_la_tecnología&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#finalidad_de_la_tecnología&quot;&gt;¶&lt;/a&gt;Purpose of the technology&lt;/h2&gt;
  491&lt;p&gt;Apache Cassandra is a distributed, open-source NoSQL database (&lt;a href=&quot;https://github.com/apache/cassandra&quot;&gt;with a mirror on GitHub&lt;/a&gt;). Its philosophy is «key-value», and it can handle large volumes of data.&lt;/p&gt;
  492&lt;p&gt;Among its goals, it seeks horizontal scalability (it can be replicated across several data centers while keeping latency low) and high availability without sacrificing performance.&lt;/p&gt;
  493&lt;h2 id=&quot;cómo_funciona&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#cómo_funciona&quot;&gt;¶&lt;/a&gt;How it works&lt;/h2&gt;
  494&lt;p&gt;Cassandra instances are distributed across equal nodes (that is, there is no master-slave relationship) which communicate with each other (P2P). This way, it provides good support across several data centers, with redundancy and synchronous replicas.&lt;/p&gt;
  495&lt;p&gt;&lt;img src=&quot;multiple-data-centers-and-data-replication-in-cassandra.jpg&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
  496&lt;p&gt;Regarding the data model, Cassandra partitions the rows in order to re-organize them across different tables. The first component of the primary key, known as the «partition key», determines the partition. Within each partition, rows are grouped by the remaining columns of the key. Any other column can be indexed independently of the primary key.&lt;/p&gt;
  497&lt;p&gt;Tables can be created, dropped, updated and queried without blocking. There is no support for JOINs or subqueries; instead, Cassandra favors de-normalizing the data through features such as collections.&lt;/p&gt;
  498&lt;p&gt;Operations on Cassandra are performed with CQL (Cassandra Query Language), which has a syntax very similar to SQL.&lt;/p&gt;
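     &lt;p&gt;As an illustration of the partitioning described above, this is how a compound primary key is declared in CQL (a hypothetical table, not used later in the series), where &lt;code&gt;author&lt;/code&gt; is the partition key and &lt;code&gt;created_at&lt;/code&gt; a clustering column:&lt;/p&gt;
     &lt;pre&gt;&lt;code&gt;create table posts(
         author text,          -- partition key: decides which node stores the row
         created_at timestamp, -- clustering column: orders rows inside a partition
         content text,
         primary key (author, created_at)
     );
     &lt;/code&gt;&lt;/pre&gt;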
  499&lt;h2 id=&quot;características&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#características&quot;&gt;¶&lt;/a&gt;Features&lt;/h2&gt;
  500&lt;p&gt;As we mentioned before, Cassandra’s architecture is &lt;strong&gt;decentralized&lt;/strong&gt;. It has no single point of failure, because all the nodes are equal (there are no masters) and therefore any of them can serve a request.&lt;/p&gt;
  501&lt;p&gt;The data is &lt;strong&gt;replicated&lt;/strong&gt; across the different nodes of the cluster (which offers great &lt;strong&gt;fault tolerance&lt;/strong&gt; without needing to interrupt the application), and it is trivial to &lt;strong&gt;scale&lt;/strong&gt; by adding more nodes to the system.&lt;/p&gt;
  502&lt;p&gt;The &lt;strong&gt;consistency&lt;/strong&gt; level for reads and writes is configurable.&lt;/p&gt;
  503&lt;p&gt;Being part of the Apache family, Cassandra offers integration with Apache Hadoop for MapReduce support.&lt;/p&gt;
  504&lt;h2 id=&quot;arista_dentro_del_teorema_cap&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#arista_dentro_del_teorema_cap&quot;&gt;¶&lt;/a&gt;Corner within the CAP theorem&lt;/h2&gt;
  505&lt;p&gt;Cassandra sits in the «AP» corner along with CouchDB and others, because it guarantees both availability and partition tolerance.&lt;/p&gt;
  506&lt;p&gt;However, it can be configured as a «CP» system if one prefers to preserve consistency at all times.&lt;/p&gt;
 507&lt;p&gt;&lt;img src=&quot;0.jpeg&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
  508&lt;h2 id=&quot;descarga&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#descarga&quot;&gt;¶&lt;/a&gt;Download&lt;/h2&gt;
  509&lt;p&gt;You can follow the instructions on the official page to &lt;a href=&quot;https://cassandra.apache.org/download/&quot;&gt;download Cassandra&lt;/a&gt;. To do so, click on the &lt;a href=&quot;https://www.apache.org/dyn/closer.lua/cassandra/3.11.6/apache-cassandra-3.11.6-bin.tar.gz&quot;&gt;latest version to download the archive&lt;/a&gt;. In our case, this is the link named «3.11.6», the version we will be using.&lt;/p&gt;
  510&lt;h2 id=&quot;instalación&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#instalación&quot;&gt;¶&lt;/a&gt;Installation&lt;/h2&gt;
  511&lt;p&gt;Cassandra does not offer binaries for Windows, so we will use Linux to install it. In our case, we have a Linux Mint system (an Ubuntu derivative), but a virtual machine with any Linux should work.&lt;/p&gt;
  512&lt;p&gt;We must make sure that Java and Python 2 are installed, with the following command:&lt;/p&gt;
 513&lt;pre&gt;&lt;code&gt;apt install openjdk-8-jdk openjdk-8-jre python2.7
 514&lt;/code&gt;&lt;/pre&gt;
  515&lt;p&gt;To verify that the installation went well, we can print the programs’ versions:&lt;/p&gt;
 516&lt;pre&gt;&lt;code&gt;$ java -version
 517openjdk version &amp;quot;1.8.0_242&amp;quot;
 518OpenJDK Runtime Environment (build 1.8.0_242-8u242-b08-0ubuntu3~18.04-b08)
 519OpenJDK 64-Bit Server VM (build 25.242-b08, mixed mode)
 520
 521$ python2 --version
 522Python 2.7.17
 523&lt;/code&gt;&lt;/pre&gt;
  524&lt;p&gt;Once the dependencies are installed, we extract the downloaded file, either with our system’s graphical interface or with a command:&lt;/p&gt;
 525&lt;pre&gt;&lt;code&gt;tar xf apache-cassandra-3.11.6-bin.tar.gz
 526&lt;/code&gt;&lt;/pre&gt;
  527&lt;p&gt;And finally, we launch Cassandra:&lt;/p&gt;
 528&lt;pre&gt;&lt;code&gt;apache-cassandra-3.11.6/bin/cassandra
 529&lt;/code&gt;&lt;/pre&gt;
  530&lt;p&gt;It may take a little while to start, but after that there should be plenty of log lines showing activity. To shut the server down, simply press &lt;code&gt;Ctrl+C&lt;/code&gt;.&lt;/p&gt;
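     &lt;p&gt;To double-check that the node is actually up, we can query the cluster status from another terminal (a quick sanity check, assuming the same extracted directory):&lt;/p&gt;
     &lt;pre&gt;&lt;code&gt;apache-cassandra-3.11.6/bin/nodetool status
     &lt;/code&gt;&lt;/pre&gt;
     &lt;p&gt;A line starting with &lt;code&gt;UN&lt;/code&gt; (Up, Normal) next to our node’s address means everything is working.&lt;/p&gt;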
  531&lt;h2 id=&quot;referencias&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#referencias&quot;&gt;¶&lt;/a&gt;References&lt;/h2&gt;
 532&lt;ul&gt;
 533&lt;li&gt;&lt;a href=&quot;https://blog.yugabyte.com/apache-cassandra-architecture-how-it-works-lightweight-transactions/&quot;&gt;Apache Cassandra Architecture Fundamentals – The Distributed SQL Blog&lt;/a&gt;&lt;/li&gt;
 534&lt;li&gt;&lt;a href=&quot;https://cassandra.apache.org/&quot;&gt;Apache Cassandra&lt;/a&gt;&lt;/li&gt;
 535&lt;li&gt;&lt;a href=&quot;https://www.datastax.com/blog/2019/05/how-apache-cassandratm-balances-consistency-availability-and-performance&quot;&gt;How Apache Cassandra™ Balances Consistency, Availability, and Performance – Datasax&lt;/a&gt;&lt;/li&gt;
 536&lt;li&gt;&lt;a href=&quot;https://blog.yugabyte.com/apache-cassandra-architecture-how-it-works-lightweight-transactions/&quot;&gt;Apache Cassandra Architecture Fundamentals&lt;/a&gt;&lt;/li&gt;
 537&lt;/ul&gt;
 538&lt;/main&gt;
 539&lt;/body&gt;
 540&lt;/html&gt;
  541 </content></entry><entry><title>Private: NoSQL evaluation</title><id>dist/nosql-evaluation/index.html</id><updated>2020-03-27T23:00:00+00:00</updated><published>2020-03-15T23:00:00+00:00</published><summary>This evaluation is based on the criteria for the first delivery described by Trabajos en grupo sobre Bases de Datos NoSQL.</summary><content type="html" src="dist/nosql-evaluation/index.html">&lt;!DOCTYPE html&gt;
 542&lt;html&gt;
 543&lt;head&gt;
 544&lt;meta charset=&quot;utf-8&quot; /&gt;
 545&lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1&quot; /&gt;
  546&lt;title&gt;Private: NoSQL evaluation&lt;/title&gt;
 547&lt;link rel=&quot;stylesheet&quot; href=&quot;../css/style.css&quot;&gt;
 548&lt;/head&gt;
 549&lt;body&gt;
 550&lt;main&gt;
 551&lt;p&gt;This evaluation is based on the criteria for the first delivery described by Trabajos en grupo sobre Bases de Datos NoSQL.&lt;/p&gt;
 552&lt;div class=&quot;date-created-modified&quot;&gt;Created 2020-03-16&lt;br&gt;
 553Modified 2020-03-28&lt;/div&gt;
 554&lt;p&gt;I have chosen to evaluate the following people and works:&lt;/p&gt;
 555&lt;ul&gt;
 556&lt;li&gt;a12: Classmate (username) with Druid.&lt;/li&gt;
 557&lt;li&gt;a21: Classmate (username) with Neo4J.&lt;/li&gt;
 558&lt;/ul&gt;
 559&lt;h2 class=&quot;title&quot; id=&quot;classmate_s_evaluation&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#classmate_s_evaluation&quot;&gt;¶&lt;/a&gt;Classmate’s Evaluation&lt;/h2&gt;
 560&lt;p&gt;&lt;strong&gt;Grading: A.&lt;/strong&gt;&lt;/p&gt;
 561&lt;p&gt;The post evaluated is Bases de datos NoSQL – Apache Druid – Primera entrega.&lt;/p&gt;
  562&lt;p&gt;It is a very well-written, complete post, with each section meeting one of the points in the required criteria. The only thing that bothered me a little is the overuse of strong emphasis throughout the text, which I found quite distracting. However, the content deserves the highest grading.&lt;/p&gt;
 563&lt;h2 id=&quot;classmate_s_evaluation_2&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#classmate_s_evaluation_2&quot;&gt;¶&lt;/a&gt;Classmate’s Evaluation&lt;/h2&gt;
 564&lt;p&gt;&lt;strong&gt;Grading: A.&lt;/strong&gt;&lt;/p&gt;
 565&lt;p&gt;The post evaluated is Bases de datos NoSQL – Neo4j – Primera entrega.&lt;/p&gt;
&lt;p&gt;A well-written post, although a bit shorter than Classmate’s; that’s not really an issue, though. It still covers everything it should and includes photos that go along with the text and help. There are no noticeable mistakes in it, so it gets the highest grading as well.&lt;/p&gt;
 567&lt;/main&gt;
 568&lt;/body&gt;
 569&lt;/html&gt;
 570 </content></entry><entry><title>Mining of Massive Datasets</title><id>dist/mining-of-massive-datasets/index.html</id><updated>2020-03-27T23:00:00+00:00</updated><published>2020-03-15T23:00:00+00:00</published><summary>In this post we will talk about the Chapter 1 of the book Mining of Massive Datasets Leskovec, J. et al., available online, and I will summarize and share my thoughts on it.</summary><content type="html" src="dist/mining-of-massive-datasets/index.html">&lt;!DOCTYPE html&gt;
 571&lt;html&gt;
 572&lt;head&gt;
 573&lt;meta charset=&quot;utf-8&quot; /&gt;
 574&lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1&quot; /&gt;
 575&lt;title&gt;Mining of Massive Datasets&lt;/title&gt;
 576&lt;link rel=&quot;stylesheet&quot; href=&quot;../css/style.css&quot;&gt;
 577&lt;/head&gt;
 578&lt;body&gt;
 579&lt;main&gt;
&lt;p&gt;In this post we will talk about Chapter 1 of the book Mining of Massive Datasets by Leskovec, J. et al., available online; I will summarize it and share my thoughts on it.&lt;/p&gt;
 581&lt;div class=&quot;date-created-modified&quot;&gt;Created 2020-03-16&lt;br&gt;
 582Modified 2020-03-28&lt;/div&gt;
 583&lt;p&gt;Data mining often refers to the discovery of models for data, where the model can be for statistics, machine learning, summarizing, extracting features, or other computational approaches to perform complex queries on the data.&lt;/p&gt;
&lt;p&gt;Commonly, problems related to data mining involve discovering unusual events hidden in massive data sets. Trying to achieve Total Information Awareness (TIA), a project proposed by the Bush administration and later shut down, illustrates another problem: if you look at that much data searching for activities that look like (for example) terrorist behavior, you will inevitably also flag illicit-looking activities that are not terrorism, with bad consequences for the people involved. So it is important, in this case, to narrow down the activities we are looking for.&lt;/p&gt;
&lt;p&gt;When looking for a certain event type in data, even completely random data, the event will likely occur somewhere, and with more data it will occur more times. However, these are bogus results. The Bonferroni correction gives a statistically sound way to avoid most of these bogus results, and Bonferroni’s principle can be used as an informal version to achieve much the same thing.&lt;/p&gt;
&lt;p&gt;For that, we calculate the expected number of occurrences of the events we are looking for, under the assumption that the data is random. If this number is way larger than the number of real instances we hoped to find, then nearly everything we find will be bogus.&lt;/p&gt;
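&lt;p&gt;As a quick illustration of that calculation (the numbers here are made up, not from the book): suppose we scan a billion events for a pattern that a random event matches one time in a million. Chance alone already yields about a thousand matches, so finding a thousand «suspicious» events proves nothing:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Informal Bonferroni-style sanity check with made-up numbers.
events = 10 ** 9        # events examined
p_match = 10 ** -6      # probability a random event looks suspicious
expected_bogus = events * p_match

real_instances = 10     # instances we actually hope to find
print(f'Expected chance matches: {expected_bogus:.0f}')
print('Mostly bogus!' if expected_bogus &amp;gt; 10 * real_instances else 'Meaningful')
&lt;/code&gt;&lt;/pre&gt;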
 587&lt;hr /&gt;
&lt;p&gt;When analysing documents, some words will be more important than others and can help determine the topic of the document. One could think the most repeated words are the most important, but that’s far from the truth: the most common words are stop-words, which carry no meaning, which is why we should remove them prior to processing. We are mostly looking for rare nouns.&lt;/p&gt;
&lt;p&gt;There are, of course, formal measures of how concentrated into relatively few documents the occurrences of a given word are, known as TF.IDF (Term Frequency times Inverse Document Frequency). We won’t go into detail on how to compute it, because there are multiple variants.&lt;/p&gt;
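&lt;p&gt;Still, one common formulation is easy to sketch (this is just one of the variants, not necessarily the book’s):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import math

def tf_idf(term, doc, docs):
    # Term frequency: count in this document, normalized by the top term.
    tf = doc.count(term) / max(doc.count(w) for w in set(doc))
    # Inverse document frequency: penalize terms present in many documents.
    in_docs = sum(1 for d in docs if term in d)
    return tf * math.log(len(docs) / in_docs)

docs = [['data', 'mining', 'data'], ['wine', 'tasting'], ['data', 'warehouse']]
print(tf_idf('mining', docs[0], docs))  # concentrated word, higher score
print(tf_idf('data', docs[0], docs))    # spread-out word, lower score
&lt;/code&gt;&lt;/pre&gt;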
&lt;p&gt;Hash functions are also frequently used, because they can turn hash keys into a bucket number (the index of the bucket where the key belongs). They «randomize» and spread the universe of keys across a smaller number of buckets, which is useful for storage and access.&lt;/p&gt;
 591&lt;p&gt;An index is an efficient structure to query for values given a key, and can be built with hash functions and buckets.&lt;/p&gt;
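&lt;p&gt;A minimal sketch of both ideas together (a toy bucketed index, not how any particular system implements it):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;N_BUCKETS = 8
buckets = [[] for _ in range(N_BUCKETS)]

def put(key, value):
    # The hash function spreads keys across a small number of buckets.
    buckets[hash(key) % N_BUCKETS].append((key, value))

def get(key):
    # Only one bucket is scanned instead of the whole data set.
    return [v for k, v in buckets[hash(key) % N_BUCKETS] if k == key]

put('alice', 1)
put('bob', 2)
print(get('alice'))  # [1]
&lt;/code&gt;&lt;/pre&gt;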
&lt;p&gt;Having all of these tools is important when analysing documents for data mining, because otherwise it would take far too long.&lt;/p&gt;
 593&lt;/main&gt;
 594&lt;/body&gt;
 595&lt;/html&gt;
</content></entry><entry><title>MongoDB: Operaciones Básicas y Arquitectura</title><id>dist/mongodb-operaciones-basicas-y-arquitectura/index.html</id><updated>2020-03-19T23:00:00+00:00</updated><published>2020-03-04T23:00:00+00:00</published><summary>This is the second post in the MongoDB series, with a brief description of the basic operations (such as insertion, retrieval and indexing) and a complete walkthrough, along with the data model and architecture.</summary><content type="html" src="dist/mongodb-operaciones-basicas-y-arquitectura/index.html">&lt;!DOCTYPE html&gt;
 597&lt;html&gt;
 598&lt;head&gt;
 599&lt;meta charset=&quot;utf-8&quot; /&gt;
 600&lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1&quot; /&gt;
 601&lt;title&gt;MongoDB: Operaciones Básicas y Arquitectura&lt;/title&gt;
 602&lt;link rel=&quot;stylesheet&quot; href=&quot;../css/style.css&quot;&gt;
 603&lt;/head&gt;
 604&lt;body&gt;
 605&lt;main&gt;
&lt;p&gt;This is the second post in the MongoDB series, with a brief description of the basic operations (such as insertion, retrieval and indexing) and a complete walkthrough, along with the data model and architecture.&lt;/p&gt;
 607&lt;div class=&quot;date-created-modified&quot;&gt;Created 2020-03-05&lt;br&gt;
 608Modified 2020-03-20&lt;/div&gt;
&lt;p&gt;Other posts in this series:&lt;/p&gt;
 610&lt;ul&gt;
 611&lt;li&gt;&lt;a href=&quot;/blog/mdad/mongodb-introduction/&quot;&gt;MongoDB: Introducción&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;/blog/mdad/mongodb-operaciones-basicas-y-arquitectura/&quot;&gt;MongoDB: Operaciones Básicas y Arquitectura&lt;/a&gt; (this post)&lt;/li&gt;
 613&lt;/ul&gt;
&lt;p&gt;This post was made in collaboration with a classmate, and in it we will see some examples of the basic (&lt;a href=&quot;https://stackify.com/what-are-crud-operations/&quot;&gt;CRUD&lt;/a&gt;) operations on MongoDB.&lt;/p&gt;
 615&lt;hr /&gt;
&lt;p&gt;We will start by seeing how to create a new database in MongoDB, along with a new collection where we can insert our documents.&lt;/p&gt;
&lt;h2 class=&quot;title&quot; id=&quot;creación_de_una_base_de_datos_e_inserción_de_un_primer_documento&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#creación_de_una_base_de_datos_e_inserción_de_un_primer_documento&quot;&gt;¶&lt;/a&gt;Creating a database and inserting a first document&lt;/h2&gt;
&lt;p&gt;We can see the databases we have available by running the command:&lt;/p&gt;
 619&lt;pre&gt;&lt;code&gt;&amp;gt; show databases
 620admin   0.000GB
 621config  0.000GB
 622local   0.000GB
 623&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To create a new database, or use one of those already created, we run &lt;code&gt;use&lt;/code&gt; along with the name we want to give it:&lt;/p&gt;
 625&lt;pre&gt;&lt;code&gt;&amp;gt; use new_DB
 626switched to db new_DB
 627&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once this is done, we can see that if we run «show databases» again, the new database does not show up. This is because, for Mongo to register a database in the list of existing ones, we need to insert at least one document into one of its collections. We can do so as follows:&lt;/p&gt;
 629&lt;pre&gt;&lt;code&gt;&amp;gt; db.movie.insert({&amp;quot;name&amp;quot;:&amp;quot;tutorials point&amp;quot;})
 630WriteResult({ &amp;quot;nInserted&amp;quot; : 1 })
 631
 632&amp;gt; show databases
 633admin       0.000GB
 634config      0.000GB
 635local       0.000GB
new_DB      0.000GB
 637&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Just as we can see the existing databases, we can also check which collections exist inside them. Following the previous execution, if we run:&lt;/p&gt;
 639&lt;pre&gt;&lt;code&gt;&amp;gt; show collections
 640movie
 641&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&quot;borrar_base_de_datos&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#borrar_base_de_datos&quot;&gt;¶&lt;/a&gt;Deleting a database&lt;/h3&gt;
&lt;p&gt;To delete a database we have to run the following command:&lt;/p&gt;
 644&lt;pre&gt;&lt;code&gt;&amp;gt; db.dropDatabase()
 645{ &amp;quot;dropped&amp;quot; : &amp;quot;new_DB&amp;quot;, &amp;quot;ok&amp;quot; : 1 }
 646&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&quot;crear_colección&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#crear_colección&quot;&gt;¶&lt;/a&gt;Creating a collection&lt;/h3&gt;
&lt;p&gt;There are two ways to create a collection. Either with the command:&lt;/p&gt;
 649&lt;pre&gt;&lt;code&gt;db.createCollection(&amp;lt;nombre de la colección&amp;gt;, opciones)
 650&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Where the first parameter is the name we want to give the collection, and the following ones, all optional, can be (among others):&lt;/p&gt;
&lt;table class=&quot;&quot;&gt;
 &lt;thead&gt;
  &lt;tr&gt;
   &lt;th&gt;Field&lt;/th&gt;
   &lt;th&gt;Type&lt;/th&gt;
   &lt;th&gt;Description&lt;/th&gt;
  &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
  &lt;tr&gt;
   &lt;td&gt;&lt;code&gt;capped&lt;/code&gt;&lt;/td&gt;
   &lt;td&gt;Boolean&lt;/td&gt;
   &lt;td&gt;If &lt;code&gt;true&lt;/code&gt;, enables a capped collection: a fixed-size collection that automatically overwrites its oldest entries when it reaches its maximum size. If you specify &lt;code&gt;true&lt;/code&gt;, you must also specify the &lt;code&gt;size&lt;/code&gt; parameter.&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;&lt;code&gt;autoIndexId&lt;/code&gt;&lt;/td&gt;
   &lt;td&gt;Boolean&lt;/td&gt;
   &lt;td&gt;If &lt;code&gt;true&lt;/code&gt;, automatically creates an index on the &lt;code&gt;_id&lt;/code&gt; field. Defaults to &lt;code&gt;false&lt;/code&gt;.&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;&lt;code&gt;size&lt;/code&gt;&lt;/td&gt;
   &lt;td&gt;Number&lt;/td&gt;
   &lt;td&gt;Specifies the maximum size in bytes for a capped collection. Mandatory if &lt;code&gt;capped&lt;/code&gt; is &lt;code&gt;true&lt;/code&gt;.&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;&lt;code&gt;max&lt;/code&gt;&lt;/td&gt;
   &lt;td&gt;Number&lt;/td&gt;
   &lt;td&gt;Specifies the maximum number of documents allowed in the capped collection.&lt;/td&gt;
  &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
 755&lt;pre&gt;&lt;code&gt;&amp;gt; use test
 756switched to db test
 757
 758&amp;gt; db.createCollection(&amp;quot;mycollection&amp;quot;)
 759{ &amp;quot;ok&amp;quot; : 1 }
 760
 761&amp;gt; db.createCollection(&amp;quot;mycol&amp;quot;, {capped : true, autoIndexId: true, size: 6142800, max: 10000})
 762{
 763    &amp;quot;note&amp;quot; : &amp;quot;the autoIndexId option is deprecated and will be removed in a future release&amp;quot;,
 764    &amp;quot;ok&amp;quot; : 1
 765}
 766
 767&amp;gt; show collections
 768mycol
 769mycollection
 770&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As we saw earlier when creating the database, we can also insert a document into a collection we have not created beforehand. This is because MongoDB automatically creates a collection when you insert a document into it:&lt;/p&gt;
 772&lt;pre&gt;&lt;code&gt;&amp;gt; db.tutorialspoint.insert({&amp;quot;name&amp;quot;:&amp;quot;tutorialspoint&amp;quot;})
 773WriteResult({ &amp;quot;nInserted&amp;quot; : 1 })
 774
 775&amp;gt; show collections
 776mycol
 777mycollection
 778tutorialspoint
 779&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&quot;borrar_colección&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#borrar_colección&quot;&gt;¶&lt;/a&gt;Deleting a collection&lt;/h3&gt;
&lt;p&gt;To delete a collection, it is enough to switch to the database that contains it and run the following:&lt;/p&gt;
 782&lt;pre&gt;&lt;code&gt;db.&amp;lt;nombre_de_la_colección&amp;gt;.drop()
 783&lt;/code&gt;&lt;/pre&gt;
 784&lt;pre&gt;&lt;code&gt;&amp;gt; db.mycollection.drop()
 785true
 786
 787&amp;gt; show collections
 788mycol
 789tutorialspoint
 790&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&quot;insertar_documento&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#insertar_documento&quot;&gt;¶&lt;/a&gt;Inserting a document&lt;/h3&gt;
&lt;p&gt;To insert data into a MongoDB collection we need to use the &lt;code&gt;insert()&lt;/code&gt; or &lt;code&gt;save()&lt;/code&gt; method.&lt;/p&gt;
&lt;p&gt;Example of the &lt;code&gt;insert&lt;/code&gt; method:&lt;/p&gt;
 794&lt;pre&gt;&lt;code&gt;&amp;gt; db.colection.insert({
 795... title: 'Esto es una prueba para MDAD',
 796... description: 'MongoDB es una BD no SQL',
 797... by: 'Classmate and Me',
 798... tags: ['mongodb', 'database'],
 799... likes: 100
 800... })
WriteResult({ &amp;quot;nInserted&amp;quot; : 1 })
 802&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In this example only a single document was inserted, but we can insert as many as we want by wrapping them in a list, as follows:&lt;/p&gt;
 804&lt;pre&gt;&lt;code&gt;db.collection.insert({documento}, {documento2}, {documento3})
 805&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;There is no need to specify an ID, since Mongo itself automatically assigns one to each document, although it gives us the option to assign one ourselves through the &lt;code&gt;_id&lt;/code&gt; attribute when inserting the data.&lt;/p&gt;
&lt;p&gt;As the title of this section indicates, we can also insert with the &lt;code&gt;db.collection.save(document)&lt;/code&gt; method, which works like the &lt;code&gt;insert&lt;/code&gt; method.&lt;/p&gt;
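&lt;p&gt;For reference, a minimal sketch of the same insertion from Python with the pymongo driver (the driver is not covered in this post; we assume a local server on the default port and the test database used above):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# pip install pymongo
from pymongo import MongoClient

client = MongoClient('mongodb://localhost:27017')
colection = client['test']['colection']  # same names as in the shell above

# insert_one is pymongo's counterpart of the shell's insert()
result = colection.insert_one({
    'title': 'Esto es una prueba para MDAD',
    'by': 'Classmate and Me',
    'likes': 100,
})
print(result.inserted_id)  # the automatically assigned _id
&lt;/code&gt;&lt;/pre&gt;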
&lt;h3 id=&quot;método_&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#método_&quot;&gt;¶&lt;/a&gt;The &lt;code&gt;find()&lt;/code&gt; method&lt;/h3&gt;
&lt;p&gt;The find method in MongoDB is what allows us to query the collections of our database:&lt;/p&gt;
 810&lt;pre&gt;&lt;code&gt;db.&amp;lt;nombre_de_la_colección&amp;gt;.find()
 811&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This method will show all the documents in the collection in an unstructured way. If we append the &lt;code&gt;pretty&lt;/code&gt; function to it, they will be displayed in a «prettier» way.&lt;/p&gt;
 813&lt;pre&gt;&lt;code&gt;&amp;gt; db.colection.find()
 814{ &amp;quot;_id&amp;quot;: ObjectId(&amp;quot;5e738f0989f85a7eafdf044a&amp;quot;), &amp;quot;title&amp;quot; : &amp;quot;Esto es una prueba para MDAD&amp;quot;, &amp;quot;description&amp;quot; : &amp;quot;MongoDB es una BD no SQL&amp;quot;, &amp;quot;by&amp;quot; : &amp;quot;Classmate and Me&amp;quot;, &amp;quot;tags&amp;quot; : [ &amp;quot;mongodb&amp;quot;, &amp;quot;database&amp;quot; ], &amp;quot;likes&amp;quot; : 100 }
 815
 816&amp;gt; db.colection.find().pretty()
 817{
 818    &amp;quot;_id&amp;quot;: ObjectId(&amp;quot;5e738f0989f85a7eafdf044a&amp;quot;),
 819    &amp;quot;title&amp;quot; : &amp;quot;Esto es una prueba para MDAD&amp;quot;,
 820    &amp;quot;description&amp;quot; : &amp;quot;MongoDB es una BD no SQL&amp;quot;,
 821    &amp;quot;by&amp;quot; : &amp;quot;Classmate and Me&amp;quot;,
 822    &amp;quot;tags&amp;quot; : [
 823        &amp;quot;mongodb&amp;quot;,
 824        &amp;quot;database&amp;quot;
 825    ],
 826    &amp;quot;likes&amp;quot; : 100
 827}
 828&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The equivalents of &lt;code&gt;where&lt;/code&gt; in relational databases are:&lt;/p&gt;
&lt;table class=&quot;&quot;&gt;
 &lt;thead&gt;
  &lt;tr&gt;
   &lt;th&gt;Operation&lt;/th&gt;
   &lt;th&gt;Syntax&lt;/th&gt;
   &lt;th&gt;Example&lt;/th&gt;
   &lt;th&gt;RDBMS equivalent&lt;/th&gt;
  &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
  &lt;tr&gt;
   &lt;td&gt;Equals&lt;/td&gt;
   &lt;td&gt;&lt;code&gt;{&amp;lt;key&amp;gt;:&amp;lt;value&amp;gt;}&lt;/code&gt;&lt;/td&gt;
   &lt;td&gt;&lt;code&gt;db.mycol.find({&quot;by&quot;:&quot;Classmate and Me&quot;})&lt;/code&gt;&lt;/td&gt;
   &lt;td&gt;&lt;code&gt;where by = 'Classmate and Me'&lt;/code&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Less than&lt;/td&gt;
   &lt;td&gt;&lt;code&gt;{&amp;lt;key&amp;gt;:{$lt:&amp;lt;value&amp;gt;}}&lt;/code&gt;&lt;/td&gt;
   &lt;td&gt;&lt;code&gt;db.mycol.find({&quot;likes&quot;:{$lt:60}})&lt;/code&gt;&lt;/td&gt;
   &lt;td&gt;&lt;code&gt;where likes &amp;lt; 60&lt;/code&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Less than or equal&lt;/td&gt;
   &lt;td&gt;&lt;code&gt;{&amp;lt;key&amp;gt;:{$lte:&amp;lt;value&amp;gt;}}&lt;/code&gt;&lt;/td&gt;
   &lt;td&gt;&lt;code&gt;db.mycol.find({&quot;likes&quot;:{$lte:60}})&lt;/code&gt;&lt;/td&gt;
   &lt;td&gt;&lt;code&gt;where likes &amp;lt;= 60&lt;/code&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Greater than&lt;/td&gt;
   &lt;td&gt;&lt;code&gt;{&amp;lt;key&amp;gt;:{$gt:&amp;lt;value&amp;gt;}}&lt;/code&gt;&lt;/td&gt;
   &lt;td&gt;&lt;code&gt;db.mycol.find({&quot;likes&quot;:{$gt:60}})&lt;/code&gt;&lt;/td&gt;
   &lt;td&gt;&lt;code&gt;where likes &amp;gt; 60&lt;/code&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Greater than or equal&lt;/td&gt;
   &lt;td&gt;&lt;code&gt;{&amp;lt;key&amp;gt;:{$gte:&amp;lt;value&amp;gt;}}&lt;/code&gt;&lt;/td&gt;
   &lt;td&gt;&lt;code&gt;db.mycol.find({&quot;likes&quot;:{$gte:60}})&lt;/code&gt;&lt;/td&gt;
   &lt;td&gt;&lt;code&gt;where likes &amp;gt;= 60&lt;/code&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Not equal&lt;/td&gt;
   &lt;td&gt;&lt;code&gt;{&amp;lt;key&amp;gt;:{$ne:&amp;lt;value&amp;gt;}}&lt;/code&gt;&lt;/td&gt;
   &lt;td&gt;&lt;code&gt;db.mycol.find({&quot;likes&quot;:{$ne:60}})&lt;/code&gt;&lt;/td&gt;
   &lt;td&gt;&lt;code&gt;where likes != 60&lt;/code&gt;&lt;/td&gt;
  &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;In the &lt;code&gt;find()&lt;/code&gt; method we can add AND and OR conditions as follows:&lt;/p&gt;
 971&lt;pre&gt;&lt;code&gt;(AND)
 972&amp;gt; db.colection.find({$and:[{&amp;quot;by&amp;quot;:&amp;quot;Classmate and Me&amp;quot;},{&amp;quot;title&amp;quot;: &amp;quot;Esto es una prueba para MDAD&amp;quot;}]}).pretty()
 973
 974(OR)
 975&amp;gt; db.colection.find({$or:[{&amp;quot;by&amp;quot;:&amp;quot;Classmate and Me&amp;quot;},{&amp;quot;title&amp;quot;: &amp;quot;Esto es una prueba para MDAD&amp;quot;}]}).pretty()
 976
(Both at once)
 978&amp;gt; db.colection.find({&amp;quot;likes&amp;quot;: {$gt:10}, $or: [{&amp;quot;by&amp;quot;: &amp;quot;Classmate and Me&amp;quot;}, {&amp;quot;title&amp;quot;: &amp;quot;Esto es una prueba para MDAD&amp;quot;}]}).pretty()
 979&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The last call, with both at once, is equivalent to the following SQL query:&lt;/p&gt;
 981&lt;pre&gt;&lt;code&gt;where likes&amp;gt;10 AND (by = 'Classmate and Me' OR title = 'Esto es una prueba para MDAD')
 982&lt;/code&gt;&lt;/pre&gt;
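&lt;p&gt;The same combined filter can be sketched with pymongo, reusing the &lt;code&gt;colection&lt;/code&gt; object from the earlier sketch:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;query = {
    'likes': {'$gt': 10},               # implicit AND with the $or below
    '$or': [
        {'by': 'Classmate and Me'},
        {'title': 'Esto es una prueba para MDAD'},
    ],
}
for doc in colection.find(query):
    print(doc)
&lt;/code&gt;&lt;/pre&gt;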
&lt;h3 id=&quot;actualizar_un_documento&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#actualizar_un_documento&quot;&gt;¶&lt;/a&gt;Updating a document&lt;/h3&gt;
&lt;p&gt;In MongoDB this is done using the &lt;code&gt;update&lt;/code&gt; method:&lt;/p&gt;
 985&lt;pre&gt;&lt;code&gt;db.&amp;lt;nombre_colección&amp;gt;.update(&amp;lt;criterio_de_selección&amp;gt;, &amp;lt;dato_actualizado&amp;gt;)
 986&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For this example we are going to update the document we inserted in the previous section:&lt;/p&gt;
 988&lt;pre&gt;&lt;code&gt;&amp;gt; db.colection.update({'title':'Esto es una prueba para MDAD'},{$set:{'title':'Título actualizado'}})
 989WriteResult({ &amp;quot;nMatched&amp;quot; : 1, &amp;quot;nUpserted&amp;quot; : 0, &amp;quot;nModified&amp;quot; : 1 })
 990&amp;gt; db.colection.find().pretty()
 991{
 992    &amp;quot;_id&amp;quot;: ObjectId(&amp;quot;5e738f0989f85a7eafdf044a&amp;quot;),
 993    &amp;quot;title&amp;quot; : &amp;quot;Título actualizado&amp;quot;,
 994    &amp;quot;description&amp;quot; : &amp;quot;MongoDB es una BD no SQL&amp;quot;,
 995    &amp;quot;by&amp;quot; : &amp;quot;Classmate and Me&amp;quot;,
 996    &amp;quot;tags&amp;quot; : [
 997        &amp;quot;mongodb&amp;quot;,
 998        &amp;quot;database&amp;quot;
 999    ],
1000    &amp;quot;likes&amp;quot; : 100
1001}
1002&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;save()&lt;/code&gt; method was mentioned earlier for inserting documents, but we can also use it to replace entire documents with a new one:&lt;/p&gt;
1004&lt;pre&gt;&lt;code&gt;&amp;gt; db.&amp;lt;nombre_de_la_colección&amp;gt;.save({_id:ObjectId(), &amp;lt;nuevo_documento&amp;gt;})
1005&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With our document:&lt;/p&gt;
1007&lt;pre&gt;&lt;code&gt;&amp;gt; db.colection.save(
1008...   {
1009...     &amp;quot;_id&amp;quot;: ObjectId(&amp;quot;5e738f0989f85a7eafdf044a&amp;quot;), &amp;quot;title&amp;quot;: &amp;quot;Este es el nuevo título&amp;quot;, &amp;quot;by&amp;quot;: &amp;quot;MDAD&amp;quot;
1010...   }
1011... )
1012WriteResult({ &amp;quot;nMatched&amp;quot; : 1, &amp;quot;nUpserted&amp;quot; : 0, &amp;quot;nModified&amp;quot; : 1 })
1013
1014&amp;gt; db.colection.find()
1015{
1016    &amp;quot;_id&amp;quot;: ObjectId(&amp;quot;5e738f0989f85a7eafdf044a&amp;quot;),
1017    &amp;quot;title&amp;quot;: &amp;quot;Este es el nuevo título&amp;quot;,
1018    &amp;quot;by&amp;quot;: &amp;quot;MDAD&amp;quot;
1019}
1020&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&quot;borrar_documento&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#borrar_documento&quot;&gt;¶&lt;/a&gt;Deleting a document&lt;/h3&gt;
&lt;p&gt;To delete a document we will use the &lt;code&gt;remove()&lt;/code&gt; method as follows:&lt;/p&gt;
1023&lt;pre&gt;&lt;code&gt;db.&amp;lt;nombre_de_la_colección&amp;gt;.remove(&amp;lt;criterio_de_borrado&amp;gt;)
1024&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Considering the collection from the previous section, we will delete the only document we have:&lt;/p&gt;
1026&lt;pre&gt;&lt;code&gt;&amp;gt; db.colection.remove({'title': 'Este es el nuevo título'})
1027WriteResult({ &amp;quot;nRemoved&amp;quot; : 1 })
1028&amp;gt; db.colection.find().pretty()
1029&amp;gt;
1030&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To delete all the documents in a collection we use:&lt;/p&gt;
1032&lt;pre&gt;&lt;code&gt;db.&amp;lt;colección&amp;gt;.remove({})
1033&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&quot;indexación&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#indexación&quot;&gt;¶&lt;/a&gt;Indexing&lt;/h3&gt;
&lt;p&gt;MongoDB allows us to create indexes on attributes of a collection as follows:&lt;/p&gt;
1036&lt;pre&gt;&lt;code&gt;db.&amp;lt;colección&amp;gt;.createIndex( {&amp;lt;atributo&amp;gt;:&amp;lt;opciones&amp;gt;})
1037&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As an example:&lt;/p&gt;
1039&lt;pre&gt;&lt;code&gt;&amp;gt; db.mycol.createIndex({&amp;quot;title&amp;quot;:1})
1040{
1041    &amp;quot;createdCollectionAutomatically&amp;quot; : false,
1042    &amp;quot;numIndexesBefore&amp;quot; : 1,
1043    &amp;quot;numIndexesAfter&amp;quot; : 2,
1044    &amp;quot;ok&amp;quot; : 1
1045}
1046&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If we want more than one attribute in the index, we do it like this:&lt;/p&gt;
1048&lt;pre&gt;&lt;code&gt;&amp;gt; db.mycol.ensureIndex({&amp;quot;title&amp;quot;:1,&amp;quot;description&amp;quot;:-1})
1049&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The values each attribute can take are &lt;code&gt;+1&lt;/code&gt; for ascending or &lt;code&gt;-1&lt;/code&gt; for descending.&lt;/p&gt;
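&lt;p&gt;From pymongo, the equivalent of the compound index above would look like this (again a sketch with the driver, which the post itself does not use):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;from pymongo import ASCENDING, DESCENDING

# Compound index: title ascending, description descending.
colection.create_index([('title', ASCENDING), ('description', DESCENDING)])
&lt;/code&gt;&lt;/pre&gt;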
&lt;h3 id=&quot;referencias&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#referencias&quot;&gt;¶&lt;/a&gt;References&lt;/h3&gt;
1052&lt;ul&gt;
1053&lt;li&gt;Manual MongoDB. (n.d.). &lt;a href=&quot;https://docs.mongodb.com/manual/&quot;&gt;https://docs.mongodb.com/manual/&lt;/a&gt;&lt;/li&gt;
1054&lt;li&gt;MongoDB Tutorial – Tutorialspoint. (n.d.). – &lt;a href=&quot;https://www.tutorialspoint.com/mongodb/index.htm&quot;&gt;https://www.tutorialspoint.com/mongodb/index.htm&lt;/a&gt;&lt;/li&gt;
1055&lt;/ul&gt;
1056&lt;/main&gt;
1057&lt;/body&gt;
1058&lt;/html&gt;
</content></entry><entry><title>MongoDB: Introducción</title><id>dist/mongodb-introduction/index.html</id><updated>2020-03-19T23:00:00+00:00</updated><published>2020-03-04T23:00:00+00:00</published><summary>This is the first post in the Mongo series, in which we introduce this NoSQL database and go over its features and installation.</summary><content type="html" src="dist/mongodb-introduction/index.html">&lt;!DOCTYPE html&gt;
1060&lt;html&gt;
1061&lt;head&gt;
1062&lt;meta charset=&quot;utf-8&quot; /&gt;
1063&lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1&quot; /&gt;
1064&lt;title&gt;MongoDB: Introducción&lt;/title&gt;
1065&lt;link rel=&quot;stylesheet&quot; href=&quot;../css/style.css&quot;&gt;
1066&lt;/head&gt;
1067&lt;body&gt;
1068&lt;main&gt;
&lt;p&gt;This is the first post in the Mongo series, in which we will introduce this NoSQL database and go over its features and installation.&lt;/p&gt;
1070&lt;div class=&quot;date-created-modified&quot;&gt;Created 2020-03-05&lt;br&gt;
1071Modified 2020-03-20&lt;/div&gt;
&lt;p&gt;Other posts in this series:&lt;/p&gt;
1073&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;/blog/mdad/mongodb-introduction/&quot;&gt;MongoDB: Introducción&lt;/a&gt; (this post)&lt;/li&gt;
1075&lt;li&gt;&lt;a href=&quot;/blog/mdad/mongodb-operaciones-basicas-y-arquitectura/&quot;&gt;MongoDB: Operaciones Básicas y Arquitectura&lt;/a&gt;&lt;/li&gt;
1076&lt;/ul&gt;
&lt;p&gt;This post was made in collaboration with a classmate.&lt;/p&gt;
1078&lt;hr /&gt;
1079&lt;p&gt;&lt;img src=&quot;0LRP4__jIIkJ-0gl8j2RDzWscL1Rto-NwvdqzmYk0jmYBIVbJ78n1ZLByPgV.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
&lt;h2 class=&quot;title&quot; id=&quot;definición&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#definición&quot;&gt;¶&lt;/a&gt;Definition&lt;/h2&gt;
&lt;p&gt;MongoDB is a document-oriented database. This means that, instead of storing data in records, it stores it in documents. These documents are stored in BSON, a binary representation of JSON. One of the main differences with respect to relational databases is that it does not need to follow any schema; documents in the same collection can have different schemas.&lt;/p&gt;
&lt;p&gt;MongoDB is written in C++, although queries are performed by passing JSON objects as a parameter.&lt;/p&gt;
1083&lt;pre&gt;&lt;code&gt;{
1084        &amp;quot;_id&amp;quot; : ObjectId(&amp;quot;52f602d787945c344bb4bda5&amp;quot;),
1085        &amp;quot;name&amp;quot; : &amp;quot;Tyrion&amp;quot;,
1086        &amp;quot;hobbies&amp;quot; : [ 
1087            &amp;quot;books&amp;quot;, 
1088            &amp;quot;girls&amp;quot;, 
1089            &amp;quot;wine&amp;quot;
1090        ],
1091        &amp;quot;friends&amp;quot; : [ 
1092            {
1093                &amp;quot;name&amp;quot; : &amp;quot;Bronn&amp;quot;,
                &amp;quot;occupation&amp;quot; : &amp;quot;sellsword&amp;quot;
1095            }, 
1096            {
1097                &amp;quot;name&amp;quot; : &amp;quot;Shae&amp;quot;,
                &amp;quot;occupation&amp;quot; : &amp;quot;handmaiden&amp;quot;
1099            }
1100        ]
1101 }
1102&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id=&quot;características&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#características&quot;&gt;¶&lt;/a&gt;Features&lt;/h2&gt;
1104&lt;p&gt;&lt;img src=&quot;WxZenSwSsimGvXVu5XH4cFUd3kr3Is_arrdSZGX8Hi0Ligqgw_ZTvGSIeXZm.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;MongoDB strikes a very good balance between performance and functionality thanks to its content query system. But its main features are not limited to that: it has others that make it a favorite among developers of applications such as mobile apps, gaming, logging or e-commerce.&lt;/p&gt;
&lt;p&gt;Some of the main features of this database are:&lt;/p&gt;
1107&lt;ul&gt;
&lt;li&gt;Document-oriented storage (JSON documents with dynamic schemas).&lt;/li&gt;
&lt;li&gt;Full index support: indexes can be created on any attribute, and multiple secondary indexes can be added.&lt;/li&gt;
&lt;li&gt;Replication and high availability: mirrors across LANs and WANs.&lt;/li&gt;
&lt;li&gt;Auto-sharding: horizontal scalability without compromising functionality. It is currently limited to 20 nodes, although the goal is to reach a figure close to 1000.&lt;/li&gt;
&lt;li&gt;Rich, document-based queries.&lt;/li&gt;
&lt;li&gt;Fast in-place updates.&lt;/li&gt;
&lt;li&gt;Commercial support, training and consulting available.&lt;/li&gt;
&lt;li&gt;It can also be used for file storage, taking advantage of MongoDB’s load balancing and data replication capabilities.&lt;/li&gt;
1116&lt;/ul&gt;
&lt;p&gt;As for the architecture, we could say it is divided into three parts: databases, collections and documents (which contain the fields of each entry).&lt;/p&gt;
1118&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Database&lt;/strong&gt;: each database has its own set of files on the file system, and a single server can hold multiple databases.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Collection&lt;/strong&gt;: a set of database documents. The RDBMS equivalent of a collection is a table. Every collection exists within a single database.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Document&lt;/strong&gt;: a set of key/value pairs. Documents have dynamic schemas; the advantage of this is that the documents in a single collection do not have to share the same structure or fields.&lt;/li&gt;
1122&lt;/ul&gt;
&lt;h2 id=&quot;arista_dentro_del_teorema_cap&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#arista_dentro_del_teorema_cap&quot;&gt;¶&lt;/a&gt;Where it sits in the CAP theorem&lt;/h2&gt;
1124&lt;p&gt;&lt;img src=&quot;t73Q1t-HXfWij-Q1o5AYEnO39Kz2oyLLCdQz6lWQQPaSQWamlDMjmptAn97h.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;MongoDB is CP by default; that is, it guarantees consistency and partition (failure) tolerance. But we can also configure the consistency level by choosing the number of nodes data will be replicated to, or configure whether reads from secondary nodes are allowed (in MongoDB there is only one primary server, which is the only one that accepts insertions or modifications). If we allow reading from a secondary node through replication, we sacrifice consistency but gain availability.&lt;/p&gt;
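&lt;p&gt;As a sketch of that trade-off from the pymongo driver (the connection string and names here are illustrative; we assume a replica set called rs0):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;from pymongo import MongoClient, ReadPreference, WriteConcern

client = MongoClient('mongodb://localhost:27017/?replicaSet=rs0')

# Reads may hit a secondary: more availability, possibly stale data.
db = client.get_database('test', read_preference=ReadPreference.SECONDARY_PREFERRED)

# Writes wait for a majority of nodes: more consistency, higher latency.
movies = db.get_collection('movie', write_concern=WriteConcern(w='majority'))
&lt;/code&gt;&lt;/pre&gt;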
&lt;h2 id=&quot;descarga_e_instalación&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#descarga_e_instalación&quot;&gt;¶&lt;/a&gt;Download and installation&lt;/h2&gt;
1127&lt;h3 id=&quot;windows&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#windows&quot;&gt;¶&lt;/a&gt;Windows&lt;/h3&gt;
&lt;p&gt;Download the file from &lt;a href=&quot;https://www.mongodb.com/download-center#production&quot;&gt;https://www.mongodb.com/download-center#production&lt;/a&gt;&lt;/p&gt;
1129&lt;ol&gt;
&lt;li&gt;Double-click the &lt;code&gt;.msi&lt;/code&gt; file.&lt;/li&gt;
&lt;li&gt;The Windows installer guides you through the installation process.
If you choose the custom installation option, you can specify an installation directory.
MongoDB has no other system dependencies; you can install and run MongoDB from any folder you choose.&lt;/li&gt;
&lt;li&gt;Run the &lt;code&gt;.exe&lt;/code&gt; we just installed.&lt;/li&gt;
1135&lt;/ol&gt;
1136&lt;h3 id=&quot;linux&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#linux&quot;&gt;¶&lt;/a&gt;Linux&lt;/h3&gt;
&lt;p&gt;We open a terminal and run:&lt;/p&gt;
1138&lt;pre&gt;&lt;code&gt;sudo apt-get update
1139sudo apt install -y mongodb-org
1140&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then we check the status of the service:&lt;/p&gt;
1142&lt;pre&gt;&lt;code&gt;sudo systemctl start mongod
1143sudo systemctl status mongod
1144&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Finally, we open the database shell with the command:&lt;/p&gt;
1146&lt;pre&gt;&lt;code&gt;sudo mongo
1147&lt;/code&gt;&lt;/pre&gt;
1148&lt;h3 id=&quot;macos&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#macos&quot;&gt;¶&lt;/a&gt;macOS&lt;/h3&gt;
&lt;p&gt;We open a terminal and run:&lt;/p&gt;
1150&lt;pre&gt;&lt;code&gt;brew update
1151brew install mongodb
1152&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We start the service:&lt;/p&gt;
1154&lt;pre&gt;&lt;code&gt;brew services start mongodb
1155&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id=&quot;referencias&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#referencias&quot;&gt;¶&lt;/a&gt;References&lt;/h2&gt;
1157&lt;ul&gt;
1158&lt;li&gt;&lt;a href=&quot;https://expertoenbigdata.com/que-es-mongodb/#La_arquitectura_de_MongoDB&quot;&gt;Todo lo que debes saber sobre MongoDB&lt;/a&gt;&lt;/li&gt;
1159&lt;li&gt;&lt;a href=&quot;https://www.ecured.cu/MongoDB&quot;&gt;MongoDB – EcuRed&lt;/a&gt;&lt;/li&gt;
1160&lt;li&gt;&lt;a href=&quot;https://mappinggis.com/2014/07/mongodb-y-gis/&quot;&gt;Bases de datos NoSQL, MongoDB y GIS – MappingGIS&lt;/a&gt;&lt;/li&gt;
1161&lt;li&gt;&lt;a href=&quot;https://es.slideshare.net/maxfontana90/caractersticas-mongo-db&quot;&gt;Características MONGO DB&lt;/a&gt;&lt;/li&gt;
1162&lt;li&gt;&lt;a href=&quot;https://openwebinars.net/blog/que-es-mongodb&quot;&gt;Qué es MongoDB y características&lt;/a&gt;&lt;/li&gt;
1163&lt;li&gt;&lt;a href=&quot;https://www.genbeta.com/desarrollo/mongodb-que-es-como-funciona-y-cuando-podemos-usarlo-o-no&quot;&gt;MongoDB. Qué es, cómo funciona y cuándo podemos usarlo (o no)&lt;/a&gt;&lt;/li&gt;
1164&lt;li&gt;&lt;a href=&quot;https://docs.mongodb.com/&quot;&gt;MongoDB Documentation&lt;/a&gt;&lt;/li&gt;
1165&lt;li&gt;&lt;a href=&quot;https://www.genbeta.com/desarrollo/nosql-clasificacion-de-las-bases-de-datos-segun-el-teorema-cap&quot;&gt;NoSQL: Clasificación de las bases de datos según el teorema CAP&lt;/a&gt;&lt;/li&gt;
1166&lt;/ul&gt;
1167&lt;/main&gt;
1168&lt;/body&gt;
1169&lt;/html&gt;
</content></entry><entry><title>Cassandra: Operaciones Básicas y Arquitectura</title><id>dist/cassandra-operaciones-basicas-y-arquitectura/index.html</id><updated>2020-03-19T23:00:00+00:00</updated><published>2020-03-04T23:00:00+00:00</published><summary>This is the second post in the Cassandra series, with a brief description of the basic operations (such as insertion, retrieval and indexing) and a complete walkthrough, along with the data model and architecture.</summary><content type="html" src="dist/cassandra-operaciones-basicas-y-arquitectura/index.html">&lt;!DOCTYPE html&gt;
1171&lt;html&gt;
1172&lt;head&gt;
1173&lt;meta charset=&quot;utf-8&quot; /&gt;
1174&lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1&quot; /&gt;
1175&lt;title&gt;Cassandra: Operaciones Básicas y Arquitectura&lt;/title&gt;
1176&lt;link rel=&quot;stylesheet&quot; href=&quot;../css/style.css&quot;&gt;
1177&lt;/head&gt;
1178&lt;body&gt;
1179&lt;main&gt;
&lt;p&gt;This is the second post in the Cassandra series, with a brief description of the basic operations (such as insertion, retrieval and indexing) and a complete walkthrough, along with the data model and architecture.&lt;/p&gt;
1181&lt;div class=&quot;date-created-modified&quot;&gt;Created 2020-03-05&lt;br&gt;
1182Modified 2020-03-20&lt;/div&gt;
&lt;p&gt;Other posts in this series:&lt;/p&gt;
1184&lt;ul&gt;
1185&lt;li&gt;&lt;a href=&quot;/blog/mdad/cassandra-introduccion/&quot;&gt;Cassandra: Introducción&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;/blog/mdad/cassandra-operaciones-basicas-y-arquitectura/&quot;&gt;Cassandra: Operaciones Básicas y Arquitectura&lt;/a&gt; (this post)&lt;/li&gt;
1187&lt;/ul&gt;
&lt;p&gt;This post was made in collaboration with a classmate.&lt;/p&gt;
1189&lt;hr /&gt;
&lt;p&gt;Before we can run any queries, we must launch the database if it is not already running. To do so, in a terminal, we launch the &lt;code&gt;cassandra&lt;/code&gt; binary:&lt;/p&gt;
1191&lt;pre&gt;&lt;code&gt;$ cassandra-3.11.6/bin/cassandra
1192&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Without closing this console, we open another one where we can use the &lt;a href=&quot;https://cassandra.apache.org/doc/latest/tools/cqlsh.html&quot;&gt;CQL shell&lt;/a&gt;:&lt;/p&gt;
1194&lt;pre&gt;&lt;code&gt;$ cassandra-3.11.6/bin/cqlsh
1195Connected to Test Cluster at 127.0.0.1:9042.
1196[cqlsh 5.0.1 | Cassandra 3.11.6 | CQL spec 3.4.4 | Native protocol v4]
1197Use HELP for help.
1198cqlsh&amp;gt;
1199&lt;/code&gt;&lt;/pre&gt;
&lt;h2 class=&quot;title&quot; id=&quot;crear&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#crear&quot;&gt;¶&lt;/a&gt;Create&lt;/h2&gt;
&lt;h3 id=&quot;crear_una_base_de_datos&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#crear_una_base_de_datos&quot;&gt;¶&lt;/a&gt;Creating a database&lt;/h3&gt;
&lt;p&gt;Cassandra calls its «databases» «keyspaces».&lt;/p&gt;
1203&lt;pre&gt;&lt;code&gt;cqlsh&amp;gt; create keyspace helloworld with replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
1204&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When creating a new &lt;em&gt;keyspace&lt;/em&gt;, we indicate its name and the replication strategy to use. We use the simple strategy with a replication factor of 3.&lt;/p&gt;
&lt;h3 id=&quot;crear_una_tabla&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#crear_una_tabla&quot;&gt;¶&lt;/a&gt;Creating a table&lt;/h3&gt;
&lt;p&gt;Once we are inside a &lt;em&gt;keyspace&lt;/em&gt;, we can create tables. Let’s create a table called «greetings» with an identifier (integer), a message (text) and a language (&lt;code&gt;varchar&lt;/code&gt;).&lt;/p&gt;
1208&lt;pre&gt;&lt;code&gt;cqlsh&amp;gt; use helloworld;
1209cqlsh:helloworld&amp;gt; create table greetings(id int primary key, message text, lang varchar);
1210&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&quot;crear_una_fila&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#crear_una_fila&quot;&gt;¶&lt;/a&gt;Creating a row&lt;/h3&gt;
&lt;p&gt;Inserting new rows is similar to other database management systems, via the &lt;code&gt;INSERT&lt;/code&gt; statement:&lt;/p&gt;
1213&lt;pre&gt;&lt;code&gt;cqlsh:helloworld&amp;gt; insert into greetings(id, message, lang) values(1, '¡Bienvenido!', 'es');
1214cqlsh:helloworld&amp;gt; insert into greetings(id, message, lang) values(2, 'Welcome!', 'es');
1215&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id=&quot;leer&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#leer&quot;&gt;¶&lt;/a&gt;Read&lt;/h2&gt;
&lt;p&gt;Reading is done with the &lt;code&gt;SELECT&lt;/code&gt; statement:&lt;/p&gt;
1218&lt;pre&gt;&lt;code&gt;cqlsh:helloworld&amp;gt; select * from greetings;
1219
1220 id | lang | message
1221----+------+--------------
1222  1 |   es | ¡Bienvenido!
1223  2 |   es |     Welcome!
1224
1225(2 rows)
1226&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;cqlsh&lt;/code&gt; colors the output, which is very useful to identify the primary key and the different data types, such as text, strings or numbers:&lt;/p&gt;
1228&lt;p&gt;&lt;img src=&quot;image.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
&lt;h2 id=&quot;actualizar&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#actualizar&quot;&gt;¶&lt;/a&gt;Update&lt;/h2&gt;
&lt;p&gt;Updates are done with the &lt;code&gt;UPDATE&lt;/code&gt; statement. Let’s fix the mistake we made when we inserted «Welcome!» as Spanish:&lt;/p&gt;
1231&lt;pre&gt;&lt;code&gt;cqlsh:helloworld&amp;gt; update greetings set lang = 'en' where id = 2;
1232&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id=&quot;indexar&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#indexar&quot;&gt;¶&lt;/a&gt;Index&lt;/h2&gt;
&lt;p&gt;Indexing is done with the &lt;code&gt;CREATE INDEX&lt;/code&gt; statement; here we create a secondary index on the language column:&lt;/p&gt;
1234&lt;pre&gt;&lt;code&gt;cqlsh:helloworld&amp;gt; create index langIndex on greetings(lang);
1235&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id=&quot;borrar&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#borrar&quot;&gt;¶&lt;/a&gt;Delete&lt;/h2&gt;
&lt;p&gt;Finally, deletion is done with the &lt;code&gt;DELETE&lt;/code&gt; statement. It is possible to delete individual fields only, which sets them to null:&lt;/p&gt;
1238&lt;pre&gt;&lt;code&gt;cqlsh:helloworld&amp;gt; delete message from greetings where id = 1;
1239&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To remove the entire row, simply leave out the column:&lt;/p&gt;
1241&lt;pre&gt;&lt;code&gt;cqlsh:helloworld&amp;gt; delete from greetings where id = 1;
1242&lt;/code&gt;&lt;/pre&gt;
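&lt;p&gt;All of the statements above can also be executed from code. Here is a minimal sketch with the DataStax Python driver (not used in this post; it assumes the local node and the helloworld keyspace we created):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# pip install cassandra-driver
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])
session = cluster.connect('helloworld')  # the keyspace created earlier

# %s placeholders let the driver handle quoting for us.
session.execute(
    'insert into greetings(id, message, lang) values (%s, %s, %s)',
    (3, 'Bonjour !', 'fr'),
)
for row in session.execute('select id, lang, message from greetings'):
    print(row.id, row.lang, row.message)

cluster.shutdown()
&lt;/code&gt;&lt;/pre&gt;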
&lt;h2 id=&quot;referencias&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#referencias&quot;&gt;¶&lt;/a&gt;References&lt;/h2&gt;
1244&lt;ul&gt;
1245&lt;li&gt;&lt;a href=&quot;https://www.tutorialspoint.com/cassandra/cassandra_create_keyspace.htm&quot;&gt;tutorialspoint – Creating a Keyspace using Cqlsh&lt;/a&gt;&lt;/li&gt;
1246&lt;li&gt;&lt;a href=&quot;https://www.tutorialspoint.com/cassandra/cassandra_cql_datatypes.htm&quot;&gt;tutorialspoint – Cassandra – CQL Datatypes&lt;/a&gt;&lt;/li&gt;
1247&lt;li&gt;&lt;a href=&quot;https://www.tutorialspoint.com/cassandra/cassandra_create_table.htm&quot;&gt;tutorialspoint – Cassandra – Create Table&lt;/a&gt;&lt;/li&gt;
1248&lt;li&gt;&lt;a href=&quot;https://data-flair.training/blogs/cassandra-crud-operation/&quot;&gt;Data Flair – Cassandra Crud Operation – Create, Update, Read &amp;amp; Delete&lt;/a&gt;&lt;/li&gt;
1249&lt;li&gt;&lt;a href=&quot;https://cassandra.apache.org/doc/latest/cql/indexes.html&quot;&gt;Cassandra Documentation – Secondary Indexes&lt;/a&gt;&lt;/li&gt;
1250&lt;/ul&gt;
1251&lt;/main&gt;
1252&lt;/body&gt;
1253&lt;/html&gt;
1254 </content></entry><entry><title>Visualizing Cáceres’ OpenData</title><id>dist/visualizing-caceres-opendata/index.html</id><updated>2020-03-18T23:00:00+00:00</updated><published>2020-03-08T23:00:00+00:00</published><summary>The city of Cáceres has online services to provide </summary><content type="html" src="dist/visualizing-caceres-opendata/index.html">&lt;!DOCTYPE html&gt;
1255&lt;html&gt;
1256&lt;head&gt;
1257&lt;meta charset=&quot;utf-8&quot; /&gt;
1258&lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1&quot; /&gt;
1259&lt;title&gt;Visualizing Cáceres’ OpenData&lt;/title&gt;
1260&lt;link rel=&quot;stylesheet&quot; href=&quot;../css/style.css&quot;&gt;
1261&lt;/head&gt;
1262&lt;body&gt;
1263&lt;main&gt;
1264&lt;p&gt;The city of Cáceres has online services to provide &lt;a href=&quot;http://opendata.caceres.es/&quot;&gt;Open Data&lt;/a&gt; over a wide range of &lt;a href=&quot;http://opendata.caceres.es/dataset&quot;&gt;categories&lt;/a&gt;, all of which are very interesting to explore!&lt;/p&gt;
1265&lt;div class=&quot;date-created-modified&quot;&gt;Created 2020-03-09&lt;br&gt;
1266Modified 2020-03-19&lt;/div&gt;
1267&lt;p&gt;We have chosen two different datasets, and will explore four different ways to visualize the data.&lt;/p&gt;
1268&lt;p&gt;This post is co-authored with Classmate.&lt;/p&gt;
1269&lt;h2 class=&quot;title&quot; id=&quot;obtain_the_data&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#obtain_the_data&quot;&gt;¶&lt;/a&gt;Obtain the data&lt;/h2&gt;
1270&lt;p&gt;We are interested in the JSON format for the &lt;a href=&quot;http://opendata.caceres.es/dataset/informacion-del-padron-de-caceres-2017&quot;&gt;census in 2017&lt;/a&gt; and those for the &lt;a href=&quot;http://opendata.caceres.es/dataset/vias-urbanas-caceres&quot;&gt;vias of the city&lt;/a&gt;. This way, we can explore the population and their location in interesting ways! You may follow those two links and select the JSON format under Resources to download it.&lt;/p&gt;
1271&lt;p&gt;Why JSON? We will be using &lt;a href=&quot;https://python.org/&quot;&gt;Python&lt;/a&gt; (3.7 or above) and &lt;a href=&quot;https://matplotlib.org/&quot;&gt;matplotlib&lt;/a&gt; for quick iteration, and loading the data with &lt;a href=&quot;https://docs.python.org/3/library/json.html&quot;&gt;Python’s &lt;code&gt;json&lt;/code&gt; module&lt;/a&gt; will be trivial.&lt;/p&gt;
1272&lt;h2 id=&quot;implementation&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#implementation&quot;&gt;¶&lt;/a&gt;Implementation&lt;/h2&gt;
1273&lt;h3 id=&quot;imports_and_constants&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#imports_and_constants&quot;&gt;¶&lt;/a&gt;Imports and constants&lt;/h3&gt;
1274&lt;p&gt;We are going to need a lot of things in this code, such as &lt;code&gt;json&lt;/code&gt; to load the data, &lt;code&gt;matplotlib&lt;/code&gt; to visualize it, and other data types and type hinting for use in the code.&lt;/p&gt;
1275&lt;p&gt;We also want automatic download of the JSON files if they’re missing, so we add their URLs and download paths as constants.&lt;/p&gt;
1276&lt;pre&gt;&lt;code&gt;import json
1277import re
1278import os
1279import sys
1280import urllib.request
1281import matplotlib.pyplot as plt
1282from dataclasses import dataclass
1283from collections import namedtuple
1284from datetime import date
1285from pathlib import Path
1286from typing import Optional
1287
1288CENSUS_URL = 'http://opendata.caceres.es/GetData/GetData?dataset=om:InformacionCENSUS&amp;amp;year=2017&amp;amp;format=json'
1289VIAS_URL = 'http://opendata.caceres.es/GetData/GetData?dataset=om:Via&amp;amp;format=json'
1290
1291CENSUS_JSON = Path('data/demografia/Padrón_Cáceres_2017.json')
1292VIAS_JSON = Path('data/via/Vías_Cáceres.json')
1293&lt;/code&gt;&lt;/pre&gt;
1294&lt;h3 id=&quot;data_classes&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#data_classes&quot;&gt;¶&lt;/a&gt;Data classes&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/&quot;&gt;Parse, don’t validate&lt;/a&gt;. By defining a clear data model, we will be able to tell at a glance what information we have available. It will also be typed, so we won’t be confused as to what is what! Python 3.7 introduces &lt;a href=&quot;https://docs.python.org/3/library/dataclasses.html&quot;&gt;&lt;code&gt;dataclasses&lt;/code&gt;&lt;/a&gt;, which are a wonderful feature to define… well, data classes concisely.&lt;/p&gt;
&lt;p&gt;We also have a &lt;a href=&quot;https://docs.python.org/3/library/collections.html#collections.namedtuple&quot;&gt;&lt;code&gt;namedtuple&lt;/code&gt;&lt;/a&gt; for points, because it’s extremely common to represent them as tuples.&lt;/p&gt;
1297&lt;pre&gt;&lt;code&gt;Point = namedtuple('Point', 'long lat')
1298
1299@dataclass
1300class Census:
1301    year: int
1302    via: int
1303    count_per_year: dict
1304    count_per_city: dict
1305    count_per_gender: dict
1306    count_per_nationality: dict
1307    time_year: int
1308
1309@dataclass
1310class Via:
1311    name: str
1312    kind: str
1313    code: int
1314    history: Optional[str]
1315    old_name: Optional[str]
1316    length: Optional[float]
1317    start: Optional[Point]
1318    middle: Optional[Point]
1319    end: Optional[Point]
1320    geometry: Optional[list]
1321&lt;/code&gt;&lt;/pre&gt;
1322&lt;h3 id=&quot;helper_methods&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#helper_methods&quot;&gt;¶&lt;/a&gt;Helper methods&lt;/h3&gt;
1323&lt;p&gt;We will have a little helper method to automatically download the JSON when missing. This is just for convenience, we could as well just download it manually. But it is fun to automate things.&lt;/p&gt;
1324&lt;pre&gt;&lt;code&gt;def ensure_file(file, url):
1325    if not file.is_file():
1326        print('Downloading', file.name, 'because it was missing...', end='', flush=True, file=sys.stderr)
1327        file.parent.mkdir(parents=True, exist_ok=True)
1328        urllib.request.urlretrieve(url, file)
1329        print(' Done.', file=sys.stderr)
1330&lt;/code&gt;&lt;/pre&gt;
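&lt;p&gt;With the constants defined earlier, fetching both datasets before parsing is then a two-liner:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ensure_file(CENSUS_JSON, CENSUS_URL)
ensure_file(VIAS_JSON, VIAS_URL)
&lt;/code&gt;&lt;/pre&gt;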
1331&lt;h3 id=&quot;parsing_the_data&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#parsing_the_data&quot;&gt;¶&lt;/a&gt;Parsing the data&lt;/h3&gt;
&lt;p&gt;I will be honest, parsing Cáceres’ OpenData is a pain in the neck! The official descriptions are huge and not all that helpful; maybe they come in handy if one needs documentation for a specific field. But luckily for us, the names are pretty self-descriptive, and we can explore the data to get a feel for what we will find.&lt;/p&gt;
&lt;p&gt;We define two methods, one to iterate over &lt;code&gt;Census&lt;/code&gt; values, and another to iterate over &lt;code&gt;Via&lt;/code&gt; values. Here’s where our friend &lt;a href=&quot;https://docs.python.org/3/library/re.html&quot;&gt;&lt;code&gt;re&lt;/code&gt;&lt;/a&gt; comes in, and oh boy the format of the data…&lt;/p&gt;
1334&lt;p&gt;For example, the year and via identifier are best extracted from the URI! The information is also available in the &lt;code&gt;rdfs_label&lt;/code&gt; field, but that’s just a Spanish text! At least the URI will be more reliable… hopefully.&lt;/p&gt;
&lt;p&gt;Birth date. They could have used a JSON list, but nah, that would’ve been too simple. Instead, you are given a string separated by semicolons. The values? They could have been dictionaries with names for «year» and «count», but nah! That would’ve been too simple! Instead, you are given strings that look like «2001 (7)», and that’s the year and the count.&lt;/p&gt;
1336&lt;p&gt;The birth place? Sometimes it’s «City (Province) (Count)», but sometimes the province is missing. Gender? Semicolon-separated. And there are only two genders. I know a few people who would be upset just reading this, but it’s not my data, it’s theirs. Oh, and plenty of things are optional. That was a lot of &lt;code&gt;AttributeError: 'NoneType' object has no attribute 'foo'&lt;/code&gt; to work through!&lt;/p&gt;
1337&lt;p&gt;But as a reward, we have nicely typed data, and we no longer have to deal with this mess when trying to visualize it. For brevity, we will only be showing how to parse the census data, and not the data for the vias. This post is already long enough on its own.&lt;/p&gt;
1338&lt;pre&gt;&lt;code&gt;def iter_census(file):
1339    with file.open() as fd:
1340        data = json.load(fd)
1341
1342    for row in data['results']['bindings']:
1343        year, via = map(int, row['uri']['value'].split('/')[-1].split('-'))
1344
1345        count_per_year = {}
1346        for item in row['schema_birthDate']['value'].split(';'):
1347            y, c = map(int, re.match(r'(\d+) \((\d+)\)', item).groups())
1348            count_per_year[y] = c
1349
1350        count_per_city = {}
1351        for item in row['schema_birthPlace']['value'].split(';'):
1352            match = re.match(r'([^(]+) \(([^)]+)\) \((\d+)\)', item)
1353            if match:
1354                l, _province, c = match.groups()
1355            else:
1356                l, c = re.match(r'([^(]+) \((\d+)\)', item).groups()
1357
1358            count_per_city[l] = int(c)
1359
1360        count_per_gender = {}
1361        for item in row['foaf_gender']['value'].split(';'):
1362            g, c = re.match(r'([^(]+) \((\d+)\)', item).groups()
1363            count_per_gender[g] = int(c)
1364
1365        count_per_nationality = {}
1366        for item in row['schema_nationality']['value'].split(';'):
1367            match = re.match(r'([^(]+) \((\d+)\)', item)
1368            if match:
1369                g, c = match.groups()
1370            else:
1371                g, _alt_name, c = re.match(r'([^(]+) \(([^)]+)\) \((\d+)\)', item).groups()
1372
1373            count_per_nationality[g] = int(c)
1374        time_year = int(row['time_year']['value'])
1375
1376        yield Census(
1377            year=year,
1378            via=via,
1379            count_per_year=count_per_year,
1380            count_per_city=count_per_city,
1381            count_per_gender=count_per_gender,
1382            count_per_nationality=count_per_nationality,
1383            time_year=time_year,
1384        )
1385&lt;/code&gt;&lt;/pre&gt;
1386&lt;h2 id=&quot;visualizing_the_data&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#visualizing_the_data&quot;&gt;¶&lt;/a&gt;Visualizing the data&lt;/h2&gt;
1387&lt;p&gt;Here comes the fun part! After parsing all the desired data from the mentioned JSON files, we plotted the data in four different graphics making use of Python’s &lt;a href=&quot;https://matplotlib.org/&quot;&gt;&lt;code&gt;matplotlib&lt;/code&gt; library.&lt;/a&gt; This powerful library helps with the creation of different visualizations in Python.&lt;/p&gt;
1388&lt;h3 id=&quot;visualizing_the_genders_in_a_pie_chart&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#visualizing_the_genders_in_a_pie_chart&quot;&gt;¶&lt;/a&gt;Visualizing the genders in a pie chart&lt;/h3&gt;
&lt;p&gt;After seeing that there are only two genders in the census data, we, displeased, started working on a chart for it. The pie chart was the best option, since we only wanted to show the percentages of each gender. The result looks like this:&lt;/p&gt;
1390&lt;p&gt;&lt;img src=&quot;pie_chart.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Pretty straightforward, isn’t it? To display this wonderful graphic, we used the following code:&lt;/p&gt;
1392&lt;pre&gt;&lt;code&gt;def pie_chart(ax, data):
1393    lists = sorted(data.items())
1394
1395    x, y = zip(*lists)
1396    ax.pie(y, labels=x, autopct='%1.1f%%',
1397            shadow=True, startangle=90)
1398    ax.axis('equal')  # Equal aspect ratio ensures that pie is drawn as a circle.
1399&lt;/code&gt;&lt;/pre&gt;
1400&lt;p&gt;We pass the axis as the input parameter (later we will explain why) and the data collected from the JSON regarding the genders, which are in a dictionary with the key being the labels and the values the tally of each gender. We sort the data and with some unpacking magic we split it into two values: &lt;code&gt;x&lt;/code&gt; being the labels and &lt;code&gt;y&lt;/code&gt; the amount of each gender.&lt;/p&gt;
1401&lt;p&gt;After that we plot the pie chart with the data and labels from &lt;code&gt;y&lt;/code&gt; and &lt;code&gt;x&lt;/code&gt;, we specify that we want the percentage with one decimal place with the &lt;code&gt;autopct&lt;/code&gt; parameter, we enable shadows for the presentation, and specify the start angle at 90º.&lt;/p&gt;
1402&lt;h3 id=&quot;date_tick_labels&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#date_tick_labels&quot;&gt;¶&lt;/a&gt;Date tick labels&lt;/h3&gt;
1403&lt;p&gt;We wanted to know how many of the living people were born in each year, so we are making a date plot! The census has the year each person was born in, and using that information is easy once the data is parsed (parsing was a big part of this work). The result looks as follows:&lt;/p&gt;
1404&lt;p&gt;&lt;img src=&quot;date_tick.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
1405&lt;p&gt;How did we do this? The following code was used:&lt;/p&gt;
1406&lt;pre&gt;&lt;code&gt;def date_tick(ax, data):
1407    lists = sorted(data.items())
1408
1409    x, y = zip(*lists)
1410    x = [date(year, 1, 1) for year in x]
1411    ax.plot(x, y)
1412&lt;/code&gt;&lt;/pre&gt;
1413&lt;p&gt;Again, we pass in an axis and the data related to the birth years; we sort it and split it into two lists, the keys being the years and the values the number of people born in each year. After that, we put the years in a date format so the plot is more accurate. Finally, we plot the values into that wonderful graphic.&lt;/p&gt;
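&lt;p&gt;As a small sketch with made-up values, the transformation of the data looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;from datetime import date

data = {1950: 312, 1951: 298}         # hypothetical people born per year
lists = sorted(data.items())
x, y = zip(*lists)                    # x = (1950, 1951), y = (312, 298)
x = [date(year, 1, 1) for year in x]  # [date(1950, 1, 1), date(1951, 1, 1)]
&lt;/code&gt;&lt;/pre&gt;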
1414&lt;h3 id=&quot;stacked_bar_chart&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#stacked_bar_chart&quot;&gt;¶&lt;/a&gt;Stacked bar chart&lt;/h3&gt;
1415&lt;p&gt;We wanted to know if there was any relation between the latitudes and count per gender, so we developed the following code:&lt;/p&gt;
1416&lt;pre&gt;&lt;code&gt;def stacked_bar_chart(ax, data):
1417    labels = []
1418    males = []
1419    females = []
1420
1421    for latitude, genders in data.items():
1422        labels.append(str(latitude))
1423        males.append(genders['Male'])
1424        females.append(genders['Female'])
1425
1426    ax.bar(labels, males, label='Males')
1427    ax.bar(labels, females, bottom=males, label='Females')
1428
1429    ax.set_ylabel('Counts')
1430    ax.set_xlabel('Latitudes')
1431    ax.legend()
1432&lt;/code&gt;&lt;/pre&gt;
1433&lt;p&gt;The keys of the data dictionary are latitudes rounded to two decimals, and each value is another dictionary whose keys are the gender names and whose values are the number of people of that gender. So, a single entry of the data dictionary tells us a latitude and how many people of each gender are at that latitude.&lt;/p&gt;
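&lt;p&gt;For illustration, a couple of entries of that dictionary could look like this (the numbers are made up):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;data = {
    39.47: {'Male': 120, 'Female': 130},
    39.48: {'Male': 95, 'Female': 101},
}
&lt;/code&gt;&lt;/pre&gt;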
1434&lt;p&gt;We iterate the dictionary to extract the different latitudes and people per gender (because we know only two genders are used, we hardcode it to two lists). Then we plot both bars, drawing the &lt;code&gt;females&lt;/code&gt; list on top of the &lt;code&gt;males&lt;/code&gt; list by passing &lt;code&gt;bottom=males&lt;/code&gt;, and set the labels of each axis. The result is the following:&lt;/p&gt;
1435&lt;p&gt;&lt;img src=&quot;stacked_bar_chart-1.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
1436&lt;h3 id=&quot;scatter_plots&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#scatter_plots&quot;&gt;¶&lt;/a&gt;Scatter plots&lt;/h3&gt;
1437&lt;p&gt;This last graphic was very tricky to get right. It’s incredibly hard to find the extent of a city online! We were getting confused because some of the points were way farther from the centre of Cáceres than expected, and the city background is a bit stretched even if the coordinates appear correct. But in the end, we did a pretty good job on it.&lt;/p&gt;
1438&lt;pre&gt;&lt;code&gt;def scatter_map(ax, data):
1439    xs = []
1440    ys = []
1441    areas = []
1442    for (long, lat), count in data.items():
1443        xs.append(long)
1444        ys.append(lat)
1445        areas.append(count / 100)
1446
1447    if CACERES_MAP.is_file():
1448        ax.imshow(plt.imread(str(CACERES_MAP)), extent=CACERES_EXTENT)
1449    else:
1450        print('Note:', CACERES_MAP, 'does not exist, not showing it', file=sys.stderr)
1451
1452    ax.scatter(xs, ys, areas, alpha=0.1)
1453&lt;/code&gt;&lt;/pre&gt;
1454&lt;p&gt;This time, the keys in the data dictionary are points and the values are the total count of people at each point. We use a normal &lt;code&gt;for&lt;/code&gt; loop to create the different lists. For the areas, which control how big the drawn circles will be, we divide the count of people by some number, like &lt;code&gt;100&lt;/code&gt;, as otherwise they would be huge.&lt;/p&gt;
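&lt;p&gt;Again for illustration (the coordinates and counts here are invented), the data has the shape:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;data = {
    # (longitude, latitude): people counted at that point
    (-6.37, 39.47): 1500,
    (-6.38, 39.48): 800,
}
&lt;/code&gt;&lt;/pre&gt;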
1455&lt;p&gt;If the file of the map is present, we render it so that we can get a sense of where the points are, but if the file is missing we print a warning.&lt;/p&gt;
1456&lt;p&gt;At last, we draw the scatter plot with some low alpha value (there are a lot of overlapping points). The result is &lt;em&gt;absolutely gorgeous&lt;/em&gt;. (For some definitions of gorgeous, anyway):&lt;/p&gt;
1457&lt;p&gt;&lt;img src=&quot;scatter_map.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
1458&lt;p&gt;Just for fun, here’s what it looks like if we don’t divide the count by 100 and lower the opacity to &lt;code&gt;0.01&lt;/code&gt;:&lt;/p&gt;
1459&lt;p&gt;&lt;img src=&quot;scatter_map-2.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
1460&lt;p&gt;That’s a big solid blob, and the opacity is only set to &lt;code&gt;0.01&lt;/code&gt;!&lt;/p&gt;
1461&lt;h3 id=&quot;drawing_all_the_graphs_in_the_same_window&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#drawing_all_the_graphs_in_the_same_window&quot;&gt;¶&lt;/a&gt;Drawing all the graphs in the same window&lt;/h3&gt;
1462&lt;p&gt;To draw all the graphs in the same window instead of getting four different windows we made use of the &lt;a href=&quot;https://matplotlib.org/3.2.0/api/_as_gen/matplotlib.pyplot.subplots.html&quot;&gt;&lt;code&gt;subplots&lt;/code&gt; function&lt;/a&gt;, like this:&lt;/p&gt;
1463&lt;pre&gt;&lt;code&gt;fig, axes = plt.subplots(2, 2)
1464&lt;/code&gt;&lt;/pre&gt;
1465&lt;p&gt;This will create a two-by-two matrix of axes that we store in the &lt;code&gt;axes&lt;/code&gt; variable (fitting name!). Following this code are the different calls to the functions shown before, where we access each individual axis and pass it to them to draw on:&lt;/p&gt;
1466&lt;pre&gt;&lt;code&gt;pie_chart(axes[0, 0], genders)
1467date_tick(axes[0, 1], years)
1468stacked_bar_chart(axes[1, 0], latitudes)
1469scatter_map(axes[1, 1], positions)
1470&lt;/code&gt;&lt;/pre&gt;
1471&lt;p&gt;Lastly, we plot the different graphics:&lt;/p&gt;
1472&lt;pre&gt;&lt;code&gt;plt.show()
1473&lt;/code&gt;&lt;/pre&gt;
1474&lt;p&gt;Wrapping everything together, here’s the result:&lt;/p&gt;
1475&lt;p&gt;&lt;img src=&quot;figures-1.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
1476&lt;p&gt;The numbers in some of the graphs are a bit crammed together, but we’ll blame that on &lt;code&gt;matplotlib&lt;/code&gt;.&lt;/p&gt;
1477&lt;h2 id=&quot;closing_words&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#closing_words&quot;&gt;¶&lt;/a&gt;Closing words&lt;/h2&gt;
1478&lt;p&gt;Wow, that was a long journey! We hope this post helped you pick up some interest in data exploration; it’s such a fun world. We also offer the full download of the code below, because we know it’s quite a bit!&lt;/p&gt;
1479&lt;p&gt;Which of the graphs was your favourite? I personally like the count per date, I think it’s nice to see the growth. Let us know in the comments below!&lt;/p&gt;
1480&lt;p&gt;&lt;em&gt;download removed&lt;/em&gt;&lt;/p&gt;
1481&lt;/main&gt;
1482&lt;/body&gt;
1483&lt;/html&gt;
1484 </content></entry><entry><title>What is an algorithm?</title><id>dist/what-is-an-algorithm/index.html</id><updated>2020-03-17T23:00:00+00:00</updated><published>2020-02-24T23:00:00+00:00</published><summary>Algorithms are a sequence of instructions that can be followed to achieve </summary><content type="html" src="dist/what-is-an-algorithm/index.html">&lt;!DOCTYPE html&gt;
1485&lt;html&gt;
1486&lt;head&gt;
1487&lt;meta charset=&quot;utf-8&quot; /&gt;
1488&lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1&quot; /&gt;
1489&lt;title&gt;What is an algorithm?&lt;/title&gt;
1490&lt;link rel=&quot;stylesheet&quot; href=&quot;../css/style.css&quot;&gt;
1491&lt;/head&gt;
1492&lt;body&gt;
1493&lt;main&gt;
1494&lt;p&gt;An algorithm is a sequence of instructions that can be followed to achieve &lt;em&gt;something&lt;/em&gt;. That something can be anything, and depends entirely on your problem!&lt;/p&gt;
1495&lt;div class=&quot;date-created-modified&quot;&gt;Created 2020-02-25&lt;br&gt;
1496Modified 2020-03-18&lt;/div&gt;
1497&lt;p&gt;For example, a recipe to cook some really nice food is an algorithm: it guides you, step by step, to cook something nice. People dealing with mathematics also apply algorithms to transform their data. And computers &lt;em&gt;love&lt;/em&gt; algorithms, too!&lt;/p&gt;
1498&lt;p&gt;In reality, any computer program can basically be thought of as an algorithm. It contains a series of instructions for the computer to execute. Running them is a process that takes time, consumes input and produces output. This is also why terms like «procedure» come up when talking about them.&lt;/p&gt;
1499&lt;p&gt;Computer programs (their algorithms) are normally written in some more precise language, like Java or Python. The instructions are very clear there, which is what we need! A natural language like English is a lot harder to process, and often ambiguous. I’m sure you’ve been in arguments because the other person didn’t understand you!&lt;/p&gt;
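&lt;p&gt;To make this concrete, here is a tiny example of our own: the «find the largest number» algorithm, written as unambiguous Python instructions:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;def largest(numbers):
    # Start by assuming the first number is the largest.
    best = numbers[0]
    # Look at every remaining number...
    for n in numbers[1:]:
        # ...and remember it if it beats the current best.
        if n &gt; best:
            best = n
    # Once every number has been seen, we are done.
    return best

print(largest([3, 1, 4, 1, 5]))  # prints 5
&lt;/code&gt;&lt;/pre&gt;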
1500&lt;h2 class=&quot;title&quot; id=&quot;references&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#references&quot;&gt;¶&lt;/a&gt;References&lt;/h2&gt;
1501&lt;ul&gt;
1502&lt;li&gt;algorithm – definition and meaning: &lt;a href=&quot;https://www.wordnik.com/words/algorithm&quot;&gt;https://www.wordnik.com/words/algorithm&lt;/a&gt;&lt;/li&gt;
1503&lt;li&gt;Algorithm: &lt;a href=&quot;https://en.wikipedia.org/wiki/Algorithm&quot;&gt;https://en.wikipedia.org/wiki/Algorithm&lt;/a&gt;&lt;/li&gt;
1504&lt;li&gt;What is a «computer algorithm»?: &lt;a href=&quot;https://computer.howstuffworks.com/what-is-a-computer-algorithm.htm&quot;&gt;https://computer.howstuffworks.com/what-is-a-computer-algorithm.htm&lt;/a&gt;&lt;/li&gt;
1505&lt;/ul&gt;
1506&lt;/main&gt;
1507&lt;/body&gt;
1508&lt;/html&gt;
1509 </content></entry><entry><title>Introduction to NoSQL</title><id>dist/introduction-to-nosql/index.html</id><updated>2020-03-17T23:00:00+00:00</updated><published>2020-02-24T23:00:00+00:00</published><summary>This post will primarly focus on the talk held in the </summary><content type="html" src="dist/introduction-to-nosql/index.html">&lt;!DOCTYPE html&gt;
1510&lt;html&gt;
1511&lt;head&gt;
1512&lt;meta charset=&quot;utf-8&quot; /&gt;
1513&lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1&quot; /&gt;
1514&lt;title&gt;Introduction to NoSQL&lt;/title&gt;
1515&lt;link rel=&quot;stylesheet&quot; href=&quot;../css/style.css&quot;&gt;
1516&lt;/head&gt;
1517&lt;body&gt;
1518&lt;main&gt;
1519&lt;p&gt;This post will primarily focus on the talk held at the &lt;a href=&quot;https://youtu.be/qI_g07C_Q5I&quot;&gt;GOTO 2012 conference: Introduction to NoSQL by Martin Fowler&lt;/a&gt;. It can be seen as an informal, summarized transcript of the talk.&lt;/p&gt;
1520&lt;div class=&quot;date-created-modified&quot;&gt;Created 2020-02-25&lt;br&gt;
1521Modified 2020-03-18&lt;/div&gt;
1522&lt;hr /&gt;
1523&lt;p&gt;The relational database model is affected by the &lt;em&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Object-relational_impedance_mismatch&quot;&gt;impedance mismatch problem&lt;/a&gt;&lt;/em&gt;. This occurs because we have to match our high-level design with the separate columns and rows used by relational databases.&lt;/p&gt;
1524&lt;p&gt;Taking the in-memory objects and putting them into a relational database (which were dominant at the time) simply didn’t work out. Why? Relational databases were more than just databases: they served as an integration mechanism across applications, up to the 2000s. For 20 years!&lt;/p&gt;
1525&lt;p&gt;With the rise of the Internet and the sheer amount of traffic, databases needed to scale. Unfortunately, relational databases only scale well vertically (by upgrading a &lt;em&gt;single&lt;/em&gt; node). This is &lt;em&gt;very&lt;/em&gt; expensive, and not something many could afford.&lt;/p&gt;
1526&lt;p&gt;The problem is those pesky &lt;code&gt;JOIN&lt;/code&gt;s, and their friend &lt;code&gt;GROUP BY&lt;/code&gt;. Because our program and reality model don’t match the tables used by SQL, we have to rely on them to query the data; the model simply doesn’t map directly.&lt;/p&gt;
1527&lt;p&gt;Furthermore, graphs don’t map very well at all to relational models.&lt;/p&gt;
1528&lt;p&gt;We needed a way to scale horizontally (by increasing the &lt;em&gt;amount&lt;/em&gt; of nodes), something relational databases were not designed to do.&lt;/p&gt;
1529&lt;blockquote&gt;
1530&lt;p&gt;&lt;em&gt;We need to do something different, relational across nodes is an unnatural act&lt;/em&gt;&lt;/p&gt;
1531&lt;/blockquote&gt;
1532&lt;p&gt;This inspired the NoSQL movement.&lt;/p&gt;
1533&lt;blockquote&gt;
1534&lt;p&gt;&lt;em&gt;#nosql was only meant to be a hashtag to advertise it, but unfortunately it’s how it is called now&lt;/em&gt;&lt;/p&gt;
1535&lt;/blockquote&gt;
1536&lt;p&gt;It is not possible to define NoSQL, but we can identify some of its characteristics:&lt;/p&gt;
1537&lt;ul&gt;
1538&lt;li&gt;Non-relational&lt;/li&gt;
1539&lt;li&gt;&lt;strong&gt;Cluster-friendly&lt;/strong&gt; (this was the original spark)&lt;/li&gt;
1540&lt;li&gt;Open-source (so far, generally)&lt;/li&gt;
1541&lt;li&gt;21st century web culture&lt;/li&gt;
1542&lt;li&gt;Schema-less (easier integration or combination of several models, structure aggregation)&lt;/li&gt;
1543&lt;/ul&gt;
1544&lt;p&gt;These databases use different data models to those used by the relational model. However, it is possible to identify 4 broad chunks (some may say 3, or even 2!):&lt;/p&gt;
1545&lt;ul&gt;
1546&lt;li&gt;&lt;strong&gt;Key-value store&lt;/strong&gt;. With a certain key, you obtain the value corresponding to it. It knows nothing else, nor does it care. We say the data is opaque.&lt;/li&gt;
1547&lt;li&gt;&lt;strong&gt;Document-based&lt;/strong&gt;. It stores an entire mass of documents with complex structure, normally through the use of JSON (XML has been left behind). Then, you can ask for certain fields, structures, or portions. We say the data is transparent.&lt;/li&gt;
1548&lt;li&gt;&lt;strong&gt;Column-family&lt;/strong&gt;. There is a «row key», and within it we store multiple «column families» (columns that fit together, our aggregate). We access by row-key and column-family name.&lt;/li&gt;
1549&lt;/ul&gt;
1550&lt;p&gt;All of these kind of serve to store documents without any &lt;em&gt;explicit&lt;/em&gt; schema. Just shove in anything! This gives a lot of flexibility and ease of migration, except… that’s not really true. There’s an &lt;em&gt;implicit&lt;/em&gt; schema when querying.&lt;/p&gt;
1551&lt;p&gt;For example, a query where we may do &lt;code&gt;anOrder['price'] * anOrder['quantity']&lt;/code&gt; is assuming that &lt;code&gt;anOrder&lt;/code&gt; has both a &lt;code&gt;price&lt;/code&gt; and a &lt;code&gt;quantity&lt;/code&gt;, and that both of these can be multiplied together. «Schema-less» is a fuzzy term.&lt;/p&gt;
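&lt;p&gt;A quick Python sketch of that idea, with made-up documents:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;orders = [
    {'price': 2.5, 'quantity': 4},  # fits the implicit schema
    {'price': 9.0},                 # no 'quantity' field!
]

for an_order in orders:
    try:
        # This line assumes both fields exist and can be multiplied...
        print(an_order['price'] * an_order['quantity'])
    except KeyError as missing:
        # ...and the second document breaks that implicit schema.
        print('missing field:', missing)
&lt;/code&gt;&lt;/pre&gt;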
1552&lt;p&gt;However, it is the lack of a &lt;em&gt;fixed&lt;/em&gt; schema that gives flexibility.&lt;/p&gt;
1553&lt;p&gt;One could argue that the line between key-value and document-based is very fuzzy, and they would be right! Key-value databases often let you include additional metadata that behaves like an index, and in document-based, documents often have an identifier anyway.&lt;/p&gt;
1554&lt;p&gt;The common notion between these three types is what matters. They save an entire structure as a &lt;em&gt;unit&lt;/em&gt;. We can refer to these as «Aggregate Oriented Databases»: aggregate, because we group things when designing or modeling our systems, as opposed to relational databases that scatter the information across many tables.&lt;/p&gt;
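&lt;p&gt;As an invented illustration, a whole order can live in a single aggregate instead of being scattered across tables:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# One aggregate: the order and everything that belongs to it, saved as a unit.
order = {
    'id': 1001,
    'customer': {'name': 'Ana', 'city': 'Cáceres'},
    'lines': [
        {'product': 'pen', 'price': 2.5, 'quantity': 4},
        {'product': 'notebook', 'price': 9.0, 'quantity': 1},
    ],
}
# A relational model would scatter this across orders, customers and order_lines.
&lt;/code&gt;&lt;/pre&gt;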
1555&lt;p&gt;There exists a notable outlier, though, and that’s:&lt;/p&gt;
1556&lt;ul&gt;
1557&lt;li&gt;&lt;strong&gt;Graph&lt;/strong&gt; databases. They use a node-and-arc graph structure. They are great for moving along relationships across things. Ironically, relational databases are not very good at jumping across relationships! It is possible to perform very interesting queries in graph databases which would be really hard and costly on relational models. Unlike the aggregate-oriented databases, graphs break things into even smaller units.&lt;/li&gt;
1558&lt;/ul&gt;
1559&lt;p&gt;NoSQL is not &lt;em&gt;the&lt;/em&gt; solution. It depends on how you’ll work with your data. Do you need an aggregate database? Will you have a lot of relationships? Or would the relational model be a good fit for you?&lt;/p&gt;
1560&lt;p&gt;NoSQL, however, is a good fit for large-scale projects (data will &lt;em&gt;always&lt;/em&gt; grow) and faster development (the impedance mismatch is drastically reduced).&lt;/p&gt;
1561&lt;p&gt;Regardless of our choice, it is important to remember that NoSQL is a young technology, which is still evolving really fast (SQL has been stable for &lt;em&gt;decades&lt;/em&gt;). But &lt;em&gt;polyglot persistence&lt;/em&gt; is what matters. One must know the alternatives, and be able to choose.&lt;/p&gt;
1562&lt;hr /&gt;
1563&lt;p&gt;Relational databases have the well-known ACID properties: Atomicity, Consistency, Isolation and Durability.&lt;/p&gt;
1564&lt;p&gt;NoSQL databases (except graph-based ones!) are about being BASE instead: Basically Available, Soft state, Eventual consistency.&lt;/p&gt;
1565&lt;p&gt;SQL needs transactions because we don’t want to perform a read while we’re only half-way done with a write! The readers and writers are the problem, and ensuring consistency results in a performance hit, even if the risk is low (two writers are extremely rare but it still must be handled).&lt;/p&gt;
1566&lt;p&gt;NoSQL on the other hand doesn’t need ACID because the aggregate &lt;em&gt;is&lt;/em&gt; the transaction boundary. Even before NoSQL itself existed! Any update is atomic by nature. When updating many documents it &lt;em&gt;is&lt;/em&gt; a problem, but this is very rare.&lt;/p&gt;
1567&lt;p&gt;We have to distinguish between logical and replication consistency. If a conflict occurs during an update, it must be resolved to preserve the logical consistency. Replication consistency, on the other hand, is preserved when distributing the data across many machines, for example during sharding or copies.&lt;/p&gt;
1568&lt;p&gt;Replication buys us more processing power and resilience (at the cost of more storage) in case some of the nodes die. But what happens if what dies is the communication across the nodes? We could drop the requests and preserve the consistency, or accept the risk to continue and instead preserve the availability.&lt;/p&gt;
1569&lt;p&gt;The choice on whether trading consistency for availability is acceptable or not depends on the domain rules. It is the domain’s choice, the business people will choose. If you’re Amazon, you always want to be able to sell, but if you’re a bank, you probably don’t want your clients to have negative numbers in their account!&lt;/p&gt;
1570&lt;p&gt;Regardless of what we do, in a distributed system the CAP theorem always applies: Consistency, Availability, Partition tolerance. It is &lt;strong&gt;impossible&lt;/strong&gt; to guarantee all three at 100%. Most of the time everything works, but it is mathematically impossible to guarantee all three at 100%.&lt;/p&gt;
1571&lt;p&gt;A database has to choose what to give up at some point. When designing a distributed system, this must be considered. Normally, the choice is made between consistency or response time.&lt;/p&gt;
1572&lt;h2 class=&quot;title&quot; id=&quot;further_reading&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#further_reading&quot;&gt;¶&lt;/a&gt;Further reading&lt;/h2&gt;
1573&lt;ul&gt;
1574&lt;li&gt;&lt;a href=&quot;https://www.martinfowler.com/articles/nosql-intro-original.pdf&quot;&gt;The future is: &lt;del&gt;NoSQL Databases&lt;/del&gt; Polyglot Persistence&lt;/a&gt;&lt;/li&gt;
1575&lt;li&gt;&lt;a href=&quot;https://www.thoughtworks.com/insights/blog/nosql-databases-overview&quot;&gt;NoSQL Databases: An Overview&lt;/a&gt;&lt;/li&gt;
1576&lt;/ul&gt;
1577&lt;/main&gt;
1578&lt;/body&gt;
1579&lt;/html&gt;
1580 </content></entry><entry><title>Big Data</title><id>dist/big-data/index.html</id><updated>2020-03-17T23:00:00+00:00</updated><published>2020-02-24T23:00:00+00:00</published><summary>Big Data sounds like a buzzword you may be hearing everywhere, but it’s actually here to stay!</summary><content type="html" src="dist/big-data/index.html">&lt;!DOCTYPE html&gt;
1581&lt;html&gt;
1582&lt;head&gt;
1583&lt;meta charset=&quot;utf-8&quot; /&gt;
1584&lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1&quot; /&gt;
1585&lt;title&gt;Big Data&lt;/title&gt;
1586&lt;link rel=&quot;stylesheet&quot; href=&quot;../css/style.css&quot;&gt;
1587&lt;/head&gt;
1588&lt;body&gt;
1589&lt;main&gt;
1590&lt;p&gt;Big Data sounds like a buzzword you may be hearing everywhere, but it’s actually here to stay!&lt;/p&gt;
1591&lt;div class=&quot;date-created-modified&quot;&gt;Created 2020-02-25&lt;br&gt;
1592Modified 2020-03-18&lt;/div&gt;
1593&lt;h2 class=&quot;title&quot; id=&quot;what_is_big_data_&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#what_is_big_data_&quot;&gt;¶&lt;/a&gt;What is Big Data?&lt;/h2&gt;
1594&lt;p&gt;And why is it so important? We use this term to refer to the large amount of data available, rapidly growing every day, that cannot be processed in conventional ways. It’s not only about the amount, it’s also about the variety and rate of growth.&lt;/p&gt;
1595&lt;p&gt;Thanks to technological advancements, there are new ways to process this insane amount of data, which would otherwise be too costly for processing in traditional database systems.&lt;/p&gt;
1596&lt;h2 id=&quot;where_does_data_come_from_&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#where_does_data_come_from_&quot;&gt;¶&lt;/a&gt;Where does data come from?&lt;/h2&gt;
1597&lt;p&gt;It can be pictures in your phone, industry transactions, messages in social networks, a sensor in the mountains. It can come from anywhere, which makes the data very varied.&lt;/p&gt;
1598&lt;p&gt;Just to give some numbers, over 12TB of data is generated on Twitter &lt;em&gt;daily&lt;/em&gt;. If you purchase a laptop today (as of March 2020), the disk will be roughly 1TB, maybe 2TB. Twitter would fill six of those 2TB drives every day!&lt;/p&gt;
1599&lt;p&gt;What about Facebook? It is estimated they store around 100PB of photos and videos. That would be 50000 laptop disks. Not a small number. And let’s not talk about worldwide network traffic…&lt;/p&gt;
1600&lt;h2 id=&quot;what_data_can_be_exploited_&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#what_data_can_be_exploited_&quot;&gt;¶&lt;/a&gt;What data can be exploited?&lt;/h2&gt;
1601&lt;p&gt;So, we have a lot of data. Should we attempt and process everything? We can distinguish several categories.&lt;/p&gt;
1602&lt;ul&gt;
1603&lt;li&gt;&lt;strong&gt;Web and Social Media&lt;/strong&gt;: Clickstream Data, Twitter Feeds, Facebook Postings, Web content… Stuff coming from social networks.&lt;/li&gt;
1604&lt;li&gt;&lt;strong&gt;Biometrics&lt;/strong&gt;: Facial Recognition, Genetics… Any kind of personal recognition.&lt;/li&gt;
1605&lt;li&gt;&lt;strong&gt;Machine-to-Machine&lt;/strong&gt;: Utility Smart Meter Readings, RFID Readings, Oil Rig Sensor Readings, GPS Signals… Any sensor shared with other machines.&lt;/li&gt;
1606&lt;li&gt;&lt;strong&gt;Human Generated&lt;/strong&gt;: Call Center Voice Recordings, Email, Electronic Medical Records… Even the voice notes one sends over WhatsApp count.&lt;/li&gt;
1607&lt;li&gt;&lt;strong&gt;Big Transaction Data&lt;/strong&gt;: Healthcare Claims, Telecommunications Call Detail Records, Utility Billing Records… Financial transactions.&lt;/li&gt;
1608&lt;/ul&gt;
1609&lt;p&gt;But asking what to process is asking the wrong question. Instead, one should think about «What problem am I trying to solve?».&lt;/p&gt;
1610&lt;h2 id=&quot;how_to_exploit_this_data_&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#how_to_exploit_this_data_&quot;&gt;¶&lt;/a&gt;How to exploit this data?&lt;/h2&gt;
1611&lt;p&gt;What are some of the ways to deal with this data? If the problem fits the MapReduce paradigm, then Hadoop is a great option! Hadoop is inspired by Google’s MapReduce and Google File System (GFS) papers, achieves great parallelism across the nodes of a cluster, and has the following components:&lt;/p&gt;
1612&lt;ul&gt;
1613&lt;li&gt;&lt;strong&gt;Hadoop Distributed File System&lt;/strong&gt;. Data is divided into smaller «blocks» and distributed across the cluster, which makes it possible to execute the mapping and reduction in smaller subsets, and makes it possible to scale horizontally.&lt;/li&gt;
1614&lt;li&gt;&lt;strong&gt;Hadoop MapReduce&lt;/strong&gt;. First, a data set is «mapped» into a different set, and data becomes a list of tuples (key, value). The «reduce» step works on these tuples and combines them into a smaller subset (see the toy sketch after this list).&lt;/li&gt;
1615&lt;li&gt;&lt;strong&gt;Hadoop Common&lt;/strong&gt;. These are a set of libraries that ease working with Hadoop.&lt;/li&gt;
1616&lt;/ul&gt;
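&lt;p&gt;To make the map and reduce steps concrete, here is a toy word-count sketch in plain Python (our own illustration, not actual Hadoop code):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;from collections import defaultdict

documents = ['big data is big', 'data is everywhere']  # toy input

# Map: turn each document into a list of (key, value) tuples.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle: group the values by key.
groups = defaultdict(list)
for word, one in mapped:
    groups[word].append(one)

# Reduce: combine each group into a smaller subset.
reduced = {word: sum(ones) for word, ones in groups.items()}
print(reduced)  # {'big': 2, 'data': 2, 'is': 2, 'everywhere': 1}
&lt;/code&gt;&lt;/pre&gt;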
1617&lt;h2 id=&quot;key_insights&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#key_insights&quot;&gt;¶&lt;/a&gt;Key insights&lt;/h2&gt;
1618&lt;p&gt;Big Data is a field whose goal is to extract information from very large sets of data, and find ways to do so. To summarize its different dimensions, we can refer to what’s known as «the Four V’s of Big Data»:&lt;/p&gt;
1619&lt;ul&gt;
1620&lt;li&gt;&lt;strong&gt;Volume&lt;/strong&gt;. Really large quantities.&lt;/li&gt;
1621&lt;li&gt;&lt;strong&gt;Velocity&lt;/strong&gt;. Processing response time matters!&lt;/li&gt;
1622&lt;li&gt;&lt;strong&gt;Variety&lt;/strong&gt;. Data comes from plenty of sources.&lt;/li&gt;
1623&lt;li&gt;&lt;strong&gt;Veracity.&lt;/strong&gt; Can we trust all sources, though?&lt;/li&gt;
1624&lt;/ul&gt;
1625&lt;p&gt;Some sources talk about a fifth V for &lt;strong&gt;Value&lt;/strong&gt;; because processing this data is costly, it is important we can get value out of it.&lt;/p&gt;
1626&lt;p&gt;…And some other sources go as high as seven V’s, including &lt;strong&gt;Viability&lt;/strong&gt; and &lt;strong&gt;Visualization&lt;/strong&gt;. Computers can’t make decisions on their own (yet); a human has to. And they can only do so if they’re presented the data (and can visualize it) in a meaningful way.&lt;/p&gt;
1627&lt;h2 id=&quot;infographics&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#infographics&quot;&gt;¶&lt;/a&gt;Infographics&lt;/h2&gt;
1628&lt;p&gt;Let’s see some pictures, we all love pictures:&lt;/p&gt;
1629&lt;p&gt;&lt;img src=&quot;4-Vs-of-big-data.jpg&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
1631&lt;h2 id=&quot;references&quot;&gt;&lt;a class=&quot;anchor&quot; href=&quot;#references&quot;&gt;¶&lt;/a&gt;References&lt;/h2&gt;
1632&lt;ul&gt;
1633&lt;li&gt;¿Qué es Big Data? – &lt;a href=&quot;https://www.ibm.com/developerworks/ssa/local/im/que-es-big-data/&quot;&gt;https://www.ibm.com/developerworks/ssa/local/im/que-es-big-data/&lt;/a&gt;&lt;/li&gt;
1634&lt;li&gt;The Four V’s of Big Data – &lt;a href=&quot;https://www.ibmbigdatahub.com/infographic/four-vs-big-data&quot;&gt;https://www.ibmbigdatahub.com/infographic/four-vs-big-data&lt;/a&gt;&lt;/li&gt;
1635&lt;li&gt;Big data – &lt;a href=&quot;https://en.wikipedia.org/wiki/Big_data&quot;&gt;https://en.wikipedia.org/wiki/Big_data&lt;/a&gt;&lt;/li&gt;
1636&lt;li&gt;Las 5 V’s del Big Data – &lt;a href=&quot;https://www.quanticsolutions.es/big-data/las-5-vs-del-big-data&quot;&gt;https://www.quanticsolutions.es/big-data/las-5-vs-del-big-data&lt;/a&gt;&lt;/li&gt;
1637&lt;li&gt;Las 7 V del Big data: Características más importantes – &lt;a href=&quot;https://www.iic.uam.es/innovacion/big-data-caracteristicas-mas-importantes-7-v/#viabilidad&quot;&gt;https://www.iic.uam.es/innovacion/big-data-caracteristicas-mas-importantes-7-v/&lt;/a&gt;&lt;/li&gt;
1638&lt;/ul&gt;
1639&lt;/main&gt;
1640&lt;/body&gt;
1641&lt;/html&gt;
1642 </content></entry></feed>