Wednesday, March 11, 2020

MapReduce example to join and convert row-based structured data into a hierarchical pattern like JSON or XML


The structured-to-hierarchical pattern is used to convert the format of data. It applies when you need to transform row-based data into a hierarchical format such as JSON or XML. If you have multiple data sets that require you to first perform a join, an expensive operation, before you can extract the data you actually need, you can take advantage of a hierarchical representation to avoid repeating that join, and this pattern is how you build it. So there are basically two scenarios in which the pattern is applicable: your data sources are linked by some set of foreign keys, or your data is structured and row-based.
Steps to achieve hierarchical data
1. If you wish to combine multiple data sources into a hierarchical data structure, the Hadoop class MultipleInputs from org.apache.hadoop.mapreduce.lib.input is extremely valuable. MultipleInputs allows you to specify a different input path and a different mapper class for each input. The configuration is done in the driver. If you are loading data from only one source in this pattern, you don't need this step.
2. The mappers load the data and parse the records into one cohesive format so that your work in the reducers is easier. The output key should reflect how you want to identify the root of each hierarchical record.
3. The reducer receives the data from all the different sources key by key. All of the data for a particular grouping is provided in one iterator, so all that is left to do is build the hierarchical data structure from the list of data items.
Use cases for the pattern
1. Pre-joining data – Data arrives in disjointed structured data sets, and for analytical purposes it is easier to bring the data together into more complex objects. By doing this, you are setting up your data to take advantage of the NoSQL model of analysis.
2. Preparing data for HBase or MongoDB – HBase is a natural fit for hierarchical data, so you can use this method to bring the data together in preparation for loading it into HBase or MongoDB; a rough HBase write is sketched below.
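As a rough illustration of the second use case, the assembled record could be written to HBase with a Put. This is only a minimal sketch under assumptions that are not part of this post: a pre-created table named "line_data" with a column family "info", and the device/port identifier from the example reused as the row key.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseWriteSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // "line_data" and "info" are hypothetical; create them in HBase before running this.
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("line_data"))) {
            // Row key: the same device + port identifier used as the join key in this post
            Put put = new Put(Bytes.toBytes("Test_Device_id_345_/shelf=0/slot=1/port=0"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("supplier_id"), Bytes.toBytes("Test_Vendor_Id"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("version_id"), Bytes.toBytes("A2pv6C038m"));
            table.put(put);
        }
    }
}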
Achieving the same using Pig
The COGROUP method in Pig does a great job of bringing data together while preserving the original structure. However, using the predefined keywords to do any sort of real analysis on a complex record is more challenging out of the box. For this, a user-defined function is the right way to go; a sketch of such a UDF follows the script below.
Employee_Info = LOAD '/input/data/employee' USING PigStorage(',');
Department_Info = LOAD '/input/data/department' USING PigStorage(',');
grouped_data = COGROUP Employee_Info BY $1, Department_Info BY $1;
analyzed_data = FOREACH grouped_data GENERATE udfFunction(group, $1, $2);
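A Pig UDF for this step is typically written in Java by extending EvalFunc. The sketch below is only illustrative: the class name and its output are assumptions, and the UDF would have to be packaged in a jar, registered with REGISTER, and referenced by its full class name in place of udfFunction.
import java.io.IOException;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.DataBag;
import org.apache.pig.data.Tuple;

// Hypothetical UDF: receives the COGROUP key plus the two cogrouped bags and
// condenses them into one summary string per group.
public class CogroupToRecord extends EvalFunc<String> {
    @Override
    public String exec(Tuple input) throws IOException {
        if (input == null || input.size() < 3) {
            return null;
        }
        Object groupKey = input.get(0);               // the COGROUP key
        DataBag employees = (DataBag) input.get(1);   // bag of Employee_Info tuples
        DataBag departments = (DataBag) input.get(2); // bag of Department_Info tuples
        return groupKey + ": employees=" + employees.size()
                + ", departments=" + departments.size();
    }
}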
Performance Considerations
There are two performance concerns to pay attention to when using this pattern. First, you need to be aware of how much data is being sent to the reducers from the mappers; second, you need to be aware of the memory footprint of the object the reducer builds. The next major concern is the possibility of hot spots in the data that could result in an obscenely large record. With large data sets, it is conceivable that a particular output record will have a lot of data associated with it. Imagine a specific key has millions of records: if you are building some sort of XML or JSON object, all of that data may be held in memory at one point before the object is written out. This can blow out the heap of the Java Virtual Machine, which obviously should be avoided. A simple defensive measure is sketched below.
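One simple mitigation, not part of the original example, is to cap how many values the reducer will fold into a single hierarchical record and count the overflow instead of buffering it. In this sketch the limit of 100000 and the counter names are arbitrary, illustrative choices.
import java.io.IOException;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Illustrative reducer showing a per-key size guard against hot-spot keys.
public class BoundedRecordReducer extends Reducer<Text, Text, NullWritable, Text> {
    private static final int MAX_VALUES_PER_KEY = 100000; // assumed, tunable limit

    @Override
    public void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        int buffered = 0;
        StringBuilder record = new StringBuilder();
        for (Text value : values) {
            if (buffered >= MAX_VALUES_PER_KEY) {
                // Count the overflow instead of buffering it, so hot-spot keys show up in the job counters.
                context.getCounter("StructuredToHierarchical", "TRUNCATED_VALUES").increment(1);
                continue;
            }
            buffered++;
            record.append(value.toString()).append('\n'); // bounded accumulation
        }
        context.write(NullWritable.get(), new Text(record.toString()));
    }
}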
Problem To Solve
Let's take a Telecom domain example. We take two different data sets: one is hlog data, which helps in determining the speed of the line, and the second is the line management data, which we will call dsl data. Given the two data sets, which are in a row-based structure, join them to fetch all the required fields and convert the data into a hierarchical JSON format. We fetch the supplier_id, system_id, vendor_id and version_id from the dsl data set and the hlog from the hlog data set, and we use the device and port id as the foreign key to join the two data sets. Below is the expected output.
{
  "Test_Device_id_345_/shelf=0/slot=1/port=0": {
    "supplier_id": "Test_Vendor_Id",
    "system_id": "B5004244434D0000",
    "hlog": "03ff03ff03ff03ff03ff03ff03ff03ff03ff03ff03ff03ff03ff03ff03ff03ff03ff03ff03ff03ff03ff03ff03ff03ff03ff03ff0af01b101b201b201b301b401b601b701b701b801b801ba01b901ba01b901b901ba",
    "vendor_id": "B5004244434D0000",
    "version_id": "A2pv6C038m"
  }
}
Sample input data

A sample hlog input data file is attached: hblog.csv
A sample dsl input data file is attached: Samle dsl.csv
Driver Code
Let's start with the driver code. As we have two data sets with different representations, we need to parse the two inputs differently. These cases are handled elegantly by the MultipleInputs class, which allows you to specify the InputFormat and Mapper to use on a per-path basis. For example, since we have hlog data that we want to combine with the dsl data for our analysis, we might set up the input as follows:
MultipleInputs.addInputPath(sampleJob, new Path(args[0]), TextInputFormat.class,
        SpeedHlogDeltaDataMapper.class);
MultipleInputs.addInputPath(sampleJob, new Path(args[1]), TextInputFormat.class,
        DsllDataMapper.class);
This code replaces the usual calls to FileInputFormat.addInputPath() and job.setMapperClass(). The important thing is that the map outputs have the same types, since the reducers see the aggregated map outputs and are not aware of the different mappers used to produce them.
The MultipleInputs class has an overloaded version of addInputPath() that doesn't take a mapper. This is useful when you only have one mapper (set using the Job's setMapperClass() method) but multiple input formats; a brief usage sketch follows the signature below.
public static void addInputPath(Job job, Path path, Class<? extends InputFormat> inputFormatClass)
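Here is a minimal, illustrative use of that overload. The paths, the SequenceFileInputFormat choice, and the CommonDataMapper class are assumptions made up for this sketch and are not part of the example in this post.
// One mapper parses both inputs; only the input formats differ per path.
MultipleInputs.addInputPath(sampleJob, new Path("/input/data/text"), TextInputFormat.class);
MultipleInputs.addInputPath(sampleJob, new Path("/input/data/seq"), SequenceFileInputFormat.class);
sampleJob.setMapperClass(CommonDataMapper.class); // hypothetical mapper able to parse both record shapes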
import java.io.File;
import java.io.IOException;
import org.apache.commons.io.FileUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import com.hadoop.design.summarization.blog.ConfigurationFactory;

public class DriverStructuredToHierarchical {

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {

        /*
         * I have used my local paths in Windows; change the paths as per your
         * local machine
         */
        args = new String[] { "Replace this string with Input Path location for hlog data",
                "Replace this string with Input Path location for dsl data",
                "Replace this string with output Path location" };

        /* delete the output directory before running the job */
        FileUtils.deleteDirectory(new File(args[2]));

        /* set the hadoop system parameter */
        System.setProperty("hadoop.home.dir", "Replace this string with hadoop home directory location");

        if (args.length != 3) {
            System.err.println("Please specify the input and output path");
            System.exit(-1);
        }

        Configuration conf = ConfigurationFactory.getInstance();
        Job sampleJob = Job.getInstance(conf);
        sampleJob.setJarByClass(DriverStructuredToHierarchical.class);
        TextOutputFormat.setOutputPath(sampleJob, new Path(args[2]));
        sampleJob.setOutputKeyClass(Text.class);
        sampleJob.setOutputValueClass(Text.class);
        sampleJob.setReducerClass(SpeedHlogDslJoinReducer.class);
        /* one mapper per input path, both emitting Text/Text pairs */
        MultipleInputs.addInputPath(sampleJob, new Path(args[0]), TextInputFormat.class,
                SpeedHlogDeltaDataMapper.class);
        MultipleInputs.addInputPath(sampleJob, new Path(args[1]), TextInputFormat.class, DsllDataMapper.class);
        sampleJob.getConfiguration().set("validCount", "1");
        sampleJob.getConfiguration().set("totalCount", "1");
        @SuppressWarnings("unused")
        int code = sampleJob.waitForCompletion(true) ? 0 : 1;
    }
}
Mapper Code
In this case, there are two mapper classes, one for the hlog data, i.e. SpeedHlogDeltaDataMapper, and one for the dsl data, i.e. DsllDataMapper. In both, we extract the device and port id, which are at indexes 5 and 6 in the hlog data and 2 and 3 in the dsl data, to use as the output key. We output the input value prepended with the character 'H' for an hlog record or 'D' for a dsl record, so we know which data set the record came from during the reduce phase.
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class SpeedHlogDeltaDataMapper extends Mapper<Object, Text, Text, Text> {

    private Text outkey = new Text();
    private Text outvalue = new Text();
    public static final String COMMA = ",";

    @Override
    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
        String[] values = value.toString().split(",", -1);
        // device id and port id together form the join key
        String neid = values[5];
        String portid = values[6];
        outkey.set(neid + portid);
        // prefix with 'H' so the reducer knows this record came from the hlog data set
        outvalue.set("H" + values[4] + COMMA + values[5] + COMMA + values[6] + COMMA
                + values[7] + COMMA + values[8]);
        context.write(outkey, outvalue);
    }
}
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class DsllDataMapper extends Mapper<Object, Text, Text, Text> {

    private Text outkey = new Text();
    private Text outvalue = new Text();
    public static final String COMMA = ",";

    @Override
    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
        String data = value.toString();
        String[] field = data.split(",", -1);
        // need at least 64 fields because field[63] is read below
        if (field.length > 63) {
            // device id and port id together form the join key
            String neid = field[2];
            String portid = field[3];
            outkey.set(neid + portid);
            // prefix with 'D' so the reducer knows this record came from the dsl data set
            outvalue.set("D" + field[0] + COMMA + field[5] + COMMA + field[7] + COMMA + field[62] + COMMA
                    + field[63]);
            context.write(outkey, outvalue);
        }
    }
}
Json Builder
We are using the JSON API from the org.json library, which can be pulled in from Maven using the dependency below.
<dependency>
    <groupId>org.json</groupId>
    <artifactId>json</artifactId>
    <version>20170516</version>
</dependency>
We will use the code below to convert the final data into JSON format before writing the output in the reducer. We pass in the parent key, which is a combination of the device and the port id, and a map holding the key-value pairs to frame in the JSON.
import java.util.Map;
import org.json.JSONObject;

public class JsonBuilder {

    public String buildJson(String parentKey, Map<String, String> jsonMap) {
        JSONObject jsonString = new JSONObject();
        for (Map.Entry<String, String> entry : jsonMap.entrySet()) {
            jsonString.put(entry.getKey(), entry.getValue());
        }
        // nest the field object under the parent (device + port) key
        return new JSONObject().put(parentKey, jsonString).toString();
    }
}
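A quick, illustrative usage of this builder; the field values are taken from the expected output shown earlier, and the example class name is made up.
import java.util.HashMap;
import java.util.Map;

public class JsonBuilderExample {
    public static void main(String[] args) {
        Map<String, String> fields = new HashMap<String, String>();
        fields.put("supplier_id", "Test_Vendor_Id");
        fields.put("vendor_id", "B5004244434D0000");
        fields.put("version_id", "A2pv6C038m");
        // Prints a JSON string with the device/port id as the parent key
        String json = new JsonBuilder().buildJson("Test_Device_id_345_/shelf=0/slot=1/port=0", fields);
        System.out.println(json);
    }
}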
Reducer code
The reducer builds the hierarchical JSON object using the code above. All the values are iterated to get the required fields. We know which record is which by the flag we added to the value; these flags are removed before the values are added to the respective lists. Then we check that there is a mapping between the hlog and the dsl data by checking the corresponding lists. If a mapping is found, we retrieve the hlog from the hlog data and the vendor_id, system_id, version_id and supplier_id from the dsl data. Finally, using our JsonBuilder, we convert the data into JSON.
import java.io.IOException;
import java.text.ParseException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class SpeedHlogDslJoinReducer extends Reducer<Text, Text, NullWritable, Text> {

    private ArrayList<Text> listH = new ArrayList<Text>();
    private ArrayList<Text> listD = new ArrayList<Text>();

    @Override
    public void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
        listH.clear();
        listD.clear();
        // separate the records by the flag added in the mappers, stripping the flag character
        for (Text text : values) {
            if (text.charAt(0) == 'H') {
                listH.add(new Text(text.toString().substring(1)));
            } else if (text.charAt(0) == 'D') {
                listD.add(new Text(text.toString().substring(1)));
            }
        }
        try {
            executeConversionLogic(context);
        } catch (ParseException e) {
            throw new IOException("ParseException wrapped in IOException: " + e.getMessage());
        }
    }

    private void executeConversionLogic(Context context) throws IOException, InterruptedException, ParseException {
        // only emit a record when the key has data from both sources, i.e. the join succeeded
        if (!listH.isEmpty() && !listD.isEmpty()) {
            for (Text hlogText : listH) {
                String[] hlog = hlogText.toString().split(",");
                for (Text dslText : listD) {
                    String[] dsl = dslText.toString().split(",", -1);
                    Map<String, String> maps = new HashMap<String, String>();
                    maps.put("vendor_id", dsl[2]);
                    maps.put("system_id", dsl[3]);
                    maps.put("version_id", dsl[4]);
                    maps.put("supplier_id", dsl[0]);
                    maps.put("hlog", hlog[4]);
                    JsonBuilder jsonBuilder = new JsonBuilder();
                    // parent key is device id + "_" + port id
                    String json = jsonBuilder.buildJson(hlog[1] + "_" + hlog[2], maps);
                    context.write(NullWritable.get(), new Text(json));
                    // only the first matching dsl record is needed per hlog record
                    break;
                }
            }
        }
    }
}


Credit : map-reduce-example-join-convert-row-based-structured-data-hierarchical-pattern-like-json-xml/
