Evaluated Exercise - HADOOP, PIG, MAHOUT and SPARK

Task 1: Apache PIG - Analyzing LogFiles

Dataset: RStudio CRAN Log-Files

Instructions: Include all commands in your documentation (use a Word document and convert it to PDF later). Also add screenshots of the various steps and of the results. Use FOREACH statements to reduce the dataset (keep only the variables that are necessary to solve a task).

1. Import RStudio Log Files from one month (e.g., November 2017) into HDFS

a. Download from RStudio CRAN log files page

b. Unzip the files

c. Import the complete directory into HDFS into a folder RLogFiles
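
As an illustration, the import could look like the following shell sketch (the URL pattern is documented on the RStudio CRAN logs page; the exact HDFS paths are assumptions):

```shell
# Download and unzip one day of logs; repeat (or loop) for the whole month
wget http://cran-logs.rstudio.com/2017/2017-11-01.csv.gz
gunzip 2017-11-01.csv.gz

# Create the target folder in HDFS and import the complete directory
hdfs dfs -mkdir -p RLogFiles
hdfs dfs -put 2017-11-*.csv RLogFiles
```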

2. Pig Latin: Top-100-packages (by operating system)

a. Load log-file of one day (e.g., 1st of November 2017)

b. Dump the first 10 entries on screen (attach a screenshot in your report) to check that the load worked

c. Count the number of occurrences of the different packages;

d. Count the number of occurrences of the different packages by operating system;

e. Store the results of both operations in HDFS;
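
For steps a-e, a Pig Latin sketch along these lines should work (the field names follow the CRAN log-file header; the HDFS paths are assumptions):

```pig
-- a. Load one day of logs (fields as documented for the CRAN log files)
logs = LOAD 'RLogFiles/2017-11-01.csv' USING PigStorage(',')
       AS (date:chararray, time:chararray, size:long, r_version:chararray,
           r_arch:chararray, r_os:chararray, package:chararray,
           version:chararray, country:chararray, ip_id:chararray);

-- b. Check the first 10 entries
first10 = LIMIT logs 10;
DUMP first10;

-- Reduce the dataset to the variables needed for this task
slim = FOREACH logs GENERATE package, r_os;

-- c. Occurrences per package
by_pkg = GROUP slim BY package;
pkg_counts = FOREACH by_pkg GENERATE group AS package, COUNT(slim) AS n;

-- d. Occurrences per package and operating system
by_pkg_os = GROUP slim BY (package, r_os);
pkg_os_counts = FOREACH by_pkg_os
                GENERATE FLATTEN(group) AS (package, r_os), COUNT(slim) AS n;

-- e. Store both results in HDFS
STORE pkg_counts INTO 'RLogResults/pkg_counts' USING PigStorage(',');
STORE pkg_os_counts INTO 'RLogResults/pkg_os_counts' USING PigStorage(',');
```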

3. sqoop, MySQL and R:

a. Export the results of both operations (package frequencies and package frequencies by operating systems) via sqoop into MySQL;

b. Access the tables from R/RStudio and display the results (top-10 results as bar charts)
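
The sqoop export could be sketched as follows (connection string, credentials, and table names are placeholders; the MySQL tables must exist beforehand with matching columns). In R, the tables can then be read with, e.g., DBI/RMySQL and plotted with barplot() or ggplot2:

```shell
# Export the package frequencies stored in HDFS into a pre-created MySQL table
sqoop export \
  --connect jdbc:mysql://localhost/cranlogs \
  --username hadoop -P \
  --table pkg_counts \
  --export-dir RLogResults/pkg_counts \
  --input-fields-terminated-by ','

# Repeat with --table pkg_os_counts and
# --export-dir RLogResults/pkg_os_counts for the per-OS frequencies
```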

4. Pig Latin: Number of individual users each day

a. Load the log-files into HDFS

b. Count the number of distinct users each day
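
One way to sketch this in Pig Latin (ip_id in the CRAN logs is an anonymized id that is only unique within a day, which is exactly what is needed here):

```pig
logs = LOAD 'RLogFiles' USING PigStorage(',')
       AS (date:chararray, time:chararray, size:long, r_version:chararray,
           r_arch:chararray, r_os:chararray, package:chararray,
           version:chararray, country:chararray, ip_id:chararray);

-- Reduce to the needed fields first
slim = FOREACH logs GENERATE date, ip_id;

-- Distinct users per day via a nested DISTINCT
by_day = GROUP slim BY date;
users_per_day = FOREACH by_day {
    u = DISTINCT slim.ip_id;
    GENERATE group AS date, COUNT(u) AS n_users;
};
DUMP users_per_day;
```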

5. Pig Latin: Average number of packages downloaded by an individual user each day

a. Load the log-files into HDFS

b. Calculate the average number of packages downloaded by an individual user each day
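
Assuming a logs relation loaded with the CRAN log schema as before, this could be sketched as:

```pig
-- Downloads per user and day
slim = FOREACH logs GENERATE date, ip_id;
by_user_day = GROUP slim BY (date, ip_id);
per_user = FOREACH by_user_day
           GENERATE FLATTEN(group) AS (date, ip_id), COUNT(slim) AS n_pkgs;

-- Average across users for each day
by_day = GROUP per_user BY date;
avg_per_day = FOREACH by_day
              GENERATE group AS date, AVG(per_user.n_pkgs) AS avg_pkgs;
```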

6. Pig Latin: Task Views

a. Task Views are collections of R packages of a certain topic (check the CRAN webpage)

b. We are interested in whether these Task Views are used by R users: count the number of downloads of the package ctv each day

c. Visualize the results in R (line chart); follow the steps in No. 3 or import the results directly into R
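
A minimal Pig sketch for the daily ctv counts (again assuming logs is loaded with the CRAN log schema; the output path is an assumption):

```pig
-- Keep only downloads of the Task-View package ctv
ctv_only = FILTER logs BY package == 'ctv';
slim = FOREACH ctv_only GENERATE date;

-- Count ctv downloads per day and store the result
by_day = GROUP slim BY date;
ctv_per_day = FOREACH by_day GENERATE group AS date, COUNT(slim) AS n_ctv;
STORE ctv_per_day INTO 'RLogResults/ctv_per_day' USING PigStorage(',');
```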

7. Pig Latin: Download volume (in MB) of Top-10-packages

a. Use CRAN to find out the package size of the Top-10 packages (use the Windows package file size) in MB. Round to 1 decimal place.

b. Enter this information into a text file together with the name (should be the same as in log-files)

c. Import this file into HDFS

d. Load the file in Pig (I assume that the RStudio CRAN Log Files are available already)

e. Filter out the Top-10-packages in Pig

f. Add the size information

g. Calculate the download volume of each of the 10 packages by day

h. Export the results and display the results in R
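
Steps d-g could be sketched as follows (the size file is the one created in step b; since it only lists the Top-10 packages, the join automatically filters out everything else):

```pig
-- Size file: one line per package in the form name,size_mb
-- (package names must match those in the log files)
sizes = LOAD 'pkg_sizes.txt' USING PigStorage(',')
        AS (package:chararray, size_mb:double);

-- Join daily downloads with the size information (keeps only the Top-10)
slim = FOREACH logs GENERATE date, package;
joined = JOIN slim BY package, sizes BY package;

-- Download volume per package and day = download count * size
by_pkg_day = GROUP joined BY (slim::date, slim::package, sizes::size_mb);
counts = FOREACH by_pkg_day
         GENERATE FLATTEN(group) AS (date, package, size_mb), COUNT(joined) AS n;
volume = FOREACH counts GENERATE date, package, n * size_mb AS volume_mb;
STORE volume INTO 'RLogResults/top10_volume' USING PigStorage(',');
```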

Task 2: Apache PIG and Apache Mahout - Predicting Loan Success

Dataset: Lending Club Dataset (Q3/2017) - Filename: lc.2017q3.EvalExer.csv

Instructions: Include all commands in your documentation (use a Word document and convert it to PDF later). Also add screenshots of the various steps and of the results. Use FOREACH statements to reduce the dataset (keep only the variables that are necessary to solve a task).

1. Import File into HDFS

a. Download the file from Moodle - File: lc.2017q3.EvalExer.csv

b. Unzip the file

c. Check the structure and the variables in the file

d. Load the file into HDFS system

2. Load the file into PIG: Data Understanding and Data Preparation

a. Load the file into PIG from HDFS

b. Check all variables in the file and clean the variables

c. Filter the variables, generate new variables, etc.

d. Copy the final set of variables into a new file and store it into HDFS

3. Split the file into train- and test-datasets

a. Split the file using the provided macro in PIG into a training-file (70% of cases) and a test-file (30% of cases)

b. Store both files into HDFS using PIG
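
If the provided macro is not at hand, the split can be sketched directly (here `cleaned` stands for the prepared relation from step 2d). Note that RANDOM() should be materialized into a column first, because SPLIT may evaluate its conditions more than once per row:

```pig
-- Attach a random number to every row of the prepared relation
with_rand = FOREACH cleaned GENERATE *, RANDOM() AS r;

-- 70/30 split on the helper column
SPLIT with_rand INTO train IF r < 0.7, test IF r >= 0.7;

-- Store both parts (the helper column r can be projected away
-- with another FOREACH before storing, if desired)
STORE train INTO 'lc/train' USING PigStorage(',');
STORE test INTO 'lc/test' USING PigStorage(',');
```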

Choose one of Tasks 4 or 5 (either Task 4: logistic regression or Task 5: Random Forest).

4. Conduct a logistic regression using Apache Mahout

a. Note: You cannot run a logistic regression in Mahout based on HDFS files. You have to export both files from HDFS first!

b. Run a logistic regression on the training data set

c. Check the model based on the test dataset

d. Report the results
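
With the files exported to the local filesystem, Mahout's trainlogistic/runlogistic tools could be used roughly like this (the target variable, predictor names, and tuning values are placeholders for whatever survives your data preparation):

```shell
# Export from HDFS first -- Mahout's logistic regression reads local CSV files
hdfs dfs -get lc/train train.csv
hdfs dfs -get lc/test test.csv

# Train: target and predictor names must match the CSV header
mahout trainlogistic \
  --input train.csv --output lc.model \
  --target loan_status --categories 2 \
  --predictors loan_amnt int_rate annual_inc --types numeric \
  --features 20 --passes 10 --rate 50

# Evaluate on the held-out test set (AUC and confusion matrix)
mahout runlogistic --input test.csv --model lc.model --auc --confusion
```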

5. Conduct a Random Forest

a. Note: Run the Random Forest Model based on the HDFS files. You do not need to export them first!

b. Run a Random Forest model

c. Tune the model (at least a little bit)

d. Check the model based on the test data set

e. Report the results
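
In the older MapReduce-based Mahout releases this was done with the Describe/BuildForest/TestForest tools; a sketch (the jar name, paths, and the descriptor string depend on your installation and your prepared variables):

```shell
# Generate a dataset descriptor: one letter per column
# (N = numeric, C = categorical, L = label), matching your column order
hadoop jar mahout-examples-job.jar org.apache.mahout.classifier.df.tools.Describe \
  -p lc/train -f lc/train.info -d N N C ... L

# Build a forest of 100 trees directly on the HDFS training file
hadoop jar mahout-examples-job.jar org.apache.mahout.classifier.df.mapreduce.BuildForest \
  -d lc/train -ds lc/train.info -sl 5 -p -t 100 -o lc-forest

# Check the model against the held-out test data
hadoop jar mahout-examples-job.jar org.apache.mahout.classifier.df.mapreduce.TestForest \
  -i lc/test -ds lc/train.info -m lc-forest -a -mr -o predictions
```

Tuning (step c) can then be done by varying, e.g., the number of trees (-t) or the number of variables selected per node (-sl) and comparing the test results.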

Task 3: Apache Spark - Building Interest Rates Calculator

Context

You are working as a Data Scientist in a project for Lending Club (check their web page). The company wants to offer potential customers an online tool that predicts their likely interest rate based on the loan purpose and other variables. Your job is to set up a model to look for possible influences on the interest rate (variable int_rate) and to build a multivariate model to predict it.

In the last step you must prepare a management presentation with core findings.

Dataset - Visit the Lending Club Statistics page and download ONE OF THE EXISTING data sets.

Tasks:

  • Set up a new Jupyter notebook
  • Use pandas and seaborn for visualization
  • Report all steps
  • Clean the dataset first (outside Jupyter; use an editor!)
  • Target variable: interest rate (int_rate)
  • Import the dataset into HDFS
  • Apply CRISP-DM

Steps:

Preliminary Steps

  • Select the variables shown in the table above

Data Understanding

  • Analyze the variables in the dataset you selected (schema, first rows, descriptive statistics / frequency tables, charts, ...)

Data Preparation

  • Missing Values
  • Transformation of all categorical variables
  • Split into Test and Training Dataset ...

Modeling

Models:

  • Model 1: Multiple Linear Regression
  • Model 2: Integrate polynomial terms (simply square the metric variables, i.e., those on a scale measurement level) and interaction effects for selected variables
  • Model 3: Tune Model 2 (you can use a grid search or just adjust some hyperparameters, such as the regularization parameter lambda)
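
In PySpark, the three models could be sketched along these lines (column names and grid values are placeholders; this is a sketch of the modeling step, not a complete pipeline):

```python
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression
from pyspark.ml.tuning import ParamGridBuilder, CrossValidator
from pyspark.ml.evaluation import RegressionEvaluator

# Model 1: plain multiple linear regression on the assembled features
assembler = VectorAssembler(inputCols=["loan_amnt", "annual_inc", "dti"],
                            outputCol="features")
lr = LinearRegression(featuresCol="features", labelCol="int_rate")
model1 = lr.fit(assembler.transform(train))

# Model 2: add squared terms for the metric variables by hand,
# then re-assemble and re-fit
train2 = train.withColumn("loan_amnt_sq", train["loan_amnt"] ** 2)

# Model 3: tune the regularization via a small grid search
grid = (ParamGridBuilder()
        .addGrid(lr.regParam, [0.0, 0.01, 0.1])
        .addGrid(lr.elasticNetParam, [0.0, 0.5, 1.0])
        .build())
cv = CrossValidator(estimator=lr, estimatorParamMaps=grid,
                    evaluator=RegressionEvaluator(labelCol="int_rate",
                                                  metricName="r2"),
                    numFolds=3)
```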

Evaluate all Model Fits

The core parameter is the coefficient of determination (R²).
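
As a sanity check, R² = 1 − SS_res / SS_tot can be computed by hand on a handful of predictions; a minimal stand-alone sketch:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean_y) ** 2 for y in y_true)          # total variation
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))  # unexplained
    return 1 - ss_res / ss_tot

# A perfect prediction gives R^2 = 1.0
print(r_squared([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # -> 1.0
```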

Always control for overfitting (compare the fit on the training and test datasets and reduce the complexity of the models if necessary).

Check the distribution of the residuals (the error term).

Report the final model that fits the data best (based on R² and overfitting behavior).

Management Presentation

Present the core findings on a maximum of 6 slides (Task 3 only). You are addressing general management!
