Write and run Hadoop code (mappers and reducers) to find artist mentions and play counts in the Last.fm dataset


Task:

You are to write and run Hadoop code (mappers and reducers) to find, in the Last.fm dataset, (a) how often U2 is mentioned as an artist and (b) the songs that have been played at least 600 million times. The two questions are answered with two different implementations (the implementation for (a) builds on the implementation for (b)), so you should submit your answers with two mappers and two reducers.

Result: You need to submit (a) the code and (b) the result of executing it on the data provided. The code can be written in Java or in Python.

Assessment: Each student will submit their own solution. You are free to work on this exercise in groups of up to five students if you prefer; you need to indicate on your submission if you solved it as a group (and who else is in the group). The mark will depend on the correctness of the result and, to a lesser degree, on the code.

Data Description

The data is provided by Last.fm as a file containing tab-delimited lines. Each line tells how often a particular artist has been listened to by a particular user of Last.fm. The format is as follows:

user-mboxsha1 \t musicbrainz-artist-id \t artist-name \t plays

An example looks as follows:

000063d3fe1cf2ba248b9e3c3f0334845a27a6be \t a3cb23fc-acd3-4ce0-8f36-1e5aa6a18432 \t u2 \t 31

The example tells us that user 000063d3fe1cf2ba248b9e3c3f0334845a27a6be (obfuscated for reasons of privacy) listened to the song a3cb23fc-acd3-4ce0-8f36-1e5aa6a18432 (obfuscated for technical reasons) by u2 31 times.
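To make the parsing concrete, here is a minimal Python sketch for reading one such line. The helper name parse_line is hypothetical, and the sketch assumes that well-formed lines contain exactly the four tab-separated fields listed above:

# Hypothetical helper: split one tab-delimited line into its four fields.
# Returns None for anomalous lines (wrong field count, non-numeric plays).
def parse_line(line):
    fields = line.rstrip("\n").split("\t")
    if len(fields) != 4:
        return None
    user, song_id, artist, plays = fields
    if not plays.isdigit():
        return None
    return user, song_id, artist, int(plays)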

The goal of your implementation is to find (a) how often U2 is mentioned as an artist and (b) the songs (artist name and title of the song) that have been played at least 600 million times.
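A minimal Hadoop Streaming sketch for (b) in Python follows. It is one possible approach under stated assumptions, not the required solution: it assumes well-formed lines carry exactly the four tab-separated fields described above, treats the pair (artist name, second field) as the song key so that identical titles by different artists stay separate (see hint 3 below), and joins the pair with || so the composite key passes through the streaming sort as one field. The file names mapper_b.py and reducer_b.py and the separator are illustrative choices.

mapper_b.py:

#!/usr/bin/env python
# Emit one (artist||song, plays) pair per well-formed input line.
import sys

for line in sys.stdin:
    fields = line.rstrip("\n").split("\t")
    if len(fields) != 4:                    # skip anomalous lines
        continue
    user, song_id, artist, plays = fields
    if not plays.isdigit():                 # skip malformed play counts
        continue
    # lower-case the artist so inconsistent capitalization collapses
    print("%s||%s\t%s" % (artist.strip().lower(), song_id, plays))

reducer_b.py:

#!/usr/bin/env python
# Sum the plays per (artist, song) key and print the keys whose total
# reaches the threshold of 600 million plays.
import sys

THRESHOLD = 600000000

def emit(key, total):
    if key is not None and total >= THRESHOLD:
        artist, song = key.split("||", 1)
        print("%s\t%s\t%d" % (artist, song, total))

current, total = None, 0
for line in sys.stdin:
    key, plays = line.rstrip("\n").split("\t")
    if key != current:
        emit(current, total)      # flush the previous key
        current, total = key, 0
    total += int(plays)
emit(current, total)              # flush the last key

The reducer relies on Hadoop Streaming handing it the mapper output sorted by key, so all plays for one (artist, song) pair arrive consecutively.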

Hints

1. The structure of the dataset is well defined, but be aware that there may be anomalies (missing values, etc.).

2. Capitalization of artist names may not be consistent (the sketches after this list therefore lower-case the name before comparing).

3. Song titles are not unique: different artists may have songs with the same title.

4. Use the sample dataset to understand the format (and test parsing).
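Applying hints 1 and 2, part (a) can reuse the same map/sum pattern: the mapper keeps only the lines whose lower-cased artist name equals u2 and emits a count of 1 per mention, and the reducer sums the counts per key. Again a hedged Python sketch with illustrative file names:

mapper_a.py:

#!/usr/bin/env python
# Emit (u2, 1) for every line that mentions U2 as the artist.
import sys

for line in sys.stdin:
    fields = line.rstrip("\n").split("\t")
    if len(fields) != 4:                       # hint 1: skip anomalies
        continue
    if fields[2].strip().lower() == "u2":      # hint 2: normalize case
        print("u2\t1")

reducer_a.py:

#!/usr/bin/env python
# Sum the counts per key (only the key "u2" arrives here).
import sys

current, total = None, 0
for line in sys.stdin:
    key, value = line.rstrip("\n").split("\t")
    if key != current:
        if current is not None:
            print("%s\t%d" % (current, total))
        current, total = key, 0
    total += int(value)
if current is not None:
    print("%s\t%d" % (current, total))

The reducer is a generic per-key sum, i.e. the reducer of (b) without the play-count threshold, which matches the note above that the implementation for (a) builds on the one for (b).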

Dataset

The dataset is located in S3 at s3://lastfmdataba/plays and is 30 GB in size. You can use hadoop distcp s3n://lastfmdataba/plays /tmp/plays to load the file into HDFS (this places it in the /tmp directory).
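Once the file is in HDFS, the sketches above can be run with Hadoop Streaming, for example hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar -input /tmp/plays -output /tmp/songs-600m -mapper mapper_b.py -reducer reducer_b.py -file mapper_b.py -file reducer_b.py. The exact path of the streaming jar depends on your Hadoop installation, /tmp/songs-600m is an arbitrary output directory, and the scripts must be executable (or be invoked via python explicitly).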

Attachment:- sampledata.txt
