MCIS6273 Data Mining Assignment: Clustering and Basic Classification

OBJECTIVES

Learn some of the clustering features of Scikit-learn for partitioning and hierarchical clustering (the k-Means and agglomerative hierarchical clustering algorithms);

Learn about document clustering and document similarity scoring using TF-IDF;

Use the built-in k-nearest neighbors implementation in Scikit-learn and interpret its output - this is an extension of the implementation you did last time;

Learn how to binarize categorical variables in Scikit-learn;

Learn how to use DecisionTreeClassifier to build a basic classifier.

DATA FOR PART 1

We will be using data from IMDB and working with movie data. IMDB is a movie database that is widely used to learn about (and rate) movies. Much of the work around movies focuses on predicting ratings - for example, the Netflix Prize contest was designed to encourage developers to explore better algorithms for predicting movie ratings. Instead of predicting ratings, we will work with clustering the plots of movies. Data will come from the OMDB API, which allows a developer to extract information from IMDB programmatically, since IMDB publishes no open public API of its own.

You can view the notebook here to see how the data was extracted, but you may skip that step and look directly at the file the notebook produces. The data for this assignment is in the data directory, where you will see a TSV file called

data/top1000_movie_summaries.tsv.

BACKGROUND FOR PART 1

Document clustering is a common task in text mining and has broad applications in a variety of contexts. In the unsupervised context, such clustering provides insights into a set of documents and the common features they share. In the supervised context, such clustering allows one to train on documents and subsequently classify them. For example, to determine whether a document is of a certain kind (e.g. legal, academic), one can use labeled instances to learn the features that allow the discrimination of unlabeled/unseen instances.

There are several good resources that you may want to bookmark for future reference in text mining and information retrieval generally:

Manning, C.D., Raghavan, P. and Schütze, H. (2008) Introduction to Information Retrieval. doi: https://dx.doi.org/10.1017/CBO9780511809071; Available at: Stanford NLP - Information Retrieval.

DOCUMENT ANALYSIS: TERM FREQUENCY (TF) AND INVERSE DOCUMENT FREQUENCY (IDF)

The intuition behind analyzing words in documents hinges on the following:

  • terms that are frequent within a document are given higher importance than those that are infrequent;
  • terms that are frequent across many documents are considered less important.

That is, words that are common across the entire corpus are discounted, while those that are frequent within individual documents are boosted.
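One common formulation (Scikit-learn's TfidfVectorizer uses a smoothed variant of it) weights a term t in a document d as

    \mathrm{tfidf}(t, d) = \mathrm{tf}(t, d) \cdot \log\frac{N}{\mathrm{df}(t)}

where tf(t, d) is the frequency of t in d, N is the number of documents in the corpus, and df(t) is the number of documents containing t.

As a rough sketch of how this plays out in Scikit-learn, the snippet below vectorizes the plot summaries with TF-IDF, scores document similarity, and clusters the result with both k-Means and agglomerative hierarchical clustering. It assumes the TSV holds the summaries in a column named "Plot" - check the file header for the actual column name, and note that k = 10 is an arbitrary choice.

    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity
    from sklearn.cluster import KMeans, AgglomerativeClustering

    df = pd.read_csv("data/top1000_movie_summaries.tsv", sep="\t")

    # Turn each plot summary into a TF-IDF weighted term vector,
    # dropping common English stop words.
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(df["Plot"].fillna(""))  # "Plot" column is assumed

    # Document similarity: cosine similarity of the first plot to all others.
    sims = cosine_similarity(X[0], X).ravel()
    print(sims.argsort()[::-1][:5])  # indices of the 5 most similar plots (incl. itself)

    # Partitioning clustering: k-Means on the sparse TF-IDF matrix.
    kmeans_labels = KMeans(n_clusters=10, random_state=0, n_init=10).fit_predict(X)

    # Hierarchical clustering: AgglomerativeClustering needs a dense array.
    agg_labels = AgglomerativeClustering(n_clusters=10, linkage="ward").fit_predict(X.toarray())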

Part 2 - Classification With k-Nearest Neighbors

In HW1 you learned about and used the k-NN algorithm, computing the k nearest neighbors directly from the data. k-NN is what is called a lazy learner because it defers its work to the prediction phase instead of the training phase: no model is built up front, the training data is simply stored. This has performance implications of its own, since all of the training data must be kept and scanned at prediction time. It can, nonetheless, be used for supervised classification, since the stored instances carry all the class labels.

You will first start with a warm-up using the nearest-neighbors algorithm already implemented in Scikit-learn.
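A minimal sketch of that warm-up, using KNeighborsClassifier on placeholder data (the feature matrix and labels below are illustrative, not the assignment's):

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    # Placeholder data -- substitute the assignment's features and labels.
    rng = np.random.default_rng(0)
    X = rng.random((100, 4))
    y = rng.integers(0, 2, size=100)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # "Fitting" a lazy learner just stores the training data; the neighbor
    # search is deferred until predict() (or score()) is called.
    knn = KNeighborsClassifier(n_neighbors=5)
    knn.fit(X_train, y_train)
    print(knn.score(X_test, y_test))    # accuracy on held-out data
    print(knn.kneighbors(X_test[:1]))   # distances/indices of the 5 nearest neighbors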

Part 3 - Classification With Decision Trees

As we learned, decision trees are a powerful way to build classifiers, especially because the output is interpretable. By using impurity measures such as entropy or the Gini index to compute the information gain of candidate splits, nodes can be chosen that split the data in meaningful ways, so that following each attribute test from the root down to a leaf yields a class label.
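The sketch below ties this to the binarization objective above: it one-hot encodes hypothetical categorical movie features and fits a DecisionTreeClassifier using entropy as the split criterion. The column names and toy data are placeholders, not the assignment's dataset.

    import pandas as pd
    from sklearn.preprocessing import OneHotEncoder
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical toy data -- substitute the assignment's actual features.
    df = pd.DataFrame({
        "genre": ["drama", "comedy", "drama", "horror", "comedy", "horror"],
        "rated": ["PG", "R", "PG-13", "R", "PG", "R"],
        "liked": [1, 0, 1, 0, 1, 0],   # target label
    })

    # Binarize the categorical columns into 0/1 indicator features.
    # (The keyword is `sparse` rather than `sparse_output` in older Scikit-learn.)
    encoder = OneHotEncoder(sparse_output=False)
    X = encoder.fit_transform(df[["genre", "rated"]])
    y = df["liked"]

    # criterion="entropy" selects splits by information gain; the default
    # "gini" uses the Gini impurity instead.
    tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
    tree.fit(X, y)

    # The fitted tree is interpretable -- print its decision rules.
    print(export_text(tree, feature_names=list(encoder.get_feature_names_out())))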
