F21DL - Data Mining and Machine Learning


The point: this coursework is designed to give you experience with, and hence a deeper understanding of:

- Overfitting: finding a classifier that does very well on your training data doesn't mean it will do well on unseen (test) data.
- The relationship between overfitting and complexity of the classifier - the more degrees of freedom in your classifier, the more chances it has to overfit the training data.
- The relationship between overfitting and the size of the training set.
- Bespoke machine learning: you don't have to use just one of the standard types of classifier - the 'client' may specifically want a certain type of classifier (here, a ruleset that works in a certain way), and you can develop algorithms that try to find the best possible such classifier.
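The first two points above can be demonstrated in a few lines. This is a hypothetical sketch, not part of the coursework spec: it uses scikit-learn (one of the permitted Python alternatives to Weka) and synthetic data to show that a fully grown decision tree memorises its training set while a depth-limited one generalises more evenly.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the coursework data (the real files are not shown here)
X, y = make_classification(n_samples=600, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

scores = {}
for depth in (2, None):  # shallow (simple) tree vs. fully grown (complex) tree
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    scores[depth] = (tree.score(X_tr, y_tr), tree.score(X_te, y_te))
    print(depth, scores[depth])
```

Typically the unpruned tree reaches perfect training accuracy but a noticeably lower test accuracy, which is exactly the overfitting-versus-complexity effect described above.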

Students wishing to complete the tasks below in other languages, such as R, MATLAB, or Python, are welcome to do so, provided they have prior knowledge of those languages.

In the task specification below, the assumption is made that the majority of the class uses Weka. Please adapt the instructions accordingly if you use a different programming language.

1. Convert the provided files into ARFF format and load them into Weka.
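If you are working in Python rather than Weka, an ARFF file is plain text and can be written directly. This is a minimal sketch; the relation name, attribute names and rows are made up for illustration.

```python
# Made-up example rows: two numeric attributes and a nominal class
rows = [(5.1, 3.5, "yes"), (4.9, 3.0, "no")]

header = [
    "@relation demo",
    "@attribute width numeric",
    "@attribute height numeric",
    "@attribute class {yes,no}",
    "@data",
]
lines = header + ["%s,%s,%s" % r for r in rows]
arff_text = "\n".join(lines)
print(arff_text)
```

Writing the result to a `.arff` file with a text editor's conventions (UTF-8, Unix line endings) is enough for Weka to load it.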

Dealing with big data sets: in CW2, you were given several options for handling large data sets in Weka (increasing the heap size for the Weka GUI, using the Weka command line with an increased heap, wrapping Weka command-line calls in scripts that automate the experiments, or reducing the size of the data set using Weka's randomization and attribute-selection methods). You will have to make such a decision for this coursework, too.
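As a reminder of the command-line option, a run of J48 with an enlarged heap looks like the fragment below. The file names and heap size are placeholders; adjust them to your machine and data.

```shell
# Run J48 from the Weka command line with a 4 GB heap (placeholder paths)
java -Xmx4g -cp weka.jar weka.classifiers.trees.J48 \
    -t train.arff -T test.arff
```

Wrapping several such calls in a script lets you automate the parameter sweeps required in the steps below.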

2. Create folders on your computer to store classifiers, screenshots and results of all your experiments, as explained below.

Your coursework consists of two parts: in Part 1 you will work with Decision Trees, and in Part 2 with Linear Classifiers and Neural Networks.

For each of the two parts, you will do the following:

3. Using the provided data sets and Weka's facility for 10-fold cross-validation, run the classifier and note its accuracy for varying learning parameters provided by Weka. (Below you will find more instructions on those.) Record all your findings and explain them. Make sure you understand and can explain logically the meaning of the confusion matrix, as well as the information contained in the "Detailed Accuracy" field: TP Rate, FP Rate, Precision, Recall, F Measure, ROC Area.
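For students using Python, step 3 has a close scikit-learn analogue (the coursework itself assumes Weka, so this is illustrative only): cross-validated predictions feed both the confusion matrix and the per-class precision/recall/F-measure figures that Weka's "Detailed Accuracy" field reports.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)  # placeholder data
pred = cross_val_predict(DecisionTreeClassifier(random_state=0), X, y, cv=10)

cm = confusion_matrix(y, pred)
print(cm)                               # rows = actual class, columns = predicted
print(classification_report(y, pred))   # precision, recall, F-measure per class
```

Reading the confusion matrix row by row (actual class) against its columns (predicted class) is the same exercise as interpreting Weka's output.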

4. Use visualization tools to analyze and understand the results: Weka has comprehensive tools for visualizing and manipulating Decision Trees and Neural Networks.

5. Repeat steps 3 and 4, this time using the testing data set instead of Weka's cross-validation.

6. Make new training and testing sets by moving 3000 instances from the testing set into the training set. Then repeat steps 3 and 4.

7. Make new training and testing sets again, this time enlarging the training set with 6000 instances from the testing set, and again repeat steps 3 and 4.
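If you script these experiments in Python, steps 6 and 7 amount to slicing instances off the test arrays and stacking them onto the training arrays. This is a sketch assuming the data is held in NumPy arrays; the shapes below are made up.

```python
import numpy as np

def grow_training_set(X_tr, y_tr, X_te, y_te, n):
    """Move the first n test instances into the training set (illustrative helper)."""
    X_tr2 = np.vstack([X_tr, X_te[:n]])
    y_tr2 = np.concatenate([y_tr, y_te[:n]])
    return X_tr2, y_tr2, X_te[n:], y_te[n:]

# Tiny demonstration with made-up sizes
X_tr, y_tr = np.zeros((100, 4)), np.zeros(100)
X_te, y_te = np.ones((9000, 4)), np.ones(9000)
X_tr, y_tr, X_te, y_te = grow_training_set(X_tr, y_tr, X_te, y_te, 3000)
print(X_tr.shape, X_te.shape)
```

In Weka itself, the same effect is achieved with the Randomize and RemovePercentage/RemoveRange filters; either way, keep the two sets disjoint.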

8. Analyse your results from the point of view of the problem of classifier overfitting.

Part 1. Decision tree learning.

In this part, you are asked to explore the following three decision tree algorithms implemented in Weka:
1. J48 Algorithm
2. User Classifier (This option allows you to construct decision trees semi-manually)
3. One other Decision tree algorithm.
You should compare their relative performance on the given data set. For this:
- Experiment with various decision tree parameters: binary splits or multiple branching, pruning, confidence threshold for pruning, and the minimal number of instances permissible per leaf.
- Compare their relative performance based on the output of confusion matrices as well as other metrics (TP Rate, FP Rate, Precision, Recall, F Measure, ROC Area). Note that different algorithms can perform differently on different metrics. Does this happen in your experiments? Discuss.
- When working with User Classifier, you will learn to work with both Data and Tree Visualizers in Weka. Please reduce the number of attributes as in CW2 to prototype more efficiently in Visualizers.
- Record all the above results by going through the steps 3-8.
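The J48 parameters named above have rough scikit-learn analogues, so the same sweep can be scripted in Python. This sketch is illustrative: the data is synthetic, `min_samples_leaf` stands in for J48's minimum-instances-per-leaf (`-M`), and cost-complexity pruning (`ccp_alpha`) stands in for J48's confidence-based pruning, which is a different pruning method.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, random_state=1)  # placeholder data
results = {}
for min_leaf in (1, 5, 25):      # roughly J48's minNumObj (-M)
    for alpha in (0.0, 0.01):    # cost-complexity pruning as a stand-in for J48 pruning
        clf = DecisionTreeClassifier(min_samples_leaf=min_leaf,
                                     ccp_alpha=alpha, random_state=1)
        results[(min_leaf, alpha)] = cross_val_score(clf, X, y, cv=10).mean()
        print(min_leaf, alpha, round(results[(min_leaf, alpha)], 3))
```

Tabulating the accuracies from such a sweep is exactly the record-keeping that steps 3-8 ask for.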

Part 2. Neural Networks.

In this part, you will work with the MultilayerPerceptron algorithm in Weka.

- Run MultilayerPerceptron. Experiment with various Neural Network parameters: add or remove nodes, layers and connections; vary the learning rate, number of epochs, momentum, and validation threshold.
- You will need to work with Weka's Neural Network Visualiser in order to perform some of the above tasks. You are allowed to use smaller data sets when working with the Visualiser.
- Experiment with relative performance of Neural Networks and changing parameters. Base your comparative study on the output of confusion matrices as well as other metrics (TP Rate, FP Rate, Precision, Recall, F Measure, ROC Area).
- Record all the above results by going through the steps 3-8.
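For the Python route, scikit-learn's `MLPClassifier` exposes most of the parameters listed above. This is a hedged sketch on synthetic data; the learning rate 0.3 and momentum 0.2 mirror Weka's MultilayerPerceptron defaults, and the layer sizes are arbitrary choices for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, random_state=0)  # placeholder data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

scores = {}
for hidden in ((10,), (50, 50)):     # one small hidden layer vs. two larger ones
    mlp = MLPClassifier(hidden_layer_sizes=hidden, solver="sgd",
                        learning_rate_init=0.3, momentum=0.2,
                        max_iter=500, random_state=0)
    mlp.fit(X_tr, y_tr)
    scores[hidden] = mlp.score(X_te, y_te)
    print(hidden, round(scores[hidden], 3))
```

Varying `max_iter` (epochs) and `learning_rate_init` in the same loop reproduces the rest of the parameter study.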

9. Deep Learning and Deep Neural Networks have gained popularity recently. Do some research (on the web and in the recommended textbook) to find out more about Deep Learning. Use algorithms and tools available in Weka or online. Write a one-page essay comparing Neural Networks and Deep Neural Networks.
