Regularized Logistic Regression for Binary and Multi-class Classification


Purpose

In this assessment, you need to demonstrate your skills in applying regularized logistic regression to perform two-class and multi-class classification on real-world tasks. You also need to demonstrate your skill in recognizing underfitting/overfitting situations.

Instructions

This is a group assessment task. Students will be required to analyse a given real-world scenario and contribute to the classifier design.

The group response to the problem should not exceed 30 pages. Students will be required to consolidate their individual solutions and propose the best solution, evidencing each group member's contribution along with a rationale for the group's response to solving the problem.

Task A - Binary Classification

For this problem, we will use a subset of the Wisconsin dataset supplied with the assignment data. Note that this dataset has some entries missing.

1.1 Data Munging

Cleaning the data is essential when dealing with real-world problems. The training and testing data are stored in the "data/wisconsin_data" folder. You have to perform the following (a minimal sketch of these steps is given after the list):

- Read the training and testing data. Print the number of features in the dataset.

- For the data label, print the total number of 1's and 0's in the training and testing data. Comment on the class distribution. Is it balanced or unbalanced?

- Print the number of features with missing entries.

- Fill the missing entries. To fill any feature, you can use either the mean or the median of that feature's observed values.

- Normalize the training and testing data.
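
A minimal sketch of these steps, assuming the Wisconsin data are provided as train.csv and test.csv with the class label in a column named "label" (the file and column names are assumptions to adapt to the actual files):

```python
# Sketch of the data-munging steps. File names, the "label" column name and the
# use of StandardScaler are assumptions -- adjust to match the provided data.
import pandas as pd
from sklearn.preprocessing import StandardScaler

train = pd.read_csv("data/wisconsin_data/train.csv")
test = pd.read_csv("data/wisconsin_data/test.csv")

X_train, y_train = train.drop(columns=["label"]), train["label"]
X_test, y_test = test.drop(columns=["label"]), test["label"]

print("Number of features:", X_train.shape[1])
print("Training label counts:\n", y_train.value_counts())
print("Testing label counts:\n", y_test.value_counts())
print("Features with missing entries:", int(X_train.isna().any().sum()))

# Fill missing entries with the training-set median (the mean is equally valid).
medians = X_train.median()
X_train = X_train.fillna(medians)
X_test = X_test.fillna(medians)      # reuse training statistics for the test set

# Normalize: fit the scaler on the training data only, then apply to both sets.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
```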

1.2 Logistic Regression

Train logistic regression models with L1 regularization and L2 regularization, using alpha = 0.1 and lambda = 0.1 respectively. Report accuracy, precision, recall and f1-score, and print the confusion matrix.
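
A minimal sketch of this step, assuming scikit-learn's LogisticRegression and mapping the regularization strengths alpha and lambda to C = 1/strength; this mapping is an assumption, so follow the convention defined in your course materials. It reuses X_train, y_train, X_test and y_test from the sketch in section 1.1.

```python
# Sketch of training L1- and L2-regularized logistic regression and reporting
# the requested metrics. The C = 1/strength mapping is an assumption.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

def fit_and_report(model, X_tr, y_tr, X_te, y_te):
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print("accuracy :", accuracy_score(y_te, pred))
    print("precision:", precision_score(y_te, pred))
    print("recall   :", recall_score(y_te, pred))
    print("f1-score :", f1_score(y_te, pred))
    print("confusion matrix:\n", confusion_matrix(y_te, pred))

l1_model = LogisticRegression(penalty="l1", C=1 / 0.1, solver="liblinear")
l2_model = LogisticRegression(penalty="l2", C=1 / 0.1, solver="liblinear")
fit_and_report(l1_model, X_train, y_train, X_test, y_test)
fit_and_report(l2_model, X_train, y_train, X_test, y_test)
```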
1.3 Choosing the best hyper-parameter

For the L1 model, choose the best alpha value from the following set:

{0.1, 1, 3, 10, 33, 100, 333, 1000, 3333, 10000, 33333}

For the L2 model, choose the best lambda value from the following set:

{0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 33}

To choose the best hyperparameter (alpha/lambda) value, you have to do the following:

- For each value of hyperparameter, perform 100 random splits of training data into training and validation data.

- For each hyperparameter value, find the average validation accuracy over its 100 train/validation pairs. The best hyperparameter is the one that gives the maximum average validation accuracy.

Use the best alpha and lambda values to re-train your final L1- and L2-regularized models (a sketch of this selection loop is given after the list below). Evaluate the prediction performance on the test data and report the following:

- Precision

- Accuracy

- The top 5 features selected in decreasing order of feature weights.

- Confusion matrix
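
A minimal sketch of the selection loop and final re-training, assuming the normalized arrays from section 1.1, an 80/20 train/validation split and the same C = 1/strength mapping as above (all assumptions):

```python
# Sketch of hyperparameter selection by repeated random splitting. The 80/20
# split ratio and the build_model helper are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def choose_hyperparameter(values, build_model, X, y, n_splits=100):
    """Return the value with the highest average validation accuracy."""
    mean_acc = {}
    for v in values:
        scores = []
        for seed in range(n_splits):
            X_tr, X_val, y_tr, y_val = train_test_split(
                X, y, test_size=0.2, random_state=seed)
            scores.append(build_model(v).fit(X_tr, y_tr).score(X_val, y_val))
        mean_acc[v] = float(np.mean(scores))
    return max(mean_acc, key=mean_acc.get)

alphas = [0.1, 1, 3, 10, 33, 100, 333, 1000, 3333, 10000, 33333]
best_alpha = choose_hyperparameter(
    alphas, lambda a: LogisticRegression(penalty="l1", C=1 / a, solver="liblinear"),
    X_train, y_train)

# Re-train the final L1 model and list the top 5 features by absolute weight
# (printed here as column indices).
final_l1 = LogisticRegression(penalty="l1", C=1 / best_alpha,
                              solver="liblinear").fit(X_train, y_train)
top5 = np.argsort(-np.abs(final_l1.coef_[0]))[:5]
print("best alpha:", best_alpha, "| top-5 feature indices:", top5)
```

The same helper can be reused for the L2 model by swapping in penalty="l2" and the lambda grid.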

Finally, discuss whether there is any sign of underfitting or overfitting, with appropriate reasoning.

Task B - Multiclass Classification

For this experiment, we will use a small subset of the MNIST dataset of handwritten digits. This dataset has no missing data. You will have to implement a one-versus-rest scheme to perform multi-class classification using a binary classifier based on L1-regularized logistic regression.

2.1 Read and understand the data, create a default One-vs-Rest Classifier

1- Use the data from the file reduced_mnist.csv in the data directory. Begin by reading the data. Print the following information:

- Number of data points

- Total number of features

- Unique labels in the data

2- Split the data into 70% training data and 30% test data. Fit a One-vs-Rest Classifier (which uses a logistic regression classifier with alpha = 1) on the training data, and report accuracy, precision and recall on the testing data.
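
A minimal sketch of this step, assuming the digit label sits in the first column of reduced_mnist.csv and mapping alpha = 1 to C = 1 (both assumptions to verify against the actual file and course convention):

```python
# Sketch of the default one-vs-rest classifier. The label-column position and
# the alpha -> C mapping are assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

data = pd.read_csv("data/reduced_mnist.csv")
X, y = data.iloc[:, 1:], data.iloc[:, 0]          # assumes label is column 0

print("Number of data points:", X.shape[0])
print("Total number of features:", X.shape[1])
print("Unique labels:", sorted(y.unique()))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ovr = OneVsRestClassifier(
    LogisticRegression(penalty="l1", C=1, solver="liblinear"))
ovr.fit(X_tr, y_tr)
pred = ovr.predict(X_te)

print("accuracy :", accuracy_score(y_te, pred))
print("precision:", precision_score(y_te, pred, average="macro"))
print("recall   :", recall_score(y_te, pred, average="macro"))
```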

2.2 Choosing the best hyper-parameter

1- As in section 1.3 above, now create 10 random splits of the training data into training and validation data. Choose the best value of alpha from the following set: {0.1, 1, 3, 10, 33, 100, 333, 1000, 3333, 10000, 33333}. To choose the best alpha hyperparameter value, do the following (a sketch of this loop and plot is given after the list):

- For each value of the hyperparameter, perform 10 random splits of the training data into training and validation data, as described above.

- For each value of hyperparameter, use its 10 random splits and find the average training and validation accuracy.

- On a graph, plot both the average training accuracy (in red) and average validation accuracy (in blue) w.r.t. each hyperparameter setting. Comment on this graph by identifying regions of overfitting and underfitting.

- Print the best value of alpha hyperparameter.
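
A sketch of this loop and plot, reusing the 70% training split from section 2.1 (X_tr, y_tr in the sketch above), an 80/20 train/validation split and the alpha-to-C mapping assumed earlier (all assumptions):

```python
# Sketch of the training/validation accuracy curves over the alpha grid.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

alphas = [0.1, 1, 3, 10, 33, 100, 333, 1000, 3333, 10000, 33333]
avg_train, avg_val = [], []
for a in alphas:
    tr_scores, val_scores = [], []
    for seed in range(10):                       # 10 random splits per alpha
        X_t, X_v, y_t, y_v = train_test_split(X_tr, y_tr, test_size=0.2,
                                              random_state=seed)
        clf = OneVsRestClassifier(
            LogisticRegression(penalty="l1", C=1 / a, solver="liblinear"))
        clf.fit(X_t, y_t)
        tr_scores.append(clf.score(X_t, y_t))
        val_scores.append(clf.score(X_v, y_v))
    avg_train.append(np.mean(tr_scores))
    avg_val.append(np.mean(val_scores))

plt.semilogx(alphas, avg_train, "r-o", label="average training accuracy")
plt.semilogx(alphas, avg_val, "b-o", label="average validation accuracy")
plt.xlabel("alpha")
plt.ylabel("accuracy")
plt.legend()
plt.show()

best_alpha = alphas[int(np.argmax(avg_val))]
print("best alpha:", best_alpha)
```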

2- Evaluate the prediction performance on the test data and report the following (see the sketch after this list):

- Total number of non-zero features in the final model.

- The confusion matrix

- Precision, recall and accuracy for each class.
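
A sketch of this evaluation, assuming best_alpha from the plot above and the 70/30 split variables (X_tr, y_tr, X_te, y_te) from section 2.1:

```python
# Sketch of the final evaluation on the test data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.multiclass import OneVsRestClassifier

final = OneVsRestClassifier(
    LogisticRegression(penalty="l1", C=1 / best_alpha, solver="liblinear"))
final.fit(X_tr, y_tr)
pred = final.predict(X_te)

# Non-zero features: features with a non-zero weight in at least one of the
# per-class binary models.
coefs = np.vstack([est.coef_ for est in final.estimators_])
print("non-zero features:", int(np.any(coefs != 0, axis=0).sum()))

print("confusion matrix:\n", confusion_matrix(y_te, pred))
print(classification_report(y_te, pred))   # per-class precision/recall + accuracy
```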

Finally, discuss whether there is any sign of underfitting or overfitting, with appropriate reasoning.

Attachment: Machine learning.zip
