Part 1. Text Reading:

Decision Trees; Reasoning with Uncertainty (Course Slides)

Part 2. Problems:

(Note: Please cite any external reference materials used other than the textbook, in APA format where appropriate.)

Problem 2.1: Decision Tree

For this question you need to refer to the decision tree section in the Course Slides (Module 2-2) posted on Blackboard.

One major issue for any decision tree algorithm is how to choose the attribute on which to split, so that the data set is well categorized and a well-balanced tree is created. The classic approach is the ID3 algorithm, proposed by Quinlan in 1986. The detailed ID3 algorithm is shown in the slides, and the textbook discusses it in Section 18.3.

For this problem, please follow the ID3 algorithm and manually calculate its values on a data set similar to (but not the same as) the one in the course slides. This exercise should give you deep insight into the execution of the ID3 algorithm. Note that the concepts involved (for example, entropy and information gain) are also very important in information theory and signal processing. The new data set is shown below; one row has been removed from the original set, and all other rows remain the same.

Following the conventions used in the slides, please show the manual process and calculate the following values: Entropy(S_weather=sunny), Entropy(S_weather=windy), Entropy(S_weather=rainy), Gain(S, weather), Gain(S, parents), and Gain(S, money). Based on the last three values, which attribute should be chosen to split on? Please show the detailed process by which you obtain the solutions. (A short Python sketch after the table illustrates these computations.)

Weekend  Weather  Parents  Money  Decision (Category)
W1       Sunny    Yes      Rich   Cinema
W2       Sunny    No       Rich   Tennis
W3       Windy    Yes      Rich   Cinema
W4       Rainy    Yes      Poor   Cinema
W5       Rainy    No       Rich   Stay in
W6       Rainy    Yes      Poor   Cinema
W7       Windy    No       Poor   Cinema
W8       Windy    No       Rich   Shopping
W9       Windy    Yes      Rich   Cinema
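
For concreteness, here is a minimal Python sketch of the entropy and information-gain computations, assuming the standard ID3 definitions Entropy(S) = -Σ p_i log2(p_i) and Gain(S, A) = Entropy(S) - Σ_v (|S_v|/|S|) Entropy(S_v). It is offered only as a way to check your manual calculations, not as a substitute for showing them.

```python
import math
from collections import Counter

# The data set above, encoded as (weather, parents, money, decision) tuples.
data = [
    ("sunny", "yes", "rich", "cinema"),   # W1
    ("sunny", "no",  "rich", "tennis"),   # W2
    ("windy", "yes", "rich", "cinema"),   # W3
    ("rainy", "yes", "poor", "cinema"),   # W4
    ("rainy", "no",  "rich", "stay in"),  # W5
    ("rainy", "yes", "poor", "cinema"),   # W6
    ("windy", "no",  "poor", "cinema"),   # W7
    ("windy", "no",  "rich", "shopping"), # W8
    ("windy", "yes", "rich", "cinema"),   # W9
]

def entropy(labels):
    """Shannon entropy (base 2) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain(rows, attr):
    """Information gain from splitting `rows` on the attribute at index `attr`."""
    n = len(rows)
    g = entropy([r[-1] for r in rows])
    for value in {r[attr] for r in rows}:
        subset = [r[-1] for r in rows if r[attr] == value]
        g -= (len(subset) / n) * entropy(subset)
    return g

for value in ("sunny", "windy", "rainy"):
    print(f"Entropy(S_weather={value}) =",
          round(entropy([r[-1] for r in data if r[0] == value]), 4))

for name, idx in (("weather", 0), ("parents", 1), ("money", 2)):
    print(f"Gain(S, {name}) =", round(gain(data, idx), 4))
```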

Problem 2.2:

The decision tree inductive learning algorithm may be used to generate "IF ... THEN" rules that are consistent with a set of given examples. Consider an example where 10 binary input variables X1, X2, ..., X10 are used to classify a binary output variable Y.

(i) How many examples, at most, are needed to exhaustively enumerate every possible combination of inputs?

(ii) At most how many leaf nodes can a decision tree have if it is consistent with a training set containing 100 examples?

Please show the detailed process by which you obtain the solutions.
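
As a sanity check on part (i), a few lines of Python can enumerate the input space directly. This is illustrative only and does not replace the written derivation the problem asks for.

```python
from itertools import product

# Each of the 10 binary inputs takes one of two values, so enumerating
# every possible assignment yields 2**10 distinct examples.
inputs = list(product((0, 1), repeat=10))
print(len(inputs))  # 1024
```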

Problem 2.3: Bayes' Theorem

A quality control manager has used the C4.5 algorithm to derive rules that classify items based on several input factors. The output has two classes: Accept and Reject. Test results with the rule set indicate that 5% of the good items are classified as Reject and 2% of the bad items are classified as Accept.

Historical data suggests that 2% of the items are bad. Based on this information, what is the conditional probability that:

(i) An item classified as Reject is actually good?

(ii) An item classified as Accept is actually bad?

Please show the detailed process by which you obtain the solutions.
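
For reference, Bayes' theorem gives P(A|B) = P(B|A) P(A) / P(B), where P(B) expands by the law of total probability over the two classes. Below is a minimal Python sketch of that computation using the figures quoted in the problem; the written derivation is still required.

```python
def posterior(prior, likelihood, likelihood_given_complement):
    """P(A | B) = P(B | A) P(A) / P(B), where
    P(B) = P(B | A) P(A) + P(B | not A) P(not A)."""
    evidence = prior * likelihood + (1 - prior) * likelihood_given_complement
    return prior * likelihood / evidence

# (i) P(good | Reject): P(good) = 0.98, P(Reject | good) = 0.05,
#     and P(Reject | bad) = 1 - P(Accept | bad) = 0.98.
print(posterior(0.98, 0.05, 0.98))

# (ii) P(bad | Accept): P(bad) = 0.02, P(Accept | bad) = 0.02,
#     and P(Accept | good) = 1 - P(Reject | good) = 0.95.
print(posterior(0.02, 0.02, 0.95))
```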

Problem 2.4: Support Vector Machine

Consider the following set of training data.

x1  x2  class 
1 1
2 2
2 0
3 1
0 0
1 0
0 1
-1 1

(i) Plot these training points in a two-dimensional space (with axes x1 and x2). Are the classes {+, -} linearly separable? Why?

(ii) Construct the weight vector of the maximum margin hyperplane by inspection and identify the support vectors. (A code sketch following these questions can help you check your answer.)

(iii) If you remove one of the support vectors, does the size of the optimal margin decrease, stay the same, or increase? Justify your answer.

(iv) Is your answer to (iii) also true for any dataset in a two-dimensional space? Provide a counterexample if it is not true, or give a short proof if it is true. For a dataset in a space with more than two dimensions, is your answer still the same? Justify.
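
Note that the class column of the table above did not survive reproduction here, so the labels in the sketch below are an assumption made purely for illustration (first four rows positive, the rest negative); substitute the labels from the original handout. With that caveat, fitting a linear SVM with a very large C approximates the hard-margin (maximum-margin) solution, which lets you verify the weight vector and support vectors you found by inspection. The sketch uses scikit-learn.

```python
import numpy as np
from sklearn.svm import SVC

# Coordinates from the table above, in row order.
X = np.array([[1, 1], [2, 2], [2, 0], [3, 1],
              [0, 0], [1, 0], [0, 1], [-1, 1]])

# ASSUMPTION: the class labels are missing from the table as reproduced;
# +1 for the first four rows and -1 for the rest is used only so that the
# sketch runs. Replace these with the labels from the original handout.
y = np.array([1, 1, 1, 1, -1, -1, -1, -1])

# A very large C approximates the hard-margin (maximum-margin) SVM.
clf = SVC(kernel="linear", C=1e6).fit(X, y)

print("weight vector w:", clf.coef_[0])
print("bias b:", clf.intercept_[0])
print("support vectors:\n", clf.support_vectors_)
print("margin width:", 2 / np.linalg.norm(clf.coef_[0]))
```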
