Question 1

Well-posed Machine Learning problems

(a)  What is required to define a well-posed learning problem?

(b)  Here are two potential real-world application tasks for machine learning:

1. a winery wishes to uncover relationships between records of the quantitative lab analyses of its wines and some key subjective descriptions applied to them (e.g. dry, fruity, light, etc.)

2. you want to predict students' marks in the final exam of COMP9417 given their marks from the other assessable components in the course; you may assume that the corresponding data from previous years is available

Pick one of the tasks and state how you would define it as a well-posed machine learning problem in terms of the above requirements.

(c)  Suggest a learning algorithm for the problem you chose (give the name, and in a sentence explain why it would be a good choice).
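For illustration, if you picked task 2 and suggested a linear model, a minimal Python sketch might look like the following; the file name previous_years.csv and the column names lab_marks, assignment_marks and final_exam are hypothetical:

import pandas as pd
from sklearn.linear_model import LinearRegression

# Load previous years' records; file and column names are made up here.
df = pd.read_csv("previous_years.csv")
X = df[["lab_marks", "assignment_marks"]]   # other assessable components
y = df["final_exam"]                        # target: final exam mark

model = LinearRegression().fit(X, y)        # least-squares fit
print(model.predict(X.head()))              # predicted marks for five students

Linear regression is a reasonable first suggestion here because the target (a mark) is numeric and the assessable components plausibly combine additively.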

Question 2

Concept Learning

(a)  Write an algorithm called "Find-G" to find a maximally-general consistent hypothesis. You can assume the data will be noise-free and that the target concept is in the hypothesis space.
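One possible greedy sketch in Python, assuming discrete attributes and conjunctive hypotheses in which '?' matches any value; a complete Find-G would search (with backtracking) over alternative specialisations, which this version omits:

def covers(h, x):
    # h matches x if every constrained attribute agrees with x
    return all(hi == '?' or hi == xi for hi, xi in zip(h, x))

def find_g(examples, values):
    # examples: list of (instance, label); values: per-attribute value sets
    h = ['?'] * len(values)              # start with the most general hypothesis
    positives = [x for x, y in examples if y]
    for x, y in examples:
        if not y and covers(h, x):       # a covered negative: specialise h
            for i in range(len(h)):
                if h[i] != '?':
                    continue
                for v in values[i]:
                    if v == x[i]:
                        continue         # this value would still cover the negative
                    trial = h[:i] + [v] + h[i+1:]
                    if all(covers(trial, p) for p in positives):
                        h = trial        # accept the first specialisation that
                        break            # still covers every positive example
                else:
                    continue
                break
    return h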

(b)  Outline the steps in a proof that Find-G will never fail to cover a positive example in the training set.

Question 3

Learning for Numeric Prediction

(a)  Let the weights of a two-input perceptron be: w0 = -0.2, w1 = 0.5 and w2 = 0.5. Assuming that x0 = 1, what is the output of the perceptron when:

(i) x1 = 1 and x2 = 1?
(ii) x1 = -1 and x2 = -1?

Letting w0 = 0.2 and keeping x0 = 1, w1 = 0.5 and w2 = 0.5, what is the perceptron output when:

(iii) x1 = 1 and x2 = -1?
(iv) x1 = -1 and x2 = 1?
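A quick arithmetic check, assuming the usual threshold activation (output 1 if the weighted sum w0*x0 + w1*x1 + w2*x2 is greater than 0, and -1 otherwise) and the weights and inputs as printed above:

def perceptron(w, x):
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s > 0 else -1

for w0 in (-0.2, 0.2):
    for x1, x2 in [(1, 1), (-1, -1), (1, -1), (-1, 1)]:
        out = perceptron([w0, 0.5, 0.5], [1, x1, x2])
        print(f"w0={w0}: x=({x1},{x2}) -> {out}")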

(b)  Here is a regression tree with leaf nodes denoted A, B and C:
X <= 5 : A
X > 5 :
| X <= 9: B
| X > 9: C

This is the training set from which the regression tree was learned:

X    Class
1    8
3    11
4    8
6    3
7    6
8    2
9    5
11   12
12   15
14   15

Write down the output (class) values and number of instances that appear in each of the leaf nodes A, B and C of the tree.
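As a check, here is a minimal sketch that reads the leaf statistics off the training set, assuming each leaf predicts the mean of the training target values that reach it (the usual regression-tree convention):

data = [(1, 8), (3, 11), (4, 8), (6, 3), (7, 6),
        (8, 2), (9, 5), (11, 12), (12, 15), (14, 15)]

leaves = {
    'A': [y for x, y in data if x <= 5],        # left branch of the root
    'B': [y for x, y in data if 5 < x <= 9],
    'C': [y for x, y in data if x > 9],
}
for name, ys in leaves.items():
    print(name, 'n =', len(ys), 'mean =', sum(ys) / len(ys))
# prints: A n = 3 mean = 9.0, B n = 4 mean = 4.0, C n = 3 mean = 14.0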

Question 4

Neural and Tree Learning on Continuous Attributes

(a)  In general, feedforward neural networks (multi-layer perceptrons) trained by error back-propagation are:

(i) fast to train, and fast to run on unseen examples
(ii) slow to train, and fast to run on unseen examples
(iii) fast to train, and slow to run on unseen examples
(iv) slow to train, and slow to run on unseen examples

In one sentence, explain your choice of answer.

Suppose you have a decision tree (DT) and a multi-layer perceptron (MLP) that have been trained on data sampled from a two-class target function, with all attributes numeric. You can think of both models as graphs whose edges are labelled with numbers: weights in the MLP and threshold constants for attribute tests in the DT.

(b)  Compare and contrast the roles of these numbers in the two models.
(c)  Compare and contrast the methods of learning these numbers in the two models.
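To make the contrast concrete, here is a toy sketch (both functions and all constants are made up for illustration): in the DT the numbers are thresholds that route an example down exactly one path, while in the MLP the weights scale every input and are combined by a smooth nonlinearity:

import math

def dt_predict(x1, x2):
    # threshold constants define axis-parallel splits; only one path fires
    if x1 <= 2.5:
        return 0 if x2 <= 1.0 else 1
    return 1

def mlp_predict(x1, x2, w=(0.3, -1.2, 0.7)):
    # weights multiply every input, and all of them influence the output
    s = w[0] + w[1] * x1 + w[2] * x2
    return 1 / (1 + math.exp(-s))    # sigmoid squashes the sum into (0, 1)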
