Why using the maximum number of training epochs could be problematic


A financial company hires your team to develop back-propagation neural network(s) for predicting the next-week trend of five stocks (i.e. go up, go down, or remain the same). The company also provides you with data for each stock for the past 15 years. Each data record consists of 20 attributes (such as index values, revenues, earnings per share, capital investment, and so on). Please answer the following questions:
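For concreteness, here is a minimal sketch of the input/output encoding such a network would use: the 20 attributes form the input vector and the three trend classes form the output layer. The library (NumPy), the hidden-layer size, and all numeric values are assumptions for illustration, not part of the assignment; the questions follow after the sketch.

```python
import numpy as np

# Hypothetical sizes, for illustration only: each record has the 20 attributes
# described above, and the target is one of three trend classes.
N_ATTRIBUTES = 20   # index values, revenues, earnings per share, capital investment, ...
N_CLASSES = 3       # go up, go down, remain the same
N_HIDDEN = 10       # assumed hidden-layer size, not specified in the problem

rng = np.random.default_rng(0)

# One data record (values are made up) and its one-hot target label.
x = rng.normal(size=N_ATTRIBUTES)     # 20 normalized attribute values
y = np.array([1.0, 0.0, 0.0])         # e.g. "go up"

# Weights of a single-hidden-layer back-propagation network.
W1 = rng.normal(scale=0.1, size=(N_HIDDEN, N_ATTRIBUTES))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(scale=0.1, size=(N_CLASSES, N_HIDDEN))
b2 = np.zeros(N_CLASSES)

def forward(x):
    """Forward pass: 20 inputs -> sigmoid hidden layer -> 3-class softmax."""
    h = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))   # hidden activations
    z = W2 @ h + b2
    p = np.exp(z - z.max())
    return h, p / p.sum()                       # class probabilities

h, p = forward(x)
print("predicted trend probabilities (up, down, same):", p)
```

Whether one such network handles all five stocks, or each stock gets its own copy, is exactly what question 1 below asks.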
1) Team member A suggests that you should develop a single neural network that can handle all these stocks, but member B insists that you have to develop five networks (one for each stock). Who do you think is correct, and why?
2) During training, at a certain point, you notice that the error rate (of each training cycle) has been oscillating (i.e. it decreases in round n, increases in round n+1, decreases again in round n+2, and so on). What might be the reason for this phenomenon, and what should you do about it?
3) Assume you carefully trained the network or networks with all 20 attributes of the data objects for 20,000 cycles (i.e., epochs) and never observed any oscillation. The total error rate kept decreasing the whole time, down to a very, very small value, so everything looks great. However, when you tested the network(s) with a new testing data set (or new testing data sets), the outcomes of the neural network(s) were very bad (low accuracy). Can you think of at least two possible reasons, and how can they be avoided?
4) Why could using the maximum number of training epochs, or a maximum acceptable error in each epoch, as the termination condition for BP network training be problematic? What problem may such termination conditions cause? (One alternative termination condition is sketched after this list.)
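Questions 2 to 4 all revolve around how the training loop is driven: the size of the learning-rate step (too large a step can make the per-epoch error oscillate), the risk of overfitting when training runs for a fixed 20,000 epochs, and the choice of termination condition. The sketch below is a hypothetical training loop; the callback names, the data split, and the hyperparameter values are assumptions, not given in the assignment. It shows one common alternative to a fixed epoch count: stopping when the error on a held-out validation set stops improving.

```python
def train_with_early_stopping(update_weights, train_error, val_error,
                              learning_rate=0.01, max_epochs=20_000,
                              patience=50):
    """Sketch of a BP training loop with validation-based early stopping.

    update_weights(lr): performs one back-propagation pass over the training
                        set with the given learning rate (assumed callback).
    train_error():      current error on the training set.
    val_error():        current error on a held-out validation set.
    """
    best_val = float("inf")
    epochs_since_improvement = 0
    prev_train = float("inf")

    for epoch in range(max_epochs):
        update_weights(learning_rate)
        tr, va = train_error(), val_error()

        # If the training error oscillates (down, up, down, ...), the step
        # size is likely too large; shrinking the learning rate is one common fix.
        if tr > prev_train:
            learning_rate *= 0.5
        prev_train = tr

        # Stop when the validation error has not improved for `patience`
        # consecutive epochs, rather than after a fixed number of epochs
        # or a tiny training error -- a guard against overfitting.
        if va < best_val:
            best_val = va
            epochs_since_improvement = 0
        else:
            epochs_since_improvement += 1
            if epochs_since_improvement >= patience:
                print(f"early stop at epoch {epoch}: validation error plateaued")
                break
```

Keeping part of the historical data aside for validation and stopping on the validation error is one standard guard against the problems questions 3 and 4 point at, and shrinking the learning rate when the training error bounces back up is a common response to the oscillation described in question 2.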
