Artificial Neural Networks
Brian Talecki
CSC 8520 Villanova University
Artificial Intelligence

ANN - Artificial Neural Network
A set of algebraic equations and functions which determine the best output for a given set of inputs. An artificial neural network is modeled on a very simplified version of the human neurons which make up the human nervous system. Although the brain operates at one millionth the speed of modern computers, it functions faster than computers because of the parallel processing structure of the nervous system.

Human Nerve Cell
(picture from: G5AIAI Introduction to AI by Graham Kendall, www.cs.nott.ac.uk/gxk/courses/g5aiai)
At the synapse the nerve cell releases chemical compounds called neurotransmitters, which excite or inhibit a chemical/electrical discharge in the neighboring nerve cells. The summation of the responses of the adjacent neurons elicits the appropriate response in the neuron.

Brief History of ANN
- McCulloch and Pitts (1943) designed the first neural network.
- Hebb (1949) developed the first learning rule: if two neurons are active at the same time, the strength of the connection between them should be increased.
- Rosenblatt (1958) introduced the concept of a perceptron, which performed pattern recognition.
- Widrow and Hoff (1960) introduced the ADALINE (ADAptive Linear Element). Its training rule was based on the Least-Mean-Squares learning rule, which minimizes the error between the computed output and the desired output.
- Minsky and Papert (1969) showed that the perceptron could only recognize classes separated by linear boundaries. "Neural Net Winter"
- Kohonen and Anderson independently developed neural networks that acted like memories.
- Werbos (1974) developed the concept of back propagation of an error to train the weights of a neural network.
- McClelland and Rumelhart (1986) published the paper on the back propagation algorithm. "Rebirth of neural networks"
- Today - they are everywhere a decision can be made.
Source: G5AIAI - Introduction to Artificial Intelligence, Graham Kendall
Basic Neural Network
- Inputs: normally a vector of measured parameters
- Bias b: may or may not be added
- f(): transfer or activation function
- Output = f(Wᵀp + b)
(diagram: the input vector p is weighted by W, summed with the bias b, and passed through f() to produce the output)

Activation Functions
Source: Supervised Neural Network Introduction, CISC 873 Data Mining, Yabin Meng

Log Sigmoidal Function
Source: Artificial Neural Networks, Colin P. Fahey

Hard Limit Function
(figure: the hard limit function plotted on x-y axes, with output levels 1.0 and -1.0 marked)

Log Sigmoid and Derivative
Source: The Scientist and Engineer's Guide to Digital Signal Processing by Steven Smith

Derivative of the Log Sigmoidal Function
s(x) = (1 + e^{-x})^{-1}
s'(x) = -(1 + e^{-x})^{-2} · (-e^{-x})
      = e^{-x} / (1 + e^{-x})^2
      = [e^{-x} / (1 + e^{-x})] · [1 / (1 + e^{-x})]
      = [(1 + e^{-x} - 1) / (1 + e^{-x})] · [1 / (1 + e^{-x})]
      = [1 - 1/(1 + e^{-x})] · [1 / (1 + e^{-x})]
s'(x) = (1 - s(x)) · s(x)

The derivative is important for the back error propagation algorithm used to train multilayer neural networks.
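A quick numerical check of this identity, as a minimal Python sketch: the function names are ours, and the central-difference comparison is only illustrative.

    import math

    def logsig(x):
        """Log sigmoid activation: s(x) = 1 / (1 + e^-x)."""
        return 1.0 / (1.0 + math.exp(-x))

    def logsig_deriv(x):
        """Derivative via the identity above: s'(x) = (1 - s(x)) * s(x)."""
        s = logsig(x)
        return (1.0 - s) * s

    # Compare the closed form against a numerical (central-difference) derivative.
    h = 1e-6
    for x in (-2.0, 0.0, 1.5, 5.6):
        numeric = (logsig(x + h) - logsig(x - h)) / (2 * h)
        print(f"x={x:5.2f}  identity={logsig_deriv(x):.6f}  numeric={numeric:.6f}")

The two columns agree to the precision of the finite difference, which is why backpropagation can reuse the forward-pass value s(x) instead of re-evaluating the exponential.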
Example: Single Neuron
Given: W = 1.3, p = 2.0, b = 3.0
Wp + b = 1.3(2.0) + 3.0 = 5.6
Linear:        f(5.6) = 5.6
Hard limit:    f(5.6) = 1.0
Log sigmoidal: f(5.6) = 1/(1 + exp(-5.6)) = 1/(1 + 0.0037) = 0.9963
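The same worked example in Python, as a sketch; the hard limit convention assumed here (output 1 when the net input is >= 0) matches the one the later slides use.

    import math

    def linear(n):
        return n

    def hardlim(n):
        # Assumed convention: 1 when the net input is >= 0, else 0.
        return 1.0 if n >= 0 else 0.0

    def logsig(n):
        return 1.0 / (1.0 + math.exp(-n))

    W, p, b = 1.3, 2.0, 3.0
    n = W * p + b                     # net input: 1.3*2.0 + 3.0 = 5.6
    for f in (linear, hardlim, logsig):
        print(f.__name__, f(n))       # 5.6, 1.0, 0.9963...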
Simple Neural Network
One neuron with a linear activation function = a straight line. Recall the equation of a straight line:
y = mx + b
m is the slope (the weight), b is the y-intercept (the bias).

Decision Boundary
(figure: in the (p1, p2) plane, the line M·p1 + b = p2 separates the "Good" points from the "Bad" points; M·p1 + b > p2 on one side of the line, M·p1 + b < p2 on the other)

Perceptron Learning
Extend our simple perceptron to two inputs and a hard limit activation function:
o = f(Wᵀp + b)
W is the weight matrix, p is the input vector, o is our scalar output.
(diagram: inputs p1 and p2, weighted by W1 and W2, are summed with the bias and passed through the hard limit function f() to produce the output o)
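The half-plane test behind the decision boundary slide, as a small sketch. The slope m = 1.0, intercept b = 0.5, and the good/bad labeling of the two sides are illustrative assumptions, not values from the slides.

    # A line m*p1 + b = p2 splits the plane into two half-planes.
    m, b = 1.0, 0.5

    def side(p1, p2):
        """Report which side of the decision boundary a point falls on."""
        return "good" if m * p1 + b > p2 else "bad"

    print(side(1.0, 0.2))   # below the line -> "good"
    print(side(0.0, 2.0))   # above the line -> "bad"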
Rules of Matrix Math
Addition/Subtraction:
[1 2 3]    [9 8 7]   [10 10 10]
[4 5 6] ± [6 5 4] = [10 10 10]
[7 8 9]    [3 2 1]   [10 10 10]
Multiplication by a scalar:
a · [1 2; 3 4] = [a 2a; 3a 4a]
Transpose:
[1; 2]ᵀ = [1 2]
Matrix multiplication:
[2 4] · [5; 2] = 18 (inner product),  [5; 2] · [2 4] = [10 20; 4 8] (outer product)
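The same rules checked with numpy (assumed available), as a short sketch:

    import numpy as np

    A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
    B = np.array([[9, 8, 7], [6, 5, 4], [3, 2, 1]])
    print(A + B)                            # every entry is 10

    print(2 * np.array([[1, 2], [3, 4]]))   # scalar multiple: [[2 4] [6 8]]

    v = np.array([[1], [2]])
    print(v.T)                              # transpose of a column vector: [[1 2]]

    row = np.array([[2, 4]])
    col = np.array([[5], [2]])
    print(row @ col)                        # inner product: [[18]]
    print(col @ row)                        # outer product: [[10 20] [4 8]]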
14、r the AND Function,q1 = 0 , o1 = 00q2 = 1 , o2 = 00q3 = 0 , o3 = 01q4 = 1 , o4 = 11,Truth Table P1 P2 O,0 0 00 1 01 0 01 1 1,Weight Vector and the Decision Boundary,W = 1.01.0,Magnitude and Direction,Decision Boundary is the line where W p = b or W p b = 0,T,T,W p b,W p b,T,T,As we adjust the weight
15、s and biases of the neural network, we change the magnitude and direction of the weight vector or the slope and intercept of the decision boundary,Perceptron Learning Rule,Adjusting the weights of the Perceptron Perceptron Error : Difference between the desired and derived outputs. e = Desired Deriv
When e = 1:  Wnew = Wold + p
When e = -1: Wnew = Wold - p
When e = 0:  Wnew = Wold
Simplifying:
Wnew = Wold + α·e·p
bnew = bold + α·e
where α is the learning rate (α = 1 for the perceptron).

AND Function Example
Start with W1 = 1, W2 = 1, and b = -1. Each row below computes a = hardlim(Wᵀp + b) and the error e = t - a, where t is the target output (N/C = no change):
 W      p        b    Wᵀp+b   t   a    e   update
[1 1]  [0 0]ᵀ   -1     -1     0   0    0   N/C
[1 1]  [0 1]ᵀ   -1      0     0   1   -1   W = [1 0], b = -2
[1 0]  [1 0]ᵀ   -2     -1     0   0    0   N/C
[1 0]  [1 1]ᵀ   -2     -1     1   0    1   W = [2 1], b = -1
[2 1]  [0 0]ᵀ   -1     -1     0   0    0   N/C
[2 1]  [0 1]ᵀ   -1      0     0   1   -1   W = [2 0], b = -2
[2 0]  [1 0]ᵀ   -2      0     0   1   -1   W = [1 0], b = -3
[1 0]  [1 1]ᵀ   -3     -2     1   0    1   W = [2 1], b = -2
[2 1]  [0 0]ᵀ   -2     -2     0   0    0   N/C
[2 1]  [0 1]ᵀ   -2     -1     0   0    0   N/C
[2 1]  [1 0]ᵀ   -2      0     0   1   -1   W = [1 1], b = -3
[1 1]  [1 1]ᵀ   -3     -1     1   0    1   W = [2 2], b = -2
[2 2]  [0 0]ᵀ   -2     -2     0   0    0   N/C
[2 2]  [0 1]ᵀ   -2      0     0   1   -1   W = [2 1], b = -3
[2 1]  [1 0]ᵀ   -3     -1     0   0    0   N/C
[2 1]  [1 1]ᵀ   -3      0     1   1    0   N/C
[2 1]  [0 0]ᵀ   -3     -3     0   0    0   N/C
[2 1]  [0 1]ᵀ   -3     -2     0   0    0   N/C
Done!

(diagram: the trained neuron, with weights 2 and 1 on inputs p1 and p2, bias -3, and hardlim() activation: output = hardlim(2·p1 + 1·p2 - 3))
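A minimal sketch of the training loop this trace follows, assuming hardlim(n) = 1 for n >= 0 (the convention the trace itself uses, since Wᵀp + b = 0 produces output 1). Starting from the same initial weights, it reproduces the slide's result W = [2, 1], b = -3.

    data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # AND
    W, b = [1.0, 1.0], -1.0

    def hardlim(n):
        return 1 if n >= 0 else 0

    for epoch in range(10):
        errors = 0
        for p, t in data:
            a = hardlim(W[0] * p[0] + W[1] * p[1] + b)
            e = t - a                                  # perceptron error
            if e != 0:
                W = [W[0] + e * p[0], W[1] + e * p[1]]  # W_new = W_old + e*p
                b = b + e                               # b_new = b_old + e
                errors += 1
        if errors == 0:          # converged: a full pass with no updates
            break

    print(W, b)                  # [2.0, 1.0] -3.0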
XOR Function
Truth Table: Z = (X and not Y) or (not X and Y)
X Y | Z
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 0
(figure: the four input points plotted in the (x, y) plane with their outputs z)
No single decision boundary can separate the favorable and unfavorable outcomes. We will need a more complicated neural net to realize this function.

Circuit Diagram

XOR Function - Multilayer Perceptron
(diagram: inputs x and y feed two hidden neurons f1() through weights W1-W4 with biases b11 and b12; the hidden outputs feed the output neuron f() through weights W5 and W6 with bias b2)
Z = f(W5·f1(W1·x + W4·y + b11) + W6·f1(W2·x + W3·y + b12) + b2)

The weights of the neural net are independent of each other, so we can compute the partial derivatives of Z with respect to each weight of the network, i.e. ∂Z/∂W1, ∂Z/∂W2, ∂Z/∂W3, ∂Z/∂W4, ∂Z/∂W5, ∂Z/∂W6.

Back Propagation Diagram

Neural Networks and Logistic Regression
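A sketch of this two-hidden-neuron XOR network with hard limit activations in both layers. The particular weights and biases below are one hand-chosen solution, assumed for illustration rather than taken from the slides: the first hidden unit computes OR, the second computes AND, and the output fires when OR is true but AND is not.

    def hardlim(n):
        return 1 if n >= 0 else 0

    def xor_net(x, y):
        h1 = hardlim(1.0 * x + 1.0 * y - 0.5)      # x OR y
        h2 = hardlim(1.0 * x + 1.0 * y - 1.5)      # x AND y
        return hardlim(1.0 * h1 - 2.0 * h2 - 0.5)  # OR and not AND

    for x, y in ((0, 0), (0, 1), (1, 0), (1, 1)):
        print(x, y, xor_net(x, y))   # prints 0, 1, 1, 0

Training such a network from data is where the partial derivatives above come in: backpropagation needs a differentiable activation (such as the log sigmoid), which is why the hard limit weights here are set by hand.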
