Radial Basis Function, RBF network

Radial basis function

A radial basis function is one for which each basis function depends only on the radial distance (typically Euclidean) from a centre $\mu_j$, so that $\phi_j(x)=h(\lVert x-\mu_j\rVert)$.

$f(x)$ is expressed as a linear combination of radial basis functions, one centred on every data point: $f(x)=\sum_{n=1}^{N} w_n\, h(\lVert x-x_n\rVert)$.

The values of the coefficients $\{w_n\}$ are found by least squares; since there is one coefficient per data point, the result fits the training targets exactly.
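
As a concrete illustration, here is a minimal sketch of this exact-interpolation view in NumPy, assuming a Gaussian basis $h(r)=\exp(-r^2/2s^2)$, a toy 1-D dataset and a hand-picked width — all of these choices are illustrative assumptions, not taken from the notes above.

```python
# Minimal RBF interpolation sketch (assumed Gaussian basis, toy 1-D data).
import numpy as np

def gaussian_rbf(r, width=1.0):
    """h(r) = exp(-r^2 / (2*width^2)), one common choice of radial basis."""
    return np.exp(-r**2 / (2.0 * width**2))

# Toy training data: N points, one basis function centred on each point.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)                                # inputs (centres mu_n = x_n)
t = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(10)    # noisy targets

# Design matrix Phi[n, j] = h(|x_n - mu_j|).
Phi = gaussian_rbf(np.abs(x[:, None] - x[None, :]), width=0.2)

# Coefficients {w_n} by least squares (Phi is square here, so this is exact interpolation).
w, *_ = np.linalg.lstsq(Phi, t, rcond=None)

def f(x_new):
    """Evaluate f(x) = sum_n w_n h(|x - x_n|)."""
    return gaussian_rbf(np.abs(np.atleast_1d(x_new)[:, None] - x[None, :]), width=0.2) @ w

print(f(np.array([0.05, 0.5, 0.95])))
```

Because the design matrix is square, the fit passes through every training point, which tends to over-fit noisy data; this is part of the motivation for the RBF network, where fewer basis functions than data points are used.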

RBF Network

(to be continued)

Question: What are the similarities and differences between MLPs, RBF networks and SVMs?

  • All three techniques are based on the perceptron. In MLPs the earlier layers are perceptrons, in RBF networks they are radial basis functions, and in SVMs they are the features corresponding to the eigenfunctions of the kernel.
  • All three can be used for regression and classification.
  • MLPs are trained using back-propagation of errors. The solution found is not unique. Complexity depends on the number of hidden nodes. They are liable to over-fit the training data, so ad hoc methods such as early stopping are often used to prevent over-fitting. They can have many output nodes.
  • RBF networks typically use unsupervised learning to choose the centres for the hidden layer. The labelled data are then used to train the final layer (a perceptron). Training is fast. They can have many output nodes. A regulariser is often used on the output layer. The solution found is unique (see the sketch after this list).
  • SVMs use a kernel function to perform a mapping into a very high-dimensional feature space. An optimally stable perceptron is used in the feature space, which controls the capacity of the learning machine and reduces the problem of over-fitting. The learning algorithm uses quadratic optimisation, and the computational complexity grows as the cube of the number of training patterns, so SVMs can become impractical for very large datasets. The solution found is unique.
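
The following is a rough sketch of the RBF-network recipe described in the bullet above: an unsupervised choice of centres (k-means here) followed by a regularised least-squares fit of the linear output layer. The Gaussian basis, the toy dataset, the number of centres, the width and the ridge penalty are all illustrative assumptions.

```python
# RBF network sketch: unsupervised centres + regularised linear output layer.
import numpy as np
from sklearn.cluster import KMeans

def rbf_design(X, centres, width):
    """Phi[n, j] = exp(-||x_n - mu_j||^2 / (2*width^2))."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width**2))

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))               # toy 2-D inputs
t = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])   # toy regression target

# 1) Unsupervised learning for the hidden layer: choose M centres by k-means.
M = 20
centres = KMeans(n_clusters=M, n_init=10, random_state=0).fit(X).cluster_centers_

# 2) Supervised learning for the output layer: ridge-regularised least squares.
width = 0.5
lam = 1e-3
Phi = rbf_design(X, centres, width)
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(M), Phi.T @ t)

def predict(X_new):
    return rbf_design(np.atleast_2d(X_new), centres, width) @ w

print(predict(np.array([[0.3, -0.2], [0.0, 0.0]])))
```

Because the output layer is linear in the weights, the regularised least-squares problem has a single global optimum, which is why the solution of an RBF network (for fixed centres) is unique.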

Reference

  1. ML学习笔记之——径向基网络 (ML study notes: radial basis networks)
  2. Bishop, Pattern Recognition and Machine Learning, Section 6.3
  3. Exam question from COMP3008, 2009–2010, Q4