From 39bd65578150edf38a2dbd39be74afb918276eb3 Mon Sep 17 00:00:00 2001
From: hellboy
Date: Sun, 7 Mar 2010 00:41:03 +0100
Subject: [PATCH] gostyle.tex: nn, training

---
 tex/gostyle.tex | 37 ++++++++++++++++++++++++++++++-------
 1 file changed, 30 insertions(+), 7 deletions(-)

diff --git a/tex/gostyle.tex b/tex/gostyle.tex
index 719cb11..2332ade 100644
--- a/tex/gostyle.tex
+++ b/tex/gostyle.tex
@@ -540,10 +540,13 @@ During our research, exponentially decreasing weight has proven to be sufficient
 
 As an alternative to the k-Nearest Neighbors algorithm (section \ref{knn}),
 we have used a classifier based on feed-forward artificial neural networks \cite{TODO}.
-These are known for their ability to generalize and find correlations and patterns between
-input and output data.
+Neural networks (NN) are known for their ability to generalize and to find correlations
+and patterns between input and output data. A neural network is an adaptive system and
+must undergo training before it can be used reasonably. We use the information about the
+\emph{reference players} (for whom both the \emph{pattern vectors} and the
+\emph{style vectors} are known) as the training data.
 
-\subsubsection{A brief introduction to NN}?TODO rename to activation??
+\subsubsection{Computation and activation of the NN}
 Technically, a neural network is a network of interconnected computational units called neurons.
 A feedforward neural network has a layered topology; it usually has one \emph{input layer}, one
 \emph{output layer} and an arbitrary number of \emph{hidden layers} in between.
@@ -564,12 +567,31 @@ called \emph{activation function} and its purpose is to bound outputs of neurons
 function is a sigmoid function.\footnote{The sigmoid function is a special case of the logistic
 function; it is defined by the formula $\sigma(x)=\frac{1}{1+e^{-(rx+k)}}$; its parameters
 control the growth rate ($r$) and the x-position ($k$).}
-
 \subsubsection{Training}
-TODO: arch, training, ...
+The training of a feedforward neural network usually involves some modification of the
+supervised backpropagation learning algorithm~\cite{TODO}.
+We use a first-order optimization algorithm called RPROP~\cite{TODO}.
+
+Because the \emph{reference set} is usually not very large, we have devised a simple
+method for its extension. This enhancement is based upon adding random linear
+combinations of the \emph{style and pattern vectors} to the training set.
+
+As suggested above, the training set consists of pairs of input vectors
+(\emph{pattern vectors}) and desired output vectors (\emph{style vectors}).
+The training set $T$ is then enlarged by some linear combinations:
+\begin{equation}
+T_{base} = \{(\vec p_r, \vec s_r) \mid r \in R\}
+\end{equation}
+\begin{equation}
+T_{ext} = \{(\vec p, \vec s) \mid \exists D \subseteq R : \vec p = \sum_{d \in D}{g_d \vec p_d},\; \vec s = \sum_{d \in D}{g_d \vec s_d}\}
+\end{equation}
+TODO: incorporate $g_d$ into the definition?
+where $g_d, d \in D$ are random coefficients such that $\sum_{d \in D}{g_d} = 1$.
+The training set is then obtained as:
+\begin{equation}
+T = T_{base} \cup \mathit{SomeFiniteSubset}(T_{ext})
+\end{equation}
+
+The network is trained by repeatedly presenting the pairs from $T$ and adjusting the
+network weights until the error on the training set is sufficiently small.
 
-The training of the feedforward neural network usually involves some modification of Backpropagation learning algorithm.
-We use RPROP
 
 \subsubsection{Architecture details}
@@ -580,6 +602,7 @@ We use RPROP
 
 We have implemented the data mining methods as an open-source project ``gostyle'' (TODO).
 TODO.
 PCA: In our implementation, we use a~library called MDP \cite{MDP}.
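+
+To make the neuron computation of the ``Computation and activation'' subsection
+concrete, here is a minimal Python sketch of a single neuron using the sigmoid
+activation defined above (the function names are illustrative, not part of gostyle):
+\begin{verbatim}
+import math
+
+def sigmoid(x, r=1.0, k=0.0):
+    # Logistic activation: bounds any real input into (0, 1).
+    return 1.0 / (1.0 + math.exp(-(r * x + k)))
+
+def neuron_output(inputs, weights, r=1.0, k=0.0):
+    # A neuron computes a weighted sum of its inputs and
+    # passes it through the activation function.
+    return sigmoid(sum(w * x for w, x in zip(weights, inputs)), r, k)
+\end{verbatim}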
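+
+The training-set extension described in the Training subsection could be
+implemented as sketched below. Drawing the $g_d$ as normalized uniform variates
+is just one simple way to satisfy $\sum_{d \in D}{g_d} = 1$; the helper name and
+parameters are hypothetical:
+\begin{verbatim}
+import random
+
+def extend_training_set(base, n_extra, subset_size=3):
+    # base: list of (pattern_vector, style_vector) pairs for the
+    # reference players (the set R); vectors are plain lists.
+    extended = list(base)
+    for _ in range(n_extra):
+        subset = random.sample(base, subset_size)  # D, a subset of R
+        g = [random.random() for _ in range(subset_size)]
+        total = sum(g)
+        g = [x / total for x in g]                 # now sum(g) == 1
+        p = [sum(gd * pv[i] for gd, (pv, _) in zip(g, subset))
+             for i in range(len(subset[0][0]))]
+        s = [sum(gd * sv[i] for gd, (_, sv) in zip(g, subset))
+             for i in range(len(subset[0][1]))]
+        extended.append((p, s))
+    return extended
+\end{verbatim}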
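+
+For the NN classifier itself, the FANN library (see the TODO below) provides
+RPROP training out of the box. A hedged sketch, assuming FANN's Python bindings
+(\texttt{pyfann}); the layer sizes, file name and input vector are illustrative
+placeholders, not the actual gostyle configuration:
+\begin{verbatim}
+from pyfann import libfann
+
+ann = libfann.neural_net()
+# 400 inputs, one hidden layer of 20 neurons, 4 style outputs.
+ann.create_standard_array([400, 20, 4])
+ann.set_training_algorithm(libfann.TRAIN_RPROP)
+ann.set_activation_function_hidden(libfann.SIGMOID)
+ann.set_activation_function_output(libfann.SIGMOID)
+
+data = libfann.training_data()
+data.read_train_from_file("reference_players.data")
+# Arguments: max_epochs, epochs_between_reports, desired_error.
+ann.train_on_data(data, 1000, 100, 0.001)
+
+pattern_vector = [0.0] * 400   # placeholder input
+style_vector = ann.run(pattern_vector)
+\end{verbatim}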
+TODO libfann
 
 \section{Style Components Analysis}
-- 
2.11.4.GIT