From 619717e18e02dbeaf63fa84485b1acd3e6cd2fca Mon Sep 17 00:00:00 2001
From: Petr Baudis
Date: Sun, 14 Mar 2010 03:20:38 +0100
Subject: [PATCH] tex: Typeset nits

---
 tex/gostyle.tex | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/tex/gostyle.tex b/tex/gostyle.tex
index c5d5028..e46b041 100644
--- a/tex/gostyle.tex
+++ b/tex/gostyle.tex
@@ -685,9 +685,8 @@ where $J$ is the previous layer, while $y_j$ is the activation for neurons from
 Function $f()$ is a~so-called \emph{activation function}
 and its purpose is to bound the outputs of neurons. A typical
 example of an activation function is the sigmoid function.%
-\footnote{A special case of the logistic function, defined by the formula
-$\sigma(x)=\frac{1}{1+e^{-(rx+k)}}$; parameters control the growth rate ($r$)
-and the x-position ($k$).}
+\footnote{A special case of the logistic function $\sigma(x)=(1+e^{-(rx+k)})^{-1}$.
+Parameters control the growth rate $r$ and the x-position $k$.}
 
 \subsubsection{Training}
 Training of the feed-forward neural network usually involves some
@@ -752,15 +751,11 @@
 When training the classifier for $\vec O$ element $o_i$ of class $c = \lfloor o_i/k \rfloor$,
 we assume the $\vec R$ elements are normally distributed and
 feed the classifier information in the form
-
 $$ \vec R \mid c $$
-
 estimating the mean $\mu_c$ and standard deviation $\sigma_c$
 of each $\vec R$ element for each encountered $c$.
 Then, we can query the built probability model on
-
 $$ \max_c P(c \mid \vec R) $$
-
 obtaining the most probable class $i$ for an arbitrary $\vec R$.
 Each probability is obtained using the normal distribution formula:
 $$ P(c \mid x) = {1\over \sqrt{2\pi\sigma_c^2}}\exp{-(x-\mu_c)^2\over2\sigma_c^2} $$
-- 
2.11.4.GIT
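The classifier described in the diff context above — per-class normal estimates for each $\vec R$ element, then picking the class maximizing $P(c \mid \vec R)$ — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names, the independence of the $\vec R$ elements, the uniform class prior, and the variance floor are my assumptions:

```python
import math

def gaussian_pdf(x, mu, sigma):
    # Normal density: 1/sqrt(2*pi*sigma^2) * exp(-(x-mu)^2 / (2*sigma^2)),
    # matching the formula quoted in the patched section.
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

def fit(samples):
    # Estimate mean mu_c and standard deviation sigma_c of each R element
    # for each encountered class c.  `samples` maps c -> list of R vectors.
    model = {}
    for c, vectors in samples.items():
        n, dims = len(vectors), len(vectors[0])
        means = [sum(v[d] for v in vectors) / n for d in range(dims)]
        # Variance floor (1e-9) is an assumption, to avoid division by zero.
        sds = [max(1e-9, math.sqrt(sum((v[d] - means[d]) ** 2 for v in vectors) / n))
               for d in range(dims)]
        model[c] = (means, sds)
    return model

def classify(model, r):
    # Return argmax_c P(c | R); with a uniform prior this reduces to
    # maximizing the product of per-element class-conditional densities.
    def score(c):
        means, sds = model[c]
        return math.prod(gaussian_pdf(x, m, s) for x, m, s in zip(r, means, sds))
    return max(model, key=score)
```

For example, fitting on two well-separated classes and querying a vector near one of them returns that class's label.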