\documentclass[12pt,a4paper,notitlepage]{article}

\usepackage[a4paper,vmargin={20mm,20mm},hmargin={20mm,20mm}]{geometry}

%% Character encoding: usually latin2, cp1250 or utf8:
\usepackage[utf8]{inputenc}

%% Other packages
\usepackage[titletoc]{appendix}
\usepackage{graphicx}
\usepackage{wrapfig}
\usepackage{color}
\usepackage[multiple]{footmisc}
\usepackage{amsthm}
\usepackage{amsmath}
\usepackage{threeparttable}
\usepackage{longtable}
\usepackage{tabularx}
\usepackage{amsfonts}
\usepackage{caption}
\usepackage[lined, ruled, boxed, linesnumbered]{algorithm2e}

\usepackage[round]{natbib} % bibliography typesetting

\usepackage{psfrag}

\usepackage{psgo,array}
\usepackage{url} % URL typesetting

\usepackage[ps2pdf,unicode]{hyperref} % Must come after all other packages
\usepackage{breakurl}

%\hypersetup{pdftitle=Meta-learning methods for analyzing Go playing trends}
%\hypersetup{pdfauthor=Josef Moudřík}

\begin{document}
% paper title
% can use linebreaks \\ within to get better formatting as desired
%\title{On Move Pattern Trends\\in Large Go Games Corpus}
\title{Evaluating Go Game Records\\for Prediction of Player Attributes}

% use \thanks{} to gain access to the first footnote area
% a separate \thanks must be used for each paragraph as LaTeX2e's \thanks
% was not built to handle multiple paragraphs
\author{Josef~Moud\v{r}\'{i}k%
\thanks{J. Moud\v{r}\'{i}k is a student at the Faculty of Mathematics and Physics, Charles University, Prague, CZ.},~Petr~Baudi\v{s}%
\thanks{P. Baudi\v{s} is an independent researcher. Part of his work
in this area has been supported by the SUSE Labs and the Department
of Applied Mathematics, Faculty of Mathematics and Physics, Charles University.}}
\maketitle
\begin{abstract}
We propose a~way of extracting and aggregating per-move evaluations from sets of Go game records.
The evaluations capture different aspects of the games, such as the patterns played
or statistics of sente/gote sequences (among others); using machine learning
algorithms, they can be used to predict arbitrary relevant target variables.
We apply this methodology to predict the strength and playing style (e.g.
territoriality or aggressivity) of a player and make our predictor
available as an online tool, a part of the GoStyle project.
%% No, there is no space for this in the article, since it is only supposed to have 8 pages
% moreover, I would like to publish these things on their own, in more detail,
%By inspecting the dependencies between the evaluations and the target variable,
%we are able to tell which patterns are bad or good (in case of strength as the
%target variable), or which moves e.g. constitute the territorial style of play.

We propose a number of possible applications, including seeding real-world ranks
of internet players, aiding in Go study and the tuning of Go-playing programs, or
contributing to the Go-theoretical discussion on the scope of ``playing style''.
\end{abstract}
\section{Introduction}
The field of Computer Go usually focuses on the problem
of creating a~program that plays the game by finding the best move from a~given
board position \citep{GellySilver2008}. We instead focus on analyzing existing game
records with the aim of helping humans to play and understand the game better.
Go is a~two-player full-information board game played
on a~square grid (usually $19\times19$ lines) with black and white
stones; the goal of the game is to surround the most territory and
capture enemy stones. We assume basic familiarity with the game.
Since the game enjoys worldwide popularity, large collections
of Go game records have been compiled, covering both amateur and professional games,
e.g. \citep{KGS,GoGoD}.
So far, not much has been done to analyze these records using computers.
There are programs that serve as tools for studying the opening phase of the game
by giving simple statistics of next-move candidates based on professional
games~\citep{Kombilo,MoyoGo}.
Professional games have also been used in computer Go;
patterns from professional games
are used as a heuristic to improve the tree
search, e.g.~\citep{PatElo}. Apart from these, we are not aware of
any principally different usage.
Following up on our initial research \citep{GoStyleArxiv},
we present a deeper approach. We extract different
kinds of information from the records to create a complex
evaluation of the game sample. The \emph{evaluation} is a vector
composed of independent features -- each feature
captures a different aspect of the sample. For example,
we use statistics of the most frequently
played local patterns, statistics of high and low plays
in different game stages, etc.
Using machine learning, the evaluation of the sample
can be used to predict relevant variables. In this work,
for instance,
the sample consists of the games of one player,
and we predict his strength or playing style.
This paper is organized as follows. Section~\ref{sec:feat}
presents the features comprising the evaluation.
Section~\ref{sec:mach} gives details about the machine
learning method we have used.
In Section~\ref{sec:expe} we describe our
datasets -- for prediction of strength and style -- and
show how precise the resulting predictions are.
Section~\ref{sec:disc} discusses applications and future work.
\section{Feature Extraction}
\label{sec:feat}
This section presents the methods for extracting the evaluation
vector (we call it $ev$) from a set of games. Because we aggregate
data by player, each game in the set is accompanied by the color
that specifies our player of interest.
The sample is therefore regarded as a \emph{set
of colored games}, $GC = \{ (game_1, color_1), \ldots\}$.

The evaluation vector $ev$ is composed by concatenating several
sub-vectors we call \emph{features} -- examples include the
aforementioned local patterns or statistics of sente and gote
sequences. These are detailed in the rest of this section.
Some details are omitted for brevity;
please see \citep{Moudrik13} for an extended description.
\subsection{Raw Game Processing}
The games\footnote{
We use the standard \emph{.sgf} file format as input, \citep{SGF}.
} are processed by the Pachi Go
Engine~\citep{Pachi}, which exports a variety of analytical data
about each move in the game.
For each move,
Pachi outputs a list of key-value pairs:

\begin{itemize}
\item \textbf{atari flag} --- whether the move put enemy stones in atari,
\item \textbf{atari escape flag} --- whether the move saved own stones from atari,
\item \textbf{capture} --- number of enemy stones the move captured,
\item \textbf{contiguity to last move} --- the gridcular
distance\footnotemark[2] from the last move,
\item \textbf{board edge distance} --- the distance from
the nearest edge of the board,
\item \textbf{spatial pattern} --- configuration of stones around the played move.
\end{itemize}
We use this information to compute the higher-level features given below.
The spatial pattern records the positions of stones around the current move up to
a certain distance.\footnote{
\label{grid}
The distance is given by the {\em gridcular} metric
$d(x,y) = |\delta x| + |\delta y| + \max(|\delta x|, |\delta y|)$, which produces
a circle-like structure on the square grid of the Go board \citep{SpatPat}.
Spatial patterns of sizes 2 to 6 are considered.
}
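The gridcular metric is straightforward to compute; a minimal sketch (not taken from the paper's codebase) for intersections given as $(x, y)$ coordinate pairs:

```python
def gridcular(a, b):
    """Gridcular distance d(x, y) = |dx| + |dy| + max(|dx|, |dy|);
    approximates a circle on the square grid of the Go board."""
    dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
    return dx + dy + max(dx, dy)
```

For example, diagonal neighbors are at distance 3, while points two lines apart along one axis are at distance 4.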
\subsection{Patterns}
The first feature collects statistics of the $N = 400$ most frequently occurring
spatial patterns (together with both atari flags). The list of the $N$ most frequently
played patterns is computed beforehand from the whole database of games. The patterns
are normalized to be black to play and are invariant under rotation and mirroring.

Given a set of colored games $GC$, we then count how many times each of the $N$
patterns was played -- thus obtaining a vector $\vec c$ of counts ($|\vec c| = 400$).
With a simple occurrence count, however, the counts $c_i$ increase proportionally to
the number of games in $GC$. To maintain invariance with respect to the number of games in the sample,
a normalization is needed. We do this by dividing $\vec c$ by $|GC|$, though other schemes
are possible, see \citep{Moudrik13}.
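The counting and normalization step can be sketched as follows; the per-move dictionary representation (with `color` and a canonical `pattern` key) is a hypothetical stand-in for Pachi's output, not the paper's actual data format:

```python
from collections import Counter

def pattern_feature(colored_games, top_patterns):
    """Count how often each frequent pattern was played by the player
    of interest, normalized by the number of games |GC|."""
    counts = Counter()
    for moves, color in colored_games:
        for mv in moves:
            # only count moves of the player of interest
            if mv["color"] == color and mv["pattern"] in top_patterns:
                counts[mv["pattern"]] += 1
    n = len(colored_games)
    return [counts[p] / n for p in top_patterns]
```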
\subsection{$\omega$-local Sente and Gote Sequences}
Because the concept of sente and gote is very important in real games, we devised
a statistic that tries to capture the distribution of sente and gote plays in the games
from the sample. Because deciding which moves are sente or gote can be hard even
for human players, we restricted ourselves to what we call $\omega$-local (sente
and gote) sequences. We say that a move is $\omega$-local (with respect
to the previous move) if its gridcular distance from the previous move
is smaller than a fixed number $\omega$; in this work, we used $\omega=10$.
The simplification rests on a clear assumption -- that the responses to
a sente move are always local.
Of course, this assumption might not always hold, but
the feature proves to be useful nonetheless.
We then partition each game into $\omega$-local sequences (that is, each move in the
sequence is $\omega$-local with respect to its immediate predecessor) and observe
whether the player who started the sequence is different from the player who ended it.
If so, the $\omega$-local sequence is said to be sente for the player who started it,
because he gets to play somewhere else first (tenuki). Similarly, if the player who
started the sequence had to respond last, we say that the sequence is gote for him.
Based on this partitioning, we can count the average number of sente and gote
sequences per game in the sample $GC$. These two numbers, along with their difference,
form the second feature.
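The partitioning rule above can be sketched as follows; the `(player, point)` move representation is a hypothetical simplification of the real input:

```python
def sente_gote_counts(moves, omega=10):
    """Partition a game into omega-local sequences and count, per player,
    the sequences that were sente (the other player answered last) and
    gote (the starter had to answer last) for the player who started them."""
    def gridcular(a, b):
        dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
        return dx + dy + max(dx, dy)

    sente = {"B": 0, "W": 0}
    gote = {"B": 0, "W": 0}
    seq_start = 0
    for i in range(1, len(moves) + 1):
        # the sequence continues while each move is omega-local w.r.t. the previous one
        local = i < len(moves) and gridcular(moves[i][1], moves[i - 1][1]) < omega
        if not local:  # sequence ends with move i-1
            starter, ender = moves[seq_start][0], moves[i - 1][0]
            if starter != ender:
                sente[starter] += 1  # starter got to tenuki first
            else:
                gote[starter] += 1   # starter had to respond last
            seq_start = i
    return sente, gote
```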
\subsection{Border Distance}
The third feature is a two-dimensional histogram counting the average number of moves
in the sample played low or high in different game stages. The original idea was to help
distinguish between territorial and influence-based moves in the opening.

The first dimension is specified by the move's border distance,
the second one by the number of the current move from the beginning of the game.
The granularity of each dimension is given by intervals dividing the domains.
We use the
$$ByDist = \{ \langle1, 2\rangle, \langle 3 \rangle, \langle4\rangle, \langle 5, \infty)\}$$
division for the border distance dimension
(distinguishing between the first 2 lines, the 3rd line of territory, the 4th line of influence, and
higher plays for the rest).
The move number division is given by
$$ByMoves = \{ \langle1, 10\rangle, \langle 11, 64\rangle, \langle 65,200\rangle, \langle 201, \infty)\}$$
with the motivation to (very roughly) distinguish
between the opening (say the first ten moves), early middle game (moves 11--64), middle game,
and endgame.
If we use the $ByMoves$ and $ByDist$ intervals to divide the domains, we obtain a histogram
with a total of $|ByMoves| \times |ByDist| = 16$ fields. For each move in the games of $GC$,
we increase the count in the
appropriate histogram field. In the end, the whole histogram is normalized
to establish invariance with respect to the number of games scanned by dividing the
histogram elements by $|GC|$. These 16 numbers form the third feature.
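The binning can be sketched as below; the per-move fields `no` (move number) and `dist` (border distance, counted from 1) are a hypothetical representation of the data Pachi exports:

```python
BY_DIST = [1, 3, 4, 5]       # lower bounds: lines 1-2, line 3, line 4, line 5 and higher
BY_MOVES = [1, 11, 65, 201]  # opening, early middle game, middle game, endgame

def bin_index(value, bounds):
    """Index of the last lower bound not exceeding `value`."""
    idx = 0
    for i, b in enumerate(bounds):
        if value >= b:
            idx = i
    return idx

def border_histogram(colored_games):
    """4x4 histogram over (game stage, border distance), normalized by |GC|."""
    hist = [[0.0] * len(BY_DIST) for _ in BY_MOVES]
    for moves, color in colored_games:
        for mv in moves:
            if mv["color"] == color:
                hist[bin_index(mv["no"], BY_MOVES)][bin_index(mv["dist"], BY_DIST)] += 1
    n = len(colored_games)
    return [[x / n for x in row] for row in hist]
```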
\subsection{Captured Stones}
Apart from the border distance feature, we also maintain a two-dimensional histogram
which counts the numbers of captured stones in different game stages. The motivation is
simple -- beginners especially tend to capture stones because ``they could'' instead of
because it is the ``best move''. For example, such a capture might
be a grave mistake in the opening.

As before, one of the dimensions is given by the intervals
$$ByMoves = \{ \langle1, 60\rangle, \langle 61, 240\rangle, \langle 241, \infty)\}$$
which try to specify the game stages (roughly: opening, middle game, endgame).\footnote{
The division into game stages is coarser than for the previous
feature because captures occur relatively infrequently. Finer graining
would require more data.
}
The second dimension has a fixed size of three bins. Alongside the number of stones captured
by the player of interest (the first bin), we also count the number of his
opponent's captures (the second bin) and the difference between the two numbers
(the third bin). Together, we obtain a histogram of $|ByMoves| \times 3 = 9$ elements.

Again, the colored games $GC$ are processed move by move, increasing
the counts of captured stones (or 0) in the appropriate field.
The 9 numbers (again normalized by dividing by $|GC|$) are the output of the fourth
feature.
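A sketch of this feature, under the same hypothetical per-move representation (`no`, `color`, and the `capture` count from Pachi's output):

```python
CAPTURE_STAGES = [1, 61, 241]  # move-number lower bounds: opening, middle game, endgame

def capture_feature(colored_games):
    """3x3 histogram: for each game stage, captures by the player of
    interest, captures by the opponent, and their difference; all
    normalized by |GC|."""
    hist = [[0.0, 0.0, 0.0] for _ in CAPTURE_STAGES]
    for moves, color in colored_games:
        for mv in moves:
            stage = 0
            for i, b in enumerate(CAPTURE_STAGES):
                if mv["no"] >= b:
                    stage = i
            own = mv["color"] == color
            hist[stage][0 if own else 1] += mv["capture"]
            hist[stage][2] += mv["capture"] if own else -mv["capture"]
    n = len(colored_games)
    return [[x / n for x in row] for row in hist]
```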
\subsection{Win/Loss Statistics}
Finally, we used a simple feature that keeps statistics of
wins and losses and whether they were by points or by resignation.\footnote{
We disregard forfeited, unfinished or jigo games in this feature
because the frequency of these events is so small that it would
require a very large dataset to utilize them reliably.
}
For example, many weak players continue playing games that are already lost
until the end, mainly because their counting is not very good (they do not
know there is no way to win), while professionals do not hesitate to resign
if they think that nothing can be done.

In the colored games of $GC$, we count how many times the player of interest:
\begin{itemize}
\item won by counting,
\item won by resignation,
\item lost by counting,
\item and lost by resignation.
\end{itemize}
Again, we divide these four numbers by $|GC|$ to maintain invariance with respect to the number of games
in $GC$. Furthermore, for the games won or lost by counting, we record the average
margin of the win or loss in points. These six numbers form the last feature.
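The six numbers can be computed as sketched below; the per-game tuple `(won, by_points, margin)` is a hypothetical summary of each game record, with `margin` meaningful only for counted results:

```python
def winloss_feature(colored_games):
    """Six numbers: per-game rates of win/loss by counting/resignation,
    plus the average point margins of counted wins and counted losses."""
    win_cnt = win_res = loss_cnt = loss_res = 0
    win_margins, loss_margins = [], []
    for won, by_points, margin in colored_games:
        if won and by_points:
            win_cnt += 1
            win_margins.append(margin)
        elif won:
            win_res += 1
        elif by_points:
            loss_cnt += 1
            loss_margins.append(margin)
        else:
            loss_res += 1
    n = len(colored_games)
    avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return [win_cnt / n, win_res / n, loss_cnt / n, loss_res / n,
            avg(win_margins), avg(loss_margins)]
```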
\section{Machine Learning}
\label{sec:mach}
So far, we have considered how to turn a set of colored games $GC$ into
an evaluation vector. Now, we are going to show how to utilize the evaluation.
To predict various player attributes, we start with an input dataset $D$ consisting
of pairs $D=\{ (GC_i, y_i), \dots\}$, where $GC_i$
corresponds to the set of colored games of the $i$-th player and $y_i$ is the
target attribute. The $y_i$ might be fairly arbitrary, as long as it has
\emph{some} relation to $GC_i$. For example, $y_i$ might be the $i$-th player's strength.

Now, let us denote the evaluation process presented above by $eval$ and
let $ev_i$ be the evaluation of the $i$-th player, $ev_i = eval(GC_i)$. Then
we can transform $D$ into $D_{ev} = \{(ev_i, y_i), \dots\}$, which forms
our training dataset.

The task of our machine learning algorithm is to generalize the knowledge
in the dataset $D_{ev}$ to predict the correct $y_X$ even for previously unseen $GC_X$.
In the case of strength, we might therefore be able to predict the strength $y_X$
of an unknown player $X$ given a set of his games $GC_X$ (from which we can
compute the evaluation $ev_X$).
After testing multiple approaches, we settled on using a bagged artificial neural network
as the predictor.
Neural networks are a standard technique in machine learning. The network is
composed of simple computational units organized in a layered topology,
as described e.g. in the monograph by \citet{haykin_nn}.
We have used a simple feedforward neural network with 20~hidden units, trained
using the RPROP algorithm \citep{Riedmiller1993} for at most 100~iterations.

Bagging \citep{breimanbag96} is a method that combines an ensemble of
$N$ models (trained on differently sampled data) to improve their
performance and robustness. In this work, we used a bag of $N=20$ neural networks.
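The bagging scheme can be sketched generically; `make_model` is a hypothetical factory standing in for training one RPROP network and returning a `predict(ev)` function:

```python
import random

class BaggedRegressor:
    """Bagging sketch: n models trained on bootstrap resamples of the
    training data; the ensemble prediction is the mean of the models."""
    def __init__(self, make_model, n=20, seed=0):
        self.make_model, self.n, self.seed = make_model, n, seed
        self.models = []

    def fit(self, data):
        rng = random.Random(self.seed)
        self.models = [
            # each model sees a bootstrap resample (drawn with replacement)
            self.make_model([rng.choice(data) for _ in data])
            for _ in range(self.n)
        ]
        return self

    def predict(self, ev):
        return sum(m(ev) for m in self.models) / self.n
```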
\subsection{Measuring the Performance}

To assess the efficiency of our method and give estimates of its precision on
unseen inputs, we measure the performance of our algorithm on a dataset $D_{ev}$.
A standard way to do this is to divide $D_{ev}$ into training
and testing parts and compute the error of the method on the testing part.

A commonly used measure is the mean square error ($MSE$), which estimates the variance of
the error distribution. We use its square root ($RMSE$), which is an estimate of
the standard deviation of the prediction errors,
$$ RMSE = \sqrt{\frac{1}{|T_s|} \sum_{(ev, y) \in T_s}{ (predict(ev) - y)^2}} $$
where the machine learning model $predict$ is trained on the
training data $T_r$ and $T_s$ denotes the testing data.
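The $RMSE$ formula translates directly into code; a minimal sketch:

```python
import math

def rmse(predict, test_set):
    """Root mean square error of `predict` over (ev, y) pairs."""
    return math.sqrt(sum((predict(ev) - y) ** 2 for ev, y in test_set)
                     / len(test_set))
```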
\subsubsection*{Cross-Validation}

To make the error estimate robust, we use cross-validation \citep{crossval},
a standard statistical technique for robust estimation of parameters.
We split the data into $k$ disjoint subsets (called \emph{folds}) and then
iteratively compose the training and testing sets and measure errors.
%In each of the $k$ iterations, the $k$-th fold is chosen as the testing data, and
%all the remaining $k-1$ folds form the training data. The division into the folds is
%done randomly, so that the folds have approximately the
%same size.
In this work, we have used 5-fold cross-validation.
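The fold construction and evaluation loop can be sketched as follows; `train` is a hypothetical callable mapping a training list to a `predict(ev)` function:

```python
import random

def cross_validated_rmse(dataset, train, k=5, seed=0):
    """k-fold cross-validation sketch over (ev, y) pairs.
    Returns the mean per-fold RMSE."""
    data = list(dataset)
    random.Random(seed).shuffle(data)
    folds = [data[i::k] for i in range(k)]  # k roughly equal disjoint folds
    fold_errors = []
    for i in range(k):
        testing = folds[i]
        training = [pair for j, fold in enumerate(folds) if j != i for pair in fold]
        predict = train(training)
        mse = sum((predict(ev) - y) ** 2 for ev, y in testing) / len(testing)
        fold_errors.append(mse ** 0.5)
    return sum(fold_errors) / k
```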
\section{Experiments and Results}
\label{sec:expe}

\subsection{Strength}
One of the two major domains we have tested our framework on is the prediction of player
strength.

\subsubsection*{Dataset}
We collected a large sample of games from the public
archives of the Kiseido Go server~\citep{KGSArchives}.
The sample consists of over 100\,000 records of games in the \emph{.sgf} format~\citep{SGF}.

For each rank $r$ in the range of 6-dan to 20-kyu, we gathered a
list of players $P_r$ of that particular rank. To avoid biases caused by
different strategies, the sample only consists of games played on a $19 \times 19$ board without
handicap stones.
The set of colored games $GC_p$ for a~player $p \in P_r$ consists of the games player $p$
played while he had the rank $r$. We only use $GC_p$ if it contains
at least 10 games; if the sample is larger than 50 games, we
randomly choose a subset of the sample (the size of the subset is chosen uniformly at random
from the interval $\langle 10, 50\rangle$).\footnote{
By cutting the number of games to a fixed number (say 50) for large
samples, we would create an artificial disproportion in the sizes of the $GC_p$,
which could introduce bias into the process.
}

For each of the 26 ranks, we gathered 120 such $GC_p$'s.
The target variable $y$ to learn from directly corresponds to the ranks:
$y=20$ for a rank of 20-kyu, $y=1$ for 1-kyu, $y=0$ for 1-dan, $y=-5$
for 6-dan, and similarly for the other values. (With increasing strength, $y$
decreases.)
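The rank-to-target mapping is a simple affine rule once the kyu/dan distinction is handled; a minimal sketch using hypothetical rank strings such as `"20k"` or `"6d"`:

```python
def rank_to_y(rank):
    """Map a rank string ('20k' ... '1k', '1d' ... '6d') to the target
    variable: y = 20 for 20-kyu, y = 1 for 1-kyu, y = 0 for 1-dan,
    y = -5 for 6-dan; y decreases with increasing strength."""
    n = int(rank[:-1])
    return n if rank.endswith("k") else 1 - n
```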
\subsubsection*{Results}

The performance of the strength prediction is given in Table~\ref{tab:str_reg_res}.
The table compares the performance of the bagged neural network (Section~\ref{sec:mach})
with a simple reference method, mean regression, which constantly
predicts the average of the strengths in the dataset regardless of the evaluation vector.

The results show that the prediction of strength has a standard deviation $\sigma$
(estimated by the $RMSE$ error) of approximately $2.7$ ranks.
Because the errors are approximately normally distributed,
we can say that 68\% of predictions fall within a distance of
$\sigma$ from the real strength and 95\% of predictions are within $2\sigma$.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|}
\hline
\textbf{Machine learning method} & $\mathbf{RMSE}$ error\\
\hline
Mean regression & 7.507 \\ \hline
Bagged NN & 2.712 \\
\hline
\end{tabular}
\end{center}
\caption{
$RMSE$ performance of the strength prediction. The mean regression is
a reference model which predicts a constant value (the average of the
strengths in the dataset) regardless of the set of games. The results
are computed by 5-fold cross-validation.
}
\label{tab:str_reg_res}
\end{table}
\subsection{Style}

The second domain is the prediction of different aspects of player styles.

\subsubsection*{Dataset}
The collection of games in this dataset comes from the Games of Go on Disk database by \citet{GoGoD}.
This database contains more than 70\,000 professional games, spanning from ancient times
to the present.

We chose 25 popular professional players (mainly from the 20th century) and
asked several experts (professional and strong amateur players)
to evaluate these players using a questionnaire. The experts (Alexander
Dinerchtein 3-pro, Motoki Noguchi 7-dan,
Vladimír Daněk 5-dan, Lukáš Podpěra 5-dan and Vít Brunner 4-dan)
were asked to assess the players on four scales, each ranging from 1 to 10.
%\begin{table}[h!]
\begin{center}
%\caption{Styles}
\begin{tabular}{|c|c|c|}
\hline
\textbf{Style} & \textbf{1} & \textbf{10}\\ \hline
Territoriality & Moyo & Territory \\
Orthodoxity & Classic & Novel \\
Aggressivity & Calm & Fighting \\
Thickness & Safe & Shinogi \\ \hline
\end{tabular}
\end{center}
%\caption[Definition of the style scales]{
%The definition of the style scales.
%}
%\label{tab:style_def}
%\end{table}
The scales try to reflect
some of the traditionally perceived playing styles.\footnote{
Refer also to~\citet{GoGoD:styles} or~\citet{senseis:styles} regarding Go playing styles.
}
For example, the first scale (\emph{territoriality})
stresses whether a player prefers safe, yet inherently smaller, territory (10 on the scale),
or a roughly sketched large territory (\emph{moyo}, 1 on the scale), which is, however, insecure.
For each of the selected professionals, we took 192 of his games from the GoGoD database
at random. We divided these games (at random) into 12 colored sets $GC$ of 16 games each.
The target variable $y$ (for each of the four styles) is given by the average of the experts'
answers. The results of the questionnaire are published online in~\citep{style_quest}.
\subsubsection*{Results}

The results of style prediction are given in Table~\ref{tab:sty_reg_res}.
Given that the style scales range from 1 to 10, we consider the average
standard deviation from the correct answers of around 1.6 to be a good precision.

We should note that the mean regression has a very small $RMSE$
for the thickness scale.
This stems from the fact that the experts' answers in the questionnaire
themselves have very little variance --- our conclusion is that the thickness
scale was not chosen well. Refer to~\citep{style_quest} for further discussion.
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{5}{|c|}{ $\mathbf{RMSE}$ error } \\
\hline
\textbf{Method} & Territoriality & Orthodoxity & Aggressivity & Thickness \\
\hline
Mean regression & 2.403 & 2.421 & 2.179 & 1.682 \\
Bagged NN & 1.527 & 1.734 & 1.548 & 1.572 \\
\hline
\end{tabular}
\end{center}
\caption{
$RMSE$ performance for the prediction of different styles. The mean regression is
a reference model which predicts a constant value (the average of the
values of each particular style) regardless of the set of games. The results
are computed by 5-fold cross-validation.
}
\label{tab:sty_reg_res}
\end{table}
\section{Discussion}
\label{sec:disc}

The results in both domains have shown that our evaluations are useful in predicting
different kinds of player attributes. This can have a number of possible applications.

So far, we have utilized some of our findings in an online web
application\footnote{\url{http://gostyle.j2m.cz/webapp.html}}. Based on data submitted
by the users, it evaluates their games, predicts their playing style,
and recommends relevant professional players to review. Of course,
our methods for style estimation are trained on very strong players, so they might
not fully generalize to ordinary players. Weak players might not have a consistent
style, or the whole concept of style might not even be applicable to them. Estimating this
effect, however, is not easily possible, since we do not have data about weak players' styles.
Our webapp allows the users to submit their own opinion about their style, so we should
be able to account for this effect in future research.
Other possible applications include helping ranking algorithms to converge faster ---
usually, the ranking of a player is determined from his opponents' rankings by looking
at the numbers of wins and losses (e.g. by computing an Elo rating). Our methods might improve this
by including domain knowledge.
Similarly, a computer Go program could quickly classify the level of its
human opponent based on the evaluation of their previous games
and auto-adjust its difficulty settings accordingly
to provide more even games for beginners.
It is also possible to study the dependencies between single elements of the evaluation vector
and the target variable $y$ directly. By pinpointing e.g. the patterns
most strongly correlated with weak strength (players who play them are weak), we can
warn users not to play them. We have made some initial research into this in~\citep{Moudrik13},
but we do not present those results here because of space constraints.
\section{Conclusion}
\label{sec:conc}
We presented a method for evaluating players based on a sample of their games.
These summary evaluations turn out to be useful in many cases --- they allow us to predict
different player attributes (such as strength or playing style) with reasonable accuracy.
We believe that the applications of these findings can help to improve both human and computer
understanding of the game of Go.
\section{Implementation}
\label{sec:impl}

The code used in this work
is released online as a part of the GoStyle project~\citep{GoStyleWeb}.
The majority of the source code is implemented in
the Python programming language~\citep{Python27}.

The machine learning was implemented using the
Orange Data Mining suite~\citep{curk05} and
the Fast Artificial Neural Network library FANN~\citep{Nissen2003}.
We used the Pachi Go engine~\citep{Pachi} for the raw game processing.
\bibliographystyle{abbrvnat}
\bibliography{clanek}

\end{document}