From 1c1750a2fa9f8dc08896202666867e621090ac2e Mon Sep 17 00:00:00 2001 From: Joe Moudrik Date: Mon, 21 May 2012 12:27:37 +0200 Subject: [PATCH] gostyle.tex: Third (final??) review. Main changes: reduced footnotes, re-format one-sentence paragraphs, cut down num of (semi-)colons. --- tex/POSUDEK3 | 23 ++++ tex/gostyle.bib | 2 +- tex/gostyle.tex | 164 ++++++++++++++++++++++++++------------------------ tex/makefile | 2 +- 4 files changed, 102 insertions(+), 89 deletions(-) create mode 100644 tex/POSUDEK3 diff --git a/tex/POSUDEK3 b/tex/POSUDEK3 new file mode 100644 index 0000000..001a2b2 --- /dev/null +++ b/tex/POSUDEK3 @@ -0,0 +1,23 @@ +Comments to Reviewer 1 + +First of all, I would like to thank the reviewer for pointing out the problems in our paper. + +The main problems were the excessive use of footnotes and a lot of one-sentence paragraphs. + +We have cut down the number of footnotes to a maximum of 2 footnotes per page. Furthermore, we have reformatted/merged the one-sentence paragraphs and reduced the use of colons & semicolons as well, hoping to make our paper read more fluently. + +> REFERENCES +> The References contain many errors: +> - Use of "et al." rather than full author lists. +> - Some years at start, some years at end. +> - Some years bracketed, some not bracketed. +> - Other formatting errors, etc. + +I fully agree that the References formatting is not optimal in every regard; however, this is caused by the official IEEE Transactions bibliography style. Different documents (Book, Article, Electronic, Tech report, ..) have different positions (and bracketing) of the year field. I took special care to make sure that all the fields in all the references are present (if possible), and I am afraid I cannot make it any better than the current version. + +The use of et al. in the paper mostly refers to online projects with at least partial public contribution, where gathering the (likely very long) list of all the authors is very complicated, if possible at all. 
We use full author lists in all other cases. + +-------------------- +Comments to Reviewer 2 + +We have fixed the typo. Thank you. diff --git a/tex/gostyle.bib b/tex/gostyle.bib index 65c377b..fb6bebb 100644 --- a/tex/gostyle.bib +++ b/tex/gostyle.bib @@ -229,7 +229,7 @@ @ELECTRONIC{KGSAnalytics, author = {Kazuhiro}, - year = {2012}, + year = {2010}, title = {{KGS} {A}nalytics}, url = {http://kgs.gosquares.net/}, owner = {pasky} diff --git a/tex/gostyle.tex b/tex/gostyle.tex index d962171..67e1881 100644 --- a/tex/gostyle.tex +++ b/tex/gostyle.tex @@ -46,15 +46,23 @@ \usepackage{soul} +% FIRST REVISION %ENABLE: %\newcommand{\rv}[1]{\ul{#1}} %DISABLE: \newcommand{\rv}[1]{#1} +% SECOND REVISION %ENABLE: -\newcommand{\rvv}[1]{\ul{#1}} +%\newcommand{\rvv}[1]{\ul{#1}} %DISABLE: -%\newcommand{\rvv}[1]{#1} +\newcommand{\rvv}[1]{#1} + +% THIRD REVISION +%ENABLE: +\newcommand{\rvvv}[1]{\ul{#1}} +%DISABLE: +%\newcommand{\rvvv}[1]{#1} \usepackage{algorithm} \usepackage{algorithmic} @@ -371,7 +379,7 @@ only little has been done with the available data. We are aware only of uses for simple win/loss statistics \cite{KGSAnalytics} \cite{ProGoR} and ``next move'' statistics on a~specific position \cite{Kombilo} \cite{MoyoGo}. -\rvv{Additionaly, a simple machine learning technique based on GNU Go's}\cite{GnuGo}\rvv{ +\rvvv{Additionally}, \rvv{a simple machine learning technique based on GNU Go's}\cite{GnuGo}\rvv{ move evaluation feature has recently been presented in}\cite{CompAwar}\rvv{. The authors used decision trees to predict whether a given user belongs into one of three classes based on his strength (casual, intermediate or advanced player). 
This method is however limited by the @@ -436,16 +444,16 @@ used when computing Elo ratings for candidate patterns in Computer Go play We use these features: \begin{itemize} -\item capture move flag -\item atari move flag -\item atari escape flag +\item capture move flag\rvvv{,} +\item atari move flag\rvvv{,} +\item atari escape flag\rvvv{,} \item contiguity-to-last flag% \footnote{We do not consider contiguity features in some cases when we are working on small game samples and need to reduce pattern diversity.} ---- whether the move has been played in one of 8 neighbors of the last move -\item contiguity-to-second-last flag -\item board edge distance --- only up to distance 4 -\item spatial pattern --- configuration of stones around the played move +--- whether the move has been played in one of 8 neighbors of the last move\rvvv{,} +\item contiguity-to-second-last flag\rvvv{,} +\item board edge distance --- only up to distance 4\rvvv{,} +\item and spatial pattern --- configuration of stones around the played move. \end{itemize} The spatial patterns are normalized (using a dictionary) to be always @@ -510,8 +518,8 @@ However, we have found that this method is not universally beneficial. In our styles case study (sec. \ref{style-analysis}), this normalization produced PCA decomposition with significant dimensions corresponding better to some of the prior knowledge and more instructive for manual -inspection, but ultimately worsened accuracy of our classifiers; -we conjecture from this that the most frequently occuring patterns are +inspection, but ultimately worsened accuracy of our classifiers. +\rvvv{From this we conjecture} that the most frequently occurring patterns are also most important for classification of major style aspects. 
\subsection{Implementation} We have implemented the data extraction by making use of the pattern features matching implementation within the Pachi Go-playing program \cite{Pachi}, \rvv{which works according to the Elo-rating pattern selection scheme} \cite{PatElo}. - We extract information on players by converting the SGF game records to GTP stream \cite{GTP} that feeds Pachi's {\tt patternscan} engine, \rv{producing} a~single {\em patternspec} (string representation @@ -536,7 +543,7 @@ we analyze the data using several basic data minining techniques. The first two methods {\em (analytic)} rely purely on single data set and serve to show internal structure and correlations within the data set. -Principal Component Analysis \rvv{\emph{(PCA)}} \cite{Jolliffe1986} +\rvvv{\emph{Principal Component Analysis}} \rvv{\emph{(PCA)}} \cite{Jolliffe1986} finds orthogonal vector components that \rv{represent} the largest variance \rv{of values within the dataset. That is, PCA will produce vectors representing @@ -555,8 +562,9 @@ that are negatively sensitive to pattern vector component correlations. \rv{On the other hand,} Sociomaps \cite{Sociomaps} \cite{TeamProf} \cite{SociomapsPersonal} produce spatial representation of the data set elements (e.g. players) based on -similarity of their data set features; we can then project other -information on the map to illutrate its connection to the data set.% +similarity of their data set features\rvvv{. 
Projecting some other +information on this map helps illustrate connections within the data set.} + % Pryc v ramci snizeni poctu footnotu %\footnote{\rv{We also attempted to visualise the player relationships %using Kohonen maps, but that did not produce very useful results.}} @@ -567,7 +575,6 @@ an \emph{output vector} $\vec O$ \rv{to} each pattern vector $\vec P$, from the game sample} --- e.g.~\rv{assessment of} the playing style, player's strength or even meta-information like the player's era or the country of origin. - Initially, the methods must be calibrated (trained) on some prior knowledge, usually in the form of \emph{reference pairs} of pattern vectors and the associated output vectors. @@ -580,29 +587,26 @@ and the methods can be compared by the mean square error (MSE) on testing data s The most trivial method is approximation by the PCA representation matrix, provided that the PCA dimensions have some already well-defined -\rv{interpretation}; this can be true for single-dimensional information like +\rv{interpretation}\rvvv{. This} can be true for single-dimensional information like the playing strength. - Aside of that, we test the $k$-Nearest Neighbors (\emph{$k$-NN}) classifier \cite{CoverHart1967} that approximates $\vec O$ by composing the output vectors of $k$ reference pattern vectors closest to $\vec P$. -Another classifier is a~multi-layer feed-forward Artificial Neural Network \rv{(see e.g. }\cite{Haykin1994}\rv{)}: -the neural network can learn correlations between input and output vectors -and generalize the ``knowledge'' to unknown vectors; it can be more flexible +Another classifier is a~multi-layer feed-forward Artificial Neural Network \rv{(see e.g. }\cite{Haykin1994}\rv{)}. +The neural network can learn correlations between input and output vectors +and generalize the ``knowledge'' to unknown vectors\rvvv{. 
The neural network} can be more flexible in the interpretation of different pattern vector elements and discern more complex relations than the $k$-NN classifier, but may not be as stable and expects larger training sample. Finally, a commonly used classifier in statistical inference is -the Naive Bayes Classifier \cite{Bayes}; -it can infer relative probability of membership +the Naive Bayes Classifier \cite{Bayes}\rvvv{. It} can infer relative probability of membership in various classes based on previous evidence (training patterns). \subsection{Statistical Methods} We use couple of general statistical analysis \rv{methods} together with the particular techniques. - \label{pearson} To find correlations within or between extracted data and some prior knowledge (player rank, style vector), we compute the well-known @@ -627,7 +631,6 @@ We use Principal Component Analysis to reduce the dimensions of the pattern vectors while preserving as much information as possible, assuming inter-dependencies between pattern vector dimensions are linear. - \rv{Technically}, PCA is an eigenvalue decomposition of a~covariance matrix of centered pattern vectors, producing a~linear mapping $o$ from $n$-dimensional vector space to a~reduced $m$-dimensional vector space. @@ -670,9 +673,8 @@ The whole process is described in the Algorithm \ref{alg:pca}. Sociomaps are a general mechanism for \rv{visualizing} relationships on a 2D plane such that \rv{given} ordering of the \rv{player} distances in the dataset is preserved in distances on the plane. - -In our particular case,% -\footnote{A special case of the {\em Subject-to-Object Relation Mapping (STORM)} indirect sociomap.} +In our particular case, +%\footnote{A special case of the {\em Subject-to-Object Relation Mapping (STORM)} indirect sociomap.} we will consider a dataset $\vec S$ of small-dimensional vectors $\vec s_i$. First, we estimate the {\em significance} of difference {\rv of} each two subjects. 
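The PCA computation described above, an eigenvalue decomposition of the covariance matrix of centered pattern vectors mapping the $n$-dimensional space to a reduced $m$-dimensional one, can be sketched in a few lines of NumPy. This is only an illustrative sketch with invented toy data; the paper's actual implementation relies on the MDP library.

```python
import numpy as np

def pca_reduce(patterns, m):
    """Eigen-decompose the covariance matrix of the centered pattern
    vectors and project onto the m most significant eigenvectors,
    mapping n-dimensional vectors to an m-dimensional space."""
    X = np.asarray(patterns, dtype=float)
    X = X - X.mean(axis=0)              # center the pattern vectors
    cov = np.cov(X, rowvar=False)       # n x n covariance matrix
    vals, vecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:m]  # keep the m largest components
    return X @ vecs[:, order], vals[order]

# Toy data: points nearly on a line, so one dominant component.
reduced, eigvals = pca_reduce([[0, 0], [1, 1], [2, 2], [3, 3.1]], m=2)
```

As in the paper, the relative sizes of the eigenvalues indicate how much of the variance each reduced dimension captures.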
@@ -715,13 +717,12 @@ uniformly correlate with similarities in players' output vectors. We require a set of reference players $R$ with known \emph{pattern vectors} $\vec p_r$ and \emph{output vectors} $\vec o_r$. - $\vec O$ is approximated as weighted average of \emph{output vectors} $\vec o_i$ of $k$ players with \emph{pattern vectors} $\vec p_i$ closest to $\vec P$. This is illustrated in the Algorithm \ref{alg:knn}. Note that the weight is a function of distance and is not explicitly defined in Algorithm \ref{alg:knn}. -During our research, exponentially decreasing weight has proven to be sufficient.% -\footnote{We present concrete formulas in each of the case studies.} +During our research, exponentially decreasing weight has proven to be sufficient\rvvv{, +as detailed in each of the case studies.} \begin{algorithm} \caption{k-Nearest Neighbors} @@ -754,10 +755,9 @@ until the error on the training set is reasonably small. \subsubsection{Computation and activation of the NN} Technically, the neural network is a network of interconnected computational units called neurons. -A feed-forward neural network has a layered topology; -it usually has one \emph{input layer}, one \emph{output layer} +A feed-forward neural network has a layered topology\rvvv{. It} +usually has one \emph{input layer}, one \emph{output layer} and an arbitrary number of \emph{hidden layers} between. - Each neuron $i$ gets input from all neurons in the previous layer, each connection having specific weight $w_{ij}$. @@ -783,12 +783,12 @@ Parameters control the growth rate $r$ and the x-position $k$.} Training of the feed-forward neural network usually involves some modification of supervised Backpropagation learning algorithm. We use first-order optimization algorithm called RPROP \cite{Riedmiller1993}. - +% %Because the \emph{reference set} is usually not very large, %we have devised a simple method for its extension. 
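The $k$-NN approximation of Algorithm \ref{alg:knn} with an exponentially decreasing distance weight might be sketched as follows. The decay constant and the one-dimensional toy reference set are invented for illustration only; the paper gives its concrete weight formulas in the individual case studies.

```python
import numpy as np

def knn_output(P, ref_patterns, ref_outputs, k=3, decay=1.0):
    """Approximate the output vector O of pattern vector P as the
    weighted average of the output vectors of the k reference players
    whose pattern vectors lie closest to P.  The weight decreases
    exponentially with distance (decay constant chosen arbitrarily)."""
    refs = np.asarray(ref_patterns, dtype=float)
    outs = np.asarray(ref_outputs, dtype=float)
    d = np.linalg.norm(refs - P, axis=1)
    nearest = np.argsort(d)[:k]          # indices of the k closest refs
    w = np.exp(-decay * d[nearest])      # exponentially decreasing weight
    return (w[:, None] * outs[nearest]).sum(axis=0) / w.sum()

# Toy reference set: 1-D pattern vectors with 1-D outputs.
est = knn_output(np.array([0.1]), [[0.0], [1.0], [10.0]],
                 [[0.0], [1.0], [10.0]], k=2)
```

The estimate is dominated by the nearest reference while still blending in the remaining neighbors.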
%This enhancement is based upon adding random linear combinations %of \emph{style and pattern vectors} to the training set. - +% As outlined above, the training set $T$ consists of $(\vec p_i, \vec o_i)$ pairs. The training algorithm is shown in Algorithm \ref{alg:tnn}. @@ -831,7 +831,6 @@ In order to approximate the player's output vector $\vec O$ based on pattern vector $\vec P$, we will compute each element of the output vector separately, covering the output domain by several $k$-sized discrete intervals (classes). - \rv{In fact, we use the PCA-represented input $\vec R$ (using the 10 most significant dimensions), since it better fits the pre-requisites of the Bayes classifier -- values in each dimension are more independent and @@ -846,7 +845,6 @@ $$ \vec R \mid c $$ estimating the mean $\mu_c$ and standard deviation $\sigma_c$ of each $\vec R$ element for each encountered $c$ (see algorithm \ref{alg:tnb}). - Then, we can query the built probability model on $$ \max_c P(c \mid \vec R) $$ obtaining the most probable class $i$ for an arbitrary $\vec R$ @@ -875,13 +873,12 @@ $$ P(c \mid x) = {1\over \sqrt{2\pi\sigma_c^2}}\exp{-(x-\mu_c)^2\over2\sigma_c^2 We have implemented the data mining methods as the ``gostyle'' open-source framework \cite{GoStyle}, made available under the GNU GPL licence. - The majority of our basic processing and \rv{analysis is} implemented in the Python \cite{Python25} programming language. + We use several external libraries, most notably the MDP library \cite{MDP} \rv{for the PCA analysis}. The neural network \rv{component} is written using the libfann C library \cite{Nissen2003}. The Naive Bayes Classifier \rv{is built around} the {\tt AI::NaiveBayes1} Perl module \cite{NaiveBayes1}. - The sociomap has been visualised using the Team Profile Analyzer \cite{TPA} which is a part of the Sociomap suite \cite{SociomapSite}. @@ -899,25 +896,27 @@ which is a part of the Sociomap suite \cite{SociomapSite}. 
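The Gaussian model behind the Naive Bayes classifier above (a per-class mean $\mu_c$ and deviation $\sigma_c$ for each input dimension, then choosing $\max_c P(c \mid \vec R)$) can be sketched in plain Python. This from-scratch illustration with invented two-class toy data stands in for the {\tt AI::NaiveBayes1} module actually used.

```python
import math
from collections import defaultdict

def train_nb(samples):
    """Estimate (mu_c, sigma_c) of every input dimension for each
    class c from (vector, class) training pairs."""
    by_class = defaultdict(list)
    for vec, c in samples:
        by_class[c].append(vec)
    model = {}
    for c, vecs in by_class.items():
        params = []
        for dim in zip(*vecs):           # one tuple per input dimension
            mu = sum(dim) / len(dim)
            sigma = (sum((x - mu) ** 2 for x in dim) / len(dim)) ** 0.5
            params.append((mu, max(sigma, 1e-6)))  # avoid zero deviation
        model[c] = params
    return model

def classify_nb(model, vec):
    """Pick the class maximizing the sum of per-dimension Gaussian
    log-densities (uniform class priors assumed)."""
    def loglik(params):
        return sum(-((x - mu) ** 2) / (2 * s ** 2) - math.log(s)
                   for x, (mu, s) in zip(vec, params))
    return max(model, key=lambda c: loglik(model[c]))

# Invented two-class toy data.
model = train_nb([([0.0, 1.0], 'weak'), ([0.2, 0.9], 'weak'),
                  ([5.0, 4.0], 'strong'), ([5.2, 4.2], 'strong')])
```

Working in log-space avoids underflow when the number of (assumed independent) dimensions grows.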
First, we have used our framework to analyse correlations of pattern vectors and playing strength. Like in other competitively played board games, Go players receive real-world {\em rating number} based on tournament games, -and {\em rank} based on their rating.% -\footnote{Elo-type rating system \cite{GoR} is usually used, -corresponding to even win chances for game of two players with the same rank, -and about 2:3 win chance for the stronger in case of one rank difference.} +and {\em rank} based on their rating. +%\footnote{Elo-type rating system \cite{GoR} is usually used, +%corresponding to even win chances for game of two players with the same rank, +%and about 2:3 win chance for the stronger in case of one rank difference.} %\footnote{Professional ranks and dan ranks in some Asia countries may be assigned differently.} The amateur ranks range from 30-kyu (beginner) to 1-kyu (intermediate) and then follows 1-dan to 9-dan %\footnote{7-dan in some systems.} (top-level player). -Multiple independent real-world ranking scales exist -(geographically based), \rv{while} online servers \rv{also} maintain their own user rank \rv{list}; -the difference between scales can be up to several ranks and the rank -distributions also differ. \cite{RankComparison} + +There are multiple independent real-world ranking scales +(geographically based), \rv{while} online servers \rv{also} maintain their own user rank \rv{list}\rvvv{. +The} difference between scales can be up to several ranks and the rank +distributions also differ \cite{RankComparison}. \subsection{Data source} As the source game collection, we use \rvv{the} Go Teaching Ladder reviews archive %\footnote{The reviews contain comments and variations --- we consider only the main variation with the actual played game.} \cite{GTL}. This collection contains 7700 games of players with strength ranging from 30-kyu to 4-dan; we consider only even games with clear rank information. 
+ Since the rank information is provided by the users and may not be consistent, we are forced to take a simplified look at the ranks, discarding the differences between various systems and thus somewhat @@ -925,7 +924,6 @@ increasing error in our model.\footnote{Since our results seem satisfying, we did not pursue to try another collection; one could e.g. look at game archives of some Go server to work within single more-or-less consistent rank model.} - We represent the rank in our dataset \rv{as an integer in the range} $[-3,30]$ with positive numbers representing the kyu ranks and numbers smaller than 1 representing the dan ranks: 4-dan maps to $-3$, 1-dan to $0$, etc. @@ -937,7 +935,6 @@ rank correspondence in the first PCA dimension% \footnote{The eigenvalue of the second dimension was four times smaller, with no discernable structure revealed within the lower-order eigenvectors.} (figure \ref{fig:strength_pca}). - We measure the accuracy of the strength approximation by the first PCA dimension using Pearson's $r$ (see \ref{pearson}), yielding very satisfying value of $r=0.979$ implying extremely strong correlation.% @@ -947,9 +944,9 @@ implying extremely strong correlation.% of a set of players grouped by strength is indeed their strength and confirms that our methodics is correct. 
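Pearson's $r$ used for this measurement (sec. \ref{pearson}) is straightforward to compute. The (rank, first-PCA-coordinate) pairs below are invented stand-ins chosen only to show the mechanics; the actual study reports $r=0.979$ on the GTL collection.

```python
import math

def pearson_r(xs, ys):
    """Pearson's correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical (rank, first-PCA-coordinate) pairs; ranks follow the
# paper's convention (30 = 30-kyu beginner down to -3 = 4-dan).
ranks = [30, 25, 20, 15, 10, 5, 1, -3]
pca1 = [2.9, 2.4, 2.1, 1.4, 1.1, 0.4, 0.1, -0.4]
r = pearson_r(ranks, pca1)
```

Values of $r$ near $\pm 1$ indicate a nearly linear relationship, which is what makes the first PCA dimension usable directly as a strength estimate.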
At the same time, this result suggests that it is possible to accurately estimate -player's strength from a sample of his games,% -\footnote{The point is of course that the pattern analysis can be done even if we do not know the opponent's strength, or even the game result.} +player's strength \rvvv{just} from a sample of his games, as we confirm below.} +%\footnote{The point is of course that the pattern analysis can be done even if we do not know the opponent's strength, or even the game result.} \rv{When investigating a player's $\vec p$, the PCA decomposition could be also useful for study suggestions --- a program could examine the pattern gradient at the @@ -968,8 +965,7 @@ and a simple PCA-based classifier (sec. }\ref{PCA}\rv{).} \subsubsection{Reference (Training) Data} \rv{We have trained the tested classifiers using one pattern vector per rank (aggregate over all games played by some player declaring the given rank), -then performing PCA analysis to reduced the dimension of pattern vectors.} - +then performing PCA analysis to reduce the dimension of pattern vectors.} We have explored the influence of different game sample sizes (\rv{$G$}) on the classification accuracy to \rv{determine the} practicality and scaling abilities of the classifiers. In order to reduce the diversity of patterns (negatively impacting accuracy on small samples), we do not consider the contiguity pattern features. The classifiers were compared by running a many-fold validation by repeatedly and -exhaustively taking disjunct \rv{$G$}--game samples of the same rank from the collection% -\footnote{Arbitrary game numbers are approximated by pattern file sizes, -iteratively selecting all games of randomly selected player -of the required strength.} +exhaustively taking disjoint \rv{$G$}--game samples of the same rank from the collection and measuring the standard error of the classifier. 
+Arbitrary game numbers were approximated by pattern file sizes, +iteratively selecting all games of a randomly selected player +of the required strength. %We have randomly separated $10\%$ of the game database as a testing set, %Using the most %of players within the test group yields MSE TODO, thus providing @@ -1004,7 +1000,6 @@ For samples of 2 games, the neural network is even slightly better on average. However, due to the decreasing number of training vectors with increasing game sample sizes, the neural network gets unusable for large sample sizes. The table therefore only shows the neural network results for samples of 17 games and smaller.} - \rv{PCA-based classifier (the most significant PCA eigenvector position is simply directly taken as a~rank) and a random classifier are listed mainly for the sake of comparison, because they do not perform competetively.} @@ -1087,11 +1082,12 @@ the pattern vector $\vec p$ to a style vector $\vec s$. \subsubsection{Game database} The source game collection is GoGoD Winter 2008 \cite{GoGoD} containing 55000 professional games, dating from the early Go history 1500 years ago to the present. -We consider only games of a small subset of players (table \ref{fig:style_marks}); -we have chosen them for being well-known within the players community, +We consider only games of a small subset of players (table \ref{fig:style_marks})\rvvv{. These players +were chosen} for being well-known within the players community, having large number of played games in our collection and not playing too long -ago.\footnote{Over time, many commonly used sequences get altered, adopted and -dismissed; usual playing conditions can also differ significantly.} +ago. +%\footnote{Over time, many commonly used sequences get altered, adopted and +%dismissed\rvvv{. 
Usual} playing conditions can also differ significantly.} \subsubsection{Expert-based knowledge} \label{style-vectors} @@ -1131,7 +1127,6 @@ room for confusion, except possibly in the case of ``thickness'' --- but the concept is not easy to pin-point succintly and we also did not add extra comments on the style aspects to the questionnaire deliberately to accurately reflect any diversity in understanding of the terms. - Averaging this expert based evaluation yields \emph{reference style vector} $\vec s_r$ (of dimension $4$) for each player $r$ from the set of \emph{reference players} $R$. @@ -1243,12 +1238,13 @@ Chen Yaoye & $6.0 \pm 1.0$ & $4.0 \pm 1.0$ & $6.0 \pm 1.0$ & $5.5 \pm \end{figure} We have looked at the ten most significant dimensions of the pattern data -yielded by the PCA analysis of the reference player set% -\footnote{We also tried to observe PCA effect of removing outlying Takemiya -Masaki. That way, the second dimension strongly -correlated to territoriality and third dimension strongly correlacted to era, -however the first dimension remained mysteriously uncorrelated and with no -obvious interpretation.} +yielded by the PCA analysis of the reference player set +%\footnote{ +%We also tried to observe PCA effect of removing outlying Takemiya +%Masaki. That way, the second dimension strongly +%correlated to territoriality and third dimension strongly correlacted to era, +%however the first dimension remained mysteriously uncorrelated and with no +%obvious interpretation.} (fig. \ref{fig:style_pca} shows the first three). We have again computed the Pearson's $r$ for all combinations of PCA dimensions and dimensions of the prior knowledge style vectors to find correlations. 
@@ -1523,7 +1519,6 @@ but professionals usually play it only in special contexts.}} or (more likely, in our opinion) that novel players are more likely to get into unorthodox situation that require resorting to the tsuke-nobi sequence.} - We believe that the next step in interpreting our analytical results will be more refined prior information input and precise analysis of the outputs by Go experts. @@ -1669,7 +1664,6 @@ that. We believe that our findings might be useful for many applications in the area of Go support software as well as Go-playing computer engines. - \rv{However, our foremost aim is to use the style analysis as an excellent teaching aid} --- classifying style dimensions based on player's pattern vector, many study recommendations @@ -1689,10 +1683,10 @@ before playing in their first real tournament. \rv{Similarly, a computer Go program can quickly} classify the level of its \rv{human opponent} based on the pattern vector from \rv{their previous games} and auto-adjust its difficulty settings accordingly -to provide more even games for beginners.% -\footnote{The program can also do this based on win-loss statistics, +to provide more even games for beginners. +\rvvv{This can also be achieved using} win-loss statistics, but pattern vector analysis \rv{should} converge faster \rv{initially, -providing much better user experience}.} +providing much better user experience}. We hope that more strong players will look into the style dimensions found by our statistical analysis --- analysis of most played patterns of prospective @@ -1815,8 +1809,8 @@ and for style scales calibration. It can be argued that many players adjust their style by game conditions (Go development era, handicap, komi and color, time limits, opponent) -or that styles might express differently in various game stages; -these factors should be explored by building pattern vectors more +or that styles might express differently in various game stages\rvvv{. 
These} factors should be explored by building pattern vectors more carefully than by simply considering all moves in all games of a player. Impact of handicap and uneven games on by-strength $\vec p$ distribution should be also investigated. @@ -1827,7 +1821,7 @@ We have proposed a way to extract summary pattern information from game collections and combined this with various data mining methods to show correspondence of our pattern summaries with various player -meta-information like playing strength, era of play or playing style, +meta-information \rvvv{such as} playing strength, era of play or playing style, as ranked by expert players. We have implemented and measured our proposals in two case studies: per-rank characteristics of amateur players and per-player style/era characteristics of well-known @@ -1936,17 +1930,13 @@ by R\'{e}mi Coulom's paper \cite{PatElo} on the extraction of pattern informatio % if you will not have a photo at all: \begin{IEEEbiographynophoto}{Petr Baudi\v{s}} Received M.Sc. degree in Theoretical Computer Science at Charles University, Prague in 2012. -Doing research in the fields of Computer Go, Monte Carlo Methods -and Version Control Systems. -Plays Go with the rank of 2-kyu on European tournaments -and 2-dan on the KGS Go Server. +\rvvv{He is doing} research in the fields of Computer Go, Monte Carlo Methods and Version Control Systems. +\rvvv{He plays} Go with the rank of 2-kyu at European tournaments and 2-dan on the KGS Go Server. \end{IEEEbiographynophoto} \begin{IEEEbiographynophoto}{Josef Moud\v{r}\'{i}k} -Received B.Sc. degree in Informatics at Charles University, Prague in 2009, -currently a graduate student. -Doing research in the fields of Neural Networks and Cognitive Sciences. -His Go skills are not worth mentioning. +Received B.Sc. degree in Informatics at Charles University, Prague in 2009, currently a graduate student. 
+\rvvv{He is doing} research in the fields of Neural Networks and Cognitive Sciences. His Go skills are not worth mentioning. \end{IEEEbiographynophoto} % insert where needed to balance the two columns on the last page with diff --git a/tex/makefile b/tex/makefile index 70d9653..74e6b62 100644 --- a/tex/makefile +++ b/tex/makefile @@ -1,6 +1,6 @@ gostyle.dvi: gostyle.tex gostyle.bib patcountdist.eps strength-pca.eps style-pca.eps sociomap.eps makefile rm -f gostyle.bbl - latex gostyle && bibtex gostyle && latex gostyle && latex gostyle + latex gostyle && bibtex gostyle && latex gostyle && latex gostyle && latex gostyle gostyle.pdf: gostyle.dvi dvipdf gostyle.dvi gostyle.pdf -- 2.11.4.GIT