<chapter id="testing">
<title>Writing Conformance tests</title>

<para>
Note: This part of the documentation is still very much a work in
progress and is in no way complete.
</para>

<sect1 id="testing-intro">
<title>Introduction</title>
<para>
The Windows API follows no formal standard; it is itself a de facto
standard, and deviations from that standard, even small ones, often
cause applications to crash or misbehave in some way. Furthermore,
a conformance test suite is the most accurate (if not necessarily
the most complete) form of API documentation and can be used to
supplement the Windows API documentation.
</para>
<para>
Writing a conformance test suite for more than 10000 APIs is no small
undertaking. Fortunately, it can prove very useful to the development
of Wine long before it is complete.
<itemizedlist>
<listitem>
<para>
The conformance test suite must run on Windows. This is
necessary to provide a reasonable way to verify its accuracy.
Furthermore, the tests must pass successfully on all Windows
platforms (tests not relevant to a given platform should be
skipped).
</para>
<para>
A consequence of this is that the test suite will provide a
great way to detect variations in the API between different
Windows versions. For instance, this can provide insights
into the often undocumented differences between the Win9x and
NT Windows families.
</para>
<para>
However, one must remember that the goal of Wine is to run
Windows applications on Linux, not to be a clone of any specific
Windows version. So such variations must only be tested for when
relevant to that goal.
</para>
</listitem>
<listitem>
<para>
Writing conformance tests is also an easy way to discover
bugs in Wine. Of course, before fixing the bugs discovered in
this way, one must first make sure that the new tests do pass
successfully on at least one Windows 9x and one Windows NT
version.
</para>
<para>
Bugs discovered this way should also be easier to fix. Unlike
some mysterious application crashes, when a conformance test
fails, the expected behavior and the APIs being tested are known,
thus greatly simplifying the diagnosis.
</para>
</listitem>
<listitem>
<para>
To detect regressions. Simply running the test suite regularly
in Wine turns it into a great tool to detect regressions.
When a test fails, one immediately knows what the expected
behavior was and which APIs are involved. Thus regressions caught
this way should be detected earlier, because it is easy to run
all tests on a regular basis, and be easier to fix because of the
reduced diagnosis work.
</para>
</listitem>
<listitem>
<para>
Tests written in advance of the Wine development (possibly even
by non-Wine developers) can also simplify the work of the
future implementer by making it easier to check the
correctness of the new code.
</para>
</listitem>
<listitem>
<para>
Conformance tests will also come in handy when testing Wine on
new (or not as widely used) architectures such as FreeBSD,
Solaris x86 or even non-x86 systems. Even when the port does
not involve any significant change in the thread management,
exception handling or other low-level aspects of Wine, new
architectures can expose subtle bugs that can be hard to
diagnose when debugging regular (complex) applications.
</para>
</listitem>
</itemizedlist>
</para>
</sect1>

<sect1 id="testing-what">
<title>What to test for?</title>
<para>
The first thing to test for is the documented behavior of APIs,
such as CreateFile. For instance one can create a file using a
long pathname, check that the behavior is correct when the file
already exists, try to open the file using the corresponding short
pathname, convert the filename to Unicode and try to open it using
CreateFileW, and all other things which are documented and that
applications rely on.
</para>
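<para>
For instance, a minimal sketch of such checks could look like the
following (the file name is arbitrary, and the error code checked is
the one documented for CREATE_NEW on an existing file):
<screen>
HANDLE file = CreateFileA( "winetest.tmp", GENERIC_WRITE, 0, NULL,
                           CREATE_NEW, FILE_ATTRIBUTE_NORMAL, NULL );
ok( file != INVALID_HANDLE_VALUE, "CreateFileA failed with error %lu", GetLastError() );
CloseHandle( file );

/* creating the same file again with CREATE_NEW is documented to fail */
file = CreateFileA( "winetest.tmp", GENERIC_WRITE, 0, NULL,
                    CREATE_NEW, FILE_ATTRIBUTE_NORMAL, NULL );
ok( file == INVALID_HANDLE_VALUE, "CREATE_NEW succeeded on an existing file" );
ok( GetLastError() == ERROR_FILE_EXISTS, "expected ERROR_FILE_EXISTS, got %lu", GetLastError() );
DeleteFileA( "winetest.tmp" );
</screen>
</para>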
<para>
While the testing framework is not specifically geared towards this
type of test, it is also possible to test the behavior of Windows
messages. To do so, create a window, preferably a hidden one so that
it does not steal the focus when running the tests, and send messages
to that window or to controls in that window. Then, in the message
procedure, check that you receive the expected messages and with the
correct parameters.
</para>
<para>
For instance you could create an edit control and use WM_SETTEXT to
set its contents, possibly check length restrictions, and verify the
results using WM_GETTEXT. Similarly one could create a listbox and
check the effect of LB_DELETESTRING on the list's number of items,
selected items list, highlighted item, etc.
</para>
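<para>
A sketch of such a check on an edit control might look like this
(the control is created hidden as suggested above, the text used is
arbitrary, and the usual Windows headers are assumed to be included):
<screen>
char buffer[16];
HWND hwnd = CreateWindowA( "EDIT", NULL, WS_POPUP, 0, 0, 100, 20,
                           NULL, NULL, NULL, NULL );

SendMessageA( hwnd, WM_SETTEXT, 0, (LPARAM)"foobar" );
ok( SendMessageA( hwnd, WM_GETTEXTLENGTH, 0, 0 ) == 6, "unexpected text length" );
SendMessageA( hwnd, WM_GETTEXT, sizeof(buffer), (LPARAM)buffer );
ok( !lstrcmpA( buffer, "foobar" ), "got '%s' instead of 'foobar'", buffer );
DestroyWindow( hwnd );
</screen>
</para>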
<para>
However, undocumented behavior should not be tested for unless there
is an application that relies on this behavior, in which case the
test should mention that application, or unless one can strongly
expect applications to rely on this behavior, as is typically the case
for APIs that return the required buffer size when the buffer pointer
is NULL.
</para>
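<para>
For instance, GetCurrentDirectoryA documents that, when called with a
zero size and a NULL buffer, it returns the required buffer size
including the terminating null character. A sketch of such a check
(assuming the current directory fits in MAX_PATH) could be:
<screen>
char buffer[MAX_PATH];
DWORD size = GetCurrentDirectoryA( 0, NULL );
DWORD len = GetCurrentDirectoryA( sizeof(buffer), buffer );
ok( size == len + 1, "got a required size of %lu for a %lu character path", size, len );
</screen>
</para>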
</sect1>

<sect1 id="testing-perl-vs-c">
<title>Why have both Perl and C tests?</title>
<para>
</para>
</sect1>

<sect1 id="testing-running">
<title>Running the tests in Wine</title>
<para>
The simplest way to run the tests in Wine is to type 'make test' in
the Wine sources top-level directory. This will run all the Wine
conformance tests.
</para>
<para>
The tests for a specific Wine library are located in a 'tests'
directory in that library's directory. Each test is contained in a
file, either a '.pl' file (e.g. <filename>dlls/kernel/tests/atom.pl</>)
for a test written in perl, or a '.c' file (e.g.
<filename>dlls/kernel/tests/thread.c</>) for a test written in C. Each
file itself contains many checks concerning one or more related APIs.
</para>
<para>
So to run all the tests related to a given Wine library, go to the
corresponding 'tests' directory and type 'make test'. This will
compile the C tests, run the tests, and create an
'<replaceable>xxx</>.ok' file for each test that passes successfully.
If you only want to run the tests contained in the
<filename>thread.c</> file of the kernel library, you would do:
<screen>
<prompt>$ </>cd dlls/kernel/tests
<prompt>$ </>make thread.ok
</screen>
</para>
<para>
Note that if the test has already been run and is up to date (i.e. if
neither the kernel library nor the <filename>thread.c</> file has
changed since the <filename>thread.ok</> file was created), then make
will say so. To force the test to be re-run, delete the
<filename>thread.ok</> file, and run the make command again.
</para>
<para>
You can also run tests manually using a command similar to the
following:
<screen>
<prompt>$ </>runtest -q -P wine -M kernel32.dll -p kernel32_test.exe.so thread.c
<prompt>$ </>runtest -P wine -p kernel32_test.exe.so thread.c
thread.c: 86 tests executed, 5 marked as todo, 0 failures.
</screen>
The '-P wine' option defines the platform that is currently being
tested; the '-q' option causes the testing framework not to report
statistics about the number of successful and failed tests. Run
<command>runtest -h</> for more details.
</para>
</sect1>

<sect1 id="testing-c-test">
<title>Inside a C test</title>

<para>
When writing new checks you can either modify an existing test file or
add a new one. If your tests are related to the tests performed by an
existing file, then add them to that file. Otherwise create a new .c
file in the tests directory and add that file to the
<varname>CTESTS</> variable in <filename>Makefile.in</>.
</para>
<para>
A new test file will look something like the following:
<screen>
#include &lt;wine/test.h&gt;
#include &lt;winbase.h&gt;

/* Maybe auxiliary functions and definitions here */

START_TEST(paths)
{
    /* Write your checks there or put them in functions you will call from
     * there
     */
}
</screen>
</para>
<para>
The test's entry point is the START_TEST section. This is where
execution will start. You can put all your tests in that section, but
it may be better to split related checks into functions that you call
from the START_TEST section. The parameter to START_TEST must match
the name of the C file. So in the above example the C file would be
called <filename>paths.c</>.
</para>
<para>
Tests should start by including the <filename>wine/test.h</> header.
This header will provide you access to all the testing framework
functions. You can then include the Windows headers you need, but make
sure not to include any Unix or Wine specific header: tests must
compile on Windows.
</para>
<!-- FIXME: Can we include windows.h now? We should be able to but currently __WINE__ is defined thus making it impossible. -->
<!-- FIXME: Add recommendations about what to print in case of a failure: be informative -->
<para>
You can use <function>trace</> to print informational messages. Note
that these messages will only be printed if 'runtest -v' is being used.
<screen>
trace("testing GlobalAddAtomA");
trace("foo=%d",foo);
</screen>
<!-- FIXME: Make sure trace supports %d... -->
</para>
<para>
Then just call functions and use <function>ok</> to make sure that
they behaved as expected:
<screen>
ATOM atom = GlobalAddAtomA( "foobar" );
ok( GlobalFindAtomA( "foobar" ) == atom, "could not find atom foobar" );
ok( GlobalFindAtomA( "FOOBAR" ) == atom, "could not find atom FOOBAR" );
</screen>
The first parameter of <function>ok</> is an expression which must
evaluate to true if the test was successful. The next parameter is a
printf-compatible format string which is displayed in case the test
failed, and the following optional parameters depend on the format
string.
</para>
<para>
It is important to display an informative message when a test fails:
a good error message will help the Wine developer identify exactly
what went wrong without having to add too many other printfs. For
instance it may be useful to print the error code if relevant, or the
expected and effective values. In that respect, for some tests
you may want to define a macro such as the following:
<screen>
#define eq(received, expected, label, type) \
        ok((received) == (expected), "%s: got " type " instead of " type, (label),(received),(expected))

eq( b, curr_val, "SPI_{GET,SET}BEEP", "%d" );
</screen>
</para>
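<para>
For instance, extending the atom example above, the failure messages
could include both the mismatching values and the last error code
(a minimal sketch):
<screen>
ATOM atom = GlobalAddAtomA( "foobar" );
ok( atom != 0, "GlobalAddAtomA failed with error %lu", GetLastError() );
ok( GlobalFindAtomA( "foobar" ) == atom,
    "GlobalFindAtomA returned %x instead of %x", GlobalFindAtomA( "foobar" ), atom );
GlobalDeleteAtom( atom );
</screen>
</para>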
</sect1>

<sect1 id="testing-platforms">
<title>Handling platform issues</title>
<para>
Some checks may be written before they pass successfully in Wine.
Without some mechanism, such checks would potentially generate
hundreds of known failures each time the tests are run, for months on
end. This would make it hard to detect new failures caused by a
regression, or to detect that a patch fixed a long-standing issue.
</para>
<para>
Thus the Wine testing framework has the concept of platforms, and
groups of checks can be declared as expected to fail on some of them.
In the most common case, one would declare a group of tests as
expected to fail in Wine. To do so, use the following construct:
<screen>
todo_wine {
    SetLastError( 0xdeadbeef );
    ok( GlobalAddAtomA(0) == 0 && GetLastError() == 0xdeadbeef, "failed to add atom 0" );
}
</screen>
On Windows the above check would be performed normally, but on Wine it
would be expected to fail, and not cause the failure of the whole
test. However, if that check were to succeed in Wine, it would
cause the test to fail, thus making it easy to detect when something
has changed that fixes a bug. Also note that todo checks are counted
separately from regular checks so that the testing statistics remain
meaningful. Finally, note that todo sections can be nested so that if
a test only fails on the cygwin and reactos platforms, one would
write:
<screen>
todo("cygwin") {
    todo("reactos") {
        /* checks expected to fail on both platforms */
    }
}
</screen>
<!-- FIXME: Would we really have platforms such as reactos, cygwin, freebsd & co? -->
But specific platforms should not be nested inside a todo_wine section
since that would be redundant.
</para>
<para>
When writing tests you will also encounter differences between Windows
9x and Windows NT platforms. Such differences should be treated
differently from the platform issues mentioned above. In particular
you should remember that the goal of Wine is not to be a clone of any
specific Windows version but to run Windows applications on Unix.
</para>
<para>
So, if an API returns a different error code on Windows 9x and
Windows NT, your check should just verify that Wine returns one or
the other:
<screen>
ok( GetLastError() == WIN9X_ERROR || GetLastError() == NT_ERROR, ...);
</screen>
</para>
<para>
If an API is only present on some Windows platforms, then use
LoadLibrary and GetProcAddress to check whether it is implemented
before invoking it. Remember, tests must run on all Windows platforms.
Similarly, conformance tests should not try to correlate the Windows
version returned by GetVersion with whether given APIs are
implemented or not. Again, the goal of Wine is to run Windows
applications (which do not do such checks), and not to be a clone of a
specific Windows version.
</para>
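<para>
A sketch of this pattern, using GetDiskFreeSpaceExA (which is missing
on the original Windows 95 release) as an example of an optional API:
<screen>
typedef BOOL (WINAPI *fnGetDiskFreeSpaceExA)( LPCSTR, PULARGE_INTEGER,
                                              PULARGE_INTEGER, PULARGE_INTEGER );
fnGetDiskFreeSpaceExA pGetDiskFreeSpaceExA;
HMODULE kernel32 = LoadLibraryA( "kernel32.dll" );

pGetDiskFreeSpaceExA = (fnGetDiskFreeSpaceExA)GetProcAddress( kernel32, "GetDiskFreeSpaceExA" );
if (pGetDiskFreeSpaceExA)
{
    /* checks calling pGetDiskFreeSpaceExA go here */
}
else
    trace( "GetDiskFreeSpaceExA is not available, skipping its checks" );
FreeLibrary( kernel32 );
</screen>
</para>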
<para>
FIXME: What about checks that cause the process to crash due to a bug?
</para>
</sect1>

<!-- FIXME: Strategies for testing threads, testing network stuff,
file handling, eq macro... -->

</chapter>

<!-- Keep this comment at the end of the file
Local variables:
mode: sgml
sgml-parent-document:("wine-doc.sgml" "set" "book" "part" "chapter" "")
End:
-->