<chapter id="testing">
<title>Writing Conformance tests</title>

<sect1 id="testing-intro">
<title>Introduction</title>
<para>
The Windows API follows no standard; it is itself a de facto
standard, and deviations from that standard, even small ones, often
cause applications to crash or misbehave in some way. Furthermore,
a conformance test suite is the most accurate (if not necessarily
the most complete) form of API documentation and can be used to
supplement the Windows API documentation.
</para>
<para>
Writing a conformance test suite for more than 10000 APIs is no small
undertaking. Fortunately it can prove very useful to the development
of Wine long before it is complete.
<itemizedlist>
<listitem>
<para>
The conformance test suite must run on Windows. This is
necessary to provide a reasonable way to verify its accuracy.
Furthermore the tests must pass successfully on all Windows
platforms (tests not relevant to a given platform should be
skipped).
</para>
<para>
A consequence of this is that the test suite will provide a
great way to detect variations in the API between different
Windows versions. For instance, this can provide insights
into the often undocumented differences between the Win9x and
NT Windows families.
</para>
<para>
However, one must remember that the goal of Wine is to run
Windows applications on Linux, not to be a clone of any specific
Windows version. So such variations must only be tested for when
relevant to that goal.
</para>
</listitem>
<listitem>
<para>
Writing conformance tests is also an easy way to discover
bugs in Wine. Of course, before fixing the bugs discovered in
this way, one must first make sure that the new tests do pass
successfully on at least one Windows 9x and one Windows NT
version.
</para>
<para>
Bugs discovered this way should also be easier to fix. Unlike
some mysterious application crashes, when a conformance test
fails, the expected behavior and the APIs being tested are known,
which greatly simplifies the diagnosis.
</para>
</listitem>
<listitem>
<para>
Conformance tests also help detect regressions: simply running
the test suite regularly in Wine turns it into a great tool for
catching them. When a test fails, one immediately knows what the
expected behavior was and which APIs are involved. Thus regressions
caught this way should be detected earlier, because it is easy to
run all tests on a regular basis, and easier to fix because of the
reduced diagnosis work.
</para>
</listitem>
<listitem>
<para>
Tests written in advance of the Wine development (possibly even
by non-Wine developers) can also simplify the work of the
future implementer by making it easier for them to check the
correctness of their code.
</para>
</listitem>
<listitem>
<para>
Conformance tests will also come in handy when testing Wine on
new (or not as widely used) platforms such as FreeBSD,
Solaris x86 or even non-x86 systems. Even when the port does
not involve any significant change in the thread management,
exception handling or other low-level aspects of Wine, new
platforms can expose subtle bugs that can be hard to
diagnose when debugging regular (complex) applications.
</para>
</listitem>
</itemizedlist>
</para>
</sect1>
<sect1 id="testing-what">
<title>What to test for?</title>
<para>
The first thing to test for is the documented behavior of APIs
such as CreateFile. For instance one can create a file using a
long pathname, check that the behavior is correct when the file
already exists, try to open the file using the corresponding short
pathname, convert the filename to Unicode and try to open it using
CreateFileW, and all other things which are documented and that
applications rely on.
</para>
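<para>
As an illustration, a check along these lines might look like the
following sketch (the file name and flags are chosen here purely for
illustration, not taken from an existing Wine test):
<screen>
static void test_create_file(void)
{
    static const WCHAR nameW[] = {'w','i','n','e','t','e','s','t','_','l','o','n','g','.','t','m','p',0};
    HANDLE hfile;

    /* create the file with the ANSI entry point, using a long (non 8.3) name */
    hfile = CreateFileA( "winetest_long.tmp", GENERIC_WRITE, 0, NULL,
                         CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, 0 );
    ok( hfile != INVALID_HANDLE_VALUE, "CreateFileA failed, error=%ld", GetLastError() );
    CloseHandle( hfile );

    /* CREATE_NEW must fail since the file now exists */
    SetLastError( 0xdeadbeef );
    hfile = CreateFileA( "winetest_long.tmp", GENERIC_WRITE, 0, NULL,
                         CREATE_NEW, FILE_ATTRIBUTE_NORMAL, 0 );
    ok( hfile == INVALID_HANDLE_VALUE, "CREATE_NEW should fail on an existing file" );
    ok( GetLastError() == ERROR_FILE_EXISTS, "expected ERROR_FILE_EXISTS, got %ld", GetLastError() );

    /* the same file should be reachable through the Unicode entry point,
     * except on Win9x where the W functions are not implemented */
    hfile = CreateFileW( nameW, GENERIC_READ, 0, NULL, OPEN_EXISTING, 0, 0 );
    ok( hfile != INVALID_HANDLE_VALUE || GetLastError() == ERROR_CALL_NOT_IMPLEMENTED,
        "CreateFileW failed, error=%ld", GetLastError() );
    if (hfile != INVALID_HANDLE_VALUE) CloseHandle( hfile );

    ok( DeleteFileA( "winetest_long.tmp" ), "failed to delete the test file" );
}
</screen>
</para>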
<para>
While the testing framework is not specifically geared towards this
type of test, it is also possible to test the behavior of Windows
messages. To do so, create a window, preferably a hidden one so that
it does not steal the focus when running the tests, and send messages
to that window or to controls in that window. Then, in the message
procedure, check that you receive the expected messages and with the
correct parameters.
</para>
<para>
For instance you could create an edit control and use WM_SETTEXT to
set its contents, possibly check length restrictions, and verify the
results using WM_GETTEXT. Similarly one could create a listbox and
check the effect of LB_DELETESTRING on the list's number of items,
selected items list, highlighted item, etc.
</para>
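<para>
A minimal sketch of such a check could look like this (the control
styles and the exact return value checks are illustrative assumptions,
not taken from an existing Wine test):
<screen>
static void test_edit_control(void)
{
    char buffer[64];
    LRESULT r;
    HWND hedit;

    /* create the control without WS_VISIBLE so it stays hidden */
    hedit = CreateWindowA( "EDIT", NULL, ES_AUTOHSCROLL, 0, 0, 100, 20,
                           NULL, NULL, GetModuleHandleA(NULL), NULL );
    ok( hedit != NULL, "failed to create the edit control, error=%ld", GetLastError() );

    r = SendMessageA( hedit, WM_SETTEXT, 0, (LPARAM)"hello world" );
    ok( r == TRUE, "WM_SETTEXT returned %ld", r );

    buffer[0] = 0;
    r = SendMessageA( hedit, WM_GETTEXT, sizeof(buffer), (LPARAM)buffer );
    ok( r == lstrlenA("hello world"), "WM_GETTEXT returned %ld", r );
    ok( !lstrcmpA(buffer, "hello world"), "got '%s'", buffer );

    DestroyWindow( hedit );
}
</screen>
</para>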
<para>
However, undocumented behavior should not be tested for unless an
application actually relies on it, in which case the test should
mention that application, or unless applications can strongly be
expected to rely on it, as is typically the case for APIs that
return the required buffer size when the buffer pointer is NULL.
</para>
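<para>
A sketch of this 'required buffer size' pattern, using
GetCurrentDirectoryA as an arbitrary example (for this particular API
the behavior happens to be documented, but the shape of the check is
the same):
<screen>
static void test_required_size(void)
{
    char buffer[MAX_PATH];
    DWORD len, needed;

    len = GetCurrentDirectoryA( sizeof(buffer), buffer );
    ok( len != 0 &amp;&amp; len &lt; sizeof(buffer),
        "GetCurrentDirectoryA failed, error=%ld", GetLastError() );

    /* with a NULL buffer the function reports the size it needs,
     * including the terminating null character */
    needed = GetCurrentDirectoryA( 0, NULL );
    ok( needed == len + 1, "expected %ld, got %ld", len + 1, needed );
}
</screen>
</para>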
</sect1>

<sect1 id="testing-wine">
<title>Running the tests in Wine</title>
<para>
The simplest way to run the tests in Wine is to type 'make test' in
the Wine sources top level directory. This will run all the Wine
conformance tests.
</para>
<para>
The tests for a specific Wine library are located in a 'tests'
directory in that library's directory. Each test is contained in a
file (e.g. <filename>dlls/kernel/tests/thread.c</>). Each
file itself contains many checks concerning one or more related APIs.
</para>
<para>
So to run all the tests related to a given Wine library, go to the
corresponding 'tests' directory and type 'make test'. This will
compile the tests, run them, and create an '<replaceable>xxx</>.ok'
file for each test that passes successfully. If you only want to
run the tests contained in the <filename>thread.c</> file of the
kernel library, you would do:
<screen>
<prompt>$ </>cd dlls/kernel/tests
<prompt>$ </>make thread.ok
</screen>
</para>
<para>
Note that if the test has already been run and is up to date (i.e. if
neither the kernel library nor the <filename>thread.c</> file has
changed since the <filename>thread.ok</> file was created), then make
will say so. To force the test to be re-run, delete the
<filename>thread.ok</> file, and run the make command again.
</para>
<para>
You can also run tests manually using a command similar to the
following:
<screen>
<prompt>$ </>../../../tools/runtest -q -M kernel32.dll -p kernel32_test.exe.so thread.c
<prompt>$ </>../../../tools/runtest -p kernel32_test.exe.so thread.c
thread.c: 86 tests executed, 5 marked as todo, 0 failures.
</screen>
The '-P wine' option defines the platform that is currently being
tested. Remove the '-q' option if you want the testing framework
to report statistics about the number of successful and failed tests.
Run <command>runtest -h</> for more details.
</para>
</sect1>
<sect1 id="cross-compiling-tests">
<title>Cross-compiling the tests with MinGW</title>
<sect2>
<title>Setup of the MinGW cross-compiling environment</title>
<para>
Here are some instructions to set up MinGW on different Linux
distributions and *BSD.
</para>
<sect3>
<title>Debian GNU/Linux</title>
<para>
On Debian all you need to do is type <command>apt-get install
mingw32</>.
</para>
</sect3>
<sect3>
<title>Red Hat Linux and other rpm-based systems</title>
<para>
This includes Fedora Core, Red Hat Enterprise Linux, Mandrake, and
most probably SuSE Linux too. The list isn't exhaustive; the
following steps should work on any rpm-based system.
</para>
<para>
Download and install the latest rpm's from
<ulink url="http://mirzam.it.vu.nl/mingw/">MinGW RPM packages</>.
Alternatively you can follow the instructions on that page and
build your own packages from the source rpm's listed there as well.
</para>
</sect3>
<sect3>
<title>*BSD</title>
<para>
The *BSD systems have in their ports collection a port for the
MinGW cross-compiling environment. Please see the documentation
of your system about how to build and install a port.
</para>
</sect3>
</sect2>
<sect2>
<title>Compiling the tests</title>
<para>
Once the cross-compiling environment is set up, generating the
Windows executables is easy with the Wine build system.
</para>
<para>
If you have already run <command>configure</>, then delete
<filename>config.cache</> and re-run <command>configure</>.
You can then run <command>make crosstest</>. To sum up:
<screen>
<prompt>$ </><userinput>rm config.cache</>
<prompt>$ </><userinput>./configure</>
<prompt>$ </><userinput>make crosstest</>
</screen>
</para>
</sect2>
</sect1>
<sect1 id="testing-windows">
<title>Building and running the tests on Windows</title>
<sect2>
<title>Using pre-compiled binaries</title>
<para>
Unfortunately there are no pre-compiled binaries yet. However, if
you send an email to the Wine development list you can probably get
someone to send them to you, and maybe motivate some kind soul to
put in place a mechanism for publishing such binaries on a regular
basis.
</para>
</sect2>
<sect2>
<title>With Visual C++</title>
<screen>
Visual Studio 6 users:
- MSVC headers may not work well, try with Wine headers
- Ensure that you have the "processor pack" from
  <ulink url="http://msdn.microsoft.com/vstudio/downloads/tools/ppack/default.aspx">http://msdn.microsoft.com/vstudio/downloads/tools/ppack/default.aspx</>
  as well as the latest service packs. The processor pack fixes <emphasis>"error C2520: conversion from unsigned
  __int64 to double not implemented, use signed __int64"</>
</screen>
<itemizedlist>
<listitem><para>
Get the Wine sources.
</para></listitem>
<listitem><para>
Run msvcmaker to generate Visual C++ project files for the tests.
'msvcmaker' is a perl script, so you may also be able to run it on
Windows.
<screen>
<prompt>$ </>./tools/winapi/msvcmaker --no-wine
</screen>
</para></listitem>
<listitem><para>
If the previous steps were done on your Linux development
machine, make the Wine sources accessible to the Windows machine
on which you are going to compile them. Typically you would do
this using Samba, but simply copying them over would work too.
</para></listitem>
<listitem><para>
On the Windows machine, open the <filename>winetest.dsw</>
workspace. This will load each test's project. For each test there
are two configurations: one compiles the test with the Wine
headers, and the other uses the Visual C++ headers. Some tests
will compile fine with the former, but most will require the
latter.
</para></listitem>
<listitem><para>
Open the <menuchoice><guimenu>Build</> <guimenu>Batch
build...</></> menu and select the tests and build configurations
you want to build. Then click on <guibutton>Build</>.
</para></listitem>
<listitem><para>
To run a specific test from Visual C++, go to
<menuchoice><guimenu>Project</> <guimenu>Settings...</></>. There
select that test's project and build configuration and go to the
<guilabel>Debug</> tab. There type the name of the specific test
to run (e.g. 'thread') in the <guilabel>Program arguments</>
field. Validate your change by clicking on <guibutton>Ok</> and
start the test by clicking the red exclamation mark (or hitting
'F5' or any other usual method).
</para></listitem>
<listitem><para>
You can also run the tests from the command line. You will find
them in either <filename>Output\Win32_Wine_Headers</> or
<filename>Output\Win32_MSVC_Headers</> depending on the build
method. So to run the kernel 'path' tests you would do:
<screen>
<prompt>C:\&gt;</>cd dlls\kernel\tests\Output\Win32_MSVC_Headers
<prompt>C:\dlls\kernel\tests\Output\Win32_MSVC_Headers&gt;</>kernel32_test path
</screen>
</para></listitem>
</itemizedlist>
</sect2>
<sect2>
<title>With MinGW</title>
<para>
Wine's build system already has support for building tests with a MinGW
cross-compiler. See the section above called 'Setup of the MinGW
cross-compiling environment' for instructions on how to set things up.
When you have a MinGW environment installed all you need to do is rerun
configure and it should detect the MinGW compiler and tools. Then run
'make crosstest' to start building the tests.
</para>
</sect2>
</sect1>
<sect1 id="testing-test">
<title>Inside a test</title>
<para>
When writing new checks you can either modify an existing test file or
add a new one. If your tests are related to the tests performed by an
existing file, then add them to that file. Otherwise create a new .c
file in the tests directory and add that file to the
<varname>CTESTS</> variable in <filename>Makefile.in</>.
</para>
<para>
A new test file will look something like the following:
<screen>
#include &lt;wine/test.h&gt;
#include &lt;winbase.h&gt;

/* Maybe auxiliary functions and definitions here */

START_TEST(paths)
{
    /* Write your checks there, or put them in functions you will call
     * from there.
     */
}
</screen>
</para>
<para>
The test's entry point is the START_TEST section. This is where
execution will start. You can put all your tests in that section but
it may be better to split related checks into functions you will call
from the START_TEST section. The parameter to START_TEST must match
the name of the C file. So in the above example the C file would be
called <filename>paths.c</>.
</para>
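<para>
For instance, a <filename>paths.c</> test split into several functions
might be organized as follows (the function names are of course just
an example):
<screen>
static void test_long_paths(void)
{
    /* checks related to long path names */
}

static void test_short_paths(void)
{
    /* checks related to short (8.3) path names */
}

START_TEST(paths)
{
    test_long_paths();
    test_short_paths();
}
</screen>
</para>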
<para>
Tests should start by including the <filename>wine/test.h</> header.
This header will provide you access to all the testing framework
functions. You can then include the Windows headers you need, but make
sure not to include any Unix or Wine specific headers: tests must
compile on Windows.
</para>
<para>
You can use <function>trace</> to print informational messages. Note
that these messages will only be printed if 'runtest -v' is being used.
<screen>
trace("testing GlobalAddAtomA");
trace("foo=%d",foo);
</screen>
</para>
<para>
Then just call functions and use <function>ok</> to make sure that
they behaved as expected:
<screen>
ATOM atom = GlobalAddAtomA( "foobar" );
ok( GlobalFindAtomA( "foobar" ) == atom, "could not find atom foobar" );
ok( GlobalFindAtomA( "FOOBAR" ) == atom, "could not find atom FOOBAR" );
</screen>
The first parameter of <function>ok</> is an expression which must
evaluate to true if the test was successful. The next parameter is a
printf-compatible format string which is displayed in case the test
failed, and the following optional parameters depend on the format
string.
</para>
</sect1>
<sect1 id="testing-error-messages">
<title>Writing good error messages</title>
<para>
The message that is printed when a test fails is
<emphasis>extremely</> important.
</para>
<para>
Someone will take your test, run it on a Windows platform that
you don't have access to, and discover that it fails. They will then
post an email with the output of the test, and in particular your
error message. Someone, maybe you, will then have to figure out from
this error message why the test failed.
</para>
<para>
If the error message contains all the relevant information, that will
be easy. If not, then it will require modifying the test, finding
someone to compile it on Windows, sending the modified version to the
original tester and waiting for their reply. In other words, it will
be long and painful.
</para>
<para>
So how do you write a good error message? Let's start with an example
of a bad error message:
<screen>
ok(GetThreadPriorityBoost(curthread,&amp;disabled)!=0,
   "GetThreadPriorityBoost Failed");
</screen>
This will yield:
<screen>
thread.c:123: Test failed: GetThreadPriorityBoost Failed
</screen>
</para>
<para>
Did you notice how the error message provides no information about
why the test failed? We already know from the line number exactly
which test failed. In fact the error message gives no information at
all that cannot already be obtained by reading the code. In
other words it provides no more information than an empty string!
</para>
<para>
Let's look at how to rewrite it:
<screen>
BOOL rc;

rc=GetThreadPriorityBoost(curthread,&amp;disabled);
ok(rc!=0 &amp;&amp; disabled==0,"rc=%d error=%ld disabled=%d",
   rc,GetLastError(),disabled);
</screen>
This will yield:
<screen>
thread.c:123: Test failed: rc=0 error=120 disabled=0
</screen>
</para>
<para>
When receiving such a message, one would check the source, see that
it's a call to GetThreadPriorityBoost, that the test failed not
because the API returned the wrong value, but because it returned an
error code. Furthermore we see that GetLastError() returned 120 which
winerror.h defines as ERROR_CALL_NOT_IMPLEMENTED. So the source of
the problem is obvious: this Windows platform (here Windows 98) does
not support this API and thus the test must be modified to detect
such a condition and skip the test.
</para>
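<para>
A sketch of how the test could be modified to handle this case (the
exact condition checked here is an assumption based on the error code
seen above):
<screen>
BOOL rc, disabled;

SetLastError( 0xdeadbeef );
rc = GetThreadPriorityBoost( curthread, &amp;disabled );
if (!rc &amp;&amp; GetLastError() == ERROR_CALL_NOT_IMPLEMENTED)
{
    /* this platform (e.g. Windows 98) does not implement the API, skip the check */
    trace( "GetThreadPriorityBoost is not implemented, skipping" );
    return;
}
ok( rc != 0 &amp;&amp; disabled == 0, "rc=%d error=%ld disabled=%d",
    rc, GetLastError(), disabled );
</screen>
</para>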
<para>
So a good error message should provide all the information which
cannot be obtained by reading the source, typically the function
return value, error codes, and any function output parameter. Even if
more information is needed to fully understand a problem,
systematically providing the above is easy and will help cut down the
number of iterations required to get to a resolution.
</para>
<para>
It may also be a good idea to dump items that may be hard to retrieve
from the source, like the expected value in a test if it is the
result of an earlier computation, or comes from a large array of test
values (e.g. index 112 of _pTestStrA in vartest.c). In that respect,
for some tests you may want to define a macro such as the following:
<screen>
#define eq(received, expected, label, type) \
        ok((received) == (expected), "%s: got " type " instead of " type, (label),(received),(expected))

eq( b, curr_val, "SPI_{GET,SET}BEEP", "%d" );
</screen>
</para>
</sect1>
<sect1 id="testing-platforms">
<title>Handling platform issues</title>
<para>
Some checks may be written before they pass successfully in Wine.
Without some mechanism, such checks would potentially generate
hundreds of known failures for months each time the tests are run.
This would make it hard to detect new failures caused by a regression,
or to detect that a patch fixed a long standing issue.
</para>
<para>
Thus the Wine testing framework has the concept of platforms, and
groups of checks can be declared as expected to fail on some of them.
In the most common case, one would declare a group of tests as
expected to fail in Wine. To do so, use the following construct:
<screen>
todo_wine {
    SetLastError( 0xdeadbeef );
    ok( GlobalAddAtomA(0) == 0 &amp;&amp; GetLastError() == 0xdeadbeef, "failed to add atom 0" );
}
</screen>
On Windows the above check would be performed normally, but on Wine it
would be expected to fail, and not cause the failure of the whole
test. However, if that check were to succeed in Wine, it would
cause the test to fail, thus making it easy to detect when something
has changed that fixes a bug. Also note that todo checks are counted
separately from regular checks so that the testing statistics remain
meaningful. Finally, note that todo sections can be nested so that if
a test only fails on the cygwin and reactos platforms, one would
write:
<screen>
todo("cygwin") {
    todo("reactos") {
        /* checks expected to fail on both platforms */
    }
}
</screen>
<!-- FIXME: Would we really have platforms such as reactos, cygwin, freebsd & co? -->
But specific platforms should not be nested inside a todo_wine section
since that would be redundant.
</para>
<para>
When writing tests you will also encounter differences between Windows
9x and Windows NT platforms. Such differences should be treated
differently from the platform issues mentioned above. In particular
you should remember that the goal of Wine is not to be a clone of any
specific Windows version but to run Windows applications on Unix.
</para>
<para>
So, if an API returns a different error code on Windows 9x and
Windows NT, your check should just verify that Wine returns one or
the other:
<screen>
ok ( GetLastError() == WIN9X_ERROR || GetLastError() == NT_ERROR, ...);
</screen>
</para>
<para>
If an API is only present on some Windows platforms, then use
LoadLibrary and GetProcAddress to check whether it is implemented
before invoking it. Remember, tests must run on all Windows platforms.
Similarly, conformance tests should not try to correlate the Windows
version returned by GetVersion with whether given APIs are
implemented or not. Again, the goal of Wine is to run Windows
applications (which do not do such checks), and not to be a clone of a
specific Windows version.
</para>
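<para>
For instance, a check for an API that is missing on some platforms
could be written along the following lines (a minimal sketch; the cast
through void * is a common idiom but the details are illustrative, and
kernel32 being always loaded, GetModuleHandleA is enough here whereas
other DLLs would need LoadLibrary):
<screen>
static void test_thread_priority_boost(void)
{
    BOOL (WINAPI *pGetThreadPriorityBoost)(HANDLE, PBOOL);
    BOOL rc, disabled;

    pGetThreadPriorityBoost = (void *)GetProcAddress( GetModuleHandleA("kernel32.dll"),
                                                      "GetThreadPriorityBoost" );
    if (!pGetThreadPriorityBoost)
    {
        trace( "GetThreadPriorityBoost is not available on this platform" );
        return;
    }

    rc = pGetThreadPriorityBoost( GetCurrentThread(), &amp;disabled );
    ok( rc != 0, "GetThreadPriorityBoost failed, error=%ld", GetLastError() );
}
</screen>
</para>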
<!--para>
FIXME: What about checks that cause the process to crash due to a bug?
</para-->
</sect1>

<!-- FIXME: Strategies for testing threads, testing network stuff,
file handling, eq macro... -->

</chapter>
<!-- Keep this comment at the end of the file
Local variables:
mode: sgml
sgml-parent-document:("wine-devel.sgml" "set" "book" "part" "chapter" "")
End:
-->