.. _development_coding:

Getting the code right
======================

While there is much to be said for a solid and community-oriented design
process, the proof of any kernel development project is in the resulting
code. It is the code which will be examined by other developers and merged
(or not) into the mainline tree. So it is the quality of this code which
will determine the ultimate success of the project.

This section will examine the coding process. We'll start with a look at a
number of ways in which kernel developers can go wrong. Then the focus
will shift toward doing things right and the tools which can help in that
quest.

Pitfalls
--------

Coding style
************

The kernel has long had a standard coding style, described in
:ref:`Documentation/process/coding-style.rst <codingstyle>`. For much of
that time, the policies described in that file were taken as being, at most,
advisory. As a result, there is a substantial amount of code in the kernel
which does not meet the coding style guidelines. The presence of that code
leads to two independent hazards for kernel developers.

The first of these is to believe that the kernel coding standards do not
matter and are not enforced. The truth of the matter is that adding new
code to the kernel is very difficult if that code is not coded according to
the standard; many developers will request that the code be reformatted
before they will even review it. A code base as large as the kernel
requires some uniformity of code to make it possible for developers to
quickly understand any part of it. So there is no longer room for
strangely-formatted code.

Occasionally, the kernel's coding style will run into conflict with an
employer's mandated style. In such cases, the kernel's style will have to
win before the code can be merged. Putting code into the kernel means
giving up a degree of control in a number of ways - including control over
how the code is formatted.

The other trap is to assume that code which is already in the kernel is
urgently in need of coding style fixes. Developers may start to generate
reformatting patches as a way of gaining familiarity with the process, or
as a way of getting their name into the kernel changelogs - or both. But
pure coding style fixes are seen as noise by the development community;
they tend to get a chilly reception. So this type of patch is best
avoided. It is natural to fix the style of a piece of code while working
on it for other reasons, but coding style changes should not be made for
their own sake.

The coding style document also should not be read as an absolute law which
can never be transgressed. If there is a good reason to go against the
style (a line which becomes far less readable if split to fit within the
80-column limit, for example), just do it.

Note that you can also use the ``clang-format`` tool to help you with
these rules, to quickly re-format parts of your code automatically,
and to review full files in order to spot coding style mistakes,
typos and possible improvements. It is also handy for sorting ``#includes``,
for aligning variables/macros, for reflowing text and other similar tasks.
See the file :ref:`Documentation/process/clang-format.rst <clangformat>`
for more details.

Abstraction layers
******************

Computer Science professors teach students to make extensive use of
abstraction layers in the name of flexibility and information hiding.
Certainly the kernel makes extensive use of abstraction; no project
involving several million lines of code could do otherwise and survive.
But experience has shown that excessive or premature abstraction can be
just as harmful as premature optimization. Abstraction should be used to
the level required and no further.

At a simple level, consider a function which has an argument which is
always passed as zero by all callers. One could retain that argument just
in case somebody eventually needs to use the extra flexibility that it
provides. By that time, though, chances are good that the code which
implements this extra argument has been broken in some subtle way which was
never noticed - because it has never been used. Or, when the need for
extra flexibility arises, it does not do so in a way which matches the
programmer's early expectation. Kernel developers will routinely submit
patches to remove unused arguments; they should, in general, not be added
in the first place.
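
As a purely hypothetical sketch (the structure, function, and flag names
below are all invented for illustration), a helper whose ``flags`` argument
is always passed as zero carries a dead, untested code path; the usual
response is a patch which simply removes the argument::

    /* Before: every caller passes 0, so the FROB_FAST branch never runs. */
    int frob_device(struct frob_dev *dev, unsigned int flags)
    {
            if (flags & FROB_FAST)          /* dead code in practice */
                    return frob_device_fast(dev);
            return frob_device_slow(dev);
    }

    /* After: the speculative flexibility is gone until somebody needs it. */
    int frob_device(struct frob_dev *dev)
    {
            return frob_device_slow(dev);
    }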

Abstraction layers which hide access to hardware - often to allow the bulk
of a driver to be used with multiple operating systems - are especially
frowned upon. Such layers obscure the code and may impose a performance
penalty; they do not belong in the Linux kernel.

On the other hand, if you find yourself copying significant amounts of code
from another kernel subsystem, it is time to ask whether it would, in fact,
make sense to pull out some of that code into a separate library or to
implement that functionality at a higher level. There is no value in
replicating the same code throughout the kernel.

#ifdef and preprocessor use in general
**************************************

The C preprocessor seems to present a powerful temptation to some C
programmers, who see it as a way to efficiently encode a great deal of
flexibility into a source file. But the preprocessor is not C, and heavy
use of it results in code which is much harder for others to read and
harder for the compiler to check for correctness. Heavy preprocessor use
is almost always a sign of code which needs some cleanup work.

Conditional compilation with #ifdef is, indeed, a powerful feature, and it
is used within the kernel. But there is little desire to see code which is
sprinkled liberally with #ifdef blocks. As a general rule, #ifdef use
should be confined to header files whenever possible.
Conditionally-compiled code can be confined to functions which, if the code
is not to be present, simply become empty. The compiler will then quietly
optimize out the call to the empty function. The result is far cleaner
code which is easier to follow.
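
A minimal sketch of that pattern (the configuration option and function
names are invented for illustration): the #ifdef lives in a header, and
when the feature is disabled the stub compiles to nothing, so the callers
stay free of conditional compilation::

    /* In a header file, assuming a hypothetical CONFIG_FROBNICATE option. */
    #ifdef CONFIG_FROBNICATE
    void frobnicate_device(struct device *dev);
    #else
    static inline void frobnicate_device(struct device *dev) { }
    #endif

    /* Callers need no #ifdef; the empty stub is optimized away. */
    void probe_one(struct device *dev)
    {
            frobnicate_device(dev);
    }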

C preprocessor macros present a number of hazards, including possible
multiple evaluation of expressions with side effects and no type safety.
If you are tempted to define a macro, consider creating an inline function
instead. The code which results will be the same, but inline functions are
easier to read, do not evaluate their arguments multiple times, and allow
the compiler to perform type checking on the arguments and return value.
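
As a small, generic illustration (not a macro taken from the kernel tree):
the macro below evaluates its argument twice, so a call like
``SQUARE(i++)`` misbehaves, while the inline function evaluates its
argument exactly once and is type-checked::

    #define SQUARE(x)       ((x) * (x))     /* SQUARE(i++) increments i twice */

    static inline int square(int x)
    {
            return x * x;                   /* argument evaluated only once */
    }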

Inline functions
****************

Inline functions present a hazard of their own, though. Programmers can
become enamored of the perceived efficiency inherent in avoiding a function
call and fill a source file with inline functions. Those functions,
however, can actually reduce performance. Since their code is replicated
at each call site, they end up bloating the size of the compiled kernel.
That, in turn, creates pressure on the processor's memory caches, which can
slow execution dramatically. Inline functions, as a rule, should be quite
small and relatively rare. The cost of a function call, after all, is not
that high; the creation of large numbers of inline functions is a classic
example of premature optimization.

In general, kernel programmers ignore cache effects at their peril. The
classic time/space tradeoff taught in beginning data structures classes
often does not apply to contemporary hardware. Space *is* time, in that a
larger program will run slower than one which is more compact.

More recent compilers take an increasingly active role in deciding whether
a given function should actually be inlined or not. So the liberal
placement of "inline" keywords may not just be excessive; it could also be
irrelevant.

Locking
*******

In May, 2006, the "Devicescape" networking stack was, with great
fanfare, released under the GPL and made available for inclusion in the
mainline kernel. This donation was welcome news; support for wireless
networking in Linux was considered substandard at best, and the Devicescape
stack offered the promise of fixing that situation. Yet, this code did not
actually make it into the mainline until June, 2007 (2.6.22). What
happened?

This code showed a number of signs of having been developed behind
corporate doors. But one large problem in particular was that it was not
designed to work on multiprocessor systems. Before this networking stack
(now called mac80211) could be merged, a locking scheme needed to be
retrofitted onto the code.

Once upon a time, Linux kernel code could be developed without thinking
about the concurrency issues presented by multiprocessor systems. Now,
however, this document is being written on a dual-core laptop. Even on
single-processor systems, work being done to improve responsiveness will
raise the level of concurrency within the kernel. The days when kernel
code could be written without thinking about locking are long past.

Any resource (data structures, hardware registers, etc.) which could be
accessed concurrently by more than one thread must be protected by a lock.
New code should be written with this requirement in mind; retrofitting
locking after the fact is a rather more difficult task. Kernel developers
should take the time to understand the available locking primitives well
enough to pick the right tool for the job. Code which shows a lack of
attention to concurrency will have a difficult path into the mainline.
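
A minimal sketch of designing the locking in from the start (the structure
and function are invented, though the spinlock and list primitives are the
real ones): every access to the shared list goes through the lock that is
documented as protecting it::

    #include <linux/spinlock.h>
    #include <linux/list.h>

    struct frob_queue {
            spinlock_t lock;                /* protects the items list */
            struct list_head items;
    };

    static void frob_queue_add(struct frob_queue *q, struct list_head *item)
    {
            spin_lock(&q->lock);
            list_add_tail(item, &q->items);
            spin_unlock(&q->lock);
    }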

Regressions
***********

One final hazard worth mentioning is this: it can be tempting to make a
change (which may bring big improvements) which causes something to break
for existing users. This kind of change is called a "regression," and
regressions have become most unwelcome in the mainline kernel. With few
exceptions, changes which cause regressions will be backed out if the
regression cannot be fixed in a timely manner. Far better to avoid the
regression in the first place.

It is often argued that a regression can be justified if it causes things
to work for more people than it creates problems for. Why not make a
change if it brings new functionality to ten systems for each one it
breaks? The best answer to this question was expressed by Linus in July,
2007::

    So we don't fix bugs by introducing new problems. That way lies
    madness, and nobody ever knows if you actually make any real
    progress at all. Is it two steps forwards, one step back, or one
    step forward and two steps back?

(http://lwn.net/Articles/243460/).

An especially unwelcome type of regression is any sort of change to the
user-space ABI. Once an interface has been exported to user space, it must
be supported indefinitely. This fact makes the creation of user-space
interfaces particularly challenging: since they cannot be changed in
incompatible ways, they must be done right the first time. For this
reason, a great deal of thought, clear documentation, and wide review for
user-space interfaces is always required.

Code checking tools
-------------------

For now, at least, the writing of error-free code remains an ideal that few
of us can reach. What we can hope to do, though, is to catch and fix as
many of those errors as possible before our code goes into the mainline
kernel. To that end, the kernel developers have put together an impressive
array of tools which can catch a wide variety of obscure problems in an
automated way. Any problem caught by the computer is a problem which will
not afflict a user later on, so it stands to reason that the automated
tools should be used whenever possible.

The first step is simply to heed the warnings produced by the compiler.
Contemporary versions of gcc can detect (and warn about) a large number of
potential errors. Quite often, these warnings point to real problems.
Code submitted for review should, as a rule, not produce any compiler
warnings. When silencing warnings, take care to understand the real cause
and try to avoid "fixes" which make the warning go away without addressing
its cause.

Note that not all compiler warnings are enabled by default. Build the
kernel with "make EXTRA_CFLAGS=-W" to get the full set.

The kernel provides several configuration options which turn on debugging
features; most of these are found in the "kernel hacking" submenu. Several
of these options should be turned on for any kernel used for development or
testing purposes. In particular, you should turn on:

 - ENABLE_WARN_DEPRECATED, ENABLE_MUST_CHECK, and FRAME_WARN to get an
   extra set of warnings for problems like the use of deprecated interfaces
   or ignoring an important return value from a function. The output
   generated by these warnings can be verbose, but one need not worry about
   warnings from other parts of the kernel.

 - DEBUG_OBJECTS will add code to track the lifetime of various objects
   created by the kernel and warn when things are done out of order. If
   you are adding a subsystem which creates (and exports) complex objects
   of its own, consider adding support for the object debugging
   infrastructure.

 - DEBUG_SLAB can find a variety of memory allocation and use errors; it
   should be used on most development kernels.

 - DEBUG_SPINLOCK, DEBUG_ATOMIC_SLEEP, and DEBUG_MUTEXES will find a
   number of common locking errors.

There are quite a few other debugging options, some of which will be
discussed below. Some of them have a significant performance impact and
should not be used all of the time. But some time spent learning the
available options will likely be paid back many times over in short order.

One of the heavier debugging tools is the locking checker, or "lockdep."
This tool will track the acquisition and release of every lock (spinlock or
mutex) in the system, the order in which locks are acquired relative to
each other, the current interrupt environment, and more. It can then
ensure that locks are always acquired in the same order, that the same
interrupt assumptions apply in all situations, and so on. In other words,
lockdep can find a number of scenarios in which the system could, on rare
occasion, deadlock. This kind of problem can be painful (for both
developers and users) in a deployed system; lockdep allows them to be found
in an automated manner ahead of time. Code with any sort of non-trivial
locking should be run with lockdep enabled before being submitted for
inclusion.
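
As a sketch of the kind of bug lockdep catches (the locks and functions are
invented for illustration): two paths which take the same pair of locks in
opposite orders can deadlock only if they race just right, but lockdep
reports the inconsistent ordering the first time both paths have run::

    static DEFINE_SPINLOCK(lock_a);
    static DEFINE_SPINLOCK(lock_b);

    static void path_one(void)
    {
            spin_lock(&lock_a);
            spin_lock(&lock_b);             /* ordering: a, then b */
            /* ... */
            spin_unlock(&lock_b);
            spin_unlock(&lock_a);
    }

    static void path_two(void)
    {
            spin_lock(&lock_b);
            spin_lock(&lock_a);             /* b, then a: lockdep complains */
            /* ... */
            spin_unlock(&lock_a);
            spin_unlock(&lock_b);
    }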

As a diligent kernel programmer, you will, beyond doubt, check the return
status of any operation (such as a memory allocation) which can fail. The
fact of the matter, though, is that the resulting failure recovery paths
are, probably, completely untested. Untested code tends to be broken code;
you could be much more confident of your code if all those error-handling
paths had been exercised a few times.
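
The kind of recovery path in question looks something like the following
sketch (the device structure, hardware-init helper, and size constant are
invented), written in the usual goto-unwind style; the error branch is
exactly the code that rarely runs unless a failure is forced::

    static int frob_setup(struct frob_dev *fd)
    {
            int ret;

            fd->buf = kmalloc(FROB_BUF_SIZE, GFP_KERNEL);
            if (!fd->buf)
                    return -ENOMEM;

            ret = frob_hw_init(fd);         /* hypothetical helper */
            if (ret)
                    goto err_free_buf;      /* rarely runs, easily broken */

            return 0;

    err_free_buf:
            kfree(fd->buf);
            return ret;
    }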

The kernel provides a fault injection framework which can do exactly that,
especially where memory allocations are involved. With fault injection
enabled, a configurable percentage of memory allocations will be made to
fail; these failures can be restricted to a specific range of code.
Running with fault injection enabled allows the programmer to see how the
code responds when things go badly. See
Documentation/fault-injection/fault-injection.txt for more information on
how to use this facility.

Other kinds of errors can be found with the "sparse" static analysis tool.
With sparse, the programmer can be warned about confusion between
user-space and kernel-space addresses, mixture of big-endian and
little-endian quantities, the passing of integer values where a set of bit
flags is expected, and so on. Sparse must be installed separately (it can
be found at https://sparse.wiki.kernel.org/index.php/Main_Page if your
distributor does not package it); it can then be run on the code by adding
"C=1" to your make command.
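
The address-space checking relies on annotations like ``__user``; the
invented helper below shows the sort of mistake sparse flags and the
conventional fix::

    static int frob_get_arg(unsigned int __user *uptr, unsigned int *val)
    {
            /* *val = *uptr; -- direct dereference; sparse would warn */

            if (copy_from_user(val, uptr, sizeof(*val)))
                    return -EFAULT;
            return 0;
    }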

The "Coccinelle" tool (http://coccinelle.lip6.fr/) is able to find a wide
variety of potential coding problems; it can also propose fixes for those
problems. Quite a few "semantic patches" for the kernel have been packaged
under the scripts/coccinelle directory; running "make coccicheck" will run
through those semantic patches and report on any problems found. See
Documentation/dev-tools/coccinelle.rst for more information.

Other kinds of portability errors are best found by compiling your code for
other architectures. If you do not happen to have an S/390 system or a
Blackfin development board handy, you can still perform the compilation
step. A large set of cross compilers for x86 systems can be found at

    http://www.kernel.org/pub/tools/crosstool/

Some time spent installing and using these compilers will help avoid
embarrassing situations later.

Documentation
-------------

Documentation has often been more the exception than the rule with kernel
development. Even so, adequate documentation will help to ease the merging
of new code into the kernel, make life easier for other developers, and
will be helpful for your users. In many cases, the addition of
documentation has become essentially mandatory.

The first piece of documentation for any patch is its associated
changelog. Log entries should describe the problem being solved, the form
of the solution, the people who worked on the patch, any relevant
effects on performance, and anything else that might be needed to
understand the patch. Be sure that the changelog says *why* the patch is
worth applying; a surprising number of developers fail to provide that
information.

Any code which adds a new user-space interface - including new sysfs or
/proc files - should include documentation of that interface which enables
user-space developers to know what they are working with. See
Documentation/ABI/README for a description of how this documentation should
be formatted and what information needs to be provided.

The file :ref:`Documentation/admin-guide/kernel-parameters.rst
<kernelparameters>` describes all of the kernel's boot-time parameters.
Any patch which adds new parameters should add the appropriate entries to
this file.

Any new configuration options must be accompanied by help text which
clearly explains the options and when the user might want to select them.

Internal API information for many subsystems is documented by way of
specially-formatted comments; these comments can be extracted and formatted
in a number of ways by the "kernel-doc" script. If you are working within
a subsystem which has kerneldoc comments, you should maintain them and add
them, as appropriate, for externally-available functions. Even in areas
which have not been so documented, there is no harm in adding kerneldoc
comments for the future; indeed, this can be a useful activity for
beginning kernel developers. The format of these comments, along with some
information on how to create kerneldoc templates, can be found at
:ref:`Documentation/doc-guide/ <doc_guide>`.
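
A short sketch of the format (the function itself is invented; the layout
follows the kernel-doc convention)::

    /**
     * frob_device_reset() - return a frob device to its power-on state
     * @fdev: the device to reset
     * @wait: if true, do not return until the hardware signals completion
     *
     * Return: 0 on success or a negative errno value on failure.
     */
    int frob_device_reset(struct frob_dev *fdev, bool wait);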

Anybody who reads through a significant amount of existing kernel code will
note that, often, comments are most notable by their absence. Once again,
the expectations for new code are higher than they were in the past;
merging uncommented code will be harder. That said, there is little desire
for verbosely-commented code. The code should, itself, be readable, with
comments explaining the more subtle aspects.

Certain things should always be commented. Uses of memory barriers should
be accompanied by a line explaining why the barrier is necessary. The
locking rules for data structures generally need to be explained somewhere.
Major data structures need comprehensive documentation in general.
Non-obvious dependencies between separate bits of code should be pointed
out. Anything which might tempt a code janitor to make an incorrect
"cleanup" needs a comment saying why it is done the way it is. And so on.
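
For the memory-barrier case, a comment of this shape (the surrounding
producer function is invented) says what the barrier orders and where the
matching barrier lives::

    static void frob_publish(struct frob_dev *fd, int value)
    {
            fd->data = value;
            /*
             * Make the data visible before the ready flag is set; pairs
             * with the smp_rmb() in the (hypothetical) frob_poll() reader.
             */
            smp_wmb();
            fd->ready = 1;
    }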

Internal API changes
--------------------

The binary interface provided by the kernel to user space cannot be broken
except under the most severe circumstances. The kernel's internal
programming interfaces, instead, are highly fluid and can be changed when
the need arises. If you find yourself having to work around a kernel API,
or simply not using a specific functionality because it does not meet your
needs, that may be a sign that the API needs to change. As a kernel
developer, you are empowered to make such changes.

There are, of course, some catches. API changes can be made, but they need
to be well justified. So any patch making an internal API change should be
accompanied by a description of what the change is and why it is
necessary. This kind of change should also be broken out into a separate
patch, rather than buried within a larger patch.

The other catch is that a developer who changes an internal API is
generally charged with the task of fixing any code within the kernel tree
which is broken by the change. For a widely-used function, this duty can
lead to literally hundreds or thousands of changes - many of which are
likely to conflict with work being done by other developers. Needless to
say, this can be a large job, so it is best to be sure that the
justification is solid. Note that the Coccinelle tool can help with
wide-ranging API changes.

When making an incompatible API change, one should, whenever possible,
ensure that code which has not been updated is caught by the compiler.
This will help you to be sure that you have found all in-tree uses of that
interface. It will also alert developers of out-of-tree code that there is
a change that they need to respond to. Supporting out-of-tree code is not
something that kernel developers need to be worried about, but we also do
not have to make life harder for out-of-tree developers than it needs to
be.
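
One common way to force that (sketched here with invented names) is to
change the function's prototype rather than just its semantics, so that any
caller which has not been converted fails to build instead of silently
misbehaving::

    /* Before: the allocation context is implicit. */
    struct frob_buf *frob_buf_alloc(size_t size);

    /* After: callers must state their allocation context; any call site
     * which has not been updated now produces a compile error. */
    struct frob_buf *frob_buf_alloc(size_t size, gfp_t gfp);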