<!-- doc/src/sgml/wal.sgml -->

<title>Reliability and the Write-Ahead Log</title>

This chapter explains how to control the reliability of
<productname>PostgreSQL</productname>, including details about the
Write-Ahead Log.

<sect1 id="wal-reliability">
<title>Reliability</title>
Reliability is an important property of any serious database
system, and <productname>PostgreSQL</productname> does everything possible to
guarantee reliable operation. One aspect of reliable operation is
that all data recorded by a committed transaction should be stored
in a nonvolatile area that is safe from power loss, operating
system failure, and hardware failure (except failure of the
nonvolatile area itself, of course). Successfully writing the data
to the computer's permanent storage (disk drive or equivalent)
ordinarily meets this requirement. In fact, even if a computer is
fatally damaged, if the disk drives survive they can be moved to
another computer with similar hardware and all committed
transactions will remain intact.
While forcing data to the disk platters periodically might seem like
a simple operation, it is not. Because disk drives are dramatically
slower than main memory and CPUs, several layers of caching exist
between the computer's main memory and the disk platters.
First, there is the operating system's buffer cache, which caches
frequently requested disk blocks and combines disk writes. Fortunately,
all operating systems give applications a way to force writes from
the buffer cache to disk, and <productname>PostgreSQL</productname> uses those
features. (See the <xref linkend="guc-wal-sync-method"/> parameter
to adjust how this is done.)
Next, there might be a cache in the disk drive controller; this is
particularly common on <acronym>RAID</acronym> controller cards. Some of
these caches are <firstterm>write-through</firstterm>, meaning writes are sent
to the drive as soon as they arrive. Others are
<firstterm>write-back</firstterm>, meaning data is sent to the drive at
some later time. Such caches can be a reliability hazard because the
memory in the disk controller cache is volatile, and will lose its
contents in a power failure. Better controller cards have
<firstterm>battery-backup units</firstterm> (<acronym>BBU</acronym>s), meaning
the card has a battery that
maintains power to the cache in case of system power loss. After power
is restored the data will be written to the disk drives.
And finally, most disk drives have caches. Some are write-through
while some are write-back, and the same concerns about data loss
exist for write-back drive caches as for disk controller
caches. Consumer-grade IDE and SATA drives are particularly likely
to have write-back caches that will not survive a power failure. Many
solid-state drives (SSD) also have volatile write-back caches.

These caches can typically be disabled; however, the method for doing
this varies by operating system and drive type:
On <productname>Linux</productname>, IDE and SATA drives can be queried using
<command>hdparm -I</command>; write caching is enabled if there is
a <literal>*</literal> next to <literal>Write cache</literal>. <command>hdparm -W 0</command>
can be used to turn off write caching. SCSI drives can be queried
using <ulink url="http://sg.danny.cz/sg/sdparm.html"><application>sdparm</application></ulink>.
Use <command>sdparm --get=WCE</command> to check
whether the write cache is enabled and <command>sdparm --clear=WCE</command>
to turn it off.
On <productname>FreeBSD</productname>, IDE drives can be queried using
<command>atacontrol</command> and write caching turned off using
<literal>hw.ata.wc=0</literal> in <filename>/boot/loader.conf</filename>;
SCSI drives can be queried using <command>camcontrol identify</command>,
and the write cache both queried and changed using
<command>sdparm</command> when available.
On <productname>Solaris</productname>, the disk write cache is controlled by
<command>format -e</command>.
(The Solaris <acronym>ZFS</acronym> file system is safe with disk write-cache
enabled because it issues its own disk cache flush commands.)
On <productname>Windows</productname>, if <varname>wal_sync_method</varname> is
<literal>open_datasync</literal> (the default), write caching can be disabled
by unchecking <literal>My Computer\Open\<replaceable>disk drive</replaceable>\Properties\Hardware\Properties\Policies\Enable write caching on the disk</literal>.
Alternatively, set <varname>wal_sync_method</varname> to
<literal>fdatasync</literal> (NTFS only) or <literal>fsync</literal>,
which prevent write caching.
On <productname>macOS</productname>, write caching can be prevented by
setting <varname>wal_sync_method</varname> to <literal>fsync_writethrough</literal>.
Recent SATA drives (those following <acronym>ATAPI-6</acronym> or later)
offer a drive cache flush command (<command>FLUSH CACHE EXT</command>),
while SCSI drives have long supported a similar command
<command>SYNCHRONIZE CACHE</command>. These commands are not directly
accessible to <productname>PostgreSQL</productname>, but some file systems
(e.g., <acronym>ZFS</acronym>, <acronym>ext4</acronym>) can use them to flush
data to the platters on write-back-enabled drives. Unfortunately, such
file systems behave suboptimally when combined with battery-backup unit
(<acronym>BBU</acronym>) disk controllers. In such setups, the synchronize
command forces all data from the controller cache to the disks,
eliminating much of the benefit of the BBU. You can run the
<xref linkend="pgtestfsync"/> program to see
if you are affected. If you are affected, the performance benefits
of the BBU can be regained by turning off write barriers in
the file system or reconfiguring the disk controller, if that is
an option. If write barriers are turned off, make sure the battery
remains functional; a faulty battery can potentially lead to data loss.
Hopefully file system and disk controller designers will eventually
address this suboptimal behavior.
When the operating system sends a write request to the storage hardware,
there is little it can do to make sure the data has arrived at a truly
non-volatile storage area. Rather, it is the
administrator's responsibility to make certain that all storage components
ensure integrity for both data and file-system metadata.
Avoid disk controllers that have non-battery-backed write caches.
At the drive level, disable write-back caching if the
drive cannot guarantee the data will be written before shutdown.
If you use SSDs, be aware that many of these do not honor cache flush
commands by default.
You can test for reliable I/O subsystem behavior using <ulink
url="https://brad.livejournal.com/2116715.html"><filename>diskchecker.pl</filename></ulink>.
Another risk of data loss is posed by the disk platter write
operations themselves. Disk platters are divided into sectors,
commonly 512 bytes each. Every physical read or write operation
processes a whole sector.
When a write request arrives at the drive, it might be for some multiple
of 512 bytes (<productname>PostgreSQL</productname> typically writes 8192 bytes, or
16 sectors, at a time), and the process of writing could fail due
to power loss at any time, meaning some of the 512-byte sectors were
written while others were not. To guard against such failures,
<productname>PostgreSQL</productname> periodically writes full page images to
permanent WAL storage <emphasis>before</emphasis> modifying the actual page on
disk. By doing this, during crash recovery <productname>PostgreSQL</productname> can
restore partially-written pages from WAL. If you have file-system software
that prevents partial page writes (e.g., ZFS), you can turn off
this page imaging by turning off the <xref
linkend="guc-full-page-writes"/> parameter. Battery-Backed Unit
(BBU) disk controllers do not prevent partial page writes unless
they guarantee that data is written to the BBU as full (8kB) pages.
<productname>PostgreSQL</productname> also protects against some kinds of data corruption
on storage devices that may occur because of hardware errors or media failure over time,
such as reading/writing garbage data.

Each individual record in a WAL file is protected by a CRC-32 (32-bit) check
that allows us to tell if record contents are correct. The CRC value
is set when we write each WAL record and checked during crash recovery,
archive recovery and replication.
Data pages are not currently checksummed by default, though full page images
recorded in WAL records will be protected; see <link
linkend="app-initdb-data-checksums"><application>initdb</application></link>
for details about enabling data checksums.
Internal data structures such as <filename>pg_xact</filename>, <filename>pg_subtrans</filename>, <filename>pg_multixact</filename>,
<filename>pg_serial</filename>, <filename>pg_notify</filename>, <filename>pg_stat</filename>, <filename>pg_snapshots</filename> are not directly
checksummed, nor are pages protected by full page writes. However, where
such data structures are persistent, WAL records are written that allow
recent changes to be accurately rebuilt at crash recovery and those
WAL records are protected as discussed above.

Individual state files in <filename>pg_twophase</filename> are protected by CRC-32.

Temporary data files used in larger SQL queries for sorts,
materializations and intermediate results are not currently checksummed,
nor will WAL records be written for changes to those files.
<productname>PostgreSQL</productname> does not protect against correctable memory errors
and it is assumed you will operate using RAM that uses industry standard
Error Correcting Codes (ECC) or better protection.

<sect1 id="checksums">
<title>Data Checksums</title>

<primary>checksums</primary>
By default, data pages are not protected by checksums, but this can
optionally be enabled for a cluster. When enabled, each data page includes
a checksum that is updated when the page is written and verified each time
the page is read. Only data pages are protected by checksums; internal data
structures and temporary files are not.

Checksums are normally enabled when the cluster is initialized using <link
linkend="app-initdb-data-checksums"><application>initdb</application></link>.
They can also be enabled or disabled at a later time as an offline
operation. Data checksums are enabled or disabled at the full cluster
level, and cannot be specified individually for databases or tables.
The current state of checksums in the cluster can be verified by viewing the
value of the read-only configuration variable <xref
linkend="guc-data-checksums" /> by issuing the command <command>SHOW
data_checksums</command>.
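For example, a quick way to confirm whether the running cluster was initialized
with checksums is to query the parameter directly; the output shown in the
comment is only illustrative:

<programlisting>
-- Reports "on" if data checksums are enabled for this cluster, "off" otherwise.
SHOW data_checksums;
</programlisting>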
When attempting to recover from page corruptions, it may be necessary to
bypass the checksum protection. To do this, temporarily set the
configuration parameter <xref linkend="guc-ignore-checksum-failure" />.
<sect2 id="checksums-offline-enable-disable">
<title>Off-line Enabling of Checksums</title>

The <link linkend="app-pgchecksums"><application>pg_checksums</application></link>
application can be used to enable or disable data checksums, as well as
verify checksums, on an offline cluster.
<sect1 id="wal-intro">
<title>Write-Ahead Logging (<acronym>WAL</acronym>)</title>

<indexterm zone="wal">
<primary>WAL</primary>

<primary>transaction log</primary>
<firstterm>Write-Ahead Logging</firstterm> (<acronym>WAL</acronym>)
is a standard method for ensuring data integrity. A detailed
description can be found in most (if not all) books about
transaction processing. Briefly, <acronym>WAL</acronym>'s central
concept is that changes to data files (where tables and indexes
reside) must be written only after those changes have been logged,
that is, after WAL records describing the changes have been flushed
to permanent storage. If we follow this procedure, we do not need
to flush data pages to disk on every transaction commit, because we
know that in the event of a crash we will be able to recover the
database using the log: any changes that have not been applied to
the data pages can be redone from the WAL records. (This is
roll-forward recovery, also known as REDO.)
Because <acronym>WAL</acronym> restores database file
contents after a crash, journaled file systems are not necessary for
reliable storage of the data files or WAL files. In fact, journaling
overhead can reduce performance, especially if journaling
causes file system <emphasis>data</emphasis> to be flushed
to disk. Fortunately, data flushing during journaling can
often be disabled with a file system mount option, e.g.,
<literal>data=writeback</literal> on a Linux ext3 file system.
Journaled file systems do improve boot speed after a crash.
Using <acronym>WAL</acronym> results in a
significantly reduced number of disk writes, because only the WAL
file needs to be flushed to disk to guarantee that a transaction is
committed, rather than every data file changed by the transaction.
The WAL file is written sequentially,
and so the cost of syncing the WAL is much less than the cost of
flushing the data pages. This is especially true for servers
handling many small transactions touching different parts of the data
store. Furthermore, when the server is processing many small concurrent
transactions, one <function>fsync</function> of the WAL file may
suffice to commit many transactions.
<acronym>WAL</acronym> also makes it possible to support on-line
backup and point-in-time recovery, as described in <xref
linkend="continuous-archiving"/>. By archiving the WAL data we can support
reverting to any time instant covered by the available WAL data:
we simply install a prior physical backup of the database, and
replay the WAL just as far as the desired time. What's more,
the physical backup doesn't have to be an instantaneous snapshot
of the database state &mdash; if it is made over some period of time,
then replaying the WAL for that period will fix any internal
inconsistencies.
<sect1 id="wal-async-commit">
<title>Asynchronous Commit</title>

<primary>synchronous commit</primary>

<primary>asynchronous commit</primary>

<firstterm>Asynchronous commit</firstterm> is an option that allows transactions
to complete more quickly, at the cost that the most recent transactions may
be lost if the database should crash. In many applications this is an
acceptable trade-off.
As described in the previous section, transaction commit is normally
<firstterm>synchronous</firstterm>: the server waits for the transaction's
<acronym>WAL</acronym> records to be flushed to permanent storage
before returning a success indication to the client. The client is
therefore guaranteed that a transaction reported to be committed will
be preserved, even in the event of a server crash immediately after.
However, for short transactions this delay is a major component of the
total transaction time. Selecting asynchronous commit mode means that
the server returns success as soon as the transaction is logically
completed, before the <acronym>WAL</acronym> records it generated have
actually made their way to disk. This can provide a significant boost
in throughput for small transactions.
Asynchronous commit introduces the risk of data loss. There is a short
time window between the report of transaction completion to the client
and the time that the transaction is truly committed (that is, it is
guaranteed not to be lost if the server crashes). Thus asynchronous
commit should not be used if the client will take external actions
relying on the assumption that the transaction will be remembered.
As an example, a bank would certainly not use asynchronous commit for
a transaction recording an ATM's dispensing of cash. But in many
scenarios, such as event logging, there is no need for a strong
guarantee of this kind.

The risk that is taken by using asynchronous commit is of data loss,
not data corruption. If the database should crash, it will recover
by replaying <acronym>WAL</acronym> up to the last record that was
flushed. The database will therefore be restored to a self-consistent
state, but any transactions that were not yet flushed to disk will
not be reflected in that state. The net effect is therefore loss of
the last few transactions. Because the transactions are replayed in
commit order, no inconsistency can be introduced &mdash; for example,
if transaction B made changes relying on the effects of a previous
transaction A, it is not possible for A's effects to be lost while B's
effects are preserved.
The user can select the commit mode of each transaction, so that
it is possible to have both synchronous and asynchronous commit
transactions running concurrently. This allows flexible trade-offs
between performance and certainty of transaction durability.
The commit mode is controlled by the user-settable parameter
<xref linkend="guc-synchronous-commit"/>, which can be changed in any of
the ways that a configuration parameter can be set. The mode used for
any one transaction depends on the value of
<varname>synchronous_commit</varname> when transaction commit begins.
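For instance, a single low-value transaction can opt out of the flush wait
while the rest of the workload stays synchronous. A minimal sketch (the
<literal>event_log</literal> table is hypothetical):

<programlisting>
BEGIN;
SET LOCAL synchronous_commit TO off;   -- applies to this transaction only
INSERT INTO event_log (message) VALUES ('page viewed');
COMMIT;                                -- returns before the WAL flush completes
</programlisting>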
Certain utility commands, for instance <command>DROP TABLE</command>, are
forced to commit synchronously regardless of the setting of
<varname>synchronous_commit</varname>. This is to ensure consistency
between the server's file system and the logical state of the database.
The commands supporting two-phase commit, such as <command>PREPARE
TRANSACTION</command>, are also always synchronous.
If the database crashes during the risk window between an
asynchronous commit and the writing of the transaction's
<acronym>WAL</acronym> records,
then changes made during that transaction <emphasis>will</emphasis> be lost.
The duration of the
risk window is limited because a background process (the <quote>WAL
writer</quote>) flushes unwritten <acronym>WAL</acronym> records to disk
every <xref linkend="guc-wal-writer-delay"/> milliseconds.
The actual maximum duration of the risk window is three times
<varname>wal_writer_delay</varname> because the WAL writer is
designed to favor writing whole pages at a time during busy periods.

An immediate-mode shutdown is equivalent to a server crash, and will
therefore cause loss of any unflushed asynchronous commits.
Asynchronous commit provides behavior different from setting
<xref linkend="guc-fsync"/> = off.
<varname>fsync</varname> is a server-wide
setting that will alter the behavior of all transactions. It disables
all logic within <productname>PostgreSQL</productname> that attempts to synchronize
writes to different portions of the database, and therefore a system
crash (that is, a hardware or operating system crash, not a failure of
<productname>PostgreSQL</productname> itself) could result in arbitrarily bad
corruption of the database state. In many scenarios, asynchronous
commit provides most of the performance improvement that could be
obtained by turning off <varname>fsync</varname>, but without the risk
of data corruption.

<xref linkend="guc-commit-delay"/> also sounds very similar to
asynchronous commit, but it is actually a synchronous commit method
(in fact, <varname>commit_delay</varname> is ignored during an
asynchronous commit). <varname>commit_delay</varname> causes a delay
just before a transaction flushes <acronym>WAL</acronym> to disk, in
the hope that a single flush executed by one such transaction can also
serve other transactions committing at about the same time. The
setting can be thought of as a way of increasing the time window in
which transactions can join a group about to participate in a single
flush, to amortize the cost of the flush among multiple transactions.
<sect1 id="wal-configuration">
<title><acronym>WAL</acronym> Configuration</title>

There are several <acronym>WAL</acronym>-related configuration parameters that
affect database performance. This section explains their use.
Consult <xref linkend="runtime-config"/> for general information about
setting server configuration parameters.
<firstterm>Checkpoints</firstterm><indexterm><primary>checkpoint</primary></indexterm>
are points in the sequence of transactions at which it is guaranteed
that the heap and index data files have been updated with all
information written before that checkpoint. At checkpoint time, all
dirty data pages are flushed to disk and a special checkpoint record is
written to the WAL file. (The change records were previously flushed
to the <acronym>WAL</acronym> files.)
In the event of a crash, the crash recovery procedure looks at the latest
checkpoint record to determine the point in the WAL (known as the redo
record) from which it should start the REDO operation. Any changes made to
data files before that point are guaranteed to be already on disk.
Hence, after a checkpoint, WAL segments preceding the one containing
the redo record are no longer needed and can be recycled or removed. (When
<acronym>WAL</acronym> archiving is being done, the WAL segments must be
archived before being recycled or removed.)

The checkpoint requirement of flushing all dirty data pages to disk
can cause a significant I/O load. For this reason, checkpoint
activity is throttled so that I/O begins at checkpoint start and completes
before the next checkpoint is due to start; this minimizes performance
degradation during checkpoints.
The server's checkpointer process automatically performs
a checkpoint every so often. A checkpoint is begun every <xref
linkend="guc-checkpoint-timeout"/> seconds, or if
<xref linkend="guc-max-wal-size"/> is about to be exceeded,
whichever comes first.
The default settings are 5 minutes and 1 GB, respectively.
If no WAL has been written since the previous checkpoint, new checkpoints
will be skipped even if <varname>checkpoint_timeout</varname> has passed.
(If WAL archiving is being used and you want to put a lower limit on how
often files are archived in order to bound potential data loss, you should
adjust the <xref linkend="guc-archive-timeout"/> parameter rather than the
checkpoint parameters.)
It is also possible to force a checkpoint by using the SQL
command <command>CHECKPOINT</command>.
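A minimal sketch of adjusting these settings and forcing a checkpoint (the
values shown are purely illustrative, and both <command>ALTER SYSTEM</command>
and <command>CHECKPOINT</command> require appropriate privileges):

<programlisting>
ALTER SYSTEM SET checkpoint_timeout = '15min';
ALTER SYSTEM SET max_wal_size = '4GB';
SELECT pg_reload_conf();   -- make the new settings take effect
CHECKPOINT;                -- force an immediate checkpoint
</programlisting>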
Reducing <varname>checkpoint_timeout</varname> and/or
<varname>max_wal_size</varname> causes checkpoints to occur
more often. This allows faster after-crash recovery, since less work
will need to be redone. However, one must balance this against the
increased cost of flushing dirty data pages more often. If
<xref linkend="guc-full-page-writes"/> is set (as is the default), there is
another factor to consider. To ensure data page consistency,
the first modification of a data page after each checkpoint results in
logging the entire page content. In that case,
a smaller checkpoint interval increases the volume of output to the WAL,
partially negating the goal of using a smaller interval,
and in any case causing more disk I/O.
Checkpoints are fairly expensive, first because they require writing
out all currently dirty buffers, and second because they result in
extra subsequent WAL traffic as discussed above. It is therefore
wise to set the checkpointing parameters high enough so that checkpoints
don't happen too often. As a simple sanity check on your checkpointing
parameters, you can set the <xref linkend="guc-checkpoint-warning"/>
parameter. If checkpoints happen closer together than
<varname>checkpoint_warning</varname> seconds,
a message will be output to the server log recommending increasing
<varname>max_wal_size</varname>. Occasional appearance of such
a message is not cause for alarm, but if it appears often then the
checkpoint control parameters should be increased. Bulk operations such
as large <command>COPY</command> transfers might cause a number of such warnings
to appear if you have not set <varname>max_wal_size</varname> high
enough.
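Besides the log message, the ratio of requested to timed checkpoints gives a
rough indication of whether checkpoints are being forced by WAL volume. A
sketch, assuming a release in which <structname>pg_stat_bgwriter</structname>
still carries the checkpoint counters:

<programlisting>
-- Many requested (WAL-triggered) checkpoints relative to timed ones
-- suggests raising max_wal_size.
SELECT checkpoints_timed, checkpoints_req FROM pg_stat_bgwriter;
</programlisting>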
To avoid flooding the I/O system with a burst of page writes,
writing dirty buffers during a checkpoint is spread over a period of time.
That period is controlled by
<xref linkend="guc-checkpoint-completion-target"/>, which is
given as a fraction of the checkpoint interval (configured by using
<varname>checkpoint_timeout</varname>).
The I/O rate is adjusted so that the checkpoint finishes when the
given fraction of
<varname>checkpoint_timeout</varname> seconds have elapsed, or before
<varname>max_wal_size</varname> is exceeded, whichever is sooner.
With the default value of 0.9,
<productname>PostgreSQL</productname> can be expected to complete each checkpoint
a bit before the next scheduled checkpoint (at around 90% of the last checkpoint's
duration). This spreads out the I/O as much as possible so that the checkpoint
I/O load is consistent throughout the checkpoint interval. The disadvantage of
this is that prolonging checkpoints affects recovery time, because more WAL
segments will need to be kept around for possible use in recovery. A user
concerned about the amount of time required to recover might wish to reduce
<varname>checkpoint_timeout</varname> so that checkpoints occur more frequently
but still spread the I/O across the checkpoint interval. Alternatively,
<varname>checkpoint_completion_target</varname> could be reduced, but this would
result in times of more intense I/O (during the checkpoint) and times of less I/O
(after the checkpoint completed but before the next scheduled checkpoint) and
therefore is not recommended.
Although <varname>checkpoint_completion_target</varname> could be set as high as
1.0, it is typically recommended to set it to no higher than 0.9 (the default)
since checkpoints include some other activities besides writing dirty buffers.
A setting of 1.0 is quite likely to result in checkpoints not being
completed on time, which would result in performance loss due to
unexpected variation in the number of WAL segments needed.
On Linux and POSIX platforms <xref linkend="guc-checkpoint-flush-after"/>
allows you to force OS pages written by the checkpoint to be
flushed to disk after a configurable number of bytes. Otherwise, these
pages may be kept in the OS's page cache, inducing a stall when
<literal>fsync</literal> is issued at the end of a checkpoint. This setting will
often help to reduce transaction latency, but it also can have an adverse
effect on performance; particularly for workloads that are bigger than
<xref linkend="guc-shared-buffers"/>, but smaller than the OS's page cache.
The number of WAL segment files in <filename>pg_wal</filename> directory depends on
<varname>min_wal_size</varname>, <varname>max_wal_size</varname> and
the amount of WAL generated in previous checkpoint cycles. When old WAL
segment files are no longer needed, they are removed or recycled (that is,
renamed to become future segments in the numbered sequence). If, due to a
short-term peak of WAL output rate, <varname>max_wal_size</varname> is
exceeded, the unneeded segment files will be removed until the system
gets back under this limit. Below that limit, the system recycles enough
WAL files to cover the estimated need until the next checkpoint, and
removes the rest. The estimate is based on a moving average of the number
of WAL files used in previous checkpoint cycles. The moving average
is increased immediately if the actual usage exceeds the estimate, so it
accommodates peak usage rather than average usage to some extent.
<varname>min_wal_size</varname> puts a minimum on the amount of WAL files
recycled for future usage; that much WAL is always recycled for future use,
even if the system is idle and the WAL usage estimate suggests that little
WAL is needed.
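To see how many segment files are currently present and how much space they
occupy, the directory can be inspected from SQL; a sketch (the function is
restricted to superusers and members of suitable predefined roles):

<programlisting>
-- Count the WAL segment files in pg_wal and their total size.
SELECT count(*) AS segments,
       pg_size_pretty(sum(size)) AS total_size
FROM pg_ls_waldir();
</programlisting>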
Independently of <varname>max_wal_size</varname>,
the most recent <xref linkend="guc-wal-keep-size"/> megabytes of
WAL files plus one additional WAL file are
kept at all times. Also, if WAL archiving is used, old segments cannot be
removed or recycled until they are archived. If WAL archiving cannot keep up
with the pace that WAL is generated, or if <varname>archive_command</varname>
or <varname>archive_library</varname>
fails repeatedly, old WAL files will accumulate in <filename>pg_wal</filename>
until the situation is resolved. A slow or failed standby server that
uses a replication slot will have the same effect (see
<xref linkend="streaming-replication-slots"/>).
In archive recovery or standby mode, the server periodically performs
<firstterm>restartpoints</firstterm>,<indexterm><primary>restartpoint</primary></indexterm>
which are similar to checkpoints in normal operation: the server forces
all its state to disk, updates the <filename>pg_control</filename> file to
indicate that the already-processed WAL data need not be scanned again,
and then recycles any old WAL segment files in the <filename>pg_wal</filename>
directory.
Restartpoints can't be performed more frequently than checkpoints on the
primary because restartpoints can only be performed at checkpoint records.
A restartpoint is triggered when a checkpoint record is reached if at
least <varname>checkpoint_timeout</varname> seconds have passed since the last
restartpoint, or if WAL size is about to exceed
<varname>max_wal_size</varname>. However, because of limitations on when a
restartpoint can be performed, <varname>max_wal_size</varname> is often exceeded
during recovery, by up to one checkpoint cycle's worth of WAL.
(<varname>max_wal_size</varname> is never a hard limit anyway, so you should
always leave plenty of headroom to avoid running out of disk space.)
There are two commonly used internal <acronym>WAL</acronym> functions:
<function>XLogInsertRecord</function> and <function>XLogFlush</function>.
<function>XLogInsertRecord</function> is used to place a new record into
the <acronym>WAL</acronym> buffers in shared memory. If there is no
space for the new record, <function>XLogInsertRecord</function> will have
to write (move to kernel cache) a few filled <acronym>WAL</acronym>
buffers. This is undesirable because <function>XLogInsertRecord</function>
is used on every database low level modification (for example, row
insertion) at a time when an exclusive lock is held on affected
data pages, so the operation needs to be as fast as possible. What
is worse, writing <acronym>WAL</acronym> buffers might also force the
creation of a new WAL segment, which takes even more
time. Normally, <acronym>WAL</acronym> buffers should be written
and flushed by an <function>XLogFlush</function> request, which is
made, for the most part, at transaction commit time to ensure that
transaction records are flushed to permanent storage. On systems
with high WAL output, <function>XLogFlush</function> requests might
not occur often enough to prevent <function>XLogInsertRecord</function>
from having to do writes. On such systems
one should increase the number of <acronym>WAL</acronym> buffers by
modifying the <xref linkend="guc-wal-buffers"/> parameter. When
<xref linkend="guc-full-page-writes"/> is set and the system is very busy,
setting <varname>wal_buffers</varname> higher will help smooth response times
during the period immediately following each checkpoint.
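Whether backends are in fact being forced to write WAL because the buffers
filled up can be gauged from the <structname>pg_stat_wal</structname> view in
releases that expose this counter; a steadily growing value is only a hint
that raising <varname>wal_buffers</varname> may help:

<programlisting>
-- Number of times WAL data was written to disk because the WAL buffers were full.
SELECT wal_buffers_full FROM pg_stat_wal;
</programlisting>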
The <xref linkend="guc-commit-delay"/> parameter defines for how many
microseconds a group commit leader process will sleep after acquiring a
lock within <function>XLogFlush</function>, while group commit
followers queue up behind the leader. This delay allows other server
processes to add their commit records to the WAL buffers so that all of
them will be flushed by the leader's eventual sync operation. No sleep
will occur if <xref linkend="guc-fsync"/> is not enabled, or if fewer
than <xref linkend="guc-commit-siblings"/> other sessions are currently
in active transactions; this avoids sleeping when it's unlikely that
any other session will commit soon. Note that on some platforms, the
resolution of a sleep request is ten milliseconds, so that any nonzero
<varname>commit_delay</varname> setting between 1 and 10000
microseconds would have the same effect. Note also that on some
platforms, sleep operations may take slightly longer than requested by
the parameter.
Since the purpose of <varname>commit_delay</varname> is to allow the
cost of each flush operation to be amortized across concurrently
committing transactions (potentially at the expense of transaction
latency), it is necessary to quantify that cost before the setting can
be chosen intelligently. The higher that cost is, the more effective
<varname>commit_delay</varname> is expected to be in increasing
transaction throughput, up to a point. The <xref
linkend="pgtestfsync"/> program can be used to measure the average time
in microseconds that a single WAL flush operation takes. A value of
half of the average time the program reports it takes to flush after a
single 8kB write operation is often the most effective setting for
<varname>commit_delay</varname>, so this value is recommended as the
starting point to use when optimizing for a particular workload. While
tuning <varname>commit_delay</varname> is particularly useful when the
WAL is stored on high-latency rotating disks, benefits can be
significant even on storage media with very fast sync times, such as
solid-state drives or RAID arrays with a battery-backed write cache;
but this should definitely be tested against a representative workload.
Higher values of <varname>commit_siblings</varname> should be used in
such cases, whereas smaller <varname>commit_siblings</varname> values
are often helpful on higher latency media. Note that it is quite
possible that a setting of <varname>commit_delay</varname> that is too
high can increase transaction latency by so much that total transaction
throughput suffers.
When <varname>commit_delay</varname> is set to zero (the default), it
is still possible for a form of group commit to occur, but each group
will consist only of sessions that reach the point where they need to
flush their commit records during the window in which the previous
flush operation (if any) is occurring. At higher client counts a
<quote>gangway effect</quote> tends to occur, so that the effects of group
commit become significant even when <varname>commit_delay</varname> is
zero, and thus explicitly setting <varname>commit_delay</varname> tends
to help less. Setting <varname>commit_delay</varname> can only help
when (1) there are some concurrently committing transactions, and (2)
throughput is limited to some degree by commit rate; but with high
rotational latency this setting can be effective in increasing
transaction throughput with as few as two clients (that is, a single
committing client with one sibling transaction).
The <xref linkend="guc-wal-sync-method"/> parameter determines how
<productname>PostgreSQL</productname> will ask the kernel to force
<acronym>WAL</acronym> updates out to disk.
All the options should be the same in terms of reliability, with
the exception of <literal>fsync_writethrough</literal>, which can sometimes
force a flush of the disk cache even when other options do not do so.
However, it's quite platform-specific which one will be the fastest.
You can test the speeds of different options using the <xref
linkend="pgtestfsync"/> program.
Note that this parameter is irrelevant if <varname>fsync</varname>
has been turned off.
Enabling the <xref linkend="guc-wal-debug"/> configuration parameter
(provided that <productname>PostgreSQL</productname> has been
compiled with support for it) will result in each
<function>XLogInsertRecord</function> and <function>XLogFlush</function>
<acronym>WAL</acronym> call being logged to the server log. This
option might be replaced by a more general mechanism in the future.
There are two internal functions to write WAL data to disk:
<function>XLogWrite</function> and <function>issue_xlog_fsync</function>.
When <xref linkend="guc-track-wal-io-timing"/> is enabled, the total
amounts of time <function>XLogWrite</function> writes and
<function>issue_xlog_fsync</function> syncs WAL data to disk are counted as
<literal>wal_write_time</literal> and <literal>wal_sync_time</literal> in
<xref linkend="pg-stat-wal-view"/>, respectively.
<function>XLogWrite</function> is normally called by
<function>XLogInsertRecord</function> (when there is no space for the new
record in WAL buffers), <function>XLogFlush</function> and the WAL writer,
to write WAL buffers to disk and call <function>issue_xlog_fsync</function>.
<function>issue_xlog_fsync</function> is normally called by
<function>XLogWrite</function> to sync WAL files to disk.
If <varname>wal_sync_method</varname> is either
<literal>open_datasync</literal> or <literal>open_sync</literal>,
a write operation in <function>XLogWrite</function> guarantees to sync written
WAL data to disk and <function>issue_xlog_fsync</function> does nothing.
If <varname>wal_sync_method</varname> is either <literal>fdatasync</literal>,
<literal>fsync</literal>, or <literal>fsync_writethrough</literal>,
the write operation moves WAL buffers to kernel cache and
<function>issue_xlog_fsync</function> syncs them to disk. Regardless
of the setting of <varname>track_wal_io_timing</varname>, the number
of times <function>XLogWrite</function> writes and
<function>issue_xlog_fsync</function> syncs WAL data to disk are also
counted as <literal>wal_write</literal> and <literal>wal_sync</literal>
in <structname>pg_stat_wal</structname>, respectively.
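These counters can be inspected directly; a sketch (the timing columns report
zero unless <varname>track_wal_io_timing</varname> is enabled):

<programlisting>
-- Write/sync counts and, if timing is enabled, the time spent in them.
SELECT wal_write, wal_sync, wal_write_time, wal_sync_time
FROM pg_stat_wal;
</programlisting>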
The <xref linkend="guc-recovery-prefetch"/> parameter can be used to reduce
I/O wait times during recovery by instructing the kernel to initiate reads
of disk blocks that will soon be needed but are not currently in
<productname>PostgreSQL</productname>'s buffer pool.
The <xref linkend="guc-maintenance-io-concurrency"/> and
<xref linkend="guc-wal-decode-buffer-size"/> settings limit prefetching
concurrency and distance, respectively. By default, it is set to
<literal>try</literal>, which enables the feature on systems where
<function>posix_fadvise</function> is available.
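On a standby, the effectiveness of prefetching can be observed through the
<structname>pg_stat_recovery_prefetch</structname> view; a sketch, assuming a
release that provides this view:

<programlisting>
-- Blocks prefetched or skipped, and the current prefetch distances, during recovery.
SELECT * FROM pg_stat_recovery_prefetch;
</programlisting>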
<sect1 id="wal-internals">
<title>WAL Internals</title>

<indexterm zone="wal-internals">
<primary>LSN</primary>

<acronym>WAL</acronym> is automatically enabled; no action is
required from the administrator except ensuring that the
disk-space requirements for the <acronym>WAL</acronym> files are met,
and that any necessary tuning is done (see <xref
linkend="wal-configuration"/>).
<acronym>WAL</acronym> records are appended to the <acronym>WAL</acronym>
files as each new record is written. The insert position is described by
a Log Sequence Number (<acronym>LSN</acronym>) that is a byte offset into
the WAL, increasing monotonically with each new record.
<acronym>LSN</acronym> values are returned as the datatype
<link linkend="datatype-pg-lsn"><type>pg_lsn</type></link>. Values can be
compared to calculate the volume of <acronym>WAL</acronym> data that
separates them, so they are used to measure the progress of replication
and recovery.
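For example, the current insert position can be read on the primary and
compared against a standby's replay position to express replication lag in
bytes; a sketch that assumes a standby is attached:

<programlisting>
-- Current WAL insert position on the primary.
SELECT pg_current_wal_lsn();

-- Bytes of WAL each standby still has to replay.
SELECT application_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;
</programlisting>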
<acronym>WAL</acronym> files are stored in the directory
<filename>pg_wal</filename> under the data directory, as a set of
segment files, normally each 16 MB in size (but the size can be changed
by altering the <option>--wal-segsize</option> <application>initdb</application> option). Each segment is
divided into pages, normally 8 kB each (this size can be changed via the
<option>--with-wal-blocksize</option> configure option). The WAL record headers
are described in <filename>access/xlogrecord.h</filename>; the record
content is dependent on the type of event that is being logged. Segment
files are given ever-increasing numbers as names, starting at
<filename>000000010000000000000001</filename>. The numbers do not wrap,
but it will take a very, very long time to exhaust the
available stock of numbers.
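The segment file that holds a given LSN can be computed from SQL, which makes
the naming scheme easy to see in practice; a sketch:

<programlisting>
-- Name of the WAL segment file containing the current insert position.
SELECT pg_walfile_name(pg_current_wal_lsn());
</programlisting>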
It is advantageous if the WAL is located on a different disk from the
main database files. This can be achieved by moving the
<filename>pg_wal</filename> directory to another location (while the server
is shut down, of course) and creating a symbolic link from the
original location in the main data directory to the new location.

The aim of <acronym>WAL</acronym> is to ensure that the log is
written before database records are altered, but this can be subverted by
disk drives<indexterm><primary>disk drive</primary></indexterm> that falsely report a
successful write to the kernel,
when in fact they have only cached the data and not yet stored it
on the disk. A power failure in such a situation might lead to
irrecoverable data corruption. Administrators should try to ensure
that disks holding <productname>PostgreSQL</productname>'s
<acronym>WAL</acronym> files do not make such false reports.
(See <xref linkend="wal-reliability"/>.)
After a checkpoint has been made and the WAL flushed, the
checkpoint's position is saved in the file
<filename>pg_control</filename>. Therefore, at the start of recovery,
the server first reads <filename>pg_control</filename> and
then the checkpoint record; then it performs the REDO operation by
scanning forward from the WAL location indicated in the checkpoint
record. Because the entire content of data pages is saved in the
WAL on the first page modification after a checkpoint (assuming
<xref linkend="guc-full-page-writes"/> is not disabled), all pages
changed since the checkpoint will be restored to a consistent
state.
To deal with the case where <filename>pg_control</filename> is
corrupt, we should support the possibility of scanning existing WAL
segments in reverse order &mdash; newest to oldest &mdash; in order to find the
latest checkpoint. This has not been implemented yet.
<filename>pg_control</filename> is small enough (less than one disk page)
that it is not subject to partial-write problems, and as of this writing
there have been no reports of database failures due solely to the inability
to read <filename>pg_control</filename> itself. So while it is
theoretically a weak spot, <filename>pg_control</filename> does not
seem to be a problem in practice.