1 /*-------------------------------------------------------------------------
3 * predicate.c
4 * POSTGRES predicate locking
5 * to support full serializable transaction isolation
8 * The approach taken is to implement Serializable Snapshot Isolation (SSI)
9 * as initially described in this paper:
11 * Michael J. Cahill, Uwe Röhm, and Alan D. Fekete. 2008.
12 * Serializable isolation for snapshot databases.
13 * In SIGMOD '08: Proceedings of the 2008 ACM SIGMOD
14 * international conference on Management of data,
15 * pages 729-738, New York, NY, USA. ACM.
16 * http://doi.acm.org/10.1145/1376616.1376690
18 * and further elaborated in Cahill's doctoral thesis:
20 * Michael James Cahill. 2009.
21 * Serializable Isolation for Snapshot Databases.
22 * Sydney Digital Theses.
23 * University of Sydney, School of Information Technologies.
24 * http://hdl.handle.net/2123/5353
27 * Predicate locks for Serializable Snapshot Isolation (SSI) are SIREAD
28 * locks, which are so different from normal locks that a distinct set of
29 * structures is required to handle them. They are needed to detect
30 * rw-conflicts when the read happens before the write. (When the write
31 * occurs first, the reading transaction can check for a conflict by
32 * examining the MVCC data.)
34 * (1) Besides tuples actually read, they must cover ranges of tuples
35 * which would have been read based on the predicate. This will
36 * require modelling the predicates through locks against database
37 * objects such as pages, index ranges, or entire tables.
39 * (2) They must be kept in RAM for quick access. Because of this, it
40 * isn't possible to always maintain tuple-level granularity -- when
41 * the space allocated to store these approaches exhaustion, a
42 * request for a lock may need to scan for situations where a single
43 * transaction holds many fine-grained locks which can be coalesced
44 * into a single coarser-grained lock.
46 * (3) They never block anything; they are more like flags than locks
47 * in that regard, although they refer to database objects and are
48 * used to identify rw-conflicts with normal write locks.
50 * (4) While they are associated with a transaction, they must survive
51 * a successful COMMIT of that transaction, and remain until all
52 * overlapping transactions complete. This even means that they
53 * must survive termination of the transaction's process. If a
54 * top level transaction is rolled back, however, it is immediately
55 * flagged so that it can be ignored, and its SIREAD locks can be
56 * released any time after that.
58 * (5) The only transactions which create SIREAD locks or check for
59 * conflicts with them are serializable transactions.
61 * (6) When a write lock for a top level transaction is found to cover
62 * an existing SIREAD lock for the same transaction, the SIREAD lock
63 * can be deleted.
65 * (7) A write from a serializable transaction must ensure that an xact
66 * record exists for the transaction, with the same lifespan (until
67 * all concurrent transactions complete or the transaction is rolled
68 * back) so that rw-dependencies to that transaction can be
69 * detected.
71 * We use an optimization for read-only transactions. Under certain
72 * circumstances, a read-only transaction's snapshot can be shown to
73 * never have conflicts with other transactions. This is referred to
74 * as a "safe" snapshot (and one known not to be is "unsafe").
75 * However, it can't be determined whether a snapshot is safe until
76 * all concurrent read/write transactions complete.
78 * Once a read-only transaction is known to have a safe snapshot, it
79 * can release its predicate locks and exempt itself from further
80 * predicate lock tracking. READ ONLY DEFERRABLE transactions run only
81 * on safe snapshots, waiting as necessary for one to be available.
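 *
 * For example (an illustrative session, not from this file; the table name
 * is hypothetical):
 *
 *     BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE, READ ONLY, DEFERRABLE;
 *     SELECT count(*) FROM some_large_table;  -- may wait for a safe snapshot,
 *                                             -- then runs with no SSI overhead
 *     COMMIT;
 *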
84 * Lightweight locks to manage access to the predicate locking shared
85 * memory objects must be taken in this order, and should be released in
86 * reverse order:
88 * SerializableFinishedListLock
89 * - Protects the list of transactions which have completed but which
90 * may yet matter because they overlap still-active transactions.
92 * SerializablePredicateListLock
93 * - Protects the linked list of locks held by a transaction. Note
94 * that the locks themselves are also covered by the partition
95 * locks of their respective lock targets; this lock only affects
96 * the linked list connecting the locks related to a transaction.
97 * - All transactions share this single lock (with no partitioning).
98 * - There is never a need for a process other than the one running
99 * an active transaction to walk the list of locks held by that
100 * transaction, except parallel query workers sharing the leader's
101 * transaction. In the parallel case, an extra per-sxact lock is
102 * taken; see below.
103 * - It is relatively infrequent that another process needs to
104 * modify the list for a transaction, but it does happen for such
105 * things as index page splits for pages with predicate locks and
106 * freeing of predicate locked pages by a vacuum process. When
107 * removing a lock in such cases, the lock itself contains the
108 * pointers needed to remove it from the list. When adding a
109 * lock in such cases, the lock can be added using the anchor in
110 * the transaction structure. Neither requires walking the list.
111 * - Cleaning up the list for a terminated transaction is sometimes
112 * not done on a retail basis, in which case no lock is required.
113 * - Due to the above, a process accessing its active transaction's
114 * list always uses a shared lock, regardless of whether it is
115 * walking or maintaining the list. This improves concurrency
116 * for the common access patterns.
117 * - A process which needs to alter the list of a transaction other
118 * than its own active transaction must acquire an exclusive
119 * lock.
121 * SERIALIZABLEXACT's member 'perXactPredicateListLock'
122 * - Protects the linked list of predicate locks held by a transaction.
123 * Only needed for parallel mode, where multiple backends share the
124 * same SERIALIZABLEXACT object. Not needed if
125 * SerializablePredicateListLock is held exclusively.
127 * PredicateLockHashPartitionLock(hashcode)
128 * - The same lock protects a target, all locks on that target, and
129 * the linked list of locks on the target.
130 * - When more than one is needed, acquire in ascending address order.
131 * - When all are needed (rare), acquire in ascending index order with
132 * PredicateLockHashPartitionLockByIndex(index).
134 * SerializableXactHashLock
135 * - Protects both PredXact and SerializableXidHash.
137 * SerialControlLock
138 * - Protects SerialControlData members
140 * SerialSLRULock
141 * - Protects SerialSlruCtl
143 * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
144 * Portions Copyright (c) 1994, Regents of the University of California
147 * IDENTIFICATION
148 * src/backend/storage/lmgr/predicate.c
150 *-------------------------------------------------------------------------
153 * INTERFACE ROUTINES
155 * housekeeping for setting up shared memory predicate lock structures
156 * InitPredicateLocks(void)
157 * PredicateLockShmemSize(void)
159 * predicate lock reporting
160 * GetPredicateLockStatusData(void)
161 * PageIsPredicateLocked(Relation relation, BlockNumber blkno)
163 * predicate lock maintenance
164 * GetSerializableTransactionSnapshot(Snapshot snapshot)
165 * SetSerializableTransactionSnapshot(Snapshot snapshot,
166 * VirtualTransactionId *sourcevxid)
167 * RegisterPredicateLockingXid(void)
168 * PredicateLockRelation(Relation relation, Snapshot snapshot)
169 * PredicateLockPage(Relation relation, BlockNumber blkno,
170 * Snapshot snapshot)
171 * PredicateLockTID(Relation relation, ItemPointer tid, Snapshot snapshot,
172 * TransactionId tuple_xid)
173 * PredicateLockPageSplit(Relation relation, BlockNumber oldblkno,
174 * BlockNumber newblkno)
175 * PredicateLockPageCombine(Relation relation, BlockNumber oldblkno,
176 * BlockNumber newblkno)
177 * TransferPredicateLocksToHeapRelation(Relation relation)
178 * ReleasePredicateLocks(bool isCommit, bool isReadOnlySafe)
180 * conflict detection (may also trigger rollback)
181 * CheckForSerializableConflictOut(Relation relation, TransactionId xid,
182 * Snapshot snapshot)
183 * CheckForSerializableConflictIn(Relation relation, ItemPointer tid,
184 * BlockNumber blkno)
185 * CheckTableForSerializableConflictIn(Relation relation)
187 * final rollback checking
188 * PreCommit_CheckForSerializationFailure(void)
190 * two-phase commit support
191 * AtPrepare_PredicateLocks(void);
192 * PostPrepare_PredicateLocks(TransactionId xid);
193 * PredicateLockTwoPhaseFinish(TransactionId xid, bool isCommit);
194 * predicatelock_twophase_recover(TransactionId xid, uint16 info,
195 * void *recdata, uint32 len);
198 #include "postgres.h"
200 #include "access/parallel.h"
201 #include "access/slru.h"
202 #include "access/transam.h"
203 #include "access/twophase.h"
204 #include "access/twophase_rmgr.h"
205 #include "access/xact.h"
206 #include "access/xlog.h"
207 #include "miscadmin.h"
208 #include "pgstat.h"
209 #include "port/pg_lfind.h"
210 #include "storage/predicate.h"
211 #include "storage/predicate_internals.h"
212 #include "storage/proc.h"
213 #include "storage/procarray.h"
214 #include "utils/guc_hooks.h"
215 #include "utils/rel.h"
216 #include "utils/snapmgr.h"
218 /* Uncomment the next line to test the graceful degradation code. */
219 /* #define TEST_SUMMARIZE_SERIAL */
222 * Test the most selective fields first, for performance.
224 * a is covered by b if all of the following hold:
225 * 1) a.database = b.database
226 * 2) a.relation = b.relation
227 * 3) b.offset is invalid (b is page-granularity or higher)
228 * 4) either of the following:
229 * 4a) a.offset is valid (a is tuple-granularity) and a.page = b.page
230 * or 4b) a.offset is invalid and b.page is invalid (a is
231 * page-granularity and b is relation-granularity)
233 #define TargetTagIsCoveredBy(covered_target, covering_target) \
234 ((GET_PREDICATELOCKTARGETTAG_RELATION(covered_target) == /* (2) */ \
235 GET_PREDICATELOCKTARGETTAG_RELATION(covering_target)) \
236 && (GET_PREDICATELOCKTARGETTAG_OFFSET(covering_target) == \
237 InvalidOffsetNumber) /* (3) */ \
238 && (((GET_PREDICATELOCKTARGETTAG_OFFSET(covered_target) != \
239 InvalidOffsetNumber) /* (4a) */ \
240 && (GET_PREDICATELOCKTARGETTAG_PAGE(covering_target) == \
241 GET_PREDICATELOCKTARGETTAG_PAGE(covered_target))) \
242 || ((GET_PREDICATELOCKTARGETTAG_PAGE(covering_target) == \
243 InvalidBlockNumber) /* (4b) */ \
244 && (GET_PREDICATELOCKTARGETTAG_PAGE(covered_target) \
245 != InvalidBlockNumber))) \
246 && (GET_PREDICATELOCKTARGETTAG_DB(covered_target) == /* (1) */ \
247 GET_PREDICATELOCKTARGETTAG_DB(covering_target)))
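/*
 * Illustrative sketch only (compiled out; not part of the original file):
 * per the rules above, a tuple-granularity tag is covered by a
 * page-granularity tag on the same page of the same relation, but not the
 * other way around.  The database OID, relation OID, block number and
 * offset below are arbitrary example values.
 */
#ifdef NOT_USED
static void
TargetTagCoverageExample(void)
{
	PREDICATELOCKTARGETTAG tuple_tag;
	PREDICATELOCKTARGETTAG page_tag;

	SET_PREDICATELOCKTARGETTAG_TUPLE(tuple_tag, 12345, 67890, 42, 7);
	SET_PREDICATELOCKTARGETTAG_PAGE(page_tag, 12345, 67890, 42);

	Assert(TargetTagIsCoveredBy(tuple_tag, page_tag));	/* (1)-(3) and (4a) hold */
	Assert(!TargetTagIsCoveredBy(page_tag, tuple_tag)); /* (3) fails */
}
#endif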
250 * The predicate locking target and lock shared hash tables are partitioned to
251 * reduce contention. To determine which partition a given target belongs to,
252 * compute the tag's hash code with PredicateLockTargetTagHashCode(), then
253 * apply one of these macros.
254 * NB: NUM_PREDICATELOCK_PARTITIONS must be a power of 2!
256 #define PredicateLockHashPartition(hashcode) \
257 ((hashcode) % NUM_PREDICATELOCK_PARTITIONS)
258 #define PredicateLockHashPartitionLock(hashcode) \
259 (&MainLWLockArray[PREDICATELOCK_MANAGER_LWLOCK_OFFSET + \
260 PredicateLockHashPartition(hashcode)].lock)
261 #define PredicateLockHashPartitionLockByIndex(i) \
262 (&MainLWLockArray[PREDICATELOCK_MANAGER_LWLOCK_OFFSET + (i)].lock)
264 #define NPREDICATELOCKTARGETENTS() \
265 mul_size(max_predicate_locks_per_xact, add_size(MaxBackends, max_prepared_xacts))
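/*
 * For example (illustrative figures only): with the default
 * max_pred_locks_per_transaction of 64 and (MaxBackends + max_prepared_xacts)
 * totalling 128, NPREDICATELOCKTARGETENTS() would size the target table at
 * 64 * 128 = 8192 entries.
 */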
267 #define SxactIsOnFinishedList(sxact) (!dlist_node_is_detached(&(sxact)->finishedLink))
270 * Note that a sxact is marked "prepared" once it has passed
271 * PreCommit_CheckForSerializationFailure, even if it isn't using
272 * 2PC. This is the point at which it can no longer be aborted.
274 * The PREPARED flag remains set after commit, so SxactIsCommitted
275 * implies SxactIsPrepared.
277 #define SxactIsCommitted(sxact) (((sxact)->flags & SXACT_FLAG_COMMITTED) != 0)
278 #define SxactIsPrepared(sxact) (((sxact)->flags & SXACT_FLAG_PREPARED) != 0)
279 #define SxactIsRolledBack(sxact) (((sxact)->flags & SXACT_FLAG_ROLLED_BACK) != 0)
280 #define SxactIsDoomed(sxact) (((sxact)->flags & SXACT_FLAG_DOOMED) != 0)
281 #define SxactIsReadOnly(sxact) (((sxact)->flags & SXACT_FLAG_READ_ONLY) != 0)
282 #define SxactHasSummaryConflictIn(sxact) (((sxact)->flags & SXACT_FLAG_SUMMARY_CONFLICT_IN) != 0)
283 #define SxactHasSummaryConflictOut(sxact) (((sxact)->flags & SXACT_FLAG_SUMMARY_CONFLICT_OUT) != 0)
285 * The following macro actually means that the specified transaction has a
286 * conflict out *to a transaction which committed ahead of it*. It's hard
287 * to get that into a name of a reasonable length.
289 #define SxactHasConflictOut(sxact) (((sxact)->flags & SXACT_FLAG_CONFLICT_OUT) != 0)
290 #define SxactIsDeferrableWaiting(sxact) (((sxact)->flags & SXACT_FLAG_DEFERRABLE_WAITING) != 0)
291 #define SxactIsROSafe(sxact) (((sxact)->flags & SXACT_FLAG_RO_SAFE) != 0)
292 #define SxactIsROUnsafe(sxact) (((sxact)->flags & SXACT_FLAG_RO_UNSAFE) != 0)
293 #define SxactIsPartiallyReleased(sxact) (((sxact)->flags & SXACT_FLAG_PARTIALLY_RELEASED) != 0)
296 * Compute the hash code associated with a PREDICATELOCKTARGETTAG.
298 * To avoid unnecessary recomputations of the hash code, we try to do this
299 * just once per function, and then pass it around as needed. Aside from
300 * passing the hashcode to hash_search_with_hash_value(), we can extract
301 * the lock partition number from the hashcode.
303 #define PredicateLockTargetTagHashCode(predicatelocktargettag) \
304 get_hash_value(PredicateLockTargetHash, predicatelocktargettag)
307 * Given a predicate lock tag, and the hash for its target,
308 * compute the lock hash.
310 * To make the hash code also depend on the transaction, we xor the sxid
311 * struct's address into the hash code, left-shifted so that the
312 * partition-number bits don't change. Since this is only a hash, we
313 * don't care if we lose high-order bits of the address; use an
314 * intermediate variable to suppress cast-pointer-to-int warnings.
316 #define PredicateLockHashCodeFromTargetHashCode(predicatelocktag, targethash) \
317 ((targethash) ^ ((uint32) PointerGetDatum((predicatelocktag)->myXact)) \
318 << LOG2_NUM_PREDICATELOCK_PARTITIONS)
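/*
 * Illustrative sketch only (compiled out): the typical pattern is to compute
 * a target tag's hash code once, derive the partition lock from it, and then
 * reuse the hash code for hash_search_with_hash_value().  The function name
 * is hypothetical.
 */
#ifdef NOT_USED
static void
PartitionLockUsageExample(const PREDICATELOCKTARGETTAG *targettag)
{
	uint32		targettaghash = PredicateLockTargetTagHashCode(targettag);
	LWLock	   *partitionLock = PredicateLockHashPartitionLock(targettaghash);

	LWLockAcquire(partitionLock, LW_SHARED);
	/* ... look up the target and its lock list under the partition lock ... */
	LWLockRelease(partitionLock);
}
#endif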
322 * The SLRU buffer area through which we access the old xids.
324 static SlruCtlData SerialSlruCtlData;
326 #define SerialSlruCtl (&SerialSlruCtlData)
328 #define SERIAL_PAGESIZE BLCKSZ
329 #define SERIAL_ENTRYSIZE sizeof(SerCommitSeqNo)
330 #define SERIAL_ENTRIESPERPAGE (SERIAL_PAGESIZE / SERIAL_ENTRYSIZE)
333 * Set maximum pages based on the number needed to track all transactions.
335 #define SERIAL_MAX_PAGE (MaxTransactionId / SERIAL_ENTRIESPERPAGE)
337 #define SerialNextPage(page) (((page) >= SERIAL_MAX_PAGE) ? 0 : (page) + 1)
339 #define SerialValue(slotno, xid) (*((SerCommitSeqNo *) \
340 (SerialSlruCtl->shared->page_buffer[slotno] + \
341 ((((uint32) (xid)) % SERIAL_ENTRIESPERPAGE) * SERIAL_ENTRYSIZE))))
343 #define SerialPage(xid) (((uint32) (xid)) / SERIAL_ENTRIESPERPAGE)
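/*
 * For example (assuming the default BLCKSZ of 8192 and an 8-byte
 * SerCommitSeqNo): SERIAL_ENTRIESPERPAGE is 1024, so each SLRU page covers
 * 1024 consecutive xids and SerialPage(xid) is simply xid / 1024.
 */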
345 typedef struct SerialControlData
347 int headPage; /* newest initialized page */
348 TransactionId headXid; /* newest valid Xid in the SLRU */
349 TransactionId tailXid; /* oldest xmin we might be interested in */
350 } SerialControlData;
352 typedef struct SerialControlData *SerialControl;
354 static SerialControl serialControl;
357 * When the oldest committed transaction on the "finished" list is moved to
358 * SLRU, its predicate locks will be moved to this "dummy" transaction,
359 * collapsing duplicate targets. When a duplicate is found, the later
360 * commitSeqNo is used.
362 static SERIALIZABLEXACT *OldCommittedSxact;
366 * These configuration variables are used to set the predicate lock table size
367 * and to control promotion of predicate locks to coarser granularity in an
368 * attempt to degrade gracefully (mostly as false positive serialization
369 * failures) in the face of memory pressure.
371 int max_predicate_locks_per_xact; /* in guc_tables.c */
372 int max_predicate_locks_per_relation; /* in guc_tables.c */
373 int max_predicate_locks_per_page; /* in guc_tables.c */
376 * This provides a list of objects in order to track transactions
377 * participating in predicate locking. Entries in the list are fixed size,
378 * and reside in shared memory. The memory address of an entry must remain
379 * fixed during its lifetime. The list will be protected from concurrent
380 * update externally; no provision is made in this code to manage that. The
381 * number of entries in the list, and the size allowed for each entry is
382 * fixed upon creation.
384 static PredXactList PredXact;
387 * This provides a pool of RWConflict data elements to use in conflict lists
388 * between transactions.
390 static RWConflictPoolHeader RWConflictPool;
393 * The predicate locking hash tables are in shared memory.
394 * Each backend keeps pointers to them.
396 static HTAB *SerializableXidHash;
397 static HTAB *PredicateLockTargetHash;
398 static HTAB *PredicateLockHash;
399 static dlist_head *FinishedSerializableTransactions;
402 * Tag for a dummy entry in PredicateLockTargetHash. By temporarily removing
403 * this entry, you can ensure that there's enough scratch space available for
404 * inserting one entry in the hash table. This is an otherwise-invalid tag.
406 static const PREDICATELOCKTARGETTAG ScratchTargetTag = {0, 0, 0, 0};
407 static uint32 ScratchTargetTagHash;
408 static LWLock *ScratchPartitionLock;
411 * The local hash table used to determine when to combine multiple fine-
412 * grained locks into a single coarser-grained lock.
414 static HTAB *LocalPredicateLockHash = NULL;
417 * Keep a pointer to the currently-running serializable transaction (if any)
418 * for quick reference. Also, remember if we have written anything that could
419 * cause a rw-conflict.
421 static SERIALIZABLEXACT *MySerializableXact = InvalidSerializableXact;
422 static bool MyXactDidWrite = false;
425 * The SXACT_FLAG_RO_UNSAFE optimization might lead us to release
426 * MySerializableXact early. If that happens in a parallel query, the leader
427 * needs to defer the destruction of the SERIALIZABLEXACT until end of
428 * transaction, because the workers still have a reference to it. In that
429 * case, the leader stores it here.
431 static SERIALIZABLEXACT *SavedSerializableXact = InvalidSerializableXact;
433 /* local functions */
435 static SERIALIZABLEXACT *CreatePredXact(void);
436 static void ReleasePredXact(SERIALIZABLEXACT *sxact);
438 static bool RWConflictExists(const SERIALIZABLEXACT *reader, const SERIALIZABLEXACT *writer);
439 static void SetRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer);
440 static void SetPossibleUnsafeConflict(SERIALIZABLEXACT *roXact, SERIALIZABLEXACT *activeXact);
441 static void ReleaseRWConflict(RWConflict conflict);
442 static void FlagSxactUnsafe(SERIALIZABLEXACT *sxact);
444 static bool SerialPagePrecedesLogically(int64 page1, int64 page2);
445 static void SerialInit(void);
446 static void SerialAdd(TransactionId xid, SerCommitSeqNo minConflictCommitSeqNo);
447 static SerCommitSeqNo SerialGetMinConflictCommitSeqNo(TransactionId xid);
448 static void SerialSetActiveSerXmin(TransactionId xid);
450 static uint32 predicatelock_hash(const void *key, Size keysize);
451 static void SummarizeOldestCommittedSxact(void);
452 static Snapshot GetSafeSnapshot(Snapshot origSnapshot);
453 static Snapshot GetSerializableTransactionSnapshotInt(Snapshot snapshot,
454 VirtualTransactionId *sourcevxid,
455 int sourcepid);
456 static bool PredicateLockExists(const PREDICATELOCKTARGETTAG *targettag);
457 static bool GetParentPredicateLockTag(const PREDICATELOCKTARGETTAG *tag,
458 PREDICATELOCKTARGETTAG *parent);
459 static bool CoarserLockCovers(const PREDICATELOCKTARGETTAG *newtargettag);
460 static void RemoveScratchTarget(bool lockheld);
461 static void RestoreScratchTarget(bool lockheld);
462 static void RemoveTargetIfNoLongerUsed(PREDICATELOCKTARGET *target,
463 uint32 targettaghash);
464 static void DeleteChildTargetLocks(const PREDICATELOCKTARGETTAG *newtargettag);
465 static int MaxPredicateChildLocks(const PREDICATELOCKTARGETTAG *tag);
466 static bool CheckAndPromotePredicateLockRequest(const PREDICATELOCKTARGETTAG *reqtag);
467 static void DecrementParentLocks(const PREDICATELOCKTARGETTAG *targettag);
468 static void CreatePredicateLock(const PREDICATELOCKTARGETTAG *targettag,
469 uint32 targettaghash,
470 SERIALIZABLEXACT *sxact);
471 static void DeleteLockTarget(PREDICATELOCKTARGET *target, uint32 targettaghash);
472 static bool TransferPredicateLocksToNewTarget(PREDICATELOCKTARGETTAG oldtargettag,
473 PREDICATELOCKTARGETTAG newtargettag,
474 bool removeOld);
475 static void PredicateLockAcquire(const PREDICATELOCKTARGETTAG *targettag);
476 static void DropAllPredicateLocksFromTable(Relation relation,
477 bool transfer);
478 static void SetNewSxactGlobalXmin(void);
479 static void ClearOldPredicateLocks(void);
480 static void ReleaseOneSerializableXact(SERIALIZABLEXACT *sxact, bool partial,
481 bool summarize);
482 static bool XidIsConcurrent(TransactionId xid);
483 static void CheckTargetForConflictsIn(PREDICATELOCKTARGETTAG *targettag);
484 static void FlagRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer);
485 static void OnConflict_CheckForSerializationFailure(const SERIALIZABLEXACT *reader,
486 SERIALIZABLEXACT *writer);
487 static void CreateLocalPredicateLockHash(void);
488 static void ReleasePredicateLocksLocal(void);
491 /*------------------------------------------------------------------------*/
494 * Does this relation participate in predicate locking? Temporary and system
495 * relations are exempt.
497 static inline bool
498 PredicateLockingNeededForRelation(Relation relation)
500 return !(relation->rd_id < FirstUnpinnedObjectId ||
501 RelationUsesLocalBuffers(relation));
505 * When a public interface method is called for a read, this is the test to
506 * see if we should do a quick return.
508 * Note: this function has side-effects! If this transaction has been flagged
509 * as RO-safe since the last call, we release all predicate locks and reset
510 * MySerializableXact. That makes subsequent calls return quickly.
512 * This is marked as 'inline' to eliminate the function call overhead in the
513 * common case that serialization is not needed.
515 static inline bool
516 SerializationNeededForRead(Relation relation, Snapshot snapshot)
518 /* Nothing to do if this is not a serializable transaction */
519 if (MySerializableXact == InvalidSerializableXact)
520 return false;
523 * Don't acquire locks or conflict when scanning with a special snapshot.
524 * This excludes things like CLUSTER and REINDEX. They use the wholesale
525 * functions TransferPredicateLocksToHeapRelation() and
526 * CheckTableForSerializableConflictIn() to participate in serialization,
527 * but the scans involved don't need serialization.
529 if (!IsMVCCSnapshot(snapshot))
530 return false;
533 * Check if we have just become "RO-safe". If we have, immediately release
534 * all locks as they're not needed anymore. This also resets
535 * MySerializableXact, so that subsequent calls to this function can exit
536 * quickly.
538 * A transaction is flagged as RO_SAFE if all concurrent R/W transactions
539 * commit without having conflicts out to an earlier snapshot, thus
540 * ensuring that no conflicts are possible for this transaction.
542 if (SxactIsROSafe(MySerializableXact))
544 ReleasePredicateLocks(false, true);
545 return false;
548 /* Check if the relation doesn't participate in predicate locking */
549 if (!PredicateLockingNeededForRelation(relation))
550 return false;
552 return true; /* no excuse to skip predicate locking */
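/*
 * Illustrative sketch only (compiled out): the public read-side entry points
 * follow this shape, returning before doing any work when serialization
 * tracking is not needed.  The function name is hypothetical and the tag
 * setup is simplified; the real entry points derive the database OID from
 * the relation rather than using MyDatabaseId.
 */
#ifdef NOT_USED
static void
ExamplePredicateLockEntryPoint(Relation relation, Snapshot snapshot)
{
	PREDICATELOCKTARGETTAG tag;

	if (!SerializationNeededForRead(relation, snapshot))
		return;

	SET_PREDICATELOCKTARGETTAG_RELATION(tag, MyDatabaseId, relation->rd_id);
	PredicateLockAcquire(&tag);
}
#endif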
556 * Like SerializationNeededForRead(), but called on writes.
557 * The logic is the same, but there is no snapshot and we can't be RO-safe.
559 static inline bool
560 SerializationNeededForWrite(Relation relation)
562 /* Nothing to do if this is not a serializable transaction */
563 if (MySerializableXact == InvalidSerializableXact)
564 return false;
566 /* Check if the relation doesn't participate in predicate locking */
567 if (!PredicateLockingNeededForRelation(relation))
568 return false;
570 return true; /* no excuse to skip predicate locking */
574 /*------------------------------------------------------------------------*/
577 * These functions are a simple implementation of a list for this specific
578 * type of struct. If there is ever a generalized shared memory list, we
579 * should probably switch to that.
581 static SERIALIZABLEXACT *
582 CreatePredXact(void)
584 SERIALIZABLEXACT *sxact;
586 if (dlist_is_empty(&PredXact->availableList))
587 return NULL;
589 sxact = dlist_container(SERIALIZABLEXACT, xactLink,
590 dlist_pop_head_node(&PredXact->availableList));
591 dlist_push_tail(&PredXact->activeList, &sxact->xactLink);
592 return sxact;
595 static void
596 ReleasePredXact(SERIALIZABLEXACT *sxact)
598 Assert(ShmemAddrIsValid(sxact));
600 dlist_delete(&sxact->xactLink);
601 dlist_push_tail(&PredXact->availableList, &sxact->xactLink);
604 /*------------------------------------------------------------------------*/
607 * These functions manage primitive access to the RWConflict pool and lists.
609 static bool
610 RWConflictExists(const SERIALIZABLEXACT *reader, const SERIALIZABLEXACT *writer)
612 dlist_iter iter;
614 Assert(reader != writer);
616 /* Check the ends of the purported conflict first. */
617 if (SxactIsDoomed(reader)
618 || SxactIsDoomed(writer)
619 || dlist_is_empty(&reader->outConflicts)
620 || dlist_is_empty(&writer->inConflicts))
621 return false;
624 * A conflict is possible; walk the list to find out.
626 * The unconstify is needed as we have no const version of
627 * dlist_foreach().
629 dlist_foreach(iter, &unconstify(SERIALIZABLEXACT *, reader)->outConflicts)
631 RWConflict conflict =
632 dlist_container(RWConflictData, outLink, iter.cur);
634 if (conflict->sxactIn == writer)
635 return true;
638 /* No conflict found. */
639 return false;
642 static void
643 SetRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer)
645 RWConflict conflict;
647 Assert(reader != writer);
648 Assert(!RWConflictExists(reader, writer));
650 if (dlist_is_empty(&RWConflictPool->availableList))
651 ereport(ERROR,
652 (errcode(ERRCODE_OUT_OF_MEMORY),
653 errmsg("not enough elements in RWConflictPool to record a read/write conflict"),
654 errhint("You might need to run fewer transactions at a time or increase max_connections.")));
656 conflict = dlist_head_element(RWConflictData, outLink, &RWConflictPool->availableList);
657 dlist_delete(&conflict->outLink);
659 conflict->sxactOut = reader;
660 conflict->sxactIn = writer;
661 dlist_push_tail(&reader->outConflicts, &conflict->outLink);
662 dlist_push_tail(&writer->inConflicts, &conflict->inLink);
665 static void
666 SetPossibleUnsafeConflict(SERIALIZABLEXACT *roXact,
667 SERIALIZABLEXACT *activeXact)
669 RWConflict conflict;
671 Assert(roXact != activeXact);
672 Assert(SxactIsReadOnly(roXact));
673 Assert(!SxactIsReadOnly(activeXact));
675 if (dlist_is_empty(&RWConflictPool->availableList))
676 ereport(ERROR,
677 (errcode(ERRCODE_OUT_OF_MEMORY),
678 errmsg("not enough elements in RWConflictPool to record a potential read/write conflict"),
679 errhint("You might need to run fewer transactions at a time or increase max_connections.")));
681 conflict = dlist_head_element(RWConflictData, outLink, &RWConflictPool->availableList);
682 dlist_delete(&conflict->outLink);
684 conflict->sxactOut = activeXact;
685 conflict->sxactIn = roXact;
686 dlist_push_tail(&activeXact->possibleUnsafeConflicts, &conflict->outLink);
687 dlist_push_tail(&roXact->possibleUnsafeConflicts, &conflict->inLink);
690 static void
691 ReleaseRWConflict(RWConflict conflict)
693 dlist_delete(&conflict->inLink);
694 dlist_delete(&conflict->outLink);
695 dlist_push_tail(&RWConflictPool->availableList, &conflict->outLink);
698 static void
699 FlagSxactUnsafe(SERIALIZABLEXACT *sxact)
701 dlist_mutable_iter iter;
703 Assert(SxactIsReadOnly(sxact));
704 Assert(!SxactIsROSafe(sxact));
706 sxact->flags |= SXACT_FLAG_RO_UNSAFE;
709 * We know this isn't a safe snapshot, so we can stop looking for other
710 * potential conflicts.
712 dlist_foreach_modify(iter, &sxact->possibleUnsafeConflicts)
714 RWConflict conflict =
715 dlist_container(RWConflictData, inLink, iter.cur);
717 Assert(!SxactIsReadOnly(conflict->sxactOut));
718 Assert(sxact == conflict->sxactIn);
720 ReleaseRWConflict(conflict);
724 /*------------------------------------------------------------------------*/
727 * Decide whether a Serial page number is "older" for truncation purposes.
728 * Analogous to CLOGPagePrecedes().
730 static bool
731 SerialPagePrecedesLogically(int64 page1, int64 page2)
733 TransactionId xid1;
734 TransactionId xid2;
736 xid1 = ((TransactionId) page1) * SERIAL_ENTRIESPERPAGE;
737 xid1 += FirstNormalTransactionId + 1;
738 xid2 = ((TransactionId) page2) * SERIAL_ENTRIESPERPAGE;
739 xid2 += FirstNormalTransactionId + 1;
741 return (TransactionIdPrecedes(xid1, xid2) &&
742 TransactionIdPrecedes(xid1, xid2 + SERIAL_ENTRIESPERPAGE - 1));
745 #ifdef USE_ASSERT_CHECKING
746 static void
747 SerialPagePrecedesLogicallyUnitTests(void)
749 int per_page = SERIAL_ENTRIESPERPAGE,
750 offset = per_page / 2;
751 int64 newestPage,
752 oldestPage,
753 headPage,
754 targetPage;
755 TransactionId newestXact,
756 oldestXact;
758 /* GetNewTransactionId() has assigned the last XID it can safely use. */
759 newestPage = 2 * SLRU_PAGES_PER_SEGMENT - 1; /* nothing special */
760 newestXact = newestPage * per_page + offset;
761 Assert(newestXact / per_page == newestPage);
762 oldestXact = newestXact + 1;
763 oldestXact -= 1U << 31;
764 oldestPage = oldestXact / per_page;
767 * In this scenario, the SLRU headPage pertains to the last ~1000 XIDs
768 * assigned. oldestXact finishes, ~2B XIDs having elapsed since it
769 * started. Further transactions cause us to summarize oldestXact to
770 * tailPage. Function must return false so SerialAdd() doesn't zero
771 * tailPage (which may contain entries for other old, recently-finished
772 * XIDs) and half the SLRU. Reaching this requires burning ~2B XIDs in
773 * single-user mode, a negligible possibility.
775 headPage = newestPage;
776 targetPage = oldestPage;
777 Assert(!SerialPagePrecedesLogically(headPage, targetPage));
780 * In this scenario, the SLRU headPage pertains to oldestXact. We're
781 * summarizing an XID near newestXact. (Assume few other XIDs used
782 * SERIALIZABLE, hence the minimal headPage advancement. Assume
783 * oldestXact was long-running and only recently reached the SLRU.)
784 * Function must return true to make SerialAdd() create targetPage.
786 * Today's implementation mishandles this case, but it doesn't matter
787 * enough to fix. Verify that the defect affects just one page by
788 * asserting correct treatment of its prior page. Reaching this case
789 * requires burning ~2B XIDs in single-user mode, a negligible
790 * possibility. Moreover, if it does happen, the consequence would be
791 * mild, namely a new transaction failing in SimpleLruReadPage().
793 headPage = oldestPage;
794 targetPage = newestPage;
795 Assert(SerialPagePrecedesLogically(headPage, targetPage - 1));
796 #if 0
797 Assert(SerialPagePrecedesLogically(headPage, targetPage));
798 #endif
800 #endif
803 * Initialize for the tracking of old serializable committed xids.
805 static void
806 SerialInit(void)
808 bool found;
811 * Set up SLRU management of the pg_serial data.
813 SerialSlruCtl->PagePrecedes = SerialPagePrecedesLogically;
814 SimpleLruInit(SerialSlruCtl, "serializable",
815 serializable_buffers, 0, "pg_serial",
816 LWTRANCHE_SERIAL_BUFFER, LWTRANCHE_SERIAL_SLRU,
817 SYNC_HANDLER_NONE, false);
818 #ifdef USE_ASSERT_CHECKING
819 SerialPagePrecedesLogicallyUnitTests();
820 #endif
821 SlruPagePrecedesUnitTests(SerialSlruCtl, SERIAL_ENTRIESPERPAGE);
824 * Create or attach to the SerialControl structure.
826 serialControl = (SerialControl)
827 ShmemInitStruct("SerialControlData", sizeof(SerialControlData), &found);
829 Assert(found == IsUnderPostmaster);
830 if (!found)
833 * Set control information to reflect empty SLRU.
835 LWLockAcquire(SerialControlLock, LW_EXCLUSIVE);
836 serialControl->headPage = -1;
837 serialControl->headXid = InvalidTransactionId;
838 serialControl->tailXid = InvalidTransactionId;
839 LWLockRelease(SerialControlLock);
844 * GUC check_hook for serializable_buffers
846 bool
847 check_serial_buffers(int *newval, void **extra, GucSource source)
849 return check_slru_buffers("serializable_buffers", newval);
853 * Record a committed read write serializable xid and the minimum
854 * commitSeqNo of any transactions to which this xid had a rw-conflict out.
855 * An invalid commitSeqNo means that there were no conflicts out from xid.
857 static void
858 SerialAdd(TransactionId xid, SerCommitSeqNo minConflictCommitSeqNo)
860 TransactionId tailXid;
861 int64 targetPage;
862 int slotno;
863 int64 firstZeroPage;
864 bool isNewPage;
865 LWLock *lock;
867 Assert(TransactionIdIsValid(xid));
869 targetPage = SerialPage(xid);
870 lock = SimpleLruGetBankLock(SerialSlruCtl, targetPage);
873 * In this routine, we must hold both SerialControlLock and the SLRU bank
874 * lock simultaneously while making the SLRU data catch up with the new
875 * state that we determine.
877 LWLockAcquire(SerialControlLock, LW_EXCLUSIVE);
880 * If no serializable transactions are active, there shouldn't be anything
881 * to push out to the SLRU. Hitting this assert would mean there's
882 * something wrong with the earlier cleanup logic.
884 tailXid = serialControl->tailXid;
885 Assert(TransactionIdIsValid(tailXid));
888 * If the SLRU is currently unused, zero out the whole active region from
889 * tailXid to headXid before taking it into use. Otherwise zero out only
890 * any new pages that enter the tailXid-headXid range as we advance
891 * headXid.
893 if (serialControl->headPage < 0)
895 firstZeroPage = SerialPage(tailXid);
896 isNewPage = true;
898 else
900 firstZeroPage = SerialNextPage(serialControl->headPage);
901 isNewPage = SerialPagePrecedesLogically(serialControl->headPage,
902 targetPage);
905 if (!TransactionIdIsValid(serialControl->headXid)
906 || TransactionIdFollows(xid, serialControl->headXid))
907 serialControl->headXid = xid;
908 if (isNewPage)
909 serialControl->headPage = targetPage;
911 LWLockAcquire(lock, LW_EXCLUSIVE);
913 if (isNewPage)
915 /* Initialize intervening pages. */
916 while (firstZeroPage != targetPage)
918 (void) SimpleLruZeroPage(SerialSlruCtl, firstZeroPage);
919 firstZeroPage = SerialNextPage(firstZeroPage);
921 slotno = SimpleLruZeroPage(SerialSlruCtl, targetPage);
923 else
924 slotno = SimpleLruReadPage(SerialSlruCtl, targetPage, true, xid);
926 SerialValue(slotno, xid) = minConflictCommitSeqNo;
927 SerialSlruCtl->shared->page_dirty[slotno] = true;
929 LWLockRelease(lock);
930 LWLockRelease(SerialControlLock);
934 * Get the minimum commitSeqNo for any conflict out for the given xid. For
935 * a transaction which exists but has no conflict out, InvalidSerCommitSeqNo
936 * will be returned.
938 static SerCommitSeqNo
939 SerialGetMinConflictCommitSeqNo(TransactionId xid)
941 TransactionId headXid;
942 TransactionId tailXid;
943 SerCommitSeqNo val;
944 int slotno;
946 Assert(TransactionIdIsValid(xid));
948 LWLockAcquire(SerialControlLock, LW_SHARED);
949 headXid = serialControl->headXid;
950 tailXid = serialControl->tailXid;
951 LWLockRelease(SerialControlLock);
953 if (!TransactionIdIsValid(headXid))
954 return 0;
956 Assert(TransactionIdIsValid(tailXid));
958 if (TransactionIdPrecedes(xid, tailXid)
959 || TransactionIdFollows(xid, headXid))
960 return 0;
963 * The following function must be called without holding SLRU bank lock,
964 * but will return with that lock held, which must then be released.
966 slotno = SimpleLruReadPage_ReadOnly(SerialSlruCtl,
967 SerialPage(xid), xid);
968 val = SerialValue(slotno, xid);
969 LWLockRelease(SimpleLruGetBankLock(SerialSlruCtl, SerialPage(xid)));
970 return val;
974 * Call this whenever there is a new xmin for active serializable
975 * transactions. We don't need to keep information on transactions which
976 * precede that. InvalidTransactionId means none active, so everything in
977 * the SLRU can be discarded.
979 static void
980 SerialSetActiveSerXmin(TransactionId xid)
982 LWLockAcquire(SerialControlLock, LW_EXCLUSIVE);
985 * When no sxacts are active, nothing overlaps; set the xid values to
986 * invalid to show that there are no valid entries. Don't clear headPage,
987 * though. A new xmin might still land on that page, and we don't want to
988 * repeatedly zero out the same page.
990 if (!TransactionIdIsValid(xid))
992 serialControl->tailXid = InvalidTransactionId;
993 serialControl->headXid = InvalidTransactionId;
994 LWLockRelease(SerialControlLock);
995 return;
999 * When we're recovering prepared transactions, the global xmin might move
1000 * backwards depending on the order they're recovered. Normally that's not
1001 * OK, but during recovery no serializable transactions will commit, so
1002 * the SLRU is empty and we can get away with it.
1004 if (RecoveryInProgress())
1006 Assert(serialControl->headPage < 0);
1007 if (!TransactionIdIsValid(serialControl->tailXid)
1008 || TransactionIdPrecedes(xid, serialControl->tailXid))
1010 serialControl->tailXid = xid;
1012 LWLockRelease(SerialControlLock);
1013 return;
1016 Assert(!TransactionIdIsValid(serialControl->tailXid)
1017 || TransactionIdFollows(xid, serialControl->tailXid));
1019 serialControl->tailXid = xid;
1021 LWLockRelease(SerialControlLock);
1025 * Perform a checkpoint --- either during shutdown, or on-the-fly
1027 * We don't have any data that needs to survive a restart, but this is a
1028 * convenient place to truncate the SLRU.
1030 void
1031 CheckPointPredicate(void)
1033 int truncateCutoffPage;
1035 LWLockAcquire(SerialControlLock, LW_EXCLUSIVE);
1037 /* Exit quickly if the SLRU is currently not in use. */
1038 if (serialControl->headPage < 0)
1040 LWLockRelease(SerialControlLock);
1041 return;
1044 if (TransactionIdIsValid(serialControl->tailXid))
1046 int tailPage;
1048 tailPage = SerialPage(serialControl->tailXid);
1051 * It is possible for the tailXid to be ahead of the headXid. This
1052 * occurs if we checkpoint while there are in-progress serializable
1053 * transaction(s) advancing the tail but we have yet to summarize the
1054 * transactions. In this case, we cut off up to the headPage and the
1055 * next summary will advance the headXid.
1057 if (SerialPagePrecedesLogically(tailPage, serialControl->headPage))
1059 /* We can truncate the SLRU up to the page containing tailXid */
1060 truncateCutoffPage = tailPage;
1062 else
1063 truncateCutoffPage = serialControl->headPage;
1065 else
1067 /*----------
1068 * The SLRU is no longer needed. Truncate to head before we set head
1069 * invalid.
1071 * XXX: It's possible that the SLRU is not needed again until XID
1072 * wrap-around has happened, so that the segment containing headPage
1073 * that we leave behind will appear to be new again. In that case it
1074 * won't be removed until XID horizon advances enough to make it
1075 * current again.
1077 * XXX: This should happen in vac_truncate_clog(), not in checkpoints.
1078 * Consider this scenario, starting from a system with no in-progress
1079 * transactions and VACUUM FREEZE having maximized oldestXact:
1080 * - Start a SERIALIZABLE transaction.
1081 * - Start, finish, and summarize a SERIALIZABLE transaction, creating
1082 * one SLRU page.
1083 * - Consume XIDs to reach xidStopLimit.
1084 * - Finish all transactions. Due to the long-running SERIALIZABLE
1085 * transaction, earlier checkpoints did not touch headPage. The
1086 * next checkpoint will change it, but that checkpoint happens after
1087 * the end of the scenario.
1088 * - VACUUM to advance XID limits.
1089 * - Consume ~2M XIDs, crossing the former xidWrapLimit.
1090 * - Start, finish, and summarize a SERIALIZABLE transaction.
1091 * SerialAdd() declines to create the targetPage, because headPage
1092 * is not regarded as in the past relative to that targetPage. The
1093 * transaction instigating the summarize fails in
1094 * SimpleLruReadPage().
1096 truncateCutoffPage = serialControl->headPage;
1097 serialControl->headPage = -1;
1100 LWLockRelease(SerialControlLock);
1103 * Truncate away pages that are no longer required. Note that no
1104 * additional locking is required, because this is only called as part of
1105 * a checkpoint, and the validity limits have already been determined.
1107 SimpleLruTruncate(SerialSlruCtl, truncateCutoffPage);
1110 * Write dirty SLRU pages to disk
1112 * This is not actually necessary from a correctness point of view. We do
1113 * it merely as a debugging aid.
1115 * We're doing this after the truncation to avoid writing pages right
1116 * before deleting the file in which they sit, which would be completely
1117 * pointless.
1119 SimpleLruWriteAll(SerialSlruCtl, true);
1122 /*------------------------------------------------------------------------*/
1125 * InitPredicateLocks -- Initialize the predicate locking data structures.
1127 * This is called from CreateSharedMemoryAndSemaphores(), which see for
1128 * more comments. In the normal postmaster case, the shared hash tables
1129 * are created here. Backends inherit the pointers
1130 * to the shared tables via fork(). In the EXEC_BACKEND case, each
1131 * backend re-executes this code to obtain pointers to the already existing
1132 * shared hash tables.
1134 void
1135 InitPredicateLocks(void)
1137 HASHCTL info;
1138 long max_table_size;
1139 Size requestSize;
1140 bool found;
1142 #ifndef EXEC_BACKEND
1143 Assert(!IsUnderPostmaster);
1144 #endif
1147 * Compute size of predicate lock target hashtable. Note these
1148 * calculations must agree with PredicateLockShmemSize!
1150 max_table_size = NPREDICATELOCKTARGETENTS();
1153 * Allocate hash table for PREDICATELOCKTARGET structs. This stores
1154 * per-predicate-lock-target information.
1156 info.keysize = sizeof(PREDICATELOCKTARGETTAG);
1157 info.entrysize = sizeof(PREDICATELOCKTARGET);
1158 info.num_partitions = NUM_PREDICATELOCK_PARTITIONS;
1160 PredicateLockTargetHash = ShmemInitHash("PREDICATELOCKTARGET hash",
1161 max_table_size,
1162 max_table_size,
1163 &info,
1164 HASH_ELEM | HASH_BLOBS |
1165 HASH_PARTITION | HASH_FIXED_SIZE);
1168 * Reserve a dummy entry in the hash table; we use it to make sure there's
1169 * always one entry available when we need to split or combine a page,
1170 * because running out of space there could mean aborting a
1171 * non-serializable transaction.
1173 if (!IsUnderPostmaster)
1175 (void) hash_search(PredicateLockTargetHash, &ScratchTargetTag,
1176 HASH_ENTER, &found);
1177 Assert(!found);
1180 /* Pre-calculate the hash and partition lock of the scratch entry */
1181 ScratchTargetTagHash = PredicateLockTargetTagHashCode(&ScratchTargetTag);
1182 ScratchPartitionLock = PredicateLockHashPartitionLock(ScratchTargetTagHash);
1185 * Allocate hash table for PREDICATELOCK structs. This stores per
1186 * xact-lock-of-a-target information.
1188 info.keysize = sizeof(PREDICATELOCKTAG);
1189 info.entrysize = sizeof(PREDICATELOCK);
1190 info.hash = predicatelock_hash;
1191 info.num_partitions = NUM_PREDICATELOCK_PARTITIONS;
1193 /* Assume an average of 2 xacts per target */
1194 max_table_size *= 2;
1196 PredicateLockHash = ShmemInitHash("PREDICATELOCK hash",
1197 max_table_size,
1198 max_table_size,
1199 &info,
1200 HASH_ELEM | HASH_FUNCTION |
1201 HASH_PARTITION | HASH_FIXED_SIZE);
1204 * Compute size for serializable transaction hashtable. Note these
1205 * calculations must agree with PredicateLockShmemSize!
1207 max_table_size = (MaxBackends + max_prepared_xacts);
1210 * Allocate a list to hold information on transactions participating in
1211 * predicate locking.
1213 * Assume an average of 10 predicate locking transactions per backend.
1214 * This allows aggressive cleanup while detail is present before data must
1215 * be summarized for storage in SLRU and the "dummy" transaction.
1217 max_table_size *= 10;
1219 PredXact = ShmemInitStruct("PredXactList",
1220 PredXactListDataSize,
1221 &found);
1222 Assert(found == IsUnderPostmaster);
1223 if (!found)
1225 int i;
1227 dlist_init(&PredXact->availableList);
1228 dlist_init(&PredXact->activeList);
1229 PredXact->SxactGlobalXmin = InvalidTransactionId;
1230 PredXact->SxactGlobalXminCount = 0;
1231 PredXact->WritableSxactCount = 0;
1232 PredXact->LastSxactCommitSeqNo = FirstNormalSerCommitSeqNo - 1;
1233 PredXact->CanPartialClearThrough = 0;
1234 PredXact->HavePartialClearedThrough = 0;
1235 requestSize = mul_size((Size) max_table_size,
1236 sizeof(SERIALIZABLEXACT));
1237 PredXact->element = ShmemAlloc(requestSize);
1238 /* Add all elements to available list, clean. */
1239 memset(PredXact->element, 0, requestSize);
1240 for (i = 0; i < max_table_size; i++)
1242 LWLockInitialize(&PredXact->element[i].perXactPredicateListLock,
1243 LWTRANCHE_PER_XACT_PREDICATE_LIST);
1244 dlist_push_tail(&PredXact->availableList, &PredXact->element[i].xactLink);
1246 PredXact->OldCommittedSxact = CreatePredXact();
1247 SetInvalidVirtualTransactionId(PredXact->OldCommittedSxact->vxid);
1248 PredXact->OldCommittedSxact->prepareSeqNo = 0;
1249 PredXact->OldCommittedSxact->commitSeqNo = 0;
1250 PredXact->OldCommittedSxact->SeqNo.lastCommitBeforeSnapshot = 0;
1251 dlist_init(&PredXact->OldCommittedSxact->outConflicts);
1252 dlist_init(&PredXact->OldCommittedSxact->inConflicts);
1253 dlist_init(&PredXact->OldCommittedSxact->predicateLocks);
1254 dlist_node_init(&PredXact->OldCommittedSxact->finishedLink);
1255 dlist_init(&PredXact->OldCommittedSxact->possibleUnsafeConflicts);
1256 PredXact->OldCommittedSxact->topXid = InvalidTransactionId;
1257 PredXact->OldCommittedSxact->finishedBefore = InvalidTransactionId;
1258 PredXact->OldCommittedSxact->xmin = InvalidTransactionId;
1259 PredXact->OldCommittedSxact->flags = SXACT_FLAG_COMMITTED;
1260 PredXact->OldCommittedSxact->pid = 0;
1261 PredXact->OldCommittedSxact->pgprocno = INVALID_PROC_NUMBER;
1263 /* This never changes, so let's keep a local copy. */
1264 OldCommittedSxact = PredXact->OldCommittedSxact;
1267 * Allocate hash table for SERIALIZABLEXID structs. This stores per-xid
1268 * information for serializable transactions which have accessed data.
1270 info.keysize = sizeof(SERIALIZABLEXIDTAG);
1271 info.entrysize = sizeof(SERIALIZABLEXID);
1273 SerializableXidHash = ShmemInitHash("SERIALIZABLEXID hash",
1274 max_table_size,
1275 max_table_size,
1276 &info,
1277 HASH_ELEM | HASH_BLOBS |
1278 HASH_FIXED_SIZE);
1281 * Allocate space for tracking rw-conflicts in lists attached to the
1282 * transactions.
1284 * Assume an average of 5 conflicts per transaction. Calculations suggest
1285 * that this will prevent resource exhaustion in even the most pessimal
1286 * loads up to max_connections = 200 with all 200 connections pounding the
1287 * database with serializable transactions. Beyond that, there may be
1288 * occasional transactions canceled when trying to flag conflicts. That's
1289 * probably OK.
1291 max_table_size *= 5;
1293 RWConflictPool = ShmemInitStruct("RWConflictPool",
1294 RWConflictPoolHeaderDataSize,
1295 &found);
1296 Assert(found == IsUnderPostmaster);
1297 if (!found)
1299 int i;
1301 dlist_init(&RWConflictPool->availableList);
1302 requestSize = mul_size((Size) max_table_size,
1303 RWConflictDataSize);
1304 RWConflictPool->element = ShmemAlloc(requestSize);
1305 /* Add all elements to available list, clean. */
1306 memset(RWConflictPool->element, 0, requestSize);
1307 for (i = 0; i < max_table_size; i++)
1309 dlist_push_tail(&RWConflictPool->availableList,
1310 &RWConflictPool->element[i].outLink);
1315 * Create or attach to the header for the list of finished serializable
1316 * transactions.
1318 FinishedSerializableTransactions = (dlist_head *)
1319 ShmemInitStruct("FinishedSerializableTransactions",
1320 sizeof(dlist_head),
1321 &found);
1322 Assert(found == IsUnderPostmaster);
1323 if (!found)
1324 dlist_init(FinishedSerializableTransactions);
1327 * Initialize the SLRU storage for old committed serializable
1328 * transactions.
1330 SerialInit();
1334 * Estimate shared-memory space used for predicate lock table
1336 Size
1337 PredicateLockShmemSize(void)
1339 Size size = 0;
1340 long max_table_size;
1342 /* predicate lock target hash table */
1343 max_table_size = NPREDICATELOCKTARGETENTS();
1344 size = add_size(size, hash_estimate_size(max_table_size,
1345 sizeof(PREDICATELOCKTARGET)));
1347 /* predicate lock hash table */
1348 max_table_size *= 2;
1349 size = add_size(size, hash_estimate_size(max_table_size,
1350 sizeof(PREDICATELOCK)));
1353 * Since NPREDICATELOCKTARGETENTS is only an estimate, add 10% safety
1354 * margin.
1356 size = add_size(size, size / 10);
1358 /* transaction list */
1359 max_table_size = MaxBackends + max_prepared_xacts;
1360 max_table_size *= 10;
1361 size = add_size(size, PredXactListDataSize);
1362 size = add_size(size, mul_size((Size) max_table_size,
1363 sizeof(SERIALIZABLEXACT)));
1365 /* transaction xid table */
1366 size = add_size(size, hash_estimate_size(max_table_size,
1367 sizeof(SERIALIZABLEXID)));
1369 /* rw-conflict pool */
1370 max_table_size *= 5;
1371 size = add_size(size, RWConflictPoolHeaderDataSize);
1372 size = add_size(size, mul_size((Size) max_table_size,
1373 RWConflictDataSize));
1375 /* Head for list of finished serializable transactions. */
1376 size = add_size(size, sizeof(dlist_head));
1378 /* Shared memory structures for SLRU tracking of old committed xids. */
1379 size = add_size(size, sizeof(SerialControlData));
1380 size = add_size(size, SimpleLruShmemSize(serializable_buffers, 0));
1382 return size;
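/*
 * For example (illustrative figures only): with max_pred_locks_per_transaction
 * = 64 and (MaxBackends + max_prepared_xacts) totalling 128, the two lock
 * hash tables above are sized for 8192 and 16384 entries (plus the 10%
 * margin), the transaction list for 1280 SERIALIZABLEXACTs, and the
 * rw-conflict pool for 6400 elements.
 */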
1387 * Compute the hash code associated with a PREDICATELOCKTAG.
1389 * Because we want to use just one set of partition locks for both the
1390 * PREDICATELOCKTARGET and PREDICATELOCK hash tables, we have to make sure
1391 * that PREDICATELOCKs fall into the same partition number as their
1392 * associated PREDICATELOCKTARGETs. dynahash.c expects the partition number
1393 * to be the low-order bits of the hash code, and therefore a
1394 * PREDICATELOCKTAG's hash code must have the same low-order bits as the
1395 * associated PREDICATELOCKTARGETTAG's hash code. We achieve this with this
1396 * specialized hash function.
1398 static uint32
1399 predicatelock_hash(const void *key, Size keysize)
1401 const PREDICATELOCKTAG *predicatelocktag = (const PREDICATELOCKTAG *) key;
1402 uint32 targethash;
1404 Assert(keysize == sizeof(PREDICATELOCKTAG));
1406 /* Look into the associated target object, and compute its hash code */
1407 targethash = PredicateLockTargetTagHashCode(&predicatelocktag->myTarget->tag);
1409 return PredicateLockHashCodeFromTargetHashCode(predicatelocktag, targethash);
1414 * GetPredicateLockStatusData
1415 * Return a table containing the internal state of the predicate
1416 * lock manager for use in pg_lock_status.
1418 * Like GetLockStatusData, this function tries to hold the partition LWLocks
1419 * for as short a time as possible by returning two arrays that simply
1420 * contain the PREDICATELOCKTARGETTAG and SERIALIZABLEXACT for each lock
1421 * table entry. Multiple copies of the same PREDICATELOCKTARGETTAG and
1422 * SERIALIZABLEXACT will likely appear.
1424 PredicateLockData *
1425 GetPredicateLockStatusData(void)
1427 PredicateLockData *data;
1428 int i;
1429 int els,
1430 el;
1431 HASH_SEQ_STATUS seqstat;
1432 PREDICATELOCK *predlock;
1434 data = (PredicateLockData *) palloc(sizeof(PredicateLockData));
1437 * To ensure consistency, take simultaneous locks on all partition locks
1438 * in ascending order, then SerializableXactHashLock.
1440 for (i = 0; i < NUM_PREDICATELOCK_PARTITIONS; i++)
1441 LWLockAcquire(PredicateLockHashPartitionLockByIndex(i), LW_SHARED);
1442 LWLockAcquire(SerializableXactHashLock, LW_SHARED);
1444 /* Get number of locks and allocate appropriately-sized arrays. */
1445 els = hash_get_num_entries(PredicateLockHash);
1446 data->nelements = els;
1447 data->locktags = (PREDICATELOCKTARGETTAG *)
1448 palloc(sizeof(PREDICATELOCKTARGETTAG) * els);
1449 data->xacts = (SERIALIZABLEXACT *)
1450 palloc(sizeof(SERIALIZABLEXACT) * els);
1453 /* Scan through PredicateLockHash and copy contents */
1454 hash_seq_init(&seqstat, PredicateLockHash);
1456 el = 0;
1458 while ((predlock = (PREDICATELOCK *) hash_seq_search(&seqstat)))
1460 data->locktags[el] = predlock->tag.myTarget->tag;
1461 data->xacts[el] = *predlock->tag.myXact;
1462 el++;
1465 Assert(el == els);
1467 /* Release locks in reverse order */
1468 LWLockRelease(SerializableXactHashLock);
1469 for (i = NUM_PREDICATELOCK_PARTITIONS - 1; i >= 0; i--)
1470 LWLockRelease(PredicateLockHashPartitionLockByIndex(i));
1472 return data;
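/*
 * Illustrative only: the data collected here ultimately surfaces through the
 * pg_locks view, where SIREAD locks can be inspected with something like
 *
 *     SELECT locktype, relation::regclass, page, tuple, pid
 *       FROM pg_locks WHERE mode = 'SIReadLock';
 */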
1476 * Free up shared memory structures by pushing the oldest sxact (the one at
1477 * the front of the SummarizeOldestCommittedSxact queue) into summary form.
1478 * Each call will free exactly one SERIALIZABLEXACT structure and may also
1479 * free one or more of these structures: SERIALIZABLEXID, PREDICATELOCK,
1480 * PREDICATELOCKTARGET, RWConflictData.
1482 static void
1483 SummarizeOldestCommittedSxact(void)
1485 SERIALIZABLEXACT *sxact;
1487 LWLockAcquire(SerializableFinishedListLock, LW_EXCLUSIVE);
1490 * This function is only called if there are no sxact slots available.
1491 * Some of them must belong to old, already-finished transactions, so
1492 * there should be something in FinishedSerializableTransactions list that
1493 * we can summarize. However, there's a race condition: while we were not
1494 * holding any locks, a transaction might have ended and cleaned up all
1495 * the finished sxact entries already, freeing up their sxact slots. In
1496 * that case, we have nothing to do here. The caller will find one of the
1497 * slots released by the other backend when it retries.
1499 if (dlist_is_empty(FinishedSerializableTransactions))
1501 LWLockRelease(SerializableFinishedListLock);
1502 return;
1506 * Grab the first sxact off the finished list -- this will be the earliest
1507 * commit. Remove it from the list.
1509 sxact = dlist_head_element(SERIALIZABLEXACT, finishedLink,
1510 FinishedSerializableTransactions);
1511 dlist_delete_thoroughly(&sxact->finishedLink);
1513 /* Add to SLRU summary information. */
1514 if (TransactionIdIsValid(sxact->topXid) && !SxactIsReadOnly(sxact))
1515 SerialAdd(sxact->topXid, SxactHasConflictOut(sxact)
1516 ? sxact->SeqNo.earliestOutConflictCommit : InvalidSerCommitSeqNo);
1518 /* Summarize and release the detail. */
1519 ReleaseOneSerializableXact(sxact, false, true);
1521 LWLockRelease(SerializableFinishedListLock);
1525 * GetSafeSnapshot
1526 * Obtain and register a snapshot for a READ ONLY DEFERRABLE
1527 * transaction. Ensures that the snapshot is "safe", i.e. a
1528 * read-only transaction running on it can execute serializably
1529 * without further checks. This requires waiting for concurrent
1530 * transactions to complete, and retrying with a new snapshot if
1531 * one of them could possibly create a conflict.
1533 * As with GetSerializableTransactionSnapshot (which this is a subroutine
1534 * for), the passed-in Snapshot pointer should reference a static data
1535 * area that can safely be passed to GetSnapshotData.
1537 static Snapshot
1538 GetSafeSnapshot(Snapshot origSnapshot)
1540 Snapshot snapshot;
1542 Assert(XactReadOnly && XactDeferrable);
1544 while (true)
1547 * GetSerializableTransactionSnapshotInt is going to call
1548 * GetSnapshotData, so we need to provide it the static snapshot area
1549 * our caller passed to us. The pointer returned is actually the same
1550 * one passed to it, but we avoid assuming that here.
1552 snapshot = GetSerializableTransactionSnapshotInt(origSnapshot,
1553 NULL, InvalidPid);
1555 if (MySerializableXact == InvalidSerializableXact)
1556 return snapshot; /* no concurrent r/w xacts; it's safe */
1558 LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1561 * Wait for concurrent transactions to finish. Stop early if one of
1562 * them marked us as conflicted.
1564 MySerializableXact->flags |= SXACT_FLAG_DEFERRABLE_WAITING;
1565 while (!(dlist_is_empty(&MySerializableXact->possibleUnsafeConflicts) ||
1566 SxactIsROUnsafe(MySerializableXact)))
1568 LWLockRelease(SerializableXactHashLock);
1569 ProcWaitForSignal(WAIT_EVENT_SAFE_SNAPSHOT);
1570 LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1572 MySerializableXact->flags &= ~SXACT_FLAG_DEFERRABLE_WAITING;
1574 if (!SxactIsROUnsafe(MySerializableXact))
1576 LWLockRelease(SerializableXactHashLock);
1577 break; /* success */
1580 LWLockRelease(SerializableXactHashLock);
1582 /* else, need to retry... */
1583 ereport(DEBUG2,
1584 (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
1585 errmsg_internal("deferrable snapshot was unsafe; trying a new one")));
1586 ReleasePredicateLocks(false, false);
1590 * Now we have a safe snapshot, so we don't need to do any further checks.
1592 Assert(SxactIsROSafe(MySerializableXact));
1593 ReleasePredicateLocks(false, true);
1595 return snapshot;
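/*
 * At the SQL level this code path is reached with, for example:
 *
 *     BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY DEFERRABLE;
 *
 * The backend may sleep repeatedly in the loop above (reported as the
 * SafeSnapshot wait event) and retry with fresh snapshots until one can be
 * shown to be safe.
 */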
1599 * GetSafeSnapshotBlockingPids
1600 * If the specified process is currently blocked in GetSafeSnapshot,
1601 * write the process IDs of all processes that it is blocked by
1602 * into the caller-supplied buffer output[]. The list is truncated at
1603 * output_size, and the number of PIDs written into the buffer is
1604 * returned. Returns zero if the given PID is not currently blocked
1605 * in GetSafeSnapshot.
1607 int
1608 GetSafeSnapshotBlockingPids(int blocked_pid, int *output, int output_size)
1610 int num_written = 0;
1611 dlist_iter iter;
1612 SERIALIZABLEXACT *blocking_sxact = NULL;
1614 LWLockAcquire(SerializableXactHashLock, LW_SHARED);
1616 /* Find blocked_pid's SERIALIZABLEXACT by linear search. */
1617 dlist_foreach(iter, &PredXact->activeList)
1619 SERIALIZABLEXACT *sxact =
1620 dlist_container(SERIALIZABLEXACT, xactLink, iter.cur);
1622 if (sxact->pid == blocked_pid)
1624 blocking_sxact = sxact;
1625 break;
1629 /* Did we find it, and is it currently waiting in GetSafeSnapshot? */
1630 if (blocking_sxact != NULL && SxactIsDeferrableWaiting(blocking_sxact))
1632 /* Traverse the list of possible unsafe conflicts collecting PIDs. */
1633 dlist_foreach(iter, &blocking_sxact->possibleUnsafeConflicts)
1635 RWConflict possibleUnsafeConflict =
1636 dlist_container(RWConflictData, inLink, iter.cur);
1638 output[num_written++] = possibleUnsafeConflict->sxactOut->pid;
1640 if (num_written >= output_size)
1641 break;
1645 LWLockRelease(SerializableXactHashLock);
1647 return num_written;
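/*
 * Caller sketch (hypothetical variable names): the
 * pg_safe_snapshot_blocking_pids() SQL function consumes this interface
 * roughly like so, sizing the output buffer for the worst case of one
 * blocker per backend slot:
 *
 *     int *pids = (int *) palloc(MaxBackends * sizeof(int));
 *     int  num = GetSafeSnapshotBlockingPids(blocked_pid, pids, MaxBackends);
 */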
1651 * Acquire a snapshot that can be used for the current transaction.
1653 * Make sure we have a SERIALIZABLEXACT reference in MySerializableXact.
1654 * It should be current for this process and be contained in PredXact.
1656 * The passed-in Snapshot pointer should reference a static data area that
1657 * can safely be passed to GetSnapshotData. The return value is actually
1658 * always this same pointer; no new snapshot data structure is allocated
1659 * within this function.
1661 Snapshot
1662 GetSerializableTransactionSnapshot(Snapshot snapshot)
1664 Assert(IsolationIsSerializable());
1667 * Can't use serializable mode while recovery is still active, as it is,
1668 * for example, on a hot standby. We could get here despite the check in
1669 * check_transaction_isolation() if default_transaction_isolation is set
1670 * to serializable, so phrase the hint accordingly.
1672 if (RecoveryInProgress())
1673 ereport(ERROR,
1674 (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
1675 errmsg("cannot use serializable mode in a hot standby"),
1676 errdetail("default_transaction_isolation is set to \"serializable\"."),
1677 errhint("You can use \"SET default_transaction_isolation = 'repeatable read'\" to change the default.")));
1680 * A special optimization is available for SERIALIZABLE READ ONLY
1681 * DEFERRABLE transactions -- we can wait for a suitable snapshot and
1682 * thereby avoid all SSI overhead once it's running.
1684 if (XactReadOnly && XactDeferrable)
1685 return GetSafeSnapshot(snapshot);
1687 return GetSerializableTransactionSnapshotInt(snapshot,
1688 NULL, InvalidPid);
1692 * Import a snapshot to be used for the current transaction.
1694 * This is nearly the same as GetSerializableTransactionSnapshot, except that
1695 * we don't take a new snapshot, but rather use the data we're handed.
1697 * The caller must have verified that the snapshot came from a serializable
1698 * transaction; and if we're read-write, the source transaction must not be
1699 * read-only.
1701 void
1702 SetSerializableTransactionSnapshot(Snapshot snapshot,
1703 VirtualTransactionId *sourcevxid,
1704 int sourcepid)
1706 Assert(IsolationIsSerializable());
1709 * If this is called by parallel.c in a parallel worker, we don't want to
1710 * create a SERIALIZABLEXACT just yet because the leader's
1711 * SERIALIZABLEXACT will be installed with AttachSerializableXact(). We
1712 * also don't want to reject SERIALIZABLE READ ONLY DEFERRABLE in this
1713 * case, because the leader has already determined that the snapshot it
1714 * has passed us is safe. So there is nothing for us to do.
1716 if (IsParallelWorker())
1717 return;
1720 * We do not allow SERIALIZABLE READ ONLY DEFERRABLE transactions to
1721 * import snapshots, since there's no way to wait for a safe snapshot when
1722 * we're using the snap we're told to. (XXX instead of throwing an error,
1723 * we could just ignore the XactDeferrable flag?)
1725 if (XactReadOnly && XactDeferrable)
1726 ereport(ERROR,
1727 (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
1728 errmsg("a snapshot-importing transaction must not be READ ONLY DEFERRABLE")));
1730 (void) GetSerializableTransactionSnapshotInt(snapshot, sourcevxid,
1731 sourcepid);
1735 * Guts of GetSerializableTransactionSnapshot
1737 * If sourcevxid is valid, this is actually an import operation and we should
1738 * skip calling GetSnapshotData, because the snapshot contents are already
1739 * loaded up. HOWEVER: to avoid race conditions, we must check that the
1740 * source xact is still running after we acquire SerializableXactHashLock.
1741 * We do that by calling ProcArrayInstallImportedXmin.
1743 static Snapshot
1744 GetSerializableTransactionSnapshotInt(Snapshot snapshot,
1745 VirtualTransactionId *sourcevxid,
1746 int sourcepid)
1748 PGPROC *proc;
1749 VirtualTransactionId vxid;
1750 SERIALIZABLEXACT *sxact,
1751 *othersxact;
1753 /* We only do this for serializable transactions. Once. */
1754 Assert(MySerializableXact == InvalidSerializableXact);
1756 Assert(!RecoveryInProgress());
1759 * Since all parts of a serializable transaction must use the same
1760 * snapshot, it is too late to establish one after a parallel operation
1761 * has begun.
1763 if (IsInParallelMode())
1764 elog(ERROR, "cannot establish serializable snapshot during a parallel operation");
1766 proc = MyProc;
1767 Assert(proc != NULL);
1768 GET_VXID_FROM_PGPROC(vxid, *proc);
1771 * First we get the sxact structure, which may involve looping and access
1772 * to the "finished" list to free a structure for use.
1774 * We must hold SerializableXactHashLock when taking/checking the snapshot
1775 * to avoid race conditions, for much the same reasons that
1776 * GetSnapshotData takes the ProcArrayLock. Since we might have to
1777 * release SerializableXactHashLock to call SummarizeOldestCommittedSxact,
1778 * this means we have to create the sxact first, which is a bit annoying
1779 * (in particular, an elog(ERROR) in procarray.c would cause us to leak
1780 * the sxact). Consider refactoring to avoid this.
1782 #ifdef TEST_SUMMARIZE_SERIAL
1783 SummarizeOldestCommittedSxact();
1784 #endif
1785 LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1788 sxact = CreatePredXact();
1789 /* If null, push out committed sxact to SLRU summary & retry. */
1790 if (!sxact)
1792 LWLockRelease(SerializableXactHashLock);
1793 SummarizeOldestCommittedSxact();
1794 LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1796 } while (!sxact);
1798 /* Get the snapshot, or check that it's safe to use */
1799 if (!sourcevxid)
1800 snapshot = GetSnapshotData(snapshot);
1801 else if (!ProcArrayInstallImportedXmin(snapshot->xmin, sourcevxid))
1803 ReleasePredXact(sxact);
1804 LWLockRelease(SerializableXactHashLock);
1805 ereport(ERROR,
1806 (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
1807 errmsg("could not import the requested snapshot"),
1808 errdetail("The source process with PID %d is not running anymore.",
1809 sourcepid)));
1813 * If there are no serializable transactions which are not read-only, we
1814 * can "opt out" of predicate locking and conflict checking for a
1815 * read-only transaction.
1817 * The reason this is safe is that a read-only transaction can only become
1818 * part of a dangerous structure if it overlaps a writable transaction
1819 * which in turn overlaps a writable transaction which committed before
1820 * the read-only transaction started. A new writable transaction can
1821 * overlap this one, but it can't meet the other condition of overlapping
1822 * a transaction which committed before this one started.
1824 if (XactReadOnly && PredXact->WritableSxactCount == 0)
1826 ReleasePredXact(sxact);
1827 LWLockRelease(SerializableXactHashLock);
1828 return snapshot;
1831 /* Initialize the structure. */
1832 sxact->vxid = vxid;
1833 sxact->SeqNo.lastCommitBeforeSnapshot = PredXact->LastSxactCommitSeqNo;
1834 sxact->prepareSeqNo = InvalidSerCommitSeqNo;
1835 sxact->commitSeqNo = InvalidSerCommitSeqNo;
1836 dlist_init(&(sxact->outConflicts));
1837 dlist_init(&(sxact->inConflicts));
1838 dlist_init(&(sxact->possibleUnsafeConflicts));
1839 sxact->topXid = GetTopTransactionIdIfAny();
1840 sxact->finishedBefore = InvalidTransactionId;
1841 sxact->xmin = snapshot->xmin;
1842 sxact->pid = MyProcPid;
1843 sxact->pgprocno = MyProcNumber;
1844 dlist_init(&sxact->predicateLocks);
1845 dlist_node_init(&sxact->finishedLink);
1846 sxact->flags = 0;
1847 if (XactReadOnly)
1849 dlist_iter iter;
1851 sxact->flags |= SXACT_FLAG_READ_ONLY;
1854 * Register all concurrent r/w transactions as possible conflicts; if
1855 * all of them commit without any outgoing conflicts to earlier
1856 * transactions then this snapshot can be deemed safe (and we can run
1857 * without tracking predicate locks).
1859 dlist_foreach(iter, &PredXact->activeList)
1861 othersxact = dlist_container(SERIALIZABLEXACT, xactLink, iter.cur);
1863 if (!SxactIsCommitted(othersxact)
1864 && !SxactIsDoomed(othersxact)
1865 && !SxactIsReadOnly(othersxact))
1867 SetPossibleUnsafeConflict(sxact, othersxact);
1872 * If we didn't find any possibly unsafe conflicts because every
1873 * uncommitted writable transaction turned out to be doomed, then we
1874 * can "opt out" immediately. See comments above the earlier check
1875 * for PredXact->WritableSxactCount == 0.
1877 if (dlist_is_empty(&sxact->possibleUnsafeConflicts))
1879 ReleasePredXact(sxact);
1880 LWLockRelease(SerializableXactHashLock);
1881 return snapshot;
1884 else
1886 ++(PredXact->WritableSxactCount);
1887 Assert(PredXact->WritableSxactCount <=
1888 (MaxBackends + max_prepared_xacts));
1891 /* Maintain serializable global xmin info. */
1892 if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
1894 Assert(PredXact->SxactGlobalXminCount == 0);
1895 PredXact->SxactGlobalXmin = snapshot->xmin;
1896 PredXact->SxactGlobalXminCount = 1;
1897 SerialSetActiveSerXmin(snapshot->xmin);
1899 else if (TransactionIdEquals(snapshot->xmin, PredXact->SxactGlobalXmin))
1901 Assert(PredXact->SxactGlobalXminCount > 0);
1902 PredXact->SxactGlobalXminCount++;
1904 else
1906 Assert(TransactionIdFollows(snapshot->xmin, PredXact->SxactGlobalXmin));
1909 MySerializableXact = sxact;
1910 MyXactDidWrite = false; /* haven't written anything yet */
1912 LWLockRelease(SerializableXactHashLock);
1914 CreateLocalPredicateLockHash();
1916 return snapshot;
1919 static void
1920 CreateLocalPredicateLockHash(void)
1922 HASHCTL hash_ctl;
1924 /* Initialize the backend-local hash table of parent locks */
1925 Assert(LocalPredicateLockHash == NULL);
1926 hash_ctl.keysize = sizeof(PREDICATELOCKTARGETTAG);
1927 hash_ctl.entrysize = sizeof(LOCALPREDICATELOCK);
1928 LocalPredicateLockHash = hash_create("Local predicate lock",
1929 max_predicate_locks_per_xact,
1930 &hash_ctl,
1931 HASH_ELEM | HASH_BLOBS);
1935 * Register the top level XID in SerializableXidHash.
1936 * Also store it for easy reference in MySerializableXact.
1938 void
1939 RegisterPredicateLockingXid(TransactionId xid)
1941 SERIALIZABLEXIDTAG sxidtag;
1942 SERIALIZABLEXID *sxid;
1943 bool found;
1946 * If we're not tracking predicate lock data for this transaction, we
1947 * should ignore the request and return quickly.
1949 if (MySerializableXact == InvalidSerializableXact)
1950 return;
1952 /* We should have a valid XID and be at the top level. */
1953 Assert(TransactionIdIsValid(xid));
1955 LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1957 /* This should only be done once per transaction. */
1958 Assert(MySerializableXact->topXid == InvalidTransactionId);
1960 MySerializableXact->topXid = xid;
1962 sxidtag.xid = xid;
1963 sxid = (SERIALIZABLEXID *) hash_search(SerializableXidHash,
1964 &sxidtag,
1965 HASH_ENTER, &found);
1966 Assert(!found);
1968 /* Initialize the structure. */
1969 sxid->myXact = MySerializableXact;
1970 LWLockRelease(SerializableXactHashLock);
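/*
 * (The expected caller is the transaction manager, when a serializable
 * transaction is first assigned a permanent transaction id; see
 * AssignTransactionId() in access/transam/xact.c.  Until that point the
 * sxact is identified only by its virtual transaction id.)
 */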
1975 * Check whether there are any predicate locks held by any transaction
1976 * for the page at the given block number.
1978 * Note that the transaction may be completed but not yet subject to
1979 * cleanup due to overlapping serializable transactions. This must
1980 * return valid information regardless of transaction isolation level.
1982 * Also note that this doesn't check for a conflicting relation lock,
1983 * just a lock specifically on the given page.
1985 * One use is to support proper behavior during GiST index vacuum.
1987 bool
1988 PageIsPredicateLocked(Relation relation, BlockNumber blkno)
1990 PREDICATELOCKTARGETTAG targettag;
1991 uint32 targettaghash;
1992 LWLock *partitionLock;
1993 PREDICATELOCKTARGET *target;
1995 SET_PREDICATELOCKTARGETTAG_PAGE(targettag,
1996 relation->rd_locator.dbOid,
1997 relation->rd_id,
1998 blkno);
2000 targettaghash = PredicateLockTargetTagHashCode(&targettag);
2001 partitionLock = PredicateLockHashPartitionLock(targettaghash);
2002 LWLockAcquire(partitionLock, LW_SHARED);
2003 target = (PREDICATELOCKTARGET *)
2004 hash_search_with_hash_value(PredicateLockTargetHash,
2005 &targettag, targettaghash,
2006 HASH_FIND, NULL);
2007 LWLockRelease(partitionLock);
2009 return (target != NULL);
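/*
 * Illustrative use (a sketch, not code from this module): an index AM that
 * wants to recycle a deleted page can hold off while any SIREAD lock is
 * still attached to the page:
 *
 *     if (PageIsPredicateLocked(indexRel, blkno))
 *         return false;
 */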
2014 * Check whether a particular lock is held by this transaction.
2016 * Important note: this function may return false even if the lock is
2017 * being held, because it uses the local lock table which is not
2018 * updated if another transaction modifies our lock list (e.g. to
2019 * split an index page). It can also return true when a coarser
2020 * granularity lock that covers this target is being held. Be careful
2021 * to only use this function in circumstances where such errors are
2022 * acceptable!
2024 static bool
2025 PredicateLockExists(const PREDICATELOCKTARGETTAG *targettag)
2027 LOCALPREDICATELOCK *lock;
2029 /* check local hash table */
2030 lock = (LOCALPREDICATELOCK *) hash_search(LocalPredicateLockHash,
2031 targettag,
2032 HASH_FIND, NULL);
2034 if (!lock)
2035 return false;
2038 * Found entry in the table, but still need to check whether it's actually
2039 * held -- it could just be a parent of some held lock.
2041 return lock->held;
2045 * Return the parent lock tag in the lock hierarchy: the next coarser
2046 * lock that covers the provided tag.
2048 * Returns true and sets *parent to the parent tag if one exists,
2049 * returns false if none exists.
2051 static bool
2052 GetParentPredicateLockTag(const PREDICATELOCKTARGETTAG *tag,
2053 PREDICATELOCKTARGETTAG *parent)
2055 switch (GET_PREDICATELOCKTARGETTAG_TYPE(*tag))
2057 case PREDLOCKTAG_RELATION:
2058 /* relation locks have no parent lock */
2059 return false;
2061 case PREDLOCKTAG_PAGE:
2062 /* parent lock is relation lock */
2063 SET_PREDICATELOCKTARGETTAG_RELATION(*parent,
2064 GET_PREDICATELOCKTARGETTAG_DB(*tag),
2065 GET_PREDICATELOCKTARGETTAG_RELATION(*tag));
2067 return true;
2069 case PREDLOCKTAG_TUPLE:
2070 /* parent lock is page lock */
2071 SET_PREDICATELOCKTARGETTAG_PAGE(*parent,
2072 GET_PREDICATELOCKTARGETTAG_DB(*tag),
2073 GET_PREDICATELOCKTARGETTAG_RELATION(*tag),
2074 GET_PREDICATELOCKTARGETTAG_PAGE(*tag));
2075 return true;
2078 /* not reachable */
2079 Assert(false);
2080 return false;
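/*
 * For illustration (hypothetical OIDs): starting from the tuple tag
 * {db 16384, rel 16385, block 7, offset 2}, successive calls return the page
 * tag {db 16384, rel 16385, block 7}, then the relation tag
 * {db 16384, rel 16385}, and finally false, since relation locks have no
 * parent.
 */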
2084 * Check whether the lock we are considering is already covered by a
2085 * coarser lock for our transaction.
2087 * Like PredicateLockExists, this function might return a false
2088 * negative, but it will never return a false positive.
2090 static bool
2091 CoarserLockCovers(const PREDICATELOCKTARGETTAG *newtargettag)
2093 PREDICATELOCKTARGETTAG targettag,
2094 parenttag;
2096 targettag = *newtargettag;
2098 /* check parents iteratively until no more */
2099 while (GetParentPredicateLockTag(&targettag, &parenttag))
2101 targettag = parenttag;
2102 if (PredicateLockExists(&targettag))
2103 return true;
2106 /* no more parents to check; lock is not covered */
2107 return false;
2111 * Remove the dummy entry from the predicate lock target hash, to free up some
2112 * scratch space. The caller must be holding SerializablePredicateListLock,
2113 * and must restore the entry with RestoreScratchTarget() before releasing the
2114 * lock.
2116 * If lockheld is true, the caller is already holding the partition lock
2117 * of the partition containing the scratch entry.
2119 static void
2120 RemoveScratchTarget(bool lockheld)
2122 bool found;
2124 Assert(LWLockHeldByMe(SerializablePredicateListLock));
2126 if (!lockheld)
2127 LWLockAcquire(ScratchPartitionLock, LW_EXCLUSIVE);
2128 hash_search_with_hash_value(PredicateLockTargetHash,
2129 &ScratchTargetTag,
2130 ScratchTargetTagHash,
2131 HASH_REMOVE, &found);
2132 Assert(found);
2133 if (!lockheld)
2134 LWLockRelease(ScratchPartitionLock);
2138 * Re-insert the dummy entry in predicate lock target hash.
2140 static void
2141 RestoreScratchTarget(bool lockheld)
2143 bool found;
2145 Assert(LWLockHeldByMe(SerializablePredicateListLock));
2147 if (!lockheld)
2148 LWLockAcquire(ScratchPartitionLock, LW_EXCLUSIVE);
2149 hash_search_with_hash_value(PredicateLockTargetHash,
2150 &ScratchTargetTag,
2151 ScratchTargetTagHash,
2152 HASH_ENTER, &found);
2153 Assert(!found);
2154 if (!lockheld)
2155 LWLockRelease(ScratchPartitionLock);
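/*
 * Usage pattern (see TransferPredicateLocksToNewTarget and
 * DropAllPredicateLocksFromTable below):
 *
 *     RemoveScratchTarget(false);
 *     ... a subsequent HASH_ENTER into PredicateLockTargetHash is now
 *         guaranteed to find a free element ...
 *     RestoreScratchTarget(false);
 */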
2159 * Check whether the list of related predicate locks is empty for a
2160 * predicate lock target, and remove the target if it is.
2162 static void
2163 RemoveTargetIfNoLongerUsed(PREDICATELOCKTARGET *target, uint32 targettaghash)
2165 PREDICATELOCKTARGET *rmtarget PG_USED_FOR_ASSERTS_ONLY;
2167 Assert(LWLockHeldByMe(SerializablePredicateListLock));
2169 /* Can't remove it until no locks at this target. */
2170 if (!dlist_is_empty(&target->predicateLocks))
2171 return;
2173 /* Actually remove the target. */
2174 rmtarget = hash_search_with_hash_value(PredicateLockTargetHash,
2175 &target->tag,
2176 targettaghash,
2177 HASH_REMOVE, NULL);
2178 Assert(rmtarget == target);
2182 * Delete child target locks owned by this process.
2183 * This implementation is assuming that the usage of each target tag field
2184 * is uniform. No need to make this hard if we don't have to.
2186 * We acquire an LWLock in the case of parallel mode, because worker
2187 * backends have access to the leader's SERIALIZABLEXACT. Otherwise,
2188 * we aren't acquiring LWLocks for the predicate lock or lock
2189 * target structures associated with this transaction unless we're going
2190 * to modify them, because no other process is permitted to modify our
2191 * locks.
2193 static void
2194 DeleteChildTargetLocks(const PREDICATELOCKTARGETTAG *newtargettag)
2196 SERIALIZABLEXACT *sxact;
2197 PREDICATELOCK *predlock;
2198 dlist_mutable_iter iter;
2200 LWLockAcquire(SerializablePredicateListLock, LW_SHARED);
2201 sxact = MySerializableXact;
2202 if (IsInParallelMode())
2203 LWLockAcquire(&sxact->perXactPredicateListLock, LW_EXCLUSIVE);
2205 dlist_foreach_modify(iter, &sxact->predicateLocks)
2207 PREDICATELOCKTAG oldlocktag;
2208 PREDICATELOCKTARGET *oldtarget;
2209 PREDICATELOCKTARGETTAG oldtargettag;
2211 predlock = dlist_container(PREDICATELOCK, xactLink, iter.cur);
2213 oldlocktag = predlock->tag;
2214 Assert(oldlocktag.myXact == sxact);
2215 oldtarget = oldlocktag.myTarget;
2216 oldtargettag = oldtarget->tag;
2218 if (TargetTagIsCoveredBy(oldtargettag, *newtargettag))
2220 uint32 oldtargettaghash;
2221 LWLock *partitionLock;
2222 PREDICATELOCK *rmpredlock PG_USED_FOR_ASSERTS_ONLY;
2224 oldtargettaghash = PredicateLockTargetTagHashCode(&oldtargettag);
2225 partitionLock = PredicateLockHashPartitionLock(oldtargettaghash);
2227 LWLockAcquire(partitionLock, LW_EXCLUSIVE);
2229 dlist_delete(&predlock->xactLink);
2230 dlist_delete(&predlock->targetLink);
2231 rmpredlock = hash_search_with_hash_value
2232 (PredicateLockHash,
2233 &oldlocktag,
2234 PredicateLockHashCodeFromTargetHashCode(&oldlocktag,
2235 oldtargettaghash),
2236 HASH_REMOVE, NULL);
2237 Assert(rmpredlock == predlock);
2239 RemoveTargetIfNoLongerUsed(oldtarget, oldtargettaghash);
2241 LWLockRelease(partitionLock);
2243 DecrementParentLocks(&oldtargettag);
2246 if (IsInParallelMode())
2247 LWLockRelease(&sxact->perXactPredicateListLock);
2248 LWLockRelease(SerializablePredicateListLock);
2252 * Returns the promotion limit for a given predicate lock target. This is the
2253 * max number of descendant locks allowed before promoting to the specified
2254 * tag. Note that the limit includes non-direct descendants (e.g., both tuples
2255 * and pages for a relation lock).
2257 * Currently the default limit is 2 for a page lock, and half the value of
2258 * max_pred_locks_per_transaction, minus 1, for a relation lock, to match
2259 * the behavior of earlier releases when upgrading.
2261 * TODO SSI: We should probably add additional GUCs to allow a maximum ratio
2262 * of page and tuple locks based on the pages in a relation, and the maximum
2263 * ratio of tuple locks to tuples in a page. This would provide more
2264 * generally "balanced" allocation of locks to where they are most useful,
2265 * while still allowing the absolute numbers to prevent one relation from
2266 * tying up all predicate lock resources.
2268 static int
2269 MaxPredicateChildLocks(const PREDICATELOCKTARGETTAG *tag)
2271 switch (GET_PREDICATELOCKTARGETTAG_TYPE(*tag))
2273 case PREDLOCKTAG_RELATION:
2274 return max_predicate_locks_per_relation < 0
2275 ? (max_predicate_locks_per_xact
2276 / (-max_predicate_locks_per_relation)) - 1
2277 : max_predicate_locks_per_relation;
2279 case PREDLOCKTAG_PAGE:
2280 return max_predicate_locks_per_page;
2282 case PREDLOCKTAG_TUPLE:
2285 * not reachable: nothing is finer-grained than a tuple, so we
2286 * should never try to promote to it.
2288 Assert(false);
2289 return 0;
2292 /* not reachable */
2293 Assert(false);
2294 return 0;
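/*
 * Worked example, assuming the shipped defaults at the time of writing
 * (max_pred_locks_per_transaction = 64, max_pred_locks_per_relation = -2,
 * max_pred_locks_per_page = 2): the relation-level limit computes to
 * 64 / 2 - 1 = 31, so a relation lock is taken once a transaction holds more
 * than 31 page/tuple locks under one relation, while more than 2 tuple locks
 * on a single page cause promotion to a page lock.
 */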
2298 * For all ancestors of a newly-acquired predicate lock, increment
2299 * their child count in the parent hash table. If any of them have
2300 * more descendants than their promotion threshold, acquire the
2301 * coarsest such lock.
2303 * Returns true if a parent lock was acquired and false otherwise.
2305 static bool
2306 CheckAndPromotePredicateLockRequest(const PREDICATELOCKTARGETTAG *reqtag)
2308 PREDICATELOCKTARGETTAG targettag,
2309 nexttag,
2310 promotiontag;
2311 LOCALPREDICATELOCK *parentlock;
2312 bool found,
2313 promote;
2315 promote = false;
2317 targettag = *reqtag;
2319 /* check parents iteratively */
2320 while (GetParentPredicateLockTag(&targettag, &nexttag))
2322 targettag = nexttag;
2323 parentlock = (LOCALPREDICATELOCK *) hash_search(LocalPredicateLockHash,
2324 &targettag,
2325 HASH_ENTER,
2326 &found);
2327 if (!found)
2329 parentlock->held = false;
2330 parentlock->childLocks = 1;
2332 else
2333 parentlock->childLocks++;
2335 if (parentlock->childLocks >
2336 MaxPredicateChildLocks(&targettag))
2339 * We should promote to this parent lock. Continue to check its
2340 * ancestors, however, both to get their child counts right and to
2341 * check whether we should just go ahead and promote to one of
2342 * them.
2344 promotiontag = targettag;
2345 promote = true;
2349 if (promote)
2351 /* acquire coarsest ancestor eligible for promotion */
2352 PredicateLockAcquire(&promotiontag);
2353 return true;
2355 else
2356 return false;
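/*
 * Example of the interplay with PredicateLockAcquire, using the default
 * max_pred_locks_per_page = 2: after a transaction predicate-locks two
 * tuples on a heap page, locking a third tuple on the same page pushes the
 * page's childLocks count past its limit, so this function reports that the
 * page tag should be acquired instead.  PredicateLockAcquire then takes the
 * page lock, and DeleteChildTargetLocks() discards the now-redundant tuple
 * locks.
 */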
2360 * When releasing a lock, decrement the child count on all ancestor
2361 * locks.
2363 * This is called only when releasing a lock via
2364 * DeleteChildTargetLocks (i.e. when a lock becomes redundant because
2365 * we've acquired its parent, possibly due to promotion) or when a new
2366 * MVCC write lock makes the predicate lock unnecessary. There's no
2367 * point in calling it when locks are released at transaction end, as
2368 * this information is no longer needed.
2370 static void
2371 DecrementParentLocks(const PREDICATELOCKTARGETTAG *targettag)
2373 PREDICATELOCKTARGETTAG parenttag,
2374 nexttag;
2376 parenttag = *targettag;
2378 while (GetParentPredicateLockTag(&parenttag, &nexttag))
2380 uint32 targettaghash;
2381 LOCALPREDICATELOCK *parentlock,
2382 *rmlock PG_USED_FOR_ASSERTS_ONLY;
2384 parenttag = nexttag;
2385 targettaghash = PredicateLockTargetTagHashCode(&parenttag);
2386 parentlock = (LOCALPREDICATELOCK *)
2387 hash_search_with_hash_value(LocalPredicateLockHash,
2388 &parenttag, targettaghash,
2389 HASH_FIND, NULL);
2392 * There's a small chance the parent lock doesn't exist in the lock
2393 * table. This can happen if we prematurely removed it because an
2394 * index split caused the child refcount to be off.
2396 if (parentlock == NULL)
2397 continue;
2399 parentlock->childLocks--;
2402 * Under similar circumstances the parent lock's refcount might be
2403 * zero. This only happens if we're holding that lock (otherwise we
2404 * would have removed the entry).
2406 if (parentlock->childLocks < 0)
2408 Assert(parentlock->held);
2409 parentlock->childLocks = 0;
2412 if ((parentlock->childLocks == 0) && (!parentlock->held))
2414 rmlock = (LOCALPREDICATELOCK *)
2415 hash_search_with_hash_value(LocalPredicateLockHash,
2416 &parenttag, targettaghash,
2417 HASH_REMOVE, NULL);
2418 Assert(rmlock == parentlock);
2424 * Indicate that a predicate lock on the given target is held by the
2425 * specified transaction. Has no effect if the lock is already held.
2427 * This updates the lock table and the sxact's lock list, and creates
2428 * the lock target if necessary, but does *not* do anything related to
2429 * granularity promotion or the local lock table. See
2430 * PredicateLockAcquire for that.
2432 static void
2433 CreatePredicateLock(const PREDICATELOCKTARGETTAG *targettag,
2434 uint32 targettaghash,
2435 SERIALIZABLEXACT *sxact)
2437 PREDICATELOCKTARGET *target;
2438 PREDICATELOCKTAG locktag;
2439 PREDICATELOCK *lock;
2440 LWLock *partitionLock;
2441 bool found;
2443 partitionLock = PredicateLockHashPartitionLock(targettaghash);
2445 LWLockAcquire(SerializablePredicateListLock, LW_SHARED);
2446 if (IsInParallelMode())
2447 LWLockAcquire(&sxact->perXactPredicateListLock, LW_EXCLUSIVE);
2448 LWLockAcquire(partitionLock, LW_EXCLUSIVE);
2450 /* Make sure that the target is represented. */
2451 target = (PREDICATELOCKTARGET *)
2452 hash_search_with_hash_value(PredicateLockTargetHash,
2453 targettag, targettaghash,
2454 HASH_ENTER_NULL, &found);
2455 if (!target)
2456 ereport(ERROR,
2457 (errcode(ERRCODE_OUT_OF_MEMORY),
2458 errmsg("out of shared memory"),
2459 errhint("You might need to increase %s.", "max_pred_locks_per_transaction")));
2460 if (!found)
2461 dlist_init(&target->predicateLocks);
2463 /* We've got the sxact and target, make sure they're joined. */
2464 locktag.myTarget = target;
2465 locktag.myXact = sxact;
2466 lock = (PREDICATELOCK *)
2467 hash_search_with_hash_value(PredicateLockHash, &locktag,
2468 PredicateLockHashCodeFromTargetHashCode(&locktag, targettaghash),
2469 HASH_ENTER_NULL, &found);
2470 if (!lock)
2471 ereport(ERROR,
2472 (errcode(ERRCODE_OUT_OF_MEMORY),
2473 errmsg("out of shared memory"),
2474 errhint("You might need to increase %s.", "max_pred_locks_per_transaction")));
2476 if (!found)
2478 dlist_push_tail(&target->predicateLocks, &lock->targetLink);
2479 dlist_push_tail(&sxact->predicateLocks, &lock->xactLink);
2480 lock->commitSeqNo = InvalidSerCommitSeqNo;
2483 LWLockRelease(partitionLock);
2484 if (IsInParallelMode())
2485 LWLockRelease(&sxact->perXactPredicateListLock);
2486 LWLockRelease(SerializablePredicateListLock);
2490 * Acquire a predicate lock on the specified target for the current
2491 * connection if not already held. This updates the local lock table
2492 * and uses it to implement granularity promotion. It will consolidate
2493 * multiple locks into a coarser lock if warranted, and will release
2494 * any finer-grained locks covered by the new one.
2496 static void
2497 PredicateLockAcquire(const PREDICATELOCKTARGETTAG *targettag)
2499 uint32 targettaghash;
2500 bool found;
2501 LOCALPREDICATELOCK *locallock;
2503 /* Do we have the lock already, or a covering lock? */
2504 if (PredicateLockExists(targettag))
2505 return;
2507 if (CoarserLockCovers(targettag))
2508 return;
2510 /* the same hash and LW lock apply to the lock target and the local lock. */
2511 targettaghash = PredicateLockTargetTagHashCode(targettag);
2513 /* Acquire lock in local table */
2514 locallock = (LOCALPREDICATELOCK *)
2515 hash_search_with_hash_value(LocalPredicateLockHash,
2516 targettag, targettaghash,
2517 HASH_ENTER, &found);
2518 locallock->held = true;
2519 if (!found)
2520 locallock->childLocks = 0;
2522 /* Actually create the lock */
2523 CreatePredicateLock(targettag, targettaghash, MySerializableXact);
2526 * Lock has been acquired. Check whether it should be promoted to a
2527 * coarser granularity, or whether there are finer-granularity locks to
2528 * clean up.
2530 if (CheckAndPromotePredicateLockRequest(targettag))
2533 * Lock request was promoted to a coarser-granularity lock, and that
2534 * lock was acquired. It will delete this lock and any of its
2535 * children, so we're done.
2538 else
2540 /* Clean up any finer-granularity locks */
2541 if (GET_PREDICATELOCKTARGETTAG_TYPE(*targettag) != PREDLOCKTAG_TUPLE)
2542 DeleteChildTargetLocks(targettag);
2548 * PredicateLockRelation
2550 * Gets a predicate lock at the relation level.
2551 * Skip if not in full serializable transaction isolation level.
2552 * Skip if this is a temporary table.
2553 * Clear any finer-grained predicate locks this session has on the relation.
2555 void
2556 PredicateLockRelation(Relation relation, Snapshot snapshot)
2558 PREDICATELOCKTARGETTAG tag;
2560 if (!SerializationNeededForRead(relation, snapshot))
2561 return;
2563 SET_PREDICATELOCKTARGETTAG_RELATION(tag,
2564 relation->rd_locator.dbOid,
2565 relation->rd_id);
2566 PredicateLockAcquire(&tag);
2570 * PredicateLockPage
2572 * Gets a predicate lock at the page level.
2573 * Skip if not in full serializable transaction isolation level.
2574 * Skip if this is a temporary table.
2575 * Skip if a coarser predicate lock already covers this page.
2576 * Clear any finer-grained predicate locks this session has on the relation.
2578 void
2579 PredicateLockPage(Relation relation, BlockNumber blkno, Snapshot snapshot)
2581 PREDICATELOCKTARGETTAG tag;
2583 if (!SerializationNeededForRead(relation, snapshot))
2584 return;
2586 SET_PREDICATELOCKTARGETTAG_PAGE(tag,
2587 relation->rd_locator.dbOid,
2588 relation->rd_id,
2589 blkno);
2590 PredicateLockAcquire(&tag);
2594 * PredicateLockTID
2596 * Gets a predicate lock at the tuple level.
2597 * Skip if not in full serializable transaction isolation level.
2598 * Skip if this is a temporary table.
2600 void
2601 PredicateLockTID(Relation relation, ItemPointer tid, Snapshot snapshot,
2602 TransactionId tuple_xid)
2604 PREDICATELOCKTARGETTAG tag;
2606 if (!SerializationNeededForRead(relation, snapshot))
2607 return;
2610 * Return if this xact wrote it.
2612 if (relation->rd_index == NULL)
2614 /* If we wrote it, we already have a write lock. */
2615 if (TransactionIdIsCurrentTransactionId(tuple_xid))
2616 return;
2620 * Do quick-but-not-definitive test for a relation lock first. This will
2621 * never cause a return when the relation is *not* locked, but will
2622 * occasionally let the check continue when there really *is* a relation
2623 * level lock.
2625 SET_PREDICATELOCKTARGETTAG_RELATION(tag,
2626 relation->rd_locator.dbOid,
2627 relation->rd_id);
2628 if (PredicateLockExists(&tag))
2629 return;
2631 SET_PREDICATELOCKTARGETTAG_TUPLE(tag,
2632 relation->rd_locator.dbOid,
2633 relation->rd_id,
2634 ItemPointerGetBlockNumber(tid),
2635 ItemPointerGetOffsetNumber(tid));
2636 PredicateLockAcquire(&tag);
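/*
 * Typical call sites (a sketch; the callers live outside this file): a
 * sequential scan takes a single relation-level SIREAD lock with
 * PredicateLockRelation(); an index scan locks the index pages it reads with
 * PredicateLockPage() and the heap tuples it returns with PredicateLockTID(),
 * e.g. (hypothetical variables):
 *
 *     PredicateLockTID(heapRel, &tuple->t_self, snapshot,
 *                      HeapTupleHeaderGetXmin(tuple->t_data));
 */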
2641 * DeleteLockTarget
2643 * Remove a predicate lock target along with any locks held for it.
2645 * Caller must hold SerializablePredicateListLock and the
2646 * appropriate hash partition lock for the target.
2648 static void
2649 DeleteLockTarget(PREDICATELOCKTARGET *target, uint32 targettaghash)
2651 dlist_mutable_iter iter;
2653 Assert(LWLockHeldByMeInMode(SerializablePredicateListLock,
2654 LW_EXCLUSIVE));
2655 Assert(LWLockHeldByMe(PredicateLockHashPartitionLock(targettaghash)));
2657 LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
2659 dlist_foreach_modify(iter, &target->predicateLocks)
2661 PREDICATELOCK *predlock =
2662 dlist_container(PREDICATELOCK, targetLink, iter.cur);
2663 bool found;
2665 dlist_delete(&(predlock->xactLink));
2666 dlist_delete(&(predlock->targetLink));
2668 hash_search_with_hash_value
2669 (PredicateLockHash,
2670 &predlock->tag,
2671 PredicateLockHashCodeFromTargetHashCode(&predlock->tag,
2672 targettaghash),
2673 HASH_REMOVE, &found);
2674 Assert(found);
2676 LWLockRelease(SerializableXactHashLock);
2678 /* Remove the target itself, if possible. */
2679 RemoveTargetIfNoLongerUsed(target, targettaghash);
2684 * TransferPredicateLocksToNewTarget
2686 * Move or copy all the predicate locks for a lock target, for use by
2687 * index page splits/combines and other things that create or replace
2688 * lock targets. If 'removeOld' is true, the old locks and the target
2689 * will be removed.
2691 * Returns true on success, or false if we ran out of shared memory to
2692 * allocate the new target or locks. Guaranteed to always succeed if
2693 * removeOld is set (by using the scratch entry in PredicateLockTargetHash
2694 * for scratch space).
2696 * Warning: the "removeOld" option should be used only with care,
2697 * because this function does not (indeed, can not) update other
2698 * backends' LocalPredicateLockHash. If we are only adding new
2699 * entries, this is not a problem: the local lock table is used only
2700 * as a hint, so missing entries for locks that are held are
2701 * OK. Having entries for locks that are no longer held, as can happen
2702 * when using "removeOld", is not in general OK. We can only use it
2703 * safely when replacing a lock with a coarser-granularity lock that
2704 * covers it, or if we are absolutely certain that no one will need to
2705 * refer to that lock in the future.
2707 * Caller must hold SerializablePredicateListLock exclusively.
2709 static bool
2710 TransferPredicateLocksToNewTarget(PREDICATELOCKTARGETTAG oldtargettag,
2711 PREDICATELOCKTARGETTAG newtargettag,
2712 bool removeOld)
2714 uint32 oldtargettaghash;
2715 LWLock *oldpartitionLock;
2716 PREDICATELOCKTARGET *oldtarget;
2717 uint32 newtargettaghash;
2718 LWLock *newpartitionLock;
2719 bool found;
2720 bool outOfShmem = false;
2722 Assert(LWLockHeldByMeInMode(SerializablePredicateListLock,
2723 LW_EXCLUSIVE));
2725 oldtargettaghash = PredicateLockTargetTagHashCode(&oldtargettag);
2726 newtargettaghash = PredicateLockTargetTagHashCode(&newtargettag);
2727 oldpartitionLock = PredicateLockHashPartitionLock(oldtargettaghash);
2728 newpartitionLock = PredicateLockHashPartitionLock(newtargettaghash);
2730 if (removeOld)
2733 * Remove the dummy entry to give us scratch space, so we know we'll
2734 * be able to create the new lock target.
2736 RemoveScratchTarget(false);
2740 * We must get the partition locks in ascending sequence to avoid
2741 * deadlocks. If old and new partitions are the same, we must request the
2742 * lock only once.
2744 if (oldpartitionLock < newpartitionLock)
2746 LWLockAcquire(oldpartitionLock,
2747 (removeOld ? LW_EXCLUSIVE : LW_SHARED));
2748 LWLockAcquire(newpartitionLock, LW_EXCLUSIVE);
2750 else if (oldpartitionLock > newpartitionLock)
2752 LWLockAcquire(newpartitionLock, LW_EXCLUSIVE);
2753 LWLockAcquire(oldpartitionLock,
2754 (removeOld ? LW_EXCLUSIVE : LW_SHARED));
2756 else
2757 LWLockAcquire(newpartitionLock, LW_EXCLUSIVE);
2760 * Look for the old target. If not found, that's OK; no predicate locks
2761 * are affected, so we can just clean up and return. If it does exist,
2762 * walk its list of predicate locks and move or copy them to the new
2763 * target.
2765 oldtarget = hash_search_with_hash_value(PredicateLockTargetHash,
2766 &oldtargettag,
2767 oldtargettaghash,
2768 HASH_FIND, NULL);
2770 if (oldtarget)
2772 PREDICATELOCKTARGET *newtarget;
2773 PREDICATELOCKTAG newpredlocktag;
2774 dlist_mutable_iter iter;
2776 newtarget = hash_search_with_hash_value(PredicateLockTargetHash,
2777 &newtargettag,
2778 newtargettaghash,
2779 HASH_ENTER_NULL, &found);
2781 if (!newtarget)
2783 /* Failed to allocate due to insufficient shmem */
2784 outOfShmem = true;
2785 goto exit;
2788 /* If we created a new entry, initialize it */
2789 if (!found)
2790 dlist_init(&newtarget->predicateLocks);
2792 newpredlocktag.myTarget = newtarget;
2795 * Loop through all the locks on the old target, replacing them with
2796 * locks on the new target.
2798 LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
2800 dlist_foreach_modify(iter, &oldtarget->predicateLocks)
2802 PREDICATELOCK *oldpredlock =
2803 dlist_container(PREDICATELOCK, targetLink, iter.cur);
2804 PREDICATELOCK *newpredlock;
2805 SerCommitSeqNo oldCommitSeqNo = oldpredlock->commitSeqNo;
2807 newpredlocktag.myXact = oldpredlock->tag.myXact;
2809 if (removeOld)
2811 dlist_delete(&(oldpredlock->xactLink));
2812 dlist_delete(&(oldpredlock->targetLink));
2814 hash_search_with_hash_value
2815 (PredicateLockHash,
2816 &oldpredlock->tag,
2817 PredicateLockHashCodeFromTargetHashCode(&oldpredlock->tag,
2818 oldtargettaghash),
2819 HASH_REMOVE, &found);
2820 Assert(found);
2823 newpredlock = (PREDICATELOCK *)
2824 hash_search_with_hash_value(PredicateLockHash,
2825 &newpredlocktag,
2826 PredicateLockHashCodeFromTargetHashCode(&newpredlocktag,
2827 newtargettaghash),
2828 HASH_ENTER_NULL,
2829 &found);
2830 if (!newpredlock)
2832 /* Out of shared memory. Undo what we've done so far. */
2833 LWLockRelease(SerializableXactHashLock);
2834 DeleteLockTarget(newtarget, newtargettaghash);
2835 outOfShmem = true;
2836 goto exit;
2838 if (!found)
2840 dlist_push_tail(&(newtarget->predicateLocks),
2841 &(newpredlock->targetLink));
2842 dlist_push_tail(&(newpredlocktag.myXact->predicateLocks),
2843 &(newpredlock->xactLink));
2844 newpredlock->commitSeqNo = oldCommitSeqNo;
2846 else
2848 if (newpredlock->commitSeqNo < oldCommitSeqNo)
2849 newpredlock->commitSeqNo = oldCommitSeqNo;
2852 Assert(newpredlock->commitSeqNo != 0);
2853 Assert((newpredlock->commitSeqNo == InvalidSerCommitSeqNo)
2854 || (newpredlock->tag.myXact == OldCommittedSxact));
2856 LWLockRelease(SerializableXactHashLock);
2858 if (removeOld)
2860 Assert(dlist_is_empty(&oldtarget->predicateLocks));
2861 RemoveTargetIfNoLongerUsed(oldtarget, oldtargettaghash);
2866 exit:
2867 /* Release partition locks in reverse order of acquisition. */
2868 if (oldpartitionLock < newpartitionLock)
2870 LWLockRelease(newpartitionLock);
2871 LWLockRelease(oldpartitionLock);
2873 else if (oldpartitionLock > newpartitionLock)
2875 LWLockRelease(oldpartitionLock);
2876 LWLockRelease(newpartitionLock);
2878 else
2879 LWLockRelease(newpartitionLock);
2881 if (removeOld)
2883 /* We shouldn't run out of memory if we're moving locks */
2884 Assert(!outOfShmem);
2886 /* Put the scratch entry back */
2887 RestoreScratchTarget(false);
2890 return !outOfShmem;
2894 * Drop all predicate locks of any granularity from the specified relation,
2895 * which can be a heap relation or an index relation. If 'transfer' is true,
2896 * acquire a relation lock on the heap for any transactions with any lock(s)
2897 * on the specified relation.
2899 * This requires grabbing a lot of LW locks and scanning the entire lock
2900 * target table for matches. That makes this more expensive than most
2901 * predicate lock management functions, but it will only be called for DDL
2902 * type commands that are expensive anyway, and there are fast returns when
2903 * no serializable transactions are active or the relation is temporary.
2905 * We don't use the TransferPredicateLocksToNewTarget function because it
2906 * acquires its own locks on the partitions of the two targets involved,
2907 * and we'll already be holding all partition locks.
2909 * We can't throw an error from here, because the call could be from a
2910 * transaction which is not serializable.
2912 * NOTE: This is currently only called with transfer set to true, but that may
2913 * change. If we decide to clean up the locks from a table on commit of a
2914 * transaction which executed DROP TABLE, the false condition will be useful.
2916 static void
2917 DropAllPredicateLocksFromTable(Relation relation, bool transfer)
2919 HASH_SEQ_STATUS seqstat;
2920 PREDICATELOCKTARGET *oldtarget;
2921 PREDICATELOCKTARGET *heaptarget;
2922 Oid dbId;
2923 Oid relId;
2924 Oid heapId;
2925 int i;
2926 bool isIndex;
2927 bool found;
2928 uint32 heaptargettaghash;
2931 * Bail out quickly if there are no serializable transactions running.
2932 * It's safe to check this without taking locks because the caller is
2933 * holding an ACCESS EXCLUSIVE lock on the relation. No new locks which
2934 * would matter here can be acquired while that is held.
2936 if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
2937 return;
2939 if (!PredicateLockingNeededForRelation(relation))
2940 return;
2942 dbId = relation->rd_locator.dbOid;
2943 relId = relation->rd_id;
2944 if (relation->rd_index == NULL)
2946 isIndex = false;
2947 heapId = relId;
2949 else
2951 isIndex = true;
2952 heapId = relation->rd_index->indrelid;
2954 Assert(heapId != InvalidOid);
2955 Assert(transfer || !isIndex); /* index OID only makes sense with
2956 * transfer */
2958 /* Retrieve first time needed, then keep. */
2959 heaptargettaghash = 0;
2960 heaptarget = NULL;
2962 /* Acquire locks on all lock partitions */
2963 LWLockAcquire(SerializablePredicateListLock, LW_EXCLUSIVE);
2964 for (i = 0; i < NUM_PREDICATELOCK_PARTITIONS; i++)
2965 LWLockAcquire(PredicateLockHashPartitionLockByIndex(i), LW_EXCLUSIVE);
2966 LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
2969 * Remove the dummy entry to give us scratch space, so we know we'll be
2970 * able to create the new lock target.
2972 if (transfer)
2973 RemoveScratchTarget(true);
2975 /* Scan through target map */
2976 hash_seq_init(&seqstat, PredicateLockTargetHash);
2978 while ((oldtarget = (PREDICATELOCKTARGET *) hash_seq_search(&seqstat)))
2980 dlist_mutable_iter iter;
2983 * Check whether this is a target which needs attention.
2985 if (GET_PREDICATELOCKTARGETTAG_RELATION(oldtarget->tag) != relId)
2986 continue; /* wrong relation id */
2987 if (GET_PREDICATELOCKTARGETTAG_DB(oldtarget->tag) != dbId)
2988 continue; /* wrong database id */
2989 if (transfer && !isIndex
2990 && GET_PREDICATELOCKTARGETTAG_TYPE(oldtarget->tag) == PREDLOCKTAG_RELATION)
2991 continue; /* already the right lock */
2994 * If we made it here, we have work to do. We make sure the heap
2995 * relation lock exists, then we walk the list of predicate locks for
2996 * the old target we found, moving all locks to the heap relation lock
2997 * -- unless they already hold that.
3001 * First make sure we have the heap relation target. We only need to
3002 * do this once.
3004 if (transfer && heaptarget == NULL)
3006 PREDICATELOCKTARGETTAG heaptargettag;
3008 SET_PREDICATELOCKTARGETTAG_RELATION(heaptargettag, dbId, heapId);
3009 heaptargettaghash = PredicateLockTargetTagHashCode(&heaptargettag);
3010 heaptarget = hash_search_with_hash_value(PredicateLockTargetHash,
3011 &heaptargettag,
3012 heaptargettaghash,
3013 HASH_ENTER, &found);
3014 if (!found)
3015 dlist_init(&heaptarget->predicateLocks);
3019 * Loop through all the locks on the old target, replacing them with
3020 * locks on the new target.
3022 dlist_foreach_modify(iter, &oldtarget->predicateLocks)
3024 PREDICATELOCK *oldpredlock =
3025 dlist_container(PREDICATELOCK, targetLink, iter.cur);
3026 PREDICATELOCK *newpredlock;
3027 SerCommitSeqNo oldCommitSeqNo;
3028 SERIALIZABLEXACT *oldXact;
3031 * Remove the old lock first. This avoids the chance of running
3032 * out of lock structure entries for the hash table.
3034 oldCommitSeqNo = oldpredlock->commitSeqNo;
3035 oldXact = oldpredlock->tag.myXact;
3037 dlist_delete(&(oldpredlock->xactLink));
3040 * No need for retail delete from oldtarget list, we're removing
3041 * the whole target anyway.
3043 hash_search(PredicateLockHash,
3044 &oldpredlock->tag,
3045 HASH_REMOVE, &found);
3046 Assert(found);
3048 if (transfer)
3050 PREDICATELOCKTAG newpredlocktag;
3052 newpredlocktag.myTarget = heaptarget;
3053 newpredlocktag.myXact = oldXact;
3054 newpredlock = (PREDICATELOCK *)
3055 hash_search_with_hash_value(PredicateLockHash,
3056 &newpredlocktag,
3057 PredicateLockHashCodeFromTargetHashCode(&newpredlocktag,
3058 heaptargettaghash),
3059 HASH_ENTER,
3060 &found);
3061 if (!found)
3063 dlist_push_tail(&(heaptarget->predicateLocks),
3064 &(newpredlock->targetLink));
3065 dlist_push_tail(&(newpredlocktag.myXact->predicateLocks),
3066 &(newpredlock->xactLink));
3067 newpredlock->commitSeqNo = oldCommitSeqNo;
3069 else
3071 if (newpredlock->commitSeqNo < oldCommitSeqNo)
3072 newpredlock->commitSeqNo = oldCommitSeqNo;
3075 Assert(newpredlock->commitSeqNo != 0);
3076 Assert((newpredlock->commitSeqNo == InvalidSerCommitSeqNo)
3077 || (newpredlock->tag.myXact == OldCommittedSxact));
3081 hash_search(PredicateLockTargetHash, &oldtarget->tag, HASH_REMOVE,
3082 &found);
3083 Assert(found);
3086 /* Put the scratch entry back */
3087 if (transfer)
3088 RestoreScratchTarget(true);
3090 /* Release locks in reverse order */
3091 LWLockRelease(SerializableXactHashLock);
3092 for (i = NUM_PREDICATELOCK_PARTITIONS - 1; i >= 0; i--)
3093 LWLockRelease(PredicateLockHashPartitionLockByIndex(i));
3094 LWLockRelease(SerializablePredicateListLock);
3098 * TransferPredicateLocksToHeapRelation
3099 * For all transactions, transfer all predicate locks for the given
3100 * relation to a single relation lock on the heap.
3102 void
3103 TransferPredicateLocksToHeapRelation(Relation relation)
3105 DropAllPredicateLocksFromTable(relation, true);
3110 * PredicateLockPageSplit
3112 * Copies any predicate locks for the old page to the new page.
3113 * Skip if this is a temporary table or toast table.
3115 * NOTE: A page split (or overflow) affects all serializable transactions,
3116 * even if it occurs in the context of another transaction isolation level.
3118 * NOTE: This currently leaves the local copy of the locks without
3119 * information on the new lock which is in shared memory. This could cause
3120 * problems if enough page splits occur on locked pages without the processes
3121 * which hold the locks getting in and noticing.
3123 void
3124 PredicateLockPageSplit(Relation relation, BlockNumber oldblkno,
3125 BlockNumber newblkno)
3127 PREDICATELOCKTARGETTAG oldtargettag;
3128 PREDICATELOCKTARGETTAG newtargettag;
3129 bool success;
3132 * Bail out quickly if there are no serializable transactions running.
3134 * It's safe to do this check without taking any additional locks. Even if
3135 * a serializable transaction starts concurrently, we know it can't take
3136 * any SIREAD locks on the page being split because the caller is holding
3137 * the associated buffer page lock. Memory reordering isn't an issue; the
3138 * memory barrier in the LWLock acquisition guarantees that this read
3139 * occurs while the buffer page lock is held.
3141 if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
3142 return;
3144 if (!PredicateLockingNeededForRelation(relation))
3145 return;
3147 Assert(oldblkno != newblkno);
3148 Assert(BlockNumberIsValid(oldblkno));
3149 Assert(BlockNumberIsValid(newblkno));
3151 SET_PREDICATELOCKTARGETTAG_PAGE(oldtargettag,
3152 relation->rd_locator.dbOid,
3153 relation->rd_id,
3154 oldblkno);
3155 SET_PREDICATELOCKTARGETTAG_PAGE(newtargettag,
3156 relation->rd_locator.dbOid,
3157 relation->rd_id,
3158 newblkno);
3160 LWLockAcquire(SerializablePredicateListLock, LW_EXCLUSIVE);
3163 * Try copying the locks over to the new page's tag, creating it if
3164 * necessary.
3166 success = TransferPredicateLocksToNewTarget(oldtargettag,
3167 newtargettag,
3168 false);
3170 if (!success)
3173 * No more predicate lock entries are available. Failure isn't an
3174 * option here, so promote the page lock to a relation lock.
3177 /* Get the parent relation lock's lock tag */
3178 success = GetParentPredicateLockTag(&oldtargettag,
3179 &newtargettag);
3180 Assert(success);
3183 * Move the locks to the parent. This shouldn't fail.
3185 * Note that here we are removing locks held by other backends,
3186 * leading to a possible inconsistency in their local lock hash table.
3187 * This is OK because we're replacing it with a lock that covers the
3188 * old one.
3190 success = TransferPredicateLocksToNewTarget(oldtargettag,
3191 newtargettag,
3192 true);
3193 Assert(success);
3196 LWLockRelease(SerializablePredicateListLock);
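/*
 * The callers are index AMs splitting a page (for example the btree and GiST
 * split code) while they still hold exclusive buffer locks on both halves,
 * which is what makes the quick SxactGlobalXmin test above safe.
 */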
3200 * PredicateLockPageCombine
3202 * Combines predicate locks for two existing pages.
3203 * Skip if this is a temporary table or toast table.
3205 * NOTE: A page combine affects all serializable transactions, even if it
3206 * occurs in the context of another transaction isolation level.
3208 void
3209 PredicateLockPageCombine(Relation relation, BlockNumber oldblkno,
3210 BlockNumber newblkno)
3213 * Page combines differ from page splits in that we ought to be able to
3214 * remove the locks on the old page after transferring them to the new
3215 * page, instead of duplicating them. However, because we can't edit other
3216 * backends' local lock tables, removing the old lock would leave them
3217 * with an entry in their LocalPredicateLockHash for a lock they're not
3218 * holding, which isn't acceptable. So we wind up having to do the same
3219 * work as a page split, acquiring a lock on the new page and keeping the
3220 * old page locked too. That can lead to some false positives, but should
3221 * be rare in practice.
3223 PredicateLockPageSplit(relation, oldblkno, newblkno);
3227 * Walk the list of in-progress serializable transactions and find the new
3228 * xmin.
3230 static void
3231 SetNewSxactGlobalXmin(void)
3233 dlist_iter iter;
3235 Assert(LWLockHeldByMe(SerializableXactHashLock));
3237 PredXact->SxactGlobalXmin = InvalidTransactionId;
3238 PredXact->SxactGlobalXminCount = 0;
3240 dlist_foreach(iter, &PredXact->activeList)
3242 SERIALIZABLEXACT *sxact =
3243 dlist_container(SERIALIZABLEXACT, xactLink, iter.cur);
3245 if (!SxactIsRolledBack(sxact)
3246 && !SxactIsCommitted(sxact)
3247 && sxact != OldCommittedSxact)
3249 Assert(sxact->xmin != InvalidTransactionId);
3250 if (!TransactionIdIsValid(PredXact->SxactGlobalXmin)
3251 || TransactionIdPrecedes(sxact->xmin,
3252 PredXact->SxactGlobalXmin))
3254 PredXact->SxactGlobalXmin = sxact->xmin;
3255 PredXact->SxactGlobalXminCount = 1;
3257 else if (TransactionIdEquals(sxact->xmin,
3258 PredXact->SxactGlobalXmin))
3259 PredXact->SxactGlobalXminCount++;
3263 SerialSetActiveSerXmin(PredXact->SxactGlobalXmin);
3267 * ReleasePredicateLocks
3269 * Releases predicate locks based on completion of the current transaction,
3270 * whether committed or rolled back. It can also be called for a read only
3271 * transaction when it becomes impossible for the transaction to become
3272 * part of a dangerous structure.
3274 * We do nothing unless this is a serializable transaction.
3276 * This method must ensure that shared memory hash tables are cleaned
3277 * up in some relatively timely fashion.
3279 * If this transaction is committing and is holding any predicate locks,
3280 * it must be added to a list of completed serializable transactions still
3281 * holding locks.
3283 * If isReadOnlySafe is true, then predicate locks are being released before
3284 * the end of the transaction because MySerializableXact has been determined
3285 * to be RO_SAFE. In non-parallel mode we can release it completely, but
3286 * in parallel mode we partially release the SERIALIZABLEXACT and keep it
3287 * around until the end of the transaction, allowing each backend to clear its
3288 * MySerializableXact variable and benefit from the optimization in its own
3289 * time.
3291 void
3292 ReleasePredicateLocks(bool isCommit, bool isReadOnlySafe)
3294 bool partiallyReleasing = false;
3295 bool needToClear;
3296 SERIALIZABLEXACT *roXact;
3297 dlist_mutable_iter iter;
3300 * We can't trust XactReadOnly here, because a transaction which started
3301 * as READ WRITE can show as READ ONLY later, e.g., within
3302 * subtransactions. We want to flag a transaction as READ ONLY if it
3303 * commits without writing, so that de facto READ ONLY transactions get
3304 * the benefit of some RO optimizations. Therefore we use this local
3305 * variable for the cleanup logic that depends on whether the transaction
3306 * was declared READ ONLY at the top level.
3308 bool topLevelIsDeclaredReadOnly;
3310 /* We can't be both committing and releasing early due to RO_SAFE. */
3311 Assert(!(isCommit && isReadOnlySafe));
3313 /* Are we at the end of a transaction, that is, a commit or abort? */
3314 if (!isReadOnlySafe)
3317 * Parallel workers mustn't release predicate locks at the end of
3318 * their transaction. The leader will do that at the end of its
3319 * transaction.
3321 if (IsParallelWorker())
3323 ReleasePredicateLocksLocal();
3324 return;
3328 * By the time the leader in a parallel query reaches end of
3329 * transaction, it has waited for all workers to exit.
3331 Assert(!ParallelContextActive());
3334 * If the leader in a parallel query earlier stashed a partially
3335 * released SERIALIZABLEXACT for final clean-up at end of transaction
3336 * (because workers might still have been accessing it), then it's
3337 * time to restore it.
3339 if (SavedSerializableXact != InvalidSerializableXact)
3341 Assert(MySerializableXact == InvalidSerializableXact);
3342 MySerializableXact = SavedSerializableXact;
3343 SavedSerializableXact = InvalidSerializableXact;
3344 Assert(SxactIsPartiallyReleased(MySerializableXact));
3348 if (MySerializableXact == InvalidSerializableXact)
3350 Assert(LocalPredicateLockHash == NULL);
3351 return;
3354 LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
3357 * If the transaction is committing, but it has been partially released
3358 * already, treat this as a rollback; it was already marked as rolled back.
3360 if (isCommit && SxactIsPartiallyReleased(MySerializableXact))
3361 isCommit = false;
3364 * If we're called in the middle of a transaction because we discovered
3365 * that the SXACT_FLAG_RO_SAFE flag was set, then we'll partially release
3366 * it (that is, release the predicate locks and conflicts, but not the
3367 * SERIALIZABLEXACT itself) if we're the first backend to have noticed.
3369 if (isReadOnlySafe && IsInParallelMode())
3372 * The leader needs to stash a pointer to it, so that it can
3373 * completely release it at end-of-transaction.
3375 if (!IsParallelWorker())
3376 SavedSerializableXact = MySerializableXact;
3379 * The first backend to reach this condition will partially release
3380 * the SERIALIZABLEXACT. All others will just clear their
3381 * backend-local state so that they stop doing SSI checks for the rest
3382 * of the transaction.
3384 if (SxactIsPartiallyReleased(MySerializableXact))
3386 LWLockRelease(SerializableXactHashLock);
3387 ReleasePredicateLocksLocal();
3388 return;
3390 else
3392 MySerializableXact->flags |= SXACT_FLAG_PARTIALLY_RELEASED;
3393 partiallyReleasing = true;
3394 /* ... and proceed to perform the partial release below. */
3397 Assert(!isCommit || SxactIsPrepared(MySerializableXact));
3398 Assert(!isCommit || !SxactIsDoomed(MySerializableXact));
3399 Assert(!SxactIsCommitted(MySerializableXact));
3400 Assert(SxactIsPartiallyReleased(MySerializableXact)
3401 || !SxactIsRolledBack(MySerializableXact));
3403 /* may not be serializable during COMMIT/ROLLBACK PREPARED */
3404 Assert(MySerializableXact->pid == 0 || IsolationIsSerializable());
3406 /* We'd better not already be on the cleanup list. */
3407 Assert(!SxactIsOnFinishedList(MySerializableXact));
3409 topLevelIsDeclaredReadOnly = SxactIsReadOnly(MySerializableXact);
3412 * We don't hold XidGenLock here, assuming that TransactionId is
3413 * atomic!
3415 * If this value is changing, we don't care that much whether we get the
3416 * old or new value -- it is just used to determine how far
3417 * SxactGlobalXmin must advance before this transaction can be fully
3418 * cleaned up. The worst that could happen is we wait for one more
3419 * transaction to complete before freeing some RAM; correctness of visible
3420 * behavior is not affected.
3422 MySerializableXact->finishedBefore = XidFromFullTransactionId(TransamVariables->nextXid);
3425 * If it's not a commit it's either a rollback or a read-only transaction
3426 * flagged SXACT_FLAG_RO_SAFE, and we can clear our locks immediately.
3428 if (isCommit)
3430 MySerializableXact->flags |= SXACT_FLAG_COMMITTED;
3431 MySerializableXact->commitSeqNo = ++(PredXact->LastSxactCommitSeqNo);
3432 /* Recognize implicit read-only transaction (commit without write). */
3433 if (!MyXactDidWrite)
3434 MySerializableXact->flags |= SXACT_FLAG_READ_ONLY;
3436 else
3439 * The DOOMED flag indicates that we intend to roll back this
3440 * transaction and so it should not cause serialization failures for
3441 * other transactions that conflict with it. Note that this flag might
3442 * already be set, if another backend marked this transaction for
3443 * abort.
3445 * The ROLLED_BACK flag further indicates that ReleasePredicateLocks
3446 * has been called, and so the SerializableXact is eligible for
3447 * cleanup. This means it should not be considered when calculating
3448 * SxactGlobalXmin.
3450 MySerializableXact->flags |= SXACT_FLAG_DOOMED;
3451 MySerializableXact->flags |= SXACT_FLAG_ROLLED_BACK;
3454 * If the transaction was previously prepared, but is now failing due
3455 * to a ROLLBACK PREPARED or (hopefully very rare) error after the
3456 * prepare, clear the prepared flag. This simplifies conflict
3457 * checking.
3459 MySerializableXact->flags &= ~SXACT_FLAG_PREPARED;
3462 if (!topLevelIsDeclaredReadOnly)
3464 Assert(PredXact->WritableSxactCount > 0);
3465 if (--(PredXact->WritableSxactCount) == 0)
3468 * Release predicate locks and rw-conflicts in for all committed
3469 * transactions. There are no longer any transactions which might
3470 * conflict with the locks and no chance for new transactions to
3471 * overlap. Similarly, existing conflicts in can't cause pivots,
3472 * and any conflicts in which could have completed a dangerous
3473 * structure would already have caused a rollback, so any
3474 * remaining ones must be benign.
3476 PredXact->CanPartialClearThrough = PredXact->LastSxactCommitSeqNo;
3479 else
3482 * Read-only transactions: clear the list of transactions that might
3483 * make us unsafe. Note that we use 'inLink' for the iteration as
3484 * opposed to 'outLink' for the r/w xacts.
3486 dlist_foreach_modify(iter, &MySerializableXact->possibleUnsafeConflicts)
3488 RWConflict possibleUnsafeConflict =
3489 dlist_container(RWConflictData, inLink, iter.cur);
3491 Assert(!SxactIsReadOnly(possibleUnsafeConflict->sxactOut));
3492 Assert(MySerializableXact == possibleUnsafeConflict->sxactIn);
3494 ReleaseRWConflict(possibleUnsafeConflict);
3498 /* Check for conflict out to old committed transactions. */
3499 if (isCommit
3500 && !SxactIsReadOnly(MySerializableXact)
3501 && SxactHasSummaryConflictOut(MySerializableXact))
3504 * we don't know which old committed transaction we conflicted with,
3505 * so be conservative and use FirstNormalSerCommitSeqNo here
3507 MySerializableXact->SeqNo.earliestOutConflictCommit =
3508 FirstNormalSerCommitSeqNo;
3509 MySerializableXact->flags |= SXACT_FLAG_CONFLICT_OUT;
3513 * Release all outConflicts to committed transactions. If we're rolling
3514 * back clear them all. Set SXACT_FLAG_CONFLICT_OUT if any point to
3515 * previously committed transactions.
3517 dlist_foreach_modify(iter, &MySerializableXact->outConflicts)
3519 RWConflict conflict =
3520 dlist_container(RWConflictData, outLink, iter.cur);
3522 if (isCommit
3523 && !SxactIsReadOnly(MySerializableXact)
3524 && SxactIsCommitted(conflict->sxactIn))
3526 if ((MySerializableXact->flags & SXACT_FLAG_CONFLICT_OUT) == 0
3527 || conflict->sxactIn->prepareSeqNo < MySerializableXact->SeqNo.earliestOutConflictCommit)
3528 MySerializableXact->SeqNo.earliestOutConflictCommit = conflict->sxactIn->prepareSeqNo;
3529 MySerializableXact->flags |= SXACT_FLAG_CONFLICT_OUT;
3532 if (!isCommit
3533 || SxactIsCommitted(conflict->sxactIn)
3534 || (conflict->sxactIn->SeqNo.lastCommitBeforeSnapshot >= PredXact->LastSxactCommitSeqNo))
3535 ReleaseRWConflict(conflict);
3539 * Release all inConflicts from committed and read-only transactions. If
3540 * we're rolling back, clear them all.
3542 dlist_foreach_modify(iter, &MySerializableXact->inConflicts)
3544 RWConflict conflict =
3545 dlist_container(RWConflictData, inLink, iter.cur);
3547 if (!isCommit
3548 || SxactIsCommitted(conflict->sxactOut)
3549 || SxactIsReadOnly(conflict->sxactOut))
3550 ReleaseRWConflict(conflict);
3553 if (!topLevelIsDeclaredReadOnly)
3556 * Remove ourselves from the list of possible conflicts for concurrent
3557 * READ ONLY transactions, flagging them as unsafe if we have a
3558 * conflict out. If any are waiting DEFERRABLE transactions, wake them
3559 * up if they are known safe or known unsafe.
3561 dlist_foreach_modify(iter, &MySerializableXact->possibleUnsafeConflicts)
3563 RWConflict possibleUnsafeConflict =
3564 dlist_container(RWConflictData, outLink, iter.cur);
3566 roXact = possibleUnsafeConflict->sxactIn;
3567 Assert(MySerializableXact == possibleUnsafeConflict->sxactOut);
3568 Assert(SxactIsReadOnly(roXact));
3570 /* Mark conflicted if necessary. */
3571 if (isCommit
3572 && MyXactDidWrite
3573 && SxactHasConflictOut(MySerializableXact)
3574 && (MySerializableXact->SeqNo.earliestOutConflictCommit
3575 <= roXact->SeqNo.lastCommitBeforeSnapshot))
3578 * This releases possibleUnsafeConflict (as well as all other
3579 * possible conflicts for roXact)
3581 FlagSxactUnsafe(roXact);
3583 else
3585 ReleaseRWConflict(possibleUnsafeConflict);
3588 * If we were the last possible conflict, flag it safe. The
3589 * transaction can now safely release its predicate locks (but
3590 * that transaction's backend has to do that itself).
3592 if (dlist_is_empty(&roXact->possibleUnsafeConflicts))
3593 roXact->flags |= SXACT_FLAG_RO_SAFE;
3597 * Wake up the process for a waiting DEFERRABLE transaction if we
3598 * now know it's either safe or conflicted.
3600 if (SxactIsDeferrableWaiting(roXact) &&
3601 (SxactIsROUnsafe(roXact) || SxactIsROSafe(roXact)))
3602 ProcSendSignal(roXact->pgprocno);
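/*
 * The waiting side of this handshake is GetSafeSnapshot(), where a READ
 * ONLY DEFERRABLE transaction sleeps until its sxact has been flagged
 * either RO_SAFE or RO_UNSAFE.
 */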
3607 * Check whether it's time to clean up old transactions. This can only be
3608 * done when the last serializable transaction with the oldest xmin among
3609 * serializable transactions completes. We then find the "new oldest"
3610 * xmin and purge any transactions which finished before this transaction
3611 * was launched.
3613 * For parallel queries in read-only transactions, this function might run
3614 * twice; we only release the reference on the first call.
3616 needToClear = false;
3617 if ((partiallyReleasing ||
3618 !SxactIsPartiallyReleased(MySerializableXact)) &&
3619 TransactionIdEquals(MySerializableXact->xmin,
3620 PredXact->SxactGlobalXmin))
3622 Assert(PredXact->SxactGlobalXminCount > 0);
3623 if (--(PredXact->SxactGlobalXminCount) == 0)
3625 SetNewSxactGlobalXmin();
3626 needToClear = true;
3630 LWLockRelease(SerializableXactHashLock);
3632 LWLockAcquire(SerializableFinishedListLock, LW_EXCLUSIVE);
3634 /* Add this to the list of transactions to check for later cleanup. */
3635 if (isCommit)
3636 dlist_push_tail(FinishedSerializableTransactions,
3637 &MySerializableXact->finishedLink);
3640 * If we're releasing a RO_SAFE transaction in parallel mode, we'll only
3641 * partially release it. That's necessary because other backends may have
3642 * a reference to it. The leader will release the SERIALIZABLEXACT itself
3643 * at the end of the transaction after workers have stopped running.
3645 if (!isCommit)
3646 ReleaseOneSerializableXact(MySerializableXact,
3647 isReadOnlySafe && IsInParallelMode(),
3648 false);
3650 LWLockRelease(SerializableFinishedListLock);
3652 if (needToClear)
3653 ClearOldPredicateLocks();
3655 ReleasePredicateLocksLocal();
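/*
 * This function is reached in two ways: from the end-of-transaction code
 * (the commit/abort paths in xact.c, or PredicateLockTwoPhaseFinish() below)
 * with isReadOnlySafe = false, and from SerializationNeededForRead() with
 * isReadOnlySafe = true once a read-only transaction notices that its
 * SXACT_FLAG_RO_SAFE flag has been set.
 */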
3658 static void
3659 ReleasePredicateLocksLocal(void)
3661 MySerializableXact = InvalidSerializableXact;
3662 MyXactDidWrite = false;
3664 /* Delete per-transaction lock table */
3665 if (LocalPredicateLockHash != NULL)
3667 hash_destroy(LocalPredicateLockHash);
3668 LocalPredicateLockHash = NULL;
3673 * Clear old predicate locks, belonging to committed transactions that are no
3674 * longer interesting to any in-progress transaction.
3676 static void
3677 ClearOldPredicateLocks(void)
3679 dlist_mutable_iter iter;
3682 * Loop through finished transactions. They are in commit order, so we can
3683 * stop as soon as we find one that's still interesting.
3685 LWLockAcquire(SerializableFinishedListLock, LW_EXCLUSIVE);
3686 LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3687 dlist_foreach_modify(iter, FinishedSerializableTransactions)
3689 SERIALIZABLEXACT *finishedSxact =
3690 dlist_container(SERIALIZABLEXACT, finishedLink, iter.cur);
3692 if (!TransactionIdIsValid(PredXact->SxactGlobalXmin)
3693 || TransactionIdPrecedesOrEquals(finishedSxact->finishedBefore,
3694 PredXact->SxactGlobalXmin))
3697 * This transaction committed before any in-progress transaction
3698 * took its snapshot. It's no longer interesting.
3700 LWLockRelease(SerializableXactHashLock);
3701 dlist_delete_thoroughly(&finishedSxact->finishedLink);
3702 ReleaseOneSerializableXact(finishedSxact, false, false);
3703 LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3705 else if (finishedSxact->commitSeqNo > PredXact->HavePartialClearedThrough
3706 && finishedSxact->commitSeqNo <= PredXact->CanPartialClearThrough)
3709 * Any active transactions that took their snapshot before this
3710 * transaction committed are read-only, so we can clear part of
3711 * its state.
3713 LWLockRelease(SerializableXactHashLock);
3715 if (SxactIsReadOnly(finishedSxact))
3717 /* A read-only transaction can be removed entirely */
3718 dlist_delete_thoroughly(&(finishedSxact->finishedLink));
3719 ReleaseOneSerializableXact(finishedSxact, false, false);
3721 else
3724 * A read-write transaction can only be partially cleared. We
3725 * need to keep the SERIALIZABLEXACT but can release the
3726 * SIREAD locks and conflicts in.
3728 ReleaseOneSerializableXact(finishedSxact, true, false);
3731 PredXact->HavePartialClearedThrough = finishedSxact->commitSeqNo;
3732 LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3734 else
3736 /* Still interesting. */
3737 break;
3740 LWLockRelease(SerializableXactHashLock);
3743 * Loop through predicate locks on dummy transaction for summarized data.
3745 LWLockAcquire(SerializablePredicateListLock, LW_SHARED);
3746 dlist_foreach_modify(iter, &OldCommittedSxact->predicateLocks)
3748 PREDICATELOCK *predlock =
3749 dlist_container(PREDICATELOCK, xactLink, iter.cur);
3750 bool canDoPartialCleanup;
3752 LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3753 Assert(predlock->commitSeqNo != 0);
3754 Assert(predlock->commitSeqNo != InvalidSerCommitSeqNo);
3755 canDoPartialCleanup = (predlock->commitSeqNo <= PredXact->CanPartialClearThrough);
3756 LWLockRelease(SerializableXactHashLock);
3759 * If this lock originally belonged to an old enough transaction, we
3760 * can release it.
3762 if (canDoPartialCleanup)
3764 PREDICATELOCKTAG tag;
3765 PREDICATELOCKTARGET *target;
3766 PREDICATELOCKTARGETTAG targettag;
3767 uint32 targettaghash;
3768 LWLock *partitionLock;
3770 tag = predlock->tag;
3771 target = tag.myTarget;
3772 targettag = target->tag;
3773 targettaghash = PredicateLockTargetTagHashCode(&targettag);
3774 partitionLock = PredicateLockHashPartitionLock(targettaghash);
3776 LWLockAcquire(partitionLock, LW_EXCLUSIVE);
3778 dlist_delete(&(predlock->targetLink));
3779 dlist_delete(&(predlock->xactLink));
3781 hash_search_with_hash_value(PredicateLockHash, &tag,
3782 PredicateLockHashCodeFromTargetHashCode(&tag,
3783 targettaghash),
3784 HASH_REMOVE, NULL);
3785 RemoveTargetIfNoLongerUsed(target, targettaghash);
3787 LWLockRelease(partitionLock);
3791 LWLockRelease(SerializablePredicateListLock);
3792 LWLockRelease(SerializableFinishedListLock);
3796 * This is the normal way to delete anything from any of the predicate
3797 * locking hash tables. Given a transaction which we know can be deleted:
3798 * delete all predicate locks held by that transaction and any predicate
3799 * lock targets which are now unreferenced by a lock; delete all conflicts
3800 * for the transaction; delete all xid values for the transaction; then
3801 * delete the transaction.
3803 * When the partial flag is set, we can release all predicate locks and
3804 * in-conflict information -- we've established that there are no longer
3805 * any overlapping read write transactions for which this transaction could
3806 * matter -- but keep the transaction entry itself and any outConflicts.
3808 * When the summarize flag is set, we've run short of room for sxact data
3809 * and must summarize to the SLRU. Predicate locks are transferred to a
3810 * dummy "old" transaction, with duplicate locks on a single target
3811 * collapsing to a single lock with the "latest" commitSeqNo from among
3812 * the conflicting locks.
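*
* In other words:
*
*     ReleaseOneSerializableXact(sxact, false, false) -- remove it entirely
*     ReleaseOneSerializableXact(sxact, true,  false) -- partial clear; keep
*                                                        the sxact and its
*                                                        outConflicts
*     ReleaseOneSerializableXact(sxact, false, true)  -- summarize into
*                                                        OldCommittedSxact/SLRU
*
* The partial form is used by ClearOldPredicateLocks() and by the parallel
* RO_SAFE release above; the summarize form is used when we run short of
* room for SERIALIZABLEXACTs (see SummarizeOldestCommittedSxact()).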
3814 static void
3815 ReleaseOneSerializableXact(SERIALIZABLEXACT *sxact, bool partial,
3816 bool summarize)
3818 SERIALIZABLEXIDTAG sxidtag;
3819 dlist_mutable_iter iter;
3821 Assert(sxact != NULL);
3822 Assert(SxactIsRolledBack(sxact) || SxactIsCommitted(sxact));
3823 Assert(partial || !SxactIsOnFinishedList(sxact));
3824 Assert(LWLockHeldByMe(SerializableFinishedListLock));
3827 * First release all the predicate locks held by this xact (or transfer
3828 * them to OldCommittedSxact if summarize is true)
3830 LWLockAcquire(SerializablePredicateListLock, LW_SHARED);
3831 if (IsInParallelMode())
3832 LWLockAcquire(&sxact->perXactPredicateListLock, LW_EXCLUSIVE);
3833 dlist_foreach_modify(iter, &sxact->predicateLocks)
3835 PREDICATELOCK *predlock =
3836 dlist_container(PREDICATELOCK, xactLink, iter.cur);
3837 PREDICATELOCKTAG tag;
3838 PREDICATELOCKTARGET *target;
3839 PREDICATELOCKTARGETTAG targettag;
3840 uint32 targettaghash;
3841 LWLock *partitionLock;
3843 tag = predlock->tag;
3844 target = tag.myTarget;
3845 targettag = target->tag;
3846 targettaghash = PredicateLockTargetTagHashCode(&targettag);
3847 partitionLock = PredicateLockHashPartitionLock(targettaghash);
3849 LWLockAcquire(partitionLock, LW_EXCLUSIVE);
3851 dlist_delete(&predlock->targetLink);
3853 hash_search_with_hash_value(PredicateLockHash, &tag,
3854 PredicateLockHashCodeFromTargetHashCode(&tag,
3855 targettaghash),
3856 HASH_REMOVE, NULL);
3857 if (summarize)
3859 bool found;
3861 /* Fold into dummy transaction list. */
3862 tag.myXact = OldCommittedSxact;
3863 predlock = hash_search_with_hash_value(PredicateLockHash, &tag,
3864 PredicateLockHashCodeFromTargetHashCode(&tag,
3865 targettaghash),
3866 HASH_ENTER_NULL, &found);
3867 if (!predlock)
3868 ereport(ERROR,
3869 (errcode(ERRCODE_OUT_OF_MEMORY),
3870 errmsg("out of shared memory"),
3871 errhint("You might need to increase %s.", "max_pred_locks_per_transaction")));
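/*
 * (The hint names max_pred_locks_per_transaction because the shared
 * predicate-lock hash tables are sized from that GUC, roughly multiplied
 * by the allowed number of backends and prepared transactions, so raising
 * it also enlarges this table.)
 */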
3872 if (found)
3874 Assert(predlock->commitSeqNo != 0);
3875 Assert(predlock->commitSeqNo != InvalidSerCommitSeqNo);
3876 if (predlock->commitSeqNo < sxact->commitSeqNo)
3877 predlock->commitSeqNo = sxact->commitSeqNo;
3879 else
3881 dlist_push_tail(&target->predicateLocks,
3882 &predlock->targetLink);
3883 dlist_push_tail(&OldCommittedSxact->predicateLocks,
3884 &predlock->xactLink);
3885 predlock->commitSeqNo = sxact->commitSeqNo;
3888 else
3889 RemoveTargetIfNoLongerUsed(target, targettaghash);
3891 LWLockRelease(partitionLock);
3895 * Rather than retail removal, just re-init the head after we've run
3896 * through the list.
3898 dlist_init(&sxact->predicateLocks);
3900 if (IsInParallelMode())
3901 LWLockRelease(&sxact->perXactPredicateListLock);
3902 LWLockRelease(SerializablePredicateListLock);
3904 sxidtag.xid = sxact->topXid;
3905 LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
3907 /* Release all outConflicts (unless 'partial' is true) */
3908 if (!partial)
3910 dlist_foreach_modify(iter, &sxact->outConflicts)
3912 RWConflict conflict =
3913 dlist_container(RWConflictData, outLink, iter.cur);
3915 if (summarize)
3916 conflict->sxactIn->flags |= SXACT_FLAG_SUMMARY_CONFLICT_IN;
3917 ReleaseRWConflict(conflict);
3921 /* Release all inConflicts. */
3922 dlist_foreach_modify(iter, &sxact->inConflicts)
3924 RWConflict conflict =
3925 dlist_container(RWConflictData, inLink, iter.cur);
3927 if (summarize)
3928 conflict->sxactOut->flags |= SXACT_FLAG_SUMMARY_CONFLICT_OUT;
3929 ReleaseRWConflict(conflict);
3932 /* Finally, get rid of the xid and the record of the transaction itself. */
3933 if (!partial)
3935 if (sxidtag.xid != InvalidTransactionId)
3936 hash_search(SerializableXidHash, &sxidtag, HASH_REMOVE, NULL);
3937 ReleasePredXact(sxact);
3940 LWLockRelease(SerializableXactHashLock);
3944 * Tests whether the given top level transaction is concurrent with
3945 * (overlaps) our current transaction.
3947 * We need to identify the top level transaction for SSI, anyway, so pass
3948 * that to this function to save the overhead of checking the snapshot's
3949 * subxip array.
3951 static bool
3952 XidIsConcurrent(TransactionId xid)
3954 Snapshot snap;
3956 Assert(TransactionIdIsValid(xid));
3957 Assert(!TransactionIdEquals(xid, GetTopTransactionIdIfAny()));
3959 snap = GetTransactionSnapshot();
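/*
 * A xid that precedes our snapshot's xmin had already completed when the
 * snapshot was taken, so it cannot overlap us; one at or beyond xmax
 * necessarily does; anything in between overlaps only if it was still
 * running at snapshot time, i.e. present in the xip array.
 */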
3961 if (TransactionIdPrecedes(xid, snap->xmin))
3962 return false;
3964 if (TransactionIdFollowsOrEquals(xid, snap->xmax))
3965 return true;
3967 return pg_lfind32(xid, snap->xip, snap->xcnt);
3970 bool
3971 CheckForSerializableConflictOutNeeded(Relation relation, Snapshot snapshot)
3973 if (!SerializationNeededForRead(relation, snapshot))
3974 return false;
3976 /* Check if someone else has already decided that we need to die */
3977 if (SxactIsDoomed(MySerializableXact))
3979 ereport(ERROR,
3980 (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
3981 errmsg("could not serialize access due to read/write dependencies among transactions"),
3982 errdetail_internal("Reason code: Canceled on identification as a pivot, during conflict out checking."),
3983 errhint("The transaction might succeed if retried.")));
3986 return true;
3990 * CheckForSerializableConflictOut
3991 * A table AM is reading a tuple that has been modified. If it determines
3992 * that the tuple version it is reading is not visible to us, it should
3993 * pass in the top level xid of the transaction that created it.
3994 * Otherwise, if it determines that it is visible to us but it has been
3995 * deleted or there is a newer version available due to an update, it
3996 * should pass in the top level xid of the modifying transaction.
3998 * This function will check for overlap with our own transaction. If the given
3999 * xid is also serializable and the transactions overlap (i.e., they cannot see
4000 * each other's writes), then we have a conflict out.
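*
* For the heap AM, the caller is typically HeapCheckForSerializableConflictOut()
* in heapam.c, which works out the appropriate top level xid from the tuple's
* xmin or xmax and its visibility before calling here; other table AMs are
* expected to do the equivalent.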
4002 void
4003 CheckForSerializableConflictOut(Relation relation, TransactionId xid, Snapshot snapshot)
4005 SERIALIZABLEXIDTAG sxidtag;
4006 SERIALIZABLEXID *sxid;
4007 SERIALIZABLEXACT *sxact;
4009 if (!SerializationNeededForRead(relation, snapshot))
4010 return;
4012 /* Check if someone else has already decided that we need to die */
4013 if (SxactIsDoomed(MySerializableXact))
4015 ereport(ERROR,
4016 (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4017 errmsg("could not serialize access due to read/write dependencies among transactions"),
4018 errdetail_internal("Reason code: Canceled on identification as a pivot, during conflict out checking."),
4019 errhint("The transaction might succeed if retried.")));
4021 Assert(TransactionIdIsValid(xid));
4023 if (TransactionIdEquals(xid, GetTopTransactionIdIfAny()))
4024 return;
4027 * Find sxact or summarized info for the top level xid.
4029 sxidtag.xid = xid;
4030 LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4031 sxid = (SERIALIZABLEXID *)
4032 hash_search(SerializableXidHash, &sxidtag, HASH_FIND, NULL);
4033 if (!sxid)
4036 * Transaction not found in "normal" SSI structures. Check whether it
4037 * got pushed out to SLRU storage for "old committed" transactions.
4039 SerCommitSeqNo conflictCommitSeqNo;
4041 conflictCommitSeqNo = SerialGetMinConflictCommitSeqNo(xid);
4042 if (conflictCommitSeqNo != 0)
4044 if (conflictCommitSeqNo != InvalidSerCommitSeqNo
4045 && (!SxactIsReadOnly(MySerializableXact)
4046 || conflictCommitSeqNo
4047 <= MySerializableXact->SeqNo.lastCommitBeforeSnapshot))
4048 ereport(ERROR,
4049 (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4050 errmsg("could not serialize access due to read/write dependencies among transactions"),
4051 errdetail_internal("Reason code: Canceled on conflict out to old pivot %u.", xid),
4052 errhint("The transaction might succeed if retried.")));
4054 if (SxactHasSummaryConflictIn(MySerializableXact)
4055 || !dlist_is_empty(&MySerializableXact->inConflicts))
4056 ereport(ERROR,
4057 (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4058 errmsg("could not serialize access due to read/write dependencies among transactions"),
4059 errdetail_internal("Reason code: Canceled on identification as a pivot, with conflict out to old committed transaction %u.", xid),
4060 errhint("The transaction might succeed if retried.")));
4062 MySerializableXact->flags |= SXACT_FLAG_SUMMARY_CONFLICT_OUT;
4065 /* It's not serializable or otherwise not important. */
4066 LWLockRelease(SerializableXactHashLock);
4067 return;
4069 sxact = sxid->myXact;
4070 Assert(TransactionIdEquals(sxact->topXid, xid));
4071 if (sxact == MySerializableXact || SxactIsDoomed(sxact))
4073 /* Can't conflict with ourself or a transaction that will roll back. */
4074 LWLockRelease(SerializableXactHashLock);
4075 return;
4079 * We have a conflict out to a transaction which has a conflict out to a
4080 * summarized transaction. That summarized transaction must have
4081 * committed first, and we can't tell when it committed in relation to our
4082 * snapshot acquisition, so something needs to be canceled.
4084 if (SxactHasSummaryConflictOut(sxact))
4086 if (!SxactIsPrepared(sxact))
4088 sxact->flags |= SXACT_FLAG_DOOMED;
4089 LWLockRelease(SerializableXactHashLock);
4090 return;
4092 else
4094 LWLockRelease(SerializableXactHashLock);
4095 ereport(ERROR,
4096 (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4097 errmsg("could not serialize access due to read/write dependencies among transactions"),
4098 errdetail_internal("Reason code: Canceled on conflict out to old pivot."),
4099 errhint("The transaction might succeed if retried.")));
4104 * If this is a read-only transaction and the writing transaction has
4105 * committed, and it doesn't have a rw-conflict to a transaction which
4106 * committed before it, no conflict.
4108 if (SxactIsReadOnly(MySerializableXact)
4109 && SxactIsCommitted(sxact)
4110 && !SxactHasSummaryConflictOut(sxact)
4111 && (!SxactHasConflictOut(sxact)
4112 || MySerializableXact->SeqNo.lastCommitBeforeSnapshot < sxact->SeqNo.earliestOutConflictCommit))
4114 /* Read-only transaction will appear to run first. No conflict. */
4115 LWLockRelease(SerializableXactHashLock);
4116 return;
4119 if (!XidIsConcurrent(xid))
4121 /* This write was already in our snapshot; no conflict. */
4122 LWLockRelease(SerializableXactHashLock);
4123 return;
4126 if (RWConflictExists(MySerializableXact, sxact))
4128 /* We don't want duplicate conflict records in the list. */
4129 LWLockRelease(SerializableXactHashLock);
4130 return;
4134 * Flag the conflict. But first, if this conflict creates a dangerous
4135 * structure, ereport an error.
4137 FlagRWConflict(MySerializableXact, sxact);
4138 LWLockRelease(SerializableXactHashLock);
4142 * Check a particular target for rw-dependency conflict in. A subroutine of
4143 * CheckForSerializableConflictIn().
4145 static void
4146 CheckTargetForConflictsIn(PREDICATELOCKTARGETTAG *targettag)
4148 uint32 targettaghash;
4149 LWLock *partitionLock;
4150 PREDICATELOCKTARGET *target;
4151 PREDICATELOCK *mypredlock = NULL;
4152 PREDICATELOCKTAG mypredlocktag;
4153 dlist_mutable_iter iter;
4155 Assert(MySerializableXact != InvalidSerializableXact);
4158 * The same hash and LW lock apply to the lock target and the lock itself.
4160 targettaghash = PredicateLockTargetTagHashCode(targettag);
4161 partitionLock = PredicateLockHashPartitionLock(targettaghash);
4162 LWLockAcquire(partitionLock, LW_SHARED);
4163 target = (PREDICATELOCKTARGET *)
4164 hash_search_with_hash_value(PredicateLockTargetHash,
4165 targettag, targettaghash,
4166 HASH_FIND, NULL);
4167 if (!target)
4169 /* Nothing has this target locked; we're done here. */
4170 LWLockRelease(partitionLock);
4171 return;
4175 * Each lock for an overlapping transaction represents a conflict: a
4176 * rw-dependency in to this transaction.
4178 LWLockAcquire(SerializableXactHashLock, LW_SHARED);
4180 dlist_foreach_modify(iter, &target->predicateLocks)
4182 PREDICATELOCK *predlock =
4183 dlist_container(PREDICATELOCK, targetLink, iter.cur);
4184 SERIALIZABLEXACT *sxact = predlock->tag.myXact;
4186 if (sxact == MySerializableXact)
4189 * If we're getting a write lock on a tuple, we don't need a
4190 * predicate (SIREAD) lock on the same tuple. We can safely remove
4191 * our SIREAD lock, but we'll defer doing so until after the loop
4192 * because that requires upgrading to an exclusive partition lock.
4194 * We can't use this optimization within a subtransaction because
4195 * the subtransaction could roll back, and we would be left
4196 * without any lock at the top level.
4198 if (!IsSubTransaction()
4199 && GET_PREDICATELOCKTARGETTAG_OFFSET(*targettag))
4201 mypredlock = predlock;
4202 mypredlocktag = predlock->tag;
4205 else if (!SxactIsDoomed(sxact)
4206 && (!SxactIsCommitted(sxact)
4207 || TransactionIdPrecedes(GetTransactionSnapshot()->xmin,
4208 sxact->finishedBefore))
4209 && !RWConflictExists(sxact, MySerializableXact))
4211 LWLockRelease(SerializableXactHashLock);
4212 LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4215 * Re-check after getting exclusive lock because the other
4216 * transaction may have flagged a conflict.
4218 if (!SxactIsDoomed(sxact)
4219 && (!SxactIsCommitted(sxact)
4220 || TransactionIdPrecedes(GetTransactionSnapshot()->xmin,
4221 sxact->finishedBefore))
4222 && !RWConflictExists(sxact, MySerializableXact))
4224 FlagRWConflict(sxact, MySerializableXact);
4227 LWLockRelease(SerializableXactHashLock);
4228 LWLockAcquire(SerializableXactHashLock, LW_SHARED);
4231 LWLockRelease(SerializableXactHashLock);
4232 LWLockRelease(partitionLock);
4235 * If we found one of our own SIREAD locks to remove, remove it now.
4237 * At this point our transaction already has a RowExclusiveLock on the
4238 * relation, so we are OK to drop the predicate lock on the tuple, if
4239 * found, without fearing that another write against the tuple will occur
4240 * before the MVCC information makes it to the buffer.
4242 if (mypredlock != NULL)
4244 uint32 predlockhashcode;
4245 PREDICATELOCK *rmpredlock;
4247 LWLockAcquire(SerializablePredicateListLock, LW_SHARED);
4248 if (IsInParallelMode())
4249 LWLockAcquire(&MySerializableXact->perXactPredicateListLock, LW_EXCLUSIVE);
4250 LWLockAcquire(partitionLock, LW_EXCLUSIVE);
4251 LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4254 * Remove the predicate lock from shared memory, if it wasn't removed
4255 * while the locks were released. One way that could happen is from
4256 * autovacuum cleaning up an index.
4258 predlockhashcode = PredicateLockHashCodeFromTargetHashCode
4259 (&mypredlocktag, targettaghash);
4260 rmpredlock = (PREDICATELOCK *)
4261 hash_search_with_hash_value(PredicateLockHash,
4262 &mypredlocktag,
4263 predlockhashcode,
4264 HASH_FIND, NULL);
4265 if (rmpredlock != NULL)
4267 Assert(rmpredlock == mypredlock);
4269 dlist_delete(&(mypredlock->targetLink));
4270 dlist_delete(&(mypredlock->xactLink));
4272 rmpredlock = (PREDICATELOCK *)
4273 hash_search_with_hash_value(PredicateLockHash,
4274 &mypredlocktag,
4275 predlockhashcode,
4276 HASH_REMOVE, NULL);
4277 Assert(rmpredlock == mypredlock);
4279 RemoveTargetIfNoLongerUsed(target, targettaghash);
4282 LWLockRelease(SerializableXactHashLock);
4283 LWLockRelease(partitionLock);
4284 if (IsInParallelMode())
4285 LWLockRelease(&MySerializableXact->perXactPredicateListLock);
4286 LWLockRelease(SerializablePredicateListLock);
4288 if (rmpredlock != NULL)
4291 * Remove entry in local lock table if it exists. It's OK if it
4292 * doesn't exist; that means the lock was transferred to a new
4293 * target by a different backend.
4295 hash_search_with_hash_value(LocalPredicateLockHash,
4296 targettag, targettaghash,
4297 HASH_REMOVE, NULL);
4299 DecrementParentLocks(targettag);
4305 * CheckForSerializableConflictIn
4306 * We are writing the given tuple. If that indicates a rw-conflict
4307 * in from another serializable transaction, take appropriate action.
4309 * Skip checking for any granularity for which a parameter is missing.
4311 * A tuple update or delete is in conflict if we have a predicate lock
4312 * against the relation or page in which the tuple exists, or against the
4313 * tuple itself.
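*
* As an illustrative sketch (modeled on the heap AM's usage; the variable
* names here belong to the caller): a tuple-level write such as UPDATE or
* DELETE would typically be checked with something like
*
*     CheckForSerializableConflictIn(relation, &tup->t_self,
*                                    BufferGetBlockNumber(buffer));
*
* while an INSERT, which cannot conflict with a predicate lock on a
* pre-existing tuple, passes only the relation:
*
*     CheckForSerializableConflictIn(relation, NULL, InvalidBlockNumber);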
4315 void
4316 CheckForSerializableConflictIn(Relation relation, ItemPointer tid, BlockNumber blkno)
4318 PREDICATELOCKTARGETTAG targettag;
4320 if (!SerializationNeededForWrite(relation))
4321 return;
4323 /* Check if someone else has already decided that we need to die */
4324 if (SxactIsDoomed(MySerializableXact))
4325 ereport(ERROR,
4326 (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4327 errmsg("could not serialize access due to read/write dependencies among transactions"),
4328 errdetail_internal("Reason code: Canceled on identification as a pivot, during conflict in checking."),
4329 errhint("The transaction might succeed if retried.")));
4332 * We're doing a write which might cause rw-conflicts now or later.
4333 * Memorize that fact.
4335 MyXactDidWrite = true;
4338 * It is important that we check for locks from the finest granularity to
4339 * the coarsest granularity, so that granularity promotion doesn't cause
4340 * us to miss a lock. The new (coarser) lock will be acquired before the
4341 * old (finer) locks are released.
4343 * It is not possible to take and hold a lock across the checks for all
4344 * granularities because each target could be in a separate partition.
4346 if (tid != NULL)
4348 SET_PREDICATELOCKTARGETTAG_TUPLE(targettag,
4349 relation->rd_locator.dbOid,
4350 relation->rd_id,
4351 ItemPointerGetBlockNumber(tid),
4352 ItemPointerGetOffsetNumber(tid));
4353 CheckTargetForConflictsIn(&targettag);
4356 if (blkno != InvalidBlockNumber)
4358 SET_PREDICATELOCKTARGETTAG_PAGE(targettag,
4359 relation->rd_locator.dbOid,
4360 relation->rd_id,
4361 blkno);
4362 CheckTargetForConflictsIn(&targettag);
4365 SET_PREDICATELOCKTARGETTAG_RELATION(targettag,
4366 relation->rd_locator.dbOid,
4367 relation->rd_id);
4368 CheckTargetForConflictsIn(&targettag);
4372 * CheckTableForSerializableConflictIn
4373 * The entire table is going through a DDL-style logical mass delete
4374 * like TRUNCATE or DROP TABLE. If that causes a rw-conflict in from
4375 * another serializable transaction, take appropriate action.
4377 * While these operations do not operate entirely within the bounds of
4378 * snapshot isolation, they can occur inside a serializable transaction, and
4379 * will logically occur after any reads which saw rows which were destroyed
4380 * by these operations, so we do what we can to serialize properly under
4381 * SSI.
4383 * The relation passed in must be a heap relation. Any predicate lock of any
4384 * granularity on the heap will cause a rw-conflict in to this transaction.
4385 * Predicate locks on indexes do not matter because they only exist to guard
4386 * against conflicting inserts into the index, and this is a mass *delete*.
4387 * When a table is truncated or dropped, the index will also be truncated
4388 * or dropped, and we'll deal with locks on the index when that happens.
4390 * Dropping or truncating a table also needs to drop any existing predicate
4391 * locks on heap tuples or pages, because they're about to go away. This
4392 * should be done before altering the predicate locks because the transaction
4393 * could be rolled back because of a conflict, in which case the lock changes
4394 * are not needed. (At the moment, we don't actually bother to drop the
4395 * existing locks on a dropped or truncated table. That might
4396 * lead to some false positives, but it doesn't seem worth the trouble.)
4398 void
4399 CheckTableForSerializableConflictIn(Relation relation)
4401 HASH_SEQ_STATUS seqstat;
4402 PREDICATELOCKTARGET *target;
4403 Oid dbId;
4404 Oid heapId;
4405 int i;
4408 * Bail out quickly if there are no serializable transactions running.
4409 * It's safe to check this without taking locks because the caller is
4410 * holding an ACCESS EXCLUSIVE lock on the relation. No new locks which
4411 * would matter here can be acquired while that is held.
4413 if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
4414 return;
4416 if (!SerializationNeededForWrite(relation))
4417 return;
4420 * We're doing a write which might cause rw-conflicts now or later.
4421 * Memorize that fact.
4423 MyXactDidWrite = true;
4425 Assert(relation->rd_index == NULL); /* not an index relation */
4427 dbId = relation->rd_locator.dbOid;
4428 heapId = relation->rd_id;
4430 LWLockAcquire(SerializablePredicateListLock, LW_EXCLUSIVE);
4431 for (i = 0; i < NUM_PREDICATELOCK_PARTITIONS; i++)
4432 LWLockAcquire(PredicateLockHashPartitionLockByIndex(i), LW_SHARED);
4433 LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4435 /* Scan through target list */
4436 hash_seq_init(&seqstat, PredicateLockTargetHash);
4438 while ((target = (PREDICATELOCKTARGET *) hash_seq_search(&seqstat)))
4440 dlist_mutable_iter iter;
4443 * Check whether this is a target which needs attention.
4445 if (GET_PREDICATELOCKTARGETTAG_RELATION(target->tag) != heapId)
4446 continue; /* wrong relation id */
4447 if (GET_PREDICATELOCKTARGETTAG_DB(target->tag) != dbId)
4448 continue; /* wrong database id */
4451 * Loop through locks for this target and flag conflicts.
4453 dlist_foreach_modify(iter, &target->predicateLocks)
4455 PREDICATELOCK *predlock =
4456 dlist_container(PREDICATELOCK, targetLink, iter.cur);
4458 if (predlock->tag.myXact != MySerializableXact
4459 && !RWConflictExists(predlock->tag.myXact, MySerializableXact))
4461 FlagRWConflict(predlock->tag.myXact, MySerializableXact);
4466 /* Release locks in reverse order */
4467 LWLockRelease(SerializableXactHashLock);
4468 for (i = NUM_PREDICATELOCK_PARTITIONS - 1; i >= 0; i--)
4469 LWLockRelease(PredicateLockHashPartitionLockByIndex(i));
4470 LWLockRelease(SerializablePredicateListLock);
4475 * Flag a rw-dependency between two serializable transactions.
4477 * The caller is responsible for ensuring that we have a LW lock on
4478 * the transaction hash table.
4480 static void
4481 FlagRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer)
4483 Assert(reader != writer);
4485 /* First, see if this conflict causes failure. */
4486 OnConflict_CheckForSerializationFailure(reader, writer);
4488 /* Actually do the conflict flagging. */
4489 if (reader == OldCommittedSxact)
4490 writer->flags |= SXACT_FLAG_SUMMARY_CONFLICT_IN;
4491 else if (writer == OldCommittedSxact)
4492 reader->flags |= SXACT_FLAG_SUMMARY_CONFLICT_OUT;
4493 else
4494 SetRWConflict(reader, writer);
4497 /*----------------------------------------------------------------------------
4498 * We are about to add a RW-edge to the dependency graph - check that we don't
4499 * introduce a dangerous structure by doing so, and abort one of the
4500 * transactions if so.
4502 * A serialization failure can only occur if there is a dangerous structure
4503 * in the dependency graph:
4505 *    Tin ------> Tpivot ------> Tout
4506 *           rw             rw
4508 * Furthermore, Tout must commit first.
4510 * One more optimization is that if Tin is declared READ ONLY (or commits
4511 * without writing), we can only have a problem if Tout committed before Tin
4512 * acquired its snapshot.
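*
* As a concrete illustration, consider simple write skew: T1 reads row A and
* updates row B while T2 reads row B and updates row A. Each transaction then
* has a rw-conflict out to the other (a two-transaction cycle, in which Tin
* and Tout are the same transaction). One of the two will be aborted, either
* here when the second conflict is flagged (if one of them has already
* prepared or committed) or else by PreCommit_CheckForSerializationFailure()
* when the first of them attempts to commit.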
4513 *----------------------------------------------------------------------------
4515 static void
4516 OnConflict_CheckForSerializationFailure(const SERIALIZABLEXACT *reader,
4517 SERIALIZABLEXACT *writer)
4519 bool failure;
4521 Assert(LWLockHeldByMe(SerializableXactHashLock));
4523 failure = false;
4525 /*------------------------------------------------------------------------
4526 * Check for already-committed writer with rw-conflict out flagged
4527 * (conflict-flag on W means that T2 committed before W):
4529 *    R ------> W ------> T2
4530 *        rw        rw
4532 * That is a dangerous structure, so we must abort. (Since the writer
4533 * has already committed, we must be the reader)
4534 *------------------------------------------------------------------------
4536 if (SxactIsCommitted(writer)
4537 && (SxactHasConflictOut(writer) || SxactHasSummaryConflictOut(writer)))
4538 failure = true;
4540 /*------------------------------------------------------------------------
4541 * Check whether the writer has become a pivot with an out-conflict
4542 * committed transaction (T2), and T2 committed first:
4544 *    R ------> W ------> T2
4545 *        rw        rw
4547 * Because T2 must've committed first, there is no anomaly if:
4548 * - the reader committed before T2
4549 * - the writer committed before T2
4550 * - the reader is a READ ONLY transaction and the reader was concurrent
4551 * with T2 (= reader acquired its snapshot before T2 committed)
4553 * We also handle the case that T2 is prepared but not yet committed
4554 * here. In that case T2 has already checked for conflicts, so if it
4555 * commits first, making the above conflict real, it's too late for it
4556 * to abort.
4557 *------------------------------------------------------------------------
4559 if (!failure && SxactHasSummaryConflictOut(writer))
4560 failure = true;
4561 else if (!failure)
4563 dlist_iter iter;
4565 dlist_foreach(iter, &writer->outConflicts)
4567 RWConflict conflict =
4568 dlist_container(RWConflictData, outLink, iter.cur);
4569 SERIALIZABLEXACT *t2 = conflict->sxactIn;
4571 if (SxactIsPrepared(t2)
4572 && (!SxactIsCommitted(reader)
4573 || t2->prepareSeqNo <= reader->commitSeqNo)
4574 && (!SxactIsCommitted(writer)
4575 || t2->prepareSeqNo <= writer->commitSeqNo)
4576 && (!SxactIsReadOnly(reader)
4577 || t2->prepareSeqNo <= reader->SeqNo.lastCommitBeforeSnapshot))
4579 failure = true;
4580 break;
4585 /*------------------------------------------------------------------------
4586 * Check whether the reader has become a pivot with a writer
4587 * that's committed (or prepared):
4589 *    T0 ------> R ------> W
4590 *         rw        rw
4592 * Because W must've committed first for an anomaly to occur, there is no
4593 * anomaly if:
4594 * - T0 committed before the writer
4595 * - T0 is READ ONLY, and overlaps the writer
4596 *------------------------------------------------------------------------
4598 if (!failure && SxactIsPrepared(writer) && !SxactIsReadOnly(reader))
4600 if (SxactHasSummaryConflictIn(reader))
4602 failure = true;
4604 else
4606 dlist_iter iter;
4609 * The unconstify is needed as we have no const version of
4610 * dlist_foreach().
4612 dlist_foreach(iter, &unconstify(SERIALIZABLEXACT *, reader)->inConflicts)
4614 const RWConflict conflict =
4615 dlist_container(RWConflictData, inLink, iter.cur);
4616 const SERIALIZABLEXACT *t0 = conflict->sxactOut;
4618 if (!SxactIsDoomed(t0)
4619 && (!SxactIsCommitted(t0)
4620 || t0->commitSeqNo >= writer->prepareSeqNo)
4621 && (!SxactIsReadOnly(t0)
4622 || t0->SeqNo.lastCommitBeforeSnapshot >= writer->prepareSeqNo))
4624 failure = true;
4625 break;
4631 if (failure)
4634 * We have to kill a transaction to avoid a possible anomaly from
4635 * occurring. If the writer is us, we can just ereport() to cause a
4636 * transaction abort. Otherwise we flag the writer for termination,
4637 * causing it to abort when it tries to commit. However, if the writer
4638 * has already prepared, we can't abort it anymore, so we have to kill
4639 * the reader instead.
4641 if (MySerializableXact == writer)
4643 LWLockRelease(SerializableXactHashLock);
4644 ereport(ERROR,
4645 (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4646 errmsg("could not serialize access due to read/write dependencies among transactions"),
4647 errdetail_internal("Reason code: Canceled on identification as a pivot, during write."),
4648 errhint("The transaction might succeed if retried.")));
4650 else if (SxactIsPrepared(writer))
4652 LWLockRelease(SerializableXactHashLock);
4654 /* if we're not the writer, we have to be the reader */
4655 Assert(MySerializableXact == reader);
4656 ereport(ERROR,
4657 (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4658 errmsg("could not serialize access due to read/write dependencies among transactions"),
4659 errdetail_internal("Reason code: Canceled on conflict out to pivot %u, during read.", writer->topXid),
4660 errhint("The transaction might succeed if retried.")));
4662 writer->flags |= SXACT_FLAG_DOOMED;
4667 * PreCommit_CheckForSerializationFailure
4668 * Check for dangerous structures in a serializable transaction
4669 * at commit.
4671 * We're checking for a dangerous structure as each conflict is recorded.
4672 * The only way we could have a problem at commit is if this is the "out"
4673 * side of a pivot, and neither the "in" side nor the pivot has yet
4674 * committed.
4676 * If a dangerous structure is found, the pivot (the near conflict) is
4677 * marked for death, because rolling back another transaction might mean
4678 * that we fail without ever making progress. This transaction is
4679 * committing writes, so letting it commit ensures progress. If we
4680 * canceled the far conflict, it might immediately fail again on retry.
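*
* This is reached both when committing and when executing PREPARE
* TRANSACTION, which is why, on success, we stamp prepareSeqNo and set
* SXACT_FLAG_PREPARED below rather than marking the transaction committed.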
4682 void
4683 PreCommit_CheckForSerializationFailure(void)
4685 dlist_iter near_iter;
4687 if (MySerializableXact == InvalidSerializableXact)
4688 return;
4690 Assert(IsolationIsSerializable());
4692 LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4695 * Check if someone else has already decided that we need to die. Since
4696 * we set our own DOOMED flag when partially releasing, ignore in that
4697 * case.
4699 if (SxactIsDoomed(MySerializableXact) &&
4700 !SxactIsPartiallyReleased(MySerializableXact))
4702 LWLockRelease(SerializableXactHashLock);
4703 ereport(ERROR,
4704 (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4705 errmsg("could not serialize access due to read/write dependencies among transactions"),
4706 errdetail_internal("Reason code: Canceled on identification as a pivot, during commit attempt."),
4707 errhint("The transaction might succeed if retried.")));
4710 dlist_foreach(near_iter, &MySerializableXact->inConflicts)
4712 RWConflict nearConflict =
4713 dlist_container(RWConflictData, inLink, near_iter.cur);
4715 if (!SxactIsCommitted(nearConflict->sxactOut)
4716 && !SxactIsDoomed(nearConflict->sxactOut))
4718 dlist_iter far_iter;
4720 dlist_foreach(far_iter, &nearConflict->sxactOut->inConflicts)
4722 RWConflict farConflict =
4723 dlist_container(RWConflictData, inLink, far_iter.cur);
4725 if (farConflict->sxactOut == MySerializableXact
4726 || (!SxactIsCommitted(farConflict->sxactOut)
4727 && !SxactIsReadOnly(farConflict->sxactOut)
4728 && !SxactIsDoomed(farConflict->sxactOut)))
4731 * Normally, we kill the pivot transaction to make sure we
4732 * make progress if the failing transaction is retried.
4733 * However, we can't kill it if it's already prepared, so
4734 * in that case we abort ourselves (the reader) instead.
4736 if (SxactIsPrepared(nearConflict->sxactOut))
4738 LWLockRelease(SerializableXactHashLock);
4739 ereport(ERROR,
4740 (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4741 errmsg("could not serialize access due to read/write dependencies among transactions"),
4742 errdetail_internal("Reason code: Canceled on commit attempt with conflict in from prepared pivot."),
4743 errhint("The transaction might succeed if retried.")));
4745 nearConflict->sxactOut->flags |= SXACT_FLAG_DOOMED;
4746 break;
4752 MySerializableXact->prepareSeqNo = ++(PredXact->LastSxactCommitSeqNo);
4753 MySerializableXact->flags |= SXACT_FLAG_PREPARED;
4755 LWLockRelease(SerializableXactHashLock);
4758 /*------------------------------------------------------------------------*/
4761 * Two-phase commit support
4765 * AtPrepare_PredicateLocks
4766 * Do the preparatory work for a PREPARE: make 2PC state file
4767 * records for all predicate locks currently held.
4769 void
4770 AtPrepare_PredicateLocks(void)
4772 SERIALIZABLEXACT *sxact;
4773 TwoPhasePredicateRecord record;
4774 TwoPhasePredicateXactRecord *xactRecord;
4775 TwoPhasePredicateLockRecord *lockRecord;
4776 dlist_iter iter;
4778 sxact = MySerializableXact;
4779 xactRecord = &(record.data.xactRecord);
4780 lockRecord = &(record.data.lockRecord);
4782 if (MySerializableXact == InvalidSerializableXact)
4783 return;
4785 /* Generate an xact record for our SERIALIZABLEXACT */
4786 record.type = TWOPHASEPREDICATERECORD_XACT;
4787 xactRecord->xmin = MySerializableXact->xmin;
4788 xactRecord->flags = MySerializableXact->flags;
4791 * Note that we don't include our lists of conflicts in and out in the
4792 * statefile, because new conflicts can be added even after the
4793 * transaction prepares. We'll just make a conservative assumption during
4794 * recovery instead.
4797 RegisterTwoPhaseRecord(TWOPHASE_RM_PREDICATELOCK_ID, 0,
4798 &record, sizeof(record));
4801 * Generate a lock record for each lock.
4803 * To do this, we need to walk the predicate lock list in our sxact rather
4804 * than using the local predicate lock table because the latter is not
4805 * guaranteed to be accurate.
4807 LWLockAcquire(SerializablePredicateListLock, LW_SHARED);
4810 * No need to take sxact->perXactPredicateListLock in parallel mode
4811 * because there cannot be any parallel workers running while we are
4812 * preparing a transaction.
4814 Assert(!IsParallelWorker() && !ParallelContextActive());
4816 dlist_foreach(iter, &sxact->predicateLocks)
4818 PREDICATELOCK *predlock =
4819 dlist_container(PREDICATELOCK, xactLink, iter.cur);
4821 record.type = TWOPHASEPREDICATERECORD_LOCK;
4822 lockRecord->target = predlock->tag.myTarget->tag;
4824 RegisterTwoPhaseRecord(TWOPHASE_RM_PREDICATELOCK_ID, 0,
4825 &record, sizeof(record));
4828 LWLockRelease(SerializablePredicateListLock);
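/*
 * The records registered above are replayed by
 * predicatelock_twophase_recover(), below, when prepared transactions are
 * reloaded after a crash or restart.
 */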
4832 * PostPrepare_PredicateLocks
4833 * Clean up after successful PREPARE. Unlike the non-predicate
4834 * lock manager, we do not need to transfer locks to a dummy
4835 * PGPROC because our SERIALIZABLEXACT will stay around
4836 * anyway. We only need to clean up our local state.
4838 void
4839 PostPrepare_PredicateLocks(TransactionId xid)
4841 if (MySerializableXact == InvalidSerializableXact)
4842 return;
4844 Assert(SxactIsPrepared(MySerializableXact));
4846 MySerializableXact->pid = 0;
4847 MySerializableXact->pgprocno = INVALID_PROC_NUMBER;
4849 hash_destroy(LocalPredicateLockHash);
4850 LocalPredicateLockHash = NULL;
4852 MySerializableXact = InvalidSerializableXact;
4853 MyXactDidWrite = false;
4857 * PredicateLockTwoPhaseFinish
4858 * Release a prepared transaction's predicate locks once it
4859 * commits or aborts.
4861 void
4862 PredicateLockTwoPhaseFinish(TransactionId xid, bool isCommit)
4864 SERIALIZABLEXID *sxid;
4865 SERIALIZABLEXIDTAG sxidtag;
4867 sxidtag.xid = xid;
4869 LWLockAcquire(SerializableXactHashLock, LW_SHARED);
4870 sxid = (SERIALIZABLEXID *)
4871 hash_search(SerializableXidHash, &sxidtag, HASH_FIND, NULL);
4872 LWLockRelease(SerializableXactHashLock);
4874 /* xid will not be found if it wasn't a serializable transaction */
4875 if (sxid == NULL)
4876 return;
4878 /* Release its locks */
4879 MySerializableXact = sxid->myXact;
4880 MyXactDidWrite = true; /* conservatively assume that we wrote
4881 * something */
4882 ReleasePredicateLocks(isCommit, false);
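/*
 * Note that ReleasePredicateLocks() ends by calling
 * ReleasePredicateLocksLocal(), which resets the MySerializableXact and
 * MyXactDidWrite values we temporarily adopted above.
 */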
4886 * Re-acquire a predicate lock belonging to a transaction that was prepared.
4888 void
4889 predicatelock_twophase_recover(TransactionId xid, uint16 info,
4890 void *recdata, uint32 len)
4892 TwoPhasePredicateRecord *record;
4894 Assert(len == sizeof(TwoPhasePredicateRecord));
4896 record = (TwoPhasePredicateRecord *) recdata;
4898 Assert((record->type == TWOPHASEPREDICATERECORD_XACT) ||
4899 (record->type == TWOPHASEPREDICATERECORD_LOCK));
4901 if (record->type == TWOPHASEPREDICATERECORD_XACT)
4903 /* Per-transaction record. Set up a SERIALIZABLEXACT. */
4904 TwoPhasePredicateXactRecord *xactRecord;
4905 SERIALIZABLEXACT *sxact;
4906 SERIALIZABLEXID *sxid;
4907 SERIALIZABLEXIDTAG sxidtag;
4908 bool found;
4910 xactRecord = (TwoPhasePredicateXactRecord *) &record->data.xactRecord;
4912 LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4913 sxact = CreatePredXact();
4914 if (!sxact)
4915 ereport(ERROR,
4916 (errcode(ERRCODE_OUT_OF_MEMORY),
4917 errmsg("out of shared memory")));
4919 /* vxid for a prepared xact is INVALID_PROC_NUMBER/xid; no pid */
4920 sxact->vxid.procNumber = INVALID_PROC_NUMBER;
4921 sxact->vxid.localTransactionId = (LocalTransactionId) xid;
4922 sxact->pid = 0;
4923 sxact->pgprocno = INVALID_PROC_NUMBER;
4925 /* a prepared xact hasn't committed yet */
4926 sxact->prepareSeqNo = RecoverySerCommitSeqNo;
4927 sxact->commitSeqNo = InvalidSerCommitSeqNo;
4928 sxact->finishedBefore = InvalidTransactionId;
4930 sxact->SeqNo.lastCommitBeforeSnapshot = RecoverySerCommitSeqNo;
4933 * Don't need to track this; no transactions running at the time the
4934 * recovered xact started are still active, except possibly other
4935 * prepared xacts, and we don't care whether those are RO_SAFE or not.
4937 dlist_init(&(sxact->possibleUnsafeConflicts));
4939 dlist_init(&(sxact->predicateLocks));
4940 dlist_node_init(&sxact->finishedLink);
4942 sxact->topXid = xid;
4943 sxact->xmin = xactRecord->xmin;
4944 sxact->flags = xactRecord->flags;
4945 Assert(SxactIsPrepared(sxact));
4946 if (!SxactIsReadOnly(sxact))
4948 ++(PredXact->WritableSxactCount);
4949 Assert(PredXact->WritableSxactCount <=
4950 (MaxBackends + max_prepared_xacts));
4954 * We don't know whether the transaction had any conflicts or not, so
4955 * we'll conservatively assume that it had both a conflict in and a
4956 * conflict out, and represent that with the summary conflict flags.
4958 dlist_init(&(sxact->outConflicts));
4959 dlist_init(&(sxact->inConflicts));
4960 sxact->flags |= SXACT_FLAG_SUMMARY_CONFLICT_IN;
4961 sxact->flags |= SXACT_FLAG_SUMMARY_CONFLICT_OUT;
4963 /* Register the transaction's xid */
4964 sxidtag.xid = xid;
4965 sxid = (SERIALIZABLEXID *) hash_search(SerializableXidHash,
4966 &sxidtag,
4967 HASH_ENTER, &found);
4968 Assert(sxid != NULL);
4969 Assert(!found);
4970 sxid->myXact = (SERIALIZABLEXACT *) sxact;
4973 * Update global xmin. Note that this is a special case compared to
4974 * registering a normal transaction, because the global xmin might go
4975 * backwards. That's OK, because until recovery is over we're not
4976 * going to complete any transactions or create any non-prepared
4977 * transactions, so there's no danger of throwing away needed state.
4979 if ((!TransactionIdIsValid(PredXact->SxactGlobalXmin)) ||
4980 (TransactionIdFollows(PredXact->SxactGlobalXmin, sxact->xmin)))
4982 PredXact->SxactGlobalXmin = sxact->xmin;
4983 PredXact->SxactGlobalXminCount = 1;
4984 SerialSetActiveSerXmin(sxact->xmin);
4986 else if (TransactionIdEquals(sxact->xmin, PredXact->SxactGlobalXmin))
4988 Assert(PredXact->SxactGlobalXminCount > 0);
4989 PredXact->SxactGlobalXminCount++;
4992 LWLockRelease(SerializableXactHashLock);
4994 else if (record->type == TWOPHASEPREDICATERECORD_LOCK)
4996 /* Lock record. Recreate the PREDICATELOCK */
4997 TwoPhasePredicateLockRecord *lockRecord;
4998 SERIALIZABLEXID *sxid;
4999 SERIALIZABLEXACT *sxact;
5000 SERIALIZABLEXIDTAG sxidtag;
5001 uint32 targettaghash;
5003 lockRecord = (TwoPhasePredicateLockRecord *) &record->data.lockRecord;
5004 targettaghash = PredicateLockTargetTagHashCode(&lockRecord->target);
5006 LWLockAcquire(SerializableXactHashLock, LW_SHARED);
5007 sxidtag.xid = xid;
5008 sxid = (SERIALIZABLEXID *)
5009 hash_search(SerializableXidHash, &sxidtag, HASH_FIND, NULL);
5010 LWLockRelease(SerializableXactHashLock);
5012 Assert(sxid != NULL);
5013 sxact = sxid->myXact;
5014 Assert(sxact != InvalidSerializableXact);
5016 CreatePredicateLock(&lockRecord->target, targettaghash, sxact);
5021 * Prepare to share the current SERIALIZABLEXACT with parallel workers.
5022 * Return a handle object that can be used by AttachSerializableXact() in a
5023 * parallel worker.
5025 SerializableXactHandle
5026 ShareSerializableXact(void)
5028 return MySerializableXact;
5032 * Allow parallel workers to import the leader's SERIALIZABLEXACT.
5034 void
5035 AttachSerializableXact(SerializableXactHandle handle)
5038 Assert(MySerializableXact == InvalidSerializableXact);
5040 MySerializableXact = (SERIALIZABLEXACT *) handle;
5041 if (MySerializableXact != InvalidSerializableXact)
5042 CreateLocalPredicateLockHash();
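/*
 * Together these two functions implement the leader/worker handoff: the
 * parallel-query setup code obtains a handle from ShareSerializableXact()
 * in the leader, and each worker passes that handle to
 * AttachSerializableXact() during startup, so that all participants share
 * one SERIALIZABLEXACT (see the parallel infrastructure in
 * access/transam/parallel.c).
 */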