Make backfill batch selection exclude rows inserted or updated after backfill start #648
Backfill only rows present at backfill start. This is a second approach to solving #583; the first one is #634.

Change the backfill algorithm to backfill only those rows that were present at the start of the backfill process. Rows inserted or updated after backfill start will be backfilled by the already-installed `up` trigger and do not need to be backfilled by the backfill process (although doing so is safe from a correctness perspective). Avoiding backfilling rows that were inserted or updated after backfill start ensures that the backfill process is guaranteed to terminate, even if a large number of rows are inserted or updated while the backfill is running.
The new algorithm works as follows:

1. Create a 'batch table' in the `pgroll` schema. The batch table is used to store the primary key values of each batch of rows to be updated during the backfill process. The table holds at most `batchSize` rows at a time and is `TRUNCATE`d at the start of each batch.
2. Begin a `REPEATABLE READ` transaction and take a transaction snapshot. This transaction remains open for the duration of the backfill so that other transactions can use the snapshot.
3. For each batch, `INSERT INTO` the batch table the primary key values of the next batch of rows to be updated. The transaction that does the `INSERT INTO` uses the snapshot taken in step 2 so that only rows present at the start of the backfill are visible.
4. `UPDATE` the rows identified by the batch table, causing the `ON UPDATE` trigger to fire for the affected rows.

The 'batch table' is necessary as a temporary store of the primary key values of the rows to be updated because the per-batch transaction that selects those rows runs at the `REPEATABLE READ` isolation level (by necessity, in order to use the transaction snapshot). Trying to update the selected batch of rows in the same transaction would fail with serialization errors whenever a row in the batch had been updated by a transaction committed after the snapshot was taken. Such serialization errors can safely be ignored, as any rows updated after the snapshot was taken will already have been backfilled by the `up` trigger. To avoid the serialization errors, therefore, the batch of rows to be updated is written to the 'batch table', from where the batch can be `UPDATE`d from a `READ COMMITTED` transaction that cannot encounter serialization errors.

The largest drawback of this approach is that it requires holding a transaction open for the duration of the backfill. Long-running transactions can cause bloat in the database by preventing vacuuming of dead rows.
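The flow above can be sketched in plain SQL. This is a minimal illustration, not pgroll's actual implementation: the table `app.users`, its primary key `id`, the batch table name `pgroll.batch`, the snapshot ID, and the `$`-placeholders are all hypothetical.

```sql
-- Once, at backfill start: open the snapshot-holding transaction
-- (hypothetical sketch; identifiers are illustrative).
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT pg_export_snapshot();   -- returns a snapshot ID, e.g. '00000004-0000006E-1'
-- ... this transaction stays open for the whole backfill ...

-- Per batch, in a separate REPEATABLE READ transaction that adopts the snapshot,
-- so only rows present at backfill start are visible:
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SET TRANSACTION SNAPSHOT '00000004-0000006E-1';
TRUNCATE pgroll.batch;
INSERT INTO pgroll.batch (id)
  SELECT id FROM app.users
  WHERE id > $last_seen_id     -- resume point from the previous batch
  ORDER BY id
  LIMIT $batchSize;            -- at most batchSize rows per batch
COMMIT;

-- Then, in a READ COMMITTED transaction (which cannot hit serialization
-- errors), touch the selected rows so the ON UPDATE trigger fires:
BEGIN;                         -- READ COMMITTED is the default isolation level
UPDATE app.users u
  SET id = u.id                -- no-op update; fires the up trigger
  FROM pgroll.batch b
  WHERE u.id = b.id;
COMMIT;
```

Splitting the snapshot-scoped `SELECT` from the `READ COMMITTED` `UPDATE` is what lets concurrent writes to batched rows proceed without aborting the backfill.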