Merge pull request #6006 from EnterpriseDB/release/2024-08-29a
Release/2024-08-29a
gvasquezvargas authored Aug 29, 2024
2 parents 8abb45f + 27fc779 commit 9bac6fc
Showing 9 changed files with 40 additions and 29 deletions.
5 changes: 4 additions & 1 deletion advocacy_docs/pg_extensions/extensionrefs.json
@@ -122,5 +122,8 @@
"sql_profiler":"https://www.enterprisedb.com/docs/pem/latest/profiling_workloads/using_sql_profiler/",
"pg_squeeze":"https://www.enterprisedb.com/docs/pg_extensions/pg_squeeze/",
"wal2json":"https://www.enterprisedb.com/docs/pg_extensions/wal2json/",
"system_stats":"https://www.enterprisedb.com/docs/pg_extensions/system_stats/"
"system_stats":"https://www.enterprisedb.com/docs/pg_extensions/system_stats/",
"orafce":"https://github.com/orafce/orafce",
"plv8":"https://github.com/plv8/plv8",
"bluefin":"https://www.enterprisedb.com/docs/pg_extensions/advanced_storage_pack/"
}
5 changes: 4 additions & 1 deletion advocacy_docs/pg_extensions/index.mdx

Large diffs are not rendered by default.

@@ -12,13 +12,13 @@ New features, enhancements, bug fixes, and other changes in Replication Server 7
| Enhancement | EDB Postgres Replication Server now supports batch replication for Oracle CLOB and BLOB types in Synchronize mode. This significantly reduces the time required to replicate a large number of row changes containing CLOB/BLOB type columns. | #102185 <br/> #35513 |
| Enhancement | The transaction commit timestamp from the origin node is now preserved when replicating to a target node. This is required to process the timestamp-based UPDATE conflicts when EDB Postgres Replication Server is used in a hybrid cluster where the EPRS Publication node is shared with EDB Postgres Distributed (PGD). | |
| Bug&nbsp;fix | Fixed a corner case issue where the synchronization from the target MMR node failed to resume. | #102604 <br/> #35539 |
| Bug&nbsp;fix | Fixed an issue where some of the processed entries in the `rrep_tx_monitor` table were not cleaned up. | |
| Bug&nbsp;fix | Fixed an issue where some of the processed entries in the `rrep_tx_monitor` table weren't cleaned up. | |

## End-of-support notice

Replication Server 6.2 is no longer a supported version.

To ensure that your usage of Replication Server is supported, please upgrade any installations with version 6.2 to version 7. See the end-of-support notes that follow:
To ensure that your usage of Replication Server is supported, upgrade any installations with version 6.2 to version 7. See the end-of-support notes that follow:

**Software:** Replication Server

19 changes: 12 additions & 7 deletions product_docs/docs/pem/9/monitoring_performance/probes.mdx
@@ -165,13 +165,18 @@ Use the **General** tab to modify the definition of an existing probe or to spec

To invoke a script on a Windows system, set the registry entry for `AllowBatchJobSteps` to `true` and restart the PEM agent.

- Use the **Target Type** list to select the object type for the probe to monitor. **Target Type** is disabled if **Collection method** is WMI. Configure the **Target Type** in probes to ensure that the collected metrics are relevant and accurate for the intended level of the system hierarchy. With the **Target Type**, you can effectively monitor various components, ranging from individual database objects like tables and indexes, up to server-wide monitoring.
- Use the **Target Type** list to select the object type for the probe to monitor. **Target Type** is disabled if **Collection method** is WMI. Configure **Target Type** in probes to ensure that the collected metrics are relevant and accurate for the intended level of the system hierarchy. By setting the target type, you can effectively monitor various components, ranging from individual database objects, like tables and indexes, to the entire server.

<details><summary>Target Type configuration matrix</summary>
<details><summary>Target type configuration matrix</summary>

The following table provides an overview of **Target Type** configurations, their monitoring scope, and the mandatory fields you must provide when configuring them.
The following table provides an overview of the target type configurations, their monitoring scope, and the mandatory fields for configuring them.

In this table, the target type column determines the object you want to monitor with the probe. The execution level column determines if the monitoring probes executes once per-database, per-server, per-agent, or per-extension level. The mandatory column indicates the coloumns you must configure in the probe query to ensure the required data is collected. Finally, the probe examples column provides some existing probes you can explore to better understand how probes are used in practice.
In this table:

- The **Target type** column determines the object you want to monitor with the probe.
- The **Execution level** column determines if the monitoring probes execute once per database, per server, per agent, or per extension level.
- The **Mandatory columns** column indicates the columns you must configure in the probe query to ensure the required data is collected.
- The **Probe examples** column provides some existing probes you can explore to better understand how probes are used in practice.

| Target type | Execution level | Mandatory columns | Probe examples |
|-------------|-----------------|------------------------------------------------------|----------------|
@@ -187,8 +192,8 @@
| Extension | Extension | None | Extension |

!!!note
- The custom probes set to a database or larger **Target Type** (including schema, table, index, view, sequence, and functions) collects the information at database level.
- The system probes set to a schema or larger **Target Type** collects the information up to schema level.
- The custom probes set to a database or larger target type (including schema, table, index, view, sequence, and functions) collect the information at the database level.
- The system probes set to a schema or larger target type collect the information up to the schema level.
!!!

</details>
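
To make the target type and mandatory-columns idea concrete, here's a minimal sketch of what a custom probe query for a table-level target type could look like. The query and its column aliases are illustrative assumptions, not a shipped PEM probe definition; only the PostgreSQL catalog objects it reads are standard.

```sql
-- Illustrative probe query for a Table target type (not an actual PEM probe).
-- The output columns stand in for the mandatory columns you'd declare for the
-- probe on the Columns tab; their names here are assumptions.
SELECT
    current_database()            AS database_name,
    n.nspname                     AS schema_name,
    c.relname                     AS table_name,
    pg_total_relation_size(c.oid) AS total_size_bytes
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r';
```

A probe like this collects its data at the database level, consistent with the note above about custom probes set to a database or larger target type.
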
@@ -211,7 +216,7 @@ Use the **General** tab to modify the definition of an existing probe or to spec

Use the **Columns** tab to define the columns in which to store the probe data. To define a new column:
1. Navigate to the **Columns** tab and select **Add** in the upper-right corner.
2. Provide a column name in the **Name** field.
2. In the **Name** field, provide a column name.
3. Select **Edit** (to the left of the new column name) to provide information about the column:

- Provide a descriptive name for the column in the **Name** field.
2 changes: 1 addition & 1 deletion product_docs/docs/pgd/5/monitoring/sql.mdx
@@ -534,7 +534,7 @@ bdrdb=# SELECT * FROM bdr.monitor_group_versions();
## Monitoring Raft consensus

Raft consensus must be working cluster-wide at all times. The impact
of running a EDB Postgres Distributed cluster without Raft consensus working might be as
of running an EDB Postgres Distributed cluster without Raft consensus working might be as
follows:

- The replication of PGD data changes might still work correctly.
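
For a quick cluster-wide health check of Raft itself, PGD's SQL monitoring interface provides a companion function to `bdr.monitor_group_versions()`; the short sketch below assumes `bdr.monitor_group_raft()` is available in your PGD release.

```sql
-- Report Raft consensus status for the node group; run from any node.
-- Assumes bdr.monitor_group_raft() is present in this PGD release.
SELECT * FROM bdr.monitor_group_raft();
```
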
@@ -3,12 +3,12 @@ title: Joining a heterogeneous cluster
---


A PGD 4.0 node can join a EDB Postgres Distributed cluster running 3.7.x at a specific
A PGD 4.0 node can join an EDB Postgres Distributed cluster running 3.7.x at a specific
minimum maintenance release (such as 3.7.6) or a mix of 3.7 and 4.0 nodes.
This procedure is useful when you want to upgrade not just the PGD
major version but also the underlying PostgreSQL major
version. You can achieve this by joining a 3.7 node running on
PostgreSQL 12 or 13 to a EDB Postgres Distributed cluster running 3.6.x on
PostgreSQL 12 or 13 to an EDB Postgres Distributed cluster running 3.6.x on
PostgreSQL 11. The new node can also
run on the same PostgreSQL major release as all of the nodes in the
existing cluster.
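
For orientation, joining the new node to the existing group reduces to registering the local node and then joining it over a DSN to any existing node. The following is a hedged sketch only, with placeholder node names and connection strings; follow the full upgrade procedure in the docs rather than this fragment.

```sql
-- Hedged sketch, run on the new PGD 4.0 node; names and DSNs are placeholders.
SELECT bdr.create_node('node4-pg13', 'host=node4 dbname=bdrdb');
-- Join via any node of the existing cluster.
SELECT bdr.join_node_group('host=node1 dbname=bdrdb', 'mygroup');
```
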
12 changes: 6 additions & 6 deletions product_docs/docs/pgd/5/testingandtuning.mdx
@@ -7,7 +7,7 @@ description: Learn how to test and tune EDB Postgres Distributed clusters.
You can test PGD applications using the following approaches:

- [Trusted Postgres Architect](#trusted-postgres-architect)
- [pgd_bench with CAMO/Failover options](#pgd_bench)
- [pgd_bench with CAMO/failover options](#pgd_bench)


### Trusted Postgres Architect
@@ -37,7 +37,7 @@ extended in PGD 5.0 in the form of a new application: pgd_bench.
directory. The utility is based on the PostgreSQL pgbench tool but
supports benchmarking CAMO transactions and PGD-specific workloads.

Functionality of pgd_bench is a superset of those of pgbench but
Functionality of pgd_bench is a superset of pgbench functionality but
requires the BDR extension to be installed to work properly.

Key differences include:
@@ -46,7 +46,7 @@
pgbench scenario to prevent global lock timeouts in certain cases.
- `VACUUM` command in the standard scenario is executed on all nodes.
- pgd_bench releases are tied to the releases of the BDR extension
and are built against the corresponding Postgres distribution. This is
and are built against the corresponding Postgres distribution. This information is
reflected in the output of the `--version` flag.

The current version allows you to run failover tests while using CAMO or
Expand All @@ -60,7 +60,7 @@ mode in which pgbench should run (default: regular)
```

- Use `-m camo` or `-m failover` to specify the mode for pgd_bench.
You can use The `-m failover` specification to test failover in
You can use the `-m failover` specification to test failover in
regular PGD deployments.

```
@@ -89,7 +89,7 @@ to get the status of in-flight transactions. Aborted and in-flight transactions
are retried in CAMO mode.

In failover mode, if you specify `--retry`, then in-flight transactions are
retried. In this scenario there's no way to find the status of in-flight
retried. In this scenario, there's no way to find the status of in-flight
transactions.

### Notes on pgd_bench usage
@@ -126,7 +126,7 @@ Complex applications require some thought to maintain scalability.

If you think you're having performance problems, develop performance tests using
the benchmarking tools. pgd_bench allows you to write custom test scripts specific
to your use case so you can understand the overheads of your SQL and measure the
to your use case so you can understand the overhead of your SQL and measure the
impact of concurrent execution.

If PGD is running slow, then we suggest the following:
10 changes: 5 additions & 5 deletions product_docs/docs/pgd/5/transaction-streaming.mdx
@@ -31,7 +31,7 @@ limitations:
- If the transaction aborts, the work (changes received by each subscriber
and the associated storage I/O) is wasted.

However, starting with version 3.7, PGD supports parallel apply, enabling multiple writer
However, starting with version 3.7, PGD supports Parallel Apply, enabling multiple writer
processes on each subscriber. This capability is leveraged to provide the following enhancements:

- Decoded transactions can be streamed directly to a writer on the subscriber.
@@ -41,7 +41,7 @@ processes on each subscriber. This capability is leveraged to provide the follow

### Caveats

- You must enable parallel apply.
- You must enable Parallel Apply.
- Workloads consisting of many small and conflicting transactions can lead to
frequent deadlocks between writers.

@@ -108,16 +108,16 @@ streaming, the subscriber requests transaction streaming.
If the publisher can provide transaction streaming, it
streams transactions whenever the transaction size exceeds the threshold set in
`logical_decoding_work_mem`. The publisher usually has no control over whether
the transactions is streamed to a file or to a writer. Except for some
the transactions are streamed to a file or to a writer. Except for some
situations (such as COPY), it might hint for the subscriber to stream the
transaction to a writer (if possible).

The subscriber can stream transactions received from the publisher to
either a writer or a file. The decision is based on several factors:

- If parallel apply is off (`num_writers = 1`), then it's streamed to a file.
- If Parallel Apply is off (`num_writers = 1`), then it's streamed to a file.
(writer 0 is always reserved for non-streamed transactions.)
- If parallel apply is on but all writers are already busy handling streamed
- If Parallel Apply is on but all writers are already busy handling streamed
transactions, then the new transaction is streamed to a file. See
[Monitoring PGD writers](monitoring/sql#monitoring-pgd-writers) to check PGD
writer status.
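
To see or adjust the streaming threshold mentioned above on a given node, you can work with the standard PostgreSQL setting directly; the value below is arbitrary and for illustration only.

```sql
-- The publisher streams a transaction once its decoded size exceeds this setting.
SHOW logical_decoding_work_mem;
-- Illustrative only: raise the threshold for the current session so that
-- smaller transactions are applied without being streamed.
SET logical_decoding_work_mem = '256MB';
```
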
8 changes: 4 additions & 4 deletions product_docs/docs/pgd/5/twophase.mdx
@@ -8,7 +8,7 @@ redirects:
---

!!!Note
Two-phase commit is not available with Group Commit or CAMO. See [Durability limitations](durability/limitations).
Two-phase commit isn't available with Group Commit or CAMO. See [Durability limitations](durability/limitations).
!!!

An application can explicitly opt to use two-phase commit with PGD. See
@@ -19,14 +19,14 @@ software components:

- An application program (AP) that defines transaction boundaries and specifies
actions that constitute a transaction
- Resource managers (RMs, such as databases or file-access systems) that provide
- Resource managers (RMs), such as databases or file-access systems, that provide
access to shared resources
- A separate component called a transaction manager (TM) that assigns identifiers
to transactions, monitors their progress, and takes responsibility for
transaction completion and for failure recovery

PGD supports explicit external 2PC using the `PREPARE TRANSACTION` and
`COMMIT PREPARED`/`ROLLBACK PREPARED` commands. Externally, a EDB Postgres Distributed cluster
`COMMIT PREPARED`/`ROLLBACK PREPARED` commands. Externally, an EDB Postgres Distributed cluster
appears to be a single resource manager to the transaction manager for a
single session.
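
As a reminder of the command flow this refers to, a minimal explicit two-phase commit exchange using those standard PostgreSQL commands looks like the following sketch; the table and the transaction identifier are placeholders.

```sql
-- Minimal 2PC flow; requires max_prepared_transactions > 0 on the server.
BEGIN;
INSERT INTO accounts (id, balance) VALUES (1, 100);  -- hypothetical table
PREPARE TRANSACTION 'tm_txn_0001';
-- The external transaction manager later resolves the prepared transaction:
COMMIT PREPARED 'tm_txn_0001';
-- or, on failure: ROLLBACK PREPARED 'tm_txn_0001';
```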

@@ -44,7 +44,7 @@ or the global commit scope. Future releases might enable this combination.
## Use

Two-phase commits with a local commit scope work exactly like standard
PostgreSQL. Use the local commit scope and disable CAMO.
PostgreSQL. Use the local commit scope and disable CAMO:

```sql
BEGIN;

2 comments on commit 9bac6fc

@github-actions

@github-actions

🎉 Published on https://edb-docs.netlify.app as production
🚀 Deployed on https://66d04d306bfc17e2bf7d60f8--edb-docs.netlify.app
