Aurora Global Database

Description

Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases.

Amazon Aurora is up to five times faster than standard MySQL databases and three times faster than standard PostgreSQL databases. It provides the security, availability, and reliability of commercial databases at 1/10th the cost. Amazon Aurora is fully managed by Amazon RDS, which automates time-consuming administration tasks like hardware provisioning, database setup, patching, and backups.

Amazon Aurora features a distributed, fault-tolerant, self-healing storage system that auto-scales up to 128 TB per database instance. It delivers high performance and availability with up to 15 low-latency read replicas, point-in-time recovery, continuous backup to Amazon S3, and replication of six copies of your data across three Availability Zones.

Amazon Aurora global databases span multiple AWS Regions, enabling low-latency global reads and providing fast recovery from the rare outage that might affect an entire AWS Region. An Aurora global database has a primary DB cluster in one Region and up to five secondary DB clusters in different Regions.

Lab Schema

IaC

Deployment scenarios

The following lab can be deployed manually using the Step-by-Step solution below or automatically using Terraform. If you choose the IaC path, clone the code from the repo on the right, deploy it with Terraform, and jump directly to the "Test Area - failover" section below.
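A minimal sketch of the IaC path is shown below; the repository URL and directory name are placeholders (use the repo linked on the right), and the actual Terraform variables depend on that repo's code.

```bash
# Hypothetical commands for the IaC path; replace the placeholders with
# the repository linked on the right and its directory layout.
git clone <repo-url>
cd <repo-directory>

terraform init      # download the AWS provider and any modules
terraform plan      # review the resources that will be created
terraform apply     # deploy the Aurora clusters, global database, and test clients
```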

Config - first cluster

1. Create Database #1

Using the AWS Console, create a database. Select:

  • Aurora PostgreSQL-Compatible Edition
  • Provisioned capacity
  • the latest DB engine version that supports the global database feature (see the CLI sketch after this list)
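If you prefer the CLI, one hedged way to list Aurora PostgreSQL engine versions that can be used in a global database is to filter on the SupportsGlobalDatabases field returned by the RDS API:

```bash
# List Aurora PostgreSQL engine versions that support the global database feature.
aws rds describe-db-engine-versions \
  --engine aurora-postgresql \
  --query 'DBEngineVersions[?SupportsGlobalDatabases==`true`].EngineVersion' \
  --output text
```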

2. Create Database #2

Using the AWS Console, provide additional parameters, such as:

  • Production as template type
  • DB cluster identifier
  • DB password (or generate a random one)
  • Memory-optimized DB instance class

3. Create Database #3

Finally, provide details about Multi-AZ deployment as well as networking (VPC, subnets, and security groups).
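For reference, a rough CLI equivalent of steps 1-3 might look like the sketch below; all identifiers, the engine version, the instance class, and the networking parameters are placeholders to adapt to your own account.

```bash
# Create the Regional Aurora PostgreSQL cluster (placeholder names and values).
aws rds create-db-cluster \
  --db-cluster-identifier aurora-lab-cluster-1 \
  --engine aurora-postgresql \
  --engine-version 15.4 \
  --master-username postgres \
  --master-user-password '<strong-password>' \
  --db-subnet-group-name <db-subnet-group> \
  --vpc-security-group-ids <sg-id>

# Add a memory-optimized instance (node) to the cluster.
aws rds create-db-instance \
  --db-instance-identifier aurora-lab-cluster-1-node-1 \
  --db-instance-class db.r6g.large \
  --engine aurora-postgresql \
  --db-cluster-identifier aurora-lab-cluster-1
```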

4. Create cluster nodes

Wait for the Aurora Regional cluster to be created. If you selected Multi-AZ deployment, two nodes (a writer and a reader) will be deployed.

5. Create cluster

The newly created cluster is built on top of two nodes, each deployed in a different Availability Zone.
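You can verify the node roles from the CLI as well; the cluster identifier below is a placeholder:

```bash
# Show the cluster members and which node is the writer.
aws rds describe-db-clusters \
  --db-cluster-identifier aurora-lab-cluster-1 \
  --query 'DBClusters[0].DBClusterMembers[].[DBInstanceIdentifier,IsClusterWriter]' \
  --output table
```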

Config - clients

6. Deploy test EC2 instances

Deploy test EC2 instances in each Region and install the PostgreSQL client. (If you deployed the solution using the IaC code, you can use a dedicated Launch Template that utilizes Spot Instances.)
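The exact client package name depends on the AMI you chose; a few common variants are sketched below.

```bash
# Amazon Linux 2023
sudo dnf install -y postgresql15

# Amazon Linux 2
sudo amazon-linux-extras enable postgresql14
sudo yum install -y postgresql

# Ubuntu / Debian
sudo apt-get update && sudo apt-get install -y postgresql-client
```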

7. (Optional) Confirm Spot Instance requests

If the client instances have been requested as Spot Instances, confirm that the request status is fulfilled.
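A quick CLI check (run in each Region where clients were deployed):

```bash
# List Spot Instance requests and their fulfillment status in the current Region.
aws ec2 describe-spot-instance-requests \
  --query 'SpotInstanceRequests[].[SpotInstanceRequestId,State,Status.Code]' \
  --output table
```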

8. Cluster Endpoints

Using the AWS Console, record the cluster writer and reader endpoint FQDNs.
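The same values can be pulled via the CLI; the cluster identifier is a placeholder:

```bash
# Print the writer (Endpoint) and reader (ReaderEndpoint) FQDNs of the cluster.
aws rds describe-db-clusters \
  --db-cluster-identifier aurora-lab-cluster-1 \
  --query 'DBClusters[0].[Endpoint,ReaderEndpoint]' \
  --output text
```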

9. Test Endpoints

Using the client, confirm that the cluster is reachable via both endpoints and that only the writer endpoint accepts write operations.
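A minimal test with psql might look like this; the endpoints, user, database, and table name are placeholders:

```bash
export PGPASSWORD='<db-password>'
WRITER=<writer-endpoint-fqdn>
READER=<reader-endpoint-fqdn>

# Writes succeed only against the writer endpoint.
psql -h "$WRITER" -U postgres -d postgres \
  -c "CREATE TABLE IF NOT EXISTS lab_test (id int, note text);"
psql -h "$WRITER" -U postgres -d postgres \
  -c "INSERT INTO lab_test VALUES (1, 'written via writer endpoint');"

# Reads succeed against both endpoints...
psql -h "$READER" -U postgres -d postgres -c "SELECT * FROM lab_test;"

# ...but a write against the reader endpoint is rejected
# (expected error: cannot execute INSERT in a read-only transaction).
psql -h "$READER" -U postgres -d postgres \
  -c "INSERT INTO lab_test VALUES (2, 'this should fail');"
```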

Config - global database

10. Create global database

Select the cluster created in steps 1-5 and execute "Add AWS Region". As a result, the following actions will be performed (a CLI equivalent is sketched after this list):

  • the global database will be created
  • the existing cluster will be added to the global database as the Primary Cluster
  • a new cluster will be created in the selected Region
  • the new cluster will be added to the global database as the Secondary Cluster
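A rough CLI equivalent of the "Add AWS Region" action is sketched below; the Regions, account ID, identifiers, and engine version are placeholders and must match your Primary Cluster.

```bash
# Wrap the existing Regional cluster into a new global database.
aws rds create-global-cluster \
  --global-cluster-identifier aurora-lab-global \
  --source-db-cluster-identifier arn:aws:rds:<primary-region>:<account-id>:cluster:aurora-lab-cluster-1

# Create the Secondary Cluster in another Region and attach it to the global database.
aws rds create-db-cluster \
  --region <secondary-region> \
  --db-cluster-identifier aurora-lab-cluster-2 \
  --engine aurora-postgresql \
  --engine-version 15.4 \
  --global-cluster-identifier aurora-lab-global

# Add a reader node to the Secondary Cluster.
aws rds create-db-instance \
  --region <secondary-region> \
  --db-instance-identifier aurora-lab-cluster-2-node-1 \
  --db-instance-class db.r6g.large \
  --engine aurora-postgresql \
  --db-cluster-identifier aurora-lab-cluster-2
```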

11. Global database parameters

Provide the global database identifier and the initial parameters for the second cluster.

12. Secondary Cluster Deployment

Wait for the Secondary Cluster to be created. Confirm that its deployment Region is different from the original one (where the Primary Cluster has been created).

13. Global database

The global database is now online and configured with two clusters in two separate Regions. Bear in mind that only one cluster is in read-write mode (only one writer is active).
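You can confirm the topology and which cluster currently holds the writer role from the CLI; the global database identifier is a placeholder:

```bash
# List global database members and show which cluster is currently the writer.
aws rds describe-global-clusters \
  --global-cluster-identifier aurora-lab-global \
  --query 'GlobalClusters[0].GlobalClusterMembers[].[DBClusterArn,IsWriter]' \
  --output table
```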

Test Area - failover

14. Global database failover

Using the AWS Console, perform a controlled failover.
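If you prefer the CLI, the controlled (managed planned) failover can be triggered with the command below; the global database identifier and the target cluster ARN (which must point at a Secondary Cluster) are placeholders.

```bash
# Trigger a controlled failover, promoting the Secondary Cluster to writer.
aws rds failover-global-cluster \
  --global-cluster-identifier aurora-lab-global \
  --target-db-cluster-identifier arn:aws:rds:<secondary-region>:<account-id>:cluster:aurora-lab-cluster-2
```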

15. Promote Secondary Cluster to be Primary #1

Choose the cluster that will be promoted to Primary Cluster.

16. Promote Secondary Cluster to be Primary #2

Wait for the failover process to complete.

17. Promote Secondary Cluster to be Primary #3

Confirm that the failover process has finished successfully.
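One way to verify the new topology from the CLI (placeholders as before): after the failover, the IsWriter flag should be true for the newly promoted cluster, and a write against its writer endpoint should succeed.

```bash
# The IsWriter flag should now be true for the promoted (formerly Secondary) cluster.
aws rds describe-global-clusters \
  --global-cluster-identifier aurora-lab-global \
  --query 'GlobalClusters[0].GlobalClusterMembers[].[DBClusterArn,IsWriter]' \
  --output table

# Optionally, confirm that a write now succeeds against the new writer endpoint.
psql -h <new-writer-endpoint-fqdn> -U postgres -d postgres \
  -c "INSERT INTO lab_test VALUES (3, 'written after failover');"
```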