Patterns to ace the AWS Solutions Architect Pro exam

raji krishnamoorthy · Published in Towards AWS · 8 min read · Mar 3, 2024


Exploring Reliability

The AWS Solutions Architect Professional certification stands as a testament to one’s expertise and commitment to mastering Amazon Web Services (AWS). It not only validates an individual’s ability to design and deploy dynamically scalable, highly available, fault-tolerant, and reliable applications on AWS, but also tests their knowledge across a broad array of AWS services.

I am not going to talk about the resources needed for exam preparation; there are enough available in the public domain. While preparing, I practiced a few sample exams and curated all the questions along with the right answers.

I observed that the questions in the AWS Solutions Architect Professional certification exam can be intricately connected to the foundational principles outlined in the AWS Well-Architected Framework.

Specifically, the framework’s six pillars serve as a compass guiding the design and operation of efficient, secure, and resilient cloud architectures.

I took the exam in February 2024 and cleared it on my first attempt with a score of 849 after four months of preparation; notably, I did not come across any question directly related to Sustainability. This accomplishment has given me a unique perspective on the exam’s structure and the kind of questions asked.

In this blog series, I will take you through a detailed exploration of typical scenarios encountered in the certification exam, categorized under each pillar of the AWS Well-Architected Framework.

This blog delves into the Reliability pillar, discussing scenario patterns commonly encountered in practice exams as well as those I experienced during my certification exam.

#1 : Data delivery via REST APIs.
A company offers data via a REST API, utilizing Amazon API Gateway, AWS Lambda, Route 53, and DynamoDB. They seek a solution to enable regional failover for their API, ensuring uninterrupted access across different AWS Regions for their customers.

To ensure your API remains available even in the face of regional AWS outages, you must be familiar with a few critical AWS services and concepts. Firstly, understand Amazon API Gateway for setting up and managing your API, and AWS Lambda for handling backend operations without managing servers. You should also know how to utilize Amazon DynamoDB and convert tables to global tables for cross-region data replication. Crucially, learn to use Amazon Route 53 for DNS management, specifically how to set up failover records to switch traffic between regions based on health checks automatically. This setup requires deploying your API Gateway and Lambda functions in a secondary region and configuring Route 53 to monitor the health of your primary API endpoint.
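
To see how the failover piece fits together, here is a minimal boto3 sketch, assuming a hypothetical hosted zone ID, health-check path, and regional API Gateway custom domain names (api-primary.example.com and api-secondary.example.com).

```python
# Sketch: Route 53 failover routing for a regional API.
# All IDs and domain names below are placeholders.
import boto3

route53 = boto3.client("route53")

# Health check against the primary region's API endpoint (hypothetical domain).
health_check_id = route53.create_health_check(
    CallerReference="api-primary-hc-001",
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "api-primary.example.com",
        "ResourcePath": "/health",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)["HealthCheck"]["Id"]

def failover_record(set_id, role, target_dns, hc_id=None):
    record = {
        "Name": "api.example.com",
        "Type": "CNAME",
        "SetIdentifier": set_id,
        "Failover": role,            # PRIMARY or SECONDARY
        "TTL": 60,
        "ResourceRecords": [{"Value": target_dns}],
    }
    if hc_id:
        record["HealthCheckId"] = hc_id
    return {"Action": "UPSERT", "ResourceRecordSet": record}

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",   # hypothetical hosted zone ID
    ChangeBatch={"Changes": [
        failover_record("primary", "PRIMARY", "api-primary.example.com", health_check_id),
        failover_record("secondary", "SECONDARY", "api-secondary.example.com"),
    ]},
)
```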

#2 : Two-Tier Web Application with High Availability
A business plans to migrate its web application from a local data center to AWS to support user growth, using Amazon Aurora PostgreSQL, EC2 Auto Scaling, and Elastic Load Balancing. The goal is to find a solution that ensures scalable, uninterrupted service for an expanding user base.

When a migration question mentions scalability and uninterrupted service, look for keywords such as “expanding user base”, as in this case. First, understand Amazon Aurora PostgreSQL, focusing on enabling Auto Scaling for Aurora Replicas to automatically adjust database capacity according to workload changes. Familiarize yourself with EC2 Auto Scaling to ensure your application can handle increases in user traffic by automatically adjusting the number of EC2 instances. Learn how to deploy and configure an Application Load Balancer (ALB), including setting up round-robin routing for distributing incoming application traffic evenly across multiple targets, such as EC2 instances. Additionally, grasp the concept of sticky sessions in ALB to maintain user session state across requests.
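
As a rough illustration, here is a boto3 sketch of the two automation pieces, assuming a hypothetical Aurora cluster named my-aurora-cluster and a placeholder ALB target group ARN.

```python
# Sketch: Aurora replica auto scaling via Application Auto Scaling,
# plus ALB sticky sessions. All identifiers are placeholders.
import boto3

autoscaling = boto3.client("application-autoscaling")

# Let Aurora add or remove replicas based on average reader CPU utilization.
autoscaling.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:my-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=5,
)
autoscaling.put_scaling_policy(
    PolicyName="aurora-replica-cpu-tracking",
    ServiceNamespace="rds",
    ResourceId="cluster:my-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)

# Sticky sessions on the ALB target group so a user keeps hitting the same instance.
elbv2 = boto3.client("elbv2")
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/0123456789abcdef",
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
    ],
)
```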

#3 : Multi-Tier Web Application with a Disaster Recovery Plan
A company’s multi-tier web application, utilizing Amazon EC2, ALB, Auto Scaling, and RDS with a read replica, seeks to achieve a Recovery Time Objective (RTO) in less than 15 minutes without an active-active budget. They require a strategy for automatic failover to a backup AWS Region.

The keyword here is “without an active-active budget”, which means you can lean on tools like serverless services and Infrastructure as Code (IaC). Check whether any of the answer options mention these tools. For the solution, you must understand how to use Amazon RDS with read replicas for database scalability and how to promote a read replica in a backup Region using AWS Lambda. Knowledge of EC2 Auto Scaling to dynamically manage EC2 instances and Elastic Load Balancing (ELB) to distribute traffic is crucial. You should also be comfortable configuring Amazon Route 53 for DNS management, including setting up health checks and failover routing policies. Additionally, know how to send notifications with Amazon Simple Notification Service (SNS) based on health check status.
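
Here is a minimal sketch of the failover Lambda, assuming a hypothetical replica identifier and backup Region; in practice it would be invoked from a Route 53 health-check alarm or an SNS notification.

```python
# Sketch: a Lambda handler that promotes the cross-Region read replica
# when the primary Region fails. Region and identifier are placeholders.
import boto3

DR_REGION = "us-west-2"          # assumed backup region
REPLICA_ID = "app-db-replica"    # hypothetical read replica identifier

def lambda_handler(event, context):
    rds = boto3.client("rds", region_name=DR_REGION)
    # Promote the read replica to a standalone, writable DB instance.
    rds.promote_read_replica(DBInstanceIdentifier=REPLICA_ID)
    return {"status": "promotion started", "instance": REPLICA_ID}
```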

#4 : Burdening a single EC2 instance supported by caching
You will find scenarios where an application is reliant on a single EC2 instance with data storage managed by an ElastiCache for Redis single-node cluster and an RDS instance. You will be asked to design a solution that enables automatic recovery from any failure, addressing high availability and resilience with the least possible downtime.

Understand how to leverage Elastic Load Balancer (ELB) to distribute incoming traffic across multiple Amazon EC2 instances, ensuring no single point of failure. You must know how to configure an Auto Scaling group to maintain application availability, automatically adjusting the number of EC2 instances based on demand. Familiarize yourself with Amazon RDS Multi-AZ deployments for high availability in your database layer, allowing automatic failover to a standby instance in case of an outage. Lastly, grasp ElastiCache for Redis replication groups with Multi-AZ enabled for resilience in your caching layer.
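
For the caching and database layers, a hedged boto3 sketch might look like the following, with placeholder identifiers and node types.

```python
# Sketch: Multi-AZ resilience for the cache and database tiers.
# Identifiers and instance types are placeholders.
import boto3

# Redis replication group with automatic failover across AZs.
elasticache = boto3.client("elasticache")
elasticache.create_replication_group(
    ReplicationGroupId="app-cache",
    ReplicationGroupDescription="Cache layer with automatic failover",
    Engine="redis",
    CacheNodeType="cache.t3.small",
    NumCacheClusters=2,               # one primary + one replica in different AZs
    AutomaticFailoverEnabled=True,
    MultiAZEnabled=True,
)

# Convert the existing RDS instance to Multi-AZ for automatic standby failover.
rds = boto3.client("rds")
rds.modify_db_instance(
    DBInstanceIdentifier="app-db",    # placeholder instance identifier
    MultiAZ=True,
    ApplyImmediately=True,
)
```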

#5 : Accessing services using DNS
Design a new service that will be accessible over TCP on a static port. The design focuses on ensuring high availability and redundancy across multiple Availability Zones, as well as making the service publicly accessible via a DNS name. It is essential that the service is reachable through a fixed address for public access.

Knowledge of Elastic IP addresses and their role in providing a static point of access across Availability Zones is necessary for these kinds of scenarios. You should have hands-on experience with Network Load Balancers (NLB) for managing traffic distribution across the EC2 instances and exposing the designated TCP port securely. Additionally, be familiar with setting up and managing target groups and registering EC2 instances with the NLB. Lastly, a good grasp of Route 53 for creating alias (A) record sets to associate the NLB’s DNS name with a friendly domain name is required for making the service publicly accessible.
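
A boto3 sketch of the NLB setup follows, assuming placeholder subnet, Elastic IP allocation, and VPC IDs; a Route 53 alias record pointing at the NLB’s DNS name would complete the picture.

```python
# Sketch: internet-facing NLB with one Elastic IP per AZ and a TCP listener.
# Subnet, allocation, and VPC IDs are placeholders.
import boto3

elbv2 = boto3.client("elbv2")

nlb = elbv2.create_load_balancer(
    Name="tcp-service-nlb",
    Type="network",
    Scheme="internet-facing",
    SubnetMappings=[
        {"SubnetId": "subnet-0aaa111", "AllocationId": "eipalloc-0111aaa"},
        {"SubnetId": "subnet-0bbb222", "AllocationId": "eipalloc-0222bbb"},
    ],
)["LoadBalancers"][0]

tg = elbv2.create_target_group(
    Name="tcp-service-targets",
    Protocol="TCP",
    Port=8443,                      # the service's static TCP port (example)
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)["TargetGroups"][0]

elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancerArn"],
    Protocol="TCP",
    Port=8443,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
```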

#6 : Data Lake Hosting in Amazon S3 with SFTP
An organization manages a data lake on AWS, receiving daily financial records via SFTP from various sources. Their SFTP server, hosted on an EC2 instance within a VPC’s public subnet, transfers these files to the data lake with cron jobs. You need to enhance this SFTP setup’s performance and scalability within AWS’s environment.

Firstly, understanding AWS Transfer for SFTP is necessary to appreciate the benefits it offers when migrating an existing SFTP server to AWS. Get hands-on with how Amazon Route 53 is configured to point existing DNS records at the new server endpoint; a good understanding of DNS management through Route 53 is needed.
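
A minimal boto3 sketch of replacing the EC2-hosted server with AWS Transfer for SFTP, assuming a placeholder IAM role and S3 home directory.

```python
# Sketch: an SFTP-enabled Transfer Family server backed by S3 with a
# service-managed user. Role ARN and bucket path are placeholders.
import boto3

transfer = boto3.client("transfer")

server = transfer.create_server(
    Protocols=["SFTP"],
    IdentityProviderType="SERVICE_MANAGED",
    Domain="S3",
)

transfer.create_user(
    ServerId=server["ServerId"],
    UserName="partner-feed",
    Role="arn:aws:iam::111122223333:role/transfer-s3-access",   # placeholder role
    HomeDirectory="/data-lake-bucket/incoming",                 # placeholder bucket path
)
```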

#7 : Migrating Lambda and DB in an Organization
An organization needs to move Lambda functions and an Amazon Aurora database from one account to another within an AWS Organizations setup. The migration strategy must ensure uninterrupted operations of critical data processing applications.

Firstly, understand how to manage and deploy AWS Lambda functions. Knowledge of Amazon Aurora operations, particularly cloning databases and managing database clusters, is needed, along with familiarity with AWS Resource Access Manager (AWS RAM) for sharing resources across accounts. You also need to understand how AWS Organizations helps you navigate a multi-account environment effectively.
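
One possible sequence, sketched with boto3 and placeholder account IDs and ARNs: share the cluster through AWS RAM, then clone it copy-on-write from the target account.

```python
# Sketch: cross-account Aurora cloning via AWS RAM.
# Account IDs, ARNs, and identifiers are placeholders.
import boto3

# In the source account: share the Aurora cluster with the target account.
ram = boto3.client("ram")
ram.create_resource_share(
    name="aurora-cluster-share",
    resourceArns=["arn:aws:rds:us-east-1:111122223333:cluster:app-cluster"],
    principals=["444455556666"],        # target account in the same organization
    allowExternalPrincipals=False,
)

# In the target account: clone the shared cluster (copy-on-write, near-instant).
rds = boto3.client("rds")
rds.restore_db_cluster_to_point_in_time(
    DBClusterIdentifier="app-cluster-clone",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:111122223333:cluster:app-cluster",
    RestoreType="copy-on-write",
    UseLatestRestorableTime=True,
)
```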

#8 : Lambda for Asynchronous Processing
An organization must adapt an AWS-hosted asynchronous HTTP application for regional failover. Initially, the setup involves an AWS Lambda function triggered by an Amazon API Gateway endpoint, both located in one AWS region. To achieve cross-regional resilience, you will be asked to rearchitect this application, ensuring it remains operational even if the primary region becomes unavailable.

The keyword is “primary region becomes unavailable”, which means you cannot afford any outage. In addition to understanding AWS Lambda and Amazon API Gateway, you should know how to replicate the setup in an additional Region apart from the primary one. Familiarity with Amazon Route 53 and its failover routing policy is essential for directing traffic between the primary and secondary API Gateway endpoints based on the health of the application in each Region. Understanding of deployment strategies, regional data residency considerations, and application latency implications will be needed to handle this scenario.
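
A hedged sketch of the secondary-Region half of this setup, assuming a hypothetical regional ACM certificate, API ID, and stage; the Route 53 failover records from scenario #1 would then point at the two regional custom domain names.

```python
# Sketch: exposing the same custom domain from a second region via a regional
# API Gateway domain name. Certificate ARN, API ID, and stage are placeholders.
import boto3

apigw = boto3.client("apigateway", region_name="eu-west-1")   # assumed secondary region

apigw.create_domain_name(
    domainName="api.example.com",
    regionalCertificateArn="arn:aws:acm:eu-west-1:111122223333:certificate/abc12345",
    endpointConfiguration={"types": ["REGIONAL"]},
)

apigw.create_base_path_mapping(
    domainName="api.example.com",
    restApiId="a1b2c3d4e5",          # the API deployed in the secondary region
    stage="prod",
)
```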

#9: Near zero downtime
An application relying on an Amazon RDS for MySQL database encounters a significant outage despite the DB instance being deployed in Multi-AZ mode. The business cannot tolerate any outage and mandates reducing downtime to near zero. You are asked to optimize the existing architecture.

Learn how to configure RDS Proxy to reduce database workload and improve failover times. Build knowledge of migrating from Amazon RDS to Amazon Aurora MySQL, focusing on Aurora’s high availability, durability, and auto-scaling capabilities. Understand the migration process, and learn to create and manage Amazon Aurora Replicas for read scaling and failover purposes.
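
Here is a boto3 sketch of putting RDS Proxy in front of the existing instance, assuming a placeholder Secrets Manager secret, IAM role, and subnets.

```python
# Sketch: RDS Proxy in front of the MySQL database to pool connections and
# speed up failover. Secret, role, and subnet IDs are placeholders.
import boto3

rds = boto3.client("rds")

rds.create_db_proxy(
    DBProxyName="app-db-proxy",
    EngineFamily="MYSQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:app-db-credentials",
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-secrets-access",
    VpcSubnetIds=["subnet-0aaa111", "subnet-0bbb222"],
)

# Point the proxy at the existing RDS for MySQL instance.
rds.register_db_proxy_targets(
    DBProxyName="app-db-proxy",
    DBInstanceIdentifiers=["app-db"],
)
```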

#10 : Improve Resiliency without architecture change
A web application is built on Amazon EC2 instances and an RDS for MySQL database, with DNS managed via Amazon Route 53. You will be asked to enhance application resiliency without major architectural changes, the goal is to achieve given Recovery Point Objective (RPO) and Recovery Time Objective (RTO) targets for the architecture, while ensuring minimal latency post-failover.

Do a hands-on lab on AWS Elastic Disaster Recovery. Understand the benefits of Aurora MySQL, especially its high availability, durability, and scalability features. Know how to create cross-Region read replicas for the RDS DB instance to ensure data durability and availability across geographical locations. Understand the setup of an ALB in a secondary region to distribute incoming application traffic efficiently across multiple EC2 instances. Learn to configure AWS Global Accelerator to direct traffic to multiple AWS Regions and be familiar with how it associates with Application Load Balancer (ALB).
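
Below is a boto3 sketch of the cross-Region replica and the Global Accelerator wiring, with placeholder identifiers; note that the Global Accelerator API is served from us-west-2.

```python
# Sketch: cross-Region read replica plus Global Accelerator in front of the ALBs.
# Region choices, ARNs, and identifiers are placeholders.
import boto3

# Cross-Region read replica: the client runs in the DR region and references
# the source DB instance by ARN.
rds = boto3.client("rds", region_name="us-west-2")
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica-west",
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:111122223333:db:app-db",
    DBInstanceClass="db.r6g.large",
)

# Global Accelerator (a global service whose API lives in us-west-2) with the
# secondary-Region ALB registered in an endpoint group.
ga = boto3.client("globalaccelerator", region_name="us-west-2")
accel = ga.create_accelerator(Name="web-accelerator", Enabled=True)["Accelerator"]
listener = ga.create_listener(
    AcceleratorArn=accel["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)["Listener"]
ga.create_endpoint_group(
    ListenerArn=listener["ListenerArn"],
    EndpointGroupRegion="us-west-2",
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:us-west-2:111122223333:loadbalancer/app/dr-alb/0123456789abcdef"
    }],
)
```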

#11: Protecting Backup Vault from cyber threat
An organization operating multiple applications across numerous AWS accounts takes proactive measures against ransomware threats. Using AWS Organizations, they prioritize protecting backups from privileged-user credential compromises. The scenario asks you to identify the critical steps that ensure backup integrity and resilience against cyber threats.

Familiarity with AWS Backup is needed, particularly in setting up cross-account backups and configuring Backup vaults. A good understanding of AWS Organizations is also required, particularly of Service Control Policies (SCPs) used to enforce account-level permissions and prevent unauthorized modifications to Backup vaults.
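
A sketch of such an SCP (the policy text is illustrative, not exhaustive) created and attached with boto3, assuming a placeholder root/OU ID.

```python
# Sketch: an SCP that blocks destructive actions on Backup vaults, attached
# from the management account via AWS Organizations. TargetId is a placeholder.
import json
import boto3

scp_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyBackupVaultTampering",
        "Effect": "Deny",
        "Action": [
            "backup:DeleteBackupVault",
            "backup:DeleteRecoveryPoint",
            "backup:PutBackupVaultAccessPolicy",
            "backup:DeleteBackupVaultAccessPolicy",
        ],
        "Resource": "*",
    }],
}

org = boto3.client("organizations")
policy = org.create_policy(
    Content=json.dumps(scp_document),
    Description="Protect Backup vaults from privileged-user tampering",
    Name="protect-backup-vaults",
    Type="SERVICE_CONTROL_POLICY",
)["Policy"]["PolicySummary"]

org.attach_policy(PolicyId=policy["Id"], TargetId="r-examp")  # placeholder root or OU ID
```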

#12 : Modernizing to handle peak times
An organization faces performance challenges with their cloud-hosted application during peak usage times. The application architecture includes two Amazon EC2 instances managed by an Application Load Balancer, with a MySQL database on a separate EC2 instance experiencing high read load. Static content is served from frequently updated Amazon EBS volumes attached to the EC2 instances. You need to design a solution to enhance application reliability and manage increased demand during peak times.

In this scenario, both the application and database tiers are unable to withstand the load. Unless the question says something like “minimal architecture change”, go all out in modernizing the architecture. Here, you need to understand containerization principles and learn how to deploy applications using Amazon Elastic Container Service (ECS). Familiarity with creating and managing an Amazon Elastic File System (EFS) for scalable file storage, which can be mounted across multiple containers, would be good. Hands-on experience in setting up and integrating the ECS service with an Application Load Balancer (ALB) for efficient traffic distribution is needed. Finally, understanding Amazon Aurora MySQL Serverless v2 capabilities, and how one can leverage a reader DB instance for improved read performance, would be sufficient.
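
As a hedged illustration, here is a boto3 sketch of the container and database pieces, assuming placeholder image, file system, and cluster identifiers; Serverless v2 capacity applies to db.serverless instances added to the cluster.

```python
# Sketch: ECS task definition mounting EFS for the shared static content,
# plus an Aurora MySQL Serverless v2 cluster. All identifiers are placeholders.
import boto3

ecs = boto3.client("ecs")
ecs.register_task_definition(
    family="web-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    containerDefinitions=[{
        "name": "web",
        "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/web-app:latest",
        "portMappings": [{"containerPort": 80}],
        "mountPoints": [{"sourceVolume": "static-content",
                         "containerPath": "/usr/share/nginx/html"}],
    }],
    volumes=[{
        "name": "static-content",
        "efsVolumeConfiguration": {"fileSystemId": "fs-0123456789abcdef0"},
    }],
)

# Aurora MySQL Serverless v2: capacity scales between the configured ACU bounds.
rds = boto3.client("rds")
rds.create_db_cluster(
    DBClusterIdentifier="web-app-db",
    Engine="aurora-mysql",
    MasterUsername="admin",
    ManageMasterUserPassword=True,   # credentials managed in Secrets Manager
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 8},
)
# A reader instance of class db.serverless offloads read traffic from the writer.
rds.create_db_instance(
    DBInstanceIdentifier="web-app-db-reader",
    DBClusterIdentifier="web-app-db",
    Engine="aurora-mysql",
    DBInstanceClass="db.serverless",
)
```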

Stay tuned for my future blogs, where we will explore additional pillars and scenarios, further unboxing the complexities of AWS Solutions Architecture Professional certification.
