Patterns to ace the AWS Solutions Architect Pro exam | Operational Excellence

raji krishnamoorthy
7 min read · Mar 24, 2024

This is the fifth and last blog of my series Patterns to ace the AWS Solutions Architect Pro exam. Here we delve into questions around the Operational Excellence pillar of the AWS Well-Architected Framework, discussing scenario patterns commonly encountered in practice exams as well as those I experienced during my certification exam.

#1: Simplify microservices architecture

A business is transitioning its traditional web application to a microservices architecture on containers, managing separate versions for production and testing. The application faces fluctuating loads, with defined minimum and maximum thresholds. The goal is to develop a serverless solution that simplifies operations.

You should be familiar with Amazon Elastic Container Registry (ECR) and Amazon Elastic Container Service (ECS) for container orchestration. Knowledge of ECS on Fargate for serverless container execution, Service Auto Scaling to adjust resources based on load, and an Application Load Balancer (ALB) for traffic distribution is needed to answer this scenario correctly.
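
To make the scaling piece concrete, here is a minimal boto3 sketch of registering a Fargate-backed ECS service with Application Auto Scaling and a target-tracking policy. All cluster, service, and threshold values are hypothetical.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the ECS service (hypothetical names) as a scalable target
# with the minimum and maximum task counts defined by the business.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/prod-cluster/web-service",
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=10,
)

# Target-tracking policy: keep average CPU around 60% across tasks.
autoscaling.put_scaling_policy(
    PolicyName="web-service-cpu-tracking",
    ServiceNamespace="ecs",
    ResourceId="service/prod-cluster/web-service",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)
```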

#2: Enhance user experience during technical glitches

An ecommerce platform on AWS occasionally faces 502 errors after updates, caused by malformed HTTP headers. Although reloading the page resolves the error, a solution to improve the user experience is needed. The architect must implement a custom error page to replace the standard error message during this interim period, ensuring seamless customer interaction even when technical issues arise.

You should know how to configure custom error responses with Amazon CloudFront to direct users to custom pages during errors. Additionally, understanding DNS management within Amazon Route 53, including modifying DNS records to redirect traffic to a custom error page, is necessary; or simply work through this lab and you will be well prepared to handle this type of scenario.
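
For the CloudFront piece, a minimal boto3 sketch looks something like the following; the distribution ID and error page path are placeholders.

```python
import boto3

cloudfront = boto3.client("cloudfront")

# Fetch the current distribution config and its ETag (required for updates).
resp = cloudfront.get_distribution_config(Id="E1234EXAMPLE")
config = resp["DistributionConfig"]

# Serve a friendly static page for 502s instead of the default error message.
config["CustomErrorResponses"] = {
    "Quantity": 1,
    "Items": [
        {
            "ErrorCode": 502,
            "ResponsePagePath": "/errors/502.html",  # hypothetical page at the origin
            "ResponseCode": "200",
            "ErrorCachingMinTTL": 30,
        }
    ],
}

cloudfront.update_distribution(
    Id="E1234EXAMPLE",
    IfMatch=resp["ETag"],
    DistributionConfig=config,
)
```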

#3: Minimal operational overhead for an EBS storage setup

A company’s video upload platform, utilizing Amazon EBS for storage and custom software for video recognition and categorization, seeks to streamline operations. The current setup involves EC2 instances with Auto Scaling and variable traffic handling. The goal is to minimize operational overhead and third-party dependencies by leveraging AWS managed services for a more efficient architecture.

This scenario involves storage, a custom recognition task, and variable traffic. When you see "operational overhead" in the question, technology modernization is the answer in most scenarios: look for options that move from EBS to S3 and from EC2 to serverless compute. Additionally, be familiar with AWS managed services even if you haven't worked with them hands-on. Since the question talks about video recognition, understanding how to integrate and use the Amazon Rekognition API for video processing is required.
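
As a quick illustration of the Rekognition video API, here is a hedged boto3 sketch of asynchronous label detection on a video in S3; the bucket, key, SNS topic, and IAM role names are all hypothetical.

```python
import boto3

rekognition = boto3.client("rekognition")

# Kick off asynchronous label detection on a video stored in S3.
job = rekognition.start_label_detection(
    Video={"S3Object": {"Bucket": "upload-bucket", "Name": "videos/clip.mp4"}},
    NotificationChannel={
        "SNSTopicArn": "arn:aws:sns:us-east-1:111122223333:rekognition-jobs",
        "RoleArn": "arn:aws:iam::111122223333:role/RekognitionSNSRole",
    },
)

# Once the SNS notification arrives (e.g. in a Lambda function), fetch the labels.
labels = rekognition.get_label_detection(JobId=job["JobId"], SortBy="TIMESTAMP")
for detection in labels["Labels"]:
    print(detection["Timestamp"], detection["Label"]["Name"])
```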

#4: Enhance database credentials management

A company wants to transition an application’s database credentials management to a more secure approach. Previously, credentials were stored in an encrypted file on Amazon S3. They want to adopt strong, randomly generated passwords, managed by an AWS service, ensuring minimal operational overhead.

Anytime you see a requirement around modernizing credentials management or automated password rotation, you can opt for AWS Secrets Manager. You should be familiar with all its features and the benefits of integrating it with AWS Lambda.
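
A minimal sketch of that flow with boto3, assuming a hypothetical secret name and rotation Lambda ARN:

```python
import boto3

secrets = boto3.client("secretsmanager")

# Generate a strong random password and store it as a new secret.
password = secrets.get_random_password(PasswordLength=32, ExcludeCharacters='"@/\\')

secrets.create_secret(
    Name="prod/app/db-credentials",
    SecretString=password["RandomPassword"],
)

# Rotate automatically every 30 days using a rotation Lambda function.
secrets.rotate_secret(
    SecretId="prod/app/db-credentials",
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:rotate-db-secret",
    RotationRules={"AutomaticallyAfterDays": 30},
)
```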

#5: Simplify multi-domain redirects on AWS

A company has acquired multiple domain names for online marketing efforts and seeks an efficient way to redirect visitors to specific URLs based on the domain they access. These domains and their corresponding target URLs are listed in a JSON document, with DNS management handled through Amazon Route 53. The objective is to implement a service that seamlessly redirects both HTTP and HTTPS requests to the appropriate URLs with minimal operational effort, ensuring a streamlined visitor experience.

This type of scenario involving domain redirects calls for expertise across multiple AWS services. Familiarity with Amazon Route 53 for DNS management is crucial. Understanding how to create and configure an Application Load Balancer (ALB) with both HTTP and HTTPS listeners is essential for handling incoming traffic effectively. Knowledge of Amazon CloudFront and how to integrate it with Lambda@Edge functions, along with expertise in AWS Certificate Manager (ACM) for issuing SSL certificates, is also needed here.
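
One way the redirect piece often comes together is a small Lambda@Edge viewer-request function that looks up the requested host in the JSON mapping and returns a 301. A hedged sketch, with an entirely made-up mapping:

```python
# Mapping of acquired domains to target URLs, e.g. loaded from the JSON
# document bundled with the function (values here are hypothetical).
REDIRECTS = {
    "promo-example.com": "https://www.example.com/promo",
    "deals-example.com": "https://www.example.com/deals",
}

def handler(event, context):
    # Lambda@Edge viewer-request event: the request is under Records[0].cf.request.
    request = event["Records"][0]["cf"]["request"]
    host = request["headers"]["host"][0]["value"].lower()

    target = REDIRECTS.get(host)
    if target is None:
        # Unknown domain: let the request continue to the origin unchanged.
        return request

    # Return a 301 redirect straight from the edge location.
    return {
        "status": "301",
        "statusDescription": "Moved Permanently",
        "headers": {"location": [{"key": "Location", "value": target}]},
    }
```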

#6 : Multi-Region Resiliency for S3

To enhance resilience and availability, the company seeks to expand their infrastructure across multiple regions, having already set up an additional S3 bucket in another region. The challenge lies in identifying the most efficient approach to ensure seamless asset availability across these regions with minimal operational effort, maintaining optimal application performance and user experience.

If you understand Cross-Region Replication (CRR) well, you should be able to handle this scenario. What is needed here is an understanding of the replication configuration setup, including IAM roles and permissions for S3, along with familiarity with Amazon CloudFront origin groups, where the primary and secondary Region S3 buckets are added as origins.
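
Here is a minimal boto3 sketch of the CRR configuration, assuming hypothetical bucket names and IAM role, and that versioning is already enabled on both buckets:

```python
import boto3

s3 = boto3.client("s3")

# Replicate all new objects from the primary bucket to the secondary-Region bucket.
s3.put_bucket_replication(
    Bucket="assets-primary-us-east-1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-crr-role",
        "Rules": [
            {
                "ID": "replicate-all-assets",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {"Prefix": ""},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::assets-secondary-eu-west-1"},
            }
        ],
    },
)
```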

#7: Control inbound rules creation in security groups

A company utilizes AWS Organizations to manage 10 accounts, divided between Production (Prod) and Non-Production (NonProd) Organizational Units (OUs). AWS Config monitors configurations across these accounts, with an Amazon EventBridge rule alerting through Amazon SNS upon creation of overly permissive EC2 security group inbound rules. The goal is to prevent the creation of such rules in the NonProd OU with minimal operational effort, ensuring a robust security posture without complicating management processes.

One must be familiar with how to use Amazon EventBridge to detect and react to changes in the status of AWS Config events. You will then be able to answer scenarios that talk about multi-account management with AWS Organizations, AWS Config for compliance, and a solution that brings in proactive monitoring to avoid compliance drift. Read this link from the Config developer guide.
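
For the detection half of the pattern, a hedged boto3 sketch of an EventBridge rule that forwards NON_COMPLIANT Config evaluations to the existing SNS topic (rule name and topic ARN are hypothetical):

```python
import json
import boto3

events = boto3.client("events")

# Rule that fires when an AWS Config rule evaluation turns NON_COMPLIANT,
# for example the managed rule watching overly permissive security groups.
events.put_rule(
    Name="nonprod-sg-compliance-change",
    EventPattern=json.dumps({
        "source": ["aws.config"],
        "detail-type": ["Config Rules Compliance Change"],
        "detail": {
            "newEvaluationResult": {"complianceType": ["NON_COMPLIANT"]}
        },
    }),
)

# Send matched events to the existing SNS topic for alerting.
events.put_targets(
    Rule="nonprod-sg-compliance-change",
    Targets=[{"Id": "sns-alert", "Arn": "arn:aws:sns:us-east-1:111122223333:sg-alerts"}],
)
```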

#8: Transition to serverless for Git Webhooks

A company aims to evolve their Git repository webhook functionality from an EC2-based architecture to a serverless model within AWS. Their current setup involves an on-premises Git repository, with webhooks triggering AWS cloud functionalities via EC2 instances under an Auto Scaling group, managed by an Application Load Balancer (ALB). The objective is to find a serverless solution that reduces operational overhead while seamlessly integrating with the existing Git server and its webhook configurations.

You should have expertise in handling Git webhooks on AWS with AWS Lambda. Each piece of webhook logic is encapsulated in a separate Lambda function; doing a lab exercise on this should give you more clarity. I found this link to be helpful.
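
A minimal sketch of one such function, assuming an API Gateway proxy integration and a GitHub-style X-Hub-Signature-256 header; the environment variable name and downstream action are hypothetical.

```python
import hashlib
import hmac
import json
import os

# Shared secret configured both on the Git server's webhook and as a
# Lambda environment variable (name is hypothetical).
WEBHOOK_SECRET = os.environ["WEBHOOK_SECRET"].encode()

def handler(event, context):
    """Receive a Git webhook via API Gateway and verify its signature."""
    body = event.get("body") or ""
    # Header casing can vary by API Gateway type; lowercase is assumed here.
    signature = event.get("headers", {}).get("x-hub-signature-256", "")

    expected = "sha256=" + hmac.new(WEBHOOK_SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return {"statusCode": 401, "body": "invalid signature"}

    payload = json.loads(body)
    # ... trigger whatever the EC2 fleet used to do, e.g. start a pipeline run.
    return {"statusCode": 200, "body": json.dumps({"ref": payload.get("ref")})}
```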

#9: Optimizing database interactions

A web application, designed to serve users within a specific AWS Region, utilizes Amazon API Gateway and an AWS Lambda function for operations, primarily querying an Amazon Aurora MySQL database configured with three read replicas. Despite the regional proximity and architecture designed for efficiency, performance issues have emerged during high-load scenarios, characterized by an excessive number of database connections.

If you understand what RDS Proxy is and how it can be used to reduce the load from a growing number of database connections, you should be able to answer this. Also read up on tips for optimizing AWS Lambda functions for better resource utilization.
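
A hedged boto3 sketch of putting a proxy in front of the Aurora MySQL cluster so Lambda invocations share pooled connections; all identifiers, subnets, and ARNs are hypothetical.

```python
import boto3

rds = boto3.client("rds")

# Create a proxy that authenticates to the database via Secrets Manager.
rds.create_db_proxy(
    DBProxyName="orders-proxy",
    EngineFamily="MYSQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:aurora-creds",
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-secrets-role",
    VpcSubnetIds=["subnet-0abc1234", "subnet-0def5678"],
    RequireTLS=True,
)

# Point the proxy at the Aurora cluster; the Lambda function then connects
# to the proxy endpoint instead of the cluster endpoint.
rds.register_db_proxy_targets(
    DBProxyName="orders-proxy",
    DBClusterIdentifiers=["aurora-orders-cluster"],
)
```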

#10: Centrally manage and disseminate IP addresses

A company using AWS Organizations has set up a transit account with a shared transit gateway for inter-account connectivity. This configuration extends to facilitate AWS Site-to-Site VPN connections linking the company’s global offices. With AWS Config enabled across all accounts, there’s a need to centrally manage and disseminate internal IP address ranges pertinent to these offices. You are asked to design a solution that will allow developers to securely access applications by referring to these IP ranges.

Firstly, understand what managed prefix lists are all about; this AWS documentation would be a good starting point. Familiarize yourself with AWS Resource Access Manager (RAM) and how it can be used to share networking resources such as prefix lists across accounts. Do a quick lab referencing this AWS blog; that should suffice to handle this kind of scenario.
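
A minimal boto3 sketch of that pattern, with hypothetical CIDRs, names, and organization ARN:

```python
import boto3

ec2 = boto3.client("ec2")
ram = boto3.client("ram")

# Central prefix list holding the office IP ranges.
prefix_list = ec2.create_managed_prefix_list(
    PrefixListName="corporate-office-ranges",
    AddressFamily="IPv4",
    MaxEntries=20,
    Entries=[
        {"Cidr": "10.10.0.0/16", "Description": "London office"},
        {"Cidr": "10.20.0.0/16", "Description": "Singapore office"},
    ],
)["PrefixList"]

# Share the prefix list with the whole organization via AWS RAM so every
# account can reference it in security groups and route tables.
ram.create_resource_share(
    name="office-ip-ranges",
    resourceArns=[prefix_list["PrefixListArn"]],
    principals=["arn:aws:organizations::111122223333:organization/o-exampleorg"],
    allowExternalPrincipals=False,
)
```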

#11: Business Continuity in AWS for on-premise database

An on-premises application that uses a MySQL database and shares physical server resources with other applications needs a fail-safe in AWS. The key is to transition with minimal operational impact, considering all on-premises applications are compatible with Amazon EC2. The goal is to identify an AWS-based solution that provides robust business continuity with the least operational overhead.

Knowledge of AWS Elastic Disaster Recovery is the key. You should know how to install and configure the AWS Replication Agent on your source servers to synchronize data to AWS. Practical experience with conducting failover and failback operations, including testing and validation, would help you handle this scenario.
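
Once the agent is replicating, drills can be driven through the Elastic Disaster Recovery API. A hedged boto3 sketch (the source server ID is hypothetical, and the exact response fields may differ slightly):

```python
import boto3

drs = boto3.client("drs")

# List the source servers that the AWS Replication Agent is syncing to AWS.
servers = drs.describe_source_servers()["items"]
for server in servers:
    print(server["sourceServerID"], server["dataReplicationInfo"]["dataReplicationState"])

# Launch a recovery drill for one server; a drill brings up recovery
# instances without disrupting ongoing replication from the source.
drs.start_recovery(
    isDrill=True,
    sourceServers=[{"sourceServerID": "s-1234567890abcdef0"}],
)
```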

#12: Streamline data sharing with AWS Data Exchange

A company leveraging AWS for data aggregation and transformation sells processed data to various customers. Historically, data distribution involved downloading files from Amazon Redshift and sharing them via FTP, a method that has become cumbersome as the customer base expands. To simplify and secure data distribution, the company plans to use AWS Data Exchange. They also want to verify customer identities before sharing data with them. You are asked to design a solution that will meet this requirement with minimal operational overhead.

You should know how to set up a datashare connecting AWS Data Exchange to an Amazon Redshift cluster. Familiarity with AWS Data Exchange for creating, managing, and sharing data products with subscribers is needed to handle this scenario.
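
The datashare itself is created with SQL on the Redshift side; a hedged sketch using the Redshift Data API via boto3, with a hypothetical cluster, database, secret, and datashare name:

```python
import boto3

redshift_data = boto3.client("redshift-data")

# Create an AWS Data Exchange-managed datashare and add the schema and
# tables that customers will subscribe to.
statements = [
    "CREATE DATASHARE product_metrics MANAGEDBY ADX;",
    "ALTER DATASHARE product_metrics ADD SCHEMA public;",
    "ALTER DATASHARE product_metrics ADD ALL TABLES IN SCHEMA public;",
]

for sql in statements:
    redshift_data.execute_statement(
        ClusterIdentifier="analytics-cluster",
        Database="analytics",
        SecretArn="arn:aws:secretsmanager:us-east-1:111122223333:secret:redshift-admin",
        Sql=sql,
    )
```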

The AWS Solutions Architect Professional certification is a comprehensive exam designed to cover all aspects of the AWS cloud, challenging us to demonstrate a profound level of expertise and understanding. The 60 scenarios discussed in this series represent only a fraction of the spectrum of knowledge tested. They are intended to provide a glimpse into the exam’s structure and the type of questions you may encounter, rather than an exhaustive overview of all possible scenarios.

Below is a summary of all the scenarios discussed in this series.

60 scenarios unpacked for AWS SAP

Enjoy your learning journey!

