Patterns to ace the AWS Solutions Architect Pro exam | Performance Efficiency

raji krishnamoorthy
6 min read · Mar 11, 2024

This is the fourth blog in my series Patterns to ace the AWS Solutions Architect Pro exam. Here we delve into questions around the Performance Efficiency pillar of the AWS Well-Architected Framework, discussing scenario patterns commonly encountered in practice exams as well as those I came across during my certification exam.

#1: Migrating to AWS with Legacy Device Support
A company wants to move its on-premises applications, accessed by a variety of consumer devices including older models, to AWS using serverless technologies while ensuring backward compatibility. The challenge lies in adapting responses for older devices that do not support specific HTTP headers, which calls for a solution that filters those headers based on the device’s User-Agent. The goal is to maintain seamless access for all devices post-migration.

“Filter headers based on User-Agent” — these words point towards Amazon CloudFront. You should know how to configure CloudFront distributions to work with Application Load Balancers (ALBs), and you should be familiar with CloudFront Functions, especially the use cases they are meant for.
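CloudFront Functions themselves are written in JavaScript, so to keep this series’ examples in Python, here is a minimal sketch of the same header-filtering idea expressed as a Lambda@Edge viewer-response handler (which CloudFront also supports). The header name, the User-Agent markers, and the rest of the values are hypothetical placeholders, not the scenario’s actual configuration.

```python
# Hypothetical Lambda@Edge viewer-response handler: strip a response header
# that legacy devices cannot handle, based on the request's User-Agent.
LEGACY_UA_MARKERS = ("LegacyDevice/1.0", "OldBrowser/2")  # hypothetical markers

def handler(event, context):
    cf = event["Records"][0]["cf"]
    request_headers = cf["request"]["headers"]
    response = cf["response"]

    user_agent = ""
    if "user-agent" in request_headers:
        user_agent = request_headers["user-agent"][0]["value"]

    if any(marker in user_agent for marker in LEGACY_UA_MARKERS):
        # Remove the header older devices do not support (hypothetical name).
        response["headers"].pop("x-modern-feature", None)

    return response
```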

#2: Log Integrity with Auto Scaling EC2 Instances
An application hosted on Amazon EC2 instances within an Auto Scaling group faces challenges in log file management. Because the EC2 instances scale in and out dynamically, the security team notices that log files from terminated instances go missing. The task is to establish a process that ensures all log files are consistently copied to a central Amazon S3 bucket, even from instances that are scaled in.

You need to be proficient with AWS Systems Manager for executing scripts on EC2 instances, and you need an understanding of Auto Scaling groups and lifecycle hooks to manage instance termination. Familiarity with Amazon EventBridge for detecting lifecycle events and triggering responses, and knowledge of how an AWS Lambda function can invoke Systems Manager, are also necessary. Additionally, you should understand the SendCommand operation in Systems Manager for initiating scripts and how to interact with Auto Scaling lifecycle hooks to hold and then complete instance termination.
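A minimal sketch of that flow, assuming an EventBridge rule routes the “EC2 Instance-terminate Lifecycle Action” event to this Lambda function; the bucket name and log path are hypothetical:

```python
import boto3

ssm = boto3.client("ssm")
autoscaling = boto3.client("autoscaling")

LOG_BUCKET = "my-central-log-bucket"  # hypothetical central log bucket

def handler(event, context):
    detail = event["detail"]
    instance_id = detail["EC2InstanceId"]

    # Copy the instance's logs to S3 before it is terminated.
    ssm.send_command(
        InstanceIds=[instance_id],
        DocumentName="AWS-RunShellScript",
        Parameters={
            "commands": [
                f"aws s3 cp /var/log/app/ s3://{LOG_BUCKET}/{instance_id}/ --recursive"
            ]
        },
    )

    # Signal the Auto Scaling group that termination may proceed.
    # (A production version would wait for the command to finish first.)
    autoscaling.complete_lifecycle_action(
        LifecycleHookName=detail["LifecycleHookName"],
        AutoScalingGroupName=detail["AutoScalingGroupName"],
        LifecycleActionToken=detail["LifecycleActionToken"],
        LifecycleActionResult="CONTINUE",
    )
```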

#3: Scaling for Enhanced Media Content Delivery
A company’s blog site, hosted on Amazon EC2 instances and utilizing an Amazon EFS volume for content storage, faces performance issues due to a significant increase in traffic after introducing video content. Users experience buffering and timeouts, especially during peak traffic times. The challenge is to find a cost-efficient and scalable solution to improve site accessibility and video streaming quality.

Most of the time, when the question involves images or video and asks you to improve performance, the answer is Amazon S3. Since the question describes a blog site with users distributed across geographies, put Amazon CloudFront in front of it.

#4: Streamlining Alert Integration Across AWS Accounts
A company managing multiple AWS accounts with AWS Organizations aims to enhance security by integrating a third-party alerting system using Amazon SNS topics. The solutions architect employs an AWS CloudFormation template for the SNS topic creation and utilizes stack sets for automated deployment across all member accounts, with trusted access enabled. The focus is on the deployment process for these CloudFormation StackSets to ensure uniform alerting capabilities company-wide.

This scenario is about making CloudFormation StackSets scale across an AWS Organization. You must be aware of the permission models (self-managed versus service-managed, the latter requiring trusted access) and know how automatic deployment works in CloudFormation StackSets.
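A minimal boto3 sketch of the service-managed, auto-deployment setup; the stack set name, template URL, OU ID, and Region are hypothetical placeholders:

```python
import boto3

cfn = boto3.client("cloudformation")

# Service-managed permissions require trusted access between
# CloudFormation StackSets and AWS Organizations to be enabled.
cfn.create_stack_set(
    StackSetName="org-wide-alerting",  # hypothetical name
    TemplateURL="https://s3.amazonaws.com/my-bucket/sns-alerting.yaml",  # hypothetical
    PermissionModel="SERVICE_MANAGED",
    AutoDeployment={
        "Enabled": True,  # deploy automatically to accounts that join the OU
        "RetainStacksOnAccountRemoval": False,
    },
)

# Deploy stack instances to every member account under the targeted OU.
cfn.create_stack_instances(
    StackSetName="org-wide-alerting",
    DeploymentTargets={"OrganizationalUnitIds": ["ou-xxxx-xxxxxxxx"]},  # hypothetical OU
    Regions=["us-east-1"],
)
```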

#5: Optimizing Cloud Workloads for Efficiency and Scalability
A company uses an Amazon EC2 instance to run a Python script every 10 minutes that processes data from an S3 bucket; because the script finishes quickly, the instance sits idle most of the time. Seeking high availability, scalability, and reduced management overhead, the company wants to optimize its cloud infrastructure for better resource utilization and operational efficiency.

The key asks in the scenario are operational efficiency and cost, which point towards serverless architectures. Familiarity with using S3 event notifications to trigger Lambda functions is all that is needed to tackle these types of scenarios.
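As a sketch, the Lambda handler invoked by the S3 event notification might look like this; the process function is a hypothetical stand-in for the original Python script’s logic:

```python
import urllib.parse
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # One invocation can carry multiple records; process each uploaded object.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        obj = s3.get_object(Bucket=bucket, Key=key)
        process(obj["Body"].read())

def process(data):
    # Hypothetical port of the original data-processing script.
    ...
```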

#6: Migrating Legacy Email Services to Amazon SES
A company is transitioning from an on-premises setup to AWS and faces challenges with a legacy SMTP service that lacks TLS encryption and relies on outdated protocols. To modernize, the company opts for Amazon Simple Email Service (SES), having prepared the SES domain and adjusted service limits. The focus is on adapting the critical application to use Amazon SES for outbound email, ensuring enhanced security and reliability.

Good knowledge of Amazon SES is sufficient to handle this scenario. Refer to the AWS documentation on sending email through Amazon SES.
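Since the legacy relay already speaks SMTP, one low-friction approach is to point the application at the SES SMTP interface with STARTTLS. A minimal sketch, in which the endpoint Region, credentials, and addresses are hypothetical placeholders:

```python
import smtplib
from email.message import EmailMessage

# Hypothetical values: Region-specific SES SMTP endpoint, credentials, addresses.
SMTP_HOST = "email-smtp.us-east-1.amazonaws.com"
SMTP_PORT = 587  # SES SMTP port that supports STARTTLS
SMTP_USER = "SES_SMTP_USERNAME"
SMTP_PASS = "SES_SMTP_PASSWORD"

msg = EmailMessage()
msg["From"] = "alerts@example.com"
msg["To"] = "customer@example.com"
msg["Subject"] = "Order confirmation"
msg.set_content("Your order has shipped.")

with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as server:
    server.starttls()                  # TLS, which the legacy relay lacked
    server.login(SMTP_USER, SMTP_PASS) # SES SMTP credentials, not IAM access keys
    server.send_message(msg)
```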

#7: Scaling Genomics Data Analysis with AWS
A life sciences company is transitioning its genomics data analysis from an on-premises setup with capacity limitations to AWS, aiming for scalability and faster processing times. They utilize Docker containers for processing large volumes of genomics data, facing a demand for processing approximately 200 GB per genome with an expected 15 job requests daily. The goal is to leverage AWS services, such as Amazon S3 for storage, while ensuring seamless data transfer and efficient job execution to enhance research outcomes.

You should know how AWS DataSync can be used to transfer data to Amazon S3, and then how S3 event notifications can invoke an AWS Lambda function that starts an AWS Step Functions workflow. Experience with Amazon Elastic Container Registry (Amazon ECR) for storing and managing Docker images is needed, as is proficiency in configuring AWS Batch to run containerized jobs. Practice this lab to get the hang of DataSync.
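As a sketch of the glue between S3 and the workflow, here is a Lambda handler that starts a Step Functions execution for each new genome object; the state machine ARN (whose workflow would submit the AWS Batch job running the container image from ECR) is a hypothetical placeholder:

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Hypothetical state machine ARN for the genomics processing workflow.
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:genomics-pipeline"

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Start one workflow execution per uploaded genome object.
        sfn.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=json.dumps({"bucket": bucket, "key": key}),
        )
```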

#8: Simplifying Document Processing Migration to AWS
A company is moving its document processing system to AWS, with Amazon S3 for storage and direct customer access. However, it faces challenges updating a key server to support the S3 API, requiring a solution that allows fast local file access during processing and ensures public availability within 30 minutes after processing. The focus is on achieving this with minimal effort, leveraging AWS capabilities to bridge the gap between legacy systems and cloud-based storage.

Practice this lab to handle this scenario. You should know how to set up an Amazon S3 File Gateway and mount it on an EC2 instance using NFS. Also read about the RefreshCache API of AWS Storage Gateway.
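For the RefreshCache piece, a minimal boto3 sketch (the file share ARN and folder are hypothetical); it asks the gateway to re-list the bucket so that objects added to S3 outside the file share become visible to the NFS clients:

```python
import boto3

sgw = boto3.client("storagegateway")

# Hypothetical ARN of the S3 File Gateway share mounted over NFS.
FILE_SHARE_ARN = "arn:aws:storagegateway:us-east-1:123456789012:share/share-XXXXXXXX"

# Refresh the gateway's cached object listing so changes made directly
# in the S3 bucket show up for clients mounting the NFS share.
sgw.refresh_cache(
    FileShareARN=FILE_SHARE_ARN,
    FolderList=["/processed"],  # hypothetical folder
    Recursive=True,
)
```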

#9: Migrate applications with diverse usage patterns
Applications developed in different technologies vary in memory usage, with some requiring up to 2.5 GB during peak times. There is also a billing report application with extensive processing needs. The goal is to find a solution that accommodates the diverse requirements and usage patterns of these applications, ensuring efficient month-end processing and occasional usage without compromising performance.

Good knowledge of how Amazon ECS auto scaling works is needed. Refer to this AWS blog to learn about task scaling in ECS.
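A sketch of ECS service auto scaling via Application Auto Scaling; the cluster name, service name, capacity bounds, and CPU target are hypothetical:

```python
import boto3

aas = boto3.client("application-autoscaling")

# Hypothetical ECS cluster and service names.
RESOURCE_ID = "service/prod-cluster/billing-report-service"

# Register the ECS service's desired task count as a scalable target.
aas.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=RESOURCE_ID,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=1,
    MaxCapacity=10,
)

# Target-tracking policy: keep average CPU around 70%, scaling tasks in and out.
aas.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=RESOURCE_ID,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)
```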

#10: Ensuring Real-Time Data Consistency Across Regions
A company requires a solution to synchronize its Amazon RDS for MySQL database between the US and Europe, supporting real-time data access and updates for customers in both regions without latency or data staleness issues. This involves configuring the database to handle writes and immediate data visibility across continents, ensuring seamless, consistent user experiences globally.

All that is needed to tackle real-time data consistency across Regions is to practice this lab on migrating from Amazon RDS for MySQL to Amazon Aurora MySQL. Try to remember the seven steps.
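The cross-Region part of this pattern is typically an Aurora global database. As a boto3 sketch (assuming a primary Aurora MySQL cluster already exists; identifiers and Regions are hypothetical), a secondary cluster with write forwarding lets European applications issue writes locally, which are forwarded to the primary, while reads stay in-Region:

```python
import boto3

rds_us = boto3.client("rds", region_name="us-east-1")
rds_eu = boto3.client("rds", region_name="eu-west-1")

# Promote the existing primary Aurora MySQL cluster into a global database.
rds_us.create_global_cluster(
    GlobalClusterIdentifier="global-customers",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:customers-primary",
)

# Attach a secondary cluster in Europe with write forwarding enabled.
# (Reader DB instances would then be added with create_db_instance.)
rds_eu.create_db_cluster(
    DBClusterIdentifier="customers-secondary",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="global-customers",
    EnableGlobalWriteForwarding=True,
)
```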

#11: Discover application dependencies for Cloud Migration
A company embarking on AWS migration faces challenges with an unclear application landscape, consisting of physical machines and VMs, and a specific application with latency-sensitive dependencies on a custom IP protocol. Despite collecting data with AWS Application Discovery Agent, the company needs a strategy to identify all critical dependencies for simultaneous migration to ensure low-latency performance remains intact.

The moment you see migration scenarios involving application dependencies, opt for AWS Migration Hub. The key to handling this question is familiarity with the network graphs within Migration Hub and with querying the collected discovery data in Amazon Athena.
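As a rough illustration of the Athena side, the query below looks for servers talking to the custom protocol’s port so they can be migrated together. The database, table, and column names are hypothetical placeholders and not the actual Application Discovery Service schema; only the boto3 call itself is real.

```python
import boto3

athena = boto3.client("athena")

# Illustrative query over discovery data shared to Athena; schema names
# below (database, table, columns, port) are hypothetical placeholders.
QUERY = """
SELECT source_server, destination_server, destination_port, count(*) AS connections
FROM network_connections
WHERE destination_port = 9000
GROUP BY source_server, destination_server, destination_port
ORDER BY connections DESC
"""

athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "application_discovery_db"},    # hypothetical
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # hypothetical
)
```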

#12: Improving mobile game performance
A mobile game company faces challenges with its on-premises backend during peak hours, including insufficient server capacity and latency issues in accessing player session data. Management seeks a cloud-based solution to address these issues without altering the existing REST API model, aiming for scalability to handle varying loads and improved data access speed.

In addition to the API Gateway and Lambda integration, knowledge of Amazon DynamoDB is required for the session data, especially its on-demand capacity mode for handling variable workloads with low latency.
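A minimal sketch of that backend piece; the table name, key schema, and the API Gateway proxy-integration path parameter are assumptions for illustration:

```python
import json
import boto3

dynamodb = boto3.client("dynamodb")
TABLE = "player-sessions"  # hypothetical table name

def create_sessions_table():
    # One-time setup: on-demand capacity (PAY_PER_REQUEST) absorbs spiky
    # peak-hour traffic without provisioning read/write capacity.
    dynamodb.create_table(
        TableName=TABLE,
        AttributeDefinitions=[{"AttributeName": "session_id", "AttributeType": "S"}],
        KeySchema=[{"AttributeName": "session_id", "KeyType": "HASH"}],
        BillingMode="PAY_PER_REQUEST",
    )

def handler(event, context):
    # Lambda behind API Gateway (proxy integration assumed): fetch session data.
    session_id = event["pathParameters"]["sessionId"]
    resp = dynamodb.get_item(TableName=TABLE, Key={"session_id": {"S": session_id}})
    return {"statusCode": 200, "body": json.dumps(resp.get("Item", {}))}
```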

In the next blog, we will explore scenarios related to the Operational Excellence pillar of the AWS Well-Architected Framework, commonly found in the AWS Solutions Architect Professional certification exam.
