
2025 AWS Certified Solutions Architect Flashcards: Key Concepts, Terminologies, and Exam Topics for Associate and Professional Certification Success

Here are the multiple-choice questions with rationales and the correct answers indicated:

Question 1:
Your company policies require encryption of sensitive data at rest. You are considering the possible options for protecting data while storing it at rest on an EBS data volume attached to an EC2 instance. Which of these options would allow you to encrypt your data at rest? (Choose three.)

A. Implement third party volume encryption tools - Correct Answer
B. Encrypt data inside your applications before storing it on EBS - Correct Answer
C. Encrypt data using native data encryption drivers at the file system level - Correct Answer
D. Use IAM roles to restrict access to the EBS volume.
E. Implement network-level encryption using VPC Flow Logs.

Rationale:

  • A. Implement third party volume encryption tools: There are various third-party software solutions available that can encrypt EBS volumes at the operating system level. This encrypts the data before it's written to the EBS volume.
  • B. Encrypt data inside your applications before storing it on EBS: Encrypting the data within the application layer ensures that the data is protected even before it reaches the EBS volume. This provides end-to-end encryption controlled by the application.
  • C. Encrypt data using native data encryption drivers at the file system level: Operating systems often provide native encryption features (like BitLocker for Windows or dm-crypt/LUKS for Linux) that can encrypt the file system residing on the EBS volume. This encrypts the data as it's written to the file system.
  • D. Use IAM roles to restrict access to the EBS volume: IAM roles control who can attach or detach EBS volumes to EC2 instances. While this is crucial for security, it doesn't encrypt the data on the volume itself.
  • E. Implement network-level encryption using VPC Flow Logs: VPC Flow Logs capture information about the IP traffic going to and from network interfaces in your VPC. They do not encrypt the data at rest on the EBS volume. Network encryption (like TLS/SSL for data in transit) is a different security layer.

Question 2:
A customer is deploying an SSL-enabled web application to AWS and would like to implement a separation of roles between the EC2 service administrators, who are entitled to log in to instances and make API calls, and the security officers, who will maintain and have exclusive access to the application's X.509 certificate that contains the private key. Which configuration option would satisfy these requirements?

A. Store the certificate on the EC2 instances' local file system and use IAM instance profiles to restrict access.
B. Store the certificate in an S3 bucket encrypted with KMS and grant access only to the security officers' IAM role.
C. Configure IAM policies authorizing access to the certificate store only to the security officers and terminate SSL on an ELB - Correct Answer
D. Store the certificate in AWS Secrets Manager and grant access only to the security officers' IAM role. Retrieve the certificate on the EC2 instances using their IAM role.

Rationale:
  • C. Configure IAM policies authorizing access to the certificate store only to the security officers and terminate SSL on an ELB: This option provides the best separation of roles. By terminating SSL on the Elastic Load Balancer (ELB), the security officers are responsible for managing the certificate on the ELB, and the EC2 instances handle unencrypted traffic internally. EC2 administrators can manage the instances without needing direct access to the private key stored on the ELB. IAM policies can strictly control who can manage the ELB's certificates (see the configuration sketch after this rationale).
  • A. Store the certificate on the EC2 instances' local file system and use IAM instance profiles to restrict access: This doesn't provide a clear separation of roles. EC2 administrators with access to the instances' file system could potentially access the certificate.
  • B. Store the certificate in an S3 bucket encrypted with KMS and grant access only to the security officers' IAM role: While this improves security compared to local storage, the EC2 instances would still need permissions to decrypt the
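To make the correct option (C) concrete, here is a minimal, hypothetical sketch of terminating SSL on an Application Load Balancer with boto3. All ARNs are placeholders; the certificate would typically live in ACM or IAM, where an IAM policy granted only to the security officers (for example, on actions such as elasticloadbalancing:AddListenerCertificates and acm:ImportCertificate) keeps the private key away from the EC2 administrators.

Python (boto3):

    import boto3

    elbv2 = boto3.client("elbv2")

    # Placeholder ARNs for illustration only.
    CERT_ARN = "arn:aws:acm:us-east-1:111122223333:certificate/example-cert-id"
    LB_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/example-alb/abc123"
    TG_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/example-tg/def456"

    # Terminate SSL at the load balancer: the certificate is attached to the
    # HTTPS listener, so the EC2 instances (and their administrators) never
    # handle the private key; traffic is forwarded to the targets internally.
    elbv2.create_listener(
        LoadBalancerArn=LB_ARN,
        Protocol="HTTPS",
        Port=443,
        Certificates=[{"CertificateArn": CERT_ARN}],
        DefaultActions=[{"Type": "forward", "TargetGroupArn": TG_ARN}],
    )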

account to the Production OU will enforce the root SCP, ensuring long-term policy adherence without requiring ongoing exceptions.
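As an illustration of the remediation described above, moving the account into the Production OU is a single AWS Organizations API call. This is a hedged sketch with placeholder account, root, and OU IDs only; once the account sits under the Production OU, the SCPs attached at the root and at that OU apply to it automatically, with no per-account exception to maintain.

Python (boto3):

    import boto3

    org = boto3.client("organizations")

    # Placeholder IDs for illustration only.
    org.move_account(
        AccountId="111122223333",                 # the newly created member account
        SourceParentId="r-exmp",                  # the organization root
        DestinationParentId="ou-exmp-productn",   # the Production OU
    )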

  • A. Update the root SCP to explicitly allow the required AWS Config actions for the new account using a Condition element: While technically possible, modifying the root SCP with conditions specific to a single account introduces complexity and increases the long-term maintenance burden of the root SCP.
  • B. Create a new OU named Exceptions for the new account and do not apply the root SCP to this OU: This creates a permanent exception to the organization's policies for the new account, which is generally not a desirable long-term solution for consistent policy enforcement.
  • D. Remove the deny list SCP from the root and implement allow list SCPs in the Production OU: This is a significant change to the organization's policy management strategy. While allow lists are often considered more secure in the long run, it requires a complete overhaul of the existing SCPs and is not the least disruptive way to address the immediate issue without introducing additional long-term maintenance for the current policies.

Question 4:
A company is running a two-tier web-based application in an on-premises data center. The application layer consists of a single server running a stateful application. The application connects to a PostgreSQL database running on a separate server. The application's user base is expected to grow significantly, so the company is migrating the application and database to AWS. The solution will use Amazon Aurora PostgreSQL, Amazon EC2 Auto Scaling, and Elastic Load Balancing. Which solution will provide a consistent user experience that will allow the application and database tiers to scale?

A. Enable Aurora Auto Scaling for the Aurora Writer instance. Use a Network Load Balancer with instance ID routing.
B. Enable Aurora Auto Scaling for Aurora Replicas. Use an Application Load Balancer with the round robin routing.
C. Enable Aurora Auto Scaling for the Aurora Writer instance. Use an Application Load Balancer with the least outstanding requests routing and sticky sessions enabled.
D. Enable Aurora Auto Scaling for Aurora Replicas. Use an Application Load Balancer with the round robin routing and sticky sessions enabled - Correct Answer

Rationale:
  • D. Enable Aurora Auto Scaling for Aurora Replicas. Use an Application Load Balancer with the round robin routing and sticky sessions enabled:

o Aurora Auto Scaling for Aurora Replicas: Scaling read replicas helps distribute read traffic, improving database performance and responsiveness as the user base grows. While the writer instance handles writes, scaling replicas ensures read operations don't become a bottleneck.
o Application Load Balancer (ALB) with round robin routing and sticky sessions enabled: An ALB distributes incoming HTTP/HTTPS traffic across multiple application instances in the Auto Scaling group. Round robin ensures traffic is distributed evenly. Sticky sessions are crucial for a stateful application. They ensure that a user's requests are consistently routed to the same application instance for the duration of their session, preserving the application's state.
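Before turning to the other options, here is a minimal, hypothetical sketch of the two pieces of answer D, using placeholder ARNs and names: enabling load-balancer cookie stickiness on the ALB target group, and registering the Aurora cluster's replica count with Application Auto Scaling.

Python (boto3):

    import boto3

    elbv2 = boto3.client("elbv2")
    app_autoscaling = boto3.client("application-autoscaling")

    # Placeholder identifiers for illustration only.
    TG_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web-tg/abc123"

    # Sticky sessions: route each user to the same instance via an ALB-managed cookie.
    elbv2.modify_target_group_attributes(
        TargetGroupArn=TG_ARN,
        Attributes=[
            {"Key": "stickiness.enabled", "Value": "true"},
            {"Key": "stickiness.type", "Value": "lb_cookie"},
            {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "3600"},
        ],
    )

    # Register the Aurora cluster's replica count as a scalable target; a scaling
    # policy (for example, target tracking on average CPU or connections) would
    # then be attached to this target to add or remove replicas with read load.
    app_autoscaling.register_scalable_target(
        ServiceNamespace="rds",
        ResourceId="cluster:example-aurora-cluster",
        ScalableDimension="rds:cluster:ReadReplicaCount",
        MinCapacity=1,
        MaxCapacity=8,
    )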

  • A. Enable Aurora Auto Scaling for the Aurora Writer instance. Use a Network Load Balancer with instance ID routing: Scaling the writer primarily addresses write capacity. Instance ID routing with an NLB would send traffic directly to specific instances, which is not ideal for a dynamically scaling Auto Scaling group and doesn't inherently handle stateful applications.
  • B. Enable Aurora Auto Scaling for Aurora Replicas. Use an Application Load Balancer with the round robin routing: Round robin alone without sticky sessions will break the stateful application as user requests might be routed to different stateless instances after scaling events.
  • C. Enable Aurora Auto Scaling for the Aurora Writer instance. Use an Application Load Balancer with the least outstanding requests routing and sticky sessions enabled: While sticky sessions are correct for the stateful application, scaling the writer primarily addresses write capacity. Scaling read replicas is more relevant for handling increased user traffic, which typically involves more read operations. Least outstanding requests routing is a valid load balancing algorithm but doesn't negate the need for scaling read replicas for read-heavy workloads.

Question 5:
A company uses a service to collect metadata from applications that the company hosts on premises. Consumer devices such as TVs and internet radios access the applications. Many older devices do not support certain HTTP headers and exhibit errors when these headers are present in responses. The company has configured an on-premises load balancer to remove the unsupported headers from responses sent to older devices, which the company identified by the User-Agent headers. The company wants to migrate the service to AWS, adopt serverless technologies, and retain the

Gateway to transform the responses based on the User-Agent header: API Gateway usage plans primarily manage access and throttling, not response transformation based on header content. While API Gateway can perform some transformations, it's less suited for conditional header removal based on the User-Agent compared to CloudFront Functions.

  • B. Create an Application Load Balancer (ALB) for the metadata service. Configure the ALB to invoke the correct Lambda function for each type of request. Configure an ALB listener rule to remove the problematic headers based on the value of the User-Agent header: ALB listener rules are primarily for routing traffic based on request characteristics, not for dynamic response modification based on headers like User-Agent.
  • D. Create an API Gateway HTTP API for the metadata service. Configure the API Gateway to integrate with the Lambda functions. Configure a custom authorizer in API Gateway to inspect the User-Agent header and modify the responses: Custom authorizers in API Gateway are mainly for authentication and authorization, not for modifying response headers based on the User-Agent.

Question 6:
A retail company needs to provide a series of data files to another company, which is its business partner. These files are saved in an Amazon S3 bucket under Account A, which belongs to the retail company. The business partner company wants one of its IAM users, User_DataProcessor, to access the files from its own AWS account (Account B). Which combination of steps must the companies take so that User_DataProcessor can access the S3 bucket successfully? (Choose two.)

A. In Account A, create an IAM user named User_ForPartner and provide the access keys to the business partner.
B. In Account B, create an IAM role with permissions to access the S3 bucket in Account A and provide the ARN of this role to Account A.
C. In Account A, set the S3 bucket policy to the following: - Correct Answer
D. In Account B, set the permissions of User_DataProcessor to the following: - Correct Answer
E. In Account A, create a bucket access point and grant access to the business partner's AWS account ID.

Rationale: The correct two steps are C and D (although the provided text is incomplete for D). Here's the complete reasoning:
  • C. In Account A, set the S3 bucket policy to the following: The S3 bucket policy in Account A (the retail company's account) needs to grant s3:GetObject permission to the IAM user in Account B (the business partner's account). This is achieved by specifying the principal as the ARN of the IAM user in Account B. A sample bucket policy would look like this:

    JSON
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::ACCOUNT_B_ID:user/User_DataProcessor"
          },
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*"
        }
      ]
    }
  • D. In Account B, set the permissions of User_DataProcessor to the following: The IAM user User_DataProcessor in Account B needs permissions to access the S3 bucket in Account A. This is typically done by attaching an IAM policy to User_DataProcessor that allows them to perform the s3:GetObject action on the specified S3 bucket in Account A. A sample IAM policy for User_DataProcessor would look like this: JSON { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow",
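Since the sample policy for D is incomplete in the text above, here is a hedged, self-contained sketch of how the two grants work together, using placeholder bucket and key names. The call succeeds only when both sides agree: the bucket policy in Account A and the identity policy attached to User_DataProcessor in Account B must each allow s3:GetObject on the bucket's objects.

Python (boto3):

    import boto3

    # Placeholder names for illustration only.
    BUCKET = "retail-partner-data"        # bucket owned by Account A
    KEY = "exports/2025/datafile.csv"

    # Run with the credentials of User_DataProcessor in Account B. The request
    # is authorized only if BOTH the Account A bucket policy and the Account B
    # identity policy allow s3:GetObject on this object.
    s3 = boto3.client("s3")
    body = s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read()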

A company is hosting a critical application on a single Amazon EC2 instance. The application uses an Amazon ElastiCache for Redis single-node cluster for an in-memory data store. The application uses an Amazon RDS for MariaDB DB instance for a relational database. For the application to function, each piece of the infrastructure must be healthy and must be in an active state. A solutions architect needs to improve the application's architecture so that the infrastructure can automatically recover from failure with the least possible downtime. Which combination of steps will meet these requirements? (Choose three.)
Correct Answer: A. Use an Elastic Load Balancer to distribute traffic across multiple EC2 instances. Ensure that the EC2 instances are part of an Auto Scaling group that has a minimum capacity of two instances. D. Modify the DB instance to create a Multi-AZ deployment that extends across two Availability Zones. F. Create a replication group for the ElastiCache for Redis cluster. Enable Multi-AZ on the cluster.

A retail company is operating its ecommerce application on AWS. The application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The company uses an Amazon RDS DB instance as the database backend. Amazon CloudFront is configured with one origin that points to the ALB. Static content is cached. Amazon Route 53 is used to host all public zones. After an update of the application, the ALB occasionally returns a 502 status code (Bad Gateway) error. The root cause is malformed HTTP headers that are returned to the ALB. The webpage returns successfully when a solutions architect reloads the webpage immediately after the error occurs. While the company is working on the problem, the solutions architect needs to provide a custom error page instead of the standard ALB error page to visitors. Which combination of steps will meet this requirement with the LEAST amount of operational overhead? (Choose two.)
Correct Answer: A. Create an Amazon S3 bucket. Configure the S3 bucket to host a static webpage. Upload the custom error pages to Amazon S3. E. Add a custom error response by configuring a CloudFront custom error page. Modify DNS records to point to a publicly accessible web page.

A company has many AWS accounts and uses AWS Organizations to manage all of them. A solutions architect must implement a solution that the company can use to share a common network across multiple accounts. The company's infrastructure team has a dedicated infrastructure account that has a VPC. The infrastructure team must use this account to manage the network. Individual accounts cannot have the ability to manage their own networks. However, individual accounts must be able to create AWS resources within subnets. Which combination of actions should the solutions architect perform to meet these requirements? (Choose two.)
Correct Answer: B. Enable resource sharing from the AWS Organizations management account. D. Create a resource share in AWS Resource Access Manager in the infrastructure account. Select the specific AWS Organizations OU that will use the shared network. Select each subnet to associate with the resource share.

A company wants to use a third-party software-as-a-service (SaaS) application. The third-party SaaS application is consumed through several API calls. The third-party SaaS application also runs on AWS inside a VPC. The company will consume the third-party SaaS application from inside a VPC. The company has internal security policies that mandate the use of private connectivity that does not traverse the internet. No resources that run in the company VPC are allowed to be accessed from outside the company's VPC. All permissions must conform to the principles of least privilege. Which solution meets these requirements?
Correct Answer: A. Create an AWS PrivateLink interface VPC endpoint. Connect this endpoint to the endpoint service that the third-party SaaS application provides. Create a security group to limit the access to the endpoint. Associate the security group with the endpoint.

A company needs to implement a patching process for its servers. The on-premises servers and Amazon EC2 instances use a variety of tools to perform patching. Management requires a single report showing the patch status of all the servers and instances. Which set of actions should a solutions architect take to meet these requirements?
Correct Answer: A. Use AWS Systems Manager to manage patches on the on-premises servers and EC2 instances. Use Systems Manager to generate patch compliance reports.

A company is running an application on several Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. The load on the application varies throughout the day, and EC2 instances are scaled in and out on a regular basis. Log files from the EC2 instances are copied to a central Amazon S3 bucket every 15 minutes. The security team discovers that log files are missing from some of the terminated EC2 instances. Which set of actions will ensure that log files are copied to the central S3 bucket from the terminated EC2 instances?
Correct Answer: B. Create an AWS Systems Manager document with a script to copy log files to Amazon S3. Create an Auto Scaling lifecycle hook and an Amazon EventBridge rule to detect lifecycle events from the Auto

gateway. Delete the existing private virtual interface from the existing connection. Create the second Direct Connect connection. Create a new private virtual interface on each connection, and connect both private virtual interfaces to the Direct Connect gateway. Connect the Direct Connect gateway to the single VPC.

A company has a web application that allows users to upload short videos. The videos are stored on Amazon EBS volumes and analyzed by custom recognition software for categorization. The website contains static content that has variable traffic with peaks in certain months. The architecture consists of Amazon EC2 instances running in an Auto Scaling group for the web application and EC2 instances running in an Auto Scaling group to process an Amazon SQS queue. The company wants to re-architect the application to reduce operational overhead using AWS managed services where possible and remove dependencies on third-party software. Which solution meets these requirements?
Correct Answer: C. Host the web application in Amazon S3. Store the uploaded videos in Amazon S3. Use S3 event notification to publish events to the SQS queue. Process the SQS queue with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos.

A company has a serverless application comprised of Amazon CloudFront, Amazon API Gateway, and AWS Lambda functions. The current deployment process of the application code is to create a new version number of the Lambda function and run an AWS CLI script to update. If the new function version has errors, another CLI script reverts by deploying the previous working version of the function. The company would like to decrease the time to deploy new versions of the application logic provided by the Lambda functions, and also reduce the time to detect and revert when errors are identified. How can this be accomplished?
Correct Answer: B. Use AWS SAM and built-in AWS CodeDeploy to deploy the new Lambda version, gradually shift traffic to the new version, and use pre-traffic and post-traffic test functions to verify code. Roll back if Amazon CloudWatch alarms are triggered.

A company is planning to store a large number of archived documents and make the documents available to employees through the corporate intranet. Employees will access the system by connecting through a client VPN service that is attached to a VPC. The data must not be accessible to the public. The documents that the company is storing are copies of data that is held on physical media elsewhere. The number of requests will be low. Availability and speed of retrieval are not concerns of the company. Which solution will meet these requirements at the LOWEST cost?
Correct Answer: A. Create an Amazon S3 bucket. Configure the S3 bucket to use the S3 One Zone-Infrequent Access (S3 One Zone-IA) storage class as default. Configure the S3 bucket for website hosting. Create an S3 interface endpoint. Configure the S3 bucket to allow access only through that endpoint.

A company is using an on-premises Active Directory service for user authentication. The company wants to use the same authentication service to sign in to the company's AWS accounts, which are using AWS Organizations. AWS Site-to-Site VPN connectivity already exists between the on-premises environment and all the company's AWS accounts. The company's security policy requires conditional access to the accounts based on user groups and roles. User identities must be managed in a single location. Which solution will meet these requirements?
Correct Answer: A. Configure AWS IAM Identity Center (AWS Single Sign-On) to connect to Active Directory by using SAML 2.0. Enable automatic provisioning by using the System for Cross-domain Identity Management (SCIM) v2.0 protocol. Grant access to the AWS accounts by using attribute-based access controls (ABACs).

A software company has deployed an application that consumes a REST API by using Amazon API Gateway, AWS Lambda functions, and an Amazon DynamoDB table. The application is showing an increase in the number of errors during PUT requests. Most of the PUT calls come from a small number of clients that are authenticated with specific API keys. A solutions architect has identified that a large number of the PUT requests originate from one client. The API is noncritical, and clients can tolerate retries of unsuccessful calls. However, the errors are displayed to customers and are causing damage to the API's reputation. What should the solutions architect recommend to improve the customer experience?
Correct Answer: B. Implement API throttling through a usage plan at the API Gateway level. Ensure that the client application handles code 429 replies without error.

A company is running a data-intensive application on AWS. The application runs on a cluster of hundreds of Amazon EC2 instances. A shared file system also runs on several EC2 instances that store 200 TB of data. The application reads and modifies the data on the shared file system and generates a report. The job runs once monthly, reads a subset of the files from the shared file system, and takes about 72 hours to complete. The compute instances scale in an Auto Scaling group, but the instances that host the shared file system run continuously. The compute and storage instances are all in the same AWS Region. A solutions architect needs to reduce costs by replacing the shared file system instances. The file system must provide high performance access to the needed data for the duration of the 72-hour run. Which solution will provide the

version of the application, the security engineer wants to implement the following application design changes to improve security: The database must use strong, randomly generated passwords stored in a secure AWS managed service. The application resources must be deployed through AWS CloudFormation. The application must rotate credentials for the database every 90 days. A solutions architect will generate a CloudFormation template to deploy the application. Which resources specified in the CloudFormation template will meet the security engineer's requirements with the LEAST amount of operational overhead?
Correct Answer: A. Generate the database password as a secret resource using AWS Secrets Manager. Create an AWS Lambda function resource to rotate the database password. Specify a Secrets Manager RotationSchedule resource to rotate the database password every 90 days.

A company is storing data in several Amazon DynamoDB tables. A solutions architect must use a serverless architecture to make the data accessible publicly through a simple API over HTTPS. The solution must scale automatically in response to demand. Which solutions meet these requirements? (Choose two.)
Correct Answer: A. Create an Amazon API Gateway REST API. Configure this API with direct integrations to DynamoDB by using API Gateway's AWS integration type. C. Create an Amazon API Gateway HTTP API. Configure this API with integrations to AWS Lambda functions that return data from the DynamoDB tables.

A company has registered 10 new domain names. The company uses the domains for online marketing. The company needs a solution that will redirect online visitors to a specific URL for each domain. All domains and target URLs are defined in a JSON document. All DNS records are managed by Amazon Route 53. A solutions architect must implement a redirect service that accepts HTTP and HTTPS requests. Which combination of steps should the solutions architect take to meet these requirements with the LEAST amount of operational effort? (Choose three.)
Correct Answer: C. Create an AWS Lambda function that uses the JSON document in combination with the event message to look up and respond with a redirect URL. E. Create an Amazon CloudFront distribution. Deploy a Lambda@Edge function. F. Create an SSL certificate by using AWS Certificate Manager (ACM). Include the domains as Subject Alternative Names.

A company that has multiple AWS accounts is using AWS Organizations. The company's AWS accounts host VPCs, Amazon EC2 instances, and containers. The company's compliance team has deployed a security tool in each VPC where the company has deployments. The security tools run on EC2 instances and send information to the AWS account that is dedicated for the compliance team. The company has tagged all the compliance-related resources with a key of "costCenter" and a value of "compliance". The company wants to identify the cost of the security tools that are running on the EC2 instances so that the company can charge the compliance team's AWS account. The cost calculation must be as accurate as possible. What should a solutions architect do to meet these requirements?
Correct Answer: A. In the management account of the organization, activate the costCenter user-defined tag. Configure monthly AWS Cost and Usage Reports to save to an Amazon S3 bucket in the management account. Use the tag breakdown in the report to obtain the total cost for the costCenter tagged resources.

A company has 50 AWS accounts that are members of an organization in AWS Organizations. Each account contains multiple VPCs. The company wants to use AWS Transit Gateway to establish connectivity between the VPCs in each member account. Each time a new member account is created, the company wants to automate the process of creating a new VPC and a transit gateway attachment. Which combination of steps will meet these requirements? (Choose two.)
Correct Answer: A. From the management account, share the transit gateway with member accounts by using AWS Resource Access Manager. C. Launch an AWS CloudFormation stack set from the management account that automatically creates a new VPC and a VPC transit gateway attachment in a member account. Associate the attachment with the transit gateway in the management account by using the transit gateway ID.

An enterprise company wants to allow its developers to purchase third-party software through AWS Marketplace. The company uses an AWS Organizations account structure with full features enabled, and has a shared services account in each organizational unit (OU) that will be used by procurement managers. The procurement team's policy indicates that developers should be able to obtain third-party software from an approved list only and use Private Marketplace in AWS Marketplace to achieve this requirement. The procurement team wants administration of Private Marketplace to be restricted to a role named procurement-manager-role, which could be assumed by procurement managers. Other IAM users, groups, roles, and account administrators in the company should be denied Private Marketplace administrative access. What is the MOST efficient way to design an architecture to meet these requirements?
Correct Answer: C. Create an IAM role named procurement-manager-role in all the shared services

A company is storing data on premises on a Windows file server. The company produces 5 GB of new data daily. The company migrated part of its Windows-based workload to AWS and needs the data to be available on a file system in the cloud. The company already has established an AWS Direct Connect connection between the on-premises network and AWS. Which data migration strategy should the company use?
Correct Answer: B. Use AWS DataSync to schedule a daily task to replicate data between the on-premises Windows file server and Amazon FSx.

A company's solutions architect is reviewing a web application that runs on AWS. The application references static assets in an Amazon S3 bucket in the us-east-1 Region. The company needs resiliency across multiple AWS Regions. The company already has created an S3 bucket in a second Region. Which solution will meet these requirements with the LEAST operational overhead?
Correct Answer: C. Configure replication on the S3 bucket in us-east-1 to replicate objects to the S3 bucket in the second Region. Set up an Amazon CloudFront distribution with an origin group that contains the two S3 buckets as origins.

A company is hosting a three-tier web application in an on-premises environment. Due to a recent surge in traffic that resulted in downtime and a significant financial impact, company management has ordered that the application be moved to AWS. The application is written in .NET and has a dependency on a MySQL database. A solutions architect must design a scalable and highly available solution to meet the demand of 200,000 daily users. Which steps should the solutions architect take to design an appropriate solution?
Correct Answer: B. Use AWS CloudFormation to launch a stack containing an Application Load Balancer (ALB) in front of an Amazon EC2 Auto Scaling group spanning three Availability Zones. The stack should launch a Multi-AZ deployment of an Amazon Aurora MySQL DB cluster with a Retain deletion policy. Use an Amazon Route 53 alias record to route traffic from the company's domain to the ALB.

A company is using AWS Organizations to manage multiple AWS accounts. For security purposes, the company requires the creation of an Amazon Simple Notification Service (Amazon SNS) topic that enables integration with a third-party alerting system in all the Organizations member accounts. A solutions architect used an AWS CloudFormation template to create the SNS topic and stack sets to automate the deployment of CloudFormation stacks. Trusted access has been enabled in Organizations. What should the solutions architect do to deploy the CloudFormation StackSets in all AWS accounts?
Correct Answer: C. Create a stack set in the Organizations management account. Use service-managed permissions. Set deployment options to deploy to the organization. Enable CloudFormation StackSets automatic deployment.

A company wants to migrate its workloads from on premises to AWS. The workloads run on Linux and Windows. The company has a large on-premises infrastructure that consists of physical machines and VMs that host numerous applications. The company must capture details about the system configuration, system performance, running processes, and network connections of its on-premises workloads. The company also must divide the on-premises applications into groups for AWS migrations. The company needs recommendations for Amazon EC2 instance types so that the company can run its workloads on AWS in the most cost-effective manner. Which combination of steps should a solutions architect take to meet these requirements? (Choose three.)
Correct Answer: A. Assess the existing applications by installing AWS Application Discovery Agent on the physical machines and VMs. D. Group servers into applications for migration by using AWS Migration Hub. E. Generate recommended instance types and associated costs by using AWS Migration Hub.

A company is hosting an image-processing service on AWS in a VPC. The VPC extends across two Availability Zones. Each Availability Zone contains one public subnet and one private subnet. The service runs on Amazon EC2 instances in the private subnets. An Application Load Balancer in the public subnets is in front of the service. The service needs to communicate with the internet and does so through two NAT gateways. The service uses Amazon S3 for image storage. The EC2 instances retrieve approximately 1 TB of data from an S3 bucket each day. The company has promoted the service as highly secure. A solutions architect must reduce cloud expenditures as much as possible without compromising the service's security posture or increasing the time spent on ongoing operations. Which solution will meet these requirements?
Correct Answer: C. Set up an S3 gateway VPC endpoint in the VPC. Attach an endpoint policy to the endpoint to allow the required actions on the S3 bucket.

A company recently deployed an application on AWS. The application uses Amazon DynamoDB. The company measured the application load and configured the RCUs and WCUs on the DynamoDB table to match the expected peak load. The peak load occurs once a week for a 4-hour period and is double the average load. The application load is close to the average load for the rest of the week. The access pattern includes many more writes to the table than reads of the table. A solutions architect needs to