
ExamsBrite Dumps

AWS Certified Solutions Architect - Professional Questions and Answers

AWS Certified Solutions Architect - Professional

Last Update Feb 28, 2026
Total Questions : 625

We are offering FREE SAP-C02 Amazon Web Services exam questions. Just sign up with your details and prepare with the free SAP-C02 exam questions. Then move on to the complete pool of AWS Certified Solutions Architect - Professional test questions for further practice.

Questions 1

A company is storing data in several Amazon DynamoDB tables. A solutions architect must use a serverless architecture to make the data accessible publicly through a simple API over HTTPS. The solution must scale automatically in response to demand.

Which solutions meet these requirements? (Choose two.)

Options:

A.  

Create an Amazon API Gateway REST API. Configure this API with direct integrations to DynamoDB by using API Gateway’s AWS integration type.

B.  

Create an Amazon API Gateway HTTP API. Configure this API with direct integrations to DynamoDB by using API Gateway’s AWS integration type.

C.  

Create an Amazon API Gateway HTTP API. Configure this API with integrations to AWS Lambda functions that return data from the DynamoDB tables.

D.  

Create an accelerator in AWS Global Accelerator. Configure this accelerator with AWS Lambda@Edge function integrations that return data from the DynamoDB tables.

E.  

Create a Network Load Balancer. Configure listener rules to forward requests to the appropriate AWS Lambda functions.
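
As context for option C above, here is a minimal sketch of a Lambda function that an API Gateway HTTP API could invoke to return DynamoDB data. The table name and the scan-based read are illustrative assumptions, not details from the question.

    import json
    import boto3

    # Hypothetical table name, for illustration only.
    TABLE = boto3.resource("dynamodb").Table("Products")

    def handler(event, context):
        # A Scan keeps the sketch short; a real API would usually Query by key.
        items = TABLE.scan(Limit=50).get("Items", [])
        # Response shape that API Gateway proxy integrations expect.
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps(items, default=str),
        }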

Questions 2

A company generates approximately 20 GB of data multiple times each day. The company uses AWS DataSync to copy all data from on-premises storage to Amazon S3 every 6 hours for further processing.

The analytics team wants to modify the copy process to copy only data relevant to the analytics team and ignore the rest of the data. The team wants to copy data as soon as possible and receive a notification when the copy process is finished.

Which combination of steps will meet these requirements MOST cost-effectively? (Select THREE.)

Options:

A.  

Modify the data generation process on premises to create a manifest file at the end of the copy process with the names of the objects to be copied to Amazon S3. Create a custom script to upload the manifest file to an S3 bucket.

B.  

Modify the data generation process on premises to create a manifest file at the end of the copy process with the names of the objects to be copied to Amazon S3. Create an AWS Lambda function to load the manifest file data into an Amazon DynamoDB table.

C.  

Create an AWS Lambda function that Amazon EventBridge invokes when the manifest file is loaded into Amazon DynamoDB. Configure the Lambda function to copy the data from on-premises storage to the S3 bucket by using the manifest file.

D.  

Create an AWS Lambda function that an S3 Event Notification invokes when the manifest file is uploaded. Configure the Lambda function to invoke the DataSync task by calling the StartTaskExecution API action with a manifest.

E.  

Create an Amazon SNS topic. Create an Amazon EventBridge rule to send an email notification to the SNS topic when the DataSync task execution status changes to SUCCESS or to ERROR.

F.  

Create an Amazon SNS topic. Create an AWS Lambda function to send an email notification to the SNS topic when the DataSync task execution status changes to SUCCESS or to ERROR.
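
Option D above pairs an S3 Event Notification with the DataSync StartTaskExecution API. A minimal sketch of that Lambda handler follows; the environment variable names are assumptions, and the ManifestConfig shape should be checked against the current DataSync API reference.

    import os
    import boto3

    datasync = boto3.client("datasync")

    def handler(event, context):
        # Invoked by the S3 Event Notification when the manifest file is uploaded.
        record = event["Records"][0]["s3"]
        datasync.start_task_execution(
            TaskArn=os.environ["TASK_ARN"],  # existing DataSync task (assumed)
            ManifestConfig={
                "Action": "TRANSFER",
                "Format": "CSV",
                "Source": {
                    "S3": {
                        "S3BucketArn": f"arn:aws:s3:::{record['bucket']['name']}",
                        "ManifestObjectPath": record["object"]["key"],
                        # Role that allows DataSync to read the manifest (assumed).
                        "BucketAccessRoleArn": os.environ["MANIFEST_ROLE_ARN"],
                    }
                },
            },
        )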

Questions 3

A company uses AWS Organizations. The company creates a central VPC in an AWS account that is designated for networking in a single AWS Region. The central VPC has an AWS Site-to-Site VPN connection to the company's on-premises network. A solutions architect must create another AWS account that uses the same networking resources that the central VPC uses.

Which solution meets these requirements MOST cost-effectively?

Options:

A.  

Create a VPC in the new AWS account. Create a new Site-to-Site VPN connection for the on-premises connection.

B.  

Use AWS Resource Access Manager to share the VPN connection in the central VPC with the new AWS account.

C.  

Create a VPC in the new AWS account. Configure a virtual private gateway to connect to the central VPC.

D.  

Use AWS Resource Access Manager to share the subnets in the central VPC with the new AWS account.
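
For option D above, sharing subnets through AWS Resource Access Manager might look like the following sketch. The subnet ARN and account ID are placeholders.

    import boto3

    ram = boto3.client("ram")

    ram.create_resource_share(
        name="central-vpc-subnets",
        resourceArns=["arn:aws:ec2:eu-west-1:111111111111:subnet/subnet-0abc1234"],
        principals=["222222222222"],  # the new AWS account
        allowExternalPrincipals=False,  # keep the share within the organization
    )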

Questions 4

A solutions architect is reviewing an application's resilience before launch. The application runs on an Amazon EC2 instance that is deployed in a private subnet of a VPC.

The EC2 instance is provisioned by an Auto Scaling group that has a minimum capacity of 1 and a maximum capacity of 1. The application stores data on an Amazon RDS for MySQL DB instance. The VPC has subnets configured in three Availability Zones and is configured with a single NAT gateway.

The solutions architect needs to recommend a solution to ensure that the application will operate across multiple Availability Zones.

Which solution will meet this requirement?

Options:

A.  

Deploy an additional NAT gateway in the other Availability Zones. Update the route tables with appropriate routes. Modify the RDS for MySQL DB instance to a Multi-AZ configuration. Configure the Auto Scaling group to launch instances across Availability Zones. Set the minimum capacity and maximum capacity of the Auto Scaling group to 3.

B.  

Replace the NAT gateway with a virtual private gateway. Replace the RDS for MySQL DB instance with an Amazon Aurora MySQL DB cluster. Configure the Auto Scaling group to launch instances across all subnets in the VPC. Set the minimum capacity and maximum capacity of the Auto Scaling group to 3.

C.  

Replace the NAT gateway with a NAT instance. Migrate the RDS for MySQL DB instance to an RDS for PostgreSQL DB instance. Launch a new EC2 instance in the other Availability Zones.

D.  

Deploy an additional NAT gateway in the other Availability Zones. Update the route tables with appropriate routes. Modify the RDS for MySQL DB instance to turn on automatic backups and retain the backups for 7 days. Configure the Auto Scaling group to launch instances across all subnets in the VPC. Keep the minimum capacity and the maximum capacity of the Auto Scaling group at 1.

Questions 5

A company runs a highly available data collection application on Amazon EC2 in the eu-north-1 Region. The application collects data from end-user devices and writes records to an Amazon Kinesis data stream and a set of AWS Lambda functions that process the records. The company persists the output of the record processing to an Amazon S3 bucket in eu-north-1. The company uses the data in the S3 bucket as a data source for Amazon Athena.

The company wants to increase its global presence. A solutions architect must launch the data collection capabilities in the sa-east-1 and ap-northeast-1 Regions. The solutions architect deploys the application, the Kinesis data stream, and the Lambda functions in the two new Regions. The solutions architect keeps the S3 bucket in eu-north-1 to meet a requirement to centralize the data analysis.

During testing of the new setup, the solutions architect notices a significant lag on the arrival of data from the new Regions to the S3 bucket.

Which solution will improve this lag time the MOST?

Options:

A.  

In each of the two new Regions, set up the Lambda functions to run in a VPC. Set up an S3 gateway endpoint in that VPC.

B.  

Turn on S3 Transfer Acceleration on the S3 bucket in eu-north-1. Change the application to use the new S3 accelerated endpoint when the application uploads data to the S3 bucket.

C.  

Create an S3 bucket in each of the two new Regions. Set the application in each new Region to upload to its respective S3 bucket. Set up S3 Cross-Region Replication to replicate data to the S3 bucket in eu-north-1.

D.  

Increase the memory requirements of the Lambda functions to ensure that they have multiple cores available. Use the multipart upload feature when the application uploads data to Amazon S3 from Lambda.

Questions 6

A company has multiple AWS accounts. The company recently had a security audit that revealed many unencrypted Amazon Elastic Block Store (Amazon EBS) volumes attached to Amazon EC2 instances.

A solutions architect must encrypt the unencrypted volumes and ensure that unencrypted volumes will be detected automatically in the future. Additionally, the company wants a solution that can centrally manage multiple AWS accounts with a focus on compliance and security.

Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)

Options:

A.  

Create an organization in AWS Organizations. Set up AWS Control Tower, and turn on the strongly recommended guardrails. Join all accounts to the organization. Categorize the AWS accounts into OUs.

B.  

Use the AWS CLI to list all the unencrypted volumes in all the AWS accounts. Run a script to encrypt all the unencrypted volumes in place.

C.  

Create a snapshot of each unencrypted volume. Create a new encrypted volume from the unencrypted snapshot. Detach the existing volume, and replace it with the encrypted volume.

D.  

Create an organization in AWS Organizations. Set up AWS Control Tower, and turn on the mandatory guardrails. Join all accounts to the organization. Categorize the AWS accounts into OUs.

E.  

Turn on AWS CloudTrail. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule to detect and automatically encrypt unencrypted volumes.
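
The re-encryption steps in option C above can be scripted. A sketch for a single volume, with assumed IDs:

    import boto3

    ec2 = boto3.client("ec2")

    # Snapshot the unencrypted volume (IDs are placeholders).
    snap = ec2.create_snapshot(VolumeId="vol-0abc1234", Description="pre-encryption copy")
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

    # Creating a volume from the unencrypted snapshot with Encrypted=True
    # yields an encrypted volume that can replace the original attachment.
    ec2.create_volume(
        SnapshotId=snap["SnapshotId"],
        AvailabilityZone="us-east-1a",
        Encrypted=True,  # uses the default KMS key unless KmsKeyId is given
    )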

Questions 7

A company is migrating to AWS and needs to inventory its physical and virtual servers, applications, and database relationships to properly rightsize and plan the migration.

Which solution will meet these requirements?

Options:

A.  

Use Migration Evaluator with Agentless Collector.

B.  

Use Migration Hub with Discovery Agent and Strategy Recommendations.

C.  

Use Migration Hub with Agentless Collector and Migration Service.

D.  

Use Migration Hub import tool.

Questions 8

A company has applications in an AWS account that is named Source. The account is in an organization in AWS Organizations. One of the applications uses AWS Lambda functions and stores inventory data in an Amazon Aurora database. The application deploys the Lambda functions by using a deployment package. The company has configured automated backups for Aurora.

The company wants to migrate the Lambda functions and the Aurora database to a new AWS account that is named Target. The application processes critical data, so the company must minimize downtime.

Which solution will meet these requirements?

Options:

A.  

Download the Lambda function deployment package from the Source account. Use the deployment package and create new Lambda functions in the Target account. Share the automated Aurora DB cluster snapshot with the Target account.

B.  

Download the Lambda function deployment package from the Source account. Use the deployment package and create new Lambda functions in the Target account. Share the Aurora DB cluster with the Target account by using AWS Resource Access Manager (AWS RAM). Grant the Target account permission to clone the Aurora DB cluster.

C.  

Use AWS Resource Access Manager (AWS RAM) to share the Lambda functions and the Aurora DB cluster with the Target account. Grant the Target account permission to clone the Aurora DB cluster.

D.  

Use AWS Resource Access Manager (AWS RAM) to share the Lambda functions with the Target account. Share the automated Aurora DB cluster snapshot with the Target account.
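
Sharing the Aurora snapshot with the Target account (options A and D above) is a two-step call sequence, because an automated snapshot must first be copied to a manual snapshot before it can be shared. A sketch with placeholder identifiers:

    import boto3

    rds = boto3.client("rds")

    # Automated snapshots cannot be shared directly; copy to a manual snapshot first.
    rds.copy_db_cluster_snapshot(
        SourceDBClusterSnapshotIdentifier="rds:inventory-cluster-2026-02-28-00-00",
        TargetDBClusterSnapshotIdentifier="inventory-cluster-manual-copy",
    )

    # Grant the Target account permission to restore from the manual snapshot.
    rds.modify_db_cluster_snapshot_attribute(
        DBClusterSnapshotIdentifier="inventory-cluster-manual-copy",
        AttributeName="restore",
        ValuesToAdd=["222222222222"],  # the Target account ID (placeholder)
    )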

Questions 9

A company runs a processing engine in the AWS Cloud. The engine processes environmental data from logistics centers to calculate a sustainability index. The company has millions of devices in logistics centers that are spread across Europe. The devices send information to the processing engine through a RESTful API.

The API experiences unpredictable bursts of traffic. The company must implement a solution to process all data that the devices send to the processing engine. Data loss is unacceptable.

Which solution will meet these requirements?

Options:

A.  

Create an Application Load Balancer (ALB) for the RESTful API. Create an Amazon Simple Queue Service (Amazon SQS) queue. Create a listener and a target group for the ALB. Add the SQS queue as the target. Use a container that runs in Amazon Elastic Container Service (Amazon ECS) with the Fargate launch type to process messages in the queue.

B.  

Create an Amazon API Gateway HTTP API that implements the RESTful API. Create an Amazon Simple Queue Service (Amazon SQS) queue. Create an API Gateway service integration with the SQS queue. Create an AWS Lambda function to process messages in the SQS queue.

C.  

Create an Amazon API Gateway REST API that implements the RESTful API. Create a fleet of Amazon EC2 instances in an Auto Scaling group. Create an API Gateway Auto Scaling group proxy integration. Use the EC2 instances to process incoming data.

D.  

Create an Amazon CloudFront distribution for the RESTful API. Create a data stream in Amazon Kinesis Data Streams. Set the data stream as the origin for the distribution. Create an AWS Lambda function to consume and process data in the data stream.
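
Option B above relies on the first-class HTTP API service integration with Amazon SQS, which buffers bursts so that no data is lost. A sketch of wiring it up, with placeholder IDs and ARNs:

    import boto3

    apigw = boto3.client("apigatewayv2")

    integration = apigw.create_integration(
        ApiId="a1b2c3",
        IntegrationType="AWS_PROXY",
        IntegrationSubtype="SQS-SendMessage",  # built-in HTTP API -> SQS integration
        PayloadFormatVersion="1.0",
        CredentialsArn="arn:aws:iam::111111111111:role/apigw-sqs-role",
        RequestParameters={
            "QueueUrl": "https://sqs.eu-west-1.amazonaws.com/111111111111/ingest-queue",
            "MessageBody": "$request.body",  # forward the device payload as-is
        },
    )
    apigw.create_route(
        ApiId="a1b2c3",
        RouteKey="POST /ingest",
        Target=f"integrations/{integration['IntegrationId']}",
    )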

Questions 10

A company plans to migrate a legacy on-premises application to AWS. The application is a Java web application that runs on Apache Tomcat with a PostgreSQL database.

The company does not have access to the source code but can deploy the application's Java Archive (JAR) files. The application has increased traffic at the end of each month.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.  

Launch Amazon EC2 instances in multiple Availability Zones. Deploy Tomcat and PostgreSQL to all the instances by using Amazon EFS mount points. Use AWS Step Functions to deploy additional EC2 instances to scale for increased traffic.

B.  

Provision Amazon EKS in an Auto Scaling group across multiple AWS Regions. Deploy Tomcat and PostgreSQL in the container images. Use a Network Load Balancer to scale for increased traffic.

C.  

Refactor the Java application into Python-based containers. Use AWS Lambda functions for the application logic. Store application data in Amazon DynamoDB global tables. Use AWS Storage Gateway and Lambda concurrency to scale for increased traffic.

D.  

Use AWS Elastic Beanstalk to deploy the Tomcat servers with auto scaling in multiple Availability Zones. Store application data in an Amazon RDS for PostgreSQL database. Deploy Amazon CloudFront and an Application Load Balancer to scale for increased traffic.

Questions 11

A company runs a software-as-a-service (SaaS) application.

Which solution meets these requirements?

Options:

A.  

Create an Amazon CloudWatch alarm action that triggers a Lambda function to add an Amazon RDS for MySQL read replica when resource utilization hits a threshold.

B.  

Migrate the database to Amazon Aurora, and add a read replica. Add a database connection pool outside of the Lambda handler function.

C.  

Migrate the database to Amazon Aurora and add a read replica. Use Amazon Route 53 weighted records.

D.  

Migrate the database to Amazon Aurora and add an Aurora Replica. Configure Amazon RDS Proxy to manage database connection pools.

Questions 12

A company has deployed an application on AWS Elastic Beanstalk. The application uses Amazon Aurora for the database layer. An Amazon CloudFront distribution serves web requests and includes the Elastic Beanstalk domain name as the origin server. The distribution is configured with an alternate domain name that visitors use when they access the application.

Each week, the company takes the application out of service for routine maintenance. During the time that the application is unavailable, the company wants visitors to receive an informational message instead of a CloudFront error message.

A solutions architect creates an Amazon S3 bucket as the first step in the process.

Which combination of steps should the solutions architect take next to meet the requirements? (Choose three.)

Options:

A.  

Upload static informational content to the S3 bucket.

B.  

Create a new CloudFront distribution. Set the S3 bucket as the origin.

C.  

Set the S3 bucket as a second origin in the original CloudFront distribution. Configure the distribution and the S3 bucket to use an origin access identity (OAI).

D.  

During the weekly maintenance, edit the default cache behavior to use the S3 origin. Revert the change when the maintenance is complete.

E.  

During the weekly maintenance, create a cache behavior for the S3 origin on the new distribution. Set the path pattern to *. Set the precedence to 0. Delete the cache behavior when the maintenance is complete.

F.  

During the weekly maintenance, configure Elastic Beanstalk to serve traffic from the S3 bucket.

Questions 13

A company has developed APIs that use Amazon API Gateway with Regional endpoints. The APIs call AWS Lambda functions that use API Gateway authentication mechanisms. After a design review, a solutions architect identifies a set of APIs that do not require public access.

The solutions architect must design a solution to make the set of APIs accessible only from a VPC. All APIs need to be called with an authenticated user.

Which solution will meet these requirements with the LEAST amount of effort?

Options:

A.  

Create an internal Application Load Balancer (ALB). Create a target group. Select the Lambda function to call. Use the ALB DNS name to call the API from the VPC.

B.  

Remove the DNS entry that is associated with the API in API Gateway. Create a hosted zone in Amazon Route 53. Create a CNAME record in the hosted zone. Update the API in API Gateway with the CNAME record. Use the CNAME record to call the API from the VPC.

C.  

Update the API endpoint from Regional to private in API Gateway. Create an interface VPC endpoint in the VPC. Create a resource policy, and attach it to the API. Use the VPC endpoint to call the API from the VPC.

D.  

Deploy the Lambda functions inside the VPC. Provision an EC2 instance, and install an Apache server. From the Apache server, call the Lambda functions. Use the internal CNAME record of the EC2 instance to call the API from the VPC.
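
A sketch of option C's moving parts above: an interface VPC endpoint, a resource policy that restricts invocation to that endpoint, and the endpoint-type change. All IDs are placeholders, and the patch paths should be verified against the API Gateway reference.

    import json
    import boto3

    ec2 = boto3.client("ec2")
    apigw = boto3.client("apigateway")

    # Interface VPC endpoint for API Gateway in the consuming VPC.
    vpce_id = ec2.create_vpc_endpoint(
        VpcId="vpc-0abc1234",
        VpcEndpointType="Interface",
        ServiceName="com.amazonaws.us-east-1.execute-api",
        SubnetIds=["subnet-0abc1234"],
        SecurityGroupIds=["sg-0abc1234"],
        PrivateDnsEnabled=True,
    )["VpcEndpoint"]["VpcEndpointId"]

    # Resource policy that allows invocation only through that endpoint.
    policy = json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
            "Condition": {"StringEquals": {"aws:SourceVpce": vpce_id}},
        }],
    })

    # Switch the endpoint type from REGIONAL to PRIVATE and attach the policy.
    apigw.update_rest_api(
        restApiId="abc123",
        patchOperations=[
            {"op": "replace", "path": "/endpointConfiguration/types/REGIONAL", "value": "PRIVATE"},
            {"op": "replace", "path": "/policy", "value": policy},
        ],
    )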

Questions 14

A company built an ecommerce website on AWS using a three-tier web architecture. The application is Java-based and composed of an Amazon CloudFront distribution, an Apache web server layer of Amazon EC2 instances in an Auto Scaling group, and a backend Amazon Aurora MySQL database.

Last month, during a promotional sales event, users reported errors and timeouts while adding items to their shopping carts. The operations team recovered the logs created by the web servers and reviewed Aurora DB cluster performance metrics. Some of the web servers were terminated before logs could be collected and the Aurora metrics were not sufficient for query performance analysis.

Which combination of steps must the solutions architect take to improve application performance visibility during peak traffic events? (Choose three.)

Options:

A.  

Configure the Aurora MySQL DB cluster to publish slow query and error logs to Amazon CloudWatch Logs.

B.  

Implement the AWS X-Ray SDK to trace incoming HTTP requests on the EC2 instances and implement tracing of SQL queries with the X-Ray SDK for Java.

C.  

Configure the Aurora MySQL DB cluster to stream slow query and error logs to Amazon Kinesis.

D.  

Install and configure an Amazon CloudWatch Logs agent on the EC2 instances to send the Apache logs to CloudWatch Logs.

E.  

Enable and configure AWS CloudTrail to collect and analyze application activity from Amazon EC2 and Aurora.

F.  

Enable Aurora MySQL DB cluster performance benchmarking and publish the stream to AWS X-Ray.
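
Option A above maps to a single API call on the DB cluster. A sketch with an assumed cluster identifier:

    import boto3

    rds = boto3.client("rds")

    rds.modify_db_cluster(
        DBClusterIdentifier="ecommerce-aurora-cluster",  # placeholder
        CloudwatchLogsExportConfiguration={
            "EnableLogTypes": ["slowquery", "error"],  # publish to CloudWatch Logs
        },
        ApplyImmediately=True,
    )

(The slow query and error logs must also be enabled in the DB cluster parameter group for Aurora to generate them.)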

Questions 15

A company has deployed production workloads on Amazon EC2 On-Demand Instances and Amazon RDS for PostgreSQL DB instances in multiple environments. The company has the AWS Business Support plan. A solutions architect must optimize the cost of the workloads without negatively affecting the availability or compute capacity of the workloads.

Which solution will meet these requirements?

Options:

A.  

Use AWS Cost and Usage Reports to analyze the most expensive instances and usage patterns. Use AWS Lambda to terminate underutilized instances. Purchase Compute Savings Plans for instances for highly utilized workloads.

B.  

Use AWS Budgets to track spending for each environment. Configure AWS Trusted Advisor cost optimization checks to rightsize instances. Create billing alerts in Amazon CloudWatch. Terminate underutilized instances. Purchase Reserved Instances for highly utilized workloads.

C.  

Opt in to AWS Compute Optimizer. Use Compute Optimizer and AWS Trusted Advisor to identify underutilized instances. Implement recommendations from Compute Optimizer, modify instance types, rightsize instances, and apply Auto Scaling groups. Purchase a Compute Savings Plan.

D.  

Use AWS Cost Explorer recommendations to rightsize underutilized instances. Create billing alerts in Amazon CloudWatch. Replace the EC2 On-Demand Instances with Spot Instances for underutilized instances. Stop any instances that are not in use.

Questions 16

A company is hosting an application on AWS for a project that will run for the next 3 years. The application consists of 20 Amazon EC2 On-Demand Instances that are registered in a target group for a Network Load Balancer (NLB). The instances are spread across two Availability Zones. The application is stateless and runs 24 hours a day, 7 days a week.

The company receives reports from users who are experiencing slow responses from the application. Performance metrics show that the instances are at 10% CPU utilization during normal application use. However, the CPU utilization increases to 100% at busy times, which typically last for a few hours.

The company needs a new architecture to resolve the problem of slow responses from the application.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.  

Create an Auto Scaling group. Attach the Auto Scaling group to the target group of the NLB. Set the minimum capacity to 20 and the desired capacity to 28. Purchase Reserved Instances for 20 instances.

B.  

Create a Spot Fleet that has a request type of request. Set the TotalTargetCapacity parameter to 20. Set the DefaultTargetCapacityType parameter to On-Demand. Specify the NLB when creating the Spot Fleet.

C.  

Create a Spot Fleet that has a request type of maintain. Set the TotalTargetCapacity parameter to 20. Set the DefaultTargetCapacityType parameter to Spot. Replace the NLB with an Application Load Balancer.

D.  

Create an Auto Scaling group. Attach the Auto Scaling group to the target group of the NLB. Set the minimum capacity to 4 and the maximum capacity to 28. Purchase Reserved Instances for four instances.

Questions 17

A company's CISO has asked a Solutions Architect to re-engineer the company's current CI/CD practices to make sure patch deployments to its applications can happen as quickly as possible with minimal downtime if vulnerabilities are discovered. The company must also be able to quickly roll back a change in case of errors.

The web application is deployed in a fleet of Amazon EC2 instances behind an Application Load Balancer. The company is currently using GitHub to host the application source code, and has configured an AWS CodeBuild project to build the application. The company also intends to use AWS CodePipeline to trigger builds from GitHub commits using the existing CodeBuild project.

What CI/CD configuration meets all of the requirements?

Options:

A.  

Configure CodePipeline with a deploy stage using AWS CodeDeploy configured for in-place deployment. Monitor the newly deployed code, and, if there are any issues, push another code update.

B.  

Configure CodePipeline with a deploy stage using AWS CodeDeploy configured for blue/green deployments. Monitor the newly deployed code, and, if there are any issues, trigger a manual rollback using CodeDeploy.

C.  

Configure CodePipeline with a deploy stage using AWS CloudFormation to create a pipeline for test and production stacks. Monitor the newly deployed code, and, if there are any issues, push another code update.

D.  

Configure the CodePipeline with a deploy stage using AWS OpsWorks and in-place deployments. Monitor the newly deployed code, and, if there are any issues, push another code update.

Questions 18


A company needs to migrate some Oracle databases to AWS while keeping others on-premises for compliance. The on-premises databases contain spatial data and run cron jobs. The solution must allow querying on-premises data as foreign tables from AWS.

Options:

A.  

Use DynamoDB, SCT, and Lambda. Move spatial data to S3 and query with Athena.

B.  

Use RDS for SQL Server and AWS Glue crawlers for Oracle access.

C.  

Use EC2-hosted Oracle with Application Migration Service. Use Step Functions for cron.

D.  

Use RDS for PostgreSQL with DMS and SCT. Use PostgreSQL foreign data wrappers. Connect via Direct Connect.

Questions 19

A large company runs workloads in VPCs that are deployed across hundreds of AWS accounts. Each VPC consists of public subnets and private subnets that span multiple Availability Zones. NAT gateways are deployed in the public subnets and allow outbound connectivity to the internet from the private subnets.

A solutions architect is working on a hub-and-spoke design. All private subnets in the spoke VPCs must route traffic to the internet through an egress VPC. The solutions architect already has deployed a NAT gateway in an egress VPC in a central AWS account.

Which set of additional steps should the solutions architect take to meet these requirements?

Options:

A.  

Create peering connections between the egress VPC and the spoke VPCs. Configure the required routing to allow access to the internet.

B.  

Create a transit gateway, and share it with the existing AWS accounts. Attach existing VPCs to the transit gateway. Configure the required routing to allow access to the internet.

C.  

Create a transit gateway in every account. Attach the NAT gateway to the transit gateways. Configure the required routing to allow access to the internet.

D.  

Create an AWS PrivateLink connection between the egress VPC and the spoke VPCs. Configure the required routing to allow access to the internet.

Questions 20


A company is deploying a new big data analytics cluster across multiple Availability Zones. All nodes must have read/write access to shared file storage that is highly available, POSIX-compatible, and high-throughput.

Options:

A.  

Use AWS Storage Gateway (file gateway) backed by Amazon S3

B.  

Use Amazon EFS in General Purpose performance mode

C.  

Use Amazon EBS with Multi-Attach

D.  

Use Amazon EFS with Max I/O performance mode

Questions 21

A company needs to gather data from an experiment in a remote location that does not have internet connectivity. During the experiment, sensors that are connected to a local network will generate 6 TB of data in a proprietary format over the course of 1 week. The sensors can be configured to upload their data files to an FTP server periodically, but the sensors do not have their own FTP server. The sensors also do not support other protocols. The company needs to collect the data centrally and move the data to object storage in the AWS Cloud as soon as possible after the experiment.

Which solution will meet these requirements?

Options:

A.  

Order an AWS Snowball Edge Compute Optimized device. Connect the device to the local network. Configure AWS DataSync with a target bucket name, and upload the data over NFS to the device. After the experiment, return the device to AWS so that the data can be loaded into Amazon S3.

B.  

Order an AWS Snowcone device, including an Amazon Linux 2 AMI. Connect the device to the local network. Launch an Amazon EC2 instance on the device. Create a shell script that periodically downloads data from each sensor. After the experiment, return the device to AWS so that the data can be loaded as an Amazon Elastic Block Store (Amazon EBS) volume.

C.  

Order an AWS Snowcone device, including an Amazon Linux 2 AMI. Connect the device to the local network. Launch an Amazon EC2 instance on the device. Install and configure an FTP server on the EC2 instance. Configure the sensors to upload data to the EC2 instance. After the experiment, return the device to AWS so that the data can be loaded into Amazon S3.

D.  

Order an AWS Snowcone device. Connect the device to the local network. Configure the device to use Amazon FSx. Configure the sensors to upload data to the device. Configure AWS DataSync on the device to synchronize the uploaded data with an Amazon S3 bucket. Return the device to AWS so that the data can be loaded as an Amazon Elastic Block Store (Amazon EBS) volume.

Questions 22

A company has an IoT data lake that is stored in Amazon S3. Data scientists in a separate AWS account need to analyze the data on Amazon EC2 instances in a VPC. Company policy requires that only authorized networks access the IoT data. The EC2 instances already have an IAM role that allows access to Amazon S3. An S3 access point exists on the data lake S3 bucket.

The company needs to provide secure access to the S3 data lake for the EC2 instances while complying with the policy that requires access from only authorized networks.

Which combination of steps will meet these requirements? (Select TWO.)

Options:

A.  

Create a gateway VPC endpoint for Amazon S3 in the data scientists’ VPC.

B.  

Update the S3 access point settings to block public access.

C.  

Update the EC2 instance role. Add a policy with a condition that denies the s3:GetObject action when the value for the s3:DataAccessPointArn condition key is a valid access point ARN.

D.  

Update the VPC route table to route S3 traffic to the S3 access point.

E.  

Add an S3 bucket policy with a condition that allows the s3:GetObject action when the value for the s3:DataAccessPointArn condition key is a valid access point ARN.
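
Options A and E above can be sketched as follows; the bucket name, access point ARN, and IDs are placeholders. The gateway endpoint keeps traffic on the AWS network, and the s3:DataAccessPointArn condition scopes reads to the existing access point.

    import json
    import boto3

    # Gateway VPC endpoint for Amazon S3 in the data scientists' VPC.
    boto3.client("ec2").create_vpc_endpoint(
        VpcId="vpc-0abc1234",
        VpcEndpointType="Gateway",
        ServiceName="com.amazonaws.us-east-1.s3",
        RouteTableIds=["rtb-0abc1234"],
    )

    # Bucket policy that allows s3:GetObject only through the access point.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::iot-data-lake/*",
            "Condition": {"StringEquals": {
                "s3:DataAccessPointArn":
                    "arn:aws:s3:us-east-1:111111111111:accesspoint/datalake-ap"
            }},
        }],
    }
    boto3.client("s3").put_bucket_policy(
        Bucket="iot-data-lake",
        Policy=json.dumps(policy),
    )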

Questions 23

A company is deploying a distributed in-memory database on a fleet of Amazon EC2 instances. The fleet consists of a primary node and eight worker nodes. The primary node is responsible for monitoring cluster health, accepting user requests, distributing user requests to worker nodes, and sending an aggregate response back to a client. Worker nodes communicate with each other to replicate data partitions.

The company requires the lowest possible networking latency to achieve maximum performance.

Which solution will meet these requirements?

Options:

A.  

Launch memory optimized EC2 instances in a partition placement group.

B.  

Launch compute optimized EC2 instances in a partition placement group.

C.  

Launch memory optimized EC2 instances in a cluster placement group.

D.  

Launch compute optimized EC2 instances in a spread placement group.

Questions 24

A company has an online learning platform that teaches data science. The platform uses the AWS Cloud to provision on-demand lab environments for its students. Each student receives a dedicated AWS account for a short time. Students need access to ml.p2.xlarge instances to run a single Amazon SageMaker machine learning training job and to deploy the inference endpoint.

Account provisioning is automated. The accounts are members of an organization in AWS Organizations with all features enabled. The accounts must be provisioned in the ap-southeast-2 Region. The default resource usage quotas are not sufficient for the accounts. A solutions architect must enhance the account provisioning process to include automated quota increases.

Which solution will meet these requirements?

Options:

A.  

Create a quota request template in the us-east-1 Region in the organization's management account. Enable template association. Add a quota for SageMaker in ap-southeast-2 for ml.p2.xlarge training job usage. Set the desired quota to 1. Add a quota for SageMaker in ap-southeast-2 for ml.p2.xlarge endpoint usage. Set the desired quota to 1.

B.  

Create a quota request template in the us-east-1 Region in the organization's management account. Enable template association. Add a quota for SageMaker in ap-southeast-2 for ml.p2.xlarge training warm pool usage. Set the desired quota to 2.

C.  

Create a quota request template in ap-southeast-2 in the organization's management account. Enable template association. Add a quota for SageMaker in the us-east-1 Region for ml.p2.xlarge training job usage. Set the desired quota to 1. Add a quota for SageMaker in us-east-1 for ml.p2.xlarge endpoint usage. Set the desired quota to 1.

D.  

Create a quota request template in ap-southeast-2 in the organization's management account. Enable template association. Add a quota for SageMaker in the us-east-1 Region for ml.p2.xlarge training warm pool usage. Set the desired quota to 2.
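
Options A and B above hinge on the Service Quotas quota request template, which is managed from us-east-1 in the management account. A sketch follows; the quota codes are invented placeholders and must be looked up with list_service_quotas(ServiceCode="sagemaker").

    import boto3

    sq = boto3.client("service-quotas", region_name="us-east-1")

    # One-time setup: turn on template association for the organization.
    sq.associate_service_quota_template()

    # Placeholder quota codes for ml.p2.xlarge training job and endpoint usage.
    for quota_code in ("L-TRAININGJOB-P2X", "L-ENDPOINT-P2X"):
        sq.put_service_quota_increase_request_into_template(
            ServiceCode="sagemaker",
            QuotaCode=quota_code,
            AwsRegion="ap-southeast-2",
            DesiredValue=1.0,
        )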

Questions 25

A team collects and routes behavioral data for an entire company. The company runs a Multi-AZ VPC environment with public subnets, private subnets, and an internet gateway. Each public subnet also contains a NAT gateway. Most of the company's applications read from and write to Amazon Kinesis Data Streams. Most of the workloads are in private subnets.

A solutions architect must review the infrastructure. The solutions architect needs to reduce costs and maintain the function of the applications. The solutions architect uses Cost Explorer and notices that the cost in the EC2-Other category is consistently high. A further review shows that NatGateway-Bytes charges are increasing the cost in the EC2-Other category.

What should the solutions architect do to meet these requirements?

Options:

A.  

Enable VPC Flow Logs. Use Amazon Athena to analyze the logs for traffic that can be removed. Ensure that security groups are blocking traffic that is responsible for high costs.

B.  

Add an interface VPC endpoint for Kinesis Data Streams to the VPC. Ensure that applications have the correct IAM permissions to use the interface VPC endpoint.

C.  

Enable VPC Flow Logs and Amazon Detective. Review Detective findings for traffic that is not related to Kinesis Data Streams. Configure security groups to block that traffic.

D.  

Add an interface VPC endpoint for Kinesis Data Streams to the VPC. Ensure that the VPC endpoint policy allows traffic from the applications.
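
Options B and D above both start from the same call: creating an interface VPC endpoint for Kinesis Data Streams so that traffic stops flowing through the NAT gateways. A sketch with placeholder IDs:

    import boto3

    ec2 = boto3.client("ec2")

    ec2.create_vpc_endpoint(
        VpcId="vpc-0abc1234",
        VpcEndpointType="Interface",
        ServiceName="com.amazonaws.eu-west-1.kinesis-streams",
        SubnetIds=["subnet-0abc1234", "subnet-0def5678"],  # one per AZ
        SecurityGroupIds=["sg-0abc1234"],
        PrivateDnsEnabled=True,  # existing Kinesis endpoints resolve privately
    )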

Questions 26

A software as a service (SaaS) company has developed a multi-tenant environment. The company uses Amazon DynamoDB tables that the tenants share for the storage layer. The company uses AWS Lambda functions for the application services.

The company wants to offer a tiered subscription model that is based on resource consumption by each tenant. Each tenant is identified by a unique tenant ID that is sent as part of each request to the Lambda functions. The company has created an AWS Cost and Usage Report (AWS CUR) in an AWS account. The company wants to allocate the DynamoDB costs to each tenant to match that tenant's resource consumption.

Which solution will provide a granular view of the DynamoDB cost for each tenant with the LEAST operational effort?

Options:

A.  

Associate a new tag that is named tenant ID with each table in DynamoDB. Activate the tag as a cost allocation tag in the AWS Billing and Cost Management console. Deploy new Lambda function code to log the tenant ID in Amazon CloudWatch Logs. Use the AWS CUR to separate DynamoDB consumption cost for each tenant ID.

B.  

Configure the Lambda functions to log the tenant ID and the number of RCUs and WCUs consumed from DynamoDB for each transaction to Amazon CloudWatch Logs. Deploy another Lambda function to calculate the tenant costs by using the logged capacity units and the overall DynamoDB cost from the AWS Cost Explorer API. Create an Amazon EventBridge rule to invoke the calculation Lambda function on a schedule.

C.  

Create a new partition key that associates DynamoDB items with individual tenants. Deploy a Lambda function to populate the new column as part of each transaction. Deploy another Lambda function to calculate the tenant costs by using Amazon Athena to calculate the number of tenant items from DynamoDB and the overall DynamoDB cost from the AWS CUR. Create an Amazon EventBridge rule to invoke the calculation Lambda function on a schedule.

D.  

Deploy a Lambda function to log the tenant ID, the size of each response, and the duration of the transaction call as custom metrics to Amazon CloudWatch Logs. Use CloudWatch Logs Insights to query the custom metrics for each tenant. Use AWS Pricing Calculator to obtain the overall DynamoDB costs and to calculate the tenant costs.

Questions 27

A company needs to store and process image data that will be uploaded from mobile devices using a custom mobile app. Usage peaks between 8 AM and 5 PM on weekdays, with thousands of uploads per minute. The app is rarely used at any other time. A user is notified when image processing is complete.

Which combination of actions should a solutions architect take to ensure image processing can scale to handle the load? (Select THREE.)

Options:

A.  

Upload files from the mobile software directly to Amazon S3. Use S3 event notifications to create a message in an Amazon MQ queue.

B.  

Upload files from the mobile software directly to Amazon S3. Use S3 event notifications to create a message in an Amazon Simple Queue Service (Amazon SQS) standard queue.

C.  

Invoke an AWS Lambda function to perform image processing when a message is available in the queue.

D.  

Invoke an S3 Batch Operations job to perform image processing when a message is available in the queue.

E.  

Send a push notification to the mobile app by using Amazon Simple Notification Service (Amazon SNS) when processing is complete.

F.  

Send a push notification to the mobile app by using Amazon Simple Email Service (Amazon SES) when processing is complete.

Questions 28

A company is migrating a document processing workload to AWS. Client applications upload documents to an Amazon S3 bucket for processing. A document processing engine runs on an Amazon EC2 Linux instance and requires Portable Operating System Interface (POSIX)-compliant file system access to read, generate, and modify files during processing. The processed documents must be automatically available in the S3 bucket for client applications to download.

The company cannot directly modify the document processing engine to use the S3 API. The company needs a solution that provides the EC2 instance with file system access. The solution must maintain automatic synchronization with the S3 bucket for both input and output files.

Which solution will meet these requirements?

Options:

A.  

Configure AWS DataSync to connect to the EC2 instance without an agent. Configure a DataSync task in enhanced mode to synchronize the processed documents to and from Amazon S3.

B.  

Configure an Amazon FSx for Lustre file system with import and export policies that are linked to the S3 bucket. Install the Lustre client on the EC2 instance and mount the file system.

C.  

Create an Amazon EFS file system. Set the data repository associations to the S3 bucket. Install the EFS client and mount the file system. Create an automatic import and export policy for new and changed objects.

D.  

Set up an Amazon S3 File Gateway. Initiate a RefreshCache API call to update the S3 File Gateway when changes occur in Amazon S3.

Questions 29

A company is running an application in the AWS Cloud. The application collects and stores a large amount of unstructured data in an Amazon S3 bucket. The S3 bucket contains several terabytes of data and uses the S3 Standard storage class. The data increases in size by several gigabytes every day.

The company needs to query and analyze the data. The company does not access data that is more than 1-year-old. However, the company must retain all the data indefinitely for compliance reasons.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.  

Use S3 Select to query the data. Create an S3 Lifecycle policy to transition data that is more than 1 year old to S3 Glacier Deep Archive.

B.  

Use Amazon Redshift Spectrum to query the data. Create an S3 Lifecycle policy to transition data that is more than 1 year old to S3 Glacier Deep Archive.

C.  

Use an AWS Glue Data Catalog and Amazon Athena to query the data. Create an S3 Lifecycle policy to transition data that is more than 1 year old to S3 Glacier Deep Archive.

D.  

Use Amazon Redshift Spectrum to query the data. Create an S3 Lifecycle policy to transition data that is more than 1 year old to S3 Intelligent-Tiering.
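
The lifecycle transition that options A, B, and C above share can be expressed in one call. A sketch with an assumed bucket name:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="analytics-data-bucket",  # placeholder
        LifecycleConfiguration={
            "Rules": [{
                "ID": "archive-after-one-year",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Transitions": [{
                    "Days": 365,
                    "StorageClass": "DEEP_ARCHIVE",  # S3 Glacier Deep Archive
                }],
            }],
        },
    )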

Questions 30


A company provisions short-lived AWS accounts for students. Each account needs access to ml.p2.xlarge SageMaker instances for training and inference. The default quotas are insufficient.

How should quota increases be automated during account provisioning?

Options:

A.  

Create a quota request template in us-east-1, enable template association, and add quotas for ml.p2.xlarge training and endpoint usage in ap-southeast-2.

B.  

Use ml.p2.xlarge training warm pool quota in ap-southeast-2.

C.  

Create the template in ap-southeast-2 for SageMaker quotas in us-east-1.

D.  

Use warm pool quotas in us-east-1.

Questions 31

A company uses AWS CloudFormation to deploy applications within multiple VPCs that are all attached to a transit gateway. Each VPC that sends traffic to the public internet must send the traffic through a shared services VPC. Each subnet within a VPC uses the default VPC route table, and the traffic is routed to the transit gateway. The transit gateway uses its default route table for any VPC attachment.

A security audit reveals that an Amazon EC2 instance that is deployed within a VPC can communicate with an EC2 instance that is deployed in any of the company's other VPCs. A solutions architect needs to limit the traffic between the VPCs. Each VPC must be able to communicate only with a predefined, limited set of authorized VPCs.

What should the solutions architect do to meet these requirements?

Options:

A.  

Update the network ACL of each subnet within a VPC to allow outbound traffic only to the authorized VPCs. Remove all deny rules except the default deny rule.

B.  

Update all the security groups that are used within a VPC to deny outbound traffic to security groups that are used within the unauthorized VPCs.

C.  

Create a dedicated transit gateway route table for each VPC attachment. Route traffic only to the authorized VPCs.

D.  

Update the main route table of each VPC to route traffic only to the authorized VPCs through the transit gateway.

Questions 32

A company has five development teams that have each created five AWS accounts to develop and host applications. To track spending, the development teams log in to each account every month, record the current cost from the AWS Billing and Cost Management console, and provide the information to the company's finance team.

The company has strict compliance requirements and needs to ensure that resources are created only in AWS Regions in the United States. However, some resources have been created in other Regions.

A solutions architect needs to implement a solution that gives the finance team the ability to track and consolidate expenditures for all the accounts. The solution also must ensure that the company can create resources only in Regions in the United States.

Which combination of steps will meet these requirements in the MOST operationally efficient way? (Select THREE.)

Options:

A.  

Create a new account to serve as a management account. Create an Amazon S3 bucket for the finance team. Use AWS Cost and Usage Reports to create monthly reports and to store the data in the finance team's S3 bucket.

B.  

Create a new account to serve as a management account. Deploy an organization in AWS Organizations with all features enabled. Invite all the existing accounts to the organization. Ensure that each account accepts the invitation.

C.  

Create an OU that includes all the development teams. Create an SCP that allows the creation of resources only in Regions that are in the United States. Apply the SCP to the OU.

D.  

Create an OU that includes all the development teams. Create an SCP that denies the creation of resources in Regions that are outside the United States. Apply the SCP to the OU.

E.  

Create an IAM role in the management account. Attach a policy that includes permissions to view the Billing and Cost Management console. Allow the finance team users to assume the role. Use AWS Cost Explorer and the Billing and Cost Management console to analyze cost.

F.  

Create an IAM role in each AWS account. Attach a policy that includes permissions to view the Billing and Cost Management console. Allow the finance team users to assume the role.
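
The SCP in option D above typically uses the aws:RequestedRegion condition key. A sketch, with the exempted global services and the OU ID as assumptions:

    import json
    import boto3

    scp = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyNonUSRegions",
            "Effect": "Deny",
            # Global services such as IAM and Organizations are commonly exempted.
            "NotAction": ["iam:*", "organizations:*", "support:*", "budgets:*"],
            "Resource": "*",
            "Condition": {"StringNotEquals": {
                "aws:RequestedRegion": ["us-east-1", "us-east-2", "us-west-1", "us-west-2"]
            }},
        }],
    }

    org = boto3.client("organizations")
    policy = org.create_policy(
        Name="us-regions-only",
        Description="Deny resource creation outside US Regions",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(scp),
    )
    org.attach_policy(
        PolicyId=policy["Policy"]["PolicySummary"]["Id"],
        TargetId="ou-abcd-12345678",  # placeholder OU ID
    )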

Questions 33

A retail company has an on-premises data center in Europe. The company also has a multi-Region AWS presence that includes the eu-west-1 and us-east-1 Regions. The company wants to be able to route network traffic from its on-premises infrastructure into VPCs in either of those Regions. The company also needs to support traffic that is routed directly between VPCs in those Regions. No single points of failure can exist on the network.

The company already has created two 1 Gbps AWS Direct Connect connections from its on-premises data center. Each connection goes into a separate Direct Connect location in Europe for high availability. These two locations are named DX-A and DX-B, respectively. Each Region has a single AWS Transit Gateway that is configured to route all inter-VPC traffic within that Region.

Which solution will meet these requirements?

Options:

A.  

Create a private VIF from the DX-A connection into a Direct Connect gateway. Create a private VIF from the DX-B connection into the same Direct Connect gateway for high availability. Associate both the eu-west-1 and us-east-1 transit gateways with the Direct Connect gateway. Peer the transit gateways with each other to support cross-Region routing.

B.  

Create a transit VIF from the DX-A connection into a Direct Connect gateway. Associate the eu-west-1 transit gateway with this Direct Connect gateway. Create a transit VIF from the DX-B connection into a separate Direct Connect gateway. Associate the us-east-1 transit gateway with this separate Direct Connect gateway. Peer the Direct Connect gateways with each other to support high availability and cross-Region routing.

C.  

Create a transit VIF from the DX-A connection into a Direct Connect gateway. Create a transit VIF from the DX-B connection into the same Direct Connect gateway for high availability. Associate both the eu-west-1 and us-east-1 transit gateways with this Direct Connect gateway. Configure the Direct Connect gateway to route traffic between the transit gateways.

D.  

Create a transit VIF from the DX-A connection into a Direct Connect gateway. Create a transit VIF from the DX-B connection into the same Direct Connect gateway for high availability. Associate both the eu-west-1 and us-east-1 transit gateways with this Direct Connect gateway. Peer the transit gateways with each other to support cross-Region routing.

Questions 34

A company is using Amazon OpenSearch Service to analyze data. The company loads data into an OpenSearch Service cluster with 10 data nodes from an Amazon S3 bucket that uses S3 Standard storage. The data resides in the cluster for 1 month for read-only analysis. After 1 month, the company deletes the index that contains the data from the cluster. For compliance purposes, the company must retain a copy of all input data.

The company is concerned about ongoing costs and asks a solutions architect to recommend a new solution.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.  

Replace all the data nodes with UltraWarm nodes to handle the expected capacity. Transition the input data from S3 Standard to S3 Glacier Deep Archive when the company loads the data into the cluster.

B.  

Reduce the number of data nodes in the cluster to 2. Add UltraWarm nodes to handle the expected capacity. Configure the indexes to transition to UltraWarm when OpenSearch Service ingests the data. Transition the input data to S3 Glacier Deep Archive after 1 month by using an S3 Lifecycle policy.

C.  

Reduce the number of data nodes in the cluster to 2. Add UltraWarm nodes to handle the expected capacity. Configure the indexes to transition to UltraWarm when OpenSearch Service ingests the data. Add cold storage nodes to the cluster. Transition the indexes from UltraWarm to cold storage. Delete the input data from the S3 bucket after 1 month by using an S3 Lifecycle policy.

D.  

Reduce the number of data nodes in the cluster to 2. Add instance-backed data nodes to handle the expected capacity. Transition the input data from S3 Standard to S3 Glacier Deep Archive when the company loads the data into the cluster.

Questions 35

A company hosts a web application on AWS in the us-east-1 Region. The application servers are distributed across three Availability Zones behind an Application Load Balancer. The database is hosted in a MySQL database on an Amazon EC2 instance. A solutions architect needs to design a cross-Region data recovery solution using AWS services with an RTO of less than 5 minutes and an RPO of less than 1 minute. The solutions architect is deploying application servers in us-west-2 and has configured Amazon Route 53 health checks and DNS failover to us-west-2.

Which additional step should the solutions architect take?

Options:

A.  

Migrate the database to an Amazon RDS for MySQL instance with a cross-Region read replica in us-west-2.

B.  

Migrate the database to an Amazon Aurora global database with the primary in us-east-1 and the secondary in us-west-2.

C.  

Migrate the database to an Amazon RDS for MySQL instance with a Multi-AZ deployment.

D.  

Create a MySQL standby database on an Amazon EC2 instance in us-west-2.

Questions 36

A retail company is mounting IoT sensors in all of its stores worldwide. During the manufacturing of each sensor, the company's private certificate authority (CA) issues an X.509 certificate that contains a unique serial number. The company then deploys each certificate to its respective sensor.

A solutions architect needs to give the sensors the ability to send data to AWS after they are installed. Sensors must not be able to send data to AWS until they are installed.

Which solution will meet these requirements?

Options:

A.  

Create an AWS Lambda function that can validate the serial number. Create an AWS IoT Core provisioning template. Include the SerialNumber parameter in the Parameters section. Add the Lambda function as a pre-provisioning hook. During manufacturing, call the RegisterThing API operation and specify the template and parameters.

B.  

Create an AWS Step Functions state machine that can validate the serial number. Create an AWS IoT Core provisioning template. Include the SerialNumber parameter in the Parameters section. Specify the Step Functions state machine to validate parameters. Call the StartThingRegistrationTask API operation during installation.

C.  

Create an AWS Lambda function that can validate the serial number. Create an AWS IoT Core provisioning template. Include the SerialNumber parameter in the Parameters section. Add the Lambda function as a pre-provisioning hook. Register the CA with AWS IoT Core, specify the provisioning template, and set the allow-auto-registration parameter.

D.  

Create an AWS IoT Core provisioning template. Include the SerialNumber parameter in the Parameters section. Include parameter validation in the template. Provision a claim certificate and a private key for each device that uses the CA. Grant AWS IoT Core service permissions to update AWS IoT things during provisioning.
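
The pre-provisioning hook in options A and C above is a Lambda function that inspects the template parameters and returns an allow/deny decision. A minimal sketch; the serial number registry is a stand-in for a real lookup of installed devices.

    # Stand-in for a registry of serial numbers of installed sensors.
    INSTALLED_SERIALS = {"SN-0001", "SN-0002"}

    def handler(event, context):
        # AWS IoT Core passes the provisioning template parameters in the event.
        serial = event.get("parameters", {}).get("SerialNumber", "")
        # Returning allowProvisioning=False rejects the device.
        return {"allowProvisioning": serial in INSTALLED_SERIALS}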

Questions 37

A financial services company has an asset management product that thousands of customers use around the world. The customers provide feedback about the product through surveys. The company is building a new analytical solution that runs on Amazon EMR to analyze the data from these surveys. The following user personas need to access the analytical solution to perform different actions:

• Administrator: Provisions the EMR cluster for the analytics team based on the team's requirements

• Data engineer: Runs ETL scripts to process, transform, and enrich the datasets

• Data analyst: Runs SQL and Hive queries on the data

A solutions architect must ensure that all the user personas have least privilege access to only the resources that they need. The user personas must be able to launch only applications that are approved and authorized. The solution also must ensure tagging for all resources that the user personas create.

Which solution will meet these requirements?

Options:

A.  

Create IAM roles for each user persona. Attach identity-based policies to define which actions the user who assumes the role can perform. Create an AWS Config rule to check for noncompliant resources. Configure the rule to notify the administrator to remediate the noncompliant resources.

B.  

Set up Kerberos-based authentication for EMR clusters upon launch. Specify a Kerberos security configuration along with cluster-specific Kerberos options.

C.  

Use AWS Service Catalog to control the Amazon EMR versions available for deployment, the cluster configuration, and the permissions for each user persona.

D.  

Launch the EMR cluster by using AWS CloudFormation. Attach resource-based policies to the EMR cluster during cluster creation. Create an AWS Config rule to check for noncompliant clusters and noncompliant Amazon S3 buckets. Configure the rule to notify the administrator to remediate the noncompliant resources.

Questions 38

A company is planning to migrate its applications from an on-premises data center to AWS. The on-premises data center has an AWS Direct Connect connection. The company needs to test IPv6 connectivity in the VPC so that the applications can communicate with more customers worldwide.

A solutions architect has created a VPC with an IPv6 CIDR block.

Which networking configurations will meet these requirements? (Select TWO.)

Options:

A.  

Launch an Amazon EC2 instance into a public subnet. Associate an IPv6 address with the instance during launch. Configure a security group, a network ACL, and route tables for IPv6 communication. Associate a virtual private gateway in the VPC with a Direct Connect gateway.

B.  

Launch an Amazon EC2 instance into a private subnet. Associate an IPv6 address with the instance during launch. Configure a security group, a network ACL, and route tables for IPv6 communication. Create a route that directs all IPv6 traffic from the private subnet to a NAT gateway.

C.  

Launch an Amazon EC2 instance into a public subnet. Associate an IPv6 address with the instance during launch. Configure a security group, a network ACL, and route tables for IPv6 communication. Create a route that directs all IPv6 traffic from the public subnet to an internet gateway.

D.  

Launch an Amazon EC2 instance into a private subnet. Associate an IPv6 address with the instance during launch. Configure a security group, a network ACL, and route tables for IPv6 communication. Create a route that directs all IPv6 traffic from the private subnet to a NAT instance.

E.  

Launch an Amazon EC2 instance into a private subnet. Associate an IPv6 address with the instance during launch. Configure a security group, a network ACL, and route tables for IPv6 communication. Create a route that directs all IPv6 traffic from the private subnet to an egress-only internet gateway.
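
Option E's egress-only internet gateway is the IPv6 counterpart of a NAT gateway: it permits outbound-only access from private subnets. A sketch with placeholder IDs:

    import boto3

    ec2 = boto3.client("ec2")

    eigw = ec2.create_egress_only_internet_gateway(VpcId="vpc-0abc1234")
    ec2.create_route(
        RouteTableId="rtb-0abc1234",  # route table of the private subnet
        DestinationIpv6CidrBlock="::/0",
        EgressOnlyInternetGatewayId=eigw["EgressOnlyInternetGateway"]["EgressOnlyInternetGatewayId"],
    )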

Questions 39

A global ecommerce company has many data centers worldwide. The company needs scalable cloud storage for legacy file applications. Requirements:

Must support iSCSI access from on-premises servers.

Must support point-in-time snapshots via AWS Backup.

Must retain low-latency access to frequently accessed data.

Which solution will meet these requirements?

Options:

A.  

Provision an AWS Storage Gateway tape gateway with S3 and AWS Backup.

B.  

Use Amazon FSx File Gateway and S3 File Gateway. Use AWS Backup.

C.  

Provision an AWS Storage Gateway volume gateway in cache mode. Back up the volumes using AWS Backup.

D.  

Provision an AWS Storage Gateway file gateway in cache mode. Use AWS Backup.

Discussion 0
Questions 40

A company has a serverless application composed of Amazon CloudFront, Amazon API Gateway, and AWS Lambda functions. The current deployment process for the application code is to create a new version of the Lambda function and run an AWS CLI script to update to the new version. If the new function version has errors, another CLI script reverts the deployment by redeploying the previous working version of the function. The company would like to decrease the time to deploy new versions of the application logic provided by the Lambda functions and to reduce the time to detect and revert when errors are identified.

How can this be accomplished?

Options:

A.  

Create and deploy nested AWS CloudFormation stacks with the parent stack consisting of the Amazon CloudFront distribution and API Gateway, and the child stack containing the Lambda function. For changes to Lambda, create an AWS CloudFormation change set and deploy; if errors are triggered, revert the AWS CloudFormation change set to the previous version.

B.  

Use AWS SAM and built-in AWS CodeDeploy to deploy the new Lambda version, gradually shift traffic to the new version, and use pre-traffic and post-traffic test functions to verify code. Rollback if Amazon CloudWatch alarms are triggered.

C.  

Refactor the AWS CLI scripts into a single script that deploys the new Lambda version. When the deployment is complete, the script executes tests. If errors are detected, the script reverts to the previous Lambda version.

D.  

Create and deploy an AWS CloudFormation stack that consists of a new API Gateway endpoint that references the new Lambda version. Change the CloudFront origin to the new API Gateway endpoint. Monitor errors, and if errors are detected, change the CloudFront origin to the previous API Gateway endpoint.

Discussion 0
Questions 41

A company is developing a new service that will be accessed using TCP on a static port. A solutions architect must ensure that the service is highly available, has redundancy across Availability Zones, and is accessible using the DNS name myservice.com, which is publicly accessible. The service must use fixed address assignments so that other companies can add the addresses to their allow lists.

Assuming that resources are deployed in multiple Availability Zones in a single Region, which solution will meet these requirements?

Options:

A.  

Create Amazon EC2 instances with an Elastic IP address for each instance. Create a Network Load Balancer (NLB) and expose the static TCP port. Register the EC2 instances with the NLB. Create a new name server record set named myservice.com, and assign the Elastic IP addresses of the EC2 instances to the record set. Provide the Elastic IP addresses of the EC2 instances to the other companies to add to their allow lists.

B.  

Create an Amazon ECS cluster and a service definition for the application. Create and assign public IP addresses for the ECS cluster. Create a Network Load Balancer (NLB) and expose the TCP port. Create a target group, and assign the ECS cluster name to the NLB. Create a new A record set named myservice.com, and assign the public IP addresses of the ECS cluster to the record set. Provide the public IP addresses of the ECS cluster to the other companies to add to their allow lists.

C.  

Create Amazon EC2 instances for the service. Create one Elastic IP address for each Availability Zone. Create a Network Load Balancer (NLB) and expose the assigned TCP port. Assign the Elastic IP addresses to the NLB for each Availability Zone. Create a target group, and register the EC2 instances with the NLB. Create a new A (alias) record set named myservice.com, and assign the NLB DNS name to the record set.

D.  

Create an Amazon ECS cluster and a service definition for the application. Create and assign a public IP address for each host in the cluster. Create an Application Load Balancer (ALB) and expose the static TCP port. Create a target group, and assign the ECS service definition name to the ALB. Create a new CNAME record set, and associate the public IP addresses with the record set. Provide the Elastic IP addresses of the Amazon EC2 instances to the other companies to add to their allow lists.
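
As a reference for the pattern in option C (an NLB with one fixed Elastic IP per Availability Zone), here is a minimal boto3 sketch; the load balancer name and subnet IDs are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")
elbv2 = boto3.client("elbv2")

# One Elastic IP per Availability Zone gives the service fixed addresses
# that other companies can add to their allow lists.
eip_a = ec2.allocate_address(Domain="vpc")["AllocationId"]
eip_b = ec2.allocate_address(Domain="vpc")["AllocationId"]

# Internet-facing Network Load Balancer with a static EIP in each AZ.
elbv2.create_load_balancer(
    Name="myservice-nlb",                      # hypothetical name
    Type="network",
    Scheme="internet-facing",
    SubnetMappings=[
        {"SubnetId": "subnet-az-a-EXAMPLE", "AllocationId": eip_a},
        {"SubnetId": "subnet-az-b-EXAMPLE", "AllocationId": eip_b},
    ],
)
```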

Discussion 0
Questions 42

An events company runs a ticketing platform on AWS. The company's customers configure and schedule their events on the platform. The events result in large increases of traffic to the platform. The company knows the date and time of each customer's events.

The company runs the platform on an Amazon Elastic Container Service (Amazon ECS) cluster. The ECS cluster consists of Amazon EC2 On-Demand Instances that are in an Auto Scaling group. The Auto Scaling group uses a predictive scaling policy.

The ECS cluster makes frequent requests to an Amazon S3 bucket to download ticket assets. The ECS cluster and the S3 bucket are in the same AWS Region and the same AWS account. Traffic between the ECS cluster and the S3 bucket flows across a NAT gateway.

The company needs to optimize the cost of the platform without decreasing the platform's availability.

Which combination of steps will meet these requirements? (Select TWO)

Options:

A.  

Create a gateway VPC endpoint for the S3 bucket.

B.  

Add another ECS capacity provider that uses an Auto Scaling group of Spot Instances. Configure the new capacity provider strategy to have the same weight as the existing capacity provider strategy.

C.  

Create On-Demand Capacity Reservations for the applicable instance type for the time period of the scheduled scaling policies.

D.  

Enable S3 Transfer Acceleration on the S3 bucket.

E.  

Replace the predictive scaling policy with scheduled scaling policies for the scheduled events.
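
For option A, a gateway VPC endpoint for S3 removes the NAT gateway data processing charges for S3 traffic. A minimal boto3 sketch, with hypothetical IDs and the us-east-1 service name as an example:

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoints for S3 carry no hourly or data processing charge,
# so S3 traffic no longer flows through the NAT gateway.
ec2.create_vpc_endpoint(
    VpcId="vpc-EXAMPLE",                          # hypothetical VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",     # adjust to the cluster's Region
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-private-EXAMPLE"],        # hypothetical route table ID
)
```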

Discussion 0
Questions 43

A company has a website that runs on four Amazon EC2 instances that are behind an Application Load Balancer (ALB). When the ALB detects that an EC2 instance is no longer available, an Amazon CloudWatch alarm enters the ALARM state. A member of the company's operations team then manually adds a new EC2 instance behind the ALB.

A solutions architect needs to design a highly available solution that automatically handles the replacement of EC2 instances. The company needs to minimize downtime during the switch to the new solution.

Which set of steps should the solutions architect take to meet these requirements?

Options:

A.  

Delete the existing ALB. Create an Auto Scaling group that is configured to handle the web application traffic. Attach a new launch template to the Auto Scaling group. Create a new ALB. Attach the Auto Scaling group to the new ALB. Attach the existing EC2 instances to the Auto Scaling group.

B.  

Create an Auto Scaling group that is configured to handle the web application traffic. Attach a new launch template to the Auto Scaling group. Attach the Auto Scaling group to the existing ALB. Attach the existing EC2 instances to the Auto Scaling group.

C.  

Delete the existing ALB and the EC2 instances. Create an Auto Scaling group that is configured to handle the web application traffic. Attach a new launch template to the Auto Scaling group. Create a new ALB. Attach the Auto Scaling group to the new ALB. Wait for the Auto Scaling group to launch the minimum number of EC2 instances.

D.  

Create an Auto Scaling group that is configured to handle the web application traffic. Attach a new launch template to the Auto Scaling group. Attach the Auto Scaling group to the existing ALB. Wait for the existing ALB to register the existing EC2 instances with the Auto Scaling group.

Discussion 0
Questions 44

A company uses a Grafana data visualization solution that runs on a single Amazon EC2 instance to monitor the health of the company's AWS workloads. The company has invested time and effort to create dashboards that the company wants to preserve. The dashboards need to be highly available and cannot be down for longer than 10 minutes. The company needs to minimize ongoing maintenance.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.  

Migrate to Amazon CloudWatch dashboards. Recreate the dashboards to match the existing Grafana dashboards. Use automatic dashboards where possible.

B.  

Create an Amazon Managed Grafana workspace. Configure a new Amazon CloudWatch data source. Export dashboards from the existing Grafana instance. Import the dashboards into the new workspace.

C.  

Create an AMI that has Grafana pre-installed. Store the existing dashboards in Amazon Elastic File System (Amazon EFS). Create an Auto Scaling group that uses the new AMI. Set the Auto Scaling group's minimum, desired, and maximum number of instances to one. Create an Application Load Balancer that serves at least two Availability Zones.

D.  

Configure AWS Backup to back up the EC2 instance that runs Grafana once each hour. Restore the EC2 instance from the most recent snapshot in an alternate Availability Zone when required.

Discussion 0
Questions 45

A company has an application that runs on Amazon EC2 instances in an Amazon EC2 Auto Scaling group. The company uses AWS CodePipeline to deploy the application. The instances that run in the Auto Scaling group are constantly changing because of scaling events.

When the company deploys new application code versions, the company installs the AWS CodeDeploy agent on any new target EC2 instances and associates the instances with the CodeDeploy deployment group. The application is set to go live within the next 24 hours.

What should a solutions architect recommend to automate the application deployment process with the LEAST amount of operational overhead?

Options:

A.  

Configure Amazon EventBridge to invoke an AWS Lambda function when a new EC2 instance is launched into the Auto Scaling group. Code the Lambda function to associate the EC2 instances with the CodeDeploy deployment group.

B.  

Write a script to suspend Amazon EC2 Auto Scaling operations before the deployment of new code. When the deployment is complete, create a new AMI and configure the Auto Scaling group's launch template to use the new AMI for new launches. Resume Amazon EC2 Auto Scaling operations.

C.  

Create a new AWS CodeBuild project that creates a new AMI that contains the new code. Configure CodeBuild to update the Auto Scaling group's launch template to the new AMI. Run an Amazon EC2 Auto Scaling instance refresh operation.

D.  

Create a new AMI that has the CodeDeploy agent installed. Configure the Auto Scaling group's launch template to use the new AMI. Associate the CodeDeploy deployment group with the Auto Scaling group instead of the EC2 instances.

Discussion 0
Questions 46

A software as a service (SaaS) company provides a case management solution to customers. As part of the solution, the company uses a standalone Simple Mail Transfer Protocol (SMTP) server to send email messages from an application. The application also stores an email template for acknowledgement email messages that the application populates with customer data before it sends the email message to the customer.

The company plans to migrate this messaging functionality to the AWS Cloud and needs to minimize operational overhead.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.  

Set up an SMTP server on Amazon EC2 instances by using an AMI from the AWS Marketplace. Store the email template in an Amazon S3 bucket. Create an AWS Lambda function to retrieve the template from the S3 bucket and to merge the customer data from the application with the template. Use an SDK in the Lambda function to send the email message.

B.  

Set up Amazon Simple Email Service (Amazon SES) to send email messages. Store the email template in an Amazon S3 bucket. Create an AWS Lambda function to retrieve the template from the S3 bucket and to merge the customer data from the application with the template. Use an SDK in the Lambda function to send the email message.

C.  

Set up an SMTP server on Amazon EC2 instances by using an AMI from the AWS Marketplace. Store the email template in Amazon Simple Email Service (Amazon SES) with parameters for the customer data. Create an AWS Lambda function to call the SES template and to pass customer data to replace the parameters. Use the AWS Marketplace SMTP server to send the email message.

D.  

Set up Amazon Simple Email Service (Amazon SES) to send email messages. Store the email template on Amazon SES with parameters for the customer data. Create an AWS Lambda function to call the SendTemplatedEmail API operation and to pass customer data to replace the parameters and the email destination.
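
To illustrate option D, here is a minimal boto3 sketch of the SendTemplatedEmail API operation; the template name, addresses, and data fields are hypothetical:

```python
import json

import boto3

ses = boto3.client("ses")

# The template is stored in SES; SES substitutes the TemplateData values
# into the template's placeholders before sending.
ses.send_templated_email(
    Source="no-reply@example.com",                     # hypothetical sender
    Destination={"ToAddresses": ["customer@example.com"]},
    Template="AcknowledgementTemplate",                # hypothetical template name
    TemplateData=json.dumps({"name": "Jane Doe", "case_id": "12345"}),
)
```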

Discussion 0
Questions 47

A company is running an application on premises. The application uses a set of web servers that host a static React-based single-page application (SPA), a Node.js API, and a MySQL database server. The database is read-intensive. The company will need to expand the database's storage at an unpredictable rate.

The company must migrate the application to AWS. The company also must modernize the architecture to reduce infrastructure management and increase scalability.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.  

Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon RDS for MySQL. Use AWS Application Migration Service to migrate the web application to a fleet of Amazon EC2 instances behind an Elastic Load Balancing (ELB) load balancer. Use a Spot Fleet with a request type of request to host the API.

B.  

Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon Aurora MySQL. Copy the web files to an Amazon S3 bucket and set up web hosting. Copy the API code to AWS Lambda functions. Configure Amazon API Gateway to point to the Lambda functions.

C.  

Use AWS Database Migration Service (AWS DMS) to migrate the database to a MySQL database that runs on Amazon EC2 instances. Use AWS DataSync to migrate the web files and API files to an Amazon FSx for Windows File Server file system. Set up a fleet of EC2 instances in an Auto Scaling group as web servers. Mount the FSx for Windows File Server file system.

D.  

Use AWS Application Migration Service to migrate the database to Amazon EC2 instances. Copy the web files to containers that run on Amazon Elastic Kubernetes Service (Amazon EKS). Set up an Elastic Load Balancing (ELB) load balancer for the EC2 instances and EKS containers. Copy the API code to AWS Lambda functions. Configure Amazon API Gateway to point to the Lambda functions.

Discussion 0
Questions 48

A company has developed a new release of a popular video game and wants to make it available for public download. The new release package is approximately 5 GB in size. The company provides downloads for existing releases from a Linux-based publicly facing FTP site hosted in an on-premises data center. The company expects the new release will be downloaded by users worldwide. The company wants a solution that provides improved download performance and low transfer costs regardless of a user's location.

Which solution will meet these requirements?

Options:

A.  

Store the game files on Amazon EBS volumes mounted on Amazon EC2 instances within an Auto Scaling group. Configure an FTP service on the EC2 instances. Use an Application Load Balancer in front of the Auto Scaling group. Publish the game download URL for users to download the package.

B.  

Store the game files on Amazon EFS volumes that are attached to Amazon EC2 instances within an Auto Scaling group. Configure an FTP service on each of the EC2 instances. Use an Application Load Balancer in front of the Auto Scaling group. Publish the game download URL for users to download the package.

C.  

Configure Amazon Route 53 and an Amazon S3 bucket for website hosting. Upload the game files to the S3 bucket. Use Amazon CloudFront for the website. Publish the game download URL for users to download the package.

D.  

Configure Amazon Route 53 and an Amazon S3 bucket for website hosting. Upload the game files to the S3 bucket. Set Requester Pays for the S3 bucket. Publish the game download URL for users to download the package.

Discussion 0
Questions 49

An international delivery company hosts a delivery management system on AWS. Drivers use the system to upload confirmation of delivery. Confirmation includes the recipient's signature or a photo of the package with the recipient. The driver's handheld device uploads signatures and photos through FTP to a single Amazon EC2 instance. Each handheld device saves a file in a directory based on the signed-in user, and the file name matches the delivery number. The EC2 instance then adds metadata to the file after querying a central database to pull delivery information. The file is then placed in Amazon S3 for archiving.

As the company expands, drivers report that the system is rejecting connections. The FTP server is having problems because of dropped connections and memory issues. In response to these problems, a system engineer schedules a cron task to reboot the EC2 instance every 30 minutes. The billing team reports that files are not always in the archive and that the central system is not always updated.

A solutions architect needs to design a solution that maximizes scalability to ensure that the archive always receives the files and that systems are always updated. The handheld devices cannot be modified, so the company cannot deploy a new application.

Which solution will meet these requirements?

Options:

A.  

Create an AMI of the existing EC2 instance. Create an Auto Scaling group of EC2 instances behind an Application Load Balancer. Configure the Auto Scaling group to have a minimum of three instances.

B.  

Use AWS Transfer Family to create an FTP server that places the files in Amazon Elastic File System (Amazon EFS). Mount the EFS volume to the existing EC2 instance. Point the EC2 instance to the new path for file processing.

C.  

Use AWS Transfer Family to create an FTP server that places the files in Amazon S3. Use an S3 event notification through Amazon Simple Notification Service (Amazon SNS) to invoke an AWS Lambda function. Configure the Lambda function to add the metadata and update the delivery system.

D.  

Update the handheld devices to place the files directly in Amazon S3. Use an S3 event notification through Amazon Simple Queue Service (Amazon SQS) to invoke an AWS Lambda function. Configure the Lambda function to add the metadata and update the delivery system.

Discussion 0
Questions 50

A company has an application that stores user-uploaded videos in an Amazon S3 bucket that uses S3 Standard storage. Users access the videos frequently in the first 180 days after the videos are uploaded. Access after 180 days is rare. Named users and anonymous users access the videos. Most of the videos are more than 100 MB in size. Users often have poor internet connectivity when they upload videos, resulting in failed uploads. The company uses multipart uploads for the videos.

A solutions architect needs to optimize the S3 costs of the application.

Which combination of actions will meet these requirements? (Select TWO.)

Options:

A.  

Configure the S3 bucket to be a Requester Pays bucket.

B.  

Use S3 Transfer Acceleration to upload the videos to the S3 bucket.

C.  

Create an S3 Lifecycle configuration to expire incomplete multipart uploads 7 days after initiation.

D.  

Create an S3 Lifecycle configuration to transition objects to S3 Glacier Instant Retrieval after 1 day.

E.  

Create an S3 Lifecycle configuration to transition objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 180 days.
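
For options C and E, both cost controls can be expressed as S3 Lifecycle rules. A minimal boto3 sketch, with a hypothetical bucket name:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="video-uploads-EXAMPLE",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {   # Option C: stop paying for parts of failed multipart uploads.
                "ID": "abort-incomplete-multipart-uploads",
                "Status": "Enabled",
                "Filter": {},
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            },
            {   # Option E: move rarely accessed videos to Standard-IA.
                "ID": "transition-to-standard-ia",
                "Status": "Enabled",
                "Filter": {},
                "Transitions": [{"Days": 180, "StorageClass": "STANDARD_IA"}],
            },
        ]
    },
)
```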

Discussion 0
Questions 51

A company is building an application on Amazon EMR to analyze data. The following user groups need to perform different actions:

• Administrator: Provision EMR clusters from different configurations.

• Data engineer: Create an EMR cluster from a small set of available configurations. Run ETL scripts to process data.

• Data analyst: Create an EMR cluster with a specific configuration. Run SQL queries and Apache Hive queries on the data.

A solutions architect must design a solution that gives each group the ability to launch only its authorized EMR configurations. The solution must provide the groups with least privilege access to only the resources that they need. The solution also must provide tagging for all resources that the groups create.

Which solution will meet these requirements?

Options:

A.  

Configure AWS Service Catalog to control the Amazon EMR versions available for deployment, the cluster configurations, and the permissions for each user group.

B.  

Configure Kerberos-based authentication for EMR clusters when the EMR clusters launch. Specify a Kerberos security configuration and cluster-specific Kerberos options.

C.  

Create IAM roles for each user group. Attach policies to the roles to define allowed actions for users. Create an AWS Config rule to check for noncompliant resources. Configure the rule to notify the company to address noncompliant resources.

D.  

Use AWS CloudFormation to launch EMR clusters with attached resource policies. Create an AWS Config rule to check for noncompliant resources. Configure the rule to notify the company to address noncompliant resources.

Discussion 0
Questions 52

An application uses Amazon CloudFront, AWS App Runner, and two Amazon S3 buckets: one for static assets and one for user-uploaded content. User content is infrequently accessed after 30 days. Users are located only in Europe.

How can the company optimize cost?

Options:

A.  

Expire S3 objects after 30 days.

B.  

Transition S3 content to S3 Glacier Deep Archive after 30 days.

C.  

Use Spot Instances with App Runner.

D.  

Add auto scaling to Aurora read replica.

E.  

Use CloudFront Price Class 200 (Europe and U.S. only).

Discussion 0
Questions 53

A company is building a software-as-a-service (SaaS) solution on AWS. The company has deployed an Amazon API Gateway REST API with AWS Lambda integration in multiple AWS Regions and in the same production account.

The company offers tiered pricing that gives customers the ability to pay for the capacity to make a certain number of API calls per second. The premium tier offers up to 3,000 calls per second, and customers are identified by a unique API key. Several premium tier customers in various Regions report that they receive error responses of 429 Too Many Requests from multiple API methods during peak usage hours. Logs indicate that the Lambda function is never invoked.

What could be the cause of the error messages for these customers?

Options:

A.  

The Lambda function reached its concurrency limit.

B.  

The Lambda function reached its Region limit for concurrency.

C.  

The company reached its API Gateway account limit for calls per second.

D.  

The company reached its API Gateway default per-method limit for calls per second.

Discussion 0
Questions 54

A company has a legacy application that runs on multiple .NET Framework components. The components share the same Microsoft SQL Server database and communicate with each other asynchronously by using Microsoft Message Queueing (MSMQ).

The company is starting a migration to containerized .NET Core components and wants to refactor the application to run on AWS. The .NET Core components require complex orchestration. The company must have full control over networking and host configuration. The application's database model is strongly relational.

Which solution will meet these requirements?

Options:

A.  

Host the .NET Core components on AWS App Runner. Host the database on Amazon RDS for SQL Server. Use Amazon EventBridge for asynchronous messaging.

B.  

Host the .NET Core components on Amazon Elastic Container Service (Amazon ECS) with the AWS Fargate launch type. Host the database on Amazon DynamoDB. Use Amazon Simple Notification Service (Amazon SNS) for asynchronous messaging.

C.  

Host the .NET Core components on AWS Elastic Beanstalk. Host the database on Amazon Aurora PostgreSQL Serverless v2. Use Amazon Managed Streaming for Apache Kafka (Amazon MSK) for asynchronous messaging.

D.  

Host the .NET Core components on Amazon Elastic Container Service (Amazon ECS) with the Amazon EC2 launch type. Host the database on Amazon Aurora MySQL Serverless v2. Use Amazon Simple Queue Service (Amazon SQS) for asynchronous messaging.

Discussion 0
Questions 55

A digital marketing company has multiple AWS accounts that belong to various teams. The creative team uses an Amazon S3 bucket in its AWS account to securely store images and media files that are used as content for the company's marketing campaigns. The creative team wants to share the S3 bucket with the strategy team so that the strategy team can view the objects.

A solutions architect has created an IAM role that is named strategy_reviewer in the Strategy account. The solutions architect also has set up a custom AWS Key Management Service (AWS KMS) key in the Creative account and has associated the key with the S3 bucket. However, when users from the Strategy account assume the IAM role and try to access objects in the S3 bucket, they receive an Access Denied error.

The solutions architect must ensure that users in the Strategy account can access the S3 bucket. The solution must provide these users with only the minimum permissions that they need.

Which combination of steps should the solutions architect take to meet these requirements? (Select THREE.)

Options:

A.  

Create a bucket policy that includes read permissions for the S3 bucket. Set the principal of the bucket policy to the account ID of the Strategy account.

B.  

Update the strategy_reviewer IAM role to grant full permissions for the S3 bucket and to grant decrypt permissions for the custom KMS key.

C.  

Update the custom KMS key policy in the Creative account to grant decrypt permissions to the strategy_reviewer IAM role.

D.  

Create a bucket policy that includes read permissions for the S3 bucket. Set the principal of the bucket policy to an anonymous user.

E.  

Update the custom KMS key policy in the Creative account to grant encrypt permissions to the strategy_reviewer IAM role.

F.  

Update the strategy_reviewer IAM role to grant read permissions for the S3 bucket and to grant decrypt permissions for the custom KMS key.
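
As a sketch of the two policy statements that options A and C describe, shown here as Python dicts; the account ID, bucket name, and ARNs are hypothetical placeholders:

```python
# Option A: bucket policy statement in the Creative account that lets the
# Strategy account (111122223333 is hypothetical) read objects.
bucket_policy_statement = {
    "Sid": "AllowStrategyAccountRead",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": [
        "arn:aws:s3:::creative-assets-EXAMPLE",
        "arn:aws:s3:::creative-assets-EXAMPLE/*",
    ],
}

# Option C: KMS key policy statement that lets the strategy_reviewer role
# decrypt objects encrypted with the custom key.
kms_key_policy_statement = {
    "Sid": "AllowStrategyReviewerDecrypt",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:role/strategy_reviewer"},
    "Action": ["kms:Decrypt", "kms:DescribeKey"],
    "Resource": "*",
}
```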

Discussion 0
Questions 56

A company creates an Amazon API Gateway API and shares the API with an external development team. The API uses AWS Lambda functions and is deployed to a stage that is named Production.

The external development team is the sole consumer of the API. The API experiences sudden increases of usage at specific times, leading to concerns about increased costs. The company needs to limit cost and usage without reworking the Lambda functions.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.  

Configure the API to send requests to Amazon SQS queues instead of directly to the Lambda functions. Update the Lambda functions to consume messages from the queues and to process the requests. Set up the queues to invoke the Lambda functions when new messages arrive.

B.  

Configure provisioned concurrency for each Lambda function. Use AWS Application Auto Scaling to register the Lambda functions as targets. Set up scaling schedules to increase and decrease capacity to match changes in API usage.

C.  

Create an API Gateway API key and an AWS WAF Regional web ACL. Associate the web ACL with the Production stage. Add a rate-based rule to the web ACL. In the rule, specify the rate limit and a custom request aggregation that uses the X-API-Key header. Share the API key with the external development team.

D.  

Create an API Gateway API key and usage plan. Define throttling limits and quotas in the usage plan. Associate the usage plan with the Production stage and the API key. Share the API key with the external development team.
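
A minimal boto3 sketch of the usage plan pattern in option D; the API ID, names, and limits are hypothetical examples:

```python
import boto3

apigw = boto3.client("apigateway")

# Throttling caps the request rate; the quota caps total monthly usage.
plan = apigw.create_usage_plan(
    name="external-team-plan",                         # hypothetical name
    throttle={"rateLimit": 100.0, "burstLimit": 200},  # example limits
    quota={"limit": 1000000, "period": "MONTH"},       # example quota
    apiStages=[{"apiId": "abc123defg", "stage": "Production"}],
)

# The API key identifies the external development team's traffic.
key = apigw.create_api_key(name="external-team-key", enabled=True)
apigw.create_usage_plan_key(
    usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY"
)
```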

Discussion 0
Questions 57

A company is replicating an application in a secondary AWS Region. The application in the primary Region reads from and writes to several Amazon DynamoDB tables. The application also reads customer data from an Amazon RDS for MySQL DB instance.

The company plans to use the secondary Region as part of a disaster recovery plan. The application in the secondary Region must function without dependencies on the primary Region.

Which solution will meet these requirements with the LEAST development effort?

Options:

A.  

Configure DynamoDB global tables. Replicate the required tables to the secondary Region. Create a read replica of the RDS DB instance in the secondary Region. Configure the secondary application to use the DynamoDB tables and the read replica in the secondary Region.

B.  

Use DynamoDB Accelerator (DAX) to cache the required tables in the secondary Region. Create a read replica of the RDS DB instance in the secondary Region. Configure the secondary application to use DAX and the read replica in the secondary Region.

C.  

Configure DynamoDB global tables. Replicate the required tables to the secondary Region. Enable Multi-AZ for the RDS DB instance. Configure the standby replica to be created in the secondary Region. Configure the secondary application to use the DynamoDB tables and the standby replica in the secondary Region.

D.  

Set up DynamoDB streams from the primary Region. Process the streams in the secondary Region to populate new DynamoDB tables. Create a read replica of the RDS DB instance in the secondary Region. Configure the secondary application to use the DynamoDB tables and the read replica in the secondary Region.

Discussion 0
Questions 58

A company has several AWS accounts. A development team is building an automation framework for cloud governance and remediation processes. The automation framework uses AWS Lambda functions in a centralized account. A solutions architect must implement a least privilege permissions policy that allows the Lambda functions to run in each of the company's AWS accounts.

Which combination of steps will meet these requirements? (Choose two.)

Options:

A.  

In the centralized account, create an IAM role that has the Lambda service as a trusted entity. Add an inline policy that allows the role to assume the roles in the other AWS accounts.

B.  

In the other AWS accounts, create an IAM role that has minimal permissions. Add the centralized account's Lambda IAM role as a trusted entity.

C.  

In the centralized account, create an IAM role that has roles of the other accounts as trusted entities. Provide minimal permissions.

D.  

In the other AWS accounts, create an IAM role that has permissions to assume the role of the centralized account. Add the Lambda service as a trusted entity.

E.  

In the other AWS accounts, create an IAM role that has minimal permissions. Add the Lambda service as a trusted entity.
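
To make the hub-and-spoke pattern in options A and B concrete, here are the two policy documents as Python dicts; all account IDs and role names are hypothetical:

```python
# Option B: trust policy on the minimal-permission role in each spoke
# account; only the centralized account's Lambda role may assume it.
spoke_role_trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/central-lambda-role"
            },
            "Action": "sts:AssumeRole",
        }
    ],
}

# Option A: inline policy on the centralized Lambda role that allows it to
# assume the spoke roles across the organization's accounts.
central_role_inline_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::*:role/governance-spoke-role",
        }
    ],
}
```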

Discussion 0
Questions 59

A company manages hundreds of AWS accounts centrally in an organization in AWS Organizations. The company recently started to allow product teams to create and manage their own S3 access points in their accounts. The S3 access points can be accessed only from within VPCs, not from the internet.

What is the MOST operationally efficient way to enforce this requirement?

Options:

A.  

Set the S3 access point resource policy to deny the s3:CreateAccessPoint action unless the s3:AccessPointNetworkOrigin condition key evaluates to VPC.

B.  

Create an SCP at the root level in the organization to deny the s3:CreateAccessPoint action unless the s3:AccessPointNetworkOrigin condition key evaluates to VPC.

C.  

Use AWS CloudFormation StackSets to create a new IAM policy in each AWS account that allows the s3:CreateAccessPoint action only if the s3:AccessPointNetworkOrigin condition key evaluates to VPC.

D.  

Set the S3 bucket policy to deny the s3:CreateAccessPoint action unless the s3:AccessPointNetworkOrigin condition key evaluates to VPC.
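
For option B, the SCP could look like the following boto3 sketch; the policy name and description are hypothetical:

```python
import json

import boto3

org = boto3.client("organizations")

# Deny access point creation anywhere in the organization unless the
# access point's network origin is VPC.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "s3:CreateAccessPoint",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"s3:AccessPointNetworkOrigin": "VPC"}
            },
        }
    ],
}

org.create_policy(
    Name="require-vpc-only-access-points",     # hypothetical name
    Description="S3 access points must originate from a VPC",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
```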

Discussion 0
Questions 60

A company is building a serverless application that runs on an AWS Lambda function that is attached to a VPC. The company needs to integrate the application with a new service from an external provider. The external provider supports only requests that come from public IPv4 addresses that are in an allow list.

The company must provide a single public IP address to the external provider before the application can start using the new service.

Which solution will give the application the ability to access the new service?

Options:

A.  

Deploy a NAT gateway. Associate an Elastic IP address with the NAT gateway. Configure the VPC to use the NAT gateway.

B.  

Deploy an egress-only internet gateway. Associate an Elastic IP address with the egress-only internet gateway. Configure the elastic network interface on the Lambda function to use the egress-only internet gateway.

C.  

Deploy an internet gateway. Associate an Elastic IP address with the internet gateway. Configure the Lambda function to use the internet gateway.

D.  

Deploy an internet gateway. Associate an Elastic IP address with the internet gateway. Configure the default route in the public VPC route table to use the internet gateway.
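
A minimal boto3 sketch of the plumbing behind option A; subnet and route table IDs are hypothetical. The Elastic IP on the NAT gateway is the single public IPv4 address to hand to the external provider:

```python
import boto3

ec2 = boto3.client("ec2")

# The NAT gateway itself must live in a public subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-public-EXAMPLE",        # hypothetical public subnet
    AllocationId=eip["AllocationId"],
)

# Route the Lambda function's private subnet through the NAT gateway so all
# outbound traffic leaves from one fixed public IP.
ec2.create_route(
    RouteTableId="rtb-private-EXAMPLE",      # hypothetical route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)

print("IP to allow-list:", eip["PublicIp"])
```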

Discussion 0
Questions 61

A company wants to migrate its website from an on-premises data center onto AWS. At the same time, it wants to migrate the website to a containerized microservice-based architecture to improve the availability and cost efficiency. The company's security policy states that privileges and network permissions must be configured according to best practice, using least privilege.

A Solutions Architect must create a containerized architecture that meets the security requirements and has deployed the application to an Amazon ECS cluster.

What steps are required after the deployment to meet the requirements? (Choose two.)

Options:

A.  

Create tasks using the bridge network mode.

B.  

Create tasks using the awsvpc network mode.

C.  

Apply security groups to Amazon EC2 instances, and use IAM roles for EC2 instances to access other resources.

D.  

Apply security groups to the tasks, and pass IAM credentials into the container at launch time to access other resources.

E.  

Apply security groups to the tasks, and use IAM roles for tasks to access other resources.

Discussion 0
Questions 62

A company is designing an AWS environment for a manufacturing application. The application has been successful with customers, and the application's user base has increased. The company has connected the AWS environment to the company's on-premises data center through a 1 Gbps AWS Direct Connect connection. The company has configured BGP for the connection.

The company must update the existing network connectivity solution to ensure that the solution is highly available, fault tolerant, and secure.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.  

Add a dynamic private IP AWS Site-to-Site VPN as a secondary path to secure data in transit and provide resilience for the Direct Connect connection. Configure MACsec to encrypt traffic inside the Direct Connect connection.

B.  

Provision another Direct Connect connection between the company's on-premises data center and AWS to increase the transfer speed and provide resilience. Configure MACsec to encrypt traffic inside the Direct Connect connection.

C.  

Configure multiple private VIFs. Load balance data across the VIFs between the on-premises data center and AWS to provide resilience.

D.  

Add a static AWS Site-to-Site VPN as a secondary path to secure data in transit and to provide resilience for the Direct Connect connection.

Discussion 0
Questions 63

A company uses AWS Organizations to manage its development environment. Each development team at the company has its own AWS account. Each account has a single VPC, and the CIDR blocks do not overlap.

The company has an Amazon Aurora DB cluster in a shared services account. All the development teams need to work with live data from the DB cluster.

Which solution will provide the required connectivity to the DB cluster with the LEAST operational overhead?

Options:

A.  

Create an AWS Resource Access Manager (AWS RAM) resource share for the DB cluster. Share the DB cluster with all the development accounts.

B.  

Create a transit gateway in the shared services account. Create an AWS Resource Access Manager (AWS RAM) resource share for the transit gateway. Share the transit gateway with all the development accounts. Instruct the developers to accept the resource share. Configure networking.

C.  

Create an Application Load Balancer (ALB) that points to the IP address of the DB cluster. Create an AWS PrivateLink endpoint service that uses the ALB. Add permissions to allow each development account to connect to the endpoint service.

D.  

Create an AWS Site-to-Site VPN connection in the shared services account. Configure networking. Use AWS Marketplace VPN software in each development account to connect to the Site-to-Site VPN connection.

Discussion 0
Questions 64

A company has migrated its forms-processing application to AWS. When users interact with the application, they upload scanned forms as files through a web application. A database stores user metadata and references to files that are stored in Amazon S3. The web application runs on Amazon EC2 instances and an Amazon RDS for PostgreSQL database.

When forms are uploaded, the application sends notifications to a team through Amazon Simple Notification Service (Amazon SNS). A team member then logs in and processes each form. The team member performs data validation on the form and extracts relevant data before entering the information into another system that uses an API.

A solutions architect needs to automate the manual processing of the forms. The solution must provide accurate form extraction, minimize time to market, and minimize long-term operational overhead.

Which solution will meet these requirements?

Options:

A.  

Develop custom libraries to perform optical character recognition (OCR) on the forms. Deploy the libraries to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster as an application tier. Use this tier to process the forms when forms are uploaded. Store the output in Amazon S3. Parse this output by extracting the data into an Amazon DynamoDB table. Submit the data to the target system's API. Host the new application tier on EC2 instances.

B.  

Extend the system with an application tier that uses AWS Step Functions and AWS Lambda. Configure this tier to use artificial intelligence and machine learning (AI/ML) models that are trained and hosted on an EC2 instance to perform optical character recognition (OCR) on the forms when forms are uploaded. Store the output in Amazon S3. Parse this output by extracting the data that is required within the application tier. Submit the data to the target system's API.

C.  

Host a new application tier on EC2 instances. Use this tier to call endpoints that host artificial intelligence and machine learning (Al/ML) models that are trained and hosted in Amazon SageMaker to perform optical character recognition (OCR) on the forms. Store the output in Amazon ElastiCache. Parse this output by extracting the data that is required within the application tier. Submit the data to the target system's API.

D.  

Extend the system with an application tier that uses AWS Step Functions and AWS Lambda. Configure this tier to use Amazon Textract and Amazon Comprehend to perform optical character recognition (OCR) on the forms when forms are uploaded. Store the output in Amazon S3. Parse this output by extracting the data that is required within the application tier. Submit the data to the target system's API.

Discussion 0
Questions 65

A company's solutions architect needs to provide secure Remote Desktop connectivity to users for Amazon EC2 Windows instances that are hosted in a VPC. The solution must integrate centralized user management with the company's on-premises Active Directory. Connectivity to the VPC is through the internet. The company has hardware that can be used to establish an AWS Site-to-Site VPN connection.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.  

Deploy a managed Active Directory by using AWS Directory Service for Microsoft Active Directory. Establish a trust with the on-premises Active Directory. Deploy an EC2 instance as a bastion host in the VPC. Ensure that the EC2 instance is joined to the domain. Use the bastion host to access the target instances through RDP.

B.  

Configure AWS IAM Identity Center (AWS Single Sign-On) to integrate with the on-premises Active Directory by using the AWS Directory Service AD Connector. Configure permission sets against user groups for access to AWS Systems Manager. Use Systems Manager Fleet Manager to access the target instances through RDP.

C.  

Implement a VPN between the on-premises environment and the target VPC. Ensure that the target instances are joined to the on-premises Active Directory domain over the VPN connection. Configure RDP access through the VPN. Connect from the company's network to the target instances.

D.  

Deploy a managed Active Directory by using AWS Directory Service for Microsoft Active Directory. Establish a trust with the on-premises Active Directory. Deploy a Remote Desktop Gateway on AWS by using an AWS Quick Start. Ensure that the Remote Desktop Gateway is joined to the domain. Use the Remote Desktop Gateway to access the target instances through RDP.

Discussion 0
Questions 66

A company has a latency-sensitive trading platform that uses Amazon DynamoDB as a storage backend. The company configured the DynamoDB table to use on-demand capacity mode. A solutions architect needs to design a solution to improve the performance of the trading platform. The new solution must ensure high availability for the trading platform.

Which solution will meet these requirements with the LEAST latency?

Options:

A.  

Create a two-node DynamoDB Accelerator (DAX) cluster. Configure an application to read and write data by using DAX.

B.  

Create a three-node DynamoDB Accelerator (DAX) cluster. Configure an application to read data by using DAX and to write data directly to the DynamoDB table.

C.  

Create a three-node DynamoDB Accelerator (DAX) cluster. Configure an application to read data directly from the DynamoDB table and to write data by using DAX.

D.  

Create a single-node DynamoDB Accelerator (DAX) cluster. Configure an application to read data by using DAX and to write data directly to the DynamoDB table.

Discussion 0
Questions 67

A company is updating an application that customers use to make online orders. The number of attacks on the application by bad actors has increased recently.

The company will host the updated application on an Amazon Elastic Container Service (Amazon ECS) cluster. The company will use Amazon DynamoDB to store application data. A public Application Load Balancer (ALB) will provide end users with access to the application. The company must prevent attacks and ensure business continuity with minimal service interruptions during an ongoing attack.

Which combination of steps will meet these requirements MOST cost-effectively? (Select TWO.)

Options:

A.  

Create an Amazon CloudFront distribution with the ALB as the origin. Add a custom header and random value on the CloudFront domain. Configure the ALB to conditionally forward traffic if the header and value match.

B.  

Deploy the application in two AWS Regions. Configure Amazon Route 53 to route to both Regions with equal weight.

C.  

Configure auto scaling for Amazon ECS tasks. Create a DynamoDB Accelerator (DAX) cluster.

D.  

Configure Amazon ElastiCache to reduce overhead on DynamoDB.

E.  

Deploy an AWS WAF web ACL that includes an appropriate rule group. Associate the web ACL with the Amazon CloudFront distribution.

Discussion 0
Questions 68

A company runs an ecommerce web application on AWS. The static website is hosted on Amazon S3 and served via Amazon CloudFront. API Gateway invokes AWS Lambda for order processing, and Lambda stores data in an Amazon RDS for MySQL DB cluster (On-Demand Instances).

Recently, SQL injection attacks and latency during peak times (cold starts) have been reported. The company wants to ensure scalability, protect against web exploits, and reduce database costs.

Which solution will meet these requirements?

Options:

A.  

Increase Lambda timeout, use RDS Reserved Instances, and use AWS Shield Advanced

B.  

Increase Lambda memory, switch to Redshift, use Amazon Inspector

C.  

Use provisioned concurrency, switch to Aurora Serverless, use AWS Shield Advanced

D.  

Use provisioned concurrency, use RDS Reserved Instances, use AWS WAF with CloudFront

Discussion 0
Questions 69

A company implements a containerized application by using Amazon Elastic Container Service (Amazon ECS) and Amazon API Gateway. The application data is stored in Amazon Aurora databases and Amazon DynamoDB databases. The company automates infrastructure provisioning by using AWS CloudFormation. The company automates application deployment by using AWS CodePipeline.

A solutions architect needs to implement a disaster recovery (DR) strategy that meets an RPO of 2 hours and an RTO of 4 hours.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.  

Set up an Aurora global database and DynamoDB global tables to replicate the databases to a secondary AWS Region. In the primary Region and in the secondary Region, configure an API Gateway API with a Regional endpoint. Implement Amazon CloudFront with origin failover to route traffic to the secondary Region during a DR scenario.

B.  

Use AWS Database Migration Service (AWS DMS), Amazon EventBridge, and AWS Lambda to replicate the Aurora databases to a secondary AWS Region. Use DynamoDB Streams, EventBridge, and Lambda to replicate the DynamoDB databases to the secondary Region. In the primary Region and in the secondary Region, configure an API Gateway API with a Regional endpoint. Implement Amazon Route 53 failover routing to switch traffic from the primary Region to the secondary Region.

C.  

Use AWS Backup to create backups of the Aurora databases and the DynamoDB databases in a secondary AWS Region. In the primary Region and in the secondary Region, configure an API Gateway API with a Regional endpoint. Implement Amazon Route 53 failover routing to switch traffic from the primary Region to the secondary Region.

D.  

Set up an Aurora global database and DynamoDB global tables to replicate the databases to a secondary AWS Region. In the primary Region and in the secondary Region, configure an API Gateway API with a Regional endpoint. Implement Amazon Route 53 failover routing to switch traffic from the primary Region to the secondary Region.

Discussion 0
Questions 70

A company runs a website on Amazon ECS containers that use the AWS Fargate launch type. The company configures AWS Application Auto Scaling by using a target tracking scaling policy. The company sets the request count as the scaling metric. An Application Load Balancer (ALB) serves traffic to the ECS containers. The website serves images on request and resizes the images to a predefined size to match the viewers' screens. After the website resizes an image, the website caches the image locally in a container and serves subsequent requests from the cache.

During periods of high traffic, the company observed that images load slowly and with high latency. The company wants to minimize the latency to serve images.

Which solution will meet this requirement with the LEAST operational overhead?

Options:

A.  

Create a new Amazon CloudFront distribution and an Amazon S3 bucket. Set the ALB as one origin for the distribution and the S3 bucket as a second origin. Configure a cache behavior that routes image requests to the S3 origin, and configure a default cache behavior for the ALB origin. Pre-scale all images and upload the images to the S3 bucket.

B.  

Create an Amazon ElastiCache (Memcached) cluster. Update the application to read and write the resized images to the ElastiCache (Memcached) cluster by using the image name and size as the key.

C.  

Create an Amazon Aurora cluster and an Amazon S3 bucket. Update the application to store resized images in the S3 bucket and to store a cache key in the Aurora cluster. Configure the application to load the cache key from the Aurora cluster and to serve images from the S3 bucket.

D.  

Create an Amazon API Gateway HTTP API and enable API request caching. Replace the ALB with the HTTP API and remove the local caching in the application code.

Discussion 0
Questions 71

A company runs its application on Amazon EC2 instances and AWS Lambda functions. The EC2 instances experience a continuous and stable load. The Lambda functions experience a varied and unpredictable load. The application includes a caching layer that uses an Amazon MemoryDB for Redis cluster.

A solutions architect must recommend a solution to minimize the company's overall monthly costs.

Which solution will meet these requirements?

Options:

A.  

Purchase an EC2 Instance Savings Plan to cover the EC2 instances. Purchase a Compute Savings Plan for Lambda to cover the minimum expected consumption of the Lambda functions. Purchase reserved nodes to cover the MemoryDB cache nodes.

B.  

Purchase a Compute Savings Plan to cover the EC2 instances. Purchase Lambda reserved concurrency to cover the expected Lambda usage. Purchase reserved nodes to cover the MemoryDB cache nodes.

C.  

Purchase a Compute Savings Plan to cover the entire expected cost of the EC2 instances, Lambda functions, and MemoryDB cache nodes.

D.  

Purchase a Compute Savings Plan to cover the EC2 instances and the MemoryDB cache nodes. Purchase Lambda reserved concurrency to cover the expected Lambda usage.

Discussion 0
Questions 72

A company is running a workload that consists of thousands of Amazon EC2 instances. The workload is running in a VPC that contains several public subnets and private subnets. The public subnets have a route for 0.0.0.0/0 to an existing internet gateway. The private subnets have a route for 0.0.0.0/0 to an existing NAT gateway.

A solutions architect needs to migrate the entire fleet of EC2 instances to use IPv6. The EC2 instances that are in private subnets must not be accessible from the public internet.

What should the solutions architect do to meet these requirements?

Options:

A.  

Update the existing VPC, and associate a custom IPv6 CIDR block with the VPC and all subnets. Update all the VPC route tables, and add a route for ::/0 to the internet gateway.

B.  

Update the existing VPC, and associate an Amazon-provided IPv6 CIDR block with the VPC and all subnets. Update the VPC route tables for all private subnets, and add a route for ::/0 to the NAT gateway.

C.  

Update the existing VPC, and associate an Amazon-provided IPv6 CIDR block with the VPC and all subnets. Create an egress-only internet gateway. Update the VPC route tables for all private subnets, and add a route for ::/0 to the egress-only internet gateway.

D.  

Update the existing VPC, and associate a custom IPv6 CIDR block with the VPC and all subnets. Create a new NAT gateway, and enable IPv6 support. Update the VPC route tables for all private subnets, and add a route for ::/0 to the IPv6-enabled NAT gateway.
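
To see option C end to end, here is a minimal boto3 sketch; all IDs are hypothetical placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Attach an Amazon-provided IPv6 CIDR block to the existing VPC.
ec2.associate_vpc_cidr_block(
    VpcId="vpc-EXAMPLE", AmazonProvidedIpv6CidrBlock=True
)
# (Each subnet also needs its own /64 slice via associate_subnet_cidr_block.)

# Private subnets get outbound-only IPv6 through an egress-only internet
# gateway, so the instances stay unreachable from the public internet.
eigw = ec2.create_egress_only_internet_gateway(VpcId="vpc-EXAMPLE")
ec2.create_route(
    RouteTableId="rtb-private-EXAMPLE",
    DestinationIpv6CidrBlock="::/0",
    EgressOnlyInternetGatewayId=eigw["EgressOnlyInternetGateway"][
        "EgressOnlyInternetGatewayId"
    ],
)
```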

Discussion 0
Questions 73

A company hosts a VPN in an on-premises data center. Employees currently connect to the VPN to access files in their Windows home directories. Recently, there has been a large growth in the number of employees who work remotely. As a result, bandwidth usage for connections into the data center has begun to reach 100% during business hours.

The company must design a solution on AWS that will support the growth of the company's remote workforce, reduce the bandwidth usage for connections into the data center, and reduce operational overhead.

Which combination of steps will meet these requirements with the LEAST operational overhead? (Select TWO.)

Options:

A.  

Create an AWS Storage Gateway Volume Gateway. Mount a volume from the Volume Gateway to the on-premises file server.

B.  

Migrate the home directories to Amazon FSx for Windows File Server.

C.  

Migrate the home directories to Amazon FSx for Lustre.

D.  

Migrate remote users to AWS Client VPN.

E.  

Create an AWS Direct Connect connection from the on-premises data center to AWS.

Discussion 0
Questions 74

A company uses AWS Organizations for a multi-account setup in the AWS Cloud. The company's finance team has a data processing application that uses AWS Lambda and Amazon DynamoDB. The company's marketing team wants to access the data that is stored in the DynamoDB table.

The DynamoDB table contains confidential data. The marketing team can have access to only specific attributes of data in the DynamoDB table. The finance team and the marketing team have separate AWS accounts.

What should a solutions architect do to provide the marketing team with the appropriate access to the DynamoDB table?

Options:

A.  

Create an SCP to grant the marketing team's AWS account access to the specific attributes of the DynamoDB table. Attach the SCP to the OU of the finance team.

B.  

Create an IAM role in the finance team's account by using IAM policy conditions for specific DynamoDB attributes (fine-grained access control). Establish trust with the marketing team's account. In the marketing team's account, create an IAM role that has permissions to assume the IAM role in the finance team's account.

C.  

Create a resource-based IAM policy that includes conditions for specific DynamoDB attributes (fine-grained access control). Attach the policy to the DynamoDB table. In the marketing team's account, create an IAM role that has permissions to access the DynamoDB table in the finance team's account.

D.  

Create an IAM role in the finance team's account to access the DynamoDB table. Use an IAM permissions boundary to limit the access to the specific attributes. In the marketing team's account, create an IAM role that has permissions to assume the IAM role in the finance team's account.
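
A sketch of the fine-grained access control condition that option B relies on, shown as a Python dict; the table ARN and attribute names are hypothetical:

```python
# Allow reads of only the listed attributes. The marketing team's role
# assumes the role that carries this policy in the finance account.
marketing_read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": (
                "arn:aws:dynamodb:us-east-1:111122223333:table/CustomerData"
            ),
            "Condition": {
                "ForAllValues:StringEquals": {
                    "dynamodb:Attributes": ["CustomerId", "CampaignId", "Region"]
                },
                # Queries must project only the allowed attributes.
                "StringEqualsIfExists": {"dynamodb:Select": "SPECIFIC_ATTRIBUTES"},
            },
        }
    ],
}
```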

Discussion 0
Questions 75

A large mobile gaming company has successfully migrated all of its on-premises infrastructure to the AWS Cloud. A solutions architect is reviewing the environment to ensure that it was built according to the design and that it is running in alignment with the Well-Architected Framework.

While reviewing previous monthly costs in Cost Explorer, the solutions architect notices that the creation and subsequent termination of several large instance types account for a high proportion of the costs. The solutions architect finds out that the company's developers are launching new Amazon EC2 instances as part of their testing and that the developers are not using the appropriate instance types.

The solutions architect must implement a control mechanism to limit the instance types that the developers can launch.

Which solution will meet these requirements?

Options:

A.  

Create a desired-instance-type managed rule in AWS Config. Configure the rule with the instance types that are allowed. Attach the rule to an event to run each time a new EC2 instance is launched.

B.  

In the EC2 console, create a launch template that specifies the instance types that are allowed. Assign the launch template to the developers' IAM accounts.

C.  

Create a new IAM policy. Specify the instance types that are allowed. Attach the policy to an IAM group that contains the IAM accounts for the developers.

D.  

Use EC2 Image Builder to create an image pipeline for the developers and assist them in the creation of a golden image.
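
A sketch of the IAM policy approach in option C, shown as a Python dict; the allowed instance types are examples only:

```python
# Developers may call RunInstances only for the listed instance types.
# RunInstances also needs companion statements for AMIs, subnets, security
# groups, and so on, which are omitted here for brevity.
developer_launch_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringEquals": {"ec2:InstanceType": ["t3.micro", "t3.small"]}
            },
        }
    ],
}
```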

Discussion 0
Questions 76

A company uses AWS Organizations to manage a multi-account structure. The company has hundreds of AWS accounts and expects the number of accounts to increase. The company is building a new application that uses Docker images. The company will push the Docker images to Amazon Elastic Container Registry (Amazon ECR). Only accounts that are within the company's organization should have access to the images.

The company has a CI/CD process that runs frequently. The company wants to retain all the tagged images. However, the company wants to retain only the five most recent untagged images.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.  

Create a private repository in Amazon ECR. Create a permissions policy for the repository that allows only required ECR operations. Include a condition to allow the ECR operations if the value of the aws:PrincipalOrgID condition key is equal to the ID of the company's organization. Add a lifecycle rule to the ECR repository that deletes all untagged images over the count of five.

B.  

Create a public repository in Amazon ECR. Create an IAM role in the ECR account. Set permissions so that any account can assume the role if the value of the aws:PrincipalOrgID condition key is equal to the ID of the company's organization. Add a lifecycle rule to the ECR repository that deletes all untagged images over the count of five.

C.  

Create a private repository in Amazon ECR. Create a permissions policy for the repository that includes only required ECR operations. Include a condition to allow the ECR operations for all account IDs in the organization. Schedule a daily Amazon EventBridge rule to invoke an AWS Lambda function that deletes all untagged images over the count of five.

D.  

Create a public repository in Amazon ECR. Configure Amazon ECR to use an interface VPC endpoint with an endpoint policy that includes the required permissions for images that the company needs to pull. Include a condition to allow the ECR operations for all account IDs in the company's organization. Schedule a daily Amazon EventBridge rule to invoke an AWS Lambda function that deletes all untagged images over the count of five.
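
For reference, option A combines a repository policy keyed on aws:PrincipalOrgID with an ECR lifecycle rule. A minimal boto3 sketch; the repository name and organization ID are placeholders:

import boto3, json

ecr = boto3.client("ecr")
repo = "app-images"  # hypothetical repository name

# Repository policy: allow pulls only from principals inside the organization.
ecr.set_repository_policy(
    repositoryName=repo,
    policyText=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "OrgPullOnly",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["ecr:GetDownloadUrlForLayer", "ecr:BatchGetImage",
                       "ecr:BatchCheckLayerAvailability"],
            "Condition": {"StringEquals": {"aws:PrincipalOrgID": "o-a1b2c3d4e5"}}
        }]
    }),
)

# Lifecycle rule: expire untagged images beyond the five most recent.
ecr.put_lifecycle_policy(
    repositoryName=repo,
    lifecyclePolicyText=json.dumps({
        "rules": [{
            "rulePriority": 1,
            "description": "Keep only the 5 most recent untagged images",
            "selection": {"tagStatus": "untagged",
                          "countType": "imageCountMoreThan",
                          "countNumber": 5},
            "action": {"type": "expire"}
        }]
    }),
)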

Discussion 0
Questions 77

A global manufacturing company plans to migrate the majority of its applications to AWS. However, the company is concerned about applications that need to remain within a specific country or in the company's central on-premises data center because of data regulatory requirements or requirements for latency of single-digit milliseconds. The company also is concerned about the applications that it hosts in some of its factory sites, where limited network infrastructure exists.

The company wants a consistent developer experience so that its developers can build applications once and deploy on premises, in the cloud, or in a hybrid architecture.

The developers must be able to use the same tools, APIs, and services that are familiar to them.

Which solution will provide a consistent hybrid experience to meet these requirements?

Options:

A.  

Migrate all applications to the closest AWS Region that is compliant. Set up an AWS Direct Connect connection between the central on-premises data center and AWS. Deploy a Direct Connect gateway.

B.  

Use AWS Snowball Edge Storage Optimized devices for the applications that have data regulatory requirements or requirements for latency of single-digit milliseconds. Retain the devices on premises. Deploy AWS Wavelength to host the workloads in the factory sites.

C.  

Install AWS Outposts for the applications that have data regulatory requirements or requirements for latency of single-digit milliseconds. Use AWS Snowball Edge Compute Optimized devices to host the workloads in the factory sites.

D.  

Migrate the applications that have data regulatory requirements or requirements for latency of single-digit milliseconds to an AWS Local Zone. Deploy AWS Wavelength to host the workloads in the factory sites.

Discussion 0
Questions 78

A company recently started hosting new application workloads in the AWS Cloud. The company is using Amazon EC2 instances, Amazon Elastic File System (Amazon EFS) file systems, and Amazon RDS DB instances.

To meet regulatory and business requirements, the company must make the following changes for data backups:

* Backups must be retained based on custom daily, weekly, and monthly requirements.

* Backups must be replicated to at least one other AWS Region immediately after capture.

* The backup solution must provide a single source of backup status across the AWS environment.

* The backup solution must send immediate notifications upon failure of any resource backup.

Which combination of steps will meet these requirements with the LEAST amount of operational overhead? (Select THREE.)

Options:

A.  

Create an AWS Backup plan with a backup rule for each of the retention requirements.

B.  

Configure an AWS backup plan to copy backups to another Region.

C.  

Create an AWS Lambda function to replicate backups to another Region and send notification if a failure occurs.

D.  

Add an Amazon Simple Notification Service (Amazon SNS) topic to the backup plan to send a notification for finished jobs that have any status except BACKUP_JOB_COMPLETED.

E.  

Create an Amazon Data Lifecycle Manager (Amazon DLM) snapshot lifecycle policy for each of the retention requirements.

F.  

Set up RDS snapshots on each database.
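
For reference, options A and B map to a single AWS Backup plan whose rules carry both the retention schedule and a cross-Region copy action. A minimal boto3 sketch showing one daily rule; the vault names, schedule, retention, and destination ARN are placeholders, and weekly and monthly rules would follow the same shape:

import boto3

backup = boto3.client("backup")

backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "regulatory-backups",
        "Rules": [{
            "RuleName": "daily",
            "TargetBackupVaultName": "primary-vault",
            "ScheduleExpression": "cron(0 2 * * ? *)",
            "Lifecycle": {"DeleteAfterDays": 35},
            # Copy each recovery point to a vault in another Region.
            "CopyActions": [{
                "DestinationBackupVaultArn":
                    "arn:aws:backup:eu-west-1:111122223333:backup-vault:dr-vault"
            }],
        }],
    }
)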

Discussion 0
Questions 79

A company is using AWS CloudFormation as its deployment tool for all applications. It stages all application binaries and templates within Amazon S3 buckets with versioning enabled. Developers use an Amazon EC2 instance with IDE access to modify and test applications. The developers want to implement CI/CD with AWS CodePipeline with the following requirements:

Use AWS CodeCommit for source control.

Automate unit testing and security scanning.

Alert developers when unit tests fail.

Toggle application features and allow lead developer approval before deployment.

Which solution will meet these requirements?

Options:

A.  

Use AWS CodeBuild for testing and scanning. Use EventBridge and SNS for alerts. Use AWS CDK with a manifest to toggle features. Use a manual approval stage.

B.  

Use Lambda for testing and alerts. Use AWS Amplify plugins for feature toggles. Use SES for manual approval.

C.  

Use Jenkins and SES for alerts. Use nested CloudFormation stacks for features. Use Lambda for approvals.

D.  

Use CodeDeploy for testing and scanning. Use CloudWatch alarms and SNS. Use Docker images for features and AWS CLI for toggles.

Discussion 0
Questions 80

A startup company recently migrated a large ecommerce website to AWS. The website has experienced a 70% increase in sales. Software engineers are using a private GitHub repository to manage code. The DevOps team is using Jenkins for builds and unit testing. The engineers need to receive notifications for bad builds and zero downtime during deployments. The engineers also need to ensure any changes to production are seamless for users and can be rolled back in the event of a major issue.

The software engineers have decided to use AWS CodePipeline to manage their build and deployment process.

Which solution will meet these requirements?

Options:

A.  

Use GitHub websockets to trigger the CodePipeline pipeline. Use the Jenkins plugin for AWS CodeBuild to conduct unit testing. Send alerts to an Amazon SNS topic for any bad builds. Deploy in an in-place, all-at-once deployment configuration using AWS CodeDeploy.

B.  

Use GitHub webhooks to trigger the CodePipeline pipeline. Use the Jenkins plugin for AWS CodeBuild to conduct unit testing. Send alerts to an Amazon SNS topic for any bad builds. Deploy in a blue/green deployment using AWS CodeDeploy.

C.  

Use GitHub websockets to trigger the CodePipeline pipeline. Use AWS X-Ray for unit testing and static code analysis. Send alerts to an Amazon SNS topic for any bad builds. Deploy in a blue/green deployment using AWS CodeDeploy.

D.  

Use GitHub webhooks to trigger the CodePipeline pipeline. Use AWS X-Ray for unit testing and static code analysis. Send alerts to an Amazon SNS topic for any bad builds. Deploy in an in-place, all-at-once deployment configuration using AWS CodeDeploy.

Discussion 0
Questions 81

A company that has multiple AWS accounts is using AWS Organizations. The company’s AWS accounts host VPCs, Amazon EC2 instances, and containers.

The company’s compliance team has deployed a security tool in each VPC where the company has deployments. The security tools run on EC2 instances and send information to the AWS account that is dedicated for the compliance team. The company has tagged all the compliance-related resources with a key of “costCenter” and a value of “compliance”.

The company wants to identify the cost of the security tools that are running on the EC2 instances so that the company can charge the compliance team’s AWS account. The cost calculation must be as accurate as possible.

What should a solutions architect do to meet these requirements?

Options:

A.  

In the management account of the organization, activate the costCenter user-defined tag. Configure monthly AWS Cost and Usage Reports to save to an Amazon S3 bucket in the management account. Use the tag breakdown in the report to obtain the total cost for the costCenter tagged resources.

B.  

In the member accounts of the organization, activate the costCenter user-defined tag. Configure monthly AWS Cost and Usage Reports to save to an Amazon S3 bucket in the management account. Schedule a monthly AWS Lambda function to retrieve the reports and calculate the total cost for the costCenter tagged resources.

C.  

In the member accounts of the organization activate the costCenter user-defined tag. From the management account, schedule a monthly AWS Cost and Usage Report. Use the tag breakdown in the report to calculate the total cost for the costCenter tagged resources.

D.  

Create a custom report in the organization view in AWS Trusted Advisor. Configure the report to generate a monthly billing summary for the costCenter tagged resources in the compliance team’s AWS account.

Discussion 0
Questions 82

A company has millions of objects in an Amazon S3 bucket. The objects are in the S3 Standard storage class. All the S3 objects are accessed frequently. The number of users and applications that access the objects is increasing rapidly. The objects are encrypted with server-side encryption with AWS KMS Keys (SSE-KMS).

A solutions architect reviews the company's monthly AWS invoice and notices that AWS KMS costs are increasing because of the high number of requests from Amazon S3. The solutions architect needs to optimize costs with minimal changes to the application.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.  

Create a new S3 bucket that has server-side encryption with customer-provided keys (SSE-C) as the encryption type. Copy the existing objects to the new S3 bucket. Specify SSE-C.

B.  

Create a new S3 bucket that has server-side encryption with Amazon S3 managed keys (SSE-S3) as the encryption type. Use S3 Batch Operations to copy the existing objects to the new S3 bucket. Specify SSE-S3.

C.  

Use AWS CloudHSM to store the encryption keys. Create a new S3 bucket. Use S3 Batch Operations to copy the existing objects to the new S3 bucket. Encrypt the objects by using the keys from CloudHSM.

D.  

Use the S3 Intelligent-Tiering storage class for the S3 bucket. Create an S3 Intelligent-Tiering archive configuration to transition objects that are not accessed for 90 days to S3 Glacier Deep Archive.

Discussion 0
Questions 83

A company's interactive web application uses an Amazon CloudFront distribution to serve images from an Amazon S3 bucket. Occasionally, third-party tools ingest corrupted images into the S3 bucket. This image corruption causes a poor user experience in the application later. The company has successfully implemented and tested Python logic to detect corrupt images.

A solutions architect must recommend a solution to integrate the detection logic with minimal latency between the ingestion and serving.

Which solution will meet these requirements?

Options:

A.  

Use a Lambda@Edge function that is invoked by a viewer-response event.

B.  

Use a Lambda@Edge function that is invoked by an origin-response event.

C.  

Use an S3 event notification that invokes an AWS Lambda function.

D.  

Use an S3 event notification that invokes an AWS Step Functions state machine.
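
For reference, option C wires the company's existing detection logic into an AWS Lambda function that an S3 ObjectCreated event notification invokes, so a corrupt image can be quarantined before CloudFront serves it. A minimal handler sketch; is_corrupt stands in for the company's already-tested Python logic:

import boto3

s3 = boto3.client("s3")

def is_corrupt(image_bytes: bytes) -> bool:
    """Placeholder for the company's existing detection logic."""
    return False

def handler(event, context):
    # Invoked by an S3 ObjectCreated event notification.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        if is_corrupt(body):
            # Quarantine the object before it can be served.
            s3.delete_object(Bucket=bucket, Key=key)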

Discussion 0
Questions 84

A company wants to manage the costs associated with a group of 20 applications that are infrequently used, but are still business-critical, by migrating to AWS. The applications are a mix of Java and Node.js spread across different instance clusters. The company wants to minimize costs while standardizing by using a single deployment methodology.

Most of the applications are part of month-end processing routines with a small number of concurrent users, but they are occasionally run at other times. Average application memory consumption is less than 1 GB, though some applications use as much as 2.5 GB of memory during peak processing. The most important application in the group is a billing report written in Java that accesses multiple data sources and often runs for several hours.

Which is the MOST cost-effective solution?

Options:

A.  

Deploy a separate AWS Lambda function for each application. Use AWS CloudTrail logs and Amazon CloudWatch alarms to verify completion of critical jobs.

B.  

Deploy Amazon ECS containers on Amazon EC2 with Auto Scaling configured for memory utilization of 75%. Deploy an ECS task for each application being migrated with ECS task scaling. Monitor services and hosts by using Amazon CloudWatch.

C.  

Deploy AWS Elastic Beanstalk for each application with Auto Scaling to ensure that all requests have sufficient resources. Monitor each AWS Elastic Beanstalk deployment by using CloudWatch alarms.

D.  

Deploy a new Amazon EC2 instance cluster that co-hosts all applications by using EC2 Auto Scaling and Application Load Balancers. Scale cluster size based on a custom metric set on instance memory utilization. Purchase 3-year Reserved Instance reservations equal to the GroupMaxSize parameter of the Auto Scaling group.

Discussion 0
Questions 85

Question:

A company has an application that stores user-uploaded videos in an Amazon S3 bucket using S3 Standard storage. Users access videos frequently for the first 180 days, and rarely after that. Most videos are over 100 MB. Users often have poor internet connectivity, and the company uses multipart uploads.

A solutions architect needs to optimize S3 storage costs.

Which combination of actions will meet these requirements? (Select TWO.)

Options:

A.  

Configure the S3 bucket to be a Requester Pays bucket.

B.  

Use S3 Transfer Acceleration to upload the videos.

C.  

Create a lifecycle rule to expire incomplete multipart uploads after 7 days.

D.  

Create a lifecycle rule to transition objects to S3 Glacier Instant Retrieval after 1 day.

E.  

Create a lifecycle rule to transition objects to S3 Standard-IA after 180 days.
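
For reference, options C and E translate into one bucket lifecycle configuration. A minimal boto3 sketch with a placeholder bucket name:

import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="user-videos",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {   # Reclaim storage from uploads that never finished (option C).
                "ID": "abort-stale-multipart",
                "Status": "Enabled",
                "Filter": {},
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            },
            {   # Move rarely accessed videos to Standard-IA (option E).
                "ID": "to-standard-ia",
                "Status": "Enabled",
                "Filter": {},
                "Transitions": [{"Days": 180, "StorageClass": "STANDARD_IA"}],
            },
        ]
    },
)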

Discussion 0
Questions 86

A company provides a software as a service (SaaS) application that runs in the AWS Cloud. The application runs on Amazon EC2 instances behind a Network Load Balancer (NLB). The instances are in an Auto Scaling group and are distributed across three Availability Zones in a single AWS Region.

The company is deploying the application into additional Regions. The company must provide static IP addresses for the application to customers so that the customers can add the IP addresses to allow lists.

The solution must automatically route customers to the Region that is geographically closest to them.

Which solution will meet these requirements?

Options:

A.  

Create an Amazon CloudFront distribution. Create a CloudFront origin group. Add the NLB for each additional Region to the origin group. Provide customers with the IP address ranges of the distribution's edge locations.

B.  

Create an AWS Global Accelerator standard accelerator. Create a standard accelerator endpoint for the NLB in each additional Region. Provide customers with the Global Accelerator IP address.

C.  

Create an Amazon CloudFront distribution. Create a custom origin for the NLB in each additional Region. Provide customers with the IP address ranges of the distribution's edge locations.

D.  

Create an AWS Global Accelerator custom routing accelerator. Create a listener for the custom routing accelerator. Add the IP address and ports for the NLB in each additional Region. Provide customers with the Global Accelerator IP address.

Discussion 0
Questions 87

A solutions architect is designing an AWS account structure for a company that consists of multiple teams. All the teams will work in the same AWS Region. The company needs a VPC that is connected to the on-premises network. The company expects less than 50 Mbps of total traffic to and from the on-premises network.

Which combination of steps will meet these requirements MOST cost-effectively? (Select TWO.)

Options:

A.  

Create an AWS CloudFormation template that provisions a VPC and the required subnets. Deploy the template to each AWS account.

B.  

Create an AWS CloudFormation template that provisions a VPC and the required subnets. Deploy the template to a shared services account. Share the subnets by using AWS Resource Access Manager.

C.  

Use AWS Transit Gateway along with an AWS Site-to-Site VPN for connectivity to the on-premises network. Share the transit gateway by using AWS Resource Access Manager.

D.  

Use AWS Site-to-Site VPN for connectivity to the on-premises network.

E.  

Use AWS Direct Connect for connectivity to the on-premises network.

Discussion 0
Questions 88

A company runs a new application as a static website in Amazon S3. The company has deployed the application to a production AWS account and uses Amazon CloudFront to deliver the website. The website calls an Amazon API Gateway REST API. An AWS Lambda function backs each API method.

The company wants to create a CSV report every 2 weeks to show each API Lambda function’s recommended configured memory, recommended cost, and the price difference between current configurations and the recommendations. The company will store the reports in an S3 bucket.

Which solution will meet these requirements with the LEAST development time?

Options:

A.  

Create a Lambda function that extracts metrics data for each API Lambda function from Amazon CloudWatch Logs for the 2-week period. Collate the data into tabular format. Store the data as a .csv file in an S3 bucket. Create an Amazon EventBridge rule to schedule the Lambda function to run every 2 weeks.

B.  

Opt in to AWS Compute Optimizer. Create a Lambda function that calls the ExportLambdaFunctionRecommendations operation. Export the .csv file to an S3 bucket. Create an Amazon EventBridge rule to schedule the Lambda function to run every 2 weeks.

C.  

Opt in to AWS Compute Optimizer. Set up enhanced infrastructure metrics. Within the Compute Optimizer console, schedule a job to export the Lambda recommendations to a .csv file. Store the file in an S3 bucket every 2 weeks.

D.  

Purchase the AWS Business Support plan for the production account. Opt in to AWS Compute Optimizer for AWS Trusted Advisor checks. In the Trusted Advisor console, schedule a job to export the cost optimization checks to a .csv file. Store the file in an S3 bucket every 2 weeks.
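
For reference, option B centers on a single Compute Optimizer API call that writes the CSV directly to Amazon S3. A minimal sketch of the scheduled Lambda handler; the bucket name and key prefix are placeholders:

import boto3

def handler(event, context):
    # Scheduled every 2 weeks by an EventBridge rule.
    co = boto3.client("compute-optimizer")
    co.export_lambda_function_recommendations(
        s3DestinationConfig={
            "bucket": "lambda-rightsizing-reports",  # hypothetical bucket
            "keyPrefix": "biweekly/",
        },
        fileFormat="Csv",
    )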

Discussion 0
Questions 89

A company runs a processing engine in the AWS Cloud. The engine processes environmental data from logistics centers to calculate a sustainability index. The company has millions of devices in logistics centers that are spread across Europe. The devices send information to the processing engine through a RESTful API. The API experiences unpredictable bursts of traffic.

The company must implement a solution to process all data that the devices send to the processing engine. Data loss is unacceptable.

Which solution will meet these requirements?

Options:

A.  

Create an Application Load Balancer (ALB) for the RESTful API. Create an Amazon SQS queue. Create a listener and a target group for the ALB. Add the SQS queue as the target. Use a container that runs in Amazon ECS with the Fargate launch type to process messages in the queue.

B.  

Create an Amazon API Gateway HTTP API that implements the RESTful API. Create an Amazon SQS queue. Create an API Gateway service integration with the SQS queue. Create an AWS Lambda function to process messages in the SQS queue.

C.  

Create an Amazon API Gateway REST API that implements the RESTful API. Create a fleet of Amazon EC2 instances in an Auto Scaling group. Create an API Gateway Auto Scaling group proxy integration. Use the EC2 instances to process incoming data.

D.  

Create an Amazon CloudFront distribution for the RESTful API. Create a data stream in Amazon Kinesis Data Streams. Set the data stream as the origin for the distribution. Create an AWS Lambda function to consume and process data in the data stream.

Discussion 0
Questions 90

A company needs to build a disaster recovery (DR) solution for its ecommerce website. The web application is hosted on a fleet of t3.Iarge Amazon EC2 instances and uses an Amazon RDS for MySQL DB instance. The EC2 instances are in an Auto Scaling group that extends across multiple Availability Zones.

In the event of a disaster, the web application must fail over to the secondary environment with an RPO of 30 seconds and an RTO of 10 minutes.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.  

Use infrastructure as code (IaC) to provision the new infrastructure in the DR Region. Create a cross-Region read replica for the DB instance. Set up a backup plan in AWS Backup to create cross-Region backups for the EC2 instances and the DB instance. Create a cron expression to back up the EC2 instances and the DB instance every 30 seconds to the DR Region. Recover the EC2 instances from the latest EC2 backup. Use an Amazon Route 53 geoloc

B.  

Use infrastructure as code (IaC) to provision the new infrastructure in the DR Region. Create a cross-Region read replica for the DB instance. Set up AWS Elastic Disaster Recovery to continuously replicate the EC2 instances to the DR Region. Run the EC2 instances at the minimum capacity in the DR Region. Use an Amazon Route 53 failover routing policy to automatically fail over to the DR Region in the event of a disaster. Increase the desired

C.  

Set up a backup plan in AWS Backup to create cross-Region backups for the EC2 instances and the DB instance. Create a cron expression to back up the EC2 instances and the DB instance every 30 seconds to the DR Region. Use infrastructure as code (IaC) to provision the new infrastructure in the DR Region. Manually restore the backed-up data on new instances. Use an Amazon Route 53 simple routing policy to automatically fail over to the DR Reg

D.  

Use infrastructure as code (IaC) to provision the new infrastructure in the DR Region. Create an Amazon Aurora global database. Set up AWS Elastic Disaster Recovery to continuously replicate the EC2 instances to the DR Region. Run the Auto Scaling group of EC2 instances at full capacity in the DR Region. Use an Amazon Route 53 failover routing policy to automatically fail over to the DR Region in the event of a disaster.

Discussion 0
Questions 91

A financial services company receives a regular data feed from its credit card servicing partner. Approximately 5,000 records are sent every 15 minutes in plaintext, delivered over HTTPS directly into an Amazon S3 bucket with server-side encryption. This feed contains sensitive credit card primary account number (PAN) data. The company needs to automatically mask the PAN before sending the data to another S3 bucket for additional internal processing. The company also needs to remove and merge specific fields, and then transform the record into JSON format. Additionally, extra feeds are likely to be added in the future, so any design needs to be easily expandable.

Which solution will meet these requirements?

Options:

A.  

Trigger an AWS Lambda function on file delivery that extracts each record and writes it to an Amazon SQS queue. Trigger another Lambda function when new messages arrive in the SQS queue to process the records, writing the results to a temporary location in Amazon S3. Trigger a final Lambda function once the SQS queue is empty to transform the records into JSON format and send the results to another S3 bucket for internal processing.

B.  

Trigger an AWS Lambda function on file delivery that extracts each record and writes it to an Amazon SQS queue. Configure an AWS Fargate container application to automatically scale to a single instance when the SQS queue contains messages. Have the application process each record, and transform the record into JSON format. When the queue is empty, send the results to another S3 bucket for internal processing and scale down the AWS Fargate i

C.  

Create an AWS Glue crawler and custom classifier based on the data feed formats and build a table definition to match. Trigger an AWS Lambda function on file delivery to start an AWS Glue ETL job to transform the entire record according to the processing and transformation requirements. Define the output format as JSON. Once complete, have the ETL job send the results to another S3 bucket for internal processing.

D.  

Create an AWS Glue crawler and custom classifier based upon the data feed formats and build a table definition to match. Perform an Amazon Athena query on file delivery to start an Amazon EMR ETL job to transform the entire record according to the processing and transformation requirements. Define the output format as JSON. Once complete, send the results to another S3 bucket for internal processing and scale down the EMR cluster.

Discussion 0
Questions 92

A software as a service (SaaS) company uses AWS to host a service that is powered by AWS PrivateLink. The service consists of proprietary software that runs on three Amazon EC2 instances behind a Network Load Balancer (NLB). The instances are in private subnets in multiple Availability Zones in the eu-west-2 Region. All the company's customers are in eu-west-2.

However, the company now acquires a new customer in the us-east-1 Region. The company creates a new VPC and new subnets in us-east-1. The company establishes inter-Region VPC peering between the VPCs in the two Regions.

The company wants to give the new customer access to the SaaS service, but the company does not want to immediately deploy new EC2 resources in us-east-1.

Which solution will meet these requirements?

Options:

A.  

Configure a PrivateLink endpoint service in us-east-1 to use the existing NLB that is in eu-west-2. Grant specific AWS accounts access to connect to the SaaS service.

B.  

Create an NLB in us-east-1. Create an IP target group that uses the IP addresses of the company's instances in eu-west-2 that host the SaaS service. Configure a PrivateLink endpoint service that uses the NLB that is in us-east-1. Grant specific AWS accounts access to connect to the SaaS service.

C.  

Create an Application Load Balancer (ALB) in front of the EC2 instances in eu-west-2. Create an NLB in us-east-1. Associate the NLB that is in us-east-1 with an ALB target group that uses the ALB that is in eu-west-2. Configure a PrivateLink endpoint service that uses the NLB that is in us-east-1. Grant specific AWS accounts access to connect to the SaaS service.

D.  

Use AWS Resource Access Manager (AWS RAM) to share the EC2 instances that are in eu-west-2. In us-east-1, create an NLB and an instance target group that includes the shared EC2 instances from eu-west-2. Configure a PrivateLink endpoint service that uses the NLB that is in us-east-1. Grant specific AWS accounts access to connect to the SaaS service.

Discussion 0
Questions 93

A company stores application data in many Amazon S3 buckets in one AWS account. Some of the S3 buckets contain sensitive data. The company does not have data inventory for the S3 buckets. The company uses server-side encryption with Amazon S3 managed keys (SSE-S3) to encrypt all data in the S3 buckets.

A solutions architect must design a solution to encrypt sensitive data with a key that only administrators can access.

Which solution will meet these requirements?

Options:

A.  

Use Amazon Inspector to determine which S3 buckets contain sensitive data. Create a new AWS KMS customer managed key and a key policy that provides access to administrators only. Set default S3 bucket encryption to use the new KMS key (SSE-KMS). Update the S3 bucket policy to add a Deny effect and a Condition element of "StringNotEquals": { "s3:x-amz-server-side-encryption": "aws:kms" }.

B.  

Use Amazon Inspector to determine which S3 buckets contain sensitive data. Update the key policy on the AWS managed key to provide access to administrators only. Use AWS Batch to encrypt all existing objects that include sensitive data in the S3 buckets with the updated AWS managed key.

C.  

Use Amazon Macie to determine which S3 buckets contain sensitive data. Create a new AWS KMS customer managed key and a key policy that provides access to administrators only. Set default S3 bucket encryption to use the new KMS key (SSE-KMS). Create an AWS Step Functions workflow to encrypt all existing S3 objects that include sensitive data by using the new KMS key.

D.  

Use Amazon Macie to determine which S3 buckets contain sensitive data. Update the key policy on the AWS managed key to provide access to administrators only. Update the S3 bucket policy to add a Deny effect and a Condition element of "StringNotEquals": { "s3:x-amz-server-side-encryption": "aws:kms" }.

Discussion 0
Questions 94

A retail business runs an orders and payments solution that uses more than 100 AWS Lambda functions that are written in Python. As the company's product catalog grows, the company observes that many of the functions time out. The company reconfigures the functions to have the maximum available timeout values, and the company increases memory for the functions. However, the functions continue to time out.

The company investigates the timeout issue and finds that most failures occur when several functions have been chained together, which results in the initial function timing out while it waits for responses. A solutions architect must resolve these timeout issues while maintaining the cost and operational benefits of using Lambda functions.

Which solution will meet these requirements with the MINIMUM amount of application changes?

Options:

A.  

Convert the solution to run in containers. Deploy an Amazon EKS cluster and enable auto scaling for the containers.

B.  

Rewrite the Lambda functions in Java. Implement Amazon SQS queues between functions.

C.  

Convert the solution to use an AWS Step Functions workflow to invoke the Lambda functions. Remove the direct function-to-function invocations.

D.  

Combine groups of linked functions together in a single container image. Deploy the container by using Amazon ECS on Amazon EC2 Spot Instances.
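
For reference, option C replaces the wait-inside-a-function pattern with a state machine that holds the workflow state between Lambda invocations, so no function has to stay running while another executes. A minimal Amazon States Language sketch; the state names and function ARNs are hypothetical:

import json

definition = {
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:111122223333:function:validate",
            "Next": "ChargePayment",
        },
        "ChargePayment": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:111122223333:function:charge",
            "End": True,
        },
    },
}
print(json.dumps(definition, indent=2))

Each Lambda function now returns as soon as its own work is done, so the per-function timeouts no longer depend on downstream functions.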

Discussion 0
Questions 95

A company migrated an application from on-premises VMs to Amazon EC2 instances in an AWS account 6 months ago. Now, the company needs to deploy the application to a second AWS Region. During the next 2 years, the company will redesign parts of the application to use AWS Lambda functions. The company is expecting stable usage patterns for the application for the next 3 years.

Which strategy will MAXIMIZE the cost savings for the company?

Options:

A.  

Evaluate Savings Plans recommendations each year in AWS Cost Management. Purchase a 1-year Compute Savings Plan based on the recommendations.

B.  

Evaluate Savings Plans recommendations by using AWS Compute Optimizer. Purchase a 3-year EC2 Instance Savings Plan based on the recommendations. Use Compute Optimizer to adjust the Lambda functions based on recommendations.

C.  

Purchase a 1-year EC2 Instance Savings Plan with No Upfront payment. Review the infrastructure after each year. As parts of the application transition to Lambda functions, decrease the hourly commitment for future EC2 Instance Savings Plans.

D.  

Purchase a 3-year EC2 Instance Savings Plan with No Upfront payment. As parts of the application transition to Lambda functions, decrease the hourly commitment for the EC2 Instance Savings Plan.

Discussion 0
Questions 96

A company runs an ecommerce website on Amazon ECS behind an Application Load Balancer (ALB). The company stores the container images in Amazon ECR. The website stores data in an Amazon Aurora MySQL DB cluster. The company uses an Amazon S3 bucket to store backup data.

The company needs to prevent data tampering. The website domain is registered with Amazon Route 53. The company wants to recreate the setup in a second AWS Region with an RPO of 5 minutes and an RTO of 15 minutes. The company has created an ALB in the second Region.

Which solution will meet these requirements?

Options:

A.  

Create a new ECS deployment that uses the Fargate launch type. Use the ECR repository in the current Region to store and pull container images. Set up a cross-Region read replica in Amazon RDS. Create a backup vault in compliance mode and a backup plan in AWS Backup. Set up a Route 53 primary record in the main Region and a secondary record with a multivalue answer routing policy.

B.  

Create a new ECS deployment that uses the Fargate launch type. Use the ECR repository in the current Region to store and pull container images. Set up a cross-Region read replica in Amazon RDS. Set up a Route 53 primary record in the main Region and a secondary record with a failover routing policy.

C.  

Set up ECR cross-Region replication. Create a new ECS deployment that uses the Fargate launch type. Migrate the DB cluster to an Aurora global database. Create a backup vault in compliance mode and a backup plan in AWS Backup. Enable point-in-time recovery and cross-Region replication for Amazon S3. Set up a Route 53 primary record in the main Region and a secondary record with a failover routing policy.

D.  

Set up ECR cross-Region replication. Create a new ECS deployment that uses the Fargate launch type. Migrate the DB cluster to an Aurora global database. Create a backup vault in governance mode and a backup plan in AWS Backup. Set up a Route 53 primary record in the main Region and a secondary record with a geolocation routing policy.

Discussion 0
Questions 97

A company needs to run large batch-processing jobs on data that is stored in an Amazon S3 bucket. The jobs perform simulations. The results of the jobs are not time sensitive, and the process can withstand interruptions.

Each job must process 15-20 GB of data when the data is stored in the S3 bucket. The company will store the output from the jobs in a different Amazon S3 bucket for further analysis.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.  

Create a serverless data pipeline. Use AWS Step Functions for orchestration. Use AWS Lambda functions with provisioned capacity to process the data.

B.  

Create an AWS Batch compute environment that includes Amazon EC2 Spot Instances. Specify the SPOT_CAPACITY_OPTIMIZED allocation strategy.

C.  

Create an AWS Batch compute environment that includes Amazon EC2 On-Demand Instances and Spot Instances. Specify the SPOT_CAPACITY_OPTIMIZED allocation strategy for the Spot Instances.

D.  

Use Amazon EKS to run the processing jobs. Use managed node groups that contain a combination of Amazon EC2 On-Demand Instances and Spot Instances.

Discussion 0
Questions 98

A company is planning a large event where a promotional offer will be introduced. The company's website is hosted on AWS and backed by an Amazon RDS for PostgreSQL DB instance. The website explains the promotion and includes a sign-up page that collects user information and preferences. Management expects large and unpredictable volumes of traffic periodically, which will create many database writes. A solutions architect needs to build a solution that does not change the underlying data model and ensures that submissions are not dropped before they are committed to the database.

Which solution meets these requirements?

Options:

A.  

Immediately before the event, scale up the existing DB instance to meet the anticipated demand. Then scale down after the event.

B.  

Use Amazon SQS to decouple the application and database layers. Configure an AWS Lambda function to write items from the queue into the database.

C.  

Migrate to Amazon DynamoDB and manage throughput capacity with automatic scaling.

D.  

Use Amazon ElastiCache (Memcached) to increase write capacity to the DB instance.
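
For reference, option B buffers each submission in SQS so nothing is dropped while the database catches up. A minimal sketch of the Lambda consumer, assuming a hypothetical signups table, placeholder connection details, and the pymysql driver packaged with the function:

import json
import pymysql  # assumed to be packaged with the function

# Placeholder connection details; in practice these come from Secrets Manager.
conn = pymysql.connect(host="db.example.internal", user="app",
                       password="example", database="signup")

def handler(event, context):
    # An SQS event source mapping delivers sign-up submissions in batches.
    with conn.cursor() as cur:
        for record in event["Records"]:
            item = json.loads(record["body"])
            cur.execute(
                "INSERT INTO signups (email, preferences) VALUES (%s, %s)",
                (item["email"], json.dumps(item["preferences"])),
            )
    conn.commit()

Failed batches stay in the queue and are retried, which is what keeps submissions from being lost during traffic spikes.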

Discussion 0
Questions 99

A company has a payment gateway that processes millions of daily transactions on AWS. The solution uses Amazon ECS with a single Amazon EC2 instance that is not configured for auto scaling and an Amazon Aurora PostgreSQL database. All the solution's resources are deployed in the same Availability Zone. The company uses Amazon Route 53 to manage its domain name resolution.

The company needs to implement a new strategy to make the application more highly available.

Which solution will meet this requirement with the LEAST operational overhead?

Options:

A.  

Set up an Amazon RDS Proxy in front of the Aurora database. Modify the Aurora database to a Multi-AZ DB cluster by adding a read replica in a second Availability Zone.

B.  

Configure Amazon ECS services to distribute tasks across multiple Availability Zones. Create a cross-Region read replica of the Aurora database in a second AWS Region. Create a script to perform a manual failover process.

C.  

Configure Amazon ECS services on AWS Fargate to distribute tasks across multiple Availability Zones. Modify the Aurora database to a Multi-AZ DB cluster by adding a read replica in a second Availability Zone.

D.  

Deploy the gateway application into a second AWS Region. Migrate the Aurora database to an Aurora global database. Configure Route 53 for active-active gateway request routing.

Discussion 0
Questions 100

A company has 20 accounts in an organization in AWS Organizations. The accounts are in two OUs: development and production. Multiple teams use the development accounts.

The company wants to control the cost that is associated with the development accounts. The company needs a solution that provides a notification when the forecasted monthly cost for all development accounts exceeds a threshold.

A solutions architect creates an Amazon SNS topic and subscribes an email address to the topic.

What should the solutions architect do next to meet the notification requirement with the LEAST configuration effort?

Options:

A.  

Enable Amazon CloudWatch billing alerts in the organization's management account. Create a CloudWatch billing alarm by configuring the EstimatedCharges metric for each development account as a linked account. Configure the SNS topic for email alerts when the EstimatedCharges metric value exceeds the threshold.

B.  

Create an AWS Cost and Usage Report in the organization's management account. Configure report delivery to an Amazon S3 bucket. Configure an AWS Glue job to extract the report data into Amazon Athena. Configure AWS Step Functions to analyze the consolidated cost of all the development accounts. Configure the SNS topic for email alerts when the cost exceeds the threshold.

C.  

Use AWS Budgets to create a cost budget in the organization's management account. Configure each development account as a linked account. Configure an alert threshold. Configure the SNS topic for email alerts.

D.  

Enable AWS Cost Explorer in the organization's management account. Configure each development account as a linked account. Configure an alert threshold. Configure the SNS topic for email alerts.
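
For reference, option C is a single AWS Budgets definition scoped to the development accounts with a forecast-based alert. A minimal boto3 sketch; all account IDs, the limit, and the topic ARN are placeholders:

import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="111111111111",  # management account ID (placeholder)
    Budget={
        "BudgetName": "dev-accounts-monthly",
        "BudgetType": "COST",
        "TimeUnit": "MONTHLY",
        "BudgetLimit": {"Amount": "5000", "Unit": "USD"},
        # Scope the budget to the development accounts only.
        "CostFilters": {"LinkedAccount": ["222222222222", "333333333333"]},
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "FORECASTED",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 100.0,
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{
            "SubscriptionType": "SNS",
            "Address": "arn:aws:sns:us-east-1:111111111111:cost-alerts",
        }],
    }],
)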

Discussion 0
Questions 101

A company hosts a Git repository in an on-premises data center. The company uses webhooks to invoke functionality that runs in the AWS Cloud. The company hosts the webhook logic on a set of Amazon EC2 instances in an Auto Scaling group that the company set as a target for an Application Load Balancer (ALB). The Git server calls the ALB for the configured webhooks. The company wants to move the solution to a serverless architecture.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.  

For each webhook, create and configure an AWS Lambda function URL. Update the Git servers to call the individual Lambda function URLs.

B.  

Create an Amazon API Gateway HTTP API. Implement each webhook logic in a separate AWS Lambda function. Update the Git servers to call the API Gateway endpoint.

C.  

Deploy the webhook logic to AWS App Runner. Create an ALB, and set App Runner as the target. Update the Git servers to call the ALB endpoint.

D.  

Containerize the webhook logic. Create an Amazon Elastic Container Service (Amazon ECS) cluster, and run the webhook logic in AWS Fargate. Create an Amazon API Gateway REST API, and set Fargate as the target. Update the Git servers to call the API Gateway endpoint.

Discussion 0
Questions 102

A company wants to containerize a multi-tier web application and move the application from an on-premises data center to AWS. The application includes web, application, and database tiers. The company needs to make the application fault tolerant and scalable. Some frequently accessed data must always be available across application servers. Frontend web servers need session persistence and must scale to meet increases in traffic.

Which solution will meet these requirements with the LEAST ongoing operational overhead?

Options:

A.  

Run the application on Amazon Elastic Container Service (Amazon ECS) on AWS Fargate. Use Amazon Elastic File System (Amazon EFS) for data that is frequently accessed between the web and application tiers. Store the frontend web server session data in Amazon Simple Queue Service (Amazon SQS).

B.  

Run the application on Amazon Elastic Container Service (Amazon ECS) on Amazon EC2. Use Amazon ElastiCache for Redis to cache frontend web server session data. Use Amazon Elastic Block Store (Amazon EBS) with Multi-Attach on EC2 instances that are distributed across multiple Availability Zones.

C.  

Run the application on Amazon Elastic Kubernetes Service (Amazon EKS). Configure Amazon EKS to use managed node groups. Use ReplicaSets to run the web servers and applications. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system across all EKS pods to store frontend web server session data.

D.  

Deploy the application on Amazon Elastic Kubernetes Service (Amazon EKS). Configure Amazon EKS to use managed node groups. Run the web servers and application as Kubernetes deployments in the EKS cluster. Store the frontend web server session data in an Amazon DynamoDB table. Create an Amazon Elastic File System (Amazon EFS) volume that all applications will mount at the time of deployment.

Discussion 0
Questions 103

A company has a website that runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances are in an Auto Scaling group. The ALB is associated with an AWS WAF web ACL.

The website often encounters attacks in the application layer. The attacks produce sudden and significant increases in traffic on the application server. The access logs show that each attack originates from different IP addresses. A solutions architect needs to implement a solution to mitigate these attacks.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.  

Create an Amazon CloudWatch alarm that monitors server access. Set a threshold based on access by IP address. Configure an alarm action that adds the IP address to the web ACL’s deny list.

B.  

Deploy AWS Shield Advanced in addition to AWS WAF. Add the ALB as a protected resource.

C.  

Create an Amazon CloudWatch alarm that monitors user IP addresses. Set a threshold based on access by IP address. Configure the alarm to invoke an AWS Lambda function to add a deny rule in the application server’s subnet route table for any IP addresses that activate the alarm.

D.  

Inspect access logs to find a pattern of IP addresses that launched the attacks. Use an Amazon Route 53 geolocation routing policy to deny traffic from the countries that host those IP addresses.

Discussion 0
Questions 104

A company is rearchitecting its applications to run on AWS. The company's infrastructure includes multiple Amazon EC2 instances. The company's development team needs different levels of access. The company wants to implement a policy that requires all Windows EC2 instances to be joined to an Active Directory domain on AWS. The company also wants to implement enhanced security processes such as multi-factor authentication (MFA). The company wants to use managed AWS services wherever possible.

Which solution will meet these requirements?

Options:

A.  

Create an AWS Directory Service for Microsoft Active Directory implementation. Launch an Amazon Workspace. Connect to and use the Workspace for domain security configuration tasks.

B.  

Create an AWS Directory Service for Microsoft Active Directory implementation. Launch an EC2 instance. Connect to and use the EC2 instance for domain security configuration tasks.

C.  

Create an AWS Directory Service Simple AD implementation. Launch an EC2 instance. Connect to and use the EC2 instance for domain security configuration tasks.

D.  

Create an AWS Directory Service Simple AD implementation. Launch an Amazon Workspace. Connect to and use the Workspace for domain security configuration tasks.

Discussion 0
Questions 105

A company is running a critical application that uses an Amazon RDS for MySQL database to store data. The RDS DB instance is deployed in Multi-AZ mode.

A recent RDS database failover test caused a 40-second outage to the application. A solutions architect needs to design a solution to reduce the outage time to less than 20 seconds.

Which combination of steps should the solutions architect take to meet these requirements? (Select THREE.)

Options:

A.  

Use Amazon ElastiCache for Memcached in front of the database

B.  

Use Amazon ElastiCache for Redis in front of the database.

C.  

Use RDS Proxy in front of the database

D.  

Migrate the database to Amazon Aurora MySQL

E.  

Create an Amazon Aurora Replica

F.  

Create an RDS for MySQL read replica

Discussion 0
Questions 106

A company ingests and processes streaming market data. The data rate is constant. A nightly process that calculates aggregate statistics is run, and each execution takes about 4 hours to complete. The statistical analysis is not mission critical to the business, and previous data points are picked up on the next execution if a particular run fails.

The current architecture uses a pool of Amazon EC2 Reserved Instances with 1-year reservations running full time to ingest and store the streaming data in attached Amazon EBS volumes. On-Demand EC2 instances are launched each night to perform the nightly processing, accessing the stored data from NFS shares on the ingestion servers, and terminating the nightly processing servers when complete. The Reserved Instance reservations are expiring, and the company needs to determine whether to purchase new reservations or implement a new design.

Which is the most cost-effective design?

Options:

A.  

Update the ingestion process to use Amazon Kinesis Data Firehose to save data to Amazon S3. Use a scheduled script to launch a fleet of EC2 On-Demand Instances each night to perform the batch processing of the S3 data. Configure the script to terminate the instances when the processing is complete.

B.  

Update the ingestion process to use Amazon Kinesis Data Firehose to save data to Amazon S3. Use AWS Batch with Spot Instances to perform nightly processing with a maximum Spot price that is 50% of the On-Demand price.

C.  

Update the ingestion process to use a fleet of EC2 Reserved Instances with 3-year reservations behind a Network Load Balancer. Use AWS Batch with Spot Instances to perform nightly processing with a maximum Spot price that is 50% of the On-Demand price.

D.  

Update the ingestion process to use Amazon Kinesis Data Firehose to save data to Amazon Redshift. Use Amazon EventBridge to schedule an AWS Lambda function to run nightly to query Amazon Redshift to generate the daily statistics.

Discussion 0
Questions 107

A company has an organization in AWS Organizations that has a large number of AWS accounts. One of the AWS accounts is designated as a transit account and has a transit gateway that is shared with all of the other AWS accounts. AWS Site-to-Site VPN connections are configured between all of the company's global offices and the transit account. The company has AWS Config enabled on all of its accounts.

The company's networking team needs to centrally manage a list of internal IP address ranges that belong to the global offices. Developers will reference this list to gain access to applications securely.

Which solution meets these requirements with the LEAST amount of operational overhead?

Options:

A.  

Create a JSON file that is hosted in Amazon S3 and that lists all of the internal IP address ranges. Configure an Amazon Simple Notification Service (Amazon SNS) topic in each of the accounts that can be invoked when the JSON file is updated. Subscribe an AWS Lambda function to the SNS topic to update all relevant security group rules with the updated IP address ranges.

B.  

Create a new AWS Config managed rule that contains all of the internal IP address ranges. Use the rule to check the security groups in each of the accounts to ensure compliance with the list of IP address ranges. Configure the rule to automatically remediate any noncompliant security group that is detected.

C.  

In the transit account, create a VPC prefix list with all of the internal IP address ranges. Use AWS Resource Access Manager to share the prefix list with all of the other accounts. Use the shared prefix list to configure security group rules in the other accounts.

D.  

In the transit account, create a security group with all of the internal IP address ranges. Configure the security groups in the other accounts to reference the transit account's security group by using a nested security group reference of "/sg-1a2b3c4d".
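
For reference, option C comes down to two API calls in the transit account: create the managed prefix list, then share it through AWS RAM. A minimal boto3 sketch; the office CIDRs, account ID, and organization ID are placeholders:

import boto3

ec2 = boto3.client("ec2")
ram = boto3.client("ram")

# Create the centrally managed list of office ranges.
pl = ec2.create_managed_prefix_list(
    PrefixListName="global-office-ranges",
    AddressFamily="IPv4",
    MaxEntries=50,
    Entries=[
        {"Cidr": "10.10.0.0/16", "Description": "London office"},
        {"Cidr": "10.20.0.0/16", "Description": "Tokyo office"},
    ],
)

# Share the prefix list with the whole organization via AWS RAM.
ram.create_resource_share(
    name="office-prefix-list",
    resourceArns=[pl["PrefixList"]["PrefixListArn"]],
    principals=["arn:aws:organizations::111111111111:organization/o-a1b2c3d4e5"],
)

Security group rules in every account can then reference the shared pl- ID, so updating an office range is a single edit in one place.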

Discussion 0
Questions 108

A company is migrating an on-premises application and a MySQL database to AWS. The application processes highly sensitive data, and new data is constantly updated in the database. The data must not be transferred over the internet. The company also must encrypt the data in transit and at rest.

The database is 5 TB in size. The company already has created the database schema in an Amazon RDS for MySQL DB instance. The company has set up a 1 Gbps AWS Direct Connect connection to AWS. The company also has set up a public VIF and a private VIF. A solutions architect needs to design a solution that will migrate the data to AWS with the least possible downtime.

Which solution will meet these requirements?

Options:

A.  

Perform a database backup. Copy the backup files to an AWS Snowball Edge Storage Optimized device. Import the backup to Amazon S3. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3) for encryption at rest. Use TLS for encryption in transit. Import the data from Amazon S3 to the DB instance.

B.  

Use AWS Database Migration Service (AWS DMS) to migrate the data to AWS. Create a DMS replication instance in a private subnet. Create VPC endpoints for AWS DMS. Configure a DMS task to copy data from the on-premises database to the DB instance by using full load plus change data capture (CDC). Use the AWS Key Management Service (AWS KMS) default key for encryption at rest. Use TLS for encryption in transit.

C.  

Perform a database backup. Use AWS DataSync to transfer the backup files to Amazon S3. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3) for encryption at rest. Use TLS for encryption in transit. Import the data from Amazon S3 to the DB instance.

D.  

Use Amazon S3 File Gateway. Set up a private connection to Amazon S3 by using AWS PrivateLink. Perform a database backup. Copy the backup files to Amazon S3. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3) for encryption at rest. Use TLS for encryption in transit. Import the data from Amazon S3 to the DB instance.

Discussion 0
Questions 109

A company has a web application that allows users to upload short videos. The videos are stored on Amazon EBS volumes and analyzed by custom recognition software for categorization.

The website contains static content that has variable traffic with peaks in certain months. The architecture consists of Amazon EC2 instances running in an Auto Scaling group for the web application and EC2 instances running in an Auto Scaling group to process an Amazon SQS queue. The company wants to re-architect the application to reduce operational overhead using AWS managed services where possible and remove dependencies on third-party software.

Which solution meets these requirements?

Options:

A.  

Use Amazon ECS containers for the web application and Spot Instances for the Auto Scaling group that processes the SQS queue. Replace the custom software with Amazon Rekognition to categorize the videos.

B.  

Store the uploaded videos in Amazon EFS and mount the file system to the EC2 instances for the web application. Process the SQS queue with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos.

C.  

Host the web application in Amazon S3. Store the uploaded videos in Amazon S3. Use S3 event notifications to publish events to the SQS queue. Process the SQS queue with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos.

D.  

Use AWS Elastic Beanstalk to launch EC2 instances in an Auto Scaling group for the web application and launch a worker environment to process the SQS queue. Replace the custom software with Amazon Rekognition to categorize the videos.

Discussion 0
Questions 110

Question:

A company runs production workloads on EC2 On-Demand Instances and RDS for PostgreSQL. They want to reduce costs without compromising availability or capacity.

Options:

A.  

Use CUR and Lambda to terminate underutilized instances. Buy Savings Plans.

B.  

Use Budgets and Trusted Advisor, then manually terminate and buy RIs.

C.  

Use Compute Optimizer and Trusted Advisor for recommendations. Apply rightsizing, auto scaling, and purchase a Compute Savings Plan.

D.  

Use Cost Explorer, alerts, and replace with Spot Instances.

Discussion 0
Questions 111

A company deploys a new web application. As part of the setup, the company configures AWS WAF to log to Amazon S3 through Amazon Kinesis Data Firehose. The company develops an Amazon Athena query that runs once daily to return AWS WAF log data from the previous 24 hours. The volume of daily logs is constant. However, over time, the same query is taking more time to run.

A solutions architect needs to design a solution to prevent the query time from continuing to increase. The solution must minimize operational overhead.

Which solution will meet these requirements?

Options:

A.  

Create an AWS Lambda function that consolidates each day's AWS WAF logs into one log file.

B.  

Reduce the amount of data scanned by configuring AWS WAF to send logs to a different S3 bucket each day.

C.  

Update the Kinesis Data Firehose configuration to partition the data in Amazon S3 by date and time. Create external tables for Amazon Redshift. Configure Amazon Redshift Spectrum to query the data source.

D.  

Modify the Kinesis Data Firehose configuration and Athena table definition to partition the data by date and time. Change the Athena query to view the relevant partitions.
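
For reference, the gain in option D comes from the daily query scanning only one partition rather than every object the bucket has accumulated. A minimal boto3 sketch of the partition-filtered query, assuming a hypothetical waf_logs table partitioned by year, month, and day, and placeholder bucket names:

import boto3

athena = boto3.client("athena")

# With Firehose writing to s3://waf-logs/year=.../month=.../day=..., the
# daily query scans only the previous day's partition.
athena.start_query_execution(
    QueryString=(
        "SELECT * FROM waf_logs "
        "WHERE year = '2025' AND month = '06' AND day = '01'"
    ),
    QueryExecutionContext={"Database": "security"},
    ResultConfiguration={"OutputLocation": "s3://waf-query-results/"},
)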

Discussion 0
Questions 112

Question:

A SaaS web app runs on EC2 Linux behind an ALB. It stores user sessions in an RDS Multi-AZ database. During high traffic, the app suffers latency due to session read/write.

What is the best way to reduce session latency?

Options:

A.  

Store session data in Amazon S3.

B.  

Use FSx for Windows and mount it.

C.  

Use Multi-Attach EBS volumes.

D.  

Use ElastiCache for Redis to store sessions.
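
For reference, option D moves session reads and writes from the RDS database to an in-memory store. A minimal sketch with the redis-py client; the endpoint, key scheme, and TTL are illustrative:

import json
import redis  # redis-py client, assumed available to the application

# Hypothetical ElastiCache endpoint; sessions live in Redis with a TTL.
r = redis.Redis(host="sessions.abc123.euw1.cache.amazonaws.com", port=6379)

def save_session(session_id, data, ttl_seconds=1800):
    # SETEX stores the session and expires it automatically.
    r.setex(f"session:{session_id}", ttl_seconds, json.dumps(data))

def load_session(session_id):
    raw = r.get(f"session:{session_id}")
    return json.loads(raw) if raw else None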

Discussion 0
Questions 113

A company migrated an application to the AWS Cloud. The application runs on two Amazon EC2 instances behind an Application Load Balancer (ALB). Application data is stored in a MySQL database that runs on an additional EC2 instance. The application's use of the database is read-heavy.

The application loads static content from Amazon Elastic Block Store (Amazon EBS) volumes that are attached to each EC2 instance. The static content is updated frequently and must be copied to each EBS volume.

The load on the application changes throughout the day. During peak hours, the application cannot handle all the incoming requests. Trace data shows that the database cannot handle the read load during peak hours.

Which solution will improve the reliability of the application?

Options:

A.  

Migrate the application to a set of AWS Lambda functions. Set the Lambda functions as targets for the ALB. Create a new single EBS volume for the static content. Configure the Lambda functions to read from the new EBS volume. Migrate the database to an Amazon RDS for MySQL Multi-AZ DB cluster.

B.  

Migrate the application to a set of AWS Step Functions state machines. Set the state machines as targets for the ALB. Create an Amazon Elastic File System (Amazon EFS) file system for the static content. Configure the state machines to read from the EFS file system. Migrate the database to Amazon Aurora MySQL Serverless v2 with a reader DB instance.

C.  

Containerize the application. Migrate the application to an Amazon Elastic Container Service (Amazon ECS) cluster. Use the AWS Fargate launch type for the tasks that host the application. Create a new single EBS volume for the static content. Mount the new EBS volume on the ECS cluster. Configure AWS Application Auto Scaling on the ECS cluster. Set the ECS service as a target for the ALB. Migrate the database to an Amazon RDS for MySQL Multi-AZ DB cluster.

D.  

Containerize the application. Migrate the application to an Amazon Elastic Container Service (Amazon ECS) cluster. Use the AWS Fargate launch type for the tasks that host the application. Create an Amazon Elastic File System (Amazon EFS) file system for the static content. Mount the EFS file system to each container. Configure AWS Application Auto Scaling on the ECS cluster. Set the ECS service as a target for the ALB. Migrate the database t

Discussion 0
Questions 114

A data analytics company has an Amazon Redshift cluster that consists of several reserved nodes. The cluster is experiencing unexpected bursts of usage because a team of employees is compiling a deep audit analysis report. The queries to generate the report are complex read queries and are CPU intensive.

Business requirements dictate that the cluster must be able to service read and write queries at all times. A solutions architect must devise a solution that accommodates the bursts of usage.

Which solution meets these requirements MOST cost-effectively?

Options:

A.  

Provision an Amazon EMR cluster. Offload the complex data processing tasks.

B.  

Deploy an AWS Lambda function to add capacity to the Amazon Redshift cluster by using a classic resize operation when the cluster's CPU metrics in Amazon CloudWatch reach 80%.

C.  

Deploy an AWS Lambda function to add capacity to the Amazon Redshift cluster by using an elastic resize operation when the cluster's CPU metrics in Amazon CloudWatch reach 80%.

D.  

Turn on the Concurrency Scaling feature for the Amazon Redshift cluster.
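
A hedged sketch of how option D could be applied with boto3: raising the cap on transient concurrency scaling clusters in the cluster's parameter group. The parameter group name is a placeholder; concurrency scaling itself is switched on per WLM queue ("concurrency_scaling": "auto" in the WLM JSON configuration), which is omitted here.

```python
import boto3

redshift = boto3.client("redshift")

# Option D: allow the cluster to add transient capacity for read bursts.
# "max_concurrency_scaling_clusters" caps how many scaling clusters can
# spin up at once.
redshift.modify_cluster_parameter_group(
    ParameterGroupName="my-analytics-params",  # hypothetical parameter group
    Parameters=[
        {
            "ParameterName": "max_concurrency_scaling_clusters",
            "ParameterValue": "4",
            "ApplyType": "dynamic",
        },
    ],
)
```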

Discussion 0
Questions 115

A company has multiple AWS accounts that are in an organization in AWS Organizations. The company needs to store AWS account activity and query the data from a central location by using SQL.

Which solution will meet these requirements?

Options:

A.  

Create an AWS CloudTrail trail in each account. Specify CloudTrail management events for the trail. Configure CloudTrail to send the events to Amazon CloudWatch Logs. Configure CloudWatch cross-account observability. Query the data in CloudWatch Logs Insights.

B.  

Use a delegated administrator account to create an AWS CloudTrail Lake data store. Specify CloudTrail management events for the data store. Enable the data store for all accounts in the organization. Query the data in CloudTrail Lake.

C.  

Use a delegated administrator account to create an AWS CloudTrail trail. Specify CloudTrail management events for the trail. Enable the trail for all accounts in the organization. Keep all other settings as default. Query the CloudTrail data from the CloudTrail event history page.

D.  

Use AWS CloudFormation StackSets to deploy AWS CloudTrail Lake data stores in each account. Specify CloudTrail management events for the data stores. Keep all other settings as default. Query the data in CloudTrail Lake.

Discussion 0
Questions 116

A global media company is planning a multi-Region deployment of an application. Amazon DynamoDB global tables will back the deployment to keep the user experience consistent across the two continents where users are concentrated. Each deployment will have a public Application Load Balancer (ALB). The company manages public DNS internally. The company wants to make the application available through an apex domain.

Which solution will meet these requirements with the LEAST effort?

Options:

A.  

Migrate public DNS to Amazon Route 53. Create CNAME records for the apex domain to point to the ALB. Use a geolocation routing policy to route traffic based on user location.

B.  

Place a Network Load Balancer (NLB) in front of the AL

B.  

Migrate public DNS to Amazon Route 53. Create a CNAME record for the apex domain to point to the NLB's static IP address. Use a geolocation routing policy to route traffic based on user location.

C.  

Create an AWS Global Accelerator accelerator with multiple endpoint groups that target endpoints in appropriate AWS Regions. Use the accelerator's static IP address to create a record in public DNS for the apex domain.

D.  

Create an Amazon API Gateway API that is backed by AWS Lambda in one of the AWS Regions. Configure a Lambda function to route traffic to application deployments by using the round robin method. Create CNAME records for the apex domain to point to the API's URL.
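
A minimal boto3 sketch of option C: an accelerator with one endpoint group per Region, each pointing at that Region's ALB. The ALB ARNs are placeholders; the accelerator's two static anycast IPs (in the response's IpSets) are what the company would publish as A records for the apex domain in its own DNS.

```python
import boto3

# Global Accelerator's API is homed in us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

acc = ga.create_accelerator(Name="apex-app", IpAddressType="IPV4", Enabled=True)
listener = ga.create_listener(
    AcceleratorArn=acc["Accelerator"]["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# One endpoint group per Region, each targeting that Region's ALB
# (hypothetical ARNs below).
regional_albs = {
    "us-east-1": "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/example-1",
    "eu-west-1": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/app/example-2",
}
for region, alb_arn in regional_albs.items():
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": alb_arn, "Weight": 128}],
    )

print(acc["Accelerator"]["IpSets"])  # static IPs for the apex A records
```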

Discussion 0
Questions 117

A solutions architect has deployed a web application that serves users across two AWS Regions under a custom domain. The application uses Amazon Route 53 latency-based routing. The solutions architect has associated weighted record sets with a pair of web servers in separate Availability Zones for each Region.

The solutions architect runs a disaster recovery scenario. When all the web servers in one Region are stopped, Route 53 does not automatically redirect users to the other Region.

Which of the following are possible root causes of this issue? (Select TWO.)

Options:

A.  

The weight for the Region where the web servers were stopped is higher than the weight for the other Region.

B.  

One of the web servers in the secondary Region did not pass its HTTP health check.

C.  

Latency resource record sets cannot be used in combination with weighted resource record sets.

D.  

The setting to evaluate target health is not turned on for the latency alias resource record set that is associated with the domain in the Region where the web servers were stopped.

E.  

An HTTP health check has not been set up for one or more of the weighted resource record sets associated with the stopped web servers.

Discussion 0
Questions 118

A company hosts an intranet web application on Amazon EC2 instances behind an Application Load Balancer (ALB). Currently, users authenticate to the application against an internal user database.

The company needs to authenticate users to the application by using an existing AWS Directory Service for Microsoft Active Directory directory. All users with accounts in the directory must have access to the application.

Which solution will meet these requirements?

Options:

A.  

Create a new app client in the directory. Create a listener rule for the ALB. Specify the authenticate-oidc action for the listener rule. Configure the listener rule with the appropriate issuer, client ID and secret, and endpoint details for the Active Directory service. Configure the new app client with the callback URL that the ALB provides.

B.  

Configure an Amazon Cognito user pool. Configure the user pool with a federated identity provider (IdP) that has metadata from the directory. Create an app client. Associate the app client with the user pool. Create a listener rule for the ALB. Specify the authenticate-cognito action for the listener rule. Configure the listener rule to use the user pool and app client.

C.  

Add the directory as a new IAM identity provider (IdP). Create a new IAM role that has an entity type of SAML 2.0 federation. Configure a role policy that allows access to the ALB. Configure the new role as the default authenticated user role for the IdP. Create a listener rule for the ALB. Specify the authenticate-oidc action for the listener rule.

D.  

Enable AWS IAM Identity Center (AWS Single Sign-On). Configure the directory as an external identity provider (IdP) that uses SAML. Use the automatic provisioning method. Create a new IAM role that has an entity type of SAML 2.0 federation. Configure a role policy that allows access to the ALB. Attach the new role to all groups. Create a listener rule for the ALB. Specify the authenticate-cognito action for the listener rule.

Discussion 0
Questions 119

A company is building an application on AWS. The application sends logs to an Amazon Elasticsearch Service (Amazon ES) cluster for analysis. All data must be stored within a VPC.

Some of the company's developers work from home. Other developers work from three different company office locations. The developers need to access Amazon ES to analyze and visualize logs directly from their local development machines.

Which solution will meet these requirements?

Options:

A.  

Configure and set up an AWS Client VPN endpoint. Associate the Client VPN endpoint with a subnet in the VPC. Configure a Client VPN self-service portal. Instruct the developers to connect by using the client for Client VPN.

B.  

Create a transit gateway, and connect it to the VPC. Create an AWS Site-to-Site VPN. Create an attachment to the transit gateway. Instruct the developers to connect by using an OpenVPN client.

C.  

Create a transit gateway, and connect it to the VPC. Order an AWS Direct Connect connection. Set up a public VIF on the Direct Connect connection. Associate the public VIF with the transit gateway. Instruct the developers to connect to the Direct Connect connection.

D.  

Create and configure a bastion host in a public subnet of the VPC. Configure the bastion host security group to allow SSH access from the company CIDR ranges. Instruct the developers to connect by using SSH.
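
A hedged boto3 sketch of option A's setup: create a Client VPN endpoint and associate it with a VPC subnet so remote developers get routed access to Amazon ES. The certificate ARNs, CIDRs, and subnet ID are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Option A: a Client VPN endpoint gives remote developers routed access
# into the VPC. All ARNs and IDs below are placeholders.
endpoint = ec2.create_client_vpn_endpoint(
    ClientCidrBlock="10.100.0.0/16",  # must not overlap the VPC CIDR
    ServerCertificateArn="arn:aws:acm:us-east-1:111122223333:certificate/server-cert",
    AuthenticationOptions=[{
        "Type": "certificate-authentication",
        "MutualAuthentication": {
            "ClientRootCertificateChainArn":
                "arn:aws:acm:us-east-1:111122223333:certificate/client-ca",
        },
    }],
    ConnectionLogOptions={"Enabled": False},
    SelfServicePortal="enabled",  # developers download the client config here
)

# Associate the endpoint with a VPC subnet so traffic can reach Amazon ES.
ec2.associate_client_vpn_target_network(
    ClientVpnEndpointId=endpoint["ClientVpnEndpointId"],
    SubnetId="subnet-0123456789abcdef0",
)
```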

Discussion 0
Questions 120

A company is migrating infrastructure for its massive multiplayer game to AWS. The game's application features a leaderboard where players can see rankings in real time. The leaderboard requires microsecond reads and single-digit-millisecond write latencies. The datasets are single-digit terabytes in size and must be available to accept writes in less than a minute if a primary node failure occurs.

The company needs a solution in which data can persist for further analytical processing through a data pipeline.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.  

Create an Amazon ElastiCache (Redis OSS) cluster with cluster mode enabled. Configure the application to interact with the primary node.

B.  

Create an Amazon RDS database with a read replica. Configure the application to point writes to the writer endpoint. Configure the application to point reads to the reader endpoint.

C.  

Create an Amazon MemoryDB cluster in Multi-AZ mode. Configure the application to interact with the primary node.

D.  

Create multiple Redis nodes on Amazon EC2 instances that are spread across multiple Availability Zones. Configure backups to Amazon S3.

Discussion 0
Questions 121

Question:

A company is migrating a large on-premises Oracle database (with stored procedures) to AWS. The solution must use managed services, be highly available, and enable a fast migration with minimal downtime.

Options:

A.  

Use AWS DMS to replicate data to RDS for Oracle. Store database files in S3.

B.  

Use backup and restore into EC2-hosted Oracle cluster.

C.  

Use DMS to move data to DynamoDB. Recreate stored procedures in Lambda.

D.  

Use DMS to migrate to Amazon Aurora PostgreSQL. Use AWS SCT to convert the stored procedures.

Discussion 0
Questions 122

Question:

A company needs to copy backups of 40 RDS for MySQL databases from a production account to a central backup account within AWS Organizations. The databases use default AWS managed KMS encryption keys. The backups must be stored in a WORM (write once, read many) backup account.

What is the correct approach to enable cross-account backup?

Options:

A.  

Restore the databases with customer-managed KMS keys and use AWS Backup with cross-account vault sharing.

B.  

Share the default KMS keys with the central account and create backup vaults in the central account.

C.  

Use a Lambda function to decrypt and copy the snapshots to the central account.

D.  

Use a Lambda function to share and re-encrypt snapshots across accounts using the default KMS key.

Discussion 0
Questions 123

A company is migrating internal business applications to Amazon EC2 and Amazon RDS in a VPC. The migration requires connecting the cloud-based applications to the on-premises internal network. The company wants to set up an AWS Site-to-Site VPN connection. The company has created two separate customer gateways. The gateways are configured for static routing and have been assigned distinct public IP addresses.

Which solution will meet these requirements?

Options:

A.  

Create one virtual private gateway. Associate the virtual private gateway with the VPC. Enable route propagation for the virtual private gateway in all VPC route tables. Create two Site-to-Site VPN connections with two tunnels for each connection. Configure the Site-to-Site VPN connections to use the virtual private gateway and to use separate customer gateways.

B.  

Create one customer gateway. Associate the customer gateway with the VPC. Enable route propagation for the customer gateway in all VPC route tables. Create two Site-to-Site VPN connections with two tunnels for each connection. Configure the Site-to-Site VPN connections to use the customer gateway.

C.  

Create two virtual private gateways. Associate the virtual private gateways with the VPC. Enable route propagation for both customer gateways in all VPC route tables. Create two Site-to-Site VPN connections with two tunnels for each connection. Configure the Site-to-Site VPN connections to use separate virtual private gateways and separate customer gateways.

D.  

Create two virtual private gateways. Associate the virtual private gateways with the VPC. Enable route propagation for both customer gateways in all VPC route tables. Create four Site-to-Site VPN connections with one tunnel for each connection. Configure the Site-to-Site VPN connections into groups of two. Configure each group to connect to separate customer gateways and separate virtual private gateways.

Discussion 0
Questions 124

A company runs an application on AWS. The application uses an Amazon Aurora MySQL database that is encrypted with the default AWS managed AWS KMS key.

The company must implement a solution to rotate the database encryption key every 180 days. The solution must provide a notification if the encryption key is noncompliant with this standard.

Which solution will meet these requirements?

Options:

A.  

Configure the rotation period for the existing AWS managed KMS key to be 180 days. Implement the cmk-backing-key-rotation-enabled AWS Config managed rule for the existing KMS key. Configure AWS Config to use Amazon SNS to notify the security team if key rotation is noncompliant.

B.  

Create a new AWS managed KMS key with automatic rotation set for 180 days. Take a snapshot of the database. Restore the snapshot to a new Aurora cluster that uses the new KMS key. Create an AWS Config custom rule that uses an AWS Lambda function to validate the key rotation period. Configure AWS Config to use Amazon SES to notify the security team if key encryption is noncompliant.

C.  

Create a new customer managed KMS key with automatic rotation set for 180 days. Take a snapshot of the database. Restore the snapshot to a new Aurora cluster that uses the new KMS key. Create an AWS Config custom rule that uses an AWS Lambda function to validate the key rotation period. Configure AWS Config to use Amazon SNS to notify the security team if key encryption is noncompliant.

D.  

Create a new customer managed KMS key with automatic rotation set for 180 days. Update the database to use the new KMS key for encryption. Implement the cmk-backing-key-rotation-enabled AWS Config managed rule for the new KMS key. Configure AWS Config to use Amazon SES to notify the security team if key rotation is noncompliant.
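
The crux of this question is that a custom rotation period only works on a customer managed key; the rotation period of the AWS managed key cannot be changed. A minimal boto3 sketch of that first step (the key description is a placeholder):

```python
import boto3

kms = boto3.client("kms")

# A customer managed key is required: AWS managed keys rotate on a fixed
# schedule that cannot be set to 180 days.
key = kms.create_key(Description="aurora-mysql-cmk")
key_id = key["KeyMetadata"]["KeyId"]

# Custom rotation periods (90-2560 days) are supported for customer
# managed keys.
kms.enable_key_rotation(KeyId=key_id, RotationPeriodInDays=180)
```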

Discussion 0
Questions 125

A company is changing the way that it handles patching of Amazon EC2 instances in its application account. The company currently patches instances over the internet by using a NAT gateway in a VPC in the application account. The company has EC2 instances set up as a patch source repository in a dedicated private VPC in a core account.

The company wants to use AWS Systems Manager Patch Manager and the patch source repository in the core account to patch the EC2 instances in the application account. The company must prevent all EC2 instances in the application account from accessing the internet. The EC2 instances in the application account need to access Amazon S3, where the application data is stored. These EC2 instances need connectivity to Systems Manager and to the patch source repository in the private VPC in the core account.

Which solution will meet these requirements?

Options:

A.  

Create a network ACL that blocks outbound traffic on port 80. Associate the network ACL with all subnets in the application account. In the application account and the core account, deploy one EC2 instance that runs a custom VPN server. Create a VPN tunnel to access the private VPC. Update the route table in the application account.

B.  

Create private VIFs for Systems Manager and Amazon S3. Delete the NAT gateway from the VPC in the application account. Create a transit gateway to access the patch source repository EC2 instances in the core account. Update the route table in the core account.

C.  

Create VPC endpoints for Systems Manager and Amazon S3. Delete the NAT gateway from the VPC in the application account. Create a VPC peering connection to access the patch source repository EC2 instances in the core account. Update the route tables in both accounts.

D.  

Create a network ACL that blocks inbound traffic on port 80. Associate the network ACL with all subnets in the application account. Create a transit gateway to access the patch source repository EC2 instances in the core account. Update the route tables in both accounts.
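
A short boto3 sketch of the endpoints that option C creates: a gateway endpoint for S3 plus the three interface endpoints that SSM Agent needs. The Region, VPC, route table, and subnet IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"  # application-account VPC (placeholder)

# Gateway endpoint keeps S3 traffic on the AWS network, no internet path.
ec2.create_vpc_endpoint(
    VpcId=vpc_id,
    VpcEndpointType="Gateway",
    ServiceName="com.amazonaws.us-east-1.s3",  # assumes us-east-1
    RouteTableIds=["rtb-0123456789abcdef0"],
)

# Interface endpoints for the Systems Manager endpoints SSM Agent uses.
for svc in ("ssm", "ssmmessages", "ec2messages"):
    ec2.create_vpc_endpoint(
        VpcId=vpc_id,
        VpcEndpointType="Interface",
        ServiceName=f"com.amazonaws.us-east-1.{svc}",
        SubnetIds=["subnet-0123456789abcdef0"],
        PrivateDnsEnabled=True,
    )
```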

Discussion 0
Questions 126

A company uses an organization in AWS Organizations to manage multiple AWS accounts. The company hosts some applications in a VPC in the company's shared services account. The company has attached a transit gateway to the VPC in the shared services account.

The company is developing a new capability and has created a development environment that requires access to the applications that are in the shared services account. The company intends to delete and recreate resources frequently in the development account. The company also wants to give a development team the ability to recreate the team's connection to the shared services account as required.

Which solution will meet these requirements?

Options:

A.  

Create a transit gateway in the development account. Create a transit gateway peering request to the shared services account. Configure the shared services transit gateway to automatically accept peering connections.

B.  

Turn on automatic acceptance for the transit gateway in the shared services account. Use AWS Resource Access Manager (AWS RAM) to share the transit gateway resource in the shared services account with the development account. Accept the resource in the development account. Create a transit gateway attachment in the development account.

C.  

Turn on automatic acceptance for the transit gateway in the shared services account. Create a VPC endpoint. Use the endpoint policy to grant permissions on the VPC endpoint for the development account. Configure the endpoint service to automatically accept connection requests. Provide the endpoint details to the development team.

D.  

Create an Amazon EventBridge rule to invoke an AWS Lambda function that accepts the transit gateway attachment when the development account makes an attachment request. Use AWS Network Manager to share the transit gateway in the shared services account with the development account. Accept the transit gateway attachment in the development account.

Discussion 0
Questions 127

A software company has deployed an application that consumes a REST API by using Amazon API Gateway, AWS Lambda functions, and an Amazon DynamoDB table. The application is showing an increase in the number of errors during PUT requests. Most of the PUT calls come from a small number of clients that are authenticated with specific API keys.

A solutions architect has identified that a large number of the PUT requests originate from one client. The API is noncritical, and clients can tolerate retries of unsuccessful calls. However, the errors are displayed to customers and are causing damage to the API's reputation.

What should the solutions architect recommend to improve the customer experience?

Options:

A.  

Implement retry logic with exponential backoff and jitter in the client application. Ensure that the errors are caught and handled with descriptive error messages.

B.  

Implement API throttling through a usage plan at the API Gateway level. Ensure that the client application handles code 429 replies without error.

C.  

Turn on API caching to enhance responsiveness for the production stage. Run 10-minute load tests. Verify that the cache capacity is appropriate for the workload.

D.  

Implement reserved concurrency at the Lambda function level to provide the resources that are needed during sudden increases in traffic.
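
A hedged boto3 sketch of option B: a usage plan with per-key throttling so the noisy client receives well-defined 429 responses instead of degrading the API. The API ID, stage name, limits, and key ID are placeholders.

```python
import boto3

apigw = boto3.client("apigateway")

# Option B: throttle per API key so one noisy client gets 429s that it can
# retry, instead of causing visible errors for everyone.
plan = apigw.create_usage_plan(
    name="put-heavy-clients",
    throttle={"rateLimit": 10.0, "burstLimit": 20},  # requests/sec, burst
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],
)

# Attach the offending client's existing API key to the plan.
apigw.create_usage_plan_key(
    usagePlanId=plan["id"],
    keyId="existing-api-key-id",
    keyType="API_KEY",
)
```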

Discussion 0
Questions 128

A company operates a fleet of servers on premises and operates a fleet of Amazon EC2 instances in its organization in AWS Organizations. The company's AWS accounts contain hundreds of VPCs. The company wants to connect its AWS accounts to its on-premises network. AWS Site-to-Site VPN connections are already established to a single AWS account. The company wants to control which VPCs can communicate with other VPCs.

Which combination of steps will achieve this level of control with the LEAST operational effort? (Choose three.)

Options:

A.  

Create a transit gateway in an AWS account. Share the transit gateway across accounts by using AWS Resource Access Manager (AWS RAM).

B.  

Configure attachments to all VPCs and VPNs.

C.  

Set up transit gateway route tables. Associate the VPCs and VPNs with the route tables.

D.  

Configure VPC peering between the VPCs.

E.  

Configure attachments between the VPCs and VPNs.

F.  

Set up route tables on the VPCs and VPNs.

Discussion 0
Questions 129

A company is developing an application that will display financial reports. The company needs a solution that can store financial information that comes from multiple systems. The solution must provide the reports through a web interface and must serve the data with less than 500 milliseconds of latency to end users. The solution also must be highly available and must have an RTO of 30 seconds.

Which solution will meet these requirements?

Options:

A.  

Use an Amazon Redshift cluster to store the data. Use a static website that is hosted on Amazon S3 with backend APIs that are served by an Amazon Elastic Kubernetes Service (Amazon EKS) cluster to provide the reports to the application.

B.  

Use Amazon S3 to store the data. Use Amazon Athena to provide the reports to the application. Use AWS App Runner to serve the application to view the reports.

C.  

Use Amazon DynamoDB to store the data. Use an embedded Amazon QuickSight dashboard with direct query datasets to provide the reports to the application.

D.  

Use Amazon Keyspaces (for Apache Cassandra) to store the data. Use AWS Elastic Beanstalk to provide the reports to the application.

Discussion 0
Questions 130

A solutions architect must update an application environment within AWS Elastic Beanstalk by using a blue/green deployment methodology. The solutions architect creates an environment that is identical to the existing application environment and deploys the application to the new environment.

What should be done next to complete the update?

Options:

A.  

Redirect to the new environment using Amazon Route 53.

B.  

Select the Swap Environment URLs option.

C.  

Replace the Auto Scaling launch configuration.

D.  

Update the DNS records to point to the green environment.

Discussion 0
Questions 131

A company runs a content management application on a single Windows Amazon EC2 instance in a development environment. The application reads and writes static content to a 2 TB Amazon Elastic Block Store (Amazon EBS) volume that is attached to the instance as the root device. The company plans to deploy this application in production as a highly available and fault-tolerant solution that runs on at least three EC2 instances across multiple Availability Zones.

A solutions architect must design a solution that joins all the instances that run the application to an Active Directory domain. The solution also must implement Windows ACLs to control access to file contents. The application always must maintain exactly the same content on all running instances at any given point in time.

Which solution will meet these requirements with the LEAST management overhead?

Options:

A.  

Create an Amazon Elastic File System (Amazon EFS) file share. Create an Auto Scaling group that extends across three Availability Zones and maintains a minimum size of three instances. Implement a user data script to install the application, join the instance to the AD domain, and mount the EFS file share.

B.  

Create a new AMI from the current EC2 instance that is running. Create an Amazon FSx for Lustre file system. Create an Auto Scaling group that extends across three Availability Zones and maintains a minimum size of three instances. Implement a user data script to join the instance to the AD domain and mount the FSx for Lustre file system.

C.  

Create an Amazon FSx for Windows File Server file system. Create an Auto Scaling group that extends across three Availability Zones and maintains a minimum size of three instances. Implement a user data script to install the application and mount the FSx for Windows File Server file system. Perform a seamless domain join to join the instance to the AD domain.

D.  

Create a new AMI from the current EC2 instance that is running. Create an Amazon Elastic File System (Amazon EFS) file system. Create an Auto Scaling group that extends across three Availability Zones and maintains a minimum size of three instances. Perform a seamless domain join to join the instance to the AD domain.

Discussion 0
Questions 132

A company is moving a business-critical, multi-tier application to AWS. The architecture consists of a desktop client application and server infrastructure. The server infrastructure resides in an on-premises data center that frequently fails to maintain the application uptime SLA of 99.95%. A solutions architect must re-architect the application to ensure that it can meet or exceed the SLA.

The application contains a PostgreSQL database running on a single virtual machine. The business logic and presentation layers are load balanced between multiple virtual machines. Remote users complain about slow load times while using this latency-sensitive application.

Which of the following will meet the availability requirements with little change to the application while improving user experience and minimizing costs?

Options:

A.  

Migrate the database to a PostgreSQL database in Amazon EC2. Host the application and presentation layers in automatically scaled Amazon ECS containers behind an Application Load Balancer. Allocate an Amazon WorkSpaces Workspace for each end user to improve the user experience.

B.  

Migrate the database to an Amazon RDS Aurora PostgreSQL configuration. Host the application and presentation layers in an Auto Scaling configuration on Amazon EC2 instances behind an Application Load Balancer. Use Amazon AppStream 2.0 to improve the user experience.

C.  

Migrate the database to an Amazon RDS PostgreSQL Multi-AZ configuration. Host the application and presentation layers in automatically scaled AWS Fargate containers behind a Network Load Balancer. Use Amazon ElastiCache to improve the user experience.

D.  

Migrate the database to an Amazon Redshift cluster with at least two nodes. Combine and host the application and presentation layers in automatically scaled Amazon ECS containers behind an Application Load Balancer. Use Amazon CloudFront to improve the user experience.

Discussion 0
Questions 133

A company deploys workloads in multiple AWS accounts. Each account has a VPC with VPC flow logs published in text log format to a centralized Amazon S3 bucket. Each log file is compressed with gzip compression. The company must retain the log files indefinitely.

A security engineer occasionally analyzes the logs by using Amazon Athena to query the VPC flow logs. The query performance is degrading over time as the number of ingested logs is growing. A solutions architect must improve the performance of the log analysis and reduce the storage space that the VPC flow logs use.

Which solution will meet these requirements with the LARGEST performance improvement?

Options:

A.  

Create an AWS Lambda function to decompress the gzip files and to compress the files with bzip2 compression. Subscribe the Lambda function to an s3:ObjectCreated:Put S3 event notification for the S3 bucket.

B.  

Enable S3 Transfer Acceleration for the S3 bucket. Create an S3 Lifecycle configuration to move files to the S3 Intelligent-Tiering storage class as soon as the files are uploaded.

C.  

Update the VPC flow log configuration to store the files in Apache Parquet format. Specify Hourly partitions for the log files.

D.  

Create a new Athena workgroup without data usage control limits. Use Athena engine version 2.

Discussion 0
Questions 134

An EC2-based ticketing service pulls a frequently updated pricing file (stored in S3) on startup. Sometimes EC2 instances have stale pricing, which causes charge issues.

Options:

A.  

Lambda updates DynamoDB with new prices.

B.  

Lambda updates Amazon EFS.

C.  

Use Mountpoint for S3 to mount the pricing file to EC2.

D.  

Use Multi-Attach EBS volume for price file.

Discussion 0
Questions 135

A company hosts a payment processing platform across multiple AWS Regions. The processing platform exposes payment processing API endpoints. The company uses Amazon Route 53 to route traffic to an Application Load Balancer (ALB) in each Region to distribute requests to backend payment gateways. The company has implemented Auto Scaling for the payment gateways.

The company needs to improve reliability and availability by handling application failures quickly.

Which solution will meet this requirement with the LEAST response time for global users?

Options:

A.  

Configure Route 53 to use a geolocation routing policy to resolve DNS to an Amazon CloudFront distribution. Configure the distribution with an origin group that includes the ALBs from each Region as origins.

B.  

Create an accelerator in AWS Global Accelerator. Create an endpoint group and add each ALB as an endpoint in the group. Update the Route 53 alias records to point to the accelerator.

C.  

Configure Route 53 to use a latency-based routing policy and health checks for each Regional endpoint.

D.  

Set up cross-Region connectivity by using AWS Cloud WAN to connect VPCs across Regions. Use network policies to route traffic from each Region to the Region that is closest to the origin Region.

Discussion 0
Questions 136

A solutions architect needs to advise a company on how to migrate its on-premises data processing application to the AWS Cloud. Currently, users upload input files through a web portal. The web server then stores the uploaded files on NAS and messages the processing server over a message queue. Each media file can take up to 1 hour to process. The company has determined that the number of media files awaiting processing is significantly higher during business hours, with the number of files rapidly declining after business hours.

What is the MOST cost-effective migration recommendation?

Options:

A.  

Create a queue using Amazon SQS. Configure the existing web server to publish to the new queue. When there are messages in the queue, invoke an AWS Lambda function to pull requests from the queue and process the files. Store the processed files in an Amazon S3 bucket.

B.  

Create a queue using Amazon MQ. Configure the existing web server to publish to the new queue. When there are messages in the queue, create a new Amazon EC2 instance to pull requests from the queue and process the files. Store the processed files in Amazon EFS. Shut down the EC2 instance after the task is complete.

C.  

Create a queue using Amazon MQ. Configure the existing web server to publish to the new queue. When there are messages in the queue, invoke an AWS Lambda function to pull requests from the queue and process the files. Store the processed files in Amazon EFS.

D.  

Create a queue using Amazon SQS. Configure the existing web server to publish to the new queue. Use Amazon EC2 instances in an EC2 Auto Scaling group to pull requests from the queue and process the files. Scale the EC2 instances based on the SQS queue length. Store the processed files in an Amazon S3 bucket.

Discussion 0
Questions 137

A company has an IoT platform that runs in an on-premises environment. The platform consists of a server that connects to IoT devices by using the MQTT protocol. The platform collects telemetry data from the devices at least once every 5 minutes. The platform also stores device metadata in a MongoDB cluster.

An application that is installed on an on-premises machine runs periodic jobs to aggregate and transform the telemetry and device metadata. The application creates reports that users view by using another web application that runs on the same on-premises machine. The periodic jobs take 120-600 seconds to run. However, the web application is always running.

The company is moving the platform to AWS and must reduce the operational overhead of the stack.

Which combination of steps will meet these requirements with the LEAST operational overhead? (Select THREE.)

Options:

A.  

Use AWS Lambda functions to connect to the IoT devices.

B.  

Configure the IoT devices to publish to AWS IoT Core.

C.  

Write the metadata to a self-managed MongoDB database on an Amazon EC2 instance.

D.  

Write the metadata to Amazon DocumentDB (with MongoDB compatibility).

E.  

Use AWS Step Functions state machines with AWS Lambda tasks to prepare the reports and to write the reports to Amazon S3. Use Amazon CloudFront with an S3 origin to serve the reports.

F.  

Use an Amazon Elastic Kubernetes Service (Amazon EKS) cluster with Amazon EC2 instances to prepare the reports. Use an ingress controller in the EKS cluster to serve the reports.

Discussion 0
Questions 138

A company is running its solution on AWS in a manually created VPC. The company is using AWS CloudFormation to provision other parts of the infrastructure. According to a new requirement, the company must manage all infrastructure in an automatic way.

What should the company do to meet this new requirement with the LEAST effort?

Options:

A.  

Create a new AWS Cloud Development Kit (AWS CDK) stack that strictly provisions the existing VPC resources and configuration. Use AWS CDK to import the VPC into the stack and to manage the VPC.

B.  

Create a CloudFormation stack set that creates the VPC. Use the stack set to import the VPC into the stack.

C.  

Create a new CloudFormation template that strictly provisions the existing VPC resources and configuration. From the CloudFormation console, create a new stack by importing the existing resources.

D.  

Create a new CloudFormation template that creates the VPC. Use the AWS Serverless Application Model (AWS SAM) CLI to import the VPC.

Discussion 0
Questions 139

A research company conducts mathematical simulations in the AWS Cloud. The simulations run on several hundred Amazon EC2 Linux instances. If a simulation encounters an issue, an engineer establishes an SSH connection to the affected EC2 instance.

Company policy requires that each EC2 instance session is established with a unique SSH key and that all SSH connections are logged in AWS CloudTrail. CloudTrail is enabled for the company’s account and the AWS Regions that the company uses.

Which solution will meet these requirements?

Options:

Discussion 0
Questions 140

A company is running an application in the AWS Cloud. The application consists of microservices that run on a fleet of Amazon EC2 instances in multiple Availability Zones behind an Application Load Balancer. The company recently added a new REST API that was implemented in Amazon API Gateway. Some of the older microservices that run on EC2 instances need to call this new API.

The company does not want the API to be accessible from the public internet and does not want proprietary data to traverse the public internet.

What should a solutions architect do to meet these requirements?

Options:

A.  

Create an AWS Site-to-Site VPN connection between the VPC and the API Gateway. Use API Gateway to generate a unique API key for each microservice. Configure the API methods to require the key.

B.  

Create an interface VPC endpoint for API Gateway, and set an endpoint policy to only allow access to the specific API. Add a resource policy to API Gateway to only allow access from the VPC endpoint. Change the API Gateway endpoint type to private.

C.  

Modify the API Gateway to use IAM authentication. Update the IAM policy for the IAM role that is assigned to the EC2 instances to allow access to the API Gateway. Move the API Gateway into a new VPC. Deploy a transit gateway and connect the VPCs.

D.  

Create an accelerator in AWS Global Accelerator, and connect the accelerator to the API Gateway. Update the route table for all VPC subnets with a route to the created Global Accelerator endpoint IP address. Add an API key for each service to use for authentication.
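
A hedged sketch of how option B's two API-side changes could be applied with boto3: a resource policy that only allows invocation through the interface VPC endpoint, and a switch of the endpoint type to PRIVATE. The API ID and endpoint ID are placeholders, and the patch assumes the API's current endpoint type is EDGE.

```python
import json
import boto3

apigw = boto3.client("apigateway")

# Resource policy: allow execute-api:Invoke only via the VPC endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "execute-api:Invoke",
        "Resource": "execute-api:/*",
        "Condition": {
            "StringEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}
        },
    }],
}

apigw.update_rest_api(
    restApiId="a1b2c3d4e5",  # placeholder API ID
    patchOperations=[
        {"op": "replace", "path": "/policy", "value": json.dumps(policy)},
        # Change the endpoint type to PRIVATE (path names the current type,
        # assumed EDGE here).
        {"op": "replace",
         "path": "/endpointConfiguration/types/EDGE",
         "value": "PRIVATE"},
    ],
)
```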

Discussion 0
Questions 141

A financial services company in North America plans to release a new online web application to its customers on AWS. The company will launch the application in the us-east-1 Region on Amazon EC2 instances. The application must be highly available and must dynamically scale to meet user traffic. The company also wants to implement a disaster recovery environment for the application in the us-west-1 Region by using active-passive failover.

Which solution will meet these requirements?

Options:

A.  

Create a VPC in us-east-1 and a VPC in us-west-1. Configure VPC peering. In the us-east-1 VPC, create an Application Load Balancer (ALB) that extends across multiple Availability Zones in both VPCs. Create an Auto Scaling group that deploys the EC2 instances across the multiple Availability Zones in both VPCs. Place the Auto Scaling group behind the ALB.

B.  

Create a VPC in us-east-1 and a VPC in us-west-1. In the us-east-1 VPC, create an Application Load Balancer (ALB) that extends across multiple Availability Zones in that VPC. Create an Auto Scaling group that deploys the EC2 instances across the multiple Availability Zones in the us-east-1 VPC. Place the Auto Scaling group behind the ALB. Set up the same configuration in the us-west-1 VPC. Create an Amazon Route 53 hosted zone. Create separate

C.  

Create a VPC in us-east-1 and a VPC in us-west-1. In the us-east-1 VPC, create an Application Load Balancer (ALB) that extends across multiple Availability Zones in that VPC. Create an Auto Scaling group that deploys the EC2 instances across the multiple Availability Zones in the us-east-1 VPC. Place the Auto Scaling group behind the ALB. Set up the same configuration in the us-west-1 VPC. Create an Amazon Route 53 hosted zone. Create separate r

D.  

Create a VPC in us-east-1 and a VPC in us-west-1. Configure VPC peering. In the us-east-1 VPC, create an Application Load Balancer (ALB) that extends across multiple Availability Zones in both VPCs. Create an Auto Scaling group that deploys the EC2 instances across the multiple Availability Zones in both VPCs. Place the Auto Scaling group behind the ALB. Create an Amazon Route 53 hosted zone. Create a record for the ALB.

Discussion 0
Questions 142

A company's solutions architect is evaluating an AWS workload that was deployed several years ago. The application tier is stateless and runs on a single large Amazon EC2 instance that was launched from an AMI. The application stores data in a MySQL database that runs on a single EC2 instance.

The CPU utilization on the application server EC2 instance often reaches 100% and causes the application to stop responding. The company manually installs patches on the instances. Patching has caused downtime in the past. The company needs to make the application highly available.

Which solution will meet these requirements with the LEAST development time?

Options:

A.  

Move the application tier to AWS Lambda functions in the existing VPC. Create an Application Load Balancer to distribute traffic across the Lambda functions. Use Amazon GuardDuty to scan the Lambda functions. Migrate the database to Amazon DocumentDB (with MongoDB compatibility).

B.  

Change the EC2 instance type to a smaller Graviton powered instance type. Use the existing AMI to create a launch template for an Auto Scaling group. Create an Application Load Balancer to distribute traffic across the instances in the Auto Scaling group. Set the Auto Scaling group to scale based on CPU utilization. Migrate the database to Amazon DynamoDB.

C.  

Move the application tier to containers by using Docker. Run the containers on Amazon Elastic Container Service (Amazon ECS) with EC2 instances. Create an Application Load Balancer to distribute traffic across the ECS cluster Configure the ECS cluster to scale based on CPU utilization. Migrate the database to Amazon Neptune.

D.  

Create a new AMI that is configured with AWS Systems Manager Agent (SSM Agent). Use the new AMI to create a launch template for an Auto Scaling group. Use smaller instances in the Auto Scaling group. Create an Application Load Balancer to distribute traffic across the instances in the Auto Scaling group. Set the Auto Scaling group to scale based on CPU utilization. Migrate the database to Amazon Aurora MySQL.

Discussion 0
Questions 143

A company is using an organization in AWS Organizations to manage hundreds of AWS accounts. A solutions architect is working on a solution to provide baseline protection for the Open Web Application Security Project (OWASP) top 10 web application vulnerabilities. The solutions architect is using AWS WAF for all existing and new Amazon CloudFront distributions that are deployed within the organization.

Which combination of steps should the solutions architect take to provide the baseline protection? (Select THREE.)

Options:

A.  

Enable AWS Config in all accounts.

B.  

Enable Amazon GuardDuty in all accounts.

C.  

Enable all features for the organization.

D.  

Use AWS Firewall Manager to deploy AWS WAF rules in all accounts for all CloudFront distributions.

E.  

Use AWS Shield Advanced to deploy AWS WAF rules in all accounts for all CloudFront distributions.

F.  

Use AWS Security Hub to deploy AWS WAF rules in all accounts for all CloudFront distributions.

Discussion 0
Questions 144

A company runs an application on a fleet of Amazon EC2 instances that are in private subnets behind an internet-facing Application Load Balancer (ALB). The ALB is the origin for an Amazon CloudFront distribution. An AWS WAF web ACL that contains various AWS managed rules is associated with the CloudFront distribution.

The company needs a solution that will prevent internet traffic from directly accessing the ALB.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.  

Create a new web ACL that contains the same rules that the existing web ACL contains. Associate the new web ACL with the ALB.

B.  

Associate the existing web ACL with the ALB.

C.  

Add a security group rule to the ALB to allow traffic from the AWS managed prefix list for CloudFront only.

D.  

Add a security group rule to the ALB to allow only the various CloudFront IP address ranges.
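
A short boto3 sketch of option C: look up the AWS-managed CloudFront origin-facing prefix list and reference it in the ALB's security group, so AWS keeps the IP ranges current with no manual updates. The security group ID is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2")

# Option C: the AWS-managed prefix list tracks CloudFront's origin-facing
# IP ranges automatically.
pls = ec2.describe_managed_prefix_lists(
    Filters=[{
        "Name": "prefix-list-name",
        "Values": ["com.amazonaws.global.cloudfront.origin-facing"],
    }]
)
pl_id = pls["PrefixLists"][0]["PrefixListId"]

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # the ALB's security group (placeholder)
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "PrefixListIds": [{"PrefixListId": pl_id}],
    }],
)
```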

Discussion 0
Questions 145

A company built an application based on AWS Lambda deployed in an AWS CloudFormation stack. The last production release of the web application introduced an issue that resulted in an outage lasting several minutes. A solutions architect must adjust the deployment process to support a canary release.

Which solution will meet these requirements?

Options:

A.  

Create an alias for every new deployed version of the Lambda function. Use the AWS CLI update-alias command with the routing-config parameter to distribute the load.

B.  

Deploy the application into a new CloudFormation stack. Use an Amazon Route 53 weighted routing policy to distribute the load.

C.  

Create a version for every new deployed Lambda function. Use the AWS CLI update-function-configuration command with the routing-config parameter to distribute the load.

D.  

Configure AWS CodeDeploy and use CodeDeployDefault.OneAtATime in the Deployment configuration to distribute the load.
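
A minimal boto3 sketch of the canary shift in option A: the alias keeps pointing at the stable version while routing a small weight to the new one. The function name, alias, and version numbers are placeholders.

```python
import boto3

lam = boto3.client("lambda")

# Shift 10% of invocations to the newly published version; the remaining
# 90% stays on the stable version until the canary proves healthy.
lam.update_alias(
    FunctionName="web-backend",   # hypothetical function name
    Name="live",
    FunctionVersion="5",          # current stable version
    RoutingConfig={"AdditionalVersionWeights": {"6": 0.10}},
)
```

The same shift can be done from the AWS CLI with update-alias and the routing-config parameter, which is what option A describes.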

Discussion 0
Questions 146

A company uses a load balancer to distribute traffic to Amazon EC2 instances in a single Availability Zone. The company is concerned about security and wants a solutions architect to re-architect the solution to meet the following requirements:

• Inbound requests must be filtered for common vulnerability attacks.

• Rejected requests must be sent to a third-party auditing application.

• All resources should be highly available.

Which solution meets these requirements?

Options:

A.  

Configure a Multi-AZ Auto Scaling group using the application's AMI. Create an Application Load Balancer (ALB) and select the previously created Auto Scaling group as the target. Use Amazon Inspector to monitor traffic to the ALB and EC2 instances. Create a web ACL in WAF. Create an AWS WAF using the web ACL and ALB. Use an AWS Lambda function to frequently push the Amazon Inspector report to the third-party auditing application.

B.  

Configure an Application Load Balancer (ALB) and add the EC2 instances as targets. Create a web ACL in WAF. Create an AWS WAF using the web ACL and ALB name and enable logging with Amazon CloudWatch Logs. Use an AWS Lambda function to frequently push the logs to the third-party auditing application.

C.  

Configure an Application Load Balancer (ALB) along with a target group, adding the EC2 instances as targets. Create an Amazon Kinesis Data Firehose with the destination of the third-party auditing application. Create a web ACL in WAF. Create an AWS WAF using the web ACL and ALB, then enable logging by selecting the Kinesis Data Firehose as the destination. Subscribe to AWS Managed Rules in AWS Marketplace, choosing the WAF as the subscriber.

D.  

Configure a Multi-AZ Auto Scaling group using the application's AMI. Create an Application Load Balancer (ALB) and select the previously created Auto Scaling group as the target. Create an Amazon Kinesis Data Firehose with a destination of the third-party auditing application. Create a web ACL in WAF. Create an AWS WAF using the web ACL and ALB, then enable logging by selecting the Kinesis Data Firehose as the destination. Subscribe to AWS Ma

Discussion 0
Questions 147

A company uses a software package for surveys. During surveys, data is uploaded from a field operator's device to an Amazon S3 bucket. A custom application that runs on several Amazon EC2 instances polls the S3 bucket for new data. When new data is available, the software processes the data.

The data uploads are infrequent. The processing software can take up to 25 minutes to analyze each data upload. The company wants to optimize the application workflow to process the S3 data.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.  

Modify the application to accept new S3 object keys as inputs. Containerize the application. Deploy the container to an Amazon ECS cluster that uses the AWS Fargate launch type. Configure S3 bucket notifications to send events to Amazon EventBridge when new objects are uploaded. Create an EventBridge rule that invokes an ECS task to run the application when a new S3 object event occurs.

B.  

Modify the application to accept new S3 object keys as inputs. Containerize the application. Deploy the container image to AWS Lambda functions. Create a new AWS Step Functions state machine to invoke the Lambda functions. Configure the state machine with a Task state that calls the Lambda functions. Set the Task state's Timeout property to 30 minutes.

C.  

Modify the application to accept new S3 object keys as inputs. Move the application from EC2 instances to Amazon ECS by using the EC2 capacity provider. Create an AWS Glue crawler to check the S3 bucket and invoke the application. Configure the application to process the data when the data is uploaded to Amazon S3.

D.  

Modify the application to use HTTP to poll new S3 object keys that reference data to process. Containerize the application. Deploy the container image to AWS Lambda functions. Configure S3 bucket notifications to send events to Amazon EventBridge when new objects are uploaded. Create an EventBridge rule that invokes the Lambda functions to post the new objects to HTTP endpoints by using fan-out.

Discussion 0
Questions 148

A team of data scientists is using Amazon SageMaker instances and SageMaker APIs to train machine learning (ML) models. The SageMaker instances are deployed in a VPC that does not have access to or from the internet. Datasets for ML model training are stored in an Amazon S3 bucket. Interface VPC endpoints provide access to Amazon S3 and the SageMaker APIs.

Occasionally, the data scientists require access to the Python Package Index (PyPI) repository to update Python packages that they use as part of their workflow. A solutions architect must provide access to the PyPI repository while ensuring that the SageMaker instances remain isolated from the internet.

Which solution will meet these requirements?

Options:

A.  

Create an AWS CodeCommit repository for each package that the data scientists need to access. Configure code synchronization between the PyPI repository and the CodeCommit repository. Create a VPC endpoint for CodeCommit.

B.  

Create a NAT gateway in the VPC. Configure VPC routes to allow access to the internet with a network ACL that allows access to only the PyPI repository endpoint.

C.  

Create a NAT instance in the VPC. Configure VPC routes to allow access to the internet. Configure SageMaker notebook instance firewall rules that allow access to only the PyPI repository endpoint.

D.  

Create an AWS CodeArtifact domain and repository. Add an external connection for public:pypi to the CodeArtifact repository. Configure the Python client to use the CodeArtifact repository. Create a VPC endpoint for CodeArtifact.
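
A minimal boto3 sketch of option D's CodeArtifact setup; the domain and repository names are placeholders. CodeArtifact proxies PyPI through the external connection, so the SageMaker VPC only ever talks to the CodeArtifact VPC endpoint.

```python
import boto3

ca = boto3.client("codeartifact")

# Hypothetical domain/repository names for the team's package proxy.
ca.create_domain(domain="ml-team")
ca.create_repository(domain="ml-team", repository="python-packages")

# The external connection lets CodeArtifact pull packages from PyPI on
# demand and cache them.
ca.associate_external_connection(
    domain="ml-team",
    repository="python-packages",
    externalConnection="public:pypi",
)
```

Developers then point pip at the repository, for example with the aws codeartifact login --tool pip helper, and install packages without any internet route from the VPC.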

Discussion 0
Questions 149

A company gives users the ability to upload images from a custom application. The upload process invokes an AWS Lambda function that processes and stores the image in an Amazon S3 bucket. The application invokes the Lambda function by using a specific function version ARN.

The Lambda function accepts image processing parameters by using environment variables. The company often adjusts the environment variables of the Lambda function to achieve optimal image processing output. The company tests different parameters and publishes a new function version with the updated environment variables after validating results. This update process also requires frequent changes to the custom application to invoke the new function version ARN. These changes cause interruptions for users.

A solutions architect needs to simplify this process to minimize disruption to users.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.  

Directly modify the environment variables of the published Lambda function version. Use the $LATEST version to test image processing parameters.

B.  

Create an Amazon DynamoDB table to store the image processing parameters. Modify the Lambda function to retrieve the image processing parameters from the DynamoDB table.

C.  

Directly code the image processing parameters within the Lambda function and remove the environment variables. Publish a new function version when the company updates the parameters.

D.  

Create a Lambda function alias. Modify the client application to use the function alias ARN. Reconfigure the Lambda alias to point to new versions of the function when the company finishes testing.

Discussion 0
Questions 150

A company is using AWS Organizations to manage multiple AWS accounts. For security purposes, the company requires the creation of an Amazon Simple Notification Service (Amazon SNS) topic that enables integration with a third-party alerting system in all the Organizations member accounts.

A solutions architect used an AWS CloudFormation template to create the SNS topic and stack sets to automate the deployment of CloudFormation stacks. Trusted access has been enabled in Organizations.

What should the solutions architect do to deploy the CloudFormation StackSets in all AWS accounts?

Options:

A.  

Create a stack set in the Organizations member accounts. Use service-managed permissions. Set deployment options to deploy to an organization. Use CloudFormation StackSets drift detection.

B.  

Create stacks in the Organizations member accounts. Use self-service permissions. Set deployment options to deploy to an organization. Enable the CloudFormation StackSets automatic deployment.

C.  

Create a stack set in the Organizations management account. Use service-managed permissions. Set deployment options to deploy to the organization. Enable CloudFormation StackSets automatic deployment.

D.  

Create stacks in the Organizations management account. Use service-managed permissions. Set deployment options to deploy to the organization. Enable CloudFormation StackSets drift detection.
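
A hedged boto3 sketch of option C: a service-managed stack set created in the management account that auto-deploys to the whole organization, including accounts that join later. The stack set name, template file, root OU ID, and Region are placeholders.

```python
import boto3

cfn = boto3.client("cloudformation")

# Service-managed permissions let CloudFormation deploy into every member
# account without per-account IAM roles; auto-deployment covers accounts
# that join the organization later.
cfn.create_stack_set(
    StackSetName="sns-alerting-topic",
    TemplateBody=open("sns_topic.yaml").read(),  # hypothetical template file
    PermissionModel="SERVICE_MANAGED",
    AutoDeployment={"Enabled": True, "RetainStacksOnAccountRemoval": False},
)

cfn.create_stack_instances(
    StackSetName="sns-alerting-topic",
    DeploymentTargets={"OrganizationalUnitIds": ["r-examplerootid"]},
    Regions=["us-east-1"],
)
```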

Discussion 0
Questions 151

A company has migrated an application from on premises to AWS. The application frontend is a static website that runs on two Amazon EC2 instances behind an Application Load Balancer (ALB). The application backend is a Python application that runs on three EC2 instances behind another ALB. The EC2 instances are large, general purpose On-Demand Instances that were sized to meet the on-premises specifications for peak usage of the application.

The application averages hundreds of thousands of requests each month. However, the application is used mainly during lunchtime and receives minimal traffic during the rest of the day.

A solutions architect needs to optimize the infrastructure cost of the application without negatively affecting the application availability.

Which combination of steps will meet these requirements? (Choose two.)

Options:

A.  

Change all the EC2 instances to compute optimized instances that have the same number of cores as the existing EC2 instances.

B.  

Move the application frontend to a static website that is hosted on Amazon S3.

C.  

Deploy the application frontend by using AWS Elastic Beanstalk. Use the same instance type for the nodes.

D.  

Change all the backend EC2 instances to Spot Instances.

E.  

Deploy the backend Python application to general purpose burstable EC2 instances that have the same number of cores as the existing EC2 instances.

Discussion 0
Questions 152

A company runs its application in the eu-west-1 Region and has one account for each of its environments: development, testing, and production. All the environments currently run 24 hours a day, 7 days a week by using stateful Amazon EC2 instances and Amazon RDS for MySQL databases. The databases are between 500 GB and 800 GB in size.

The development team and testing team work on business days during business hours, but the production environment operates 24 hours a day, 7 days a week. The company wants to reduce costs. All resources are tagged with an environment tag key and a value of development, testing, or production.

What should a solutions architect do to reduce costs with the LEAST operational effort?

Options:

A.  

Create an Amazon EventBridge (Amazon CloudWatch Events) rule that runs once every day. Configure the rule to invoke one AWS Lambda function that starts or stops instances based on the tag, day, and time.

B.  

Create an Amazon EventBridge (Amazon CloudWatch Events) rule that runs every business day in the evening. Configure the rule to invoke an AWS Lambda function that stops instances based on the tag. Create a second EventBridge (CloudWatch Events) rule that runs every business day in the morning. Configure the second rule to invoke another Lambda function that starts instances based on the tag.

C.  

Create an Amazon EventBridge (Amazon CloudWatch Events) rule that runs every business day in the evening. Configure the rule to invoke an AWS Lambda function that terminates instances based on the tag. Create a second EventBridge (CloudWatch Events) rule that runs every business day in the morning. Configure the second rule to invoke another Lambda function that restores the instances from their last backup based on the tag.

D.  

Create an Amazon EventBridge rule that runs every hour. Configure the rule to invoke one AWS Lambda function that terminates or restores instances from their last backup based on the tag, day, and time.
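
The tag-based stop/start pattern in options A and B can be sketched as a Lambda handler in Python (Boto3). The tag values follow the question; the RDS databases would be stopped the same way with rds.stop_db_instance. Everything else here is illustrative.

```python
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    """Stop all running EC2 instances tagged environment=development or testing.

    A matching function (or a second branch on the event input) would call
    start_instances from the morning rule.
    """
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:environment", "Values": ["development", "testing"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]

    instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return {"stopped": instance_ids}
```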

Discussion 0
Questions 153

A company has automated the nightly retraining of its machine learning models by using AWS Step Functions. The workflow consists of multiple steps that use AWS Lambda. Each step can fail for various reasons, and any failure causes a failure of the overall workflow.

A review reveals that the retraining has failed multiple nights in a row without the company noticing the failure. A solutions architect needs to improve the workflow so that notifications are sent for all types of failures in the retraining process.

Which combination of steps should the solutions architect take to meet these requirements? (Select THREE)

Options:

A.  

Create an Amazon Simple Notification Service (Amazon SNS) topic with a subscription of type "Email" that targets the team's mailing list.

B.  

Create a task named "Email" that forwards the input arguments to the SNS topic.

C.  

Add a Catch field to all Task, Map, and Parallel states that has a statement of "ErrorEquals": ["States.ALL"] and "Next": "Email".

D.  

Add a new email address to Amazon Simple Email Service (Amazon SES). Verify the email address.

E.  

Create a task named "Email" that forwards the input arguments to the SES email address.

F.  

Add a Catch field to all Task, Map, and Parallel states that has a statement of "ErrorEquals": ["States.Runtime"] and "Next": "Email".
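
For context on the Catch syntax that options C and F reference, here is a hedged sketch of a state machine definition in which a Task state routes every error (States.ALL) to an "Email" task that publishes its input to an SNS topic. All ARNs and names are hypothetical; the definition is built as a Python dict and passed to Boto3.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

definition = {
    "StartAt": "RetrainModel",
    "States": {
        "RetrainModel": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:retrain",
            # Any error in this state, including runtime errors, routes to Email.
            "Catch": [{"ErrorEquals": ["States.ALL"], "Next": "Email"}],
            "End": True,
        },
        "Email": {
            "Type": "Task",
            "Resource": "arn:aws:states:::sns:publish",
            "Parameters": {
                "TopicArn": "arn:aws:sns:us-east-1:111122223333:retrain-failures",
                "Message.$": "$",  # forward the input (error details) to the topic
            },
            "End": True,
        },
    },
}

sfn.update_state_machine(
    stateMachineArn="arn:aws:states:us-east-1:111122223333:stateMachine:retraining",
    definition=json.dumps(definition),
)
```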

Discussion 0
Questions 154

A company is storing data on premises on a Windows file server. The company produces 5 GB of new data daily.

The company migrated part of its Windows-based workload to AWS and needs the data to be available on a file system in the cloud. The company already has established an AWS Direct Connect connection between the on-premises network and AWS.

Which data migration strategy should the company use?

Options:

A.  

Use the file gateway option in AWS Storage Gateway to replace the existing Windows file server, and point the existing file share to the new file gateway.

B.  

Use AWS DataSync to schedule a daily task to replicate data between the on-premises Windows file server and Amazon FSx.

C.  

Use AWS Data Pipeline to schedule a daily task to replicate data between the on-premises Windows file server and Amazon Elastic File System (Amazon EFS).

D.  

Use AWS DataSync to schedule a daily task to replicate data between the on-premises Windows file server and Amazon Elastic File System (Amazon EFS).

Discussion 0
Questions 155

A company runs a latency-sensitive application that consumes messages from an Amazon Managed Streaming for Apache Kafka (Amazon MSK) cluster. The MSK cluster runs across three Availability Zones.

The current MSK cluster uses Standard brokers with two standard large instances in each Availability Zone. The company wants to minimize latency between Apache Kafka clients that are deployed in the same Availability Zones as the brokers. The company wants to increase available bandwidth and to increase the scaling speed of the cluster. Clients currently use default settings. Some downtime is acceptable while the company implements a solution.

Which solution will meet these requirements?

Options:

A.  

Configure a predictive scaling policy and set the MSK cluster as the target. Set the target value to 80 and set the scheduling buffer size to 0. Configure a placement group for the Kafka clients and associate the MSK hosts with the placement group.

B.  

Configure Cruise Control on the MSK cluster and enable bandwidth control and rebalancing. Deploy an Amazon MSK Connect proxy layer that uses latency-based routing. Reconfigure the Kafka clients to use the proxy endpoint.

C.  

Replace the Standard brokers with Express brokers that use express large instances. Set the client.rack property for the Kafka clients to az_id.

D.  

Resize the brokers to standard xlarge instances. Create MSK PrivateLink endpoints in each Availability Zone. Reconfigure each Kafka client to use the endpoint that is in the same Availability Zone as the client.
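
The client.rack setting in option C enables rack-aware (fetch-from-follower) reads, so a consumer fetches from the broker replica in its own Availability Zone. A hedged sketch using the confluent-kafka Python client (one possible client library; the bootstrap server, group ID, topic, and AZ ID are hypothetical):

```python
# pip install confluent-kafka
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "b-1.example-cluster.kafka.us-east-1.amazonaws.com:9092",
    "group.id": "latency-sensitive-consumers",
    # With fetch-from-follower enabled on the brokers, a client that sets
    # client.rack to its Availability Zone ID reads from the in-AZ replica.
    "client.rack": "use1-az1",
})
consumer.subscribe(["telemetry"])
```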

Discussion 0
Questions 156

A company wants to use AWS IAM Identity Center (AWS Single Sign-On) to manage employee access to AWS services. The company uses AWS Organizations to manage its AWS accounts.

Each employee has their own IAM user. Each IAM user is a member of at least one IAM group. Each IAM group has an attached policy that allows members to assume specific roles across the accounts. The roles contain appropriate policies for the expected activities of each group of users in each account. All relevant accounts exist inside a single OU.

The company has already created new users and groups in IAM Identity Center to match the permissions that exist in IAM.

How should the company use IAM Identity Center to implement the existing permissions?

Options:

A.  

For each group, create policies in each account. Give the policies the same name in each account. Create a new permission set. Add the name of the new policies to the permission set. Assign user access to the AWS accounts in IAM Identity Center.

B.  

For each group, create a new permission set. Attach the relevant existing IAM roles in each account to the permission set. Create a new customer managed policy that allows the group to assume the roles. Assign user access to the AWS accounts in IAM Identity Center.

C.  

For each group, create a new permission set. Create policies in each account. Give each policy a unique name. Set the path of each policy to match the name of the permission set. Assign user access to the AWS accounts in IAM Identity Center.

D.  

Add the OU to the accounts configuration in IAM Identity Center. For each group, create policies in each account. Create a new permission set. Add the new policies to the permission set as customer managed policies. Attach each new policy to the correct account in the account configuration in IAM Identity Center.

Discussion 0
Questions 157

A company is subject to regulatory audits of its financial information. External auditors who use a single AWS account need access to the company's AWS account. A solutions architect must provide the auditors with secure, read-only access to the company's AWS account. The solution must comply with AWS security best practices.

Which solution will meet these requirements?

Options:

A.  

In the company's AWS account, create resource policies for all resources in the account to grant access to the auditors' AWS account. Assign a unique external ID to the resource policy.

B.  

In the company's AWS account, create an IAM role that trusts the auditors' AWS account. Create an IAM policy that has the required permissions. Attach the policy to the role. Assign a unique external ID to the role's trust policy.

C.  

In the company's AWS account, create an IAM user. Attach the required IAM policies to the IAM user. Create API access keys for the IAM user. Share the access keys with the auditors.

D.  

In the company's AWS account, create an IAM group that has the required permissions. Create an IAM user in the company's account for each auditor. Add the IAM users to the IAM group.
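
The cross-account trust pattern with an external ID (option B) looks roughly like this in Boto3; the auditors' account ID, role name, and external ID value are hypothetical, and the ReadOnlyAccess managed policy stands in for whatever read-only permissions the company requires.

```python
import json
import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::999988887777:root"},  # auditors' account
        "Action": "sts:AssumeRole",
        # The external ID protects against the confused-deputy problem.
        "Condition": {"StringEquals": {"sts:ExternalId": "audit-2026-unique-id"}},
    }],
}

iam.create_role(
    RoleName="ExternalAuditorReadOnly",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

iam.attach_role_policy(
    RoleName="ExternalAuditorReadOnly",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)
```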

Discussion 0
Questions 158

A company collects air quality data from sensors. The company plans to use the MQTT protocol to send the data to AWS IoT Core. The company will process the data and then will store the data in an Amazon Aurora database.

During periods of low air quality, sensors will send data more frequently. The company must buffer the data during these periods to make sure that no data is lost before the data is processed and stored.

Which solution will meet these requirements?

Options:

A.  

Create an Amazon Kinesis data stream. Create an AWS IoT rule action and set the data stream as the target. Create an AWS Step Functions state machine that is invoked by the data stream. Use the state machine to process and store the data.

B.  

Create an Amazon Kinesis data stream. Create an AWS IoT rule action and set the data stream as the target. Create an application that runs on an Amazon ECS cluster with the AWS Fargate launch type. Configure the application to read data from the data stream, process the data, and store the data.

C.  

Create an Amazon SQS queue. Create an AWS IoT rule action and set the SQS queue as the target. Create an AWS Step Functions state machine that is invoked by the SQS queue. Use the state machine to process and store the data.

D.  

Create an Amazon SNS topic. Create an AWS IoT rule action and set the SNS topic as the target. Create an application that runs on an Amazon ECS cluster with the AWS Fargate launch type. Configure the application to read data from the SNS topic, process the data, and store the data.

Discussion 0
Questions 159

A solutions architect must provide a secure way for a team of cloud engineers to use the AWS CLI to upload objects into an Amazon S3 bucket. Each cloud engineer has an IAM user, IAM access keys, and a virtual multi-factor authentication (MFA) device. The IAM users for the cloud engineers are in a group that is named S3-access. The cloud engineers must use MFA to perform any actions in Amazon S3.

Which solution will meet these requirements?

Options:

A.  

Attach a policy to the S3 bucket to prompt the IAM user for an MFA code when the IAM user performs actions on the S3 bucket. Use IAM access keys with the AWS CLI to call Amazon S3.

B.  

Update the trust policy for the S3-access group to require principals to use MFA when principals assume the group. Use IAM access keys with the AWS CLI to call Amazon S3.

C.  

Attach a policy to the S3-access group to deny all S3 actions unless MFA is present. Use IAM access keys with the AWS CLI to call Amazon S3.

D.  

Attach a policy to the S3-access group to deny all S3 actions unless MFA is present. Request temporary credentials from AWS Security Token Service (AWS STS). Attach the temporary credentials in a profile that Amazon S3 will reference when the user performs actions in Amazon S3.
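
A hedged sketch of the MFA-enforcement pattern that options C and D describe: a deny policy on the S3-access group keyed on aws:MultiFactorAuthPresent, plus an STS call that exchanges the MFA code for temporary credentials to place in a CLI profile. The policy name, MFA serial number, and token code are hypothetical.

```python
import json
import boto3

# Deny statement attached to the S3-access group: every S3 action is refused
# unless the caller's credentials were obtained with MFA.
mfa_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "s3:*",
        "Resource": "*",
        "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
    }],
}

iam = boto3.client("iam")
iam.put_group_policy(
    GroupName="S3-access",
    PolicyName="DenyS3WithoutMFA",
    PolicyDocument=json.dumps(mfa_policy),
)

# Each engineer then requests MFA-backed temporary credentials and places them
# in a CLI profile before calling Amazon S3.
sts = boto3.client("sts")
creds = sts.get_session_token(
    DurationSeconds=3600,
    SerialNumber="arn:aws:iam::111122223333:mfa/engineer1",
    TokenCode="123456",
)["Credentials"]
```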

Discussion 0
Questions 160

A company is deploying AWS Lambda functions that access an Amazon RDS for PostgreSQL database. The company needs to launch the Lambda functions in a QA environment and in a production environment.

The company must not expose credentials within application code and must rotate passwords automatically.

Which solution will meet these requirements?

Options:

A.  

Store the database credentials for both environments in AWS Systems Manager Parameter Store. Encrypt the credentials by using an AWS Key Management Service (AWS KMS) key. Within the application code of the Lambda functions, pull the credentials from the Parameter Store parameter by using the AWS SDK for Python (Boto3). Add a role to the Lambda functions to provide access to the Parameter Store parameter.

B.  

Store the database credentials for both environments in AWS Secrets Manager with distinct key entries for the QA environment and the production environment. Turn on rotation. Provide a reference to the Secrets Manager key as an environment variable for the Lambda functions.

C.  

Store the database credentials for both environments in AWS Key Management Service (AWS KMS). Turn on rotation. Provide a reference to the credentials that are stored in AWS KMS as an environment variable for the Lambda functions.

D.  

Create separate S3 buckets for the QA environment and the production environment. Turn on server-side encryption with AWS KMS keys (SSE-KMS) for the S3 buckets. Use an object naming pattern that gives each Lambda function's application code the ability to pull the correct credentials for the function's corresponding environment. Grant each Lambda function's execution role access to Amazon S3.
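
As one illustration of the Secrets Manager pattern in option B, a Lambda function can resolve the secret at runtime from an ARN passed in as an environment variable. The variable name and secret fields are hypothetical.

```python
import json
import os
import boto3

secrets = boto3.client("secretsmanager")

def handler(event, context):
    # The secret ARN is injected as an environment variable per environment
    # (QA or production), so the code never embeds credentials.
    secret = secrets.get_secret_value(SecretId=os.environ["DB_SECRET_ARN"])
    creds = json.loads(secret["SecretString"])
    # creds["username"], creds["password"], and creds["host"] would then be
    # used to open the PostgreSQL connection; rotation updates the secret
    # in place with no code change.
    ...
```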

Discussion 0
Questions 161

A company is planning to migrate an Amazon RDS for Oracle database to an RDS for PostgreSQL DB instance in another AWS account. A solutions architect needs to design a migration strategy that will require no downtime and that will minimize the amount of time necessary to complete the migration. The migration strategy must replicate all existing data and any new data that is created during the migration. The target database must be identical to the source database at completion of the migration process.

All applications currently use an Amazon Route 53 CNAME record as their endpoint for communication with the RDS for Oracle DB instance. The RDS for Oracle DB instance is in a private subnet.

Which combination of steps should the solutions architect take to meet these requirements? (Select THREE)

Options:

A.  

Create a new RDS for PostgreSQL DB instance in the target account. Use the AWS Schema Conversion Tool (AWS SCT) to migrate the database schema from the source database to the target database.

B.  

Use the AWS Schema Conversion Tool (AWS SCT) to create a new RDS for PostgreSQL DB instance in the target account with the schema and initial data from the source database.

C.  

Configure VPC peering between the VPCs in the two AWS accounts to provide connectivity to both DB instances from the target account. Configure the security groups that are attached to each DB instance to allow traffic on the database port from the VPC in the target account.

D.  

Temporarily allow the source DB instance to be publicly accessible to provide connectivity from the VPC in the target account. Configure the security groups that are attached to each DB instance to allow traffic on the database port from the VPC in the target account.

E.  

Use AWS Database Migration Service (AWS DMS) in the target account to perform a full load plus change data capture (CDC) migration from the source database to the target database. When the migration is complete, change the CNAME record to point to the target DB instance endpoint.

F.  

Use AWS Database Migration Service (AWS DMS) in the target account to perform a change data capture (CDC) migration from the source database to the target database. When the migration is complete, change the CNAME record to point to the target DB instance endpoint.
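
A full load plus CDC task (the migration type option E describes) is created in AWS DMS roughly as follows; all ARNs and the task identifier are hypothetical, and the table mapping here simply includes every schema and table.

```python
import boto3

dms = boto3.client("dms")

dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-postgres",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:444455556666:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:444455556666:rep:INSTANCE",
    # Full load of existing data, then ongoing change data capture until cutover.
    MigrationType="full-load-and-cdc",
    TableMappings='{"rules": [{"rule-type": "selection", "rule-id": "1", '
                  '"rule-name": "all", "object-locator": '
                  '{"schema-name": "%", "table-name": "%"}, "rule-action": "include"}]}',
)
```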

Discussion 0
Questions 162

A company runs a customer service center that accepts calls and automatically sends all customers a managed, interactive, two-way experience survey by text message.

The applications that support the customer service center run on machines that the company hosts in an on-premises data center. The hardware that the company uses is old, and the company is experiencing downtime with the system. The company wants to migrate the system to AWS to improve reliability.

Which solution will meet these requirements with the LEAST ongoing operational overhead?

Options:

A.  

Use Amazon Connect to replace the old call center hardware. Use Amazon Pinpoint to send text message surveys to customers.

B.  

Use Amazon Connect to replace the old call center hardware. Use Amazon Simple Notification Service (Amazon SNS) to send text message surveys to customers.

C.  

Migrate the call center software to Amazon EC2 instances that are in an Auto Scaling group. Use the EC2 instances to send text message surveys to customers.

D.  

Use Amazon Pinpoint to replace the old call center hardware and to send text message surveys to customers.

Discussion 0
Questions 163

A company wants to modernize a monolithic application in the company's data center and deploy the application on AWS. The monolithic application consists of an event broker in a central account and multiple microservices in individual AWS accounts. The event broker and the microservices are deployed on Amazon ECS clusters that use the Fargate launch type.

Multiple microservices need access to the same events from the event broker. The company wants to distribute events from the central event broker to each microservice across accounts.

Which solution will meet these requirements?

Options:

A.  

Create an Amazon SNS topic in the central account. Add a topic policy to allow other accounts to subscribe to the topic. Create an Amazon SQS queue in each individual AWS account. Subscribe the SQS queue to the SNS topic. Configure the microservices to read events from their own SQS queue.

B.  

Create a new Amazon EventBridge event bus in the central account with the required permissions. Add EventBridge rules filtered by service for each microservice. Invoke the rules to route events to other accounts.

C.  

Create a data stream in Amazon Kinesis Data Streams in the central account. Create an IAM policy to grant the necessary permissions to access the data stream. Set each of the microservices as an event source on the Kinesis stream. Configure the stream to invoke each microservice.

D.  

Create a new Amazon SQS queue as the event broker in the central account. Grant the required permissions. Configure each of the microservices to read messages from the central SQS queue.
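
The cross-account fan-out in option A hinges on two permissions: the SNS topic policy must allow each member account to subscribe, and each SQS queue policy must allow the topic to deliver messages. A hedged Boto3 sketch of the first two steps (account IDs, topic, and queue names are hypothetical; the queue policy granting sns.amazonaws.com the sqs:SendMessage action is omitted for brevity):

```python
import json
import boto3

sns = boto3.client("sns")

TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:event-broker"  # central account

# In the central account: let a member account subscribe its own queue.
sns.set_topic_attributes(
    TopicArn=TOPIC_ARN,
    AttributeName="Policy",
    AttributeValue=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::444455556666:root"},
            "Action": "sns:Subscribe",
            "Resource": TOPIC_ARN,
        }],
    }),
)

# Run with the member account's credentials: subscribe that account's queue.
sns.subscribe(
    TopicArn=TOPIC_ARN,
    Protocol="sqs",
    Endpoint="arn:aws:sqs:us-east-1:444455556666:microservice-a-events",
)
```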

Discussion 0
Questions 164

A company is preparing to deploy an Amazon Elastic Kubernetes Service (Amazon EKS) cluster for a workload. The company expects the cluster to support an unpredictable number of stateless pods. Many of the pods will be created during a short time period as the workload automatically scales the number of replicas that the workload uses.

Which solution will MAXIMIZE node resilience?

Options:

A.  

Use a separate launch template to deploy the EKS control plane into a second cluster that is separate from the workload node groups.

B.  

Update the workload node groups. Use a smaller number of node groups and larger instances in the node groups.

C.  

Configure the Kubernetes Cluster Autoscaler to ensure that the compute capacity of the workload node groups stays under provisioned.

D.  

Configure the workload to use topology spread constraints that are based on Availability Zone.

Discussion 0
Questions 165

A company runs many workloads on AWS and uses AWS Organizations to manage its accounts. The workloads are hosted on Amazon EC2, AWS Fargate, and AWS Lambda. Some of the workloads have unpredictable demand. Accounts record high usage in some months and low usage in other months.

The company wants to optimize its compute costs over the next 3 years. A solutions architect obtains a 6-month average for each of the accounts across the organization to calculate usage.

Which solution will provide the MOST cost savings for all the organization's compute usage?

Options:

A.  

Purchase Reserved Instances for the organization to match the size and number of the most common EC2 instances from the member accounts.

B.  

Purchase a Compute Savings Plan for the organization from the management account by using the recommendation at the management account level.

C.  

Purchase Reserved Instances for each member account that had high EC2 usage according to the data from the last 6 months.

D.  

Purchase an EC2 Instance Savings Plan for each member account from the management account based on EC2 usage data from the last 6 months.

Discussion 0
Questions 166

A company has migrated a legacy application to the AWS Cloud. The application runs on three Amazon EC2 instances that are spread across three Availability Zones. One EC2 instance is in each Availability Zone. The EC2 instances are running in three private subnets of the VPC and are set up as targets for an Application Load Balancer (ALB) that is associated with three public subnets.

The application needs to communicate with on-premises systems. Only traffic from IP addresses in the company's IP address range is allowed to access the on-premises systems. The company's security team is bringing only one IP address from its internal IP address range to the cloud. The company has added this IP address to the allow list for the company firewall. The company also has created an Elastic IP address for this IP address.

A solutions architect needs to create a solution that gives the application the ability to communicate with the on-premises systems. The solution also must be able to mitigate failures automatically.

Which solution will meet these requirements?

Options:

A.  

Deploy three NAT gateways, one in each public subnet. Assign the Elastic IP address to the NAT gateways. Turn on health checks for the NAT gateways. If a NAT gateway fails a health check, recreate the NAT gateway and assign the Elastic IP address to the new NAT gateway.

B.  

Replace the ALB with a Network Load Balancer (NLB). Assign the Elastic IP address to the NLB. Turn on health checks for the NLB. In the case of a failed health check, redeploy the NLB in different subnets.

C.  

Deploy a single NAT gateway in a public subnet. Assign the Elastic IP address to the NAT gateway. Use Amazon CloudWatch with a custom metric to monitor the NAT gateway. If the NAT gateway is unhealthy, invoke an AWS Lambda function to create a new NAT gateway in a different subnet. Assign the Elastic IP address to the new NAT gateway.

D.  

Assign the Elastic IP address to the ALB. Create an Amazon Route 53 simple record with the Elastic IP address as the value. Create a Route 53 health check. In the case of a failed health check, recreate the ALB in different subnets.

Discussion 0
Questions 167

Question:

A company hosts an ecommerce site using EC2, ALB, and DynamoDB in one AWS Region. The site uses a custom domain in Route 53. The company wants to replicate the stack to a second Region for disaster recovery and faster access for global customers.

What should the architect do?

Options:

A.  

Use CloudFormation to deploy to the second Region. Use Route 53 latency-based routing. Enable global tables in DynamoDB.

B.  

Use the console to recreate the infra manually in the second Region. Use weighted routing.

C.  

Replicate only the S3 and DynamoDB data. Use Route 53 failover routing.

D.  

Use Beanstalk and DynamoDB Streams for replication. Use latency-based routing.

Discussion 0
Questions 168

A public retail web application uses an Application Load Balancer (ALB) in front of Amazon EC2 instances running across multiple Availability Zones (AZs) in a Region backed by an Amazon RDS MySQL Multi-AZ deployment. Target group health checks are configured to use HTTP and pointed at the product catalog page. Auto Scaling is configured to maintain the web fleet size based on the ALB health check.

Recently, the application experienced an outage. Auto Scaling continuously replaced the instances during the outage. A subsequent investigation determined that the web server metrics were within the normal range, but the database tier was experiencing high load, resulting in severely elevated query response times.

Which of the following changes together would remediate these issues while improving monitoring capabilities for the availability and functionality of the entire application stack for future growth? (Select TWO.)

Options:

A.  

Configure read replicas for Amazon RDS MySQL and use the single reader endpoint in the web application to reduce the load on the backend database tier.

B.  

Configure the target group health check to point at a simple HTML page instead of a product catalog page and the Amazon Route 53 health check against the product page to evaluate full application functionality. Configure Amazon CloudWatch alarms to notify administrators when the site fails.

C.  

Configure the target group health check to use a TCP check of the Amazon EC2 web server and the Amazon Route 53 health check against the product page to evaluate full application functionality. Configure Amazon CloudWatch alarms to notify administrators when the site fails.

D.  

Configure an Amazon CloudWatch alarm for Amazon RDS with an action to recover a high-load, impaired RDS instance in the database tier.

E.  

Configure an Amazon ElastiCache cluster and place it between the web application and RDS MySQL instances to reduce the load on the backend database tier.

Discussion 0
Questions 169

A company uses an organization in AWS Organizations to manage the company's AWS accounts. The company uses AWS CloudFormation to deploy all infrastructure. A finance team wants to build a chargeback model. The finance team asked each business unit to tag resources by using a predefined list of project values.

When the finance team used the AWS Cost and Usage Report in AWS Cost Explorer and filtered based on project, the team noticed noncompliant project values. The company wants to enforce the use of project tags for new resources.

Which solution will meet these requirements with the LEAST effort?

Options:

A.  

Create a tag policy that contains the allowed project tag values in the organization's management account. Create an SCP that denies the cloudformation:CreateStack API operation unless a project tag is added. Attach the SCP to each OU.

B.  

Create a tag policy that contains the allowed project tag values in each OU. Create an SCP that denies the cloudformation:CreateStack API operation unless a project tag is added. Attach the SCP to each OU.

C.  

Create a tag policy that contains the allowed project tag values in the AWS management account. Create an IAM policy that denies the cloudformation:CreateStack API operation unless a project tag is added. Assign the policy to each user.

D.  

Use AWS Service Catalog to manage the CloudFormation stacks as products. Use a TagOptions library to control project tag values. Share the portfolio with all OUs that are in the organization.
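
A hedged sketch of the SCP portion of options A and B: deny cloudformation:CreateStack whenever the request carries no project tag, then attach the policy to an OU. The policy name and OU ID are hypothetical; the tag policy that constrains the allowed project values is managed separately.

```python
import json
import boto3

org = boto3.client("organizations")

# SCP that refuses stack creation when no project tag is supplied on the request.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "cloudformation:CreateStack",
        "Resource": "*",
        # aws:RequestTag/project is null when the request includes no such tag.
        "Condition": {"Null": {"aws:RequestTag/project": "true"}},
    }],
}

policy = org.create_policy(
    Name="require-project-tag-on-stacks",
    Description="Deny CreateStack without a project tag",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-11111111",  # hypothetical OU ID
)
```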

Discussion 0
Questions 170

A company uses an organization in AWS Organizations that has multiple AWS accounts. The accounts host multiple resources that are tagged with a CostCenter tag key. The tag value is the name of the team. The company wants to accurately identify the cost of the resources so that the company can charge each team accordingly.

Which solution meets these requirements?

Options:

A.  

Activate the CostCenter user-defined tag in the organization's management account. Configure monthly AWS Cost and Usage Reports to save to an Amazon S3 bucket in the management account. Use the tag breakdown in the report to obtain the total cost for the resources that have the CostCenter tag.

B.  

Activate the CostCenter user-defined tag in every member account. Configure monthly AWS Cost and Usage Reports to save to an Amazon S3 bucket in the management account. Create an AWS Lambda function that runs monthly to retrieve the reports and calculate the total cost for the resources that have the CostCenter tag.

C.  

Activate the CostCenter user-defined tag in every member account. Schedule a monthly AWS Cost and Usage Report from the management account. Use the tag breakdown in the report to calculate the total cost for the resources that have the CostCenter tag.

D.  

Customize a report in the AWS Trusted Advisor organization view. Configure the report to generate monthly billing summaries for resources that have the CostCenter tag under the AWS accounts.

Discussion 0
Questions 171

A company is planning to migrate its on-premises transaction-processing application to AWS. The application runs inside Docker containers that are hosted on VMs in the company's data center. The Docker containers have shared storage where the application records transaction data.

The transactions are time sensitive. The volume of transactions inside the application is unpredictable. The company must implement a low-latency storage solution that will automatically scale throughput to meet increased demand. The company cannot develop the application further and cannot continue to administer the Docker hosting environment.

How should the company migrate the application to AWS to meet these requirements?

Options:

A.  

Migrate the containers that run the application to Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon S3 to store the transaction data that the containers share.

B.  

Migrate the containers that run the application to AWS Fargate for Amazon Elastic Container Service (Amazon ECS). Create an Amazon Elastic File System (Amazon EFS) file system. Create a Fargate task definition. Add a volume to the task definition to point to the EFS file system

C.  

Migrate the containers that run the application to AWS Fargate for Amazon Elastic Container Service (Amazon ECS). Create an Amazon Elastic Block Store (Amazon EBS) volume. Create a Fargate task definition. Attach the EBS volume to each running task.

D.  

Launch Amazon EC2 instances. Install Docker on the EC2 instances. Migrate the containers to the EC2 instances. Create an Amazon Elastic File System (Amazon EFS) file system. Add a mount point to the EC2 instances for the EFS file system.
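
The Fargate-plus-EFS pattern in option B comes down to a task definition that declares an EFS volume and mounts it into the container. A hedged Boto3 sketch; the family name, file system ID, image URI, role, and mount path are hypothetical.

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="transaction-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    executionRoleArn="arn:aws:iam::111122223333:role/ecsTaskExecutionRole",
    # Declare the shared EFS file system as a task volume.
    volumes=[{
        "name": "shared-transactions",
        "efsVolumeConfiguration": {
            "fileSystemId": "fs-0123456789abcdef0",
            "transitEncryption": "ENABLED",
        },
    }],
    containerDefinitions=[{
        "name": "app",
        "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/transaction-app:latest",
        # Mount the shared volume where the application records transaction data.
        "mountPoints": [{
            "sourceVolume": "shared-transactions",
            "containerPath": "/data/transactions",
        }],
    }],
)
```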

Discussion 0
Questions 172

A company has an application that is deployed on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances are part of an Auto Scaling group. The application has unpredictable workloads and frequently scales out and in. The company's development team wants to analyze application logs to find ways to improve the application's performance. However, the logs are no longer available after instances scale in.

Which solution will give the development team the ability to view the application logs after a scale-in event?

Options:

A.  

Enable access logs for the ALB. Store the logs in an Amazon S3 bucket.

B.  

Configure the EC2 instances to publish logs to Amazon CloudWatch Logs by using the unified CloudWatch agent.

C.  

Modify the Auto Scaling group to use a step scaling policy.

D.  

Instrument the application with AWS X-Ray tracing.

Discussion 0
Questions 173

A company is migrating its legacy .NET workload to AWS. The company has a containerized setup that includes a base container image. The base image is tens of gigabytes in size because of legacy libraries and other dependencies. The company has images for custom developed components that are dependent on the base image.

The company will use Amazon Elastic Container Registry (Amazon ECR) as part of its solution on AWS.

Which solution will provide the LOWEST container startup time on AWS?

Options:

A.  

Use Amazon ECR to store the base image and the images for the custom developed components. Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate to run the workload.

B.  

Use Amazon ECR to store the base image and the images for the custom developed components. Use AWS App Runner to run the workload.

C.  

Use Amazon ECR to store the images for the custom developed components. Create an AMI that contains the base image. Use Amazon Elastic Container Service (Amazon ECS) on Amazon EC2 instances that are based on the AMI to run the workload.

D.  

Use Amazon ECR to store the images for the custom developed components. Create an AMI that contains the base image. Use Amazon Elastic Kubernetes Service (Amazon EKS) on AWS Fargate with the AMI to run the workload.

Discussion 0
Questions 174

Question:

A company uses AWS Organizations and tags every resource with a BusinessUnit tag. The company wants to allocate cloud costs by business unit and visualize them.

Options:

A.  

Activate BusinessUnit cost allocation tag in the management account. Create a CUR to S3. Use Athena + QuickSight for reporting.

B.  

Create cost allocation tags in each member account. Use CloudWatch Dashboards.

C.  

Create cost allocation tags in the management account. Deploy CURs per account.

D.  

Use tags and CUR per account. Visualize with QuickSight from management account.

Discussion 0
Questions 175

A company runs an ecommerce website on Amazon ECS behind an Application Load Balancer (ALB). The container images are stored in Amazon ECR. The website stores data in an Amazon Aurora MySQL DB cluster. The ALB uses an HTTPS listener with a public SSL certificate that is saved in AWS Certificate Manager (ACM). The website domain is registered with Amazon Route 53.

The company wants to duplicate this setup in a second AWS Region in an active-active configuration. The website can tolerate minor latency for data replication between Regions. The company has already deployed an ECS cluster with an ALB in the secondary Region. The ECS cluster is registered for geolocation routing with Route 53.

Which combination of steps will meet these requirements with the LEAST operational overhead? (Select THREE.)

Options:

A.  

Request a new ACM certificate for the company website in the secondary Region. Configure the ALB in the secondary Region with an HTTPS listener. Set the new ACM certificate as the default certificate.

B.  

Share the ACM certificate with the secondary Region by using AWS Resource Access Manager (AWS RAM). Configure the ALB in the secondary Region with an HTTPS listener. Set the shared ACM certificate as the default certificate.

C.  

Create a VPC endpoint for Amazon ECR in the secondary Region. Configure Amazon EC2 instances to download container images from the primary Region.

D.  

Enable Cross-Region Replication for ECR repositories to the secondary Region. Re-push the existing images to ECR repositories with a new tag.

E.  

Configure an Aurora global database in the primary Region. Enable write forwarding to the secondary Region.

F.  

Use an Aurora DB cluster that has multiple writer instances in the primary Region. Create a secondary Aurora DB instance in the secondary Region. Enable cross-Region writes between the DB clusters.
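
For the first step of option D, ECR replication is configured once at the registry level; images pushed (or re-pushed with a new tag) after that are copied to the destination Region automatically. A hedged Boto3 sketch with a hypothetical Region and account ID:

```python
import boto3

ecr = boto3.client("ecr")

# Registry-level replication: images pushed in the primary Region are copied
# to the secondary Region automatically from this point on.
ecr.put_replication_configuration(
    replicationConfiguration={
        "rules": [{
            "destinations": [{
                "region": "us-west-2",
                "registryId": "111122223333",  # same account, secondary Region
            }],
        }],
    }
)
```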

Discussion 0
Questions 176

A telecommunications company is running an application on AWS. The company has set up an AWS Direct Connect connection between the company's on-premises data center and AWS. The company deployed the application on Amazon EC2 instances in multiple Availability Zones behind an internal Application Load Balancer (ALB). The company's clients connect from the on-premises network by using HTTPS. The TLS terminates in the ALB. The company has multiple target groups and uses path-based routing to forward requests based on the URL path.

The company is planning to deploy an on-premises firewall appliance with an allow list that is based on IP address. A solutions architect must develop a solution to allow traffic flow to AWS from the on-premises network so that the clients can continue to access the application.

Which solution will meet these requirements?

Options:

A.  

Configure the existing ALB to use static IP addresses. Assign IP addresses in multiple Availability Zones to the ALB. Add the ALB IP addresses to the firewall appliance.

B.  

Create a Network Load Balancer (NLB). Associate the NLB with one static IP address in each of multiple Availability Zones. Create an ALB-type target group for the NLB and add the existing ALB. Add the NLB IP addresses to the firewall appliance. Update the clients to connect to the NLB.

C.  

Create a Network Load Balancer (NLB). Associate the NLB with one static IP address in each of multiple Availability Zones. Add the existing target groups to the NLB. Update the clients to connect to the NLB. Delete the ALB. Add the NLB IP addresses to the firewall appliance.

D.  

Create a Gateway Load Balancer (GWLB). Assign static IP addresses to the GWLB in multiple Availability Zones. Create an ALB-type target group for the GWLB and add the existing ALB. Add the GWLB IP addresses to the firewall appliance. Update the clients to connect to the GWLB.

Discussion 0
Questions 177

A company is planning to migrate workloads from its on-premises data center to Amazon EC2 instances. The workloads run on physical servers and VMware virtual servers. The company has gathered details about each on-premises server and virtual server, including server specification, CPU utilization, and memory utilization. The company has stored these details in a .csv file named onprem.csv.

Before the migration, the company must estimate the cost of running the servers on AWS and must determine recommended EC2 instance types for the servers. The company must export this information to a different .csv file.

Which solution will meet these requirements?

Options:

A.  

Configure AWS Compute Optimizer to generate recommendations from an external source. Import the onprem.csv file. Export the Compute Optimizer recommendations to a new .csv file.

B.  

Import the onprem.csv file into AWS Migration Hub by using AWS Migration Hub import. Use EC2 instance recommendations from Migration Hub to generate recommendations. Export the recommendations to a new .csv file.

C.  

Deploy AWS Application Discovery Service Agentless Collector on premises. Use Agentless Collector to import the onprem.csv file. Send the file to AWS Migration Hub. Use EC2 instance recommendations from Migration Hub to generate recommendations. Export the recommendations to a new .csv file.

D.  

Upload the onprem.csv file to an Amazon S3 bucket. Configure Migration Evaluator to import the data from the S3 bucket. Generate and confirm recommendations by using Migration Evaluator Quick Insights. Export the final recommendations to a new .csv file in the S3 bucket.

Discussion 0
Questions 178

A company has 10 accounts that are part of an organization in AWS Organizations. AWS Config is configured in each account. All accounts belong to either the Prod OU or the NonProd OU.

The company has set up an Amazon EventBridge rule in each AWS account to notify an Amazon Simple Notification Service (Amazon SNS) topic when an Amazon EC2 security group inbound rule is created with 0.0.0.0/0 as the source. The company's security team is subscribed to the SNS topic.

For all accounts in the NonProd OU, the security team needs to remove the ability to create a security group inbound rule that includes 0.0.0.0/0 as the source.

Which solution will meet this requirement with the LEAST operational overhead?

Options:

A.  

Modify the EventBridge rule to invoke an AWS Lambda function to remove the security group inbound rule and to publish to the SNS topic. Deploy the updated rule to the NonProd OU.

B.  

Add the vpc-sg-open-only-to-authorized-ports AWS Config managed rule to the NonProd OU.

C.  

Configure an SCP to allow the ec2:AuthorizeSecurityGroupIngress action when the value of the aws:SourceIp condition key is not 0.0.0.0/0. Apply the SCP to the NonProd OU.

D.  

Configure an SCP to deny the ec2:AuthorizeSecurityGroupIngress action when the value of the aws:SourceIp condition key is 0.0.0.0/0. Apply the SCP to the NonProd OU.

Discussion 0
Questions 179

A company is processing videos in the AWS Cloud by using Amazon EC2 instances in an Auto Scaling group. It takes 30 minutes to process a video. Several EC2 instances scale in and out depending on the number of videos in an Amazon Simple Queue Service (Amazon SQS) queue.

The company has configured the SQS queue with a redrive policy that specifies a target dead-letter queue and a maxReceiveCount of 1. The company has set the visibility timeout for the SQS queue to 1 hour. The company has set up an Amazon CloudWatch alarm to notify the development team when there are messages in the dead-letter queue.

Several times during the day, the development team receives notification that messages are in the dead-letter queue and that videos have not been processed properly. An investigation finds no errors in the application logs.

How can the company solve this problem?

Options:

A.  

Turn on termination protection for the EC2 instances.

B.  

Update the visibility timeout for the SQS queue to 3 hours.

C.  

Configure scale-in protection for the instances during processing.

D.  

Update the redrive policy and set maxReceiveCount to 0.
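
For reference, the visibility timeout change that option B proposes is a one-call update; the queue URL is hypothetical. A visibility timeout comfortably longer than one processing attempt keeps an in-flight message hidden, so it is not redelivered (and, with a maxReceiveCount of 1, dead-lettered) while an instance is still working on it.

```python
import boto3

sqs = boto3.client("sqs")

sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/111122223333/video-jobs",
    Attributes={"VisibilityTimeout": "10800"},  # 3 hours, expressed in seconds
)
```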

Discussion 0
Questions 180

A company uses a service to collect metadata from applications that the company hosts on premises. Consumer devices such as TVs and internet radios access the applications. Many older devices do not support certain HTTP headers and exhibit errors when these headers are present in responses. The company has configured an on-premises load balancer to remove the unsupported headers from responses sent to older devices, which the company identified by the User-Agent headers.

The company wants to migrate the service to AWS, adopt serverless technologies, and retain the ability to support the older devices. The company has already migrated the applications into a set of AWS Lambda functions.

Which solution will meet these requirements?

Options:

A.  

Create an Amazon CloudFront distribution for the metadata service. Create an Application Load Balancer (ALB). Configure the CloudFront distribution to forward requests to the ALB. Configure the ALB to invoke the correct Lambda function for each type of request. Create a CloudFront function to remove the problematic headers based on the value of the User-Agent header.

B.  

Create an Amazon API Gateway REST API for the metadata service. Configure API Gateway to invoke the correct Lambda function for each type of request. Modify the default gateway responses to remove the problematic headers based on the value of the User-Agent header.

C.  

Create an Amazon API Gateway HTTP API for the metadata service. Configure API Gateway to invoke the correct Lambda function for each type of request. Create a response mapping template to remove the problematic headers based on the value of the User-Agent. Associate the response data mapping with the HTTP API.

D.  

Create an Amazon CloudFront distribution for the metadata service. Create an Application Load Balancer (ALB). Configure the CloudFront distribution to forward requests to the ALB. Configure the ALB to invoke the correct Lambda function for each type of request. Create a Lambda@Edge function that will remove the problematic headers in response to viewer requests based on the value of the User-Agent header.
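
A hedged sketch of the Lambda@Edge viewer-response handler that option D describes, in Python. The User-Agent marker and the header names being stripped are hypothetical; the event shape follows the standard CloudFront Lambda@Edge event structure.

```python
def handler(event, context):
    """Lambda@Edge viewer-response handler.

    Strips headers that older devices cannot parse when the User-Agent
    matches a marker for those devices.
    """
    cf = event["Records"][0]["cf"]
    request_headers = cf["request"]["headers"]
    response = cf["response"]

    # CloudFront lowercases header keys; values are lists of {key, value} dicts.
    user_agent = request_headers.get("user-agent", [{}])[0].get("value", "")
    if "LegacyRadio" in user_agent:  # hypothetical marker for older devices
        for name in ("strict-transport-security", "x-content-type-options"):
            response["headers"].pop(name, None)

    return response
```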

Discussion 0
Questions 181

A company's compliance audit reveals that some Amazon Elastic Block Store (Amazon EBS) volumes that were created in an AWS account were not encrypted. A solutions architect must implement a solution to encrypt all new EBS volumes at rest.

Which solution will meet this requirement with the LEAST effort?

Options:

A.  

Create an Amazon EventBridge rule to detect the creation of unencrypted EBS volumes. Invoke an AWS Lambda function to delete noncompliant volumes.

B.  

Use AWS Audit Manager with data encryption.

C.  

Create an AWS Config rule to detect the creation of a new EBS volume. Encrypt the volume by using AWS Systems Manager Automation.

D.  

Turn on EBS encryption by default in all AWS Regions.

Discussion 0
Questions 182

A company's public API runs as tasks on Amazon Elastic Container Service (Amazon ECS). The tasks run on AWS Fargate behind an Application Load Balancer (ALB) and are configured with Service Auto Scaling for the tasks based on CPU utilization. This service has been running well for several months.

Recently, API performance slowed down and made the application unusable. The company discovered that a significant number of SQL injection attacks had occurred against the API and that the API service had scaled to its maximum amount.

A solutions architect needs to implement a solution that prevents SQL injection attacks from reaching the ECS API service. The solution must allow legitimate traffic through and must maximize operational efficiency.

Which solution meets these requirements?

Options:

A.  

Create a new AWS WAF web ACL to monitor the HTTP requests and HTTPS requests that are forwarded to the ALB in front of the ECS tasks.

B.  

Create a new AWS WAF Bot Control implementation. Add a rule in the AWS WAF Bot Control managed rule group to monitor traffic and allow only legitimate traffic to the ALB in front of the ECS tasks.

C.  

Create a new AWS WAF web ACL. Add a new rule that blocks requests that match the SQL database rule group. Set the web ACL to allow all other traffic that does not match those rules. Attach the web ACL to the ALB in front of the ECS tasks.

D.  

Create a new AWS WAF web ACL. Create a new empty IP set in AWS WAF. Add a new rule to the web ACL to block requests that originate from IP addresses in the new IP set. Create an AWS Lambda function that scrapes the API logs for IP addresses that send SQL injection attacks, and add those IP addresses to the IP set. Attach the web ACL to the ALB in front of the ECS tasks.
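
A hedged Boto3 sketch of option C: a regional web ACL whose only rule applies the AWS managed SQL injection rule set, with a default allow action, associated with the ALB. The names, metric names, and ALB ARN are hypothetical.

```python
import boto3

wafv2 = boto3.client("wafv2")

acl = wafv2.create_web_acl(
    Name="api-protection",
    Scope="REGIONAL",                 # regional scope is required for an ALB
    DefaultAction={"Allow": {}},      # allow traffic that matches no rule
    Rules=[{
        "Name": "sqli-managed-rules",
        "Priority": 0,
        "Statement": {"ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesSQLiRuleSet",
        }},
        "OverrideAction": {"None": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "sqli",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "api-protection",
    },
)

# Attach the web ACL to the ALB in front of the ECS tasks.
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/api/abc123",
)
```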

Discussion 0
Questions 183

A company's inventory application stores data in an Amazon Aurora PostgreSQL DB cluster in a single AWS Region. The company wants to improve resiliency by extending the database infrastructure to a secondary Region. The company wants an RTO of 15 minutes and an RPO of 5 minutes. The solution must not run Aurora DB instances in the secondary Region when the application is operational in the primary Region. Which solution meets these requirements?

Options:

A.  

Configure AWS DMS to copy the Aurora DB cluster in the primary Region to the secondary Region. Use AWS DMS to synchronize the primary DB cluster with the secondary DB cluster.

B.  

Create a new Aurora PostgreSQL DB cluster in the secondary Region. Use AWS Backup to synchronize the primary DB cluster with the secondary DB cluster.

C.  

Create a headless Aurora DB cluster in the second Region that is part of the same global DB cluster as the primary Region's DB cluster.

D.  

Create an AWS Backup job to back up the DB cluster and copy the DB cluster to the secondary Region every 5 minutes.

Discussion 0
Questions 184

A company has many AWS accounts and uses AWS Organizations to manage all of them. A solutions architect must implement a solution that the company can use to share a common network across multiple accounts.

The company's infrastructure team has a dedicated infrastructure account that has a VPC. The infrastructure team must use this account to manage the network. Individual accounts cannot have the ability to manage their own networks. However, individual accounts must be able to create AWS resources within subnets.

Which combination of actions should the solutions architect perform to meet these requirements? (Select TWO.)

Options:

A.  

Create a transit gateway in the infrastructure account.

B.  

Enable resource sharing from the AWS Organizations management account.

C.  

Create VPCs in each AWS account within the organization in AWS Organizations. Configure the VPCs to share the same CIDR range and subnets as the VPC in the infrastructure account. Peer the VPCs in each individual account with the VPC in the infrastructure account.

D.  

Create a resource share in AWS Resource Access Manager in the infrastructure account. Select the specific AWS Organizations OU that will use the shared network. Select each subnet to associate with the resource share.

E.  

Create a resource share in AWS Resource Access Manager in the infrastructure account. Select the specific AWS Organizations OU that will use the shared network. Select each prefix list to associate with the resource share.
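
Options B, D, and E all revolve around AWS RAM. As a hedged sketch, sharing with the organization is enabled once from the management account, and the infrastructure account then shares individual subnets with an OU; the subnet ARN and OU ARN are hypothetical.

```python
import boto3

ram = boto3.client("ram")

# One-time step, run from the Organizations management account.
ram.enable_sharing_with_aws_organization()

# Run from the infrastructure account: share specific subnets with an OU so
# member accounts can create resources in them without managing the network.
ram.create_resource_share(
    name="shared-network",
    resourceArns=["arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0abc1234"],
    principals=["arn:aws:organizations::111122223333:ou/o-example/ou-abcd-11111111"],
    allowExternalPrincipals=False,
)
```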

Discussion 0
Questions 185

A company manufactures smart vehicles. The company uses a custom application to collect vehicle data. The vehicles use the MQTT protocol to connect to the application.

The company processes the data in 5-minute intervals. The company then copies vehicle telematics data to on-premises storage. Custom applications analyze this data to detect anomalies.

The number of vehicles that send data grows constantly. Newer vehicles generate high volumes of data. The on-premises storage solution is not able to scale for peak traffic, which results in data loss. The company must modernize the solution and migrate the solution to AWS to resolve the scaling challenges.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.  

Use AWS IoT Greengrass to send the vehicle data to Amazon Managed Streaming for Apache Kafka (Amazon MSK). Create an Apache Kafka application to store the data in Amazon S3. Use a pretrained model in Amazon SageMaker to detect anomalies.

B.  

Use AWS IoT Core to receive the vehicle data. Configure rules to route data to an Amazon Kinesis Data Firehose delivery stream that stores the data in Amazon S3. Create an Amazon Kinesis Data Analytics application that reads from the delivery stream to detect anomalies.

C.  

Use AWS IoT FleetWise to collect the vehicle data. Send the data to an Amazon Kinesis data stream. Use an Amazon Kinesis Data Firehose delivery stream to store the data in Amazon S3. Use the built-in machine learning transforms in AWS Glue to detect anomalies.

D.  

Use Amazon MQ for RabbitMQ to collect the vehicle data. Send the data to an Amazon Kinesis Data Firehose delivery stream to store the data in Amazon S3. Use Amazon Lookout for Metrics to detect anomalies.

Discussion 0
Questions 186

A company has an organization in AWS Organizations. The company is using AWS Control Tower to deploy a landing zone for the organization. The company wants to implement governance and policy enforcement. The company must implement a policy that will detect Amazon RDS DB instances that are not encrypted at rest in the company’s production OU.

Which solution will meet this requirement?

Options:

A.  

Turn on mandatory guardrails in AWS Control Tower. Apply the mandatory guardrails to the production OU.

B.  

Enable the appropriate guardrail from the list of strongly recommended guardrails in AWS Control Tower. Apply the guardrail to the production OU.

C.  

Use AWS Config to create a new mandatory guardrail. Apply the rule to all accounts in the production OU.

D.  

Create a custom SCP in AWS Control Tower. Apply the SCP to the production OU.

Discussion 0
Questions 187

A company wants to migrate an Amazon Aurora MySQL DB cluster from an existing AWS account to a new AWS account in the same AWS Region. Both accounts are members of the same organization in AWS Organizations.

The company must minimize database service interruption before the company performs DNS cutover to the new database.

Which migration strategy will meet this requirement?

Options:

A.  

Take a snapshot of the existing Aurora database. Share the snapshot with the new AWS account. Create an Aurora DB cluster in the new account from the snapshot.

B.  

Create an Aurora DB cluster in the new AWS account. Use AWS Database Migration Service (AWS DMS) to migrate data between the two Aurora DB clusters.

C.  

Use AWS Backup to share an Aurora database backup from the existing AWS account to the new AWS account. Create an Aurora DB cluster in the new AWS account from the snapshot.

D.  

Create an Aurora DB cluster in the new AWS account. Use AWS Application Migration Service to migrate data between the two Aurora DB clusters.

Discussion 0