
AWS Certified Database - Specialty Question and Answers

AWS Certified Database - Specialty

Last Update: Apr 25, 2024
Total Questions: 324

We are offering free DBS-C01 Amazon Web Services exam questions. Simply sign up, provide your details, and prepare with the free DBS-C01 exam questions. You can then move on to the complete pool of AWS Certified Database - Specialty test questions for further practice.

Questions 1

A gaming company wants to deploy a game in multiple Regions. The company plans to save local high scores in Amazon DynamoDB tables in each Region. A Database Specialist needs to design a solution to automate the deployment of the database with identical configurations in additional Regions, as needed. The solution should also automate configuration changes across all Regions.

Which solution would meet these requirements and deploy the DynamoDB tables?

Options:

A.  

Create an AWS CLI command to deploy the DynamoDB table to all the Regions and save it for future deployments.

B.  

Create an AWS CloudFormation template and deploy the template to all the Regions.

C.  

Create an AWS CloudFormation template and use a stack set to deploy the template to all the Regions.

D.  

Create DynamoDB tables using the AWS Management Console in all the Regions and create a step-by-step guide for future deployments.
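
For reference, a minimal boto3 sketch of the stack set mechanism named in option C: a CloudFormation stack set deploys one template to many Regions, and later updates to the stack set propagate everywhere. The stack set name, account ID, Regions, and table schema below are hypothetical.

    import boto3

    cfn = boto3.client("cloudformation")

    # Minimal template defining one DynamoDB table (schema is illustrative only).
    template_body = """
    {
      "Resources": {
        "HighScores": {
          "Type": "AWS::DynamoDB::Table",
          "Properties": {
            "BillingMode": "PAY_PER_REQUEST",
            "AttributeDefinitions": [{"AttributeName": "PlayerId", "AttributeType": "S"}],
            "KeySchema": [{"AttributeName": "PlayerId", "KeyType": "HASH"}]
          }
        }
      }
    }
    """

    # Create the stack set once, then stamp out identical stacks in each Region.
    cfn.create_stack_set(StackSetName="game-high-scores", TemplateBody=template_body)
    cfn.create_stack_instances(
        StackSetName="game-high-scores",
        Accounts=["111122223333"],           # hypothetical account ID
        Regions=["us-east-1", "eu-west-1"],  # extend this list as the game expands
    )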

Questions 2

A company is looking to migrate a 1 TB Oracle database from on-premises to an Amazon Aurora PostgreSQL DB cluster. The company’s Database Specialist discovered that the Oracle database is storing 100 GB of large binary objects (LOBs) across multiple tables. The Oracle database has a maximum LOB size of 500 MB with an average LOB size of 350 MB. The Database Specialist has chosen AWS DMS, using the largest replication instance, to migrate the data.

How should the Database Specialist optimize the database migration using AWS DMS?

Options:

A.  

Create a single task using full LOB mode with a LOB chunk size of 500 MB to migrate the data and LOBs together

B.  

Create two tasks: task1 with LOB tables using full LOB mode with a LOB chunk size of 500 MB and task2 without LOBs

C.  

Create two tasks: task1 with LOB tables using limited LOB mode with a maximum LOB size of 500 MB and task 2 without LOBs

D.  

Create a single task using limited LOB mode with a maximum LOB size of 500 MB to migrate data and LOBs together
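
For background on the LOB modes named in the options above, AWS DMS carries LOB handling in the TargetMetadata section of the task settings JSON. A sketch with illustrative values; the sizes are in KB, so 512000 KB is roughly 500 MB:

    import json

    # Full LOB mode: migrates every LOB in chunks, regardless of size (slower).
    full_lob = {"TargetMetadata": {"SupportLobs": True, "FullLobMode": True,
                                   "LobChunkSize": 64, "LimitedSizeLobMode": False}}

    # Limited LOB mode: faster, but truncates any LOB larger than LobMaxSize.
    limited_lob = {"TargetMetadata": {"SupportLobs": True, "FullLobMode": False,
                                      "LimitedSizeLobMode": True, "LobMaxSize": 512000}}

    # Either document is passed as ReplicationTaskSettings when creating a task:
    # dms.create_replication_task(..., ReplicationTaskSettings=json.dumps(limited_lob))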

Questions 3

A company with 500,000 employees needs to supply its employee list to an application used by human resources. Every 30 minutes, the data is exported using the LDAP service to load into a new Amazon DynamoDB table. The data model has a base table with Employee ID for the partition key and a global secondary index with Organization ID as the partition key.

While importing the data, a database specialist receives ProvisionedThroughputExceededException errors. After increasing the provisioned write capacity units (WCUs) to 50,000, the specialist receives the same errors. Amazon CloudWatch metrics show a consumption of 1,500 WCUs.

What should the database specialist do to address the issue?

Options:

A.  

Change the data model to avoid hot partitions in the global secondary index.

B.  

Enable auto scaling for the table to automatically increase write capacity during bulk imports.

C.  

Modify the table to use on-demand capacity instead of provisioned capacity.

D.  

Increase the number of retries on the bulk loading application.

Questions 4

A company is using Amazon Aurora MySQL as the database for its retail application on AWS. The company receives a notification of a pending database upgrade and wants to ensure upgrades do not occur before or during the most critical time of year. Company leadership is concerned that an Amazon RDS maintenance window will cause an outage during data ingestion.

Which step can be taken to ensure that the application is not interrupted?

Options:

A.  

Disable weekly maintenance on the DB cluster.

B.  

Clone the DB cluster and migrate it to a new copy of the database.

C.  

Choose to defer the upgrade and then find an appropriate down time for patching.

D.  

Set up an Aurora Replica and promote it to primary at the time of patching.

Questions 5

A company runs online transaction processing (OLTP) workloads on an Amazon RDS for PostgreSQL Multi-AZ DB instance. The company recently conducted tests on the database after business hours, and the tests generated additional database logs. As a result, free storage of the DB instance is low and is expected to be exhausted in 2 days.

The company wants to recover the free storage that the additional logs consumed. The solution must not result in downtime for the database.

Which solution will meet these requirements?

Options:

A.  

Modify the rds.log_retention_period parameter to 0. Reboot the DB instance to save the changes.

B.  

Modify the rds.log_retention_period parameter to 1440. Wait up to 24 hours for database logs to be deleted.

C.  

Modify the temp_file_limit parameter to a smaller value to reclaim space on the DB instance.

D.  

Modify the rds.log_retention_period parameter to 1440. Reboot the DB instance to save the changes.
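
For context, rds.log_retention_period is a dynamic parameter on RDS for PostgreSQL, so a change can take effect without a reboot. A boto3 sketch, assuming a hypothetical parameter group name:

    import boto3

    rds = boto3.client("rds")
    rds.modify_db_parameter_group(
        DBParameterGroupName="prod-postgres-params",  # hypothetical name
        Parameters=[{
            "ParameterName": "rds.log_retention_period",
            "ParameterValue": "1440",    # minutes; logs older than 24 hours are purged
            "ApplyMethod": "immediate",  # dynamic parameter, no reboot required
        }],
    )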

Questions 6

An AWS CloudFormation stack that included an Amazon RDS DB instance was mistakenly deleted, resulting in the loss of recent data. A Database Specialist must apply RDS settings to the CloudFormation template in order to minimize the possibility of future inadvertent instance data loss.

Which settings will satisfy this criterion? (Select three.)

Options:

A.  

Set DeletionProtection to True

B.  

Set MultiAZ to True

C.  

Set TerminationProtection to True

D.  

Set DeleteAutomatedBackups to False

E.  

Set DeletionPolicy to Delete

F.  

Set DeletionPolicy to Retain
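
To make the attributes above concrete, here is a sketch of how these safeguards appear on an RDS resource in a CloudFormation template, written as a Python dict; all other properties are omitted:

    # Illustrative template fragment; engine, storage, and credentials omitted.
    rds_resource = {
        "ProductionDatabase": {
            "Type": "AWS::RDS::DBInstance",
            "DeletionPolicy": "Retain",           # keep the instance if the stack is deleted
            "Properties": {
                "DeletionProtection": True,       # block delete calls against the instance
                "DeleteAutomatedBackups": False,  # retain automated backups after deletion
            },
        }
    }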

Questions 7

A company is planning to use Amazon RDS for SQL Server for one of its critical applications. The company's security team requires that the users of the RDS for SQL Server DB instance are authenticated with on-premises Microsoft Active Directory credentials.

Which combination of steps should a database specialist take to meet this requirement? (Choose three.)

Options:

A.  

Extend the on-premises Active Directory to AWS by using AD Connector.

B.  

Create an IAM user that uses the AmazonRDSDirectoryServiceAccess managed IAM policy.

C.  

Create a directory by using AWS Directory Service for Microsoft Active Directory.

D.  

Create an Active Directory domain controller on Amazon EC2.

E.  

Create an IAM role that uses the AmazonRDSDirectoryServiceAccess managed IAM policy.

F.  

Create a one-way forest trust from the AWS Directory Service for Microsoft Active Directory directory to the on-premises Active Directory.

Questions 8

A database specialist is working on an Amazon RDS for PostgreSQL DB instance that is experiencing application performance issues due to the addition of new workloads. The database has 5 TB of storage space with Provisioned IOPS. Amazon CloudWatch metrics show that the average disk queue depth is greater than 200 and that the disk I/O response time is significantly higher than usual.

What should the database specialist do to improve the performance of the application immediately?

Options:

A.  

Increase the Provisioned IOPS rate on the storage.

B.  

Increase the available storage space.

C.  

Use General Purpose SSD (gp2) storage with burst credits.

D.  

Create a read replica to offload Read IOPS from the DB instance.

Questions 9

A company is running its line of business application on AWS, which uses Amazon RDS for MySQL at the persistent data store. The company wants to minimize downtime when it migrates the database to Amazon Aurora.

Which migration method should a Database Specialist use?

Options:

A.  

Take a snapshot of the RDS for MySQL DB instance and create a new Aurora DB cluster with the option to migrate snapshots.

B.  

Make a backup of the RDS for MySQL DB instance using the mysqldump utility, create a new Aurora DB cluster, and restore the backup.

C.  

Create an Aurora Replica from the RDS for MySQL DB instance and promote the Aurora DB cluster.

D.  

Create a clone of the RDS for MySQL DB instance and promote the Aurora DB cluster.

Questions 10

A company is using an Amazon Aurora PostgreSQL database for a project with a government agency. All database communications must be encrypted in transit. All non-SSL/TLS connection requests must be rejected.

What should a database specialist do to meet these requirements?

Options:

A.  

Set the rds.force_ssl parameter in the DB cluster parameter group to default.

B.  

Set the rds.force_ssl parameter in the DB cluster parameter group to 1.

C.  

Set the rds.force_ssl parameter in the DB cluster parameter group to 0.

D.  

Set the SQLNET.SSL_VERSION option in the DB cluster option group to 12.
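
A boto3 sketch of changing this cluster-level parameter; the parameter group name is hypothetical, and choosing the right value is the point of the question:

    import boto3

    rds = boto3.client("rds")
    rds.modify_db_cluster_parameter_group(
        DBClusterParameterGroupName="aurora-pg-cluster-params",  # hypothetical
        Parameters=[{
            "ParameterName": "rds.force_ssl",
            "ParameterValue": "1",       # 1 rejects non-SSL/TLS connections; 0 allows them
            "ApplyMethod": "immediate",
        }],
    )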

Questions 11

A business moved a single MySQL database to Amazon Aurora. The production data is stored in a DB cluster in VPC_PROD, and 12 testing environments are hosted in VPC_TEST in the same AWS account. Testing has a negligible effect on the test data. The development team requires that each environment be refreshed nightly so that each test database contains daily production data.

Which migration strategy will be the quickest and least expensive to implement?

Options:

A.  

Run the master in Amazon Aurora MySQL. Create 12 clones in VPC_TEST, and script the clones to be deleted and re-created nightly.

B.  

Run the master in Amazon Aurora MySQL. Take a nightly snapshot, and restore it into 12 databases in VPC_TEST using Aurora Serverless.

C.  

Run the master in Amazon Aurora MySQL. Create 12 Aurora Replicas in VPC_TEST, and script the replicas to be deleted and re-created nightly.

D.  

Run the master in Amazon Aurora MySQL using Aurora Serverless. Create 12 clones in VPC_TEST, and script the clones to be deleted and re-created nightly.

Questions 12

A company uses Amazon Aurora for secure financial transactions. The data must always be encrypted at rest and in transit to meet compliance requirements.

Which combination of actions should a database specialist take to meet these requirements? (Choose two.)

Options:

A.  

Create an Aurora Replica with encryption enabled using AWS Key Management Service (AWS KMS). Then promote the replica to master.

B.  

Use SSL/TLS to secure the in-transit connection between the financial application and the Aurora DB cluster.

C.  

Modify the existing Aurora DB cluster and enable encryption using an AWS Key Management Service (AWS KMS) encryption key. Apply the changes immediately.

D.  

Take a snapshot of the Aurora DB cluster and encrypt the snapshot using an AWS Key Management Service (AWS KMS) encryption key. Restore the snapshot to a new DB cluster and update the financial application database endpoints.

E.  

Use AWS Key Management Service (AWS KMS) to secure the in-transit connection between the financial application and the Aurora DB cluster.

Questions 13

A Database Specialist is working with a company to launch a new website built on Amazon Aurora with several Aurora Replicas. This new website will replace an on-premises website connected to a legacy relational database. Due to stability issues in the legacy database, the company would like to test the resiliency of Aurora.

Which action can the Database Specialist take to test the resiliency of the Aurora DB cluster?

Options:

A.  

Stop the DB cluster and analyze how the website responds

B.  

Use Aurora fault injection to crash the master DB instance

C.  

Remove the DB cluster endpoint to simulate a master DB instance failure

D.  

Use Aurora Backtrack to crash the DB cluster
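
For background, Aurora MySQL exposes fault injection through SQL statements such as ALTER SYSTEM CRASH. A sketch using the pymysql driver; the endpoint and credentials are placeholders:

    import pymysql

    # Connect to the cluster endpoint (placeholder host and credentials).
    conn = pymysql.connect(host="my-cluster.cluster-abc.us-east-1.rds.amazonaws.com",
                           user="admin", password="<password>", database="mysql")
    with conn.cursor() as cur:
        # Simulates a crash of the DB instance; Aurora recovers it automatically.
        cur.execute("ALTER SYSTEM CRASH INSTANCE")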

Questions 14

A company has an existing system that uses a single-instance Amazon DocumentDB (with MongoDB compatibility) cluster. Read requests account for 75% of the system queries. Write requests are expected to increase by 50% after an upcoming global release. A database specialist needs to design a solution that improves the overall database performance without creating additional application overhead.

Which solution will meet these requirements?

Options:

A.  

Recreate the cluster with a shared cluster volume. Add two instances to serve both read requests and write requests.

B.  

Add one read replica instance. Activate a shared cluster volume. Route all read queries to the read replica instance.

C.  

Add one read replica instance. Set the read preference to secondary preferred.

D.  

Add one read replica instance. Update the application to route all read queries to the read replica instance.

Questions 15

A database specialist needs to move an Amazon RDS DB instance from one AWS account to another AWS account.

Which solution will meet this requirement with the LEAST operational effort?

Options:

A.  

Use AWS Database Migration Service (AWS DMS) to migrate the DB instance from the source AWS account to the destination AWS account.

B.  

Create a DB snapshot of the DB instance. Share the snapshot with the destination AWS account. Create a new DB instance by restoring the snapshot in the destination AWS account.

C.  

Create a Multi-AZ deployment for the DB instance. Create a read replica for the DB instance in the source AWS account. Use the read replica to replicate the data into the DB instance in the destination AWS account.

D.  

Use AWS DataSync to back up the DB instance in the source AWS account. Use AWS Resource Access Manager (AWS RAM) to restore the backup in the destination AWS account.

Questions 16

A company is running a blogging platform. A security audit determines that the Amazon RDS DB instance that is used by the platform is not configured to encrypt the data at rest. The company must encrypt the DB instance within 30 days.

What should a database specialist do to meet this requirement with the LEAST amount of downtime?

Options:

A.  

Create a read replica of the DB instance, and enable encryption. When the read replica is available, promote the read replica and update the endpoint that is used by the application. Delete the unencrypted DB instance.

B.  

Take a snapshot of the DB instance. Make an encrypted copy of the snapshot. Restore the encrypted snapshot. When the new DB instance is available, update the endpoint that is used by the application. Delete the unencrypted DB instance.

C.  

Create a new encrypted DB instance. Perform an initial data load, and set up logical replication between the two DB instances When the new DB instance is in sync with the source DB instance, update the endpoint that is used by the application. Delete the unencrypted DB instance.

D.  

Convert the DB instance to an Amazon Aurora DB cluster, and enable encryption. When the DB cluster is available, update the endpoint that is used by the application to the cluster endpoint. Delete the unencrypted DB instance.

Questions 17

A company wants to migrate its existing on-premises Oracle database to Amazon Aurora PostgreSQL. The migration must be completed with minimal downtime using AWS DMS. A Database Specialist must validate that the data was migrated accurately from the source to the target before the cutover. The migration must have minimal impact on the performance of the source database.

Which approach will MOST effectively meet these requirements?

Options:

A.  

Use the AWS Schema Conversion Tool (AWS SCT) to convert source Oracle database schemas to the target Aurora DB cluster. Verify the datatype of the columns.

B.  

Use the table metrics of the AWS DMS task created for migrating the data to verify the statistics for the tables being migrated and to verify that the data definition language (DDL) statements are completed.

C.  

Enable the AWS Schema Conversion Tool (AWS SCT) premigration validation and review the premigration checklist to make sure there are no issues with the conversion.

D.  

Enable AWS DMS data validation on the task so the AWS DMS task compares the source and target records, and reports any mismatches.

Questions 18

An application reads and writes data to an Amazon RDS for MySQL DB instance. A new reporting dashboard needs read-only access to the database. When the application and reports are both under heavy load, the database experiences performance degradation. A database specialist needs to improve the database performance.

What should the database specialist do to meet these requirements?

Options:

A.  

Create a read replica of the DB instance. Configure the reports to connect to the replication instance endpoint.

B.  

Create a read replica of the DB instance. Configure the application and reports to connect to the cluster endpoint.

C.  

Enable Multi-AZ deployment. Configure the reports to connect to the standby replica.

D.  

Enable Multi-AZ deployment. Configure the application and reports to connect to the cluster endpoint.

Questions 19

A company is running its customer feedback application on Amazon Aurora MySQL. The company runs a report every day to extract customer feedback, and a team reads the feedback to determine if the customer comments are positive or negative. It sometimes takes days before the company can contact unhappy customers and take corrective measures. The company wants to use machine learning to automate this workflow.

Which solution meets this requirement with the LEAST amount of effort?

Options:

A.  

Export the Aurora MySQL database to Amazon S3 by using AWS Database Migration Service (AWS DMS). Use Amazon Comprehend to run sentiment analysis on the exported files.

B.  

Export the Aurora MySQL database to Amazon S3 by using AWS Database Migration Service (AWS DMS). Use Amazon SageMaker to run sentiment analysis on the exported files.

C.  

Set up Aurora native integration with Amazon Comprehend. Use SQL functions to extract sentiment analysis.

D.  

Set up Aurora native integration with Amazon SageMaker. Use SQL functions to extract sentiment analysis.

Questions 20

A company is running critical applications on AWS. Most of the application deployments use Amazon Aurora MySQL for the database stack. The company uses AWS CloudFormation to deploy the DB instances.

The company's application team recently implemented a CI/CD pipeline. A database engineer needs to integrate the database deployment CloudFormation stack with the newly built CI/CD platform. Updates to the CloudFormation stack must not update existing production database resources.

Which CloudFormation stack policy action should the database engineer implement to meet these requirements?

Options:

A.  

Use a Deny statement for the Update:Modify action on the production database resources.

B.  

Use a Deny statement for the action on the production database resources.

C.  

Use a Deny statement for the Update:Delete action on the production database resources.

D.  

Use a Deny statement for the Update:Replace action on the production database resources.
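
For reference, a stack policy is attached with set_stack_policy. The sketch below denies one update action on a production resource; the stack name and logical resource ID are placeholders, and which Update:* action to deny is exactly what the question asks:

    import json
    import boto3

    policy = {
        "Statement": [
            # Allow all updates by default...
            {"Effect": "Allow", "Action": "Update:*", "Principal": "*", "Resource": "*"},
            # ...then deny the chosen action on the production database resource.
            {"Effect": "Deny", "Action": "Update:Modify",  # placeholder action
             "Principal": "*", "Resource": "LogicalResourceId/ProductionDatabase"},
        ]
    }
    boto3.client("cloudformation").set_stack_policy(
        StackName="prod-db-stack",  # hypothetical
        StackPolicyBody=json.dumps(policy),
    )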

Questions 21

A financial company is running an Amazon Redshift cluster for one of its data warehouse solutions. The company needs to generate connection logs, user logs, and user activity logs. The company also must make these logs available for future analysis.

Which combination of steps should a database specialist take to meet these requirements? (Choose two.)

Options:

A.  

Edit the database configuration of the cluster by enabling audit logging. Direct the logging to a specified log group in Amazon CloudWatch Logs.

B.  

Edit the database configuration of the cluster by enabling audit logging. Direct the logging to a specified Amazon S3 bucket

C.  

Modify the cluster by enabling continuous delivery of AWS CloudTrail logs to Amazon S3.

D.  

Create a new parameter group with the enable_user_activity_logging parameter set to true. Configure the cluster to use the new parameter group.

E.  

Modify the system table to enable logging for each user.

Questions 22

A database specialist manages a critical Amazon RDS for MySQL DB instance for a company. The data stored daily could vary from 0.01% to 10% of the current database size. The database specialist needs to ensure that the DB instance storage grows as needed.

What is the MOST operationally efficient and cost-effective solution?

Options:

A.  

Configure RDS Storage Auto Scaling.

B.  

Configure RDS instance Auto Scaling.

C.  

Modify the DB instance allocated storage to meet the forecasted requirements.

D.  

Monitor the Amazon CloudWatch FreeStorageSpace metric daily and add storage as required.
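
For context, RDS Storage Auto Scaling is turned on by setting a maximum allocated storage ceiling on the instance. A boto3 sketch with a hypothetical identifier and ceiling:

    import boto3

    rds = boto3.client("rds")
    rds.modify_db_instance(
        DBInstanceIdentifier="critical-mysql-prod",  # hypothetical
        MaxAllocatedStorage=1000,  # GiB; storage grows automatically up to this ceiling
        ApplyImmediately=True,
    )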

Questions 23

A company is using an Amazon ElastiCache for Redis cluster to host its online shopping website. Shoppers receive an error indicating that the cluster is out of memory when the website's application queries the cluster.

Which solutions will resolve this memory issue with the LEAST amount of effort? (Choose three.)

Options:

A.  

Reduce the TTL value for keys on the node.

B.  

Choose a larger node type.

C.  

Test different values in the parameter group for the maxmemory-policy parameter to find the ideal value to use.

D.  

Increase the number of nodes.

E.  

Monitor the EngineCPUUtilization Amazon CloudWatch metric. Create an AWS Lambda function to delete keys on nodes when a threshold is reached.

F.  

Increase the TTL value for keys on the node.

Questions 24

A business just transitioned from an on-premises Oracle database to Amazon Aurora PostgreSQL. Following the move, the organization observed that every day around 3:00 PM, the application's response time is substantially slower. The firm has determined that the problem is with the database, not the application.

Which set of steps should the Database Specialist take to locate the problematic PostgreSQL query most efficiently?

Options:

A.  

Create an Amazon CloudWatch dashboard to show the number of connections, CPU usage, and disk space consumption. Watch these dashboards during the next slow period.

B.  

Launch an Amazon EC2 instance, and install and configure an open-source PostgreSQL monitoring tool that will run reports based on the output error logs.

C.  

Modify the logging database parameter to log all the queries related to locking in the database and then check the logs after the next slow period for this information.

D.  

Enable Amazon RDS Performance Insights on the PostgreSQL database. Use the metrics to identify any queries that are related to spikes in the graph during the next slow period.

Questions 25

A company is launching a new Amazon RDS for MySQL Multi-AZ DB instance to be used as a data store for a custom-built application. After a series of tests with point-in-time recovery disabled, the company decides that it must have point-in-time recovery reenabled before using the DB instance to store production data.

What should a database specialist do so that point-in-time recovery can be successful?

Options:

A.  

Enable binary logging in the DB parameter group used by the DB instance.

B.  

Modify the DB instance and enable audit logs to be pushed to Amazon CloudWatch Logs.

C.  

Modify the DB instance and configure a backup retention period

D.  

Set up a scheduled job to create manual DB instance snapshots.

Questions 26

A company is load testing its three-tier production web application deployed with an AWS CloudFormation template on AWS. The Application team is making changes to deploy additional Amazon EC2 and AWS Lambda resources to expand the load testing capacity. A Database Specialist wants to ensure that the changes made by the Application team will not change the Amazon RDS database resources already deployed.

Which combination of steps would allow the Database Specialist to accomplish this? (Choose two.)

Options:

A.  

Review the stack drift before modifying the template

B.  

Create and review a change set before applying it

C.  

Export the database resources as stack outputs

D.  

Define the database resources in a nested stack

E.  

Set a stack policy for the database resources

Questions 27

A database specialist was alerted that a production Amazon RDS MariaDB instance with 100 GB of storage was out of space. In response, the database specialist modified the DB instance and added 50 GB of storage capacity. Three hours later, a new alert is generated due to a lack of free space on the same DB instance. The database specialist decides to modify the instance immediately to increase its storage capacity by 20 GB.

What will happen when the modification is submitted?

Options:

A.  

The request will fail because this storage capacity is too large.

B.  

The request will succeed only if the primary instance is in active status.

C.  

The request will succeed only if CPU utilization is less than 10%.

D.  

The request will fail as the most recent modification was too soon.

Questions 28

A company hosts an on-premises Microsoft SQL Server Enterprise edition database with Transparent Data Encryption (TDE) enabled. The database is 20 TB in size and includes sparse tables. The company needs to migrate the database to Amazon RDS for SQL Server during a maintenance window that is scheduled for an upcoming weekend. Data-at-rest encryption must be enabled for the target DB instance.

Which combination of steps should the company take to migrate the database to AWS in the MOST operationally efficient manner? (Choose two.)

Options:

A.  

Use AWS Database Migration Service (AWS DMS) to migrate from the on-premises source database to the RDS for SQL Server target database.

B.  

Disable TDE. Create a database backup without encryption. Copy the backup to Amazon S3.

C.  

Restore the backup to the RDS for SQL Server DB instance. Enable TDE for the RDS for SQL Server DB instance.

D.  

Set up an AWS Snowball Edge device. Copy the database backup to the device. Send the device to AWS. Restore the database from Amazon S3.

E.  

Encrypt the data with client-side encryption before transferring the data to Amazon RDS.

Questions 29

A user has a non-relational key-value database. The user is looking for a fully managed AWS service that will offload the administrative burdens of operating and scaling distributed databases. The solution must be cost-effective and able to handle unpredictable application traffic.

What should a Database Specialist recommend for this user?

Options:

A.  

Create an Amazon DynamoDB table with provisioned capacity mode

B.  

Create an Amazon DocumentDB cluster

C.  

Create an Amazon DynamoDB table with on-demand capacity mode

D.  

Create an Amazon Aurora Serverless DB cluster

Questions 30

A company runs online transaction processing (OLTP) workloads on an Amazon RDS for PostgreSQL Multi-AZ DB instance. Tests were run on the database after work hours, which generated additional database logs. The free storage of the RDS DB instance is low due to these additional logs.

What should the company do to address this space constraint issue?

Options:

A.  

Log in to the host and run the rm $PGDATA/pg_logs/* command

B.  

Modify the rds.log_retention_period parameter to 1440 and wait up to 24 hours for database logs to be deleted

C.  

Create a ticket with AWS Support to have the logs deleted

D.  

Run the SELECT rds_rotate_error_log() stored procedure to rotate the logs

Questions 31

A company has an on-premises production Microsoft SQL Server with 250 GB of data in one database. A database specialist needs to migrate this on-premises SQL Server to Amazon RDS for SQL Server. The nightly native SQL Server backup file is approximately 120 GB in size. The application can be down for an extended period of time to complete the migration. Connectivity between the on-premises environment and AWS can be initiated from on-premises only.

How can the database be migrated from on-premises to Amazon RDS with the LEAST amount of effort?

Options:

A.  

Back up the SQL Server database using a native SQL Server backup. Upload the backup files to Amazon S3. Download the backup files on an Amazon EC2 instance and restore them from the EC2 instance into the new production RDS instance.

B.  

Back up the SQL Server database using a native SQL Server backup. Upload the backup files to Amazon S3. Restore the backup files from the S3 bucket into the new production RDS instance.

C.  

Provision and configure AWS DMS. Set up replication between the on-premises SQL Server environment to replicate the database to the new production RDS instance.

D.  

Back up the SQL Server database using AWS Backup. Once the backup is complete, restore the completed backup to an Amazon EC2 instance and move it to the new production RDS instance.

Questions 32

The Security team for a finance company was notified of an internal security breach that happened 3 weeks ago. A Database Specialist must start producing audit logs out of the production Amazon Aurora PostgreSQL cluster for the Security team to use for monitoring and alerting. The Security team is required to perform real-time alerting and monitoring outside the Aurora DB cluster and wants to have the cluster push encrypted files to the chosen solution.

Which approach will meet these requirements?

Options:

A.  

Use pg_audit to generate audit logs and send the logs to the Security team.

B.  

Use AWS CloudTrail to audit the DB cluster and the Security team will get data from Amazon S3.

C.  

Set up database activity streams and connect the data stream from Amazon Kinesis to consumer applications.

D.  

Turn on verbose logging and set up a schedule for the logs to be dumped out for the Security team.

Questions 33

A company’s ecommerce website uses Amazon DynamoDB for purchase orders. Each order is made up of a Customer ID and an Order ID. The DynamoDB table uses the Customer ID as the partition key and the Order ID as the sort key.

To meet a new requirement, the company also wants the ability to query the table by using a third attribute named Invoice ID. Queries using the Invoice ID must be strongly consistent. A database specialist must provide this capability with optimal performance and minimal overhead.

What should the database administrator do to meet these requirements?

Options:

A.  

Add a global secondary index on Invoice ID to the existing table.

B.  

Add a local secondary index on Invoice ID to the existing table.

C.  

Recreate the table by using the latest snapshot while adding a local secondary index on Invoice ID.

D.  

Use the partition key and a FilterExpression parameter with a filter on Invoice ID for all queries.

Questions 34

A company has a production Amazon Aurora DB cluster that serves both online transaction processing (OLTP) transactions and compute-intensive reports. The reports run for 10% of the total cluster uptime while the OLTP transactions run all the time. The company has benchmarked its workload and determined that a six-node Aurora DB cluster is appropriate for the peak workload.

The company is now looking at cutting costs for this DB cluster, but needs to have a sufficient number of nodes in the cluster to support the workload at different times. The workload has not changed since the previous benchmarking exercise.

How can a Database Specialist address these requirements with minimal user involvement?

Options:

A.  

Split up the DB cluster into two different clusters: one for OLTP and the other for reporting. Monitor and set up replication between the two clusters to keep data consistent.

B.  

Review and evaluate the peak combined workload. Ensure that utilization of the DB cluster node is at an acceptable level. Adjust the number of instances, if necessary.

C.  

Use the stop cluster functionality to stop all the nodes of the DB cluster during times of minimal workload. The cluster can be restarted again depending on the workload at the time.

D.  

Set up automatic scaling on the DB cluster. This will allow the number of reader nodes to adjust automatically to the reporting workload, when needed.

Questions 35

A database professional maintains a fleet of Amazon RDS DB instances that are configured to use the default DB parameter group. The database professional must associate a custom parameter group with some of the DB instances.

When will the instances be assigned to this new parameter group once the database professional applies this change?

Options:

A.  

Instantaneously after the change is made to the parameter group

B.  

In the next scheduled maintenance window of the DB instances

C.  

After the DB instances are manually rebooted

D.  

Within 24 hours after the change is made to the parameter group

Questions 36

A manufacturing company stores its inventory details in an Amazon DynamoDB table in the us-east-2 Region. According to new compliance and regulatory policies, the company is required to back up all of its tables nightly and store these backups in the us-west-2 Region for disaster recovery for 1 year.

Which solution MOST cost-effectively meets these requirements?

Options:

A.  

Convert the existing DynamoDB table into a global table and create a global table replica in the us-west-2 Region.

B.  

Use AWS Backup to create a backup plan. Configure cross-Region replication in the plan and assign the DynamoDB table to this plan

C.  

Create an on-demand backup of the DynamoDB table and restore this backup in the us-west-2 Region.

D.  

Enable Amazon S3 Cross-Region Replication (CRR) on the S3 bucket where DynamoDB on-demand backups are stored.

Questions 37

A database specialist is constructing a stack by using AWS CloudFormation. The database specialist wants to prevent the stack's Amazon RDS ProductionDatabase resource from being accidentally deleted.

Which solution will satisfy this criterion?

Options:

A.  

Create a stack policy to prevent updates. Include "Effect" : "ProductionDatabase" and "Resource" : "Deny" in the policy.

B.  

Create an AWS CloudFormation stack in XML format. Set xAttribute as false.

C.  

Create an RDS DB instance without the DeletionPolicy attribute. Disable termination protection.

D.  

Create a stack policy to prevent updates. Include Effect, Deny, and Resource :ProductionDatabase in the policy.

Questions 38

A company is using Amazon Redshift as its data warehouse solution. The Redshift cluster handles the following types of workloads:

*Real-time inserts through Amazon Kinesis Data Firehose

*Bulk inserts through COPY commands from Amazon S3

*Analytics through SQL queries

Recently, the cluster has started to experience performance issues.

Which combination of actions should a database specialist take to improve the cluster's performance? (Choose three.)

Options:

A.  

Modify the Kinesis Data Firehose delivery stream to stream the data to Amazon S3 with a high buffer size and to load the data into Amazon Redshift by using the COPY command.

B.  

Stream real-time data into Redshift temporary tables before loading the data into permanent tables.

C.  

For bulk inserts, split input files on Amazon S3 into multiple files to match the number of slices on Amazon Redshift. Then use the COPY command to load data into Amazon Redshift.

D.  

For bulk inserts, use the parallel parameter in the COPY command to enable multi-threading.

E.  

Optimize analytics SQL queries to use sort keys.

F.  

Avoid using temporary tables in analytics SQL queries.

Questions 39

Recently, a financial institution created a portfolio management service. The application's backend is powered by Amazon Aurora MySQL.

The firm requires a recovery time objective (RTO) of five minutes and a recovery point objective (RPO) of five minutes. A database professional must create a disaster recovery system that is both efficient and has low replication latency.

How should the database professional tackle these requirements?

Options:

A.  

Configure AWS Database Migration Service (AWS DMS) and create a replica in a different AWS Region.

B.  

Configure an Amazon Aurora global database and add a different AWS Region.

C.  

Configure a binlog and create a replica in a different AWS Region.

D.  

Configure a cross-Region read replica.

Questions 40

An IT consulting company wants to reduce costs when operating its development environment databases. The company’s workflow creates multiple Amazon Aurora MySQL DB clusters for each development group. The Aurora DB clusters are only used for 8 hours a day. The DB clusters can then be deleted at the end of the development cycle, which lasts 2 weeks.

Which of the following provides the MOST cost-effective solution?

Options:

A.  

Use AWS CloudFormation templates. Deploy a stack with the DB cluster for each development group. Delete the stack at the end of the development cycle.

B.  

Use the Aurora DB cloning feature. Deploy a single development and test Aurora DB instance, and create clone instances for the development groups. Delete the clones at the end of the development cycle.

C.  

Use Aurora Replicas. From the master automatic pause compute capacity option, create replicas for each development group, and promote each replica to master. Delete the replicas at the end of the development cycle.

D.  

Use Aurora Serverless. Restore current Aurora snapshot and deploy to a serverless cluster for each development group. Enable the option to pause the compute capacity on the cluster and set an appropriate timeout.

Questions 41

A software-as-a-service (SaaS) company is using an Amazon Aurora Serverless DB cluster for its production MySQL database. The DB cluster has general logs and slow query logs enabled. A database engineer must use the most operationally efficient solution with minimal resource utilization to retain the logs and facilitate interactive search and analysis.

Which solution meets these requirements?

Options:

A.  

Use an AWS Lambda function to ship database logs to an Amazon S3 bucket. Use Amazon Athena and Amazon QuickSight to search and analyze the logs.

B.  

Download the logs from the DB cluster and store them in Amazon S3 by using manual scripts. Use Amazon Athena and Amazon QuickSight to search and analyze the logs.

C.  

Use an AWS Lambda function to ship database logs to an Amazon S3 bucket. Use Amazon Elasticsearch Service (Amazon ES) and Kibana to search and analyze the logs.

D.  

Use Amazon CloudWatch Logs Insights to search and analyze the logs when the logs are automatically uploaded by the DB cluster.

Questions 42

A company has an on-premises system that tracks various database operations that occur over the lifetime of a database, including database shutdown, deletion, creation, and backup.

The company recently moved two databases to Amazon RDS and is looking at a solution that would satisfy these requirements. The data could be used by other systems within the company.

Which solution will meet these requirements with minimal effort?

Options:

A.  

Create an Amazon Cloudwatch Events rule with the operations that need to be tracked on Amazon RDS. Create an AWS Lambda function to act on these rules and write the output to the tracking systems.

B.  

Create an AWS Lambda function to trigger on AWS CloudTrail API calls. Filter on specific RDS API calls and write the output to the tracking systems.

C.  

Create RDS event subscriptions. Have the tracking systems subscribe to specific RDS event system notifications.

D.  

Write RDS logs to Amazon Kinesis Data Firehose. Create an AWS Lambda function to act on these rules and write the output to the tracking systems.

Questions 43

An online gaming company is using an Amazon DynamoDB table in on-demand mode to store game scores. After an intensive advertisement campaign in South America, the average number of concurrent users rapidly increases from 100,000 to 500,000 in less than 10 minutes every day around 5 PM.

The on-call software reliability engineer has observed that the application logs contain a high number of DynamoDB throttling exceptions caused by game score insertions around 5 PM. Customer service has also reported that several users are complaining about their scores not being registered.

How should the database administrator remediate this issue at the lowest cost?

Options:

A.  

Enable auto scaling and set the target usage rate to 90%.

B.  

Switch the table to provisioned mode and enable auto scaling.

C.  

Switch the table to provisioned mode and set the throughput to the peak value.

D.  

Create a DynamoDB Accelerator cluster and use it to access the DynamoDB table.
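
As an illustration of provisioned mode with auto scaling (option B), DynamoDB write capacity is scaled through Application Auto Scaling. A sketch with a hypothetical table name and limits; note that target tracking reacts to CloudWatch metrics over minutes, which is worth weighing against the 10-minute ramp in the question:

    import boto3

    aas = boto3.client("application-autoscaling")
    aas.register_scalable_target(
        ServiceNamespace="dynamodb",
        ResourceId="table/GameScores",  # hypothetical table
        ScalableDimension="dynamodb:table:WriteCapacityUnits",
        MinCapacity=5000,
        MaxCapacity=100000,
    )
    aas.put_scaling_policy(
        PolicyName="game-scores-write-scaling",
        ServiceNamespace="dynamodb",
        ResourceId="table/GameScores",
        ScalableDimension="dynamodb:table:WriteCapacityUnits",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 70.0,  # keep consumed/provisioned capacity near 70%
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
            },
        },
    )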

Questions 44

A Database Specialist needs to define a database migration strategy to migrate an on-premises Oracle database to an Amazon Aurora MySQL DB cluster. The company requires near-zero downtime for the data migration. The solution must also be cost-effective.

Which approach should the Database Specialist take?

Options:

A.  

Dump all the tables from the Oracle database into an Amazon S3 bucket using datapump (expdp). Run data transformations in AWS Glue. Load the data from the S3 bucket to the Aurora DB cluster.

B.  

Order an AWS Snowball appliance and copy the Oracle backup to the Snowball appliance. Once the Snowball data is delivered to Amazon S3, create a new Aurora DB cluster. Enable the S3 integration to migrate the data directly from Amazon S3 to Amazon RDS.

C.  

Use the AWS Schema Conversion Tool (AWS SCT) to help rewrite database objects to MySQL during the schema migration. Use AWS DMS to perform the full load and change data capture (CDC) tasks.

D.  

Use AWS Server Migration Service (AWS SMS) to import the Oracle virtual machine image as an Amazon EC2 instance. Use the Oracle Logical Dump utility to migrate the Oracle data from Amazon EC2 to an Aurora DB cluster.

Questions 45

A database specialist is designing the database for a software-as-a-service (SaaS) version of an employee information application. In the current architecture, the change history of employee records is stored in a single table in an Amazon RDS for Oracle database. Triggers on the employee table populate the history table with historical records.

This architecture has two major challenges. First, there is no way to guarantee that the records have not been changed in the history table. Second, queries on the history table are slow because of the large size of the table and the need to run the queries against a large subset of data in the table.

The database specialist must design a solution that prevents modification of the historical records. The solution also must maximize the speed of the queries.

Which solution will meet these requirements?

Options:

A.  

Migrate the current solution to an Amazon DynamoDB table. Use DynamoDB Streams to keep track of changes. Use DynamoDB Accelerator (DAX) to improve query performance.

B.  

Write employee record history to Amazon Quantum Ledger Database (Amazon QLDB) for historical records and to an Amazon OpenSearch Service domain for queries.

C.  

Use Amazon Aurora PostgreSQL to store employee record history in a single table. Use Aurora Auto Scaling to provision more capacity.

D.  

Build a solution that uses an Amazon Redshift cluster for historical records. Query the Redshift cluster directly as needed.

Questions 46

A company stores session history for its users in an Amazon DynamoDB table. The company has a large user base and generates large amounts of session data.

Teams analyze the session data for 1 week, and then the data is no longer needed. A database specialist needs to design an automated solution to purge session data that is more than 1 week old.

Which strategy meets these requirements with the MOST operational efficiency?

Options:

A.  

Create an AWS Step Functions state machine with a DynamoDB DeleteItem operation that uses the ConditionExpression parameter to delete items older than a week. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled rule that runs the Step Functions state machine on a weekly basis.

B.  

Create an AWS Lambda function to delete items older than a week from the DynamoDB table. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled rule that triggers the Lambda function on a weekly basis.

C.  

Enable Amazon DynamoDB Streams on the table. Use a stream to invoke an AWS Lambda function to delete items older than a week from the DynamoDB table

D.  

Enable TTL on the DynamoDB table and set a Number data type as the TTL attribute. DynamoDB will automatically delete items that have a TTL that is less than the current time.
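
For context on option D, TTL is enabled per table on a Number attribute that holds an epoch-seconds timestamp, and each item carries its own expiry. A boto3 sketch with hypothetical names:

    import time
    import boto3

    ddb = boto3.client("dynamodb")
    ddb.update_time_to_live(
        TableName="UserSessions",  # hypothetical
        TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
    )

    # Each new item sets its own expiry, here one week in the future.
    ddb.put_item(
        TableName="UserSessions",
        Item={
            "session_id": {"S": "abc-123"},
            "expires_at": {"N": str(int(time.time()) + 7 * 24 * 3600)},
        },
    )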

Questions 47

A company has branch offices in the United States and Singapore. The company has a three-tier web application that uses a shared database. The database runs on an Amazon RDS for MySQL DB instance that is hosted in the us-west-2 Region. The application has a distributed front end that is deployed in us-west-2 and in the ap-southeast-1 Region. The company uses this front end as a dashboard that provides statistics to sales managers in each branch office.

The dashboard loads more slowly in the Singapore branch office than in the United States branch office. The company needs a solution so that the dashboard loads consistently for users in each location.

Which solution will meet these requirements in the MOST operationally efficient way?

Options:

A.  

Take a snapshot of the DB instance in us-west-2. Create a new DB instance in ap-southeast-2 from the snapshot. Reconfigure the ap-southeast-1 front-end dashboard to access the new DB instance.

B.  

Create an RDS read replica in ap-southeast-1 from the primary DB instance in us-west-2. Reconfigure the ap-southeast-1 front-end dashboard to access the read replica.

C.  

Create a new DB instance in ap-southeast-1. Use AWS Database Migration Service (AWS DMS) and change data capture (CDC) to update the new DB instance in ap-southeast-1. Reconfigure the ap-southeast-1 front-end dashboard to access the new DB instance.

D.  

Create an RDS read replica in us-west-2, where the primary DB instance resides. Create a read replica in ap-southeast-1 from the read replica in us-west-2. Reconfigure the ap-southeast-1 front-end dashboard to access the read replica in ap-southeast-1.

Questions 48

An information management services company is storing JSON documents on premises. The company is using a MongoDB 3.6 database but wants to migrate to AWS. The solution must be compatible, scalable, and fully managed. The solution also must result in as little downtime as possible during the migration.

Which solution meets these requirements?

Options:

A.  

Create an AWS Database Migration Service (AWS DMS) replication instance, a source endpoint for MongoDB, and a target endpoint of Amazon DocumentDB (with MongoDB compatibility).

B.  

Create an AWS Database Migration Service (AWS DMS) replication instance, a source endpoint for MongoDB, and a target endpoint of a MongoDB image that is hosted on Amazon EC2

C.  

Use the mongodump and mongorestore tools to migrate the data from the source MongoDB deployment to Amazon DocumentDB (with MongoDB compatibility).

D.  

Use the mongodump and mongorestore tools to migrate the data from the source MongoDB deployment to a MongoDB image that is hosted on Amazon EC2.

Questions 49

The Amazon CloudWatch metric for FreeLocalStorage on an Amazon Aurora MySQL DB instance shows that the amount of local storage is below 10 MB. A database engineer must increase the local storage available in the Aurora DB instance.

How should the database engineer meet this requirement?

Options:

A.  

Modify the DB instance to use an instance class that provides more local SSD storage.

B.  

Modify the Aurora DB cluster to enable automatic volume resizing.

C.  

Increase the local storage by upgrading the database engine version.

D.  

Modify the DB instance and configure the required storage volume in the configuration section.

Questions 50

A large gaming company is creating a centralized solution to store player session state for multiple online games. The workload requires key-value storage with low latency and will be an equal mix of reads and writes. Data should be written into the AWS Region closest to the user across the games’ geographically distributed user base. The architecture should minimize the amount of overhead required to manage the replication of data between Regions.

Which solution meets these requirements?

Options:

A.  

Amazon RDS for MySQL with multi-Region read replicas

B.  

Amazon Aurora global database

C.  

Amazon RDS for Oracle with GoldenGate

D.  

Amazon DynamoDB global tables

Questions 51

An ecommerce company uses a backend application that stores data in an Amazon DynamoDB table. The backend application runs in a private subnet in a VPC and must connect to this table.

The company must minimize any network latency that results from network connectivity issues, even during periods of heavy application usage. A database administrator also needs the ability to use a private connection to connect to the DynamoDB table from the application.

Which solution will meet these requirements?

Options:

A.  

Use network ACLs to ensure that any outgoing or incoming connections to any port except DynamoDB are deactivated. Encrypt API calls by using TLS.

B.  

Create a VPC endpoint for DynamoDB in the application's VPC. Use the VPC endpoint to access the table.

C.  

Create an AWS Lambda function that has access to DynamoDB. Restrict outgoing access only to this Lambda function from the application.

D.  

Use a VPN to route all communication to DynamoDB through the company's own corporate network infrastructure.

Questions 52

A team of Database Specialists is currently investigating performance issues on an Amazon RDS for MySQL DB instance and is reviewing related metrics. The team wants to narrow the possibilities down to specific database wait events to better understand the situation.

How can the Database Specialists accomplish this?

Options:

A.  

Enable the option to push all database logs to Amazon CloudWatch for advanced analysis

B.  

Create appropriate Amazon CloudWatch dashboards to contain specific periods of time

C.  

Enable Amazon RDS Performance Insights and review the appropriate dashboard

D.  

Enable Enhanced Monitoring with the appropriate settings
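
For reference, Performance Insights breaks database load down by wait event and can be enabled on an existing instance. A boto3 sketch with a hypothetical identifier:

    import boto3

    rds = boto3.client("rds")
    rds.modify_db_instance(
        DBInstanceIdentifier="mysql-prod",     # hypothetical
        EnablePerformanceInsights=True,
        PerformanceInsightsRetentionPeriod=7,  # days; 7 is the no-extra-cost tier
        ApplyImmediately=True,
    )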

Questions 53

Recently, a gaming firm purchased a popular iOS game that is especially popular during the Christmas season. The business has opted to add a leaderboard to the game, powered by Amazon DynamoDB. The application's load is likely to increase significantly throughout the Christmas season.

Which solution satisfies these criteria at the lowest possible cost?

Options:

A.  

DynamoDB Streams

B.  

DynamoDB with DynamoDB Accelerator

C.  

DynamoDB with on-demand capacity mode

D.  

DynamoDB with provisioned capacity mode with Auto Scaling

Questions 54

A business that specializes in internet advertising is developing an application that will show ads to its customers. The application stores data in an Amazon DynamoDB table. Additionally, the application caches its reads using a DynamoDB Accelerator (DAX) cluster. The majority of reads come through the GetItem and BatchGetItem queries, and the application does not require strongly consistent reads.

After deployment, the application cache does not behave as intended. Certain strongly consistent queries to the DAX cluster respond in several milliseconds rather than microseconds.

How can the business optimize cache behavior in order to boost application performance?

Options:

A.  

Increase the size of the DAX cluster.

B.  

Configure DAX to be an item cache with no query cache

C.  

Use eventually consistent reads instead of strongly consistent reads.

D.  

Create a new DAX cluster with a higher TTL for the item cache.

Questions 55

A company has a hybrid environment in which a VPC connects to an on-premises network through an AWS Site-to-Site VPN connection. The VPC contains an application that is hosted on Amazon EC2 instances. The EC2 instances run in private subnets behind an Application Load Balancer (ALB) that is associated with multiple public subnets. The EC2 instances need to securely access an Amazon DynamoDB table.

Which solution will meet these requirements?

Options:

A.  

Use the internet gateway of the VPC to access the DynamoDB table. Use the ALB to route the traffic to the EC2 instances.

B.  

Add a NAT gateway in one of the public subnets of the VPC. Configure the security groups of the EC2 instances to access the DynamoDB table through the NAT gateway.

C.  

Use the Site-to-Site VPN connection to route all DynamoDB network traffic through the on-premises network infrastructure to access the EC2 instances.

D.  

Create a VPC endpoint for DynamoDB. Assign the endpoint to the route table of the private subnets that contain the EC2 instances.
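
For background, DynamoDB is reached privately through a gateway VPC endpoint that is attached to route tables. A boto3 sketch; the VPC ID, route table ID, and Region are placeholders:

    import boto3

    ec2 = boto3.client("ec2")
    ec2.create_vpc_endpoint(
        VpcId="vpc-0123456789abcdef0",            # placeholder
        VpcEndpointType="Gateway",
        ServiceName="com.amazonaws.us-east-1.dynamodb",
        RouteTableIds=["rtb-0123456789abcdef0"],  # route tables of the private subnets
    )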

Questions 56

A company is concerned about the cost of a large-scale, transactional application using Amazon DynamoDB that only needs to store data for 2 days before it is deleted. In looking at the tables, a Database Specialist notices that much of the data is months old, and goes back to when the application was first deployed.

What can the Database Specialist do to reduce the overall cost?

Options:

A.  

Create a new attribute in each table to track the expiration time and create an AWS Glue transformation to delete entries more than 2 days old.

B.  

Create a new attribute in each table to track the expiration time and enable DynamoDB Streams on each table.

C.  

Create a new attribute in each table to track the expiration time and enable time to live (TTL) on each table.

D.  

Create an Amazon CloudWatch Events event to export the data to Amazon S3 daily using AWS Data Pipeline and then truncate the Amazon DynamoDB table.

Questions 57

A development team at an international gaming company is experimenting with Amazon DynamoDB to store in-game events for three mobile games. The most popular game hosts a maximum of 500,000 concurrent users, and the least popular game hosts a maximum of 10,000 concurrent users. The average size of an event is 20 KB, and the average user session produces one event each second. Each event is tagged with a time in milliseconds and a globally unique identifier.

The lead developer created a single DynamoDB table for the events with the following schema:

  • Partition key: game name
  • Sort key: event identifier
  • Local secondary index: player identifier
  • Event time

The tests were successful in a small-scale development environment. However, when deployed to production, new events stopped being added to the table and the logs show DynamoDB failures with the ItemCollectionSizeLimitExceededException error code.

Which design change should a database specialist recommend to the development team?

Options:

A.  

Use the player identifier as the partition key. Use the event time as the sort key. Add a global secondary index with the game name as the partition key and the event time as the sort key.

B.  

Create two tables. Use the game name as the partition key in both tables. Use the event time as the sort key for the first table. Use the player identifier as the sort key for the second table.

C.  

Replace the sort key with a compound value consisting of the player identifier collated with the event time, separated by a dash. Add a local secondary index with the player identifier as the sort key.

D.  

Create one table for each game. Use the player identifier as the partition key. Use the event time as the sort key.

Questions 58

A company just migrated to Amazon Aurora PostgreSQL from an on-premises Oracle database. After the migration, the company discovered there is a period of time every day around 3:00 PM where the response time of the application is noticeably slower. The company has narrowed down the cause of this issue to the database and not the application.

Which set of steps should the Database Specialist take to most efficiently find the problematic PostgreSQL query?

Options:

A.  

Create an Amazon CloudWatch dashboard to show the number of connections, CPU usage, and disk space consumption. Watch these dashboards during the next slow period.

B.  

Launch an Amazon EC2 instance, and install and configure an open-source PostgreSQL monitoring tool that will run reports based on the output error logs.

C.  

Modify the logging database parameter to log all the queries related to locking in the database and then check the logs after the next slow period for this information.

D.  

Enable Amazon RDS Performance Insights on the PostgreSQL database. Use the metrics to identify any queries that are related to spikes in the graph during the next slow period.

Discussion 0
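
Performance Insights (option D) can be enabled on an existing instance without downtime and ties load spikes back to individual SQL statements. A one-call sketch with a placeholder identifier:

    import boto3

    rds = boto3.client("rds")

    # Turn on Performance Insights; 7 days of retention is in the free tier.
    rds.modify_db_instance(
        DBInstanceIdentifier="aurora-postgres-writer",  # placeholder
        EnablePerformanceInsights=True,
        PerformanceInsightsRetentionPeriod=7,
        ApplyImmediately=True,
    )
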
Questions 59

A company has multiple applications serving data from a secure on-premises database. The company is migrating all applications and databases to the AWS Cloud. The IT Risk and Compliance department requires that auditing be enabled on all secure databases to capture all logins, logouts, failed logins, permission changes, and database schema changes. A Database Specialist has recommended Amazon Aurora MySQL as the migration target, leveraging the Advanced Auditing feature in Aurora.

Which events need to be specified in the Advanced Auditing configuration to satisfy the minimum auditing requirements? (Choose three.)

Options:

A.  

CONNECT

B.  

QUERY_DCL

C.  

QUERY_DDL

D.  

QUERY_DML

E.  

TABLE

F.  

QUERY

Discussion 0
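
The three required event classes map directly onto the requirements: CONNECT records logins, logouts, and failed logins; QUERY_DCL records permission changes; QUERY_DDL records schema changes. A sketch that sets them on a custom DB cluster parameter group (the group name is a placeholder):

    import boto3

    rds = boto3.client("rds")

    # server_audit_events accepts a comma-separated list of event classes.
    rds.modify_db_cluster_parameter_group(
        DBClusterParameterGroupName="aurora-mysql-audit",  # placeholder custom group
        Parameters=[
            {"ParameterName": "server_audit_logging",
             "ParameterValue": "1", "ApplyMethod": "immediate"},
            {"ParameterName": "server_audit_events",
             "ParameterValue": "CONNECT,QUERY_DCL,QUERY_DDL",
             "ApplyMethod": "immediate"},
        ],
    )
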
Questions 60

A Database Specialist is creating a new Amazon Neptune DB cluster and is attempting to load data from Amazon S3 into the Neptune DB cluster using the Neptune bulk loader API. The Database Specialist receives the following error:

“Unable to connect to s3 endpoint. Provided source = s3://mybucket/graphdata/ and region = us-east-1. Please verify your S3 configuration.”

Which combination of actions should the Database Specialist take to troubleshoot the problem? (Choose two.)

Options:

A.  

Check that Amazon S3 has an IAM role granting read access to Neptune

B.  

Check that an Amazon S3 VPC endpoint exists

C.  

Check that a Neptune VPC endpoint exists

D.  

Check that Amazon EC2 has an IAM role granting read access to Amazon S3

E.  

Check that Neptune has an IAM role granting read access to Amazon S3

Discussion 0
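
That loader error usually means Neptune cannot reach S3 (no gateway VPC endpoint) or lacks an attached IAM role with read access (options B and E). Both checks can be scripted; a sketch with placeholder identifiers:

    import boto3

    # 1. Confirm an S3 gateway endpoint exists in the cluster's VPC.
    ec2 = boto3.client("ec2", region_name="us-east-1")
    endpoints = ec2.describe_vpc_endpoints(
        Filters=[
            {"Name": "vpc-id", "Values": ["vpc-0123456789abcdef0"]},  # placeholder VPC
            {"Name": "service-name", "Values": ["com.amazonaws.us-east-1.s3"]},
        ]
    )["VpcEndpoints"]
    print("S3 VPC endpoint found:", bool(endpoints))

    # 2. Attach an IAM role that grants S3 read access to the cluster,
    #    if one is missing.
    neptune = boto3.client("neptune", region_name="us-east-1")
    neptune.add_role_to_db_cluster(
        DBClusterIdentifier="my-neptune-cluster",  # placeholder
        RoleArn="arn:aws:iam::123456789012:role/NeptuneLoadFromS3",  # placeholder
    )
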
Questions 61

A business's production database is hosted on a single-node Amazon RDS for MySQL DB instance. The database instance is hosted in a United States AWS Region.

A week before a significant sales event, a fresh database maintenance update is released. The maintenance update has been designated as necessary. The firm wants to minimize the database instance's downtime and requests that a database expert make the database instance highly available until the sales event concludes.

Which solution will satisfy these criteria?

Options:

A.  

Defer the maintenance update until the sales event is over.

B.  

Create a read replica with the latest update. Initiate a failover before the sales event.

C.  

Create a read replica with the latest update. Transfer all read-only traffic to the read replica during the sales event.

D.  

Convert the DB instance into a Multi-AZ deployment. Apply the maintenance update.

Discussion 0
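
Option D works because converting to Multi-AZ does not take the primary offline, and for many maintenance actions RDS updates the standby first and then fails over, keeping downtime to the failover window. A one-call sketch with a placeholder identifier:

    import boto3

    rds = boto3.client("rds")

    # Convert the single-node instance to Multi-AZ before applying maintenance;
    # the standby is built from a snapshot, so the primary stays available.
    rds.modify_db_instance(
        DBInstanceIdentifier="prod-mysql",  # placeholder
        MultiAZ=True,
        ApplyImmediately=True,
    )
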
Questions 62

An ecommerce firm has entrusted a database specialist with designing a reporting dashboard that visualizes crucial business KPIs derived from the company's primary production database running on Amazon Aurora. The dashboard should be able to read data within 100 milliseconds after an update.

The Database Specialist must conduct an audit of the Aurora DB cluster's present setup and provide a cost-effective alternative. The solution must support the unexpected read demand generated by the reporting dashboard without impairing the DB cluster's write availability and performance.

Which solution satisfies these criteria?

Options:

A.  

Turn on the serverless option in the DB cluster so it can automatically scale based on demand.

B.  

Provision a clone of the existing DB cluster for the new Application team.

C.  

Create a separate DB cluster for the new workload, refresh from the source DB cluster, and set up ongoing replication using AWS DMS change data capture (CDC).

D.  

Add an automatic scaling policy to the DB cluster to add Aurora Replicas to the cluster based on CPU consumption.

Discussion 0
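
Aurora replica auto scaling (option D) is configured through the Application Auto Scaling API against the cluster's ReadReplicaCount dimension. A sketch with a placeholder cluster identifier:

    import boto3

    autoscaling = boto3.client("application-autoscaling")

    # Register the cluster's replica count as a scalable target.
    autoscaling.register_scalable_target(
        ServiceNamespace="rds",
        ResourceId="cluster:prod-aurora-cluster",  # placeholder cluster ID
        ScalableDimension="rds:cluster:ReadReplicaCount",
        MinCapacity=1,
        MaxCapacity=5,
    )

    # Add replicas when average reader CPU exceeds the 70% target.
    autoscaling.put_scaling_policy(
        PolicyName="reader-cpu-target",
        ServiceNamespace="rds",
        ResourceId="cluster:prod-aurora-cluster",
        ScalableDimension="rds:cluster:ReadReplicaCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 70.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
            },
        },
    )
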
Questions 63

A security team is conducting an audit for a financial company. The security team discovers that the database credentials of an Amazon RDS for MySQL DB instance are hardcoded in the source code. The source code is stored in a shared location for automatic deployment and is exposed to all users who can access the location.

A database specialist must use encryption to ensure that the credentials are not visible in the source code.

Which solution will meet these requirements?

Options:

A.  

Use an AWS Key Management Service (AWS KMS) key to encrypt the most recent database backup. Restore the backup as a new database to activate encryption.

B.  

Store the source code to access the credentials in an AWS Systems Manager Parameter Store secure string parameter that is encrypted by AWS Key Management Service (AWS KMS). Access the code with calls to Systems Manager.

C.  

Store the credentials in an AWS Systems Manager Parameter Store secure string parameter that is encrypted by AWS Key Management Service (AWS KMS). Access the credentials with calls to Systems Manager.

D.  

Use an AWS Key Management Service (AWS KMS) key to encrypt the DB instance at rest. Activate RDS encryption in transit by using SSL certificates.

Discussion 0
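
Option C keeps the secret out of the source code entirely. A sketch with a placeholder parameter path and KMS key alias:

    import boto3

    ssm = boto3.client("ssm")

    # Store the credential once, encrypted with a customer managed KMS key.
    ssm.put_parameter(
        Name="/prod/mysql/password",    # placeholder parameter path
        Value="s3cr3t-value",           # placeholder; never commit real values
        Type="SecureString",
        KeyId="alias/prod-db-secrets",  # placeholder KMS key alias
        Overwrite=True,
    )

    # Application code fetches and decrypts it at runtime instead of hardcoding.
    password = ssm.get_parameter(
        Name="/prod/mysql/password", WithDecryption=True
    )["Parameter"]["Value"]
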
Questions 64

A company is using Amazon RDS for PostgreSQL. The Security team wants all database connection requests to be logged and retained for 180 days. The RDS for PostgreSQL DB instance is currently using the default parameter group. A Database Specialist has identified that setting the log_connections parameter to 1 will enable connections logging.

Which combination of steps should the Database Specialist take to meet the logging and retention requirements? (Choose two.)

Options:

A.  

Update the log_connections parameter in the default parameter group

B.  

Create a custom parameter group, update the log_connections parameter, and associate the parameter with the DB instance

C.  

Enable publishing of database engine logs to Amazon CloudWatch Logs and set the event expiration to 180 days

D.  

Enable publishing of database engine logs to an Amazon S3 bucket and set the lifecycle policy to 180 days

E.  

Connect to the RDS PostgreSQL host and update the log_connections parameter in the postgresql.conf file

Discussion 0
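
Options B and C combine a custom parameter group with CloudWatch Logs retention. A sketch with placeholder names; the exported log group follows the /aws/rds/instance/<instance-id>/postgresql naming convention:

    import boto3

    rds = boto3.client("rds")
    logs = boto3.client("logs")

    # Default parameter groups are immutable, so create and modify a custom one.
    rds.create_db_parameter_group(
        DBParameterGroupName="pg-conn-logging",  # placeholder
        DBParameterGroupFamily="postgres14",     # match your engine version
        Description="Enable connection logging",
    )
    rds.modify_db_parameter_group(
        DBParameterGroupName="pg-conn-logging",
        Parameters=[{"ParameterName": "log_connections",
                     "ParameterValue": "1", "ApplyMethod": "immediate"}],
    )

    # Associate the group and export the postgresql log to CloudWatch Logs.
    rds.modify_db_instance(
        DBInstanceIdentifier="crm-postgres",     # placeholder
        DBParameterGroupName="pg-conn-logging",
        CloudwatchLogsExportConfiguration={"EnableLogTypes": ["postgresql"]},
        ApplyImmediately=True,
    )

    # Keep the exported events for 180 days.
    logs.put_retention_policy(
        logGroupName="/aws/rds/instance/crm-postgres/postgresql",
        retentionInDays=180,
    )
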
Questions 65

A business's production databases are housed on a 3 TB Amazon Aurora MySQL DB cluster. The DB cluster is deployed in the us-east-1 Region. For disaster recovery (DR) purposes, the company's database expert needs to be able to rapidly deploy the DB cluster in another AWS Region to handle the production load with an RTO of less than two hours.

Which approach is the MOST operationally efficient way to meet these requirements?

Options:

A.  

Implement an AWS Lambda function to take a snapshot of the production DB cluster every 2 hours, and copy that snapshot to an Amazon S3 bucket in the DR Region. Restore the snapshot to an appropriately sized DB cluster in the DR Region.

B.  

Add a cross-Region read replica in the DR Region with the same instance type as the current primary instance. If the read replica in the DR Region needs to be used for production, promote the read replica to become a standalone DB cluster.

C.  

Create a smaller DB cluster in the DR Region. Configure an AWS Database Migration Service (AWS DMS) task with change data capture (CDC) enabled to replicate data from the current production DB cluster to the DB cluster in the DR Region.

D.  

Create an Aurora global database that spans two Regions. Use AWS Database Migration Service (AWS DMS) to migrate the existing database to the new global database.

Discussion 0
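
Option B's cross-Region replica is created as a secondary cluster that points at the source cluster's ARN and is promoted during a DR event. A minimal boto3 sketch, assuming an unencrypted source (an encrypted cluster also needs a KmsKeyId); all identifiers are placeholders:

    import boto3

    # In the DR Region: create a replica cluster from the us-east-1 source.
    rds_dr = boto3.client("rds", region_name="us-west-2")  # placeholder DR Region
    rds_dr.create_db_cluster(
        DBClusterIdentifier="prod-aurora-dr",
        Engine="aurora-mysql",
        ReplicationSourceIdentifier=(
            "arn:aws:rds:us-east-1:123456789012:cluster:prod-aurora"  # placeholder
        ),
    )
    rds_dr.create_db_instance(
        DBInstanceIdentifier="prod-aurora-dr-1",
        DBClusterIdentifier="prod-aurora-dr",
        Engine="aurora-mysql",
        DBInstanceClass="db.r5.2xlarge",  # placeholder; match the current primary
    )

    # During a DR event: detach the replica cluster and promote it.
    rds_dr.promote_read_replica_db_cluster(DBClusterIdentifier="prod-aurora-dr")
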
Questions 66

An ecommerce company uses Amazon DynamoDB as the backend for its payments system. A new regulation requires the company to log all data access requests for financial audits. For this purpose, the company plans to use AWS logging and save logs to Amazon S3

How can a database specialist activate logging on the database?

Options:

A.  

Use AWS CloudTrail to monitor DynamoDB control-plane operations. Create a DynamoDB stream to monitor data-plane operations. Pass the stream to Amazon Kinesis Data Streams. Use that stream as a source for Amazon Kinesis Data Firehose to store the data in an Amazon S3 bucket.

B.  

Use AWS CloudTrail to monitor DynamoDB data-plane operations. Create a DynamoDB stream to monitor control-plane operations. Pass the stream to Amazon Kinesis Data Streams. Use that stream as a source for Amazon Kinesis Data Firehose to store the data in an Amazon S3 bucket.

C.  

Create two trails in AWS CloudTrail. Use Trail1 to monitor DynamoDB control-plane operations. Use Trail2 to monitor DynamoDB data-plane operations.

D.  

Use AWS CloudTrail to monitor DynamoDB data-plane and control-plane operations.

Discussion 0
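
CloudTrail can cover both planes for DynamoDB (option D): management events are captured by a trail by default, and data-plane events are added as data event selectors. A sketch with a placeholder trail name:

    import boto3

    cloudtrail = boto3.client("cloudtrail")

    # Log DynamoDB data-plane calls (GetItem, PutItem, ...) for all tables;
    # the trail itself already delivers control-plane events to S3.
    cloudtrail.put_event_selectors(
        TrailName="payments-audit-trail",  # placeholder existing trail
        EventSelectors=[{
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,
            "DataResources": [{
                "Type": "AWS::DynamoDB::Table",
                "Values": ["arn:aws:dynamodb"],  # all tables in the account
            }],
        }],
    )
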
Questions 67

A database specialist needs to move an Amazon RDS DB instance from one AWS account to another AWS account.

Which solution will meet this requirement with the LEAST operational effort?

Options:

A.  

Use AWS Database Migration Service (AWS DMS) to migrate the DB instance from the source AWS account to the destination AWS account.

B.  

Create a DB snapshot of the DB instance. Share the snapshot with the destination AWS account. Create a new DB instance by restoring the snapshot in the destination AWS account.

C.  

Create a Multi-AZ deployment for the DB instance. Create a read replica for the DB instance in the source AWS account. Use the read replica to replicate the data into the DB instance in the destination AWS account.

D.  

Use AWS DataSync to back up the DB instance in the source AWS account. Use AWS Resource Access Manager (AWS RAM) to restore the backup in the destination AWS account.

Discussion 0
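
Option B is three calls end to end. A sketch with placeholder identifiers and account IDs; the snapshot must be unencrypted, or its KMS key must also be shared:

    import boto3

    rds = boto3.client("rds")

    # In the source account: snapshot the instance and share the snapshot.
    rds.create_db_snapshot(
        DBInstanceIdentifier="app-db",  # placeholder
        DBSnapshotIdentifier="app-db-transfer",
    )
    rds.get_waiter("db_snapshot_available").wait(
        DBSnapshotIdentifier="app-db-transfer"
    )
    rds.modify_db_snapshot_attribute(
        DBSnapshotIdentifier="app-db-transfer",
        AttributeName="restore",
        ValuesToAdd=["210987654321"],  # placeholder destination account ID
    )

    # In the destination account: restore from the shared snapshot's ARN.
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier="app-db",
        DBSnapshotIdentifier=(
            "arn:aws:rds:us-east-1:123456789012:snapshot:app-db-transfer"
        ),
    )
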
Questions 68

A company is using Amazon RDS for MySQL to redesign its business application. A Database Specialist has noticed that the Development team is restoring their MySQL database multiple times a day when Developers make mistakes in their schema updates. The Developers sometimes need to wait hours for the restores to complete.

Multiple team members are working on the project, making it difficult to find the correct restore point for each mistake.

Which approach should the Database Specialist take to reduce downtime?

Options:

A.  

Deploy multiple read replicas and have the team members make changes to separate replica instances

B.  

Migrate to Amazon RDS for SQL Server, take a snapshot, and restore from the snapshot

C.  

Migrate to Amazon Aurora MySQL and enable the Aurora Backtrack feature

D.  

Enable the Amazon RDS for MySQL Backtrack feature

Discussion 0
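
Backtrack (option C) rewinds an Aurora MySQL cluster in place, typically in minutes, with no restore and no new instance. A sketch with a placeholder identifier, assuming the cluster was created with a backtrack window enabled:

    from datetime import datetime, timedelta, timezone

    import boto3

    rds = boto3.client("rds")

    # Rewind the cluster 30 minutes, to just before the bad schema change.
    # Backtrack must have been enabled (BacktrackWindow) when the Aurora MySQL
    # cluster was created or restored.
    rds.backtrack_db_cluster(
        DBClusterIdentifier="dev-aurora-mysql",  # placeholder
        BacktrackTo=datetime.now(timezone.utc) - timedelta(minutes=30),
        UseEarliestTimeOnPointInTimeUnavailable=True,
    )
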
Questions 69

A company is moving its fraud detection application from on premises to the AWS Cloud and is using Amazon Neptune for data storage. The company has set up a 1 Gbps AWS Direct Connect connection to migrate 25 TB of fraud detection data from the on-premises data center to a Neptune DB instance. The company already has an Amazon S3 bucket and an S3 VPC endpoint, and 80% of the company’s network bandwidth is available.

How should the company perform this data load?

Options:

A.  

Use an AWS SDK with a multipart upload to transfer the data from on premises to the S3 bucket. Use the Copy command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.

B.  

Use AWS Database Migration Service (AWS DMS) to transfer the data from on premises to the S3 bucket. Use the Loader command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.

C.  

Use AWS DataSync to transfer the data from on premises to the S3 bucket. Use the Loader command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.

D.  

Use the AWS CLI to transfer the data from on premises to the S3 bucket. Use the Copy command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.

Discussion 0
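
The bulk load itself is started with an HTTP POST to the cluster's loader endpoint. A minimal sketch using the third-party requests library, run from inside the Neptune VPC; the endpoint, bucket prefix, and role ARN are placeholders, and the call assumes IAM database authentication is disabled (otherwise the request must be SigV4-signed):

    import requests

    # Placeholder loader endpoint for the Neptune cluster.
    loader_url = (
        "https://my-neptune.cluster-abc123.us-east-1"
        ".neptune.amazonaws.com:8182/loader"
    )

    response = requests.post(
        loader_url,
        json={
            "source": "s3://fraud-data-bucket/export/",  # placeholder S3 prefix
            "format": "csv",  # Gremlin CSV; RDF formats also supported
            "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",
            "region": "us-east-1",
            "failOnError": "TRUE",
        },
    )
    print(response.json())  # returns a loadId that can be polled for status
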
Questions 70

A company is migrating a mission-critical 2-TB Oracle database from on premises to Amazon Aurora. The cost for the database migration must be kept to a minimum, and both the on-premises Oracle database and the Aurora DB cluster must remain open for write traffic until the company is ready to completely cut over to Aurora.

Which combination of actions should a database specialist take to accomplish this migration as quickly as possible? (Choose two.)

Options:

A.  

Use the AWS Schema Conversion Tool (AWS SCT) to convert the source database schema. Then restore the converted schema to the target Aurora DB cluster.

B.  

Use Oracle’s Data Pump tool to export a copy of the source database schema and manually edit the schema in a text editor to make it compatible with Aurora.

C.  

Create an AWS DMS task to migrate data from the Oracle database to the Aurora DB cluster. Select the migration type to replicate ongoing changes to keep the source and target databases in sync until the company is ready to move all user traffic to the Aurora DB cluster.

D.  

Create an AWS DMS task to migrate data from the Oracle database to the Aurora DB cluster. Once the initial load is complete, create an AWS Kinesis Data Firehose stream to perform change data capture (CDC) until the company is ready to move all user traffic to the Aurora DB cluster.

E.  

Create an AWS Glue job and related resources to migrate data from the Oracle database to the Aurora DB cluster. Once the initial load is complete, create an AWS DMS task to perform change data capture (CDC) until the company is ready to move all user traffic to the Aurora DB cluster.

Discussion 0
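
Option C's DMS task keeps the source open for writes by combining the initial full load with ongoing replication. A sketch with placeholder ARNs:

    import json

    import boto3

    dms = boto3.client("dms")

    # Full load plus CDC keeps Oracle and Aurora in sync until cutover.
    # Endpoint and replication instance ARNs are placeholders.
    dms.create_replication_task(
        ReplicationTaskIdentifier="oracle-to-aurora",
        SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",
        TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",
        ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:RI",
        MigrationType="full-load-and-cdc",
        TableMappings=json.dumps({
            "rules": [{
                "rule-type": "selection", "rule-id": "1", "rule-name": "1",
                "object-locator": {"schema-name": "APP", "table-name": "%"},
                "rule-action": "include",
            }]
        }),
    )
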
Questions 71

A company maintains several databases using Amazon RDS for MySQL and PostgreSQL. Each RDS database generates log files with retention periods set to their default values. The company has now mandated that database logs be maintained for up to 90 days in a centralized repository to facilitate real-time and after-the-fact analyses.

What should a Database Specialist do to meet these requirements with minimal effort?

Options:

A.  

Create an AWS Lambda function to pull logs from the RDS databases and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.

B.  

Modify the RDS databases to publish logs to Amazon CloudWatch Logs. Change the log retention policy for each log group to expire the events after 90 days.

C.  

Write a stored procedure in each RDS database to download the logs and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.

D.  

Create an AWS Lambda function to download the logs from the RDS databases and publish the logs to Amazon CloudWatch Logs. Change the log retention policy for the log group to expire the events after 90 days.

Discussion 0
Questions 72

A company is planning to migrate a 40 TB Oracle database to an Amazon Aurora PostgreSQL DB cluster by using a single AWS Database Migration Service (AWS DMS) task within a single replication instance. During early testing, AWS DMS is not scaling to the company's needs. Full load and change data capture (CDC) are taking days to complete.

The source database server and the target DB cluster have enough network bandwidth and CPU bandwidth for the additional workload. The replication instance has enough resources to support the replication. A database specialist needs to improve database performance, reduce data migration time, and create multiple DMS tasks.

Which combination of changes will meet these requirements? (Choose two.)

Options:

A.  

Increase the value of the ParallelLoadThreads parameter in the DMS task settings for the tables.

B.  

Use a smaller set of tables with each DMS task. Set the MaxFullLoadSubTasks parameter to a higher value.

C.  

Use a smaller set of tables with each DMS task. Set the MaxFullLoadSubTasks parameter to a lower value.

D.  

Use parallel load with different data boundaries for larger tables.

E.  

Run the DMS tasks on a larger instance class. Increase local storage on the instance.

Discussion 0
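
Options B and D translate into a task-settings document and a table-mapping rule. A sketch of the two JSON fragments as Python strings; the schema, table, column, and boundary values are placeholders:

    import json

    # Task settings: raise the number of tables loaded in parallel per task.
    task_settings = json.dumps({
        "FullLoadSettings": {
            "MaxFullLoadSubTasks": 16  # default is 8; placeholder value
        }
    })

    # Table mapping: split one large table into ranges loaded in parallel.
    table_mappings = json.dumps({
        "rules": [
            {"rule-type": "selection", "rule-id": "1", "rule-name": "1",
             "object-locator": {"schema-name": "SALES", "table-name": "ORDERS"},
             "rule-action": "include"},
            {"rule-type": "table-settings", "rule-id": "2", "rule-name": "2",
             "object-locator": {"schema-name": "SALES", "table-name": "ORDERS"},
             "parallel-load": {
                 "type": "ranges",
                 "columns": ["ORDER_ID"],
                 # Two boundaries create three segments: <=1M, 1M-2M, >2M.
                 "boundaries": [["1000000"], ["2000000"]],
             }},
        ]
    })

The two documents are then passed as the ReplicationTaskSettings and TableMappings arguments of create_replication_task.
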
Questions 73

A Database Specialist is designing a new database infrastructure for a ride hailing application. The application data includes a ride tracking system that stores GPS coordinates for all rides. Real-time statistics and metadata lookups must be performed with high throughput and microsecond latency. The database should be fault tolerant with minimal operational overhead and development effort.

Which solution meets these requirements in the MOST efficient way?

Options:

A.  

Use Amazon RDS for MySQL as the database and use Amazon ElastiCache

B.  

Use Amazon DynamoDB as the database and use DynamoDB Accelerator

C.  

Use Amazon Aurora MySQL as the database and use Aurora’s buffer cache

D.  

Use Amazon DynamoDB as the database and use Amazon API Gateway

Discussion 0
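
DynamoDB Accelerator (option B) adds an in-memory, write-through cache that serves reads in microseconds. A sketch that provisions a cluster; the names and role ARN are placeholders, and the application then points a DAX-aware client at the cluster endpoint:

    import boto3

    dax = boto3.client("dax")

    # Three nodes spread across AZs for fault tolerance.
    dax.create_cluster(
        ClusterName="rides-cache",  # placeholder
        NodeType="dax.r5.large",
        ReplicationFactor=3,
        IamRoleArn="arn:aws:iam::123456789012:role/DAXServiceRole",  # placeholder
        Description="Microsecond read cache for ride tracking",
    )
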
Questions 74

A large company is using an Amazon RDS for Oracle Multi-AZ DB instance with a Java application. As a part of its disaster recovery annual testing, the company would like to simulate an Availability Zone failure and record how the application reacts during the DB instance failover activity. The company does not want to make any code changes for this activity.

What should the company do to achieve this in the shortest amount of time?

Options:

A.  

Use a blue-green deployment with a complete application-level failover test

B.  

Use the RDS console to reboot the DB instance by choosing the option to reboot with failover

C.  

Use RDS fault injection queries to simulate the primary node failure

D.  

Add a rule to the NACL to deny all traffic on the subnets associated with a single Availability Zone

Discussion 0
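
The console's reboot-with-failover option (option B) maps to a single API call. A sketch with a placeholder identifier:

    import boto3

    rds = boto3.client("rds")

    # Force a failover to the Multi-AZ standby so the team can observe how
    # the Java application behaves while the DNS endpoint flips over.
    rds.reboot_db_instance(
        DBInstanceIdentifier="oracle-prod",  # placeholder
        ForceFailover=True,
    )
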
Questions 75

A company stores critical data for a department in Amazon RDS for MySQL DB instances. The department was closed for 3 weeks and notified a database specialist that access to the RDS DB instances should not be granted to anyone during this time. To meet this requirement, the database specialist stopped all the DB instances used by the department but did not select the option to create a snapshot. Before the 3 weeks expired, the database specialist discovered that users could connect to the database successfully.

What could be the reason for this?

Options:

A.  

When stopping the DB instance, the option to create a snapshot should have been selected.

B.  

When stopping the DB instance, the duration for stopping the DB instance should have been selected.

C.  

Stopped DB instances will automatically restart if the number of attempted connections exceeds the threshold set.

D.  

Stopped DB instances will automatically restart if the instance is not manually started after 7 days.

Discussion 0
Questions 76

A Database Specialist is planning to create a read replica of an existing Amazon RDS for MySQL Multi-AZ DB instance. When using the AWS Management Console to conduct this task, the Database Specialist discovers that the source RDS DB instance does not appear in the read replica source selection box, so the read replica cannot be created.

What is the most likely reason for this?

Options:

A.  

The source DB instance has to be converted to Single-AZ first to create a read replica from it.

B.  

Enhanced Monitoring is not enabled on the source DB instance.

C.  

The minor MySQL version in the source DB instance does not support read replicas.

D.  

Automated backups are not enabled on the source DB instance.

Discussion 0
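
The usual cause is option D: automated backups must be enabled on the source before MySQL read replicas can be created. Enabling them is a one-call change, shown here with a placeholder identifier:

    import boto3

    rds = boto3.client("rds")

    # A nonzero retention period enables automated backups, which is a
    # prerequisite for creating RDS for MySQL read replicas.
    rds.modify_db_instance(
        DBInstanceIdentifier="source-mysql",  # placeholder
        BackupRetentionPeriod=7,
        ApplyImmediately=True,
    )
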
Questions 77

A database specialist at a large multi-national financial company is in charge of designing the disaster recovery strategy for a highly available application that is in development. The application uses an Amazon DynamoDB table as its data store. The application requires a recovery time objective (RTO) of 1 minute and a recovery point objective (RPO) of 2 minutes.

Which operationally efficient disaster recovery strategy should the database specialist recommend for the DynamoDB table?

Options:

A.  

Create a DynamoDB stream that is processed by an AWS Lambda function that copies the data to a DynamoDB table in another Region.

B.  

Use a DynamoDB global table replica in another Region. Enable point-in-time recovery for both tables.

C.  

Use a DynamoDB Accelerator table in another Region. Enable point-in-time recovery for the table.

D.  

Create an AWS Backup plan and assign the DynamoDB table as a resource.

Discussion 0
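
Option B's global table plus point-in-time recovery is two calls against an existing table. A sketch with placeholder names, assuming the table already has DynamoDB Streams enabled with new and old images (a prerequisite for adding replicas):

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Add a replica Region to form a global table.
    dynamodb.update_table(
        TableName="orders",  # placeholder
        ReplicaUpdates=[{"Create": {"RegionName": "us-west-2"}}],
    )

    # Enable point-in-time recovery on the table in each Region.
    dynamodb.update_continuous_backups(
        TableName="orders",
        PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
    )
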
Questions 78

A company has an on-premises Oracle Real Application Clusters (RAC) database. The company wants to migrate the database to AWS and reduce licensing costs. The company's application team wants to store JSON payloads that expire after 28 hours. The company has development capacity if code changes are required.

Which solution meets these requirements?

Options:

A.  

Use Amazon DynamoDB and leverage the Time to Live (TTL) feature to automatically expire the data.

B.  

Use Amazon RDS for Oracle with Multi-AZ. Create an AWS Lambda function to purge the expired data. Schedule the Lambda function to run daily using Amazon EventBridge.

C.  

Use Amazon DocumentDB with a read replica in a different Availability Zone. Use DocumentDB change streams to expire the data.

D.  

Use Amazon Aurora PostgreSQL with Multi-AZ and leverage the Time to Live (TTL) feature to automatically expire the data.

Discussion 0
Questions 79

A company runs hundreds of Microsoft SQL Server databases on Windows servers in its on-premises data center. A database specialist needs to migrate these databases to Linux on AWS.

Which combination of steps should the database specialist take to meet this requirement? (Choose three.)

Options:

A.  

Install AWS Systems Manager Agent on the on-premises servers. Use Systems Manager Run Command to install the Windows to Linux replatforming assistant for Microsoft SQL Server Databases.

B.  

Use AWS Systems Manager Run Command to install and configure the AWS Schema Conversion Tool on the on-premises servers.

C.  

On the Amazon EC2 console, launch EC2 instances and select a Linux AMI that includes SQL Server. Install and configure AWS Systems Manager Agent on the EC2 instances.

D.  

On the AWS Management Console, set up Amazon RDS for SQL Server DB instances with Linux as the operating system. Install AWS Systems Manager Agent on the DB instances by using an options group.

E.  

Open the Windows to Linux replatforming assistant tool. Enter configuration details of the source and destination databases. Start migration.

F.  

On the AWS Management Console, set up AWS Database Migration Service (AWS DMS) by entering details of the source SQL Server database and the destination SQL Server database on AWS. Start migration.

Discussion 0
Questions 80

A company is using an Amazon Aurora PostgreSQL DB cluster with an xlarge primary (writer) DB instance and two large Aurora Replicas for high availability and read-only workload scaling. A failover event occurs and application performance is poor for several minutes. During this time, application servers in all Availability Zones are healthy and responding normally.

What should the company do to eliminate this application performance issue?

Options:

A.  

Configure both of the Aurora Replicas to the same instance class as the primary DB instance. Enable cache coherence on the DB cluster, set the primary DB instance failover priority to tier-0, and assign a failover priority of tier-1 to the replicas.

B.  

Deploy an AWS Lambda function that calls the DescribeDBInstances action to establish which instance has failed, and then use the PromoteReadReplica operation to promote one Aurora Replica to be the primary DB instance. Configure an Amazon RDS event subscription to send a notification to an Amazon SNS topic to which the Lambda function is subscribed.

C.  

Configure one Aurora Replica to have the same instance class as the primary DB instance. Implement Aurora PostgreSQL DB cluster cache management. Set the failover priority to tier-0 for the primary DB instance and one replica with the same instance class. Set the failover priority to tier-1 for the other replicas.

D.  

Configure both Aurora Replicas to have the same instance class as the primary DB instance. Implement Aurora PostgreSQL DB cluster cache management. Set the failover priority to tier-0 for the primary DB instance and to tier-1 for the replicas.

Discussion 0
Questions 81

A vehicle insurance company needs to choose a highly available database to track vehicle owners and their insurance details. The persisted data should be immutable in the database, including the complete and sequenced history of changes over time with all the owners and insurance transfer details for a vehicle. The data should be easily verifiable for the data lineage of an insurance claim.

Which approach meets these requirements with MINIMAL effort?

Options:

A.  

Create a blockchain to store the insurance details. Validate the data using a hash function to verify the data lineage of an insurance claim.

B.  

Create an Amazon DynamoDB table to store the insurance details. Validate the data using AWS DMS validation by moving the data to Amazon S3 to verify the data lineage of an insurance claim.

C.  

Create an Amazon QLDB ledger to store the insurance details. Validate the data by choosing the ledger name in the digest request to verify the data lineage of an insurance claim.

D.  

Create an Amazon Aurora database to store the insurance details. Validate the data using AWS DMS validation by moving the data to Amazon S3 to verify the data lineage of an insurance claim.

Discussion 0
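
Verification in option C starts from a ledger digest. A minimal boto3 sketch; the ledger name is a placeholder:

    import boto3

    qldb = boto3.client("qldb")

    # Request the ledger's current digest; document revisions can then be
    # cryptographically verified against it (for example with get_revision).
    digest = qldb.get_digest(Name="vehicle-registry")  # placeholder ledger name
    print(digest["Digest"], digest["DigestTipAddress"])
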
Questions 82

A company has a database fleet that includes an Amazon RDS for MySQL DB instance. During an audit, the company discovered that the data that is stored on the DB instance is unencrypted.

A database specialist must enable encryption for the DB instance. The database specialist also must encrypt all connections to the DB instance.

Which combination of actions should the database specialist take to meet these requirements? (Choose three.)

Options:

A.  

In the RDS console, choose "Enable encryption" to encrypt the DB instance by using an AWS Key Management Service (AWS KMS) key.

B.  

Encrypt the read replica of the unencrypted DB instance by using an AWS Key Management Service (AWS KMS) key. Fail over the read replica to the primary DB instance.

C.  

Create a snapshot of the unencrypted DB instance. Encrypt the snapshot by using an AWS Key Management Service (AWS KMS) key. Restore the DB instance from the encrypted snapshot. Delete the original DB instance.

D.  

Require SSL connections for applicable database user accounts.

E.  

Use SSL/TLS from the application to encrypt a connection to the DB instance.

F.  

Enable SSH encryption on the DB instance.

Discussion 0
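
The at-rest half of the requirement relies on the snapshot-copy pattern (option C), because an existing unencrypted instance or snapshot cannot be encrypted in place. A sketch with placeholder identifiers:

    import boto3

    rds = boto3.client("rds")

    # Encrypt at rest via snapshot copy: the copy, not the original snapshot,
    # carries the KMS encryption.
    rds.copy_db_snapshot(
        SourceDBSnapshotIdentifier="mysql-fleet-snap",  # placeholder
        TargetDBSnapshotIdentifier="mysql-fleet-snap-enc",
        KmsKeyId="alias/aws/rds",
    )
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier="mysql-fleet-encrypted",
        DBSnapshotIdentifier="mysql-fleet-snap-enc",
    )

The in-transit half (options D and E) is then enforced at the database and application layers with SSL/TLS.
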
Questions 83

A business uses Amazon EC2 instances in VPC A to serve an internal file-sharing application. This application is supported by an Amazon ElastiCache cluster in VPC B that is peered with VPC A. The company migrates its application instances from VPC A to VPC B. According to the logs, the file-sharing application is no longer able to connect to the ElastiCache cluster.

What should a database specialist do to remedy this issue?

Options:

A.  

Create a second security group on the EC2 instances. Add an outbound rule to allow traffic from the ElastiCache cluster security group.

B.  

Delete the ElastiCache security group. Add an interface VPC endpoint to enable the EC2 instances to connect to the ElastiCache cluster.

C.  

Modify the ElastiCache security group by adding outbound rules that allow traffic to VPC CIDR blocks from the ElastiCache cluster.

D.  

Modify the ElastiCache security group by adding an inbound rule that allows traffic from the EC2 instances security group to the ElastiCache cluster.

Discussion 0
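
Option D's fix is a single ingress rule on the ElastiCache security group. A sketch with placeholder security group IDs, assuming a Redis cluster listening on port 6379:

    import boto3

    ec2 = boto3.client("ec2")

    # Allow the migrated EC2 instances (identified by their security group)
    # to reach the ElastiCache cluster. IDs and port are placeholders.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0cacheaaaaaaaaaaa",  # placeholder: ElastiCache security group
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 6379,
            "ToPort": 6379,
            "UserIdGroupPairs": [
                {"GroupId": "sg-0appbbbbbbbbbbbbb"}  # placeholder: app instances
            ],
        }],
    )
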
Questions 84

A company migrated an on-premises Oracle database to Amazon RDS for Oracle. A database specialist needs to monitor the latency of the database.

Which solution will meet this requirement with the LEAST operational overhead?

Options:

A.  

Publish RDS Performance Insights metrics to Amazon CloudWatch. Add AWS CloudTrail filters to monitor database performance.

B.  

Install Oracle Statspack. Enable the performance statistics feature to collect, store, and display performance data to monitor database performance.

C.  

Enable RDS Performance Insights to visualize the database load. Enable Enhanced Monitoring to view how different threads use the CPU

D.  

Create a new DB parameter group that includes the AllocatedStorage, DBInstanceClassMemory, and DBInstanceVCPU variables. Enable RDS Performance Insights

Discussion 0
Questions 85

A company conducted a security audit of its AWS infrastructure. The audit identified that data was not encrypted in transit between application servers and a MySQL database that is hosted in Amazon RDS.

After the audit, the company updated the application to use an encrypted connection. To prevent this problem from occurring again, the company's database team needs to configure the database to require in-transit encryption for all connections.

Which solution will meet this requirement?

Options:

A.  

Update the parameter group in use by the DB instance, and set the require_secure_transport parameter to ON.

B.  

Connect to the database, and use ALTER USER to enable the REQUIRE SSL option on the database user.

C.  

Update the security group in use by the DB instance, and remove port 80 to prevent unencrypted connections from being established.

D.  

Update the DB instance, and enable the Require Transport Layer Security option.

Discussion 0
Questions 86

A company uses an on-premises Microsoft SQL Server database to host relational and JSON data and to run daily ETL and advanced analytics. The company wants to migrate the database to the AWS Cloud. A database specialist must choose one or more AWS services to run the company's workloads.

Which solution will meet these requirements in the MOST operationally efficient manner?

Options:

A.  

Use Amazon Redshift for relational data. Use Amazon DynamoDB for JSON data

B.  

Use Amazon Redshift for relational data and JSON data.

C.  

Use Amazon RDS for relational data. Use Amazon Neptune for JSON data

D.  

Use Amazon Redshift for relational data. Use Amazon S3 for JSON data.

Discussion 0
Questions 87

A company has a web application that uses Amazon API Gateway to route HTTPS requests to AWS Lambda functions. The application uses an Amazon Aurora MySQL database for its data storage. The application has experienced unpredictable surges in traffic that overwhelm the database with too many connection requests. The company needs to implement a scalable solution that is more resilient to database failures as quickly as possible.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.  

Migrate the Aurora MySQL database to Amazon Aurora Serverless by restoring a snapshot. Change the endpoint in the Lambda functions to use the new database.

B.  

Migrate the Aurora MySQL database to Amazon DynamoDB tables by using AWS Database Migration Service (AWS DMS). Change the endpoint in the Lambda functions to use the new database.

C.  

Create an Amazon EventBridge rule that invokes a Lambda function. Code the function to iterate over all existing connections and to call MySQL queries to end any connections in the sleep state.

D.  

Increase the instance class for the Aurora database with more memory. Set a larger value for the max_connections parameter.

Discussion 0
Questions 88

A company is running an Amazon RDS for MySQL Multi-AZ DB instance for a business-critical workload. RDS encryption for the DB instance is disabled. A recent security audit concluded that all business-critical applications must encrypt data at rest. The company has asked its database specialist to formulate a plan to accomplish this for the DB instance.

Which process should the database specialist recommend?

Options:

A.  

Create an encrypted snapshot of the unencrypted DB instance. Copy the encrypted snapshot to Amazon S3. Restore the DB instance from the encrypted snapshot using Amazon S3.

B.  

Create a new RDS for MySQL DB instance with encryption enabled. Restore the unencrypted snapshot to this DB instance.

C.  

Create a snapshot of the unencrypted DB instance. Create an encrypted copy of the snapshot. Restore the DB instance from the encrypted snapshot.

D.  

Temporarily shut down the unencrypted DB instance. Enable AWS KMS encryption in the AWS Management Console using an AWS managed CMK. Restart the DB instance in an encrypted state.

Discussion 0
Questions 89

A company's applications store data in Amazon Aurora MySQL DB clusters. The company has separate AWS accounts for its production, test, and development environments. To test new functionality in the test environment, the company's development team requires a copy of the production database four times a day.

Which solution meets this requirement with the MOST operational efficiency?

Options:

A.  

Take a manual snapshot in the production account. Share the snapshot with the test account. Restore the database from the snapshot.

B.  

Take a manual snapshot in the production account. Export the snapshot to Amazon S3. Copy the snapshot to an S3 bucket in the test account. Restore the database from the snapshot.

C.  

Share the Aurora DB cluster with the test account. Create a snapshot of the production database in the test account. Restore the database from the snapshot.

D.  

Share the Aurora DB cluster with the test account. Create a clone of the production database in the test account.

Discussion 0
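
Aurora cloning (option D) is exposed through the point-in-time restore API with a copy-on-write restore type, so a clone is near-instant regardless of database size. A sketch with placeholder identifiers; cross-account cloning additionally requires sharing the cluster through AWS RAM:

    import boto3

    rds = boto3.client("rds")

    # Copy-on-write clone: pages are shared with the source until modified,
    # so the clone can be recreated four times a day cheaply.
    rds.restore_db_cluster_to_point_in_time(
        DBClusterIdentifier="test-clone",  # placeholder
        SourceDBClusterIdentifier="prod-aurora-mysql",  # placeholder
        RestoreType="copy-on-write",
        UseLatestRestorableTime=True,
    )

    # A cloned cluster has no instances by default; add one to connect to it.
    rds.create_db_instance(
        DBInstanceIdentifier="test-clone-instance",
        DBClusterIdentifier="test-clone",
        Engine="aurora-mysql",
        DBInstanceClass="db.r5.large",  # placeholder instance class
    )
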
Questions 90

A company's database license is due for renewal. The company wants to migrate its 80 TB transactional database system from on-premises to the AWS Cloud. The migration should incur the least possible downtime for the downstream database applications. The company's network infrastructure has limited network bandwidth that is shared with other applications.

Which solution should a database specialist use for a timely migration?

Options:

A.  

Perform a full backup of the source database to AWS Snowball Edge appliances and ship them to be loaded to Amazon S3. Use AWS DMS to migrate change data capture (CDC) data from the source database to Amazon S3. Use a second AWS DMS task to migrate all the S3 data to the target database.

B.  

Perform a full backup of the source database to AWS Snowball Edge appliances and ship them to be loaded to Amazon S3. Periodically perform incremental backups of the source database to be shipped in another Snowball Edge appliance to handle syncing change data capture (CDC) data from the source to the target database.

C.  

Use AWS DMS to migrate the full load of the source database over a VPN tunnel using the internet for its primary connection. Allow AWS DMS to handle syncing change data capture (CDC) data from the source to the target database.

D.  

Use the AWS Schema Conversion Tool (AWS SCT) to migrate the full load of the source database over a VPN tunnel using the internet for its primary connection. Allow AWS SCT to handle syncing change data capture (CDC) data from the source to the target database.

Discussion 0
Questions 91

A Database Specialist modified an existing parameter group currently associated with a production Amazon RDS for SQL Server Multi-AZ DB instance. The change is associated with a static parameter type, which controls the number of user connections allowed on the most critical RDS SQL Server DB instance for the company. This change has been approved for a specific maintenance window to help minimize the impact on users.

How should the Database Specialist apply the parameter group change for the DB instance?

Options:

A.  

Select the option to apply the change immediately

B.  

Allow the preconfigured RDS maintenance window for the given DB instance to control when the change is applied

C.  

Apply the change manually by rebooting the DB instance during the approved maintenance window

D.  

Reboot the secondary Multi-AZ DB instance

Discussion 0
Questions 92

A financial services company runs an on-premises MySQL database for a critical application. The company is dissatisfied with its current database disaster recovery (DR) solution. The application experiences a significant amount of downtime whenever the database fails over to its DR facility. The application also experiences slower response times when reports are processed on the same database. To minimize the downtime in DR situations, the company has decided to migrate the database to AWS. The company requires a solution that is highly available and cost-effective.

Which solution meets these requirements?

Options:

A.  

Create an Amazon RDS for MySQL Multi-AZ DB instance and configure a read replica in a different Availability Zone. Configure the application to reference the replica instance endpoint and report queries to reference the primary DB instance endpoint.

B.  

Create an Amazon RDS for MySQL Multi-AZ DB instance and configure a read replica in a different Availability Zone. Configure the application to reference the primary DB instance endpoint and report queries to reference the replica instance endpoint.

C.  

Create an Amazon Aurora DB cluster and configure an Aurora Replica in a different Availability Zone. Configure the application to reference the cluster endpoint and report queries to reference the reader endpoint.

D.  

Create an Amazon Aurora DB cluster and configure an Aurora Replica in a different Availability Zone. Configure the application to reference the primary DB instance endpoint and report queries to reference the replica instance endpoint.

Discussion 0
Questions 93

A company requires near-real-time notifications when changes are made to Amazon RDS DB security groups.

Which solution will meet this requirement with the LEAST operational overhead?

Options:

A.  

Configure an RDS event notification subscription for DB security group events.

B.  

Create an AWS Lambda function that monitors DB security group changes. Create an Amazon Simple Notification Service (Amazon SNS) topic for notification.

C.  

Turn on AWS CloudTrail. Configure notifications for the detection of changes to DB security groups.

D.  

Configure an Amazon CloudWatch alarm for RDS metrics about changes to DB security groups.

Discussion 0
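
Option A is the native, no-code path. A one-call sketch, assuming an existing SNS topic; the names and ARN are placeholders:

    import boto3

    rds = boto3.client("rds")

    # Subscribe to DB security group events; RDS publishes changes to the
    # SNS topic in near-real time.
    rds.create_event_subscription(
        SubscriptionName="db-sg-changes",
        SnsTopicArn="arn:aws:sns:us-east-1:123456789012:db-sg-alerts",
        SourceType="db-security-group",
        Enabled=True,
    )
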
Questions 94

A worldwide digital advertising corporation collects browser information in order to serve targeted visitors contextually relevant images, websites, and links. A single page load may create many events, each of which must be stored separately. A single event may have a maximum size of 200 KB and an average size of 10 KB. Each page load requires a query of the user's browsing history in order to deliver suggestions for targeted advertising. The advertising corporation anticipates daily page views of more than 1 billion from people in the United States, Europe, Hong Kong, and India. The data structure differs according to the event. Additionally, browsing information must be written and read with very low latency to guarantee that consumers have a positive viewing experience.

Which database solution satisfies these criteria?

Options:

A.  

Amazon DocumentDB

B.  

Amazon RDS Multi-AZ deployment

C.  

Amazon DynamoDB global table

D.  

Amazon Aurora Global Database

Discussion 0
Questions 95

A company uses an Amazon RDS for PostgreSQL DB instance for its customer relationship management (CRM) system. New compliance requirements specify that the database must be encrypted at rest.

Which action will meet these requirements?

Options:

A.  

Create an encrypted copy of a manual snapshot of the DB instance. Restore a new DB instance from the encrypted snapshot.

B.  

Modify the DB instance and enable encryption.

C.  

Restore a DB instance from the most recent automated snapshot and enable encryption.

D.  

Create an encrypted read replica of the DB instance. Promote the read replica to a standalone instance.

Discussion 0
Questions 96

A financial services company has an application deployed on AWS that uses an Amazon Aurora PostgreSQL DB cluster. A recent audit showed that no log files contained database administrator activity. A database specialist needs to recommend a solution to provide database access and activity logs. The solution should use the least amount of effort and have a minimal impact on performance.

Which solution should the database specialist recommend?

Options:

A.  

Enable Aurora Database Activity Streams on the database in synchronous mode. Connect the Amazon Kinesis data stream to Kinesis Data Firehose. Set the Kinesis Data Firehose destination to an Amazon S3 bucket.

B.  

Create an AWS CloudTrail trail in the Region where the database runs. Associate the database activity logs with the trail.

C.  

Enable Aurora Database Activity Streams on the database in asynchronous mode. Connect the Amazon Kinesis data stream to Kinesis Data Firehose. Set the Firehose destination to an Amazon S3 bucket.

D.  

Allow connections to the DB cluster through a bastion host only. Restrict database access to the bastion host and application servers. Push the bastion host logs to Amazon CloudWatch Logs using the CloudWatch Logs agent.

Discussion 0
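
Activity streams are started per cluster; asynchronous mode (option C) favors database performance over guaranteed capture of every record. A sketch with placeholder ARNs; the stream emits to an Amazon Kinesis data stream that Kinesis Data Firehose can then deliver to S3:

    import boto3

    rds = boto3.client("rds")

    # Start an async activity stream on the Aurora PostgreSQL cluster.
    rds.start_activity_stream(
        ResourceArn="arn:aws:rds:us-east-1:123456789012:cluster:finance-aurora",
        Mode="async",
        KmsKeyId="alias/das-key",  # placeholder KMS key for stream encryption
        ApplyImmediately=True,
    )
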