DBS-C01 Exam Questions

Total 200 Questions

Exam Last Updated: 16-Dec-2024

A company wants to migrate its existing on-premises Oracle database to Amazon Aurora PostgreSQL. The
migration must be completed with minimal downtime using AWS DMS. A Database Specialist must validate
that the data was migrated accurately from the source to the target before the cutover. The migration must have
minimal impact on the performance of the source database.
Which approach will MOST effectively meet these requirements?


A.

Use the AWS Schema Conversion Tool (AWS SCT) to convert source Oracle database schemas to the
target Aurora DB cluster. Verify the datatype of the columns.


B.

Use the table metrics of the AWS DMS task created for migrating the data to verify the statistics for the
tables being migrated and to verify that the data definition language (DDL) statements are completed.


C.

Enable the AWS Schema Conversion Tool (AWS SCT) premigration validation and review the
premigration checklist to make sure there are no issues with the conversion.


D.

Enable AWS DMS data validation on the task so the AWS DMS task compares the source and
target records, and reports any mismatches.





D.
  

Enable AWS DMS data validation on the task so the AWS DMS task compares the source and
target records, and reports any mismatches.
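For reference, AWS DMS data validation is turned on through the ValidationSettings object in the task settings JSON. A minimal sketch (the thread and failure counts shown are illustrative values, not requirements):

```json
{
  "ValidationSettings": {
    "EnableValidation": true,
    "ThreadCount": 5,
    "FailureMaxCount": 10000
  }
}
```

With validation enabled, DMS compares source and target rows after full load and during CDC, and surfaces mismatches in the task's validation status tables, which keeps the extra load on the source modest.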



A company runs online transaction processing (OLTP) workloads on an Amazon RDS for PostgreSQL
Multi-AZ DB instance. Tests were run on the database after work hours, which generated additional database
logs. The free storage of the RDS DB instance is low due to these additional logs.
What should the company do to address this space constraint issue?


A.

Log in to the host and run the rm $PGDATA/pg_logs/* command


B.

Modify the rds.log_retention_period parameter to 1440 and wait up to 24 hours for database logs to be deleted


C.

Create a ticket with AWS Support to have the logs deleted


D.

Run the SELECT rds_rotate_error_log() stored procedure to rotate the logs





B.
  

Modify the rds.log_retention_period parameter to 1440 and wait up to 24 hours for database logs to be deleted
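The parameter change in option B can be applied with the AWS CLI against the instance's custom parameter group (the group name below is a placeholder). rds.log_retention_period is expressed in minutes, so 1440 keeps logs for 24 hours before RDS deletes them:

```shell
# Set log retention to 1440 minutes (24 hours) on a custom parameter group.
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-postgres-params \
    --parameters "ParameterName=rds.log_retention_period,ParameterValue=1440,ApplyMethod=immediate"
```

Note that this only works on a custom parameter group; the default parameter group cannot be modified.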



A manufacturing company’s website uses an Amazon Aurora PostgreSQL DB cluster.
Which configurations will result in the LEAST application downtime during a failover? (Choose three.)


A.

Use the provided read and write Aurora endpoints to establish a connection to the Aurora DB cluster.


B.

Create an Amazon CloudWatch alert triggering a restore in another Availability Zone when the primary
Aurora DB cluster is unreachable.


C.

Edit and enable Aurora DB cluster cache management in parameter groups.


D.

Set TCP keepalive parameters to a high value.


E.

Set JDBC connection string timeout variables to a low value.


F.

Set Java DNS caching timeouts to a high value.





A.
  

Use the provided read and write Aurora endpoints to establish a connection to the Aurora DB cluster.



E.
  

Set JDBC connection string timeout variables to a low value.



C.
  

Edit and enable Aurora DB cluster cache management in parameter groups.
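On the client side, options A and E translate into connecting through the cluster's writer endpoint with short timeouts so the application fails over quickly to the new primary. A sketch of a PostgreSQL JDBC URL (hostname, database, and timeout values are illustrative):

```
jdbc:postgresql://my-cluster.cluster-abc123def.us-east-1.rds.amazonaws.com:5432/appdb?connectTimeout=3&socketTimeout=60
```

The cluster endpoint always resolves to the current writer, so after a failover the application reconnects without any configuration change; cluster cache management (option C) additionally keeps the designated failover target's buffer cache warm.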



A company is about to launch a new product, and test databases must be re-created from production data.
The company runs its production databases on an Amazon Aurora MySQL DB cluster. A Database Specialist
needs to deploy a solution to create these test databases as quickly as possible with the least amount of
administrative effort.
What should the Database Specialist do to meet these requirements?


A.

Restore a snapshot from the production cluster into test clusters


B.

Create logical dumps of the production cluster and restore them into new test clusters


C.

Use database cloning to create clones of the production cluster


D.

Add an additional read replica to the production cluster and use that node for testing





C.
  

Use database cloning to create clones of the production cluster
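Aurora database cloning (option C) uses a copy-on-write protocol, so a clone is created in minutes regardless of database size. A sketch with the AWS CLI (cluster identifiers are placeholders):

```shell
# Create a copy-on-write clone of the production cluster.
aws rds restore-db-cluster-to-point-in-time \
    --source-db-cluster-identifier prod-cluster \
    --db-cluster-identifier test-cluster-1 \
    --restore-type copy-on-write \
    --use-latest-restorable-time
```

The clone initially shares storage with the source and only diverges as either side writes, which is why it is far faster than restoring a snapshot or reloading a logical dump. A DB instance still has to be added to the cloned cluster (aws rds create-db-instance) before it accepts connections.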



The Development team recently executed a database script containing several data definition language (DDL)
and data manipulation language (DML) statements on an Amazon Aurora MySQL DB cluster. The release
accidentally deleted thousands of rows from an important table and broke some application functionality. This
was discovered 4 hours after the release. Upon investigation, a Database Specialist tracked the issue to a
DELETE command in the script with an incorrect WHERE clause filtering the wrong set of rows.
The Aurora DB cluster has Backtrack enabled with an 8-hour backtrack window. The Database Administrator
also took a manual snapshot of the DB cluster before the release started. The database needs to be returned to
the correct state as quickly as possible to resume full application functionality. Data loss must be minimal.
How can the Database Specialist accomplish this?


A.

Quickly rewind the DB cluster to a point in time before the release using Backtrack.


B.

Perform a point-in-time recovery (PITR) of the DB cluster to a time before the release and copy the deleted rows from the restored database to the original database.


C.

Restore the DB cluster using the manual backup snapshot created before the release and change the application configuration settings to point to the new DB cluster


D.

Create a clone of the DB cluster with Backtrack enabled. Rewind the cloned cluster to a point in time before the release. Copy deleted rows from the clone to the original database





D.
  

Create a clone of the DB cluster with Backtrack enabled. Rewind the cloned cluster to a point in time before the release. Copy deleted rows from the clone to the original database
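The clone-and-rewind flow in option D keeps the 4 hours of good changes in the original cluster while recovering only the deleted rows. After creating a copy-on-write clone of the cluster, the clone can be rewound with Backtrack (cluster identifier and timestamp below are placeholders):

```shell
# Rewind the cloned cluster to just before the bad release.
aws rds backtrack-db-cluster \
    --db-cluster-identifier prod-cluster-clone \
    --backtrack-to "2024-12-16T09:00:00Z"
```

Backtracking the original cluster directly (option A) would also discard every legitimate change made in the 4 hours since the release, which is why copying the rows back from a rewound clone loses less data.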



A Database Specialist migrated an existing production MySQL database from on-premises to an Amazon RDS
for MySQL DB instance. However, after the migration, the database needed to be encrypted at rest using AWS
KMS. Due to the size of the database, reloading the data into an encrypted database would be too
time-consuming, so it is not an option.
How should the Database Specialist satisfy this new requirement?


A.

Create a snapshot of the unencrypted RDS DB instance. Create an encrypted copy of the unencrypted snapshot. Restore the encrypted snapshot copy.


B.

Modify the RDS DB instance. Enable the AWS KMS encryption option that leverages the AWS CLI.


C.

Restore an unencrypted snapshot into a MySQL RDS DB instance that is encrypted.


D.

Create an encrypted read replica of the RDS DB instance. Promote it to be the master.





A.
  

Create a snapshot of the unencrypted RDS DB instance. Create an encrypted copy of the unencrypted snapshot. Restore the encrypted snapshot copy.
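The snapshot-copy approach in option A can be sketched with the AWS CLI (instance and snapshot identifiers are placeholders; the KMS key alias shown is the account's default RDS key, and a customer managed key could be used instead):

```shell
# 1. Snapshot the unencrypted instance.
aws rds create-db-snapshot \
    --db-instance-identifier prod-mysql \
    --db-snapshot-identifier prod-mysql-snap

# 2. Copy the snapshot with encryption enabled via a KMS key.
aws rds copy-db-snapshot \
    --source-db-snapshot-identifier prod-mysql-snap \
    --target-db-snapshot-identifier prod-mysql-snap-encrypted \
    --kms-key-id alias/aws/rds

# 3. Restore a new, encrypted instance from the encrypted copy.
aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier prod-mysql-encrypted \
    --db-snapshot-identifier prod-mysql-snap-encrypted
```

Encryption at rest cannot be enabled in place on an existing unencrypted RDS instance, which rules out option B; this snapshot-copy-restore path is the supported route.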



An AWS CloudFormation stack that included an Amazon RDS DB instance was accidentally deleted and
recent data was lost. A Database Specialist needs to add RDS settings to the CloudFormation template to
reduce the chance of accidental instance data loss in the future.
Which settings will meet this requirement? (Choose three.)


A.

Set DeletionProtection to True


B.

Set MultiAZ to True


C.

Set TerminationProtection to True


D.

Set DeleteAutomatedBackups to False


E.

Set DeletionPolicy to Delete


F.

Set DeletionPolicy to Retain





A.
  

Set DeletionProtection to True



C.
  

Set TerminationProtection to True



F.
  

Set DeletionPolicy to Retain



A company is hosting critical business data in an Amazon Redshift cluster. Due to the sensitive nature of the
data, the cluster is encrypted at rest using AWS KMS. As a part of disaster recovery requirements, the
company needs to copy the Amazon Redshift snapshots to another Region.
Which steps should be taken in the AWS Management Console to meet the disaster recovery requirements?


A.

Create a new KMS customer master key in the source Region. Switch to the destination Region, enable
Amazon Redshift cross-Region snapshots, and use the KMS key of the source Region.


B.

Create a new IAM role with access to the KMS key. Enable Amazon Redshift cross-Region replication
using the new IAM role, and use the KMS key of the source Region


C.

Enable Amazon Redshift cross-Region snapshots in the source Region, and create a snapshot copy grant
and use a KMS key in the destination Region.


D.

Create a new KMS customer master key in the destination Region and create a new IAM role with
access to the new KMS key. Enable Amazon Redshift cross-Region replication in the source Region and
use the KMS key of the destination Region





C.
  

Enable Amazon Redshift cross-Region snapshots in the source Region, and create a snapshot copy grant
and use a KMS key in the destination Region.



A Database Specialist is designing a disaster recovery strategy for a production Amazon DynamoDB table.
The table uses provisioned read/write capacity mode, global secondary indexes, and time to live (TTL). The
Database Specialist has restored the latest backup to a new table.
To prepare the new table with identical settings, which steps should be performed? (Choose two.)


A.

Re-create global secondary indexes in the new table


B.

Define IAM policies for access to the new table


C.

Define the TTL settings


D.

Encrypt the table from the AWS Management Console or use the update-table command


E.

Set the provisioned read and write capacity





B.
  

Define IAM policies for access to the new table



C.
  

Define the TTL settings
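A DynamoDB restore carries over the table data, global secondary indexes (by default), and provisioned throughput, but settings such as TTL, IAM policies, auto scaling, tags, and streams must be re-applied manually. Re-enabling TTL on the restored table can be sketched as (table and attribute names are placeholders):

```shell
# Re-enable TTL on the restored table using the original expiry attribute.
aws dynamodb update-time-to-live \
    --table-name orders-restored \
    --time-to-live-specification "Enabled=true,AttributeName=expireAt"
```

IAM policies likewise need attention because they are scoped to the table's ARN, and the restored table has a new ARN.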



A company is going to use an Amazon Aurora PostgreSQL DB cluster for an application backend. The DB cluster contains some tables with sensitive data. A Database Specialist needs to control the access privileges at
the table level.
How can the Database Specialist meet these requirements?


A.

Use AWS IAM database authentication and restrict access to the tables using an IAM policy.


B.

Configure the rules in a NACL to restrict outbound traffic from the Aurora DB cluster.


C.

Execute GRANT and REVOKE commands that restrict access to the tables containing sensitive data.


D.

Define access privileges to the tables containing sensitive data in the pg_hba.conf file.





C.
  

Execute GRANT and REVOKE commands that restrict access to the tables containing sensitive data.
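Table-level privileges in Aurora PostgreSQL are standard PostgreSQL grants. A sketch using a hypothetical table and roles:

```sql
-- Hypothetical table and roles: lock down a sensitive table,
-- then grant back only what each role needs.
REVOKE ALL PRIVILEGES ON TABLE customer_pii FROM PUBLIC;
GRANT SELECT ON TABLE customer_pii TO reporting_role;
GRANT SELECT, INSERT, UPDATE ON TABLE customer_pii TO app_role;
```

IAM database authentication (option A) controls who can connect, but IAM policies cannot restrict access to individual tables; that granularity lives in the database engine itself.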



