DBS-C01 Exam Questions

Total 200 Questions

Last Updated: 16-Dec-2024

A company developed an AWS CloudFormation template used to create all new Amazon DynamoDB tables in
its AWS account. The template configures provisioned throughput capacity using hard-coded values. The
company wants to change the template so that the tables it creates in the future have independently
configurable read and write capacity units assigned.
Which solution will enable this change?


A.

Add values for the rcuCount and wcuCount parameters to the Mappings section of the template.
Configure DynamoDB to provision throughput capacity using the stack’s mappings


B.

Add values for two Number parameters, rcuCount and wcuCount, to the template. Replace the
hard-coded values with calls to the Ref intrinsic function, referencing the new parameters


C.

Add values for the rcuCount and wcuCount parameters as outputs of the template. Configure
DynamoDB to provision throughput capacity using the stack outputs.


D.

Add values for the rcuCount and wcuCount parameters to the Mappings section of the template.
Replace the hard-coded values with calls to the Ref intrinsic function, referencing the new parameters





B.
  

Add values for two Number parameters, rcuCount and wcuCount, to the template. Replace the
hard-coded values with calls to the Ref intrinsic function, referencing the new parameters
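
As a rough illustration of this approach, the sketch below (Python with boto3; the stack name, table name, and key schema are hypothetical) declares rcuCount and wcuCount as Number parameters and references them with the Ref intrinsic function in the table's ProvisionedThroughput:

import json
import boto3

# Hypothetical template: rcuCount and wcuCount are Number parameters, and the
# table's throughput references them via the Ref intrinsic function.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        "rcuCount": {"Type": "Number", "Default": 5},
        "wcuCount": {"Type": "Number", "Default": 5},
    },
    "Resources": {
        "OrdersTable": {
            "Type": "AWS::DynamoDB::Table",
            "Properties": {
                "AttributeDefinitions": [{"AttributeName": "pk", "AttributeType": "S"}],
                "KeySchema": [{"AttributeName": "pk", "KeyType": "HASH"}],
                "ProvisionedThroughput": {
                    "ReadCapacityUnits": {"Ref": "rcuCount"},
                    "WriteCapacityUnits": {"Ref": "wcuCount"},
                },
            },
        }
    },
}

# Each new stack can pass independently chosen read and write capacity values.
boto3.client("cloudformation").create_stack(
    StackName="orders-table-stack",
    TemplateBody=json.dumps(template),
    Parameters=[
        {"ParameterKey": "rcuCount", "ParameterValue": "10"},
        {"ParameterKey": "wcuCount", "ParameterValue": "20"},
    ],
)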



A large ecommerce company uses Amazon DynamoDB to handle the transactions on its web portal. Traffic
patterns throughout the year are usually stable; however, a large event is planned. The company knows that
traffic will increase by up to 10 times the normal load over the 3-day event. When sale prices are published
during the event, traffic will spike rapidly. How should a Database Specialist ensure DynamoDB can handle the increased traffic?


A.

Ensure the table is always provisioned to meet peak needs


B.

Allow burst capacity to handle the additional load


C.

Set an AWS Application Auto Scaling policy for the table to handle the increase in traffic


D.

Preprovision additional capacity for the known peaks and then reduce the capacity after the event





B.
  

Allow burst capacity to handle the additional load
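
For reference, if the pre-provisioning route described in option D were taken instead, the capacity change is a single scripted call. A minimal boto3 sketch (the table name and capacity figures are hypothetical):

import boto3

dynamodb = boto3.client("dynamodb")

# Raise provisioned throughput ahead of the event (hypothetical values),
# then lower it again once the event ends.
dynamodb.update_table(
    TableName="web-portal-transactions",
    ProvisionedThroughput={"ReadCapacityUnits": 10000, "WriteCapacityUnits": 10000},
)

# After the event:
# dynamodb.update_table(
#     TableName="web-portal-transactions",
#     ProvisionedThroughput={"ReadCapacityUnits": 1000, "WriteCapacityUnits": 1000},
# )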



A gaming company wants to deploy a game in multiple Regions. The company plans to save local high scores
in Amazon DynamoDB tables in each Region. A Database Specialist needs to design a solution to automate
the deployment of the database with identical configurations in additional Regions, as needed. The solution
should also automate configuration changes across all Regions.
Which solution would meet these requirements and deploy the DynamoDB tables?


A.

Create an AWS CLI command to deploy the DynamoDB table to all the Regions and save it for future
deployments


B.

Create an AWS CloudFormation template and deploy the template to all the Regions.


C.

Create an AWS CloudFormation template and use a stack set to deploy the template to all the Regions


D.

Create DynamoDB tables using the AWS Management Console in all the Regions and create a
step-by-step guide for future deployments.





C.
  

Create an AWS CloudFormation template and use a stack set to deploy the template to all the Regions
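
A rough boto3 sketch of the stack set approach (the stack set name, template file, account ID, and Regions are hypothetical); updating the stack set later propagates configuration changes to every Region:

import boto3

cfn = boto3.client("cloudformation")

# Create the stack set once from the DynamoDB table template.
cfn.create_stack_set(
    StackSetName="high-scores-tables",
    TemplateBody=open("dynamodb-table.yaml").read(),
)

# Deploy identical stacks into each Region; add Regions here as needed.
cfn.create_stack_instances(
    StackSetName="high-scores-tables",
    Accounts=["111122223333"],
    Regions=["us-east-1", "eu-west-1", "ap-northeast-1"],
)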



A company with branch offices in Portland, New York, and Singapore has a three-tier web application that
leverages a shared database. The database runs on Amazon RDS for MySQL and is hosted in the us-west-2
Region. The application has a distributed front end deployed in the us-west-2, ap-southeast-1, and us-east-2
Regions.
This front end is used as a dashboard for Sales Managers in each branch office to see current sales statistics.
There are complaints that the dashboard performs more slowly in the Singapore location than it does in
Portland or New York. A solution is needed to provide consistent performance for all users in each location.
Which set of actions will meet these requirements?


A.

Take a snapshot of the instance in the us-west-2 Region. Create a new instance from the snapshot in the
ap-southeast-1 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.


B.

Create an RDS read replica in the ap-southeast-1 Region from the primary RDS DB instance in the
us-west-2 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance


C.

Create a new RDS instance in the ap-southeast-1 Region. Use AWS DMS and change data capture
(CDC) to update the new instance in the ap-southeast-1 Region. Reconfigure the ap-southeast-1
front-end dashboard to access this instance


D.

Create an RDS read replica in the us-west-2 Region where the primary instance resides. Create a read
replica in the ap-southeast-1 Region from the read replica located on the us-west-2 Region. Reconfigure
the ap-southeast-1 front-end dashboard to access this instance.





B.
  

Create an RDS read replica in the ap-southeast-1 Region from the primary RDS DB instance in the
us-west-2 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance
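
A cross-Region read replica of this kind is created from the destination Region by referencing the source instance's ARN. A hedged boto3 sketch (the instance identifiers, account ID, and instance class are hypothetical):

import boto3

# Run the call in the destination Region (ap-southeast-1) and point
# SourceDBInstanceIdentifier at the ARN of the primary in us-west-2.
rds = boto3.client("rds", region_name="ap-southeast-1")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="sales-db-replica-sin",
    SourceDBInstanceIdentifier="arn:aws:rds:us-west-2:111122223333:db:sales-db",
    DBInstanceClass="db.r5.large",
)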



A company has a production Amazon Aurora DB cluster that serves both online transaction processing (OLTP)
transactions and compute-intensive reports. The reports run for 10% of the total cluster uptime while the
OLTP transactions run all the time. The company has benchmarked its workload and determined that a
six-node Aurora DB cluster is appropriate for the peak workload.
The company is now looking at cutting costs for this DB cluster, but needs to have a sufficient number of
nodes in the cluster to support the workload at different times. The workload has not changed since the
previous benchmarking exercise.
How can a Database Specialist address these requirements with minimal user involvement?


A.

Split up the DB cluster into two different clusters: one for OLTP and the other for reporting. Monitor
and set up replication between the two clusters to keep data consistent.


B.

Review and evaluate the peak combined workload. Ensure that utilization of the DB cluster nodes is at an
acceptable level. Adjust the number of instances, if necessary.


C.

Use the stop cluster functionality to stop all the nodes of the DB cluster during times of minimal
workload. The cluster can be restarted again depending on the workload at the time.


D.

Set up automatic scaling on the DB cluster. This will allow the number of reader nodes to adjust
automatically to the reporting workload, when needed.





D.
  

Set up automatic scaling on the DB cluster. This will allow the number of reader nodes to adjust
automatically to the reporting workload, when needed.
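
Aurora Auto Scaling for reader nodes is configured through Application Auto Scaling. A minimal sketch assuming a hypothetical cluster name, replica range, and CPU-based target-tracking policy:

import boto3

autoscaling = boto3.client("application-autoscaling")

# Allow the cluster's reader count to scale between 1 and 5 replicas.
autoscaling.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:reporting-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=5,
)

# Add readers when average reader CPU stays above ~60%; remove them afterwards.
autoscaling.put_scaling_policy(
    PolicyName="reporting-reader-scaling",
    ServiceNamespace="rds",
    ResourceId="cluster:reporting-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)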



A company is closing one of its remote data centers. This site runs a 100 TB on-premises data warehouse
solution. The company plans to use the AWS Schema Conversion Tool (AWS SCT) and AWS DMS for the migration to AWS. The site network bandwidth is 500 Mbps. A Database Specialist wants to migrate the
on-premises data using Amazon S3 as the data lake and Amazon Redshift as the data warehouse. This move
must take place during a 2-week period when source systems are shut down for maintenance. The data should
stay encrypted at rest and in transit.
Which approach has the least risk and the highest likelihood of a successful data transfer?


A.

Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage
AWS SCT and apply the converted schema to Amazon Redshift. Once complete, start an AWS DMS
task to move the data from the source to Amazon S3. Use AWS Glue to load the data from Amazon S3
to Amazon Redshift.


B.

Leverage AWS SCT and apply the converted schema to Amazon Redshift. Start an AWS DMS task
with two AWS Snowball Edge devices to copy data from on-premises to Amazon S3 with AWS KMS
encryption. Use AWS DMS to finish copying data to Amazon Redshift.


C.

Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, use a fleet
of 10 TB dedicated encrypted drives using the AWS Import/Export feature to copy data from
on-premises to Amazon S3 with AWS KMS encryption. Use AWS Glue to load the data to Amazon
Redshift.


D.

Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage a
native database export feature to export the data and compress the files. Use the aws s3 cp command
with multipart upload to upload these files to Amazon S3 with AWS KMS encryption. Once complete,
load the data to Amazon Redshift using AWS Glue.





B.
  

Leverage AWS SCT and apply the converted schema to Amazon Redshift. Start an AWS DMS task
with two AWS Snowball Edge devices to copy data from on-premises to Amazon S3 with AWS KMS
encryption. Use AWS DMS to finish copying data to Amazon Redshift.
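
The key constraint is the transfer-time arithmetic: 100 TB over a 500 Mbps link does not fit in the 2-week window even at full utilization, which is the main argument for moving the bulk data on physical devices such as Snowball Edge rather than over the network. A quick check in Python:

# Rough best-case network transfer time for 100 TB over 500 Mbps.
data_bits = 100 * 10**12 * 8        # 100 TB expressed in bits
link_bps = 500 * 10**6              # 500 Mbps
seconds = data_bits / link_bps
days = seconds / 86400
print(f"{days:.1f} days")           # ~18.5 days at 100% utilization, longer than 2 weeks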



A Database Specialist is designing a new database infrastructure for a ride hailing application. The application
data includes a ride tracking system that stores GPS coordinates for all rides. Real-time statistics and metadata
lookups must be performed with high throughput and microsecond latency. The database should be fault
tolerant with minimal operational overhead and development effort.
Which solution meets these requirements in the MOST efficient way?


A.

Use Amazon RDS for MySQL as the database and use Amazon ElastiCache


B.

Use Amazon DynamoDB as the database and use DynamoDB Accelerator


C.

Use Amazon Aurora MySQL as the database and use Aurora’s buffer cache


D.

Use Amazon DynamoDB as the database and use Amazon API Gateway





B.
  

Use Amazon DynamoDB as the database and use DynamoDB Accelerator
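
DynamoDB Accelerator (DAX) is the managed, fault-tolerant cache that serves DynamoDB reads with microsecond latency. A hedged boto3 sketch of provisioning a small DAX cluster (the cluster name, node type, and IAM role ARN are hypothetical):

import boto3

dax = boto3.client("dax")

# Three-node cluster for fault tolerance; reads are served from cache in microseconds.
dax.create_cluster(
    ClusterName="ride-tracking-dax",
    NodeType="dax.r5.large",
    ReplicationFactor=3,
    IamRoleArn="arn:aws:iam::111122223333:role/dax-access-role",
)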



A company has a web-based survey application that uses Amazon DynamoDB. During peak usage, when
survey responses are being collected, a Database Specialist sees the
ProvisionedThroughputExceededException error.
What can the Database Specialist do to resolve this error? (Choose two.)


A.

Change the table to use Amazon DynamoDB Streams


B.

Purchase DynamoDB reserved capacity in the affected Region


C.

Increase the write capacity units for the specific table


D.

Change the table capacity mode to on-demand


E.

Change the table type to throughput optimized





C.
  

Increase the write capacity units for the specific table



D.
  

Change the table capacity mode to on-demand
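
Either remedy is a single UpdateTable call. A minimal boto3 sketch (the table name and capacity values are hypothetical):

import boto3

dynamodb = boto3.client("dynamodb")

# Option C: raise the provisioned write capacity on the hot table.
dynamodb.update_table(
    TableName="survey-responses",
    ProvisionedThroughput={"ReadCapacityUnits": 500, "WriteCapacityUnits": 2000},
)

# Option D (alternative): switch the table to on-demand capacity so DynamoDB
# absorbs the peaks without capacity planning.
# dynamodb.update_table(TableName="survey-responses", BillingMode="PAY_PER_REQUEST")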



A company is deploying a solution in Amazon Aurora by migrating from an on-premises system. The IT
department has established an AWS Direct Connect link from the company’s data center. The company’s
Database Specialist has selected the option to require SSL/TLS for connectivity to prevent plaintext data from
being sent over the network. The migration appears to be working successfully, and the data can be queried
from a desktop machine.
Two Data Analysts have been asked to query and validate the data in the new Aurora DB cluster. Both
Analysts are unable to connect to Aurora. Their user names and passwords have been verified as valid and
the Database Specialist can connect to the DB cluster using their accounts. The Database Specialist also
verified that the security group configuration allows network traffic from all corporate IP addresses.
What should the Database Specialist do to correct the Data Analysts’ inability to connect?


A.

Restart the DB cluster to apply the SSL change.


B.

Instruct the Data Analysts to download the root certificate and use the SSL certificate on the connection string to connect.


C.

Add explicit mappings between the Data Analysts’ IP addresses and the instance in the security group
assigned to the DB cluster.


D.

Modify the Data Analysts’ local client firewall to allow network traffic to AWS.





B.
  

Instruct the Data Analysts to download the root certificate and use the SSL certificate on the connection string to connect.
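
Assuming an Aurora MySQL-compatible cluster and the PyMySQL driver (both assumptions; the question names neither the engine nor the client), a connection that supplies the downloaded RDS root certificate bundle, as option B describes, looks roughly like this:

import pymysql

# global-bundle.pem is the RDS/Aurora root certificate bundle downloaded from AWS.
conn = pymysql.connect(
    host="aurora-cluster.cluster-abc123.us-west-2.rds.amazonaws.com",  # hypothetical endpoint
    user="analyst1",
    password="example-password",
    database="sales",
    ssl={"ca": "/path/to/global-bundle.pem"},  # negotiate TLS using the RDS CA
)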



A financial company wants to store sensitive user data in an Amazon Aurora PostgreSQL DB cluster. The
database will be accessed by multiple applications across the company. The company has mandated that all
communications to the database be encrypted and the server identity must be validated. Any non-SSL-based
connections should be disallowed access to the database.
Which solution addresses these requirements?


A.

Set the rds.force_ssl=0 parameter in DB parameter groups. Download and use the Amazon RDS
certificate bundle and configure the PostgreSQL connection string with sslmode=allow.


B.

Set the rds.force_ssl=1 parameter in DB parameter groups. Download and use the Amazon RDS
certificate bundle and configure the PostgreSQL connection string with sslmode=disable.


C.

Set the rds.force_ssl=0 parameter in DB parameter groups. Download and use the Amazon RDS
certificate bundle and configure the PostgreSQL connection string with sslmode=verify-ca.


D.

Set the rds.force_ssl=1 parameter in DB parameter groups. Download and use the Amazon RDS
certificate bundle and configure the PostgreSQL connection string with sslmode=verify-full.





D.
  

Set the rds.force_ssl=1 parameter in DB parameter groups. Download and use the Amazon RDS
certificate bundle and configure the PostgreSQL connection string with sslmode=verify-full.
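
A hedged sketch of both halves of this answer, using boto3 for the cluster parameter group and psycopg2 for the client side (the parameter group name, endpoint, and credentials are hypothetical):

import boto3
import psycopg2

# Server side: reject any connection that does not use SSL/TLS.
boto3.client("rds").modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="aurora-pg-secure",
    Parameters=[{
        "ParameterName": "rds.force_ssl",
        "ParameterValue": "1",
        "ApplyMethod": "immediate",  # assumed dynamic; use pending-reboot if the engine requires it
    }],
)

# Client side: verify-full validates both the CA chain and the server hostname.
conn = psycopg2.connect(
    host="aurora-pg.cluster-abc123.us-east-1.rds.amazonaws.com",  # hypothetical endpoint
    dbname="userdata",
    user="app_user",
    password="example-password",
    sslmode="verify-full",
    sslrootcert="/path/to/global-bundle.pem",
)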



