A large company is using an Amazon RDS for Oracle Multi-AZ DB instance with a Java application. As part
of its annual disaster recovery testing, the company would like to simulate an Availability Zone failure and
record how the application reacts during the DB instance failover activity. The company does not want to
make any code changes for this activity.
What should the company do to achieve this in the shortest amount of time?
A.
Use a blue-green deployment with a complete application-level failover test
B.
Use the RDS console to reboot the DB instance by choosing the option to reboot with failover
C.
Use RDS fault injection queries to simulate the primary node failure
D.
Add a rule to the NACL to deny all traffic on the subnets associated with a single Availability Zone
Answer: B. Use the RDS console to reboot the DB instance by choosing the option to reboot with failover. (Fault injection queries are an Aurora-only feature and are not available for RDS for Oracle.)
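Option B can be exercised without any application changes. A minimal sketch with the AWS CLI (the instance identifier is a placeholder):

```shell
# Force a Multi-AZ failover during the reboot so the standby is promoted
aws rds reboot-db-instance \
  --db-instance-identifier database-1 \
  --force-failover
```

The --force-failover flag applies only to Multi-AZ instances; the application's reconnection behavior can then be observed while the endpoint's DNS fails over to the former standby.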
A media company is using Amazon RDS for PostgreSQL to store user data. The RDS DB instance currently
has a publicly accessible setting enabled and is hosted in a public subnet. Following a recent AWS Well-
Architected Framework review, a Database Specialist was given new security requirements.
Only certain on-premises corporate network IPs should be able to connect to the DB instance, and connectivity is allowed from the corporate network only.
Which combination of steps does the Database Specialist need to take to meet these new requirements?
(Choose three.)
A.
Modify the pg_hba.conf file. Add the required corporate network IPs and remove the unwanted IPs.
B.
Modify the associated security group. Add the required corporate network IPs and remove the unwanted
IPs.
C.
Move the DB instance to a private subnet using AWS DMS.
D.
Enable VPC peering between the application host running on the corporate network and the VPC
associated with the DB instance.
E.
Disable the publicly accessible setting.
F.
Connect to the DB instance using private IPs and a VPN.
Answer: B, E, F. Modify the associated security group to allow only the required corporate network IPs, disable the publicly accessible setting, and connect to the DB instance using private IPs and a VPN. (VPC peering connects two VPCs; it cannot attach an on-premises corporate network to a VPC.)
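Assuming the corporate network uses the documentation range 203.0.113.0/24 and placeholder resource IDs, the public-access and security-group changes might look like:

```shell
# Turn off the publicly accessible flag on the instance
aws rds modify-db-instance \
  --db-instance-identifier userdata-db \
  --no-publicly-accessible \
  --apply-immediately

# Remove the open ingress rule and allow only the corporate range on the PostgreSQL port
aws ec2 revoke-security-group-ingress \
  --group-id sg-0123456789abcdef0 --protocol tcp --port 5432 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 --protocol tcp --port 5432 --cidr 203.0.113.0/24
```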
A Database Specialist is working with a company to launch a new website built on Amazon Aurora with
several Aurora Replicas. This new website will replace an on-premises website connected to a legacy
relational database. Due to stability issues in the legacy database, the company would like to test the resiliency
of Aurora.
Which action can the Database Specialist take to test the resiliency of the Aurora DB cluster?
A.
Stop the DB cluster and analyze how the website responds
B.
Use Aurora fault injection to crash the master DB instance
C.
Remove the DB cluster endpoint to simulate a master DB instance failure
D.
Use Aurora Backtrack to crash the DB cluster
Answer: B. Use Aurora fault injection to crash the master DB instance.
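Aurora's fault injection is issued as SQL against the cluster. A minimal sketch, assuming an Aurora MySQL cluster endpoint and credentials (all placeholders):

```shell
# Aurora MySQL: crash the writer instance to trigger failover to a replica
mysql -h mycluster.cluster-xxxxxxxxxxxx.us-east-1.rds.amazonaws.com -u admin -p \
  -e "ALTER SYSTEM CRASH INSTANCE;"
```

Aurora PostgreSQL exposes equivalent fault injection functions such as aurora_inject_instance_crash(). Either way the crash is simulated, so the resiliency test can be repeated without damaging data.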
A large financial services company requires that all data be encrypted in transit. A Developer is attempting to
connect to an Amazon RDS DB instance using the company VPC for the first time with credentials provided
by a Database Specialist. Other members of the Development team can connect, but this user is consistently
receiving an error indicating a communications link failure. The Developer asked the Database Specialist to
reset the password a number of times, but the error persists.
Which step should be taken to troubleshoot this issue?
A.
Ensure that the database option group for the RDS DB instance allows ingress from the Developer machine’s IP address
B.
Ensure that the RDS DB instance’s subnet group includes a public subnet to allow the Developer to connect
C.
Ensure that the RDS DB instance has not reached its maximum connections limit
D.
Ensure that the connection is using SSL and is addressing the port where the RDS DB instance is listening for encrypted connections
Answer: D. Ensure that the connection is using SSL and is addressing the port where the RDS DB instance is listening for encrypted connections.
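For a MySQL-family RDS instance, forcing an encrypted session and verifying the server certificate can be sketched as follows (the endpoint is a placeholder; global-bundle.pem is the downloaded RDS CA bundle):

```shell
# Connect over TLS on the instance's listener port, verifying the RDS CA
mysql -h database-1.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com -P 3306 \
  -u devuser -p \
  --ssl-ca=global-bundle.pem \
  --ssl-mode=VERIFY_IDENTITY
```

If the server enforces TLS (for example via require_secure_transport), a plaintext connection attempt fails at the protocol level, which surfaces in JDBC as a generic communications link failure rather than an authentication error.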
A company maintains several databases using Amazon RDS for MySQL and PostgreSQL. Each RDS database
generates log files with retention periods set to their default values. The company has now mandated that
database logs be maintained for up to 90 days in a centralized repository to facilitate real-time and
after-the-fact analyses.
What should a Database Specialist do to meet these requirements with minimal effort?
A.
Create an AWS Lambda function to pull logs from the RDS databases and consolidate the log files in an
Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days
B.
Modify the RDS databases to publish logs to Amazon CloudWatch Logs. Change the log retention policy
for each log group to expire the events after 90 days.
C.
Write a stored procedure in each RDS database to download the logs and consolidate the log files in an
Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.
D.
Create an AWS Lambda function to download the logs from the RDS databases and publish the logs to
Amazon CloudWatch Logs. Change the log retention policy for the log group to expire the events after
90 days.
Answer: B. Modify the RDS databases to publish logs to Amazon CloudWatch Logs. Change the log retention policy for each log group to expire the events after 90 days. (Native log export requires no custom code, so it is the minimal-effort option.)
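Publishing logs to CloudWatch Logs needs no custom code. A sketch with placeholder identifiers (the exportable log types vary by engine):

```shell
# Publish the MySQL error, general, and slow query logs to CloudWatch Logs
aws rds modify-db-instance \
  --db-instance-identifier mysql-db-1 \
  --cloudwatch-logs-export-configuration '{"EnableLogTypes":["error","general","slowquery"]}'

# Keep 90 days of events in the resulting log group
aws logs put-retention-policy \
  --log-group-name /aws/rds/instance/mysql-db-1/error \
  --retention-in-days 90
```

For PostgreSQL instances the exportable type is "postgresql"; each engine's log groups follow the same /aws/rds/instance/<id>/<type> naming, so the retention policy is set per log group.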
A Database Specialist is performing a proof of concept with Amazon Aurora using a small instance to confirm
a simple database behavior. When loading a large dataset and creating the index, the Database Specialist
encounters the following error message from Aurora:
ERROR: could not write block 7507718 of temporary file: No space left on device
What is the cause of this error, and what should the Database Specialist do to resolve this issue?
A.
The scaling of Aurora storage cannot catch up with the data loading. The Database Specialist needs to modify the workload to load the data slowly.
B.
The scaling of Aurora storage cannot catch up with the data loading. The Database Specialist needs to enable Aurora storage scaling.
C.
The local storage used to store temporary tables is full. The Database Specialist needs to scale up the instance.
D.
The local storage used to store temporary tables is full. The Database Specialist needs to enable local storage scaling.
Answer: C. The local storage used to store temporary tables is full. The Database Specialist needs to scale up the instance.
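Whether local (temporary) storage is the bottleneck can be confirmed from the Aurora-specific FreeLocalStorage CloudWatch metric before resizing. A sketch (the instance identifier is a placeholder; the date arithmetic shown is GNU date):

```shell
# Minimum free local storage over the last hour for the instance
aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS \
  --metric-name FreeLocalStorage \
  --dimensions Name=DBInstanceIdentifier,Value=aurora-poc-1 \
  --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time   "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --period 300 \
  --statistics Minimum
```

A larger instance class provides more local storage for temporary files, which is why scaling up resolves the error even though Aurora's shared cluster storage grows automatically.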
A company has deployed an e-commerce web application in a new AWS account. An Amazon RDS for
MySQL Multi-AZ DB instance is part of this deployment with a
database-1.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com endpoint listening on port 3306. The company’s
Database Specialist is able to log in to MySQL and run queries from the bastion host using these details.
When users try to utilize the application hosted in the AWS account, they are presented with a generic error
message. The application servers are logging a “could not connect to server: Connection times out” error
message to Amazon CloudWatch Logs.
What is the cause of this error?
A.
The user name and password the application is using are incorrect.
B.
The security group assigned to the application servers does not have the necessary rules to allow
inbound connections from the DB instance
C.
The security group assigned to the DB instance does not have the necessary rules to allow inbound
connections from the application servers.
D.
The user name and password are correct, but the user is not authorized to use the DB instance.
Answer: C. The security group assigned to the DB instance does not have the necessary rules to allow inbound connections from the application servers.
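The fix is typically a source-security-group rule rather than a CIDR rule, so the application tier can scale without further changes. A sketch with placeholder group IDs:

```shell
# Allow MySQL traffic into the DB security group from the app servers' security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0db0000000000000a \
  --protocol tcp --port 3306 \
  --source-group sg-0app00000000000b
```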
A company wants to automate the creation of secure test databases with random credentials to be stored safely
for later use. The credentials should have sufficient information about each test database to initiate a
connection and perform automated credential rotations. The credentials should not be logged or stored
anywhere in an unencrypted form.
Which steps should a Database Specialist take to meet these requirements using an AWS CloudFormation
template?
A.
Create the database with the MasterUserName and MasterUserPassword properties set to the default values. Then, create the secret with the user name and password set to the same default values. Add a
Secret Target Attachment resource with the SecretId and TargetId properties set to the Amazon
Resource Names (ARNs) of the secret and the database. Finally, update the secret’s password value with
a randomly generated string set by the GenerateSecretString property.
B.
Add a Mapping property from the database Amazon Resource Name (ARN) to the secret ARN. Then,
create the secret with a chosen user name and a randomly generated password set by the
GenerateSecretString property. Add the database with the MasterUserName and MasterUserPassword
properties set to the user name of the secret.
C.
Add a resource of type AWS::SecretsManager::Secret and specify the GenerateSecretString property.
Then, define the database user name in the SecretStringTemplate template. Create a resource for the
database and reference the secret string for the MasterUserName and MasterUserPassword properties.
Then, add a resource of type AWS::SecretsManager::SecretTargetAttachment with the SecretId and
TargetId properties set to the Amazon Resource Names (ARNs) of the secret and the database.
D.
Create the secret with a chosen user name and a randomly generated password set by the
GenerateSecretString property. Add an SecretTargetAttachment resource with the SecretId property set
to the Amazon Resource Name (ARN) of the secret and the TargetId property set to a parameter value
matching the desired database ARN. Then, create a database with the MasterUserName and
MasterUserPassword properties set to the previously created values in the secret.
Answer: C. Add a resource of type AWS::SecretsManager::Secret and specify the GenerateSecretString property. Then, define the database user name in the SecretStringTemplate template. Create a resource for the database and reference the secret string for the MasterUserName and MasterUserPassword properties. Then, add a resource of type AWS::SecretsManager::SecretTargetAttachment with the SecretId and TargetId properties set to the Amazon Resource Names (ARNs) of the secret and the database.
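These resources might be sketched in a CloudFormation fragment like the following (all names and sizes are illustrative; note the actual RDS properties are spelled MasterUsername and MasterUserPassword):

```yaml
Resources:
  TestDBSecret:
    Type: AWS::SecretsManager::Secret
    Properties:
      GenerateSecretString:
        SecretStringTemplate: '{"username": "testadmin"}'
        GenerateStringKey: password
        PasswordLength: 24
        ExcludeCharacters: '"@/\'
  TestDB:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: mysql
      DBInstanceClass: db.t3.micro
      AllocatedStorage: '20'
      MasterUsername: !Sub '{{resolve:secretsmanager:${TestDBSecret}:SecretString:username}}'
      MasterUserPassword: !Sub '{{resolve:secretsmanager:${TestDBSecret}:SecretString:password}}'
  SecretAttachment:
    Type: AWS::SecretsManager::SecretTargetAttachment
    Properties:
      SecretId: !Ref TestDBSecret
      TargetId: !Ref TestDB
      TargetType: AWS::RDS::DBInstance
```

The attachment writes the connection details (host, port, engine) back into the secret so that rotation can later be enabled against it, and the dynamic references keep the password out of the template and stack events.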
A marketing company is using Amazon DocumentDB and requires that database audit logs be enabled. A
Database Specialist needs to configure monitoring so that all data definition language (DDL) statements
performed are visible to the Administrator. The Database Specialist has set the audit_logs parameter to
enabled in the cluster parameter group.
What should the Database Specialist do to automatically collect the database logs for the Administrator?
A.
Enable DocumentDB to export the logs to Amazon CloudWatch Logs
B.
Enable DocumentDB to export the logs to AWS CloudTrail
C.
Enable DocumentDB Events to export the logs to Amazon CloudWatch Logs
D.
Configure an AWS Lambda function to download the logs using the download-db-log-file-portion operation and store the logs in Amazon S3
Answer: A. Enable DocumentDB to export the logs to Amazon CloudWatch Logs.
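With audit_logs enabled in the cluster parameter group, the export itself is switched on per cluster. A sketch (the cluster identifier is a placeholder):

```shell
# Ship DocumentDB audit logs to CloudWatch Logs
aws docdb modify-db-cluster \
  --db-cluster-identifier docdb-cluster-1 \
  --cloudwatch-logs-export-configuration '{"EnableLogTypes":["audit"]}' \
  --apply-immediately
```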
A Database Specialist has migrated an on-premises Oracle database to Amazon Aurora PostgreSQL. The schema and the data have been migrated successfully. The on-premises database server was also being used to
run database maintenance cron jobs written in Python to perform tasks including data purging and generating
data exports. The logs for these jobs show that, most of the time, the jobs completed within 5 minutes, but a
few jobs took up to 10 minutes to complete. These maintenance jobs need to be set up for Aurora PostgreSQL.
How can the Database Specialist schedule these jobs so the setup requires minimal maintenance and provides
high availability?
A.
Create cron jobs on an Amazon EC2 instance to run the maintenance jobs following the required
schedule.
B.
Connect to the Aurora host and create cron jobs to run the maintenance jobs following the required
schedule.
C.
Create AWS Lambda functions to run the maintenance jobs and schedule them with Amazon
CloudWatch Events.
D.
Create the maintenance job using the Amazon CloudWatch job scheduling plugin
Create the maintenance job using the Amazon CloudWatch job scheduling plugin
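The Lambda-plus-CloudWatch-Events approach can be sketched with the AWS CLI, assuming the purge job has already been packaged as a Lambda function named purge-job (function name, region, and account ID are placeholders; Lambda's 15-minute timeout comfortably covers jobs that run up to 10 minutes):

```shell
# Run the maintenance job every day at 03:00 UTC
aws events put-rule \
  --name nightly-purge \
  --schedule-expression "cron(0 3 * * ? *)"

# Let CloudWatch Events invoke the function
aws lambda add-permission \
  --function-name purge-job \
  --statement-id events-invoke \
  --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:us-east-1:123456789012:rule/nightly-purge

# Point the rule at the function
aws events put-targets \
  --rule nightly-purge \
  --targets 'Id=1,Arn=arn:aws:lambda:us-east-1:123456789012:function:purge-job'
```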