SAA-C03 Exam Questions

Total 825 Questions

Last Updated: 27-Dec-2024

Topic 4: Exam Pool D

A company manages its own Amazon EC2 instances that run MySQL databases. The company is manually managing replication and scaling as demand increases or decreases. The company needs a new solution that simplifies the process of adding or removing compute capacity to or from its database tier as needed. The solution also must offer improved performance, scaling, and durability with minimal effort from operations. Which solution meets these requirements?


A. Migrate the databases to Amazon Aurora Serverless for Aurora MySQL.


B. Migrate the databases to Amazon Aurora Serverless for Aurora PostgreSQL.


C. Combine the databases into one larger MySQL database. Run the larger database on larger EC2 instances.


D. Create an EC2 Auto Scaling group for the database tier. Migrate the existing databases to the new environment.





A.
  Migrate the databases to Amazon Aurora Serverless for Aurora MySQL.

A company has applications that run on Amazon EC2 instances in a VPC. One of the applications needs to call the Amazon S3 API to store and read objects. According to the company's security regulations, no traffic from the applications is allowed to travel across the internet.
Which solution will meet these requirements?


A. Configure an S3 interface endpoint.


B. Configure an S3 gateway endpoint.


C. Create an S3 bucket in a private subnet.


D. Create an S3 bucket in the same Region as the EC2 instance.





B.
  Configure an S3 gateway endpoint.
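As a sketch of how the chosen option could be set up, the parameters below build a request for boto3's `ec2.create_vpc_endpoint` call. All resource IDs are hypothetical placeholders, and the actual API call is omitted because it requires AWS credentials:

```python
# Sketch: request parameters for an S3 *gateway* endpoint, which routes
# EC2-to-S3 traffic over the AWS network instead of the public internet.
# All resource IDs below are hypothetical placeholders.

def gateway_endpoint_params(vpc_id: str, region: str, route_table_ids: list) -> dict:
    """Build kwargs for boto3's ec2.create_vpc_endpoint (the real call needs credentials)."""
    return {
        "VpcEndpointType": "Gateway",            # gateway, not interface: no ENI required
        "VpcId": vpc_id,
        "ServiceName": f"com.amazonaws.{region}.s3",
        "RouteTableIds": route_table_ids,        # gateway endpoints attach via route tables
    }

params = gateway_endpoint_params("vpc-0abc1234", "us-east-1", ["rtb-0def5678"])
print(params["ServiceName"])  # com.amazonaws.us-east-1.s3
```

Because the gateway endpoint is attached through the subnet route tables, the instances reach S3 without any internet gateway or NAT device in the path.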

A company recently migrated its web application to AWS by rehosting the application on Amazon EC2 instances in a single AWS Region. The company wants to redesign its application architecture to be highly available and fault tolerant. Traffic must reach all running EC2 instances randomly. Which combination of steps should the company take to meet these requirements? (Choose two.)


A. Create an Amazon Route 53 failover routing policy.


B. Create an Amazon Route 53 weighted routing policy.


C. Create an Amazon Route 53 multivalue answer routing policy.


D. Launch three EC2 instances: two instances in one Availability Zone and one instance in another Availability Zone.


E. Launch four EC2 instances: two instances in one Availability Zone and two instances in another Availability Zone.





C.
  Create an Amazon Route 53 multivalue answer routing policy.

E.
  Launch four EC2 instances: two instances in one Availability Zone and two instances in another Availability Zone.
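A minimal sketch of the multivalue answer records behind answer C, expressed as a Route 53 change batch for boto3's `route53.change_resource_record_sets`. The domain name and IP addresses are hypothetical placeholders for the four instances in answer E:

```python
# Sketch: a Route 53 change batch for multivalue answer routing. Route 53
# returns up to eight healthy records in random order, so traffic reaches
# the running instances randomly. Name and IPs are placeholders.

def multivalue_changes(name: str, ips: list) -> dict:
    """Build the ChangeBatch for route53.change_resource_record_sets."""
    return {
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": name,
                    "Type": "A",
                    "SetIdentifier": f"instance-{i}",   # required for multivalue records
                    "MultiValueAnswer": True,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": ip}], # one value per record set
                },
            }
            for i, ip in enumerate(ips)
        ]
    }

batch = multivalue_changes(
    "app.example.com",
    ["10.0.1.10", "10.0.1.11", "10.0.2.10", "10.0.2.11"],  # two per Availability Zone
)
print(len(batch["Changes"]))  # 4
```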

A company is designing an application. The application uses an AWS Lambda function to receive information through Amazon API Gateway and to store the information in an Amazon Aurora PostgreSQL database.
During the proof-of-concept stage, the company has to increase the Lambda quotas significantly to handle the high volumes of data that the company needs to load into the database. A solutions architect must recommend a new design to improve scalability and minimize the configuration effort.
Which solution will meet these requirements?


A. Refactor the Lambda function code to Apache Tomcat code that runs on Amazon EC2 instances. Connect the database by using native Java Database Connectivity (JDBC) drivers.


B. Change the platform from Aurora to Amazon DynamoDB. Provision a DynamoDB Accelerator (DAX) cluster. Use the DAX client SDK to point the existing DynamoDB API calls at the DAX cluster.


C. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using Amazon Simple Notification Service (Amazon SNS).


D. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using an Amazon Simple Queue Service (Amazon SQS) queue.





B.
  Change the platform from Aurora to Amazon DynamoDB. Provision a DynamoDB Accelerator (DAX) cluster. Use the DAX client SDK to point the existing DynamoDB API calls at the DAX cluster.

A company maintains a searchable repository of items on its website. The data is stored in an Amazon RDS for MySQL database table that contains more than 10 million rows. The database has 2 TB of General Purpose SSD storage. There are millions of updates against this data every day through the company's website.
The company has noticed that some insert operations are taking 10 seconds or longer. The company has determined that the database storage performance is the problem. Which solution addresses this performance issue?


A. Change the storage type to Provisioned IOPS SSD


B. Change the DB instance to a memory optimized instance class


C. Change the DB instance to a burstable performance instance class


D. Enable Multi-AZ RDS read replicas with MySQL native asynchronous replication.





A.
  Change the storage type to Provisioned IOPS SSD
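As a sketch of the storage change in answer A, the parameters below are kwargs for boto3's `rds.modify_db_instance`. The instance identifier and IOPS figure are hypothetical placeholders for a write-heavy 2 TB volume:

```python
# Sketch: kwargs for rds.modify_db_instance to move the instance from
# General Purpose SSD to Provisioned IOPS SSD. Identifier and IOPS value
# are hypothetical placeholders; the real call needs AWS credentials.

def piops_migration_params(db_id: str, iops: int) -> dict:
    return {
        "DBInstanceIdentifier": db_id,
        "StorageType": "io1",       # Provisioned IOPS SSD storage type
        "Iops": iops,               # provisioned, consistent I/O rate for heavy writes
        "ApplyImmediately": True,   # the storage conversion itself still takes time
    }

params = piops_migration_params("items-mysql-prod", 20000)
print(params["StorageType"])  # io1
```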

A company has a workload in an AWS Region. Customers connect to and access the workload by using an Amazon API Gateway REST API. The company uses Amazon Route 53 as its DNS provider. The company wants to provide individual and secure URLs for all customers. Which combination of steps will meet these requirements with the MOST operational efficiency? (Select THREE.)


A. Register the required domain in a registrar. Create a wildcard custom domain name in a Route 53 hosted zone and a record in the zone that points to the API Gateway endpoint.


B. Request a wildcard certificate that matches the domains in AWS Certificate Manager (ACM) in a different Region.


C. Create hosted zones for each customer as required in Route 53. Create zone records that point to the API Gateway endpoint.


D. Request a wildcard certificate that matches the custom domain name in AWS Certificate Manager (ACM) in the same Region.


E. Create multiple API endpoints for each customer in API Gateway.


F. Create a custom domain name in API Gateway for the REST API. Import the certificate from AWS Certificate Manager (ACM).





A.
  Register the required domain in a registrar. Create a wildcard custom domain name in a Route 53 hosted zone and a record in the zone that points to the API Gateway endpoint.

D.
  Request a wildcard certificate that matches the custom domain name in AWS Certificate Manager (ACM) in the same Region.

F.
  Create a custom domain name in API Gateway for the REST API. Import the certificate from AWS Certificate Manager (ACM).

Explanation: To provide individual and secure URLs for all customers using an API Gateway REST API, the following steps are needed:

A. Register the required domain in a registrar. Create a wildcard custom domain name in a Route 53 hosted zone and a record in the zone that points to the API Gateway endpoint. This step allows the use of a custom domain name for the API instead of the default one generated by API Gateway. A wildcard custom domain name means that any subdomain under the domain name (such as customer1.example.com or customer2.example.com) can be used to access the API. The domain name must be registered with a registrar (such as Route 53 or a third-party registrar), a hosted zone must be created in Route 53 for the domain name, and an alias record must be created in the hosted zone that points to the API Gateway endpoint.

D. Request a wildcard certificate that matches the custom domain name in AWS Certificate Manager (ACM) in the same Region. This step secures the API with HTTPS using a certificate issued by ACM. A wildcard certificate can match any subdomain under the domain name (such as *.example.com). The certificate must be requested or imported in ACM so that it matches the custom domain name, ownership of the domain must be verified, and the certificate must be requested in the same Region as the API.

F. Create a custom domain name in API Gateway for the REST API. Import the certificate from AWS Certificate Manager (ACM). This step associates the custom domain name with the API and uses the certificate from ACM to enable HTTPS. The custom domain name is created in API Gateway with the certificate ARN from ACM, and a base path mapping maps a path from the custom domain name to the API stage.
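A sketch of the Route 53 record from step A: a wildcard alias record pointing at the API Gateway custom domain name, expressed as a change batch for boto3's `route53.change_resource_record_sets`. The target domain name and hosted zone ID below are hypothetical placeholders for the values API Gateway returns when the custom domain name is created:

```python
# Sketch: the wildcard alias record behind the custom domain, pointing
# *.example.com at the API Gateway custom domain name. DNSName and
# HostedZoneId are hypothetical placeholders from API Gateway's output.

def wildcard_alias_change(wildcard: str, apigw_domain: str, apigw_zone_id: str) -> dict:
    """ChangeBatch for route53.change_resource_record_sets (alias records carry no TTL)."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": wildcard,                   # e.g. "*.example.com"
                "Type": "A",
                "AliasTarget": {
                    "DNSName": apigw_domain,        # domain name returned by API Gateway
                    "HostedZoneId": apigw_zone_id,  # API Gateway's zone, not your own zone
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    }

change = wildcard_alias_change(
    "*.example.com",
    "d-abc123.execute-api.us-east-1.amazonaws.com",  # placeholder target
    "Z1UJRXOUMOOFQ8",                                # placeholder zone ID
)
print(change["Changes"][0]["ResourceRecordSet"]["Name"])  # *.example.com
```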

A global company runs its applications in multiple AWS accounts in AWS Organizations. The company's applications use multipart uploads to upload data to multiple Amazon S3 buckets across AWS Regions. The company wants to report on incomplete multipart uploads for cost compliance purposes. Which solution will meet these requirements with the LEAST operational overhead?


A. Configure AWS Config with a rule to report the incomplete multipart upload object count.


B. Create a service control policy (SCP) to report the incomplete multipart upload object count.


C. Configure S3 Storage Lens to report the incomplete multipart upload object count.


D. Create an S3 Multi-Region Access Point to report the incomplete multipart upload object count.





C.
  Configure S3 Storage Lens to report the incomplete multipart upload object count.

Explanation: S3 Storage Lens is a cloud storage analytics feature that provides organization-wide visibility into object storage usage and activity across multiple AWS accounts in AWS Organizations. S3 Storage Lens can report the incomplete multipart upload object count as one of the metrics that it collects and displays on an interactive dashboard in the S3 console. S3 Storage Lens can also export metrics in CSV or Parquet format to an S3 bucket for further analysis. This solution will meet the requirements with the least operational overhead, as it does not require any code development or policy changes.

A company needs to configure a real-time data ingestion architecture for its application. The company needs an API, a process that transforms data as the data is streamed, and a storage solution for the data.
Which solution will meet these requirements with the LEAST operational overhead?


A. Deploy an Amazon EC2 instance to host an API that sends data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data Firehose delivery stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the Kinesis Data Firehose delivery stream to send the data to Amazon S3.


B. Deploy an Amazon EC2 instance to host an API that sends data to AWS Glue. Stop source/destination checking on the EC2 instance. Use AWS Glue to transform the data and to send the data to Amazon S3.


C. Configure an Amazon API Gateway API to send data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data Firehose delivery stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the Kinesis Data Firehose delivery stream to send the data to Amazon S3.


D. Configure an Amazon API Gateway API to send data to AWS Glue. Use AWS Lambda functions to transform the data. Use AWS Glue to send the data to Amazon S3.





C.
  Configure an Amazon API Gateway API to send data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data Firehose delivery stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the Kinesis Data Firehose delivery stream to send the data to Amazon S3.
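The Lambda transformation step in answer C can be sketched as follows. Kinesis Data Firehose hands the function a batch of base64-encoded records and expects each one back with the same recordId, a result status, and re-encoded data; the uppercasing transform here is a placeholder for real business logic:

```python
# Sketch: a Firehose data-transformation Lambda handler. Firehose sends
# base64-encoded records and expects {recordId, result, data} back for each.
import base64
import json

def handler(event, context):
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        payload["name"] = payload["name"].upper()       # placeholder transform
        output.append({
            "recordId": record["recordId"],             # must echo the incoming id
            "result": "Ok",                             # or "Dropped" / "ProcessingFailed"
            "data": base64.b64encode(json.dumps(payload).encode()).decode(),
        })
    return {"records": output}

# Local simulation of a Firehose invocation:
event = {"records": [{"recordId": "1",
                      "data": base64.b64encode(b'{"name": "alice"}').decode()}]}
out = handler(event, None)
print(json.loads(base64.b64decode(out["records"][0]["data"]))["name"])  # ALICE
```

Because API Gateway, Kinesis, Firehose, and Lambda all scale automatically, none of the pieces in this pipeline requires capacity management.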

A company runs multiple Amazon EC2 Linux instances in a VPC across two Availability Zones. The instances host applications that use a hierarchical directory structure. The applications need to read and write rapidly and concurrently to shared storage. What should a solutions architect do to meet these requirements?


A. Create an Amazon S3 bucket. Allow access from all the EC2 instances in the VPC.


B. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system from each EC2 instance.


C. Create a file system on a Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volume. Attach the EBS volume to all the EC2 instances.


D. Create file systems on Amazon Elastic Block Store (Amazon EBS) volumes that are attached to each EC2 instance. Synchronize the EBS volumes across the different EC2 instances.





B.
  Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system from each EC2 instance.

Explanation: This option allows the EC2 instances to read and write rapidly and concurrently to shared storage across two Availability Zones. Amazon EFS provides a scalable, elastic, and highly available file system that can be mounted from multiple EC2 instances. Amazon EFS supports high levels of throughput and IOPS, and consistent low latencies. Amazon EFS also supports NFSv4 lock upgrading and downgrading, which enables high levels of concurrency.

A company has created an image analysis application in which users can upload photos and add photo frames to their images. The users upload images and metadata to indicate which photo frames they want to add to their images. The application uses a single Amazon EC2 instance and Amazon DynamoDB to store the metadata.

The application is becoming more popular, and the number of users is increasing. The company expects the number of concurrent users to vary significantly depending on the time of day and day of week. The company must ensure that the application can scale to meet the needs of the growing user base.

Which solution meets these requirements?


A. Use AWS Lambda to process the photos. Store the photos and metadata in DynamoDB.


B. Use Amazon Kinesis Data Firehose to process the photos and to store the photos and metadata.


C. Use AWS Lambda to process the photos. Store the photos in Amazon S3. Retain DynamoDB to store the metadata.


D. Increase the number of EC2 instances to three. Use Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volumes to store the photos and metadata.





C.
  Use AWS Lambda to process the photos. Store the photos in Amazon S3. Retain DynamoDB to store the metadata.

Explanation: This solution meets the requirements of scalability, performance, and availability. AWS Lambda can process the photos in parallel and scale up or down automatically depending on the demand. Amazon S3 can store the photos and metadata reliably and durably, and provide high availability and low latency. DynamoDB can store the metadata efficiently and provide consistent performance. This solution also reduces the cost and complexity of managing EC2 instances and EBS volumes.
Option A is incorrect because storing the photos in DynamoDB is not a good practice, as it can increase the storage cost and limit the throughput. Option B is incorrect because Kinesis Data Firehose is not designed for processing photos, but for streaming data to destinations such as S3 or Redshift. Option D is incorrect because increasing the number of EC2 instances and using Provisioned IOPS SSD volumes does not guarantee scalability, as it depends on the load balancer and the application code. It also increases the cost and complexity of managing the infrastructure.
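The event flow in answer C can be sketched as below: the photo lands in S3, S3 invokes a Lambda function, and the function looks up the user's frame choice in the existing DynamoDB metadata table. The table and attribute names are hypothetical placeholders, and the DynamoDB call is commented out because it needs AWS access:

```python
# Sketch: a Lambda handler invoked by an S3 event notification. It extracts
# the bucket and object key, which would drive a DynamoDB metadata lookup.
# Table/attribute names are hypothetical placeholders.

def handler(event, context):
    results = []
    for rec in event["Records"]:
        bucket = rec["s3"]["bucket"]["name"]
        key = rec["s3"]["object"]["key"]          # e.g. "uploads/user-42/photo.jpg"
        # metadata = boto3.resource("dynamodb").Table("PhotoMetadata").get_item(
        #     Key={"photoKey": key})["Item"]      # frame choice stored with the upload
        results.append({"bucket": bucket, "key": key})
    return results

# Local simulation of an S3 event notification:
event = {"Records": [{"s3": {"bucket": {"name": "photo-uploads"},
                             "object": {"key": "uploads/user-42/photo.jpg"}}}]}
print(handler(event, None)[0]["key"])  # uploads/user-42/photo.jpg
```

Because Lambda scales concurrency with the number of incoming S3 events, this design absorbs the variation in concurrent users without any instance management.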

