
Amazon SAP-C02 Exam Questions - Navigate Your Path to Success

The Amazon AWS Certified Solutions Architect - Professional (SAP-C02) exam is a good choice for AWS Solutions Architects. Candidates who pass it earn the Amazon AWS Certified Solutions Architect - Professional certification. Below are some essential facts for Amazon SAP-C02 exam candidates:

  • In the actual Amazon AWS Certified Solutions Architect - Professional (SAP-C02) exam, a candidate can expect 75 questions, and the officially allowed time is 180 minutes.
  • TrendyCerts offers 483 Questions that are based on actual Amazon SAP-C02 syllabus.
  • Our Amazon SAP-C02 Exam Practice Questions were last updated on: Mar 08, 2025

Sample Questions for Amazon SAP-C02 Exam Preparation

Question 1

A company uses AWS Organizations to manage its development environment. Each development team at the company has its own AWS account. Each account has a single VPC, and the CIDR blocks do not overlap.

The company has an Amazon Aurora DB cluster in a shared services account. All the development teams need to work with live data from the DB cluster.

Which solution will provide the required connectivity to the DB cluster with the LEAST operational overhead?

Correct : B

Create a Transit Gateway:

In the shared services account, create a new AWS Transit Gateway. This serves as a central hub to connect multiple VPCs, simplifying the network topology and management.
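As a rough sketch, creating the transit gateway in the shared services account might look like the following AWS CLI call (the name tag is an illustrative placeholder):

```shell
# Run in the shared services account.
# Create a transit gateway to act as the central routing hub;
# the "dev-shared-tgw" name tag is a placeholder.
aws ec2 create-transit-gateway \
  --description "Hub for development VPCs and shared services" \
  --tag-specifications 'ResourceType=transit-gateway,Tags=[{Key=Name,Value=dev-shared-tgw}]'
```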

Configure Transit Gateway Attachments:

Attach the VPC containing the Aurora DB cluster to the transit gateway. This allows the shared services VPC to communicate through the transit gateway.
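A minimal sketch of this attachment, assuming placeholder transit gateway, VPC, and subnet IDs:

```shell
# Run in the shared services account.
# Attach the VPC that hosts the Aurora DB cluster to the transit gateway.
# The tgw-, vpc-, and subnet- IDs below are placeholders.
aws ec2 create-transit-gateway-vpc-attachment \
  --transit-gateway-id tgw-0123456789abcdef0 \
  --vpc-id vpc-0123456789abcdef0 \
  --subnet-ids subnet-0123456789abcdef0 subnet-0123456789abcdef1
```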

Create Resource Share with AWS RAM:

Use AWS Resource Access Manager (AWS RAM) to create a resource share for the transit gateway. Share this resource with all development accounts. AWS RAM allows you to securely share your AWS resources across AWS accounts without needing to duplicate them.
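The resource share could be created with a call along these lines (the ARN and account IDs are placeholders; with AWS Organizations you could instead share with an OU or the entire organization):

```shell
# Run in the shared services account.
# Share the transit gateway with the development accounts via AWS RAM.
# The transit gateway ARN and account IDs are placeholders.
aws ram create-resource-share \
  --name dev-tgw-share \
  --resource-arns arn:aws:ec2:us-east-1:111111111111:transit-gateway/tgw-0123456789abcdef0 \
  --principals 222222222222 333333333333
```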

Accept Resource Shares in Development Accounts:

Instruct each development team to log into their respective AWS accounts and accept the transit gateway resource share. This step is crucial for enabling cross-account access to the shared transit gateway.

Configure VPC Attachments in Development Accounts:

Each development account needs to attach their VPC to the shared transit gateway. This allows their VPCs to route traffic through the transit gateway to the Aurora DB cluster in the shared services account.
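From each development account, the attachment might be created as follows (all IDs are placeholders):

```shell
# Run in each development account after accepting the RAM share.
# Attach the team's VPC to the shared transit gateway.
aws ec2 create-transit-gateway-vpc-attachment \
  --transit-gateway-id tgw-0123456789abcdef0 \
  --vpc-id vpc-0aaaaaaaaaaaaaaa0 \
  --subnet-ids subnet-0aaaaaaaaaaaaaaa0
```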

Update Route Tables:

Update the route tables in each VPC to direct traffic intended for the Aurora DB cluster through the transit gateway. This ensures that network traffic is properly routed between the development VPCs and the shared services VPC.
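As a sketch, the route in a development VPC's route table might look like this (the route table ID, CIDR block, and transit gateway ID are all placeholders; a mirror-image route is also needed in the shared services VPC for the return path):

```shell
# Run in each development account.
# Send traffic destined for the shared services VPC CIDR through
# the transit gateway. All IDs and the CIDR below are placeholders.
aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 10.100.0.0/16 \
  --transit-gateway-id tgw-0123456789abcdef0
```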

Using a transit gateway simplifies the network management and reduces operational overhead by providing a scalable and efficient way to interconnect multiple VPCs across different AWS accounts.

Reference

AWS Database Blog on RDS Proxy for Cross-Account Access.

AWS Architecture Blog on Cross-Account and Cross-Region Aurora Setup.

DEV Community on Managing Multiple AWS Accounts with Organizations.


Question 2

A company has an application that analyzes and stores image data on premises. The application receives millions of new image files every day. Files are an average of 1 MB in size. The files are analyzed in batches of 1 GB. When the application analyzes a batch, the application zips the images together. The application then archives the images as a single file on an on-premises NFS server for long-term storage.

The company has a Microsoft Hyper-V environment on premises and has compute capacity available. The company does not have storage capacity and wants to archive the images on AWS. The company needs the ability to retrieve archived data within 1 week of a request.

The company has a 10 Gbps AWS Direct Connect connection between its on-premises data center and AWS. The company needs to set bandwidth limits and schedule archived images to be copied to AWS during non-business hours.

Which solution will meet these requirements MOST cost-effectively?

Correct : B

Deploy DataSync Agent:

Install the AWS DataSync agent as a VM in your Hyper-V environment. This agent facilitates the data transfer between your on-premises storage and AWS.
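After deploying the agent VM and obtaining its activation key, registering it might look like this (the agent name, key, and region are placeholders):

```shell
# Register the on-premises DataSync agent with AWS.
# The activation key below is a placeholder obtained from the agent VM.
aws datasync create-agent \
  --agent-name hyperv-nfs-agent \
  --activation-key AAAAA-BBBBB-CCCCC-DDDDD-EEEEE
```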

Configure Source and Destination:

Set up the source location to point to your on-premises NFS server where the image batches are stored.

Configure the destination location to be an Amazon S3 bucket with the Glacier Deep Archive storage class. This storage class is cost-effective for long-term storage with retrieval times of up to 12 hours.
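The two locations above could be defined roughly as follows (the NFS hostname, export path, bucket name, and all ARNs are placeholders):

```shell
# Source: the on-premises NFS export holding the zipped image batches.
aws datasync create-location-nfs \
  --server-hostname nfs.example.internal \
  --subdirectory /exports/image-archives \
  --on-prem-config AgentArns=arn:aws:datasync:us-east-1:111111111111:agent/agent-0123456789abcdef0

# Destination: an S3 bucket, writing objects directly to Glacier Deep Archive.
aws datasync create-location-s3 \
  --s3-bucket-arn arn:aws:s3:::example-image-archive \
  --s3-storage-class DEEP_ARCHIVE \
  --s3-config BucketAccessRoleArn=arn:aws:iam::111111111111:role/datasync-s3-access
```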

Create DataSync Tasks:

Create and configure DataSync tasks to manage the data transfer. Schedule these tasks to run during non-business hours to minimize bandwidth usage during peak times. The tasks will handle the copying of data batches from the NFS server to the S3 bucket.
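A sketch of such a task, scheduled nightly at 02:00 UTC (the cron expression is illustrative, and the location ARNs are placeholders taken from the create-location outputs):

```shell
# Create a scheduled DataSync task for the nightly archive copy.
aws datasync create-task \
  --name nightly-image-archive \
  --source-location-arn arn:aws:datasync:us-east-1:111111111111:location/loc-source0123456789 \
  --destination-location-arn arn:aws:datasync:us-east-1:111111111111:location/loc-dest0123456789a \
  --schedule ScheduleExpression="cron(0 2 * * ? *)"
```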

Set Bandwidth Limits:

In the DataSync configuration, set bandwidth limits to control the amount of data being transferred at any given time. This ensures that your network's performance is not adversely affected during business hours.
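The bandwidth cap is set through the task's `BytesPerSecond` option, for example (the task ARN is a placeholder and the limit value is illustrative):

```shell
# Cap the task at roughly 100 MiB/s; BytesPerSecond is in bytes,
# and a value of -1 removes the limit. The task ARN is a placeholder.
aws datasync update-task \
  --task-arn arn:aws:datasync:us-east-1:111111111111:task/task-0123456789abcdef0 \
  --options BytesPerSecond=104857600
```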

Delete On-Premises Data:

After verifying that a batch has been successfully copied to S3 Glacier Deep Archive, delete the transferred files from the on-premises NFS server. This keeps on-premises storage consumption under control while the data remains securely archived on AWS.

This approach leverages AWS DataSync for efficient, secure, and automated data transfer, and S3 Glacier Deep Archive for cost-effective long-term storage.

Reference

AWS DataSync Overview.

AWS Storage Blog on DataSync Migration.

Amazon S3 Transfer Acceleration Documentation.

