[AWS, Practice Exam, Developer Associate, Part 1] Cheatsheet collected while studying

Ngoc Phan
Mar 14, 2021 · 16 min read

These are notes from a practice exam I recently took.

https://www.udemy.com/course/amazon-certified-developer-associate-practice-exams-aws/ (part 3)

So I passed ✌️, and here are the things I collected to summarize the knowledge this exam covers. I hope it helps you get an overview of it.

Result

Knowledge

Same low latency and high throughput performance as S3 Standard. Designed for 99.999999999% (11 nines) durability of objects across multiple Availability Zones. Resilient against events that impact an entire Availability Zone: your data survives even if one entire Availability Zone is destroyed.

Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 terabytes. The largest object that can be uploaded in a single PUT is 5 gigabytes. For objects larger than 100 megabytes, customers should consider using the Multipart Upload capability.
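The size limits above suggest a simple decision rule for uploads. A minimal sketch (not official AWS code; the names and thresholds just encode the limits quoted above):

```python
# S3 object size limits, in bytes, as quoted above.
MAX_OBJECT_SIZE = 5 * 1024**4        # 5 TB: largest S3 object
MAX_SINGLE_PUT = 5 * 1024**3         # 5 GB: largest single PUT
MULTIPART_THRESHOLD = 100 * 1024**2  # 100 MB: AWS-recommended multipart cutoff

def choose_upload_strategy(size_bytes: int) -> str:
    """Pick an upload strategy from the object size."""
    if size_bytes > MAX_OBJECT_SIZE:
        return "too large for S3"
    if size_bytes > MAX_SINGLE_PUT:
        return "multipart required"
    if size_bytes > MULTIPART_THRESHOLD:
        return "multipart recommended"
    return "single PUT"

print(choose_upload_strategy(50 * 1024**2))   # 50 MB
print(choose_upload_strategy(2 * 1024**3))    # 2 GB
print(choose_upload_strategy(6 * 1024**3))    # 6 GB
```

In practice the AWS SDKs' transfer utilities apply this kind of threshold for you automatically.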

Hot partitions: throttles are caused by a few partitions in the table that receive more requests than the average partition. Not enough capacity: throttles are caused by the table itself not having enough capacity to service requests on many partitions.

To speed up queries on non-key attributes, you can create a global secondary index. A global secondary index contains a selection of attributes from the base table, but they are organized by a primary key that is different from that of the table. The index key does not need to have any of the key attributes from the table. It doesn’t even need to have the same key schema as a table.

For example, you could create a global secondary index named GameTitleIndex, with a partition key of GameTitle and a sort key of TopScore. The base table's primary key attributes are always projected into an index, so the UserId attribute is also present. The following diagram shows what the GameTitleIndex would look like.

Elastic Block Store (EBS), as self-explanatory as it sounds, is used to store data in blocks. An EBS volume can be up to 16 TB in size. EBS volumes are like virtual hard disks attached to the server for storage. By default, an Amazon EC2 instance uses an EBS volume as its root device.

In addition, you cannot stop an instance store-backed instance. Once you start it, you have to keep it running until you are finally done with it, because the instance's root device is a physical drive on the host. You can only stop it by terminating it, which means you lose the instance completely and cannot recover it. Remember that you pay for the entire time the instance is up and running.

On the other hand, EBS allows you to stop and restart the instance. When you stop the instance, the EBS volume and all its data are preserved. This is called persistence, and it is an important feature of EBS volumes.

AWS Local Zones are a type of AWS infrastructure deployment that places AWS compute, storage, database, and other select services close to large population, industry, and IT centers. With AWS Local Zones, you can easily run applications that need single-digit millisecond latency closer to end-users in a specific geography. AWS Local Zones are ideal for use cases such as media & entertainment content creation, real-time gaming, live video streaming, and machine learning inference.

AWS Local Zones are an extension of an AWS Region where you can run your latency-sensitive applications using AWS services such as Amazon Elastic Compute Cloud, Amazon Virtual Private Cloud, Amazon Elastic Block Store, Amazon FSx, Amazon Elastic Load Balancing, Amazon EMR, Amazon ElastiCache, Amazon Elastic Kubernetes Service, Amazon Elastic Container Service, and Amazon Relational Database Service in geographic proximity to end-users.

AWS Wavelength is an AWS Infrastructure offering optimized for mobile edge computing applications. Wavelength Zones are AWS infrastructure deployments that embed AWS compute and storage services within communications service providers’ (CSP) datacenters at the edge of the 5G network, so application traffic from 5G devices can reach application servers running in Wavelength Zones without leaving the telecommunications network. This avoids the latency that would result from application traffic having to traverse multiple hops across the Internet to reach their destination, enabling customers to take full advantage of the latency and bandwidth benefits offered by modern 5G networks.

Run AWS infrastructure and services on premises for a truly consistent hybrid experience

AWS Outposts is a fully managed service that offers the same AWS infrastructure, AWS services, APIs, and tools to virtually any datacenter, co-location space, or on-premises facility for a truly consistent hybrid experience.

  • Low-latency compute
  • Local data processing
  • Data residency
  • Migration & Modernization

How AWS identifies an IAM user

When you create a user, IAM creates these ways to identify that user:

  • A “friendly name” for the user, which is the name that you specified when you created the user, such as Richard or Anaya. These are the names you see in the AWS Management Console.
  • An Amazon Resource Name (ARN) for the user. You use the ARN when you need to uniquely identify the user across all of AWS. For example, you could use an ARN to specify the user as a Principal in an IAM policy for an Amazon S3 bucket. An ARN for an IAM user might look like the following:

arn:aws:iam::account-ID-without-hyphens:user/Richard

  • A unique identifier for the user. This ID is returned only when you use the API, Tools for Windows PowerShell, or AWS CLI to create the user; you do not see this ID in the console.
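The ARN format above is mechanical enough to sketch as a tiny helper (a hypothetical illustration, not an AWS SDK function; the optional path parameter mirrors the `user/<path><name>` layout):

```python
def user_arn(account_id: str, user_name: str, path: str = "/") -> str:
    """Build an IAM user ARN: arn:aws:iam::<account-id>:user<path><name>."""
    return f"arn:aws:iam::{account_id}:user{path}{user_name}"

print(user_arn("123456789012", "Richard"))
# IAM paths such as /division_abc/ become part of the ARN too:
print(user_arn("123456789012", "Anaya", path="/division_abc/"))
```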

Cluster placement groups

A cluster placement group is a logical grouping of instances within a single Availability Zone. A cluster placement group can span peered VPCs in the same Region. Instances in the same cluster placement group enjoy a higher per-flow throughput limit for TCP/IP traffic and are placed in the same high-bisection bandwidth segment of the network.

The following image shows instances that are placed into a cluster placement group.

Cluster placement groups are recommended for applications that benefit from low network latency, high network throughput, or both

Lists the IAM users that have the specified path prefix. If no path prefix is specified, the operation returns all users in the AWS account. If there are none, the operation returns an empty list.

list-users
[--path-prefix <value>]
[--max-items <value>]
[--cli-input-json <value>]
[--starting-token <value>]
[--page-size <value>]
[--generate-cli-skeleton <value>]

To list IAM users

The following list-users command lists the IAM users in the current account:

aws iam list-users

Output:

"Users": [
{
"UserName": "Adele",
"Path": "/",
"CreateDate": "2013-03-07T05:14:48Z",
"UserId": "AKIAI44QH8DHBEXAMPLE",
"Arn": "arn:aws:iam::123456789012:user/Adele"
},
{
"UserName": "Bob",
"Path": "/",
"CreateDate": "2012-09-21T23:03:13Z",
"UserId": "AKIAIOSFODNN7EXAMPLE",
"Arn": "arn:aws:iam::123456789012:user/Bob"
}
]

For more information, see Listing Users in the Using IAM guide.

Boost Throughput Capacity to High-Traffic Partitions

It’s not always possible to distribute read and write activity evenly. When data access is imbalanced, a “hot” partition can receive a higher volume of read and write traffic compared to other partitions. In extreme cases, throttling can occur if a single partition receives more than 3,000 RCUs or 1,000 WCUs.

The following diagram illustrates how adaptive capacity works. The example table is provisioned with 400 WCUs evenly shared across four partitions, allowing each partition to sustain up to 100 WCUs per second. Partitions 1, 2, and 3 each receive write traffic of 50 WCU/sec. Partition 4 receives 150 WCU/sec. This hot partition can accept write traffic while it still has unused burst capacity, but eventually it throttles traffic that exceeds 100 WCU/sec.

DynamoDB adaptive capacity responds by increasing partition 4’s capacity so that it can sustain the higher workload of 150 WCU/sec without being throttled.
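The per-partition arithmetic in that example can be sketched in a few lines (an illustration of the numbers above, not DynamoDB's actual throttling algorithm):

```python
def partition_baseline(provisioned_wcu: float, partitions: int) -> float:
    """Provisioned capacity is divided evenly among physical partitions."""
    return provisioned_wcu / partitions

def is_throttled(traffic_wcu: float, baseline: float,
                 burst_remaining: float = 0.0) -> bool:
    """A partition throttles once traffic exceeds its baseline plus any
    unused burst capacity (before adaptive capacity raises the baseline)."""
    return traffic_wcu > baseline + burst_remaining

baseline = partition_baseline(400, 4)   # 400 WCUs over 4 partitions
print(baseline)                         # 100 WCU/sec per partition
print(is_throttled(50, baseline))       # partitions 1-3: fine
print(is_throttled(150, baseline))      # partition 4, burst exhausted: throttled
```

Adaptive capacity then effectively raises partition 4's baseline so the 150 WCU/sec workload is no longer throttled.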

Each Amazon S3 object has data, a key, and metadata. The object key (or key name) uniquely identifies the object in a bucket. Object metadata is a set of name-value pairs. You can set object metadata at the time you upload the object. After you upload the object, you cannot modify its metadata in place. The only way to modify object metadata is to make a copy of the object and set the metadata.

http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html

It is possible to change the metadata by performing an object copy (see How to update metadata using Amazon S3 SDK):

// AWS SDK for Java v1: "update" metadata by copying the object onto itself.
// Fetch the existing metadata first so previous entries are preserved.
ObjectMetadata metadataCopy = amazonS3Client.getObjectMetadata(bucketName, existingKey).clone();
metadataCopy.addUserMetadata("newmetadata", "newmetadatavalue");
CopyObjectRequest request = new CopyObjectRequest(bucketName, existingKey, bucketName, existingKey)
        .withNewObjectMetadata(metadataCopy);
amazonS3Client.copyObject(request);

Whether this is philosophically an “update” is up to you to decide.

The partition key portion of a table’s primary key determines the logical partitions in which a table’s data is stored. This in turn affects the underlying physical partitions. Provisioned I/O capacity for the table is divided evenly among these physical partitions.

S3 bucket naming rules

  • Bucket names should not contain upper-case letters
  • Bucket names should not contain underscores (_)
  • Bucket names should not end with a dash
  • Bucket names should be between 3 and 63 characters long
  • Bucket names cannot contain dashes adjacent to periods (e.g., my-.bucket.com and my.-bucket are invalid)

Avoid periods in bucket names when accessing S3 over SSL/HTTPS: the wildcard TLS certificate for *.s3.amazonaws.com does not match bucket names containing periods, so some S3 clients cannot upload to such buckets over HTTPS.
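The naming rules above can be checked mechanically. A hedged sketch (the authoritative and complete rules are in the S3 documentation; this only encodes the rules listed here):

```python
import re

def is_valid_bucket_name(name: str, allow_dots: bool = False) -> bool:
    """Validate an S3 bucket name against the rules listed above."""
    if not (3 <= len(name) <= 63):
        return False  # must be 3-63 characters long
    if not re.fullmatch(r"[a-z0-9][a-z0-9.-]*[a-z0-9]", name):
        return False  # no uppercase, no underscores, no leading/trailing dash or dot
    if ".-" in name or "-." in name or ".." in name:
        return False  # no dashes adjacent to periods, no consecutive periods
    if not allow_dots and "." in name:
        return False  # dots break the wildcard TLS certificate match over HTTPS
    return True

print(is_valid_bucket_name("my-bucket"))
print(is_valid_bucket_name("My_Bucket"))
print(is_valid_bucket_name("my-.bucket.com", allow_dots=True))
```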

Object Key Naming Guidelines from the AWS S3 official documentation

Safe characters

The following character sets are generally safe for use in key names:

  • Alphanumeric characters: 0-9, a-z, A-Z
  • Special characters: ! - _ . * ' ( )

NOTE ABOUT THE DELIMITER (“/”)

The following are examples of valid object key names:

  • 4my-organization
  • my.great_photos-2014/jan/myvacation.jpg
  • videos/2014/birthday/video1.wmv

Note that the Amazon S3 data model is a flat structure: you create a bucket, and the bucket stores objects. There is no hierarchy of subbuckets or subfolders; however, you can infer logical hierarchy using keyname prefixes and delimiters as the Amazon S3 console does.

E.g., if you use Private/taxdocument.pdf as a key, the console displays a Private folder containing taxdocument.pdf.

Amazon S3 supports buckets and objects; there is no hierarchy in Amazon S3. However, the prefixes and delimiters in an object key name enable the Amazon S3 console and the AWS SDKs to infer hierarchy and introduce the concept of folders.

IAM supports CodeCommit with three types of credentials:

  • Git credentials, an IAM-generated user name and password pair you can use to communicate with CodeCommit repositories over HTTPS.
  • SSH keys, a locally generated public-private key pair that you can associate with your IAM user to communicate with CodeCommit repositories over SSH.
  • AWS access keys, which you can use with the credential helper included with the AWS CLI to communicate with CodeCommit repositories over HTTPS.

Default encryption works with all existing and new Amazon S3 buckets. Without default encryption, to encrypt all objects stored in a bucket, you must include encryption information with every object storage request. You must also set up an Amazon S3 bucket policy to reject storage requests that don’t include encryption information.

There are no additional charges for using default encryption for S3 buckets. Requests to configure the default encryption feature incur standard Amazon S3 request charges. For information about pricing, see Amazon S3 pricing. For SSE-KMS CMK storage, AWS KMS charges apply and are listed at AWS KMS pricing.

SSE-S3 is the simplest method: the keys are managed and handled by AWS to encrypt the data you have selected. You cannot see the key directly or use this key manually to encrypt or decrypt the data. AES-256 is used as the encryption algorithm. AES (Advanced Encryption Standard) is a symmetric block cipher, with a 256-bit cryptographic key length. If you fully trust AWS, use this S3 encryption method.

SSE-KMS is a slightly different method from SSE-S3. AWS Key Management Service (KMS) is used to encrypt S3 data on the Amazon server side. The data key is managed by AWS, but a user manages the customer master key (CMK) in AWS KMS. The advantages of using the SSE-KMS encryption type are user control and audit trail.

With SSE-C, keys are provided by a customer and AWS doesn’t store the encryption keys. The provided key is passed in to AWS to handle each request related to data encryption or decryption. A user must ensure the safety of the keys. S3 data encryption is performed on the AWS server side. Only an HTTPS connection can be used (not HTTP).

You cannot choose your own user name or password for Git credentials. IAM generates these credentials for you to help ensure they meet the security standards for AWS and secure repositories in CodeCommit. You can download the credentials only once, at the time they are generated. Make sure that you save the credentials in a secure location. If necessary, you can reset the password at any time, but doing so invalidates any connections configured with the old password. You must reconfigure connections to use the new password before you can connect.

After you deregister an AMI, you can’t use it to launch new instances. When you deregister an AMI, it doesn’t affect any instances that you’ve already launched from the AMI. You’ll continue to incur usage costs for these instances. Therefore, if you are finished with these instances, you should terminate them.

Important When you delete a user’s password, the user can no longer sign in to the AWS Management Console. If the user has active access keys, they continue to function and allow access through the AWS CLI, Tools for Windows PowerShell, or AWS API function calls. When you use the AWS CLI, Tools for Windows PowerShell, or AWS API to delete a user from your AWS account, you must first delete the password using this operation. For more information, see Deleting an IAM user (AWS CLI).

Change the initial size of the root volume

By default, the size of the root volume is determined by the size of the snapshot. You can increase the initial size of the root volume using the block device mapping of the instance as follows.

  1. Determine the device name of the root volume specified in the AMI, as described in View the EBS volumes in an AMI block device mapping.
  2. Confirm the size of the snapshot specified in the AMI block device mapping, as described in View Amazon EBS snapshot information.
  3. Override the size of the root volume using the instance block device mapping, as described in Update the block device mapping when launching an instance, specifying a volume size that is larger than the snapshot size.

For example, the following entry for the instance block device mapping increases the size of the root volume, /dev/xvda, to 100 GiB. You can omit the snapshot ID in the instance block device mapping because the snapshot ID is already specified in the AMI block device mapping.

{
    "DeviceName": "/dev/xvda",
    "Ebs": {
        "VolumeSize": 100
    }
}
  • Availability Zones are multiple, isolated locations within each Region.
  • Local Zones provide you the ability to place resources, such as compute and storage, in multiple locations closer to your end users.
  • AWS Outposts brings native AWS services, infrastructure, and operating models to virtually any data center, co-location space, or on-premises facility.
  • Wavelength Zones allow developers to build applications that deliver ultra-low latencies to 5G devices and end users. Wavelength deploys standard AWS compute and storage services to the edge of telecommunication carriers’ 5G networks.

To describe your Availability Zones

The following describe-availability-zones example displays details for the Availability Zones that are available to you. The response includes Availability Zones only for the current Region. This example uses the profile's default Region, us-west-2 (Oregon).

aws ec2 describe-availability-zones

Output:

{
    "AvailabilityZones": [
        {
            "State": "available",
            "OptInStatus": "opt-in-not-required",
            "Messages": [],
            "RegionName": "us-west-2",
            "ZoneName": "us-west-2a",
            "ZoneId": "usw2-az1",
            "GroupName": "us-west-2",
            "NetworkBorderGroup": "us-west-2"
        },
        {
            "State": "available",
            "OptInStatus": "opt-in-not-required",
            "Messages": [],
            "RegionName": "us-west-2",
            "ZoneName": "us-west-2b",
            "ZoneId": "usw2-az2",
            "GroupName": "us-west-2",
            "NetworkBorderGroup": "us-west-2"
        },
        ...

If you are on the correct sign-in page and lose or forget your passwords or access keys, you cannot retrieve them from IAM. Instead, you can reset them using the following methods:

  • AWS account root user password — If you forget your root user password, you can reset the password from the AWS Management Console. For details, see Resetting a lost or forgotten root user password later in this topic.
  • AWS account access keys — If you forget your account access keys, you can create new access keys without disabling the existing access keys. If you are not using the existing keys, you can delete those. For details, see Creating access keys for the root user and Deleting access keys for the root user.
  • IAM user password — If you are an IAM user and you forget your password, you must ask your administrator to reset your password. To learn how an administrator can manage your password, see Managing passwords for IAM users.
  • IAM user access keys — If you are an IAM user and you forget your access keys, you will need new access keys. If you have permission to create your own access keys, you can find instructions for creating a new one at Managing access keys (console). If you do not have the required permissions, you must ask your administrator to create new access keys. If you are still using your old keys, ask your administrator not to delete the old keys. To learn how an administrator can manage your access keys, see Managing access keys for IAM users.

AWS supports bulk deletion of up to 1000 objects per request using the S3 REST API and its various wrappers. This method assumes you know the S3 object keys you want to remove (that is, it’s not designed to handle something like a retention policy, files that are over a certain size, etc).

The S3 REST API can specify up to 1000 files to be deleted in a single request, which is much quicker than making individual requests. Remember, each request is an HTTP (thus TCP) request, so each request carries overhead. You just need to know the objects' keys and create an HTTP request (or use a wrapper in your language of choice). AWS provides great information on this feature and its usage. Just choose the method you're most comfortable with!
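A sketch of the batching step: split a list of keys into DeleteObjects request payloads of at most 1000 keys each (the payload shape below follows the DeleteObjects request structure; sending it is left to your HTTP client or SDK of choice):

```python
def batch_delete_payloads(keys, batch_size=1000):
    """Chunk object keys into DeleteObjects payloads of <= batch_size keys."""
    payloads = []
    for i in range(0, len(keys), batch_size):
        chunk = keys[i:i + batch_size]
        payloads.append({"Objects": [{"Key": k} for k in chunk], "Quiet": True})
    return payloads

# 2500 keys collapse into 3 requests instead of 2500 individual deletes.
payloads = batch_delete_payloads([f"logs/{n}.txt" for n in range(2500)])
print(len(payloads))
print([len(p["Objects"]) for p in payloads])
```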

Every local secondary index must meet the following conditions:

  • The partition key is the same as that of its base table.
  • The sort key consists of exactly one scalar attribute.
  • The sort key of the base table is projected into the index, where it acts as a non-key attribute.

When you send a PutItem, UpdateItem, DeleteItem, or BatchWriteItem request to DAX, the following occurs:

  • DAX sends the request to DynamoDB.
  • DynamoDB replies to DAX, confirming that the write succeeded.
  • DAX writes the item to its item cache.
  • DAX returns success to the requester.
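The four steps above describe a write-through cache. A toy sketch of the same sequence (a plain-Python stand-in, not the DAX client; the dict stands in for DynamoDB):

```python
class WriteThroughCache:
    """Toy write-through cache mirroring the DAX write path above."""

    def __init__(self, store):
        self.store = store       # stands in for DynamoDB
        self.item_cache = {}     # stands in for the DAX item cache

    def put_item(self, key, item):
        self.store[key] = item         # steps 1-2: write confirmed by the store
        self.item_cache[key] = item    # step 3: item cache updated
        return {"status": "success"}   # step 4: success returned to requester

    def get_item(self, key):
        # Cache hit if present; fall back to the backing store otherwise.
        return self.item_cache.get(key, self.store.get(key))

db = {}
dax = WriteThroughCache(db)
dax.put_item("user#1", {"name": "Adele"})
print(dax.get_item("user#1"))
```

The key property is that the cache is never updated unless the store write succeeded, so reads through the cache stay consistent with DynamoDB.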

To prevent accidental termination, you can disable instance termination. If you do so, ensure that the disableApiTermination attribute is set to true for the instance. To control the behavior of an instance shutdown, such as shutdown -h in Linux or shutdown in Windows, set the instanceInitiatedShutdownBehavior instance attribute to stop or terminate as desired. Instances with Amazon EBS volumes for the root device default to stop, and instances with instance-store root devices are always terminated as the result of an instance shutdown. Link: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instances-and-amis.html

When you configure server-side encryption using SSE-KMS, you can configure your bucket to use an S3 Bucket Key for SSE-KMS on new objects. S3 Bucket Keys decrease the request traffic from Amazon S3 to AWS Key Management Service (AWS KMS) and reduce the cost of SSE-KMS.

When you configure your bucket to use default encryption with SSE-KMS, you can also enable S3 Bucket Keys to decrease request traffic from Amazon S3 to AWS Key Management Service (AWS KMS) and reduce the cost of encryption. For more information, see Reducing the cost of SSE-KMS with Amazon S3 Bucket Keys.

  • Console password: A password that the user can type to sign in to interactive sessions such as the AWS Management Console.
  • Access keys: A combination of an access key ID and a secret access key. You can assign two to a user at a time. These can be used to make programmatic calls to AWS. For example, you might use access keys when using the API for code or at a command prompt when using the AWS CLI or the AWS PowerShell tools.
  • SSH keys for use with CodeCommit: An SSH public key in the OpenSSH format that can be used to authenticate with CodeCommit.
  • Server certificates: SSL/TLS certificates that you can use to authenticate with some AWS services. We recommend that you use AWS Certificate Manager (ACM) to provision, manage, and deploy your server certificates. Use IAM only when you must support HTTPS connections in a region that is not supported by ACM. To learn which regions support ACM, see AWS Certificate Manager endpoints and quotas in the AWS General Reference.

Link: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html

If an Amazon EBS-backed instance fails, you can restore your session by following one of these methods:

  • Stop and then start again (try this method first).
  • Automatically snapshot all relevant volumes and create a new AMI. For more information, see Create an Amazon EBS-backed Linux AMI.
  • Attach the volume to a new instance by following these steps:

  1. Create a snapshot of the root volume.
  2. Register a new AMI using the snapshot.
  3. Launch a new instance from the new AMI.
  4. Detach the remaining Amazon EBS volumes from the old instance.
  5. Reattach the Amazon EBS volumes to the new instance.

Link: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/RootDeviceStorage.html

Thank you for reading. I hope you enjoyed it! 😄
