You upload a file named report.pdf to an Amazon S3 bucket. Later, you upload another file with the same name report.pdf to the same bucket. What is the result?
Think about how S3 handles object keys and overwriting.
In Amazon S3, object keys are unique within a bucket, so uploading an object with an existing key silently overwrites it; the original report.pdf is lost. Versioning must be enabled to keep multiple versions.
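The overwrite behavior can be pictured with a small dictionary sketch. This is plain Python illustrating the key semantics, not the AWS SDK; the function name put_object is borrowed from the S3 API for readability only.

```python
# Toy model of S3 key semantics (illustration only, not the AWS API):
# a bucket maps each unique key to exactly one object, so a second
# upload under the same key replaces the first.
bucket = {}  # key -> object body

def put_object(key, body):
    bucket[key] = body  # same key -> previous body is overwritten

put_object("report.pdf", b"first draft")
put_object("report.pdf", b"final version")

print(len(bucket))           # 1 -- still a single object under that key
print(bucket["report.pdf"])  # b'final version' -- the old body is gone
```

With versioning enabled (the next questions), S3 instead keeps the old body as a previous version rather than discarding it.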
You want to download the file image.png from the S3 bucket my-bucket to your current local directory using AWS CLI. Which command is correct?
Look for the AWS CLI command that copies files between S3 and local.
The aws s3 cp command copies files between S3 and your local system, so aws s3 cp s3://my-bucket/image.png . downloads image.png to the current directory. The other commands are invalid or do not exist.
You want users to upload files directly to an S3 bucket without exposing your AWS credentials or giving full bucket access. Which approach is best?
Think about temporary, limited access for uploads without exposing credentials.
Pre-signed URLs are generated with your credentials and grant time-limited permission for a single operation on a single object key. Users can upload directly to S3 without ever seeing those credentials or receiving broader bucket access, improving security and reducing complexity.
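The core idea can be sketched with a simplified signing scheme: the server signs the key and an expiry time with a secret, so the client can upload without holding any credentials. This is an illustration only; real S3 pre-signed URLs use AWS Signature Version 4 (e.g. boto3's generate_presigned_url), and SECRET_KEY and the bucket name here are made-up values.

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

# Hypothetical server-side secret; real S3 signing uses your AWS
# secret access key via Signature Version 4.
SECRET_KEY = b"example-secret"

def presign_upload(bucket, key, expires_in=3600):
    """Return an upload URL valid only for this key, until expiry."""
    expires = int(time.time()) + expires_in
    msg = f"PUT\n{bucket}\n{key}\n{expires}".encode()
    sig = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
    query = urlencode({"Expires": expires, "Signature": sig})
    return f"https://{bucket}.s3.amazonaws.com/{key}?{query}"

def verify(bucket, key, expires, sig):
    """Server-side check: right key, unexpired, signature matches."""
    if int(expires) < time.time():
        return False  # link has expired
    msg = f"PUT\n{bucket}\n{key}\n{expires}".encode()
    expected = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

url = presign_upload("my-bucket", "uploads/photo.jpg")
print(url)  # signed URL, valid for one hour, for this one key only
```

Because the signature covers the bucket, key, and expiry, the URL cannot be reused for a different object or after it expires.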
You enable versioning on an S3 bucket. What happens when you upload a new object with the same key or delete an object?
Consider how versioning preserves object history.
With versioning enabled, each upload creates a new version. Deleting an object adds a delete marker but does not remove previous versions, allowing recovery: removing the delete marker (or copying an older version over the key) restores the object.
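Versioning semantics can be sketched the same way as the earlier overwrite example. Again this is a toy model, not the AWS API: each key keeps a list of versions, and a delete appends a marker instead of erasing history.

```python
# Toy sketch of S3 versioning (illustration only, not the AWS API).
bucket = {}  # key -> list of versions, newest last

def put_object(key, body):
    bucket.setdefault(key, []).append(body)

def delete_object(key):
    # A simple delete does not erase data; it adds a delete marker.
    bucket.setdefault(key, []).append("DELETE_MARKER")

def get_object(key):
    versions = bucket.get(key, [])
    if not versions or versions[-1] == "DELETE_MARKER":
        return None  # a plain GET sees the object as deleted
    return versions[-1]

put_object("report.pdf", "v1")
put_object("report.pdf", "v2")   # new version; v1 is preserved
delete_object("report.pdf")      # adds a delete marker

print(get_object("report.pdf"))  # None -- object appears deleted
print(bucket["report.pdf"][:2])  # ['v1', 'v2'] -- history is intact
bucket["report.pdf"].pop()       # removing the marker restores the object
print(get_object("report.pdf"))  # 'v2'
```

In real S3, removing the delete marker (a versioned delete of the marker itself) makes the most recent version current again.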
You need to upload a 10 GB video file to S3. Which method is best to ensure efficient upload and recovery from failures?
Think about handling large files and network interruptions.
Multipart upload splits large files into parts that are uploaded in parallel and lets you retry failed parts without restarting the entire upload, improving reliability and speed. It is required for objects larger than 5 GB (the single-PUT limit) and recommended for anything above roughly 100 MB.
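The retry pattern can be sketched in pure Python. This is not boto3 (which automates all of this via upload_file or TransferManager); it just shows the idea: split the data into parts, upload each independently, retry only the parts that failed, then stitch the result together.

```python
# Sketch of the multipart-upload pattern (illustration, not boto3).
PART_SIZE = 4  # bytes, tiny for the demo; real S3 parts are 5 MiB-5 GiB

def split_parts(data, part_size=PART_SIZE):
    return [data[i:i + part_size] for i in range(0, len(data), part_size)]

def upload_part(part, fail=False):
    if fail:
        raise ConnectionError("simulated network drop")
    return part  # pretend the server stored it and returned an ETag

data = b"0123456789abcdef"
parts = split_parts(data)
uploaded = {}  # part number -> stored part

for attempt in range(2):  # second pass retries only the failed parts
    for i, part in enumerate(parts):
        if i in uploaded:
            continue  # already stored; never re-sent
        try:
            # Fail part 2 on the first attempt to simulate a dropped link.
            uploaded[i] = upload_part(part, fail=(i == 2 and attempt == 0))
        except ConnectionError:
            pass  # leave it for the retry pass

reassembled = b"".join(uploaded[i] for i in sorted(uploaded))
print(reassembled == data)  # True -- complete despite the mid-upload failure
```

Only the dropped part is re-sent on the retry pass, which is exactly why multipart upload recovers cheaply from network interruptions on a 10 GB file.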