Creation of S3 bucket using the S3 console

Recipe Objective - Creation of S3 bucket using the S3 console

Amazon Simple Storage Service (Amazon S3) is a widely used object storage offering that provides industry-leading scalability, data availability, security, and performance. Customers of all sizes and industries can store and protect any amount of data for virtually any use case, such as data lakes, cloud-native applications, and mobile apps. Amazon S3 offers cost-effective storage classes and easy-to-use management features that can be used to optimize costs, organize data, and configure fine-tuned access controls to meet specific business, organizational, and compliance requirements. S3 Object Ownership is an Amazon S3 bucket-level setting that can be used to disable access control lists (ACLs) and take ownership of every object in the bucket, which simplifies access management for data stored in Amazon S3. By default, when another AWS account uploads an object to your S3 bucket, that account (the object writer) owns the object, has access to it, and can grant other users access to it through ACLs. When a bucket is created, the bucket owner enforced setting can be applied for Object Ownership to change this default behavior, so that ACLs are disabled and the bucket owner automatically owns every object in the bucket.

System Requirements

  • Any Operating System(Mac, Windows, Linux)

This recipe explains Amazon Web Services S3 and the process of creating an S3 bucket using the S3 console.

Creation of S3 Bucket using S3 Console

    • Sign in to the Amazon Web Services (AWS) Management Console and open the Amazon S3 console.

Go to the AWS Management Console and open the Amazon S3 console.

    • Choose the Create bucket option.

Select the Create bucket option, and the Create bucket wizard opens.

    • Enter a DNS-compliant name for your bucket in Bucket name.

Choose Bucket name and enter a DNS-compliant name for the bucket. The bucket name must be unique across all of Amazon S3, must be between 3 and 63 characters long, must not contain uppercase characters, and must start with a lowercase letter or number. After the bucket is created, its name cannot be changed.
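The naming rules listed above can be checked before calling the API. The sketch below validates only the rules stated here (length, lowercase, leading character); AWS enforces additional restrictions (no adjacent periods, no IP-address-style names, and others), so treat it as a pre-flight check rather than the full specification.

```python
import re

# Matches the rules stated above: 3-63 characters, lowercase letters,
# digits, hyphens, or periods, starting with a lowercase letter or digit.
BUCKET_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{2,62}$")

def is_valid_bucket_name(name: str) -> bool:
    """Pre-flight check against the naming rules listed in this recipe."""
    return bool(BUCKET_NAME_RE.fullmatch(name))

print(is_valid_bucket_name("my-data-lake-2024"))  # True
print(is_valid_bucket_name("MyBucket"))           # False (uppercase)
print(is_valid_bucket_name("ab"))                 # False (too short)
```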

    • In Region, select the AWS Region where the bucket should reside.

Choose a Region close to the user's geography to minimize latency and costs and to address regulatory requirements. Objects stored in a Region never leave that Region unless the user explicitly transfers them to another Region.
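The same Region choice applies when creating a bucket programmatically with boto3. A minimal sketch, assuming boto3 is installed and credentials are configured: the helper builds the `create_bucket` arguments, special-casing us-east-1, which is the default and must not be passed as a `LocationConstraint`.

```python
def create_bucket_kwargs(bucket_name: str, region: str) -> dict:
    """Build arguments for s3.create_bucket() for the chosen Region.

    us-east-1 is the default Region and rejects an explicit
    LocationConstraint, so it is special-cased here.
    """
    kwargs = {"Bucket": bucket_name}
    if region != "us-east-1":
        kwargs["CreateBucketConfiguration"] = {"LocationConstraint": region}
    return kwargs

# Usage with boto3 (requires AWS credentials; not executed here):
# import boto3
# s3 = boto3.client("s3", region_name="eu-west-1")
# s3.create_bucket(**create_bucket_kwargs("my-unique-bucket", "eu-west-1"))
```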

    • Under Object Ownership, disable or enable ACLs and control the ownership of objects uploaded to the bucket.

Under Object Ownership, choose whether to disable or enable ACLs and control the ownership of objects uploaded to the bucket. When ACLs are disabled (the bucket owner enforced setting), the bucket owner automatically owns and has full control over every object in the bucket; ACLs no longer affect permissions to data in the S3 bucket, and the bucket uses policies to define access control. When ACLs are enabled (the bucket owner preferred setting), the bucket owner owns and has full control over new objects that other accounts write to the bucket with the bucket-owner-full-control canned ACL.

    • Choose the Block Public Access settings to apply to the bucket under Bucket settings for Block Public Access.

Apply the Block Public Access settings to the bucket under Bucket settings for Block Public Access. Block Public Access settings that are enabled for the bucket are also enabled for all access points created on the bucket.
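Programmatically, these settings are the four flags of the `put_public_access_block` operation. A minimal sketch, assuming the boto3 client: enabling all four matches the console's "Block all public access" default.

```python
def block_all_public_access() -> dict:
    """PublicAccessBlockConfiguration matching the console's
    "Block all public access" default (all four flags on)."""
    return {
        "BlockPublicAcls": True,        # reject new public ACLs
        "IgnorePublicAcls": True,       # ignore existing public ACLs
        "BlockPublicPolicy": True,      # reject public bucket policies
        "RestrictPublicBuckets": True,  # restrict public-policy access
    }

# With boto3 (requires AWS credentials; not executed here):
# s3.put_public_access_block(
#     Bucket="my-unique-bucket",
#     PublicAccessBlockConfiguration=block_all_public_access(),
# )
```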

    • (Optional) Enable the S3 Object Lock.

To enable S3 Object Lock, choose Advanced settings and read the message that appears. S3 Object Lock can only be enabled for a bucket when the bucket is created, and once enabled it cannot be disabled. Enabling Object Lock also enables versioning for the bucket. After enabling Object Lock for the bucket, the Object Lock default retention and legal hold settings must be configured to protect new objects from being deleted or overwritten.
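In the API, the same constraint shows up as the `ObjectLockEnabledForBucket` flag on `create_bucket` and a default-retention rule passed to `put_object_lock_configuration`. A sketch of the parameter shapes, assuming boto3; the 30-day COMPLIANCE window is purely illustrative, not a recommendation.

```python
def create_locked_bucket_kwargs(bucket_name: str) -> dict:
    """Object Lock must be requested at creation time; it cannot be
    enabled later, cannot be disabled, and implicitly enables versioning."""
    return {"Bucket": bucket_name, "ObjectLockEnabledForBucket": True}

def default_retention(days: int = 30, mode: str = "COMPLIANCE") -> dict:
    """ObjectLockConfiguration payload with a default-retention rule
    (illustrative values) for put_object_lock_configuration."""
    return {
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": mode, "Days": days}},
    }

# With boto3 (requires AWS credentials; not executed here):
# s3.create_bucket(**create_locked_bucket_kwargs("my-unique-bucket"))
# s3.put_object_lock_configuration(
#     Bucket="my-unique-bucket",
#     ObjectLockConfiguration=default_retention(),
# )
```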

    • Choose the Create bucket option.

Finally, select the Create bucket option to create the bucket in Amazon S3.
