Creation of S3 bucket using the S3 console

Recipe Objective - Creation of S3 bucket using the S3 console

Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. Customers of all sizes and industries can store and protect any amount of data for virtually any use case, such as data lakes, cloud-native applications, and mobile apps. Amazon S3 offers cost-effective storage classes and easy-to-use management features that can be used to optimize costs, organize data, and configure fine-tuned access controls to meet specific business, organizational, and compliance requirements.

S3 Object Ownership is a bucket-level setting that can be used to disable access control lists (ACLs) and take ownership of every object in the bucket, which simplifies access management for data stored in Amazon S3. By default, when another AWS account uploads an object to your S3 bucket, that account (the object writer) owns the object, has access to it, and can grant other users access to it through ACLs. When a bucket is created, the bucket owner enforced setting can be applied for Object Ownership to change this default behaviour so that ACLs are disabled and the bucket owner automatically owns every object in the bucket.

System Requirements

  • Any Operating System (Mac, Windows, Linux)

This recipe explains Amazon Web Services S3 and the process of creating an S3 bucket using the S3 console.

Creation of S3 Bucket using S3 Console

    • Sign in to the Amazon Web Services (AWS) Management Console and open the Amazon S3 console.

Go to the AWS Management Console and then open the Amazon S3 console.

    • Choose the Create bucket option; the Create bucket wizard opens.

Select the Create bucket option, and the Create bucket wizard opens.

    • Enter a DNS-compliant name for the bucket in Bucket name.

In Bucket name, enter a DNS-compliant name for the bucket. The bucket name must be unique across all of Amazon S3, must be between 3 and 63 characters long, must not contain uppercase characters, and must start with a lowercase letter or number. After the bucket is created, its name cannot be changed.
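The naming rules above can be sketched as a quick client-side check before calling the API. This is a simplified sketch: S3 enforces a few additional rules (for example, names must not be formatted like IP addresses) that the regex below does not cover.

```python
import re

# Simplified check for the bucket-naming rules listed above: 3-63
# characters, lowercase letters, numbers, dots, and hyphens, starting
# and ending with a letter or number.
BUCKET_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name: str) -> bool:
    """Return True if the name passes the simplified S3 naming rules."""
    return bool(BUCKET_NAME_RE.match(name))
```

For example, `is_valid_bucket_name("My-Bucket")` returns False because uppercase characters are not allowed.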

    • In Region, select the AWS Region where the bucket should reside.

Choose a Region close to the user's geography to minimize latency and costs and to address regulatory requirements. Objects stored in a Region never leave that Region unless the user explicitly transfers them to another Region.
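When creating the bucket programmatically with boto3 instead of the console, the Region choice appears in the CreateBucket request. The helper below is a sketch (the bucket name and Region are illustrative); note the us-east-1 special case, where the CreateBucketConfiguration block must be omitted entirely.

```python
def create_bucket_params(bucket_name: str, region: str) -> dict:
    """Build keyword arguments for boto3's S3 create_bucket call.

    For us-east-1 the CreateBucketConfiguration block must be omitted;
    for any other Region it must name the Region explicitly.
    """
    params = {"Bucket": bucket_name}
    if region != "us-east-1":
        params["CreateBucketConfiguration"] = {"LocationConstraint": region}
    return params

# With AWS credentials configured, the call would look like:
#   import boto3
#   s3 = boto3.client("s3", region_name="eu-west-1")
#   s3.create_bucket(**create_bucket_params("example-bucket", "eu-west-1"))
```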

    • Under Object Ownership, disable or enable ACLs and control ownership of objects uploaded to the bucket.

Under Object Ownership, choose whether to disable or enable ACLs and control ownership of objects uploaded to the bucket. When ACLs are disabled (the bucket owner enforced setting), the bucket owner automatically owns and has full control over every object in the bucket. ACLs no longer affect permissions to data in the S3 bucket, and the bucket uses policies to define access control. When ACLs are enabled (the bucket owner preferred setting), the bucket owner owns and has full control over new objects that other accounts write to the bucket with the bucket-owner-full-control canned ACL.
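These console settings map directly to the ObjectOwnership parameter of the S3 CreateBucket API. A hedged boto3 sketch (the bucket name is illustrative):

```python
# Values accepted by the CreateBucket API's ObjectOwnership parameter,
# matching the console settings described above:
#   BucketOwnerEnforced  - ACLs disabled; the bucket owner owns every object
#   BucketOwnerPreferred - ACLs enabled; the owner owns objects uploaded
#                          with the bucket-owner-full-control canned ACL
#   ObjectWriter         - ACLs enabled; the uploading account owns the object
OBJECT_OWNERSHIP_SETTINGS = (
    "BucketOwnerEnforced",
    "BucketOwnerPreferred",
    "ObjectWriter",
)

# Sketch of creating a bucket with ACLs disabled (requires AWS credentials):
#   import boto3
#   boto3.client("s3").create_bucket(
#       Bucket="example-bucket",
#       ObjectOwnership="BucketOwnerEnforced",
#   )
```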

    • In the Bucket settings for Block Public Access, choose the Block Public Access settings to apply to the bucket.

Apply the Block Public Access settings to the bucket in the Bucket settings for Block Public Access. Block Public Access settings that are enabled for the bucket are also enabled for all access points created on the bucket.
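Enabling all four switches corresponds to the console's "Block all public access" checkbox. With boto3 the same settings can be applied through put_public_access_block; the sketch below uses an illustrative bucket name.

```python
def block_all_public_access() -> dict:
    """PublicAccessBlockConfiguration equivalent to the console's
    "Block all public access" setting."""
    return {
        "BlockPublicAcls": True,        # reject new public ACLs
        "IgnorePublicAcls": True,       # ignore any existing public ACLs
        "BlockPublicPolicy": True,      # reject public bucket policies
        "RestrictPublicBuckets": True,  # restrict access to public buckets
    }

# Sketch of applying it (requires AWS credentials):
#   import boto3
#   boto3.client("s3").put_public_access_block(
#       Bucket="example-bucket",
#       PublicAccessBlockConfiguration=block_all_public_access(),
#   )
```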

    • (Optional) Enable S3 Object Lock.

To enable S3 Object Lock, choose Advanced settings and read the message that appears. S3 Object Lock can only be enabled for a bucket when it is created, and once enabled it cannot be disabled. Enabling Object Lock also enables versioning for the bucket. After enabling Object Lock for the bucket, the Object Lock default retention and legal hold settings must be configured to protect new objects from being deleted or overwritten.
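In the API, Object Lock is requested at creation time with the ObjectLockEnabledForBucket flag, and a default retention rule can then be set. A hedged sketch follows; the retention mode and period (and the bucket name) are illustrative defaults, not recommendations.

```python
def object_lock_configuration(mode: str = "GOVERNANCE", days: int = 30) -> dict:
    """Default-retention rule for put_object_lock_configuration.

    The mode and number of days here are illustrative, not recommendations.
    """
    return {
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": mode, "Days": days}},
    }

# Sketch of the two calls (requires AWS credentials):
#   import boto3
#   s3 = boto3.client("s3")
#   s3.create_bucket(Bucket="example-bucket",
#                    ObjectLockEnabledForBucket=True)  # also enables versioning
#   s3.put_object_lock_configuration(
#       Bucket="example-bucket",
#       ObjectLockConfiguration=object_lock_configuration(),
#   )
```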

    • Choose the Create bucket option.

Finally, select the Create bucket option to create the bucket in Amazon S3.
