Different Types of Storage Classes in S3

What is Simple Storage Service (S3) in AWS?

Amazon Simple Storage Service (S3) is an object storage service offered by Amazon Web Services (AWS). Unlike a traditional file or block storage device, S3 stores the data a user uploads as objects inside buckets, with each object identified by a unique key. A single object can be up to 5 terabytes (TB) in size. Amazon S3 offers industry-leading scalability, data availability, security, and performance, and is widely used for online backup and archiving of data and applications on AWS.
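As a quick illustration, here is a minimal sketch of storing and fetching an object with boto3, the AWS SDK for Python. The bucket and key names are hypothetical placeholders, and the bucket is assumed to already exist:

```python
import boto3

# Create an S3 client using credentials from the environment or AWS config.
s3 = boto3.client("s3")

# Upload a local file as an object; objects live in buckets and are
# addressed by a bucket name plus a key.
s3.upload_file("backup.tar.gz", "my-example-bucket", "archives/backup.tar.gz")

# Download the object back to a local file.
s3.download_file("my-example-bucket", "archives/backup.tar.gz", "restored.tar.gz")
```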

Amazon S3 Storage Classes:

Amazon S3 offers several storage classes, each designed for a different access pattern and cost profile. The storage classes are:

  • Amazon S3 Standard
  • Amazon S3 Intelligent-Tiering
  • Amazon S3 Standard-Infrequent Access
  • Amazon S3 Glacier Instant Retrieval
  • Amazon S3 One Zone-Infrequent Access
  • Amazon S3 Glacier Flexible Retrieval
  • Amazon S3 Glacier Deep Archive
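
Before looking at each class, note that you can check which class an existing object is stored in. A minimal boto3 sketch, with hypothetical bucket and key names:

```python
import boto3

s3 = boto3.client("s3")

# head_object returns object metadata without downloading the body.
# The StorageClass field is omitted for objects stored in S3 Standard.
meta = s3.head_object(Bucket="my-example-bucket", Key="archives/backup.tar.gz")
print(meta.get("StorageClass", "STANDARD"))
```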
1. Amazon S3 Standard:

S3 Standard is the general-purpose storage class. It provides high durability, availability, and performance for frequently accessed data. Cloud applications, dynamic websites, content distribution, mobile and gaming applications, and big data analytics are all appropriate use cases for S3 Standard.

Characteristics of S3 Standard:

• Designed for 99.99% availability.

• Low latency and high throughput for object retrieval.

• Resilient against events that impact an entire Availability Zone.

• Designed for 99.999999999% (11 nines) durability.
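
S3 Standard is the default class, so no extra configuration is needed at upload time. A boto3 sketch (names hypothetical) showing that passing the storage class explicitly is equivalent to omitting it:

```python
import boto3

s3 = boto3.client("s3")

# STANDARD is the default storage class, so the StorageClass argument
# below could be omitted with the same effect.
s3.put_object(
    Bucket="my-example-bucket",
    Key="site/index.html",
    Body=b"<html>...</html>",
    StorageClass="STANDARD",
)
```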

2. Amazon S3 Intelligent-Tiering:

S3 Intelligent-Tiering is the first cloud storage class that automatically reduces storage costs at a granular, per-object level, by moving objects between access tiers based on how frequently they are accessed. It delivers this cost optimization without performance impact or operational overhead, and it charges no retrieval fees.

Characteristics of S3 Intelligent-Tiering:

• No monitoring effort is required; objects are moved between tiers automatically for a small monthly monitoring and automation charge.

• There is no minimum storage duration and there are no retrieval fees.

• Designed for 99.999999999% (11 nines) durability.
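
To opt an object into Intelligent-Tiering, set the storage class when uploading it. A minimal boto3 sketch, with hypothetical bucket, key, and file names:

```python
import boto3

s3 = boto3.client("s3")

# Objects uploaded with this class are moved between access tiers
# automatically, based on how often they are read.
with open("app.log", "rb") as f:
    s3.put_object(
        Bucket="my-example-bucket",
        Key="logs/2024/app.log",
        Body=f,
        StorageClass="INTELLIGENT_TIERING",
    )
```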

3. Amazon S3 Standard-Infrequent Access:

S3 Standard-IA is intended for data that is accessed less frequently but requires rapid access when it is needed. It delivers the high durability, high throughput, and low latency of S3 Standard at a lower per-GB storage price, which makes it ideal for long-term backups and as a store for disaster recovery files.

Characteristics of S3 Standard-Infrequent Access:

• The same low latency and high throughput as S3 Standard.

• Data is stored redundantly across a minimum of three Availability Zones.

• Designed for 99.9% availability and 99.999999999% (11 nines) durability.
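
Data is often moved into Standard-IA by a lifecycle rule rather than uploaded there directly. A boto3 sketch, assuming a hypothetical bucket and prefix:

```python
import boto3

s3 = boto3.client("s3")

# Transition objects under the "backups/" prefix to Standard-IA once
# they are 30 days old (the minimum age S3 allows for this transition).
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "backups-to-standard-ia",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            }
        ]
    },
)
```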

4. Amazon S3 Glacier Instant Retrieval:

S3 Glacier Instant Retrieval is an archive storage class that provides the lowest-cost storage for long-lived data that is rarely accessed but still requires immediate access. Among the archive classes it offers the quickest access: data is retrieved in milliseconds, with the same performance as S3 Standard.

Characteristics of S3 Glacier Instant Retrieval:

• Data is retrieved in milliseconds.

• Objects are billed at a minimum size of 128 KB.

• Designed for 99.9% availability.

• Designed for 99.999999999% (11 nines) durability.
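
Because retrieval is instant, objects in this class are read with a plain get_object call, with no restore step. A boto3 sketch (bucket, key, and file names hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Upload directly into Glacier Instant Retrieval.
with open("scan-001.dcm", "rb") as f:
    s3.put_object(
        Bucket="my-example-bucket",
        Key="medical/scan-001.dcm",
        Body=f,
        StorageClass="GLACIER_IR",
    )

# Unlike the other Glacier classes, the object is immediately readable.
obj = s3.get_object(Bucket="my-example-bucket", Key="medical/scan-001.dcm")
data = obj["Body"].read()
```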

5. Amazon S3 One Zone-Infrequent Access:

S3 One Zone-IA, in contrast to other S3 Storage Classes that store data in a minimum of three Availability Zones, stores data in a single Availability Zone and costs 20% less than S3 Standard-IA. It's an excellent choice for storing secondary backup copies of on-premises data or data that can be easily recreated. S3 One Zone-IA offers the same high durability, throughput, and latency as S3 Standard.

Characteristics of S3 One Zone-Infrequent Access:

• Supports SSL (Secure Sockets Layer) for data in transit and encryption for data at rest.

• Data can be lost if the Availability Zone is destroyed.

• Designed for 99.5% availability.

• Designed for 99.999999999% (11 nines) durability.
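
Uploading into One Zone-IA works the same way as the other classes; a short boto3 sketch with hypothetical names:

```python
import boto3

s3 = boto3.client("s3")

# ONEZONE_IA stores the object in a single Availability Zone, so it is
# best reserved for data that can be regenerated, such as thumbnails.
with open("img-42-thumb.jpg", "rb") as f:
    s3.put_object(
        Bucket="my-example-bucket",
        Key="thumbnails/img-42.jpg",
        Body=f,
        StorageClass="ONEZONE_IA",
    )
```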

6. Amazon S3 Glacier Flexible Retrieval:

Compared with S3 Glacier Instant Retrieval, it offers lower-cost storage. It is an appropriate solution for backup data that needs to be retrieved only a few times per year, with retrieval times ranging from minutes to hours depending on the retrieval tier.

Characteristics of S3 Glacier Flexible Retrieval:

• Bulk retrievals are free, making this class well suited to retrieving large data sets.

• Retrieval times are configurable, from minutes (expedited) to hours (standard and bulk).

• A good fit for backup and disaster recovery use cases.

• Designed for 99.99% availability.

• Designed for 99.999999999% (11 nines) durability.
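
Objects in Glacier Flexible Retrieval must be restored before they can be read. A boto3 sketch of requesting a bulk restore (bucket and key hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Request a temporary restored copy of an archived object. Tier can be
# "Expedited" (minutes), "Standard" (hours), or "Bulk" (free, slowest);
# the restored copy stays readable for the number of days requested.
s3.restore_object(
    Bucket="my-example-bucket",
    Key="archives/2020.tar",
    RestoreRequest={"Days": 7, "GlacierJobParameters": {"Tier": "Bulk"}},
)
```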

7. Amazon S3 Glacier Deep Archive:

The Glacier Deep Archive storage class is intended to provide long-term, secure storage for large amounts of data at a price competitive with low-cost off-premises tape archival services, so you no longer have to manage such services yourself. Despite being the coldest tier, access is straightforward: data can be restored within 12 hours using a standard retrieval. Object replication is also supported for S3 Glacier Deep Archive.

Characteristics of S3 Glacier Deep Archive:

• Secure, long-term storage at the lowest cost of any S3 storage class.

• Standard retrievals complete within 12 hours.

• Designed for 99.99% availability.

• Designed for 99.999999999% (11 nines) durability.
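
Deep Archive is typically the final stop in a lifecycle policy. A boto3 sketch transitioning old data there; the bucket, prefix, and 90-day cutoff are hypothetical choices:

```python
import boto3

s3 = boto3.client("s3")

# Move objects under "compliance/" into Deep Archive after 90 days;
# restoring them later takes up to 12 hours with a standard retrieval.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "compliance-to-deep-archive",
                "Status": "Enabled",
                "Filter": {"Prefix": "compliance/"},
                "Transitions": [{"Days": 90, "StorageClass": "DEEP_ARCHIVE"}],
            }
        ]
    },
)
```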
