Different Types of Storage Classes in Amazon S3

What is Amazon Simple Storage Service (S3)?

Amazon Simple Storage Service (S3) stores data in the form of objects; it is not a conventional file or block storage service. Amazon S3 offers industry-leading scalability, data availability, security, and performance. Data uploaded to S3 is stored as objects, each identified by a unique key, and objects are organized into containers called buckets. A single object can be up to 5 terabytes (TB) in size. The service is widely used for online backup and archiving of data and applications on Amazon Web Services (AWS).
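
To make this concrete, here is a minimal sketch using the AWS SDK for Python (boto3) that creates a bucket and uploads one object. The bucket name, region, and object key are hypothetical placeholders, not values from this article; bucket names must be globally unique.

```python
import boto3

# Create an S3 client (credentials come from the environment,
# ~/.aws/credentials, or an attached IAM role).
s3 = boto3.client("s3", region_name="us-east-1")

# Placeholder bucket name -- real bucket names must be globally unique.
s3.create_bucket(Bucket="my-example-bucket-1234")

# Upload a small object; each object is identified by its key within the bucket.
s3.put_object(
    Bucket="my-example-bucket-1234",
    Key="docs/hello.txt",
    Body=b"Hello, S3!",
)
```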

Amazon S3 Storage Classes:

Each storage class trades off access latency, availability, and cost to suit a different access pattern. The storage classes are as follows:

  • Amazon S3 Standard
  • Amazon S3 Intelligent-Tiering
  • Amazon S3 Standard-Infrequent Access
  • Amazon S3 Glacier Instant Retrieval
  • Amazon S3 One Zone-Infrequent Access
  • Amazon S3 Glacier Flexible Retrieval
  • Amazon S3 Glacier Deep Archive
1. Amazon S3 Standard:

S3 Standard is the general-purpose storage class, offering high durability, availability, and performance for frequently accessed data. Typical use cases include cloud applications, dynamic websites, content distribution, mobile and gaming applications, and big data analytics.

Characteristics of S3 Standard:

• Designed for 99.99% availability.
• Delivers low latency and high throughput for object retrieval.
• Resilient against events that affect an entire Availability Zone, since data is stored redundantly across multiple zones.
• Designed for 99.999999999% (11 nines) durability.
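
Since S3 Standard is the default class, no extra configuration is needed at upload time. A small sketch (bucket and key names are hypothetical) that confirms which class an object is stored in:

```python
import boto3

s3 = boto3.client("s3")

# Objects uploaded without an explicit StorageClass land in S3 Standard.
s3.put_object(Bucket="my-example-bucket-1234", Key="app/data.json", Body=b"{}")

# head_object reports the storage class for non-Standard objects;
# S3 omits the StorageClass field entirely for S3 Standard objects.
resp = s3.head_object(Bucket="my-example-bucket-1234", Key="app/data.json")
print(resp.get("StorageClass", "STANDARD"))
```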

2. Amazon S3 Intelligent-Tiering:

S3 Intelligent-Tiering is the first cloud storage class that reduces storage costs automatically, moving objects between access tiers based on how frequently they are accessed, without performance impact or operational overhead. Cost savings are applied at the level of individual objects, and there are no retrieval fees.

Characteristics of S3 Intelligent-Tiering:

• No monitoring or manual tiering is required; objects move between tiers automatically for a small per-object monitoring and automation charge.
• No retrieval fees and no minimum storage duration.
• Designed for 99.999999999% durability.
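
To opt an individual object into Intelligent-Tiering at upload time, you can pass the storage class explicitly. A minimal sketch, assuming a hypothetical bucket and local file:

```python
import boto3

s3 = boto3.client("s3")

# Store the object in Intelligent-Tiering; S3 then moves it between
# access tiers automatically based on how often it is read.
with open("events.parquet", "rb") as f:  # hypothetical local file
    s3.put_object(
        Bucket="my-example-bucket-1234",
        Key="analytics/events.parquet",
        Body=f,
        StorageClass="INTELLIGENT_TIERING",
    )
```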

3. Amazon S3 Standard-Infrequent Access:

S3 Standard-IA is intended for data that is accessed less frequently but requires rapid access when it is needed. It offers the same high durability, high throughput, and low latency as S3 Standard, at a lower per-GB storage price with a per-GB retrieval fee. It is ideal for long-term backups and as a data store for disaster recovery files.

Characteristics of S3 Standard-Infrequent Access:

• Same low latency and high throughput as S3 Standard.
• Data is stored redundantly across multiple Availability Zones.
• Designed for 99.999999999% durability and 99.9% availability.
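
Standard-IA is often reached through a lifecycle rule rather than direct uploads. A hedged sketch (bucket name and prefix are hypothetical) that transitions objects to Standard-IA once they are 30 days old, the minimum age for this transition:

```python
import boto3

s3 = boto3.client("s3")

# Transition objects under the "backups/" prefix to Standard-IA
# after 30 days in S3 Standard.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket-1234",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-standard-ia",
                "Filter": {"Prefix": "backups/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"}
                ],
            }
        ]
    },
)
```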

4. Amazon S3 Glacier Instant Retrieval:

S3 Glacier Instant Retrieval is an archive storage class that provides the lowest-cost storage for long-lived data that is rarely accessed but still needs to be retrieved in milliseconds. It is the fastest of the Glacier archive classes, delivering the same millisecond data retrieval as S3 Standard.

Characteristics of S3 Glacier Instant Retrieval:

• Data is retrieved in milliseconds.
• Objects are billed at a minimum size of 128 KB.
• Designed for 99.9% availability.
• Designed for 99.999999999% durability.
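
A minimal sketch of archiving to Glacier Instant Retrieval and reading the object straight back, with hypothetical bucket, key, and file names:

```python
import boto3

s3 = boto3.client("s3")

# Archive the object in Glacier Instant Retrieval at upload time.
with open("report-2023.pdf", "rb") as f:  # hypothetical local file
    s3.put_object(
        Bucket="my-example-bucket-1234",
        Key="archive/report-2023.pdf",
        Body=f,
        StorageClass="GLACIER_IR",
    )

# Unlike the other Glacier classes, the object can be read back
# immediately with millisecond first-byte latency -- no restore step.
body = s3.get_object(
    Bucket="my-example-bucket-1234", Key="archive/report-2023.pdf"
)["Body"].read()
```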

5. Amazon S3 One Zone-Infrequent Access:

In contrast to other S3 storage classes, which store data in a minimum of three Availability Zones, S3 One Zone-IA stores data in a single Availability Zone and costs 20% less than S3 Standard-IA. It is an excellent choice for secondary backup copies of on-premises data, or for data that can easily be recreated. Within its single zone, it offers the same high durability, throughput, and low latency as S3 Standard.

Characteristics of S3 One Zone-Infrequent Access:

• Supports SSL/TLS for data in transit and encryption for data at rest.
• Data can be lost if the single Availability Zone is destroyed.
• Designed for 99.5% availability.
• Designed for 99.999999999% durability.
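
A short sketch of placing a secondary backup copy in One Zone-IA, assuming a hypothetical bucket and local backup file:

```python
import boto3

s3 = boto3.client("s3")

# upload_file handles multipart uploads for large files; ExtraArgs
# sets the storage class to One Zone-IA for this secondary copy.
s3.upload_file(
    Filename="nightly-backup.tar.gz",  # hypothetical local file
    Bucket="my-example-bucket-1234",
    Key="secondary/nightly-backup.tar.gz",
    ExtraArgs={"StorageClass": "ONEZONE_IA"},
)
```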

6. Amazon S3 Glacier Flexible Retrieval:

Compared with S3 Glacier Instant Retrieval, it offers cheaper storage and is a good fit for backup data that needs to be retrieved only a few times per year. Data can be accessed in minutes using expedited retrievals, or within hours using the standard and bulk options.

Characteristics of S3 Glacier Flexible Retrieval:

• Bulk retrievals are free.
• Data is stored redundantly across multiple Availability Zones, so it remains accessible even if one zone is destroyed.
• Best suited for backup and disaster recovery use cases that retrieve large data sets.
• Designed for 99.99% availability.
• Designed for 99.999999999% durability.
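
Objects in Glacier Flexible Retrieval must be restored before they can be read. A hedged sketch (bucket and key are hypothetical) that issues a standard-tier restore and polls its status:

```python
import boto3

s3 = boto3.client("s3")

# Request a temporary restored copy. Tier options:
# "Expedited" (minutes), "Standard" (3-5 hours), "Bulk" (5-12 hours, free).
s3.restore_object(
    Bucket="my-example-bucket-1234",
    Key="archive/logs-2020.tar.gz",
    RestoreRequest={
        "Days": 7,  # keep the restored copy available for 7 days
        "GlacierJobParameters": {"Tier": "Standard"},
    },
)

# Poll the restore status via the Restore header.
resp = s3.head_object(
    Bucket="my-example-bucket-1234", Key="archive/logs-2020.tar.gz"
)
print(resp.get("Restore"))  # e.g. 'ongoing-request="true"' while in progress
```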

7. Amazon S3 Glacier Deep Archive:

The Glacier Deep Archive storage class is intended for long-term, secure storage of large amounts of data at a price competitive with low-cost off-premises tape archival services, removing the need to manage expensive tape infrastructure. It is the lowest-cost S3 storage class, and data can be restored within 12 hours using the standard retrieval tier. S3 Glacier Deep Archive also supports object replication.

Characteristics of S3 Glacier Deep Archive:

• Secure, lowest-cost archival storage.
• Standard retrievals complete within 12 hours; bulk retrievals within 48 hours.
• Designed for 99.99% availability.
• Designed for 99.999999999% durability.
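
A sketch, with hypothetical bucket, prefix, and key names, that ages objects into Deep Archive via a lifecycle rule and later restores one of them:

```python
import boto3

s3 = boto3.client("s3")

# Move objects under "compliance/" to Deep Archive after 180 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket-1234",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-deep-archive",
                "Filter": {"Prefix": "compliance/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 180, "StorageClass": "DEEP_ARCHIVE"}
                ],
            }
        ]
    },
)

# Restoring from Deep Archive: "Standard" completes within 12 hours,
# "Bulk" within 48 hours (expedited retrieval is not available here).
s3.restore_object(
    Bucket="my-example-bucket-1234",
    Key="compliance/audit-2018.zip",
    RestoreRequest={"Days": 3, "GlacierJobParameters": {"Tier": "Standard"}},
)
```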

