Explain the features of DynamoDB

This recipe explains the features of Amazon DynamoDB.

Recipe Objective - Explain the features of DynamoDB

Amazon DynamoDB is a widely used, fully managed proprietary NoSQL database service that supports key-value and document data structures and is offered by Amazon.com as part of the Amazon Web Services portfolio. Amazon DynamoDB exposes a similar data model to, and derives its name from, Dynamo, but has a different underlying implementation: Dynamo had a multi-leader design requiring clients to resolve version conflicts, whereas DynamoDB uses synchronous replication across multiple data centres for high durability and availability. Amazon DynamoDB was announced by Amazon CTO Werner Vogels on January 18, 2012, and is presented as an evolution of Amazon SimpleDB.

Amazon DynamoDB offers reliable performance even as it scales, a managed experience so users won't be SSH-ing into servers to upgrade crypto libraries, and a small, simple API allowing for simple key-value access as well as more advanced query patterns. Amazon DynamoDB offers built-in security, continuous backups, automated multi-region replication, in-memory caching and data export tools. It secures users' data with encryption at rest and automatic backup and restore, with guaranteed reliability backed by an SLA of 99.99% availability.


Benefits of Amazon DynamoDB

  • Performance and scalability: Amazon DynamoDB offers users the ability to auto-scale by tracking how close usage is to the upper bounds. This allows users' systems to adjust to the amount of data traffic, helping users avoid performance issues while reducing costs.

  • Access control: As data gets more specific and personal, it becomes more important to have effective access control; users want to apply access control to the right people without creating bottlenecks in other people's workflows. The fine-grained access control of DynamoDB allows the table owner to gain a higher level of control over data in the table.

  • Streams: Amazon DynamoDB Streams allows developers to receive and update item-level data before and after changes, because DynamoDB Streams provides a time-ordered sequence of the changes made to the data within the last 24 hours. With streams, users can easily use the API to update a full-text search data store such as Elasticsearch, push incremental backups to Amazon S3, or maintain an up-to-date read cache.
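Fine-grained access control is expressed through IAM policy conditions. The sketch below builds such a policy as a plain dictionary, using the `dynamodb:LeadingKeys` condition key so each signed-in user can only touch items whose partition key equals their own Cognito identity ID; the table ARN and table name ("UserProfiles") are hypothetical.

```python
import json

def fine_grained_policy(table_arn):
    """Build an IAM policy limiting each user to items keyed by their own
    Cognito identity ID, via the dynamodb:LeadingKeys condition key."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
            "Resource": table_arn,
            "Condition": {
                "ForAllValues:StringEquals": {
                    # Substituted at request time with the caller's identity
                    "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
                }
            },
        }],
    }

policy = fine_grained_policy(
    "arn:aws:dynamodb:us-east-1:123456789012:table/UserProfiles")
print(json.dumps(policy, indent=2))
```

Attached to a role assumed through Amazon Cognito, a policy of this shape scopes every `GetItem`, `PutItem`, and `Query` call to the caller's own partition of the table.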

System Requirements

  • Any Operating System(Mac, Windows, Linux)

This recipe explains Amazon DynamoDB and the features of Amazon DynamoDB.

Features of Amazon DynamoDB

    • It provides key-value and document data models

Amazon DynamoDB supports both key-value and document data models. This enables DynamoDB to have a flexible schema, so each row can have any number of columns at any point in time. This allows users to easily adapt their tables as business requirements change, without having to redefine the table schema as they would in relational databases.
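The flexible schema can be sketched with two items in a hypothetical "Products" table that share only the key and type attributes; DynamoDB accepts both without any column definition beyond the key schema.

```python
# Two rows in the same (hypothetical) "Products" table with different
# attribute sets -- DynamoDB imposes no fixed column list beyond the key.
book = {"ProductId": "B-1", "Type": "Book", "Title": "Dune", "Pages": 412}
song = {"ProductId": "S-1", "Type": "Song", "Artist": "Miles Davis",
        "DurationSec": 321, "Album": "Kind of Blue"}

def shared_and_distinct(a, b):
    """Return the attribute names the two items share and the ones that
    exist in only one of them."""
    shared = sorted(set(a) & set(b))
    distinct = sorted(set(a) ^ set(b))
    return shared, distinct

# With boto3 (requires AWS credentials) the writes would simply be:
#   table = boto3.resource("dynamodb").Table("Products")
#   table.put_item(Item=book)
#   table.put_item(Item=song)
print(shared_and_distinct(book, song))
```

Only `ProductId` and `Type` are common to both items; everything else is item-specific, which is exactly the schema flexibility described above.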

    • It provides microsecond latency with DynamoDB Accelerator (DAX)

Amazon DynamoDB Accelerator (DAX) is a fully managed in-memory cache that delivers fast read performance for DynamoDB tables at scale. Using DAX, users can improve the read performance of their DynamoDB tables by up to 10 times, taking the time required for reads from milliseconds to microseconds, even at millions of requests per second.
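A key design point of DAX is that it speaks the same low-level DynamoDB API, so read code can be written once and handed either a boto3 client or a DAX client. The sketch below assumes a hypothetical "UserProfiles" table; the cluster endpoint in the comment is likewise made up.

```python
# Reads through DAX use the same GetItem API shape as plain DynamoDB;
# only the client object differs, so data-access code can be shared.
def get_profile(client, user_id):
    """Fetch one item by key; works with a boto3 or DAX client alike."""
    resp = client.get_item(
        TableName="UserProfiles",            # hypothetical table
        Key={"UserId": {"S": user_id}},
        ConsistentRead=False,                # DAX caches eventually consistent reads
    )
    return resp.get("Item", {})

# With the amazondax package and a provisioned cluster (endpoint is
# hypothetical), the cached client would be constructed like this:
#   from amazondax import AmazonDaxClient
#   dax = AmazonDaxClient(
#       endpoints=["my-cluster.abc123.dax-clusters.us-east-1.amazonaws.com:8111"])
#   profile = get_profile(dax, "u-42")
```

Because the call shape is unchanged, switching an existing read path to DAX is mostly a matter of swapping the client object.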

    • It offers global replication automated with global tables

Amazon DynamoDB global tables replicate users' data automatically across their choice of AWS Regions and automatically scale capacity to accommodate their workloads. With global tables, globally distributed applications can access data locally in the selected Regions to get single-digit-millisecond read and write performance.
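With the current (version 2019.11.21) global tables, adding a replica Region is a single `UpdateTable` call. The helper below builds the request parameters as a plain dictionary, with the live boto3 call hedged into a comment; the table name and Regions are illustrative.

```python
def add_replica_params(table_name, region):
    """Build UpdateTable parameters that add one replica Region to a
    (version 2019.11.21) global table."""
    return {
        "TableName": table_name,
        "ReplicaUpdates": [
            {"Create": {"RegionName": region}}  # new replica in this Region
        ],
    }

params = add_replica_params("UserProfiles", "eu-west-1")
# With boto3 (requires credentials), issued against the source Region:
#   boto3.client("dynamodb", region_name="us-east-1").update_table(**params)
print(params)
```

Once the replica is active, writes in either Region are replicated to the other, and applications read and write against their nearest Region.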

    • It offers advanced streaming applications with Kinesis Data Streams for DynamoDB

Amazon Kinesis Data Streams for DynamoDB captures item-level changes in users' DynamoDB tables as a Kinesis data stream. This feature enables users to build advanced streaming applications such as real-time log aggregation, real-time business analytics, and Internet of Things data capture. Through Kinesis Data Streams, users can also use Amazon Kinesis Data Firehose to deliver DynamoDB data automatically to other AWS services.
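Each record DynamoDB writes to the Kinesis stream carries the change type (`INSERT`, `MODIFY`, `REMOVE`) and the item's old and new images. The sketch below shows a minimal consumer-side helper over that record shape (the sample records are made up, and the record fields shown are a simplified subset); the boto3 call that turns the feature on is in the comment.

```python
# Enabling the feature is one API call (requires AWS credentials; ARNs here
# are hypothetical):
#   boto3.client("dynamodb").enable_kinesis_streaming_destination(
#       TableName="UserProfiles",
#       StreamArn="arn:aws:kinesis:us-east-1:123456789012:stream/profile-changes")

def new_images(change_records):
    """Collect NewImage payloads from INSERT/MODIFY change records,
    skipping deletions, which carry only an OldImage."""
    return [
        r["dynamodb"]["NewImage"]
        for r in change_records
        if r["eventName"] in ("INSERT", "MODIFY")
    ]

sample = [
    {"eventName": "INSERT",
     "dynamodb": {"NewImage": {"UserId": {"S": "u-1"}}}},
    {"eventName": "REMOVE",
     "dynamodb": {"OldImage": {"UserId": {"S": "u-2"}}}},
]
print(new_images(sample))
```

A real consumer would receive these records from the Kinesis stream (e.g. via a Lambda trigger or Kinesis Data Firehose) rather than from an in-memory list.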

    • It offers encryption at rest

Amazon DynamoDB encrypts all customer data at rest by default. Encryption at rest enhances the security of users' data by using encryption keys stored in AWS Key Management Service. With encryption at rest, users can build security-sensitive applications that meet strict encryption compliance and regulatory requirements. The default encryption using AWS owned keys is provided at no additional charge.
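Encryption settings are chosen at table creation through the `SSESpecification` parameter. The helper below builds `CreateTable` parameters for a hypothetical table, optionally pointing at a customer managed KMS key (the key alias is made up); omitting `SSESpecification` entirely still yields default encryption with an AWS owned key.

```python
def encrypted_table_params(table_name, kms_key_id=None):
    """Build CreateTable parameters with encryption at rest explicitly
    configured via AWS KMS."""
    sse = {"Enabled": True, "SSEType": "KMS"}
    if kms_key_id:
        # Customer managed key; without it, the AWS managed aws/dynamodb
        # key is used.
        sse["KMSMasterKeyId"] = kms_key_id
    return {
        "TableName": table_name,
        "AttributeDefinitions": [
            {"AttributeName": "UserId", "AttributeType": "S"}],
        "KeySchema": [{"AttributeName": "UserId", "KeyType": "HASH"}],
        "BillingMode": "PAY_PER_REQUEST",
        "SSESpecification": sse,
    }

params = encrypted_table_params("UserProfiles", "alias/my-app-key")
# With boto3 (requires AWS credentials):
#   boto3.client("dynamodb").create_table(**params)
print(params["SSESpecification"])
```

Choosing a customer managed key trades the free default for full control over key rotation, key policy, and CloudTrail visibility of key usage.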

