Explain the features of DynamoDB

This recipe explains the features of DynamoDB.

Recipe Objective - Explain the features of DynamoDB

Amazon DynamoDB is a widely used, fully managed proprietary NoSQL database service that supports key-value and document data structures, offered by Amazon.com as part of the Amazon Web Services portfolio. DynamoDB exposes a similar data model to, and derives its name from, Dynamo, but has a different underlying implementation: Dynamo had a multi-leader design that required clients to resolve version conflicts, whereas DynamoDB uses synchronous replication across multiple data centres for high durability and availability. Amazon DynamoDB was announced by Amazon CTO Werner Vogels on January 18, 2012, and is presented as an evolution of Amazon SimpleDB. DynamoDB offers reliable performance even as it scales, a managed experience (users won't be SSH-ing into servers to upgrade crypto libraries), and a small, simple API that allows simple key-value access as well as more advanced query patterns. It also offers built-in security, continuous backups, automated multi-region replication, in-memory caching, and data export tools. DynamoDB secures users' data with encryption at rest and automatic backup and restore, with guaranteed reliability backed by an SLA of 99.99% availability.


Benefits of Amazon DynamoDB

  • Auto scaling: Amazon DynamoDB offers users the ability to auto-scale by tracking how close usage is to the upper bounds. This lets systems adjust to the amount of data traffic, helping users avoid performance issues while reducing costs, and thus helps with both performance and scalability.
  • Fine-grained access control: As data becomes more specific and personal, effective access control becomes more important, and users want to apply it to the right people without creating bottlenecks in other people's workflows. DynamoDB's fine-grained access control allows the table owner to gain a higher level of control over the data in a table.
  • DynamoDB Streams: Streams provide a time-ordered sequence of the changes made to the data within the last 24 hours, allowing developers to receive the item-level data before and after changes. With streams, users can easily propagate changes to a full-text search data store such as Elasticsearch, push incremental backups to Amazon S3, or maintain an up-to-date read cache.
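The auto-scaling behaviour described above follows a target-tracking idea: when consumed capacity drifts away from a target utilization of the provisioned capacity, capacity is adjusted to bring utilization back to the target. The sketch below is a toy model of that idea, not the AWS Application Auto Scaling API; the function name, target value, and capacity bounds are illustrative assumptions.

```python
# Illustrative sketch of target-tracking auto scaling (a toy model,
# not the AWS Application Auto Scaling API).

def next_capacity(provisioned, consumed, target=0.7,
                  min_cap=5, max_cap=1000):
    """Return the provisioned capacity needed to bring utilization
    (consumed / provisioned) back toward the target ratio."""
    utilization = consumed / provisioned
    if abs(utilization - target) < 0.05:
        return provisioned            # close enough: leave capacity alone
    desired = round(consumed / target)
    return max(min_cap, min(max_cap, desired))  # clamp to configured bounds

print(next_capacity(provisioned=100, consumed=90))  # high utilization: scale up
print(next_capacity(provisioned=100, consumed=35))  # low utilization: scale down
```

In the managed service, users only set the target utilization and the minimum/maximum capacity; DynamoDB performs the adjustment automatically.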

System Requirements

  • Any Operating System(Mac, Windows, Linux)

This recipe explains Amazon DynamoDB and the Features of Amazon DynamoDB.

Features of Amazon DynamoDB

    • It provides Key-value and document data models

Amazon DynamoDB supports both key-value and document data models, which gives DynamoDB a flexible schema: each row can have any number of columns at any point in time. This allows users to easily adapt their tables as business requirements change, without having to redefine the table schema as they would in a relational database.
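The flexible-schema point can be illustrated with plain Python: items in the same table only need to share the primary key, and any item can carry nested document attributes. This is a conceptual sketch using a dict as a stand-in for a table; the key name `user_id` and the attributes are hypothetical.

```python
# Illustrative sketch: a dict stands in for a DynamoDB table keyed by
# a partition key ("user_id", a hypothetical key name).

table = {}

def put_item(item):
    table[item["user_id"]] = item

# Two items with two different shapes -- no schema migration required.
put_item({"user_id": "u1", "name": "Ada", "email": "ada@example.com"})
put_item({"user_id": "u2", "name": "Lin",
          "address": {"city": "Seattle", "zip": "98101"},  # nested document
          "tags": ["admin", "beta"]})                      # list attribute

print(table["u2"]["address"]["city"])
```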

    • It provides Microsecond latency with DynamoDB Accelerator

Amazon DynamoDB Accelerator (DAX) is a fully managed in-memory cache that delivers fast read performance for DynamoDB tables at scale. Using DAX, users can improve the read performance of their tables by up to 10 times, taking read latency from milliseconds to microseconds, even at millions of requests per second.
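The speed-up comes from the read-through caching pattern: repeat reads are served from memory, and only cache misses reach the table. The sketch below is a toy model of that pattern, not the DAX client API.

```python
# Illustrative sketch of read-through caching (a toy model, not the
# DAX client API).

class ReadThroughCache:
    def __init__(self, backing_store):
        self.store = backing_store   # the "table": millisecond-latency reads
        self.cache = {}              # in-memory: microsecond-latency reads
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.cache:
            self.hits += 1
            return self.cache[key]
        self.misses += 1
        value = self.store.get(key)  # miss: fall back to the table
        self.cache[key] = value      # subsequent reads hit the cache
        return value

table = {"pk#1": {"name": "widget"}}
dax = ReadThroughCache(table)
dax.get("pk#1")                      # first read misses
dax.get("pk#1")                      # second read is served from memory
print(dax.hits, dax.misses)
```

The managed DAX service also handles invalidation, eviction, and cluster management, which this sketch omits.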

    • It offers automated global replication with global tables

Amazon DynamoDB global tables replicate users' data automatically across their choice of AWS Regions and automatically scale capacity to accommodate their workloads. With global tables, globally distributed applications can access data locally in the selected Regions to get single-digit-millisecond read and write performance.
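Conceptually, a write to a global table lands in every selected Region, so each Region can serve local reads. The sketch below models that with dicts; the Region names are real AWS Region identifiers used here only as labels, and DynamoDB performs this replication automatically rather than in application code.

```python
# Illustrative sketch: one replica dict per selected AWS Region.

regions = {"us-east-1": {}, "eu-west-1": {}, "ap-south-1": {}}

def replicated_put(key, item):
    for replica in regions.values():
        replica[key] = dict(item)   # each replica stores an independent copy

replicated_put("order#42", {"status": "shipped"})

# Any Region can now serve the read locally.
print(regions["eu-west-1"]["order#42"]["status"])
```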

    • It offers advanced streaming applications with Kinesis Data Streams for DynamoDB

Amazon Kinesis Data Streams for DynamoDB captures item-level changes in users' DynamoDB tables as a Kinesis data stream. This feature enables users to build advanced streaming applications such as real-time log aggregation, real-time business analytics, and Internet of Things data capture. Through Kinesis Data Streams, users can also use Amazon Kinesis Data Firehose to deliver DynamoDB data automatically to other AWS services.
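What a stream consumer sees is a time-ordered sequence of change records, each carrying the item's state before and after the write. The sketch below models that record shape in plain Python; the field names (`seq`, `old_image`, `new_image`) are illustrative, loosely modelled on the old/new-image idea in DynamoDB change records.

```python
# Illustrative sketch: item-level changes appended to a time-ordered
# change log (a toy model of a change stream, not the Kinesis API).

import itertools

stream = []                 # stands in for the change stream
_seq = itertools.count(1)   # monotonically increasing sequence numbers

def put_item(table, key, item):
    old = table.get(key)
    table[key] = item
    stream.append({
        "seq": next(_seq),
        "event": "MODIFY" if old is not None else "INSERT",
        "old_image": old,    # item state before the write (None on insert)
        "new_image": item,   # item state after the write
    })

table = {}
put_item(table, "u1", {"name": "Ada"})
put_item(table, "u1", {"name": "Ada", "plan": "pro"})
print([record["event"] for record in stream])
```

A downstream consumer (for example, one feeding Kinesis Data Firehose) would read these records in sequence order and forward them to other services.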

    • It offers encryption at rest

Amazon DynamoDB encrypts all customer data at rest by default. Encryption at rest enhances the security of users' data by using encryption keys stored in AWS Key Management Service. With encryption at rest, users can build security-sensitive applications that meet strict encryption compliance and regulatory requirements. Default encryption using AWS owned keys is provided at no additional charge.
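To make this concrete, the snippet below builds the parameter shape that boto3's `create_table` accepts for enabling server-side encryption with a KMS key; no AWS call is made, it only assembles the request dict. The table and attribute names are hypothetical.

```python
# Illustrative sketch: request shape for create_table with server-side
# encryption. Only the dict is built here; nothing is sent to AWS.

create_table_kwargs = {
    "TableName": "Orders",  # hypothetical table name
    "AttributeDefinitions": [
        {"AttributeName": "order_id", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "order_id", "KeyType": "HASH"},
    ],
    "BillingMode": "PAY_PER_REQUEST",
    # SSEType "KMS" selects a key in AWS Key Management Service; omitting
    # SSESpecification entirely leaves the default encryption with AWS
    # owned keys, at no additional charge.
    "SSESpecification": {"Enabled": True, "SSEType": "KMS"},
}

print(create_table_kwargs["SSESpecification"]["Enabled"])
```

With boto3 available, this dict would be passed as `client.create_table(**create_table_kwargs)`.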
