What are the various routing policies in Route 53?

This recipe explains the various routing policies in Route 53.

What is Amazon Route 53?

Route 53 is a DNS service that routes Internet traffic to the servers that host the requested web application. It is named after port 53, the TCP/UDP port on which DNS servers listen. In contrast to traditional DNS management services, Route 53 enables scalable, flexible, secure, and manageable traffic routing in conjunction with other AWS services.

Using the AWS Management Console, you can use Route 53 to perform three main functions: domain registration, DNS routing, and health checking.
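The three functions can be sketched as API request payloads. The snippet below builds boto3-style parameter dicts for each one without calling AWS; the parameter names follow the public Route 53 APIs, while the domain name, zone ID, IP address, and health check settings are illustrative placeholders, not values from this article.

```python
# 1. Domain registration (Route 53 Domains API) -- abbreviated contact details
register_request = {
    "DomainName": "site.com",        # placeholder domain
    "DurationInYears": 1,
}

# 2. DNS routing: upsert an A record into a hosted zone
routing_request = {
    "HostedZoneId": "Z0000000EXAMPLE",   # placeholder zone ID
    "ChangeBatch": {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.site.com",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "192.0.2.44"}],  # doc-reserved IP
            },
        }]
    },
}

# 3. Health checking: probe the endpoint over HTTPS every 30 seconds
health_check_request = {
    "CallerReference": "site-com-check-1",   # must be unique per request
    "HealthCheckConfig": {
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "www.site.com",
        "Port": 443,
        "ResourcePath": "/health",
        "RequestInterval": 30,   # seconds between probes
        "FailureThreshold": 3,   # consecutive failures before "unhealthy"
    },
}
```

In a real script, each dict would be passed as keyword arguments to the corresponding boto3 client call (for example, `route53.change_resource_record_sets(**routing_request)`).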

How does Route 53 work?

1. A user launches a web browser and types www.site.com into the address bar.

2. The www.site.com request is routed to a DNS resolver, which is typically managed by the Internet Service Provider (ISP).

3. The ISP DNS resolver routes www.site.com's request to a DNS root name server.

4. The root name server refers the DNS resolver to one of the .com top-level domain (TLD) name servers. The .com name server returns the names of the four Route 53 name servers associated with the site.com domain. The DNS resolver caches the Route 53 name servers for future use.

5. The DNS resolver selects a Route 53 name server and forwards www.site.com's request to that Route 53 name server.

6. In the case of simple routing, the Route 53 name server looks for the record www.site.com in the hosted zone site.com and obtains its value, such as the alias of an Amazon CloudFront distribution.

7. When the DNS resolver has obtained the needed value (for example, the IP address of a CloudFront edge server), it returns that value to the user's web browser.

8. The web browser sends a request for www.site.com to the CloudFront distribution's IP address.

9. The CloudFront distribution returns the web page for www.site.com to the web browser, either from its cache or from the origin server.
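The referral chain in steps 3 to 6 can be made concrete with a toy, in-memory sketch. Real DNS resolution involves network queries to actual servers; here each "server" is just a dict lookup so the control flow is visible. All names and values below are made up for illustration.

```python
# Each tier of the DNS hierarchy is modeled as a plain dict.
ROOT = {"com.": "tld-server"}                # the root refers queries to TLD servers
TLD = {"site.com.": "route53-name-server"}   # .com refers queries to the zone's name servers
HOSTED_ZONE = {                              # the Route 53 hosted zone holds the records
    "www.site.com.": "d111111abcdef8.cloudfront.net.",  # placeholder CloudFront alias
}

def resolve(qname):
    """Walk root -> TLD -> Route 53 name server, like a recursive resolver."""
    labels = qname.rstrip(".").split(".")
    # Step 3: ask a root name server, which refers us to the .com TLD server.
    tld_key = labels[-1] + "."                # "www.site.com." -> "com."
    assert tld_key in ROOT
    # Step 4: the TLD server refers us to the zone's Route 53 name servers.
    zone_key = ".".join(labels[-2:]) + "."    # "www.site.com." -> "site.com."
    assert zone_key in TLD
    # Steps 5-6: the Route 53 name server answers from the hosted zone.
    return HOSTED_ZONE[qname]

answer = resolve("www.site.com.")
```

A real resolver would also cache each referral (step 4 above), which is why repeat lookups for the same domain skip the root and TLD tiers.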

What are the different routing policies available in Route 53?

Route 53 provides powerful policies for efficient DNS requests. Once your domain is up and running, you can select the routing policy that best suits your needs. However, in order to get the most out of the service, you must first understand how each policy type works.

You select a routing policy when you create a record, which determines how Amazon Route 53 responds to queries:

1. Simple routing policy: Use for a single resource in your domain that performs a specific function, such as an Amazon EC2 instance that serves content for the example.com website.

2. Weighted: You can assign weights to resource record sets with this option. For example, you can specify 25 for one resource and 75 for another, which means that 25% of requests will be routed to the first resource and 75% to the second.

3. LBR (Latency-based routing): Use this when you have resources in multiple AWS Regions and want to route end users to the AWS Region with the lowest latency.

4. Failover: Use this to configure active-passive failover, in which Route 53 routes traffic to a primary resource and, when health checks mark it unhealthy, to a standby. More information can be found in our blog post: Route 53 on Amazon: Health Checks and DNS Failover

5. Geolocation: This allows you to balance the load on your resources by routing requests to specific endpoints based on the geographic location of your users.

6. Multivalue response: Use this option when you want Route 53 to respond to DNS queries with up to eight healthy records chosen at random.

7. IP-based routing: You can use IP-based routing to create a series of Classless Inter-Domain Routing (CIDR) blocks that represent the client IP network range and associate these CIDR blocks with locations.
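As a concrete sketch of one of these policies, the snippet below builds a boto3-style ChangeBatch for the weighted-routing example above (25% / 75%). Route 53 answers each query with a record in proportion to its weight divided by the sum of all weights for that name. The key names follow the Route 53 `ChangeResourceRecordSets` API; the set identifiers and IP addresses are placeholders.

```python
def weighted_record(name, set_id, ip, weight):
    """Build one weighted ResourceRecordSet (keys follow the Route 53 API)."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": name,
            "Type": "A",
            "SetIdentifier": set_id,  # distinguishes records that share a name
            "Weight": weight,         # 0-255; relative share of traffic
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    }

change_batch = {
    "Comment": "Weighted routing: 25% to blue, 75% to green",
    "Changes": [
        weighted_record("www.site.com", "blue", "192.0.2.10", 25),
        weighted_record("www.site.com", "green", "192.0.2.20", 75),
    ],
}

# The fraction of queries answered with a record is weight / total weight.
total = sum(c["ResourceRecordSet"]["Weight"] for c in change_batch["Changes"])
share_blue = 25 / total
```

Because the shares are relative, weights of 1 and 3 would produce the same 25/75 split; a weight of 0 takes a record out of rotation without deleting it.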

Benefits of Route 53

1. High availability, reliability, and scalability

Amazon Route 53 is built on AWS's highly available and dependable infrastructure and is designed to scale automatically to handle extremely high query volumes.

Route 53's DNS servers are distributed, which helps ensure consistent routing of end users to your application. Route 53 is intended to provide the dependability required by critical applications, and it is backed by the Amazon Route 53 SLA (Service Level Agreement).

Because AWS services are tightly integrated, users can make changes to their architecture and scale resources to accommodate increasing Internet traffic volume without requiring significant configuration or management.

2. Security

You can control who has access to which parts of the Route 53 service by managing permissions for each user in your AWS account. When enabled, the Route 53 Resolver DNS Firewall checks outbound DNS requests against a list of known malicious domains.

3. Global network

Route 53 runs on a global anycast network of DNS servers, which helps achieve fast response times. The DNS database is replicated between regions, making Route 53 a globally resilient service: it can withstand failure in one or more regions and continue to operate.

4. Cost-effective

You only pay for the resources you use, such as the number of queries for each of your domains, hosted zones, and optional features like routing policies and health checks, at a low cost with no minimum usage commitments or up-front fees.

5. Integrated routing policies

Route 53 can route traffic based on various criteria such as latency, endpoint health, and geographic location. Its flexibility allows you to configure multiple traffic policies and control which policy is active at any given time.

6. Compatibility with other AWS services

Domain names can be mapped to Amazon CloudFront distributions, Elastic Load Balancers, EC2 instances, S3 buckets, and other AWS resources using Route 53. Using AWS Identity and Access Management (IAM) with Route 53 helps control who has the privilege to update DNS data.

