Introduction to AWS Application Discovery Service and its use cases

In this recipe, we will learn about AWS Application Discovery Service and its use cases.

Recipe Objective - Introduction to AWS Application Discovery Service and its use cases

The AWS Application Discovery Service is a widely used service that gathers information about on-premises data centres to help enterprise customers plan migration projects. Data centre migration planning can involve thousands of workloads, many of which are highly interdependent, so server utilisation data and dependency mapping are crucial early in the migration process.

To help users better understand their workloads, AWS Application Discovery Service collects and presents configuration, usage, and behaviour data from their servers. The collected data is stored in an encrypted format in an Application Discovery Service data store. It can be exported as a CSV file and used to estimate the Total Cost of Ownership (TCO) of running on AWS and to plan the migration to AWS.

Many enterprise customers have completed their cloud migrations with the help of AWS Professional Services and APN Migration Partners. These experts are trained to assess the output of the Application Discovery Service, help users understand their on-premises environments, and recommend viable migration strategies.

The discovered data is also available in AWS Migration Hub, where users can migrate the discovered servers and track their progress as they move to AWS. Discovered data is saved in the user's AWS Migration Hub home region, so users must set their home region in the Migration Hub console or with CLI commands before performing any discovery or migration activities. The data can also be exported to Microsoft Excel or to AWS analysis tools such as Amazon Athena and Amazon QuickSight for further investigation.
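To make the TCO step concrete, the sketch below parses a small CSV in the spirit of an Application Discovery Service export and produces a rough per-server monthly cost estimate. The column names, sample rows, and the flat per-GB-RAM rate are illustrative assumptions, not the service's actual export schema or AWS pricing.

```python
import csv
import io

# Illustrative stand-in for an exported server inventory CSV.
# Column names are assumptions, not the service's exact export schema.
SAMPLE_EXPORT = """serverId,hostName,avgCpuUsagePct,totalRamMB,totalDiskGB
d-server-001,web-01,12.5,8192,100
d-server-002,db-01,65.0,32768,500
d-server-003,app-01,30.0,16384,250
"""

# Hypothetical flat rate used only for this rough illustration.
RATE_PER_GB_RAM_HOUR = 0.005
HOURS_PER_MONTH = 730

def estimate_monthly_cost(csv_text: str) -> dict:
    """Return a rough per-server monthly cost estimate keyed by serverId."""
    estimates = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        ram_gb = int(row["totalRamMB"]) / 1024
        estimates[row["serverId"]] = round(
            ram_gb * RATE_PER_GB_RAM_HOUR * HOURS_PER_MONTH, 2
        )
    return estimates

print(estimate_monthly_cost(SAMPLE_EXPORT))
```

A real cost model would of course factor in CPU, storage, and network as well; the point here is that the exported inventory is plain CSV that feeds directly into such a model.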

Benefits of AWS Application Discovery Service

  • Reliable data for migration planning: The AWS Application Discovery Service gathers data on server specifications, performance, and details of running processes and network connections. This information can be used to build an accurate cost estimate before migrating to AWS and to group servers into applications for planning purposes.

  • Integration with AWS Migration Hub: The AWS Application Discovery Service is integrated with AWS Migration Hub, which simplifies migration tracking. After running discovery and grouping servers into applications, users can use Migration Hub to track the status of migrations across their application portfolio.

  • Data protected with encryption: The AWS Application Discovery Service protects the collected data by encrypting it both in transit to AWS and at rest within the Application Discovery Service data store.

System Requirements

  • Any Operating System (Mac, Windows, Linux)

This recipe explains AWS Application Discovery Service and the use cases of AWS Application Discovery Service.

Use cases of AWS Application Discovery Service

    • Discovering on-premises infrastructure

The AWS Application Discovery Service collects server hostnames, IP addresses, and MAC addresses, as well as resource allocation and utilisation details for CPU, network, memory, and disk. Users can use this information to size AWS resources appropriately when they migrate.
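As a sketch of how discovered utilisation data might inform sizing decisions, the function below maps peak CPU and RAM figures to a candidate instance size. The thresholds and instance names are illustrative assumptions, not AWS sizing guidance.

```python
def suggest_instance_size(peak_cpu_pct: float, ram_gb: int) -> str:
    """Map discovered utilisation to a candidate instance size.

    Thresholds and instance names below are illustrative assumptions,
    not official AWS right-sizing rules.
    """
    if ram_gb <= 8 and peak_cpu_pct < 40:
        return "m5.large"      # 2 vCPU / 8 GiB
    if ram_gb <= 16 and peak_cpu_pct < 70:
        return "m5.xlarge"     # 4 vCPU / 16 GiB
    return "m5.2xlarge"        # 8 vCPU / 32 GiB

# A lightly loaded 8 GB server maps to the smallest candidate size.
print(suggest_instance_size(25.0, 8))
```

In practice, tools such as Migration Hub and AWS partners perform far more sophisticated right-sizing; this only illustrates why the collected utilisation metrics matter.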

    • Providing APIs that export various types of data

Users can feed the exported data into their cost model to estimate how much it would cost to run those servers in AWS. They can also export information about the network connections between servers, which helps identify network dependencies between servers and group the connected servers into applications for migration planning.
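The grouping step above amounts to finding connected components in the server-connection graph. The sketch below does this for a few illustrative connection records; the hostnames are made-up sample data, not output from the service.

```python
from collections import defaultdict

# Illustrative exported network-connection records (sample data).
connections = [
    ("web-01", "app-01"),
    ("app-01", "db-01"),
    ("batch-01", "warehouse-01"),
]

def group_into_applications(edges):
    """Return connected components: candidate application groupings."""
    adjacency = defaultdict(set)
    for a, b in edges:
        adjacency[a].add(b)
        adjacency[b].add(a)
    seen, groups = set(), []
    for node in adjacency:
        if node in seen:
            continue
        # Depth-first walk to collect every server reachable from `node`.
        stack, component = [node], set()
        while stack:
            current = stack.pop()
            if current in component:
                continue
            component.add(current)
            stack.extend(adjacency[current] - component)
        seen |= component
        groups.append(sorted(component))
    return sorted(groups)

print(group_into_applications(connections))
```

Servers that talk to each other end up in the same group, which is exactly the dependency signal used when deciding which servers must migrate together.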

    • Offering agentless discovery

Agentless discovery is performed by deploying the AWS Agentless Discovery Connector (an OVA file) through VMware vCenter. Once configured, the Discovery Connector identifies the virtual machines (VMs) and hosts associated with vCenter. It collects static configuration data such as server hostnames, IP addresses, MAC addresses, and disk resource allocations. It also collects VM utilisation data and calculates average and peak utilisation for metrics such as CPU, RAM, and disk I/O.

    • Offering agent-based discovery

Agent-based discovery is performed by installing the AWS Application Discovery Agent on each of your virtual machines and physical servers. The agent installer supports Windows and Linux operating systems. The agent gathers static configuration information, detailed time-series system performance metrics, inbound and outbound network connections, and running processes.

