Sqoop Interview Questions and Answers for 2018

Last updated on January 3, 2017.

DeZyre wants you to be prepared with Hadoop Interview Questions around all the Hadoop components in the big data ecosystem so that you can answer any Hadoop interview question that gets thrown your way. This blog post covers commonly asked Hadoop Sqoop Interview Questions and Answers that will help you get through your next Hadoop job interview.

You are sitting in the lobby, waiting to go in for your Hadoop job interview. Mentally, you have prepared dozens of Hadoop Interview Questions and Answers by referring to these blogs –

Top 100 Hadoop Interview Questions and Answers 

Hadoop Developer Interview Questions at Top Tech Companies

Top Hadoop Admin Interview Questions and Answers

Top 50 Hadoop Interview Questions

Hadoop HDFS Interview Questions and Answers

Hadoop Pig Interview Questions and Answers

Hadoop Hive Interview Questions and Answers

Hadoop MapReduce Interview Questions and Answers

Needless to say, you are confident that you are going to nail this Hadoop job interview. But then, the interviewer instead of beginning the Hadoop interview with questions around Hadoop MapReduce, HDFS, Pig, Hive, throws a curveball at you by asking “What are the possible file formats for importing data using Sqoop in Hadoop?”

Apache Sqoop Interview Questions and Answers for 2018


Before we dive into Apache Sqoop Hadoop interview questions and answers, let’s take a look at why Sqoop was developed and its significance in the Hadoop ecosystem –

Suppose you want to process legacy data or lookup tables stored in an RDBMS using Hadoop MapReduce. The straightforward solution is to read the data from the RDBMS inside the mappers and process it, but this could effectively amount to a distributed denial of service against the database (i.e. its bandwidth and resources would be flooded by parallel connections). This solution is therefore not recommended in practice, and this is where Apache Sqoop comes to the rescue by letting users import the data into HDFS first.

Apache Sqoop is a lifesaver for people facing challenges with moving data out of a data warehouse into the Hadoop environment. Sqoop (SQL-to-Hadoop) is a tool for efficiently importing data from an RDBMS such as MySQL or Oracle directly into HDFS, Hive, or HBase. It can also be used to export data from HDFS back to the RDBMS. Users can import one or more tables, an entire database, or selected columns of a table using Apache Sqoop. Sqoop works with any JDBC-compatible database.

Apache Sqoop uses Hadoop MapReduce to fetch data from relational databases and store it in HDFS. When importing data, Sqoop controls the number of mappers accessing the RDBMS to avoid overwhelming it with parallel connections. Four mappers are used at a time by default, but this value can be configured. It is suggested not to use a value greater than 4, as this might occupy the entire spool space of the database.
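For illustration, a minimal sketch of an import that sets the mapper count explicitly might look like the following (the connect string, credentials, table, and target directory are hypothetical placeholders):

# hypothetical host, database, table and paths - adjust to your environment
sqoop import --connect jdbc:mysql://dbhost/salesdb --username dbuser --password dbpass --table customers --num-mappers 4 --target-dir /data/customers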

Hadoop Sqoop Interview Questions and Answers 

1) Compare Sqoop and Flume

Sqoop vs Flume

Sqoop: Used for importing data from structured data sources like RDBMS.
Flume: Used for moving bulk streaming data into HDFS.

Sqoop: Has a connector-based architecture.
Flume: Has an agent-based architecture.

Sqoop: Data import is not event driven.
Flume: Data load is event driven.

Sqoop: HDFS is the destination for imported data.
Flume: Data flows into HDFS through one or more channels.

Read more on Sqoop vs Flume

2) What is the default file format to import data using Apache Sqoop?

Sqoop allows data to be imported using two file formats

i) Delimited Text File Format

This is the default file format for importing data using Sqoop. It can be explicitly specified using the --as-textfile argument to the import command in Sqoop. Passing this argument produces a string-based representation of all the records in the output files, with delimiter characters between columns and rows.

ii) Sequence File Format

It is a binary file format where records are stored in custom record-specific data types. Sqoop automatically creates these data types and manifests them as Java classes.
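As a sketch, the file format can be selected explicitly on the import command line (connection details, table name, and target directories below are hypothetical):

# delimited text output (the default)
sqoop import --connect jdbc:mysql://dbhost/salesdb --username dbuser --password dbpass --table orders --as-textfile --target-dir /data/orders_text
# binary SequenceFile output
sqoop import --connect jdbc:mysql://dbhost/salesdb --username dbuser --password dbpass --table orders --as-sequencefile --target-dir /data/orders_seq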

3) I have around 300 tables in a database. I want to import all the tables from the database except the tables named Table298, Table123, and Table299. How can I do this without having to import the tables one by one?

This can be accomplished using the import-all-tables command in Sqoop and specifying the --exclude-tables option with it, as follows –

sqoop import-all-tables --connect <jdbc-connect-string> --username <user> --password <password> --exclude-tables Table298,Table123,Table299

4) Does Apache Sqoop have a default database?

Yes, MySQL is the default database.

5) How can I import large objects (BLOB and CLOB objects) in Apache Sqoop?

The Apache Sqoop import command does not support direct import of BLOB and CLOB large objects. To import large objects in Sqoop, JDBC-based imports have to be used, without passing the --direct argument to the import utility.
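A minimal sketch of such a JDBC-based import simply omits the --direct flag, so Sqoop falls back to JDBC and can handle the large-object columns (connection details and table name are hypothetical):

# no --direct flag, so the BLOB/CLOB columns are imported over JDBC
sqoop import --connect jdbc:mysql://dbhost/contentdb --username dbuser --password dbpass --table documents --target-dir /data/documents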

6) How can you execute a free form SQL query in Sqoop to import the rows in a sequential manner?

This can be accomplished using the -m 1 option in the Sqoop import command. It will create only one MapReduce task, which will then import the rows serially.
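For example, a free-form query import restricted to a single mapper might look like this (the query, connect string, and target directory are hypothetical; note that $CONDITIONS must still appear in the WHERE clause):

# single mapper, so the rows are imported serially
sqoop import --connect jdbc:mysql://dbhost/salesdb --username dbuser --password dbpass --query "SELECT * FROM orders WHERE \$CONDITIONS" -m 1 --target-dir /data/orders_serial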

7) How will you list all the columns of a table using Apache Sqoop?

Unlike sqoop-list-tables and sqoop-list-databases, there is no direct command such as sqoop-list-columns to list all the columns of a table. The indirect way of achieving this is to retrieve the column names of the desired table from the database's metadata and write them to a target directory, which can then be viewed manually to see the column names of that table.

sqoop import -m 1 --connect 'jdbc:sqlserver://nameofmyserver;database=nameofmydatabase;username=DeZyre;password=mypassword' --query "SELECT column_name, DATA_TYPE FROM INFORMATION_SCHEMA.Columns WHERE table_name='mytableofinterest' AND \$CONDITIONS" --target-dir 'mytableofinterest_column_name'

8) What is the difference between Sqoop and DistCP command in Hadoop?

Both DistCP (Distributed Copy in Hadoop) and Sqoop transfer data in parallel, but the difference is that DistCP can transfer any kind of data from one Hadoop cluster to another, whereas Sqoop transfers data between an RDBMS and components of the Hadoop ecosystem like HBase, Hive, and HDFS.

9) What is Sqoop metastore?

The Sqoop metastore is a shared metadata repository that allows remote users to define and execute saved jobs (created using sqoop job) stored in the metastore. sqoop-site.xml should be configured to connect to the metastore.
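As a sketch, a saved job can be created against a shared metastore and then executed from any remote client (the metastore host, job name, and import details are hypothetical; 16000 is the default metastore port):

# define the job in the shared metastore
sqoop job --meta-connect jdbc:hsqldb:hsql://metastore-host:16000/sqoop --create daily_orders -- import --connect jdbc:mysql://dbhost/salesdb --username dbuser --password dbpass --table orders --target-dir /data/orders
# run the saved job from any machine that can reach the metastore
sqoop job --meta-connect jdbc:hsqldb:hsql://metastore-host:16000/sqoop --exec daily_orders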

10) What is the significance of using –split-by clause for running parallel import tasks in Apache Sqoop?

The --split-by clause is used to specify the column of the table whose values are used to generate splits for the data import. It helps achieve improved performance through greater parallelism. Apache Sqoop creates splits based on the values present in the column specified in the --split-by clause of the import command. If the --split-by clause is not specified, the primary key of the table is used to create the splits during the import. At times the primary key of the table might not have evenly distributed values between the minimum and maximum range. Under such circumstances the --split-by clause can be used to specify another column that has an even distribution of data, so that the splits and the data import are efficient.
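For example, if a column such as customer_id is known to be evenly distributed, it can be named explicitly as the split column (all names and paths here are hypothetical):

# split the import on customer_id instead of the primary key
sqoop import --connect jdbc:mysql://dbhost/salesdb --username dbuser --password dbpass --table customers --split-by customer_id --num-mappers 4 --target-dir /data/customers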

11) You use the --split-by clause but it still does not give optimal performance. How will you improve the performance further?

Using the --boundary-query clause. Generally, Sqoop runs a query of the form SELECT MIN(split-column), MAX(split-column) FROM table to find the boundary values for creating splits. However, if this query is not optimal, the --boundary-query argument allows any arbitrary query that returns two numeric values to be used instead.
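A hedged sketch of overriding the boundary query (the table, column, and filter are hypothetical):

# custom boundary query returning the two numeric boundary values for the splits
sqoop import --connect jdbc:mysql://dbhost/salesdb --username dbuser --password dbpass --table orders --split-by order_id --boundary-query "SELECT MIN(order_id), MAX(order_id) FROM orders WHERE order_date >= '2017-01-01'" --target-dir /data/orders_recent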

12) During a Sqoop import, you use the clause -m or --num-mappers to specify the number of mappers as 8 so that it runs eight parallel MapReduce tasks; however, Sqoop runs only four parallel MapReduce tasks. Why?

Because the Hadoop MapReduce cluster is configured to run a maximum of 4 parallel MapReduce tasks, the Sqoop import can effectively only use a number of parallel tasks less than or equal to 4, regardless of the higher value specified.

13) You successfully imported a table using Apache Sqoop to HBase but when you query the table it is found that the number of rows is less than expected. What could be the likely reason?

If the imported records have rows where all the columns contain null values, those records might have been dropped during import, because HBase does not allow a record in which every column is null.

14) The incoming value from HDFS for a particular column is NULL. How will you load that row into RDBMS in which the columns are defined as NOT NULL?

Using the --input-null-string parameter, a default value can be specified so that the row gets inserted with the default value for the column that has a NULL value in HDFS.
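Following the answer above, a hedged sketch of an export command that uses these parameters (table, export directory, and values are hypothetical; --input-null-non-string is the companion parameter for non-string columns):

# --input-null-string / --input-null-non-string control how NULL markers in the HDFS data are handled during export
sqoop export --connect jdbc:mysql://dbhost/salesdb --username dbuser --password dbpass --table customers --export-dir /data/customers --input-null-string "NA" --input-null-non-string "0"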

15) If the source data gets updated every now and then, how will you synchronise the data in HDFS that is imported by Sqoop?

Data can be synchronised using the --incremental parameter with the Sqoop import command (see the sketch after this list) –

The --incremental parameter can be used with one of two options –

i) append – If the table is continuously getting new rows with increasing row id values, then an incremental import with the append option should be used. Sqoop checks the column specified using --check-column and imports only those rows whose value in that column is greater than the value recorded from the previous import.

ii) lastmodified – In this kind of incremental import, the source has a last-modified date column that is checked. Any records that have been updated after the last import, based on the lastmodified column in the source, are imported again so that the values are updated.
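A hedged sketch of both incremental modes follows (column names, last values, and paths are hypothetical; --merge-key is shown for the lastmodified case so that updated rows are merged):

# append mode: import only rows whose order_id is greater than the recorded last value
sqoop import --connect jdbc:mysql://dbhost/salesdb --username dbuser --password dbpass --table orders --incremental append --check-column order_id --last-value 20000 --target-dir /data/orders
# lastmodified mode: re-import rows updated since the given timestamp and merge them on order_id
sqoop import --connect jdbc:mysql://dbhost/salesdb --username dbuser --password dbpass --table orders --incremental lastmodified --check-column last_update_ts --last-value "2017-01-01 00:00:00" --merge-key order_id --target-dir /data/orders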

16) The command below is used to specify the connect string, which contains the hostname to connect to MySQL on the local host and the database name test_db –

--connect jdbc:mysql://localhost/test_db

Is the above command the best way to specify the connect string in case I want to use Apache Sqoop with a distributed hadoop cluster?

When using Sqoop with a distributed Hadoop cluster, the URL should not specify localhost in the connect string, because the connect string is used by every DataNode in the Hadoop cluster. If the literal name localhost is used instead of the IP address or the full hostname, each node will try to connect to a different database on its own localhost. It is always advisable to specify a hostname that can be resolved by all the remote nodes.
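For example, a connect string that every node can resolve might look like the following (the hostname is a hypothetical fully qualified name):

--connect jdbc:mysql://dbserver.mycompany.com/test_db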

Sqoop Interview Questions for Experienced

1) I have 20,000 records in a table. I want to copy them to two separate files (with the records equally distributed) in HDFS using Sqoop.
How do we achieve this if the table does not have a primary key or unique key?
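One possible approach, assuming the table has some reasonably evenly distributed numeric column (here a hypothetical column account_id), is to name it explicitly with --split-by and request two mappers, which produces two output files in HDFS (all names and paths are illustrative):

# no primary key, so a split column must be supplied explicitly; two mappers give two output files
sqoop import --connect jdbc:mysql://dbhost/salesdb --username dbuser --password dbpass --table accounts --split-by account_id -m 2 --target-dir /data/accounts

If no suitable split column exists, -m 1 (or --autoreset-to-one-mapper) can be used instead, but that produces a single output file.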




