Preparing for a Hadoop job interview? Then this list of the most commonly asked Hive interview questions and answers will help you ace your Hadoop job interview. These Hive interview questions and answers are formulated to make candidates familiar with the nature of questions that are likely to be asked in a Hadoop job interview on the subject of Hive.
1) What is the difference between Pig and Hive?
|Criteria|Apache Pig|Apache Hive|
|---|---|---|
|Type of Data|Usually used for semi-structured data.|Used for structured data.|
|Schema|Schema is optional.|Requires a well-defined schema.|
|Language|A procedural data flow language.|A declarative language that follows a SQL dialect.|
|Purpose|Mainly used for programming.|Mainly used for reporting.|
|General Usage|Usually used on the client side of the Hadoop cluster.|Usually used on the server side of the Hadoop cluster.|
|Coding Style|Verbose|More like SQL|
2) What is the difference between HBase and Hive?
|HBase|Hive|
|---|---|
|Does not allow execution of SQL queries.|Allows execution of most SQL queries.|
|Runs on top of HDFS.|Runs on top of Hadoop MapReduce.|
|A NoSQL database.|A data warehouse framework.|
|Supports record-level insert, update and delete operations.|Does not support record-level insert, update and delete.|
3) Suppose an index named index_bonuspay has been created on the bonus column of the employee table (see question 32). I do not need this index anymore. How can I delete it?
DROP INDEX index_bonuspay ON employee;
4) Can you list a few commonly used Hive services?
Commonly used Hive services include the command line interface (cli), HiveServer2 (hiveserver2), Beeline (beeline), the metastore service (metastore), the Hive web interface (hwi), and the jar service, which runs applications with Hadoop and Hive classes on the classpath (jar).
5) Suppose that I want to monitor all the open and aborted transactions in the system along with the transaction id and the transaction state. Can this be achieved using Apache Hive?
Yes. Hive 0.13.0 and later versions support the SHOW TRANSACTIONS command, which helps administrators monitor open and aborted transactions along with the transaction ID and the transaction state.
6) What is the use of HCatalog?
HCatalog can be used to share data structures with external systems. HCatalog provides access to the Hive metastore for users of other tools on Hadoop, so that they can read and write data to Hive’s data warehouse.
7) Write a query to rename a table Student to Student_New.
ALTER TABLE Student RENAME TO Student_New;
8) Where is table data stored in Apache Hive by default?
By default, table data is stored in the Hive warehouse directory on HDFS, under /user/hive/warehouse. This location is controlled by the hive.metastore.warehouse.dir property in hive-site.xml.
9) Explain the difference between partitioning and bucketing.
Partitioning splits a table into separate HDFS directories based on the value of a partition column (for example, one directory per country), so queries that filter on that column read only the relevant partitions. Bucketing further divides the data into a fixed number of files (buckets) based on the hash of a bucketing column, which helps with sampling and efficient joins. The number of partitions depends on the data, whereas the number of buckets is fixed at table creation time.
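As a minimal sketch, both techniques can be combined in one table definition (the table and column names here are illustrative):
CREATE TABLE user_sessions (
  user_id BIGINT,
  session_time TIMESTAMP
)
PARTITIONED BY (country STRING)         -- one HDFS directory per country value
CLUSTERED BY (user_id) INTO 32 BUCKETS  -- fixed number of files per partition, hashed on user_id
STORED AS ORC;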
10) Explain the different types of partitioning in Hive.
Partitioning in Hive helps prune the data when executing the queries to speed up processing. Partitions are created when data is inserted into the table. In static partitions, the name of the partition is hardcoded into the insert statement whereas in a dynamic partition, Hive automatically identifies the partition based on the value of the partition field.
Based on how data is loaded into the table, the requirements for the data, and the format in which data is produced at the source, either static or dynamic partitioning can be chosen. With dynamic partitions, the complete data in the file is read and partitioned through a MapReduce job into the tables based on a particular field in the file. Dynamic partitions are usually helpful during ETL flows in the data pipeline.
When loading data from huge files, static partitions are preferred over dynamic partitions as they save time in loading data. The partition is added to the table and then the file is moved into the static partition. The partition column value can be obtained from the file name without having to read the complete file.
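As an illustrative sketch (the sales and staging_sales tables are hypothetical), the two insert styles look like this:
-- Static partition: the partition value is hardcoded into the insert statement
INSERT OVERWRITE TABLE sales PARTITION (sale_date='2024-01-01')
SELECT id, amount FROM staging_sales WHERE dt = '2024-01-01';

-- Dynamic partition: Hive derives the partition value from the last selected column
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE sales PARTITION (sale_date)
SELECT id, amount, dt AS sale_date FROM staging_sales;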
11) When executing Hive queries in different directories, why is metastore_db created in all places from where Hive is launched?
When running Hive in embedded mode, it creates a local metastore. When you run a query, it first checks whether a metastore already exists or not. The property javax.jdo.option.ConnectionURL defined in hive-site.xml has the default value jdbc:derby:;databaseName=metastore_db;create=true.
The value implies that embedded Derby will be used as the Hive metastore, and that the location of the metastore is metastore_db, which will be created only if it does not already exist. Since metastore_db is a relative location, running queries from different directories creates it in every place from which Hive is launched. This property can be changed in hive-site.xml to an absolute path, so that a single metastore location is used instead of a metastore_db subdirectory being created in each working directory.
12) How will you read and write HDFS files in Hive?
Hive reads and writes HDFS files through Hadoop InputFormat and OutputFormat classes. The commonly used classes are:
i) TextInputFormat- This class is used to read data in plain text file format.
ii) HiveIgnoreKeyTextOutputFormat- This class is used to write data in plain text file format.
iii) SequenceFileInputFormat- This class is used to read data in hadoop SequenceFile format.
iv) SequenceFileOutputFormat- This class is used to write data in hadoop SequenceFile format.
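For example, the file format, and hence the input/output format classes used, is chosen with the STORED AS clause at table creation time (the table names below are illustrative):
-- Plain text: TextInputFormat / HiveIgnoreKeyTextOutputFormat (the default)
CREATE TABLE logs_text (line STRING) STORED AS TEXTFILE;

-- Hadoop SequenceFile: SequenceFileInputFormat / SequenceFileOutputFormat
CREATE TABLE logs_seq (line STRING) STORED AS SEQUENCEFILE;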
13) What are the components of a Hive query processor?
The query processor in Apache Hive converts SQL into a graph of MapReduce jobs, together with the execution-time framework needed to run those jobs in the order of their dependencies. The main components of the query processor are the parser and semantic analyser, type checking, the metadata layer, the logical plan generator, the optimizer, the physical plan generator, the map/reduce execution engine, operators, and the Hive function framework (UDFs and UDAFs).
14) Differentiate between describe and describe extended.
Describe database/schema- This query displays the name of the database, the root location on the file system and comments if any.
Describe extended database/schema- In addition to the above, displays the database properties (set via DBPROPERTIES) and other extended details.
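For example, for a hypothetical database sales_db:
DESCRIBE DATABASE sales_db;           -- name, root location, comments
DESCRIBE DATABASE EXTENDED sales_db;  -- additionally shows the database properties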
15) Is it possible to overwrite Hadoop MapReduce configuration in Hive?
Yes. Hadoop MapReduce configuration properties can be overridden for the current session with the SET command at the Hive prompt, or permanently by changing the Hive configuration files (hive-site.xml).
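For example, the number of reducers or the memory per map task can be overridden for the current session (the values shown are illustrative):
SET mapreduce.job.reduces=8;       -- override the number of reduce tasks
SET mapreduce.map.memory.mb=2048;  -- override the memory allocated per map task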
16) I want to see the present working directory in UNIX from Hive. Is it possible to run this command from Hive?
Hive allows execution of UNIX commands with the use of the exclamation (!) symbol. Just use the ! symbol before the command to be executed at the hive prompt. To see the present working directory in UNIX from Hive, run !pwd at the hive prompt.
17) What is the use of explode in Hive?
Explode in Hive is used to convert complex data types into desired table formats. explode is a UDTF (user-defined table-generating function) that emits each element of an array as a separate row.
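A minimal sketch, assuming a hypothetical students table with an array column subjects; each student row is emitted once per element of its subjects array:
SELECT name, subject
FROM students
LATERAL VIEW explode(subjects) subject_view AS subject;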
18) Explain SORT BY, ORDER BY, DISTRIBUTE BY and CLUSTER BY in Hive.
SORT BY – Data is ordered at each of the ‘N’ reducers, where the reducers can have overlapping ranges of data.
ORDER BY – This is similar to ORDER BY in SQL: total ordering of the data takes place by passing it all to a single reducer.
DISTRIBUTE BY – It is used to distribute the rows among the reducers. Rows that have the same DISTRIBUTE BY columns go to the same reducer.
CLUSTER BY – It is a combination of DISTRIBUTE BY and SORT BY, where each of the N reducers gets a non-overlapping range of data which is then sorted within the respective reducers.
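Assuming a hypothetical sales table, the four clauses are used as follows:
SELECT * FROM sales SORT BY amount;        -- sorted within each reducer only
SELECT * FROM sales ORDER BY amount;       -- total ordering through a single reducer
SELECT * FROM sales DISTRIBUTE BY region;  -- rows with the same region go to the same reducer
SELECT * FROM sales CLUSTER BY region;     -- DISTRIBUTE BY region + SORT BY region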
19) Write a Hive query to view all the databases whose names begin with “db”.
SHOW DATABASES LIKE 'db*';
20) How can you prevent a large job from running for a long time?
This can be achieved by setting the MapReduce jobs to execute in strict mode:
set hive.mapred.mode=strict;
The strict mode ensures that the queries on partitioned tables cannot execute without defining a WHERE clause.
21) What is a Hive Metastore?
Hive Metastore is a central repository that stores Hive metadata (table and partition definitions, schemas, and data locations) in an external relational database.
22) Are multiline comments supported in Hive?
No. Hive supports only single-line comments, which begin with --; multiline comments are not supported.
23) What is ObjectInspector functionality?
ObjectInspector is used to analyse the structure of individual columns and the internal structure of the row objects. ObjectInspector in Hive provides access to complex objects which can be stored in multiple formats.
24) Explain about the different types of join in Hive.
HiveQL has 4 different types of joins –
JOIN- Similar to an inner join in SQL; returns only the rows that satisfy the join condition in both tables.
FULL OUTER JOIN – Combines the records of both the left and right outer tables that fulfil the join condition.
LEFT OUTER JOIN- All the rows from the left table are returned even if there are no matches in the right table.
RIGHT OUTER JOIN-All the rows from the right table are returned even if there are no matches in the left table.
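Assuming hypothetical employee and department tables joined on dept_id, the syntax looks like this:
SELECT e.name, d.dept_name
FROM employee e
JOIN department d ON (e.dept_id = d.dept_id);             -- inner join

SELECT e.name, d.dept_name
FROM employee e
LEFT OUTER JOIN department d ON (e.dept_id = d.dept_id);  -- keeps all rows from employee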
25) How can you configure remote metastore mode in Hive?
To configure a remote metastore, the hive.metastore.uris property has to be set in the hive-site.xml file to point to the metastore host:
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://node1:9083</value>
  <description>IP address (or hostname) and port of the metastore host</description>
</property>
26) Is it possible to change the default location of Managed Tables in Hive, if so how?
Yes, we can change the default location of Managed tables using the LOCATION keyword while creating the managed table. The user has to specify the storage path of the managed table as the value to the LOCATION keyword.
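A minimal sketch (the table name and path are illustrative):
CREATE TABLE managed_sales (id INT, amount DOUBLE)
LOCATION '/user/custom/warehouse/managed_sales';  -- overrides the default warehouse path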
27) How does data transfer happen from HDFS to Hive?
If the data is already present in HDFS, the user need not use LOAD DATA, which moves the files into /user/hive/warehouse/. Instead, the user just has to define the table using the EXTERNAL keyword, which creates the table definition in the Hive metastore while leaving the data in place, for example (the columns, delimiter and path are illustrative):
CREATE EXTERNAL TABLE table_name (col1 STRING, col2 INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/path/to/existing/data';
28) In case of embedded Hive, can the same metastore be used by multiple users?
No, the embedded metastore cannot be used in sharing mode, as the embedded Derby database allows only one connection at a time. To share the metastore between multiple users, it is suggested to use a standalone database like PostgreSQL or MySQL.
29) The partition of hive table has been modified to point to a new directory location. Do I have to move the data to the new location or the data will be moved automatically to the new location?
Changing the partition location does not move the data to the new location. The data has to be moved manually from the old location to the new one.
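For reference, a partition location can be changed with a statement like the following (the table name and paths are illustrative):
ALTER TABLE sales PARTITION (sale_date='2024-01-01')
SET LOCATION 'hdfs://namenode/new/path/sale_date=2024-01-01';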
30) What will be the output of cast (‘XYZ’ as INT)?
It will return a NULL value, since the string ‘XYZ’ cannot be converted to an integer.
31) What are the different components of a Hive architecture?
Hive architecture consists of a user interface (CLI, web UI, and JDBC/ODBC clients), a Driver that handles sessions, a Compiler that parses the query and builds the execution plan, an Optimizer, an Execution Engine that runs the plan as MapReduce jobs, and the Metastore, all running on top of Hadoop HDFS.
32) What happens on executing the below query? After executing the below query, if you modify the column –how will the changes be tracked?
Hive> CREATE INDEX index_bonuspay ON TABLE employee (bonus)
AS 'COMPACT' WITH DEFERRED REBUILD;
The query creates an index named index_bonuspay which points to the bonus column in the employee table. Because the index is created WITH DEFERRED REBUILD, it is not populated automatically; whenever the value of bonus is modified, the changes are reflected by rebuilding the index with ALTER INDEX index_bonuspay ON employee REBUILD;.
33) What is the default database provided by Hive for Metastore ?
Derby is the default database.
34) Is it possible to compress JSON in a Hive external table?
Yes, you need to gzip your files and put them as-is (*.gz) into the table location; Hive can read the gzipped text files directly.
35) What is the difference between local and remote metastore?
A local metastore runs on the same JVM (Java Virtual Machine) in which the Hive service is running. It connects to a database running in a separate JVM that can either be on the same machine or a remote machine.
In a remote metastore, the metastore service runs in its own JVM, separate from the JVM in which the Hive service runs. Other processes communicate with the metastore server using the Thrift network API. There can be more than one metastore server to provide higher availability.
36) Why does Hive not support storage of metadata information in HDFS?
Hive stores the metadata in RDBMS rather than HDFS. RDBMS is chosen primarily for low latency since reads and writes to the HDFS are time-consuming. The storage for metastore must be optimized for online transactions involving random accesses and updates. HDFS is optimized for sequential scans and not for random access, and hence it is not a suitable choice for the storage of metadata.
37) What is the difference between external tables and managed tables in Hive?
|Managed Tables|External Tables|
|---|---|
|Stored by default in the Hive warehouse in HDFS; this default storage location can be modified.|Created with an explicit location: the HDFS location from where the data is taken has to be mentioned, and the table takes its data from other locations on the HDFS.|
|When dropped, both the metadata and the table data are permanently deleted.|When dropped, only the metadata associated with the table is deleted from Hive; the data at the external location remains untouched.|
|Hive has full ownership of the data.|Hive has no ownership of the data; it owns only the metadata it creates for the table.|
|Support query result caching, where the result of one query can be cached for use in another query.|No support for query result caching.|
|The ACID (atomicity, consistency, isolation, durability) transaction properties are supported.|No support for the ACID transaction properties.|
|Support the ARCHIVE, UNARCHIVE, TRUNCATE, MERGE and CONCATENATE commands.|No support for the ARCHIVE, UNARCHIVE, TRUNCATE, MERGE and CONCATENATE commands.|
38) What is the purpose of partitioning in Hive?
Partitioning in Hive refers to the process of dividing the table into smaller parts based on the values of a particular column, such as date, city, age or country. Partitioning is done so that data can be stored in slices, enabling the query response to become faster. Since Hadoop is used to handle large amounts of data, it is important to find ways to access the data using the best approach.
39) Mention some limitations of Hive.
Apache Hive does not support update and delete operations on the rows of table data, unless ACID transactional tables are used.
The latency for Hive queries is pretty high. Hadoop was built for long sequential scans, and Hive is built on Hadoop and has a high latency.
Hive cannot be used for real-time data querying since it takes quite some time to give the results.
Apache Hive offers only limited support for subqueries; they are allowed in the FROM clause, and in WHERE clauses only from Hive 0.13 onwards.
Hive does not support online transaction processing (OLTP) and only supports online analytical processing (OLAP). This is because Hive does not support update or deletion at the row level, and it is not suitable for real-time processing. Hive is ideal for processes where the response time of the query is not highly interactive.
40) How can you change the data type of a column in Hive?
You can use the following command in Hive to change the data type of a column.
ALTER TABLE table_name CHANGE column_name column_name new_datatype;
E.g., consider a table named “Students” with a column “age”. To change the data type of the “age” column from STRING to INT, we would use the following command -
ALTER TABLE Students CHANGE age age INT;
41) Can Hive be used in OLTP systems?
Hive cannot be used to support online transaction processing (OLTP), but can only be used for online analytical processing (OLAP). Hive does not support update or delete at the row level and is not suitable for real-time query processing.
42) What is the purpose of archiving tables in Hive?
Hadoop archives are used as an approach to reduce the number of files in partitions. Apache Hive has built-in support for converting the files of existing partitions to a Hadoop Archive. In this manner, a partition that would have once consisted of hundreds of files can be made to occupy just around 3 files; the exact number can vary depending on the settings. The trade-off in such a case is that queries may end up having a higher latency, caused by the additional overhead of reading from the Hadoop Archive. Archiving does not compress the files, but works similarly to the Linux ‘tar’ command.
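For example, a partition can be archived and restored with statements like the following (the table and partition names are illustrative; archiving must first be enabled):
SET hive.archive.enabled=true;
ALTER TABLE page_views ARCHIVE PARTITION (dt='2024-01-01');
ALTER TABLE page_views UNARCHIVE PARTITION (dt='2024-01-01');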
43) Is there any method to delete the DBPROPERTY in Hive?
There is no way to delete or “unset” the DBPROPERTY in Hive.
44) How can you list all the databases which end with ‘e’ in Hive?
The following command can be used:
SHOW DATABASES LIKE '*e';
45) Differentiate between local mode and MapReduce mode in Hive.
Hive can operate in local mode or MapReduce mode, depending on the size of the data and of the Hadoop cluster.
Local mode is to be used if the data size is smaller and can be limited to a single local machine. Processing of smaller data sets will be faster using the local mode.
Hive may run in local mode automatically (when hive.exec.mode.local.auto is set to true) if the following conditions are met:
i) The total input size of the job is lower than the value specified by hive.exec.mode.local.auto.inputbytes.max, which is 128MB by default.
ii) The total number of map tasks is less than the value of hive.exec.mode.local.auto.tasks.max, which is 4 by default.
iii) The total number of reduce tasks is 1 or 0.
MapReduce mode is the default mode that is used. If the Hadoop system has multiple data nodes, with the data distributed across the nodes, then MapReduce mode is used. MapReduce mode is more suitable for large data sets and better performance can be achieved through this mode.
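For example, automatic local mode can be enabled and tuned with the properties mentioned above (the values shown are the defaults):
SET hive.exec.mode.local.auto=true;                      -- let Hive decide when to run locally
SET hive.exec.mode.local.auto.inputbytes.max=134217728;  -- 128MB input-size threshold
SET hive.exec.mode.local.auto.tasks.max=4;               -- map-task threshold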
46) What is the purpose of ORC tables in Hive?
In Hive, the Optimized Row Columnar (ORC) file format was designed to provide a highly efficient way to store Hive data. It overcomes the limitations which are present in other Hive file formats. ORC files improve performance for reads, writes and data processing. ORC file format supports Hive types including datetime, decimal and complex types, including struct, list, map and union, which are not supported by the RC file format. The ORC file format supports lightweight indexes to be stored within the file which allow skipping to row groups that do not meet a certain filtering.
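A minimal sketch of an ORC table (the table name, columns and compression codec are illustrative):
CREATE TABLE sales_orc (
  id INT,
  amount DOUBLE,
  tags ARRAY<STRING>
)
STORED AS ORC
TBLPROPERTIES ('orc.compress'='ZLIB');  -- compression codec used inside the ORC files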
We hope that these Hive interview questions and answers have prepared you for your next Hadoop interview on the subject of Hive. Let us know about your experience with Hive interview questions in Hadoop interviews in the comments below.