Apache Drill & Spark Interview Questions and Answers

What Is Apache Drill?
Apache Drill is a schema-free SQL query engine for Hadoop, NoSQL, and cloud storage. It lets us explore, visualize, and query different datasets without having to fix a schema up front using ETL.

Apache Drill can also analyse multi-structured and nested data in non-relational data stores directly, without transforming or restricting the data.

Apache Drill is the first distributed SQL query engine with a schema-free JSON document model, similar to:
- Elasticsearch
- MongoDB
- other NoSQL databases

Apache Drill is very useful for professionals who already work with SQL databases and BI tools such as Pentaho, Tableau, and QlikView.
Apache Drill also supports:
- RESTful APIs
- ANSI SQL
- JDBC/ODBC drivers
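As a minimal sketch, a Drill query can run ANSI SQL directly against a raw JSON file with no schema definition or load step. The `dfs` storage plugin is Drill's default local-filesystem plugin; the file path and field names here are hypothetical:

```sql
-- Query a raw JSON file in place; Drill discovers the schema on the fly.
-- `dfs` is the local-filesystem storage plugin; the path and fields are invented.
SELECT t.name, t.address.city AS city
FROM dfs.`/data/customers.json` t
WHERE t.address.`state` = 'CA'
LIMIT 10;
```

Note the dotted notation (`t.address.city`) for reaching into nested JSON objects, which requires a table alias in Drill.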

What Are the Great Features of Apache Drill?
The main features are:
- Schema-free JSON document model similar to MongoDB and Elasticsearch
- Code reusability
- Easy to use and developer friendly
- High-performance Java-based API
- Memory management system
- Industry-standard APIs such as ANSI SQL, ODBC/JDBC, and RESTful APIs

How Does Drill Achieve Performance?
- Distributed query optimization and execution
- Columnar execution
- Optimistic execution
- Pipelined execution
- Runtime compilation and code generation
- Vectorization

What Datastores does Drill support?
Drill is mainly focused on non-relational datastores, including Hadoop, NoSQL, and cloud storage.
Supported datastores include:
- NoSQL: HBase and MongoDB
- Cloud storage: Amazon S3, Google Cloud Storage, Azure Blob Storage, and Swift
- Hadoop: MapR, CDH, and Amazon EMR
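Because each datastore is exposed as a storage plugin, a single Drill query can join across them. A sketch, assuming a local-filesystem plugin named `dfs` and an S3-backed plugin named `s3` are configured (the paths, tables, and columns are all invented):

```sql
-- Join a local JSON file with a Parquet file in S3 in one query.
-- `dfs` and `s3` are storage-plugin names; the data layout is hypothetical.
SELECT c.name, SUM(o.amount) AS total_spent
FROM dfs.`/data/customers.json` c
JOIN s3.`sales/orders.parquet` o
  ON o.customer_id = c.id
GROUP BY c.name;
```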

What Are the Similarities between Spark SQL and Apache Drill?
- Both Apache Drill and Spark SQL are open source.
- Neither requires a Hadoop cluster to get started.
- Both SQL-on-Hadoop tools can easily be run inside a VM.
- Both support multiple data formats and sources, such as JSON, Parquet, MongoDB, Avro, and MySQL.

What Are the Main Differences between Spark SQL and Apache Drill?
Spark SQL supports only a subset of SQL, whereas Apache Drill supports ANSI SQL.
In Spark SQL, you query data from code written in languages such as Java, Scala, or Python, while in Apache Drill you query data with plain SQL, much as you would in MySQL or Oracle.

Is Spark SQL similar to Drill?
No! Spark SQL is a SQL library embedded in the Spark engine and driven from program code, while Drill is a standalone, schema-free ANSI SQL query engine.

How does Drill support queries on self-describing data?
- JSON data model
- On-the-fly schema discovery

Do I need to load data into Drill to start querying it?
No! Drill can query data in situ, directly where it lives.
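A sketch of on-the-fly schema discovery over nested, self-describing JSON: Drill discovers the structure at read time, and its FLATTEN function unnests repeated fields (the file path and field names are hypothetical):

```sql
-- No CREATE TABLE and no load step: the JSON file itself is the table.
-- FLATTEN produces one output row per element of the repeated `tags` field.
SELECT t.id, FLATTEN(t.tags) AS tag
FROM dfs.`/data/events.json` t;
```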

What Is Spark SQL?
Spark SQL is a real-time, in-memory, parallelized SQL-on-Hadoop engine.
Spark SQL is not a general-purpose SQL layer; it is designed to let us do advanced analytics on data.

Spark SQL supports only a subset of SQL functionality, and users have to write code in Java, Python, and so on to execute a query.
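For contrast, a Spark SQL query is embedded in program code: SQL like the sketch below is typically passed to `spark.sql(...)` from Java, Scala, or Python after registering a DataFrame as a temporary view (the view and column names here are invented):

```sql
-- Runs against a temporary view registered from a DataFrame,
-- e.g. df.createOrReplaceTempView("logs") in PySpark.
SELECT level, COUNT(*) AS n
FROM logs
GROUP BY level
ORDER BY n DESC;
```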

What Are the Great Features of Spark SQL?
- Spark SQL provides security through encryption, using SSL for HTTP protocols.
- Spark SQL supports many features for analysing data at large scale.
- Spark SQL supports many data types for machine learning.
- In Spark SQL, you can easily write data pipelines.
- In Spark SQL, it is easy to add optimization rules, data types, and data sources using the Scala programming language.

When To Use Spark SQL?
Spark SQL is the best SQL-on-Hadoop tool when the goal is to fetch data for diverse machine-learning tasks.

What Is The Disadvantage of Spark SQL?
Spark SQL lacks advanced security features.

What Is Apache Spark?
Apache Spark is an open-source, very fast, general-purpose, in-memory data-processing engine used for processing large amounts of data.
Apache Spark is a cluster-computing framework.

What Are the Advantages of Spark?
- Ease of use
- Open source
- Spark uses in-memory cluster computing, so it is very fast.
- Combines SQL, streaming, and complex analytics
- Spark runs everywhere: on Hadoop, Mesos, standalone, and so on.
- Supports multiple languages

Spark is not a modified version of Hadoop; Spark uses Hadoop for:
- Storage
- Data processing

Spark supports the following languages:
- Java
- Python
- Scala
- R
- Clojure

Is Apache Spark going to replace Hadoop?
My answer is yes! What is your opinion?

Both Apache Spark and Hadoop are big-data frameworks, and in my view Hadoop will be replaced by Spark.
Spark is one of the favourite choices of data scientists; Apache Spark is growing very quickly and is replacing MapReduce.

What Is Presto?
Presto is an open-source big-data tool: a distributed SQL query engine developed by Facebook in 2012.

It can join data across multiple databases, is scalable and very fast, and can be up and running in minutes.

Presto’s Advantage - The main advantage is that you can run probabilistic (approximate) queries.
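A sketch of such a probabilistic query in Presto, using the built-in approx_distinct aggregate, which trades a small standard error for speed (the catalog, schema, and table names are invented):

```sql
-- Approximate distinct-user count; much cheaper than COUNT(DISTINCT ...)
-- at the cost of a small standard error (about 2.3% by default).
SELECT approx_distinct(user_id) AS approx_users
FROM hive.web.page_views;
```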

Presto’s Architecture - The Presto architecture is very similar to a classic database management system, but implemented with cluster computing.

ANIL SINGH

Hey! I'm Anil Singh. I author this blog. I'm an active blogger and programmer. I love learning new technologies, programming, blogging, and participating in forum discussions.
My Blogs - https://code-sample.com and https://code-sample.xyz
My Books - Interview Questions and Answers books, available in 15+ digital stores worldwide.
