Big Data / Apache Spark
Key features of Apache Spark.
Speed: Spark processes large-scale data significantly faster than Hadoop MapReduce, largely because it keeps intermediate results in memory and gives the user control over partitioning. Spark organizes data into partitions, which let distributed processing run in parallel with minimal network traffic.
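As a rough illustration of why partitioning enables parallelism, here is a pure-Python analogy (not Spark code; the hash partitioner, records, and counting job are invented for this sketch):

```python
def partition(records, num_partitions):
    """Hash-partition (key, value) records, as Spark does for keyed data."""
    parts = [[] for _ in range(num_partitions)]
    for key, value in records:
        parts[hash(key) % num_partitions].append((key, value))
    return parts

def count_by_key(part):
    """A per-partition task: count records for each key."""
    counts = {}
    for key, _ in part:
        counts[key] = counts.get(key, 0) + 1
    return counts

records = [("a", 1), ("b", 2), ("a", 3), ("c", 4), ("b", 5)]
parts = partition(records, 2)

# Every record for a given key lands in the same partition, so each
# partition can be processed independently (in Spark, on a separate
# executor) with no cross-partition traffic until results are merged.
results = [count_by_key(p) for p in parts]
merged = {}
for r in results:
    for k, v in r.items():
        merged[k] = merged.get(k, 0) + v
```

In real Spark the `results` step would run concurrently across the cluster; here it is sequential for clarity.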
Real-Time Computation: Spark performs computation in near real time with low latency thanks to in-memory processing. It is designed for massive scalability, supporting live production clusters with thousands of nodes, and it supports several computational models (batch, streaming, and interactive queries).
Hadoop Integration: Apache Spark is compatible with Hadoop. Spark can replace Hadoop's MapReduce engine, and at the same time it can run on top of an existing Hadoop cluster, using YARN for resource scheduling.
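For example, a Spark application can be submitted to an existing Hadoop cluster's YARN resource manager instead of a standalone Spark master (a sketch only; the application file name and executor count are placeholders):

```shell
# Submit a Spark application to a Hadoop/YARN cluster.
# --master yarn asks YARN, not a standalone Spark master, for resources.
./bin/spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 4 \
  my_app.py
```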
Machine Learning: Spark MLlib is the machine learning component, which is handy for big data processing. It eliminates the need for separate tools, one for data processing and another for machine learning, and gives data engineers and data scientists a single engine that is both fast and easy to use.
Multiple Formats: Spark supports many data sources, such as Parquet, JSON, Hive, and Cassandra. The Data Sources API provides a pluggable mechanism for accessing structured data through Spark SQL. Data sources can be more than simple pipes that pull data into Spark: a source can, for instance, push filters down so that only the needed data is read.
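The idea of a pluggable data-source mechanism can be sketched in plain Python (an analogy only, not Spark's actual API): each format registers a reader under a name, and callers ask for data by format rather than by a concrete reader class.

```python
import csv
import io
import json

# Registry mapping a format name to a reader function -- a toy analogy
# to the pluggable Data Sources API in Spark SQL.
READERS = {}

def register(fmt):
    def wrap(fn):
        READERS[fmt] = fn
        return fn
    return wrap

@register("json")
def read_json(text):
    # One JSON object per line, like Spark's line-delimited JSON source.
    return [json.loads(line) for line in text.splitlines() if line.strip()]

@register("csv")
def read_csv(text):
    return list(csv.DictReader(io.StringIO(text)))

def load(fmt, text):
    # Callers name the format; the matching plugin does the work.
    return READERS[fmt](text)

rows = load("json", '{"id": 1}\n{"id": 2}')
```

New formats plug in by registering another reader, without changing the `load` entry point.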
Lazy Evaluation: Apache Spark defers evaluation until a result is actually required, which is one of the key factors behind its speed. Transformations are only recorded in a DAG (Directed Acyclic Graph) of computation; the DAG is executed only when an action asks for data.
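Lazy evaluation can be illustrated with Python generators (an analogy for Spark's transformation/action split, not Spark code): the "transformations" describe a pipeline but do no work until an "action" consumes it.

```python
log = []

def numbers(n):
    # Record each value actually produced, so we can see when work happens.
    for i in range(n):
        log.append(f"produce {i}")
        yield i

# "Transformations": a filter and a map are described, but nothing runs yet.
pipeline = (x * x for x in numbers(3) if x % 2 == 0)
assert log == []  # no elements have been produced

# "Action": consuming the pipeline finally triggers execution end to end.
result = list(pipeline)
```

As with Spark, deferring work lets the whole pipeline execute in one pass instead of materializing each intermediate step.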
Multiple Language Support (Polyglot): Spark provides high-level APIs in Java, Scala, Python, and R. It also offers interactive shells in Scala and Python: the Scala shell is launched with ./bin/spark-shell and the Python shell with ./bin/pyspark from the installation directory.