What is the Spark framework and what is it used for?


What is Spark?

Spark is a simple and lightweight Java web framework built for rapid development. Spark's intention isn't to compete with Sinatra, or the dozens of similar web frameworks in other languages, but to provide a pure Java alternative for developers who want to, or are required to, develop in Java. Spark focuses on being as simple and straightforward as possible, without cumbersome (XML) configuration, so that web applications can be developed in pure Java with minimal effort. It is a very different paradigm from the heavy use of annotations that other web frameworks rely on to accomplish fairly trivial things.

 

Why use Spark?

If you're a Java developer with neither the urge nor the time to learn a new programming language, and you're not planning to build a super-large web application that scales in every direction, then Spark might be a great web framework for you. It will have you up and running in minutes, and you won't have to think much about configuration and boilerplate code.


Sample Code

 
import static spark.Spark.*;

public class HelloWorld
{
    public static void main(String[] args)
    {
        // Maps GET /hello to a handler that returns the plain-text body "Hello World"
        get("/hello", (req, res) -> "Hello World");
    }
}

For more documentation, visit: http://sparkjava.com/documentation.html
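
As a rough sketch of how the same pattern extends (assuming a Spark 2.x release, where port() and req.params() are available), routes can use path parameters and a custom port:

import static spark.Spark.*;

public class HelloName
{
    public static void main(String[] args)
    {
        // Listen on port 8080 instead of Spark's default of 4567
        port(8080);

        // ":name" is a path parameter, read back through req.params()
        get("/hello/:name", (req, res) -> "Hello " + req.params(":name"));
    }
}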

posted Jan 13, 2015 by anonymous



Related Articles

What is MLlib?

MLlib stands for Machine Learning Library.

MLlib is Spark’s scalable machine learning library. It consists of common learning algorithms and utilities, including classification, regression, clustering, collaborative filtering, and dimensionality reduction, as well as underlying optimization primitives, as outlined below:

  • Data types
  • Basic statistics
  • Classification and regression
  • Collaborative filtering
  • Clustering
  • Dimensionality reduction
  • Feature extraction and transformation
  • Optimization
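
As a minimal sketch of how MLlib is used from Java (assuming the RDD-based org.apache.spark.mllib API and a local master; the tiny dataset below is made up for illustration), a k-means model can be trained on an in-memory set of points:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.mllib.clustering.KMeans;
import org.apache.spark.mllib.clustering.KMeansModel;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;

import java.util.Arrays;

public class KMeansExample
{
    public static void main(String[] args)
    {
        SparkConf conf = new SparkConf().setAppName("KMeansExample").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // A tiny in-memory dataset: two groups of points, around (0,0) and (9,9)
        JavaRDD<Vector> points = sc.parallelize(Arrays.asList(
                Vectors.dense(0.0, 0.0),
                Vectors.dense(0.1, 0.1),
                Vectors.dense(9.0, 9.0),
                Vectors.dense(9.1, 9.1)));

        // Train a k-means model with k = 2 clusters and at most 20 iterations
        KMeansModel model = KMeans.train(points.rdd(), 2, 20);

        for (Vector center : model.clusterCenters())
        {
            System.out.println("Cluster center: " + center);
        }

        sc.stop();
    }
}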

Spark Core is the foundation of the overall project. It provides distributed task dispatching, scheduling, and basic I/O functionality, exposed through an application programming interface centered on the RDD abstraction. This interface mirrors a functional, higher-order programming model: a "driver" program invokes parallel operations such as map, filter, or reduce on an RDD by passing a function to Spark, which then schedules the function's execution in parallel on the cluster.

These operations, and additional ones such as joins, take RDDs as input and produce new RDDs. RDDs are immutable and their operations are lazy; fault tolerance is achieved by tracking the "lineage" of each RDD so that it can be reconstructed in case of data loss. RDDs can contain any type of Python, Java, or Scala objects.
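
A small, hypothetical Java driver illustrating this model (the dataset and numbers are invented for illustration): transformations such as filter and map build up the RDD lineage lazily, and the reduce action triggers the actual computation.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import java.util.Arrays;

public class RddOperations
{
    public static void main(String[] args)
    {
        SparkConf conf = new SparkConf().setAppName("RddOperations").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Create an RDD from a local collection
        JavaRDD<Integer> numbers = sc.parallelize(Arrays.asList(1, 2, 3, 4, 5, 6));

        // Transformations are lazy: nothing executes yet, Spark only records the lineage
        JavaRDD<Integer> squaresOfEvens = numbers
                .filter(n -> n % 2 == 0)   // keep even numbers
                .map(n -> n * n);          // square each one

        // reduce is an action; it triggers parallel execution of the lineage above
        int sum = squaresOfEvens.reduce((a, b) -> a + b);
        System.out.println("Sum of squared evens: " + sum); // 4 + 16 + 36 = 56

        sc.stop();
    }
}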

A video introduction to Spark MLlib:

https://www.youtube.com/watch?v=qKYpMPPL-fo
