MIT-Stanford project uses LLVM to break big data bottlenecks

The more cores you can use, the better, especially with big data. But the easier a big data framework is to work with, the harder it is for a pipeline that combines multiple frameworks, such as TensorFlow plus Apache Spark, to run in parallel as a single unit.

Researchers from MIT CSAIL, the home of envelope-pushing big data acceleration projects like Milk and Tapir, have teamed up with the Stanford InfoLab to create a possible solution: Weld. Written in Rust, Weld generates code for an entire data analysis workflow that runs efficiently in parallel using the LLVM compiler framework.
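The core idea is that instead of each library eagerly materializing its own intermediate results, operations are deferred and then compiled together into one fused, parallel loop. The Rust sketch below is purely illustrative, not Weld's actual API (Weld's real interface is an intermediate representation plus an LLVM-backed JIT); the LazyPipeline type and its map/filter/sum methods are hypothetical names used only to show the deferred-then-fused execution pattern.

```rust
// Hypothetical sketch of deferred, fused execution (not Weld's real API).
enum Op {
    Map(fn(f64) -> f64),
    Filter(fn(f64) -> bool),
}

struct LazyPipeline {
    ops: Vec<Op>,
}

impl LazyPipeline {
    fn new() -> Self {
        LazyPipeline { ops: Vec::new() }
    }

    // Each "library" registers work without executing it yet.
    fn map(mut self, f: fn(f64) -> f64) -> Self {
        self.ops.push(Op::Map(f));
        self
    }

    fn filter(mut self, p: fn(f64) -> bool) -> Self {
        self.ops.push(Op::Filter(p));
        self
    }

    // At evaluation time the whole chain runs as a single pass over the
    // data, analogous to Weld compiling a whole workflow into one
    // LLVM-optimized kernel instead of materializing intermediates.
    fn sum(&self, data: &[f64]) -> f64 {
        data.iter()
            .copied()
            .filter_map(|mut x| {
                for op in &self.ops {
                    match op {
                        Op::Map(f) => x = f(x),
                        Op::Filter(p) => {
                            if !p(x) {
                                return None;
                            }
                        }
                    }
                }
                Some(x)
            })
            .sum()
    }
}

fn main() {
    let data: Vec<f64> = (0..1_000_000).map(|i| i as f64).collect();
    // Two operations that might come from two different libraries are
    // fused into a single traversal when the result is finally needed.
    let total = LazyPipeline::new()
        .map(|x| x * 2.0)
        .filter(|x| x % 3.0 == 0.0)
        .sum(&data);
    println!("{}", total);
}
```

In Weld itself, the fused program is lowered through LLVM, so the single pass can also be vectorized and spread across cores rather than interpreted as in this toy version.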


Source: InfoWorld Big Data, http://ift.tt/2nJkdeD
