David Ramel, Application Development Trends
May 13, 2014
Concurrent Inc. today announced an upgrade of its Cascading development framework for building Big Data applications, offering more choices for working with the Apache Hadoop ecosystem.
Cascading 3.0, due early this summer, features a new pluggable query planner that can be customized to target different underlying technologies, sometimes referred to as execution fabrics, offering alternatives to the problematic MapReduce programming model.
MapReduce was an integral part of the original Hadoop system of Big Data programming, handling the compute function for working with data stored in the Hadoop Distributed File System (HDFS). It is notoriously complex and inflexible, and its limitations reportedly helped prompt Chris Wensel to found Concurrent to provide more options and simplify Big Data programming with Hadoop.
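To see why the model draws that criticism, consider how even a trivial word count must be decomposed into separate map, shuffle and reduce phases. The following is a toy, in-memory sketch of that programming model in plain Java, not Hadoop's actual API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy illustration of the MapReduce model (not Hadoop's API): even a
// simple word count is forced into distinct map, shuffle and reduce
// phases, which is part of what makes the model cumbersome to program.
public class WordCountSketch {
    // Map phase + shuffle: emit (word, 1) per word, grouped by key.
    static Map<String, List<Integer>> mapAndShuffle(List<String> lines) {
        Map<String, List<Integer>> grouped = new HashMap<>();
        for (String line : lines) {
            for (String word : line.toLowerCase().split("\\s+")) {
                grouped.computeIfAbsent(word, k -> new ArrayList<>()).add(1);
            }
        }
        return grouped;
    }

    // Reduce phase: sum the grouped counts for each word.
    static Map<String, Integer> reduce(Map<String, List<Integer>> grouped) {
        Map<String, Integer> counts = new HashMap<>();
        grouped.forEach((word, ones) ->
            counts.put(word, ones.stream().mapToInt(Integer::intValue).sum()));
        return counts;
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList("big data", "big apps");
        System.out.println(reduce(mapAndShuffle(lines)).get("big")); // 2
    }
}
```

Cascading's pipe-assembly abstraction exists precisely so developers can express this kind of logic once, at a higher level, without hand-writing each phase.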
One of those options in Cascading 3.0 is the ability to work with the emerging Apache Tez project, an alternative application framework that takes advantage of improvements to the Hadoop ecosystem such as Apache Hadoop YARN that came with Hadoop's upgrade to version 2. YARN stands for Yet Another Resource Negotiator and is described as a tool that separates resource management from processing components.
“For existing users, they will be able to migrate their existing applications from Hadoop 1 or 2 MapReduce to Apache Tez trivially,” Wensel told this site. “Or any other new fabric that the community adds support for.
“And, as Tez matures and gains features, users can create or use new rules in our query planner to experiment with different features or optimizations in Tez,” said Wensel, the company’s CTO. “The same will be true with other fabrics.”
Cascading 3.0 will ship with support for both MapReduce and Tez and will feature local in-memory computing. Concurrent said that soon after release, the open source community is expected to extend the new pluggable, customizable query planner to work with other alternative technologies such as Apache Spark and Apache Storm.
Concurrent said it designed Cascading 3.0 to help developers build data applications once and then run those applications on the most appropriate fabric, providing flexibility to solve business problems of varying complexity, regardless of latency or scale.
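The design Concurrent describes boils down to separating application logic from the engine that executes it. The following is a hypothetical sketch of that separation, with all names invented for illustration; it is not Cascading's actual API:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

// Hypothetical sketch (not Cascading's real API) of "build once, run on
// the most appropriate fabric": the application is a pipeline of
// transformations, and interchangeable Fabric implementations decide
// how that same pipeline gets executed.
public class FabricSketch {
    // An execution fabric runs a pipeline over some input data.
    interface Fabric {
        <A, B> List<B> run(Function<A, B> pipeline, List<A> input);
    }

    // A local, in-memory fabric: apply the pipeline sequentially.
    static class LocalFabric implements Fabric {
        public <A, B> List<B> run(Function<A, B> pipeline, List<A> input) {
            return input.stream().map(pipeline).collect(Collectors.toList());
        }
    }

    // A stand-in for a distributed fabric; a real one would submit the
    // same pipeline to a cluster engine (MapReduce, Tez, ...) instead.
    static class ParallelFabric implements Fabric {
        public <A, B> List<B> run(Function<A, B> pipeline, List<A> input) {
            return input.parallelStream().map(pipeline).collect(Collectors.toList());
        }
    }

    public static void main(String[] args) {
        // The application logic is written once...
        Function<String, Integer> app = String::length;
        List<String> data = Arrays.asList("big", "data");
        // ...and runs unchanged on either fabric.
        System.out.println(new LocalFabric().run(app, data));    // [3, 4]
        System.out.println(new ParallelFabric().run(app, data)); // [3, 4]
    }
}
```

In Cascading's case, the pluggable query planner plays the role of choosing and targeting the fabric, so the same application can move from MapReduce to Tez without being rewritten.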
“In the same way people have adopted Spring and J2EE for container and service-oriented application development, Cascading is used by enterprises to develop data-oriented applications,” Wensel told this site.
Concurrent, which just a few weeks ago announced a partnership with major Big Data player Hortonworks, today also announced a partnership with Databricks that will let Cascading work with the Apache Spark processing engine.
Databricks, founded by the creators of Apache Spark — recently upgraded to a top-level Apache project — builds software for analyzing data and extracting value. Spark is an open source processing engine — or data analytics cluster computing framework — that speeds up Big Data analytics through the use of in-memory computing and other means.
“One of our primary goals is to drive broad adoption of Spark and ensure a great experience for users,” said Databricks CEO Ion Stoica in a statement. “By partnering with Concurrent, all the developers who already use Cascading will be able to deploy their applications on Spark, while Spark users benefit from direct access to all of the benefits of Cascading and Driven. We are committed to open source and partnering with proven market leaders like Concurrent to drive new growth and innovation in the Big Data community.”
Driven is Concurrent’s flagship commercial offering designed to enhance the development and management of data applications for enterprises.