Scaling Hadoop Linearly with Confidence
Chuck Yarbrough
Director, Big Data Product Marketing
Pentaho
 
Steve Szabo
Enterprise Architect
Pentaho

Wednesday, August 19, 2015
11:45 AM - 12:30 PM

Level: Technical - Introductory


Processing large data sets is a challenge regardless of the technology used to perform the processing. But ensuring that those processes scale linearly as nodes are added is crucial to building out high-scale solutions. Pentaho MapReduce gives users a simple mechanism for creating and executing MapReduce jobs in-cluster with proven linear scalability. This session will give Hadoop developers the confidence they need to build data processing applications that scale easily. It is based on a series of scalability tests performed on Amazon Web Services using Cloudera and Pentaho.
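For context, the sketch below shows what a hand-coded Hadoop MapReduce job looks like in Java; it is the familiar word-count example from the Hadoop documentation, included here for illustration only and not taken from the session material. Pentaho MapReduce is intended to remove this kind of boilerplate by letting developers design the mapper and reducer logic visually as PDI transformations that run inside the cluster.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emits (word, 1) for every token in its input split.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reducer: sums the counts emitted for each word across all mappers.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);   // local pre-aggregation before the shuffle
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // input directory on HDFS
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory on HDFS
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Because mappers run independently on each split and reducers aggregate per key, jobs of this shape can scale out by adding nodes, which is the property the session's AWS/Cloudera tests set out to verify.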


Chuck is the Director of Big Data Product Marketing at Pentaho, a Hitachi Data Systems company specializing in big data analytics, including blending structured and unstructured data. Much of Chuck's focus at Pentaho is on educating organizations about how big data can help them win, serve, and retain customers, lower costs, and grow revenue through the strategic use of data. Prior to Pentaho, Chuck held leadership roles at Deloitte Consulting, SAP Business Objects, Hyperion, and National Semiconductor.

Steve is leading the effort at Pentaho to ensure scalability across the Pentaho Platform, particularly around Hadoop and NoSQL.

