Wednesday, August 19, 2015

O'Reilly Announces Affordable Video Training for Big Data and More

O'Reilly, known for publishing technical books, has announced a new line of affordable training materials.

These "Learning Paths" consist of a group of videos all related to a particular topic:
  • Git source code management (5 courses with 22 hours of video training)
  • Beginning UX Design (3 courses with 10 hours)
  • Design for Mobile (4 courses with 12 hours)
  • Beginning JavaScript (3 courses with 14 hours)
  • Hadoop for Data Scientists (3 courses with 16 hours) 
  • Data Visualization (4 courses with 11 hours) 
  • Data Science with R (5 courses with 24 hours)
  • Beginning Java (4 courses with 26 hours)
  • Python for Data (4 courses with 19 hours)
  • Networking for Sysadmins (3 courses with 17 hours) 


Right now, O'Reilly is offering a special introductory price of $99 for each of the Learning Paths.

Visit O'Reilly's website for details on each Learning Path.

Thursday, July 9, 2015

Taking the Mystery out of Big Data

Today's companies have the potential to benefit from incredibly large amounts of data.

To shake off the mystery of this "Big Data," it's useful to know its history.

In the not-so-distant past, firms tracked their own internal transactions and master data (products, customers, employees, and so forth) but little else. Companies probably only had very large databases if their industry called for high-volume and high-speed applications such as telecommunications, shipping, or point of sale. Even then, those transactions were all formatted in a standard way and could be stored in a relational database, the model an IBM researcher first proposed in 1970.

This was perfectly fine for corporate computing in the 1970s and 1980s. Then, in the middle of the 1990s, along came the World Wide Web, browsers, and e-commerce. Before the end of that decade, a web search engine company named Google was facing the challenge of tracking all of the changes happening across web pages around the globe. The traditional computing option would have been to scale up: get a bigger platform, a more powerful database engine, and more disk space.

But spending money wasn't a good option for a little operation like Google; it was well behind the established search engines like Lycos, WebCrawler, AltaVista, Infoseek, Yahoo, and others.

Google decided on a strategy of scaling out instead of up. Using easily obtained commodity computers, they spread out not only the data but also the application processing. Instead of buying one big supercomputer, they used thousands of run-of-the-mill boxes all working together. On top of this distributed data framework, they built a processing engine using a common software technique known as Map-Shuffle-Reduce.
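
The Map-Shuffle-Reduce idea can be sketched in a few lines of plain, single-machine Python. This is only a toy illustration of the pattern, not Google's implementation: in the real framework, each phase runs in parallel across thousands of machines.

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.lower().split():
            yield (word, 1)

def shuffle_phase(pairs):
    # Shuffle: group all values by key -- the step the framework
    # performs between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: collapse each key's values into a single result.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data is big", "data is everywhere"]
counts = reduce_phase(shuffle_phase(map_phase(docs)))
print(counts["big"])   # 2
print(counts["data"])  # 2
```

The classic "word count" shown here is the canonical first example of the pattern: the map and reduce steps are independent per word, which is what lets the framework spread them across many machines.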

Of course, a scale-out paradigm meant Google now had multiple places where a failure could happen when writing data or running a software process. One or more of those thousands of cheap computers could crash and mess up everything. To deal with this, Google added automated data replication and fail-over logic to handle bad situations under the covers and still make everything work as expected for the user.

In 2003, Google published a paper explaining its distributed data storage methods (the Google File System). The following year, it disclosed details of its parallel-processing engine, MapReduce.

One reader of Google's white papers was Doug Cutting, who was working on Nutch, an Apache Software Foundation open-source web crawler and search engine. Like Google, Doug had run into issues handling the complexity and size of large-scale search problems. Within a couple of years, Doug applied Google's techniques to Nutch and had it scaling out dramatically.

Understanding its importance, Doug shared his success with others. In 2006, while working with Yahoo, Doug started an Apache project called "Hadoop," named after his son's stuffed toy elephant. By 2008, individuals familiar with this new Hadoop open-source product were forming companies to provide complementary products and services.

With our history lesson over, we are back to the present. Today, Hadoop is an entire "ecosystem" of offerings available not only from the Apache Software Foundation but from for-profit companies such as Cloudera, Hortonworks, MapR, and others. Volunteers and paid employees around the world work diligently and passionately on these open-source Big Data software offerings.

When you hear somebody say "Big Data," he or she typically refers to the need to accumulate and analyze massive amounts of very diverse and unstructured data that cannot fit on a single computer. Big Data work is usually accomplished using the following:

  • Scale-out techniques to distribute data and processing in parallel
  • Lots of commodity hardware
  • Open-source software (in particular, Apache Hadoop)

Large companies with terabytes of transactions stored in an enterprise data warehouse on appliances like Teradata or Netezza are not doing Big Data. Sure, they have very large databases, but that's not "big" in today's sense of the word.

Big Data comes from the world around the company; it's generated rapidly from social media, server logs, machine interfaces, and so forth. Big Data doesn't follow any particular set of rules, so you will be challenged when trying to slap a static layout on top of it and make it conform. That's one big reason why traditional relational database management systems (RDBMSs) cannot handle Big Data.

The term "Hadoop" usually refers to several pieces of Big Data software:

  • The "Common" modules, handling features such as administration, management, and security
  • The distributed data engine, known as Hadoop Distributed File System (HDFS)
  • The resource manager (YARN) and the parallel-processing engines that run on it (the traditional MapReduce framework or the newer Spark)
  • A distributed NoSQL data store on top of HDFS (HBase; the separate Cassandra project fills a similar active, operational role with its own storage layer)


In addition to the basic Hadoop software, however, there are lots of other pieces. For putting data into Hadoop, for example, you have several options:

  • Programmatically with languages (e.g., Java, Python, Scala, or R), using Application Programming Interfaces (APIs) or a Command Line Interface (CLI)
  • Streaming data using the Apache Flume software
  • Batch file transfers using the Sqoop module
  • Messages using the Kafka product


When pulling data out of Hadoop, you have other open-source options:

  • Programmatically with languages
  • HBase, Hive with HiveQL, or Pig with Pig Latin, which all provide easier access than writing MapReduce jobs against the underlying distributed file system
  • Elasticsearch or Solr for searching
  • Mahout for machine learning
  • Drill, a distributed query engine for interactive data exploration


But why would you want the complexity of this "Big Data?"

It was obvious for Google and Nutch, search engines trying to scour and collect bytes from the entire World Wide Web. It was their business to handle Big Data.

Most large firms sit at the other end of Google: they have a website that people browse and use, quite probably navigating to it from Google's search results. One Big Data use case for most companies, therefore, is large-scale analysis of their web server logs. In particular, they could look for suspicious behavior that suggests some type of hacking attempt. Big Data can help protect your company from cybercrime.
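
The core of that log-analysis idea fits in a short sketch. The log lines, field layout, and threshold below are made up for illustration; a real deployment would parse full web server log records and run the counting across the cluster.

```python
from collections import Counter

# Toy log lines: client IP and HTTP status (real logs carry far more).
log_lines = [
    "10.0.0.5 200", "10.0.0.5 200", "192.168.1.9 404",
    "192.168.1.9 404", "192.168.1.9 404", "192.168.1.9 404",
    "10.0.0.7 200",
]

def suspicious_ips(lines, max_errors=3):
    # Count 404 responses per client IP; a burst of "not found"
    # errors can indicate someone probing the site for weak URLs.
    errors = Counter()
    for line in lines:
        ip, status = line.split()
        if status == "404":
            errors[ip] += 1
    return [ip for ip, n in errors.items() if n > max_errors]

print(suspicious_ips(log_lines))  # ['192.168.1.9']
```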

If you offer products online, a common Big Data use case would be as a "recommendation engine." A smart Big Data application can provide each customer with personalized suggestions on what to buy. By understanding the customer as an individual, Big Data can improve engagement, satisfaction, and retention.
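
One simple way a recommendation engine can work is co-occurrence: suggest items that often appear in the same basket as something the customer bought. The baskets below are invented for illustration; a production engine would learn from millions of purchase histories spread across the cluster.

```python
from collections import Counter

# Toy purchase baskets (hypothetical data).
baskets = [
    {"milk", "bread", "eggs"},
    {"milk", "bread"},
    {"bread", "jam"},
    {"milk", "eggs"},
]

def recommend(item, baskets, top_n=2):
    # Count how often every other item shares a basket with
    # the given item, then return the most frequent companions.
    co = Counter()
    for basket in baskets:
        if item in basket:
            co.update(basket - {item})
    return [other for other, _ in co.most_common(top_n)]

print(recommend("milk", baskets))
```

With the toy data, "milk" buyers also bought "bread" and "eggs" twice each, so those two items come back as suggestions.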

Big Data can be a more cost-effective method of extracting, transforming, and loading data into an enterprise data warehouse. Apache open-source software might replace and modernize your expensive proprietary COTS ETL package and database engines. Big Data could reduce the cost and time of getting your BI results.  

It's a jungle out there; there's fraud happening. You may have some bad customers with phony returns, a bad manager trying to game the system for bonuses, or entire groups of hackers actively involved in scamming money from your company. Big Data can "score" financial activities and provide an estimate of how likely it is that individual transactions are fraudulent.
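
A minimal form of such scoring is an outlier test: how far does each transaction sit from the norm? The sketch below uses a simple z-score on invented amounts; real fraud models combine many signals, but the "score and flag" shape is the same.

```python
import statistics

def fraud_scores(amounts):
    # Score each transaction by how many standard deviations
    # it sits from the mean; extreme values deserve review.
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [abs(a - mean) / stdev for a in amounts]

# Hypothetical transaction amounts; one is wildly out of line.
amounts = [20.0, 25.0, 22.0, 18.0, 500.0]
scores = fraud_scores(amounts)
flagged = [a for a, s in zip(amounts, scores) if s > 1.5]
print(flagged)  # [500.0]
```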

Most companies have machine-generated data: time-and-attendance boxes, garage security gates, badge readers, manufacturing machines with logs, and so forth. These are examples of the emerging tsunami of "Internet of Things" data. Capturing and analyzing time-series events from IoT devices can uncover high-value insights of which we would otherwise be ignorant.
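
Time-series analysis of machine data often starts with something as simple as comparing each reading against a rolling baseline. The sensor readings and threshold here are invented; at Big Data scale, the same windowed logic would run over streams from thousands of devices.

```python
def alert_on_spikes(readings, window=3, factor=2.0):
    # Flag a reading when it exceeds the average of the
    # previous `window` readings by the given factor.
    alerts = []
    for i in range(window, len(readings)):
        baseline = sum(readings[i - window:i]) / window
        if readings[i] > factor * baseline:
            alerts.append(i)
    return alerts

# Toy temperature feed from a single machine sensor.
temps = [70, 71, 69, 70, 72, 160, 71]
print(alert_on_spikes(temps))  # [5]
```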

The real key to Big Data success is having specific business problems you need to solve and on which you would take immediate action.

One of my clients was great about focusing on problems and taking actions. They had pharmacies inside their retail stores and, each week, a simple generated report showed the top 10 reasons insurance companies rejected their pharmacy claims. Somebody was then responsible for making sure the processing problems behind the top reasons went away.

Likewise, the company's risk management system identified weekly the top 10 reasons customers got hurt in the stores (by the way, the next time you are in a grocery store, thank the worker sweeping up spilled grapes from the floor around the salad bar). This sounds simple, but you might be surprised at the business benefits obtained from constantly solving the problems at the top of a dynamic Top-10 list.

Today, your company may be making the big mistake of ignoring the majority of data around it. Hadoop and its ecosystem of products and partners make it easier for everybody to get value from Big Data.

We are truly just at the beginning of this Big Data movement. Exciting things are still ahead. 

Saturday, May 23, 2015

Information Builders Talking Big Data at Summit 2015

In just a couple of weeks, Information Builders will hold its annual user conference in Kissimmee, Florida. Many of the topics at Summit 2015 will deal with Big Data.



Be sure to attend the following sessions:

Tom White, MapR Technologies
Eric Greisdorf, Information Builders
Sunday 2:00PM - 3:00PM
We've all heard that the market is demanding big data solutions that provide real-time insights on their data. With the countless claims of companies solving this problem, how can you discern fact from fiction? And how do these solutions support WebFOCUS in providing real-time insights? Join MapR Technologies, an Information Builders partner and provider of the leading Hadoop distribution, to learn how MapR and WebFOCUS deliver on the promise of true real-time data analytics.


John Thuma, Teradata Big Data Practice
Sunday 3:15PM - 4:15PM
Big data is not a product or a service. Big data is a movement. Understanding how you can leverage big data from within your enterprise may be a challenge. Business intelligence (BI) and data warehousing have matured into technology, process, and people. In this session, we will discuss how BI tools fit into this new big data zoo. The secret is, there is no secret. Don't forget what you already know.


Boris Evelson, Forrester Research
Monday 1:30PM - 2:30PM
Customer insight teams, agile business intelligence (BI) investments, and big data buzz have grown at breakneck rates as organizations try to capitalize on new data with limited success. To break through the data fog, technology leaders need new approaches to systematically link data directly to insight and action. In this session, Mr. Evelson will answer questions such as: (1) What will it take for organizations to start using more of its data for analysis and insights? Today, the average organization uses only 12 percent of its data. (2) Why is business agility a key success factor in the age of the customer, and what impact does it have on your earlier-generation data management and BI investments? (3) What are the key differences between earlier-generation BI and the leading-edge systems of insights? (4) What are the key components of the new-generation systems of insights (processes, people, technology)?


Howard Dresner, Dresner Advisory Services, LLC
Monday 4:00PM - 5:00PM
In this session, veteran industry analyst Howard Dresner shares the latest findings from his annual "Wisdom of Crowds Business Intelligence Market Study." He'll answer questions such as: Who's driving business intelligence (BI) within the organization? Who are the targeted users and how are they changing? Which organizations are most successful with BI and why? What do organizations hope to achieve with BI and how is that changing over time? Which technologies and initiatives are most important, which are climbing, and which are falling? What is the current state of data and how has this changed since last year? How are people sharing BI-derived insights within their organizations and has this improved since 2014? How has user adoption of BI changed in recent years and why?


Mark Smith, Ventana Research
Tony Cosentino, Ventana Research
Tuesday 11:00AM - 12:00PM
In today's applications, systems, and devices, there is data being generated every second of the day that can either overwhelm an organization, or improve its effectiveness. Smart organizations architect their enterprise to integrate and process data from any location, including cloud computing and the Internet of Things (IoT), and at any time to deliver analytics and business intelligence (BI) that improve performance. Using a business perspective on technology and IT is required to bring the right analytics and BI technology and skills to an organization. Moving beyond the hype on agile and self-service BI requires a focus on the metrics and information people need to be effective in their roles and responsibilities. Unveiling the latest in analytics and data research across business and IT, Ventana's Tony Cosentino and Mark Smith will provide best practices and steps to help any organization be effective in using big data for a strategic advantage in analytics and BI.


Stephen Mooney, Information Builders
Tuesday 11:00AM - 12:00PM
Are you interested in learning how iWay leverages the Hadoop ecosystem? Join us for an informative session on big data, where we will show you how iWay is harnessing the power of technologies like Sqoop, Flume, Kafka, Storm, and HDFS to provide a simplified and reliable data integration platform.


Clif Kranish, Information Builders
Wednesday 9:45AM - 10:45AM
Many organizations now rely on Hadoop for their big data initiatives. In this presentation, we will show you how data managed by Hadoop can be staged by DataMigrator and used by WebFOCUS. We will cover how to use the data adapter for Hive and when to use Impala or Tez. You will learn how arrays and other complex Hive data types are supported, and how to take advantage of alternatives to HDFS, such as MapR-FS. We will also introduce the new Phoenix adapter for the NoSQL database HBase, which is distributed with Hadoop.

About Me

I am a project-based software consultant, specializing in automating transitions from legacy reporting applications into modern BI/Analytics to leverage Social, Cloud, Mobile, Big Data, Visualizations, and Predictive Analytics using Information Builders' WebFOCUS. Based on scores of successful engagements, I have assembled proven Best Practice methodologies, software tools, and templates.

I have been blessed to work with innovators from firms such as: Ford, FedEx, Procter & Gamble, Nationwide, The Wendy's Company, The Kroger Co., JPMorgan Chase, MasterCard, Bank of America Merrill Lynch, Siemens, American Express, and others.

I was educated at Valparaiso University and the University of Cincinnati, where I graduated summa cum laude. In 1990, I joined Information Builders and for over a dozen years served in regional pre- and post-sales technical leadership roles. Also, for several years I led the US technical services teams within Cincom Systems' ERP software product group and the Midwest custom software services arm of Xerox.

Since 2007, I have provided enterprise BI services such as: strategic advice; architecture, design, and software application development of intelligence systems (interactive dashboards and mobile); data warehousing; and automated modernization of legacy reporting. My experience with BI products includes WebFOCUS (vendor-certified expert), R, SAP BusinessObjects (WebI, Crystal Reports), Tableau, and others.