
Five Big Data Mistakes


A successful big data implementation avoids these common pitfalls.


by Subramanian Iyer,
March 2014

A 2013 survey by the big data firm Infochimps found that more than 55 percent of big data projects are never completed, and many more fail to meet their objectives. Given that more than 81 percent of the companies in the same survey ranked big data projects among their top five IT priorities for 2013, that failure rate is significant.


Here are the most common mistakes organizations make on the journey of implementing big data. Avoiding them does not guarantee a successful implementation, but it certainly improves the odds of success.

1. Focusing on technology instead of the business need: Too often, IT leaders evaluate the infrastructure needed for a big data analytics solution rather than the business requirement for it. They focus on storage and compute capacity, or make decisions based on technology alone, usually in an effort to contain ballooning infrastructure costs. Instead, IT leaders should focus on the business outcomes of big data initiatives. Doing so puts business context around the data collected, increases alignment with the business, and ensures that IT delivers what the business really needs. Companies can then architect the technology to support those outcomes, potentially reducing spend on new investments.


2. Working with vendor use cases so that “quick wins” can be achieved: There seems to be a stock set of published big data use cases that customers have to sit through whenever they ask for a big data evaluation. Customers assume that vendor use cases will jumpstart the benefits of their big data implementations, since the technology components have been tried and tested in their industry. While that is true in some cases, it is critical for organizations to pursue the use cases that will have the biggest impact on their own organization, because the results that matter depend largely on the organization’s direction and culture. Organizations are unique, and their data needs to be interpreted from that perspective. Use cases should be tailored to where the organization is headed, not to where a vendor thinks it should focus.

3. Executing big bang or pilot implementations: Executives who are undecided about the potential benefits of big data sometimes launch a number of initiatives in parallel as part of a big bang approach. Some of those initiatives may yield benefits while others do not, muddying the organization’s understanding of big data’s actual value. A big bang implementation also suggests the organization has not thought the ramifications through, especially when procuring infrastructure.

Other organizations choose to be conservative, implementing a single initiative as a pilot project to evaluate whether further investment in big data is worthwhile. Executing a pilot in isolation usually signals the same indecision about potential benefits.

Big data implementations have the potential to change organizational business strategies and must be executed cautiously. It is important to understand that big data implementation is really a journey with multiple modules that get added and refined over time. The ideal situation would be to chart out a full-fledged reference architecture – one that includes use cases across the organization – and begin the implementation process in a phased manner. A sample approach is provided in the accompanying diagram.

[Figure: a sample phased approach to a big data implementation]
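As a rough sketch of what such a phased plan could look like, the Python below models a roadmap of phases, each delivering a set of use cases and depending on earlier phases. Every phase name, use case, and duration here is a hypothetical illustration, not a recommendation for any specific organization.

# A minimal sketch of a phased big data roadmap built from a reference
# architecture. All phase names, use cases, and durations below are
# hypothetical illustrations.

from dataclasses import dataclass, field

@dataclass
class Phase:
    name: str
    use_cases: list                     # use cases delivered in this phase
    duration_months: int
    depends_on: list = field(default_factory=list)

roadmap = [
    Phase("Foundation", ["data ingestion", "security baseline"], 4),
    Phase("Customer analytics", ["churn scoring"], 3,
          depends_on=["Foundation"]),
    Phase("Operations", ["log analytics", "capacity forecasting"], 5,
          depends_on=["Foundation"]),
]

# Print the plan with cumulative elapsed time, assuming phases run serially.
elapsed = 0
for phase in roadmap:
    elapsed += phase.duration_months
    deps = ", ".join(phase.depends_on) or "none"
    print(f"{phase.name}: {len(phase.use_cases)} use case(s); "
          f"depends on: {deps}; cumulative months: {elapsed}")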

4. Not executing a cost-benefit analysis: A critical aspect of any big data implementation is its cost implication. Depending on the implementation methodology, every use case can have a different cost model. Too often, the initial use case is used to model the overall cost of the big data journey, overlooking the fact that the initial use case is usually low-hanging fruit. It is therefore important to build an overall cost model around the reference architecture, which allows a reasonable degree of cost predictability as the implementation progresses.
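To see why extrapolating from the first use case misleads, consider a back-of-the-envelope model in which each use case carries its own infrastructure and operating costs. All figures below are invented for illustration.

# A back-of-the-envelope cost model per use case. Every figure below is
# invented for illustration; a real model would use the organization's
# own infrastructure, staffing, and benefit estimates.

use_cases = {
    # name: (one-time infrastructure cost, monthly ops cost, monthly benefit)
    "clickstream pilot":     (50_000,  5_000,  20_000),   # low-hanging fruit
    "fraud detection":       (400_000, 30_000, 90_000),
    "supply chain planning": (250_000, 20_000, 45_000),
}

def payback_months(infra, ops, benefit):
    """Months until cumulative net benefit covers the up-front cost."""
    net = benefit - ops
    return None if net <= 0 else infra / net

for name, (infra, ops, benefit) in use_cases.items():
    months = payback_months(infra, ops, benefit)
    if months is None:
        print(f"{name}: never pays back at these rates")
    else:
        print(f"{name}: pays back in {months:.1f} months")

Under these assumed numbers, the pilot pays back in about three months while the later use cases take two to three times as long, which is exactly the distortion a use-case-by-use-case cost model avoids.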

Thought Leader


Subramanian Iyer is senior director of Insight and Customer Strategy at Oracle Corporation.

5. Running environments in a business-as-usual model: Big data deployments require different mechanisms for authentication, access, data isolation, and environment management than traditional deployments do, which in turn requires changes to operational processes. Simply folding big data deployments into an existing environment will not work. Big data environments need to be structured separately, and the operating processes that maintain them must change as well. Failing to do so only guarantees a highly complex and unsustainable architecture.
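One way to keep that separation explicit, sketched below under assumed environment names, groups, and settings, is to treat each environment’s authentication, isolation, and operating ownership as a distinct configuration and fail fast when a big data environment merely clones the traditional one.

# A sketch of keeping big data environments structurally separate from
# traditional ones. Environment names, groups, and settings are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class EnvConfig:
    name: str
    auth_mechanism: str    # e.g., Kerberos for the cluster vs. LDAP elsewhere
    data_isolation: str    # how workloads and tenants are separated
    admin_group: str       # which team owns the operating processes

traditional = EnvConfig("erp-prod", auth_mechanism="ldap",
                        data_isolation="per-schema", admin_group="dba-team")

big_data = EnvConfig("analytics-cluster", auth_mechanism="kerberos",
                     data_isolation="per-cluster", admin_group="bigdata-ops")

def check_separation(existing: EnvConfig, new: EnvConfig) -> None:
    """Fail fast if the new environment merely clones the existing model."""
    if (existing.auth_mechanism, existing.admin_group) == \
            (new.auth_mechanism, new.admin_group):
        raise ValueError(f"{new.name} reuses {existing.name}'s authentication "
                         "and operating model; define separate processes")

check_separation(traditional, big_data)  # passes: the two models differ
print("environments are structured and operated separately")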

To create the most value for their organizations, those executing big data implementations need to take a holistic view of what the organization requires, get the organization committed to the strategy, and then execute in phases, all the while moving toward the reference architecture built as part of the original strategy.

 
 