CFOs and CIOs often fall short in understanding all the considerations for providing the solid enterprise infrastructure needed to leverage the software's capabilities.
by Eric Helmer
Enterprise Performance Management (EPM) and Business Intelligence (BI) suites have dramatically changed financial and operational reporting strategy for organizations. These new solutions are accelerated by new capabilities such as drill back to source systems, integration with governance and tax systems, predictive analytics, and social media. Modern technologies allow CFOs to deliver better analytics not only to corporate advisors but also to staff in the field. We have evolved from simple spreadsheet reporting to doing true analytics and gaining unprecedented insight.
As companies become more global and bring more divisional units into an enterprise EPM and BI solution, they become increasingly sensitive to performance, security, scalability, cost, and support as the solution becomes more mission critical. However, many CFOs and CIOs fall short in understanding all the considerations for providing the solid enterprise infrastructure needed to leverage the software's capabilities. Here are some of the common mistakes, and how to avoid them:
Whether it is a full initial EPM roll-out or an upgrade from a previous version, proper enterprise design and planning are crucial to a successful EPM/BI implementation. An upgrade is a perfect time to adopt better practices and move higher on the maturity model. The following success factors are critical when implementing EPM and BI systems:
1. Reducing cost: Modern architectures leverage a 64-bit platform and cheaper server resources. Today, organizations can simplify their footprint by co-locating services by their type rather than purpose (Web-based, service-based, and database-based functions). A smaller footprint not only reduces hardware cost, but also cuts maintenance, operating system licensing, power consumption, backup resources, and data center floor space. Other ways to save include considering cheaper operating systems such as Linux, looking at Oracle’s SaaS and cloud models, and employing virtual machines. For large implementations, Oracle’s engineered systems such as Oracle Exalytics can drastically reduce the number of servers, to the point of paying for themselves.
2. Getting better, faster support: World-class organizations have a strong helpdesk that can assist with level-one issues and will escalate only after exhausting level-one troubleshooting. These organizations also have a ticketing system tightly tied into the support organization for issue tracking, root-cause analysis, impact reporting, and resolution documentation. It is also common to employ automated monitoring solutions designed to alert support teams of failures proactively, even before users are aware of a problem. When the need arises, it is imperative to have a clear plan for support escalation and ownership as well.
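The proactive monitoring described above can be sketched in a few lines. This is a minimal illustration, not a production monitor: the endpoint names and URLs are hypothetical, and a real deployment would use a dedicated monitoring tool and page the support team rather than print an alert.

```python
# Minimal proactive health-check sketch. Endpoint names and URLs are
# hypothetical examples; real deployments would load them from config.
from urllib.request import urlopen
from urllib.error import URLError


def check(name, url, timeout=5):
    """Poll one endpoint; return (name, ok, detail)."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return (name, resp.status == 200, "HTTP %d" % resp.status)
    except URLError as err:
        # DNS failure, refused connection, timeout, etc.
        return (name, False, str(err.reason))


def run_checks(endpoints):
    """Poll every endpoint and return only the failures, for alerting."""
    results = [check(name, url) for name, url in endpoints.items()]
    return [r for r in results if not r[1]]
```

Run on a schedule (for example, from cron every few minutes), `run_checks` surfaces a failed tier so the support team can open a ticket before the first user calls.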
3. Reducing vulnerability: EPM and BI deployments are commonly used to provide analysis on corporate financial and operational data that most would consider confidential and sensitive. As such, executives must protect the data from falling into the wrong hands or being manipulated. They must design for security from the beginning and then test to ensure it was done correctly. Commonly, users authenticate to the EPM system with the same IDs and passwords they use in other systems such as ERP, payroll, and CRM, so a compromised ID/password combination would expose more than just the EPM system. We must also protect against hardware and data center failures. Last year’s storms such as Hurricane Sandy were a somber reminder that anything can happen. It’s devastating to think about losing everything so quickly. The best IT design includes business continuity plans for any sort of failure—even a catastrophic one. Organizations must maintain a documented plan that addresses everything from simple failures to full disasters and is fully tested annually.
4. Increasing performance: Modern strategies to maximize performance use hardware load balancers to distribute user requests across many servers, spreading the load across multiple resources. Scalability is inherent in this design: we can simply add servers as needed with little to no impact on users. Adequate server subsystems such as memory, CPU, and disks are imperative as well. Fine-tuning these and other specific configurations needs to happen after product deployment to best utilize the server resources, as the out-of-the-box default settings assume only the bare minimum requirements.
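The distribution idea behind a load balancer can be sketched as simple round-robin rotation. This is an illustration of the concept only; the server names are made up, and real hardware load balancers add health checks, session affinity, and weighting on top of this basic rotation.

```python
# Round-robin sketch of the core load-balancing idea (illustrative
# only; server names are hypothetical).
from itertools import cycle


class RoundRobinBalancer:
    """Rotate incoming requests across a pool of servers."""

    def __init__(self, servers):
        self.servers = list(servers)
        self._pool = cycle(self.servers)

    def next_server(self):
        # Each request is handed to the next server in turn.
        return next(self._pool)

    def add_server(self, server):
        # Scaling out: a new server joins the rotation with no change
        # visible to clients (the rotation restarts from the front).
        self.servers.append(server)
        self._pool = cycle(self.servers)
```

For example, `RoundRobinBalancer(["epm-app-1", "epm-app-2"])` alternates requests between the two servers, and calling `add_server("epm-app-3")` immediately brings the third into the rotation, which is why this design scales with so little user impact.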
5. Aligning the IT roadmap with business direction: Finally, and perhaps most importantly, the IT design and strategy should be aligned with future business direction. Of course, proper design takes into consideration current requirements such as uptime, availability, tolerance to failures, maintenance, quality assurance, and support. However, one must also understand the future direction from a scalability perspective. Companies should involve IT from the beginning in any plans, or possibilities, of adding users, departments, even countries.
From traditional on-site hardware to cloud, SaaS, and engineered systems, a full hardware and IT process design must be performed in lockstep with corporate strategies around support, business continuity, performance, quality, and security. In the end, enterprises need a simple and robust system that maximizes performance and availability while minimizing footprint and cost. There is no one solution that fits all. CFOs and CIOs who partner with their IT architects during strategic direction initiatives stand to gain the most efficiency, flexibility, and cost effectiveness going forward.
Eric Helmer is vice president of IT Services at Linium and an Oracle ACE Director.