Art Wittmann | Oracle Technology Content Director | February 9, 2025
Every business needs a data center strategy. How flexible, scalable, and expansive that strategy should be depends on the organization, but as technology’s importance to a business grows, so does the need for a solid plan to support goals.
The data center is foundational for an integrated and efficient tech operation, but it’s no longer the sole focus. Rather, the data center and the systems within it need to fit into a larger picture, which often includes cloud-based resources that must mesh with those in an owned data center.
A data center strategy is a plan that details where an organization’s technology will run and how it will be accessed by stakeholders. A data center strategy must weigh both the technical and business concerns tied to an organization’s ongoing improvement.
Key Takeaways
Being physical structures, data centers have limitations. Expanding beyond what the floor space, power, and cooling of an existing data center can handle is an expensive proposition. Likewise, creating a plan to leave an organization’s data centers implies potentially years of effort as the workloads running in the data center are moved and updated or converted to cloud-based applications.
Strategies must chart a path between expanding and exiting existing facilities, including how cloud and colocation facilities may come into play.
Other strategic considerations, such as continuity plans and general business evolution, will also inform a data center strategy. For instance, running all systems in a single data center represents a single point of failure, an often untenable risk. For many organizations, the goal is not to own more data center space. Rather, it’s to use existing space in combination with other resources, including cloud, to meet the organization’s needs.
Crafting the right data center strategy for your organization is predicated on deciding where workloads are best run and how your technology strategy is best realized.
Creating a data center strategy involves assessing your current needs and facilities, identifying business needs and growth projections, and determining the best technology and infrastructure to support the organization. Note that if you have a data center, you have a data center strategy—it might feel current and in lockstep with business priorities, or it may feel out of date and in need of attention. Either way, understanding where you stand is a critical first step in creating a document that can be reviewed by IT management and other stakeholders.
1. Figure out your current data center strategy
This step is mostly a matter of collecting budgets, procedures, and existing plans and pulling them into a single document that describes the higher-level strategy along with the steps typically taken to execute it. If this is the first time such an effort has been undertaken, it’s likely that the data center strategy boils down to “support existing workloads within the current budget.” An important part of the project is capturing the current SLAs offered to business partners and a record of recent outages. The goal is simply to answer the question: What are we currently doing, and how well is it working?
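To ground step 1, the outage record can be turned into a simple achieved-availability figure to compare against the SLAs you collect. A minimal sketch in Python, where the outage timestamps and the review period are purely hypothetical:

```python
from datetime import datetime

# Hypothetical outage log: (start, end) timestamps for each incident
# over the review period. The dates and durations are illustrative only.
outages = [
    (datetime(2024, 3, 4, 2, 0), datetime(2024, 3, 4, 5, 30)),
    (datetime(2024, 9, 17, 13, 0), datetime(2024, 9, 17, 13, 45)),
]

period_start = datetime(2024, 1, 1)
period_end = datetime(2025, 1, 1)

total_hours = (period_end - period_start).total_seconds() / 3600
downtime_hours = sum((end - start).total_seconds() / 3600 for start, end in outages)

availability = 100 * (1 - downtime_hours / total_hours)
print(f"Achieved availability: {availability:.3f}%")
```

If achieved availability falls short of the promised SLA, say 99.9%, that gap belongs in the strategy document.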
2. Align IT strategy with business goals
How happy are IT’s customers with the services they receive? It’s unlikely that business partners will have a particular point of view on your data center itself, but they will have strong opinions on the value and performance of the workloads it supports. Query partners on application responsiveness, the availability of the most current versions of software, the quality of the integrations between applications, and the utility of data produced by applications.
If the fact that an app runs on-premises means that upgrades are either slow to come or won’t come at all, it may be time to consider switching to an as-a-service model. Likewise, if running in the data center makes the app expensive to own or leaves performance lacking—say for remote users—it’s time to reevaluate.
3. Assess the changing IT environment
Review your typical equipment upgrade cycle. Will you keep up with the storage and computing power on-premises applications need? Do you have a handle on the changing criticality of applications, likely increases in usage, new or updated regulations, new workloads, and other items on the roadmap? All may either indicate new workloads for the data center or dictate that some workloads head to the cloud.
4. Document current IT assets
It’s hard to imagine an IT organization working at an enterprise level that doesn’t already have an inventory—the finance department will require detailed lists of IT assets so gear can be properly depreciated. The group responsible for asset management will also be tracking what the company owns and where items are. Typically, companies track software subscriptions and licenses, locations, contracts, support, networking diagrams, and the like.
Inventory control lists typically won’t have the level of detail needed by IT, however. Instead, these teams will need lists of assets along with current firmware versions as well as add-on equipment that may have been installed, such as networking cards or other bus cards. This information can generally be found in your orchestration system, which likely requires agents running on each server. These agents will assess health and report any configuration changes. Similarly, networking hardware and storage systems will be known to your orchestration system.
If you somehow aren’t using any form of orchestration software, you may have a configuration management database. These were popular before virtualization came into common use, and they too will tell you what you need to know about the systems running in your data center.
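As a concrete illustration of combining these sources, the sketch below merges a finance-style asset list with a report from an orchestration system and flags hosts that no agent is reporting for; every hostname, field, and value is invented for the example:

```python
# Hypothetical data: a finance-style asset list and an orchestration
# system's per-host report. All names and fields are illustrative only.
finance_assets = {
    "srv-001": {"model": "R650", "purchased": "2022-06"},
    "srv-002": {"model": "R750", "purchased": "2023-01"},
}

orchestration_report = {
    "srv-001": {"firmware": "2.14.1", "addons": ["25GbE NIC"]},
    # srv-002 is absent: its agent may be down, or it was never enrolled
}

# Merge the two views and flag hosts the orchestration system can't see,
# since those gaps are exactly what an inventory review should surface.
inventory = {}
for host, record in finance_assets.items():
    details = orchestration_report.get(host)
    inventory[host] = {**record, **(details or {}), "agent_reporting": details is not None}

for host, record in sorted(inventory.items()):
    print(host, record)
```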
5. Assess current data center options
Your options will depend on your goals as well as other factors, including existing leases, power use, facility condition, available capacity, and location relative to business and user needs. Is the data center doing what it needs to do, but could use more capacity? Addressing that might be a matter of adding power and cooling or possibly simply using existing cooling differently.
If you’re looking to shut down your data centers, then it’s a much bigger task. For most enterprises, exiting a data center will take several years and is a very expensive proposition. The focus won’t be on what options exist for the data center, but rather to rethink how workloads should be handled to best suit the business.
If circumstances are extraordinary—like there’s a wrecking ball in your data center’s future—then starting with where to move each workload will be necessary. Options include colocation facilities, moving workloads to public or private cloud services, and shifting workloads to other data centers owned by the company. Assessing the cost and time associated with each option will be important if the wrecking ball is imminent, but in most cases, it’s wiser to start with what the business needs out of its IT infrastructure and work back to whether your current data center fits that long-term plan.
6. Create a plan for applications
Disposition of the data center should never drive application plans. The applications and how they serve the business are what’s important. The data center is just a room where some of those applications currently run. For instance, it would be wrongheaded to think about moving your HR system to a cloud service just because of an issue with your data center, such as your HVAC nearing end of life. On the other hand, if moving to a cloud-based HR app is in the best interest of the company, then that decision may affect your data center strategy.
Moving any enterprise application is an important decision that can affect the long-term success of the business and should be independent of a change in data center strategy. To be precise: Your application strategy should drive your data center strategy. It should never be the other way around.
7. Build a custom framework
Each organization’s combination of budget, existing assets, IT resources, data communication needs, applications, and future scalability expectations is unique. Particularly when organizations own more than one data center, use colocation facilities, and run applications in the cloud, developing a customized picture of resources in use is an important step to understanding the role that the data center should play in the future. The application picture comes first; how workloads are managed and delivered comes next. This will help you form the framework you need to manage changes that might need to take place in the data center itself.
8. Compare current state to custom scenarios
With the big picture of where workloads run in hand, it’s time to run some scenarios. While it’s perfectly reasonable to ask, “What if we didn’t have this data center?” or “What if we added some key workloads to our data centers?” without immediately considering the costs, realize that such moves are expensive, time-consuming, and resource-intensive. The benefits of big moves need to be just as big. Fully exiting your data centers, for instance, means shifting hundreds or possibly thousands of workloads, and it can take years to do.
9. Contact an expert, if needed
A data center migration is, to put it bluntly, not easy. With quickly evolving technology and the constraints of both budget and resources, making critical decisions can feel at best confusing and at worst overwhelming. Fortunately, a wide range of consultants can help guide or even manage the journey. In many cases, the cost of hiring a consultant, even for just a one-time sanity check, is well worth the investment and often far less costly than dealing with unexpected setbacks.
10. Choose a strategy and develop a roadmap
With all current state and strategy guideline information in place, it’s time to look at the details and develop a roadmap. This phase involves the nitty-gritty steps: securing and migrating data, examining automation options, prioritizing departments and sections of data, assessing power needs, and building a timeline. Each organization will work to a unique roadmap reflecting the complexity of its existing configuration, budget, application connectivity, timeline, security needs, external vendor contracts, multicloud/hybrid connectivity, and internal functional needs. A thorough roadmap ensures that risks and surprises are kept to a minimum while teams recognize and proactively plan for the biggest migration challenges.
Key components of an effective data center strategy include having a clear understanding of your current infrastructure and future needs, establishing robust security and compliance features, utilizing efficient and cost-effective hardware and facilities, ensuring disaster recovery plans are in place, choosing capable vendor partners, and hiring or training up an IT team that can effectively manage and maintain the data center infrastructure.
The following breaks down specific pieces of an effective data center strategy:
Infrastructure Design and Scalability
Knowing your compute, storage, and network needs provides the guidelines for infrastructure design, helping to inform refresh schedules. Chances are your data center is used to evolutionary performance improvements as technology improves with each refresh cycle. New uses and new applications may not fit that evolutionary pattern. In particular, modifying systems to respond to customer queries—say around inventory availability, new orders, or delivery schedules—may result in much greater demand on applications. Both apps and infrastructure must be designed to handle new loads.
Security and Compliance Measures
Every data center element should meet security standards based on the latest state of technology and risk. However, many organizations will need additional layers on top of that. Government contracts or possession of healthcare, financial, or other types of sensitive data require specialized security configurations. Data centers must also meet regional compliance requirements, such as the EU’s GDPR and the California Consumer Privacy Act (CCPA).
Energy Efficiency and Cooling Solutions
Changing out your data center’s cooling or backup power system is a very expensive proposition—so much so that taking measures to work within the existing cooling and power capacity of the facility can become a key factor in equipment layouts. Spreading servers that produce a lot of heat across racks is the first defense against data center hot spots. In instances where it’s important to keep systems together, racks with their own improved cooling and high-capacity power can be a way to extend the utility of a data center’s existing systems. New or unique workloads that will require significantly different servers, storage, networking, power, and cooling may be good candidates for colocation facilities or cloud deployments.
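The idea of spreading heat across racks can be sketched as a simple greedy placement: take the hottest servers first and put each one in the coolest rack so far. The server names, wattages, and rack names below are all hypothetical:

```python
# A minimal greedy sketch of spreading hot servers across racks to
# avoid hot spots. Server wattages and rack names are made up.
servers = {"db-1": 900, "gpu-1": 1400, "web-1": 350, "web-2": 350, "gpu-2": 1400, "app-1": 600}

racks = {"rack-A": [], "rack-B": [], "rack-C": []}
rack_heat = {rack: 0 for rack in racks}

# Place the hottest servers first, each into the coolest rack so far.
for name, watts in sorted(servers.items(), key=lambda kv: -kv[1]):
    coolest = min(rack_heat, key=rack_heat.get)
    racks[coolest].append(name)
    rack_heat[coolest] += watts

print(rack_heat)
```

A real layout must also respect per-rack power and cooling limits, but the balancing intuition is the same.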
Disaster Recovery and Business Continuity
Any data center, even in a hybrid configuration, will need a robust disaster recovery plan that includes worst-case-scenario contingencies for natural disasters, connectivity issues, and major power outages. A business continuity (BC) plan will include data center business-support considerations and provide for contingencies should the data center become inaccessible.
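One piece of such a plan is easy to make concrete: checking current exposure against a recovery point objective (RPO). A minimal sketch, with the timestamps and the 4-hour RPO chosen purely for illustration:

```python
from datetime import datetime, timedelta

# A small recovery-point check: given the last successful backup and a
# target RPO, how much data would a disaster lose right now?
# The timestamps and the 4-hour RPO are illustrative.
rpo = timedelta(hours=4)
last_backup = datetime(2025, 2, 9, 6, 0)
now = datetime(2025, 2, 9, 11, 30)

exposure = now - last_backup
print("within RPO" if exposure <= rpo else f"RPO exceeded by {exposure - rpo}")
```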
Vendor Selection and Management
An organization’s choice of vendors can significantly impact the performance, cost-effectiveness, and security of a data center. Key considerations when selecting and managing vendors start with ensuring that vendor roadmaps align with your strategic objectives and that the provider can support your growth over the long term, considering factors such as industry expertise and financial stability. Evaluate the vendor’s technical capabilities and track record of delivering high-quality solutions. Are available SLAs and pricing models aligned with your specific requirements? Do its security practices meet your organization’s requirements? Maybe most important, look for vendors that are committed to innovation and are investing in emerging technologies, such as AI, automation, and advanced security.
Workforce Training and Skills Development
As servers pack in more computing power and expel more heat, understanding the dynamics of networking, cooling, and power distribution in densely packed data centers is a unique skill that will be increasingly in demand. Managing access and dealing with the inevitable system outages also require special expertise. More and more, businesses that own data centers must upskill employees to meet staffing requirements.
Data centers and the technologies that drive them don’t change all that rapidly, though there is certainly innovation in power delivery, cooling, physical security, rack compactness, and more. But it’s often related trends that drive new thinking about data center resources. Here are some to watch.
Shift to Cloud-Based Solutions
Moving workloads to the cloud can be part of a strategy to shut down data centers, rework workloads to better meet business needs, or provide better reliability or scalability. In some cases, it’s time to abandon older applications and move to SaaS-based offerings that may integrate better with other applications in use or are better able to intrinsically handle variations in load. In any case, understanding the ROI of moving workloads is important. For older applications, the best option may be to let them run where they are.
Green and Sustainable Data Centers
Purchasing power from renewable sources and implementing equipment recycling programs can substantially improve the sustainability of a data center. Choosing servers and other systems that can run at higher temperatures can also be a way to save power. There’s typically a cost associated with green efforts for on-premises systems. If being green is an important goal, consider colocation facilities or cloud providers that have made significant investments in sustainability.
Edge Computing and Micro Data Centers
Edge computing is a way to provide systems and applications with necessary processing without the need to send real-time streams of data back to a data center. Instrumented devices, like those used in manufacturing or energy production, may produce so much data that sending it back to a central location for processing is impractical and incurs too much latency. Edge computing systems provide significant local processing power so that data is quickly analyzed near where it is produced, with only summary data being sent back to central data centers. While the advantages are obvious, edge computing systems come with many of the same physical security, cybersecurity, and fault tolerance requirements faced in larger data centers.
Micro data centers are self-contained systems that bring processing power closer to the end user or device to minimize latency. They’re typically a purpose-built set of hardware customized to meet specific requirements, housed in a portable or modular enclosure that can be easily deployed outside a main data center environment.
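The summarize-locally pattern described above can be sketched in a few lines: analyze raw readings at the edge and ship only a compact summary (plus any alerts) upstream. The sensor readings and the 30.0 alert threshold are hypothetical:

```python
import statistics

# Hypothetical sensor stream from an instrumented production line.
readings = [20.1, 20.3, 25.7, 20.2, 20.0, 31.4, 20.2]

# Analyze locally at the edge; send only a compact summary upstream
# instead of the full real-time stream.
summary = {
    "count": len(readings),
    "mean": round(statistics.mean(readings), 2),
    "max": max(readings),
    "alerts": [r for r in readings if r > 30.0],  # threshold is illustrative
}
print(summary)
```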
Hyperconverged Infrastructure (HCI)
Hyperconverged infrastructure (HCI) virtualizes compute, storage, and networking resources so that applications can describe the resources they need in software, and those resources are allocated as the application starts up. The goal is to create a system in which workloads can run anywhere, and orchestration of what runs where is purely a matter of allocating virtual resources.
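A toy version of that idea, assuming a single shared pool with made-up capacities: an application declares the virtual resources it needs, and an allocator grants or refuses the request against the pool.

```python
# A minimal sketch of the HCI idea: an application declares the virtual
# resources it needs, and an allocator carves them from a shared pool.
# The pool sizes and request values are hypothetical.
pool = {"vcpus": 64, "memory_gb": 512, "storage_tb": 20}

def allocate(pool, request):
    """Grant the request if the pool can cover every resource; otherwise refuse."""
    if all(pool[k] >= v for k, v in request.items()):
        for k, v in request.items():
            pool[k] -= v
        return True
    return False

app_request = {"vcpus": 8, "memory_gb": 32, "storage_tb": 2}
print(allocate(pool, app_request), pool)
```

Real HCI platforms layer scheduling, redundancy, and data services on top, but the declarative request-then-allocate flow is the core of the model.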
AI and Automation in Data Centers
Data centers have long benefitted from automation. Creating standard configurations for servers, storage systems, and networking lets orchestration systems automatically allocate resources based on demand. Without this level of automation, data centers become incredibly difficult to manage. The step beyond automation involves working toward systems that can autonomously find and fix faults, optimize operations, and detect anomalies that could mean anything from a server about to fail to attackers attempting to infiltrate systems.
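Anomaly detection of the kind described can start as simply as flagging samples that sit far from the mean. A toy sketch using a two-standard-deviation threshold, with hypothetical temperature readings; production systems use far more sophisticated models:

```python
import statistics

# A minimal anomaly-detection sketch: flag metric samples more than two
# standard deviations from the mean. The readings are hypothetical.
cpu_temps = [61, 63, 62, 60, 64, 62, 61, 63, 95, 62]

mean = statistics.mean(cpu_temps)
stdev = statistics.stdev(cpu_temps)

anomalies = [t for t in cpu_temps if abs(t - mean) > 2 * stdev]
print(anomalies)
```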
AI and how it’s used in the data center is a fast-evolving area that will drive significant innovation.
Oracle Cloud Infrastructure (OCI) is a powerful, flexible, and cost-effective cloud platform. Supporting models including public cloud, private cloud, multicloud, and distributed cloud configurations, with on-premises Cloud@Customer options as well as a sovereign and dedicated private cloud, OCI represents a scalable choice that can augment and improve nearly any data center strategy.
With the ability to support environments such as legacy VMware estates and complex, demanding AI initiatives, OCI gives customers the flexibility they need to support their technology goals. Start with a free trial.
Companies that want to keep some workloads local and supplement with a “best of cloud” strategy are embracing multicloud. And the timing has never been better.
How much does it cost to build a data center?
Costs to build data centers vary widely depending on the location, whether the facility is brand new or a remodel, and the overall specifications such as number of racks, power and cooling requirements, and physical security requirements. If capital is limited, an alternative to building your own data center is to rent space in a colocation facility. This is often an economical way to get the benefits of a high-performance, modern data center without spending the time and money to build it yourself.
How do you optimize a data center?
Optimizing a data center involves several factors. IT teams should practice a regular phase-in/phase-out cadence to replace outdated hardware with new gear. On a practical level, cutting-edge hardware and techniques for cooling and consolidation can reduce facility needs and hardware footprint. Resource usage of all types can be tied into automation, which can help improve utilization and meet demand.
What are the three main components of a data center infrastructure?
The three principal components of a data center infrastructure are compute hardware, data storage, and the network itself. Compute hardware handles processing and communication of data. Data storage manages the storage of data files and applications within the network. The network provides the means of transmitting data between end users, the servers, and all the other components connected to the infrastructure.
What is colocation versus hyperscale?
Hyperscale data centers are large-scale facilities often located in cost-effective geographies, often on the outskirts of regional hubs. Hyperscale facilities are generally owned by the company they serve, in many cases a major cloud provider. Colocation, on the other hand, works on a much smaller scale: an organization leases data center space in a larger facility to lower costs by sharing expenses such as security and facilities.