by Bob Rhubart
A conversation about the similarities and differences between public, private, and hybrid clouds; the connection between cows, condos, and cloud computing; and what architects need to know in order to take advantage of cloud computing.
This interview is based on a transcript of an OTN Architect podcast panel discussion recorded on May 18, 2012.
Vice President of Oracle’s Global Enterprise Architecture Program and a frequent speaker at OTN Architect Days and other events.
Lead architect for Oracle Cloud, responsible for designing the infrastructure for Oracle's public Software as a Service, and Platform as a Service offerings.
Vice President of Oracle’s On Demand Platform.
Architect for Oracle’s Middleware/Applications Management and Cloud Computing.
What are we talking about when we talk about public, private, and hybrid clouds? What do those terms actually mean?
I think it's a bit obvious and then a bit complicated.
Public cloud is something that is potentially off premises, managed by somebody else, owned by somebody else, and offered as a service.
Private cloud is for those seeking the same dynamic flexibility, provisioning, and so on, but for one reason or another don't want to put that into a public service. A private cloud could be purely managed by an enterprise or the enterprise could have some support for managing it as a dedicated private cloud.
Hybrid is about mixing the two, and this is where I think some of the complexity comes. There are many different types or models of hybrid cloud. One involves functional distribution, putting different things in different places, perhaps having development in a private cloud and having production in a public cloud.
Another example that people mention often as kind of the nirvana of hybrid clouds is a workload distribution, having the same app running in two places. Synchronizing this can be very complicated if it's a complex stateful application.
A third example is a kind of functional distribution where you have different pieces of an ongoing workflow or application in public versus private. So a classic Internet application would have the catalog engine in a private location in your own data center, while the payment engine is somewhere else. And this is increasingly applied to enterprise applications where you might be running ERP systems in a private cloud and get CRM or other types of applications from a public cloud.
It's probably worth talking a little bit, just generally, about what we mean when we say "cloud."
One of the important attributes of something being "cloudy" is that it's a set of resources that you get in a self-service manner. It's self-service and you get a set of uniform resources in a uniform way. This is part of the DevOps or NoOps movement. The resources vary with the type of cloud. From an IaaS to a PaaS to a SaaS type of provider, you're getting different resources. But the important thing is that your end users can ask for resources as they need them, get them on their own, pay for what they use, and return them. "Paid" may mean actual money in a public cloud or funny money between departments.
But the basic notion is that you're able to do on-demand in a self-service fashion what previously would have taken interaction with a human being in an IT department to do—to stand up some custom infrastructure.
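The self-service model described above can be sketched in a few lines of code. This is a toy illustration, not any vendor's actual API: users acquire uniform resources from a shared pool on their own, with no human in the loop, and are metered for exactly what they use. All class and method names here are hypothetical.

```python
import time

class SelfServicePool:
    """Toy model of self-service, pay-per-use resource provisioning."""

    def __init__(self, capacity, rate_per_second=0.01):
        self.capacity = capacity
        self.in_use = {}      # resource id -> (user, checkout time)
        self.rate = rate_per_second
        self.next_id = 0
        self.charges = {}     # user -> accumulated cost ("real or funny money")

    def acquire(self, user):
        """Self-service: no ticket, no IT department, no waiting."""
        if len(self.in_use) >= self.capacity:
            raise RuntimeError("pool exhausted")
        rid = self.next_id
        self.next_id += 1
        self.in_use[rid] = (user, time.monotonic())
        return rid

    def release(self, rid):
        """Return the resource; pay only for the time it was held."""
        user, started = self.in_use.pop(rid)
        cost = (time.monotonic() - started) * self.rate
        self.charges[user] = self.charges.get(user, 0.0) + cost
        return cost
```

The point of the sketch is the shape of the interaction: acquire, use, release, and metering happen entirely through the platform rather than through a request to a person.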
One of the key differentiators I see in private versus public is the notion of control and security. Customers looking to be less conformant, to have a little more control, a little more security, are more inclined toward private cloud offerings. Customers more focused on cost of ownership or flexibility or scalability go more toward public cloud offerings.
I think this is definitely one of the strengths that Oracle is trying to play to in our offerings: providing that whole range of offerings, bringing our software stack to bear, and making it available in a variety of options that are tailored to the customers' needs. So those that have high degrees of security or customization needs can run in a private cloud in their own data centers. Those that don't necessarily have those needs have Oracle Cloud. In between are those who can do some operations in one and some in the other.
You may be in a situation where your important intellectual property is in your data and not in your code, in which case you may be perfectly willing to do test and dev in a public environment with artificial data sets, where security around those data sets is not a primary concern, but go to production in a private cloud in your own data center, so you feel you have total control over the entire lifecycle of your data.
It's interesting to combine these two points that were just made.
A lot of people look at the existing developer clouds that are out there and think about on-demand self-service as being about dropping down your credit card and getting twenty nodes in five minutes. And yet because of the kinds of issues that Ajay mentioned, security issues, governance issues, do I really want my developers downloading whatever they wish and potentially running questionable software? No! I want to put in some sort of governance model as an enterprise. So enterprise cloud computing self-service might be more like a few hours than five minutes, but it's much better than the old way of days or weeks.
There is a continuum there. It starts from a management perspective. Even if you have distinct workloads and deployments between the two for the time being, at least start by having a unified, managed approach, a technical management perspective on performance, availability, all those classic application management concerns, as well as the process, compliance, and governance aspects that Jim talked about. I'm going at that from an enterprise manager and an IT management perspective, but I think that's the place where the hybrid challenge can be first tackled.
First, give me visibility across the various deployments, either public or private. And then to move into really having more fluidity of the application pillars between the two. There's some low-hanging fruit there, and then there are some more long-term ideas that will probably require changes in the way applications are architected in order to really fully take advantage. At this point we're not just talking hybrid in the sense of public versus private, but it could be any kind of widely distributed application deployment for maximum availability.
But whether you're talking about two different public providers, or one public and one private, or a single public provider, we're talking about being able to fully utilize the global scale and all the global geography of your provider. And we've seen that going on. Some people used it really well and were able to overcome some of the outages that Amazon had a while ago, and others didn't use it so efficiently.
So in a sense that's not hybrid by the usual definition, because you're always with one public cloud provider, but the technical challenge of spreading across a very distributed cloud infrastructure remains the same.
Many people assume, "Oh well, I'll do simple virtualization everywhere and then I'll have a cloud." And one of the potential challenges here is increasing complexity, where I had one node before I now have ten virtual nodes. So if you do everything the same as you used to do, you potentially have increasing complexity over time, and that fights against the reason you wanted a cloud in the first place.
As William mentioned, you have to have some sort of principles, increasing levels of standardization so that you're running the same stuff the same way in all these different places, potentially raising the levels of integration and abstraction, and thinking about moving up to managing services across this platform rather than just the piece-parts at the bottom of the stack.
Yes, totally. And that model has tradeoffs. There are a couple of quotes and ideas that I picked up from reading the research out there that I really like.
One compares traditional machine management to machine management in the cloud. In traditional machine management, your machine is very precious to you; it's like a pet. You give it a name, and when it gets sick you nurse it back to health. Your machine in the cloud is like cattle. You give it a number and when it's sick you replace it with another one. It's that mentality of uniformity, that you can get resources quickly, spin them up, shut them down, and do things that you would never do when your systems are these precious entities that needed to be nursed along and kept intact.
Things like just A/B testing, right? It's amazing what you see being done out there now on cloud because you can easily stand up these complex systems in a matter of minutes and tear them back down. Before that you would have a very elaborate test plan and a very elaborate rollover strategy to do your testing because you had one shot at that upgrade. When it's so easy to stand up another system, you just stand it up. You test it. If it doesn't work then you tear it back down. And it's easy to run them in parallel, which is great.
There is another analogy that I like very much, one that gets to the heart of this model and why there is a fundamental shift that's not necessarily easy, nor necessarily for everyone. It comes from Pat Helland, who came from Microsoft and has been around the distributed systems community for a long time. He makes the analogy between clouds being condos and traditional data centers being like living in residential housing.
You get great efficiencies and great services by moving to a condo. You can live in high-rise buildings with great views. You get someone else to take out the trash. You can get really good services, but you have to give things up as well. You can't raise chickens in your backyard anymore, and you don't get to paint your house pink. So you get those services that are so much better, and you get on-demand elasticity and the scalability. But one of the things that you have to give up is total control. You have to move to uniformity and standardization, but along with those restrictions you get a great degree of goodness in the available services.
I wonder if it's too far along in the adoption cycle to start calling it cow computing?
This could become a frightening conversation if you take that too far.
Well so far, in describing all of this you gentlemen have used automotive analogies, and there's been a reference to pink houses and cows and kittens...
Are you challenging us to find another one?
Let's not take the conversation in that direction.
Another shift in the mindset as you move to clouds is to stop thinking about the individual things and start thinking about the groups of things. Stop thinking about the cows, but think about different herds of cows. If you're a big rancher you've got multiple locations.
As you move into cloud computing you stop looking at individual machines, you don't give that machine a name anymore. But you start thinking about racks or pods or modules, or sets of racks that perform certain functions. Pools of resources that you manage as a dynamic resource. And in applications, you start to do the same thing. Instead of thinking about 20 different pieces that each have to be installed individually, you start thinking about design patterns, architectural patterns, and trying to group together and install whole applications as one process rather than 20, 30, 50 different steps.
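The shift from installing twenty pieces to installing an architecture can be sketched as a declarative template plus a single deploy step. The template format and the component names below are purely illustrative, assumed for the sake of the example:

```python
# One declarative description of the whole application, in place of
# twenty individually installed pieces.
APP_TEMPLATE = {
    "db":    {"depends_on": []},
    "cache": {"depends_on": []},
    "app":   {"depends_on": ["db", "cache"]},
    "lb":    {"depends_on": ["app"]},
}

def deploy(template):
    """Provision every component as one operation, honoring dependencies."""
    done, order = set(), []

    def visit(name):
        if name in done:
            return
        for dep in template[name]["depends_on"]:
            visit(dep)          # provision dependencies first
        done.add(name)
        order.append(name)

    for name in template:
        visit(name)
    return order  # dependencies always precede dependents in this order

# deploy(APP_TEMPLATE) stands up the whole stack in one call
```

The design point is the same one made above: the unit of thought is the pattern, not the piece. Changing the application means editing the template, not re-running fifty manual steps.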
I think that, increasingly, moving up to thinking about groups and pods and pools and patterns is one of the key mind shifts of getting to cloud.
We're seeing, too, that there is more balance between the interests and the ease of use for various constituencies. When people are hosting their own software, we'll build them the software platform that gives them what they ask for and all the features they want. We try to fulfill their requests. But when that software platform serves two very different constituencies, the users and those who will run it, whether in a public or private setting, then as you build and define that platform and those architectural patterns, you really have to see how they influence both sides.
You can't just be focused on, "Oh, yeah, that would be cool for development productivity." Because you really have to look at the flip side of the equation, what does that mean for the person who's trying to host this in a way that's fully automated and very resource-efficient and that requires absolutely no manual management. And some of the tradeoffs that you come up with in defining the application interface, defining the application patterns, are different because they are also strongly guided by the hosting and the operation aspect and making those as efficient as possible.
You touch on something there when you mention the roles of user and cloud operator and builder, and so on. Clouds potentially create a need, a demand, for refactoring not just the hardware and the software but also the roles and the identities of who works with what. You interject an intermediation. It's no longer just the people that roll the physical systems in, the system operator, and the end user. Now you have a cloud operator, a cloud developer, some new sets of roles that have to be defined, and in some ways their definition is a partitioning of other roles, an intermediation. This is a modification of the classic models of provider and consumer.
Now you have a chain of providers and consumers. I have a provider of the cloud infrastructure; the consumer is my platform developers. The platform developers become a provider of my applications. The applications are providers of services to end-users who are the final consumer. But you have to think about these roles and who does what in the cloud.
So, based on your description, it sounds like even a terribly shop-worn term like "paradigm shift" is thoroughly appropriate in describing this. We're talking about significant changes not just on the technological level, but in how enterprise IT is organized.
It's a paradigm shift, potentially at least in scale. We've been doing all of these things before. Go back 20 years: abstraction, design patterns, virtualization. This has been the root functionality of IT for the last two or three decades. But it's pulling it all together and really thinking about upping your game to the next level. What you did manually that took days now takes minutes and is done by scripts. What you did to one machine over a few hours you now do to a hundred machines over a few minutes.
So the paradigm shift is there. It really requires you to think about it and do things differently. But it's a lot of the same stuff you've always been doing.
I agree. While the earlier notion of cloud started with the concept of virtualization, and some concepts of consolidation, what is now happening in terms of shift in focus, both from an IT perspective and the consumers of services provided by IT, is that we're asking, "What can the cloud do for my business?" The question earlier on was, "What IT services can cloud provide for me?" Most of the discussions are about shifting capital expense to operational expense, and that was two, three, four, five years ago.
With the advent of most of these public clouds, which are very targeted, which are very application-focused—they're very specialized, they're very standardized—the consumers have different expectations from the cloud: "How am I going to do my business? How am I going to interact? What are the integration services between private cloud and public cloud? How do you do analytics better? How do you do development better?" So that's the shift in focus. I think that is the point that was made earlier, as well: now you're not only talking about sysadmins and application admins and database admins. You're also talking about administrators who can provide oversight and management services, not just for the infrastructure, but also for the business.
Yes, one of the first mentions of "cloud computing" was in an Eric Schmidt interview in Wired about six years ago, and it was about what Google had built. But they didn't build it that way just to run traditional apps. They built it that way to run the massive indexing engine that they had to have to do their business, what they're known for. So what Ajay just mentioned is very important, that cloud enables you to do things in terms of analytics, business analysis, scale of analyzing consumer behavior or modeling, and so on, that you couldn't do before based on the scale of resources that are available.
Throughout this conversation the four of you have described some fairly sweeping changes that the adoption of cloud is going to put the enterprise through. As adoption progresses, how does that continue to change the picture? What will the enterprise look like in five or ten years?
Everything will be hybrid cloud and the IT department potentially becomes the broker of multiple public and private clouds and not a sole developer.
I'm seeing the emergence of a new architecture. In addition to focusing on consolidation and optimization of resources at the IT level, the new architecture will support hybrid models and is more about collaboration across multiple models of cloud. It's about how you stay mobile in the cloud, for a lot of mobile applications and a lot of social applications, and finally about the cloud becoming the hub for analytics as well. So I'm beginning to see those four changes happening: collaboration, mobility, social, and analytics.
The true answer is, I don't know. In five years a lot of the things we are familiar with now we'll see in a somewhat modified way. More mobility, even more virtualization, including virtualization hosted by public cloud providers. I think there will still be a lot of the applications that we're seeing now, either because they'll be unchanged, or because they will be developed to the same set of use cases that are currently addressed. But if you look at the entire landscape everything will look entirely different. There definitely will be new classes of applications, because they'll be serving mobile and other use cases, but more importantly because they'll be taking advantage of a new higher abstraction model which will give the application users a more productive overall hosting experience.
What I don't know is, how much is going to be PaaS versus SaaS? Clearly we're going to be hiding more and more of the underlying meshing of infrastructure. But are people going to be writing code that looks a lot like the code they're writing today, just against a new framework with a new application architecture? Or are we coming closer to the old dream of not programming anymore, just configuring? Everything is available via what used to be called modules and packages; now those can be PaaS services that you sign up for. You configure them, you integrate them, and to some extent you get your application without having to write any code.
Clearly, that level is a pipe dream. But to what extent is cloud going to move us closer to that? To what extent are we going to move from very generic application hosting platforms—where you can run an application, you can run a web app, you can store data, you can send messages—to a much richer set of things that you can call SaaS or PaaS, very specialized platforms, application services that you can use.
Will the applications look different if they are written against those sets of very specialized services? I think in five years that's a big change. How far along in that direction will we be in five years?
Combining the two points that William and Ajay just made: as you move into this shift, you are freed from having to develop, maintain, and manage commodity services and applications. You move them to some sort of cloud, increasingly consuming them as software as a service, and you focus any in-house development effort on competitively differentiating applications, business analytics, mobile workforce, and so on. So you end up splitting these things into the stuff that becomes more abstracted and commoditized, which frees you to focus on differentiating business functionality, where you might still be doing development.
I like that. That helps in thinking about that split between your core competency and your differentiation versus, more and more, the idea that there's really no point in writing your own code. I think that that's a good way to structure that.
What advice or recommendations can you give to architects to help to ensure that they can ride this wave, that they have a seat at the cloud table as this progresses?
As Jim said, they need to recognize whether they're in a situation of really having to create something new, create something special, create something they're going to have to optimize and that they should own. They need to have the discipline to realize that, while you may be a good architect who could build something great, it's not worth it. The value proposition is not there and you should really be striving to reuse what's out there already, and even try to map your need to the availability because your available resources are better focused in other areas.
I'm not sure how much of this is the architect's responsibility versus others in the organization. But I think being able to make that determination and choose the right architecture and the right set of tools is going to be one of the requirements for success in that environment.
One of the shifts we've had over time is that we've moved from batch processing to continuous processing. We're moving now from provisioning up front to continuous dynamic provisioning. So architecture becomes a continuous process.
In the old way you would do architecture up front because, as Mark talked about earlier, you have an intimate relationship with particular machines that you give names to, and that's where the app runs. If it breaks down, you have to go fix that machine. All of that architecture was done in that early phase of the overall project in the lifecycle.
And yet now I think that architecture kind of gets pushed out to different locations. You have a lot of architecture that's done up front to create the infrastructure and the platform, to build the cloud and the platform resources, or set up the software as a service. But then you have a late binding of run time architecture. It's the opportunity to dynamically re-provision resources, to bring together application components that have been previously built in the new business workflow. Again, it goes back to James Martin's end-user computing. You have kind of end-user architects out there that are potentially creating run time workflow.
And one other thing is that some of the architecture gets done even before you make these buying decisions. If you look at what we've done with our engineered systems, Exalogic and Exadata take a lot of the architecture that you used to build in the data center; that's built at the vendor and you buy it as a package. So architecture now gets done everywhere, and it's continuous rather than one discrete exercise.
One of the changes, be it because of cloud or because of the scale and complexity that I'm seeing in the architecture, is the use of clustering technology at various levels in the stack. From the bottom up, from the network up to the application, it's now becoming a fundamental requirement to provide clustering and some sort of highly available options. Cloud requires you to provide provisioning on demand and capacity on demand. When you translate that into an architecture design principle, it becomes the need for clustering technology at all levels, so we can provide either the illusion of, or actual, capacity on demand. That's one.
Secondly, what I'm noticing in the shift in architecture is the need to provide self-service options. It was okay earlier on to take 21 days to provision new hardware, assuming the provisioning software and so on were in place. Now end users want self-service options. That's not just a dashboard anymore. It is a design principle, because you need to change the architecture to separate what is a user-defined view, what is an internal view, and what is a business view when you start providing some of these features.
Those are the two things I'm beginning to see. I think there are many more, but those two are the big ones.
The key word here, or the key acronym, is API. That's a very obvious and yet not fully understood field, the focus on bringing together and managing a set of APIs and being able to bring consistency across them to really have a well-managed, well-monitored, and well-orchestrated application process that can progress. We're getting all the building blocks, but I think it's still very, very immature compared to where we'll be in a few years.
As an architect your job is to know what it is that you're actually building that's adding unique value, and finding the right pieces to build it upon. The challenge for those of us providing clouds is finding the right toolkit to arm those architects with, right? Because building on the condo versus residential housing idea, you have to give up something but you're gaining these great services. But those services are only great if they're the services that people want and can use. If you force them to have restrictions but you don't give them good services, then you've made them move into the slums.
So as cloud builders, our challenge is to build higher and higher level services that let people do things more easily by making it uniform, easy to use, and really pluggable for a wide range of users, to get the right tool kits out there at higher and higher levels of abstraction.
It's very clear why the IaaS notion has been the easiest to adopt, because it's the closest to what people are used to. The PaaS abstraction offers much more promise, but it hasn't quite taken off, at least not uniformly yet. Because of that higher level of abstraction, you have to give up more to get onto that level. And so it's difficult to find that sweet spot of, "Yes, this gives me so much more that I'm willing to give up the things that I used to have." I don't think we've found that magic sweet spot in the PaaS layer yet.
There are lots of interesting things that have hit interesting niches, but no one's hit that 80% use case. It'll be interesting to see if we can hit the 80% use case or if it really means that there'll always be a variety of PaaS ideas and platforms out there.
A couple of design principles that are brought into the conversation by these points. As Mark just mentioned, what's the right level of chunking of stuff? As you move up abstraction, you have to build groups of pieces together. So you begin to think, as an architect, in terms of repeatable design patterns and templates. You don't think of the ten pieces, you think of the chunk of pieces. So, increasingly, the software management tools and the way you think about design will have you thinking about installing an architecture rather than installing ten different pieces.
I also think that you have a kind of shift in the way in which you encapsulate things, finding the right size of the architectural components, and translating architectural patterns to this new physical universe. Ajay mentioned clusters; you have to create in the cloud a way of building clusters. In the past, self-provisioning that kind of feature involved dedicated machines talking to each other. You can build a cluster by load balancing two nodes in an environment, a lightweight cluster. But if you do that in a dynamic universe, you don't know whether the two virtual machines you just provisioned are on the same machine or on different machines.
So in your cloud you have to create concepts of usage pools, or domains, or zones that enable you to build in a virtual universe the things that you previously built in a physical universe.
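The placement concern just described can be made concrete with a small sketch. In a dynamic cloud you cannot assume two provisioned VMs land on different physical hosts, so the platform has to expose something like zones or anti-affinity placement. The function and names below are illustrative assumptions, not any real scheduler's API:

```python
def place_cluster(members, hosts):
    """Assign each cluster member to a distinct physical host (anti-affinity).

    Raises ValueError if the cluster has more members than there are hosts,
    because the availability guarantee could not be met.
    """
    if len(members) > len(hosts):
        raise ValueError("not enough hosts for anti-affinity placement")
    # Spread members across hosts one-to-one so that the loss of a single
    # physical machine can take down at most one cluster node.
    return dict(zip(members, hosts))

# Two cluster nodes, three candidate hosts: each node lands on its own host.
placement = place_cluster(["node-a", "node-b"], ["host1", "host2", "host3"])
```

A real scheduler would also weigh load, racks, and failure domains, but the principle is the one stated above: the virtual universe needs explicit constructs (pools, domains, zones) to recreate guarantees the physical universe gave you for free.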
The views expressed in this interview are those of the participants and do not necessarily reflect the views of Oracle.