
Database Consolidation Part 2 – Shared Infrastructure Design Choices

Part one covered the business drivers and technical challenges faced when building a database consolidation platform. Database consolidation is all about sharing infrastructure, so part two is about the design choices that are available…

An important architectural decision when consolidating databases is where the shared infrastructure should diverge. If we assume that your customers are applications which require a database service, at what point should each application be segregated from the others? Obviously you want to use the same underlying hardware, but what about the OS? What about the storage: do you want to segregate the data into different volumes on different LUNs? Or maybe you want to share right at the top and just have different application schemas in one big container database?

Let’s have a look at the three main choices available:

  • Multi-Tenancy databases
  • Shared Platform databases
  • Virtualisation

A multi-tenancy database is a database which contains many different applications, each with their own schema. In many ways this model makes a lot of sense, since it allows for the highest level of resource sharing and an almost-zero deployment time for new schemas. And after all, Oracle is designed to have multiple users and schemas: the Database Resource Manager allows a level of QoS (quality of service) to be maintained, whilst features such as Virtual Private Database can be used to enhance security. Oracle also allows services to be defined which can then be controlled and relocated on a clustered database. So why not opt for this method? In fact, some customers do – although the vast majority don’t. The reasons for avoiding it were covered in part one, under the heading “Technical Challenges”. A single big database is a big single point of failure: you don’t want to hit an ORA-600 and see the whole thing come crashing down if it’s the container for your entire application estate! If someone accidentally truncates a table and wants the whole database rolled back so they can retrieve their data, how do you resolve that without affecting every other application? Maintenance becomes a nightmare – can you really keep all of your applications on the exact same release and patchset of Oracle? And what about testing? If one of your applications requires a patch for the optimizer, how do you go about testing every other application to ensure it is not affected? Then there is security: it only takes one mistakenly-granted privilege and everything is exposed… do you really trust this model?
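As a minimal sketch of how that QoS might be enforced (all names here are hypothetical, not from anything above), each application schema could be mapped to its own Database Resource Manager consumer group with a CPU allocation:

    -- Hypothetical sketch: one consumer group per application schema,
    -- so the Database Resource Manager can enforce a basic QoS policy
    BEGIN
      DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('APP1_GROUP', 'Application 1');
      DBMS_RESOURCE_MANAGER.CREATE_PLAN('CONSOL_PLAN', 'Consolidation plan');
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan             => 'CONSOL_PLAN',
        group_or_subplan => 'APP1_GROUP',
        comment          => 'Give application 1 up to 40% CPU',
        mgmt_p1          => 40);
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan             => 'CONSOL_PLAN',
        group_or_subplan => 'OTHER_GROUPS',  -- mandatory catch-all group
        comment          => 'Everything else',
        mgmt_p2          => 100);
      DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
    END;
    /

The relocatable services mentioned above would then be added per application with srvctl, for example srvctl add service -d CONSDB -s APP1_SVC -r CONSDB1 -a CONSDB2 (again, hypothetical names).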

A shared platform database model provides segregation at the database level, so that a cluster of hardware (for example a six-node cluster running Oracle Grid Infrastructure) runs multiple different databases. This allows a wide variety of database versions and patchsets to be run on the same platform, which is far more practical and makes the security issues far easier to cope with. Of course, it’s not without its challenges either. Firstly, there are still components that cannot be upgraded without affecting large groups of customers (or all of them): the operating systems, the Grid Infrastructure software, the firmware for various components, etc. Then there are the additional resource requirements for running multiple databases: extra RAM to cope with all of the SGAs and PGAs, extra CPU capacity to cope with the additional processes from each instance, and extra storage for all of those temporary and undo tablespaces, online and archive redo logs, and SYSTEM and SYSAUX tablespaces. Maintenance requirements also increase, because although you can upgrade or patch each database independently, you now have many more databases to upgrade or patch. This means administrative time increases dramatically – although you can combat this with enterprise management tools such as Oracle Enterprise Manager.
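To get a feel for that memory overhead, a rough sketch (run in each instance on the node, then totalled across the estate) is to sum the SGA components and add the peak PGA allocation:

    -- Hypothetical sketch: approximate memory footprint of one instance;
    -- every consolidated database adds its own figure on top
    SELECT (SELECT SUM(value) FROM v$sga)          AS sga_bytes,
           (SELECT value
              FROM v$pgastat
             WHERE name = 'maximum PGA allocated') AS peak_pga_bytes
      FROM dual;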

An environment which uses virtualisation is perhaps the strongest design model. Virtualisation products have matured significantly in recent years, to the point that they are now being used not just in non-database production environments but for databases as well. Traditionally this has been a difficult subject for DBAs due to Oracle’s support policy for databases running on VMware. This has softened considerably in recent years, but Oracle still reserves the right to withdraw support for an issue unless it “can be demonstrated to not be as a result of running on VMware”. Of course, Oracle has its own virtualisation product, Oracle VM (which I have to say I actually really like), where support is not an issue, but I suspect it has a far smaller share of the market than VMware (although you wouldn’t know it from the aggressive marketing…). The great thing about virtualisation is that you have inherent security based on the segregation of each virtual machine. Maintenance becomes a lot easier because even OS upgrades can take place without affecting other users, whilst VMs can be migrated from one physical stack to another in order to perform non-disruptive hardware maintenance. Deployment and provisioning become easier because virtualisation products like VMware and OVM are designed with these requirements in mind; the use of templates and the cloning of existing images are both great options. Similarly, expansion both at the VM level and across the whole platform is a lot easier. On the other hand, licensing (particularly of Oracle products) isn’t always clear (but then when is it?). The main challenge, though, is capacity, because now you not only have to consider all of those database SGAs and PGAs but also the operating systems and their various requirements, from root filesystems to swap files. I will come back to this in part three.

Finally… there is a fourth model, which I haven’t covered here because it almost certainly won’t apply to the majority of people reading this. The fourth model is schema-level multi-tenancy, as used by the likes of Software-as-a-Service companies, whereby a single application is shared by multiple customers, each of which sees only their slice of the data. This is really an application-based consolidation solution, where each user or set of users only has visibility of their own data despite it being stored in the same tables as that of other users. The application uses unique keys and referential integrity to look up only the correct data for each user, with the security ramification that your data is only as secure as the developer code written to extract it for you. I once worked on one of these systems and discovered a SQL injection issue that allowed me to view not only my data but that of anyone whose userID I could guess. Of course there are features such as Oracle’s Virtual Private Database that can be used to provide additional levels of protection.
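As a minimal sketch of that extra protection (schema, table and column names here are hypothetical), a VPD policy function appends a predicate to every statement against the shared table, so a customer’s rows stay hidden even from injected SQL:

    -- Hypothetical sketch: restrict each session to its own customer's rows,
    -- assuming the application sets CLIENT_IDENTIFIER when it connects
    CREATE OR REPLACE FUNCTION customer_filter (
      p_schema IN VARCHAR2,
      p_object IN VARCHAR2) RETURN VARCHAR2 IS
    BEGIN
      RETURN 'customer_id = SYS_CONTEXT(''USERENV'', ''CLIENT_IDENTIFIER'')';
    END;
    /

    BEGIN
      DBMS_RLS.ADD_POLICY(
        object_schema   => 'SAAS_APP',
        object_name     => 'ORDERS',
        policy_name     => 'ORDERS_BY_CUSTOMER',
        function_schema => 'SAAS_APP',
        policy_function => 'CUSTOMER_FILTER',
        statement_types => 'SELECT,INSERT,UPDATE,DELETE');
    END;
    /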

The reason I mention this fourth model is that Larry Ellison attacked Salesforce.com for using a variant of this model and said that multi-tenancy “was the state-of-the-art 15 years ago”, whilst talking up the Oracle Public Cloud for using virtualisation as a security model. According to Larry, multi-tenancy “puts your data at risk by commingling it with others”. Now, I don’t know Salesforce’s database design so I don’t know how well it fits into my description above (I have some friends who work for Salesforce though so I do know that they employ great developers!)… but what I do know is Exadata. And Exadata, along with the Super Cluster, is the platform for Oracle’s “Private Cloud” offering (details of which you can read about here). Exadata, however, has no virtualisation option. You cannot run OVM on Exadata, so if you read Oracle’s Exadata Database Consolidation white paper, it’s all about building the shared platform model I talked about above. To me, that doesn’t really fit in with Larry’s words on the subject.

Scale works in both directions…

One final thought for this section. If you build a DaaS environment and get all of your automated provisioning right, you will make it very easy for your users to build new applications and services. That’s a good thing, right? But don’t forget to spend some time thinking about how you are going to ensure that this thing doesn’t grow out of control. Ideally you need some sort of cross-charging process in place (I could probably write a whole article on this at some point, it’s such a big topic), but most of all you need a process for decommissioning and tearing down applications and databases that have exceeded their shelf life. If you don’t have that, you will find that all of your infrastructure cost savings are very short-lived…!
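As a hypothetical starting point for that decommissioning process (and assuming session auditing is enabled, so logons are recorded in the audit trail), you could flag schemas that nobody has logged into for ninety days:

    -- Hypothetical sketch: schemas with no recorded logon in 90 days
    -- are candidates for review and decommissioning
    SELECT   username, MAX(timestamp) AS last_logon
    FROM     dba_audit_trail
    WHERE    action_name = 'LOGON'
    GROUP BY username
    HAVING   MAX(timestamp) < SYSDATE - 90
    ORDER BY last_logon;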

That’s it for part two. In part three I will be discussing the capacity requirements of a consolidation platform. And you won’t be surprised to hear that flash is going to make an appearance soon, because flash memory is the perfect fit for a consolidation environment. Don’t believe me? Wait and see…


Filed under: Blog, Database, Database Consolidation, Database Virtualisation, Flash, Storage Tagged: database, database-consolidation, oracle
