VIRTUALIZATION

ABSTRACT

Virtualization as a concept is not new; computational environment virtualization has been around since the first mainframe systems. But recently, the term “virtualization” has become ubiquitous, representing any type of abstraction in which a process is somehow decoupled from its physical operating environment.
Because of this ambiguity, virtualization can be applied to almost any part of an IT infrastructure. For example, mobile device emulators are a form of virtualization because the hardware platform normally required to run the mobile operating system has been emulated, removing the operating system’s binding to the hardware it was written for. But this is just one example of one type of virtualization; there are many definitions of the term “virtualization” floating around in the current lexicon, and all (or at least most) of them are correct, which can be quite confusing. This paper focuses on virtualization as it pertains to the data center; but before considering any type of data center virtualization, it’s important to define what technology or category of service you’re trying to virtualize. Generally speaking, virtualization falls into three categories: Operating System, Storage, and Applications. But these categories are very broad and don’t adequately delineate the key aspects of data center virtualization.
INTRODUCTION

WHAT IS VIRTUALIZATION?

Virtualization is one of the more significant technologies to impact computing in the last few years. With roots extending back several decades, today its resurgence in popularity has many industry analysts predicting that its use will grow expansively in companies over the next several years. Promising benefits such as consolidation of infrastructure, lower costs, greater security, ease of management, better employee productivity, and more, it’s easy to see why virtualization is poised to change the landscape of computing.
But what exactly is virtualization? The term is used abundantly, and often confusingly, throughout the computing industry. You’ll quickly discover after sifting through the literature that virtualization can take on different shades of meaning depending on the type of solution or strategy being discussed and whether the reference applies to memory, hardware, storage, operating systems, or the like.


HISTORY

Virtualization is a framework or methodology for logically dividing computer resources (hardware, software, time-sharing, and so on) into separate virtual machines that execute instructions independently of the operating system. In layman’s terms, virtualization allows you to run an independent operating system within an existing operating system using the existing hardware resources. So if you want to learn another operating system such as Linux, you can use virtualization to run Linux on top of your existing operating system.
Initially, virtualization was used to test new software, but today it has proven to be much more than a testing environment. The concept behind this technology may seem complicated, but it has been around for several decades. The concept was first developed by IBM in the 1960s to fully utilize mainframe hardware by logically partitioning it into virtual machines. These partitions allowed a mainframe computer to perform multiple tasks and run multiple applications at the same time. Keep in mind that mainframe computers at that time were very expensive, which is why organizations looked for ways to fully utilize their resources.
During the 1980s and 1990s, desktop computing and x86 servers became available, and virtualization technology was eventually set aside. Client-server applications and the emergence of Windows and Linux made server computing significantly less expensive. However, new challenges surfaced, including high maintenance and management costs, high infrastructure costs, and insufficient failure and disaster protection, which led to the invention of virtualization for the x86 platform. Virtualization dramatically improves efficiency and drives down overall IT cost.
VMware invented virtualization for the x86 architecture in the 1990s to address the above-mentioned problems. In 1999, VMware introduced a virtualization solution that transformed x86 systems into a fully isolated, shared hardware infrastructure. VMware’s early work built on a long history of virtual machine research, which helped the concept develop considerably. Today, VMware is the global leader in x86 virtualization, offering desktop, server, and data center solutions.
Virtualization allows you to run multiple operating systems on a single computer, including Windows, Linux, and more. The number of virtual machines that you can run on a single computer depends on the hardware specification. It is very useful when testing or evaluating a new application before finally deploying it on the network; this way, any problems or issues are diagnosed in an isolated environment. Since virtualization software is run by the operating system like a normal application, it allows you to quickly replace a virtual machine, thereby increasing availability throughout the data center. VMware also provides a free player that allows you to run a prebuilt virtual machine on any computer. These are some of the reasons why VMware is the leader when it comes to virtualization.
Due to the potential revenue of virtualization, Microsoft took a giant leap by acquiring the virtualization software company Connectix to share the market with VMware. Microsoft made its Windows-hosted virtualization program, Microsoft Virtual PC 2004, available as a free product in July 2006. It was followed by the release of the Virtual PC 2007 beta in October 2006, with the production release in February 2007. The latest version, renamed Windows Virtual PC, was released in conjunction with the Windows 7 operating system.
Virtual machines are now implemented in most data centers, functioning like normal servers but with significantly lower maintenance and management costs. This technology has huge potential and will play a very important role in the future of computing.
VIRTUALIZATION DEFINED

Virtualization is the process of decoupling the hardware from the operating system on a physical machine. It turns what used to be considered purely hardware into software. Put simply, you can think of virtualization as essentially a computer within a computer, implemented in software. This is true all the way down to the emulation of certain types of devices, such as sound cards, CPUs, memory, and physical storage. An instance of an operating system running in a virtualized environment is known as a virtual machine. Virtualization technologies allow multiple virtual machines, with heterogeneous operating systems, to run side by side and in isolation on the same physical machine. By emulating a complete hardware system, from processor to network card, each virtual machine can share a common set of hardware, unaware that this hardware may also be in use by another virtual machine at the same time. The operating system running in the virtual machine sees a consistent, normalized set of hardware regardless of the actual physical hardware components.
Technologies such as Intel Virtualization Technology (Intel VT), which comes up again later in this paper, significantly improve and enhance virtualization from the perspective of the vendors that produce these solutions. With a working definition of virtualization on the table, here’s a quick mention of some of the other types of virtualization technology available today. For example, computer memory virtualization is software that allows a program to address a much larger amount of memory than is actually available. To accomplish this, the system generally swaps units of address space back and forth as needed between a storage device and virtual memory. In computer storage management, virtualization is the pooling of physical storage from multiple network storage devices into what appears to be a single storage device that is managed from a central console.
TERMINOLOGY

Individual vendors often choose terminology that suits their marketing needs to describe their products. As with the nuances of the virtualization technologies themselves, it’s easy to get confused by the different terms used to describe features or components. Hopefully, as virtualization technology continues to evolve and more players enter the marketplace, a common set of terminology will emerge. But for now, here is a list of terms and corresponding definitions.

Host Machine
A host machine is the physical machine running the virtualization software. It contains the physical resources, such as memory, hard disk space, and CPU, and other resources, such as network access, that the virtual machines utilize.

Virtual Machine
The virtual machine is the virtualized representation of a physical machine that is run and maintained by the virtualization software. Each virtual machine, implemented as a single file or a small collection of files in a single folder on the host system, behaves as if it is running on an individual, physical, non-virtualized PC.

Virtualization Software
Virtualization software is a generic term denoting software that allows a user to run virtual machines on a host machine.

Virtual Disk
The term refers to the virtual machine’s physical representation on the disk of the host machine. A virtual disk comprises either a single file or a collection of related files. It appears to the virtual machine as a physical hard disk. One of the benefits of virtual machine architecture is portability: you can move virtual disk files from one physical machine to another with limited impact on the files. Later sections illustrate various ways in which this can be a significant benefit across a wide variety of areas.
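To make the “a virtual disk is just a file on the host” idea concrete, here is a minimal sketch of a flat disk image addressed in 512-byte sectors. It is a conceptual illustration only, not a real vendor format such as VMDK or VHD, and the file name is hypothetical.

```python
# Conceptual flat virtual disk: one host file, addressed in 512-byte sectors.
SECTOR = 512

def create_disk(path: str, size_mb: int) -> None:
    with open(path, "wb") as f:
        f.truncate(size_mb * 1024 * 1024)        # allocate an empty disk image

def write_sector(path: str, lba: int, data: bytes) -> None:
    assert len(data) == SECTOR
    with open(path, "r+b") as f:
        f.seek(lba * SECTOR)                     # jump to the requested sector
        f.write(data)

def read_sector(path: str, lba: int) -> bytes:
    with open(path, "rb") as f:
        f.seek(lba * SECTOR)
        return f.read(SECTOR)

# create_disk("guest_xp.img", size_mb=64)        # hypothetical image name
# write_sector("guest_xp.img", 0, b"\x00" * SECTOR)
```

Because the entire disk is an ordinary file, copying or moving it to another host is all that “moving the virtual machine” requires at the storage level.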

Virtual Machine Additions
Virtual machine additions increase the performance of the guest operating system compared to running without them, provide access to USB devices and other specialized devices, and, in some cases, enable higher video resolutions, thus offering an improved user interface experience within a virtual machine.
The additions also allow the use of customizations such as shared folders, drag-and-drop copy and paste between the host and virtual machines and between virtual machines, and other enhancements. One particularly useful enhancement is the ability of the mouse pointer’s focus to naturally move from the virtual machine window to the host machine’s active application windows without having to physically adjust it each time the window changes.
This allows you to interact with the virtualized operating system as if it were nothing more than another application window, such as a word processing program running on the host machine.

Shared Folders
Most virtual machine implementations support the use of shared folders. After the installation of virtual machine additions, shared folders enable the virtual machine to access data on the host. Through a series of under-the-cover drive mappings, the virtual machine can open files and folders on the physical host machine. You can then transfer these files from the physical machine to a virtual machine using a standard mechanism such as a mapped drive. Shared folders can access installation files for programs, data files, or other files that you need to copy and load into the virtual machine. With shared folders you don’t have to copy data files into each virtual machine. Instead, all of your virtual machines access the same files through a shared folder that targets a single endpoint on the physical host machine.
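As a rough illustration of the host-side mapping described above (not any particular vendor’s implementation), the sketch below translates a path requested inside a guest-visible share into the backing path on the host; the share name and directories are hypothetical.

```python
import os

# Hypothetical share: the guest sees "installers", backed by a host directory.
HOST_SHARES = {"installers": "/home/host_user/vm_shares/installers"}

def host_path_for(share: str, guest_relative_path: str) -> str:
    """Translate a guest-requested path inside a share to the backing host path."""
    root = HOST_SHARES[share]
    full = os.path.normpath(os.path.join(root, guest_relative_path))
    if full != root and not full.startswith(root + os.sep):
        raise PermissionError("path escapes the shared folder")  # keep guests inside the share
    return full

# Example: the guest asks for installers\office\setup.exe
# print(host_path_for("installers", "office/setup.exe"))
```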

Virtual Machine Monitor (VMM)
A virtual machine monitor is a software solution that implements virtualization and runs in conjunction with the host operating system. The virtual machine monitor virtualizes certain hardware resources, such as the CPU, memory, and physical disk, and creates emulated devices for the virtual machines running on the host machine. It is important to understand that the virtual machine monitor determines how resources should be allocated, virtualized, and presented to the virtual machines running on the host computer. Many software solutions that exist today utilize this method of virtualization.
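The toy sketch below illustrates only the bookkeeping side of that role: a monitor that tracks free host CPU and memory and refuses to create a virtual machine the host cannot back. The class, names, and numbers are illustrative, not any vendor’s API.

```python
# Toy sketch of the resource bookkeeping a virtual machine monitor performs.
class VirtualMachineMonitor:
    def __init__(self, host_cpus: int, host_memory_mb: int):
        self.free_cpus = host_cpus
        self.free_memory_mb = host_memory_mb
        self.vms = {}

    def create_vm(self, name: str, cpus: int, memory_mb: int) -> None:
        if cpus > self.free_cpus or memory_mb > self.free_memory_mb:
            raise RuntimeError(f"not enough host resources for {name}")
        self.free_cpus -= cpus
        self.free_memory_mb -= memory_mb
        # A real monitor would now set up virtual CPUs, memory, and emulated
        # devices (disk controller, network card, ...) for the new guest.
        self.vms[name] = {"cpus": cpus, "memory_mb": memory_mb}

vmm = VirtualMachineMonitor(host_cpus=4, host_memory_mb=8192)
vmm.create_vm("test-guest", cpus=1, memory_mb=256)   # hypothetical guest
```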

HYPERVISOR
In contrast to the virtual machine monitor, a hypervisor runs directly on the physical hardware, without any intervening help from a host operating system, to provide access to hardware resources. The hypervisor is directly responsible for hosting and managing the virtual machines running on the host machine. However, the implementation of the hypervisor and its overall benefits vary widely across vendors.


FULL VIRTUALIZATION

In computer science, full virtualization is a virtualization technique used to provide a certain kind of machine environment, namely one that is a complete simulation of the underlying hardware. Full virtualization requires that every salient feature of the hardware be reflected into each of the virtual machines: the full instruction set, input/output operations, interrupts, memory access, and whatever other elements are used by the software that runs on the bare machine and is intended to run in a virtual machine. In such an environment, any software capable of execution on the raw hardware can be run in the virtual machine, including, in particular, any operating system. The obvious test of virtualization is whether an operating system intended for stand-alone use can successfully run inside a virtual machine.


Other forms of platform virtualization allow only certain or modified software to run within a virtual machine. The concept of full virtualization is well established in the literature, but it is not always referred to by this specific term; see platform virtualization for terminology.
An important example of full virtualization was that provided by the control program of IBM’s CP/CMS operating system. It was first demonstrated with IBM’s CP-40 research system in 1967, then distributed via open source in CP/CMS in 1967-1972, and re-implemented in IBM’s VM family from 1972 to the present. Each CP/CMS user was provided a simulated, stand-alone computer. Each such virtual machine had the complete capabilities of the underlying machine, and (for its user) the virtual machine was indistinguishable from a private system. This simulation was comprehensive, and was based on the Principles of Operation manual for the hardware. It thus included such elements as instruction set, main memory, interrupts, exceptions, and device access. The result was a single machine that could be multiplexed among many users.
Full virtualization is only possible given the right combination of hardware and software elements. For example, it was not possible with most of IBM’s System/360 series, the exception being the IBM System/360-67; nor was it possible with IBM’s early System/370 systems until IBM added virtual memory hardware to the System/370 series in 1972.
Similarly, full virtualization was not quite possible on the x86 platform until the 2005-2006 addition of the AMD-V and Intel VT-x extensions (see x86 virtualization). Many platform virtual machines for x86 came very close and claimed full virtualization even prior to the AMD-V and Intel VT-x additions. Examples include Adeos, Mac-on-Linux, Parallels Desktop for Mac, Parallels Workstation, VMware Workstation, VMware Server (formerly GSX Server), VirtualBox, Win4BSD, and Win4Lin Pro. VMware, for instance, employs a technique called binary translation to automatically modify x86 software on the fly, replacing instructions that “pierce the virtual machine” with a different, virtual-machine-safe sequence of instructions; this technique provides the appearance of full virtualization.
A key challenge for full virtualization is the interception and simulation of privileged operations, such as I/O instructions. The effects of every operation performed within a given virtual machine must be kept within that virtual machine – virtual operations cannot be allowed to alter the state of any other virtual machine, the control program, or the hardware. Some machine instructions can be executed directly by the hardware, since their effects are entirely contained within the elements managed by the control program, such as memory locations and arithmetic registers. But other instructions that would “pierce the virtual machine” cannot be allowed to execute directly; they must instead be trapped and simulated. Such instructions either access or affect state information that is outside the virtual machine.
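A toy dispatch loop over an invented two-instruction machine can make the trap-and-simulate idea concrete. Real control programs work at the level of actual processor traps, so this is only an analogy; the instruction names and state fields are made up for illustration.

```python
# Toy trap-and-emulate loop: ADD only touches the VM's own register and runs
# unchanged, while OUT (an I/O instruction) would "pierce the virtual machine"
# and is therefore trapped and simulated by the control program.
def run_in_vm(program, vm_state):
    for op, arg in program:
        if op == "ADD":                            # safe: confined to VM state
            vm_state["acc"] += arg
        elif op == "OUT":                          # privileged: must not reach real hardware
            vm_state["virtual_io_log"].append(arg) # simulate the device instead
        else:
            raise ValueError(f"unknown instruction {op}")
    return vm_state

state = {"acc": 0, "virtual_io_log": []}
print(run_in_vm([("ADD", 5), ("OUT", 5), ("ADD", 1)], state))
# {'acc': 6, 'virtual_io_log': [5]} -- the I/O never left the virtual machine
```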

Full virtualization has proven highly successful for:
a) Sharing a computer system among multiple users;
b) Isolating users from each other (and from the control program);
c) Emulating new hardware to achieve improved reliability, security and productivity.
HARDWARE VIRTUALIZATION

Computer hardware virtualization is the virtualization of computers or operating systems. It hides the physical characteristics of a computing platform from users, instead showing another, abstract computing platform. The software that controls the virtualization was originally called a “control program”, but nowadays the terms “hypervisor” or “virtual machine monitor” are preferred.
The term “virtualization” was coined in the 1960s to refer to a virtual machine (sometimes called a “pseudo machine”), a term which itself dates from the experimental IBM M44/44X system. The creation and management of virtual machines has more recently been called “platform virtualization” or “server virtualization”.


Platform virtualization is performed on a given hardware platform by host software (a control program), which creates a simulated computer environment, a virtual machine (VM), for its guest software. The guest software is not limited to user applications; many hosts allow the execution of complete operating systems. The guest software executes as if it were running directly on the physical hardware, with several notable caveats. Access to physical system resources (such as network access, the display, keyboard, and disk storage) is generally managed at a more restrictive level than access to the host processor and system memory. Guests are often restricted from accessing specific peripheral devices, or may be limited to a subset of the device’s native capabilities, depending on the hardware access policy implemented by the virtualization host.
Virtualization often exacts performance penalties, both in the resources required to run the hypervisor and in reduced performance of the virtual machine compared with running natively on the physical machine.
HARDWARE UTILIZATION

Impacts
Virtualizing your infrastructure, or even a small number of machines, can have enormous benefits, but it can also affect the performance of your server, workstation, or mobile machine hardware, even with advances such as multi-core processors. It is important to understand some of the tradeoffs that occur at the hardware level with virtualization. This section outlines them on a component-by-component basis.
Physical RAM, CPU, hard disk space, and networking all play a role in determining whether a host machine is prepared to run a virtual machine-based application. Properly preparing your host machines prior to running virtual machines on them will help you achieve better stability, scalability, and long-term performance for your virtual machines. When selecting a host, you’ll need to ensure that it meets the virtual machine application’s minimum hardware requirements, and further that enough resources, particularly memory, are available for the number of virtual machines you want to run simultaneously on the host. Here is a breakdown of the various hardware components that are the usual bottlenecks and what can be done to prevent them.

CPU
The CPU is one of the more significant bottlenecks in the system when running multiple virtual machines. All of the operating systems that are running on the host in a virtual machine are competing for access to the CPU.
An effective solution to this problem is to use a multi-processor or, better, a multi-core machine where you can dedicate a core or more to a virtual machine. The technology to assign a given core to a virtual machine image is not yet fully provided by current virtualization vendors but is expected to be available in the near future. In the absence of a multi-core processor, the next best step is to find the fastest processor available to meet your needs.
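You can approximate core dedication yourself on a Linux host by pinning the process that backs a virtual machine to chosen cores. The sketch below assumes the VM runs as an ordinary host process whose PID you know; it is a host-side workaround, not a feature of the virtualization software.

```python
import os

def pin_vm_process(pid: int, cores: set) -> None:
    """Pin the host process backing a virtual machine to specific CPU cores (Linux only)."""
    os.sched_setaffinity(pid, cores)                      # restrict scheduling to these cores
    print("now allowed on cores:", os.sched_getaffinity(pid))

# Example: dedicate host cores 2 and 3 to one VM's process (vm_pid is assumed known)
# pin_vm_process(vm_pid, {2, 3})
```
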
Memory
Memory also can be a significant bottleneck, but its effect can be mitigated, in part, by selecting the best vendor for your virtualization solution, because various vendors handle memory utilization differently. Regardless of the vendor you choose, you must have a significant amount of memory available, roughly equivalent to the amount you would have assigned to each machine if it were to run as a physical machine.
For example, to run Windows XP Professional on a virtual machine, you might allocate 256 megabytes (MB) of memory. This is on top of the 256 MB recommended for the host computer, assuming Windows XP is the host.
This can mean in many cases that a base machine configuration comes out to approximately 1-2 gigabytes (GB) of memory, or perhaps many more gigabytes for a server-based virtualization solution.
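A quick back-of-the-envelope helper makes this arithmetic explicit. The per-VM overhead figure is an assumption for illustration, not a vendor number.

```python
def host_memory_needed(host_os_mb: int, guest_mbs: list, overhead_per_vm_mb: int = 32) -> int:
    """Rough host RAM estimate (in MB) for running all guests at once.

    overhead_per_vm_mb is an illustrative guess at the virtualization
    software's own per-VM cost; check your product's documentation.
    """
    return host_os_mb + sum(m + overhead_per_vm_mb for m in guest_mbs)

# The example from the text: a 256 MB Windows XP host running one 256 MB XP guest
print(host_memory_needed(256, [256]))             # 544 MB before applications
print(host_memory_needed(1024, [256, 512, 512]))  # 2400 MB, roughly 2.4 GB
```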
You can easily change the memory configuration for a guest operating system that is virtualized. Typically this change is done from within the virtualization software itself and requires only a shutdown and restart cycle of the virtual machine to take effect. Contrast this process with the requirement to manually install memory on each physical machine and you can see one of the benefits of virtualization technology.

Physical Disk
When it comes to virtualization, overall disk space utilization for each virtual machine isn’t as great a concern as the intelligent utilization of each physical drive. An additional important point to consider is the rotational speed of the drive in use. Because you may run multiple virtual machines on a single drive, the rotational speed of that drive can have a dramatic effect on performance. For the best performance across most of the virtualization products today, consider implementing multiple disk drives and using the fastest drive possible, in terms of rotational speed, for each drive. One way to boost performance of a virtualized solution beyond just having a faster drive is to ensure that the host machine and its associated operating system have a dedicated physical hard drive, and that all virtual machines, or ideally each individual virtual machine, have a separate physical hard disk allocated.

Network
Network utilization can also present bottleneck issues, similar to those with memory. Even though the virtual machine doesn’t add any significant amount of network latency into the equation, the host machine must have the capacity to service the network needs of all of the running virtual machines on it. However, as with memory, you still need to provide the amount of network bandwidth and network resources that you would have if the machines were running on separate physical hardware.
You might need to upgrade your network card if you are running multiple virtual machines in an IT environment and all machines are experiencing heavy concurrent network traffic. But in most desktop virtualization scenarios you will find that the network is not the problem. Most likely the culprit is the CPU, disk, or memory.

OPERATING SYSTEM LEVEL VIRTUALIZATION

Operating system-level virtualization is a server virtualization method where the kernel of an operating system allows for multiple isolated user-space instances, instead of just one. Such instances (often called containers, VEs, VPSs, or jails) may look and feel like a real server from the point of view of their owners. On Unix systems, this technology can be thought of as an advanced implementation of the standard chroot mechanism. In addition to isolation mechanisms, the kernel often provides resource management features to limit the impact of one container’s activities on the other containers.
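For readers unfamiliar with chroot, the minimal sketch below shows the underlying idea on a Unix host: confining a process to a prepared directory that it then sees as the root file system. The container path is hypothetical, the call requires root privileges, and real container systems layer namespaces and resource controls on top of this.

```python
import os

def enter_container(new_root: str) -> None:
    """Confine the current process to new_root (requires root privileges)."""
    os.chroot(new_root)   # the process now sees new_root as "/"
    os.chdir("/")         # move the working directory inside the new root

# enter_container("/srv/containers/web1")          # hypothetical prepared root filesystem
# os.execv("/bin/sh", ["/bin/sh"])                 # run a shell confined to that filesystem
```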

USES
Operating system-level virtualization is commonly used in virtual hosting environments, where it is useful for securely allocating finite hardware resources amongst a large number of mutually-distrusting users. It is also used, to a lesser extent, for consolidating server hardware by moving services on separate hosts into containers on the one server.
Other typical scenarios include separating several applications to separate containers for improved security, hardware independence, and added resource management features.
OS-level virtualization implementations that are capable of live migration can be used for dynamic load balancing of containers between nodes in a cluster.

ADVANTAGES AND DISADVANTAGES

Overhead
This form of virtualization usually imposes little or no overhead, because programs in a virtual partition use the operating system’s normal system call interface and do not need to be subject to emulation or run in an intermediate virtual machine, as is the case with whole-system virtualizers (such as VMware and QEMU) or paravirtualizers (such as Xen and UML). It also does not require hardware assistance to perform efficiently.

Flexibility
Operating system-level virtualization is not as flexible as other virtualization approaches, since it cannot host a guest operating system different from the host one, or a different guest kernel. For example, with Linux, different distributions are fine, but other operating systems such as Windows cannot be hosted. This limitation is partially overcome in Solaris by its zones feature, which provides the ability to run an environment within a container that emulates a Linux 2.4-based release or an older Solaris release.

Storage
Some operating-system virtualizers provide file-level copy-on-write mechanisms. (Most commonly, a standard file system is shared between partitions, and partitions which change the files automatically create their own copies.) This is easier to back up, more space-efficient and simpler to cache than the block-level copy-on-write schemes common on whole-system virtualizers. Whole-system virtualizers, however, can work with non-native file systems and create and roll back snapshots of the entire system state.
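Here is a conceptual sketch of that file-level copy-on-write behaviour, assuming a shared read-only base tree and one private overlay directory per partition (both paths hypothetical): reads fall through to the base unless the partition has its own copy, and the first write creates that copy.

```python
import os, shutil

BASE = "/srv/base_fs"           # read-only files shared by all partitions (hypothetical)
OVERLAY = "/srv/partitions/p1"  # this partition's private copies (hypothetical)

def resolve(path: str) -> str:
    """Return the partition's private copy if it exists, otherwise the shared base file."""
    private = os.path.join(OVERLAY, path)
    return private if os.path.exists(private) else os.path.join(BASE, path)

def open_for_write(path: str):
    """Copy the shared file into the overlay on first write, then modify only the copy."""
    private = os.path.join(OVERLAY, path)
    if not os.path.exists(private):
        os.makedirs(os.path.dirname(private), exist_ok=True)
        shutil.copy2(os.path.join(BASE, path), private)   # copy-on-write step
    return open(private, "r+b")
```
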
WHY VIRTUALIZATION IS REQUIRED

Following are some (possibly overlapping) representative reasons for and benefits of virtualization:
 Virtual machines can be used to consolidate the workloads of several under-utilized servers to fewer machines, perhaps a single machine (server consolidation). Related benefits (perceived or real, but often cited by vendors) are savings on hardware, environmental costs, management, and administration of the server infrastructure.
 The need to run legacy applications is served well by virtual machines. A legacy application might simply not run on newer hardware and/or operating systems. Even if it does, it may under-utilize the server, so, as above, it makes sense to consolidate several applications. This may be difficult without virtualization, as such applications are usually not written to co-exist within a single execution environment (consider applications with hard-coded System V IPC keys, as a trivial example).
 Virtual machines can be used to provide secure, isolated sandboxes for running untrusted applications. You could even create such an execution environment dynamically – on the fly – as you download something from the Internet and run it. You can think of creative schemes, such as those involving address obfuscation. Virtualization is an important concept in building secure computing platforms.
 Virtual machines can be used to create operating systems or execution environments with resource limits and, given the right schedulers, resource guarantees. Partitioning usually goes hand-in-hand with quality of service in the creation of QoS-enabled operating systems.
 Virtual machines can provide the illusion of hardware, or hardware configuration that you do not have (such as SCSI devices, multiple processors, …) Virtualization can also be used to simulate networks of independent computers.
 Virtual machines can be used to run multiple operating systems simultaneously: different versions, or even entirely different systems, which can be on hot standby. Some such systems may be hard or impossible to run on newer real hardware.
 Virtual machines allow for powerful debugging and performance monitoring. You can put such tools in the virtual machine monitor, for example. Operating systems can be debugged without losing productivity, or setting up more complicated debugging scenarios.
 Virtual machines can isolate what they run, so they provide fault and error containment. You can inject faults proactively into software to study its subsequent behavior.
 Virtual machines make software easier to migrate, thus aiding application and system mobility.
 You can treat application suites as appliances by “packaging” and running each in a virtual machine.
 Virtual machines are great tools for research and academic experiments. Since they provide isolation, they are safer to work with. They encapsulate the entire state of a running system: you can save the state, examine it, modify it, reload it, and so on. The state also provides an abstraction of the workload being run.
 Virtualization can enable existing operating systems to run on shared memory multiprocessors.
 Virtual machines can be used to create arbitrary test scenarios, and can lead to some very imaginative, effective quality assurance.
 Virtualization can be used to retrofit new features in existing operating systems without “too much” work.
 Virtualization can make tasks such as system migration, backup, and recovery easier and more manageable.
 Virtualization can be an effective means of providing binary compatibility.
 Virtualization on commodity hardware has been popular in co-located hosting. Many of the above benefits make such hosting secure, cost-effective, and appealing in general.
 Virtualization is fun.
 Plenty of other reasons.
FUTURE SCOPE

The future of virtualization
When most people think of server virtualization, they think of one operating system running within a window on top of a second OS. Perhaps you want to keep some old application chugging along, but it can only work with Microsoft Windows NT. You don’t want to run a stand-alone NT server, so you’d run a virtual instance of NT on top of Microsoft Windows Server 2003, using virtualization software from VMware Inc. of Palo Alto, Calif., or some other company.
More advanced enterprise usage of virtualization software may involve on-the-fly bare-metal installations, where multiple instances of an OS, along with related applications, are downloaded to computers that have no operating systems at all. As it turns out, such approaches might be yesterday’s way of thinking about virtualization. Developers are looking at ways of expanding the whole concept of the technology beyond its current limitations. Take VMware, for instance. Last week, the company announced the winner of its Ultimate Virtual Appliance Challenge.
This contest challenged developers to look at using virtualization as a way to ease software deployment. In this approach, a software package would come to the user bundled with an OS. In VMware’s perspective, this would allow software developers to pick the single best OS for their applications, minimizing installation headaches.
VMware launched the challenge last February and announced the winners last week. Some of the entries included a network debugging console, a database caching mechanism, an encryption server, and a network-attached file server appliance. The company posted hundreds of the entries, including the winning ones, on its site. Users can download these applications and run them on VMware Player or VMware Server, both available as free downloads.
VMware is not the only one thinking outside the virtualized box, evidently. The open source Xen community is also hammering away at extending virtualization. At the LinuxWorld conference in Boston last spring, we got the chance to talk virtualization with Brian Stevens, the chief technical officer of Red Hat Inc., a big supporter of Xen.
‘We’re trying to change the usage model,’ Stevens said. ‘Today what happens if you want to do virtualization is that you buy a number of products and put them all together. You buy VMware, you install it, you buy an OS and install that. You spend some time making sure all that works together. So we’re trying to think of virtualization as more of an integrated solution.’
In the near-term, Stevens foresees rolling virtualization capabilities into the OS itself so that you can ‘have a virtualization-aware operating system.’ By doing this, developers can then build performance tools to gauge how well the virtualized environments are performing, how many resources they are taking up, and so on. You can do some analysis today, but the virtualized instances still operate as a bit of a black box, both to the host and the guest OS.
Beyond that, Xen developers are also playing with the radical idea of getting rid of the guest OS altogether. Why fire up an entire OS when you only need a few of its features? Taking the place of the guest OS could be an abstract virtual container tweaked for a particular environment. We realize there will be different ways of carving up systems, with different properties.
Xen would provide a set of application programming interfaces, which would allow developers to build containers for their specific applications. ‘They could specify what type of container they want: A lightweight machine, a full virtual machine, or an emulated environment,’ Stevens said.

CONCLUSION

Virtualization technology, while not new, is growing at a significant rate in its use on servers and desktop machines and long ago lost its connection to mainframe systems alone. While challenges do exist, such as the unification of terminology, the development of even more robust software solutions, and the implementation of greater device virtualization support, virtualization is still poised to make a significant impact on the landscape of computing over the next few years.

