
Why To Go For Virtual Private Servers

A Virtualisation Article

An analogy:

One bus can carry 30 passengers, but cars are now so relatively cheap that we all travel by car. That’s up to 30 vehicles that need maintaining, rather than just one.

If you can equip the bus with all the facilities you get in the cars (eg airline-style seats with all the personal electronics built in for entertainment and communication), we can all revert to public transport, freeing up the roads and cutting maintenance contracts.

If only!

However, in the computer world this is much more attainable. One ‘large’ computer, fully equipped and ‘partitioned’, can simultaneously run applications that would otherwise need a number of individual machines, but with much reduced maintenance and upgrade costs.

Introduction:

Funnily enough, this process started because of computers’ limited resources, which caused problems. For example, small computers and notebook systems only had physical space for a single hard disk, but the introduction of partitioning allowed that single disk to be addressed as if it were two or more devices; when newer, larger hard drives came along, partitioning was the only way for legacy MS-DOS systems to address all the space.

This then led to virtual memory, which relies on the same kind of partitioning: the system provides applications with RAM which does not really exist by borrowing space from the hard drive (the swap or paging area). Virtual memory has become so common because it provides benefit at a very low cost. Emulation applications capable of imitating one computer platform or programme on another have existed ever since the need for migration arose.

These examples helped resolve problems of limited resources. However, now that hardware costs have fallen, the need for such economy has gone and the number of devices has proliferated. A different sort of economy has become necessary: each real device represents an individual management exercise, and the maintenance cost of all this equipment is becoming the problem.

Partitioning effectively creates a set of virtual hard disks, allowing several file systems to exist on a single hard disk; these logical divisions within a hard disk add a second level of abstraction to the information storage capabilities of a computer.
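
As a rough sketch of the idea (not tied to any real operating system or disk format), a partition can be modelled as nothing more than an offset and a length within one physical device; the hypothetical Python classes below show how two ‘virtual disks’ stay isolated while sharing the same hardware.

```python
# Minimal sketch: one "physical disk" carved into logical partitions.
# The Disk and Partition names are illustrative, not a real disk API.

class Disk:
    def __init__(self, size_bytes):
        self.blocks = bytearray(size_bytes)   # the single physical device

class Partition:
    """A logical view onto one slice of the underlying disk."""
    def __init__(self, disk, offset, length):
        self.disk, self.offset, self.length = disk, offset, length

    def write(self, addr, data):
        if addr + len(data) > self.length:
            raise ValueError("write beyond partition boundary")
        start = self.offset + addr            # translate to a physical address
        self.disk.blocks[start:start + len(data)] = data

    def read(self, addr, n):
        start = self.offset + addr
        return bytes(self.disk.blocks[start:start + n])

disk = Disk(1024)
c_drive = Partition(disk, offset=0, length=512)     # first "virtual disk"
d_drive = Partition(disk, offset=512, length=512)   # second "virtual disk"
c_drive.write(0, b"hello")
print(c_drive.read(0, 5))   # b'hello' (isolated from d_drive's space)
```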

The result is the ability to have many isolated execution environments on a single computer. Only one can be used at a time on a single processor, although dual-core technology is changing this rapidly. This is known as a hardware virtual machine. Here we have one physical set of resources, in this case a desktop computer, with multiple personalities (eg dual booting into either Windows or Linux).

The current generation of computer applications typically requires more memory than the computer actually has. The solution to this problem involves providing ways to allocate portions of memory to programs at their request, and to free that memory for reuse when it is no longer needed. The virtual memory can be many times larger than the physical memory in the system.

Virtual memory systems separate the memory addresses used by a process from actual physical addresses, allowing separation of processes and increasing the effectively available amount of RAM using disk swapping. The quality of the virtual memory manager can have a big impact on overall system performance.

Virtual memory management automates the allocation of memory resources: areas of RAM that have not been used recently are copied onto the hard disk, freeing memory space to load more applications than the physical RAM can support. Because this copying happens automatically, the process is transparent to both users and applications; the Memory Management Unit sits between the CPU and the memory bus, intercepting every virtual address and converting it into a physical address.
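
To make the translation step concrete, here is a deliberately simplified Python sketch of the idea, assuming an invented two-frame ‘RAM’, a dictionary standing in for swap space on disk, and a least-recently-used eviction rule; real MMUs and operating systems are far more involved.

```python
# Toy MMU: virtual pages are mapped to a small set of physical frames;
# when frames run out, the least recently used page is "swapped" to disk.
from collections import OrderedDict

PAGE_SIZE = 4096
NUM_FRAMES = 2               # pretend physical RAM holds only two pages

page_table = OrderedDict()   # virtual page number -> physical frame number
swap_space = {}              # stands in for the swap area on the hard disk
free_frames = list(range(NUM_FRAMES))

def translate(virtual_addr):
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)
    if vpn not in page_table:                 # page fault
        if not free_frames:                   # no room: evict the LRU page
            victim_vpn, victim_frame = page_table.popitem(last=False)
            swap_space[victim_vpn] = victim_frame
            free_frames.append(victim_frame)
        page_table[vpn] = free_frames.pop()   # load (or reload) the page
        swap_space.pop(vpn, None)
    page_table.move_to_end(vpn)               # mark as recently used
    return page_table[vpn] * PAGE_SIZE + offset

print(hex(translate(0x0000)))   # maps virtual page 0 into a frame
print(hex(translate(0x1000)))   # maps virtual page 1
print(hex(translate(0x2000)))   # evicts page 0 to "swap", reuses its frame
```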

The combination of techniques described above allows a complete, implementation-independent model of a computer system to be constructed within the memory of a single host system; this technology can be thought of as an advanced form of emulation.

The model computer can emulate all the layers of hardware and software required for a complete virtual machine, including the operating system, utilities and application programs; or it can simply provide an application interface to the host operating system, or anything between these two extremes. The Java Virtual Machine is an example of such a virtual machine: it emulates a non-native system, allowing computers to run software written for a different execution environment.

The resulting external interface is, in effect, a higher-level abstraction of an emulator which conceals the real system implementation by creating an extra resource layer between an existing computer platform and its operating system.

Virtualisation:

In practical terms, virtualisation is achieved in one of two ways: as a virtual machine or as an emulated machine. Both techniques create an additional software environment positioned between the underlying computer platform and the main operating system.

Emulation provides functionality completely in software, whereas virtualisation uses both software and the physical resources of the host system, which are partitioned into multiple contexts: isolated address spaces, completely separate from any host process, all of which take turns running directly on the processor itself.

An operating system is composed of layers; the kernel is the most central component. It remains in main memory, providing essential services such as memory management, process and task management, and disk management to the other parts of the operating system and to applications.

Full virtualisation requires multiple kernels running concurrently on the host computer system; the single physical computer’s memory is partitioned into multiple smaller environments, each able to support the complete operating system architecture of an emulated computer.

However, unlike an emulated machine, each of the simulated machines seemingly has dedicated access to the underlying raw hardware, with the host operating system relinquishing control of the central processing unit via time-division multiplexing.
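
That ‘taking turns’ is simply time-division multiplexing: the monitor hands the one physical processor to each guest in rotation for a short slice of time. The Python sketch below illustrates only that scheduling idea, with invented guest names and a token time slice; it is not how any real hypervisor is implemented.

```python
# Round-robin time slicing between guest "machines": each guest gets the
# (single) CPU for a fixed slice, then control returns to the monitor.
import itertools
import time

def guest(name):
    """Each guest is a generator: it runs until it yields the CPU back."""
    step = 0
    while True:
        step += 1
        print(f"{name}: doing work, step {step}")
        yield                      # end of this guest's time slice

guests = [guest("guest-A"), guest("guest-B"), guest("guest-C")]
TIME_SLICE = 0.01                  # seconds of pretend CPU time per turn

for turn, g in enumerate(itertools.cycle(guests)):
    next(g)                        # let one guest run on the "CPU"
    time.sleep(TIME_SLICE)         # the slice elapses
    if turn >= 8:                  # stop the demo after a few rotations
        break
```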

How it all works:

Hardware describes the physical equipment, interconnections and devices required to read, store and execute instructions. In a traditional computer system, the mechanics and electronics are controlled by the operating system kernel, and the applications run at the top level.

The central processing unit fetches, decodes and executes the instructions held in memory; all the system’s hardware devices are controlled by driver utilities, which are mapped to memory locations. The control unit creates pathways between the appropriate parts of the system: data is transferred via the data bus, while the locations it is read from or written to are selected via the address bus.
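
As an illustration of that fetch-decode-execute cycle, here is a toy Python loop over a tiny invented instruction set; the opcodes and memory layout are made up purely for demonstration and do not correspond to any real processor.

```python
# Tiny fetch-decode-execute loop over an invented accumulator machine.
memory = [
    ("LOAD", 7),      # put the literal 7 into the accumulator
    ("ADD", 5),       # add 5 to the accumulator
    ("PRINT", None),  # output the accumulator
    ("HALT", None),   # stop the machine
]
acc, pc = 0, 0                      # accumulator and program counter

while True:
    opcode, operand = memory[pc]    # fetch the instruction at the PC
    pc += 1
    if opcode == "LOAD":            # decode and execute
        acc = operand
    elif opcode == "ADD":
        acc += operand
    elif opcode == "PRINT":
        print("accumulator =", acc)
    elif opcode == "HALT":
        break
```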

Products such as “Virtual PC” emulate the operating system, applications and underlying hardware of the simulated system, interposing a Virtual Machine Monitor (VMM) layer between the physical hardware and the host operating system. This interface cuts across the normal protection domains within the architecture of a computer system and allows programs from one privilege level to access resources intended for programs in another.

Each simulated hardware resource is assigned an address within the host application; the virtual addresses of the simulated hardware are redirected to the physical addresses of the underlying hardware. This allows the technique to be applied to any resource mapped across the bus, including the memory address space and the I/O address space.
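
A hypothetical sketch of that redirection, assuming an invented ‘serial port’ address and handler: guest writes to ordinary memory pass straight through, while writes to an address claimed by a simulated device are intercepted and emulated on the host. None of the names or addresses correspond to real hardware or to Virtual PC’s internals.

```python
# Sketch of trap-and-emulate memory-mapped I/O: guest addresses owned by
# simulated devices are intercepted and redirected to host-side handlers.

guest_ram = bytearray(64 * 1024)          # plain guest memory

def serial_port_write(value):             # host-side emulation of a device
    print("emulated serial port output:", chr(value))

# Guest physical addresses claimed by simulated devices (invented values).
mmio_handlers = {0xF000: serial_port_write}

def guest_store(addr, value):
    """What the monitor does when the guest writes to a physical address."""
    if addr in mmio_handlers:
        mmio_handlers[addr](value)        # trap: emulate the device in software
    else:
        guest_ram[addr] = value           # ordinary RAM access passes through

guest_store(0x0010, 42)                   # lands in guest RAM
guest_store(0xF000, ord("!"))             # intercepted, handled by the host
```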

Here the host operating system manages the physical computer, while the Virtual Machine Monitor (VMM) layer manages the emulated machines, providing the infrastructure for hardware emulation.

The guest operating systems execute on the virtual machines as if they were running on physical rather than emulated hardware. When a guest operating system is running, the VMM kernel manages the CPU and hardware during virtual machine operations, creating an isolated environment in which the guest operating system and its applications run close to the hardware at the highest possible performance.

The current generation of machine virtualisation applications run below the kernel level but are not integrated into the host operating system. Virtual PC applications allow desktop systems to run concurrent operating systems, and virtual servers allow one server to run concurrent services. This currently requires manipulation of the internal protection domains within the host operating system and some CPU architectures (domains which exist to protect data and functionality from faults), which represents a considerable risk.

The next generation of Intel and AMD architectures (Intel’s “Vanderpool” and AMD’s “Pacifica”) will incorporate virtualisation support directly in hardware, simplifying the interaction; this will make VM systems more reliable and allow a guest operating system to run operations natively without affecting other guests or the host OS.

Using virtualised resources:

The ability to run multiple operating systems simultaneously on a single computer has found users amongst application testers, who have traditionally required an isolated environment for testing new code changes and outright experimentation, in addition to the production environment. With a virtualised system, the live system and testing environments can coexist on the same machine in total isolation. This also eliminates the need for users to compete for access to the test environment, as each of them could potentially have a dedicated virtual test environment.

Computer systems in teaching environments such as universities have traditionally been abused by students, a direct result of the need to give students unrestricted access so that they can learn; giving each student a disposable virtual machine offers that freedom without putting the underlying system at risk.

Internetworking of local servers and storage is partly a consequence of the physical restrictions on the hardware that can be attached to a common data bus. A typical data centre contains a great many devices, and if any part of the environment goes down, time is wasted and profits are lost while the fault is diagnosed and traced back through a spaghetti soup of cables. A physical rack of network equipment could be replaced with one physical unit, with everything else a digital version of those devices, each kept separate in memory.

Virtual environments are slower than their physical counterparts, because the simulated hardware is subject to the restrictions of the physical hardware it shares. In particular, any disk-related activity is significantly slower.

When applied to enterprise-level systems such as ‘virtualised’ networking, in place of multiple traditional independent hardware devices, the ability to run hundreds of virtual private servers on a single physical server could potentially create substantial savings.

Network virtualisation refers to the ability to manage traffic over a network shared among different enterprises. Server virtualisation, by contrast, replicates the isolated execution environments found in any data centre; but unlike traditional infrastructure, which imposes boundaries such as the need for space, power and cooling systems, these virtual servers all run concurrently within a single host computer, interconnected by virtual networks whose interfaces need not physically exist. This approach allows computing resources to be dynamic, efficient and highly available.

A Storage Area Network (SAN) is a heterogeneous collection of storage devices linked to the local area network which are accessed and administered as one central pool: a cluster of many storage devices aggregated into a larger and more powerful “virtual” storage system. Here the software allows a single storage environment to be created spanning multiple storage devices, and the implementation is transparent to the user. This is known as storage virtualisation and is the inverse of machine virtualisation.
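
The Python sketch below is a minimal, purely illustrative model of that aggregation (the class names and block counts are invented): several small backing devices are presented as one flat block address space, and each read or write is routed to whichever device actually holds the requested block.

```python
# Several small "devices" presented as one large virtual volume:
# a linear block address is routed to the backing device that holds it.

class BackingDevice:
    def __init__(self, num_blocks):
        self.blocks = [b""] * num_blocks

class VirtualVolume:
    """One flat block address space spanning many backing devices."""
    def __init__(self, devices):
        self.devices = devices
        self.per_device = len(devices[0].blocks)

    def _locate(self, block_no):
        dev_index, local = divmod(block_no, self.per_device)
        return self.devices[dev_index], local

    def write(self, block_no, data):
        dev, local = self._locate(block_no)
        dev.blocks[local] = data

    def read(self, block_no):
        dev, local = self._locate(block_no)
        return dev.blocks[local]

# Three 100-block devices appear to the user as one 300-block volume.
volume = VirtualVolume([BackingDevice(100) for _ in range(3)])
volume.write(250, b"report contents")   # transparently lands on device 2
print(volume.read(250))
```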

Once resources are virtualised they can be manipulated easily in software. In addition to reducing the fixed and operational costs associated with managing multiple devices, otherwise impossible or highly costly software enhancements, new uses of the machines and real-time changes all become possible, along with the potential to provide services at much lower cost.

System upgrades require making changes or additions to the programming of a system in order to keep functionality up to date with current needs.

A virtualised network could be seen as a security risk, giving hackers access to all the resources for just the effort required to break into one device. On the other hand, protecting one physical asset should be easier than protecting many, because there are fewer possible entry points; although in a virtual network there are no physical routers, which removes some of the physical access barriers. The issue can be compared to the difference between the level of protection gained from a software firewall running on your web server and that of a dedicated system doing the same job.

The ability to virtualise complete physical systems provides a new way to overcome the problems created by legacy systems. Traditional solutions involve maintaining the old system and keeping it running in an essentially unaltered state; alternatively, the system may be expanded or partly integrated with other software or hardware, but such solutions are traditionally perceived as technically infeasible or prohibitively expensive, seemingly forcing the choice of a maintenance strategy.

The current generation of servers is difficult to operate and maintain with our existing applications, and there is no easy way to install and maintain such services; the newer real-time streaming services, such as Voice-over-IP and multimedia instant messaging, will require even more sophisticated configuration and maintenance. By comparison, this can be accomplished with virtual services in a much more straightforward manner.

Just like the all-in-one small office solutions, which integrate a fax machine and printer into a single system and provide scanner and photocopier functionality, enterprises are able to integrate their web, email and FTP servers into one host server. However, just like the all-in-one systems, this primarily reduces the physical requirements but drastically increases the complexity of the overall solution.

Traditionally, if the photocopier failed, the stand-alone printer and scanner were unaffected, and those two devices could be used to replace the lost functionality. The same failure in an integrated system, however, would most likely result in the total loss of all functionality, with no backup.

Summary:

Although there are still problems, with the introduction of the newer processor architectures and the operating systems able to use them these are disappearing rapidly. Soon the wheel will have turned full circle, and we will go back to the days of a single ‘large’ computer with many users linked by dumb terminals.

Or will it? The newer range of online applications, such as those launched by Google, may just take us all off in an entirely different direction!

