Who invented virtualization

The mainframe time-sharing model represented a major breakthrough in computer technology: the cost of providing computing capability dropped considerably, and it became possible for organizations, and even individuals, to use a computer without actually owning one. Similar reasons are driving virtualization for industry-standard computing today: the capacity of a single server is so large that it is almost impossible for most workloads to use it effectively.

The best way to improve resource utilization, and at the same time simplify data center management, is through virtualization. Data centers today use virtualization techniques to abstract the physical hardware, create large aggregated pools of logical resources (CPUs, memory, disks, file storage, applications, and networking), and offer those resources to users or customers in the form of agile, scalable, consolidated virtual machines.
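To make the pooling idea concrete, here is a minimal, hypothetical sketch in Java. The ResourcePool class and its numbers are invented for illustration and do not correspond to any real hypervisor's API; the point is only the bookkeeping: one large physical host is carved into several smaller virtual machines.

import java.util.ArrayList;
import java.util.List;

// Illustrative model only: a pool of physical capacity from which
// "virtual machines" are carved. Real hypervisors do far more
// (scheduling, isolation, device emulation), but the accounting idea is the same.
public class ResourcePool {
    private int freeCpus;
    private int freeMemoryGb;
    private final List<String> vms = new ArrayList<>();

    public ResourcePool(int totalCpus, int totalMemoryGb) {
        this.freeCpus = totalCpus;
        this.freeMemoryGb = totalMemoryGb;
    }

    // Allocate a VM if enough capacity remains; return false otherwise.
    public boolean createVm(String name, int cpus, int memoryGb) {
        if (cpus > freeCpus || memoryGb > freeMemoryGb) {
            return false; // the pool is exhausted
        }
        freeCpus -= cpus;
        freeMemoryGb -= memoryGb;
        vms.add(name + " (" + cpus + " vCPU, " + memoryGb + " GB)");
        return true;
    }

    public static void main(String[] args) {
        // One large physical server: 64 CPUs, 512 GB of memory.
        ResourcePool host = new ResourcePool(64, 512);
        // Several smaller workloads share it instead of each needing its own machine.
        host.createVm("web-01", 4, 16);
        host.createVm("db-01", 16, 128);
        host.createVm("batch-01", 8, 64);
        host.vms.forEach(System.out::println);
    }
}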

Virtualization can also happen at the level of a single application. Java allowed you to write an application once and then run it on any computer with the Java Runtime Environment (JRE) installed. Java works by compiling the application into something known as Java bytecode.

When you write your program, your Java code is not compiled into native machine code. Instead, it is compiled into Java bytecode, which is not translated into machine instructions until just before the program is executed. Since the JRE performs that final translation just before running, the developer does not need to worry about which operating system or hardware platform the end user will run the application on, and the user does not need to know how to compile a program; that is handled by the JRE. Whenever a Java application is run, it runs inside the Java Virtual Machine.

You can think of the Java Virtual Machine (JVM) as a very small operating system created with the sole purpose of running your Java application. You write the application once, and it runs anywhere.
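As a tiny illustration of that flow, the class below (the file name Hello.java is just an example) is compiled once with javac into platform-neutral bytecode, and the resulting Hello.class file runs unchanged on any system with a JRE, because the local JVM translates the bytecode into native instructions at run time.

// Hello.java -- compile once with: javac Hello.java
// This produces platform-neutral bytecode in Hello.class, which can then be
// run on Windows, Linux, or macOS with: java Hello
public class Hello {
    public static void main(String[] args) {
        // Prints whichever JVM and operating system it happens to be running on.
        System.out.println("Running on " + System.getProperty("os.name")
                + " with Java " + System.getProperty("java.version"));
    }
}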

At least that is the idea; there are some limitations. With increased complexity came expanding administrative costs driven by the need to hire more experienced IT professionals, and the need to carry out a wide variety of tasks, including Windows Server backup and recovery.

These actions required more manual intervention in processes than many IT budgets could support. Server maintenance costs, especially those tied to Windows Server backup, were climbing and more personnel were required to work through an increasing number of day-to-day tasks.

Just as important were the issues of how to limit the impact of server outages, improve business continuity, and create more robust disaster recovery plans. Still, mainframe virtualization served as a powerful inspiration for VMware to revive the concept and apply it to x86 machines. In the late 1990s, VMware stepped in and began to apply its own virtualization model. In 1999, VMware introduced virtualization to x86 systems, which VMware points out were not designed for virtualization in the way mainframes were.

Early computers sat idle whenever a programmer was thinking or entering data at the console. To prevent this idle time, developers and computers were separated. A few years later, scientists came to the idea that if more than one user were allowed to use the computer, efficiency would increase: while one user enters data, the computer works on the tasks of other users, filling the pauses and minimizing idle time. This idea was called the time-sharing concept, in which computing resources are shared among many users. It appeared in the early 1960s and led to many revolutionary changes, including the emergence of virtualization.

While one user was thinking or entering data, the computer processed the tasks of another user. The equipment became more powerful, and the tasks generated by a handful of users were no longer enough to fill its capacity, so the processor was taught to switch between tasks more often. Each task received a "quantum" of time during which the processor worked on it.

If one quantum wasn't enough to complete a task, the processor switched to another task and returned to the first one during its next quantum. The switches occurred so quickly that each user could believe they had the machine to themselves. The first implementation of the time-sharing concept was the Compatible Time-Sharing System (CTSS) at the Massachusetts Institute of Technology in the early 1960s.
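The quantum-based switching described above is essentially round-robin scheduling. The toy Java simulation below is purely illustrative (the user names, workloads, and quantum length are invented): each task runs for at most one quantum, and any unfinished task goes back to the end of the queue.

import java.util.ArrayDeque;
import java.util.Deque;

// Toy round-robin scheduler: each "task" needs some number of time units,
// and the processor gives every task a fixed quantum before moving on.
public class RoundRobinDemo {
    public static void main(String[] args) {
        final int quantum = 2; // time units per turn (illustrative value)

        String[] names = {"alice", "bob", "carol"};
        int[] workLeft = {5, 3, 4}; // remaining work per task, in time units

        // Each queue entry is {task index, remaining work}.
        Deque<int[]> queue = new ArrayDeque<>();
        for (int i = 0; i < names.length; i++) {
            queue.add(new int[]{i, workLeft[i]});
        }

        int clock = 0;
        while (!queue.isEmpty()) {
            int[] task = queue.poll();
            int slice = Math.min(quantum, task[1]);
            clock += slice;
            task[1] -= slice;
            System.out.printf("t=%2d  ran %s for %d unit(s), %d left%n",
                    clock, names[task[0]], slice, task[1]);
            if (task[1] > 0) {
                queue.add(task); // not finished: back to the end of the queue
            }
        }
    }
}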

Around the same time as CTSS, a time-sharing system was also started at Dartmouth College, the place where BASIC was invented. The scientists even managed to sell it, although it never became widespread. One of the first general-purpose operating systems built around time-sharing was Multics, the predecessor of the Unix family. Both Multics and the Dartmouth system found their applications, although they were far from perfect: slow, unreliable, and unsafe.

Scientists wanted to make these systems better and even knew how; however, the capabilities of the hardware were limited. It was hard to overcome these limits without support from the manufacturers, and after some time the manufacturers joined the work as well.

The result was IBM's CP/CMS, the first operating system to support virtualization: its hypervisor runs directly on the hardware and creates several virtual machines. This approach is much more convenient than plain time-sharing, and such systems already looked much more familiar to a modern user: terminals connected to a mainframe.

The mainframe was a big and powerful computer, while the terminals were simple devices with a screen and a keyboard. Users worked at the terminals to enter data and submit tasks, and the mainframe processed them.

One mainframe could be connected to tens or even hundreds of terminals. The personal computer boom of the late 20th century affected not only the mainframe industry but also the development of virtualization technologies. While computers were expensive and bulky, companies preferred to use a mainframe with terminals because they could not afford to buy a computer for every employee. Over time, computers became smaller and cheap enough for private companies to afford.

So in the 1980s, personal computers replaced terminals. Together with mainframes, virtualization technologies faded into the background, but not for long.



