finelemon2
02-09-2004, 01:09 AM
How does everyone feel about being limited by only being able to use Windows Mobile on their Pocket PC hardware or Palm OS on their Palm hardware?
The definition of a virtual machine:
A virtual machine is any agreed-upon standard which gives a consistent interface to using software on different hardware platforms, or software in different contexts, including contexts created by other software. Thus, a CPU instruction set can be a virtual machine. How it is implemented is IRRELEVANT, except for optimization. What is important is the fact that it enables COMPATIBILITY between software. It is also why it is called "virtual" - it can exist in the abstract without necessarily being IMPLEMENTED in hardware or software. Anything digital is as such "virtual" - as soon as you leave the analog world and start talking in terms of digital entities such as bits or bytes, you are talking in terms of virtual machines. The electric circuitry is real; a bit is purely abstract, infinitely copyable. However, the processes that occur in hardware are real - they can do useful work. Programming software and digital hardware design, past the analog gate stage, are the SAME TASK, just at differing levels of abstraction.
You might ask: what is it that enables software written on an x86 to work on a PowerPC architecture? There are plenty of examples of cross-platform software. How is it that programmers can write such software without having to start from scratch each time? The answer is that they are in fact using a VIRTUAL MACHINE, which is the C PROGRAMMING LANGUAGE. In other words, a programmer will write code in C knowing that if it is written carefully in portable C (there is a subset of C which is portable) according to the standard, other C compilers will be able to compile it for their respective architectures and enable the software to be used on those platforms. This is an example of coarse-grained, extremely primitive, manually activated MIGRATION.
Other examples of virtual machines, in a little bit more abstract sense (but not really), are the UNIX and Win32 APIs, or any API.
There are purely technical reasons for having virtual machines that are high-level enough to be implemented efficiently on any present and future hardware, and that are free software rather than proprietary software (for our purposes, proprietary means unchangeable by anyone except a specific group, e.g. Intel or Microsoft). In the software ecology, a virtual machine's place is to enable portable interchange of programs and data. The ideal situation is UNIVERSAL COMPATIBILITY: any computing context, wherever it is conceptually meaningful and provides UTILITY, should be capable of exchanging information, including program code and data, with another context. In many cases this exchange is essential to the operation of the system. Where we wish to prevent communication, there is no reason why it cannot be prevented. It is when portability is prevented artificially - by the very fact that we are using a proprietary virtual machine to run the software - that it becomes a burden; that is, when the prevention is imposed by the system's limitations rather than by our own desire to prevent. Use of a proprietary virtual machine artificially limits the software's ability to carry out such exchanges efficiently. Why? Not because of what that virtual machine is capable of on ONE system, but what it is capable of across MULTIPLE systems and how it enables migration - in other words, how it enables ABSTRACTION. The problem is not pettifogging; it is very real. It is faced by every programmer today, by the second, with any attempt to migrate, with untold duplication of work and waste of human effort in trying to solve what is essentially the same problem. It is more a problem of bureaucracy than a technical one; it is in fact simply a failure to agree on a standard. What would happen if each company decided to use its own system of units and no one agreed on the French one as we do?
Since units are related by a simple multiplicative factor, that situation would actually not be all that bad. But with software, where we are intricately describing PROCESS, the problem of not agreeing on a standard simply means that the system can't do what you want when it comes to migration. Unless every system uses the one same proprietary VM, compatibility and the ability to migrate are impeded. Why? Because software has to be written to run in some digital context or other in order to operate. And if every system were to use the one proprietary VM, such as x86, it would prevent certain optimizations from taking place - such as more intelligent, fine-grained control over caching than what was put into the CPU hardware - when dealing with multiple connected systems migrating data. Otherwise, systems are forced to provide a compatibility layer in order to have code migration. Compatibility layers such as this are themselves virtual machines, and they do not run as efficiently as a native implementation.
It is all very well to say "Well, this is all just a matter of programming". This is true. But it is impossible that every programmer is going to write their own operating system for every piece of hardware they come across simply so they can enable efficient migration. These are LARGE SYSTEMS that have to be ENGINEERED by teams of engineers. You have conventional interfaces used to enable different people (and different pieces of software) to manipulate them. Engineers, whether they like it or not, are part of a global work force which is currently engaged in unnecessary DUPLICATION of work. No one team or group of people can produce a piece of software without someone else having an influence, or without someone else needing to change it later on. Preventing duplication of work means using computers to automate what can be automated, including involving them in the engineering of new computing systems and the maintenance of existing ones.
An extract from an article by François-René Rideau:
Actually, as long as the only operation that leads to the production of code is manual addition - writing out of the blue by a human mind supposedly inspired directly by the Muses - it is possible to attribute an "origin", an "author", to every word, to every symbol that constitutes a program. But as soon as metaprogramming is allowed - that is, arbitrary operations that act on code and produce code - and as soon as the environment inside which the author (man or machine) of a program lives is taken into account, then that author is no longer the inventor of the program, but only the last link in a holistic process of transformation, in which it is not possible to attribute an unequivocal origin to any produced element whatsoever.
The problem at the moment is that there are no such conventional interfaces to enable ease of migration. This is because engineers themselves are held hostage, forced to work with or on top of PROPRIETARY interfaces rather than free ones. Whenever you hear about compatibility problems between different pieces of software or hardware, or a "can't do" attitude from techos or programmers when it comes to migration, the root cause is likely the use of differing, incompatible, proprietary virtual machines. In this sense, a virtual machine can just as well be, for example, a word processor document format as a CPU instruction set like x86. It is all just software which we use to fulfil our purposes. What you want to do is make it as flexible as possible in terms of migration. Why else would you have it? This is what computers are for: enabling information processing and exchange. There is no point to having computers without it. This is also why computers currently make it easy for companies to have an excuse to USE PEOPLE. Rather than having computers provide assistance, they lock people in to one method or other of doing things, i.e. the one created by the company. This is where the term "Microserf" originates. It is the same with inflexible bank systems, where the teller is forced to follow the rules of the system even though it isn't COMMON SENSE. This is looking at it from the perspective of cybernetics, where computing systems are considered as tools and the goal of all involved in their design is to automate what can be automated by them, including dealings with the systems themselves. The eventual goal is transparency of use. This is important for enabling us to work at higher levels of abstraction and use computers to solve more abstract and fundamental problems instead of being stuck in a rut. Otherwise people are essentially wasting their time with computers. In this sense we need to redefine what we typically (thoughtlessly) consider software.
Software is anything past the analog gate stage considered as "digital". It only exists in the abstract. Of course, in reality nothing is digital: bit errors occur. But because we can treat these in our digital abstraction as stochastic processes, we can write software to help deal with them and reduce the probabilities of error to minute amounts. Of course, the model of the analog gates that we use to talk about them is also abstract. It works according to a set of rules which are understood, but only in the abstract. In the real world, models are inaccurate. The same is true of any model; Einstein himself said of relativity that it will do until the next one comes along.
In today's world it is fortunate that the languages we use to communicate, such as English, are not proprietary, because language is currently the only working thing which enables us to bootstrap our information exchange. In English, no one is held hostage by a specific group of people saying you must use THEIR method of communication, as is currently the case with computers. In the same way, computers are not inherently limited to using proprietary software to exchange information. In many ways a non-proprietary system is about individual freedom of choice: the individual should have control over the hardware they own in the same way that we have control over our own mouths. The simple fact is that freely flowing information exchange in a system co-ordinated by machines cannot occur without standards and contexts to enable that flow and to build on. Proprietary systems inherently prevent this. So we have a choice. We drop the proprietary standards involved in information exchange and allow automated co-ordination to occur, or we forget that we are free individuals as institutions' systems (and the mandatory use of them imposed by law) gradually take a stranglehold on the free flow of information with their arbitrary limitations. People are all too easily intimidated by the threat of someone ostensibly more knowledgeable than themselves "knowing more", and thus allow themselves to be dictated to by that person or group of people when it comes to their rights to freely communicate and exchange information - which in the end affects quality of life in a very real, physical sense.