
inherent limitations of proprietary software


finelemon2
02-09-2004, 01:09 AM
How does everyone feel about being limited by only being able to use Windows Mobile on their Pocket PC hardware or Palm OS on their Palm hardware?

The definition of a virtual machine:
A virtual machine is any agreed-upon standard which gives a consistent interface for using software on different hardware platforms, or software in different contexts, including contexts created by other software. Thus, a CPU instruction set can be a virtual machine. How it is implemented is IRRELEVANT, except for optimization. What is important is the fact that it enables COMPATIBILITY between software. It is also why it is called "virtual" - it can exist in the abstract without necessarily being IMPLEMENTED in hardware or software. Anything digital is as such "virtual" - as soon as you leave the analog world and start talking in terms of digital entities such as bits or bytes, you are talking in terms of virtual machines. The electric circuitry is real; a bit is purely abstract, infinitely copyable. However, the processes that occur in hardware are real - they can do useful work. Programming software and digital hardware design, past the analog gate stage, are the SAME TASK; they are just at differing levels of abstraction.

You might ask: what is it that enables software written for an x86 machine to work on a PowerPC architecture? There are plenty of examples of cross-platform software. How is it that programmers can write such software without having to start from scratch each time? The answer is that they are in fact using a VIRTUAL MACHINE, which is the C PROGRAMMING LANGUAGE. In other words, a programmer will write code in C knowing that if it is written carefully in portable C (there is a subset of C which is portable), according to the standard, other C compilers will be able to compile it for their respective architectures and so enable the software to be used on those platforms. This is an example of coarse-grained, extremely primitive, manually activated MIGRATION.
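
As a rough, made-up illustration of what sits inside that portable subset (my own toy example, not from any particular project): a program that touches only the ISO C standard library, so a conforming compiler on x86, PowerPC or anything else can build it unchanged.

    #include <stdio.h>

    /* Deliberately plain ANSI C: no platform headers, no OS calls,
       just the standard library, so it compiles as-is on any
       conforming implementation. */
    int main(void)
    {
        int i, sum = 0;
        for (i = 1; i <= 10; i++)
            sum += i;
        printf("sum of 1..10 = %d\n", sum);
        return 0;
    }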

Other examples of virtual machines, in a little more abstract sense (but not really), are the UNIX and Win32 APIs, or any API.

There are purely technical reasons for having virtual machines that are high-level enough to be implemented efficiently on any present and future hardware, and that are free software rather than proprietary software (for our purposes, proprietary means unchangeable by anyone except a specific group, e.g. Intel or Microsoft). In the software ecology, a virtual machine's place is to enable portable interchange of programs and data. The ideal situation is UNIVERSAL COMPATIBILITY. That is, any computing context, where it is conceptually meaningful and provides UTILITY, should be capable of exchanging information, including program code and data, with another context. In many cases, this exchange is essential to the operation of the system. In the case where we wish to prevent communication, there is no reason why it cannot be prevented. However, when portability is prevented artificially, by the very fact that we are using a proprietary virtual machine to run the software, it becomes a burden - in other words, when the prevention is imposed by the system's limitations rather than by our own desire to prevent.

Use of a proprietary virtual machine artificially limits the software's ability to carry out such exchanges efficiently. Why? Not because of what that virtual machine is capable of on ONE system, but because of what it is capable of across MULTIPLE systems and how it enables migration - in other words, how it enables ABSTRACTION. The problem is not pettifogging, it is very real. It is faced by every programmer today, by the second, with any attempt to migrate, with untold duplication of work and waste of human effort in trying to solve what is essentially the same problem. It is more a problem of bureaucracy than a technical one; it is in fact simply a failure to agree on a standard. What would happen if each company decided to use its own system of units and no one agreed on the French (metric) one as we do? Since units are related by a simple multiplicative factor, that would actually not be all that bad. But with software, where we are intricately describing PROCESS, the problem of not agreeing on a standard simply means that the system can't do what you want when it comes to migration.

Unless every system uses the one same proprietary VM, compatibility and the ability to migrate are impeded. Why? Because software has to be written to run in some digital context or other in order to operate. Yet if every system were to use the one proprietary VM, such as x86, that would prevent certain optimizations from taking place - such as more intelligent, fine-grained control over caching than what was put into the CPU hardware - when dealing with multiple connected systems migrating data. Otherwise, systems are forced to provide a compatibility layer in order to have code migration. Compatibility layers such as this are themselves virtual machines that do not run as efficiently as a native implementation.

It is all very well to say "Well, this is all just a matter of programming." This is true. But it is impossible that every programmer will write their own operating system for every piece of hardware they come across simply so they can enable efficient migration. These are LARGE SYSTEMS that have to be ENGINEERED by teams of engineers. Conventional interfaces exist to enable different people (and different pieces of software) to manipulate them. Engineers, whether they like it or not, are part of a global workforce which, as things stand, ends up in unnecessary DUPLICATION of work. No one team or group of people can produce a piece of software without someone else having an influence, or without someone else needing to change it later on. Preventing duplication of work means using computers to automate what can be automated, including involving them in the engineering of new computing systems and the maintenance of existing systems.
Extract from an article by François-René Rideau:
Actually, as long as the only operation that leads to the production of code is manual addition - the writing out of the blue by a human mind supposedly inspired directly by the Muses - then it is possible to attribute an "origin", an "author", to every word, to every symbol that constitutes a program. Now, as soon as metaprogramming is allowed, that is, arbitrary operations that act on code and produce code, and as soon as one considers the environment inside which the author (man or machine) of a program lives, then that author is no longer the inventor of the program, but only the last link in a holistic process of transformation, in which it is not possible to attribute an unequivocal origin to any produced element whatsoever.

The problem at the moment is that there are no such conventional interfaces to enable ease of migration. This is because engineers themselves are held hostage, forced to work with or on top of PROPRIETARY interfaces rather than free ones. Whenever you hear about compatibility problems between different softwares or hardwares, or a "can't do" attitude from techos or programmers when it comes to migration, the root cause is likely the use of differing, incompatible proprietary virtual machines. In this sense, a virtual machine can just as well be, for example, a word processor document format as a CPU instruction set like x86. It is all just software which we use to fulfil our purposes. What you want to do is make it as flexible as possible in terms of migration. Why else would you have it? This is what computers are for: enabling information processing and exchange. There is no point in having computers without it.

This is also why computers currently make it easy for companies to have an excuse to USE PEOPLE. Rather than having computers provide assistance, they lock people in to one method or other of doing things, i.e. the one created by the company. This is where the term "Microserf" comes from. It is the same with inflexible bank systems where the teller is forced to follow the rules of the system even though it isn't COMMON SENSE. This is looking at it from the perspective of cybernetics, where computing systems are considered as tools and the goal of all involved in their design is to automate what can be automated by them, including dealings with themselves. The eventual goal is transparency of use. This is important for enabling us to work at higher levels of abstraction and to use computers to solve more abstract and fundamental problems instead of being stuck in a rut. Otherwise people are essentially wasting their time with computers.

In this sense we need to redefine what we typically (thoughtlessly) consider software. Software is anything past the analog gate stage considered as "digital". It only exists in the abstract. Of course, in reality nothing is truly digital; bit errors occur. But because we can treat these in our digital abstraction as stochastic processes, we can write software to help deal with them and reduce the probabilities of error to minute amounts. Of course, the model of the analog gates that we use to talk about them is also abstract. It works according to a set of rules which are understood, but only in the abstract. In the real world, models are inaccurate. The same is true of any model; as Einstein himself said of relativity, it will do until the next one comes along.

In today's world it is fortunate that the languages we use to communicate, such as English, are not proprietary, because they are currently the only working thing which enables us to bootstrap our information exchange. In English, no one is held hostage by a specific group of people saying you must use THEIR method of communication, as it currently is with computers. In the same way, computers are not inherently limited to using proprietary software to exchange information. In many ways a non-proprietary system is about individual freedom of choice: the individual should have control over the hardware they own in the same way that we have control over our own mouths. The simple fact is that freely flowing information exchange in a system co-ordinated by machines cannot occur without standards and contexts to enable that flow and to build on. Proprietary systems inherently prevent this. So we have a choice. We drop proprietary standards involved in information exchange and allow automated co-ordination to occur, or we forget that we are free individuals as institutions' systems (and the mandatory use of them imposed by law) gradually take a stranglehold on the free flow of information with their arbitrary limitations. People are all too easily intimidated by the threat of someone ostensibly more knowledgeable than themselves "knowing more", and thus allow themselves to be dictated to by that person or group of people when it comes to their rights to freely communicate and exchange information - which in the end affects quality of life in a very real, physical sense.

Janak Parekh
02-09-2004, 01:39 AM
How does everyone feel about being limited by only being able to use Windows Mobile on their Pocket PC hardware or Palm OS on their Palm hardware?
That's an issue that has nothing to do with the rest of your post...

A virtual machine is any agreed upon standard which gives a consistent interface to using software on different hardware platforms or software in different contexts including context created by other software.
That's really not the definition of a virtual machine. A virtual machine is an entity that can execute assembly language-level instructions. An interpreter can execute high-level code on a lower-level virtual machine, if you want, but the two different levels have very different problems to solve from a Computer Science theoretic standpoint.

Programming software and digital hardware design, past the analog gate stage, are the SAME TASK, they are just at differing levels of abstraction.
Computer scientists and engineers would probably disagree with you there.

The answer is that they are in fact using a VIRTUAL MACHINE, which is the C PROGRAMMING LANGUAGE. In other words, a programmer will write code in C knowing that if it is written carefully in portable C (there is a subset of C which is portable) according to the standard, other C compilers will be able to compile it for their respective architectures and enable the software to be used on that platform.
No, no, no. C is far from a virtual machine environment. First off, "portable C" is rarely that -- libraries are different on different machines. Even if libraries are the same, the hardware abstraction is not. For example, different devices will produce incompatibilities even if you write extremely portable C code -- at some point, you'll have to realize that different units are capable of executing different things.
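
A made-up sketch of how quickly "portable" C picks up platform conditionals the moment you step outside the ISO standard library (the macro name here is invented for illustration):

    /* Hypothetical example: even a one-second pause needs per-platform
       code once you leave the ISO C library. */
    #ifdef _WIN32
    #include <windows.h>
    #define PAUSE_ONE_SECOND() Sleep(1000)   /* Win32 call, milliseconds */
    #else
    #include <unistd.h>
    #define PAUSE_ONE_SECOND() sleep(1)      /* POSIX call, seconds */
    #endif

    int main(void)
    {
        PAUSE_ONE_SECOND();
        return 0;
    }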

Virtual machine monitors, on the other hand, try to abstract all of that away, and let binary code run portably on different environments. But even then, virtual machine monitors have their own limits, unless they implement a complete abstraction layer -- and that at potentially significant performance penalties when you're running on nonidentical hardware from the reference platform. For example, new 2D and 3D chipsets will have APIs that are fundamentally different from the virtual machine's reference platform. C does nothing to address this.

Other examples of virtual machines in an little bit more abstract sense (but not really), are the UNIX and Win32 APIs, or any API.
No - the UNIX and Win32 APIs don't completely abstract away hardware. Useful UNIX code often has to write to the /dev hierarchy, for example.

There are purely technical reasons for having virtual machines that are high-level enough to be able to be implemented efficiently on any present and future hardware
Not easy to do, and in fact, you often don't want this...

The ideal situation is UNIVERSAL COMPATIBILITY.
... and a virtual machine is a very expensive way to accomplish this, especially for PDAs, and in fact we don't want this as PDAs evolve and improve.

However when portability is prevented because it is artificially imposed by the very fact that we are using a proprietary virtual machine to run the software is when it becomes a burden.
Actually, it's more of an issue of drivers. You might be able to get CE running on a Tungsten T3, assuming someone sits down and writes drivers for the unit. Instead, if you had a reference virtual machine, then the T3's extra-high-res screen might not be addressable.

Because software has be written to run in some digital context or other in order to operate. If every system were to use the one proprietary VM such as x86, then it prevents certain optimizations taking place such as more intelligent fine control over caching than what was put into the CPU hardware, when dealing with multiple connected systems migrating data.
And what ISA do you propose that will efficiently translate into platforms today and in the future? Besides, the x86 ISA is not "closed" -- you can trivially download the documentation for it.

The problem at the moment is that there are no such conventional interfaces to enable ease of migration. This is because engineers themselves are held hostage by being forced to work with or on top of PROPRIETARY interfaces rather than free ones.
Have you looked at the open-source world? If anything, open-source approaches lead to fragmentation of their own, because people can and do have opinions as to what's better. Come back with the ideal open reference platform that everyone agrees on, and I'll sell you a bridge.

Whenever you hear about compatibility problems between different softwares, hardwares or "can't do" attitude from techos or programmers in terms of migration, its root cause is likely due to the use of differing incompatible proprietary virtual machines.
No. Linux and FreeBSD binaries are incompatible, even though they run on the same ISA. It so turns out FreeBSD has a compatibility library for Linux libraries, but this is an extra translation layer, and has limitations.

In today's world it is fortunate that the languages we use to communicate such as the English language are not proprietary, because in fact it is currently the only working thing which enables us to bootstrap our information exchange.
What are you talking about? English is most definitely proprietary. Travel to other parts of the world and you'll notice that immediately.

In English, no one is held hostage by a specific group of people saying you must use THEIR method of communication, as it is with computers currently.
Sure they do. You must use English to do business with the US Government. It so turns out the semantics of English are similar to many other languages, but so's the case for 3GL programming languages.

Anyway, you have a few good ideas in your post, but they're all interspersed and mixed together with a bunch of terms that don't really make sense in the context you define them in... and I know this, as I have an MS (and am working towards a PhD) in Computer Science. :)

--janak

finelemon2
02-09-2004, 02:22 AM
That's an issue that has nothing to do with the rest of your post...

At the moment I have a problem with Pocket Informant 4 whereby the XML export option does not include the notes. Since Pocket Informant is proprietary software, I can't change it to include that. However, there is nothing about the hardware which is going to stop it from doing that. The issue is the same when it comes to the rest of my post and programming in general.

That's really not the definition of a virtual machine. A virtual machine is an entity that can execute assembly language-level instructions. An interpreter can execute high-level code on a lower-level virtual machine, if you want, but the two different levels have very different problems to solve from a Computer Science theoretic standpoint.


No, not at all. The problem is the same, which is to allow exchange of information between different software contexts. You are thinking about it in terms of computer science pre-existing in its own right. Computer science does not exist in its own right; it is not even really a science at all, it is engineering, or possibly art. The techniques which we currently use, such as programming languages, are only just that - techniques of abstraction. The difference between, say, an interpreter executing assembly language instructions and an interpreter executing a high-level language is only a difference in the level of abstraction the people are working with. Any given assembly language or CPU instruction set is by no means inherently part of some great theoretical foundation in computer science; it was designed by engineers to do a job.

Computer scientists and engineers would probably disagree with you there.

Only the ones who can't see the big picture because they get so caught up in the technicalities of pre-existing infrastructure and its utilization - in particular, proprietary infrastructure.



No, no, no. C is far from a virtual machine environment...at some point, you'll have to realize that different units are capable of executing different things

No, anything digital is in fact capable of the same set of operations; the only difference is the speed at which it can do them. Sure, people build up software to do certain things on certain platforms, but only because they are forced to use pre-existing and proprietary infrastructure. The basic concept of abstraction in the digital world does not require hardware; it is a purely abstract concept which nearly anyone can understand.

Libraries are different on different machines
The basic point of programming is to abstract away anything that is involved in the operation of the system itself so that it can be used transparently. The fact that we currently have a situation where "libraries are different on different machines" and software cannot be easily migrated is because of the proprietary nature of software, not because of any inherent reason to do with digital technology in general.

..and that at potentially significant performance penalties when you're running on nonidentical hardware from the reference platform.

That is right, this is what I said isn't it? "Compatibility layers are themselves virtual machines that do not run as efficiently as a native implementation".

new 2D and 3D chipsets will have APIs that are fundamentally different from the virtual machine's reference platform. C does nothing to address this.

The only purpose for building new 2D/3D chipsets is to make certain specific operations related to 3D computer graphics execute faster than a general-purpose CPU, which follows the fetch-decode-execute cycle, could do them.

C does nothing to address it, that is exactly the problem. C is the most widely available entity by which anyone can program and yet it does nothing to address issues of hardware or operational abstraction. What a wreck we have!

Unix and Win32 don't solve the problem either, you're right! Nevertheless they are still some of the most used softwares in existence. This is a problem.

and a virtual machine is a very expensive way to accomplish this, especially for PDAs,
What is the difference between a "machine" at a low level of abstraction (thinking in terms of CPU instructions) and a "machine" at a high level of abstraction, such as the system a bank teller uses? The only difference is in the level of abstraction; there is no "fundamental" difference. This is why any language such as C is a virtual machine.

Actually, it's more of an issue of drivers.

It's an issue of the much more general topic of migration. You want software to be as flexible as possible. Any abstraction that someone makes should be usable in any hardware or software context that it is reasonable to use it in.

Whenever you hear about compatibility problems between different softwares, hardwares or "can't do" attitude from techos or programmers in terms of migration, its root cause is likely due to the use of differing incompatible proprietary virtual machines.

Well, someone has the option, if they like, of writing translation code to translate between a Linux binary and a FreeBSD binary. Or, more to the point, you could write code which itself writes code to do the translation (although why would you bother, if you have a system that is capable of such a feat). This is because the Linux and FreeBSD source code is fully available. You can't do that with a proprietary, binary-only, closed-box system like Windows.

What are you talking about? English is most definitely proprietary. Travel to other parts of the world and you'll notice that immediately.

How does the existence of languages other than English make English, or any of the others, proprietary? For example, we take many words from other languages. Many take words from English. These are in no sense proprietary.

Anyway, you have a few good ideas in your post, but they're all interspersed and mixed together with a bunch of terms that don't really make sense in the context you define them in... and I know this, as I have an MS (and am working towards a PhD) in Computer Science.

Then you aren't thinking deeply enough about the problems involved and only looking at a small part of the picture. I suggest you take a look at
cliki.tunes.org.


Janak Parekh
02-09-2004, 02:50 AM
No not at all. The problem is the same, which is to allow exchange of information between different software contexts.
Of course, and this is what I teach my students.

The techniques which we currently use such as programming languages are only just that - techniques of abstraction.
And yet, abstraction is so fundamental to the ability of computers today. While I don't pretend to imply that the current layers of abstraction we have now are "optimal", you're not going to visualize every problem in the context of a Turing machine -- it's far too complicated to do so. In fact, problem-solving can be done via abstraction on both open and closed layers.

Only the ones who can't see the big picture because they get so caught up in the technicalities of pre-existing infrastructure and its utization, in particular, proprietary infrastructure.
Or those who have to work with it every day. ;) It's important to understand the theoretical underpinnings, but the practical implementation details are not unimportant either.

No, anything digital is in fact capable of the same set of operations. The only difference is the speed at which it can do it.
... which is utterly critical to keep in mind on constrained devices, like PDAs.

The basic point of programming is to abstract away anything that is involved in the operation of the system itself so that it can be used transparently.
Within limits. Different computing devices have different I/O interfaces, for example. This bridge between analog and digital is a fundamental difference between devices, and prevents totally abstract platforms. While technologies like software radios help, at a certain point there are discrete pieces of hardware, like a display, speaker, etc.

The fact that we currently have a situation where "different libraries are different" and software cannot be easily migrated is because of the proprietary nature of software, not because of any inherent reason to do with digital technology in general.
How about the fact that HCI changes, then?

The only purpose for building new 2D/3D chipsets is to make certain specific operations related to 3d computer grahpics available to be executed faster compared to what a general purpose CPU which follows the fetch-decode-execute cycle could do them.
Sure, but that abstraction is what makes it possible.

C does nothing to address it, that is exactly the problem. C is the most widely available entity by which anyone can program and yet it does nothing to address issues of hardware or operational abstraction. What a wreck we have!
And what do you propose as a solution? "Openness" is only one piece in the puzzle. Witness the fact that in the open-source community, we have several different display systems, and different windowing systems, all with their own level of abstraction -- and incompatibility. And, in fact, proprietary platforms have an advantage here.

For example: you can't rich-text cut-and-paste between KDE and GNOME applications, unless a more sophisticated model than the X clipboard is developed. You can hand-craft a bridge of your own for limited approaches, but to get true interoperability that makes sense for the end-user you'd want the two communities to cooperate. How, exactly, are you going to do that? Whereas a company like MS dictates the platform, and everyone conforms to it. In that one example, you'll find the Windows clipboard to be a far superior information exchange platform than X.

Its an issue of the much more general topic of migration. You want software to be as flexible as possible.
However, as a software engineer, I see "as flexible as possible" as infinitely complicated. Even with the OO models that we have today, there's a lot of complication. With open-source, you can in theory do whatever you want, but in practice it's a different challenge entirely, and is totally dependent on the system's architecture.

To truly get such interoperability, you want a completely stable platform. And, again, I assert that doesn't (and won't!) exist in today's world.

Well, someone has the option, if they like, of writing translation code to translate between a linux binary and a freebsd binary.
Ah, but what about all the applications that use Linux syscalls... especially when new ones are coming out? You'd have to constantly maintain this.

How does the existence of other languages than English make English or any of other proprietary? For example, we take many words from other languages. Many take words from English. These are in no sense proprietary.
Well, I have a feeling that proprietary is a complicated (and overloaded) definition. If you look at C# and Java, you'll find that language constructs are passing between the two all the time -- even though the underlying platforms themselves are closed.

Then you aren't thinking deeply enough about the problems involved and only looking at a small part of the picture. I suggest you take a look at cliki.tunes.org.
Neat site. I think I grok you now -- you're trying to overthrow the classical notion of "computing". As a "systems person", this strikes me as highly theoretical, and I'll believe it works when I see it realized. I think, partially we differ on the practical implications of true openness. I'm not against the idea -- as a Computer Scientist, open platforms appeal to me very much -- but the accessibility to end-users, the people who you happen to be targeting on this board, is significantly in question. In theory, everything is a Turing machine. In practice, as humans, we are not going to be able to visualize any modern business practice at that level... unless we're engineers and programmers, and even brilliant ones at that.

And we haven't gotten to the business models of open platforms, which is perhaps the important part of the debate ("how are you going to convince people to put in labor for an open platform?"). The TUNES platform demonstrates this itself ("The main problem currently is to find a few very active members, who would work at least half-time to have TUNES running.") I don't have the time nor energy to discuss that, so I'm going to leave it at this.

--janak

CME2C
02-09-2004, 02:54 AM
My philosophy is: if you don't like it, don't buy it.

Janak Parekh
02-09-2004, 03:13 AM
One postscript: a related discussion is going on Slashdot right now:

http://slashdot.org/article.pl?sid=04/02/08/1822205

You'll find the group diverges into the pragmatists ("I want a good address-book!") and the open-source fans ("I can write a filter for the addressbook! Yay!"). A lot of interesting points on both sides (PalmOS/Pocket PCs much more polished than the Linux counterparts, but you can write neat hacks on the latter.)

Anyway, I still maintain my points with respect to this. ;)

--janak

Lex
02-09-2004, 03:30 AM
How do I feel about the original question?

Dang that's a long post.

finelemon2
02-09-2004, 04:06 AM
And yet, abstraction is so fundamental to the ability of computers today. While I don't pretend to imply that the current layers of abstraction we have now are "optimal", you're not going to visualize every problem in the context of a Turing machine -- it's far too complicated to do so. In fact, problem-solving can be done via abstraction on both open and closed layers.


What about the kind of abstraction that involves writing code which itself manipulates code (metaprogramming)? This is one of the most obvious forms of abstraction. A closed layer inherently prevents it, and hence inherently prevents the maintainability of that layer across the dimension of time. I get the feeling you are confusing "black box abstraction" as an engineering and programming tool to visualise how a system works with the issue of allowing systems to be changed now and in the future. And the whole point of abstraction is so you don't have to visualise *anything* in the context of a Turing machine. It's so you can work within the problem domain of the problem you are trying to solve, instead of constantly dealing with the technicalities of the system you are using to solve it. That is what abstraction is, is it not?
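
To sketch what I mean in plain C (a toy, invented example; real metaprogramming systems go far beyond this): a program whose output is another program.

    /* Toy code generator: a C program that writes another C program.
       The generated file is compiled like any hand-written source,
       so no single human "author" wrote every line of the result. */
    #include <stdio.h>

    int main(void)
    {
        FILE *out = fopen("generated.c", "w");
        int i;
        if (out == NULL)
            return 1;
        fprintf(out, "#include <stdio.h>\n");
        fprintf(out, "int main(void)\n{\n");
        for (i = 0; i < 3; i++)
            fprintf(out, "    printf(\"step %d\\n\");\n", i);
        fprintf(out, "    return 0;\n}\n");
        fclose(out);
        return 0;
    }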

Or those who have to work with it everyday. It's important to understand the theoretical underpinnings, but the practical implementation details are not unimportant either.

The whole point of programming is to automate what can be automated, so that you don't have to do that intellectual work anymore; the computer will do it for you. Surely you can see that. That is what computers are for, are they not? The fact that certain things simply cannot be automated is due to the proprietary, closed-box nature of certain closed layers. If those layers were open you could write adaptors and automate translation to enable the free flow of information. This is not philosophical; the reasons to have only open layers are purely technical. They are to enable computer hardware to do what it can possibly do. It is proprietary layers that are stopping it.

... which is utterly critical to keep in mind on constrained devices, like PDAs
If you need to generate fast static code to run on a constrained device, then the process should be automated. In other words, when you have a system capable of reflection, this is a strategy for making the best use of the hardware available in a particular context. The humans, ultimately, shouldn't have to worry about the migration aspects of converting a subset of a dynamic system into static pieces of code where needed for speed. It should be handled automatically by the system.


Within limits. Different computing devices have different I/O interfaces, for example. This bridge between analog and digital is a fundamental difference between devices, and prevents totally abstract platforms. While technologies like software radios help, at a certain point there are discrete pieces of hardware, like a display, speaker, etc.

The use of these discrete pieces of hardware in particular contexts should be automated so that humans can deal with them at a level that ultimately anybody can understand. This is what something like Windows attempts. Unfortunately it is still a system with static elements and little support for reflection and migration. This is because it is proprietary. They have huge programming teams at MS attempting to make software which everybody can use, but the problem comes when they try to make software which is all that people will ever require, which will never happen. Instead, automation so that people can work with computers at higher, more human-like levels of abstraction is the way to go.

How about the fact that HCI changes, then?
The term Human Computer Interaction is essentially a buzzword invented to give an excuse for the static nature of computer software systems as they are. The ultimate goal is total transparency, so that someone can think in terms of the concepts they are dealing with, like "how do I find my way home", rather than "how do I get my Bluetooth GPRS receiver to be compatible with my proprietary Palm OS software with no drivers for it".

And what do you propose as a solution? "Openness" is only one piece in the puzzle. Witness the fact that in the open-source community, we have several different display systems, and different windowing systems, all with their own level of abstraction -- and incompatibility. And, in fact, proprietary platforms have an advantage here.

Openness is only one piece, but it is an absolutely necessary piece. Any static, closed portion of the system is unavailable for further changes, which essentially makes it useless in the long run. The different open-source community systems and their various fragmented abstractions could all potentially be made "compatible" using a reasoning system that analyses their source code, understands the environment in which they work at a yet higher level of abstraction, and produces compatibility glue both for the abstractions and for the code itself. Currently, many programmers are little more than compatibility-layer writers. But the real work of software engineering is producing meaningful abstractions to model the real world.


With open-source, you can in theory do whatever you want, but in practice it's a different challenge entirely, and is totally dependent on the system's architecture.

No, the idea is to abstract away from any one particular piece of hardware's system architecture. Again, this can be done by generating a strategy to cope with different hardware architectures. An example of a manual attempt at this:
http://sources.redhat.com/cgen/
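
As a very rough, invented sketch of the idea (not how cgen itself is implemented): describe each operation once, in one table, and emit the per-architecture form mechanically from that single description, so adding a target means adding a column rather than rewriting everything.

    /* Invented illustration of table-driven retargeting: one abstract
       description per operation, with per-target templates. */
    #include <stdio.h>

    struct op_desc {
        const char *name;   /* abstract operation      */
        const char *x86;    /* template for one target */
        const char *ppc;    /* template for another    */
    };

    static const struct op_desc ops[] = {
        { "add", "addl %ebx, %eax", "add r3, r3, r4"  },
        { "sub", "subl %ebx, %eax", "subf r3, r4, r3" },
    };

    int main(void)
    {
        unsigned i;
        for (i = 0; i < sizeof ops / sizeof ops[0]; i++)
            printf("%-4s  x86: %-18s  ppc: %s\n",
                   ops[i].name, ops[i].x86, ops[i].ppc);
        return 0;
    }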


To truly get such interoperability, you want a completely stable platform. And, again, I assert that doesn't (and won't!) exist in today's world.

The whole point of computer science is that you can know as precisely as you'd like about any particular component, or as little as you like, using black box abstraction to suppress detail. Using metaprogramming in a system consisting of only open layers is a natural part of this, and is the way to eliminate bugs. The main cause of bugs is when humans are forced to think of too many things at once instead of using black box abstraction. Proprietary systems impose this.

Janak Parekh
02-09-2004, 04:24 AM
What about the kind of abstraction that involves writing code which itself manipulates code, (metaprogramming).
Ah, that's what you're driving at! Yes, that's a fantastic idea if made practical, and while it has been done so in limited fashion (e.g., functional programming, machine learning), I've yet to see where it works practically on a broad scale.

And the whole point of abstraction is so you don't have to visualise *anything* in the context of a Turing machine. Its so you can work within the problem domain of the problem you are trying to solve instead of constantly dealing with technicalities of the system you are using to solve it. That is what abstraction is, is it not??
Sure - but layers of abstraction impose a system hierarchy which may, at times, make the interoperability you speak of difficult. I see that TUNES has the beginnings of building an abstraction hierarchy that attempts to get around this, but this is a non-trivial concept and it has yet to be proven.

The whole point of computer science is that you can know as precisely as you'd like about any particular component, or as little as you like using black box abstraction to supress detail.
Yes and no -- here's where you and I will have to disagree. While you can abstract away most concepts, there are certain fundamental properties of information interchange that, in my opinion, can't be... and that hierarchy is imposed upon the abstraction. Time will tell. Anyway, while open platforms may be fundamental to your vision, it is but one piece. Get the other pieces working, and then you might have a more compelling argument to make to the rest of the world. ;)

--janak

finelemon2
02-09-2004, 04:32 AM
http://introspector.sourceforge.net/2003/08/ArrowPhilosophy.txt

The basic unit of abstraction has as little semantics as possible, in order to allow for flexibility.