
Re: Documentation Standards was Re: [ox-en] UserLinux



On 12 Dec 2003 at 13:16, Rich Walker wrote:

> > The more innovative and radical the step forwards, the less chance
> > for agreement. And therefore, volunteer based production is
> > inherently conservative and tends towards conformity. Hence why
> > Linux clones other systems rather than ever doing anything new.

> An open source project has much more chance of undergoing innovation,
> because the innovator can just code up the innovation and distribute
> as a patch to those interested. In the event that the innovation is
> seen to be an advantage, it can be incorporated. Agreement beforehand
> isn't necessary!

Oh come now! This looks good in theory but simply does not happen in
practice! The reality is that all thriving volunteer-based projects
are highly conformist and tend to keep the interesting stuff out.
Look at GCC and the countless useful patches and variants which have
been produced for it, e.g. the XML output. Very few of these get
merged back. Indeed, the only time to my knowledge that GCC merged a
major breakaway was EGCS. Look at Python - there is Stackless Python,
and there are patches adding more functional features - yet again,
these stay on the periphery. I could go on. I'm not saying that such
projects never accept radical innovations, but I am saying that the
volunteer nature of the thing creates a strong tendency against it.

After all, people only code up innovations if they think others will
be interested - it's rare to find a programmer who pours thousands of
man-hours into something with no desire whatsoever that others use
it. And nothing is more disheartening than developing a patch in the
belief that the maintainers will take it, only to find that they
won't.

The only real area where step-change innovation tends to occur is in
forks - however, the smaller the group that forks, the considerably
smaller its chances. EGCS only made it because its four or five core
programmers were very good and worked hard, and they were up against
many more developers on the main GCC effort.

> I was going to list some examples of superior innovation, but after
> thinking about BitTorrent, Freenet, the open community re-development
> and enhancement of the "Gnutella" protocol, scaling an OS from the
> 68000 to the biggest mainframes, the Debian *process*, public CVS
> repositories, public bug tracking, sub-24-hour response to security
> holes, Emacs, and MergeMem, I realised you weren't going to call them
> innovation.

BitTorrent - no, eDonkey had all that years before; Freenet - I
couldn't find much about this, but if it's anything like Gnutella
then that's certainly nothing new; OS scaling - hmm, weren't they
doing that in the 1980s and before, when it was much harder? CVS
isn't even a free software invention, and nor is bug tracking. Emacs
was/is step-change innovative, but it sure ain't intuitive :(
MergeMem - I was very interested to learn about this - we wrote a
module for RISC-OS doing exactly the same thing around 1992.

Apart from Emacs, these are barely examples of incremental
innovation, let alone step-change innovation.

> > Now if you can pay people you can pay a group to do a job whether
> > they think it's a good idea or not. *That's* why capital injection
> > is necessary for step-change innovation - it creates coherence.

> The history of capital paying for innovation is pretty poor.

Amiga Workbench, Acorn RISC-OS, NeXTSTEP, OS/2, WinNT, BeOS and Plan
9, just to name some OSes. Even things like EROS are mostly funded by
academia, and ReiserFS by the US government. If you look at games
development, one of the richest areas of step-change innovation in
software, you'll find that ALL of it is done via capital.

I don't think you have a leg to stand on here, sorry.

> If you observe what actually happens in the history of technology,
> you'll notice that micro-projects are the source of innovation;
> capital only chases an area after a first version has been brought
> to market.

Capital chases profit. Mostly that means pinning your market into a
box and ruthlessly exploiting it, but when competition works
properly, innovation is the only way to beat your competitor. I very
much doubt Intel would have kept pushing ahead so hard if AMD weren't
chasing them. Look at MS after it finished off IBM: on the software
side it has just sat back and tried entering new markets instead. We
haven't seen a new version since Win2k, and we probably won't until
2007.

> > I also suspect that the NT kernel is step-change innovative - the
> > DDK certainly suggests it. However, it's hard to be sure.

> No, those who know VMS say it has some ideas from VMS, but not the
> best ones.

I know VMS, though only as an experienced user and light programmer
for it. It's still pretty good even today: certainly reliable, and it
has only ever had one root exploit. Well written.

The NT kernel, anathema as it is to some for me to say it, is an
improved Unix. It improves on Unix in almost every single way: a
unified kernel namespace, a total lack of hard limits on resources,
Unicode throughout, a 32K-character path limit, full C2 ACL-based
security, a good journaled, caching, versioned file system and,
before MS castrated it for NT v4.0, strict compartmentalisation of
memory spaces so that even a crashing device driver had no effect on
system stability. For 1992, it is a superb operating system kernel.
To my knowledge, it improves on VMS in every way, which makes sense
as the same guys built it.
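
To make that path limit concrete, here is a minimal Win32 C sketch
(the deep path is purely hypothetical): prefixing a path with \\?\
hands it almost verbatim to the kernel's namespace, so the old
260-character MAX_PATH ceiling no longer applies and paths up to
roughly 32K characters are accepted.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical deep path; with the \\?\ prefix the kernel
           accepts paths of up to roughly 32,000 characters. */
        const wchar_t *path =
            L"\\\\?\\C:\\some\\very\\deep\\directory\\tree\\example.txt";

        HANDLE h = CreateFileW(path, GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "CreateFileW failed: %lu\n", GetLastError());
            return 1;
        }
        CloseHandle(h);
        return 0;
    }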

Unfortunately, MS have pretended the C2 security facilities no longer
exist, removed the multiple-streams feature of NTFS, made the default
user the equivalent of root, hacked out the stability protection,
grafted on crappy stuff like GDI, COM, DirectX etc., and made it the
pig's breakfast we all know today. Shame.
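
For anyone who never used it, here is a minimal C sketch of what the
multiple-streams feature looked like from Win32 (the file and stream
names are made up for illustration): opening "file.txt:notes" gives
you a second, named data stream stored inside the same file on an
NTFS volume.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* "file.txt:notes" names an alternate data stream inside
           file.txt; both names are illustrative. */
        HANDLE h = CreateFileW(L"file.txt:notes", GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "CreateFileW failed: %lu\n", GetLastError());
            return 1;
        }

        DWORD written = 0;
        const char msg[] = "kept in a second stream of the same file";
        WriteFile(h, msg, (DWORD)(sizeof msg - 1), &written, NULL);
        CloseHandle(h);
        return 0;
    }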

Cheers,
Niall

_______________________
http://www.oekonux.org/