The Road-Crash Ahead

Tib Galles


Some rambling about the past, present and future of computing, OSes and slightly anti-M1cro$oft stuff.

(Past) The 'good' old days?

Back in the old days of computers, operating-systems and God-like coders, some of the most fundamental algorithms, basic techniques and now widely known tricks (both for speed and for size) were created. The names of Huffman and Bresenham are widely respected for their visionary approaches and algorithmic gifts to coders all over the world. At first it may seem that they were total geniuses, egg-heads of the highest order, but in fact their creations were not that different from all the other work of that time. I'm sure that if someone went back and dug deeply enough, they would uncover the 'seeds' of the Huffman and Bresenham algorithms in other people's work and discussions. And of course, since their publications a great deal of further work has been done, not only by the authors themselves but by thousands (perhaps millions) of others.
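As a reminder of just how compact these classic algorithms are, here is a minimal sketch of Bresenham's line algorithm in Python. Its famous trick is that it rasterises a line using only integer additions and comparisons, with no floating point at all:

```python
def bresenham(x0, y0, x1, y1):
    """Integer-only line rasterisation (Bresenham's algorithm).

    Returns the list of pixels from (x0, y0) to (x1, y1). The error
    term tracks the distance to the ideal line using only additions
    and subtractions -- no floats, no divides.
    """
    points = []
    dx = abs(x1 - x0)
    dy = -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:        # step horizontally
            err += dy
            x0 += sx
        if e2 <= dx:        # step vertically
            err += dx
            y0 += sy
    return points
```

On the CPUs of the day, avoiding floating point was the whole point: the inner loop is nothing but adds, shifts and compares.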

So what?

Well, the point I'm trying to make is that NO coder works in a total vacuum. He/she takes the experience and knowledge of others and adds their own little 'twist' to a problem. Trying to trace the origins of software is very difficult, perhaps impossible. I guess you could call it 'binary-evolution'. The development of software is similar to that of hardware, but sadly it happens at a far, far slower rate. The main problem with dating software milestones is that most of the software itself is hidden away, far from the user's view.

The UI (User Interface)

Well, apart from the user-interface.

This is (perhaps) the most important part of any program. Some may argue that the actual heart of the program (the 3d-engine, the memory-management, the image processing etc..) is more important than the fancy UI which the user sees. But this is a very coder-orientated way of looking at a program. In the past the only people using a coder's programs would be other coders, mathematicians or engineers: people with a fondness for numbers and a good understanding of algorithms and formulaic procedures (the typical egg-head stereotype). History has proved that an easy-to-use interface will convince most users to choose an otherwise inferior program. The user interface is the welcome-mat. If it is covered in trash with bits hanging off, or comes with a user guide scribbled on the back of an old Kellogg's Cornflakes packet, then most people will look elsewhere.

Of course GUIs (Graphical User Interfaces) could only arrive with the advent of reasonable amounts of memory and processing power. Another reason why ugly, text-based UIs continued to be used for so long (yes, even now) was the lack of knowledge about how to build a large operating-system and a user-friendly UI. So it took a number of years after the arrival of sufficiently powerful hardware for programmers and designers to develop the tools and knowledge needed to improve the UI side of their programs and operating-systems.

(Past) User-friendly?

In the past every coder had their own favourite control system, their own quirks and design mentality. Every time users bought a new program they had to learn a vast array of new keyboard controls and a new mental model of how the program worked behind the scenes. You need look no further than the world of text-editors and word-processors to see what I mean. Every program had different rules and, of course, totally different, incompatible keyboard layouts (not to mention file formats).

Even with expensive operating-systems the magic hand of user-friendliness had yet to touch the keyboard. Some were full of bugs, badly designed or just plain slow. Many were hacked to bits just to stop them crashing every time a certain event occurred. And when companies did fix some of these problems in later OSes, they often made them incompatible with older versions (the old Amiga OS/Kickstart is a good example of this, and of course let's not forget the M$ product line...#;o)

(Present) Standard or Brainwashing?

Some things are starting to slowly (oh, so slowly) improve. One GUI today looks very similar to another (there is obviously some cross-fertilisation of ideas happening here). The idea of W.I.M.P. has been around for many years (Windows, Icons, Mouse, Pointer - if I remember correctly, hey, it's been many years since I've heard that phrase). The idea of a standard interface and program design mentality is a very attractive one to both users and companies. There is no need to struggle through a huge tome of new controls before you can use your new program. Common keystrokes are one way in which users can easily switch between programs while still using the same keys to perform Cut, Copy, Paste, Open and Close operations.
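The mechanism behind this is simple: keep the standard shortcuts in one table instead of hard-coding them throughout the program. A toy sketch in Python (all names here are invented for illustration, not quoted from any real toolkit):

```python
# One central table mapping "standard" keystrokes to action names,
# so every program agrees on Cut/Copy/Paste/Open/Close.
STANDARD_SHORTCUTS = {
    ("CTRL", "X"): "cut",
    ("CTRL", "C"): "copy",
    ("CTRL", "V"): "paste",
    ("CTRL", "O"): "open",
    ("CTRL", "W"): "close",
}

def dispatch(modifier, key, actions):
    """Look up a keystroke and call the matching action, if any.

    'actions' maps action names to callables supplied by the program;
    unknown keystrokes are simply ignored.
    """
    name = STANDARD_SHORTCUTS.get((modifier, key.upper()))
    if name and name in actions:
        return actions[name]()
    return None
```

The pay-off is that a user who has learned the table once has learned it for every program that shares it.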

The problem.

But who decides these common keystrokes?

Yeah, the company who wrote your Operating-system(s).

(Present) The best input device?

The keyboard is still THE best input device (IMHO). The mouse is fine for moving graphical objects around, selecting from menus and other 'exploration' tasks (ones which let the user view all the available operations before he/she selects one, and ones like graphics work where things must be manipulated according to the user's preferences). But for any text-based task like coding or word-processing the mouse can't really compete with the keyboard.

M1cro$oft has tried to push many people towards the split-keyboard, but personally I hate that horrible thing!! I mean, why break the best input device into two parts? You wouldn't do this with a car steering wheel, would you? (Sorry, bit of a rant there..)

The problem with a mouse is that you can only influence ONE item at a time (i.e. drag a window, move an icon, double-click on an item etc...). With the keyboard you have quick access to around 330...1000 different functions by means of the 105+ keys and the [ALT] [CTRL] [SHIFT] modifiers (not to mention the special windoze-95 ones). The biggest flaw of the mouse is that you can't type letters or numbers very easily (or quickly). You would need some kind of graphical depiction of a keyboard so you could use the mouse pointer to 'type' with (very slow).
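The back-of-the-envelope arithmetic behind that "330...1000" range can be made explicit. Assuming 105 keys and the three modifiers named above:

```python
keys = 105        # keys on a typical extended keyboard
modifiers = 3     # SHIFT, CTRL, ALT

# Allowing any combination of the three modifiers (including none)
# gives 2^3 = 8 states per key:
all_chords = keys * 2 ** modifiers            # 105 * 8 = 840

# Allowing at most one modifier at a time gives the low end:
single_modifier_chords = keys * (1 + modifiers)  # 105 * 4 = 420
```

So the realistic range is roughly 420 to 840 distinct key chords, in the same ballpark as the figure quoted above, and every one of them is reachable without moving a hand to the mouse.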

(Present) Combinations & Crashes

Perhaps the biggest challenge for any new or existing Operating-system designer and coder is the task of managing vast amounts of information, both in terms of user data and the various house-keeping tasks (supporting the hardware, GUI events, network events, user inputs etc.. etc...)

Operating-systems seem to choose one of two different solutions to this 'combination' problem. Either hide everything away from the user (as in the case of Windoze) or present the user with an overwhelming collection of different options, settings and configuration files (like Linux, for example).

Both solutions have their own merits. The first prevents newbies from wrecking their systems too much (kinda like placing a lid on the blender so they don't lose any fingers). The second allows experienced users to tinker away with all the settings and possibly correct ANY problem which may occur.

The bad points: the first stops the user from gaining access to important settings. He/she is forced to rely on the skill and design vision of the OS creator to provide all the tools needed to correct a future problem (e.g. if some hardware auto-detect fails). The second solution allows a newbie to trash their entire OS without even knowing it. I doubt that any user forced to repair an entire OS by themselves will be thankful for the openness of their OS.
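One middle road between the two philosophies is to layer the settings: the safe ones are always writable, while the dangerous ones sit behind an explicit 'expert' switch. A toy sketch, with all the setting names invented for illustration:

```python
class Settings:
    """Toy model of layered OS settings: safe keys are always
    writable, dangerous ones only in explicit expert mode."""

    SAFE = {"wallpaper", "volume", "language"}
    DANGEROUS = {"bootloader", "driver_override"}

    def __init__(self, expert=False):
        self.expert = expert
        self.values = {}

    def set(self, key, value):
        # The blender keeps its lid on unless the user takes it off.
        if key in self.DANGEROUS and not self.expert:
            raise PermissionError(f"'{key}' requires expert mode")
        self.values[key] = value
```

The newbie never sees the bootloader; the tinkerer flips one switch and gets the whole machine.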

(Future) People are stoopid.

Yeah, users will do completely stupid things. Sometimes they will not even know what they have done. People need to interact with their OS, to get FAST feedback on their inputs. If an object cannot be dragged into a certain area then stop them visually (and possibly highlight the problem with a simple message. By message I don't mean an alert box which must be closed EVERY time a problem occurs, but a simple message on an info bar somewhere on the screen).
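The difference between the two feedback styles can be sketched without any real GUI toolkit. Here is a minimal, toolkit-free model (all names hypothetical) of rejecting an invalid drag-and-drop with a passive info-bar message rather than a modal alert:

```python
class InfoBar:
    """Non-blocking feedback: the message is shown on a status bar
    and the user simply keeps working -- nothing to dismiss."""

    def __init__(self):
        self.message = ""

    def show(self, text):
        self.message = text   # a real UI would redraw the bar here

def try_drop(target_accepts, item, infobar):
    """Reject an invalid drop with a passive message instead of
    a modal alert box the user would have to close every time."""
    if item not in target_accepts:
        infobar.show(f"Cannot drop '{item}' here")
        return False
    infobar.show("")          # clear any stale message on success
    return True
```

The user is stopped, told why, and never interrupted: the failed drop costs them a glance at the info bar, not a click on an OK button.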

Making something fool-proof is very, very difficult, but the difficulty can be minimized by not taking any programming shortcuts (i.e. don't assume anything!!!). The screen and the speaker are the only ways in which a program can give feedback to the user. Currently we don't have the ability to give other sensory cues (we are beginning to get force-feedback devices and VR-style environments, but these are still a decade or more away). Probably the best thing that any OS or program can do is to talk/interact with the user.

(Future) Operating-System.

The current batch of OSes will look very primitive in the future because they are SO obvious to the user. OSes like BeOS, Linux, Unix, Windoze and MacOS are very poor attempts at creating a real-time virtual environment on the monitor screen. Many people have said that a monitor is still pretty useless for representing data. At most we have a 21 inch screen at 1600x1024 (or some other eye-killing resolution), yet the visuals are still very poor compared to printed items. Maybe in the future we will have a laser-printer-quality display connected to/placed in front of our eyeballs. Something to give a huge virtual field of vision while having the resolution of printed, or perhaps even real-world, objects.

Maybe we will soon be using holographic glasses which project virtual items into the real world. For example, a webpage would appear as a hologram on your real desk, and your computer room wall would become an entire web site with dozens of individual pages projected onto it like stick-up posters of your favourite band/individuals/nude pictures (heheh). So rather than trying to cram an entire digital desktop onto a small 14 to 21 inch screen, the entire room becomes the display.

I would guess that THE Operating-system will cease to exist; instead we will interact with multiple operating-systems: one for the web browser (provided by your video-telephone/communications company), one for the VR display technology and one for the climate control of your computer room. All these systems would need to be linked together with some pretty smart technology too. I remember a quote from (I think it was) IBM in an old 1950s/1960s film about computers. They said that the world would only ever need about 5 or 6 computers..... Hmmm.. I would imagine that 5 or 6 computers per PERSON is likely to be more accurate.

Final Thoughts.

Oh well, most/all of the above was predicted many years ago. It's not a question of 'IF' these things will happen, but 'WHO' we want to design and create this new technology. M1cro$oft is already investing heavily in lots of communications technology, including satellites called 'Skynet'. (Hmmmm... anyone got Arnie's phone number?? There's a certain 'Bill' I want terminated...)

Can you imagine living in your M$ home, watching your M$ controlled VCR, listening to your M$ music system and then leaving your home in the hands of an M$ controlled security-system while you travel to work in your M$ controlled transport pod???

ARRGGGGGGGHHHHHHH... 1984: "Big Brother Bill is watching you"!!!!

Thank you for reading "The Road-Crash Ahead" by Tib Galles.