From cave paintings to the web (part of lectures 1994-99)

2010 postscript:
This text was written between 1994 and 1999 as part of a series of lectures from my multimedia courses; mobile phones, Google, Facebook, MySpace, YouTube and all the tools that are now part of our daily life did not yet exist, except in the imagination of a few. I think it is still very useful to trace back where it all came from, to get a better picture of where we may be going. Starting from cave painting and getting to ask whether computers can fall in love…

Being able to calculate has always been associated with the acquisition and maintenance of power. There is plenty of evidence to support this idea; think of the Egyptian priests who, thanks to their knowledge of mathematics and astronomy, could predict solar and lunar eclipses and were thus able to scare and blackmail the people by threatening them with switching off the sun. However anecdotal this may sound, it is a clear example of how a certain kind of knowledge can be so dramatically important from the social, economic and political points of view. Microsoft’s stranglehold on today’s economy and way of learning is the best (and worst) example of how far this concept has gone.

In this respect, the fact that most people in the western world now have access to networked computers has to be considered a sort of historical accident. That is why it is so important that everybody be allowed to learn to use technology independently and be given the ability to understand its underlying principles; this reduces the risk of our being told how the tools are to be used and of being given access to only a limited amount of information.
From the beginning of the modern world, a few thousand years ago, commerce has been the engine propelling discovery, growth and research. Inevitably this created a world of rich and poor, of rulers and ruled. Efficient commerce requires the ability to calculate. The ten fingers nature endowed us with soon fell short of what complex calculation required (ever asked yourself why the most common calculation systems in the world are based on a decimal structure?), so humans started to develop calculating machines. Calculation has become increasingly complicated, and it has to be done faster and faster. By now the world economy is a very complex digital game, played online in real time, conditioning everyone’s life and becoming increasingly (and dangerously) fragile.

We have numerous examples of calculating machines: the Babylonian abacus (about 5,000 years old, based on principles applied throughout several ages and cultures); the Chinese calculator with moving beads; the tablets used by the Phoenicians and Sumerians… The abacus used by the Romans is a typical example of a tool for symbolic calculation: a tablet with grooves, each groove having an assigned value (M=1000, D=500, C=100, L=50, X=10, V=5 and I=1). By inserting pebbles (called calculi, from which the word “calculate” derives) into the grooves, one could perform simple calculations. An odd and still not fully explained example is the system of knotted strings used by the pre-Columbian civilisations of the Andes.
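To make the principle concrete, here is a minimal sketch (mine, in Python – not part of the original lecture) of how pebbles placed in valued grooves add up to a total:

```python
# Each groove of the Roman abacus has a fixed value; the total is simply
# the sum of (groove value x number of pebbles placed in it).
GROOVE_VALUES = {"M": 1000, "D": 500, "C": 100, "L": 50, "X": 10, "V": 5, "I": 1}

def abacus_total(pebbles):
    """pebbles maps a groove name to the number of calculi placed in it."""
    return sum(GROOVE_VALUES[groove] * count for groove, count in pebbles.items())

# 2 pebbles in M, 1 in C, 3 in X, 4 in I  ->  2134
print(abacus_total({"M": 2, "C": 1, "X": 3, "I": 4}))
```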

For a long time, something all these systems lacked was a symbol to represent “zero” – a difficult concept to grasp. It appeared rather late in history; some believe it came from India, some from Babylon, but effectively it was introduced into Arab calculation systems around 800 AD by the mathematician Al-Khwarizmi, who gave his name to the term “algorithm” – the mathematical term for a step-by-step method or formula for solving a problem.
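Just to make the term concrete (the example is mine, not part of the historical account): an algorithm is nothing more than a fixed recipe of steps that is guaranteed to reach the answer. Euclid’s classic method for the greatest common divisor, sketched in Python, is a good minimal illustration:

```python
def gcd(a, b):
    """Euclid's algorithm: repeat a fixed rule until the problem is solved."""
    while b != 0:
        a, b = b, a % b   # replace (a, b) with (b, remainder of a divided by b)
    return a

print(gcd(1071, 462))  # -> 21
```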

… mechanical era
The obstacle to the evolution of all these systems was that, until the 20th century, we could only use mechanical systems. Let us consider Leonardo and his machines, which represent the human effort to create some form of “extension” of the human body and its abilities. With a relatively primitive set of tools and techniques at his disposal, Leonardo managed to design an impressive range of machines for the most disparate applications. More than anybody else, he exploited existing resources and knowledge, but he did not have electricity, fuel-propelled engines and, least of all, electronics and digital technologies.

Leonardo’s work is a useful reference in understanding another important element which has remained a constant in history: his inventions were financed by the rulers of his era and, most of the time, were intended as destructive devices – as war machines. Leonardo drew on the resources of this kind of sponsored research to widen his studies and experiments. This reality remains and, without research geared to the ends of warfare, whether we like it or not, we would not have personal computers, simulators, robots, virtual reality, a global network, video games and so on; perhaps not even the sophisticated domestic appliances we are used to in the western world.
Going back to the need for calculating machines and the limitations imposed by mechanics, it is not until the 17th century (in Europe) that the development of both new concepts and new techniques began. Logarithms and Napier’s rods (calculating rods carved in bone, hence nicknamed “Napier’s bones”); Oughtred’s slide rule; Pascal’s “automatic calculating machine” of 1642 (the Pascaline), as well as Schickard’s: these are the first machines that could be called “computers”.
Leibniz invented the first machine that could also multiply and divide (1671). It is worth noting that even at the time that these machines were invented, many people expressed their worry about humans being replaced by machines.
Also worthy of note is the fact that in those days it was mainly philosophers who studied mathematics and physics.

From the end of the 17th century many scholars, mainly German, French and English, put their efforts into making automatic calculating machines, from Morland to Grant to Burroughs and many more. The first “modern” computer appeared only in the 19th century, in Victorian England (of course, with all the colonies to exploit there were a lot of calculations to be made). Babbage corrected the errors common to the previous machines; he invented new mechanical devices as well as mathematical concepts, integrating all the research that had been done by other mathematicians. On these bases Babbage designed the Difference Engine (funded by the government) and the Analytical Engine (1833). The latter employed punched cards, adapted from the textile machines, which were by then quite sophisticated, thus introducing the first kind of “programming”. The French inventor Jacquard had revolutionised the textile industry when, in 1804, he introduced perforated cards to program industrial weaving looms. His name is still used to describe a particular technique of intricately woven fabric.

The Analytical Engine also had an internal memory, something that the computers of the 20th century lacked for a long time. This memory was to consist of columns of rotating counter wheels, designed to store a thousand numbers of up to 50 digits each.

The machines born of Babbage’s genius cannot be called “practical”. The Analytical Engine (of which only an incomplete prototype exists) is a colossal contraption of brass and wooden cogs. But it works: it successfully manages quite complex mathematical operations. However, it is still a mechanical, steam-driven device, and hence has insurmountable limitations.
There are several other examples of calculating machines which we could call mono-functional: there is one by Kelvin designed to calculate the cycles of the tides; there is the one by Hollerith (whose tabulating-machine company, founded at the end of the 19th century, later became IBM) designed to process census data; there are various perpetual calendars, some of them sophisticated enough to tell us the day of the week or the lunar phase of any given date.

For anyone with any degree of familiarity with mathematical principles it will not be difficult to imagine how these machines operate. The electromechanical machine by Hollerith looks like an upright piano decorated with a series of clocks. It works with a series of cards whose perforations correspond to various kinds of data that the American government needed for the nation-wide census they carried out every ten years. In 1890 the use of this machine enabled the recording of data from the forms of 62 million citizens in two and a half years, as against the seven years required ten years previously in order to classify 50 million citizens.

Apart from their primary function and their historical value, these machines are now just a curiosity. Today we use dozens of mono-functional calculators in our daily life, from watches to washing machines, from telephones to televisions, from the alarm clock to the microwave oven – all of these contain microprocessors designed for a specific kind of calculation needed by the device to work.

Going back into history, it took until the 20th century, when the industrial revolution had developed other means and technologies – particularly electricity, the telephone and photography – before someone would again set out to create a new kind of calculator, one capable of more than just counting faster than a human being.
The Memex, conceived by Vannevar Bush, is the first example of a machine devised to archive, catalogue and retrieve data – the first hint of database computing. By the 1930s the USA was what England had once been, the nearest thing to a world empire, and it needed to collect an enormous amount of data of all kinds – commercial, scientific, demographic and so on. This wealth of information was accessible only with great difficulty because it was fragmented over a variety of physical locations spread out over long distances. Bush had a brilliant thought: wouldn’t it be wonderful if, using existing technology, we could miniaturise this mass of data, collect it in a portable unit and archive it in such a way as to allow systematic search and retrieval, as well as duplication and transmission? He thus set out to combine a typewriter, microfilm, a projector and electrical circuitry. This resulted in a design for a machine that could archive and catalogue data stored on microfilm, find information via keyword search, project it as slides and finally print it out. Bush probably dreamed of storing the whole Library of Congress on his desktop.

Bush tried to obtain funding to develop his project, but failed. In all probability the machine looked quite useless as a weapon and the value of its concept was far beyond the understanding of the politicians.

the influence of WW II
The research led by Bush and continued by Shannon and Stibitz dates from the 1930s and represents the foundation of many of the developments that were to change our lives.

During the same time period IBM, in association with Harvard University, was exploring similar ideas. It was in fact at Harvard University that, during the Second World War, one of the first digital computers was developed: the Mark I.
In Germany, Zuse had invented a very similar machine in 1941; this was the Z3, which was to be employed in the aeronautical industry. The Z3 could add within fractions of a second and could multiply in a few seconds. The machine was destroyed in the bombings before the end of the war.
The Second World War played a major role in our story. If the Third Reich had not had the paradoxical idea of persecuting the Jews, the Germans would probably have won the war and we would all be living in a different world. The best German scientists were Jewish; hence those who could, left Germany and most of them ended up in the States.

Before the war Germany had been much more technologically advanced than any other Western country. The level and quality of its research in all fields was way ahead of the competition. The Second World War was much faster and more widespread than any previous war, and the need for speedy communications became crucial. What has this got to do with Bush, Harvard and IBM? Easy: the Germans had devised some very clever ways to encrypt their communications, and these encryption systems were driving the best American and English linguists mad. It was imperative to decipher German communications, and to do it fast – particularly when the V1 and V2 rockets started falling on London.

Deciphering an encoded message can be a desperate enterprise, just as it is easy to read the same message once the “key” is found. All the experts were at work trying all possible combinations, but the mathematical combinations applicable to language are just about infinite, and time was short. It became obvious that only by employing a fast calculating machine would they have any chance of success. All existing machines were put to work and new ones were developed under the pressure of war. Electromechanical machines like Aiken’s Mark I, and the valve-based “Colossus” built at Bletchley Park (where Turing worked), were all employed in the task of breaking the German communication systems.

Someone even remembered an odd man named Bush who had wanted to make a “combination machine” some ten years previously, so they looked for him and offered him a job.

post-war evolution
With this we arrive at a point that interests us more closely. Post-war reconstruction and the fast growth of the USA as world leader fuelled the impetus towards technological and scientific research, leading to the world as it is today, for better or worse. The USA was rich, powerful, and convinced it was the best and most righteous nation on the planet. The Americans were determined to keep their supremacy and they had at their disposal some of the most brilliant minds of the time. These minds were only too happy to be allowed to work and to experiment, fully funded and safe in their host country. These conditions allowed the fast development of research centres, both within universities and in the armed forces.

Modern computers and all that makes up information technology and electronics today, were born and developed in these centres.
The first fruit of these policies was ENIAC (Electronic Numerical Integrator and Computer), the first electronic digital computer. This was designed by Mauchly and Eckert at the University of Pennsylvania, completed in 1946 and used by the army up until the mid 1950s. ENIAC could perform 300 multiplications per second; it weighed 30 tons; it was 30 meters long and 3 meters high, and it contained 18,000 thermionic valves (which burned out at a phenomenal rate).
ENIAC was bigger than 30 elephants but, unlike those noble creatures, it did not have something very important: a memory. All data had to be manually inserted into ENIAC, every time.

These early computers were those room-sized boxes that we can still see in some vintage sci-fi movies. In actual fact they were just a huge series of switches. The position of each switch would assume a positive or negative value, thus enabling mathematical operations based on the binary system. They had no programs, no software, no operating system and no internal memory. Everything, operations and instructions alike, had to be entered each time.

Let us now jump back to the origins of the human ability to communicate, before we look at the next step in the development of computers, which relates to language…

..from cave painting to language
Do not forget that all this started in those caves where our distant ancestors had the funny idea of starting to communicate and use abstract concepts through graffiti. If they had not started this game we would still now be dumb and simple monkeys, rather than dangerous and complex ones.

Iconographic communication is about 40,000 years old. It coincides with the development of the frontal lobe of our brain, the portion that deals with “associative thought”, with establishing a relationship between diverse observed or felt elements.
Essentially, art and communication rest on the ability to use a symbol to represent a concept or a fact. This ensures that a sign on a rock will mean “buffalo” not only to the one who drew it but also to those who look at the sign later on. In order for this ability to develop, a common basis of thought processes and knowledge must exist and be shared by the members of a group; this takes a long time.

The first known signs showing a symbolic ability in human beings go back some 100,000 years. It then took another 60,000 years before this ability developed into the skill to decorate first the body, and then the cave walls. It took another 15,000 or 20,000 years to go from the first expression of what could be called “Art” to the paintings in the caves of Lascaux and Altamira – the same length of time which separates these paintings from the first television broadcast.
In the landscape of knowledge, language is the starting point, that which enables humans to codify abstractions, to communicate thought, and to transfer knowledge.
The great revolutions of modern history are all indissolubly linked to the development of communication and archival media, from print to photography, from telephony to radio and television.

there is language and language…
The language humans use to communicate and (with varying degrees of success) understand each other is nothing but an abstract convention. By accepting this convention, we agree on the symbolic value of some graphic signs – letters or ideograms – and on the rules we apply to combine them. With these symbols, we manage to represent and describe almost anything.

The languages we use in the Western world are relatively simple, based on some 20 letters used in quite rigid combinations. Other languages are far more complex – think of Japanese or Chinese: based on thousands of ideograms conveying concepts rather than simple sounds, concepts that can often change their meaning depending upon context and combination.

We also have to accept that almost anything can be represented using mathematical formulae, the most symbolic of symbolic languages.

While at first it may be difficult to imagine, all that happens within a computer’s brain is represented using a simple binary language, a combination of just two symbols – 0 and 1. This may become easier to understand if we think that all that goes on in our brain is also based on a combination, albeit complex, of electromagnetic pulses, on-off electrical signals.
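A minimal sketch (mine, in Python) of what this means in practice: whatever we hand to the machine – a number, a letter – ends up as nothing more than a pattern of on-off signals, written here as 0s and 1s:

```python
# Numbers and characters alike end up as patterns of bits.
for item in [5, 77, "A", "z"]:
    value = item if isinstance(item, int) else ord(item)  # a character is first turned into a number
    print(f"{item!r:>4} -> {value:08b}")                   # then the number is written as 8 bits

# e.g.  5 -> 00000101,  'A' -> 01000001  (the letter A is just the number 65 to the machine)
```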

What makes all the difference between our brain and the computer’s is that the latter lacks something terribly important – what is called “associative memory”. The computer has no imagination and does not spontaneously link acquired knowledge (the “software” and “files” we store in it). When we learn, see, hear or experience something, our brain stores and classifies it, instantly retrieving the information thus acquired whenever it may be needed or whenever some external factor triggers it.

During the first years of our life, our brain constantly absorbs signals from the surrounding world. These signals are classified and stored, making up what becomes our own personal knowledge. Once we have burned our fingers on a flame, we don’t need to be told again that fire burns; and we will instinctively be able to identify searing hot and potentially painful objects even when they are not exactly like the flame that burned us the first time.
The computer is incapable of this kind of association and needs to be told every time, clearly and in simple language, what to do. Furthermore, the amount of a computer’s memory, despite the enormous improvements and developments of recent years, remains a very long way from the almost unlimited capacity of our brains. We should therefore be patient with computers, understanding that they are simply fast calculators and unable to think.

the computer is dumb and “too straight” – be patient with it…
Science fiction often portrays a future with thinking machines, with computers taking over the world. Scientific experiments have been done with computers that are able to recombine acquired knowledge, and some success has been achieved. Ongoing research in nanotechnology has achieved fantastic results, with complex machines the size of a pinhead, some of them able to repair themselves or to build others, simulating the way the complex cells of our body perform incredibly sophisticated operations and reproduce themselves. However, we are still very far from any independently thinking machine of any sort, and we do not know if such a result will ever be achieved.

If I say something simple like, “table” you will all think of a similar object: a flat surface mounted on legs. The picture in each of your minds may well be quite different in style, colour and size, but each of you will have an idea of what a table is and what its use can be. A computer needs to be told everything. It needs a description of the colour, the size, the material etc.; and then in turn each of these elements needs to be defined further . . . Even when we have managed to clearly describe a table to a computer, though, the poor machine still will not be able to recognise another table of a different style, if it were to be able to see one…
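A hedged illustration of the point (the names and properties below are my own invention, chosen only to show the principle): to a program, a “table” is exactly the list of properties we spell out, and nothing more.

```python
from dataclasses import dataclass

@dataclass
class Table:
    # Every property must be declared explicitly; nothing is "understood" by the machine.
    legs: int
    width_cm: float
    height_cm: float
    material: str
    colour: str

kitchen_table = Table(legs=4, width_cm=120, height_cm=75, material="pine", colour="white")
# A three-legged glass table "counts" as a table only if someone writes extra rules
# telling the program what may still be recognised as one.
```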

Our brain ceaselessly stores information, recombines it and retrieves it when needed. We absorb and re-elaborate the signals we receive from our surroundings, from our mother’s womb to our last breath. All that information is continuously updated and reformatted; we don’t have to make a conscious effort for this to happen, it’s written in our genes (a digital code of its own – four letters rather than two – now that the DNA sequence is being unravelled…).

The computer’s only way to learn (so far) is when we install new software that contains specific instructions. No new knowledge will be acquired until the next software upgrade.
Let’s have a look at some more analogies between our brain and the computer’s.

pattern recognition and model making
The computer is an ideal tool when it comes to comparing patterns, creating models of reality that can be used for comparisons.
The computer’s ability to work out the various possible outcomes deriving from variations within a given scenario is powerful and can be extremely useful.
Imagine the effort of estimating the consequences of introducing a certain kind of tree into a new environment; the impact of a given increase of traffic in a specified area of a city; the consequences of a few degrees’ change in the temperature of the atmosphere… the applications are limitless; they all rely upon the ability to calculate and compare enormous amounts of data. The computer is ideal for this work, not least because it doesn’t have opinions and emotions; computer-generated models are more reliable than any such model a human could ever create.
{ I wonder if one day in the future a highly opinionated and emotional computer will spit on my name reading these lines….}
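As a toy illustration of this kind of model making (every number and the growth rule below are invented, purely to show the principle): the machine tirelessly recomputes the same scenario under many variations and lets us compare the outcomes side by side.

```python
# Toy scenario: how a hypothetical tree population might grow over 50 years
# under different assumed yearly growth rates. The point is not the numbers,
# but the pattern: one rule, many variations, systematic comparison.
def population_after(years, start, yearly_growth):
    trees = start
    for _ in range(years):
        trees *= 1 + yearly_growth
    return round(trees)

for growth in (0.01, 0.02, 0.05):
    print(f"growth {growth:.0%}: {population_after(50, 1000, growth):,} trees after 50 years")
```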

To use a seemingly odd comparison, let’s consider something deeply human like the choice of a partner. Let me say before we start that I would never dream of suggesting we should use a computer to choose a partner (although that’s what many dating agencies naively do).

can the computer fall in love?
This is something we all experience, irrespective of culture, religion, area, era, and it is something that greatly conditions our life.
In choosing a partner our brain does a lot of pattern-recognition and model-making work.
Without our conscious knowledge, upon meeting someone new our brain immediately retrieves all our memories of childhood, family life, friendships and previous relationships. It does it instantly; remember that our brain is in many ways superior to the computer at storing and recombining information.

Our brain immediately makes a model, inserts in it our personal data (memories, emotions) and then starts trying all possible combinations with the data coming from the person we have just met.
During a first meeting the brain does fascinating work: it recombines the stream of data (words, images, smells, sounds, motion…) coming from the other person, in real time, continuously changing the outcome (our impression, our feeling) as new data comes in and is recombined to create new scenarios (our dreams and fantasies).

Overlapping and comparing patterns, our brain conjures up an ideal model and, if the data somehow matches, sets a process in motion that can lead to what we call “feeling attraction”, which can then develop into “falling in love”.

This way of looking at the subject may sound terribly cold and mechanical, yet it is a natural part of how our brain works and knowing these mechanisms can be helpful.
Thinking of our emotions in terms of more or less efficient combinations of binary code may spoil the romance; however, that’s the way our DNA is programmed, and the signals reaching our brain are somewhat like those reaching the computer’s processor: on-off pulses.
The computer way of building models and working out solutions is based on these same principles.
Virtual reality, in its practical applications, relates to this kind of model building and enables explanations and hypotheses otherwise out of our reach.
Imagine a doctor travelling along the arteries of a patient to discover where the damage is and to pilot micro-surgery equipment; imagine re-experiencing the first months we spent in our mother’s womb… again, the applications are infinite.

the power of knowledge
Referring to what we mentioned at the beginning about the equation “knowledge = power”:
The more you know the less someone will be able to tell you what to do and how.
What does “knowing” mean, though? For sure it is not just the accumulation of data or the acquisition of skills.

A suggested recipe:
Gather, classify, compare and place the data you find in the appropriate context.
Get the overall picture, looking at things from afar, in their ensemble; then look at the same things from very close, in their details.

Assimilate, and make up your own opinion.
Never before has so much knowledge been so easily and freely available.
Computer technology and networking can greatly help the spreading of knowledge.
The risk is that the “suppliers” of this knowledge and the examples of how to employ it are still American and European. This can perpetuate the roots of Colonialism, just presenting a new and subtler version of it.
It is imperative and urgent that people from different cultural backgrounds appropriate the technology and use it in their personal ways.
Where these ways may lead nobody can truly know, but it is a chance not to be missed.

early “modern” computers
Let us now go back to the early computers that we were talking about. Effectively these were huge boxes, full of switches. Their ‘talent’ consisted of the ability to perform mathematical operations faster than a good human mathematician could.
At the beginning, access to these machines was limited to a selected few. The reason for this was that they were enormously expensive and that only a few scientists were able to use them.

Here we are again, then, with the concept from which we started: ability to perform complex calculations = power = restricted class of users (in any combination).

These users were mainly Government bodies; all of the first computers were owned by governments, designed and produced by highly specialised personnel in secret research centres.

It is worth remembering that the most powerful of those early machines would pale into insignificance beside the simplest of today’s machines; even one of today’s children’s computers is faster and more powerful than any of those techno-dinosaurs.

Computer development has taken place over a period of about 40 years and the speed of such development has increased exponentially, thanks to the “enabling technologies” chain effect. Every new discovery has made other, more complex, discoveries possible; thus the various research fields nurture and influence each other.

After a century during which the various scientific fields operated quite separately and almost unaware of each other, over the last decades we have seen a return to the multidisciplinary concept – to the roles of science, philosophy and art merging as they did originally in men like Leonardo Da Vinci.

As a side effect, the exchange of knowledge between researchers has resulted in an enormous development in communications – not least, it was a principal cause of the creation of the Internet, with all its implications.

Despite the Cold War and industrial secrecy, the scientific community offered the first glimpse of a possible supra-national community.

pioneers
After the War, the main technological developments took place in the USA and England. Then the USA pushed their research as much as they could, supported by their booming economy, and managed to establish hegemony that not even the Japanese have yet matched.

The mathematician Von Neumann invented the basic concepts of programming and memory that form the foundations of modern computing.

Cambridge University in England built EDSAC (Electronic Delay Storage Automatic Calculator), based on von Neumann’s design. This was the first practical computer with an internally stored program.

Mauchly and Eckert improved ENIAC into BINAC, which used magnetic tapes as a storage medium; and then contributed to the creation of the first commercial computer, UNIVAC, in 1951.

The next step was the series of IBM 704 computers, made in 1955, which used a different kind of memory (magnetic cores), bigger and more reliable.

All of these early computers had to be programmed directly in what is called “machine language”, listing every single instruction. We are still far from the first programming languages.
keep it secret – knowledge mustn’t fall into people’s hands
As we have seen, during the ’50s intense research activity took place in American Government-run centres. This research concentrated on computing and related sciences. We know less about what was happening in the USSR at the time, but it is easy to imagine a similar situation. However, while the Soviet scientists were very competent, they did not have access to the same kind of investment and also had to battle a stiff bureaucracy; this slowed down their progress.

Initially all this newly acquired knowledge was confined within the research centres and there was neither plan nor intention to make it public. Naturally, many researchers ended up working in Universities – some because they had ended their Government contracts, some because they wanted to expand their research into other fields.

At the same time IBM began to understand the commercial potential of computing outside of Government applications.

winds of change
Despite many of the innovative studies being protected by non-disclosure agreements or classified top secret, it was inevitable that those researchers, who had now become lecturers, would transfer their knowledge to their students, directly or indirectly.

Do not forget either that we are at the beginning of the ’60s, a time in history when, particularly in American Universities, young people thought that they could change the world (and they did have a slight chance too…).

Moreover, the American Government understood the profits that could derive from creating a new generation of highly trained people, once they had “settled down”. In the light of these considerations, American students of the 1960s enjoyed a considerable degree of freedom and were able to make use of vast resources, both in terms of facilities and of funds.

Here, then, we see a whole generation of hippies and eccentrics with enormous (compared to the rest of the world) resources, encouraged to experiment freely and to create something new.
It is in the ’60s that ideas like Virtual Reality were born. VR was born simultaneously in Army Research Centres (first and foremost for simulation and testing purposes) and in the Universities (for science and entertainment).
We will get back to this point later, as there was still much to be developed before computer graphics were born and before Virtual Reality could become a viable reality.
now we start talking!

Let us step back a bit.

Initially, the way to “talk” to the big boxes was through perforated cards. The computer technician would punch the cards: a hole for one binary value, no hole for the other. With this system – which, now that we are used to the mouse, the touch screen and the virtual glove, feels positively primeval – the programmer (who was still a member of a limited élite) would describe to the computer the operations that it was meant to perform. The computer would read the card, do the calculations and answer by punching a similar card and spitting it out. These cards, again, could be understood only by the computer technician.

The computer’s memory was then limited to an archive of punched cards or, a little later, perforated paper rolls (remember Jacquard and his textile machines?). This was cumbersome and hardly practical.

To give you an idea, a postcard-sized photograph that you can see on screen can amount to several pages of code in the computer’s “mind” (see “the computer’s eyes”). This is because it is described with a modern programming language, which is already a very concise way of writing a description. If the same picture were to be “written” in plain binary language it could take several hundred pages of zeros and ones. Today you can click on a few buttons and get your image scanned and displayed on screen, whereas only fifty years ago you would have had to describe each individual pixel’s characteristics by punching holes in series of cards or in yards of paper rolls. That would only be if the computers of fifty years ago had had a screen, and the capacity to understand and represent an image…
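A rough back-of-the-envelope sketch (the resolution and colour depth are my assumptions, chosen only to give an order of magnitude) of why one small picture turns into such a flood of zeros and ones:

```python
# Modest, 1990s-era assumptions: a postcard-sized picture shown on screen at about
# 400 x 600 pixels, with 8 bits of colour information per pixel.
width_px, height_px = 400, 600
bits_per_pixel = 8
chars_per_page = 3000                               # a densely printed page of text

total_bits = width_px * height_px * bits_per_pixel  # 1,920,000 zeros and ones
print(total_bits, "bits ->", total_bits // chars_per_page, "printed pages of raw 0s and 1s")
# 1920000 bits -> 640 pages
```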

from paper to tape
The next step towards a more reasonable usability of computers was to replace the paper with magnetic tape. This had been developed and improved for audio recording, and it was the best available medium to record and store computer data. The first experimental model was BINAC in 1948.

It is at this point that the room-sized computers mutated into those metal “wardrobes”, usually blue and grey with big reels of tape that you see in some vintage Sci-Fi movies. With this system the ability to store and quickly retrieve information increased dramatically. If Bush had been there at the time, he might well have melted with pleasure and envy thinking of his original idea of using microfilms.

A meaningful figure: at the beginning of the 1950s, there were about twelve computers in the USA.

the transistor
With the invention of the transistor, the electronic era and the real revolution could begin.

The first transistor was invented in 1947; Bardeen, Brattain and Shockley won a Nobel Prize for this discovery. The first computers employing transistors were the IBM 7090 and the TX-0, the latter designed in 1956 at MIT (the Massachusetts Institute of Technology).

Kilby at Texas Instruments designed the first integrated circuit in 1958.
These two elements enabled miniaturisation and the reduction of prices – in short, the existence of personal computers as we know them now, at the beginning of the 21st century.

first software and programming languages
Something that these computers still lacked was an interface – a direct system of communicating with the machine. There was no way to see what the computer was doing while it was doing it without interfering with its calculations. At this point, one major improvement was to modify the electric typewriter keyboard and fit it onto the computer. Again we see the adaptation of a pre-existing item, and its integration with the computing machine.

A further step forward was the creation of printers based on punching needles. These were so noisy that they had to be put in separate rooms or soundproof boxes. They would print out miles of perforated paper that would pile up in the print room, again something you see in some vintage movies.

At this point we are beginning to get close to a more familiar object, more directly recognisable as an ancestor of modern personal computers.

The IBM 700 series, brought out in 1952, can be considered the first of this generation of computers. Nonetheless, these were still machines based on valves; very expensive and complex, not designed for the general public.

With the IBM 704 series a programming language was born; it enabled the programmer to instruct the computer in something resembling the English language. This was the first step towards teaching the computer to understand humans rather than the other way round.

This first language was called FORTRAN (FORmula TRANslation) and its specs were defined in 1957, kick-starting a process of creation and evolution of programming languages that is still under way.

After FORTRAN, ALGOL (ALGOrithmic Language) and COBOL (Common Business Oriented Language) were born; this last was still in use in the ’90s.

The 1960s saw the development of a series of specific languages, directed towards a new kind of user, more commercial than scientific. Another milestone of this period was the IBM 709, the first computer equipped with a CPU (central processing unit) where the control unit and the arithmetic unit were for the first time integrated.

the operating system is coming
The second fundamental milestone was the appearance of the first operating systems, the ancestors of DOS, Macintosh System and Unix. (Why do I not mention Windows? Because it is not an operating system! Later we will talk about this, one of the biggest scams of the century.)
These operating systems were designed to automatically handle a series of routine tasks, performing control procedures on the internal resources and executing programs. It was at this time that the term “software” was invented, to describe computer programs that include pre-codified specific functions.

the Massachusetts Institute of Technology
It is worth saying a few words about MIT. Of the many American universities, this is perhaps the one that has contributed the most innovative and revolutionary research and experimentation of all. The majority of the new technologies that we all employ or live with in our daily life originated at MIT. The campus is a lively centre of experimentation, with the oddest kinds of projects all taking place simultaneously; it is an experimental laboratory where anyone with an ingenious idea, however mad it may sound in a conventional context, can find the ideal environment in which to develop it.

From the beginning, MIT was sponsored by private industry. All the world’s major corporations invest heavily in it, knowing that sooner or later they will reap the financial benefits. It is a rather peculiar case in so far as the investors do not have much of a say in how their money is spent. This is mainly the result of Nicholas Negroponte’s leadership and policies at the Media Lab: over the years, he and his team have managed to demonstrate that pure research can lead a very long way if it is not conditioned by politics and direct commercial purposes.

During the 1950s, thinking of owning a personal computer was not only against the mainstream but also downright forbidden. Douglas Engelbart (inspired by some old writings by Bush) was perhaps the first person that understood the conceptual and social importance of allowing the general public to access computers. The concept was revolutionary – it envisaged the computer as an “amplifier” of human mind potential, rather than as a tool for financial and military applications.

Engelbart and the ’70s visionaries
Well into his seventies, Douglas Engelbart was still a very active researcher, regarded as the father of a whole generation of modern thinkers. Yet it took several years before anyone started taking his ideas seriously. During the ’60s Engelbart ran the ARC (Augmentation Research Center) at the Stanford Research Institute. There he elaborated his thesis, based on the awareness that the world was developing at an increasingly fast pace and was becoming more complex than ever before. Mankind needed new tools to understand and to manage the planet. Computers were the ideal instruments to take charge of all that portion of our thinking processes which requires the storage and comparison of vast amounts of data. This would free the human mind from a great burden, allowing more space and time for creative thinking and processes.

Back in the ’50s, Engelbart was already thinking of a computer more or less as we know it today. He also hypothesised a kind of remote collaborative work that is more advanced than today’s Internet.
Working at the same time as Engelbart, but without the two knowing each other, another visionary was developing a similar theory. This was J.C.R. Licklider, professor at MIT.

Licklider was developing the idea of employing the computer as a work-mate, one which could be given the tasks not only of archiving, retrieving and combining data intelligently, but also of developing simulations and “models”.

At this point, though, Licklider was called upon to develop a defence system for American territory, as the US Government was terrified by the launch of Sputnik, which made them think the Soviets had surpassed them technologically and were ready to invade or destroy the USA.

Stemming from this research carried out at MIT, several technological improvements were made, such as adding a monitor to the computer (Engelbart himself was trying to modify a TV monitor for this purpose).

… maybe we can use the computer for something else
From this basic implementation followed the touch-screen, on-screen graphic representation, real time interaction, the graphic user interface, the electronic pen, visual simulation and indeed, as a consequence, all that we are familiar with now, from video games to video conferencing.

This project spawned another one, one that called for the participation of another seminal figure in this story: Ivan Sutherland, commonly recognised as the originator of computer-graphics.

On this second project, enjoying conspicuous financial support, Engelbart was called in to be part of the team. Fourteen years after the publication of his ideas, he could finally have at his disposal one million dollars a year and all the facilities and resources he needed to develop his “mind amplifier”.

Engelbart, Licklider and Sutherland: the world today would be a different place without them.

It was at MIT that someone, inspired by the three great men’s ideas, thought that computers could be used for something more fun and more useful than designing missiles and defence systems, bombs and spy satellites. To achieve this, it was essential to make computers more accessible, both in terms of use and of cost. The first step was hence to find a way to break the monopoly of programmers and computer technicians. The steps leading to this were logical and happened in sequence, thanks to the cross-fertilisation between various scientific and technological applications that were being developed in a variety of areas.

giant steps and here comes the Mac
IBM and Xerox, aware of the commercial potential deriving from the development of electronic devices, were designing all sorts of office machines, sponsoring the research at MIT as well as running their own research centres.

DEC (Digital Equipment Corporation) produced the first minicomputer, which cost only $250,000 instead of millions and was as small as a sideboard; this machine accepted input from paper rolls and allowed for a certain degree of interaction.

Being able to see what the computer was doing was the first necessary step. As soon as the researchers at MIT succeeded in attaching a TV monitor to the computer, IBM produced the first series of personal computers with a keyboard and a monitor and these immediately invaded American offices.

This started a chain reaction: it caused the birth of software houses like Microsoft, which started writing programs (when they couldn’t steal them from someone else), and of a multitude of small companies producing electronic components and software. Not by chance were many of these companies founded by ex-MIT students, young graduates from other universities and people who had left Government research posts.

From that point onwards, most of the technological effort has gone into optimising the performance of computers, making them more and more capable of handling a variety of tasks, faster and more efficiently. The basic concept has not changed much over the last twenty years – a few fundamental improvements, such as the graphic user interface, have remained more or less the same since the introduction of the Macintosh system in the early ’80s.

In the meanwhile our friends at MIT, and a handful of other dreamers, were increasingly thinking of a “democratic” computer, one that could be used by anybody, and for creative purposes too.

At the same time, the entertainment industry had grown to massive proportions, and this became another source of finance for the technological research, mainly directed towards the development of tools and products for the creative industries: cinema, television and video games.

the birth of computer graphics.
Computer graphics were born out of a combination of needs. One was the need to create new consumer products, particularly a new kind of games, with a strong visual component and a high level of interaction. The other need was centred on simulation, particularly directed to the training of fighter plane and tank pilots. As aeroplanes and tanks had become more complex and expensive, it had also become more dangerous to let inexperienced pilots use them.

Most of the advanced systems of computer visualisation we commonly use today, from 3D modelling to real-time rendering, from digital photography to digital video, are derived from the devices initially developed for realistic flight simulation, remote missile control and the like.

By the end of the ’70s researchers had understood that if it was possible to tell the computer to switch pixels on and off on a screen, it was also possible to teach it how to draw. After all, it was just a matter of informing the machine of the coordinates in the two-dimensional space of the screen where the pixels had to be placed. The first software to enable drawing on screen was absolutely primitive – nonetheless, it was a conceptual revolution. After all, the computer was in its infancy, and no one would expect a baby to know how to paint a masterpiece.
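A minimal sketch (mine, not period code) of the idea: once the machine can be told “switch on the pixel at column x, row y”, drawing reduces to working out lists of coordinates.

```python
# A tiny "screen": a grid of pixels that are either off (.) or on (#).
WIDTH, HEIGHT = 20, 8
screen = [["."] * WIDTH for _ in range(HEIGHT)]

def set_pixel(x, y):
    """Switch on the pixel at column x, row y."""
    screen[y][x] = "#"

# "Drawing" a diagonal stroke is just a sequence of coordinates.
for x in range(8):
    set_pixel(x, x)

for row in screen:
    print("".join(row))
```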

mouse, screen and graphic user interface
Another idea that occurred to the same group of researchers at MIT was that if the computer could receive electric signals from a keyboard, there was no reason why something else could not be used instead.

They made a device that looked a bit like a mouse (hence the name we still use) which enabled drawing on screen; they also designed a system that allowed objects to be moved on screen via commands given through a microphone.

These first drawings were actually made of tiny letters, since letters and numbers were the first graphical symbols that the computer had been taught to represent.
A demonstration was thus organised at MIT to present these revolutionary new devices to the sponsors. Such demos were held periodically, to show sponsors how their money was being spent.

The head of the project set up his computer, linked to a projector, to demonstrate the use of mouse and vocal command, together with a first hint of graphic interface.

He first drew a little boat using the mouse, then he got closer to the screen and said “computer, move this boat from x to y”. The boat stuttered across the screen and the young scientist turned around to face the audience, expecting a deluge of applause.
There was an embarrassing silence. The very important people in the audience thought they were the victims of a bad joke: some million dollars had been wasted to let a kid play, drawing little boats on a computer – a serious machine, designed to count money and to make money – this was inadmissible!

Everybody left in scorn, all but two: two young students from another university, the ones who had founded in their garage the company which was to become a legend, a multimillion-dollar company that at the time nobody would take seriously, not least because of its name: Apple. How could anybody serious about business give the name “Apple” to a company producing serious business machines!?
These two had understood that there were millions of potential “different” computer users out there. They were terrifically enthusiastic about what they had seen and decided to produce a computer specifically designed for image manipulation and other creative applications. Moreover, it had to be a computer that would not require any computer skills, not even a basic knowledge of DOS, the operating system that was (and still is) the core of all PCs.

The first true demonstration of a functioning system containing all the basic elements of a modern computer, including a graphic user interface, was given by Engelbart himself.

This presentation is remembered as one of the crucial events of modern history. It happened in 1968 at the Fall Joint Computer Conference. On that occasion Engelbart presented a system that allowed human interaction with the machine: it employed icons to represent directories that could be “opened”, it had text shortcuts, and it allowed cut-and-paste of both text and graphic elements within documents.

rock & roll, drugs and new visions
At the end of the 1970s things were changing rapidly. Among hippies and revolutionaries, there were some who understood that the system is better fought from the inside, and well equipped.
There was a general desire to appropriate the technology, in the awareness of its empowering significance.

It is no accident that behind the most successful technological enterprises of that time we find people like the Grateful Dead, Jaron Lanier, Timothy Leary and other exponents of the rock music scene, of the beat generation, of the pacifist and hippie movements.

Apple set out to create a new computer, merging all the best ideas and projects developed at MIT and at Xerox PARC (the Palo Alto Research Center of Xerox). After a troubled experiment with a computer called “Lisa”, in 1984 the Macintosh was born, branded “the computer for those who hate computers”. The advertising campaign itself was a hit, inspired by Orwell’s book “1984″ and shot in a “Blade Runner” style (the film was made in 1982). The message was “down with Big Brother!”

This marked the beginning of a new era, of a new way of working and communicating, a new way of producing art and entertainment.

To begin with, most people in the business world disregarded Apple’s products as pointless and fanciful. Ever since then, all “serious” computing and business people have maintained that Apple is on the verge of collapsing – in fact they did risk going under several times, not for the quality of their products but for their totally insane way of handling marketing strategies.

Against all the odds, Apple grew and became a success. In its first years, sales increased at the rate of more than 100% a year, compared to losses of 15% to 20% made by IBM and other major manufacturers.
This made the giants think.

mad Mac
The first Mac was terribly expensive, compared to the PCs. Moreover, it was totally incompatible with anything else and there was very little software written for it.
However, this funny looking box kick-started a revolution that infiltrated all fields.

The Mac had a few very special characteristics:
- all operations were controlled via the mouse, except text input, which was obviously done via the keyboard.
- an operating system based on a GUI (graphic user interface), representing all functions with icons. By clicking on an icon, one could perform all sorts of operations without ever needing to input a single line of code.
- it had an internal floppy drive, allowing files to be transferred from machine to machine.

This computer was so easy to use, and so unintimidating, that a child could use it with no need for instructions.

The graphic user interface was modelled on children’s needs and ways of learning. In fact, children were among the first users of these machines, which were introduced into various American schools in a series of educational projects. The results were astonishing. It is worth reading about projects like Lego-Logo, where a group of children learned how to create “behaviours” with the Mac and assign them to mechanical creatures they had built with Lego blocks and electric motors – creatures that were effectively programmed and driven by the behaviours created on the Mac.

In a short time this toy-computer became the standard machine in Educational institutions at all levels in the States.

Desktop publishing…
The next area where the Mac found application was publishing, effectively initiating desktop publishing (DTP).

This was a milestone, particularly interesting for the subject we are examining. Let us remember that, from Gutenberg’s invention of printing onwards, publishing had been an extremely powerful tool, in the hands of a few and capable of changing public opinion in a world that had neither radio nor television.

Before the Mac, the only chance anybody had had to see his or her work published was to sell it to a publisher. There had been no way to produce and distribute independent material at low cost and good quality. This explains the sudden success of a machine which allowed an individual to lay out text documents using traditional typefaces, include images (although only in black and white and at low resolution) and print at a reasonable quality. This was a machine that could sit on a desktop and did not require technological knowledge.

The cost was high but still negligible compared to the cost of setting up a newspaper or magazine and a print shop. Also, the fact that the Mac had an internal floppy drive meant that documents could be saved on floppies and sent to be printed elsewhere, thus minimising distribution costs.

This was the first step of a series that led to electronic publishing, remote working, distance learning and global networking.

new tools, new users, new work
The success of the Mac in publishing meant that other companies started producing hardware and software for it, first for graphic applications, then for photography and finally for audio and video. Soon it became possible to work in colour and at high resolution and by 1988, the Mac was the standard computer for creative applications, allowing for input and output on paper, film and tape.

Something very important for those of us who work in creative areas is the possibility of concentrating on the style, look and content of what we are doing, rather than having to think of the technical aspects of the tool we are using. Once we know how a pencil works, we draw without having to bother thinking how the pencil works – we simply need to sharpen it from time to time. That was the concept behind the Mac, the computer as a tool that must become as transparent and friendly as the old familiar pencil.

Having understood this need, Apple concentrated on refining the operating system, making it more and more efficient and simple to use, and making sure that the operating system would take care of as many boring functions as possible (those operations which PC users had to do themselves, writing commands for the DOS system).

ouch, here comes Microsoft
Here we have to get onto the Microsoft case, probably the biggest con of modern history and one of the most typically American success stories.

Microsoft started as a small group of programmers headed by Bill Gates. They knew that there was money to be made with software design and they were determined to make it, at all costs.

When they heard that IBM was desperately looking for an operating system, they told IBM that they had one and it would be ready in no time. In fact, they didn’t have any operating system, or the time to design one. However, they found someone who did, stole the code and promptly sold it to IBM, establishing the basis for one of the widest and richest empires of recorded history.

That was DOS, the operating system which has since then been at the core of all PC computers.

From then on, Microsoft dictated what computing had to be and how people had to work.

Six years later, IBM regretfully acknowledged that a market for a computer like the Mac, equipped with a graphic user interface, did exist. IBM had to resign itself to the necessity of providing a GUI for the PC, something that could be overlaid on DOS, the classic CLI (Command Line Interface).

At this point, Microsoft dished out Windows, a patchy copy of the Macintosh operating system. Not only did they charge a fortune for permission to use Windows, they also managed to blackmail IBM into buying a copy of Windows for each computer it would produce, but with ownership of the software remaining with Microsoft.

Microsoft also managed to convince the world that Windows was an operating system, while in reality it was – and is – simply a graphic “mediator”, providing an iconic interface to DOS, which remains the PC operating system.

The stories relating to Microsoft’s monopolistic policies and its unethical way of dealing with competitors are well known, yet the fact remains that Microsoft has managed to impose its ways and make a fortune. Its influence goes beyond the pure power of money. Having managed to force the whole world to use certain software, particularly the Office suite (Word/Excel/PowerPoint) and the browser Internet Explorer, it is conditioning the way people work and think, including the way people use the English language. All Microsoft software comes with predefined settings, presuming Microsoft to know what’s best for you and pretending it wants to make your life easier.
They even invented U.S. English as if it were a separate language – what about Scottish English or Irish English then or, for that matter, my “Italian English”?!

This means that most people in the world, except perhaps a few fussy British linguists, use the spell checker and dictionary which come as standard with Word – these are obviously an American spelling and an American dictionary, not always the best and certainly incomplete.

In this way Microsoft decides how people in the world should speak English.

Internet Explorer’s settings do not respect the design of web sites, unless the programmers include specific code to force IE to forget about its own settings and to respect the designer’s. IE decides how best you should view the information on the net.

Encarta, the CD ROM encyclopaedia produced by Microsoft and distributed in millions of copies worldwide, has attempted to change history by modifying some events and omitting others. Following complaints from many historians, the first versions have had to be amended. It still presents a world that is but a peripheral extension of the USA.

The reason why IBM accepted all of Microsoft’s conditions is quite simple: they could not do without a graphic user interface but they could not produce an operating system which, like the one on the Mac, integrated the GUI and the system. In order to achieve that, they should have redesigned the PCs’ architecture, which would have meant that the tens of millions of PCs sold up until then would have had to be scrapped. This would have obviously been suicide for IBM; so they just pulled their trousers down in front of Bill and let him do what he wanted. After all, Windows had probably saved them.
