The Ultimate Software Enigma:
Is software beyond human understanding?

© 2013 Stan Yack   

My first visit to cyberia was in the early 1960s, when my high school class spent a day at the University of Waterloo, learning how to program a computer. In the summer of 1968, after my second year at the University of Toronto, I got my first job working with computers, at IBM, writing "RPG" programs. Since then I've built application and system software to run on many different computing platforms. Today I'm a "recovering" softsmith, giving talks and writing essays like this one to make amends for all the software I've inflicted on an unwary world.

I've written a debunking of (among other claims) the notion that computer systems have demonstrated an artificial intelligence. If from the title of this essay you think that I've changed my mind about that, you would be wrong: I still believe there's no evidence that any computer system has ever exhibited what a serious observer would identify as thinking. But my reading of philosophy, psychology and other disciplines has taught me that the capabilities of the human mind are also limited, and I believe it may be the case that computer systems have already been built whose complexity renders them beyond human understanding.

I assume that you the reader have a keen interest in the science and logic behind both computer systems and human thought. So to start, I'll try to place computers, and the software that animates them, in an historical context. Then I'll broaden my perspective to philosophy, mathematics, and systems theory, using results from those fields to support my thesis that the current worldwide collection of computer software may already be, even in theory, beyond our understanding.


You probably have at least an intuitive grasp of the difference between hardware and software. In "Hardware is from Mars; software is from Venus", Winn Rosch says that "software is the programmer's labor of love, and hardware is the stuff pounded out in Vulcan's forge." Less poetically, I'll compare the formal language text of software to the natural language text of a story, and hardware to the printed page, or magazine, or book. Like a composition given substance in a musical performance, software needs substantiating hardware to give it a direct effect on the world; hardware is nuts and bolts, and software is ideas.

Software is the set of instructions that computer programmers compose to direct the operation of computer hardware (the operation of which is called "executing" those instructions). Writing those instructions is like writing a newspaper article, or a short story, or a blog entry, or even a poem; programming style guidelines are perhaps more strict, but the process is as creative. One important difference is that one of the copy editors for a computer program is always another computer program. Those automatic and unthinking editors won't pay attention to creative explanations. They insist that every computer "sentence" has proper structure, the formal-language equivalent of an English-language subject, verb, and object. But (ignoring academic baby steps, and paranoid rumours about powerful, ultra-secret systems) programming is something that only humans can do; computers can't program themselves.
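
To make that automated copy editing concrete, here is a tiny sketch in C rather than in the RPG or COBOL of my early days (the names and values are invented for illustration). The compiler will insist on the structure shown; leave out a single semicolon and it will refuse the whole program, yet it will happily accept well-formed nonsense.

    /* A minimal C program: the compiler is the "unthinking copy editor"
       that checks structure, not meaning. */
    #include <stdio.h>

    int main(void)
    {
        int price = 10;                  /* a well-formed "sentence" */
        printf("Price: %d\n", price);

        /* The next line is well formed, so the compiler accepts it,
           even though it accomplishes nothing useful.  But write
           "int price = 10" without the semicolon and the compiler
           rejects the program outright. */
        price = price - price + price;

        return 0;
    }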

Especially with their enormous capacity increases in recent years, computer hardware components — keyboards, display screens, power supplies, chips, cables — may seem to be incredibly complex. But computer hardware is actually much simpler than the software which drives it. Hardware really only gets complicated when it is actually software given corporeal form in an embedded system. In those systems software, designed and produced in the usual way, is "burned" into the chips of a cell phone, a VCR, a thermostat, an electronic ignition. Except when such chips fail due to catastrophic physical damage (like an Xbox 360 melting), their behavior is determined by their software. And fortunately for us, especially when we're driving a car with ABS brakes, embedded systems fail less often than PCs, in large part because they contain far less software. But it doesn't take a catastrophic hardware failure to make an embedded system unusable; badly designed embedded logic will manage that quite well, as the "FDIV bug", a flawed division lookup table etched into early versions of the Intel Pentium processor, demonstrated.

Bjarne Stroustrup, the designer of the C++ programming language, tells the story of wishing that his computer were as easy to use as his telephone; he says that his wish has finally come true, since he now has difficulty using his (computerized) telephone. I had a similar experience when I cleaned up the tangle of cables in my office by installing a new 2-line telephone with caller ID: I had to bring home three new phones, from two different branches of a popular electronics store, and communicate with a developer at the manufacturer's factory, before I got a phone that worked.

The Software Crisis

You probably won't be surprised to hear that many IT people talk about a software crisis. What do you think of when you hear those words?

About the software crisis in ...

  • Economics? (Your software costs more than the PC you run it on.)
  • Complexity? (You'll just never really understand MS Word styles.)
  • Unavailability? (Your connection to the Internet is down too often.)
  • Unreliability? (Windows is showing its "Blue screen of death".)

You may be surprised to hear that the phrase "software crisis" was coined by software engineers in the 1960s, long before MS Windows, before Macintosh, before UNIX, before the Internet. They were talking about software operating on the mainframe computers of large organizations — businesses, governments, universities. Even the most modern of that software was written using what are now considered "historic" programming languages like COBOL and FORTRAN. The software crisis they were bemoaning was the difficulty the programming fraternity had in using those tools to produce reliable software, and to do so quickly.

Today we've solved only one of those problems: we still produce unreliable software, but we do it much more quickly. You might have heard the phrase "garbage in, garbage out", perhaps when Spencer Tracy explained it to Katharine Hepburn in the 1950s film "Desk Set". That maxim was created to warn computer programmers (and computer users) that if they supplied their computer with incorrect input, it would produce incorrect output. That may seem pretty obvious to you. But logically challenged commentators (and nasty telephone support staff) sometimes reverse the direction of the implication, concluding that incorrect output from a computer system implies incorrect input. Curiously, until very recently that fallacy actually made some sense.
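
A small, invented example shows why that reversal is a fallacy: in the C sketch below the input is perfectly good, but a subtle bug (integer division) still produces garbage out.

    #include <stdio.h>

    /* Buggy average: dividing one int by another throws away the fraction. */
    double average_buggy(int a, int b)
    {
        return (a + b) / 2;              /* bug: integer division */
    }

    /* Corrected average. */
    double average_fixed(int a, int b)
    {
        return (a + b) / 2.0;
    }

    int main(void)
    {
        int x = 3, y = 4;                /* good input; no garbage going in */
        printf("buggy:   %f\n", average_buggy(x, y));   /* prints 3.000000 */
        printf("correct: %f\n", average_fixed(x, y));   /* prints 3.500000 */
        return 0;
    }

Incorrect output from correct input: the garbage was manufactured entirely inside the program.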

That's because most programmers, then and now, use software tools provided by other organizations, packaged in system software and program libraries. Programmers depend on those libraries for basic functions — like reading a card, calculating a mathematical function, or printing a line in a report. The foundational software of a library could usually be relied upon to have been more thoroughly tested and verified than the programmers' own COBOL and FORTRAN creations. So, when debugging their own programs, programmers find it an effective strategy (at least initially) to assume that failures in the end results are caused by errors in their own programs, and not by the more stable software environment. But the software environments of the 1950s and '60s were vastly simpler than those of today; there just wasn't all that much foundational software in them. As computer capacities have increased, software applications have become more sophisticated, and so have the system development environments (SDEs) that support those applications.

The most common measure of software quantity, which is at least 50 years old, is "lines of code". You can compare a line of code to one step in a recipe, or a single line in a poem. In 1954, IBM's model 650 computer shipped with 6,000 lines of code. Twelve years later, in 1966, all of the software that IBM produced for its mainframe computer (the operating system, utility programs, compilers, ...) consisted of about 400,000 lines of code. The amount of software laid on top of that by IBM customers varied, but in those days no customer except the US military produced anywhere near IBM's 400,000 lines of software themselves. In the early days, almost everybody with IBM computers used IBM's operating systems, and compilers. There wasn't any sharing of software, and unlike today, you certainly couldn't buy shrink-wrapped software at the corner drugstore!

The first big increase in the quantity of general-use software for IBM mainframe computers came as a result of NASA's Apollo project. IBM produced a general purpose data manager called "IMS", the "Information Management System", to inventory the components of the moon rockets (then the most complicated artificial systems that humans had created). After the Apollo project IBM recycled IMS to create its first commercial "database" management system.

Today the construction of software is a major contributor to the world economy. The U.S. Bureau of Labor Statistics estimated that in 2004 there were over 1.2 million Americans working as Computer Programmers or Software Engineers. It's hard to find authoritative comparisons for 1965, but my guess is that 40 years ago those numbers were about one hundred times smaller. Using today's advanced SDEs, a programmer can generate software far more quickly than those 1960s programmers could — a conservative estimate would be 100 times faster. And 100 times as many programmers, each producing 100 times as much software, means that there is 10,000 times as much software being produced.

But I may be underestimating. One 2002 Linux distribution contained about 30 million lines of code, and the next-to-last major IBM mainframe operating system (OS/390) contained about 25 million lines. Moving beyond finished, commercial products (yes, I know: Linux is not "commercial"), a good guess is that in January of 2005 the "Internet-accessible World code base" was 3 trillion lines of code.

And it doesn't stop there; you don't have to be employed as a Computer Programmer or a Software Engineer to program a computer. Have you ever written an Excel formula, a Word macro, or a SQL query? And when that macro or query didn't work properly, was it always your fault? Or was the problem caused by Bill Gates's boys? Well, maybe it was ... though you probably had a hard time convincing the telephone support agent!

In the forty years since the "software crisis" was first noticed, there have been dramatic improvements in the way we produce software. By frightening them with the spectre of computers programmed in Russian, Grace Hopper got her McCarthy-era military bosses to let her "program in English" (actually in a formal language that used English words as symbols, the ancestor of COBOL). That was a bit before my time as a softsmith. But another critical change in programming style stands out in my memory: in 1967, the Dutch computer scientist Edsger Dijkstra fired the first shot in the "Structured" programming revolution with a short letter to the Communications of the ACM (the Association for Computing Machinery). That letter was titled "Go To Statement Considered Harmful". At the time, the GOTO statement was considered an essential element of the FORTRAN and COBOL programming languages. Dijkstra said that its use limited programmers' ability to understand their programs, and thus their ability to write them correctly. In time the troublesome GOTO was largely banished from proper programming. Much more recently, another revolution in programming style has been claimed to enhance the correctness and productivity of programmers: "object-oriented" (AKA "OO") programming.

(Object-orienting may turn out to be as revolutionary as structuring, but I'm skeptical, and not just because I'm an old-timer; I just haven't yet seen empirical evidence to support those claims. I am also distrustful of revelations that can't be stated concisely, and I don't find the logic of the claims of OO's revolutionary nature to be as compelling as Dijkstra's plain-talking, half-page letter.)
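
For readers who never met the troublesome GOTO, here is a hedged sketch in C (which, unlike most modern languages, still has a goto statement); both functions compute the same sum, but only one of them can be read without tracing jumps from label to label.

    #include <stdio.h>

    /* The kind of control flow Dijkstra objected to: the reader must
       follow jumps between labels to see that this is just a loop. */
    int sum_with_goto(void)
    {
        int i = 1, sum = 0;
    top:
        if (i > 5) goto done;
        sum += i;
        i++;
        goto top;
    done:
        return sum;
    }

    /* The structured equivalent: the loop's shape is visible at a glance. */
    int sum_structured(void)
    {
        int sum = 0;
        for (int i = 1; i <= 5; i++)
            sum += i;
        return sum;
    }

    int main(void)
    {
        printf("%d %d\n", sum_with_goto(), sum_structured());   /* both print 15 */
        return 0;
    }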

Good Software Going Bad

My first hint that software behavior might be beyond our ability to understand came in 1973. That was the year that I patched the operating system nucleus of a 512K IBM mainframe, a System/360 Model 50. In the Simpsons-Sears IT department we called that computer "the Blue" because of the colour of its side panels. Colour seemed to turn into emotion when that computer began experiencing "false program checks": our name for incidents in which the Blue's IBM hardware and software incorrectly reported encountering computer instructions whose operation codes were not within the S/360 lexicon.

Perfectly valid instructions were being rejected, and each false program check resulted in a disastrous collapse of the operating system. There was garbage coming out, even though no garbage was going in. The IBM engineers said that the problem might be due to one newly added "real-time" piece of hardware exposing the computer to the unpredictable communications world outside the sheltered machine room ... but they weren't sure. I was the whiz-kid systems programmer who fixed the problem (with what I called a Zero Level Interrupt Handler, aka ZLIH). That was my first encounter with a computer system whose behavior was beyond the ability of its makers to understand.

Like good hardware design, modern software design is supposed to create components whose failures can be independently diagnosed and corrected. Creating separate and separable hardware components is a natural process; how would you even go about creating a washer that didn't have a well-defined relationship with the screw and nut with which it was connected? But it's not as natural, or as easy, to do that with software; and it was especially hard in the early days, when resources were limited, and taking a shortcut that introduced inter-component dependencies might be the only way to get the essential functions to fit into the available programming space. And we were new at that game; the concept of modular programming was not well known until the 1970s. In any case, it didn't really matter all that much that the software components were not always independent of each other; there was little enough software that the possible side-effects could be managed. But then hardware kept improving, and the space available for computer programs kept growing. In 1973, a typical IBM mainframe computer had 512 kilobytes of main memory (today called RAM); today's IBM Z900 mainframe can have up to 64 gigabytes. That's 128,000 times as much RAM available to hold computer software and data. As well, that 1973 computer had a single general purpose "Central Processing Unit", while each of today's Z900s can have up to 64 processors.
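
Coming back to the washer and the nut for a moment, here is a toy illustration in C (the names are invented) of what a well-separated software component looks like, next to the shortcut style of shared state that the early days encouraged.

    #include <stdio.h>

    /* A modular job counter: the rest of the program sees only these
       two functions, never the variable they protect. */
    static int job_count = 0;            /* internal state, hidden from other files */

    void jobs_record_one(void) { job_count++; }
    int  jobs_total(void)      { return job_count; }

    /* The old shortcut style: shared state that any component may change. */
    int shared_scratch = 0;

    int main(void)
    {
        jobs_record_one();
        jobs_record_one();
        printf("jobs run: %d\n", jobs_total());   /* 2; only this module can affect it */

        shared_scratch = 17;             /* which other components depend on this? who knows */
        printf("scratch:  %d\n", shared_scratch);
        return 0;
    }

A failure in the counter can be diagnosed by looking at two small functions; a failure involving the shared variable can implicate every component that ever touched it.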

A Parkinson's law of computer hardware causes there to be an awful lot more software today than there was in the days of my ZLIH. Ten years after my ZLIH, on what in the 1980s were called "microcomputers", the ancestor of the first IBM PC's operating system comprised just 4,000 lines of code. Today, the 512K of RAM on the 1970s trailer-sized Blue is less than 2% of the memory in the 32 Megabyte Palm PDA that fits in my jacket pocket (not to mention the gigabytes of audio and video memory on the iPod on my belt). My current main home/office computer system has 512 Megabytes of RAM. That system also has 300 gigabytes of "hard disk" storage containing well over half a million separate files. Only a small fraction of those files is my own creative output or data I've collected; most of them are computer software components. There's software for editing text, pictures, sounds, and video; software for browsing, emailing, and diagnosing on the Internet; software for "relaxing" with puzzles or word games or other recreations; software for keeping my PC from self-destructing.

I don't believe that computers will ever be able to program themselves (at least not as well as humans can), but I see the dividing lines between software and data getting blurrier.

For example, when I select certain display options in Dreamweaver, the software tool that I use to edit this online essay, statements in the programming language JavaScript are automatically generated.
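
That blurring is easy to demonstrate. The toy C program below (not what Dreamweaver actually does, just an invented illustration) writes a scrap of JavaScript into a hypothetical HTML file: to the program the JavaScript is only data, but it becomes software the moment a browser executes it.

    #include <stdio.h>

    int main(void)
    {
        FILE *out = fopen("generated.html", "w");   /* hypothetical output file */
        if (out == NULL) {
            perror("generated.html");
            return 1;
        }

        /* To this program the lines below are just text to be copied out;
           to a web browser they are executable code. */
        fprintf(out, "<button onclick=\"alert('Hello from generated code')\">\n");
        fprintf(out, "  Click me\n");
        fprintf(out, "</button>\n");

        fclose(out);
        printf("Wrote generated.html\n");
        return 0;
    }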

The words kilobytes and megabytes are 20th century! Today you can purchase a PC hard disk that will hold terabytes. And before the technology reaches quantum limits there will probably be hard drives storing petabytes (i.e. 1,000,000,000,000,000s of bytes), and maybe even exabytes (i.e. 1,000,000,000,000,000,000s of bytes).

Much of the future's tera-, peta-, and exabytes will be filled with data exhibiting some of the characteristics of software. But even if all that software is modular and structured and object-oriented, will it work?

Software Design

In the past fifty years, revolutions have occurred not just in the structuring of programming languages, but also in the process of designing software. Dijkstra's straightforward banishment of the GOTO spawned more sophisticated concepts of program structure, design, analysis, testing, and project management, developed by people such as Michael Jackson (no, a different one), Larry Constantine, and Ed Yourdon, and eventually led to university courses, programs, and whole disciplines. In 1999, promoting what he called "Mature Architecture Practices" of software design, Joe Batman of the Software Engineering Institute presented this academic mouthful:

Prescriptive architectures are so important because they directly attack the problem of how to insure that system concepts are created that are effective in reducing complexity and abstracting system characteristics so that human designers can retain intellectual control of the artifact they are creating.

I've been interested in the science of software design and software reliability for almost as long as the GOTO has been proscribed. I've attended conferences, read papers and books, and listened to lectures; and only in science fiction stories have I heard anyone question whether humans will always be able to comprehend the operation of our software constructs. But I believe that software systems can be created, and may already have been, that are beyond our comprehension.

Though many of us understand computer software well enough to maneuver it around our personal and professional information highways, very few of us can do much more to tune it than choosing prepackaged operational options. But there are software mechanics who designed and built those things, and who repair balky parts and change the digital oil. Why should computer software be qualitatively different from our other tools? After all, just like our cars and our home appliances, software is a product of the human mind; why should it be beyond our ability to understand?

These are the reasons why I believe that computer software can exist, and may already exist, that is too complex for us humans to understand:

  • Software is implemented using the formal instructions of a programming language, a consistent logical system, and fundamental theorems of logic declare that such systems are limited in their power to decide the truth or falsity of statements made within them.

  • Software is a product of human ingenuity, and an argument can be made that there is a fundamental limit to the ability of humans to understand their own thinking.

  • Software is instantiated physically within systems whose operations have in practice become unpredictable.

Gödel Incompleteness: Limits to Theory

Kurt Gödel was a logician, mathematician, and philosopher of mathematics whose best-known works were his incompleteness theorems, the most famous of which states that

For any consistent, recursively axiomatized system powerful enough to describe the arithmetic of the integers, there are true propositions about the integers that cannot be proved from the axioms.

"Hmmm .... Very interesting", you might say (especially if you were a math geek who had read Douglas Hofstadter 's 1999 cult classic "Gödel, Escher, Bach"). But why should you care if you haven't read it, and you're not a student majoring in the philosophy of mathematics?

You should care because the theorems proved that every logical system has limits (at least every "consistent" system does, and inconsistent systems are ... well, illogical and useless). Specifically, there are statements in any such system which, without stepping outside of the system, can neither be proved nor disproved. The undecidable statements in existing Gödel proofs are constructed by having a logical system reflect on its own structure; that self-reference is an essential root of the undecidability.
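
For readers who want to see the shape of that self-reference, here is the usual modern sketch in symbols (a simplification, not a proof). For a consistent, recursively axiomatized theory F strong enough to describe arithmetic, one can construct a sentence G_F which in effect says "I am not provable in F":

    G_F \;\leftrightarrow\; \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner)

If F is consistent it cannot prove G_F, and under a slightly stronger assumption it cannot prove \neg G_F either; the sentence is true but undecidable from inside F.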

I apologize for my impossibly brief (and perhaps flawed) explanation of "Gödel Incompleteness". You'll find much, much better explanations, and even theorem proofs (ostensibly) accessible to the lay reader, in Hofstadter's book, and in several books by Roger Penrose. My purpose here is to point out that because computing theory is modeled with consistent logical systems, the computer software algorithms those systems produce are necessarily limited in their ability to describe themselves.

Cognitive Closure: Limits to Thought

It is almost an article of "scientific faith" (a phrase which is a bit of an oxymoron) to believe that even if there are things we don't understand today, we will understand them in the future. After all, a good part of the history of human existence shows a steady (if not continuous) increase in our understanding of the universe. Do you believe that we will eventually understand everything? That there are no limits to the power of human thought?

Colin McGinn is a modern philosopher who also writes for the non-specialist audience; his book The Making of a Philosopher was on the New York Times best-seller list. McGinn suggested that we humans may be inherently unable to understand our own thinking, that we are subject to what he called cognitive closure.

In advancing that thesis, McGinn became to some an "anti-philosopher". It didn't make him very welcome in the philosophers' guild. Daniel Dennett, for one, said that McGinn's ideas are so wrong that they make him "ashamed to be in the same profession".

McGinn pointed to the evidence of three thousand years of philosophical thought, where revolutions have been very different from those in other disciplines. In physics and chemistry and biology, scientific revolutions have always built on previous theories. For example, Einstein didn't say that Newton was totally wrong; he said that Newton's conclusions were limited to the domains where things don't move very, very fast, or weigh very, very much. But the history of philosophy is not like the history of physics, said McGinn: it seems that every generation of philosophers routinely discards the ideas of its teachers as intellectual rubbish.

Trying to explain that failure to achieve philosophical continuity, McGinn pointed out that there is no reason to assume that our evolution as biological creatures equipped us to understand philosophy. Evolution gave us many other intellectual abilities, to recognize faces, to plan future actions, to understand language; but there is no developmental reason why evolution should have equipped us to understand our own thinking.

But even if McGinn is right and understanding the human mind is beyond us, why should computer software be beyond us as well? Elsewhere I have claimed that the human mind is more complicated than computer software, but I also believe that there could be a software/mind corollary to McGinn's theory of cognitive closure.

Chaotic Systems: Limits to Prediction

Computer software doesn't exist in isolated systems the way we thought it did in the simpler, "garbage in, garbage out" days of the 1950s, something that should be becoming more and more obvious. Software is always part of the larger systems in which it is embedded, which include the physical world and the human systems of local, national, and world economy and politics. All of those are what physicists and engineers call dynamical systems, whose most important attribute is that their future states can never be predicted with confidence, only modeled using "chaos theory". The most familiar dynamical system is a non-human one, the weather, where, so goes the old saw, the flapping of a butterfly's wings in the Amazon can affect the progress of a hurricane in Florida. Your MP3 player also exists within a large, dynamical system whose components include the hardware and software on the MP3 player and elsewhere in your home or office, plus the Internet, plus the international entertainment industry, to name just the most important. Your MP3 player's reliability, availability, and utility are all affected by its interactions with those systems.
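
You can watch that sensitivity to initial conditions on your own PC. The C sketch below iterates the logistic map, a standard textbook example of a chaotic system (it models nothing about real weather or real software); two starting values that differ by one part in a billion soon bear no resemblance to one another.

    #include <stdio.h>

    int main(void)
    {
        double a = 0.400000000;      /* two nearly identical starting points */
        double b = 0.400000001;
        const double r = 4.0;        /* a parameter value where the map is chaotic */

        for (int step = 1; step <= 50; step++) {
            a = r * a * (1.0 - a);   /* the logistic map: x' = r * x * (1 - x) */
            b = r * b * (1.0 - b);
            if (step % 10 == 0)
                printf("step %2d: a = %.6f  b = %.6f  difference = %+.6f\n",
                       step, a, b, a - b);
        }
        /* By the later steps the two trajectories have completely diverged:
           a one-part-in-a-billion difference in the input has become a
           wholesale difference in the outcome. */
        return 0;
    }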

The effects on embedded systems are not always obvious. I know about one home Internet connection that was adversely affected by one chaotic system in which it was embedded: the weather. My client had been encountering sporadic, unpredictable bouts of low capacity on his high-speed Internet link, with occasionally dropped connections. He endured many futile telephone conversations with his vendor trying to get that problem fixed. The first three technicians who visited him were unsuccessful in isolating the problem, after checking for faulty software, PC circuits, and telephone connectors. It wasn't until a fourth on-site visit that the problem was finally diagnosed and fixed, and that happened only because the technician listened to my client's offhand remark about how response times seemed to be slower when it rained. The probable explanation: rain-soaked branches were touching an outdoor junction box and shorting out connections inside. A quick snip of garden shears to the tree restored his connection. The unreliability of one man's Internet connection turned out to be connected to the movement of air masses over North America … maybe even to a butterfly in the Amazon!

With a worldwide system of millions of computers interacting with other computers and humans and the Internet, and even with the weather, the unavoidably chaotic nature of that worldwide system means that its future states are unpredictable.

Why does this matter?

But why should we care about all this? So what if computer software is incomplete, if we can't fully understand it, and if its future states are unpredictable? Things like art and music and literature are more complex than computer software, and even as we continue to learn about those subjects, our grasp of them remains less complete than our grasp of software.

Well, our inability to grasp art and music and literature fully doesn't limit our ability to engage in them. One wrong note doesn't ruin our enjoyment of an entire Beethoven symphony; when politicians deceive us with their usual logical fallacies they don't blurt out "invalid page fault" and choke to death. (Yes, I know; would that it were so!) That's not true with computers, where a tiny fault in one component can render an entire system unusable.

Late twentieth-century philosophers have solved Descartes's mind/body problem with the theory of computational functionalism, using a software/hardware metaphor to explain the embedding of cognition in the brain. I believe that philosophical school has inherited both the undecidability of the cognitive systems it models and the complexity of the physical systems within which those systems are embedded. Equating artificially created software to our evolved minds links software with the undecidable; and from being embedded in the real-world systems which are computers, software inherits the real world's complexity and chaos. The more embedded and connected those software systems become, the more unknowable and indeterminate they are.

What should we do about this?

How should we respond to the incomprehensibility of the software we create? Despite its complexity and flaws, software is more useful for very many applications than are alternative, manual systems. Can we afford to slow down the pace of software development? Should we change the methods we use to develop software?

I'm still thinking about that ...


Further Reading

Edsger Dijkstra: A Case against the GO TO Statement (1967)

This was the clarion call that started the "Structured Programming" revolution, declaring that the use of the GOTO statement limited programmers' ability to understand the programs that they were composing.

Hubert Dreyfus: What Computers (Still) Can’t Do (1972, 1979, 1992)

Dreyfus predicts that the AI project will fail because researchers' conceptions of mental functioning are naïve. He suggests that those researchers would do well to acquaint themselves with modern philosophical approaches to being human.

Colin McGinn: The Making of a Philosopher: My Journey Through Twentieth-Century Philosophy (2002)

Part memoir, part study, this is the self-portrait of a deeply intelligent mind as it develops over a life lived on both sides of the Atlantic, following the author from his early years in England, reading Descartes and Anselm, to his years in Los Angeles, then New York. McGinn presents a contemporary academic take on the great philosophical figures of the twentieth century.

Colin McGinn: Knowledge and Reality - Selected Essays (2002)

This book brings together a selection of Colin McGinn's philosophical essays from the 1970s to the 1990s, whose unifying theme is the relation between the mind and the world. The essays range over a set of prominent topics in contemporary philosophy, including the analysis of knowledge, the a priori, necessity, possible worlds, realism, mental representation, appearance and reality, and color.

Roger Penrose: The Emperor’s New Mind  (1989)

Penrose is a mathematician and physicist who believes that some aspects of the human mind will never be duplicated by artificial intelligence, supporting his view with material drawn from quantum mechanics and brain structure.

David Shenk: Data Smog  (1997)

Media scholar and Internet enthusiast Shenk examines the troubling effects of information proliferation on our bodies, our brains, our relationships, and our culture, then offers strikingly down-to-earth insights for coping with the deluge.

Norbert Wiener: The Human Use of Human Beings (1954)

This entirely equationless text is a popularization of mathematician Wiener's ideas about humans and machines, as well as a fascinating piece of philosophy and sociology.

Joseph Weizenbaum, Computer Power and Human Reason (1976)

A distinguished computer scientist's elucidation of the impact of scientific rationality on man's self-image.

Stan Yack
Instructional Designer and Softsmith