© 2013 Stan Yack
When I last checked, the Google Internet search engine reported that it found the phrase "computer bug" in two hundred and forty thousand web pages. One of those pages shows the picture of a moth, which in 1945 was found in an early computer, the Mark II Aiken Relay Calculator. The Mark II operators at Harvard University later claimed that was the first case of an actual bug being found inside a computer, and that by removing the moth they had "debugged" the computer. But it was much earlier, in 1889 that Thomas Edison blamed a troublesome insect for a mechanical failure when he reported that that he had "worked for two days on a bug in my phonograph".
I was born just two years after the first bug was reported at Harvard, and for the first 16 years of my life I was free from digital grief. Then in 1963 my high school math class spent the day at the University of Waterloo, learning how to create computer programs by marking magnetic circles on cards. I was doing okay until the point on my special pencil broke. I blamed myself for pressing too hard, but today I consider that self-criticism my earliest acceptance of the first computer myth.
I call that a "myth" because, like the other fallacies listed here, it has achieved the fabulous, otherworldly reality of an historical saga. Lots of people have told me that when they have trouble using their computer they usually assume that it's their own fault. Many of them do that because they believe that computers never make mistakes, and they're intimidated by that supposed perfection. When something goes wrong there's always something they've just done that seems more likely to be the cause of that failure than their perfect computer making a mistake. Of course, computers were never perfect. As they have accumulated more and more hardware and software over the years, they have become less and less perfect; and the root causes of their failures are now often far from obvious.
In my years as a user and programmer of computers, I've experienced more than a few computer failures. I've encountered incorrect displays, locked keyboards, unreadable diskettes, crashed hard drives and, more recently, unavailable websites. I've long since stopped believing that computer failures are my fault (except occasionally when I was the one who did their programming).
As we all gain more experience with computer systems, fewer and fewer of us assume that when those systems don't work properly, it must be our own fault. But there are still many of us, especially novices, who believe in the myth of computer infallibility.
Have you ever experienced inexplicable computer behaviour, maybe a major failure that required you to restart an application or reboot your system? A quick scan of the "computer bug" Web pages listed by Google will show that you're not alone. When you do encounter a computer failure, take comfort in the doctrine accepted by user interface professionals that using a computer in a reasonable way should never be labeled a user error. Students are taught in introductory courses with names like "Human/Computer Engineering 101" that when some normal action like opening a window or clicking on a hyperlink results in a "crash", the user action should not be considered the cause of the failure.
It's not even your fault when you can't figure out how to get your computer system to do something that looks reasonable. Systems should be constructed in a way that helps you find the way to do things. Say that you're browsing an Internet website created by some young, "age-ist" designer who's used text too small to be read without a magnifying glass. Captain Picard on the bridge of the starship Enterprise would have no trouble: He'd say "Computer, make text larger," and the problem would be fixed. But in the real world, computers may be listening, but they aren't really understanding (though some, like the iPhone's Siri, are pretty good retail shills!). To change the font size in your web browser you'll have to select a menu item, and/or type some numbers, and/or pick an icon in a palette, and/or drag a control, and/or click a button ...
For an experienced computer user seeking an answer to a simple question like "How do I increase the font size?", it will seem logical to turn to the computer's online help. But in a modern, sophisticated application like a web browser or a word processor, the quantity of online help text can be enormous; and the help tools for narrowing your search don't always lead to the answer you're seeking. So you may turn to the paper documentation (which is usually on a CD these days; pretty useless if your computer has crashed!), or to the Internet, or to your friends and colleagues, or in desperation to your young nephew the computer nerd.
If you persevere, you may discover what looks like the solution, but sometimes the computer doesn't respond the way you thought it would. So you vary your request (maybe typing in upper case, or clicking on "OK" instead of "Done"); sometimes that will help, but not often. You try another variation, and another, and another ... and every once in a while, as if to emphasize its disrespect for non-digital beings like us, your cheeky computer will lock up and stop responding entirely!
Alan Cooper, the designer of a revolutionary computer application construction tool, shamed many software developers in The Inmates Are Running The Asylum, calling their creations dancing bearware, "brought in from the wild not when it dances well, but as soon as it can dance at all". Cooper said that software is often hard to use because programmers and engineers design computer interfaces for other programmers and engineers, not for ordinary users, and that most users just "grit their teeth and put up with the abuse".
I say don't let yourself be boggled by the technical gibberish. Meaningless error messages like "illegal instruction" or "invalid page fault" or "memory leak" were created to assist software diagnosticians fluent in technical computer dialects. The display of such messages to an ordinary user is a symptom of a software manufacturer's improper design, or bad implementation, or inadequate testing or all three.
No one can deny that by using computers humans have been able to do some amazing things. We've watched movies of a flight over the canyons of Mars; we've seen blood flowing inside our own hearts; we've shrunk the planet with a worldwide communication network; one or two of us have even golfed on the moon.
And computers keep getting faster and roomier. For over forty years now Moore's Law has held true, as every year and a half or so we've seen a doubling in computer capacity and speed.
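The arithmetic behind that claim is easy to sketch. Assuming the simplified rule of one doubling every eighteen months (a rough statement of Moore's Law, not an exact one), a few lines of Python show how quickly the factor compounds over forty years:

```python
# Illustrative Moore's Law-style arithmetic: capacity doubles every
# 18 months, compounded over a given number of years.
def growth_factor(years, doubling_period_years=1.5):
    """Return the multiplicative capacity increase after `years` years."""
    return 2 ** (years / doubling_period_years)

factor = growth_factor(40)
print(f"After 40 years: roughly {factor:.2e} times the original capacity")
```

Forty years of such doublings multiplies capacity by more than a hundred million, which is why quantitative progress so easily slides into qualitative speculation.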
But factual reports of quantitative progress sometimes slip into science fiction speculation, as has happened with the next three myths.
It's true that computers can forecast today's weather accurately, at least most of the time; but we are prepared to be surprised by rain or snow tomorrow, and we shouldn't expect any certainty in the forecast for the week after next. To predict the weather at an outdoor wedding reception Saturday afternoon six months from now ... we might as well use astrology, numerology, or pyromancy.
The myth of more powerful computers eventually delivering accurate, long-term local weather forecasting has been categorically refuted. Meteorologists, physicists and mathematicians know that precise prediction of the future state of a "chaotic" system like the weather is not just a time-consuming computation problem; it's a theoretical impossibility. The behavior of "dynamical systems" (the scientific objects studied by "chaos theory") like the weather is inherently unpredictable, because those dynamical systems exhibit what's called a "sensitive dependence on initial conditions". That's often called the butterfly effect, a metaphor for an unobservably small change, like a tiny wisp of air moved aside by a butterfly's wings, having an enormous effect, like shifting a hurricane's path by several hundred miles.
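Sensitive dependence is easy to demonstrate on any computer. The sketch below uses the textbook logistic map in its chaotic regime (a standard classroom example, not a real weather model): two trajectories whose starting points differ by one part in a billion soon diverge beyond recognition.

```python
# Demonstrating "sensitive dependence on initial conditions" with the
# logistic map x -> r*x*(1-x) at r = 4, where its behavior is chaotic.
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map `steps` times starting from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000000)
b = logistic_trajectory(0.400000001)  # differs by one part in a billion

gap0 = abs(a[0] - b[0])
gap_max = max(abs(x - y) for x, y in zip(a, b))
print(f"initial gap: {gap0:.1e}; largest gap over 50 steps: {gap_max:.3f}")
```

The tiny initial difference, amplified roughly by a factor of two per step, swamps the computation within a few dozen iterations; no amount of extra computing power fixes an input you cannot measure exactly.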
The unavoidable unpredictability of the future state of a dynamical system means that a computer accurately forecasting next month's weather (never mind next year's) is no more conceivable than a computer squaring the circle, or flying faster than the speed of light.
And what about those Star Trek computers that follow spoken instructions?
A “natural” language is what people use to communicate with each other, like English, French, Mandarin, Swahili. A “formal” language is what people use to submit instructions to a computer, like COBOL, FORTRAN, Java, C++. Natural language understanding is what I hope you’re experiencing right now: making sense of this essay. As children all normal human beings learn to speak and understand their native natural languages. Does it surprise you to learn that no computer has ever matched that accomplishment?
In the 1960s, computer scientist and educator Joseph Weizenbaum created a computer program named ELIZA that dispensed prepackaged, pseudo-expert advice. In reaction to its human correspondents' typing, the ELIZA program generated what appeared to be pertinent feedback. The program did that by scanning input text for trigger words and regurgitating previously prepared questions and comments modeled after those of a Rogerian psychotherapist. When it found the word "mother" in the input, it might respond with the question "Did you have a happy childhood?". In no sense was there a thoughtful ELIZA that "understood" the symptoms of its patients and the "wisdom" of its diagnoses. There was just a computer program performing some simple syntactic manipulations, matching input text to table entries that some other human had previously decided were catch-all reasonable responses.
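A toy version of that trigger-word mechanism takes only a few lines. The rules below are invented for illustration (they are not Weizenbaum's actual script), but the matching logic is the same in spirit: scan the input for a keyword and return a canned, therapist-flavoured response.

```python
# A toy ELIZA-style responder: hypothetical trigger words mapped to
# previously prepared, Rogerian-sounding replies.
RULES = [
    ("mother", "Did you have a happy childhood?"),
    ("always", "Can you think of a specific example?"),
    ("computer", "Do machines worry you?"),
]
DEFAULT = "Please go on."

def respond(text):
    """Return the canned reply for the first trigger found in `text`."""
    lowered = text.lower()
    for trigger, response in RULES:
        if trigger in lowered:
            return response
    return DEFAULT

print(respond("My mother never listens to me."))
```

Nothing here "understands" anything; the program succeeds only because a human decided in advance which replies would sound plausible for which words.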
Weizenbaum knew full well that his computer program was hopelessly unprepared to master the complexities of natural language. He called ELIZA a "parody of a psychotherapist", and said that he wrote that program in part to expose the myth of the imminent arrival of computer understanding of natural language. But he wasn't prepared for the reaction of his test subjects, as credulous members of his MIT academic community (computer scientists and psychologists hopefully excluded) huddled over their teletypes, conducting what they believed were conversations with an insightful computerized therapist.
Forty years after ELIZA there are some advanced computerized language tools that provide useful functions beyond basic word processing. There is software that performs first draft translations of technical documents to assist human experts; and there is software that can perform fairly accurate single-speaker voice-to-text transcription. But, pace the iPhone's Siri, there are no computerized tools that demonstrate natural language understanding at the level of the average three-year-old child.
Computers far exceed the abilities of humans to calculate, to store and retrieve information, and to quickly and reliably respond to stimuli in simple ways. Computer programs have been written to solve complex, "knowledge-based" problems in fields as diverse as mineralogy, drug interactions, and chess. But computers are not "intelligent".
For several years there have been contests at universities like MIT and Caltech where computer programs are presented that seek to pass a limited version of the Turing test for computer intelligence. The contestant programs exchange email messages with human examiners in an attempt to fool those examiners into judging that the computer correspondents are human.
The candidate computer programs do much better when test dialogues are restricted in content to specific limited domains of discourse, like places to eat sushi in Manhattan, or references to trees in 19th century English poetry, or World Series baseball records. But that's not what Alan Turing had in mind when he invented his test in 1950. As a measure of its intelligence Turing proposed testing a computer for total "natural language understanding", and that can only be exhibited when the conversation is not limited.
Turing knew that in real human dialogues there are never predefined domains. A human dialogue might range anywhere over the whole of human existence or beyond: a discussion of computer architecture leading to chess playing programs, leading to Bobby Fischer, leading to world politics, leading to ... who knows. In determining what they believe is the best frame of reference for a discussion, humans solve without apparent effort what cognitive scientists call the frame problem, something no computer has ever done.
You can buy an inexpensive, computerized toy that will beat you at checkers, or Scrabble, or chess (if you're a grandmaster, you'll have to pay a bit more), but for no amount of money can you buy a computerized assistant that can follow instructions like these:
Most executive assistants could easily handle a request like that, and probably wouldn't consider it to be much of an intellectual challenge. But there is no computer anywhere on this planet that can do it. Humans have not yet built a computer that can match the intelligence of the average three-year-old child, never mind that of a mature adult. And there are many things that even a one-year-old can do that a computer can't, like recognize his mother's voice in a noisy room, and know when he's being ignored.
Intelligent androids have been a science fiction staple for many years, from Karel Capek's robots to Star Trek's all-knowing but unfeeling android Data. But in the real world, computers are just not as smart as people. Will they ever be? Well, maybe. But I'll go out on a limb and predict that they won't. Has that got you itching to tell me about the debunking of some famous nay-sayer of human progress? Do you want to report how history has lampooned their prediction that "<technology> will never <do something>"?
Well I too have read about many of those predictions, like "radio has no future" (Lord Kelvin) and "man will never fly" (too many forecasters to list here). I know that some respected computer science experts have predicted that there will eventually be computers smarter than humans. And as an orthodox skeptic I have to admit that the day may come when my opinion that computers will always be dumb is condemned as bigoted, pre-singularity "human race-ism". But I know that there are some things that science and mathematics have shown to be impossible, even in theory (like dividing by zero, or squaring the circle). One of the "impossibles" of mathematics is that no logical system can ever be complete. A corollary that no set of computer algorithms can be complete is one of the things that has led me to conclude that a "thinking computer" is just not possible.
The philosophy of science tells us that a negative prediction like "computers will never think" can never be proven to be true, but only shown to be false by a counterexample, like the arrival of a mindful computer. But the day of intelligent computers has certainly not yet arrived. Both the yes and no sides of the "Will there ever be artificial intelligence?" debate use sophisticated computer tools (like the Internet) to do their research and disseminate their opinions. But there are no computer intelligences making any comments. (Two of the most famous human debaters are the pro-AI Marvin Minsky and the anti-AI Roger Penrose.)
Enough about the possibility of computers replacing us. Surely, you say, I must concede that …
Well I do admit that computers help us to do some things very much faster, and that they have let us do things that we couldn't do before, some of which we did not even imagine. But do computers actually make us more productive?
When everything works as it's supposed to … wow! I have used powerful computerized tools to review and edit documents, music, pictures, movies. I can find information on almost any topic by typing a few words into an Internet search engine. I performed all of the research for this essay without leaving my office (except to get parts to repair a baulky computer).
When I recently bought and installed a wireless Internet router, it took me less than half an hour's work to get it working. The next day, my wife's new notebook computer arrived, and twenty minutes after she unpacked it, she had her new computer connected to the Net through the new router, downloading her email from work. But computer tools are not always trouble-free.
In order to accomplish a simple task like "get my new email", a great many components in an enormously complex, worldwide network of hardware and software must operate properly. Those systems do usually work; but the complexity and volatility and number of interactions of the system's components contribute to a "brittleness". And when failures do occur, they often manifest symptoms whose diagnosis and cure seem to be beyond human understanding.
Just a week before my wife’s amazing success connecting her wireless notebook to the Net, I had an encounter with a "simple" software change that didn't go nearly as smoothly.
The trouble began after I had finally gotten fed up with various Windows 98 shortcomings (Don't ask!) and decided to upgrade to Windows XP Pro (as all my techie friends had been telling me to do for some time). I plugged in the XP Pro installation CD, and to all of the installer's prompts I responded with default answers. In just over an hour the upgrade was complete ... but the upgraded computer could no longer access the Internet. Well, I'm no newbie; I knew what to do next: I used an Internet search engine to research the problem (accessing the Net from a backup computer), after which I took remedial action: I turned devices off and back on again, replugged and swapped connecting cables, changed control panel settings, deleted and reinstalled software ... but none of that helped. The newly-installed Windows XP Pro just wouldn't connect to the Internet, even though it was using the same hardware through which Windows 98 had connected just an hour before.
For a couple of days I accessed the Net using my older, Windows 95 backup system. But I couldn't stand the reduced performance of that slower computer. So I "uninstalled" XP Pro (kudos to Microsoft for providing an easy-to-use uninstaller) on my faster, main computer, which, back on Windows 98, could again connect to the Net. But I was still committed to dumping Windows 98 (Again, don't ask), and to do that I had to resolve the problem of XP Pro not connecting to the Net.
I spent more hours searching the Net for clues, I talked to my many technical contacts, and I stewed in vendors' telephone voice jails. After a week I had the answer: buy and install a new hard disk, and perform a "clean" install of XP Pro, instead of the "upgrade" I'd done to my existing Windows 98 operating system. And that worked. The cleanly installed XP Pro system connected to the Net right away. But because I'd installed a virgin system to replace my personalized Windows 98, I had to waste time reinstalling software and restoring settings. The total cost of my upgrade to XP Pro was not just the $200 for the software and $150 for a new hard disk, but also the 20 hours I spent troubleshooting, reinstalling, and repersonalizing the system. Elsewhere I've written stories about other people I've helped with computer problems, along with my advice for saving time dealing with your own.
But those stories are just anecdotal evidence, a researcher would say. Humans have been using computers for over 50 years, intensively for at least 20. Is there any empirical evidence about the effect of computers on the productivity of the computer-using community worldwide? Well, yes there is. A 1996 Gartner Group study for STB Accounting Systems found that PC users typically spend 43% of their time "futzing around", a term coined by Thomas Landauer for time spent doing useless things that you hope will enable you to do useful things.
How much futzing around do you do with your computer? How often do you wait on hold for 30 minutes, a captive of lo-fi Muzak and vendor advertising, just to be told that you should reinstall some software component? How much time do you spend installing obligatory software upgrades and learning to use "improved" user interfaces, or updating spam filters and purging email, or updating virus definitions, or (God forbid!) recovering from a malware infection?
I try to have something else to do while my computer is occupied with my futzing: something to read, coffee or food to prepare, a bathroom break to take. I treat the requirement for something interruptible to do as a necessity for my using a computer. (It's like being sure that you have something to read when you go to the bank planning to communicate with a human being.) Another way to reduce the time lost futzing around is to have a second computer to use when your first computer is unavailable; in my case that's two spare computers. My home/office computer network consists of a Macintosh, a PC running Linux, a PC running Windows, and an iPod Touch, all hi-speed connected to the Net and to each other. While one computer is busy with some time-consuming futzing, like defragging a hard drive, I have the other computers available to keep me amused. But of course futzing usually requires that you pay attention to what is going on, at least occasionally replying OK or Next or "For heaven's sake, don't do that!".
It seems that not a day goes by that I'm not affected by at least one computer failure. Most of the disruptions are minor, and I adapt quickly (but not cheerfully). But occasionally a more serious problem will occur, and I will be presented with some incomprehensible software death cry such as a Windows “invalid page fault” or a Macintosh "bus error". Occasionally one of my computers will lock up totally and I have to “reboot”, or even power the computer off and back on.
Why do these things happen to me? I'm a conscientious cyber-citizen; I don't install under-tested, "bleeding edge" software, and I no longer hack down into the operating system to modify the behavior of my tools. I'm an experienced computer user, so sure I open a lot of windows, and run several applications at once. But isn't that what I'm supposed to be able to do? When I reported my freewheeling computer behavior to a less adventurous friend, he compared my actions to someone trying to drive at 200 kph just because the car's speedometer goes up that high. Well I say that operating a computer isn't like driving a car. Computers don't come with warning signs about not opening another window in your web browser when you're using it to watch a streaming video, or about not playing an MP3 while downloading a software update, or about not using other applications while a CD is being formatted. And comparing computers to automobiles? Well, when you start out in your compact hybrid to drive to the New England seaside, you're not concerned that you may end up stranded in the Arizona desert in a broken down SUV with an oil leak and four flat tires. But things like that seem to happen all the time with computers.
The solutions to computer problems are almost never simple.
I heard about one intermittently failing Internet connection that wasn't fixed until the fourth on-site vendor visit. And that happened only because the customer happened to mention that his Internet response times seemed to be worse when it rained, and the serviceman was listening. It turned out that rain-soaked tree branches had been falling on the service provider's junction box, shorting connections inside. And the worldwide system of hardware, software and communications technology that supports your email and web browsing? More and more often, its failures are triggered by distant events, like a power failure in Michigan, or the release of malware by a computer hacker in Germany.
You might think that the activity of your computer system can be predicted with certainty, since the operation of its hardware and software is based on well-understood physical and mathematical principles. But your computer system is embedded in the larger, chaotic systems of the natural and human world. And as many of you might have noticed (e.g. listening to politicians or commercial advertising), more often than not when simple answers are offered for complex human problems, those answers are based on misconceptions and/or misrepresentations.
Sometimes a rant like this one will prompt a born-again technophile to tell me that he has a simple solution: "Use Rogers, not Sympatico" (or vice versa), "Switch to Linux", or "Upgrade to XP Pro." Well, I've been using XP Pro for quite a while now, and it does seem better than what I had before (Windows 98): applications do still freeze occasionally, but Windows almost never does. Because of my positive experiences I tell people that it's the best Windows operating system to use (warning them of course to do a "clean" install, or to find it pre-installed on a new PC). Windows Vista followed XP, and was an acknowledged disaster. Windows 7 and 8 have since been released, and the word is that they're pretty good. But moving to a new Windows version means upgrading your other software (like word processors, sound and video editors), often at great expense and great inconvenience; that's another story ...
Upgrading to the latest software version won't guarantee you fewer problems, especially if you are using less than "state of the art" hardware (like a PC more than a few months old). I heard about one system running Windows XP Home on slightly older hardware that occasionally crashed, and even rebooted itself when it wasn't even being used. Its problems were eliminated only after the installation of a newer "motherboard" (the PC equivalent of a brain transplant). The cyber-doctor who did that digital surgery said that he thought that the problems might have been caused by "BIOS incompatibilities". Don't worry if you don't know what that means (supplying the operating system for the first IBM PC is part of what launched Bill Gates's career). The important lesson is that hardware and software don't always play nice together, and that even for a knowledgeable technician, diagnosing their misbehaviours can be quite a challenge.
I've concluded that as computer systems become more and more complex and embedded, they reach a point where they must be regarded as dynamical systems; that the binary absoluteness of the digital computer metaphor (it's either "one" or "zero", "on" or "off") is overwhelmed by chaotic interactions with elements of the interconnected systems. I think that more and more often we will find computer systems producing "garbage output" even when they have been provided with non-garbage input.
But aren't we better off with computers than without them?
There are obviously many ways that our lives have been improved by computers, and so you may think I've finally crossed the line by questioning this last proposition. Without computers, my doctor wouldn't have had the scanner that helped him diagnose and repair my injured knee; I wouldn't be able to plug in a plastic card and get cash at 3 am; there would be no telephone CallerID freeing me from inadvertently answering telemarketing calls at dinner time.
But many far less desirable things would not exist without computers, like telephone voice jails that block customer access to service personnel, or Net porn and email spam. And without computers, you wouldn't experience the loss of productivity and self-confidence when your professional or personal life is disrupted by the demands and side-effects of mandatory updates to your computer tools.
Bjarne Stroustrup, the designer of the C++ computer programming language, says that for some time he's wished his computer was as easy to use as his telephone. He says that his wish finally came true ... since he now has trouble using his (computerized) telephone.
Computers have improved my own life in several ways. They have given me years of employment as a computer softsmith and as an online instructional designer; and they have provided me with anecdotes for essays like the one you are reading. But my years of experience with computers, and my professional focus on the quality of their "user interfaces", have inclined me to be alert to encounters with computer systems where I am treated badly. I've decided that computer systems will not be a positive factor in my life, or yours, if their manufacturers and distributors don't spend a greater effort designing and maintaining quality user interfaces, and performing more thorough testing, especially of new and updated systems.
So what remedies am I suggesting? Perhaps members of design or development or testing teams should stand up to their bosses and say things like "My responsibility to the community compels me to reject your unrealistic project plan", or "We should tell customers the truth about what our product can and can't do". Well, sure, that would help. But not all of us work as developers of computer systems (at least not yet); and not many of us have the courage of a Gandhi or a Mandela to change the world. And of course the world just doesn't work that way; a single individual is almost always powerless. But as Margaret Mead and others have told us, power can be and always has been exercised by individuals working together. This essay is my attempt to spread the word, and to mobilize resistance.
In my call to expose false claims of computer perfection I will do my best to shame vendors who distribute intractable and unreliable computer systems. I will resist interactions where I am forced to deal with uncomprehending automated systems, and I will demand the right to conduct my business transactions with an entity with the natural language understanding of at least a two-year-old child!
I hope you haven't judged my arguments about computer abuse of human beings to be sophomoric, or fanciful, or even dead wrong.
Even if you find my arguments reasonable, in a troubled world with so many important causes demanding your attention, you may not share my commitment to expend much of an effort trying to improve the quality of our computer tools.
But you can help in that cause just by being skeptical of advertisers' claims, and by not accepting blame for the mistakes and misdeeds of others. And if the time does come when you do decide that further action is warranted, remember to be conscious of the motives of the powerful and self-interested who will scorn you, and be wary of those offering simple solutions to complex problems.
About his former employer, Cooper says: "Microsoft does little or no design, and its products are famous for making people feel stupid."
Dreyfus predicts that the AI project will fail because researchers' conceptions of mental functioning are naïve. He suggests that those researchers would do well to acquaint themselves with modern philosophical approaches to being human.
Garson explores in thought-provoking, at times frightening, detail the “second industrial revolution,” showing how and why computer technology is dehumanizing the modern workplace.
Penrose is a mathematician and physicist who believes that some aspects of the human mind will never be duplicated by artificial intelligence, supporting his view with material drawn from quantum mechanics and brain structure.
Roszak cuts through the advertising hype, media fictions, and commercial propaganda that have heralded the high-tech revolution and shows us the risks of confusing what computers can do well (process and store information) with what they cannot do at all (reason and feel).
Media scholar and Internet enthusiast Shenk examines the troubling effects of information proliferation on our bodies, our brains, our relationships, and our culture, then offers strikingly down-to-earth insights for coping with the deluge.
Stoll looked at the Internet as it was in 1995, not as it was promised to be, intelligently questioning where it was leading us.
Ullman uses her experiences as a programmer, writer, commentator, and consultant to show the many contradictions that can arise from technology, discussing how technology has affected not only the workplace but the work space.
This entirely equationless text is a popularization of mathematician Wiener's ideas about humans and machines, as well as a fascinating piece of philosophy and sociology.
A distinguished computer scientist's elucidation of the impact of scientific rationality on man's self-image.