In the 1980s, I went on sabbatical leave from UMass to the GE Research Labs. The goal was to study artificial intelligence (AI) for engineering design. Fortunately for me, I was steered to a brilliant scientist at GE who was also interested in the idea. We collaborated for a year, wrote a couple of seminal publications, and I went back to UMass with a great deal of financial support from the National Science Foundation and various companies for advanced computers and graduate students to continue the research.
This was AI but not the AI you are hearing so much about today. There was a bubble of activity back then in what was called AI. We defined it as “getting the computer to do what had previously been better done by people.” Our specific goal was to learn how engineering designers do design. We would do that by learning to program the computer to do design. Engineering designers are brilliant practitioners, astonishingly capable of designing machines and products, including cars, airplanes, refrigerators, etc., but they are not reflective practitioners. That is, they cannot tell you how they do it; they just do it. It’s hard to teach or improve on ‘just do it’.
Back at UMass, we made considerable progress by trying to teach the computer to reason about the design process. (As you will see later in this article, that is not at all the way AI is being done today.) Our publications and my students were in great demand. After graduating with Master’s or Doctoral degrees, students went to places like NASA, Polaroid, several GE divisions, several top universities, and other good companies. Two went into business designing software for Sikorsky, Boeing, and others. They ultimately sold their company for millions, and established a John R. Dixon Fellowship at UMass.
How do designers do it? It’s iteration, but not trial and error. Human designers have knowledge, knowledge of how to evaluate a trial design and knowledge of how to make a redesign based on the evaluation. We gave the computer that kind of knowledge and it was able to do simple designs. That’s old-fashioned AI.
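That evaluate-then-redesign loop can be sketched in a few lines of code. This is only a toy illustration of the idea, not the actual system we built; the example problem (sizing a round rod to carry a tension load) and all the numbers in it are assumptions chosen for the sketch. The “evaluation knowledge” is the stress formula; the “redesign knowledge” is the rule that an overstressed rod needs a fatter diameter.

```python
import math

# Assumed values for the toy problem, not from any real design task.
LOAD = 10_000.0    # applied tension, newtons
ALLOWABLE = 250e6  # allowable stress, pascals

def evaluate(diameter):
    """Evaluation knowledge: axial stress = load / cross-sectional area."""
    area = math.pi * diameter ** 2 / 4
    return LOAD / area

def redesign(diameter):
    """Redesign knowledge: an overstressed rod gets 10% thicker."""
    return diameter * 1.1

# Start from a trial design and iterate: evaluate, then redesign.
diameter = 0.001  # initial trial: 1 mm
while evaluate(diameter) > ALLOWABLE:
    diameter = redesign(diameter)

print(f"final diameter: {diameter * 1000:.2f} mm")  # → final diameter: 7.40 mm
```

Notice that this is iteration guided by knowledge, not trial and error: each redesign moves purposefully toward a design that passes the evaluation.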
The new-fashioned AIs, like ChatGPT from a company called OpenAI and Bing from Microsoft, work instead with a huge-beyond-belief database of everything that’s on the Internet. It’s called ‘generative AI’ and uses a ‘large language model’, as large as all the language on the Internet, and more. I don’t know how they organize all that data, or how they access it efficiently, or how they choose what to reply, but they do it. Training their models takes time and money – about a billion dollars per model. And now they have also taught the computer to write realistic-sounding sentences, paragraphs, and whole essays with the data they retrieve.
You can experience the process on a small scale with your phone. Ask Siri or Alexa a question and you’ll get a usually-correct answer very quickly. For example, I asked “Who wrote the poem ‘The Children’s Hour?’” Before I could blink, Siri told me ‘The Children’s Hour’ was written by American poet Henry Wadsworth Longfellow and first published in the September 1860 issue of Atlantic Monthly magazine. (Note the extra info I got and that Siri can do good sentences.)
When you realize that millions of people world-wide can ask Siri literally millions of different questions, and nearly all receive good information immediately, you can begin to see what the new AI can do. ChatGPT and Bing (and there are others) are even bigger, quicker, and can write much longer, more comprehensive answers. This is an astonishing accomplishment. It can, however, make mistakes occasionally, and give wrong answers. The Bing version even invented a nefarious character, but he was quickly cancelled by the programmers.
I wanted to ask ChatGPT to write an essay critiquing the poem, but OpenAI wanted my phone number as a prerequisite for logging in to their site. Why do they need my phone number? Why can’t they just ask ChatGPT? I refused to give it, so I can’t give you the result, but I’m sure it would be quite fine. Students, after all, are using the software to write homework assignments more difficult than my little question about a poem.
There are very real questions about the new AI. Serious, astute people are worried about what it may become, and the harms it may then do. The best article I have read appeared in the Wall Street Journal, and was written by Henry Kissinger (yes, the famous one), Eric Schmidt (former CEO of Google), and Daniel Huttenlocher (dean of MIT’s college of computing). Their article, the longest and most scholarly I have ever seen in a newspaper, describes the technology and what it can do. Then these erudite authors, who have written a book entitled “The Age of AI and Our Human Future”, lay out their concerns, which are many and significant. There are too many concerns, and many are too complex, for me to recount in this space. The most important, it seems to me, have to do with how this AI may change us human beings, make us less smart, less critical, and very dependent on and trusting of the computer. The AI may well modify how we view knowledge, politics, and each other. Though the machine is not conscious, we might come to think it is. It can provide probabilistic judgements as if it were conscious. And, these authors ask, who will control it?
The large language models and generative AI do not display human intelligence. The test for that is called a Turing Test: If a machine can engage in a conversation with a human without being detected as a machine, it has demonstrated human intelligence. They are not there — yet.
I think the fears and concerns are a little too much. I am reminded of when the slide rule was being replaced by the hand calculator. There were faculty who resisted the change because, they argued, students would lose their ability to do calculations. Didn’t happen; students, relieved of boring calculations, just got better at a higher-level skill: quantitative reasoning.
But that was then, this is now. This new AI is a very big deal, not a hand calculator. It’s a revolution in scale like the Gutenberg printing press. This is words – text – not numbers. The machine doesn’t understand itself; its words are without meaning. Our human thoughts with meaning generally precede our words. But, like people who speak without thinking, the machine reverses the process. The words come before the meaning, if any.
The new technology has attracted investors, big investors. They expect to make big money on their investments. These investments ensure that the technology will continue to grow. What will happen if the machine/computer grows out of control? My first reaction to this question was: We’ll pull the plug. Then I realized there may not be a plug to pull, and worse: we probably won’t even realize the computer/machine is out of control. We will be immersed in its words, enjoying the services it provides, letting our minds and our humanness slide away. Just Sayin’.