
These men are credited as the founding fathers of AI, and two of them are from Carnegie Mellon

Story by MEGAN TROTTER, DINARI CLACKS, HALEY MORELAND
TribLive

Sept. 8, 2024

Pittsburgh may not be the birthplace of AI, but it served as a cradle.

And the technology is older than you probably think.

Carnegie Mellon University is one of the places where AI was being developed more than 50 years ago.

Tom Mitchell, who has taught at Carnegie Mellon since 1986 and led the university’s Machine Learning Department for the first 10 years of its operation, said AI has been around longer than people realize.

"Most people kind of attribute the start of the field to a meeting that was held back in the 1950s at Dartmouth (College) about AI,” he said. "There were a number of people who came to be known as the … founding fathers of the field.”

Two of them were professors at Carnegie Mellon: Allen Newell and Herbert Simon.

Newell, who also worked for the RAND Corp., was a researcher in computer science and cognitive psychology. He died in 1992. Simon, a Nobel Prize winner, was well known for his work in computer science and economics as well as cognitive psychology. He died in 2001.

Both won awards for their work, which included developing the General Problem Solver, one of the first AI programs, in 1957. In 1975, the pair won the A.M. Turing Award, named for AI pioneer Alan Turing and widely considered the Nobel Prize of computer science.

In the 1950s, Newell and Simon founded at Carnegie Mellon what is believed to be the first hub dedicated to AI research. That laid the foundation for the university’s research and its Robotics Institute. The Robotics Institute expanded in 1997 and created the Center for Automated Learning and Discovery, which evolved into the current Machine Learning Department.

The others credited with founding AI are a virtual Who’s Who of pioneering computer scientists.

Alan Turing (1912-54): A British logician and a pioneer in computers, he may have been the first person to give a public lecture that mentioned artificial intelligence. In 1947 in London, he said humans could develop “a machine that can learn from experience” and that the “possibility of letting the machine change its own instructions provides the mechanism for this,” according to Quidgest, an online scientific paper clearinghouse and software developer.

Even before that, in 1935, Turing described a machine with infinite memory and a scanner that could trawl through the memory, finding what it needs and creating new entries. It became known as the universal Turing machine and became the basis of every modern-day computer. During World War II, Turing worked with the British government to crack German cipher machines.

In 1948, he published “Intelligent Machinery,” a report that contained many of the base concepts of artificial intelligence. In the early 1950s, he developed the Turing Test, designed to determine whether a machine was “smart” enough to pass for a human. It wasn’t until 2014 that a chatbot called Eugene Goostman, designed to mimic a 13-year-old Ukrainian boy, managed to convince at least 30% of judges, the commonly cited passing threshold, that it was human.
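The scoring behind that result is simple arithmetic. Here is a minimal sketch in Python of the pass/fail check; the judge verdicts are invented for illustration, and the 30% bar is the threshold commonly traced to Turing’s own prediction:

    # Toy scoring of a Turing Test session: each judge chats with a
    # hidden partner and records whether they believed it was human.
    verdicts = [True, False, True, False, False, False, True, False, False, True]

    fooled = sum(verdicts) / len(verdicts)  # fraction of judges fooled
    THRESHOLD = 0.30                        # commonly cited passing bar

    print(f"{fooled:.0%} of judges judged the machine human")
    print("Passes" if fooled >= THRESHOLD else "Fails")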

John McCarthy (1927-2011): Another Turing Award winner, McCarthy is credited with contributing to some of the most important technologies ever devised, including the internet, robots and programming languages. He invented the List Processing Language (Lisp), one of the basic tools used in the development of AI. In 1959, he invented a tool that automatically frees a computer’s Random Access Memory by removing data a program is no longer using. That tool, combined with Lisp, is the basis of many modern programming languages.
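That 1959 invention is what programmers now call garbage collection. Greatly simplified, the idea can be sketched in a few lines of Python (not McCarthy’s Lisp): trace every piece of data the program can still reach, then throw away the rest. This toy version assumes a mark-and-sweep strategy of the kind McCarthy described:

    # Toy mark-and-sweep garbage collection. Objects reference other
    # objects; anything unreachable from the program's live variables
    # (the "roots") gets freed.
    heap = {
        "a": ["b"],  # object "a" references object "b"
        "b": [],
        "c": ["d"],  # "c" and "d" reference each other, but nothing
        "d": ["c"],  # the program still uses points at them
    }
    roots = ["a"]

    # Mark: walk every reference reachable from the roots.
    marked, stack = set(), list(roots)
    while stack:
        obj = stack.pop()
        if obj not in marked:
            marked.add(obj)
            stack.extend(heap[obj])

    # Sweep: delete everything that was never marked.
    for obj in list(heap):
        if obj not in marked:
            del heap[obj]  # "c" and "d" are collected

    print(sorted(heap))  # prints ['a', 'b']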

He was the founding director of Stanford University’s Artificial Intelligence Laboratory in 1967.

He also was the principal organizer of the Dartmouth College meeting and is credited with coining the phrase “artificial intelligence.”

Marvin Minsky (1927-2016): A professor at the Massachusetts Institute of Technology, Minsky is credited with being the first scientist dedicated to giving computer programs human-like characteristics. He developed the Stochastic Neural Analog Reinforcement Calculator (SNARC), the world’s first electronic learning system and first simulation of the brain’s neural networks. He co-founded MIT’s Artificial Intelligence Lab and won the Turing Award in 1969.

Artificial intelligence comes in many shapes and sizes.

Although definitions vary, companies like Microsoft and IBM, along with universities researching and developing AI, offer general categories under which most AI falls:

1. Narrow AI: This is what exists today: AI focused on a single, although sometimes broad, task. It can’t perform outside its assigned task. This kind of AI, based on datasets and complex algorithms, breaks down into subcategories:

Reactive Machine AI: This type of programming uses currently available data but can’t recall past experiences. It’s focused on very specific tasks. Because it accesses vast amounts of data, it can seem to be making intelligent responses to inquiries. An example would be the viewing recommendation functions on streaming services such as Netflix or YouTube: the tool reads your viewing history (this is the “currently available data”) to recommend future choices.

Limited Memory AI: This version can remember its past experiences. It can combine that experience with new data to suggest answers or courses of action to achieve a desired outcome. Examples would be ChatGPT, virtual assistants such as Alexa, Cortana, Siri or IBM Watson, and even the programs that control self-driving cars. (A sketch at the end of this story illustrates the difference between these two types.)

2. General AI: This currently exists only in theory. This would allow computers or machines to perform functions outside of an original task. The machine would be able to use information to perform tasks by training itself without the need for human intervention. Machines with this ability would be able to perform any intellectual task a human can.

Under this category would be so-called “theory of mind” AI, or machines that could recognize and interpret the thoughts and emotions of humans, allowing them to tailor interactions based on that information. Examples would be machines that could contextualize things like artwork or essays without input from humans and recognize human emotions through voice or body language. Such machines would be able to respond to humans based on their emotions and feelings.

3. Super AI: This is the stuff, at least for now, of science fiction. This type of AI would surpass human thinking and cognitive abilities. Not only would it be able to understand human emotions, but it also would have its own feelings, needs, expectations and opinions. Super AI would be self-aware, a status currently achieved only by living organisms.
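Of those three categories, only narrow AI exists today, and as noted above, the split between its two subcategories comes down to memory. The minimal sketch below, in Python, makes the contrast concrete; the function, class and data are invented stand-ins for real recommendation and chat systems, not actual products:

    # Reactive: every answer is computed fresh from the data handed
    # to it right now; nothing carries over between calls.
    def reactive_recommend(watch_history):
        # Rank genres purely by the hours in the current input.
        return sorted(watch_history, key=watch_history.get, reverse=True)

    # Limited memory: the system accumulates experience and folds it
    # into each new response, the way a chatbot keeps its transcript.
    class LimitedMemoryChat:
        def __init__(self):
            self.transcript = []  # past experience the system can reuse

        def reply(self, message):
            context = list(self.transcript)  # everything said before
            self.transcript.append(message)
            # A real system would feed this context to a model; here
            # we only show that the context grows with each exchange.
            return f"(reply informed by {len(context)} earlier messages)"

    hours = {"drama": 12, "comedy": 3, "documentary": 7}
    print(reactive_recommend(hours))  # ['drama', 'documentary', 'comedy']

    chat = LimitedMemoryChat()
    print(chat.reply("hello"))              # informed by 0 earlier messages
    print(chat.reply("recommend a movie"))  # informed by 1 earlier message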