
AI Goes Off to Work

“I’m sorry, Dave, I’m afraid I can’t do that.”

–HAL 9000, 2001: A Space Odyssey

When Arthur C. Clarke and Stanley Kubrick co-wrote the 1968 film 2001: A Space Odyssey, artificial intelligence was about 90% science fiction and 10% hopeful experiments. They projected that AI would require a vast computer and database on board the spacecraft carrying a crew to Jupiter; in 2024, AI resides on a handheld smartphone and needs only a connection to the vast database known as the internet. And, as Shakespeare wrote, therein lies the rub.

In the film and subsequent novel, the HAL 9000 (Heuristically Programmed ALgorithmic Computer) is a sentient artificial intelligence computer that controls the systems of the Discovery One spacecraft and interacts with the ship’s crew. And eventually kills all but one of them.

The idea of a sentient computer was, and still is, a scary image. In today’s conspiracy-minded world, where anything that doesn’t conform to our preconceived views is suspect and easily dismissed as fake, AI is a convenient scapegoat. HAL was just the early prototype; its descendants are among us and, in many ways, seek to control much of what we do, just as HAL controlled all the functions of Discovery One.

Since 2001, the real year, AI has grown in capability and power. In the last two decades, it has gained functions, entered mainstream activities, and become the go-to acronym for marketing: if your computer-based product doesn’t have AI, you had better close up shop. And that’s why “AI” is everywhere, even when no actual AI is to be found in the program.

What Is AI Anyway?

One of the early AI demonstrations came from IBM and needed all the computing power IBM is known for, just to win at chess. Deep Blue was a chess-playing expert system run on a purpose-built IBM supercomputer. It was the first computer to win a game, and then a match, against a reigning world champion under regular time controls.

Development began in 1985, first at Carnegie Mellon University and then at IBM, where Deep Blue first played world champion Garry Kasparov in a six-game match in 1996, losing four games to two. The machine was upgraded for a 1997 six-game rematch and defeated Kasparov, winning two games and drawing three. Buoyed by that success, IBM picked another popular pastime for its next AI challenge: the game show Jeopardy!

IBM’s Watson is capable of answering questions posed in natural language and was initially developed to compete on Jeopardy! In 2011, the Watson computer system faced champions Brad Rutter and Ken Jennings on the show and won the first-place prize of $1 million.

In February 2013, IBM announced that Watson’s first commercial application would be utilization management decisions in lung cancer treatment at Memorial Sloan Kettering Cancer Center in New York City. This was a big step from game shows and chess matches and opened the door for AI to become practical. The question of practicality has, indeed, followed AI since people started to recognize the term.

Science Fiction to Science Fact

According to IBM, on its own or combined with other technologies (e.g., sensors, geolocation, robotics), AI can perform tasks that would otherwise require human intelligence or intervention. Digital assistants, GPS guidance, autonomous vehicles, and generative AI tools (like OpenAI’s ChatGPT) are just a few examples. Artificial intelligence also encompasses machine learning and deep learning; these disciplines, in turn, involve the development of AI algorithms, modeled after the decision-making processes of the human brain, that can “learn” from available data and make increasingly accurate classifications or predictions over time.
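To make that “learning” concrete, here is a minimal sketch in Python, not drawn from IBM’s materials and using invented toy data, of a simple model whose predictions become more accurate as it repeatedly nudges its weights against labeled examples:

    import math

    # Toy data (invented for illustration): hours of study -> passed (1) or failed (0).
    data = [(1, 0), (2, 0), (3, 0), (4, 1), (5, 1), (6, 1)]

    w, b, rate = 0.0, 0.0, 0.1  # the weights start out knowing nothing

    def predict(x):
        # Probability of "passed," given the current weights.
        return 1 / (1 + math.exp(-(w * x + b)))

    for epoch in range(1000):
        for x, y in data:
            error = predict(x) - y   # how wrong the model currently is
            w -= rate * error * x    # adjust the weights to shrink the error
            b -= rate * error

    accuracy = sum((predict(x) > 0.5) == bool(y) for x, y in data) / len(data)
    print(f"accuracy after training: {accuracy:.0%}")

Given only a single pass over the data, the toy model still gets examples wrong; the “learning” is simply many small corrections accumulating over repeated passes.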

Artificial intelligence has gone through many cycles of hype, but even to skeptics, the release of ChatGPT, an example of generative AI, was a turning point. Generative AI first came to the public’s attention in computer vision, but the latest leap is in NLP (natural language processing). Today, generative AI can learn and synthesize not just human language but other data types, including images, video, software code, and even molecular structures. Using the internet as its data source, generative AI requires neither the supercomputing power of IBM Watson nor the storage facilities of HAL 9000, just a fast connection to the ecosystem.

Currently, artificial intelligence is generally divided into two classes: weak AI and strong AI. Again, according to IBM:

Weak AI—also known as narrow AI or ANI (artificial narrow intelligence)—is AI trained and focused on performing specific tasks and drives most of what is considered AI today. Since this category is anything but “weak,” the industry more often uses the term “narrow.” It enables some very familiar applications, such as Apple’s Siri, Amazon’s Alexa, and many self-driving vehicles.

Strong AI—AGI (artificial general intelligence) and ASI (artificial superintelligence)—is approaching HAL 9000 levels in concept, if not in practice. General AI is a theoretical form where a machine would have an intelligence equal to humans; it would be self-aware with a consciousness that would have the ability to solve problems, learn, and plan for the future. A step up, ASI would surpass the intelligence and ability of the human brain.

While strong AI is still theoretical, with no practical examples in use today, that doesn’t mean AI researchers aren’t exploring its development. In the meantime, the best example of ASI remains fictional: our friend HAL, the superhuman and rogue computer assistant.

Alexa and Siri have been around for a while, chatting away on millions of desks and nightstands. ChatGPT, meanwhile, has moved from development into released versions, generating significant press attention. An LLM (large language model) program, ChatGPT (GPT stands for Generative Pre-trained Transformer) is a generative AI development that takes typed input in conversational language and generates a response, such as an answer to a question, based on the data available to it. And that data comes from the web, so it is immeasurably vast.
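As a rough illustration, and not an example from the article, the typed-prompt, generated-response exchange described above can be reproduced in a few lines of code. The sketch below assumes the OpenAI Python SDK is installed, that an API key is set in the environment, and that the model name is only a placeholder:

    from openai import OpenAI

    # Assumes the OPENAI_API_KEY environment variable is set.
    client = OpenAI()

    # Send a conversational, typed prompt; "gpt-4o-mini" is a placeholder model name.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "In one sentence, who was HAL 9000?"}],
    )

    # The reply is generated text, drawn from patterns in the model's training data.
    print(response.choices[0].message.content)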

Generative AI can take raw data (all of Wikipedia or the collected works of Rembrandt, for example) and “learn” to generate statistically probable outputs when prompted. At a high level, generative models encode a simplified representation of their training data and draw from it to create a new work that’s similar, but not identical, to the original data. Not quite plagiarism, but often very hard to distinguish from the source material.
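To see what “statistically probable outputs” means in miniature, here is a toy sketch, purely illustrative and far simpler than any real generative model, that records which word tends to follow which in a tiny training text, then samples from those statistics to produce new text that is similar to, but not identical to, the original:

    import random
    from collections import defaultdict

    # A tiny "training corpus" (invented for illustration).
    corpus = "the crew trusts the computer and the computer runs the ship".split()

    # Encode a simplified representation: which words follow which.
    follows = defaultdict(list)
    for current, nxt in zip(corpus, corpus[1:]):
        follows[current].append(nxt)

    def generate(start, length=8):
        word, output = start, [start]
        for _ in range(length):
            options = follows.get(word)
            if not options:
                break
            word = random.choice(options)  # pick a statistically probable next word
            output.append(word)
        return " ".join(output)

    print(generate("the"))

Each run produces a slightly different sentence built entirely from patterns in the source text, which is the same basic trick, scaled up by many orders of magnitude, that an LLM performs.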

Generative models have been used for years in statistics to analyze numerical data. The rise of deep learning, however, made it possible to extend them to images, speech, and other complex data types. Early models, including GPT-3, BERT, and DALL-E (for generating images), have shown what’s possible. In the future, models will be trained on broad sets of unlabeled data that can be used for different tasks with minimal fine-tuning.

Systems that execute specific tasks in a single domain are giving way to broad AI systems that learn more generally and work across domains and problems. Foundation models, trained on large, unlabeled datasets and fine-tuned for an array of applications, are driving this shift.

When it comes to generative AI, foundation models should dramatically accelerate AI adoption in the enterprise. Reducing labeling requirements will make it much easier for businesses to get started, and the accurate, efficient AI-driven automation they provide will mean that far more companies can deploy AI in a wider range of mission-critical situations.

Bats among the Bytes

So, what can we expect AI to be used for in the near future, besides search engines, faked political ads, and student essays? This is a good time to call up an AI representative and ask that question. Here is how Microsoft’s Copilot AI chatbot responded:

Some of the uses of artificial intelligence are:

Critical applications, particularly in healthcare and medicine, are a touchy subject with AI proponents. Too often, people expect, and hope for, perfection in AI-assisted diagnosis, for example. But we aren’t there yet. We can’t expect 2024 AI to be as perfect as 2001 AI.

Today’s AI has growing pains and problems. One of these is referred to as hallucinations. Let’s turn once more to Microsoft’s Copilot for an explanation: AI hallucinations refer to a phenomenon in which a large language model (often a generative AI chatbot or computer vision tool) perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.

This term draws a loose analogy with human psychology, where hallucination typically involves false percepts. However, there’s a key difference: AI hallucination is associated with unjustified responses or beliefs rather than perceptual experiences.

AI hallucinations occur due to various factors, including overfitting, training data bias/inaccuracy, and high model complexity. They can have significant consequences for real-world applications. For example, a healthcare AI model might incorrectly identify a benign skin lesion as malignant, leading to unnecessary medical interventions. AI hallucination problems can also contribute to the spread of misinformation.

Some notable examples of AI hallucination include:

Detecting and correcting these hallucinations poses significant challenges for the practical deployment and reliability of LLMs in the real world. Until the kinks are worked out, AI is best suited to non-critical applications. And it is fun to play with. ChatGPT and its competitors provide quick answers to millions of questions, drawing from countless sources across the internet. Unless, that is, you ask something the controlling company restricts the model from considering. Humans, after all, lay down the law, and AI must follow their rules.

We hope.

I know I’ve made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal.

–HAL 9000

By: Tom Inglesby
