What is Artificial Intelligence?

What is AI?

"Artificial intelligence is a machine that's able to learn, make decisions, and take action—even when it encounters a situation it has never come across before."

Just as it's hard to pin down what constitutes intelligence in humans, AI is hard to draw a neat box around.

In the broadest possible sense, artificial intelligence is a machine that's able to learn, make decisions, and take action—even when it encounters a situation it has never come across before.

In the narrowest possible sci-fi sense, many people intuitively feel that AI refers to robots and computers with human or super-human levels of intelligence and enough personality to act as a character and not just a plot device. In Star Trek, Data is an AI, but the computer is just a supercharged version of Microsoft Clippy. No modern AI comes close to this definition.

In simple terms, a non-AI computer program is programmed to repeat the same task in the same way every single time. Imagine a robot that's designed to make paper clips by bending a small strip of wire. It takes a few inches of wire and makes the exact same three bends every single time. As long as it keeps being given wire, it will keep bending it into paper clips. Give it a piece of dry spaghetti, however, and it will just snap it. It has no capacity to do anything except bend a strip of wire. It could be reprogrammed, but it can't adapt to a new situation by itself.

AIs, on the other hand, are able to learn and solve more complex and dynamic problems—including ones they haven't faced before. In the race to build a driverless car, no company is trying to teach a computer how to navigate every intersection on every road in the United States. Instead, they're attempting to create computer programs that are able to use a variety of different sensors to assess what's going on around them and react correctly to real-world situations, regardless of whether they've encountered them before. We're still a long way from a truly driverless car, but it's clear that they can't be created in the same way as regular computer programs. It's just impossible for the programmers to account for every individual case, so you need to build computer systems that are able to adapt.

Of course, you can question whether a driverless car would be truly intelligent. The answer is likely a big maybe, but it's certainly more intelligent than a robotic vacuum cleaner for most definitions of intelligence. The real win in AI would be to build an artificial general intelligence (AGI) or strong AI: basically, an AI with human-like intelligence, capable of learning new tasks, conversing and understanding instructions in various forms, and fulfilling all our sci-fi dreams. Again, this is something that's a long way off.

What we have now is sometimes called weak AI, narrow AI, or artificial narrow intelligence (ANI): AIs that are trained to perform specific tasks but aren't able to do everything. This still enables some pretty impressive uses. Apple's Siri and Amazon's Alexa are both fairly simple ANIs, but they can still respond to a wide number of requests.

With AI so popular right now, we're likely to see the term thrown around a lot for things where it doesn't really apply. So take it with a grain of salt when you see a brand marketing itself with the concept—do some digging to be sure it's really AI, not just a set of rules. Which brings me to the next point.


How does AI work?

Currently, most AIs rely on a process called machine learning to develop the complex algorithms that constitute their ability to act intelligently. There are other areas of AI research—like robotics, computer vision, and natural language processing—that also play a major role in many practical implementations of AI, but the underlying training and development still start with machine learning.

With machine learning, a computer program is provided with a large training data set—the bigger, the better. Say you want to train a computer to recognize different animals. Your data set could be thousands of photographs of animals paired with a text label describing them. By getting the computer program to crunch through the whole training data set, it could create an algorithm—a series of rules, really—for identifying the different creatures. Instead of a human having to program a list of criteria, the computer program would create its own. 
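To make that concrete, here's a minimal sketch of the same idea using scikit-learn (my choice of library; the article doesn't name one). The numeric features (weight in kg, height in cm, climbs trees yes/no) and the labels are invented purely for illustration: instead of a person writing rules for telling cats and dogs apart, the model works out its own rules from labeled examples.

```python
# A toy version of the animal example: instead of a person writing rules,
# a model infers them from labeled examples. The features and labels below
# are made up for illustration only.
from sklearn.tree import DecisionTreeClassifier

training_features = [
    [4.0, 25, 1],   # a cat: weight (kg), height (cm), climbs trees (1/0)
    [30.0, 60, 0],  # a dog
    [5.5, 30, 1],   # a cat
    [25.0, 55, 0],  # a dog
]
training_labels = ["cat", "dog", "cat", "dog"]

# "Training" is where the model works out its own rules from the data.
model = DecisionTreeClassifier()
model.fit(training_features, training_labels)

# An animal the model has never seen before: it applies the rules it learned.
print(model.predict([[6.0, 28, 1]]))  # most likely ['cat']
```

A real image classifier would learn from raw pixels rather than a handful of hand-picked measurements, but the principle is the same: the rules come from the data, not the programmer.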

This means that businesses will have the most success adopting AI if they have existing data—like customer queries—to train it with.

Although the specifics get a lot more complicated, structured training using machine learning is at the core of how the GPT models—GPT-3 and GPT-4 (Generative Pre-trained Transformer)—and Stable Diffusion were developed. GPT-3—the GPT in ChatGPT—was trained on almost 500 billion "tokens" (roughly four characters of text each) from books, news articles, and websites from around the internet. Stable Diffusion, on the other hand, was trained on LAION-5B, a dataset of 5.85 billion text-image pairs.
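To put that "roughly four characters" figure in perspective, here's a quick back-of-the-envelope estimate in Python. Real tokenizers split text into subword pieces rather than fixed-size chunks, so this is only an approximation.

```python
# Rough estimate of token count using the "one token is roughly four
# characters of English text" rule of thumb. Real GPT tokenizers split
# text into subword units, not fixed-size chunks.
text = "Artificial intelligence is a machine that's able to learn, make decisions, and take action."
estimated_tokens = len(text) / 4
print(f"{len(text)} characters is roughly {estimated_tokens:.0f} tokens")
```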

From these training datasets, both the GPT models and Stable Diffusion developed neural networks—complex, many-layered, weighted algorithms modeled after the human brain—that allow them to predict and generate new content based on what they learned from their training data. When you ask ChatGPT a question, it answers by using its neural network to predict what token should come next. When you give Stable Diffusion a prompt, it uses its neural network to modify a set of random noise into an image that matches the text.

Both these neural networks are technically "deep learning algorithms." Although the words are often used interchangeably, a neural network can theoretically be quite simple, while modern AIs rely on deep neural networks that often take into account millions or billions of parameters. This makes their operations murky to end users because the specifics of what they're doing can't easily be deconstructed. These AIs are often black boxes that take an input and return an output—which can cause problems when it comes to biased or otherwise objectionable content. 
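To make the "predict which token should come next" idea more concrete, here's a toy word-level sketch where each next word is sampled from hand-written probabilities. In a real model like GPT, those probabilities come out of the deep neural network described above rather than a lookup table; everything in this snippet is invented for illustration.

```python
import random

# Toy illustration of next-token prediction: given the current word, pick
# the next one from a probability distribution. In a real model, these
# probabilities come from a neural network with billions of weights.
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

word = "the"
sentence = [word]
while word in next_word_probs:
    candidates = list(next_word_probs[word])
    weights = list(next_word_probs[word].values())
    word = random.choices(candidates, weights=weights)[0]
    sentence.append(word)

print(" ".join(sentence))  # e.g. "the cat sat down"
```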

There are other ways that AIs can be trained as well. AlphaZero taught itself to play chess by playing millions of games against itself. All it knew at the start was the basic rules of the game and the win condition. As it tried different strategies, it learned what worked and what didn't—and even came up with some strategies humans hadn't considered before.
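The same self-play idea can be sketched with a far simpler game. The toy below learns Nim (players take turns removing one to three sticks; whoever takes the last stick wins) by playing against itself and nudging up the scores of moves that led to wins. It's nothing like AlphaZero's actual algorithm; it's just a small illustration of a program that starts with only the rules and the win condition and improves through self-play.

```python
import random

# Toy self-play learner for Nim: start with 10 sticks, each player removes
# 1-3 per turn, and whoever takes the last stick wins. The only knowledge
# given up front is the rules and the win condition.
STICKS = 10
move_scores = {(sticks, take): 0
               for sticks in range(1, STICKS + 1)
               for take in (1, 2, 3) if take <= sticks}

def choose_move(sticks, explore=0.2):
    legal = [(s, t) for (s, t) in move_scores if s == sticks]
    if random.random() < explore:
        return random.choice(legal)                       # sometimes try something new
    return max(legal, key=lambda m: move_scores[m])       # otherwise play the best-known move

for game in range(20000):
    sticks, history, player = STICKS, {0: [], 1: []}, 0
    while sticks > 0:
        move = choose_move(sticks)
        history[player].append(move)
        sticks -= move[1]
        if sticks == 0:
            winner = player                               # taking the last stick wins
        player = 1 - player
    for move in history[winner]:
        move_scores[move] += 1                            # reinforce moves that led to a win
    for move in history[1 - winner]:
        move_scores[move] -= 1                            # discourage moves that led to a loss

# With 3 sticks left, the learned policy should usually take all 3 and win.
print(choose_move(3, explore=0))
```

After enough games, the highest-scoring moves should line up with sensible Nim play, even though no strategy was ever programmed in.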

AI fundamentals: terms and definitions

Currently, AI can perform a wide variety of impressive technical tasks, often by combining different functions. Here are some of the major things it can do.

Machine learning

Machine learning is when computers (machines) pull out information from data they're trained on and then begin to develop new information (learn) based on it. The computer is given a massive dataset, trained on it in various ways by humans, and then learns to adapt based on that training.



Deep learning

Deep learning is part of machine learning—a "deep" part, in that the computers can do even more autonomously, with less help from humans. The massive dataset that the computer is trained on is used to form a deep learning neural network: a complex, many-layered, weighted algorithm modeled after the human brain. That means deep learning algorithms can process information (and more types of data) in an incredibly advanced, human-like way.
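To give a feel for what "many-layered, weighted" means, here's a tiny two-layer forward pass in NumPy. The layer sizes and the random, untrained weights are made up for illustration; real deep networks work the same way, just with many more layers and millions or billions of weights that are adjusted during training.

```python
import numpy as np

# A tiny "neural network" forward pass: each layer multiplies its input by a
# weight matrix, adds a bias, and applies a simple non-linearity. Deep networks
# stack many such layers; the weights below are random, not trained.
rng = np.random.default_rng(0)

x = rng.normal(size=4)                  # a made-up 4-number input

# Layer 1: 4 inputs -> 8 hidden values
w1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
hidden = np.maximum(0, w1 @ x + b1)     # ReLU non-linearity

# Layer 2: 8 hidden values -> 3 outputs (e.g. scores for three classes)
w2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)
scores = w2 @ hidden + b2

print(scores)                           # training would adjust w1, b1, w2, b2
```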

Generative AI


Generative AIs like GPT and DALL·E 2 are able to generate new content from your inputs based on their training data. 

GPT-3 and GPT-4, for example, were trained on an unbelievable quantity of written work: a huge swath of the public internet, plus hundreds of thousands of books, articles, and other documents. This is why they're able to understand your written prompts and talk at length about Shakespeare, the Oxford comma, and which emojis are inappropriate for work Slack. They've read all about them in their training data.

Similarly, image generators were trained on huge datasets of text-image pairs. That's why they understand that dogs and cats are different, though they still struggle with more abstract concepts like numbers and color. 


