AI terms explained: Here’s what important AI jargon really means

Artificial intelligence, or AI, is a term that gets thrown around a lot these days. Whether on an Android smartphone, a streaming service recommending your next movie, or a music platform curating a playlist, AI seems to be everywhere. But how has AI’s definition evolved, and what do some of the more technical AI terms really mean? Below, we dive into the key terms and concepts that define modern AI, helping you navigate this complex and rapidly evolving field.

What is AI really?

Historically, AI referred to human-level intelligence achieved artificially through machines. However, the term has been diluted over the years and is now often used as a broad marketing term. Today, anything that exhibits characteristics of intelligence, from e-commerce recommendations to voice recognition systems, is labeled as AI.

To better understand the nuances, we must explore specific AI terms that distinguish between marketing hype and technological advancements.

Machine learning (ML)

Machine learning is a subcategory of AI in which systems learn from data and experience to make decisions or take actions. For example, if you feed an algorithm thousands of pictures of cats, it learns what a cat looks like. Show it a mix of cats, dogs, and other animals afterward, and it should be able to pick out the cats based on what it has “learned.”

This learning process involves two main phases: training and inference.
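
To make this concrete, here’s a minimal sketch of both phases using scikit-learn, a popular Python machine-learning library. The two numeric “features” standing in for cat photos are invented purely for illustration.

```python
# A minimal sketch of the machine-learning idea using scikit-learn.
# The "images" here are stand-in feature vectors (e.g. ear pointiness,
# whisker prominence) made up purely for illustration.
from sklearn.linear_model import LogisticRegression

# Training data: each row is one animal, each column a made-up feature.
X_train = [
    [0.9, 0.8],  # cat
    [0.8, 0.9],  # cat
    [0.2, 0.1],  # dog
    [0.1, 0.3],  # dog
]
y_train = ["cat", "cat", "dog", "dog"]

# Training phase: the model adjusts its internal weights to fit the data.
model = LogisticRegression()
model.fit(X_train, y_train)

# Inference phase: classify an animal the model has never seen before.
print(model.predict([[0.85, 0.75]]))  # -> ['cat']
```

The fit() call is the training phase and predict() is the inference phase, both of which we break down next.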

Training

The training phase is the lengthy stage of machine learning in which the system is fed vast amounts of data to learn from, such as images of cats. The training data can be narrow, like pictures of flowers, or enormous, like a scrape of much of the internet. Training a modern AI system like ChatGPT can cost millions of dollars and requires immense computing resources.
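
Under the hood, training is an optimization loop: the system makes predictions, measures its error, and nudges its parameters to shrink that error. The toy example below, with a single parameter fitted by gradient descent, is a deliberately tiny caricature; real systems repeat the same idea across billions of parameters.

```python
# A toy illustration of "training": gradient descent on a one-parameter
# model y = w * x, fitted to made-up data where the true rule is y = 2x.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])

w = 0.0    # start from an uninformed guess
lr = 0.01  # learning rate

for step in range(200):
    pred = w * x
    grad = np.mean(2 * (pred - y) * x)  # gradient of the mean squared error
    w -= lr * grad                      # nudge the parameter downhill

print(round(w, 3))  # -> 2.0: the model has "learned" the pattern
```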

Inference

After training, the system applies its learned knowledge to new data. This phase is where the end user steps in, since it’s how we interact with AI applications. For example, now that the system knows what a cat is, we can feed it a new image and it will tell us whether there’s a cat in it. Ask Google Gemini or Microsoft Copilot what the capital of England is, and they’ll provide an answer. At this stage, the system simply draws on what it has already learned, which is why inference requires significantly less computing power than training.
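
In code, inference is just a forward pass with the parameters frozen. Continuing the hypothetical gradient-descent example above, where training left us with a weight of 2.0:

```python
# Inference: apply the learned parameter to new inputs. No data-hungry
# loop, no gradient updates, so it's far cheaper than training.
w = 2.0  # the value learned in the training sketch above

def infer(x: float) -> float:
    return w * x

print(infer(5.0))  # -> 10.0
```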

Artificial General Intelligence (AGI)

AGI refers to machines with human-level intelligence, capable of decision-making, planning, and understanding the world in a broader context. Unlike current AI systems, AGI would possess a deeper understanding and awareness, akin to what we see in science fiction. While we’re far from achieving AGI, and cracking this code would mean answering plenty of technical, philosophical, and moral questions, it remains a significant area of research.

In the video above, we cover the implications of AGI, including the notions of “weak” and “strong” artificial intelligence. It’s a broad topic and well worth sinking your teeth into.

Generative AI

Traditionally, AI has excelled at classification and recognition, but generative AI goes beyond those tasks to create new content, such as text, images, and music. This advancement has opened up new possibilities, enabling systems to produce creative output from input data. It’s also the side of AI delivering the most tangible benefits to everyday users, especially if you’ve ever used ChatGPT to draft an email or Midjourney to generate an image of a cat.
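
If you’d like to try generative AI from code, the sketch below uses Hugging Face’s transformers library, one common way in (this assumes the library is installed; the first run downloads the small, older GPT-2 model):

```python
# A minimal text-generation sketch with the Hugging Face `transformers`
# library. GPT-2 is dated but small enough to run on a laptop CPU.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("A cat sat on the", max_new_tokens=20)
print(result[0]["generated_text"])
```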

Neural networks

Neural networks are the fundamental building blocks of modern AI. They have been around for decades and are loosely modeled on the human brain. They consist of interconnected artificial neurons that process data through successive layers, ultimately producing an output. Training a neural network involves adjusting the strengths of the connections between neurons to improve accuracy.
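
Stripped of the frameworks, a neural network is layered matrix math with non-linear activations in between. Here’s a bare-bones forward pass in NumPy; the weights are random, so the output is meaningless until training adjusts them:

```python
# A tiny two-layer neural network in NumPy: data flows through weighted
# connections and activations, layer by layer, to produce an output.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(4)                 # 4 input "neurons"

W1 = rng.standard_normal((4, 8))  # connections: input -> hidden layer
W2 = rng.standard_normal((8, 1))  # connections: hidden -> output

hidden = np.maximum(0, x @ W1)             # ReLU activation
output = 1 / (1 + np.exp(-(hidden @ W2)))  # sigmoid squashes to 0..1

print(output)  # e.g. a "probability this image contains a cat"
```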

Transformer networks

A special type of neural network, the transformer, has enabled the development of large language models (LLMs) like those behind ChatGPT. These networks excel at understanding the context and relationships within data, making them ideal for language processing tasks.
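
The transformer’s key mechanism is self-attention: every token scores its relevance against every other token, which is how the model captures context across a whole sequence. Here’s a single attention head sketched in NumPy, with toy dimensions chosen arbitrarily:

```python
# Single-head self-attention in NumPy. Each of 5 "tokens" attends to all
# the others, producing context-aware representations.
import numpy as np

rng = np.random.default_rng(0)
tokens = rng.standard_normal((5, 16))  # 5 tokens, 16-dim embeddings

Wq, Wk, Wv = (rng.standard_normal((16, 16)) for _ in range(3))
Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv

scores = Q @ K.T / np.sqrt(16)        # relevance of every token to every other
weights = np.exp(scores)
weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row

context = weights @ V                 # blend values by attention weight
print(context.shape)                  # -> (5, 16)
```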

Large Language Models (LLMs)

Combine transformer-based neural networks with training at a truly massive scale, and large language models are born.

LLMs are trained on vast amounts of text data, allowing them to generate human-like responses. Think ChatGPT, Claude, Llama, and Grok. They work by predicting the next word in a sequence, creating coherent and contextually relevant outputs. However, this predictive nature can lead to issues like hallucinations, where the model generates plausible but incorrect information. We cover this in the next section.
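
Real LLMs are far too large to sketch here, but the core “predict the next word, append it, repeat” loop can be shown with a toy stand-in: a bigram model that always picks the most frequent next word from a tiny, made-up corpus.

```python
# A toy next-word predictor. LLMs do conceptually the same thing with
# neural networks, vast corpora, and subword tokens instead of words.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count which word tends to follow which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

# Generate by repeatedly appending the most likely continuation.
word = "the"
sentence = [word]
for _ in range(4):
    word = following[word].most_common(1)[0][0]
    sentence.append(word)

print(" ".join(sentence))  # -> "the cat sat on the"
```

Notice that the model outputs whatever continuation is statistically likely, with no notion of truth, which is exactly why hallucinations happen.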

Hallucinations

Hallucinations occur when AI generates incorrect information due to its reliance on predictive modeling. This is a significant challenge for LLMs, as they can produce convincing but false outputs.

A classic example came from Google’s AI Overviews. Asked how to stop cheese from slipping off a pizza, the feature famously suggested adding non-toxic glue to the sauce. The underlying model had lifted the “tip” from a joking Reddit post and presented it as fact.

Parameters and model size

The effectiveness of AI models is often gauged by their total parameter count, the number of learnable connections within the neural network. Larger models with more parameters generally perform better but require more resources. Smaller models are generally less accurate but can run on thriftier hardware.

For instance, the massive cloud-based Llama 3.1 model has 405 billion parameters, while the models that run natively on smartphones consist of only a few billion.
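
Counting parameters is simple arithmetic over the network’s layer shapes. For the tiny two-layer network sketched in the neural networks section (which has no bias terms):

```python
# Parameter count for the toy network above: one weight per connection.
layer_shapes = [(4, 8), (8, 1)]  # (inputs, outputs) for each layer

total = sum(n_in * n_out for n_in, n_out in layer_shapes)
print(total)  # -> 40, versus 405,000,000,000 for Llama 3.1 405B
```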

Diffusion models

Used for image generation, diffusion models are trained by progressively adding noise to images and learning to undo that corruption. At generation time, the process runs in reverse: the model starts from pure random noise and removes a little of it at each step, guided by learned patterns, until an entirely new image emerges.
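
The loop below is a heavily simplified one-dimensional caricature of that idea. The “denoiser” here is a hand-written stand-in that pulls a value toward the training data’s mean, whereas a real diffusion model learns its noise predictions from data:

```python
# A 1-D caricature of diffusion sampling: start from noise and denoise
# step by step. Pretend the training data all looked like the value 3.0.
import numpy as np

rng = np.random.default_rng(0)
data_mean = 3.0

x = rng.standard_normal()            # start from pure random noise
for step in range(50):
    predicted_noise = x - data_mean  # what a trained model would estimate
    x = x - 0.1 * predicted_noise    # remove a little noise each step

print(round(x, 2))  # -> ~3.0: a "new sample" shaped by learned patterns
```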

Retrieval Augmented Generation (RAG)

RAG combines generative AI with retrieval from external data sources to produce more accurate and contextually relevant results. By fetching supporting information before generating, these models can ground their outputs, making them more reliable and useful.
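
In code, the pattern boils down to “retrieve, then generate.” The sketch below is purely illustrative: the document store is a plain dictionary, retrieval is naive word overlap, and generate() is a hypothetical placeholder for a real LLM call (production systems use vector databases and embeddings instead):

```python
# A minimal RAG-style flow: look up relevant text, then pass it to the
# generator as grounding context.
documents = {
    "pizza": "Cheese stays on pizza when the sauce layer is thin.",
    "cats": "Cats are domesticated felines.",
}

def retrieve(query: str) -> str:
    # Toy retrieval: pick the document sharing the most words with the query.
    words = set(query.lower().split())
    return max(documents.values(),
               key=lambda doc: len(words & set(doc.lower().split())))

def generate(prompt: str) -> str:
    # Placeholder for a real LLM call.
    return f"[LLM answer grounded in: {prompt!r}]"

query = "how do I keep cheese on pizza"
context = retrieve(query)
print(generate(f"Context: {context}\nQuestion: {query}"))
```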


Understanding AI terms and concepts can be tricky, but as the field evolves, staying informed will help you navigate the exciting and sometimes challenging world of artificial intelligence. Whether you’re a tech enthusiast or a professional in the field, this guide provides a solid foundation for exploring the future of AI.
