AI Chat Basics

Browse common Artificial Intelligence (AI) terminology and definitions, along with educational resources.

Common AI Terms and Definitions

General AI

  • Artificial Intelligence (AI):
    • The simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions), and self-correction.  
  • Machine Learning (ML):
    • A subset of AI that allows systems to learn from data without being explicitly programmed. It focuses on developing algorithms that improve their performance over time; a minimal sketch of this learning loop appears after this list.
  • Large Language Model (LLM):
    • A type of AI model trained on (sometimes massive amounts of) text data, enabling it to receive, process and generate human-like text.
  • Deep Learning (DL):
    • A subset of machine learning that uses artificial neural networks with multiple layers (deep neural networks) to analyze data. It’s particularly effective for complex tasks like classification (such as image recognition) and natural language processing.
  • Algorithm:
    • A set of rules or instructions that a computer follows to solve a problem or perform a task.
  • Neural Network:
    • A computational model inspired by the structure and function of the human brain, consisting of interconnected nodes (neurons) that process and transmit information.  
  • Training Data:
    • The data used to teach a machine learning model. The model identifies patterns and relationships in the data.
  • Model:
    • A representation of a system, algorithm, or process, such as the one a machine learning algorithm builds from training data.
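
To ground several of these terms, here is a minimal sketch (in Python, with made-up data and a made-up learning rate) of the basic machine-learning loop: an algorithm repeatedly adjusts a model's parameters so the model better fits its training data.

```python
# Training data: pairs of (input, expected output) that roughly follow y = 2x + 1.
training_data = [(0.0, 1.0), (1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 9.0)]

# The model's parameters: a weight (slope) and a bias (intercept).
weight, bias = 0.0, 0.0
learning_rate = 0.01

# The learning algorithm: repeatedly nudge the parameters to shrink the error.
for _ in range(2000):
    for x, y in training_data:
        prediction = weight * x + bias       # the model's current guess
        error = prediction - y               # how far off the guess is
        weight -= learning_rate * error * x  # adjust parameters toward a better fit
        bias -= learning_rate * error

print(f"Learned model: y = {weight:.2f}*x + {bias:.2f}")  # roughly y = 2x + 1
```

The same loop, scaled up to millions or billions of parameters and far larger training sets, is what produces neural networks and large language models.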

Large Language Model (LLM) Specific Terms

  • Large Language Model (LLM):
    • A type of AI model trained on (sometimes massive amounts of) text data, enabling it to receive, process and generate human-like text.
  • Natural Language Processing (NLP):
    • A field of AI that focuses on enabling computers to process, interpret, and generate human language.  
  • Prompt:
    • The input text or question given to a language model to elicit a response.
  • Token:
    • A basic unit of text that an AI model processes. It can be a word, part of a word, or punctuation mark.
  • Context Window:
    • The amount of text that a language model can consider when generating a response. It defines how much of the conversation history or input text the model remembers.
  • Generation:
    • The process of a language model producing text output in response to a prompt.
  • Fine-tuning:
    • The process of further training a pre-trained model on a smaller, specific dataset to improve its performance on a particular task.
  • Parameters:
    • The values (weights) that a machine learning model uses in its algorithm. Parameters are refined using training data, and they determine the model's behavior. Generally, the more parameters, the larger the model.
  • Hallucination:
    • When a language model (or other model) generates information that is factually incorrect or nonsensical, sometimes presenting it as if it were true.
  • Transformer:
    • A type of neural network architecture that is particularly effective for natural language processing tasks. LLMs are frequently based on transformer models.
  • Vector Database:
    • A specialized database that stores data as high-dimensional vectors, enabling efficient similarity searches. These are very useful for organizing, storing, and retrieving information that is relevant to a model's task.
  • Embedding:
    • A numerical representation of text that captures its semantic meaning. Language models use embeddings to understand the relationships between words and phrases.
  • Retrieval-Augmented Generation (RAG):
    • A technique that improves the accuracy of language models by retrieving relevant information from an external knowledge source and incorporating it into the generated response; a retrieval sketch appears after this list.
  • Zero-shot learning:
    • The ability of a model to perform a task without being explicitly trained on that specific task.
  • Few-shot learning:
    • The ability of a model to perform a task after seeing only a few examples, often supplied directly in the prompt (see the prompt sketch after this list).
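
To make prompts, tokens, and few-shot learning concrete, here is a minimal sketch that builds a few-shot prompt by hand. The reviews, labels, and word-count token estimate are invented for illustration; real models use subword tokenizers, and the whole prompt has to fit within the model's context window.

```python
# A few labeled examples give the model a pattern to imitate -- no retraining needed.
examples = [
    ("The movie was fantastic!", "positive"),
    ("I wasted two hours of my life.", "negative"),
    ("Best meal I've had all year.", "positive"),
]
new_review = "The service was painfully slow."

# Build the prompt: an instruction, the labeled examples, then the new case.
lines = ["Classify the sentiment of each review as positive or negative.", ""]
for text, label in examples:
    lines += [f"Review: {text}", f"Sentiment: {label}", ""]
lines += [f"Review: {new_review}", "Sentiment:"]
prompt = "\n".join(lines)

# Very rough token estimate; real tokenizers split text into subword pieces.
print(prompt)
print(f"\nApproximate length: {len(prompt.split())} tokens")
```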
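
Similarly, here is a minimal sketch of the retrieval step behind embeddings, vector databases, and RAG. Real systems use learned embeddings produced by a model and a dedicated vector database; crude word-count vectors and a plain Python list stand in here so the core idea (find the stored text most similar to the question) is visible end to end.

```python
import math

def embed(text, vocabulary):
    """Turn text into a crude vector of word counts over a fixed vocabulary."""
    words = text.lower().split()
    return [float(words.count(term)) for term in vocabulary]

def cosine_similarity(a, b):
    """Measure how similar two vectors are, regardless of their length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# A toy external knowledge source the model has not memorized.
documents = [
    "The library is open until midnight during exam week.",
    "Parking permits can be renewed online each semester.",
    "The dining hall serves breakfast starting at 7 a.m.",
]
vocabulary = sorted({word for doc in documents for word in doc.lower().split()})
document_vectors = [embed(doc, vocabulary) for doc in documents]

# Retrieval: embed the question and fetch the most similar document.
question = "When does the library close during exams?"
question_vector = embed(question, vocabulary)
scores = [cosine_similarity(question_vector, vec) for vec in document_vectors]
print("Retrieved context:", documents[scores.index(max(scores))])
```

In a full RAG pipeline, the retrieved text would be added to the prompt so the model can incorporate it into its generated response.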

Resources

Online Courses and Tutorials

This course is designed for project managers, product managers, directors, executives, and students starting a career in AI. LinkedIn Learning courses are free for NC State students, faculty and staff.

In this course, generative AI expert Pinar Seyhan Demirdag covers the basics of generative AI, with topics including what it is, how it works, how to create your own content, different types of models, future predictions, and ethical implications. LinkedIn Learning courses are free for NC State students, faculty and staff.

Learn More About AI

Whether you want to explore the fundamentals, applications, or ethics of AI, you can find courses on LinkedIn Learning that challenge and inform you. Develop your skills in programming, data analysis, machine learning, computer vision, natural language processing, and more.