What's everyone talking about?
Machine learning (ML)
In machine learning, an algorithm repeatedly performs a defined task and in doing so gradually learns to perform it independently. Instead of following a pre-programmed solution path, ML systems use data-driven, mathematical optimization procedures to adapt themselves to a defined objective, a process referred to as »training«.
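The optimization idea behind training can be sketched in a few lines. This is a minimal illustration, not any particular ML system: a one-parameter model y = w · x is fitted to invented data points by gradient descent, repeatedly nudging the parameter w to reduce the prediction error.

```python
# Minimal sketch of ML-style training: gradient descent fits a line
# y = w * x to data by repeatedly adjusting w to reduce the error.
# The data points and learning rate are illustrative assumptions.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, target) pairs
w = 0.0                # model parameter, initially untrained
learning_rate = 0.05

for step in range(200):                # the "training loop"
    grad = 0.0
    for x, y in data:
        error = w * x - y              # how far the prediction is off
        grad += 2 * error * x          # gradient of the squared error w.r.t. w
    w -= learning_rate * grad / len(data)  # adjust w toward the objective

print(round(w, 2))  # → 2.04, the slope that best fits the data
```

The loop never encodes a solution path for the task itself; it only measures the error and follows its gradient, which is the sense in which the system »learns« from data.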
Artificial neural networks (ANN)
Artificial neural networks are computer models inspired by the processes of biological neural networks. They are made up of numerous interconnected nodes (»neurons«) that perform calculations based on adjustable parameters and share their results via the connections between them.
These computational networks are thereby able to learn to recognize patterns and relationships in data. This means that a learning process based on suitable »training data« can systematically configure and optimize the network parameters for a given task. ANNs are primarily used in situations beyond the limitations of symbolic AI approaches, such as image analysis, speech recognition, forecasting models and automated decision-making.
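What a single »neuron« computes is simple enough to write out directly. The following sketch uses hand-picked weights purely for illustration; in a real network, training would set these parameters:

```python
import math

# A toy "neuron": a weighted sum of its inputs plus a bias, squashed
# through a nonlinearity (here a sigmoid). The weights are the
# adjustable parameters mentioned above.

def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))   # sigmoid activation, output in (0, 1)

# Two neurons form a first layer; their outputs feed one output neuron.
# All weights below are invented for this example.
def tiny_network(x):
    h1 = neuron(x, [0.5, -0.4], 0.1)
    h2 = neuron(x, [-0.3, 0.8], 0.0)
    return neuron([h1, h2], [1.2, -0.7], 0.2)

print(tiny_network([1.0, 2.0]))  # a value between 0 and 1
```

Each neuron only ever sees numbers and parameters; the network's ability to recognize patterns emerges from training many such units together.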
Deep learning
Deep learning describes the process of training extensive, complex ANN architectures (known as »deep artificial neural networks«), which are composed of multiple layers of »neurons«. Their hierarchical structure enables them to solve highly complex tasks. Another significant difference from conventional ANNs is that deep ANNs are capable of automatically identifying and extracting complex patterns in data in order to model a problem (known as »representation learning«).
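The »deep« part is simply composition: many layers applied one after another, each turning the previous layer's output into a new representation. The sketch below uses random placeholder weights, since the point is the layered structure, not a trained model:

```python
import random
random.seed(0)

# One layer: every output "neuron" is a weighted sum over all inputs,
# followed by a ReLU nonlinearity. Weights are random stand-ins here;
# a real deep network learns them during training.

def layer(vec, size):
    return [max(0.0, sum(v * random.uniform(-1, 1) for v in vec))
            for _ in range(size)]

x = [0.2, 0.7, 0.1]            # raw input (e.g. pixel values)
for width in [8, 8, 4, 2]:     # four stacked layers = a "deep" network
    x = layer(x, width)        # each pass yields a more abstract representation
print(len(x))                  # → 2: the final, compact representation
```

In representation learning, it is exactly these intermediate layer outputs, rather than hand-designed features, that come to encode the patterns relevant to the task.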
Foundation models
These models are artificial neural networks that have been pre-trained for a general task with extensive, diverse datasets using ML techniques. They can then be optimized for specific applications and tasks with the help of specialized datasets (known as »fine-tuning«).
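The pre-train-then-fine-tune idea can be shown with the same one-parameter toy model used for gradient descent. Both datasets and all rates below are invented for illustration:

```python
# Sketch of pre-training followed by fine-tuning, using a toy
# one-parameter "model" y = w * x. Datasets are illustrative assumptions.

def train(w, data, lr, steps):
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

general_data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]  # broad "pre-training" set
special_data = [(1.0, 2.5), (2.0, 5.1)]              # small task-specific set

w = train(0.0, general_data, 0.05, 200)  # pre-training: w settles near 2.0
w = train(w, special_data, 0.05, 50)     # fine-tuning starts from that value
print(round(w, 2))                       # → 2.54, shifted toward the new task
```

The second call does not start from scratch: fine-tuning inherits the pre-trained parameter and only nudges it, which is why it needs far less task-specific data.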
Large language models (LLM)
Large language models are autoregressive language models trained exclusively on text data; ChatGPT is one well-known example. These models harness deep learning algorithms for word prediction in order to process, understand and generate natural language. When presented with a given »prompt«, the LLM generates subsequent words as text output. Given the complexity of LLM architectures and the volume of text used to train them, their output often appears very plausible to people, though its content is frequently inaccurate.
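The autoregressive loop itself is straightforward: predict the most likely next word, append it, and repeat. In the sketch below, a hand-made lookup table stands in for the deep network inside a real LLM:

```python
# Sketch of autoregressive text generation. The "model" is a tiny
# invented table of next-word probabilities, a stand-in for the deep
# neural network of a real LLM.

model = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
}

def generate(prompt, steps):
    words = prompt.split()
    for _ in range(steps):
        options = model.get(words[-1])
        if not options:                # no prediction available: stop
            break
        words.append(max(options, key=options.get))  # greedy next-word choice
    return " ".join(words)

print(generate("the", 3))  # → "the cat sat down"
```

Note that the loop only ever optimizes for a plausible next word; nothing in it checks whether the resulting sentence is factually true, which is one intuition for why fluent LLM output can still be inaccurate.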
Generative AI
This term covers a range of ML models that generate new content, such as text, images, audio and video. These models are trained using large volumes of suitable data and »learn« to mimic content based on these templates. Generative AI models include ChatGPT and Midjourney, which we have also tapped to generate images in this issue.
Symbolic AI
Symbolic AI (also known as »classical AI«) represents knowledge about a given domain propositionally, i.e. as a collection of statements that apply to that domain. These statements are represented by means of meaningful symbol structures and supplemented with computational rules that allow a computer to solve problems similarly to humans in clearly structured, precisely described domains.
Applications of symbolic AI include »expert systems« used by doctors to assist with diagnosis. Symbolic AI reaches its limits in areas that lack a propositional description and in which people struggle or fail to accurately verbalize their problem-solving methods. It is therefore often supplemented by approaches such as machine learning.
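The contrast with machine learning is easiest to see in code: here knowledge is written down as explicit if-then rules over symbols, and the program merely chains them. The rules and facts below are invented for illustration and are not medical advice:

```python
# Minimal sketch in the spirit of a rule-based expert system: knowledge
# is stored as explicit if-then rules, and forward chaining applies them
# until no new conclusions follow. Rules and facts are illustrative only.

rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]

def infer(facts):
    facts = set(facts)
    changed = True
    while changed:                     # forward chaining
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # rule fires: add its conclusion
                changed = True
    return facts

print(sorted(infer({"fever", "cough", "short_of_breath"})))
```

Every conclusion here can be traced back to a written rule, which makes such systems transparent, but also means they fail precisely where no one can write the rules down, the gap that machine learning is used to fill.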