
AI Basics Course: Introduction to Working with 'AI', Basics and Terminology
July 14
23 min read
Hello and welcome to the first part of our comprehensive AI foundation course! In today's digital world, Artificial Intelligence (AI) is no longer just a buzzword, but a transformative force that is fundamentally changing our everyday lives and the business world. Whether you're a software developer, passionate about no-code technologies or simply want to understand what's behind the intelligent systems that surround us every day, this course is your ideal introduction.
In this first module, we will lay the foundation. We will highlight the core concepts of AI, clarify common terminology and give you an overview of the different types and functionalities of this fascinating technology. Get ready to explore the world of bits and neurons!
To give you a better orientation, here is a brief preview of the topics we will cover in this introductory article: what artificial intelligence actually is, the most important terms and definitions, the different types of AI, the basics of machine learning, deep learning and neural networks, data processing, the training and evaluation of AI models, and finally the practical tools and frameworks.
🚀 Introduction to Artificial Intelligence: What is it actually?
Welcome to the first chapter of our basic AI course! Before we dive deep into the technical details, let's clarify the most basic question: What exactly is artificial intelligence (AI)?
Imagine that a machine or computer program can perform tasks that would normally require human intelligence. That's the essence of AI.
Basically, artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. 🧠
These processes include:
- Learning: The ability to take in information and create rules for its use. 
- Reasoning: The ability to apply rules to reach approximate or final conclusions. 
- Problem solving: Recognizing challenges and finding effective solutions. 
- Perception: Understanding the environment through visual, auditory or other sensory data (e.g. image or speech recognition). 
- Language processing: Understanding and generating human language. 
An AI system learns from experience, adapts to new circumstances and performs human-like tasks, often without direct human intervention.
📜 A brief look into the past
The term "artificial intelligence" is not a recent invention. It was coined as early as 1955 by John McCarthy, an assistant professor of mathematics at Dartmouth College at the time. His vision was to develop machines that could "use language, form abstractions and concepts, solve problems normally reserved for humans, and improve themselves." This fundamental goal continues to drive AI research today.
✨ Why is AI so important today?
You may be wondering why AI is on everyone's lips right now. Several factors are contributing to the current AI revolution:
- Enormous amounts of data: We produce huge amounts of data (Big Data) every day, which serves as "fodder" for AI systems.
- High computing power: Modern computers, especially specialized chips such as GPUs (Graphics Processing Units), can perform the complex calculations required for AI extremely quickly.
- Advanced algorithms: Algorithms and models, especially in the fields of machine learning and deep learning, have improved considerably in recent years.
This combination has meant that AI is no longer just a theoretical concept, but a powerful tool that is transforming industries and is deeply integrated into our everyday lives.
🎯 The overarching goal of AI
The main goal of AI remains consistent: to develop computer programs that are capable of solving problems and achieving goals at a level equal to, or even superior to, that of humans.
To achieve this goal, AI relies on various sub-areas, which we will look at in more detail in the next sections of this course, including:
- Machine Learning (ML): The centerpiece of many modern AI applications. 
- Neural Networks & Deep Learning: Modeled after the human brain.
- Natural Language Processing (NLP): The bridge between man and machine.
- Computer vision: Teaches machines to "see". 
This section has given you an initial overview of what AI is and why it is such a transformative force. In the following chapters, we will explore all of these concepts step by step.
💡 Important terms and definitions: The ABCs of AI
In order to work confidently with artificial intelligence, it is essential to understand the key terms. This glossary serves as your reference work for the most important concepts that you will encounter in the AI environment.
Fundamental concepts
These terms form the foundation for understanding AI systems.
- Artificial Intelligence (AI): This is the umbrella term for computer systems that are capable of performing tasks that typically require human intelligence. This includes learning from experience, understanding language, recognizing patterns, solving problems and making decisions. 
- Algorithm: An algorithm is a step-by-step set of instructions or rules that a computer follows to solve a specific task or perform a calculation. It can be thought of as a "recipe" for the computer. In AI, algorithms are the basis for how models learn from data. 
- Model: An AI model is the result of the training process. It is a mathematical representation of the patterns and relationships that the algorithm has found in the training data. This trained model can then be used to make predictions or generate new content based on new data. 
🧠 Learning methods and data
This is about how AI systems learn and what "food" they need for this.
- Machine Learning (ML): A branch of AI in which algorithms are developed that enable computers to learn from data without being explicitly programmed for each task. Instead of following fixed rules, the system recognizes patterns and improves its performance through experience. The basics of machine learning are discussed in more detail in a later section.
- Training Data: This is the data set used to "train" an AI model. The quality, variety and scope of this data is critical to the performance and accuracy of the subsequent model. If the training data is incorrect or biased, the AI model will also be biased. 
- Supervised Learning: An ML training method in which the AI system is trained with "labeled" data. This means that each data input is already assigned the correct output, similar to a vocabulary test with solutions. 
- Unsupervised learning: In this method, the AI system receives data without labels and must independently recognize patterns, structures and clusters in the data. 
- Reinforcement learning: Here, an AI model learns through interaction with an environment. It receives rewards for correct actions and "punishments" for incorrect ones, allowing it to gradually optimize its strategy. 
🏗️ Models and architectures
These are the "blueprints" and specific types of AI systems that are widely used today.
- Neural Network: An AI model inspired by the structure and functioning of the human brain. It consists of interconnected nodes (neurons) that process information and recognize patterns. Complex neural networks with many layers are the basis for deep learning. You can find out more about this in the section "Deep learning and neural networks". 
- Large Language Model (LLM): A sophisticated AI model trained on huge amounts of text data to understand and generate human language. LLMs such as OpenAI's GPT series are the technology behind many well-known AI applications. Their core function is to predict the most likely next word in a sentence.
- Transformer: A special, very powerful architecture for neural networks that is particularly well suited to processing sequential data such as text. Transformer models are the basis for most modern LLMs. 
- Generative AI: A branch of AI that focuses on creating new, original content. Instead of just analyzing or classifying data, these models generate text, images, music or code. Examples are ChatGPT for text or DALL-E for images. 
💬 Interaction and application
These terms describe how we communicate with AI systems and what roles they can take on.
- Prompt: The initial input (usually text) given to an AI system to produce a specific output. A prompt is basically an instruction or a question to the AI. 
- Prompt Engineering: The art and science of designing and optimizing effective prompts to get the desired behavior or most accurate output from an AI model. 
- Chatbot: An AI-powered conversational interface designed to interact with users in natural language. Chatbots are often used for customer support to answer repetitive queries and perform simple tasks. 
- AI agent: An AI agent is the next evolution of the chatbot. While a chatbot primarily responds, an agent can independently perform complex, multi-step tasks to achieve a goal. It can use tools, do research on the internet and perform actions on behalf of the user.
⚖️ Ethics and security
These concepts are crucial to managing the challenges and risks of AI responsibly.
- Bias: Refers to systematic errors or distortions in an AI system that result in certain groups or perspectives being unfairly favored or disadvantaged. Bias often arises from unbalanced or skewed training data.
- Hallucination: This phenomenon occurs when an AI model generates content that sounds plausible but is factually incorrect, nonsensical or fictitious. Minimizing hallucinations is a key challenge to ensure the reliability of AI. 
- AI Ethics: Comprises the moral principles and values that guide the development and use of AI. Key topics are transparency, justice, fairness, data protection and the social impact of AI systems. 
- Explainable AI (XAI): An approach in AI research and development that aims to make the decision-making processes of AI models transparent and comprehensible to humans. XAI is the answer to the "black box" problem, in which the functioning of complex models is often opaque. 
🤖 Types of AI: an overview of the intelligence spectrum
Welcome to the fascinating world of AI types! Not all artificial intelligence is the same. To understand the vast field of AI, it is crucial to distinguish between the different types. Basically, we classify AI according to two main criteria: their capabilities (how "intelligent" they are compared to humans) and their functionality (how they "think" and work).
Classification by ability: From "Weak" to "Superhuman"
This classification describes the level of intelligence and awareness of an AI.
1. Weak AI (Narrow AI)
   - What it is: This is the form of AI that we use every day today. It is specialized to perform a specific task or a narrow set of tasks. It simulates human behavior, but has no real consciousness or understanding.
   - Properties:
     - Specialized: Excels in its defined domain (e.g. playing chess, recognizing images).
     - Limited horizon: Cannot transfer its "knowledge" to other, unfamiliar areas.
     - No awareness: Works on the basis of algorithms and data without understanding the actual meaning or consequences of its actions.
   - Examples from everyday life:
     - Voice assistants such as Siri, Alexa and Google Assistant.
     - Recommendation systems from Netflix or Amazon.
     - Facial recognition and image recognition software.
     - Autonomous vehicles (although highly developed, they are only specialized in driving).
2. Strong AI (General AI / Artificial General Intelligence, AGI)
   - What it is: This is the type of AI often depicted in science fiction. An AGI would have the ability to understand, learn and perform any intellectual task that a human can.
   - Properties:
     - Human cognition: Possesses human-like abilities for reasoning, planning, problem solving and creative idea generation.
     - Adaptability: Can generalize knowledge from one domain and apply it to new, unfamiliar situations.
     - Consciousness (theoretical): Would have its own consciousness and sense of self.
   - Status: 💡 Hypothetical. There are currently no real examples of an AGI; its development is one of the biggest goals of AI research.
3. Artificial Superintelligence (ASI)
   - What it is: The next step after AGI. An ASI would be an intelligence that far surpasses human intelligence in virtually every area, from scientific creativity to general wisdom to social skills.
   - Properties:
     - Cognitive superiority: Solves problems beyond the reach of the human mind.
     - Exponential learning: Can develop and improve itself at a rapid pace.
   - Status: 🚀 Purely theoretical. An ASI raises profound ethical and existential questions and is the subject of intense debate.
Classification by functionality: How AI "thinks" and learns
This classification describes the technical functionality and complexity of AI systems.
1. Reactive Machines
   - Functionality: The most basic form of AI. It reacts exclusively to immediate stimuli and has no memory of past events to influence future decisions. Each situation is considered in isolation.
   - Example: IBM's chess computer Deep Blue, which defeated world champion Garry Kasparov in 1997. It analyzed the current position on the board and chose the best move without "remembering" previous moves in the game.
2. Limited Memory
   - Functionality: This AI can store and use information from the past over a short period of time to make its decisions. Almost all modern AI applications fall into this category.
   - Examples:
     - Autonomous vehicles: Observe the speed and direction of other cars and store this information for a short period of time in order to navigate safely.
     - Chatbots: Memorize parts of the previous conversation in order to respond contextually.
3. Theory of Mind (ToM)
   - Functionality: An advanced, future form of AI. It could not only perceive the world, but also understand that other beings (humans, animals, other AIs) have their own beliefs, intentions, desires and emotions that influence their behavior.
   - Status: 🧠 In research. This is an active field that is crucial for the development of truly social robots and advanced human-machine interaction.
4. Self-Aware AI
   - Functionality: The pinnacle of AI development. This AI would have an awareness of itself, a sense of self, and could understand its own inner states. It would be a further development of "Theory of Mind" AI.
   - Status: Science fiction. Such machines do not exist, and their creation would be a monumental milestone for humanity.
Specialized AI types: Generative vs. Predictive
In practical use today, one often encounters two further important distinctions:
- Generative AI:
  - Purpose: Creates new, original content.
  - Function: It learns patterns and structures from huge amounts of data in order to generate new texts, images, music or code that are similar, but not identical, to the training data.
  - Examples: ChatGPT (text), Midjourney (images), Nexaluna AI's "Creative Suite" (fictitious product example).
- Predictive AI:
  - Purpose: Makes predictions about future events.
  - Function: Analyzes historical data to identify patterns and derive probabilities for future outcomes.
  - Examples: Fraud detection for credit card transactions, prediction of stock prices, demand forecasting in logistics.
This understanding of the different types of AI is key to realistically assessing the potential and limitations of today's technology and preparing for the innovations of tomorrow.
🤖 Machine learning basics: When machines learn
Imagine trying to teach a computer to recognize a kitten in a picture. A traditional approach would be to program countless rules: "If it has pointy ears AND whiskers AND...", which quickly becomes impossible.
This is where machine learning (ML) comes into play. It is a branch of artificial intelligence that reverses this approach. Instead of giving the computer rules, we give it lots of examples (data) and let it learn the rules itself.
Machine learning is therefore the science of developing algorithms that can learn from data and improve themselves in order to make predictions or decisions without being explicitly programmed to do so.
The three main learning approaches in machine learning
In ML, there are three basic "learning methods" that are used depending on the task and available data.
1. Supervised learning 🧑🏫
In supervised learning, the human is the "teacher". The algorithm receives a data set in which each piece of information is already correctly labeled. It therefore learns from examples where the correct answer is already known.
- Analogy: Like learning with flashcards. On the front is the question (e.g. a picture of a cat), on the back is the answer ("cat"). After many cards, the brain learns to recognize cats. 
- Two main tasks (a short code sketch follows below):
  - Classification: The aim is to sort data into predefined categories. Example: An e-mail program learns from thousands of labeled e-mails which messages are spam and which are not.
  - Regression: The aim is to predict a continuous value. Example: A model predicts the price of a house based on characteristics such as size, location and age.
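To make these two tasks concrete, here is a minimal sketch using scikit-learn. The spam features and house prices are made-up toy values, purely for illustration:

```python
# Supervised learning sketch: classification and regression with scikit-learn.
# All data below is invented for illustration.
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression, LinearRegression

# Classification: spam (1) vs. not spam (0); features: [link count, length]
X = [[3, 120], [0, 40], [5, 300], [1, 60], [4, 250], [0, 30]]
y = [1, 0, 1, 0, 1, 0]  # labels known in advance -- the "teacher's answers"
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, stratify=y, random_state=42)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Predicted classes:", clf.predict(X_test))

# Regression: predict a continuous value (house price from size in sqm)
sizes, prices = [[50], [80], [120], [160]], [150_000, 240_000, 360_000, 480_000]
reg = LinearRegression().fit(sizes, prices)
print("Predicted price for 100 sqm:", reg.predict([[100]])[0])
```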
2. Unsupervised learning 🧩
Here there is no teacher and no pre-labeled data. The algorithm must independently find patterns, structures and correlations in the raw, unlabeled data.
Analogy: Imagine you are given a box full of mixed Lego bricks. Without instructions, you intuitively sort them by color, shape or size, creating order in the chaos.
- Two main tasks (a clustering sketch follows below):
  - Clustering: The grouping of data points that are similar. Example: An online store groups customers into different segments (e.g. "savers", "technology fans") based on their purchasing behavior in order to conduct targeted marketing.
  - Association: Finding relationships between data points. Example: A supermarket finds that customers who buy diapers often also buy beer (the classic "diaper-beer effect").
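Here is what clustering can look like in code, again as a small sketch with scikit-learn; the customer numbers are invented:

```python
# Unsupervised learning sketch: KMeans groups customers without any labels.
from sklearn.cluster import KMeans

# Hypothetical customer data: [orders per year, average basket value in EUR]
customers = [[2, 15], [3, 20], [2, 18],      # occasional shoppers
             [25, 90], [30, 110], [28, 95]]  # frequent big spenders

# The algorithm must discover the two groups on its own -- no labels provided.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print("Cluster assignments:", kmeans.labels_)
print("Cluster centers:", kmeans.cluster_centers_)
```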
3. Reinforcement learning 🏆
This is a dynamic learning process based on reward and punishment. An "agent" (the algorithm) acts in an "environment" (e.g. a game or the real world) and tries to achieve a specific goal.
- Analogy: Like training a dog. For a correct behavior (e.g. "sit") there is a treat (reward), for a wrong behavior there is a correction (or no reward). Over time, the dog learns which actions lead to the maximum reward. 
- The agent performs an action, observes the result and receives feedback (reward or penalty). Its goal is to adjust its strategy so that it maximizes the sum of rewards over time (a minimal sketch follows after the examples).
- Examples:
  - An AI learns to master chess by playing millions of games against itself.
  - A robotic arm learns to grasp an object precisely by trial and error.
  - Autonomous driving systems learn to stay in lane to earn "reward points" and avoid collisions.
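The following sketch shows the reward loop in pure Python: a toy agent learns to walk along a five-cell corridor to a goal cell. The environment, parameter values (learning rate, discount, exploration rate) and variable names are all illustrative assumptions, kept as simple as possible:

```python
# Reinforcement learning sketch: tabular Q-learning on a 5-cell corridor.
# The agent starts at cell 0 and gets a reward of 1 for reaching cell 4.
import random

n_states, moves = 5, [-1, +1]              # actions: step left or step right
Q = [[0.0, 0.0] for _ in range(n_states)]  # one Q-value per (state, action)
alpha, gamma, epsilon = 0.5, 0.9, 0.2      # learning rate, discount, exploration

for episode in range(200):
    state = 0
    while state != n_states - 1:           # until the goal cell is reached
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = Q[state].index(max(Q[state]))
        next_state = min(max(state + moves[a], 0), n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Feedback adjusts the strategy -- the heart of reinforcement learning.
        Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
        state = next_state

print("Learned action per state (0=left, 1=right):",
      [q.index(max(q)) for q in Q[:-1]])   # the agent learns to always go right
```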
⚙️ The typical ML process at a glance
Although learning approaches vary, a machine learning project often follows a basic pattern. We will take a closer look at these steps in later sections:
1. Data collection & preparation: Everything starts with data. It must be collected, cleansed and prepared for the model.
2. Model selection: A suitable algorithm (e.g. for classification or clustering) is selected based on the problem.
3. Model training: The selected algorithm is "fed" with the prepared data so that it can learn its internal patterns and rules.
4. Model evaluation: The trained model is tested with new, unknown data to check how good and reliable its predictions are.
5. Application (inference): The finished, evaluated model is used in a real application to make new predictions.
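As a rough sketch of how these five steps fit together, here is a scikit-learn example using the bundled Iris demo dataset. In a real project, steps 1 and 2 would of course involve far more work than shown here:

```python
# The five ML steps in miniature, using scikit-learn.
from sklearn.datasets import load_iris                    # 1. data collection
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier       # 2. model selection
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)                         # already clean and prepared
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)                               # 3. model training
accuracy = accuracy_score(y_test, model.predict(X_test))  # 4. model evaluation
print(f"Test accuracy: {accuracy:.2f}")

new_flower = [[5.1, 3.5, 1.4, 0.2]]                       # 5. application (inference)
print("Predicted class:", model.predict(new_flower)[0])
```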
🧠 Deep learning and neural networks: The brain of AI
Now that we've covered the basics of machine learning, let's dive deeper into one of its most fascinating and powerful subfields: Deep Learning. It can be thought of as the sophisticated brain of modern AI, responsible for many of the most impressive breakthroughs of recent years.
What is an artificial neural network (ANN)?
The foundation of deep learning is the artificial neural network (ANN), a structure inspired by the way the human brain works. Instead of being explicitly programmed to solve a task, an ANN learns by example, much like we humans do.
A neural network consists of layers of interconnected "neurons" (computing units):
- 🔹 Input layer: Takes in the raw data, e.g. the pixels of an image or the words of a sentence.
- 🔹 Hidden layers: This is the heart of the network, where the actual calculations take place. Each layer can recognize different features in the data: the first layers might recognize simple edges in an image, while deeper layers identify complex shapes such as faces or objects.
- 🔹 Output layer: Outputs the final result, e.g. the classification "dog" for an image or the translation of a sentence.
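To make these three layer types tangible, here is a minimal sketch in Keras. The layer sizes (784 inputs for a 28x28 image, 128 and 64 hidden neurons, 10 output classes) are illustrative assumptions, not prescriptions:

```python
# A small feed-forward network: input layer, hidden layers, output layer.
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(784,)),             # input layer: 28x28 image pixels
    keras.layers.Dense(128, activation="relu"),   # hidden layer: learns simple features
    keras.layers.Dense(64, activation="relu"),    # deeper hidden layer: combines them
    keras.layers.Dense(10, activation="softmax"), # output layer: 10 class probabilities
])
model.summary()  # prints the layer structure and parameter counts
```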
From the neural network to deep learning
The term "deep" in deep learning refers directly to the number of hidden layers in a neural network. While a simple neural network may only have one or two hidden layers, deep learning models have dozens, hundreds or even thousands.
This depth gives them extraordinary capabilities:
1. Processing huge amounts of data: Deep learning models only unfold their full potential with very large amounts of data (Big Data), where they recognize complex patterns that remain invisible to other methods.
2. Automatic feature learning: Unlike many classic ML methods, where experts have to manually define the relevant characteristics (features) of the data, a deep learning model learns the important characteristics directly from the raw data.
3. Solving highly complex problems: Tasks such as real-time translation, recognizing diseases in medical images or controlling autonomous vehicles are prime examples of the power of deep learning.
How does a neural network learn? The training process
The "learning" of a network is a process known as training. In simple terms, it works like this:
1. Prediction: The network receives an input (e.g. a picture of a cat) and makes a prediction (e.g. "dog").
2. Error calculation: A "loss function" calculates how far the prediction is from the correct result ("cat").
3. Adjustment: An algorithm called backpropagation is used to trace the error back through the network. The connection strengths (weights) between the neurons are minimally adjusted to reduce the error in the next prediction.
4. Repetition: This process is repeated millions of times with thousands of examples until the network reliably makes correct predictions.
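The following PyTorch sketch makes these four steps explicit on a tiny synthetic dataset; the network size, learning rate and epoch count are arbitrary illustrative values:

```python
# The training cycle: prediction -> error calculation -> adjustment -> repetition.
import torch
import torch.nn as nn

X = torch.randn(100, 4)               # 100 synthetic examples with 4 features
y = (X.sum(dim=1) > 0).long()         # made-up labels: is the feature sum positive?

net = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()       # the "loss function" from step 2
optimizer = torch.optim.SGD(net.parameters(), lr=0.1)

for epoch in range(100):              # step 4: repetition
    logits = net(X)                   # step 1: prediction (forward pass)
    loss = loss_fn(logits, y)         # step 2: error calculation
    optimizer.zero_grad()
    loss.backward()                   # step 3: backpropagation traces the error back
    optimizer.step()                  # weights are nudged to reduce the error

print(f"Final loss: {loss.item():.3f}")
```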
Important architectures and their use cases
Different architectures are used depending on the task. Here are two of the best known:
- 🖼️ Convolutional Neural Networks (CNNs):
  - Specialized in: Raster data, especially images and videos.
  - Functionality: They use special filters (convolutions) to recognize spatial hierarchies, from simple edges to complex objects (see the sketch after this list).
  - Application examples: Image recognition (e.g. in social media), medical image analysis (e.g. tumor diagnosis), face recognition, autonomous vehicles.
- 🗣️ Recurrent Neural Networks (RNNs):
  - Specialized in: Sequential data where order is important.
  - Functionality: They have a kind of "memory" that enables them to take into account information from earlier steps in the sequence.
  - Application examples: Speech and text recognition (e.g. Siri, Alexa), machine translation (Google Translate), chatbots, stock price predictions.
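As promised above, here is a minimal Keras sketch of a CNN architecture. The input shape and filter counts are assumptions chosen for demonstration only:

```python
# A small CNN: convolutional filters detect edges, deeper ones detect shapes.
from tensorflow import keras

cnn = keras.Sequential([
    keras.layers.Input(shape=(64, 64, 3)),                     # 64x64 RGB images
    keras.layers.Conv2D(16, kernel_size=3, activation="relu"), # filters: simple edges
    keras.layers.MaxPooling2D(),                               # downsample feature maps
    keras.layers.Conv2D(32, kernel_size=3, activation="relu"), # deeper filters: shapes
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax"),              # e.g. 10 object classes
])
cnn.summary()
```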
Deep learning and neural networks are the driving forces behind the most advanced AI applications we know today. They enable machines to "see", "hear" and "understand" in a way that was considered science fiction just a few years ago.
📊 Data processing in AI: the foundation for intelligent systems
Imagine artificial intelligence as a highly talented chef. Even the best chef in the world cannot create a masterpiece if the ingredients are of poor quality, spoiled or mislabeled. It's the same with AI models: Their performance and accuracy depend directly on the quality of the data they are trained with.
The principle here is: Garbage In, Garbage Out. Bad data inevitably leads to bad, unreliable or even harmful AI results. This is why data processing is not a trivial preliminary step, but a fundamental and often the most time-consuming part of any AI project.
The life cycle of data processing 🔄
The path from raw, unorganized data to a clean, structured data set that can be understood by AI can be divided into several crucial phases.
1. Data collection: The first step is to collect relevant data. This can come from a variety of sources:
   - Internal databases: Customer information, sales figures, production data.
   - Cloud services & APIs: Data from external platforms and services.
   - IoT sensors: Real-time data from networked devices.
   - Unstructured sources: Text documents, emails, social media feeds, images and videos.
   The challenge is to identify and bundle the data relevant to the specific problem.
2. Data cleansing: Raw data is rarely perfect. Cleansing is crucial to ensure data quality. Typical tasks are:
   - Dealing with missing values: Missing entries can either be replaced by estimates such as the mean or median (imputation), or the corresponding records can be removed.
   - Handling outliers: Extreme values that deviate greatly from the rest (e.g. a typo in a price) can distort analyses and must be identified and corrected.
   - Correcting inconsistencies: Consistent formats and designations are essential (e.g. standardizing "Deutschland", "DE" and "Germany" into one value).
   - Removing duplicates: Duplicate records are deleted to avoid biasing the model.
3. Data transformation: After cleansing, the data must be converted into a format that machine learning algorithms understand.
   - Feature scaling (normalization/standardization): When features have different scales (e.g. age from 0-100 and income from 0-100,000), they are scaled to a common range. This prevents features with larger numerical values from having a disproportionate influence on the model.
   - Feature encoding: AI models can only work with numbers. Categorical data (such as "red", "green", "blue" or "customer", "partner") must be converted into a numerical format (e.g. using one-hot encoding).
4. Data reduction: Sometimes data sets contain a huge number of features (dimensions), not all of which are relevant. Too many irrelevant features can "confuse" the model and unnecessarily increase training time.
   - Dimensionality reduction: Techniques such as principal component analysis (PCA) help to reduce the number of variables while retaining the essential information.
5. Data splitting: The last step before the actual training is splitting the data set. This is crucial in order to objectively evaluate the model's performance (a combined preprocessing sketch follows after this list).
   - Training data (approx. 70-80%): The largest part of the data, which the AI model uses to learn to recognize patterns.
   - Validation data (approx. 10-15%): Used during training to optimize the model parameters and avoid overfitting.
   - Test data (approx. 10-15%): A completely separate data set that the model has never seen. It is used for the final, unbiased evaluation of the model's performance in the "real world".
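The sketch below walks through cleansing, transformation and splitting with pandas and scikit-learn. The tiny table, its column names and values are hypothetical:

```python
# Data processing sketch: cleansing, transformation, splitting.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Hypothetical raw data: a missing age, inconsistent country codes, a duplicate row.
df = pd.DataFrame({
    "age":     [25, 40, None, 33, 40],
    "income":  [30_000, 80_000, 55_000, 45_000, 80_000],
    "country": ["DE", "Deutschland", "DE", "US", "Deutschland"],
})

# Cleansing
df["country"] = df["country"].replace({"Deutschland": "DE"})  # fix inconsistencies
df["age"] = df["age"].fillna(df["age"].median())              # impute missing values
df = df.drop_duplicates()                                     # remove duplicate rows

# Transformation
df = pd.get_dummies(df, columns=["country"])                  # one-hot encoding
df[["age", "income"]] = StandardScaler().fit_transform(df[["age", "income"]])

# Splitting (a validation set would be carved out the same way)
train, test = train_test_split(df, test_size=0.25, random_state=0)
print(f"{len(train)} training rows, {len(test)} test rows")
```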
How AI is revolutionizing data processing 🚀
Ironically, AI itself is increasingly being used to automate and improve tedious data processing.
- Automated data discovery: AI systems can sift through vast data landscapes to find hidden data ("shadow data") and classify it automatically (e.g. recognizing invoices, contracts or CVs).
- Intelligent data cleansing: AI algorithms can detect anomalies and patterns in data that would be invisible to humans, highlighting potential quality issues. They can even fill in missing values by generating plausible synthetic data.
- Understanding unstructured data: Using Natural Language Processing (NLP), AI can extract relevant information from thousands of text documents such as emails or customer reviews and put it into a structured form. 
- Automatic relationship detection: AI can uncover connections between different data silos, e.g. linking a product ID in a warehouse table with an article number in an e-commerce database. 
Attention: Typical misunderstandings 🧐
Finally, two common misconceptions to avoid:
- "More data is always better. " -> False. The quality of the data is far more important than the sheer quantity. A small but clean and relevant data set leads to better results than a huge, erroneous mountain of data. "Data preparation is a one-off task. " -> Wrong. Data preparation is an iterative process. Models need to be retrained regularly with new, up-to-date data, which means that the entire processing cycle is run over and over again to maintain the relevance and accuracy of the AI. 
Key insight: Careful and thoughtful data processing is not an obstacle, but the strategic foundation on which every successful AI system at Nexaluna AI Solutions is built. It will make or break your AI project.
🧠 AI models: training and evaluation
Think of an AI model like a hard-working student. Before it can develop its full potential, it must first learn (training) and then be tested to see how well it has understood the material (evaluation). This cycle is crucial for developing powerful and reliable AI systems.
The training process: How an AI model learns
The training process is the phase in which the AI model uses data to learn to recognize patterns and make predictions. Two core components are essential here:
1. The division of data: Training, validation and testing 📊
To train a model effectively and prevent "memorization" of the data, the existing data set is typically split into three separate parts:
- Training dataset (approx. 70-80%): This is the largest part of the data. The model uses this data set to learn the basic patterns and relationships. It is effectively the "textbook" of the model. 
- Validation dataset (approx. 10-15%): As the model learns, this dataset is used to check performance and make adjustments. It serves as a kind of "intermediate check" to see if the learning progress is going in the right direction and to optimize the model. 
- Test data set (approx. 10-15%): This data set remains untouched until the end. It is used only once to evaluate the final, unbiased performance of the final trained model. It can be thought of as the "final test" that shows how well the model can generalize to completely new, unknown data. 
This split is crucial to ensure that the model not only performs well on the known data, but also works reliably in real-world application scenarios.
2. Hyperparameter tuning: Finding the right adjustment screws ⚙️
Hyperparameters are the settings that control the learning process of the model, but are not learned by the model itself. They are set manually before training. Finding the optimal hyperparameters is like tuning an instrument perfectly - it can make the difference between a mediocre and an outstanding result.
Typical hyperparameters are:
- Learning rate: How quickly or slowly the model adjusts its internal parameters. Too high a rate can cause the optimum to be missed; too low a rate can slow down training unnecessarily.
- Number of epochs: How often the entire training dataset is presented to the model. 
- Number of neurons or layers: In neural networks, this determines the complexity of the model. 
Methods such as grid search (systematically testing all combinations) or random search help to find the best configuration of these "adjustment screws".
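Here is a minimal grid search sketch with scikit-learn's GridSearchCV; the parameter values in the grid are illustrative choices, not recommended defaults:

```python
# Hyperparameter tuning sketch: systematically test all combinations.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# The "adjustment screws" to try out -- 3 x 3 = 9 combinations in total.
param_grid = {"n_estimators": [50, 100, 200], "max_depth": [3, 5, None]}

search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
search.fit(X, y)  # every combination is evaluated with 5-fold cross-validation
print("Best hyperparameters:", search.best_params_)
print(f"Best cross-validated accuracy: {search.best_score_:.3f}")
```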
The evaluation: Has the model done its homework?
After training, the performance of the model must be evaluated objectively. There are special key figures and techniques for this that show how good, fair and robust the model really is.
1. Key performance indicators (metrics): The scores of the model 🎯
Depending on the task, there are different metrics to measure performance:
- Accuracy: The percentage of correct predictions. Caution: with unbalanced data sets (e.g. 99% healthy patients, 1% sick), a high accuracy can be misleading.
- Precision: Indicates how many of the results classified as positive were actually positive. Spam filter example: of all emails marked as spam, how many were really spam?
- Sensitivity (Recall): Indicates how many of the actual positive cases were recognized by the model. Spam filter example: of all real spam emails in your mailbox, how many were correctly flagged by the filter?
- F1 score: The harmonic mean of precision and recall. It is particularly useful when a balance between the two metrics is important.
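All four metrics can be computed in a few lines with scikit-learn; the ground-truth labels and model predictions below are invented for illustration:

```python
# Evaluation metrics sketch for a binary spam classifier (1 = spam, 0 = not spam).
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # what the emails really were
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]   # what the model predicted

print("Accuracy: ", accuracy_score(y_true, y_pred))   # share of correct predictions
print("Precision:", precision_score(y_true, y_pred))  # flagged as spam -> really spam?
print("Recall:   ", recall_score(y_true, y_pred))     # real spam -> actually flagged?
print("F1 score: ", f1_score(y_true, y_pred))         # harmonic mean of the two above
```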
2. Cross-validation: Making the test fair 🔁
Cross-validation is used to ensure that the performance of the model does not depend on the chance of a single split in training and test data. The most common method is k-fold cross-validation:
1. The data set is divided into k equal parts (so-called "folds").
2. The model is trained k times.
3. In each round, a different part serves as the test data set, while the remaining k-1 parts are used for training.
4. The performance results from all k rounds are averaged at the end.
This process provides a more robust and reliable estimate of model performance and ensures that the model generalizes well across different data segments.
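In scikit-learn, the whole k-fold procedure is a one-liner; here is a sketch with k = 5 on the bundled Iris demo dataset:

```python
# k-fold cross-validation sketch: 5 rounds, each with a different test fold.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

scores = cross_val_score(model, X, y, cv=5)
print("Accuracy per fold:", scores.round(3))
print(f"Mean accuracy: {scores.mean():.3f} (+/- {scores.std():.3f})")
```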
Practical aspects: Tools and frameworks 🛠️
Now that we've explored the theoretical foundations of AI, machine learning and neural networks, it's time to get practical! How do you turn these concepts into reality? The answer lies in specialized tools and frameworks. They are the tools of the trade for anyone working with AI - from data scientists to application developers.
Think of a framework as a toolbox and a prefabricated workshop. Instead of forging every tool and screw yourself from scratch, you get a collection of high-quality, ready-made components that you can assemble for your project. This speeds up development enormously and makes complex AI applications possible in the first place.
The basis: Python and its AI libraries
While AI models can be developed in various programming languages, Python has established itself as the undisputed leader. Why?
- Simple syntax: Python is easy to learn and read, allowing teams to focus on solving AI problems instead of complicated code. 
- Huge ecosystem: There is a huge selection of libraries (collections of pre-built code) designed specifically for AI and machine learning. 
- Strong community: A large, active community means you can find help, guides and examples for almost any problem. 
Here are the most important libraries that form the foundation of AI development in Python:
1. TensorFlow
   - What it is: A comprehensive, open-source platform developed by Google. It is one of the most widely used libraries for machine learning and deep learning.
   - Features:
     - Scalable: Ideal for developing and training large, complex models running on multiple servers (in the cloud).
     - Flexible: Offers different levels of abstraction so that both beginners and experts can work with it.
     - Production-ready: With tools like TensorFlow Extended (TFX), models can be deployed robustly in real-world applications.
   - Ideal for: Large, scalable AI applications, image and speech recognition, complex neural networks.
2. PyTorch
   - What it is: An open-source library developed by Meta (Facebook) that is particularly popular in the research community.
   - Features:
     - Dynamic graphs: Allow high flexibility in building and customizing models during training, making debugging and experimentation more intuitive.
     - User-friendly: Considered slightly easier to learn than TensorFlow, especially for quick prototypes.
   - Ideal for: Research, rapid prototyping, projects that require high flexibility.
3. Scikit-learn
   - What it is: The standard library for classical machine learning in Python.
   - Features:
     - Comprehensive: Provides a wide range of algorithms for classification, regression and clustering (e.g. support vector machines, random forests).
     - Unified API: All models are used in a similar way, making it easy to switch between different algorithms.
     - Excellent documentation: Considered one of the best-documented libraries.
   - Ideal for: Anyone starting with traditional ML tasks, data analysis, predictive models without complex neural networks.
4. Keras
   - What it is: A high-level neural network API that runs on top of other libraries like TensorFlow as a user-friendly interface.
   - Features:
     - Intuitive: Makes building neural networks as easy as stacking LEGO bricks (or rather, layers).
     - Fast prototyping: Allows you to quickly test ideas for deep learning models.
   - Ideal for: Deep learning beginners, quick experiments.
Frameworks for the development of AI agents 🤖
Recently, the trend has moved beyond pure prediction models toward autonomous AI agents. These are systems that can break down, plan and execute complex tasks independently. There are specialized frameworks for this:
- LangChain: A leading framework for creating applications based on large language models (LLMs). It enables the simple linking of LLMs with external data sources, APIs and other tools. 
- AutoGen: A framework developed by Microsoft that specializes in creating systems with multiple AI agents that communicate and collaborate with each other to solve complex problems.
- CrewAI: A framework designed to orchestrate AI agents with specific "roles" (e.g. "research analyst", "content author") that work together in a team. Ideal for automating collaborative workflows.
Beginner-friendly platforms and managed services ☁️
Not everyone wants or needs to program themselves. There are a growing number of platforms for companies and users who want to create AI solutions quickly and without in-depth technical knowledge:
- Low-code/no-code platforms (e.g. Dify, Langflow):
  - What they are: Tools that provide a visual drag-and-drop interface for creating AI workflows and agents.
  - Advantages: They allow non-technical people to build powerful AI applications by connecting different models, databases and APIs.
- Managed services (e.g. Amazon SageMaker, Microsoft Copilot Studio):
  - What they are: Cloud platforms that manage the entire process, from data preparation to training and deployment of an AI model.
  - Advantages: You don't have to worry about the underlying infrastructure (servers, computing power), and these services often offer automated ML functions (AutoML) that independently find the best model for your data.
  - Examples:
    - Amazon SageMaker: A fully integrated service from AWS for creating, training and deploying ML models in the cloud.
    - Microsoft Copilot Studio: A low-code platform for creating your own "copilots" that can be seamlessly integrated into Microsoft 365 applications such as Teams or Outlook.
With this knowledge of the available tools, you are now well equipped to move from theory to practice. Choosing the right tool depends on your goal, your level of technical knowledge and the scope of your project. The great thing is that today there is a suitable tool for almost every requirement.
We have now taken the first important steps on our journey through the world of 'AI'. We have explored the basic concepts such as machine learning and neural networks, understood the importance of clean data and gained an insight into the tools and frameworks that are driving this revolution.
This first part is designed to give you a solid foundation and the vocabulary you need to better grasp the complexities of AI. At Nexaluna AI Solutions, we believe that a solid understanding of the fundamentals is the key to successfully shaping the digital future.
We hope you enjoyed this first overview and that it has piqued your interest in this exciting topic. Stay tuned for the next modules, where we will dive deeper into specific application areas and advanced concepts. Until then, feel free to experiment with the presented tools yourself or learn more about our work on our website Nexaluna AI Solutions.
Thank you for your attention and see you next time!