A comprehensive glossary of artificial intelligence and machine learning terms.
AI Agent: An autonomous AI system that can perceive its environment, make decisions, and take actions to achieve specific goals. AI agents can use tools, access external systems, and perform multi-step reasoning to complete complex tasks.
AI Bias: Systematic errors in AI systems that create unfair outcomes for certain groups. Bias can originate from training data, algorithm design, or deployment context. Identifying and mitigating bias is critical for responsible AI.
AI Governance: The framework of policies, processes, and controls that ensure AI systems are developed and used responsibly. AI governance covers ethics, bias, privacy, security, and regulatory compliance.
AI Maturity: A measure of an organization's capability to successfully implement and scale AI initiatives. Maturity levels typically range from ad-hoc experimentation to organization-wide AI integration.
AI Strategy: A comprehensive plan for how an organization will adopt, develop, and deploy AI to achieve business objectives. An effective AI strategy aligns technology investments with business goals and organizational capabilities.
Artificial Intelligence (AI): A field of computer science focused on creating systems that can perform tasks typically requiring human intelligence. This includes learning from experience, understanding language, recognizing patterns, solving problems, and making decisions.
Deep Learning: A subset of machine learning based on artificial neural networks with multiple layers. Deep learning can learn complex patterns from large amounts of data and is particularly effective for image recognition, speech processing, and natural language understanding.
Edge AI: Running AI models on edge devices (phones, IoT devices, vehicles) rather than in the cloud. Edge AI reduces latency, improves privacy, and enables offline operation.
Embedding: A numerical representation of data (text, images, etc.) in a multi-dimensional vector space. Embeddings capture semantic meaning, allowing similar concepts to be mathematically close together.
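"Mathematically close" is usually measured with cosine similarity. A minimal sketch in pure Python, using tiny made-up 3-dimensional vectors (real embeddings have hundreds or thousands of dimensions, produced by a trained model):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings (illustrative values, not from a real model).
king = [0.9, 0.8, 0.1]
queen = [0.88, 0.82, 0.15]
banana = [0.1, 0.05, 0.95]

print(cosine_similarity(king, queen))   # high: related concepts
print(cosine_similarity(king, banana))  # low: unrelated concepts
```

Related concepts score close to 1.0; unrelated ones score much lower.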
EU AI Act: The European Union's comprehensive regulation on artificial intelligence, establishing requirements based on risk levels. High-risk AI systems face strict requirements for transparency, human oversight, and conformity assessment.
Explainable AI: The ability to understand and explain how an AI model reaches its decisions. Explainable AI is crucial for building trust, meeting regulatory requirements, and debugging model behavior.
Fine-Tuning: The process of taking a pre-trained AI model and further training it on a specific dataset to adapt it for a particular task or domain. Fine-tuning can improve model performance for specialized use cases.
Inference: The process of using a trained model to make predictions on new data. Inference is distinct from training and typically requires optimization for speed and cost in production environments.
Large Language Model (LLM): AI models trained on vast amounts of text data that can understand and generate human-like text. Examples include GPT-4, Claude, and Llama. LLMs power modern chatbots, content generation, and code assistance tools.
Machine Learning (ML): A subset of AI that enables systems to automatically learn and improve from experience without being explicitly programmed. ML algorithms build models based on training data to make predictions or decisions.
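The "learn from data instead of explicit programming" idea can be sketched with the simplest possible model, a one-feature linear regression fitted by ordinary least squares in pure Python (the data and the hidden rule y = 2x + 1 are made up for illustration):

```python
def fit_line(xs, ys):
    # Ordinary least squares: the slope and intercept are learned from
    # the data, not hard-coded by a programmer.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Training data generated by the hidden rule y = 2x + 1.
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]
slope, intercept = fit_line(xs, ys)
prediction = slope * 10 + intercept  # predict on unseen input x = 10
```

The model recovers the rule (slope 2, intercept 1) purely from examples and can then generalize to inputs it never saw.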
MLOps: The practice of combining machine learning, DevOps, and data engineering to streamline the deployment, monitoring, and maintenance of ML models in production. MLOps ensures reliable, scalable, and reproducible ML systems.
Model Serving: The process of deploying trained ML models to production environments where they can receive inputs and return predictions. Model serving must handle scalability, latency, and reliability requirements.
Natural Language Processing (NLP): A branch of AI that enables computers to understand, interpret, and generate human language. NLP powers applications like chatbots, translation services, sentiment analysis, and text summarization.
Neural Network: A computing system inspired by biological neural networks in the brain. It consists of interconnected nodes (neurons) organized in layers that process information and can learn patterns from data.
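A forward pass through a tiny network makes the "nodes organized in layers" idea concrete. This is a hand-wired sketch (the weights are arbitrary illustrative values; a real network learns them from data):

```python
import math

def forward(x, w1, b1, w2, b2):
    # Hidden layer: each neuron computes a weighted sum of the inputs
    # plus a bias, then applies a sigmoid activation.
    hidden = [1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
              for w, b in zip(w1, b1)]
    # Output layer: a weighted sum of the hidden activations.
    return sum(wi * hi for wi, hi in zip(w2, hidden)) + b2

# A 2-input, 2-hidden-neuron, 1-output network with arbitrary weights.
w1 = [[0.5, -0.4], [0.3, 0.8]]
b1 = [0.1, -0.2]
w2 = [1.0, -1.0]
b2 = 0.05
y = forward([1.0, 2.0], w1, b1, w2, b2)
```

Training would adjust `w1`, `b1`, `w2`, `b2` (typically by backpropagation) until the outputs match the desired targets.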
Pilot: A limited deployment of an AI solution in a real-world environment to test performance, gather feedback, and refine the approach before broader rollout.
Prompt Engineering: The practice of designing and optimizing input prompts to get the best results from AI models, especially LLMs. Good prompt engineering can dramatically improve output quality without changing the underlying model.
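In practice this often means replacing a vague instruction with a structured template that fixes the role, format, and constraints. A sketch (the prompt wording and `build_prompt` helper are illustrative, not from any particular library):

```python
vague_prompt = "Summarize this article."

# A structured prompt: role, audience, output format, and constraints
# are all spelled out instead of left to the model to guess.
structured_prompt = """You are a technical editor.
Summarize the article below for a non-expert audience.

Requirements:
- At most 3 bullet points
- Plain language, no jargon
- End with one actionable takeaway

Article:
{article}"""

def build_prompt(article: str) -> str:
    # Fill the template with the user's content.
    return structured_prompt.format(article=article)
```

The same model given `build_prompt(text)` instead of the vague version will typically produce far more usable output, with no change to the model itself.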
Proof of Concept (POC): A small-scale implementation that demonstrates the feasibility of an AI solution. POCs help validate technical approaches and estimate value before full-scale development.
Retrieval-Augmented Generation (RAG): A technique that enhances LLM responses by retrieving relevant information from external knowledge sources before generating a response. RAG improves accuracy, reduces hallucinations, and enables LLMs to access up-to-date information.
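The retrieve-then-generate flow can be sketched in a few lines. Here word overlap stands in for the embedding similarity a real system would use, and the knowledge base, helper names, and prompt wording are all illustrative:

```python
def retrieve(query, documents, k=1):
    # Rank documents by word overlap with the query (a toy stand-in
    # for embedding similarity over a vector database).
    q_words = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_rag_prompt(query, documents):
    # Augment the prompt with retrieved context before calling the LLM.
    context = "\n".join(retrieve(query, documents))
    return (f"Answer using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")

knowledge_base = [
    "The refund policy allows returns within 30 days of purchase.",
    "Shipping is free for orders over 50 dollars.",
]
prompt = build_rag_prompt("What is the refund policy?", knowledge_base)
```

The final `prompt` grounds the LLM in the retrieved document, so the answer reflects the knowledge base rather than whatever the model memorized during training.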
Transformer: A neural network architecture that uses self-attention mechanisms to process sequential data. Transformers are the foundation of modern LLMs and have revolutionized NLP by enabling parallel processing and capturing long-range dependencies.
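The self-attention core can be sketched in pure Python. This simplification uses identity query/key/value projections (real transformers apply learned projection matrices and multiple attention heads), but the mechanism is the same: every position attends to every other position, weighted by similarity:

```python
import math

def self_attention(X):
    # Scaled dot-product self-attention with identity Q/K/V projections.
    d = len(X[0])
    # Similarity of every position with every other position.
    scores = [[sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
               for k in X] for q in X]
    # Softmax turns each row of scores into attention weights.
    weights = []
    for row in scores:
        exps = [math.exp(s - max(row)) for s in row]
        total = sum(exps)
        weights.append([e / total for e in exps])
    # Each output is an attention-weighted mix of all input vectors.
    return [[sum(w * x[j] for w, x in zip(wrow, X)) for j in range(d)]
            for wrow in weights]

# Three token vectors; each output row mixes information from all three.
out = self_attention([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
```

Because every position is computed independently from the full score matrix, the whole layer parallelizes across the sequence, which is what enables the parallel processing and long-range dependencies mentioned above.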
Vector Database: A specialized database designed to store and efficiently search high-dimensional vectors (embeddings). Vector databases are essential for RAG systems, similarity search, and recommendation engines.
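A brute-force in-memory sketch shows the interface, with the caveat that production vector databases use approximate nearest-neighbor indexes (such as HNSW) to search millions of vectors efficiently. `TinyVectorStore` and its toy 2-dimensional vectors are hypothetical:

```python
import math

class TinyVectorStore:
    # Minimal in-memory stand-in for a vector database:
    # store (id, vector) pairs and search by cosine similarity.
    def __init__(self):
        self.items = []

    def add(self, item_id, vector):
        self.items.append((item_id, vector))

    def search(self, query, k=1):
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            return dot / (math.sqrt(sum(x * x for x in a))
                          * math.sqrt(sum(x * x for x in b)))
        ranked = sorted(self.items, key=lambda it: cos(query, it[1]),
                        reverse=True)
        return [item_id for item_id, _ in ranked[:k]]

store = TinyVectorStore()
store.add("cat", [0.9, 0.1])
store.add("dog", [0.8, 0.2])
store.add("car", [0.1, 0.9])
result = store.search([0.9, 0.1], k=1)
```

In a RAG pipeline, the stored vectors would be document embeddings and the query vector would be the embedded user question.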
Book a consultation to discuss how to apply these AI concepts to your challenges.