Semantic Kernel is Microsoft's open-source SDK for building AI-powered applications that integrate large language models (LLMs) with conventional programming. It provides a structured framework for orchestrating AI capabilities alongside traditional code, plugins, and external services.
Core Concepts
- Kernel: The central orchestrator that manages AI services, plugins, and memory. Acts as the "brain" of your application.
- Plugins (formerly "Skills"): Modular units of functionality that can be either semantic functions (LLM prompts with structured inputs/outputs) or native functions (regular code in C#, Python, or Java). These are the building blocks of AI workflows.
- Planners: AI-powered components that take a user's goal and automatically create a plan — a sequence of plugin calls — to achieve it.
- Memory: Built-in support for vector-based semantic memory for context retention and retrieval.
- Connectors: Integrations with LLM providers (OpenAI, Azure OpenAI, Hugging Face) and other AI services.
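To make the kernel/plugin relationship concrete, here is a minimal, SDK-free sketch. The class and method names (`ToyKernel`, `add_plugin`, `invoke`) are hypothetical stand-ins, not the real Semantic Kernel API; they only mimic the pattern of a central orchestrator invoking registered plugin functions.

```python
# Illustrative sketch only: a toy "kernel" that registers plugins
# (named groups of functions) and dispatches calls to them.
# Names here are hypothetical, not the actual Semantic Kernel API.

class ToyKernel:
    def __init__(self):
        self.plugins = {}

    def add_plugin(self, name, functions):
        """Register a plugin: a named group of callable functions."""
        self.plugins[name] = functions

    def invoke(self, plugin, function, **kwargs):
        """Look up a registered plugin function and call it."""
        return self.plugins[plugin][function](**kwargs)

# A "native function": ordinary code exposed to the kernel.
def add(a, b):
    return a + b

# A "semantic function" would instead wrap an LLM prompt template;
# a plain stub stands in for the model call here.
def summarize(text):
    return text[:20] + "..." if len(text) > 20 else text

kernel = ToyKernel()
kernel.add_plugin("math", {"add": add})
kernel.add_plugin("text", {"summarize": summarize})

print(kernel.invoke("math", "add", a=2, b=3))  # → 5
```

In the real SDK, a planner could chain such registered functions automatically to satisfy a user's goal; the dispatch mechanism sketched above is the piece the planner builds on.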
Key Features
- Multi-Language: Available for C# (primary), Python, and Java.
- Enterprise Ready: Deep integration with Azure services, enterprise security patterns, and production deployment best practices.
- Prompt Engineering: Built-in templating system for creating reusable, parameterized prompts.
- Function Calling: Native support for LLM function/tool calling, connecting model outputs to executable code.
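Semantic Kernel's prompt templates use `{{$variable}}` placeholders for parameters. The toy renderer below shows only that substitution idea; it is a simplified analogue, not the SDK's actual template engine (which also supports invoking functions from inside templates).

```python
import re

# Illustrative sketch: substitute {{$variable}} placeholders in a prompt
# template, mimicking Semantic Kernel's parameterized-prompt style.
# This is a simplified analogue, not the SDK's real template engine.

def render(template: str, variables: dict) -> str:
    return re.sub(
        r"\{\{\$(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), "")),
        template,
    )

prompt = "Summarize the following text in {{$style}} style:\n{{$input}}"
print(render(prompt, {"style": "bullet-point", "input": "Semantic Kernel..."}))
```

Rendering the template with a variables dictionary yields the final prompt string that would be sent to the model, which is what makes prompts reusable across inputs.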
Use Cases
- Copilot Development: Building custom copilot experiences for enterprise applications.
- Process Automation: Orchestrating multi-step workflows that combine AI reasoning with business logic.
- RAG Applications: Combining retrieval with generation using Semantic Kernel's memory and plugin systems.
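The RAG pattern above can be sketched end to end with a toy in-memory store. Bag-of-words cosine similarity stands in for real embeddings, and the `ToyMemory` class is a hypothetical analogue of a vector memory, not Semantic Kernel's memory API: save records, retrieve the closest match to a query, and splice it into the generation prompt.

```python
import math
from collections import Counter

# Illustrative RAG sketch: a toy in-memory "vector store" using
# bag-of-words cosine similarity in place of real embeddings.
# ToyMemory is a hypothetical analogue, not the SDK's memory API.

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

class ToyMemory:
    def __init__(self):
        self.records = []

    def save(self, text: str):
        self.records.append((text, embed(text)))

    def search(self, query: str, top: int = 1):
        qv = embed(query)
        return sorted(self.records, key=lambda r: -cosine(qv, r[1]))[:top]

memory = ToyMemory()
memory.save("Semantic Kernel orchestrates plugins and LLM calls.")
memory.save("Azure OpenAI hosts GPT models for enterprises.")

query = "How does Semantic Kernel handle plugins"
context = memory.search(query)[0][0]
# The retrieved context is spliced into the generation prompt:
prompt = f"Answer using this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```

A production version would swap in a real embedding model and vector database via connectors, but the retrieve-then-prompt flow is the same.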
Semantic Kernel is a core component of Microsoft's Copilot Stack and is used internally to build Microsoft's own AI-powered products. It emphasizes responsible AI patterns and enterprise-grade reliability.