2025-08-27

Learning MCP from Scratch (6) | Deep Integration of MCP with Large Language Models (LLMs)

In the previous installments of this MCP series we covered MCP's basic concepts, how it works, and its core components. In this installment we look at how to deeply integrate the Model Context Protocol (MCP) with large language models (LLMs) to build smarter, more capable AI applications.

This article covers three core topics: local model integration (Ollama/vLLM), online model extension (OpenAI/DeepSeek), and prompt template design, giving you a complete picture of how to integrate MCP with LLMs.

1. MCP-LLM Integration Architecture Design

1.1 Architecture Overview

MCP-LLM integration typically uses a client-server architecture:

```
+----------------+      +----------------+      +----------------+
|                |      |                |      |                |
|   MCP Client   +------+   MCP Server   +------+  LLM Backend   |
|  (app layer)   |      | (adapter layer)|      | (model layer)  |
|                |      |                |      |                |
+----------------+      +----------------+      +----------------+
```

1.2 Core Component Responsibilities

  • MCP client: the main application, responsible for user interaction and request dispatch
  • MCP server: the protocol translation layer, which converts MCP calls into LLM API calls
  • LLM backend: the component that actually runs model inference (a minimal sketch of this flow follows below)
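
To make the split of responsibilities concrete, here is a minimal, framework-free sketch of how a request travels through the three layers. The function names are hypothetical and no MCP SDK is involved; in the real servers later in this article, the adapter layer is an MCP server process and the layers talk over stdio or SSE rather than through direct function calls.

```python
# Hypothetical, framework-free illustration of the three layers.

def llm_backend(prompt: str) -> str:
    """Model layer: stands in for a real inference engine (Ollama, vLLM, a hosted API, ...)."""
    return f"[model output for: {prompt}]"

def mcp_server_adapter(tool_name: str, arguments: dict) -> str:
    """Adapter layer: translates an MCP tool call into a backend call."""
    if tool_name == "generate_text":
        return llm_backend(arguments["prompt"])
    raise ValueError(f"Unknown tool: {tool_name}")

def mcp_client(user_input: str) -> str:
    """Application layer: handles user interaction and dispatches requests."""
    return mcp_server_adapter("generate_text", {"prompt": user_input})

if __name__ == "__main__":
    print(mcp_client("Explain MCP in one sentence"))
```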

2. Local Model Integration: Ollama/vLLM + MCP

2.1 Ollama Integration

Environment Setup

First, install the required dependencies:

```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Install the Python MCP SDK and the Ollama client
pip install "mcp[sse]" ollama
```

Creating the Ollama MCP Server

```python
# ollama_mcp_server.py
import mcp.server as mcp
from mcp.server import Server
import ollama
from pydantic import BaseModel

# Create the server instance
server = Server("ollama-mcp-server")

class GenerateRequest(BaseModel):
    model: str = "llama2"
    prompt: str
    max_tokens: int = 512

@server.tool()
async def generate_text(request: GenerateRequest) -> str:
    """Generate text with Ollama."""
    try:
        response = ollama.generate(
            model=request.model,
            prompt=request.prompt,
            options={'num_predict': request.max_tokens}
        )
        return response['response']
    except Exception as e:
        return f"Text generation failed: {str(e)}"

@server.list_resources()
async def list_models() -> list:
    """List the available Ollama models."""
    try:
        models = ollama.list()
        return [
            mcp.Resource(
                uri=f"ollama://{model['name']}",
                name=model['name'],
                description=f"Ollama model: {model['name']}"
            )
            for model in models['models']
        ]
    except Exception as e:
        return []

if __name__ == "__main__":
    # Start the server
    mcp.run(server, transport='stdio')
```

Client Configuration

```json
// mcp.client.json
{
  "mcpServers": {
    "ollama": {
      "command": "python",
      "args": ["/path/to/ollama_mcp_server.py"]
    }
  }
}
```

2.2 vLLM Integration

vLLM MCP Server Implementation

```python
# vllm_mcp_server.py
import mcp.server as mcp
from mcp.server import Server
from vllm import LLM, SamplingParams
from pydantic import BaseModel

# Global vLLM engine instance
vllm_engine = None

class VLLMRequest(BaseModel):
    prompt: str
    max_tokens: int = 256
    temperature: float = 0.7
    top_p: float = 0.9

def initialize_vllm(model_name: str = "facebook/opt-125m"):
    """Initialize the vLLM engine."""
    global vllm_engine
    if vllm_engine is None:
        vllm_engine = LLM(
            model=model_name,
            tensor_parallel_size=1,
            gpu_memory_utilization=0.9
        )

server = Server("vllm-mcp-server")

@server.tool()
async def vllm_generate(request: VLLMRequest) -> str:
    """Generate text with vLLM."""
    try:
        sampling_params = SamplingParams(
            temperature=request.temperature,
            top_p=request.top_p,
            max_tokens=request.max_tokens
        )
        outputs = vllm_engine.generate([request.prompt], sampling_params)
        return outputs[0].outputs[0].text
    except Exception as e:
        return f"vLLM generation failed: {str(e)}"

@server.list_resources()
async def list_vllm_models() -> list:
    """List the supported vLLM models."""
    return [
        mcp.Resource(
            uri="vllm://facebook/opt-125m",
            name="OPT-125M",
            description="Facebook OPT 125M-parameter model"
        ),
        mcp.Resource(
            uri="vllm://gpt2",
            name="GPT-2",
            description="OpenAI GPT-2 model"
        )
    ]

if __name__ == "__main__":
    # Initialize vLLM before starting the server
    initialize_vllm()
    mcp.run(server, transport='stdio')
```

3. Online Model Extension: OpenAI/DeepSeek Adapters

3.1 OpenAI MCP Adapter

```python
# openai_mcp_server.py
import mcp.server as mcp
from mcp.server import Server
from openai import OpenAI
from pydantic import BaseModel
import os

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
server = Server("openai-mcp-server")

class OpenAIChatRequest(BaseModel):
    message: str
    model: str = "gpt-3.5-turbo"
    temperature: float = 0.7

@server.tool()
async def chat_completion(request: OpenAIChatRequest) -> str:
    """Run a chat completion via the OpenAI API."""
    try:
        response = client.chat.completions.create(
            model=request.model,
            messages=[{"role": "user", "content": request.message}],
            temperature=request.temperature
        )
        return response.choices[0].message.content
    except Exception as e:
        return f"OpenAI API call failed: {str(e)}"

@server.list_resources()
async def list_openai_models() -> list:
    """List the available OpenAI models."""
    return [
        mcp.Resource(
            uri="openai://gpt-3.5-turbo",
            name="GPT-3.5-Turbo",
            description="OpenAI GPT-3.5 Turbo model"
        ),
        mcp.Resource(
            uri="openai://gpt-4",
            name="GPT-4",
            description="OpenAI GPT-4 model"
        )
    ]

if __name__ == "__main__":
    mcp.run(server, transport='stdio')
```

3.2 DeepSeek MCP Adapter

```python
# deepseek_mcp_server.py
import mcp.server as mcp
from mcp.server import Server
from openai import OpenAI
from pydantic import BaseModel
import os

# The DeepSeek API is OpenAI-compatible; only the base_url differs
client = OpenAI(
    api_key=os.getenv("DEEPSEEK_API_KEY"),
    base_url="https://api.deepseek.com/v1"
)
server = Server("deepseek-mcp-server")

class DeepSeekRequest(BaseModel):
    message: str
    model: str = "deepseek-chat"
    temperature: float = 0.7

@server.tool()
async def deepseek_chat(request: DeepSeekRequest) -> str:
    """Chat via the DeepSeek API."""
    try:
        response = client.chat.completions.create(
            model=request.model,
            messages=[{"role": "user", "content": request.message}],
            temperature=request.temperature
        )
        return response.choices[0].message.content
    except Exception as e:
        return f"DeepSeek API call failed: {str(e)}"

if __name__ == "__main__":
    mcp.run(server, transport='stdio')
```

4. Prompt Template Design: Dynamic Context Injection

4.1 Basic Template Design

```python
# prompt_templates.py
from string import Template
from datetime import datetime

class PromptTemplate:
    def __init__(self, template_str: str):
        self.template = Template(template_str)

    def render(self, **kwargs) -> str:
        """Render the template with default plus caller-supplied context."""
        defaults = {
            'current_time': datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
            'system_role': "You are a helpful AI assistant"
        }
        defaults.update(kwargs)
        return self.template.safe_substitute(defaults)

# Templates for common scenarios
TEMPLATES = {
    "code_assistant": PromptTemplate("""
$system_role
Current time: $current_time

Please help me with the following programming problem:
$user_query

Please provide detailed code examples and explanations.
"""),
    "content_writer": PromptTemplate("""
$system_role
Current time: $current_time

Please write content according to the following requirements:
Topic: $topic
Word count: $word_count
Style: $style

Please begin:
"""),
    "data_analyzer": PromptTemplate("""
$system_role
Current time: $current_time

Please analyze the following data:
Dataset description: $dataset_description
Analysis goal: $analysis_goal

Please provide a detailed analysis:
""")
}
```

4.2 Dynamic Context Injection

```python
# context_manager.py
from typing import Any
from prompt_templates import TEMPLATES

class ContextManager:
    def __init__(self):
        self.context_stores = {}

    def add_context(self, key: str, context: Any):
        """Add a piece of context."""
        self.context_stores[key] = context

    def get_context(self, key: str, default=None):
        """Retrieve a piece of context."""
        return self.context_stores.get(key, default)

    def generate_prompt(self, template_name: str, user_input: str, **extra_context) -> str:
        """Build the final prompt."""
        if template_name not in TEMPLATES:
            raise ValueError(f"Unknown template: {template_name}")

        # Merge all context sources
        context = {
            'user_query': user_input,
            **self.context_stores,
            **extra_context
        }
        return TEMPLATES[template_name].render(**context)

# Usage example
context_manager = ContextManager()
context_manager.add_context("user_level", "advanced")
context_manager.add_context("preferred_language", "Python")

prompt = context_manager.generate_prompt(
    "code_assistant",
    "How do I implement quicksort?",
    complexity="high"
)
```

4.3 Multi-turn Conversation Context Management

```python
# conversation_manager.py
from typing import List
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Message:
    role: str       # "user", "assistant", "system"
    content: str
    timestamp: str

class ConversationManager:
    def __init__(self, max_history: int = 10):
        self.history: List[Message] = []
        self.max_history = max_history

    def add_message(self, role: str, content: str):
        """Append a message to the history."""
        message = Message(
            role=role,
            content=content,
            timestamp=datetime.now().isoformat()
        )
        self.history.append(message)

        # Trim the history to the configured length
        if len(self.history) > self.max_history:
            self.history = self.history[-self.max_history:]

    def get_conversation_context(self) -> str:
        """Render the conversation history as plain text."""
        context_lines = []
        for msg in self.history:
            context_lines.append(f"{msg.role}: {msg.content}")
        return "\n".join(context_lines)

    def generate_contextual_prompt(self, user_input: str, template_name: str) -> str:
        """Build a prompt that includes the conversation history."""
        from prompt_templates import TEMPLATES

        conversation_context = self.get_conversation_context()
        prompt = TEMPLATES[template_name].render(
            user_query=user_input,
            conversation_history=conversation_context,
            current_time=datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        )
        return prompt
```

5. Complete Integration Example

5.1 Comprehensive MCP Server

```python
# comprehensive_mcp_server.py
import mcp.server as mcp
from mcp.server import Server
from pydantic import BaseModel
from typing import Optional

# Integration modules implemented elsewhere in the project
from ollama_integration import OllamaIntegration
from openai_integration import OpenAIIntegration
from prompt_system import PromptSystem

server = Server("comprehensive-llm-server")

class LLMRequest(BaseModel):
    prompt: str
    model_type: str = "ollama"  # ollama, openai, deepseek
    model_name: Optional[str] = None
    max_tokens: int = 512
    temperature: float = 0.7

# Initialize the integration modules
ollama_integration = OllamaIntegration()
openai_integration = OpenAIIntegration()
prompt_system = PromptSystem()

@server.tool()
async def generate_text(request: LLMRequest) -> str:
    """Unified text-generation entry point."""
    # Enhance the user input with the prompt system
    enhanced_prompt = prompt_system.enhance_prompt(
        request.prompt,
        context=prompt_system.get_current_context()
    )

    # Pick the backend according to the requested model type
    if request.model_type == "ollama":
        result = await ollama_integration.generate(
            enhanced_prompt,
            request.model_name,
            request.max_tokens
        )
    elif request.model_type == "openai":
        result = await openai_integration.chat_completion(
            enhanced_prompt,
            request.model_name,
            request.temperature
        )
    else:
        return "Unsupported model type"

    # Record the exchange in the conversation history
    prompt_system.add_to_history("user", request.prompt)
    prompt_system.add_to_history("assistant", result)

    return result

@server.list_resources()
async def list_all_models() -> list:
    """List all available models."""
    ollama_models = await ollama_integration.list_models()
    openai_models = openai_integration.list_models()
    return ollama_models + openai_models

if __name__ == "__main__":
    mcp.run(server, transport='stdio')
```

5.2 Client Usage Example

```python
# client_example.py
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Connect to the MCP server over stdio
    server_params = StdioServerParameters(
        command="python",
        args=["comprehensive_mcp_server.py"]
    )
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Initialize the session
            await session.initialize()

            # List the available resources
            resources = await session.list_resources()
            print("Available models:", resources)

            # Generate text with the Ollama backend
            response = await session.call_tool(
                "generate_text",
                {
                    "prompt": "Explain the basic concepts of machine learning",
                    "model_type": "ollama",
                    "model_name": "llama2",
                    "max_tokens": 300
                }
            )
            print("Generated response:", response)

if __name__ == "__main__":
    asyncio.run(main())
```

6. Best Practices and Optimization Tips

6.1 Performance Optimization

  1. Connection pooling: maintain a connection pool for frequently used model backends
  2. Caching: cache the results of common requests (see the sketch after this list)
  3. Batch processing: support batching prompts to improve throughput
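
As a sketch of the caching idea in point 2, the snippet below wraps an Ollama generation call with a simple in-memory cache keyed on the model name and prompt. The helper names and the unbounded dict cache are illustrative assumptions, not part of the MCP SDK.

```python
# cache_example.py -- minimal in-memory caching sketch (hypothetical helper names)
import hashlib
from typing import Dict

import ollama

_cache: Dict[str, str] = {}

def _cache_key(model: str, prompt: str) -> str:
    """Derive a stable cache key from the model name and prompt text."""
    return hashlib.sha256(f"{model}\x00{prompt}".encode("utf-8")).hexdigest()

def generate_with_cache(model: str, prompt: str) -> str:
    """Return a cached result when available, otherwise call Ollama and store the output."""
    key = _cache_key(model, prompt)
    if key in _cache:
        return _cache[key]
    response = ollama.generate(model=model, prompt=prompt)
    result = response["response"]
    _cache[key] = result
    return result
```

In production you would bound the cache (for example with an LRU policy or a TTL) and decide whether sampling parameters such as temperature belong in the key.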

6.2 Security Considerations

  1. API key management: use environment variables or a secrets management system
  2. Input validation: strictly validate and sanitize all inputs (a minimal sketch follows this list)
  3. Access control: implement role-based access control
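
A minimal sketch of points 1 and 2, assuming pydantic v2-style field constraints; the specific limits are illustrative, not prescriptive.

```python
# security_example.py -- illustrative key loading and input validation (assumed limits)
import os
from pydantic import BaseModel, Field

# Point 1: never hard-code keys; read them from the environment (or a secrets manager).
DEEPSEEK_API_KEY = os.getenv("DEEPSEEK_API_KEY")
if not DEEPSEEK_API_KEY:
    raise RuntimeError("DEEPSEEK_API_KEY is not set")

# Point 2: constrain user-controlled fields before they reach the model backend.
class SafeChatRequest(BaseModel):
    message: str = Field(min_length=1, max_length=4000)            # reject empty/oversized prompts
    model: str = Field(default="deepseek-chat", pattern=r"^[\w.\-]+$")  # allow only plain model names
    temperature: float = Field(default=0.7, ge=0.0, le=2.0)        # keep sampling within a sane range
```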

6.3 Monitoring and Logging

  1. Performance monitoring: track response times and resource usage
  2. Usage logging: record detailed request and response logs
  3. Error handling: implement robust error handling and retry logic (see the sketch below)
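
The sketch below combines the three points for a synchronous backend call: a decorator that logs each call, records its latency, and retries failures with exponential backoff. The retry count and backoff values are arbitrary assumptions, and the async tools shown earlier would need an async variant.

```python
# monitoring_example.py -- logging, timing, and retry decorator (illustrative settings)
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("mcp-llm")

def observed(max_retries: int = 3, backoff_seconds: float = 1.0):
    """Log each call with its latency and retry failures with exponential backoff."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_retries + 1):
                start = time.perf_counter()
                try:
                    result = func(*args, **kwargs)
                    elapsed = time.perf_counter() - start
                    logger.info("%s succeeded in %.2fs (attempt %d)", func.__name__, elapsed, attempt)
                    return result
                except Exception:
                    logger.exception("%s failed on attempt %d", func.__name__, attempt)
                    if attempt == max_retries:
                        raise
                    time.sleep(backoff_seconds * 2 ** (attempt - 1))
        return wrapper
    return decorator

@observed()
def call_backend(prompt: str) -> str:
    """Placeholder for a real backend call (Ollama, OpenAI, DeepSeek, ...)."""
    return f"[response to: {prompt}]"
```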


Summary

This article walked through deep integration of MCP with large language models, covering integration of local models (Ollama/vLLM) and online models (OpenAI/DeepSeek), as well as the more advanced techniques of prompt template design and dynamic context injection.

With the MCP protocol we can build more modular, extensible AI application systems in which different models can be swapped and combined seamlessly. This architecture not only improves flexibility but also lays a solid foundation for future extensions.

Hopefully this tutorial helps you integrate MCP with LLMs in your own projects and build more capable, intelligent AI applications.

