Getting Started with the GitHub Copilot SDK: Build Your First AI Agent in Five Minutes

Author: 乱世不浮生 | Date: 2026/3/20

TL;DR

The core value of the GitHub Copilot SDK is not the convenience of "calling an LLM" (that's already been solved by the OpenAI SDK, LangChain, etc.), but rather providing a production-proven Agent runtime.

The problems it actually solves are:

  • Orchestration complexity: Planner, tool routing, and state management are built-in
  • Stability: Reliability guaranteed by millions of developers using it daily
  • Evolvability: New models and tool capabilities are automatically updated by the CLI

When you start building your next AI application, ask yourself two questions:

  1. Where is my core value? If it's in business logic and tool definitions, use the SDK; if it's in low-level orchestration innovation, build your own framework.
  2. How fast do you need to reach production? The SDK lets you skip 80% of the infrastructure work and focus on the last 20% of differentiated capability.

The barrier to Agent development has dropped, but the real challenge is: defining valuable tools, designing smooth interactions, and solving real problems. Technology is no longer the bottleneck — imagination is.

Introduction: Why Agent Development Is No Longer Just for Experts

In January 2026, GitHub released the Copilot SDK, marking a pivotal shift in AI Agent development from "expert territory" to "mainstream tooling."

Before this, building an AI Agent capable of autonomous planning, tool invocation, and file editing required you to:

  • Choose and integrate an LLM service (OpenAI, Anthropic, Azure...)
  • Build your own Agent orchestrator (planner, tool routing, state management)
  • Handle streaming output, error retries, and context management
  • Implement tool definition standards (function calling schema)

This process was complex and fragile. Open-source frameworks (LangChain, AutoGPT) lowered the barrier, but still required deep understanding of Agent runtime mechanics. The real turning point: GitHub opened up the production-grade Agent runtime from Copilot CLI as an SDK.

What does this mean? You can launch a complete Agent runtime with just a handful of lines of code:

```python
import asyncio
from copilot import CopilotClient

async def main():
    client = CopilotClient()
    await client.start()
    session = await client.create_session({"model": "gpt-4.1"})
    response = await session.send_and_wait({"prompt": "Explain quantum entanglement"})
    print(response.data.content)

asyncio.run(main())
```

No need to worry about model integration, prompt engineering, or response parsing — all of this has been battle-tested by Copilot CLI across millions of developers. You only need to define business logic; the SDK handles everything else.

Goal of this article: Through a complete weather assistant example, help you understand:

  1. How the SDK communicates with the CLI (the architectural essence)
  2. How the tool invocation mechanism works (how the LLM "decides" to call your code)
  3. The key leap from toy to tool (streaming responses, event listening, state management)

Whether you want to quickly validate an AI application idea or build a customized Agent for your enterprise, this article is the starting point.

Prerequisites: Setting Up Your Environment

Before writing any code, make sure your development environment meets the following requirements.

Prerequisites Checklist

1. Install the GitHub Copilot CLI

The SDK itself does not contain AI inference capabilities — it communicates with the Copilot CLI via JSON-RPC. The CLI is the real "engine"; the SDK is the "steering wheel."

```bash
# macOS/Linux
brew install copilot-cli

# Verify installation
copilot --version
```

2. Authenticate Your GitHub Account

```bash
copilot login
```

You need a GitHub Copilot subscription (individual or enterprise). If using BYOK (Bring Your Own Key) mode, you can skip this step.

Verify the Environment

Run the following command to confirm the CLI is working:

```bash
copilot -p "Explain recursion in one sentence"
```

If you see an AI response, the environment is ready.

Step 1: Send Your First Message

Install the SDK

Create a project directory and install the Python SDK:

```bash
mkdir copilot-demo && cd copilot-demo
# Work inside a virtual environment
python -m venv venv && source venv/bin/activate
pip install github-copilot-sdk
```

Minimal Code Example

Create main.py:

```python
import asyncio
from copilot import CopilotClient

async def main():
    client = CopilotClient()
    await client.start()

    session = await client.create_session({"model": "gpt-4.1"})
    response = await session.send_and_wait({"prompt": "What is quantum entanglement?"})

    print(response.data.content)

    await client.stop()

asyncio.run(main())
```

Run it:

```bash
python main.py
```

You'll see the AI's complete response. With about a dozen lines of code, a complete AI conversation is done.

Execution Flow Breakdown

What happens behind this code?

```text
1. client.start()      SDK launches the Copilot CLI process (runs in the background)
2. create_session()    Requests the CLI to create a session via JSON-RPC
3. send_and_wait()     Sends the prompt; the CLI forwards it to the LLM
4. LLM inference       Response is returned to the SDK through the CLI
5. response.data       SDK parses the JSON response and extracts the content
```

The Architectural Essence: The SDK Is the CLI's "Remote Control"

GitHub's design philosophy is separation of concerns:

| Component | Responsibility |
| --- | --- |
| Copilot CLI | Agent runtime (planning, tool invocation, LLM communication) |
| SDK | Process management, JSON-RPC wrapper, event listening |
| Your code | Business logic and tool definitions |

Advantages of this architecture:

  • Independent CLI upgrades: New models and tool capabilities don't require SDK changes
  • Low multi-language support cost: Each language SDK only needs to implement a JSON-RPC client
  • Debug-friendly: The CLI can run independently, making it easy to observe logs and troubleshoot

JSON-RPC Communication Example

When you call send_and_wait(), the actual request the SDK sends:

```json
{
  "jsonrpc": "2.0",
  "method": "session.send",
  "params": {
    "sessionId": "abc123",
    "prompt": "What is quantum entanglement?"
  },
  "id": 1
}
```

CLI response:

```json
{
  "jsonrpc": "2.0",
  "result": {
    "data": {
      "content": "Quantum entanglement refers to a phenomenon where two or more quantum systems..."
    }
  },
  "id": 1
}
```

Understanding this is important: The SDK is not "calling an LLM" — it's "calling the CLI." The CLI has already encapsulated all the complexity.
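For intuition, the envelope above can be reproduced with a few lines of plain Python. This is a sketch of the serialization only; the actual transport (spawning the CLI process and exchanging messages) is handled by the SDK, and the helper name here is invented for illustration:

```python
import json

def make_rpc_request(method: str, params: dict, request_id: int) -> str:
    """Serialize a JSON-RPC 2.0 request like the one the SDK sends to the CLI."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": request_id,
    })

raw = make_rpc_request(
    "session.send",
    {"sessionId": "abc123", "prompt": "What is quantum entanglement?"},
    request_id=1,
)
parsed = json.loads(raw)
```

Every SDK method call ultimately reduces to an envelope like this, which is why adding a new language SDK is cheap: only the JSON-RPC client needs reimplementing.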

Step 2: Real-Time AI Responses — Streaming Output

Why Streaming Responses Are Needed

When using send_and_wait(), you must wait for the LLM to generate a complete response before seeing any output. For long-form generation (such as code explanations or documentation), users might stare at a blank screen for 10–30 seconds.

Streaming responses let the AI output text word by word, like a typewriter — improving user experience while also allowing you to catch early signs that the model is going off track.

Event Listening Mechanism

Modify main.py to enable streaming output:

```python
import asyncio
import sys
from copilot import CopilotClient
from copilot.generated.session_events import SessionEventType

async def main():
    client = CopilotClient()
    await client.start()

    session = await client.create_session({
        "model": "gpt-4.1",
        "streaming": True,  # Enable streaming mode
    })

    # Listen for response deltas
    def handle_event(event):
        if event.type == SessionEventType.ASSISTANT_MESSAGE_DELTA:
            sys.stdout.write(event.data.delta_content)
            sys.stdout.flush()
        if event.type == SessionEventType.SESSION_IDLE:
            print()  # Newline when complete

    session.on(handle_event)

    await session.send_and_wait({"prompt": "Write a code example of quicksort"})

    await client.stop()

asyncio.run(main())
```

After running it, you'll see results gradually "stream in" rather than appearing all at once.

The Design Philosophy of the Event-Driven Model

The SDK uses the Observer pattern to handle the asynchronous event stream from the CLI:

```text
CLI generates events → SDK parses → dispatches to listeners → your handle_event() executes
```

Main event types:

| Event | Triggered When | Typical Use |
| --- | --- | --- |
| ASSISTANT_MESSAGE_DELTA | AI generates partial content | Real-time display |
| ASSISTANT_MESSAGE | AI completes a full message | Get final content |
| SESSION_IDLE | Session enters idle state | Mark task complete |
| TOOL_CALL | AI decides to invoke a tool | Logging, auth check |
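The dispatch mechanism can be sketched as a minimal observer implementation. This is a simplified stand-in, not the SDK's actual code; the `Event` class and the event-type strings below are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Event:
    type: str
    data: dict

class EventBus:
    """Minimal observer pattern: every registered listener sees every event."""

    def __init__(self):
        self._listeners = []

    def on(self, listener):
        self._listeners.append(listener)

    def emit(self, event: Event):
        for listener in self._listeners:
            listener(event)

bus = EventBus()
seen = []
bus.on(lambda e: seen.append(e.type))  # a listener that just records event types
bus.emit(Event("assistant.message_delta", {"delta_content": "Hel"}))
bus.emit(Event("session.idle", {}))
```

The design choice worth noting: listeners filter events themselves (by checking `event.type`), which keeps the dispatcher trivially simple and lets one handler react to several event types.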

Code Comparison: Synchronous vs. Streaming

Synchronous mode — suitable for short responses:

```python
response = await session.send_and_wait({"prompt": "1+1=?"})
print(response.data.content)  # Wait and print all at once
```

Streaming mode — suitable for long-form content:

```python
def on_delta(event):
    if event.type == SessionEventType.ASSISTANT_MESSAGE_DELTA:
        print(event.data.delta_content, end="", flush=True)

session.on(on_delta)
await session.send_and_wait({"prompt": "Write an article"})
```

Technical Details Under the Hood

Streaming between the LLM service and the CLI is typically carried over Server-Sent Events (SSE) or WebSocket; the CLI then relays deltas to the SDK:

  1. The CLI receives a token stream from the LLM
  2. For each token received, the CLI sends a message_delta event to the SDK
  3. The SDK triggers your event listener
  4. The user immediately sees new content

This design lets your application perceive the AI's "thinking process", not just the final result.
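The four steps above can be simulated end to end with a toy token stream. Everything here is illustrative (not SDK API): a generator stands in for the LLM's token stream, and a handler accumulates deltas into the final message:

```python
def fake_token_stream():
    # Stand-in for the token stream the CLI receives from the LLM
    yield from ["Quantum ", "entanglement ", "links ", "particle ", "states."]

chunks = []

def on_delta(delta: str):
    chunks.append(delta)  # real code would write each delta to stdout here

# CLI → SDK → your listener, one token at a time
for token in fake_token_stream():
    on_delta(token)

full_message = "".join(chunks)
```

The user sees each chunk as it arrives, while the application can still assemble the complete message at the end, exactly the dual view that ASSISTANT_MESSAGE_DELTA and ASSISTANT_MESSAGE provide.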

Step 3: Giving the AI Capabilities — Custom Tools

The Essence of Tools: Letting the LLM Call Your Code

Up to now, the AI can only "talk" — it cannot interact with the outside world. Tools are the core capability of an Agent: you define functions, and the AI decides when to call them.

For example:

  1. User: "What's the weather in Beijing today?"
  2. AI thinks: I need weather data → call get_weather("Beijing")
  3. Your code: returns {"temperature": "15°C", "condition": "sunny"}
  4. AI synthesizes: "Beijing is sunny today, 15°C."

Key point: The AI autonomously decides whether to call a tool and what parameters to pass.

Three Elements of a Tool Definition

A tool contains:

  1. Description: Tells the AI what this tool does
  2. Parameter schema: Defines the structure of input parameters (using Pydantic)
  3. Handler: The Python function that actually executes

Complete Weather Assistant Example

Create weather_assistant.py:

```python
import asyncio
import random
import sys
from copilot import CopilotClient
from copilot.tools import define_tool
from copilot.generated.session_events import SessionEventType
from pydantic import BaseModel, Field

# 1. Define parameter schema
class GetWeatherParams(BaseModel):
    city: str = Field(description="City name, e.g., Beijing, Shanghai")

# 2. Define tool (description + handler)
@define_tool(description="Get current weather for a specified city")
async def get_weather(params: GetWeatherParams) -> dict:
    city = params.city

    # In production, call a real weather API here
    # Using mock data for demonstration
    conditions = ["sunny", "cloudy", "rainy", "overcast"]
    temp = random.randint(10, 30)
    condition = random.choice(conditions)

    return {
        "city": city,
        "temperature": f"{temp}°C",
        "condition": condition
    }

async def main():
    client = CopilotClient()
    await client.start()

    # 3. Pass tools to session
    session = await client.create_session({
        "model": "gpt-4.1",
        "streaming": True,
        "tools": [get_weather],  # Register tool
    })

    # Listen for streaming responses
    def handle_event(event):
        if event.type == SessionEventType.ASSISTANT_MESSAGE_DELTA:
            sys.stdout.write(event.data.delta_content)
            sys.stdout.flush()
        if event.type == SessionEventType.SESSION_IDLE:
            print("\n")

    session.on(handle_event)

    # Send a prompt that requires tool calls
    await session.send_and_wait({
        "prompt": "What's the weather like in Beijing and Shanghai? Compare them."
    })

    await client.stop()

asyncio.run(main())
```

Run:

```bash
python weather_assistant.py
```

Execution Flow Explained

When you ask "What's the weather in Beijing and Shanghai":

  1. AI analyzes the question → weather data needed
  2. AI checks available tools → finds the get_weather function
  3. AI decides to call → get_weather(city="Beijing")
  4. SDK triggers the handler → your function returns {"temperature": "22°C", ...}
  5. AI receives the result → calls get_weather(city="Shanghai") again
  6. AI synthesizes the answer → "Beijing is sunny at 22°C; Shanghai is overcast at 18°C..."

The AI will automatically call the tool multiple times (once for Beijing, once for Shanghai) — you don't need to write any loop logic.
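The loop that produces this behavior lives inside the CLI runtime, but its shape can be sketched with toy stand-ins. Note that `fake_model`, its return format, and the history structure below are all invented for illustration; the real planner is far more elaborate:

```python
def fake_model(history):
    """Toy planner: request weather for each unseen city, then answer."""
    done = {entry["args"]["city"] for entry in history if entry["role"] == "tool"}
    for city in ["Beijing", "Shanghai"]:
        if city not in done:
            return {"tool_call": {"name": "get_weather", "args": {"city": city}}}
    return {"answer": f"Fetched weather for {len(done)} cities."}

def get_weather(city: str) -> dict:
    return {"city": city, "temperature": "22°C"}  # mock tool

def run_agent_loop():
    history = []
    while True:
        step = fake_model(history)
        if "answer" in step:          # planner decided it has enough information
            return step["answer"]
        call = step["tool_call"]      # otherwise execute the requested tool...
        result = get_weather(**call["args"])
        # ...and feed the result back so the next planning step can see it
        history.append({"role": "tool", "args": call["args"], "result": result})

final = run_agent_loop()
```

This "plan → call tool → observe result → plan again" cycle is what the SDK runs for you; your only obligation is to make each tool's handler return useful, structured data.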

Why the Parameter Schema Matters

Why define parameters with Pydantic?

```python
class GetWeatherParams(BaseModel):
    city: str = Field(description="City name")
    unit: str = Field(default="celsius", description="Temperature unit: celsius or fahrenheit")
```

The SDK converts this schema to JSON Schema and passes it to the LLM:

```json
{
  "type": "object",
  "properties": {
    "city": {"type": "string", "description": "City name"},
    "unit": {"type": "string", "description": "Temperature unit"}
  },
  "required": ["city"]
}
```

The LLM extracts parameters based on this schema. Therefore, the clearer the description, the more accurately the AI will invoke the tool.
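You can inspect the generated schema yourself: Pydantic v2 exposes it via `model_json_schema()`. The SDK may reshape the output before handing it to the LLM, but the core structure matches the JSON above:

```python
from pydantic import BaseModel, Field

class GetWeatherParams(BaseModel):
    city: str = Field(description="City name")
    unit: str = Field(default="celsius", description="Temperature unit: celsius or fahrenheit")

# Pydantic v2: emit the JSON Schema for this model
schema = GetWeatherParams.model_json_schema()
```

Only `city` appears in `required`, because `unit` has a default, which is exactly the signal the LLM uses to decide which parameters it must extract from the user's question.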

Step 4: Building an Interactive Assistant

Now let's combine all the capabilities: streaming output + tool invocation + command-line interaction.

Complete Runnable Code

Create interactive_assistant.py:

```python
import asyncio
import random
import sys
from copilot import CopilotClient
from copilot.tools import define_tool
from copilot.generated.session_events import SessionEventType
from pydantic import BaseModel, Field

# Define tools
class GetWeatherParams(BaseModel):
    city: str = Field(description="City name, e.g., Beijing, Shanghai, Guangzhou")

@define_tool(description="Get current weather for a specified city")
async def get_weather(params: GetWeatherParams) -> dict:
    city = params.city
    conditions = ["sunny", "cloudy", "rainy", "overcast", "hazy"]
    temp = random.randint(5, 35)
    condition = random.choice(conditions)
    humidity = random.randint(30, 90)

    return {
        "city": city,
        "temperature": f"{temp}°C",
        "condition": condition,
        "humidity": f"{humidity}%"
    }

async def main():
    client = CopilotClient()
    await client.start()

    session = await client.create_session({
        "model": "gpt-4.1",
        "streaming": True,
        "tools": [get_weather],
    })

    # Event listeners
    def handle_event(event):
        if event.type == SessionEventType.ASSISTANT_MESSAGE_DELTA:
            sys.stdout.write(event.data.delta_content)
            sys.stdout.flush()
        if event.type == SessionEventType.SESSION_IDLE:
            print()  # Newline when complete

    session.on(handle_event)

    # Interactive conversation loop
    print("🌤️  Weather Assistant (type 'exit' to quit)")
    print("Try: 'What's the weather in Beijing?' or 'Compare weather in Guangzhou and Shenzhen'\n")

    while True:
        try:
            user_input = input("You: ")
        except EOFError:
            break

        if user_input.lower() in ["exit", "quit"]:
            break

        if not user_input.strip():
            continue

        sys.stdout.write("Assistant: ")
        await session.send_and_wait({"prompt": user_input})
        print()  # Extra newline

    await client.stop()
    print("Goodbye!")

asyncio.run(main())
```

Sample Output

```bash
python interactive_assistant.py
```

Example conversation:

```text
🌤️  Weather Assistant (type 'exit' to quit)
Try: 'What's the weather in Beijing?' or 'Compare weather in Guangzhou and Shenzhen'

You: Compare weather in Guangzhou and Shenzhen
Assistant: Guangzhou: 21°C, sunny, 84% humidity.
Shenzhen: 33°C, hazy, 77% humidity.
Shenzhen is significantly warmer and hazier, while Guangzhou is cooler and sunnier with slightly higher humidity.

You: What's the weather in Shenzhen
Assistant: The weather in Shenzhen is 8°C, overcast, with 47% humidity.

You: quit
Goodbye!
```

Key Design Considerations

1. Session Persistence

Notice that we create the session only once, and reuse it throughout the entire conversation loop. This means:

  • The AI remembers previous conversation content
  • Follow-up questions like "What about tomorrow?" work (the AI knows which city you mean)
  • Tool call history is also retained

2. Async I/O Done Right

```python
# input() inside the while True loop
user_input = input("You: ")  # Synchronous blocking, but acceptable here

# send_and_wait() is async
await session.send_and_wait({"prompt": user_input})
```

Why is blocking input() acceptable here? Because while we wait for the user to type, no other coroutine needs the event loop — the session is idle between turns. The real async work happens during communication with the CLI.
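If you later add background work (say, periodic events that must keep flowing while waiting for the user), a blocking input() would stall the event loop. The standard-library fix is asyncio.to_thread, which runs the blocking call in a worker thread. A minimal sketch — the `reader` parameter is added here for testability and is not part of any SDK API:

```python
import asyncio

async def read_user_input(prompt: str, reader=input) -> str:
    # Run the blocking reader in a worker thread; the event loop stays responsive
    return await asyncio.to_thread(reader, prompt)

# Usage inside the conversation loop:
#   user_input = await read_user_input("You: ")
```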

3. Graceful Exit

```python
try:
    user_input = input("You: ")
except EOFError:  # Catch Ctrl+D
    break
```

Handling EOFError and common exit commands (exit, quit) ensures a smooth user experience.

Extension Ideas

Based on this framework, you can quickly extend functionality:

Add more tools:

```python
@define_tool(description="Query real-time stock price")
async def get_stock_price(params): ...

@define_tool(description="Search information on the web")
async def web_search(params): ...

session = await client.create_session({
    "tools": [get_weather, get_stock_price, web_search],
})
```

The AI will automatically select the appropriate tool based on the user's question.

Add a system prompt:

```python
session = await client.create_session({
    "model": "gpt-4.1",
    "tools": [get_weather],
    "system_message": {
        "content": "You are a professional weather assistant. Keep answers concise but informative."
    }
})
```

Log tool calls:

```python
def handle_event(event):
    if event.type == SessionEventType.TOOL_CALL:
        print(f"\n[Debug] AI called tool: {event.data.tool_name}")
        print(f"[Debug] Arguments: {event.data.arguments}\n")
```

Debugging Tips

During development, observing CLI logs is crucial for understanding Agent behavior.

Start a standalone CLI server:

```bash
# Start CLI server in debug mode
copilot --headless --log-level debug --port 9999

# Optional: specify log directory
copilot --headless --log-level debug --port 9999 --log-dir ./logs
```

Connect from your code:

```python
client = CopilotClient({
    "cli_url": "http://localhost:9999",
})
await client.start()  # Connects directly without starting a new process
```

View logs:

By default, logs are saved in ~/.copilot/logs/, with an independent log file for each server process. Use tail -f to monitor in real time:

```bash
tail -f ~/.copilot/logs/process-<timestamp>-<pid>.log
```

Debug tool calls:

```python
def handle_event(event):
    # Tool call starts
    if event.type == SessionEventType.TOOL_USER_REQUESTED:
        print(f"[Tool Call] {event.data.tool_name}")
        print(f"Arguments: {event.data.arguments}")

    # Tool execution result
    if event.type == SessionEventType.TOOL_EXECUTION_COMPLETE:
        print(f"[Tool Result] {event.data.tool_name}")
        print(f"Result: {event.data.result}")

    # AI's final response
    if event.type == SessionEventType.ASSISTANT_MESSAGE:
        print(f"[Assistant] {event.data.content[:100]}...")

session.on(handle_event)
```

This pattern gives you a clear view of the entire tool call chain: AI decision → tool execution → result return → final response.

