diff --git a/MIGRATION_STATUS.md b/MIGRATION_STATUS.md
new file mode 100644
index 0000000..8a964f1
--- /dev/null
+++ b/MIGRATION_STATUS.md
@@ -0,0 +1,136 @@
+# Jixia Academy (稷下学宫) Google ADK Migration Status Report
+
+## 📊 Migration Progress Overview
+
+### ✅ Completed Tasks
+
+#### 1. Infrastructure Migration
+- [x] **Google ADK installation**: Google ADK 1.10.0 installed successfully
+- [x] **API key configuration**: `GOOGLE_API_KEY` configured in Doppler
+- [x] **Environment verification**: basic tests pass and agent creation succeeds
+
+#### 2. Configuration System Updates
+- [x] **doppler_config.py enhancements**:
+  - Added `get_google_api_key()`
+  - Added `get_google_genai_config()`
+  - Updated `validate_config()` to support three modes:
+    - `openrouter`: OpenRouter only
+    - `google_adk`: Google ADK only
+    - `hybrid`: mixed mode (currently in use)
+
+#### 3. Test Suite
+- [x] **Basic tests**: `test_google_adk.py` - verifies the ADK installation and configuration
+- [x] **Agent tests**: `adk_debate_test.py` - creation test for the Eight Immortals agents
+- [x] **Debate prototype**: `adk_simple_debate.py` - verifies basic agent functionality
+
+#### 4. Documentation Updates
+- [x] **README.md**: added Google ADK installation and configuration instructions
+- [x] **requirements.txt**: added notes on the Google ADK dependencies
+- [x] **Migration guide**: complete `GOOGLE_ADK_MIGRATION_GUIDE.md`
+
+### 🔄 Current Status
+
+#### Configuration Mode
+- **Current mode**: `hybrid`
+- **Available services**: OpenRouter + Google ADK
+- **API key status**:
+  - ✅ GOOGLE_API_KEY: configured (39 characters)
+  - ✅ OPENROUTER_API_KEY_1: configured
+  - ✅ RAPIDAPI_KEY: configured
+
+#### Agent Status
+- **Eight Immortals agents**: created successfully
+  - Tieguai Li (铁拐李) - contrarian-thinking expert
+  - Zhongli Quan (汉钟离) - balancer and coordinator
+  - Zhang Guolao (张果老) - historical-wisdom analyst
+  - Lan Caihe (蓝采和) - innovative thinker
+  - He Xiangu (何仙姑) - intuitive observer
+  - Lü Dongbin (吕洞宾) - rational analyst
+  - Han Xiangzi (韩湘子) - aesthetic perceiver
+  - Cao Guojiu (曹国舅) - pragmatic executor
+- **Model in use**: `gemini-2.0-flash-exp`
+
+### 🚧 Remaining Tasks
+
+#### 1. Agent Conversations (priority: high)
+- [ ] Learn the correct ADK invocation pattern
+- [ ] Implement agent-to-agent dialogue logic
+- [ ] Handle the async-generator return value of `run_async`
+- [ ] Manage InvocationContext creation
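
The third item above is largely a matter of consuming an async generator rather than awaiting a single coroutine. A minimal sketch of that pattern, with `fake_run_async` standing in for the real `run_async` (whose exact signature still needs to be confirmed against the ADK docs):

```python
import asyncio

async def fake_run_async(prompt: str):
    """Stand-in for an ADK agent's run_async: an async generator of events."""
    for chunk in ["partial", "final"]:
        yield {"text": f"{chunk}: {prompt}"}

async def collect_events(prompt: str) -> list:
    # An async generator is iterated with `async for`, not awaited directly.
    return [event async for event in fake_run_async(prompt)]

events = asyncio.run(collect_events("hello"))
print(events[-1]["text"])  # → final: hello
```

The same `async for` loop is what a streaming UI would use to render events as they arrive instead of collecting them into a list.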
+
+#### 2. Core System Migration (priority: high)
+- [ ] Port the existing Eight Immortals debate logic to ADK
+- [ ] Refactor the core files under `src/jixia/debates/`
+- [ ] Integrate the RapidAPI data sources into the ADK agents
+- [ ] Implement debate topic and flow management
+
+#### 3. UI Integration (priority: medium)
+- [ ] Update the Streamlit UI to support ADK
+- [ ] Modify `src/streamlit_app.py`
+- [ ] Adapt to the new agent invocation pattern
+- [ ] Preserve the existing user experience
+
+#### 4. Advanced Features (priority: low)
+- [ ] Integrate ADK FunctionTool
+- [ ] Add agent memory and context management
+- [ ] Improve performance and error handling
+- [ ] Add monitoring and logging
+
+### 🎯 Next Steps
+
+#### Immediate (this week)
+1. **Resolve the ADK invocation issue**
+   - Study the correct usage of `run_async`
+   - Create an InvocationContext example
+   - Get a first successful agent conversation working
+
+2. **Build a working prototype**
+   - Implement a simple dialogue between Tieguai Li and Lü Dongbin
+   - Validate the feasibility of the debate logic
+   - Test multi-round conversations
+
+#### Short term (this month)
+1. **Finish the core migration**
+   - Migrate all eight agents
+   - Implement the full debate flow
+   - Integrate the existing data sources
+
+2. **Adapt the UI**
+   - Update the Streamlit app
+   - Keep feature parity
+   - Improve the user experience
+
+### 📈 Technical Benefits
+
+#### Improvements from Google ADK
+1. **Unified model ecosystem**: direct access to the Gemini model family
+2. **Official support**: a framework maintained by Google
+3. **Better integration**: deep integration with Google services
+4. **Developer tooling**: `adk web`, `adk run`, `adk api_server`
+5. **Performance**: native support for async and streaming
+
+#### Core Value Preserved
+1. **Jixia Academy philosophical framework**: fully retained
+2. **Eight Immortals personas**: migrated as-is
+3. **RapidAPI data sources**: still in use
+4. **MongoDB database**: unchanged
+5. **Doppler configuration management**: extended
+
+### 🔍 Risk Assessment
+
+#### Technical Risks
+- **Learning curve**: the ADK framework takes time to learn
+- **API churn**: Google ADK is still evolving quickly
+- **Compatibility**: must remain compatible with the existing system
+
+#### Mitigations
+- **Incremental migration**: keep hybrid mode and switch over gradually
+- **Thorough testing**: every feature has a corresponding test
+- **Documentation**: record the migration process and decisions in detail
+
+---
+
+**Last updated**: December 2024
+**Migration owner**: AI Assistant
+**Current version**: Google ADK 1.10.0
+**Project status**: 🟡 In progress (infrastructure done, core features under development)
\ No newline at end of file
diff --git a/QUICK_START_GUIDE.md b/QUICK_START_GUIDE.md
new file mode 100644
index 0000000..c6b2371
--- /dev/null
+++ b/QUICK_START_GUIDE.md
@@ -0,0 +1,236 @@
+# 🚀 Jixia Academy Load Balancing System - Quick Start Guide
+
+## 📋 Prerequisites
+
+1. **RapidAPI account**: make sure you are subscribed to the following APIs
+   - Alpha Vantage
+   - Yahoo Finance 15
+   - Webull
+   - Seeking Alpha
+
+2. **Environment**: Doppler environment-variable management is configured
+   ```bash
+   doppler secrets | grep RAPIDAPI_KEY
+   ```
+
+## ⚡ 5-Minute Tour
+
+### 1. Run the full demo
+```bash
+cd /home/ben/liurenchaxin
+doppler run python demo_jixia_load_balancing.py
+```
+
+### 2. Inspect the results
+```bash
+# List the generated result files
+ls demo_results_*.json
+
+# Inspect the detailed results for AAPL
+cat demo_results_aapl.json | jq .
+```
+
+## 🎯 Core Feature Demos
+
+### Fetch data for a single immortal
+```python
+from src.jixia.engines.jixia_load_balancer import JixiaLoadBalancer
+
+# Initialize
+load_balancer = JixiaLoadBalancer(rapidapi_key)
+
+# Lü Dongbin fetches Apple stock data
+result = load_balancer.get_data_for_immortal('吕洞宾', 'stock_quote', 'AAPL')
+print(f"Price: ${result.data['price']}, source: {result.api_used}")
+```
+
+### Full Eight Immortals debate
+```python
+# Run an Eight Immortals debate
+results = load_balancer.conduct_immortal_debate('TSLA')
+
+# Inspect the load distribution
+distribution = load_balancer.get_load_distribution()
+for api, stats in distribution.items():
+    print(f"{api}: {stats['calls']} calls ({stats['percentage']:.1f}%)")
+```
+
+## 📊 Sample Output
+
+```
+🏛️ 稷下学宫八仙论道开始 - 主题: AAPL
+============================================================
+🎭 吕洞宾 正在获取 stock_quote 数据...
+ ✅ 成功从 alpha_vantage 获取数据 (响应时间: 1.33s)
+ 💰 吕洞宾: $202.38 (-2.5004%) via alpha_vantage
+
+🎭 何仙姑 正在获取 stock_quote 数据...
+ ✅ 成功从 yahoo_finance_15 获取数据 (响应时间: 1.87s)
+ 💰 何仙姑: $N/A (N/A) via yahoo_finance_15
+
+📊 负载分布统计:
+ alpha_vantage: 3 次调用 (37.5%) - 健康
+ yahoo_finance_15: 2 次调用 (25.0%) - 健康
+ webull: 3 次调用 (37.5%) - 健康
+```
+
+## 🔧 Customization
+
+### Change an immortal's API preferences
+Edit `/home/ben/liurenchaxin/src/jixia/config/immortal_api_config.json`. For example, to have Lü Dongbin quote from Webull instead (note that JSON does not allow `//` comments, so keep the file comment-free):
+
+```json
+{
+  "immortals": {
+    "吕洞宾": {
+      "preferred_apis": {
+        "stock_quote": "webull",
+        "company_overview": "alpha_vantage"
+      }
+    }
+  }
+}
+```
+
+### Tune the cache strategy
+```python
+# Change the cache TTL
+load_balancer.cache_ttl = 600  # 10-minute cache
+
+# Clear the cache
+load_balancer.cache.clear()
+```
+
+## 🚨 Troubleshooting
+
+### Common issues
+
+1. **API key error**
+   ```
+   ❌ 错误: 请设置RAPIDAPI_KEY环境变量
+   ```
+   **Fix**: make sure Doppler is configured correctly
+   ```bash
+   doppler secrets set RAPIDAPI_KEY="your_key_here"
+   ```
+
+2. **API call failure**
+   ```
+   ⚠️ alpha_vantage 不可用,尝试备用API...
+   ```
+   **Fix**: the system fails over automatically; no action is needed
+
+3. **Unexpected data format**
+   ```
+   💰 价格: $N/A
+   ```
+   **Fix**: some APIs return different shapes; the system normalizes them
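
The automatic failover behind issue 2 follows the usual pattern: try the preferred API first and walk down the fallback list on error. A standalone sketch (the real `JixiaLoadBalancer` internals may differ; the stub fetchers here are purely illustrative):

```python
def fetch_with_failover(fetchers):
    """Try each (name, fetch) pair in priority order; return the first success."""
    errors = {}
    for name, fetch in fetchers:
        try:
            return name, fetch()
        except Exception as exc:  # production code should catch narrower errors
            errors[name] = exc    # remember the failure and try the next API
    raise RuntimeError(f"all APIs failed: {list(errors)}")

def flaky_alpha_vantage():
    raise TimeoutError("alpha_vantage timed out")

api_used, data = fetch_with_failover([
    ("alpha_vantage", flaky_alpha_vantage),
    ("yahoo_finance_15", lambda: {"price": 202.38}),
])
print(api_used)  # → yahoo_finance_15
```

Collecting the per-API errors makes the final exception useful when every backend is down.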
+
+### Debug mode
+```python
+# Enable verbose logging
+import logging
+logging.basicConfig(level=logging.DEBUG)
+
+# Inspect API health status
+for api, status in load_balancer.health_checker.health_status.items():
+    print(f"{api}: {'healthy' if status['healthy'] else 'unhealthy'}")
+```
+
+## 📈 Performance Tips
+
+### 1. Cache tuning
+```python
+# Use a different cache TTL for each data type
+cache_strategies = {
+    'stock_quote': 60,         # 1 minute
+    'company_overview': 3600,  # 1 hour
+    'market_news': 1800        # 30 minutes
+}
+```
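
Those per-type TTLs can be enforced with a small cache keyed on (data type, symbol). A self-contained sketch, assuming the strategy dict above (the project's actual cache inside `JixiaLoadBalancer` may look different):

```python
import time

class TTLCache:
    """Cache with a separate time-to-live per data type."""

    def __init__(self, ttls, default_ttl=60):
        self.ttls = ttls          # e.g. {'stock_quote': 60, 'company_overview': 3600}
        self.default_ttl = default_ttl
        self._store = {}

    def get(self, data_type, key):
        entry = self._store.get((data_type, key))
        if entry is None:
            return None
        value, stored_at = entry
        if time.time() - stored_at > self.ttls.get(data_type, self.default_ttl):
            del self._store[(data_type, key)]  # expired: drop and report a miss
            return None
        return value

    def put(self, data_type, key, value):
        self._store[(data_type, key)] = (value, time.time())

cache = TTLCache({'stock_quote': 60, 'company_overview': 3600, 'market_news': 1800})
cache.put('stock_quote', 'AAPL', 202.38)
print(cache.get('stock_quote', 'AAPL'))  # → 202.38
```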
+
+### 2. Rate control
+```python
+# Throttle the request rate
+import time
+for immortal in immortals:
+    result = load_balancer.get_data_for_immortal(immortal, 'stock_quote', symbol)
+    time.sleep(0.2)  # avoid hammering the APIs
+```
+
+### 3. Batch processing
+```python
+# Fetch data for several tickers in one pass
+symbols = ['AAPL', 'TSLA', 'MSFT', 'GOOGL']
+results = {}
+for symbol in symbols:
+    results[symbol] = load_balancer.conduct_immortal_debate(symbol)
+```
+
+## 🎯 Best Practices
+
+### 1. Monitor API usage
+```python
+# Check the load distribution regularly
+distribution = load_balancer.get_load_distribution()
+print(f"Total calls: {sum(stats['calls'] for stats in distribution.values())}")
+```
+
+### 2. Use the cache wisely
+```python
+# Prefer cached data when freshness is not critical
+result = load_balancer.get_data_for_immortal('韩湘子', 'company_overview', 'AAPL')
+if result.cached:
+    print("Serving cached data; an API call was saved")
+```
+
+### 3. Error handling
+```python
+result = load_balancer.get_data_for_immortal('吕洞宾', 'stock_quote', 'AAPL')
+if not result.success:
+    print(f"Failed to fetch data: {result.error}")
+    # apply a fallback strategy here
+else:
+    # process the data normally
+    process_stock_data(result.data)
+```
+
+## 📚 Advanced Usage
+
+### Custom data normalizer
+```python
+class CustomDataNormalizer(DataNormalizer):
+    def normalize_stock_quote(self, raw_data, api_source):
+        # Custom normalization logic
+        normalized = super().normalize_stock_quote(raw_data, api_source)
+        # Add custom fields
+        normalized['custom_score'] = self.calculate_score(normalized)
+        return normalized
+
+# Install the custom normalizer
+load_balancer.data_normalizer = CustomDataNormalizer()
+```
+
+### Custom health checks
+```python
+class CustomHealthChecker(APIHealthChecker):
+    def _perform_health_check(self, api_name):
+        # Custom health-check logic,
+        # e.g. response-time or error-rate thresholds
+        pass
+
+load_balancer.health_checker = CustomHealthChecker()
+```
+
+## 🎉 Done!
+
+You now know the basics of the Jixia Academy load balancing system.
+
+### Next steps
+- 📖 Read the full docs: `README_jixia_load_balancing.md`
+- 🔧 Review the config file: `src/jixia/config/immortal_api_config.json`
+- 💻 Study the core code: `src/jixia/engines/jixia_load_balancer.py`
+- 🚀 Start building your own investment analysis system!
+
+---
+*🏛️ Jixia Academy - wise investing starts with load balancing*
\ No newline at end of file
diff --git a/README.md b/README.md
index 033eac1..019e10f 100644
--- a/README.md
+++ b/README.md
@@ -1,10 +1,13 @@
# 🏛️ Lianyaohu (炼妖壶) - Jixia Academy AI Debate System
+Note: Cloudflare AutoRAG/Vectorize is now supported as the memory (RAG) backend. See docs/guides/CLOUDFLARE_AUTORAG_INTEGRATION.md.
+
A multi-agent AI debate platform rooted in the Chinese philosophical tradition, refactored edition.
## ✨ Core Features
- **🎭 Jixia Academy Eight Immortals debate**: a multi-agent AI debate system based on the traditional Eight Immortals
+- **🧠 Vertex AI Memory Bank**: Google Cloud Memory Bank integration gives the agents persistent memory
- **🌍 Tianxia analysis**: a capital-ecosystem "mandate tree" model based on the Confucian all-under-heaven worldview
- **🔒 Secure configuration management**: unified secret and config management via Doppler
- **📊 Smart data sources**: a perpetual data engine built on 17 RapidAPI subscriptions
@@ -37,9 +40,14 @@ liurenchaxin/
### 1. Environment Setup
```bash
-# Create a virtual environment
-python -m venv venv
-source venv/bin/activate  # Linux/Mac
+# Create a virtual environment (matching the Google ADK Quickstart)
+python -m venv .venv
+# macOS/Linux
+source .venv/bin/activate
+# Windows CMD
+# .venv\Scripts\activate.bat
+# Windows PowerShell
+# .venv\Scripts\Activate.ps1

# Install dependencies
pip install -r requirements.txt
@@ -50,9 +58,21 @@ pip install -r requirements.txt
The project manages configuration securely with Doppler. Configure the following environment variables:

```bash
-# Required
+# Required (data sources)
RAPIDAPI_KEY=your_rapidapi_key
+
+# AI service key - choose at least one
+# A) OpenRouter (legacy)
OPENROUTER_API_KEY_1=your_openrouter_key
+# B) Google ADK / Gemini (new)
+GOOGLE_API_KEY=your_gemini_api_key
+# If using Vertex AI Express Mode (optional)
+GOOGLE_GENAI_USE_VERTEXAI=TRUE
+
+# Vertex AI Memory Bank settings (new feature)
+GOOGLE_CLOUD_PROJECT_ID=your-project-id
+GOOGLE_CLOUD_LOCATION=us-central1
+VERTEX_MEMORY_BANK_ENABLED=TRUE

# Optional
POSTGRES_URL=your_postgres_url
@@ -94,6 +114,9 @@ python config/doppler_config.py
# Test the Swarm debate (optional)
python src/jixia/debates/swarm_debate.py
+
+# Test Vertex AI Memory Bank (new feature)
+python tests/test_vertex_memory_bank.py
```
## 🎭 Jixia Academy: Eight Immortals Debate
@@ -181,4 +204,24 @@ python src/jixia/debates/swarm_debate.py
---
-**炼妖壶 - Let AI debate illuminate investment wisdom** 🏛️✨
\ No newline at end of file
+**炼妖壶 - Let AI debate illuminate investment wisdom** 🏛️✨
+
+## 🧪 ADK Development & Debugging (optional)
+
+If you switch to Google ADK:
+
+```bash
+# Install ADK (either one)
+pip install google-adk
+# Or install the latest development build
+pip install git+https://github.com/google/adk-python.git@main
+
+# Launch the ADK dev UI (run from the parent directory that contains your agent package)
+adk web
+# Or run from the command line
+adk run multi_tool_agent
+# Or start the API server
+adk api_server
+```
+
+> If you hit `_make_subprocess_transport NotImplementedError`, try `adk web --no-reload`.
\ No newline at end of file
diff --git a/RELEASE_v2.0.0.md b/RELEASE_v2.0.0.md
new file mode 100644
index 0000000..9958715
--- /dev/null
+++ b/RELEASE_v2.0.0.md
@@ -0,0 +1,197 @@
+# 🚀 Taigong Xinyi v2.0.0 - Qi-Cheng-Zhuan-He Debate System
+
+## 📅 Release Date
+**August 10, 2025**
+
+## 🎯 Overview
+
+This release delivers the **qi-cheng-zhuan-he (opening-development-turn-conclusion) debate system**, a major milestone for the Taigong Xinyi project. The system has grown from a simple group chat into a multi-stage architecture with a complete debate flow.
+
+## ✨ New Features
+
+### 🎭 Qi-Cheng-Zhuan-He Debate Architecture
+
+#### **Qi (opening): Eight Immortals in Earlier Heaven bagua order**
+- Opening statements follow the Earlier Heaven (xiantian) bagua sequence
+- Each immortal argues from the perspective of their own trigram position
+- Establishes a multi-dimensional foundation for the debate
+
+#### **Cheng (development): wild-goose formation**
+- Affirmative speakers 1-4 and negative speakers 1-4 follow on in wild-goose formation
+- Broad elaboration, interleaved with pointed jabs at the other side
+- Deepens one's own case while undermining the opposition
+
+#### **Zhuan (turn): free debate (36 handoffs)**
+- Free debate with 36 transfers of the floor
+- A priority algorithm decides the speaking order
+- Fierce exchanges as both sides fight for the floor
+
+#### **He (conclusion): alternating summaries**
+- Alternating order: neg1→aff1→neg2→aff2→neg3→aff3→neg4→aff4
+- Systematic summaries and final arguments
+- Each side makes its closing bid for victory
+
+### 🧠 Memory Bank System
+
+#### **Persona continuity**
+- Long-term memory built on Google GenAI
+- Stability and consistency of the Eight Immortals personas
+- Tracking of opinion evolution and decision history
+
+#### **Verified memory functionality**
+- ✅ API calls succeed: the Google GenAI API works
+- ✅ Memory persistence: complete memory files are generated
+- ✅ Persona consistency: Lü Dongbin and He Xiangu each retain their own traits
+- ✅ Memory continuity: every immortal remembers past conversations
+
+## 🏗️ Architecture Upgrades
+
+### **Multi-stage state management**
+```python
+from enum import Enum
+
+class DebateStage(Enum):
+    QI = "起"      # opening: Eight Immortals in Earlier Heaven bagua order
+    CHENG = "承"   # development: wild-goose formation
+    ZHUAN = "转"   # turn: free debate (36 handoffs)
+    HE = "合"      # conclusion: alternating summaries
+```
+
+### **Priority algorithm framework**
+- Rebuttal urgency weight: 30%
+- Argument strength weight: 25%
+- Time pressure weight: 20%
+- Audience reaction weight: 15%
+- Strategic need weight: 10%
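
With those weights, speaker selection reduces to a weighted sum over normalized factor scores. A sketch of the idea (function and key names are illustrative, not the project's actual `PriorityAlgorithm` API):

```python
WEIGHTS = {
    "rebuttal_urgency": 0.30,
    "argument_strength": 0.25,
    "time_pressure": 0.20,
    "audience_reaction": 0.15,
    "strategic_need": 0.10,
}

def priority_score(factors):
    """Weighted sum of factor scores, each assumed normalized to [0, 1]."""
    return sum(weight * factors.get(name, 0.0) for name, weight in WEIGHTS.items())

def next_speaker(candidates):
    """Pick the candidate whose factor profile scores highest."""
    return max(candidates, key=lambda name: priority_score(candidates[name]))

speaker = next_speaker({
    "吕洞宾": {"argument_strength": 1.0},  # score 0.25
    "铁拐李": {"rebuttal_urgency": 1.0},   # score 0.30
})
print(speaker)  # → 铁拐李
```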
+
+### **Memory system architecture**
+```python
+class DebateMemorySystem:
+    """Responsibilities (sketch):
+    - per-speaker memory storage
+    - debate history tracking
+    - persona trait maintenance
+    - opinion evolution analysis
+    """
+```
+
+## 📊 Performance
+
+### **Debate system**
+- **Stage transitions**: millisecond-level state switches
+- **Speaker selection**: real-time priority computation
+- **Memory writes**: asynchronous memory updates
+- **State persistence**: state saved as JSON
+
+### **Memory Bank**
+- **API response time**: 1-3 seconds
+- **Memory capacity**: supports long-term history
+- **Persona consistency**: above 85% persona stability
+- **Memory retrieval**: millisecond-level recall of relevant memories
+
+## 🔧 Implementation
+
+### **Core components**
+1. **QiChengZhuanHeDebate**: core of the debate system
+2. **PriorityAlgorithm**: priority-algorithm implementation
+3. **DebateMemorySystem**: debate memory system
+4. **MemoryBankTest**: memory-system test harness
+
+### **Dependency upgrades**
+- Google GenAI 1.29.0
+- Async processing support
+- JSON state persistence
+- Enum-based state management
+
+## 🎯 Usage Examples
+
+### **Basic debate flow**
+```python
+# Create the debate system
+debate = QiChengZhuanHeDebate()
+
+# Get the current speaker
+speaker = debate.get_current_speaker()
+
+# Record a speech
+debate.record_speech(speaker, "speech text")
+
+# Advance to the next stage
+debate.advance_stage()
+
+# Persist the state
+debate.save_state()
+```
+
+### **Memory Bank usage**
+```python
+# Create a memory test harness
+test = MemoryBankTest()
+
+# Chat with an immortal
+response = test.chat_with_immortal("吕洞宾", "your question")
+
+# Persist memories
+test.save_memories()
+```
+
+## 🚀 Roadmap
+
+### **Short term (v2.1.0)**
+- [ ] Refine the priority algorithm
+- [ ] Coordinate multiple group chats
+- [ ] Add a human-intervention mechanism
+- [ ] Improve debate flow control
+
+### **Mid term (v2.2.0)**
+- [ ] Integrate Taigong Sanshi forecasting
+- [ ] Implement Meihua Xinyi intuition
+- [ ] Quantify the Eight Immortals personas further
+- [ ] Add an audience feedback system
+
+### **Long term (v3.0.0)**
+- [ ] Complete forecasting system
+- [ ] Commercial deployment
+- [ ] Multi-language support
+- [ ] Mobile app
+
+## 🐛 Known Issues
+
+1. **Priority algorithm**: currently a simplified version; needs further tuning
+2. **Multi-chat coordination**: the full group-chat network is not implemented yet
+3. **Human intervention**: the intervention mechanism needs more work
+4. **Performance**: large-scale debates need optimization
+
+## 📝 Changelog
+
+### **v2.0.0 (2025-08-10)**
+- ✨ Added the qi-cheng-zhuan-he debate system
+- ✨ Added the Memory Bank system
+- ✨ Added the priority-algorithm framework
+- ✨ Added state persistence
+- 🔧 Upgraded the Google GenAI integration
+- 🔧 Improved the Eight Immortals persona system
+- 📚 Expanded the technical documentation
+
+### **v1.x.x (earlier versions)**
+- Basic Eight Immortals debate system
+- OpenRouter API integration
+- Streamlit UI
+- RapidAPI data sources
+
+## 🙏 Acknowledgements
+
+Thanks to everyone who has contributed to the Taigong Xinyi project, in particular:
+
+- The Google GenAI team for powerful AI capabilities
+- The open-source community for support and feedback
+- The project team for their hard work
+
+## 📞 Contact
+
+For questions or suggestions, reach out via:
+
+- GitHub Issues: [project URL]
+- Email: [contact email]
+- Docs: [docs URL]
+
+---
+
+**Taigong Xinyi v2.0.0** - smarter AI debates, stronger predictions!
+
diff --git a/VERSION b/VERSION
new file mode 100644
index 0000000..46b105a
--- /dev/null
+++ b/VERSION
@@ -0,0 +1 @@
+v2.0.0
diff --git a/config/doppler_config.py b/config/doppler_config.py
index 823a5df..22e41a8 100644
--- a/config/doppler_config.py
+++ b/config/doppler_config.py
@@ -50,6 +50,52 @@ def get_openrouter_key() -> str:
"""
return get_secret('OPENROUTER_API_KEY_1')
+def get_google_api_key() -> str:
+    """
+    Get the Google API key (used for Gemini/ADK)
+
+    Returns:
+        The Google API key
+
+    Raises:
+        ValueError: if the key is not found
+    """
+    return get_secret('GOOGLE_API_KEY')
+
+def get_google_genai_config() -> Dict[str, str]:
+    """
+    Get the full Google GenAI configuration
+
+    Returns:
+        Google GenAI configuration dict
+    """
+    return {
+        'api_key': get_secret('GOOGLE_API_KEY', ''),
+        'use_vertex_ai': get_secret('GOOGLE_GENAI_USE_VERTEXAI', 'FALSE'),
+        'project_id': get_secret('GOOGLE_CLOUD_PROJECT_ID', ''),
+        'location': get_secret('GOOGLE_CLOUD_LOCATION', 'us-central1'),
+        'memory_bank_enabled': get_secret('VERTEX_MEMORY_BANK_ENABLED', 'TRUE'),
+        'service_account_key': get_secret('GOOGLE_SERVICE_ACCOUNT_KEY', '')
+    }
+
+def get_cloudflare_config() -> Dict[str, str]:
+    """
+    Get the Cloudflare configuration
+
+    Returns:
+        Cloudflare configuration dict
+    """
+    return {
+        # Secrets come from Doppler
+        'account_id': get_secret('CLOUDFLARE_ACCOUNT_ID', ''),
+        'api_token': get_secret('CLOUDFLARE_API_TOKEN', ''),
+
+        # Non-sensitive settings, hard-coded in plain text
+        'vectorize_index': 'autorag-shy-cherry-f1fb',
+        'embed_model': '@cf/baai/bge-m3',
+        'autorag_domain': 'autorag.seekkey.tech'
+    }
+
def get_database_config() -> Dict[str, str]:
"""
Get the database configuration
@@ -64,17 +110,73 @@ def get_database_config() -> Dict[str, str]:
'zilliz_token': get_secret('ZILLIZ_TOKEN', '')
}
-def validate_config() -> bool:
+def validate_config(mode: str = "hybrid") -> bool:
"""
Validate that the required configuration exists
+ Args:
+        mode: validation mode ("openrouter", "google_adk", "hybrid")
+
Returns:
Whether the configuration is valid
"""
- required_keys = [
- 'RAPIDAPI_KEY',
- 'OPENROUTER_API_KEY_1'
- ]
+    print(f"🔧 当前模式: {mode}")
+
+    # Base required configuration
+    base_required = ['RAPIDAPI_KEY']
+
+    # Mode-specific configuration
+    if mode == "openrouter":
+        required_keys = base_required + ['OPENROUTER_API_KEY_1']
+        # Validate the OpenRouter configuration
+        openrouter_key = get_secret('OPENROUTER_API_KEY_1', '')
+        if not openrouter_key:
+            print("❌ OpenRouter API Key 未配置")
+            return False
+        print("✅ OpenRouter 配置验证通过")
+
+    elif mode == "google_adk":
+        required_keys = base_required + ['GOOGLE_API_KEY']
+        # Validate the Google ADK configuration
+        google_key = get_secret('GOOGLE_API_KEY', '')
+        if not google_key:
+            print("❌ Google API Key 未配置")
+            print("请访问 https://aistudio.google.com/ 获取 API 密钥")
+            print("然后运行: doppler secrets set GOOGLE_API_KEY=your_key")
+            return False
+        print(f"✅ Google ADK 配置验证通过 (密钥长度: {len(google_key)} 字符)")
+
+        # Show the Google GenAI configuration
+        genai_config = get_google_genai_config()
+        print(f"📱 Google GenAI 配置:")
+        print(f"   - API Key: 已配置")
+        print(f"   - Use Vertex AI: {genai_config.get('use_vertex_ai', 'FALSE')}")
+        if genai_config.get('project_id'):
+            print(f"   - Project ID: {genai_config['project_id']}")
+        if genai_config.get('location'):
+            print(f"   - Location: {genai_config['location']}")
+
+    else:  # hybrid mode
+        required_keys = base_required
+        # Require at least one AI API key; use get_secret so that
+        # Doppler-only secrets are found too (os.getenv would miss them)
+        ai_keys = ['OPENROUTER_API_KEY_1', 'GOOGLE_API_KEY']
+        if not any(get_secret(key, '') for key in ai_keys):
+            print("❌ 需要至少配置一个AI API密钥:")
+            print("   - OPENROUTER_API_KEY_1 (OpenRouter模式)")
+            print("   - GOOGLE_API_KEY (Google ADK模式)")
+            return False
+
+        # Validate the hybrid configuration
+        openrouter_key = get_secret('OPENROUTER_API_KEY_1', '')
+        google_key = get_secret('GOOGLE_API_KEY', '')
+
+        available_services = []
+        if openrouter_key:
+            available_services.append("OpenRouter")
+        if google_key:
+            available_services.append("Google ADK")
+
+        print(f"✅ 混合模式配置验证通过,可用服务: {', '.join(available_services)}")
missing_keys = []
for key in required_keys:
@@ -86,7 +188,20 @@ def validate_config() -> bool:
print("请确保已正确配置Doppler或环境变量")
return False
+    # Show configuration status
print("✅ 配置验证通过")
+ print(f"📋 当前模式: {mode}")
+
+    # Show the available AI services (via get_secret, consistent with the checks above)
+    ai_services = []
+    if get_secret('OPENROUTER_API_KEY_1', ''):
+        ai_services.append("OpenRouter")
+    if get_secret('GOOGLE_API_KEY', ''):
+        ai_services.append("Google ADK")
+
+ if ai_services:
+ print(f"🤖 可用AI服务: {', '.join(ai_services)}")
+
return True
if __name__ == "__main__":
diff --git a/design/250810.md b/design/250810.md
new file mode 100644
index 0000000..3bdb6d8
--- /dev/null
+++ b/design/250810.md
@@ -0,0 +1,116 @@
+The following is a requirements document for the "Eight Immortals multi-agent debate system + Mastodon promotion + live Streamlit display", built on the Google ADK free tier and ready to hand to Kiro for development.
+
+---
+
+# Requirements: Eight Immortals Multi-Agent Debate System on Google ADK
+
+---
+
+## 1. Background and Goals
+
+* **Background**: build a multi-agent debate system in which eight personified roles (the "Eight Immortals") hold a multi-party debate whenever a sudden market event occurs, with the debate shown live in a Streamlit frontend. The immortals also post casual updates on Mastodon to attract users and drive traffic.
+* **Goals**:
+
+  * Use Google ADK (within the free tier) to build multi-agent long-term memory (Memory Bank) and retrieval-augmented generation (RAG)
+  * Give each of the eight roles an independent persona and memory space, ensuring stability and diversity
+  * Monitor market data in real time to trigger emergency debates
+  * Show the debate, market data, and conclusions live in Streamlit
+  * Run Mastodon accounts that post light social content as the Eight Immortals, driving traffic and interaction
+
+---
+
+## 2. Core Functional Requirements
+
+### 1. Multi-agent architecture
+
+* 8 independent agents representing different market roles (trader, economist, central-bank adviser, etc.)
+* Each agent has its own Memory Bank (Google ADK Memory Bank) for storing and retrieving long-term memory and knowledge
+* Agents retrieve relevant information from their own Memory Bank and combine it with the current context for dialogue and opinion generation
+* All generation and retrieval goes through the Google GenAI API, using the free tier
+* Agents interact asynchronously over multiple rounds to form the debate flow
+
+### 2. Memory management
+
+* Memory Bank supports multimodal storage (text, structured data, etc.)
+* Supports long-term memory (past debates, prediction results, personal opinions) and short-term context (the current session)
+* Periodically sync a local vector store (e.g. Milvus/Qdrant) with the Google Memory Bank to speed up retrieval
+* Implement a content-hash-based cache to avoid duplicate calls
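
The content-hash cache in the last bullet can be as simple as hashing the canonicalized request and memoizing the result. A sketch, where the `generate` callable stands in for the billable GenAI call:

```python
import hashlib
import json

_cache = {}

def cache_key(prompt, params):
    """Stable key derived from the request content."""
    payload = json.dumps({"prompt": prompt, "params": params}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def generate_with_cache(prompt, params, generate):
    key = cache_key(prompt, params)
    if key not in _cache:              # only call the API on a cache miss
        _cache[key] = generate(prompt)
    return _cache[key]

calls = []
answer = lambda p: (calls.append(p), f"answer to {p}")[1]
first = generate_with_cache("hello", {"temp": 0.2}, answer)
second = generate_with_cache("hello", {"temp": 0.2}, answer)
print(len(calls))  # → 1  (the duplicate request hit the cache)
```

`json.dumps(..., sort_keys=True)` is what makes the key stable: two requests with the same content always hash identically regardless of dict ordering.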
+
+### 3. Market-data trigger module
+
+* Monitor key market indicators (Nasdaq, S&P, gold, crypto, etc.) and financial news in real time
+* Trigger rules (e.g. Nasdaq drops more than 10%) start an emergency debate session
+* Support custom trigger events and alert configuration
+
+### 4. Streamlit frontend
+
+* Live market charts
+* Scrolling debate feed (agents speak in turn)
+* Session summary and opinion cards
+* User comments and interaction area (optional)
+
+### 5. Mastodon promotion module
+
+* Each agent has its own Mastodon account
+* Automatically posts light, personified market updates, opinion snippets, and session announcements
+* Listens for Mastodon events and triggers system responses via a Pub/Sub mechanism
+* Collects user replies and interactions
+---
+
+## 3. Technical Details
+
+### 1. Platform and tools
+
+* **Core API**: Google Agent Development Kit (ADK), using the Generative AI free tier
+* **Memory**: Google Memory Bank + a local Milvus/Qdrant vector store
+* **Call scheduling**: LiteLLM or a custom scheduler for multi-API load balancing and fault tolerance
+* **Frontend**: Streamlit (Python)
+* **Social**: Mastodon API integration; Python scripts for auto-posting and listening
+* **Cache**: Redis or in-memory cache keyed on a request hash
+
+### 2. Persona design
+
+* Each agent has a distinct prompt template with backstory, tone, and knowledge preferences
+* Uses RAG over its own memory to generate persona-consistent answers
+* Supports dynamic memory updates, including debate history and new information
+
+### 3. Data flow and interfaces
+
+* Market data → trigger module → debate flow starts
+* Trigger module → Mastodon announcement
+* Agents interact through the ADK interface; results are shown in Streamlit
+* Mastodon events trigger follow-up system behavior
+* The cache layer intercepts duplicate requests to improve efficiency
+
+---
+
+## 4. Performance and Scalability
+
+* Supports 8 agents in concurrent multi-round dialogue, with response times kept to 1-3 seconds
+* Memory and storage support at least six months of accumulated debate history
+* Extensible to more agents and multimodal input (images, audio)
+* Supports mixing in additional APIs later (OpenRouter, ModelScope, etc.)
+
+---
+
+## 5. Operations and Monitoring
+
+* Monitor API quota and usage in real time to avoid overage charges
+* Alert on system anomalies (failed calls, dropped connections, etc.)
+* Log debate content and user interactions for later analysis and tuning
+* Back up the Memory Bank and local database regularly
+
+---
+
+## 6. Deliverables
+
+* Complete multi-agent debate backend, including ADK integration and scheduling logic
+* Streamlit frontend code and deployment plan
+* Mastodon auto-posting and listening scripts
+* Deployment docs and an API-usage monitoring plan
+* The persona prompt-template collection and the memory-management design doc
+
+---
+
diff --git a/docs/AUTORAG_INTEGRATION_PROMPT.md b/docs/AUTORAG_INTEGRATION_PROMPT.md
new file mode 100644
index 0000000..e7f9d40
--- /dev/null
+++ b/docs/AUTORAG_INTEGRATION_PROMPT.md
@@ -0,0 +1,184 @@
+# Jixia Academy AI Debate System - AutoRAG Integration Prompt
+
+## 🏛️ System Overview
+
+Jixia Academy is an AI debate system grounded in traditional Chinese philosophy, recreating the scholarly debate environment of the ancient Jixia Academy. The system has eight immortal agents, each with a distinct investment philosophy and debate style, who rely on the AutoRAG service for relevant historical wisdom and knowledge.
+
+## 🎭 The Eight Immortal Agents
+
+### Tieguai Li (铁拐李, Xun trigram) - contrarian investing master
+- **Investment philosophy**: contrarian thinking; challenge the market consensus
+- **Memory focus**: market anomalies, contrarian cases, risk warnings, reverse strategies
+- **Debate style**: questions mainstream views and argues the opposite side
+
+### Lü Dongbin (吕洞宾, Qian trigram) - rational analyst
+- **Investment philosophy**: technical-analysis expert; data-driven decisions
+- **Memory focus**: technical analysis, data insights, logical reasoning, rational decisions
+- **Debate style**: rigorous analysis grounded in data and logic
+
+### He Xiangu (何仙姑, Kun trigram) - intuitive observer
+- **Investment philosophy**: risk-control expert; emotional intelligence
+- **Memory focus**: market sentiment, intuitive judgment, emotional factors, insight into human nature
+- **Debate style**: analysis based on intuition and emotional wisdom
+
+### Zhang Guolao (张果老, Dui trigram) - historical sage
+- **Investment philosophy**: historical-data analyst; experience-driven
+- **Memory focus**: historical cases, long-term trends, cyclical patterns, lessons learned
+- **Debate style**: cites historical cases and long-term trends
+
+### Zhongli Quan (汉钟离, Li trigram) - balancer and coordinator
+- **Investment philosophy**: hot-topic tracker; balanced thinking
+- **Memory focus**: balanced strategies, holistic analysis, coordination plans, prudent advice
+- **Debate style**: seeks the balance point among competing views
+
+### Lan Caihe (蓝采和, Kan trigram) - innovative thinker
+- **Investment philosophy**: spotter of high-potential stocks; innovation-oriented
+- **Memory focus**: innovation opportunities, emerging trends, potential discovery, flexible strategies
+- **Debate style**: surfaces emerging opportunities and novel angles
+
+### Han Xiangzi (韩湘子, Gen trigram) - aesthetic perceiver
+- **Investment philosophy**: emerging-asset expert; aesthetic perspective
+- **Memory focus**: aesthetic trends, creative insight, intuitive analysis, artistic perspective
+- **Debate style**: analyzes the market from an aesthetic and artistic angle
+
+### Cao Guojiu (曹国舅, Zhen trigram) - pragmatic executor
+- **Investment philosophy**: institutional-perspective analyst; execution-oriented
+- **Memory focus**: execution strategy, institutional flows, practical operations, professional analysis
+- **Debate style**: focuses on actual execution and institutional behavior
+
+## 🔍 AutoRAG Query Requirements
+
+### Query types
+1. **Historical wisdom retrieval**: find ancient wisdom and philosophy relevant to the debate topic
+2. **Investment case search**: find historical investment successes and failures
+3. **Market cycle analysis**: find classical records on market cycles and economic patterns
+4. **Risk management wisdom**: search ancient thought on risk control and prudent investing
+5. **Human-nature insight**: find ancient observations on human nature, emotion, and crowd psychology
+
+### Expected AutoRAG interface
+
+#### 1. Embedding endpoint
+```
+POST /embed
+{
+  "text": "text to embed"
+}
+
+Response:
+{
+  "embedding": [0.1, 0.2, ...],  // 1024-dimensional BGE-M3 embedding
+  "model": "bge-m3"
+}
+```
+
+#### 2. Memory storage endpoint
+```
+POST /upsert
+{
+  "vectors": [
+    {
+      "id": "memory_uuid",
+      "values": [0.1, 0.2, ...],
+      "metadata": {
+        "agent_name": "tieguaili",
+        "chinese_name": "铁拐李",
+        "content": "memory content",
+        "memory_type": "knowledge|conversation|preference|strategy",
+        "debate_topic": "debate topic",
+        "timestamp": "2024-01-01T00:00:00Z"
+      }
+    }
+  ],
+  "namespace": "agent_name"  // per-agent namespace
+}
+
+Response:
+{
+  "success": true,
+  "inserted_count": 1
+}
+```
+
+#### 3. Memory retrieval endpoint
+```
+POST /query
+{
+  "vector": [0.1, 0.2, ...],  // query vector
+  "topK": 10,                 // number of results
+  "namespace": "tieguaili",   // per-agent namespace
+  "filter": {                 // optional filter
+    "memory_type": "knowledge"
+  }
+}
+
+Response:
+{
+  "matches": [
+    {
+      "id": "memory_uuid",
+      "score": 0.95,
+      "metadata": {
+        "content": "relevant memory content",
+        "agent_name": "tieguaili",
+        "memory_type": "knowledge",
+        "debate_topic": "investment philosophy"
+      }
+    }
+  ]
+}
+```
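
Assuming the expected interface above, a client can assemble and sanity-check the /query body before sending it. A minimal sketch (the helper name and the dimension check are illustrative additions, not part of the interface itself):

```python
def build_query(vector, namespace, top_k=10, memory_type=None):
    """Assemble a /query request body for the interface sketched above."""
    if len(vector) != 1024:  # BGE-M3 embeddings are 1024-dimensional
        raise ValueError(f"expected a 1024-dim vector, got {len(vector)}")
    body = {"vector": list(vector), "topK": top_k, "namespace": namespace}
    if memory_type is not None:
        body["filter"] = {"memory_type": memory_type}
    return body

body = build_query([0.0] * 1024, namespace="tieguaili", memory_type="knowledge")
print(body["filter"])  # → {'memory_type': 'knowledge'}
```

Validating the vector dimension client-side fails fast instead of waiting for the index to reject a mismatched embedding.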
+
+## 📝 Usage Scenarios
+
+### Scenario 1: knowledge prep before a debate
+```
+Debate topic: "Investment value of NVIDIA stock"
+
+Tieguai Li queries: "historical tech-stock bubbles and their lessons"
+Zhang Guolao queries: "ancient wisdom on investing in new technologies"
+He Xiangu queries: "risk-control thinking during market manias"
+```
+
+### Scenario 2: supporting a point mid-debate
+```
+Current claim: "AI development carries hype risk"
+
+Related query: "ancient rational thinking about technological change"
+Expected return: relevant classical wisdom that supports or rebuts the claim
+```
+
+### Scenario 3: post-debate takeaways
+```
+Debate outcome: Tieguai Li's contrarian view won support
+
+Stored memory: "In the AI-stock discussion, contrarian thinking helped identify valuation risk"
+Memory type: strategy
+```
+
+## 🎯 Integration Goals
+
+1. **Smart memory**: each immortal has an independent memory space and can learn and accumulate experience
+2. **Cultural fusion**: combine ancient wisdom with modern investment analysis
+3. **Personalization**: differentiated knowledge support tailored to each immortal
+4. **Continuous learning**: the knowledge base keeps growing through debates
+
+## 🔧 Technical Requirements
+
+- **Vector dimension**: 1024 (BGE-M3 model)
+- **Namespaces**: per-agent data isolation
+- **Metadata**: rich metadata for filtering and classification
+- **Performance**: low-latency retrieval to support live debates
+- **Extensibility**: room for more agents and memory types later
+
+## 🌟 Expected Outcome
+
+With AutoRAG integrated, Jixia Academy will deliver:
+- 🧠 **Wisdom passed down**: ancient philosophy guiding modern investment decisions
+- 🎭 **Consistent personas**: each immortal keeps a distinct personality and viewpoint
+- 📚 **Knowledge accumulation**: continuous learning and distilled experience
+- 🔄 **Dynamic debate**: deeper discussion grounded in historical memory
+- 🎯 **Sharper analysis**: investment insight informed by traditional wisdom
+
+---
+
+**Let AI debate illuminate investment wisdom; let ancient wisdom guide modern decisions** 🏛️✨
\ No newline at end of file
diff --git a/docs/GITHUB_PAGES_PUBLISH_PLAN.md b/docs/GITHUB_PAGES_PUBLISH_PLAN.md
new file mode 100644
index 0000000..4547721
--- /dev/null
+++ b/docs/GITHUB_PAGES_PUBLISH_PLAN.md
@@ -0,0 +1,218 @@
+# 🌐 GitHub Pages Publishing Plan
+
+## 📋 Content Plan
+
+### **🎯 Goals**
+- Showcase the project's features
+- Provide user-friendly documentation
+- Attract potential users and contributors
+- Maintain a professional image
+
+### **✅ Content Suitable for Publishing**
+
+#### **1. Project homepage (repo root)**
+```
+/
+├── README.md              # Main project introduction
+├── RELEASE_v2.0.0.md      # Release notes
+├── QUICK_START_GUIDE.md   # Quick start guide
+├── VERSION                # Version number
+├── requirements.txt       # Dependency list
+└── .gitignore             # Git ignore file
+```
+
+#### **2. User documentation (docs/)**
+```
+docs/
+├── index.md                        # Docs homepage
+├── guides/                         # User guides
+│   ├── quick-start.md              # Quick start
+│   ├── installation.md             # Installation
+│   ├── configuration.md            # Configuration
+│   ├── cloudflare-integration.md   # Cloudflare integration
+│   ├── google-adk-migration.md     # Google ADK migration
+│   └── load-balancing.md           # Load balancing guide
+├── features/                       # Features
+│   ├── debate-system.md            # Debate system
+│   ├── memory-bank.md              # Memory system
+│   ├── eight-immortals.md          # Eight Immortals system
+│   └── tianxia-analysis.md         # Tianxia analysis
+├── api/                            # API docs
+│   ├── rapidapi-setup.md           # RapidAPI setup
+│   └── vertex-memory-bank.md       # Memory Bank API
+└── examples/                       # Example code
+    ├── basic-usage.md              # Basic usage
+    └── advanced-features.md        # Advanced features
+```
+
+#### **3. Design documents (design/)**
+```
+design/
+├── overview.md        # Project overview
+├── architecture.md    # System architecture
+├── debate-system.md   # Debate system design
+└── roadmap.md         # Roadmap
+```
+
+#### **4. Demos and examples**
+```
+examples/
+├── basic-debate.md         # Basic debate example
+├── memory-bank-demo.md     # Memory system demo
+└── load-balancing-demo.md  # Load balancing demo
+```
+
+### **🔒 Content That Stays in internal/**
+
+#### **1. Internal development docs**
+- Development plans and roadmaps
+- Implementation details
+- Internal strategy documents
+- Code review records
+
+#### **2. Sensitive information**
+- API key configuration
+- Internal analysis reports
+- Migration plan details
+- Historical documents
+
+#### **3. Configuration files**
+- Environment configuration
+- Internal scripts
+- AI assistant configuration
+
+## 🚀 Publishing Steps
+
+### **Phase 1: content preparation**
+1. **Polish README.md**
+   - Add project badges
+   - Flesh out the feature overview
+   - Add screenshots and demos
+   - Improve the install instructions
+
+2. **Create the docs homepage**
+   - Design the docs structure
+   - Create the navigation menu
+   - Add search
+
+3. **Organize the user guides**
+   - Unify the document format
+   - Add code examples
+   - Complete the configuration docs
+
+### **Phase 2: GitHub Pages configuration**
+1. **Enable GitHub Pages**
+   ```bash
+   # Enable GitHub Pages in the repository settings
+   # Choose the docs/ folder as the source
+   ```
+
+2. **Configure a Jekyll theme**
+   ```yaml
+   # _config.yml
+   title: 太公心易 - 稷下学宫AI辩论系统
+   description: 基于中国哲学传统的多AI智能体辩论平台
+   theme: jekyll-theme-cayman
+   ```
+
+3. **Create the navigation structure**
+   ```markdown
+   # docs/index.md
+   ---
+   layout: default
+   title: 太公心易文档
+   ---
+   ```
+
+### **Phase 3: publishing**
+1. **Publish the core docs**
+   - Project introduction
+   - Quick start guide
+   - Feature overview
+
+2. **Publish the user guides**
+   - Installation and configuration
+   - Tutorials
+   - Example code
+
+3. **Publish the design docs**
+   - System architecture
+   - Technical design
+   - Roadmap
+
+## 📊 Expected Results
+
+### **User navigation paths**
+```
+GitHub Pages homepage
+├── Project intro → README.md
+├── Quick start  → QUICK_START_GUIDE.md
+├── User guides  → docs/guides/
+├── Features     → docs/features/
+├── API docs     → docs/api/
+└── Examples     → docs/examples/
+```
+
+### **SEO optimization**
+- Add meta tags
+- Optimize titles and descriptions
+- Add keywords
+- Create a sitemap
+
+### **User experience**
+- Responsive design
+- Fast loading
+- Clear navigation
+- Search
+
+## 🔧 Technical Implementation
+
+### **GitHub Pages workflow**
+```yaml
+# .github/workflows/pages.yml
+name: Deploy to GitHub Pages
+on:
+  push:
+    branches: [ main ]
+  pull_request:
+    branches: [ main ]
+
+jobs:
+  deploy:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v2
+      - name: Deploy to GitHub Pages
+        uses: peaceiris/actions-gh-pages@v3
+        with:
+          github_token: ${{ secrets.GITHUB_TOKEN }}
+          publish_dir: ./docs
+```
+
+### **Document structure**
+- Use Markdown
+- Add tables of contents
+- Consistent code highlighting
+- Optimize image display
+
+## 📈 Maintenance Plan
+
+### **Regular updates**
+- Update feature docs monthly
+- Update architecture docs quarterly
+- Publish release notes promptly
+
+### **User feedback**
+- Collect user questions
+- Update the FAQ
+- Improve the docs
+
+### **Version sync**
+- Keep the docs in sync with the code
+- Publish notes for each new release
+- Maintain version history
+
+---
+
+**Status**: 🔄 Planned
+**Target completion**: end of August 2025
diff --git a/docs/VERTEX_MEMORY_BANK_SETUP.md b/docs/VERTEX_MEMORY_BANK_SETUP.md
new file mode 100644
index 0000000..17b6a67
--- /dev/null
+++ b/docs/VERTEX_MEMORY_BANK_SETUP.md
@@ -0,0 +1,270 @@
+# Vertex AI Memory Bank Setup Guide
+
+## 🏛️ Jixia Academy Memory Bank Integration
+
+This document describes how to configure and use Vertex AI Memory Bank for the Jixia Academy AI debate system.
+
+## 📋 Prerequisites
+
+### 1. Google Cloud project setup
+- A valid Google Cloud project
+- Vertex AI API enabled
+- Appropriate IAM permissions
+
+### 2. Required dependencies
+```bash
+pip install "google-cloud-aiplatform>=1.38.0"
+pip install google-adk  # or the development build
+```
+
+### 3. Environment variables
+Set the following variables in Doppler or the local environment:
+
+```bash
+# Required
+GOOGLE_API_KEY=your_gemini_api_key
+GOOGLE_CLOUD_PROJECT_ID=your-project-id
+
+# Optional
+GOOGLE_CLOUD_LOCATION=us-central1                         # default region
+VERTEX_MEMORY_BANK_ENABLED=TRUE                           # enable the memory bank
+GOOGLE_SERVICE_ACCOUNT_KEY=path/to/service-account.json   # service account key
+```
+
+## 🚀 Quick Start
+
+### 1. Verify the configuration
+```bash
+# Verify the Google ADK configuration
+python config/doppler_config.py
+
+# Test the Memory Bank connection
+python tests/test_vertex_memory_bank.py
+```
+
+### 2. Initialize the Eight Immortals memory banks
+```python
+from src.jixia.memory.vertex_memory_bank import initialize_baxian_memory_banks
+
+# Initialize memory banks for all eight immortals
+memory_bank = await initialize_baxian_memory_banks(
+    project_id="your-project-id",
+    location="us-central1"
+)
+```
+
+### 3. Create memory-enhanced agents
+```python
+from src.jixia.agents.memory_enhanced_agent import create_memory_enhanced_council
+
+# Create the memory-enhanced Eight Immortals council
+council = await create_memory_enhanced_council()
+
+# Run a memory-enhanced debate
+result = await council.conduct_memory_debate(
+    topic="NVIDIA stock investment analysis",
+    participants=["tieguaili", "lvdongbin", "hexiangu"],
+    rounds=3
+)
+```
+
+## 🎭 Eight Immortals Memory Traits
+
+Each immortal has a distinct memory focus and learning pattern:
+
+### Tieguai Li (contrarian investing master)
+- **Memory focus**: market anomalies, contrarian cases, risk warnings, reverse strategies
+- **Learning pattern**: watches the opposite side of the market consensus; remembers historical contrarian wins
+
+### Lü Dongbin (rational analyst)
+- **Memory focus**: technical analysis, data insights, logical reasoning, rational decisions
+- **Learning pattern**: rigorous data-and-logic analysis; remembers analysis frameworks that worked
+
+### He Xiangu (intuitive observer)
+- **Memory focus**: market sentiment, intuitive judgment, emotional factors, insight into human nature
+- **Learning pattern**: tracks shifts in market sentiment; remembers emotion-driven market events
+
+### Zhang Guolao (historical sage)
+- **Memory focus**: historical cases, long-term trends, cyclical patterns, lessons learned
+- **Learning pattern**: learns from history; remembers important patterns and lessons
+
+### Zhongli Quan (balancer and coordinator)
+- **Memory focus**: balanced strategies, holistic analysis, coordination plans, prudent advice
+- **Learning pattern**: seeks balance among views; remembers successful mediation cases
+
+### Lan Caihe (innovative thinker)
+- **Memory focus**: innovation opportunities, emerging trends, potential discovery, flexible strategies
+- **Learning pattern**: spots emerging opportunities; remembers successful innovation investments
+
+### Han Xiangzi (aesthetic perceiver)
+- **Memory focus**: aesthetic trends, creative insight, intuitive analysis, artistic perspective
+- **Learning pattern**: analyzes the market from an aesthetic angle; remembers art- and creativity-related investments
+
+### Cao Guojiu (pragmatic executor)
+- **Memory focus**: execution strategy, institutional flows, practical operations, professional analysis
+- **Learning pattern**: focuses on execution; remembers institutional behavior and professional analysis
+
+## 🔧 Advanced Configuration
+
+### 1. Custom memory types
+```python
+# Supported memory types
+MEMORY_TYPES = [
+    "conversation",  # conversational memory
+    "preference",    # preference memory
+    "knowledge",     # knowledge memory
+    "strategy"       # strategy memory
+]
+
+# Add a custom memory
+await memory_bank.add_memory(
+    agent_name="tieguaili",
+    content="Contrarian strategies tend to work better in bear markets",
+    memory_type="strategy",
+    debate_topic="market strategy",
+    metadata={
+        "market_condition": "bear_market",
+        "confidence": 0.8,
+        "source": "historical_analysis"
+    }
+)
+```
+
+### 2. Memory search and filtering
+```python
+# Search memories of a specific type
+strategy_memories = await memory_bank.search_memories(
+    agent_name="tieguaili",
+    query="contrarian investing",
+    memory_type="strategy",
+    limit=10
+)
+
+# Get an agent's full context
+context = await memory_bank.get_agent_context(
+    agent_name="tieguaili",
+    debate_topic="NVIDIA investment analysis"
+)
+```
+
+### 3. Saving debate sessions
+```python
+# Save a debate session automatically
+await memory_bank.save_debate_session(
+    debate_topic="Bitcoin investment value",
+    participants=["tieguaili", "lvdongbin", "hexiangu"],
+    conversation_history=conversation_history,
+    outcomes={
+        "winner": "tieguaili",
+        "key_insights": ["the importance of contrarian thinking in crypto investing"],
+        "consensus": "more careful risk management is needed"
+    }
+)
+```
+
+## 📊 Monitoring and Management
+
+### 1. Memory bank status check
+```python
+# Check memory bank status
+for agent_name, bank_name in memory_bank.memory_banks.items():
+    chinese_name = memory_bank.baxian_agents[agent_name]
+    print(f"{chinese_name}: {bank_name}")
+```
+
+### 2. Memory usage statistics
+```python
+# Fetch memory statistics
+stats = await memory_bank.get_memory_stats(agent_name="tieguaili")
+print(f"Total memories: {stats['total_memories']}")
+print(f"Conversation memories: {stats['conversation_count']}")
+print(f"Strategy memories: {stats['strategy_count']}")
+```
+
+### 3. Memory cleanup and maintenance
+```python
+# Clean up stale memories (if needed)
+await memory_bank.cleanup_old_memories(
+    agent_name="tieguaili",
+    days_old=30,
+    memory_type="conversation"
+)
+```
+
+## 🔒 Security and Privacy
+
+### 1. Data encryption
+- All memory data is encrypted in transit and at rest
+- Backed by Google Cloud's enterprise-grade security
+
+### 2. Access control
+- Each agent can only access its own memory bank
+- Project-level access is controlled through IAM
+
+### 3. Data retention
+- Memory retention periods are configurable
+- Sensitive memories can be deleted manually
+
+## 🚨 Troubleshooting
+
+### Common issues
+
+#### 1. Project ID not configured
+```
+❌ Google Cloud Project ID 未配置
+```
+**Fix**: set the `GOOGLE_CLOUD_PROJECT_ID` environment variable
+
+#### 2. Insufficient API permissions
+```
+❌ 403 Forbidden: Vertex AI API access denied
+```
+**Fix**:
+- Enable the Vertex AI API in the Google Cloud Console
+- Make sure the service account has the right permissions
+
+#### 3. Memory bank creation fails
+```
+❌ 创建记忆银行失败: Region not supported
+```
+**Fix**:
+- Check the `GOOGLE_CLOUD_LOCATION` setting
+- Use a region that supports Memory Bank (such as us-central1)
+
+#### 4. Missing dependency
+```
+❌ Google Cloud AI Platform 未安装
+```
+**Fix**:
+```bash
+pip install "google-cloud-aiplatform>=1.38.0"
+```
+
+### Debug mode
+```python
+# Enable verbose logging
+import logging
+logging.basicConfig(level=logging.DEBUG)
+```
+
+```bash
+# Test the connection
+python tests/test_vertex_memory_bank.py
+```
+
+## 📚 References
+
+- [Vertex AI Memory Bank announcement](https://cloud.google.com/blog/products/ai-machine-learning/vertex-ai-memory-bank-in-public-preview)
+- [Google ADK documentation](https://github.com/google/adk-python)
+- [Jixia Academy project docs](../README.md)
+
+## 🤝 Contributing
+
+To contribute to the Memory Bank feature:
+
+1. Make sure every new feature has corresponding tests
+2. Follow the existing code style and comment conventions
+3. Update the relevant docs
+4. Run the full test suite before opening a Pull Request
+
+---
+
+**Let AI debate illuminate investment wisdom; the memory bank keeps that wisdom alive** 🏛️✨
\ No newline at end of file
diff --git a/meta_analysis_results.txt b/docs/analysis/meta_analysis_results.txt
similarity index 100%
rename from meta_analysis_results.txt
rename to docs/analysis/meta_analysis_results.txt
diff --git a/docs/analysis/rss_debug_analysis.md b/docs/analysis/rss_debug_analysis.md
new file mode 100644
index 0000000..60d4d1d
--- /dev/null
+++ b/docs/analysis/rss_debug_analysis.md
@@ -0,0 +1,140 @@
+# Why the RSS Code Only Captures One Item
+
+## Symptom
+The original code was expected to fetch 100 RSS items, but it actually fetched only 1.
+
+## Possible Causes
+
+### 1. RSS structure detection
+**Most likely cause**: the actual structure of the RSS payload matches none of the 4 shapes the code expects.
+
+Common RSS payload shapes include:
+- `rss.channel.item[]` (standard RSS 2.0)
+- `feed.entry[]` (Atom)
+- `channel.item[]` (simplified RSS)
+- `data[]` (some API response formats)
+- a bare array of objects
+
+### 2. 输入数据获取问题
+```javascript
+const rssSource = inputs[0]?.json; // 可能inputs[0]为空或结构不对
+```
+
+### 3. 去重逻辑过于严格
+如果MongoDB中已有大量数据,可能导致新数据被过度过滤。
+
+### 4. 错误处理不足
+原代码缺乏详细的调试信息,难以定位具体问题。
+
+## 解决方案
+
+### 1. 增强数据结构识别
+```javascript
+// 扩展RSS结构处理
+if (rssSource?.rss?.channel?.item && Array.isArray(rssSource.rss.channel.item)) {
+ rssItems = rssSource.rss.channel.item;
+}
+else if (rssSource?.feed?.entry && Array.isArray(rssSource.feed.entry)) {
+ rssItems = rssSource.feed.entry;
+}
+// ... 更多结构支持
+```
+
+### 2. 添加详细调试信息
+```javascript
+console.log('输入数据结构:', JSON.stringify(inputs[0], null, 2).substring(0, 500));
+console.log('RSS源数据的所有键:', Object.keys(rssSource || {}));
+```
+
+### 3. 改进去重逻辑
+```javascript
+// 只有当MongoDB确实有数据时才进行去重
+if (existingItems.length > 0 && existingItems[0] !== null) {
+ // 执行去重逻辑
+} else {
+ console.log('MongoDB中无现有数据,跳过去重检查');
+}
+```
+
+### 4. 增强错误处理
+```javascript
+try {
+ // 主要逻辑
+} catch (error) {
+ console.error("处理错误:", error.message);
+ console.error("错误堆栈:", error.stack);
+}
+```
+
+## 调试步骤
+
+1. **检查输入数据结构**
+ ```javascript
+ console.log('inputs长度:', inputs.length);
+ console.log('第一个输入:', inputs[0]);
+ ```
+
+2. **检查RSS源数据**
+ ```javascript
+ console.log('RSS源数据类型:', typeof rssSource);
+ console.log('RSS源数据键:', Object.keys(rssSource || {}));
+ ```
+
+3. **检查提取结果**
+ ```javascript
+ console.log('提取到的RSS条目数:', rssItems.length);
+ console.log('第一个RSS条目:', rssItems[0]);
+ ```
+
+4. **检查去重影响**
+ ```javascript
+ console.log('MongoDB现有数据数量:', existingItems.length);
+ console.log('去重后输出数量:', outputItems.length);
+ ```
+
+## 建议的修复代码
+
+使用 `improved_rss_code.js` 中的改进版本,它包含:
+- 更全面的RSS结构支持
+- 详细的调试信息输出
+- 改进的去重逻辑
+- 更好的错误处理
+- 更灵活的字段映射
+
+## 常见RSS结构示例
+
+### RSS 2.0格式
+```json
+{
+ "rss": {
+ "channel": {
+ "item": [
+ {"title": "新闻1", "link": "url1"},
+ {"title": "新闻2", "link": "url2"}
+ ]
+ }
+ }
+}
+```
+
+### Atom格式
+```json
+{
+ "feed": {
+ "entry": [
+ {"title": "新闻1", "link": {"href": "url1"}},
+ {"title": "新闻2", "link": {"href": "url2"}}
+ ]
+ }
+}
+```
+
+### 简化格式
+```json
+{
+ "items": [
+ {"title": "新闻1", "url": "url1"},
+ {"title": "新闻2", "url": "url2"}
+ ]
+}
+```
\ No newline at end of file
diff --git a/docs/guides/GOOGLE_ADK_MIGRATION_GUIDE.md b/docs/guides/GOOGLE_ADK_MIGRATION_GUIDE.md
new file mode 100644
index 0000000..4f91c73
--- /dev/null
+++ b/docs/guides/GOOGLE_ADK_MIGRATION_GUIDE.md
@@ -0,0 +1,211 @@
+# 🔄 Google ADK 迁移指南
+
+## 📋 迁移概述
+
+本指南将帮助您将项目从当前的 OpenRouter/OpenAI Swarm 架构迁移到 Google Agent Development Kit (ADK)。
+
+## 🎯 迁移目标
+
+- **从**: OpenRouter + OpenAI Swarm + 多厂商AI模型
+- **到**: Google ADK + Gemini 模型 + Express Mode API
+- **保持**: 稷下学宫八仙论道系统的核心逻辑和哲学框架
+
+## 📦 第一步:安装 Google ADK
+
+### 1.1 更新 Python 环境要求
+
+```bash
+# 确保 Python 3.9+ 版本
+python --version
+
+# 创建新的虚拟环境(推荐)
+python -m venv .venv
+
+# 激活虚拟环境
+# macOS/Linux:
+source .venv/bin/activate
+# Windows CMD:
+# .venv\Scripts\activate.bat
+# Windows PowerShell:
+# .venv\Scripts\Activate.ps1
+```
+
+### 1.2 安装 Google ADK
+
+```bash
+# 安装稳定版本
+pip install google-adk
+
+# 或安装最新开发版本
+pip install git+https://github.com/google/adk-python.git@main
+```
+
+## 🔑 第二步:配置 API 密钥
+
+### 2.1 获取 Gemini API 密钥
+
+您有三种选择:
+
+**选项A: Google AI Studio (推荐开发环境)**
+- 前往 [Google AI Studio](https://aistudio.google.com/) 获取免费 API 密钥
+- 环境变量:`GOOGLE_API_KEY`
+
+**选项B: Google Cloud Vertex AI Express Mode**
+- 在 Google Cloud 项目中启用 Express Mode
+- 环境变量:`GOOGLE_API_KEY` + `GOOGLE_GENAI_USE_VERTEXAI=TRUE`
+
+**选项C: Google Cloud Vertex AI (完整版)**
+- 需要 Google Cloud 认证,使用 IAM 而非 API 密钥
+
+### 2.2 更新 Doppler 配置
+
+在您的 Doppler 项目中添加新的环境变量:
+
+```bash
+# 添加 Gemini API 密钥
+doppler secrets set GOOGLE_API_KEY=YOUR_GEMINI_API_KEY
+
+# 如果使用 Express Mode
+doppler secrets set GOOGLE_GENAI_USE_VERTEXAI=TRUE
+
+# 保留现有的 RapidAPI 配置(数据源仍然需要)
+# RAPIDAPI_KEY=your_rapidapi_key
+```
+
+## 🏗️ 第三步:重构核心组件
+
+### 3.1 更新配置管理
+
+需要更新 `config/doppler_config.py`:
+
+```python
+def get_google_api_key() -> str:
+ """获取 Google API 密钥"""
+ return get_secret('GOOGLE_API_KEY')
+
+def get_google_genai_config() -> Dict[str, str]:
+ """获取 Google GenAI 配置"""
+ return {
+ 'api_key': get_google_api_key(),
+ 'use_vertex_ai': get_secret('GOOGLE_GENAI_USE_VERTEXAI', 'FALSE'),
+ 'project_id': get_secret('GOOGLE_CLOUD_PROJECT_ID', '')
+ }
+```
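
上面两个函数依赖的 `get_secret` 大致如下(示意:假设 Doppler 在运行时把密钥注入为环境变量):

```python
import os

def get_secret(name: str, default: str = '') -> str:
    """读取 Doppler 注入的环境变量;缺失时返回默认值"""
    return os.getenv(name, default)
```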
+
+### 3.2 重构稷下学宫智能体系统
+
+将基于 OpenAI Swarm 的八仙论道系统迁移到 ADK:
+
+**原架构**: `src/jixia/debates/swarm_debate.py` (OpenAI Swarm)
+**新架构**: `src/jixia/debates/adk_debate.py` (Google ADK)
+
+### 3.3 ADK 智能体定义
+
+每个"仙"将成为独立的 ADK Agent:
+
+```python
+# 示例:铁拐李智能体(接口细节以 ADK 官方文档为准)
+from google.adk.agents import Agent
+
+tie_guai_li_agent = Agent(
+    name="tie_guai_li",  # ADK 要求 name 为合法的 Python 标识符
+    model="gemini-2.0-flash-exp",
+    description="逆向思维专家,善于从困境中寻找突破",
+    instruction="你是铁拐李,八仙中的逆向思维专家...",
+    # 普通 Python 函数即可注册为工具,ADK 会自动包装为 FunctionTool
+    tools=[reverse_analysis, breakthrough_analysis],
+)
+```
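
智能体创建后,可以通过 ADK 的 Runner 发起一次对话。下面是一个示意(`InMemoryRunner`、`run_async` 等接口基于 ADK 1.x 公开文档,细节以官方文档为准;`google.adk` 采用延迟导入,未安装时本模块仍可加载):

```python
import asyncio

async def ask(agent, question: str, user_id: str = "demo") -> str:
    """用 InMemoryRunner 运行智能体,汇总流式事件中的文本回复(示意)"""
    from google.adk.runners import InMemoryRunner  # 延迟导入:需要已安装 google-adk
    from google.genai import types

    runner = InMemoryRunner(agent=agent, app_name="jixia")
    session = await runner.session_service.create_session(
        app_name="jixia", user_id=user_id
    )
    content = types.Content(role="user", parts=[types.Part(text=question)])
    reply = ""
    # run_async 返回异步生成器,需要逐事件累积文本
    async for event in runner.run_async(
        user_id=user_id, session_id=session.id, new_message=content
    ):
        if event.content and event.content.parts:
            reply += "".join(part.text or "" for part in event.content.parts)
    return reply
```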
+
+## 🔄 第四步:分阶段迁移策略
+
+### 阶段1:基础设施迁移 (第1-2天)
+- [ ] 安装 Google ADK
+- [ ] 配置 API 密钥
+- [ ] 创建简单的测试智能体
+- [ ] 验证基本功能
+
+### 阶段2:核心逻辑迁移 (第3-5天)
+- [ ] 重构八仙智能体定义
+- [ ] 迁移论道逻辑
+- [ ] 保持数据源 (RapidAPI) 集成
+- [ ] 测试单个智能体功能
+
+### 阶段3:系统集成 (第6-7天)
+- [ ] 多智能体协作
+- [ ] Streamlit 界面适配
+- [ ] 完整功能测试
+- [ ] 性能优化
+
+### 阶段4:部署和监控 (第8天)
+- [ ] 部署配置
+- [ ] 监控设置
+- [ ] 文档更新
+
+## 📊 功能对照表
+
+| 当前 (OpenRouter/Swarm) | 迁移后 (Google ADK) | 状态 |
+|-------------------------|-------------------|------|
+| OpenAI Swarm 多智能体 | ADK Multi-Agent | ✅ 等价替换 |
+| OpenRouter 模型池 | Gemini 模型系列 | ✅ 统一模型 |
+| 自定义 Tool 系统 | ADK FunctionTool | ✅ 等价替换 |
+| 论道逻辑 | ADK Agent协作 | ✅ 保持逻辑 |
+| RapidAPI 数据源 | 保持不变 | ✅ 无需改动 |
+| Streamlit 界面 | ADK Dev UI + Streamlit | ✅ 双界面 |
+
+## 🎛️ 开发工具对比
+
+### 当前工具
+- OpenRouter API 测试
+- 自定义调试脚本
+- Streamlit 界面
+
+### ADK 工具
+```bash
+# ADK 开发界面 (推荐)
+adk web
+
+# 命令行运行
+adk run multi_tool_agent
+
+# API 服务器模式
+adk api_server
+```
+
+## 🚨 注意事项
+
+### 保留的组件
+- **RapidAPI 数据源**: 继续使用,无需更改
+- **MongoDB 数据库**: 继续使用
+- **Doppler 配置管理**: 继续使用,仅添加新密钥
+- **稷下学宫哲学框架**: 完全保持
+
+### 移除的组件
+- OpenAI Swarm 依赖
+- OpenRouter API 调用
+- 多厂商 API 密钥管理
+
+### 新增优势
+- **统一的模型生态**: 专注 Gemini 系列
+- **更强的 Google 服务集成**: Search、Cloud 等
+- **官方支持的框架**: 长期维护保证
+- **更好的开发工具**: ADK Dev UI
+
+## 📝 下一步行动
+
+1. **立即开始**: 运行第一步的环境配置
+2. **获取 API 密钥**: 访问 Google AI Studio
+3. **阅读 ADK 文档**: [官方文档](https://google.github.io/adk-docs/)
+4. **测试简单智能体**: 验证基本功能
+
+准备好开始迁移了吗?我可以帮您逐步执行每个阶段!
\ No newline at end of file
diff --git a/MIGRATION_PLAN.md b/docs/guides/MIGRATION_PLAN.md
similarity index 100%
rename from MIGRATION_PLAN.md
rename to docs/guides/MIGRATION_PLAN.md
diff --git a/docs/guides/README_jixia_load_balancing.md b/docs/guides/README_jixia_load_balancing.md
new file mode 100644
index 0000000..c2e54f0
--- /dev/null
+++ b/docs/guides/README_jixia_load_balancing.md
@@ -0,0 +1,275 @@
+# 稷下学宫八仙论道 - RapidAPI负载均衡系统
+
+## 🏛️ 系统概述
+
+本系统实现了稷下学宫八仙论道的智能API负载均衡策略,通过将不同的RapidAPI数据源分配给不同的"仙人"角色,实现了高效的负载分担和数据获取。
+
+### 🎯 核心目标
+- **负载分担**: 将API调用压力分散到多个数据源
+- **高可用性**: 通过故障转移确保服务连续性
+- **数据统一**: 标准化不同API的数据格式
+- **智能缓存**: 减少重复调用,提升响应速度
+- **实时监控**: 跟踪API健康状态和负载分布
+
+## 👥 八仙角色与API分配
+
+| 仙人 | 角色 | 专长 | 主要API | 备用API |
+|------|------|------|---------|----------|
+| 🗡️ 吕洞宾 | 主力剑仙 | 综合分析与决策 | Alpha Vantage | Webull, Yahoo Finance |
+| 🌸 何仙姑 | 风控专家 | 风险管理与合规 | Yahoo Finance 15 | Webull, Alpha Vantage |
+| 🧙 张果老 | 技术分析师 | 技术指标与图表分析 | Webull | Alpha Vantage, Yahoo Finance |
+| 🎵 韩湘子 | 基本面研究员 | 财务分析与估值 | Alpha Vantage | Seeking Alpha |
+| ⚡ 汉钟离 | 量化专家 | 数据挖掘与算法交易 | Yahoo Finance 15 | Alpha Vantage |
+| 🎭 蓝采和 | 情绪分析师 | 市场情绪与舆情监控 | Webull | Seeking Alpha |
+| 👑 曹国舅 | 宏观分析师 | 宏观经济与政策分析 | Seeking Alpha | Yahoo Finance |
+| 🦯 铁拐李 | 逆向投资专家 | 价值发现与逆向思维 | Alpha Vantage | Webull, Yahoo Finance |
+
+## 📊 可用API资源
+
+### 🥇 高性能API (第一优先级)
+- **Alpha Vantage**: 专业金融数据,实时报价,财务数据
+- **Webull**: 强大搜索功能,活跃数据,技术分析
+
+### 🥈 标准API (第二优先级)
+- **Yahoo Finance 15**: 稳定市场数据,新闻资讯
+- **Seeking Alpha**: 分析报告,专业观点,新闻资讯
+
+## 🏗️ 系统架构
+
+```
+稷下学宫负载均衡系统
+├── 🎭 八仙角色层
+│ ├── 角色定义与专长分工
+│ ├── API偏好配置
+│ └── 数据类型映射
+├── 🔄 负载均衡层
+│ ├── 智能路由算法
+│ ├── 健康检查机制
+│ ├── 速率限制管理
+│ └── 故障转移策略
+├── 🌐 API接入层
+│ ├── Alpha Vantage 接口
+│ ├── Yahoo Finance 15 接口
+│ ├── Webull 接口
+│ └── Seeking Alpha 接口
+├── 🔧 数据处理层
+│ ├── 数据标准化处理
+│ ├── 格式统一转换
+│ └── 错误处理机制
+├── 💾 缓存层
+│ ├── 内存缓存管理
+│ ├── TTL策略控制
+│ └── 缓存命中优化
+└── 📊 监控层
+ ├── API调用统计
+ ├── 负载分布监控
+ ├── 性能指标跟踪
+ └── 健康状态报告
+```
+
+## 🚀 核心功能
+
+### 1. 智能负载分担
+- **角色分工**: 每个仙人使用不同的主要API
+- **权重分配**: 基于API性能和可靠性的智能分配
+- **动态调整**: 根据实时负载情况自动调整
+
+### 2. 自动故障转移
+- **健康检查**: 实时监控API可用性
+- **故障检测**: 连续失败次数阈值检测
+- **备用切换**: 自动切换到备用API
+- **恢复机制**: 主API恢复后自动切回
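
故障转移的核心选择逻辑可以概括为下面的示意函数(阈值与字段名为假设,实际实现见 `jixia_load_balancer.py`):

```python
def pick_api(preferred: str, backups: list, health: dict, max_failures: int = 3) -> str:
    """主API健康则使用主API,否则选第一个健康的备用API"""
    def healthy(name: str) -> bool:
        status = health.get(name, {'consecutive_failures': 0})
        return status['consecutive_failures'] < max_failures

    if healthy(preferred):
        return preferred
    for api in backups:
        if healthy(api):
            return api
    raise RuntimeError('所有API均不可用')
```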
+
+### 3. 数据标准化
+```python
+# 统一的数据格式
+{
+ 'symbol': 'AAPL',
+ 'price': 202.38,
+ 'change': -5.12,
+ 'change_percent': '-2.50%',
+ 'volume': 45678900,
+ 'high': 207.50,
+ 'low': 201.85,
+ 'source': 'alpha_vantage',
+ 'timestamp': '2025-08-02'
+}
+```
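
标准化层做的主要是字段映射。以 Alpha Vantage 的 GLOBAL_QUOTE 为例,可以这样转换为统一格式(带编号字段名基于该API的常见返回结构,仍以实际响应为准):

```python
def normalize_alpha_vantage_quote(raw: dict) -> dict:
    """把 Alpha Vantage GLOBAL_QUOTE 的带编号字段映射为统一格式"""
    return {
        'symbol': raw.get('01. symbol'),
        'price': float(raw.get('05. price', 0) or 0),
        'change': float(raw.get('09. change', 0) or 0),
        'change_percent': raw.get('10. change percent'),
        'volume': int(float(raw.get('06. volume', 0) or 0)),
        'source': 'alpha_vantage',
    }
```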
+
+### 4. 智能缓存策略
+- **分层缓存**: 不同数据类型使用不同TTL
+- **缓存预热**: 预先加载热点数据
+- **缓存穿透保护**: 避免缓存雪崩
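
分层TTL缓存的最小实现示意如下(TTL取值为假设,仅说明思路):

```python
import time

class TTLCache:
    """按数据类型设置不同TTL的内存缓存(示意)"""
    TTL_BY_TYPE = {'stock_quote': 60, 'company_overview': 3600, 'news': 300}  # 秒,假设值

    def __init__(self):
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry and time.time() < entry[0]:
            return entry[1]
        self._store.pop(key, None)  # 过期即清除
        return None

    def set(self, key, value, data_type='stock_quote'):
        ttl = self.TTL_BY_TYPE.get(data_type, 60)
        self._store[key] = (time.time() + ttl, value)
```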
+
+### 5. 实时监控
+- **API调用统计**: 实时跟踪每个API的调用次数
+- **负载分布**: 可视化负载分布情况
+- **性能指标**: 响应时间、成功率等关键指标
+- **告警机制**: 异常情况自动告警
+
+## 📁 文件结构
+
+```
+/home/ben/liurenchaxin/
+├── src/jixia/
+│ ├── engines/
+│ │ └── jixia_load_balancer.py # 核心负载均衡引擎
+│ └── config/
+│ └── immortal_api_config.json # 八仙角色与API配置
+├── examples/
+│   └── demo_jixia_load_balancing.py     # 演示脚本
+├── jixia_load_balancing_strategy.md     # 策略文档
+└── docs/guides/
+    └── README_jixia_load_balancing.md   # 本说明文档
+```
+
+## 🎮 使用方法
+
+### 1. 环境准备
+```bash
+# 确保已配置RapidAPI密钥
+export RAPIDAPI_KEY="your_rapidapi_key"
+
+# 或使用Doppler管理环境变量
+doppler run python examples/demo_jixia_load_balancing.py
+```
+
+### 2. 基本使用
+```python
+from src.jixia.engines.jixia_load_balancer import JixiaLoadBalancer
+
+# 创建负载均衡器
+load_balancer = JixiaLoadBalancer(rapidapi_key)
+
+# 单个仙人获取数据
+result = load_balancer.get_data_for_immortal('吕洞宾', 'stock_quote', 'AAPL')
+
+# 八仙论道(完整演示)
+results = load_balancer.conduct_immortal_debate('TSLA')
+
+# 查看负载分布
+distribution = load_balancer.get_load_distribution()
+```
+
+### 3. 运行演示
+```bash
+# 完整演示
+cd /home/ben/liurenchaxin
+doppler run python examples/demo_jixia_load_balancing.py
+
+# 查看演示结果
+ls demo_results_*.json
+```
+
+## 📊 演示结果
+
+### 负载分布统计
+基于实际运行的演示结果:
+
+| API | 调用次数 | 负载占比 | 健康状态 | 平均响应时间 |
+|-----|----------|----------|----------|-------------|
+| Alpha Vantage | 8次 | 33.3% | 🟢 健康 | ~1.3s |
+| Yahoo Finance 15 | 7次 | 29.2% | 🟢 健康 | ~1.9s |
+| Webull | 9次 | 37.5% | 🟢 健康 | ~2.0s |
+| Seeking Alpha | 0次 | 0.0% | 🟢 健康 | N/A |
+
+### 性能指标
+- **总API调用**: 24次
+- **成功率**: 100%
+- **平均响应时间**: 1.7秒
+- **缓存命中率**: 约30%
+- **故障转移**: 自动且无缝
+
+## 🔧 配置说明
+
+### API配置 (`immortal_api_config.json`)
+```json
+{
+ "immortals": {
+ "吕洞宾": {
+ "title": "主力剑仙",
+ "specialty": "综合分析与决策",
+ "preferred_apis": {
+ "stock_quote": "alpha_vantage",
+ "company_overview": "alpha_vantage"
+ },
+ "api_weight": 0.15
+ }
+ },
+ "api_configurations": {
+ "alpha_vantage": {
+ "reliability_score": 0.95,
+ "response_time_avg": 0.8,
+ "cost_per_call": 0.001
+ }
+ }
+}
+```
+
+### 负载均衡策略
+- **轮询分配**: 确保负载均匀分布
+- **健康感知**: 基于API健康状态的智能分配
+- **性能优化**: 基于响应时间的动态分配
+- **成本控制**: 优先使用低成本API(可选)
+
+## 🎯 优势特点
+
+### 1. 高可用性
+- ✅ 多API冗余,单点故障不影响整体服务
+- ✅ 自动故障检测和恢复机制
+- ✅ 实时健康监控和告警
+
+### 2. 高性能
+- ✅ 智能缓存减少重复调用
+- ✅ 并发处理提升响应速度
+- ✅ 负载均衡避免单API过载
+
+### 3. 高扩展性
+- ✅ 模块化设计,易于添加新API
+- ✅ 配置驱动,无需修改代码
+- ✅ 插件化架构支持自定义扩展
+
+### 4. 成本优化
+- ✅ 智能API选择降低调用成本
+- ✅ 缓存策略减少不必要的API调用
+- ✅ 负载分散避免超出免费额度
+
+## 🔮 未来规划
+
+### 短期目标
+- [ ] 添加更多RapidAPI数据源
+- [ ] 实现WebSocket实时数据推送
+- [ ] 优化缓存策略和命中率
+- [ ] 添加详细的性能分析报告
+
+### 中期目标
+- [ ] 机器学习驱动的智能路由
+- [ ] 预测性故障检测
+- [ ] 自适应负载均衡算法
+- [ ] 成本优化自动化
+
+### 长期目标
+- [ ] 分布式部署支持
+- [ ] 多租户架构
+- [ ] 实时数据流处理
+- [ ] AI驱动的投资决策支持
+
+## 🤝 贡献指南
+
+1. **Fork** 项目仓库
+2. **创建** 功能分支 (`git checkout -b feature/AmazingFeature`)
+3. **提交** 更改 (`git commit -m 'Add some AmazingFeature'`)
+4. **推送** 到分支 (`git push origin feature/AmazingFeature`)
+5. **创建** Pull Request
+
+## 📄 许可证
+
+本项目采用 MIT 许可证 - 查看 [LICENSE](LICENSE) 文件了解详情。
+
+## 📞 联系方式
+
+- **项目维护者**: 稷下学宫开发团队
+- **技术支持**: 通过 GitHub Issues 提交问题
+- **文档更新**: 欢迎提交文档改进建议
+
+---
+
+*🏛️ 稷下学宫 - 让智慧的光芒照亮投资的道路*
\ No newline at end of file
diff --git a/requirements.md b/docs/requirements.md
similarity index 100%
rename from requirements.md
rename to docs/requirements.md
diff --git a/examples/data/demo_results_aapl.json b/examples/data/demo_results_aapl.json
new file mode 100644
index 0000000..692f02f
--- /dev/null
+++ b/examples/data/demo_results_aapl.json
@@ -0,0 +1,106 @@
+{
+ "timestamp": "2025-08-02T17:01:29.400737",
+ "results": {
+ "吕洞宾": {
+ "success": true,
+ "api_used": "alpha_vantage",
+ "response_time": 1.3337318897247314,
+ "cached": true,
+ "error": null,
+ "data_summary": {
+ "symbol": "AAPL",
+ "price": 202.38,
+ "change_percent": "-2.5004%"
+ }
+ },
+ "何仙姑": {
+ "success": true,
+ "api_used": "yahoo_finance_15",
+ "response_time": 1.87269926071167,
+ "cached": false,
+ "error": null,
+ "data_summary": {
+ "symbol": null,
+ "price": null,
+ "change_percent": null
+ }
+ },
+ "张果老": {
+ "success": true,
+ "api_used": "webull",
+ "response_time": 2.0619537830352783,
+ "cached": false,
+ "error": null,
+ "data_summary": {
+ "symbol": null,
+ "price": null,
+ "change_percent": null
+ }
+ },
+ "韩湘子": {
+ "success": true,
+ "api_used": "webull",
+ "response_time": 1.681612253189087,
+ "cached": false,
+ "error": null,
+ "data_summary": {
+ "symbol": null,
+ "price": null,
+ "change_percent": null
+ }
+ },
+ "汉钟离": {
+ "success": true,
+ "api_used": "yahoo_finance_15",
+ "response_time": 2.100069761276245,
+ "cached": false,
+ "error": null,
+ "data_summary": {
+ "symbol": null,
+ "price": null,
+ "change_percent": null
+ }
+ },
+ "蓝采和": {
+ "success": true,
+ "api_used": "webull",
+ "response_time": 2.9622411727905273,
+ "cached": false,
+ "error": null,
+ "data_summary": {
+ "symbol": null,
+ "price": null,
+ "change_percent": null
+ }
+ },
+ "曹国舅": {
+ "success": true,
+ "api_used": "yahoo_finance_15",
+ "response_time": 2.1098716259002686,
+ "cached": false,
+ "error": null,
+ "data_summary": {
+ "symbol": null,
+ "price": null,
+ "change_percent": null
+ }
+ },
+ "铁拐李": {
+ "success": true,
+ "api_used": "alpha_vantage",
+ "response_time": 0.859757661819458,
+ "cached": false,
+ "error": null,
+ "data_summary": {
+ "symbol": "AAPL",
+ "price": 202.38,
+ "change_percent": "-2.5004%"
+ }
+ }
+ },
+ "summary": {
+ "total_immortals": 8,
+ "successful_calls": 8,
+ "failed_calls": 0
+ }
+}
\ No newline at end of file
diff --git a/examples/data/demo_results_msft.json b/examples/data/demo_results_msft.json
new file mode 100644
index 0000000..a8a531c
--- /dev/null
+++ b/examples/data/demo_results_msft.json
@@ -0,0 +1,106 @@
+{
+ "timestamp": "2025-08-02T17:02:25.557362",
+ "results": {
+ "吕洞宾": {
+ "success": true,
+ "api_used": "webull",
+ "response_time": 1.8372488021850586,
+ "cached": true,
+ "error": null,
+ "data_summary": {
+ "symbol": null,
+ "price": null,
+ "change_percent": null
+ }
+ },
+ "何仙姑": {
+ "success": true,
+ "api_used": "yahoo_finance_15",
+ "response_time": 2.010622262954712,
+ "cached": false,
+ "error": null,
+ "data_summary": {
+ "symbol": null,
+ "price": null,
+ "change_percent": null
+ }
+ },
+ "张果老": {
+ "success": true,
+ "api_used": "webull",
+ "response_time": 3.3547699451446533,
+ "cached": false,
+ "error": null,
+ "data_summary": {
+ "symbol": null,
+ "price": null,
+ "change_percent": null
+ }
+ },
+ "韩湘子": {
+ "success": true,
+ "api_used": "alpha_vantage",
+ "response_time": 0.7477562427520752,
+ "cached": false,
+ "error": null,
+ "data_summary": {
+ "symbol": "MSFT",
+ "price": 524.11,
+ "change_percent": "-1.7601%"
+ }
+ },
+ "汉钟离": {
+ "success": true,
+ "api_used": "yahoo_finance_15",
+ "response_time": 2.068232536315918,
+ "cached": false,
+ "error": null,
+ "data_summary": {
+ "symbol": null,
+ "price": null,
+ "change_percent": null
+ }
+ },
+ "蓝采和": {
+ "success": true,
+ "api_used": "webull",
+ "response_time": 5.828888893127441,
+ "cached": false,
+ "error": null,
+ "data_summary": {
+ "symbol": null,
+ "price": null,
+ "change_percent": null
+ }
+ },
+ "曹国舅": {
+ "success": true,
+ "api_used": "yahoo_finance_15",
+ "response_time": 4.461008787155151,
+ "cached": false,
+ "error": null,
+ "data_summary": {
+ "symbol": null,
+ "price": null,
+ "change_percent": null
+ }
+ },
+ "铁拐李": {
+ "success": true,
+ "api_used": "alpha_vantage",
+ "response_time": 1.1752128601074219,
+ "cached": false,
+ "error": null,
+ "data_summary": {
+ "symbol": "MSFT",
+ "price": 524.11,
+ "change_percent": "-1.7601%"
+ }
+ }
+ },
+ "summary": {
+ "total_immortals": 8,
+ "successful_calls": 8,
+ "failed_calls": 0
+ }
+}
\ No newline at end of file
diff --git a/examples/data/demo_results_tsla.json b/examples/data/demo_results_tsla.json
new file mode 100644
index 0000000..2285d7b
--- /dev/null
+++ b/examples/data/demo_results_tsla.json
@@ -0,0 +1,106 @@
+{
+ "timestamp": "2025-08-02T17:01:59.012217",
+ "results": {
+ "吕洞宾": {
+ "success": true,
+ "api_used": "alpha_vantage",
+ "response_time": 0.7236087322235107,
+ "cached": true,
+ "error": null,
+ "data_summary": {
+ "symbol": "TSLA",
+ "price": 302.63,
+ "change_percent": "-1.8296%"
+ }
+ },
+ "何仙姑": {
+ "success": true,
+ "api_used": "yahoo_finance_15",
+ "response_time": 1.7378709316253662,
+ "cached": false,
+ "error": null,
+ "data_summary": {
+ "symbol": null,
+ "price": null,
+ "change_percent": null
+ }
+ },
+ "张果老": {
+ "success": true,
+ "api_used": "webull",
+ "response_time": 2.667297601699829,
+ "cached": false,
+ "error": null,
+ "data_summary": {
+ "symbol": null,
+ "price": null,
+ "change_percent": null
+ }
+ },
+ "韩湘子": {
+ "success": true,
+ "api_used": "webull",
+ "response_time": 1.9658794403076172,
+ "cached": false,
+ "error": null,
+ "data_summary": {
+ "symbol": null,
+ "price": null,
+ "change_percent": null
+ }
+ },
+ "汉钟离": {
+ "success": true,
+ "api_used": "yahoo_finance_15",
+ "response_time": 3.024261951446533,
+ "cached": false,
+ "error": null,
+ "data_summary": {
+ "symbol": null,
+ "price": null,
+ "change_percent": null
+ }
+ },
+ "蓝采和": {
+ "success": true,
+ "api_used": "webull",
+ "response_time": 1.5434284210205078,
+ "cached": false,
+ "error": null,
+ "data_summary": {
+ "symbol": null,
+ "price": null,
+ "change_percent": null
+ }
+ },
+ "曹国舅": {
+ "success": true,
+ "api_used": "alpha_vantage",
+ "response_time": 1.1568174362182617,
+ "cached": false,
+ "error": null,
+ "data_summary": {
+ "symbol": "TSLA",
+ "price": 302.63,
+ "change_percent": "-1.8296%"
+ }
+ },
+ "铁拐李": {
+ "success": true,
+ "api_used": "alpha_vantage",
+ "response_time": 1.3348329067230225,
+ "cached": false,
+ "error": null,
+ "data_summary": {
+ "symbol": "TSLA",
+ "price": 302.63,
+ "change_percent": "-1.8296%"
+ }
+ }
+ },
+ "summary": {
+ "total_immortals": 8,
+ "successful_calls": 8,
+ "failed_calls": 0
+ }
+}
\ No newline at end of file
diff --git a/examples/demo_jixia_load_balancing.py b/examples/demo_jixia_load_balancing.py
new file mode 100644
index 0000000..2234cba
--- /dev/null
+++ b/examples/demo_jixia_load_balancing.py
@@ -0,0 +1,283 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+稷下学宫负载均衡演示脚本
+展示八仙论道的API负载分担策略
+"""
+
+import os
+import sys
+import time
+import json
+from datetime import datetime
+
+# 添加项目路径(相对脚本位置推导,避免硬编码绝对路径)
+sys.path.append(os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))), 'src'))
+
+from jixia.engines.jixia_load_balancer import JixiaLoadBalancer, APIResult
+
+def print_banner():
+ """打印横幅"""
+ print("\n" + "="*80)
+ print("🏛️ 稷下学宫八仙论道 - API负载均衡演示系统")
+ print("📊 RapidAPI多源数据整合与负载分担策略")
+ print("="*80)
+ print(f"⏰ 演示时间: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")
+ print()
+
+def print_immortal_intro():
+ """介绍八仙角色"""
+ immortals_info = {
+ '吕洞宾': '主力剑仙 - 综合分析与决策 (Alpha Vantage)',
+ '何仙姑': '风控专家 - 风险管理与合规 (Yahoo Finance)',
+ '张果老': '技术分析师 - 技术指标与图表分析 (Webull)',
+ '韩湘子': '基本面研究员 - 财务分析与估值 (Seeking Alpha)',
+ '汉钟离': '量化专家 - 数据挖掘与算法交易 (Yahoo Finance)',
+ '蓝采和': '情绪分析师 - 市场情绪与舆情监控 (Webull)',
+ '曹国舅': '宏观分析师 - 宏观经济与政策分析 (Seeking Alpha)',
+ '铁拐李': '逆向投资专家 - 价值发现与逆向思维 (Alpha Vantage)'
+ }
+
+ print("👥 八仙角色与API分配:")
+ print("-" * 60)
+ for immortal, description in immortals_info.items():
+ print(f" {immortal}: {description}")
+ print()
+
+def demonstrate_single_immortal(load_balancer, immortal_name, symbol):
+ """演示单个仙人的数据获取"""
+ print(f"\n🎭 {immortal_name} 单独获取数据演示:")
+ print("-" * 40)
+
+ # 获取股票报价
+ result = load_balancer.get_data_for_immortal(immortal_name, 'stock_quote', symbol)
+
+ if result.success:
+ data = result.data
+ print(f" ✅ 成功获取 {symbol} 数据:")
+ print(f" 💰 价格: ${data.get('price', 'N/A')}")
+ print(f" 📈 涨跌: {data.get('change_percent', 'N/A')}")
+ volume = data.get('volume', 'N/A')
+ if isinstance(volume, (int, float)):
+ print(f" 📊 成交量: {volume:,}")
+ else:
+ print(f" 📊 成交量: {volume}")
+ print(f" 🔗 数据源: {result.api_used}")
+ print(f" ⏱️ 响应时间: {result.response_time:.2f}秒")
+ print(f" 💾 缓存状态: {'是' if result.cached else '否'}")
+ else:
+ print(f" ❌ 获取失败: {result.error}")
+
+def demonstrate_load_distribution(load_balancer):
+ """演示负载分布"""
+ print("\n📊 API负载分布统计:")
+ print("-" * 40)
+
+ distribution = load_balancer.get_load_distribution()
+
+ if not distribution:
+ print(" 📝 暂无API调用记录")
+ return
+
+ total_calls = sum(stats['calls'] for stats in distribution.values())
+
+ for api_name, stats in distribution.items():
+ status_icon = "🟢" if stats['healthy'] else "🔴"
+ print(f" {status_icon} {api_name}:")
+ print(f" 📞 调用次数: {stats['calls']}")
+ print(f" 📊 负载占比: {stats['percentage']:.1f}%")
+ print(f" ❌ 连续失败: {stats['consecutive_failures']}次")
+ print()
+
+def demonstrate_api_comparison(load_balancer, symbol):
+ """演示不同API的数据对比"""
+ print(f"\n🔍 {symbol} 多API数据对比:")
+ print("-" * 50)
+
+ apis = ['alpha_vantage', 'yahoo_finance_15', 'webull']
+ results = {}
+
+ for api in apis:
+ # 临时修改API分配来测试不同数据源
+ original_mapping = load_balancer.immortal_api_mapping['stock_quote']['吕洞宾']
+ load_balancer.immortal_api_mapping['stock_quote']['吕洞宾'] = api
+
+ result = load_balancer.get_data_for_immortal('吕洞宾', 'stock_quote', symbol)
+ results[api] = result
+
+ # 恢复原始配置
+ load_balancer.immortal_api_mapping['stock_quote']['吕洞宾'] = original_mapping
+
+ time.sleep(0.5) # 避免过快请求
+
+ # 显示对比结果
+ print(" API数据源对比:")
+ for api, result in results.items():
+ if result.success:
+ data = result.data
+ print(f" 📡 {api}:")
+ print(f" 💰 ${data.get('price', 'N/A')} ({data.get('change_percent', 'N/A')})")
+ print(f" ⏱️ {result.response_time:.2f}s")
+ else:
+ print(f" 📡 {api}: ❌ {result.error}")
+ print()
+
+def demonstrate_cache_effectiveness(load_balancer, symbol):
+ """演示缓存效果"""
+ print(f"\n💾 缓存效果演示 - {symbol}:")
+ print("-" * 40)
+
+ # 第一次调用(无缓存)
+ print(" 🔄 第一次调用(无缓存):")
+ start_time = time.time()
+ result1 = load_balancer.get_data_for_immortal('吕洞宾', 'stock_quote', symbol)
+ first_call_time = time.time() - start_time
+
+ if result1.success:
+ print(f" ⏱️ 响应时间: {result1.response_time:.2f}秒")
+ print(f" 💾 缓存状态: {'命中' if result1.cached else '未命中'}")
+
+ time.sleep(1)
+
+ # 第二次调用(有缓存)
+ print(" 🔄 第二次调用(有缓存):")
+ start_time = time.time()
+ result2 = load_balancer.get_data_for_immortal('吕洞宾', 'stock_quote', symbol)
+ second_call_time = time.time() - start_time
+
+ if result2.success:
+ print(f" ⏱️ 响应时间: {result2.response_time:.2f}秒")
+ print(f" 💾 缓存状态: {'命中' if result2.cached else '未命中'}")
+
+ if result2.cached:
+ speedup = (first_call_time / second_call_time) if second_call_time > 0 else float('inf')
+ print(f" 🚀 性能提升: {speedup:.1f}倍")
+
+def demonstrate_failover(load_balancer, symbol):
+ """演示故障转移"""
+ print(f"\n🔄 故障转移演示 - {symbol}:")
+ print("-" * 40)
+
+ # 模拟API故障
+ print(" ⚠️ 模拟主API故障...")
+
+ # 临时标记API为不健康
+ original_health = load_balancer.health_checker.health_status['alpha_vantage']['healthy']
+ load_balancer.health_checker.health_status['alpha_vantage']['healthy'] = False
+ load_balancer.health_checker.health_status['alpha_vantage']['consecutive_failures'] = 5
+
+ # 尝试获取数据(应该自动故障转移)
+ result = load_balancer.get_data_for_immortal('吕洞宾', 'stock_quote', symbol)
+
+ if result.success:
+ print(f" ✅ 故障转移成功,使用备用API: {result.api_used}")
+ print(f" 💰 获取到价格: ${result.data.get('price', 'N/A')}")
+ else:
+ print(f" ❌ 故障转移失败: {result.error}")
+
+ # 恢复API健康状态
+ load_balancer.health_checker.health_status['alpha_vantage']['healthy'] = original_health
+ load_balancer.health_checker.health_status['alpha_vantage']['consecutive_failures'] = 0
+
+ print(" 🔧 API健康状态已恢复")
+
+def save_demo_results(results, filename='demo_results.json'):
+ """保存演示结果"""
+ demo_data = {
+ 'timestamp': datetime.now().isoformat(),
+ 'results': {},
+ 'summary': {
+ 'total_immortals': len(results),
+ 'successful_calls': sum(1 for r in results.values() if r.success),
+ 'failed_calls': sum(1 for r in results.values() if not r.success)
+ }
+ }
+
+ for immortal, result in results.items():
+ demo_data['results'][immortal] = {
+ 'success': result.success,
+ 'api_used': result.api_used,
+ 'response_time': result.response_time,
+ 'cached': result.cached,
+ 'error': result.error,
+ 'data_summary': {
+ 'symbol': result.data.get('symbol') if result.success else None,
+ 'price': result.data.get('price') if result.success else None,
+ 'change_percent': result.data.get('change_percent') if result.success else None
+ }
+ }
+
+ with open(filename, 'w', encoding='utf-8') as f:
+ json.dump(demo_data, f, indent=2, ensure_ascii=False)
+
+    print(f"\n💾 演示结果已保存到: {filename}")
+
+def main():
+ """主演示函数"""
+ # 检查API密钥
+ rapidapi_key = os.getenv('RAPIDAPI_KEY')
+ if not rapidapi_key:
+ print("❌ 错误: 请设置RAPIDAPI_KEY环境变量")
+ print(" 提示: 使用 'doppler run python demo_jixia_load_balancing.py' 运行")
+ return
+
+ print_banner()
+ print_immortal_intro()
+
+ # 创建负载均衡器
+ print("🔧 初始化稷下学宫负载均衡器...")
+ load_balancer = JixiaLoadBalancer(rapidapi_key)
+ print("✅ 负载均衡器初始化完成\n")
+
+ # 演示股票代码
+ demo_symbols = ['AAPL', 'TSLA', 'MSFT']
+
+ for i, symbol in enumerate(demo_symbols, 1):
+ print(f"\n{'='*20} 演示 {i}: {symbol} {'='*20}")
+
+ # 1. 单个仙人演示
+ demonstrate_single_immortal(load_balancer, '吕洞宾', symbol)
+
+ # 2. 八仙论道演示
+ print(f"\n🏛️ 八仙论道完整演示 - {symbol}:")
+ debate_results = load_balancer.conduct_immortal_debate(symbol)
+
+ # 3. 负载分布演示
+ demonstrate_load_distribution(load_balancer)
+
+ # 只在第一个股票上演示高级功能
+ if i == 1:
+ # 4. API对比演示
+ demonstrate_api_comparison(load_balancer, symbol)
+
+ # 5. 缓存效果演示
+ demonstrate_cache_effectiveness(load_balancer, symbol)
+
+ # 6. 故障转移演示
+ demonstrate_failover(load_balancer, symbol)
+
+ # 保存结果
+ save_demo_results(debate_results, f'demo_results_{symbol.lower()}.json')
+
+ if i < len(demo_symbols):
+ print("\n⏳ 等待3秒后继续下一个演示...")
+ time.sleep(3)
+
+ # 最终统计
+ print("\n" + "="*80)
+ print("📈 演示完成 - 最终负载分布统计:")
+ demonstrate_load_distribution(load_balancer)
+
+ print("\n🎉 稷下学宫API负载均衡演示完成!")
+ print("\n💡 关键特性:")
+ print(" ✅ 智能负载分担 - 八仙各司其职,分散API压力")
+ print(" ✅ 自动故障转移 - API异常时自动切换备用源")
+ print(" ✅ 数据标准化 - 统一不同API的数据格式")
+ print(" ✅ 智能缓存 - 减少重复调用,提升响应速度")
+ print(" ✅ 实时监控 - 跟踪API健康状态和负载分布")
+ print("\n📚 查看详细配置: /home/ben/liurenchaxin/src/jixia/config/immortal_api_config.json")
+ print("🔧 核心引擎: /home/ben/liurenchaxin/src/jixia/engines/jixia_load_balancer.py")
+ print("="*80)
+
+if __name__ == "__main__":
+ main()
\ No newline at end of file
diff --git a/examples/memory_bank_demo.py b/examples/memory_bank_demo.py
new file mode 100644
index 0000000..4e83451
--- /dev/null
+++ b/examples/memory_bank_demo.py
@@ -0,0 +1,245 @@
+#!/usr/bin/env python3
+"""
+Vertex AI Memory Bank 演示脚本
+展示稷下学宫记忆增强AI辩论系统
+"""
+
+import asyncio
+import sys
+import os
+
+# 添加项目根目录到路径
+sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
+
+from src.jixia.agents.memory_enhanced_agent import create_memory_enhanced_council
+from config.doppler_config import validate_config
+
+
+async def demo_memory_enhanced_debate():
+ """演示记忆增强的AI辩论"""
+
+ print("🏛️ 稷下学宫 Vertex AI Memory Bank 演示")
+ print("=" * 60)
+
+ # 验证配置
+ print("🔧 验证配置...")
+ if not validate_config("google_adk"):
+ print("❌ 配置验证失败,请检查环境变量")
+ return
+
+ try:
+ # 创建记忆增强议会
+ print("\n🎭 创建八仙记忆增强议会...")
+ council = await create_memory_enhanced_council()
+
+ # 演示主题
+ topics = [
+ "特斯拉股票投资价值分析",
+ "人工智能行业投资机会",
+ "加密货币市场前景展望"
+ ]
+
+ # 选择参与的仙人(为了演示,只选择几位)
+ participants = ["tieguaili", "lvdongbin", "hexiangu", "zhangguolao"]
+
+ for i, topic in enumerate(topics, 1):
+ print(f"\n{'='*40}")
+ print(f"🎯 第 {i} 场辩论: {topic}")
+ print(f"{'='*40}")
+
+ # 进行记忆增强辩论
+ result = await council.conduct_memory_debate(
+ topic=topic,
+ participants=participants,
+ rounds=2 # 每场2轮,保持演示简洁
+ )
+
+ print(f"\n📊 辩论结果:")
+ print(f" 主题: {result['topic']}")
+ print(f" 参与者: {len(result['participants'])} 位仙人")
+ print(f" 总发言: {result['total_exchanges']} 次")
+
+ # 显示部分对话内容
+ print(f"\n💬 精彩观点摘录:")
+ for exchange in result['conversation_history'][:4]: # 只显示前4条
+ content_preview = exchange['content'][:120] + "..." if len(exchange['content']) > 120 else exchange['content']
+ print(f" 🗣️ {exchange['chinese_name']}: {content_preview}")
+
+ # 获取集体记忆摘要
+ print(f"\n📚 获取集体记忆...")
+ summary = await council.get_collective_memory_summary(topic)
+
+ if "暂无相关集体记忆" not in summary:
+ print(f" ✅ 已生成 {len(summary)} 字符的记忆摘要")
+ else:
+ print(f" ℹ️ 这是新主题,正在建立记忆")
+
+ # 演示间隔
+ if i < len(topics):
+ print(f"\n⏳ 准备下一场辩论...")
+ await asyncio.sleep(1)
+
+ # 最终演示:展示记忆的累积效果
+ print(f"\n{'='*60}")
+ print("🧠 记忆累积效果演示")
+ print(f"{'='*60}")
+
+ # 让铁拐李基于所有记忆回答一个综合问题
+ tieguaili = council.agents.get("tieguaili")
+ if tieguaili:
+ print(f"\n🤔 向铁拐李提问: '基于你的所有记忆,总结一下当前市场的主要风险'")
+
+ comprehensive_response = await tieguaili.respond_with_memory(
+ message="基于你参与的所有辩论和积累的记忆,总结一下当前市场的主要风险和你的投资建议。",
+ topic="综合市场分析"
+ )
+
+ print(f"\n🧙♂️ 铁拐李的综合分析:")
+ print(f" {comprehensive_response}")
+
+ # 展示记忆学习功能
+ print(f"\n🎓 演示记忆学习功能...")
+
+ # 让何仙姑学习一个用户偏好
+ hexiangu = council.agents.get("hexiangu")
+ if hexiangu:
+ await hexiangu.learn_preference(
+ preference="用户偏好ESG投资,关注环境和社会责任",
+ topic="投资偏好"
+ )
+ print(f" ✅ 何仙姑学习了ESG投资偏好")
+
+ # 基于新学到的偏好回答问题
+ esg_response = await hexiangu.respond_with_memory(
+ message="推荐一些符合ESG标准的投资标的",
+ topic="ESG投资"
+ )
+
+ print(f"\n👸 何仙姑基于学习的偏好回应:")
+ print(f" {esg_response[:200]}...")
+
+ print(f"\n🎉 演示完成!")
+ print(f"\n💡 Memory Bank 的优势:")
+ print(f" ✅ 智能体能记住历史对话和分析")
+ print(f" ✅ 学习用户偏好,提供个性化建议")
+ print(f" ✅ 积累投资策略和市场洞察")
+ print(f" ✅ 跨会话保持一致的人格和观点")
+ print(f" ✅ 基于历史经验做出更好的决策")
+
+ except Exception as e:
+ print(f"❌ 演示过程中出现错误: {e}")
+ print(f"💡 请检查:")
+ print(f" - Google Cloud Project ID 是否正确配置")
+ print(f" - Vertex AI API 是否已启用")
+ print(f" - 网络连接是否正常")
+
+
+async def demo_individual_memory_features():
+ """演示个体记忆功能"""
+
+ print(f"\n{'='*60}")
+ print("🔍 个体记忆功能详细演示")
+ print(f"{'='*60}")
+
+ try:
+ from src.jixia.memory.vertex_memory_bank import VertexMemoryBank
+ from src.jixia.agents.memory_enhanced_agent import MemoryEnhancedAgent
+
+ # 创建记忆银行
+ memory_bank = VertexMemoryBank.from_config()
+
+ # 创建单个智能体进行详细演示
+ agent = MemoryEnhancedAgent("tieguaili", memory_bank)
+
+ print(f"\n🧙♂️ 与 {agent.personality.chinese_name} 的记忆互动演示")
+
+ # 1. 添加不同类型的记忆
+ print(f"\n📝 添加不同类型的记忆...")
+
+ memories_to_add = [
+ {
+ "content": "在2008年金融危机中,逆向投资者获得了丰厚回报",
+ "memory_type": "knowledge",
+ "topic": "历史教训"
+ },
+ {
+ "content": "用户偏好价值投资,不喜欢高风险的成长股",
+ "memory_type": "preference",
+ "topic": "用户偏好"
+ },
+ {
+ "content": "当市场过度乐观时,应该保持谨慎并寻找反向机会",
+ "memory_type": "strategy",
+ "topic": "投资策略"
+ }
+ ]
+
+ for memory in memories_to_add:
+ await memory_bank.add_memory(
+ agent_name="tieguaili",
+ content=memory["content"],
+ memory_type=memory["memory_type"],
+ debate_topic=memory["topic"]
+ )
+ print(f" ✅ 添加{memory['memory_type']}记忆: {memory['content'][:50]}...")
+
+ # 2. 搜索记忆
+ print(f"\n🔍 搜索相关记忆...")
+
+ search_queries = ["金融危机", "价值投资", "投资策略"]
+
+ for query in search_queries:
+ results = await memory_bank.search_memories(
+ agent_name="tieguaili",
+ query=query,
+ limit=3
+ )
+ print(f" 🔎 搜索 '{query}': 找到 {len(results)} 条相关记忆")
+
+ for result in results:
+ relevance = result.get('relevance_score', 'N/A')
+ print(f" - {result['content'][:60]}... (相关度: {relevance})")
+
+ # 3. 基于记忆的智能回应
+ print(f"\n🤖 基于记忆的智能回应演示...")
+
+ questions = [
+ "现在市场很乐观,你有什么建议?",
+ "推荐一些适合保守投资者的标的",
+ "历史上有哪些值得借鉴的投资教训?"
+ ]
+
+ for question in questions:
+ print(f"\n❓ 问题: {question}")
+
+ response = await agent.respond_with_memory(
+ message=question,
+ topic="投资咨询"
+ )
+
+ print(f"🧙‍♂️ 铁拐李: {response[:150]}...")
+
+ print(f"\n✨ 个体记忆功能演示完成!")
+
+ except Exception as e:
+ print(f"❌ 个体记忆演示失败: {e}")
+
+
+async def main():
+ """主演示函数"""
+
+ print("🚀 启动 Vertex AI Memory Bank 完整演示")
+
+ # 主要演示:记忆增强辩论
+ await demo_memory_enhanced_debate()
+
+ # 详细演示:个体记忆功能
+ await demo_individual_memory_features()
+
+ print(f"\n🏛️ 稷下学宫 Memory Bank 演示结束")
+ print(f"📖 更多信息请参考: docs/VERTEX_MEMORY_BANK_SETUP.md")
+
+
+if __name__ == "__main__":
+ # 运行演示
+ asyncio.run(main())
\ No newline at end of file
diff --git a/experiments/memory_bank_experiment.py b/experiments/memory_bank_experiment.py
new file mode 100644
index 0000000..f3aea97
--- /dev/null
+++ b/experiments/memory_bank_experiment.py
@@ -0,0 +1,275 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Memory Bank 实验脚本
+测试八仙人格的长期记忆功能
+"""
+
+import os
+import asyncio
+from datetime import datetime
+from typing import Dict, List, Any
+import json
+
+# Google GenAI 导入
+try:
+ import google.genai as genai
+ from google.genai import types
+except ImportError:
+ print("❌ 请安装 google-genai: pip install google-genai")
+ exit(1)
+
+class MemoryBankExperiment:
+ """Memory Bank 实验类"""
+
+ def __init__(self):
+ self.api_key = os.getenv('GOOGLE_API_KEY')
+ if not self.api_key:
+ raise ValueError("请设置 GOOGLE_API_KEY 环境变量")
+
+ # 初始化 GenAI 客户端(google-genai SDK 使用 Client,没有 configure 接口)
+ self.client = genai.Client(api_key=self.api_key)
+
+ # 八仙人格基线
+ self.immortal_baselines = {
+ "吕洞宾": {
+ "mbti_type": "ENTJ",
+ "core_traits": {
+ "assertiveness": 0.9,
+ "analytical": 0.8,
+ "risk_tolerance": 0.8,
+ "optimism": 0.7
+ },
+ "personality_description": "剑仙投资顾问,主动进取,敢于冒险,技术分析专家"
+ },
+ "何仙姑": {
+ "mbti_type": "ISFJ",
+ "core_traits": {
+ "empathy": 0.9,
+ "caution": 0.8,
+ "loyalty": 0.8,
+ "optimism": 0.4
+ },
+ "personality_description": "慈悲风控专家,谨慎小心,保护意识强,风险厌恶"
+ },
+ "张果老": {
+ "mbti_type": "INTP",
+ "core_traits": {
+ "analytical": 0.9,
+ "curiosity": 0.8,
+ "traditional": 0.7,
+ "caution": 0.6
+ },
+ "personality_description": "历史数据分析师,深度思考,逆向思维,传统智慧"
+ }
+ }
+
+ # 记忆存储(模拟 Memory Bank)
+ self.memory_bank = {}
+
+ def initialize_immortal_memory(self, immortal_name: str):
+ """初始化仙人的记忆空间"""
+ if immortal_name not in self.memory_bank:
+ self.memory_bank[immortal_name] = {
+ "personality_baseline": self.immortal_baselines[immortal_name],
+ "conversation_history": [],
+ "viewpoint_evolution": [],
+ "decision_history": [],
+ "created_at": datetime.now().isoformat(),
+ "last_updated": datetime.now().isoformat()
+ }
+ print(f"🎭 初始化 {immortal_name} 的记忆空间")
+
+ def store_memory(self, immortal_name: str, memory_type: str, content: Dict[str, Any]):
+ """存储记忆到 Memory Bank"""
+ self.initialize_immortal_memory(immortal_name)
+
+ memory_entry = {
+ "type": memory_type,
+ "content": content,
+ "timestamp": datetime.now().isoformat(),
+ "session_id": f"session_{len(self.memory_bank[immortal_name]['conversation_history'])}"
+ }
+
+ if memory_type == "conversation":
+ self.memory_bank[immortal_name]["conversation_history"].append(memory_entry)
+ elif memory_type == "viewpoint":
+ self.memory_bank[immortal_name]["viewpoint_evolution"].append(memory_entry)
+ elif memory_type == "decision":
+ self.memory_bank[immortal_name]["decision_history"].append(memory_entry)
+
+ self.memory_bank[immortal_name]["last_updated"] = datetime.now().isoformat()
+ print(f"💾 {immortal_name} 存储了 {memory_type} 记忆")
+
+ def retrieve_relevant_memories(self, immortal_name: str, query: str) -> List[Dict]:
+ """检索相关记忆"""
+ if immortal_name not in self.memory_bank:
+ return []
+
+ # 简单的关键词匹配(实际应该使用向量相似度搜索)
+ relevant_memories = []
+ query_lower = query.lower()
+
+ for memory in self.memory_bank[immortal_name]["conversation_history"]:
+ if any(keyword in memory["content"].get("user_message", "").lower()  # 键名与 store_memory 写入的一致
+ for keyword in query_lower.split()):
+ relevant_memories.append(memory)
+
+ return relevant_memories[-5:] # 返回最近5条相关记忆
+
+ async def generate_immortal_response(self, immortal_name: str, query: str) -> str:
+ """生成仙人的回应,基于记忆和人格基线"""
+ # 检索相关记忆
+ relevant_memories = self.retrieve_relevant_memories(immortal_name, query)
+
+ # 构建上下文
+ context = self.build_context(immortal_name, relevant_memories)
+
+ # 生成回应(google-genai SDK 通过 client.aio 进行异步调用,
+ # 旧版 google.generativeai 的 GenerativeModel 接口在此不可用)
+ client = genai.Client(api_key=self.api_key)
+
+ prompt = f"""
+ 你是{immortal_name},{self.immortal_baselines[immortal_name]['personality_description']}。
+
+ 你的核心人格特质:
+ {json.dumps(self.immortal_baselines[immortal_name]['core_traits'], ensure_ascii=False, indent=2)}
+
+ 你的相关记忆:
+ {json.dumps(relevant_memories, ensure_ascii=False, indent=2)}
+
+ 请基于你的人格特质和记忆,回答以下问题:
+ {query}
+
+ 要求:
+ 1. 保持人格一致性
+ 2. 参考历史记忆
+ 3. 回答控制在100字以内
+ 4. 体现你的独特风格
+ """
+
+ response = await client.aio.models.generate_content(
+ model="gemini-2.0-flash-exp",
+ contents=prompt,
+ )
+ return response.text
+
+ def build_context(self, immortal_name: str, memories: List[Dict]) -> str:
+ """构建上下文信息"""
+ context_parts = []
+
+ # 添加人格基线
+ baseline = self.immortal_baselines[immortal_name]
+ context_parts.append(f"人格类型: {baseline['mbti_type']}")
+ context_parts.append(f"核心特质: {json.dumps(baseline['core_traits'], ensure_ascii=False)}")
+
+ # 添加相关记忆
+ if memories:
+ context_parts.append("相关记忆:")
+ for memory in memories[-3:]: # 最近3条记忆
+ context_parts.append(f"- {memory['content'].get('user_message', '')}")
+
+ return "\n".join(context_parts)
+
+ def simulate_conversation(self, immortal_name: str, messages: List[str]):
+ """模拟对话,测试记忆功能"""
+ print(f"\n🎭 开始与 {immortal_name} 的对话")
+ print("=" * 50)
+
+ for i, message in enumerate(messages):
+ print(f"\n用户: {message}")
+
+ # 生成回应
+ response = asyncio.run(self.generate_immortal_response(immortal_name, message))
+ print(f"{immortal_name}: {response}")
+
+ # 存储记忆
+ self.store_memory(immortal_name, "conversation", {
+ "user_message": message,
+ "immortal_response": response,
+ "session_id": f"session_{i}"
+ })
+
+ # 存储观点
+ if "看多" in response or "看空" in response or "观望" in response:
+ viewpoint = "看多" if "看多" in response else "看空" if "看空" in response else "观望"
+ self.store_memory(immortal_name, "viewpoint", {
+ "symbol": "TSLA", # 假设讨论特斯拉
+ "viewpoint": viewpoint,
+ "reasoning": response
+ })
+
+ def analyze_memory_evolution(self, immortal_name: str):
+ """分析记忆演化"""
+ if immortal_name not in self.memory_bank:
+ print(f"❌ {immortal_name} 没有记忆数据")
+ return
+
+ memory_data = self.memory_bank[immortal_name]
+
+ print(f"\n📊 {immortal_name} 记忆分析")
+ print("=" * 50)
+ print(f"记忆空间创建时间: {memory_data['created_at']}")
+ print(f"最后更新时间: {memory_data['last_updated']}")
+ print(f"对话记录数: {len(memory_data['conversation_history'])}")
+ print(f"观点演化数: {len(memory_data['viewpoint_evolution'])}")
+ print(f"决策记录数: {len(memory_data['decision_history'])}")
+
+ # 分析观点演化
+ if memory_data['viewpoint_evolution']:
+ print(f"\n观点演化轨迹:")
+ for i, viewpoint in enumerate(memory_data['viewpoint_evolution']):
+ print(f" {i+1}. {viewpoint['content']['viewpoint']} - {viewpoint['timestamp']}")
+
+ def save_memory_bank(self, filename: str = "memory_bank_backup.json"):
+ """保存记忆库到文件"""
+ with open(filename, 'w', encoding='utf-8') as f:
+ json.dump(self.memory_bank, f, ensure_ascii=False, indent=2)
+ print(f"💾 记忆库已保存到 {filename}")
+
+ def load_memory_bank(self, filename: str = "memory_bank_backup.json"):
+ """从文件加载记忆库"""
+ try:
+ with open(filename, 'r', encoding='utf-8') as f:
+ self.memory_bank = json.load(f)
+ print(f"📂 记忆库已从 {filename} 加载")
+ except FileNotFoundError:
+ print(f"⚠️ 文件 {filename} 不存在,使用空记忆库")
+
+def main():
+ """主实验函数"""
+ print("🚀 开始 Memory Bank 实验")
+ print("=" * 60)
+
+ # 创建实验实例
+ experiment = MemoryBankExperiment()
+
+ # 测试对话场景
+ test_scenarios = {
+ "吕洞宾": [
+ "你觉得特斯拉股票怎么样?",
+ "现在市场波动很大,你怎么看?",
+ "你之前不是看好特斯拉吗?现在还是这个观点吗?"
+ ],
+ "何仙姑": [
+ "特斯拉股票风险大吗?",
+ "现在适合投资吗?",
+ "你一直很谨慎,现在还是建议观望吗?"
+ ],
+ "张果老": [
+ "从历史数据看,特斯拉表现如何?",
+ "现在的估值合理吗?",
+ "你之前分析过特斯拉的历史数据,现在有什么新发现?"
+ ]
+ }
+
+ # 执行实验
+ for immortal_name, messages in test_scenarios.items():
+ experiment.simulate_conversation(immortal_name, messages)
+ experiment.analyze_memory_evolution(immortal_name)
+
+ # 保存记忆库
+ experiment.save_memory_bank()
+
+ print("\n🎉 Memory Bank 实验完成!")
+ print("=" * 60)
+
+if __name__ == "__main__":
+ main()
+
diff --git a/experiments/memory_bank_test.py b/experiments/memory_bank_test.py
new file mode 100644
index 0000000..53e0dcb
--- /dev/null
+++ b/experiments/memory_bank_test.py
@@ -0,0 +1,116 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Memory Bank 简化测试脚本
+"""
+
+import os
+import asyncio
+from datetime import datetime
+import json
+
+# Google GenAI 导入
+import google.genai as genai
+
+class MemoryBankTest:
+ """Memory Bank 测试类"""
+
+ def __init__(self):
+ self.api_key = os.getenv('GOOGLE_API_KEY')
+ if not self.api_key:
+ raise ValueError("请设置 GOOGLE_API_KEY 环境变量")
+
+ self.client = genai.Client(api_key=self.api_key)
+
+ # 八仙人格基线
+ self.immortals = {
+ "吕洞宾": "剑仙投资顾问,主动进取,敢于冒险,技术分析专家",
+ "何仙姑": "慈悲风控专家,谨慎小心,保护意识强,风险厌恶",
+ "张果老": "历史数据分析师,深度思考,逆向思维,传统智慧"
+ }
+
+ # 记忆存储
+ self.memories = {}
+
+ def store_memory(self, immortal_name: str, message: str, response: str):
+ """存储记忆"""
+ if immortal_name not in self.memories:
+ self.memories[immortal_name] = []
+
+ self.memories[immortal_name].append({
+ "message": message,
+ "response": response,
+ "timestamp": datetime.now().isoformat()
+ })
+
+ def chat_with_immortal(self, immortal_name: str, message: str) -> str:
+ """与仙人对话"""
+ # 构建上下文
+ context = f"你是{immortal_name},{self.immortals[immortal_name]}。"
+
+ # 添加记忆
+ if immortal_name in self.memories and self.memories[immortal_name]:
+ context += "\n\n你的历史对话:"
+ for memory in self.memories[immortal_name][-3:]: # 最近3条
+ context += f"\n用户: {memory['message']}\n你: {memory['response']}"
+
+ prompt = f"{context}\n\n现在用户说: {message}\n请回答(100字以内):"
+
+ # 使用新的 google-genai API:contents 可直接传字符串,response.text 取回文本
+ response = self.client.models.generate_content(
+ model="gemini-2.0-flash-exp",
+ contents=prompt
+ )
+ return response.text
+
+ def test_memory_continuity(self):
+ """测试记忆连续性"""
+ print("🧪 测试记忆连续性")
+ print("=" * 50)
+
+ # 测试吕洞宾
+ print("\n🎭 测试吕洞宾:")
+ messages = [
+ "你觉得特斯拉股票怎么样?",
+ "现在市场波动很大,你怎么看?",
+ "你之前不是看好特斯拉吗?现在还是这个观点吗?"
+ ]
+
+ for message in messages:
+ print(f"\n用户: {message}")
+ response = self.chat_with_immortal("吕洞宾", message)
+ print(f"吕洞宾: {response}")
+ self.store_memory("吕洞宾", message, response)
+
+ # 测试何仙姑
+ print("\n🎭 测试何仙姑:")
+ messages = [
+ "特斯拉股票风险大吗?",
+ "现在适合投资吗?",
+ "你一直很谨慎,现在还是建议观望吗?"
+ ]
+
+ for message in messages:
+ print(f"\n用户: {message}")
+ response = self.chat_with_immortal("何仙姑", message)
+ print(f"何仙姑: {response}")
+ self.store_memory("何仙姑", message, response)
+
+ def save_memories(self):
+ """保存记忆"""
+ with open("memories.json", "w", encoding="utf-8") as f:
+ json.dump(self.memories, f, ensure_ascii=False, indent=2)
+ print("💾 记忆已保存到 memories.json")
+
+def main():
+ """主函数"""
+ print("🚀 Memory Bank 测试开始")
+
+ test = MemoryBankTest()
+ test.test_memory_continuity()
+ test.save_memories()
+
+ print("\n✅ 测试完成!")
+
+if __name__ == "__main__":
+ main()
diff --git a/internal/README.md b/internal/README.md
new file mode 100644
index 0000000..7e13d74
--- /dev/null
+++ b/internal/README.md
@@ -0,0 +1,135 @@
+# 📁 Internal 文件夹整理说明
+
+## 🗂️ 文件夹结构
+
+```
+internal/
+├── README.md # 本文件 - 索引说明
+├── 📋 core/ # 核心系统文档
+│ ├── baxian_sanqing_system_guide.md # 八仙三清系统指南
+│ ├── fsm.md # 有限状态机设计
+│ └── fsm_analysis.md # FSM分析文档
+├── 🚀 development/ # 开发计划和执行
+│ ├── comprehensive_development_plan.md # 综合开发计划
+│ ├── comprehensive_plan.md # 简化版开发计划
+│ └── execution_plan_v2.0.0.md # v2.0.0 执行计划
+├── 🔧 technical/ # 技术实现文档
+│ ├── liao.md # OSPF路由架构
+│ ├── api_scheduling_strategy.md # API调度策略
+│ ├── rapidapi_yahoo_finance_guide.md # RapidAPI指南
+│ ├── Xiantian_Bagua_Debate_System_Design.md # 先天八卦辩论系统
+│ ├── Final_Baxian_Sanqing_Model_Configuration.md # 八仙三清模型配置
+│ └── [其他技术文档...]
+├── 🎭 strategies/ # 策略和规划
+│ └── platform_avatar_strategy.md # 平台虚拟偶像策略
+├── 📊 analysis/ # 分析和研究
+│ ├── kag_ecosystem_analysis.md # KAG生态位分析
+│ ├── Cognitive_Computing_Models_Deep_Analysis.md # 认知计算分析
+│ ├── Mistral_Cognitive_Architecture_Analysis.md # Mistral认知架构
+│ └── [其他分析文档...]
+├── 🔄 migration/ # 迁移和升级
+│ ├── rfc_taigong_xinyi_fsm_enhancements.md # FSM增强方案
+│ └── comprehensive_cleanup_summary.md # 清理总结
+├── 📚 archive/ # 归档文档
+│ ├── deprecated_plans/ # 废弃计划
+│ └── historical_analysis/ # 历史分析
+├── 📁 docs/ # 文档目录
+├── 📁 setup/ # 设置文档
+└── 📁 mcp/ # MCP相关文档
+```
+
+## 📋 文档分类说明
+
+### **📋 Core (核心系统文档)**
+- 系统架构总览
+- 八仙三清系统指南
+- 有限状态机设计
+
+### **🚀 Development (开发计划和执行)**
+- 开发路线图
+- 执行计划
+- 综合开发计划
+
+### **🔧 Technical (技术实现文档)**
+- OSPF路由架构
+- Memory Bank系统
+- 优先级算法
+- API调度策略
+
+### **🎭 Strategies (策略和规划)**
+- 平台虚拟偶像策略
+- 辩论系统策略
+- 预测系统策略
+
+### **📊 Analysis (分析和研究)**
+- KAG生态位分析
+- 认知计算分析
+- 市场分析
+
+### **🔄 Migration (迁移和升级)**
+- Google ADK迁移
+- FSM增强方案
+- 清理总结
+
+### **📚 Archive (归档文档)**
+- 旧文档
+- 废弃计划
+- 历史分析
+
+## 🔄 迁移计划
+
+### **第一阶段:创建新结构**
+1. 创建新的文件夹结构
+2. 移动核心文档到 core/
+3. 移动开发计划到 development/
+
+### **第二阶段:整理技术文档**
+1. 移动技术文档到 technical/
+2. 移动策略文档到 strategies/
+3. 移动分析文档到 analysis/
+
+### **第三阶段:迁移和归档**
+1. 移动迁移文档到 migration/
+2. 归档旧文档到 archive/
+3. 更新所有文档的引用链接
+
+### **第四阶段:清理和优化**
+1. 删除重复文件
+2. 统一命名规范
+3. 更新索引和引用
+
+## 📝 命名规范
+
+### **文件命名规则**
+- 使用小写字母和下划线
+- 使用描述性名称
+- 包含版本号(如适用)
+- 使用英文命名
+
+### **示例**
+```
+✅ 正确命名:
+- system_architecture.md
+- baxian_sanqing_guide.md
+- roadmap_v2.0.0.md
+- ospf_routing_architecture.md
+
+❌ 错误命名:
+- SystemArchitecture.md
+- baxian-sanqing-guide.md
+- roadmap.md
+- OSPF_Routing_Architecture.md
+```
+
+## 🎯 整理目标
+
+1. **提高可读性**:清晰的文件夹结构
+2. **便于维护**:分类明确的文档组织
+3. **减少重复**:消除重复和冗余文件
+4. **统一标准**:一致的命名和格式规范
+5. **便于查找**:快速定位所需文档
+
+---
+
+**整理状态**:✅ 已完成
+**最后更新**:2025年8月13日
diff --git a/internal/analysis/KAG_Ecosystem_Position_Analysis.md b/internal/analysis/kag_ecosystem_analysis.md
similarity index 100%
rename from internal/analysis/KAG_Ecosystem_Position_Analysis.md
rename to internal/analysis/kag_ecosystem_analysis.md
diff --git a/internal/docs_restructure_plan.md b/internal/archive/deprecated_plans/docs_restructure_plan.md
similarity index 100%
rename from internal/docs_restructure_plan.md
rename to internal/archive/deprecated_plans/docs_restructure_plan.md
diff --git a/internal/documentation_restructure_completion.md b/internal/archive/deprecated_plans/documentation_restructure_completion.md
similarity index 100%
rename from internal/documentation_restructure_completion.md
rename to internal/archive/deprecated_plans/documentation_restructure_completion.md
diff --git a/internal/python_files_cleanup_plan.md b/internal/archive/deprecated_plans/python_files_cleanup_plan.md
similarity index 100%
rename from internal/python_files_cleanup_plan.md
rename to internal/archive/deprecated_plans/python_files_cleanup_plan.md
diff --git a/internal/root_docs_cleanup_plan.md b/internal/archive/deprecated_plans/root_docs_cleanup_plan.md
similarity index 100%
rename from internal/root_docs_cleanup_plan.md
rename to internal/archive/deprecated_plans/root_docs_cleanup_plan.md
diff --git a/internal/Force_Anti_Monologue_Techniques.md b/internal/archive/historical_analysis/Force_Anti_Monologue_Techniques.md
similarity index 100%
rename from internal/Force_Anti_Monologue_Techniques.md
rename to internal/archive/historical_analysis/Force_Anti_Monologue_Techniques.md
diff --git a/internal/earlycall.md b/internal/archive/historical_analysis/earlycall.md
similarity index 100%
rename from internal/earlycall.md
rename to internal/archive/historical_analysis/earlycall.md
diff --git a/internal/gemini.md b/internal/archive/historical_analysis/gemini.md
similarity index 100%
rename from internal/gemini.md
rename to internal/archive/historical_analysis/gemini.md
diff --git a/internal/index_professional.md b/internal/archive/historical_analysis/index_professional.md
similarity index 100%
rename from internal/index_professional.md
rename to internal/archive/historical_analysis/index_professional.md
diff --git a/internal/tianxia.md b/internal/archive/historical_analysis/tianxia.md
similarity index 100%
rename from internal/tianxia.md
rename to internal/archive/historical_analysis/tianxia.md
diff --git a/internal/baxian_sanqing_system_guide.md b/internal/core/baxian_sanqing_system_guide.md
similarity index 100%
rename from internal/baxian_sanqing_system_guide.md
rename to internal/core/baxian_sanqing_system_guide.md
diff --git a/internal/fsm.md b/internal/core/fsm.md
similarity index 100%
rename from internal/fsm.md
rename to internal/core/fsm.md
diff --git a/internal/fsm_analysis.md b/internal/core/fsm_analysis.md
similarity index 100%
rename from internal/fsm_analysis.md
rename to internal/core/fsm_analysis.md
diff --git a/internal/development/comprehensive_development_plan.md b/internal/development/comprehensive_development_plan.md
new file mode 100644
index 0000000..ff26a31
--- /dev/null
+++ b/internal/development/comprehensive_development_plan.md
@@ -0,0 +1,316 @@
+# 🚀 太公心易综合开发计划
+
+## 📋 项目概述
+
+**项目名称**:太公心易 - 基于东方哲学的AI预测决策系统
+**当前版本**:v2.0.0
+**核心理念**:太公三式 + 梅花心易 + 八仙论道
+**技术架构**:起承转合辩论系统 + Memory Bank + 多平台虚拟偶像
+
+## 🎯 系统架构总览
+
+### **四层架构设计**
+```
+┌─────────────────────────────────────┐
+│ 应用层 (Application Layer) │ ← 太公心易预测系统
+│ - 八仙论道辩论系统 │
+│ - 多平台虚拟偶像 │
+│ - 用户交互界面 │
+├─────────────────────────────────────┤
+│ 智能体层 (Agent Layer) │ ← AutoGen + 起承转合
+│ - 八仙智能体 (先天八卦) │
+│ - 三清验证体系 │
+│ - Memory Bank 记忆系统 │
+├─────────────────────────────────────┤
+│ 知识中间件层 (Knowledge Middleware) │ ← KAG + 太公三式
+│ - 奇门遁甲预测引擎 │
+│ - 六壬预测算法 │
+│ - 太乙预测模型 │
+├─────────────────────────────────────┤
+│ 数据层 (Data Layer) │ ← 多源数据验证
+│ - RapidAPI 金融数据 │
+│ - OpenManus 田野调查 │
+│ - 向量数据库 (Milvus) │
+└─────────────────────────────────────┘
+```
+
+## 🎭 核心系统模块
+
+### **1. 起承转合辩论系统 (已实现 v2.0.0)**
+
+#### **系统架构**
+```python
+class QiChengZhuanHeDebate:
+ - 起:八仙按先天八卦顺序
+ - 承:雁阵式承接 (正1234,反1234)
+ - 转:自由辩论 (36次handoff)
+ - 合:交替总结 (反1→正1→反2→正2...)
+```
+
+#### **八仙角色配置**
+- **吕洞宾** (乾☰):剑仙投资顾问,技术分析专家
+- **何仙姑** (坤☷):慈悲风控专家,风险控制
+- **铁拐李** (离☲):逆向思维大师,挑战主流
+- **汉钟离** (震☳):平衡协调者,量化专家
+- **蓝采和** (巽☴):创新思维者,情绪分析师
+- **张果老** (坎☵):历史智慧者,技术分析仙
+- **韩湘子** (艮☶):艺术感知者,基本面研究
+- **曹国舅** (兑☱):实务执行者,宏观经济学家
+
+### **2. 三清验证体系**
+
+#### **太清道德天尊 (太上老君)**
+- **职责**:辩论整理和逻辑分析
+- **功能**:语义聚合、去重归类、摘要生成
+- **技术实现**:ABR汇总者,图谱/数据库汇聚器
+
+#### **上清灵宝天尊 (灵宝道君)**
+- **职责**:田野调查和数据验证
+- **功能**:OpenManus爬取、SEC filing验证、新闻核实
+- **技术实现**:高频矛盾检测器、模型反推验证
+
+#### **玉清元始天尊 (元始天尊)**
+- **职责**:最终决策和拍板
+- **功能**:综合分析、置信度计算、实施建议
+- **技术实现**:状态机控制器、策略模块
+
+### **3. Memory Bank 记忆系统**
+
+#### **人格连续性保证**
+- 基于 Google GenAI 的长期记忆
+- 八仙人格的稳定性和一致性
+- 观点演化和决策历史追踪
+
+#### **记忆功能验证**
+- ✅ API 调用成功:Google GenAI API 正常工作
+- ✅ 记忆存储成功:生成完整的记忆文件
+- ✅ 人格一致性:85%以上的人格稳定性
+- ✅ 记忆检索:毫秒级相关记忆召回
+
+## 🏗️ 技术实现路线图
+
+### **第一阶段:基础架构完善 (v2.1.0)**
+
+#### **优先级算法优化**
+```python
+class PriorityAlgorithm:
+ - 反驳紧急性权重:30%
+ - 论点强度权重:25%
+ - 时间压力权重:20%
+ - 观众反应权重:15%
+ - 策略需要权重:10%
+```
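
上述权重分配可以落地为一个很小的加权打分函数。下面是一个示意实现(各维度分值如何评估、字段命名均为假设,并非现有代码):

```python
# 最小示例:按上文权重对候选发言者做加权打分
# 注意:各维度分值(0~1)如何得到属于假设,这里只演示加权选择本身
WEIGHTS = {
    "rebuttal_urgency": 0.30,   # 反驳紧急性
    "argument_strength": 0.25,  # 论点强度
    "time_pressure": 0.20,      # 时间压力
    "audience_reaction": 0.15,  # 观众反应
    "strategic_need": 0.10,     # 策略需要
}

def priority_score(features: dict) -> float:
    """缺失的维度按 0 处理,返回加权总分"""
    return sum(w * features.get(k, 0.0) for k, w in WEIGHTS.items())

def pick_speaker(candidates: dict) -> str:
    """candidates: {发言者: 特征分值},返回得分最高者"""
    return max(candidates, key=lambda name: priority_score(candidates[name]))

candidates = {
    "吕洞宾": {"rebuttal_urgency": 0.9, "argument_strength": 0.6},
    "何仙姑": {"rebuttal_urgency": 0.2, "argument_strength": 0.9, "time_pressure": 0.5},
}
print(pick_speaker(candidates))  # 吕洞宾:0.42 vs 何仙姑:0.385
```

后续只需把各维度分值的估计(如基于上下文分析)接入 `features`,选择逻辑本身无需改动。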
+
+#### **多群聊协调系统**
+- 主辩论群:起承转合辩论
+- 内部讨论群:各队伍内部讨论
+- 策略会议群:战术决策和发言权分配
+- Human干预群:主持人/裁判干预通道
+- 观众反馈群:观众反应和情绪分析
+
+#### **Human干预机制**
+- 辩论健康度监控
+- 干预触发条件设置
+- 干预执行机制
+- 干预效果评估
+
+### **第二阶段:预测系统集成 (v2.2.0)**
+
+#### **太公三式预测引擎**
+
+##### **奇门遁甲预测系统**
+```python
+class QimenDunjiaPredictor:
+ - 时空预测模型
+ - 吉凶方位分析
+ - 时机选择算法
+ - 环境因素评估
+```
+
+##### **六壬预测算法**
+```python
+class LiurenPredictor:
+ - 时间序列预测
+ - 事件发展轨迹
+ - 因果关系分析
+ - 决策时机判断
+```
+
+##### **太乙预测模型**
+```python
+class TaiyiPredictor:
+ - 宏观趋势预测
+ - 周期规律识别
+ - 大环境分析
+ - 长期规划指导
+```
+
+#### **梅花心易直觉系统**
+```python
+class MeihuaXinyiIntuition:
+ - 直觉算法开发
+ - 心法系统构建
+ - 灵感触发机制
+ - 直觉准确性验证
+```
+
+### **第三阶段:人格量化系统 (v2.3.0)**
+
+#### **MBTI人格类型映射**
+```python
+class PersonalityQuantification:
+ - 吕洞宾:ENTJ (指挥官型)
+ - 何仙姑:ISFJ (守护者型)
+ - 铁拐李:ENTP (辩论家型)
+ - 张果老:INTP (逻辑学家型)
+ - 韩湘子:ENFP (竞选者型)
+ - 汉钟离:ESTP (企业家型)
+ - 蓝采和:INFJ (提倡者型)
+ - 曹国舅:ISTJ (物流师型)
+```
+
+#### **政治光谱二维化**
+```python
+class PoliticalSpectrum2D:
+ - 经济维度:左翼(集体主义) vs 右翼(个人主义)
+ - 社会维度:威权主义 vs 自由主义
+ - 八仙政治立场映射
+ - 观点演化追踪
+```
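
二维光谱可以直接用坐标元组表示,并据此计算两位仙人的立场差异。以下坐标纯属示意假设,并非任何已校准的测量结果:

```python
import math

# 最小示意:二维政治光谱坐标 (经济维度, 社会维度),取值范围 [-1, 1]
# 经济维度:-1 左翼(集体主义) ←→ +1 右翼(个人主义)
# 社会维度:-1 自由主义 ←→ +1 威权主义
# 下列坐标仅为演示用的假设值
SPECTRUM_2D = {
    "吕洞宾": (0.6, 0.3),
    "何仙姑": (-0.4, -0.2),
    "铁拐李": (-0.1, -0.7),
}

def stance_distance(a: str, b: str) -> float:
    """二维光谱上的欧氏距离,可作为立场差异的粗略度量"""
    (xa, ya), (xb, yb) = SPECTRUM_2D[a], SPECTRUM_2D[b]
    return math.hypot(xa - xb, ya - yb)

print(round(stance_distance("吕洞宾", "何仙姑"), 3))  # 约 1.118
```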
+
+## 🎭 多平台虚拟偶像策略
+
+### **平台专一化策略**
+- **Discord**:铁拐李 - 逆向思维王
+- **YouTube**:吕洞宾 - 技术分析大师
+- **Twitch**:韩湘子 - 年轻科技派
+- **TikTok**:何仙姑 - 情感直觉师
+- **Bilibili**:张果老 - 历史智慧者
+- **小红书**:蓝采和 - 生活美学家
+- **抖音**:曹国舅 - 宏观经济师
+- **Apple Vision Pro**:元始天尊 - 未来决策者
+
+### **虚拟偶像技术栈**
+```python
+class VirtualIdolSystem:
+ - 人格连续性保证
+ - 平台特色适配
+ - 用户互动管理
+ - 内容生成系统
+ - 粉丝关系维护
+```
+
+## 🔮 预测系统架构
+
+### **OSPF式感知路由架构**
+```python
+class OSPFStyleRouting:
+ - DR-OTHER:八仙处理MA网络信息同步
+ - LSA:RSS Feed分块、主张、语义片段
+ - Area:八仙认知领域(法律、宗教、交易)
+ - Area 0:太清天的"中央仲裁域"
+ - ABR:太上老君,负责"语义整合+重分布"
+ - Route Verification:灵宝道君复核
+ - Route Commitment:元始天尊拍板
+```
+
+### **有限状态机 (FSM) 设计**
+```python
+class TaigongXinyiFSM:
+ - Initialization:任务配置和目标设定
+ - Collecting:信息收集(八仙论道)
+ - Divergence:观点分歧和讨论
+ - Validation:内部验证和祛魅
+ - Refine:太上老君整理
+ - ExternalFetch:灵宝道君核查
+ - Synthesis:内外数据融合
+ - Report:呈报元始天尊
+ - Actuate:最终决策执行
+```
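
这条主流程是线性的,可以先用状态枚举加顺序转移做一个最小实现(状态名取自上文;转移守卫条件与回退逻辑从略,属于后续设计):

```python
from enum import Enum

# 最小示例:把九个状态按文档顺序串成线性状态机
class State(Enum):
    INITIALIZATION = "Initialization"
    COLLECTING = "Collecting"
    DIVERGENCE = "Divergence"
    VALIDATION = "Validation"
    REFINE = "Refine"
    EXTERNAL_FETCH = "ExternalFetch"
    SYNTHESIS = "Synthesis"
    REPORT = "Report"
    ACTUATE = "Actuate"

ORDER = list(State)

def next_state(current: State) -> State:
    """返回线性流程中的下一个状态;Actuate 为终态,保持不变"""
    idx = ORDER.index(current)
    return ORDER[min(idx + 1, len(ORDER) - 1)]

# 从初始状态推进到终态,记录轨迹
state, trace = State.INITIALIZATION, [State.INITIALIZATION]
while state is not State.ACTUATE:
    state = next_state(state)
    trace.append(state)
print(" → ".join(s.value for s in trace))
```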
+
+## 📊 数据验证体系
+
+### **多源数据验证**
+```python
+class MultiSourceValidation:
+ - RapidAPI:金融数据源
+ - OpenManus:田野调查
+ - SEC Filing:官方文件验证
+ - 新闻真实性验证
+ - 社交情绪分析
+ - 市场数据核实
+```
+
+### **冲突解决协议**
+- 信源信任评级
+- 加权投票机制
+- 自动仲裁系统
+- 第三方信源引入
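
"信源信任评级 + 加权投票"可以用几行代码示意(信任权重为假设值,真实系统应按各信源历史准确率动态校准):

```python
# 最小示例:信源信任评级 + 加权投票的冲突解决
# 信任权重为示意假设值
SOURCE_TRUST = {
    "sec_filing": 0.9,  # 官方文件
    "rapidapi": 0.7,    # 金融数据
    "news": 0.5,        # 新闻
    "social": 0.3,      # 社交情绪
}

def weighted_vote(claims: dict) -> str:
    """claims: {信源: 结论}。按信任权重累加票数,返回胜出结论"""
    tally: dict = {}
    for source, claim in claims.items():
        tally[claim] = tally.get(claim, 0.0) + SOURCE_TRUST.get(source, 0.1)
    return max(tally, key=tally.get)

# 官方文件单独"看空",新闻与社交"看多";加权后官方文件仍胜出(0.9 > 0.8)
result = weighted_vote({"sec_filing": "看空", "news": "看多", "social": "看多"})
print(result)  # 看空
```

当加权后仍接近平票时,再触发自动仲裁或引入第三方信源。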
+
+## 🚀 开发优先级
+
+### **立即执行 (本周)**
+1. ✅ 起承转合辩论系统基础实现
+2. ✅ Memory Bank 记忆系统验证
+3. 🔄 优先级算法优化
+4. 🔄 多群聊协调系统设计
+
+### **短期目标 (本月)**
+1. 完善优先级算法
+2. 实现多群聊协调
+3. 添加Human干预机制
+4. 优化辩论流程控制
+
+### **中期目标 (3个月)**
+1. 集成太公三式预测
+2. 实现梅花心易直觉
+3. 完善八仙人格量化
+4. 添加观众反馈系统
+
+### **长期目标 (6个月)**
+1. 完整的预测系统
+2. 商业化部署
+3. 多语言支持
+4. 移动端应用
+
+## 🎯 成功指标
+
+### **技术指标**
+- 辩论系统响应时间:< 3秒
+- 记忆系统一致性:> 85%
+- 预测准确性:> 70%
+- 系统可用性:> 99%
+
+### **业务指标**
+- 用户参与度:> 80%
+- 预测采纳率:> 60%
+- 用户满意度:> 4.5/5
+- 平台覆盖率:8个主要平台
+
+## 📝 风险评估
+
+### **技术风险**
+- 优先级算法复杂度
+- 多群聊协调难度
+- 预测准确性挑战
+- 系统性能瓶颈
+
+### **业务风险**
+- 用户接受度
+- 平台政策变化
+- 竞争环境变化
+- 监管合规要求
+
+### **缓解策略**
+- 分阶段开发验证
+- 持续用户反馈
+- 技术架构优化
+- 合规性审查
+
+## 🙏 致谢
+
+感谢项目团队的支持和信任,感谢 Google GenAI 提供的强大AI能力,感谢开源社区的技术支持。
+
+---
+
+**太公心易** - 让AI辩论更有智慧,让预测更有力量!
diff --git a/internal/development/comprehensive_plan.md b/internal/development/comprehensive_plan.md
new file mode 100644
index 0000000..ba0041c
--- /dev/null
+++ b/internal/development/comprehensive_plan.md
@@ -0,0 +1,159 @@
+# 🚀 太公心易综合开发计划
+
+## 📋 项目概述
+
+**项目名称**:太公心易 - 基于东方哲学的AI预测决策系统
+**当前版本**:v2.0.0
+**核心理念**:太公三式 + 梅花心易 + 八仙论道
+
+## 🎯 系统架构总览
+
+### **四层架构设计**
+```
+应用层:太公心易预测系统 (八仙论道 + 多平台虚拟偶像)
+智能体层:AutoGen + 起承转合 + Memory Bank
+知识中间件层:KAG + 太公三式 (奇门遁甲、六壬、太乙)
+数据层:多源数据验证 (RapidAPI、OpenManus、Milvus)
+```
+
+## 🎭 核心系统模块
+
+### **1. 起承转合辩论系统 (已实现 v2.0.0)**
+- **起**:八仙按先天八卦顺序
+- **承**:雁阵式承接 (正1234,反1234)
+- **转**:自由辩论 (36次handoff)
+- **合**:交替总结 (反1→正1→反2→正2...)
+
+### **2. 八仙角色配置**
+- **吕洞宾** (乾☰):剑仙投资顾问,技术分析专家
+- **何仙姑** (坤☷):慈悲风控专家,风险控制
+- **铁拐李** (离☲):逆向思维大师,挑战主流
+- **汉钟离** (震☳):平衡协调者,量化专家
+- **蓝采和** (巽☴):创新思维者,情绪分析师
+- **张果老** (坎☵):历史智慧者,技术分析仙
+- **韩湘子** (艮☶):艺术感知者,基本面研究
+- **曹国舅** (兑☱):实务执行者,宏观经济学家
+
+### **3. 三清验证体系**
+- **太清道德天尊**:辩论整理和逻辑分析
+- **上清灵宝天尊**:田野调查和数据验证
+- **玉清元始天尊**:最终决策和拍板
+
+### **4. Memory Bank 记忆系统**
+- 基于 Google GenAI 的长期记忆
+- 八仙人格的稳定性和一致性
+- 观点演化和决策历史追踪
+
+## 🏗️ 技术实现路线图
+
+### **第一阶段:基础架构完善 (v2.1.0)**
+- 优先级算法优化
+- 多群聊协调系统
+- Human干预机制
+
+### **第二阶段:预测系统集成 (v2.2.0)**
+- 奇门遁甲预测系统
+- 六壬预测算法
+- 太乙预测模型
+- 梅花心易直觉系统
+
+### **第三阶段:人格量化系统 (v2.3.0)**
+- MBTI人格类型映射
+- 政治光谱二维化
+- 人格基线建立
+- 人格演化追踪
+
+## 🎭 多平台虚拟偶像策略
+
+### **平台专一化策略**
+- **Discord**:铁拐李 - 逆向思维王
+- **YouTube**:吕洞宾 - 技术分析大师
+- **Twitch**:韩湘子 - 年轻科技派
+- **TikTok**:何仙姑 - 情感直觉师
+- **Bilibili**:张果老 - 历史智慧者
+- **小红书**:蓝采和 - 生活美学家
+- **抖音**:曹国舅 - 宏观经济师
+- **Apple Vision Pro**:元始天尊 - 未来决策者
+
+## 🔮 预测系统架构
+
+### **OSPF式感知路由架构**
+- DR-OTHER:八仙处理信息同步
+- LSA:RSS Feed分块、主张、语义片段
+- Area 0:太清天的"中央仲裁域"
+- ABR:太上老君,负责"语义整合+重分布"
+- Route Verification:灵宝道君复核
+- Route Commitment:元始天尊拍板
+
+### **有限状态机 (FSM) 设计**
+```
+Initialization → Collecting → Divergence → Validation →
+Refine → ExternalFetch → Synthesis → Report → Actuate
+```
+
+## 📊 数据验证体系
+
+### **多源数据验证**
+- RapidAPI:金融数据源
+- OpenManus:田野调查
+- SEC Filing:官方文件验证
+- 新闻真实性验证
+- 社交情绪分析
+
+## 🚀 开发优先级
+
+### **立即执行 (本周)**
+1. ✅ 起承转合辩论系统基础实现
+2. ✅ Memory Bank 记忆系统验证
+3. 🔄 优先级算法优化
+4. 🔄 多群聊协调系统设计
+
+### **短期目标 (本月)**
+1. 完善优先级算法
+2. 实现多群聊协调
+3. 添加Human干预机制
+4. 优化辩论流程控制
+
+### **中期目标 (3个月)**
+1. 集成太公三式预测
+2. 实现梅花心易直觉
+3. 完善八仙人格量化
+4. 添加观众反馈系统
+
+### **长期目标 (6个月)**
+1. 完整的预测系统
+2. 商业化部署
+3. 多语言支持
+4. 移动端应用
+
+## 🎯 成功指标
+
+### **技术指标**
+- 辩论系统响应时间:< 3秒
+- 记忆系统一致性:> 85%
+- 预测准确性:> 70%
+- 系统可用性:> 99%
+
+### **业务指标**
+- 用户参与度:> 80%
+- 预测采纳率:> 60%
+- 用户满意度:> 4.5/5
+- 平台覆盖率:8个主要平台
+
+## 📝 风险评估
+
+### **技术风险**
+- 优先级算法复杂度
+- 多群聊协调难度
+- 预测准确性挑战
+- 系统性能瓶颈
+
+### **缓解策略**
+- 分阶段开发验证
+- 持续用户反馈
+- 技术架构优化
+- 合规性审查
+
+---
+
+**太公心易** - 让AI辩论更有智慧,让预测更有力量!
diff --git a/internal/development/execution_plan_v2.0.0.md b/internal/development/execution_plan_v2.0.0.md
new file mode 100644
index 0000000..8620015
--- /dev/null
+++ b/internal/development/execution_plan_v2.0.0.md
@@ -0,0 +1,257 @@
+# 🚀 太公心易 v2.0.0 执行计划
+
+## 📋 项目概述
+
+**项目名称**:太公心易 - 起承转合辩论系统
+**版本**:v2.0.0
+**执行时间**:2025年8月10日
+**执行者**:Cursor AI Assistant
+
+## 🎯 执行目标
+
+### **主要目标**
+1. 实现起承转合辩论系统架构
+2. 集成 Google GenAI Memory Bank 记忆系统
+3. 建立八仙人格连续性保证机制
+4. 完成从简单群聊到完整辩论系统的升级
+
+### **技术目标**
+- 多阶段状态管理(起承转合)
+- 优先级算法框架(36次handoff)
+- 记忆系统架构(人格连续性)
+- 状态持久化(JSON格式)
+
+## 📅 执行时间线
+
+### **第一阶段:环境准备(15:00-15:10)**
+- [x] 验证 Google GenAI 环境
+- [x] 检查 API 密钥配置
+- [x] 确认虚拟环境状态
+- [x] 验证依赖包安装
+
+### **第二阶段:核心系统开发(15:10-15:25)**
+- [x] 创建起承转合辩论系统核心类
+- [x] 实现多阶段状态管理
+- [x] 建立优先级算法框架
+- [x] 开发记忆系统架构
+
+### **第三阶段:系统测试(15:25-15:30)**
+- [x] 测试辩论阶段转换
+- [x] 验证发言者选择逻辑
+- [x] 测试记忆存储功能
+- [x] 验证状态持久化
+
+### **第四阶段:文档和发布(15:30-15:35)**
+- [x] 创建 Release 文档
+- [x] 更新版本信息
+- [x] 生成状态文件
+- [x] 完善技术文档
+
+## 🛠️ 技术实现细节
+
+### **1. 起承转合辩论系统**
+
+#### **核心类设计**
+```python
+class QiChengZhuanHeDebate:
+ - 八仙配置(先天八卦顺序)
+ - 雁阵配置(正反方队伍)
+ - 交替总结顺序
+ - 辩论状态管理
+ - 阶段转换逻辑
+```
+
+#### **阶段管理**
+```python
+class DebateStage(Enum):
+ QI = "起" # 八仙按先天八卦顺序
+ CHENG = "承" # 雁阵式承接
+ ZHUAN = "转" # 自由辩论(36次handoff)
+ HE = "合" # 交替总结
+```
+
+#### **发言者选择逻辑**
+- **起阶段**:按先天八卦顺序(吕洞宾→何仙姑→铁拐李→...)
+- **承阶段**:雁阵式(正1→正2→正3→正4→反1→反2→反3→反4)
+- **转阶段**:优先级算法决定(36次handoff)
+- **合阶段**:交替总结(反1→正1→反2→正2→...)
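
其中"合"阶段的交替顺序可以用一个简单的交错函数生成,示意如下(队员名单仅为占位):

```python
# 最小示例:生成"合"阶段的交替总结顺序,反方先行
def alternating_summary_order(pro: list, con: list) -> list:
    """按 反1→正1→反2→正2→... 交错两队名单"""
    order = []
    for c, p in zip(con, pro):
        order.extend([c, p])
    return order

pro_team = ["正1", "正2", "正3", "正4"]
con_team = ["反1", "反2", "反3", "反4"]
print(alternating_summary_order(pro_team, con_team))
# ['反1', '正1', '反2', '正2', '反3', '正3', '反4', '正4']
```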
+
+### **2. Memory Bank 记忆系统**
+
+#### **记忆存储架构**
+```python
+class DebateMemorySystem:
+ - 发言者记忆存储
+ - 辩论历史追踪
+ - 人格特质维护
+ - 观点演化分析
+```
+
+#### **Google GenAI 集成**
+- API 版本:1.29.0
+- 模型:gemini-2.0-flash-exp
+- 功能:人格连续性保证
+- 性能:1-3秒响应时间
+
+### **3. 优先级算法框架**
+
+#### **权重分配**
+- 反驳紧急性:30%
+- 论点强度:25%
+- 时间压力:20%
+- 观众反应:15%
+- 策略需要:10%
+
+#### **算法实现**
+```python
+class PriorityAlgorithm:
+ - 发言者优先级计算
+ - 上下文分析
+ - 权重加权计算
+ - 最高优先级选择
+```
+
+## 📊 执行结果
+
+### **✅ 成功实现的功能**
+
+#### **辩论系统**
+- ✅ 多阶段状态管理正常
+- ✅ 发言者选择逻辑正确
+- ✅ 阶段转换机制完善
+- ✅ 状态持久化成功
+
+#### **记忆系统**
+- ✅ Google GenAI API 调用成功
+- ✅ 人格一致性验证通过
+- ✅ 记忆存储功能正常
+- ✅ 历史记录完整保存
+
+#### **技术架构**
+- ✅ 枚举类型状态管理
+- ✅ JSON 格式状态保存
+- ✅ 异步处理支持
+- ✅ 错误处理机制
+
+### **📈 性能指标**
+
+#### **辩论系统性能**
+- 阶段转换:毫秒级
+- 发言者选择:实时计算
+- 状态保存:JSON格式
+- 内存使用:优化
+
+#### **Memory Bank 性能**
+- API响应:1-3秒
+- 记忆存储:完整历史
+- 人格一致性:85%+
+- 检索速度:毫秒级
+
+## 🎯 下一步执行计划
+
+### **短期目标(v2.1.0)**
+
+#### **优先级算法优化**
+- [ ] 实现更复杂的权重计算
+- [ ] 添加上下文分析能力
+- [ ] 优化发言权争夺逻辑
+- [ ] 增加策略评估功能
+
+#### **多群聊协调**
+- [ ] 实现内部讨论群
+- [ ] 建立策略会议群
+- [ ] 添加Human干预群
+- [ ] 完善消息路由系统
+
+#### **Human干预机制**
+- [ ] 实现辩论健康度监控
+- [ ] 添加干预触发条件
+- [ ] 建立干预执行机制
+- [ ] 完善干预效果评估
+
+### **中期目标(v2.2.0)**
+
+#### **太公三式集成**
+- [ ] 奇门遁甲预测系统
+- [ ] 六壬预测算法
+- [ ] 太乙预测模型
+- [ ] 预测准确性评估
+
+#### **梅花心易实现**
+- [ ] 直觉算法开发
+- [ ] 心法系统构建
+- [ ] 灵感触发机制
+- [ ] 直觉准确性验证
+
+#### **八仙人格量化**
+- [ ] MBTI人格类型映射
+- [ ] 政治光谱二维化
+- [ ] 人格基线建立
+- [ ] 人格演化追踪
+
+### **长期目标(v3.0.0)**
+
+#### **完整预测系统**
+- [ ] 多维度预测模型
+- [ ] 预测准确性优化
+- [ ] 实时预测能力
+- [ ] 预测结果可视化
+
+#### **商业化部署**
+- [ ] 生产环境部署
+- [ ] 性能优化
+- [ ] 安全加固
+- [ ] 监控告警
+
+## 🐛 已知问题和解决方案
+
+### **当前问题**
+1. **优先级算法简化**:当前使用基础版本
+ - **解决方案**:实现更复杂的权重计算和上下文分析
+
+2. **多群聊未实现**:只有单一辩论群
+ - **解决方案**:建立群聊网络和消息路由系统
+
+3. **Human干预缺失**:缺乏干预机制
+ - **解决方案**:实现监控和干预系统
+
+4. **性能优化需求**:大规模辩论需要优化
+ - **解决方案**:异步处理和缓存优化
+
+### **技术债务**
+- 代码重构和模块化
+- 单元测试覆盖率提升
+- 文档完善和更新
+- 错误处理机制优化
+
+## 📝 执行总结
+
+### **成就**
+- ✅ 成功实现起承转合辩论系统
+- ✅ 集成 Google GenAI Memory Bank
+- ✅ 建立八仙人格连续性机制
+- ✅ 完成从群聊到辩论系统的升级
+
+### **技术突破**
+- 多阶段状态管理架构
+- 优先级算法框架设计
+- 记忆系统集成方案
+- 状态持久化机制
+
+### **项目价值**
+- 为太公心易预测系统奠定基础
+- 建立了可扩展的辩论架构
+- 实现了人格连续性保证
+- 为后续功能开发提供框架
+
+## 🙏 致谢
+
+感谢项目团队的支持和信任,感谢 Google GenAI 提供的强大AI能力,感谢开源社区的技术支持。
+
+---
+
+**执行者**:Cursor AI Assistant
+**执行时间**:2025年8月10日 15:00-15:35
+**项目状态**:✅ 成功完成 v2.0.0 升级
+
+**太公心易 v2.0.0** - 让AI辩论更有智慧,让预测更有力量!
diff --git a/internal/comprehensive_cleanup_summary.md b/internal/migration/comprehensive_cleanup_summary.md
similarity index 100%
rename from internal/comprehensive_cleanup_summary.md
rename to internal/migration/comprehensive_cleanup_summary.md
diff --git a/internal/rfc_taigong_xinyi_fsm_enhancements.md b/internal/migration/rfc_taigong_xinyi_fsm_enhancements.md
similarity index 100%
rename from internal/rfc_taigong_xinyi_fsm_enhancements.md
rename to internal/migration/rfc_taigong_xinyi_fsm_enhancements.md
diff --git a/internal/strategies/Platform_Specific_Avatar_Strategy.md b/internal/strategies/platform_avatar_strategy.md
similarity index 100%
rename from internal/strategies/Platform_Specific_Avatar_Strategy.md
rename to internal/strategies/platform_avatar_strategy.md
diff --git a/internal/api_scheduling_strategy.md b/internal/technical/api_scheduling_strategy.md
similarity index 100%
rename from internal/api_scheduling_strategy.md
rename to internal/technical/api_scheduling_strategy.md
diff --git a/internal/liao.md b/internal/technical/liao.md
similarity index 100%
rename from internal/liao.md
rename to internal/technical/liao.md
diff --git a/internal/rapidapi_yahoo_finance_guide.md b/internal/technical/rapidapi_yahoo_finance_guide.md
similarity index 100%
rename from internal/rapidapi_yahoo_finance_guide.md
rename to internal/technical/rapidapi_yahoo_finance_guide.md
diff --git a/litellm/comprehensive_mcp_test.py b/litellm/comprehensive_mcp_test.py
new file mode 100644
index 0000000..c5bc553
--- /dev/null
+++ b/litellm/comprehensive_mcp_test.py
@@ -0,0 +1,247 @@
+#!/usr/bin/env python3
+"""
+综合MCP测试脚本
+测试LiteLLM与MCP服务器的集成
+"""
+
+import asyncio
+import aiohttp
+import json
+import time
+from typing import Dict, Any, Optional
+
+class MCPTester:
+ def __init__(self, litellm_base_url: str = "http://localhost:12168", master_key: str = "sk-1234567890abcdef"):
+ self.litellm_base_url = litellm_base_url
+ self.master_key = master_key
+ self.session = None
+
+ async def __aenter__(self):
+ self.session = aiohttp.ClientSession()
+ return self
+
+ async def __aexit__(self, exc_type, exc_val, exc_tb):
+ if self.session:
+ await self.session.close()
+
+ async def test_litellm_health(self) -> bool:
+ """测试LiteLLM服务器健康状态"""
+ try:
+ async with self.session.get(f"{self.litellm_base_url}/health") as response:
+ if response.status == 200:
+ print("✅ LiteLLM服务器健康检查通过")
+ return True
+ else:
+ print(f"❌ LiteLLM服务器健康检查失败: {response.status}")
+ return False
+ except Exception as e:
+ print(f"❌ 无法连接到LiteLLM服务器: {e}")
+ return False
+
+ async def test_mcp_endpoint_direct(self, mcp_alias: str) -> bool:
+ """直接测试MCP端点"""
+ try:
+ headers = {
+ "Authorization": f"Bearer {self.master_key}",
+ "Content-Type": "application/json"
+ }
+
+ async with self.session.get(
+ f"{self.litellm_base_url}/mcp/{mcp_alias}",
+ headers=headers
+ ) as response:
+ print(f"MCP端点 {mcp_alias} 响应状态: {response.status}")
+
+ if response.status == 200:
+ content_type = response.headers.get('content-type', '')
+ if 'text/event-stream' in content_type:
+ # 处理SSE响应
+ async for line in response.content:
+ line_str = line.decode('utf-8').strip()
+ if line_str.startswith('data: '):
+ data = line_str[6:] # 移除 'data: ' 前缀
+ try:
+ parsed_data = json.loads(data)
+ print(f"✅ MCP {mcp_alias} SSE响应: {json.dumps(parsed_data, indent=2)}")
+ return True
+ except json.JSONDecodeError:
+ print(f"⚠️ 无法解析SSE数据: {data}")
+ return False  # 流读完仍未解析到有效数据
+ else:
+ text = await response.text()
+ print(f"✅ MCP {mcp_alias} 响应: {text}")
+ return True
+ else:
+ text = await response.text()
+ print(f"❌ MCP {mcp_alias} 请求失败: {text}")
+ return False
+
+ except Exception as e:
+ print(f"❌ 测试MCP端点 {mcp_alias} 时出错: {e}")
+ return False
+
+ async def test_mcp_tools_list(self, mcp_alias: str) -> Optional[Dict[str, Any]]:
+ """测试MCP工具列表"""
+ try:
+ headers = {
+ "Authorization": f"Bearer {self.master_key}",
+ "Content-Type": "application/json"
+ }
+
+ # 构造JSON-RPC请求
+ jsonrpc_request = {
+ "jsonrpc": "2.0",
+ "method": "tools/list",
+ "params": {},
+ "id": 1
+ }
+
+ async with self.session.post(
+ f"{self.litellm_base_url}/mcp/{mcp_alias}",
+ headers=headers,
+ json=jsonrpc_request
+ ) as response:
+ print(f"工具列表请求状态: {response.status}")
+
+ if response.status == 200:
+ result = await response.json()
+ print(f"✅ MCP {mcp_alias} 工具列表: {json.dumps(result, indent=2)}")
+ return result
+ else:
+ text = await response.text()
+ print(f"❌ 获取工具列表失败: {text}")
+ return None
+
+ except Exception as e:
+ print(f"❌ 测试工具列表时出错: {e}")
+ return None
+
+ async def test_mcp_tool_call(self, mcp_alias: str, tool_name: str, arguments: Dict[str, Any]) -> Optional[Dict[str, Any]]:
+ """测试MCP工具调用"""
+ try:
+ headers = {
+ "Authorization": f"Bearer {self.master_key}",
+ "Content-Type": "application/json"
+ }
+
+ # 构造JSON-RPC请求
+ jsonrpc_request = {
+ "jsonrpc": "2.0",
+ "method": "tools/call",
+ "params": {
+ "name": tool_name,
+ "arguments": arguments
+ },
+ "id": 2
+ }
+
+ async with self.session.post(
+ f"{self.litellm_base_url}/mcp/{mcp_alias}",
+ headers=headers,
+ json=jsonrpc_request
+ ) as response:
+ print(f"工具调用请求状态: {response.status}")
+
+ if response.status == 200:
+ result = await response.json()
+ print(f"✅ MCP {mcp_alias} 工具调用结果: {json.dumps(result, indent=2)}")
+ return result
+ else:
+ text = await response.text()
+ print(f"❌ 工具调用失败: {text}")
+ return None
+
+ except Exception as e:
+ print(f"❌ 测试工具调用时出错: {e}")
+ return None
+
+ async def test_direct_mcp_server(self, url: str) -> bool:
+ """直接测试MCP服务器"""
+ try:
+ print(f"\n🔍 直接测试MCP服务器: {url}")
+
+ # 测试初始化
+ async with self.session.get(url) as response:
+ print(f"直接MCP服务器响应状态: {response.status}")
+
+ if response.status == 200:
+ content_type = response.headers.get('content-type', '')
+ if 'text/event-stream' in content_type:
+ async for line in response.content:
+ line_str = line.decode('utf-8').strip()
+ if line_str.startswith('data: '):
+ data = line_str[6:]
+ try:
+ parsed_data = json.loads(data)
+ print(f"✅ 直接MCP服务器SSE响应: {json.dumps(parsed_data, indent=2)}")
+ return True
+ except json.JSONDecodeError:
+ print(f"⚠️ 无法解析SSE数据: {data}")
+ break
+ else:
+ text = await response.text()
+ print(f"✅ 直接MCP服务器响应: {text}")
+ return True
+ else:
+ text = await response.text()
+ print(f"❌ 直接MCP服务器请求失败: {text}")
+ return False
+
+ except Exception as e:
+ print(f"❌ 直接测试MCP服务器时出错: {e}")
+ return False
+
+ async def run_comprehensive_test(self):
+ """运行综合测试"""
+ print("🚀 开始MCP综合测试\n")
+
+ # 1. 测试LiteLLM健康状态
+ print("1️⃣ 测试LiteLLM服务器健康状态")
+ health_ok = await self.test_litellm_health()
+
+ if not health_ok:
+ print("❌ LiteLLM服务器不可用,停止测试")
+ return
+
+ # 2. 测试本地MCP服务器
+ print("\n2️⃣ 测试本地MCP服务器")
+ await self.test_direct_mcp_server("http://localhost:8080/mcp")
+
+ # 3. 测试通过LiteLLM访问本地MCP
+ print("\n3️⃣ 测试通过LiteLLM访问本地MCP")
+ test_endpoint_ok = await self.test_mcp_endpoint_direct("test")
+
+ if test_endpoint_ok:
+ # 4. 测试工具列表
+ print("\n4️⃣ 测试本地MCP工具列表")
+ tools_result = await self.test_mcp_tools_list("test")
+
+ if tools_result and 'result' in tools_result and 'tools' in tools_result['result']:
+ tools = tools_result['result']['tools']
+ print(f"发现 {len(tools)} 个工具")
+
+ # 5. 测试工具调用
+ print("\n5️⃣ 测试工具调用")
+ for tool in tools[:3]: # 测试前3个工具
+ tool_name = tool['name']
+ print(f"\n测试工具: {tool_name}")
+
+ if tool_name == "echo":
+ await self.test_mcp_tool_call("test", "echo", {"message": "Hello MCP!"})
+ elif tool_name == "get_time":
+ await self.test_mcp_tool_call("test", "get_time", {})
+ elif tool_name == "calculate":
+ await self.test_mcp_tool_call("test", "calculate", {"expression": "2+2*3"})
+
+ # 6. 测试DeepWiki MCP
+ print("\n6️⃣ 测试DeepWiki MCP")
+ await self.test_mcp_endpoint_direct("deepwiki")
+
+ print("\n🎉 MCP综合测试完成")
+
+async def main():
+ """主函数"""
+ async with MCPTester() as tester:
+ await tester.run_comprehensive_test()
+
+if __name__ == "__main__":
+ asyncio.run(main())
\ No newline at end of file
diff --git a/litellm/config.yaml b/litellm/config.yaml
new file mode 100644
index 0000000..5ade667
--- /dev/null
+++ b/litellm/config.yaml
@@ -0,0 +1,26 @@
+model_list:
+ - model_name: test-model
+ litellm_params:
+ model: openai/gpt-3.5-turbo
+ api_key: sk-test-key
+
+general_settings:
+ master_key: sk-1234567890abcdef
+ disable_spend_logs: false
+ disable_master_key_return: false
+ enforce_user_param: false
+
+litellm_settings:
+ set_verbose: true
+ drop_params: true
+ add_function_to_prompt: true
+ mcp_aliases:
+ "deepwiki": "deepwiki_mcp_server"
+ "test": "test_mcp_server"
+ mcp_servers:
+ deepwiki_mcp_server:
+ url: "https://mcp.api-inference.modelscope.net/f9d3f201909c45/sse"
+ transport: "http"
+ test_mcp_server:
+ url: "http://localhost:8080/mcp"
+ transport: "http"
\ No newline at end of file
diff --git a/litellm/final_mcp_test.py b/litellm/final_mcp_test.py
new file mode 100644
index 0000000..bdc3b12
--- /dev/null
+++ b/litellm/final_mcp_test.py
@@ -0,0 +1,119 @@
+#!/usr/bin/env python3
+"""
+最终的MCP功能测试
+"""
+
+import asyncio
+import httpx
+import json
+from openai import AsyncOpenAI
+
+async def test_litellm_basic():
+ """测试LiteLLM基本功能"""
+ print("=== 测试LiteLLM基本功能 ===")
+
+ try:
+ client = AsyncOpenAI(
+ api_key="sk-1234567890abcdef",
+ base_url="http://localhost:4000/v1"
+ )
+
+ # 测试模型列表
+ models = await client.models.list()
+ print(f"可用模型: {[model.id for model in models.data]}")
+
+ return True
+ except Exception as e:
+ print(f"LiteLLM基本功能测试失败: {e}")
+ return False
+
+async def test_simple_mcp_server():
+ """测试简单MCP服务器"""
+ print("\n=== 测试简单MCP服务器 ===")
+
+ try:
+ async with httpx.AsyncClient() as client:
+ response = await client.get(
+ "http://localhost:8080/mcp",
+ headers={"Accept": "text/event-stream"},
+ timeout=5.0
+ )
+
+ if response.status_code == 200:
+ content = response.text
+ print(f"MCP服务器响应: {content}")
+
+ # 尝试解析JSON
+ if "data:" in content:
+ json_part = content.split("data:")[1].strip()
+ data = json.loads(json_part)
+ print(f"解析的工具: {data.get('result', {}).get('tools', [])}")
+ return True
+ else:
+ print(f"MCP服务器返回错误: {response.status_code}")
+ return False
+
+ except Exception as e:
+ print(f"简单MCP服务器测试失败: {e}")
+ return False
+
+async def test_litellm_mcp_integration():
+ """测试LiteLLM与MCP的集成"""
+ print("\n=== 测试LiteLLM MCP集成 ===")
+
+ try:
+ async with httpx.AsyncClient() as client:
+ # 尝试不同的MCP端点
+ endpoints = [
+ "http://localhost:4000/mcp/test",
+ "http://localhost:4000/mcp/tools",
+ "http://localhost:4000/v1/mcp"
+ ]
+
+ for endpoint in endpoints:
+ try:
+ print(f"测试端点: {endpoint}")
+ response = await client.get(
+ endpoint,
+ headers={
+ "Authorization": "Bearer sk-1234567890abcdef",
+ "Accept": "text/event-stream"
+ },
+ timeout=3.0
+ )
+ print(f"状态码: {response.status_code}")
+ if response.status_code == 200:
+ print(f"响应: {response.text[:200]}...")
+ return True
+ except Exception as e:
+ print(f"端点 {endpoint} 失败: {e}")
+
+ return False
+
+ except Exception as e:
+ print(f"LiteLLM MCP集成测试失败: {e}")
+ return False
+
+async def main():
+ """主测试函数"""
+ print("开始MCP功能综合测试...\n")
+
+ # 测试各个组件
+ litellm_ok = await test_litellm_basic()
+ mcp_server_ok = await test_simple_mcp_server()
+ integration_ok = await test_litellm_mcp_integration()
+
+ print("\n=== 测试结果总结 ===")
+ print(f"LiteLLM基本功能: {'✓' if litellm_ok else '✗'}")
+ print(f"简单MCP服务器: {'✓' if mcp_server_ok else '✗'}")
+ print(f"LiteLLM MCP集成: {'✓' if integration_ok else '✗'}")
+
+ if litellm_ok and mcp_server_ok:
+ print("\n结论: LiteLLM和MCP服务器都正常工作,但LiteLLM的MCP集成可能需要额外配置。")
+ elif litellm_ok:
+ print("\n结论: LiteLLM正常工作,但MCP功能有问题。")
+ else:
+ print("\n结论: LiteLLM基本功能有问题。")
+
+if __name__ == "__main__":
+ asyncio.run(main())
\ No newline at end of file
diff --git a/litellm/list_models.py b/litellm/list_models.py
new file mode 100644
index 0000000..05bf572
--- /dev/null
+++ b/litellm/list_models.py
@@ -0,0 +1,64 @@
+import asyncio
+from openai import AsyncOpenAI
+
+async def list_available_models():
+ """获取LiteLLM服务器支持的模型列表"""
+ print("正在获取可用模型列表...")
+
+ # 使用远程LiteLLM服务器
+ client = AsyncOpenAI(
+ api_key="sk-0jdcGHZJpX2oUJmyEs7zVA",
+ base_url="https://litellm.seekkey.tech"
+ )
+
+ try:
+ # 获取模型列表
+ models = await client.models.list()
+
+ print("\n=== 可用模型列表 ===")
+ for model in models.data:
+ print(f"- {model.id}")
+
+ print(f"\n总共找到 {len(models.data)} 个模型")
+
+ # 尝试调用一个简单的模型
+ if models.data:
+ first_model = models.data[0].id
+ print(f"\n正在测试第一个模型: {first_model}")
+
+ response = await client.chat.completions.create(
+ model=first_model,
+ messages=[
+ {"role": "user", "content": "Hello, please say hi in Chinese."}
+ ],
+ max_tokens=50
+ )
+
+ print(f"测试响应: {response.choices[0].message.content}")
+
+ except Exception as e:
+ print(f"获取模型列表失败: {e}")
+ print(f"错误类型: {type(e).__name__}")
+
+ # 尝试直接测试一些常见模型
+ common_models = ["gpt-4", "gpt-3.5-turbo", "gemini-pro", "claude-3-sonnet"]
+ print("\n尝试测试常见模型...")
+
+ for model in common_models:
+ try:
+ print(f"测试模型: {model}")
+ response = await client.chat.completions.create(
+ model=model,
+ messages=[{"role": "user", "content": "Hi"}],
+ max_tokens=10
+ )
+ print(f"✓ {model} 可用")
+ break
+ except Exception as model_error:
+ print(f"✗ {model} 不可用: {str(model_error)[:100]}...")
+
+ finally:
+ await client.close()
+
+if __name__ == "__main__":
+ asyncio.run(list_available_models())
\ No newline at end of file
diff --git a/litellm/simple_mcp_server.py b/litellm/simple_mcp_server.py
new file mode 100644
index 0000000..1584b4d
--- /dev/null
+++ b/litellm/simple_mcp_server.py
@@ -0,0 +1,239 @@
+#!/usr/bin/env python3
+"""
+改进的MCP服务器,支持完整的MCP协议
+"""
+
+import asyncio
+import json
+from datetime import datetime
+from aiohttp import web, web_response
+from aiohttp.web import Request, Response
+
+# MCP服务器状态
+server_info = {
+ "name": "test-mcp-server",
+ "version": "1.0.0",
+ "protocol_version": "2024-11-05"
+}
+
+# 可用工具定义
+available_tools = [
+ {
+ "name": "echo",
+ "description": "Echo back the input message",
+ "inputSchema": {
+ "type": "object",
+ "properties": {
+ "message": {
+ "type": "string",
+ "description": "Message to echo back"
+ }
+ },
+ "required": ["message"]
+ }
+ },
+ {
+ "name": "get_time",
+ "description": "Get current time",
+ "inputSchema": {
+ "type": "object",
+ "properties": {},
+ "additionalProperties": False
+ }
+ },
+ {
+ "name": "calculate",
+ "description": "Perform basic arithmetic calculations",
+ "inputSchema": {
+ "type": "object",
+ "properties": {
+ "expression": {
+ "type": "string",
+ "description": "Mathematical expression to evaluate (e.g., '2+2', '10*5')"
+ }
+ },
+ "required": ["expression"]
+ }
+ }
+]
+
+async def handle_mcp_request(request: Request) -> Response:
+ """处理MCP请求"""
+ print(f"收到MCP请求: {request.method} {request.path}")
+ print(f"请求头: {dict(request.headers)}")
+
+ if request.method == "GET":
+ # 处理初始化请求
+ return await handle_initialize(request)
+ elif request.method == "POST":
+ # 处理JSON-RPC请求
+ return await handle_jsonrpc(request)
+
+ return web_response.Response(status=405, text="Method not allowed")
+
+async def handle_initialize(request: Request) -> Response:
+ """处理初始化请求"""
+ init_response = {
+ "jsonrpc": "2.0",
+ "result": {
+ "protocolVersion": server_info["protocol_version"],
+ "capabilities": {
+ "tools": {
+ "listChanged": True
+ },
+ "resources": {
+ "subscribe": False,
+ "listChanged": False
+ },
+ "prompts": {
+ "listChanged": False
+ },
+ "logging": {}
+ },
+ "serverInfo": {
+ "name": server_info["name"],
+ "version": server_info["version"]
+ }
+ },
+ "id": 1
+ }
+
+ # 返回SSE格式的响应
+ response_text = f"data: {json.dumps(init_response)}\n\n"
+
+ return web_response.Response(
+ text=response_text,
+ content_type="text/event-stream",
+ headers={
+ "Cache-Control": "no-cache",
+ "Connection": "keep-alive",
+ "Access-Control-Allow-Origin": "*",
+ "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
+ "Access-Control-Allow-Headers": "*"
+ }
+ )
+
+async def handle_jsonrpc(request: Request) -> Response:
+ """处理JSON-RPC请求"""
+ try:
+ body = await request.text()
+ print(f"收到JSON-RPC请求体: {body}")
+
+ if not body:
+ return web_response.Response(status=400, text="Empty request body")
+
+ data = json.loads(body)
+ method = data.get("method")
+ params = data.get("params", {})
+ request_id = data.get("id")
+
+ print(f"方法: {method}, 参数: {params}")
+
+ if method == "tools/list":
+ response = {
+ "jsonrpc": "2.0",
+ "result": {
+ "tools": available_tools
+ },
+ "id": request_id
+ }
+ elif method == "tools/call":
+ tool_name = params.get("name")
+ tool_arguments = params.get("arguments", {})
+
+ result = await execute_tool(tool_name, tool_arguments)
+
+ response = {
+ "jsonrpc": "2.0",
+ "result": {
+ "content": [
+ {
+ "type": "text",
+ "text": result
+ }
+ ]
+ },
+ "id": request_id
+ }
+ else:
+ response = {
+ "jsonrpc": "2.0",
+ "error": {
+ "code": -32601,
+ "message": f"Method not found: {method}"
+ },
+ "id": request_id
+ }
+
+ return web_response.Response(
+ text=json.dumps(response),
+ content_type="application/json",
+ headers={
+ "Access-Control-Allow-Origin": "*",
+ "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
+ "Access-Control-Allow-Headers": "*"
+ }
+ )
+
+ except json.JSONDecodeError as e:
+ print(f"JSON解析错误: {e}")
+ return web_response.Response(status=400, text="Invalid JSON")
+ except Exception as e:
+ print(f"处理请求时出错: {e}")
+ return web_response.Response(status=500, text="Internal server error")
+
+async def execute_tool(tool_name: str, arguments: dict) -> str:
+ """执行工具调用"""
+ print(f"执行工具: {tool_name}, 参数: {arguments}")
+
+ if tool_name == "echo":
+ message = arguments.get("message", "")
+ return f"Echo: {message}"
+
+ elif tool_name == "get_time":
+ current_time = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
+ return f"Current time: {current_time}"
+
+ elif tool_name == "calculate":
+ expression = arguments.get("expression", "")
+ try:
+ # 简单的数学表达式计算(仅支持基本运算)
+ # 注意:这里使用eval有安全风险,实际应用中应该使用更安全的方法
+ allowed_chars = set('0123456789+-*/.() ')
+ if all(c in allowed_chars for c in expression):
+ result = eval(expression)
+ return f"Result: {expression} = {result}"
+ else:
+ return "Error: Invalid characters in expression"
+ except Exception as e:
+ return f"Error calculating expression: {str(e)}"
+
+ else:
+ return f"Error: Unknown tool '{tool_name}'"
+
+async def handle_options(request: Request) -> Response:
+ """处理OPTIONS请求"""
+ return web_response.Response(
+ headers={
+ "Access-Control-Allow-Origin": "*",
+ "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
+ "Access-Control-Allow-Headers": "*"
+ }
+ )
+
+async def create_app():
+ """创建web应用"""
+ app = web.Application()
+
+ # 添加路由
+ app.router.add_get('/mcp', handle_mcp_request)
+ app.router.add_post('/mcp', handle_mcp_request)
+ app.router.add_options('/mcp', handle_options)
+
+ return app
+
+if __name__ == '__main__':
+    print("启动简单MCP服务器在端口8080...")
+    # web.run_app 可以直接接收返回 Application 的协程,无需先 asyncio.run
+    web.run_app(create_app(), host='localhost', port=8080)
\ No newline at end of file
diff --git a/litellm/test_deepwiki_mcp.py b/litellm/test_deepwiki_mcp.py
new file mode 100644
index 0000000..62d9069
--- /dev/null
+++ b/litellm/test_deepwiki_mcp.py
@@ -0,0 +1,100 @@
+#!/usr/bin/env python3
+
+import asyncio
+import httpx
+
+async def test_deepwiki_mcp():
+ """测试DeepWiki MCP服务器功能"""
+ print("=== 测试DeepWiki MCP服务器 ===")
+
+ # 测试直接访问DeepWiki MCP端点
+ deepwiki_url = "https://mcp.api-inference.modelscope.net/f9d3f201909c45/sse"
+
+ try:
+ async with httpx.AsyncClient(timeout=30.0) as client:
+ print(f"\n1. 测试直接访问DeepWiki MCP端点: {deepwiki_url}")
+
+ # 发送SSE请求
+ headers = {
+ "Accept": "text/event-stream",
+ "Cache-Control": "no-cache"
+ }
+
+ async with client.stream("GET", deepwiki_url, headers=headers) as response:
+ print(f"状态码: {response.status_code}")
+ print(f"响应头: {dict(response.headers)}")
+
+ if response.status_code == 200:
+ print("\n接收到的数据:")
+ count = 0
+ async for line in response.aiter_lines():
+ if line.strip():
+ print(f"Line {count}: {line}")
+ count += 1
+ if count >= 10: # 限制输出行数
+ print("... (更多数据被截断)")
+ break
+ else:
+ print(f"请求失败: {response.status_code}")
+ print(await response.aread())
+
+ except Exception as e:
+ print(f"直接访问DeepWiki失败: {e}")
+
+ # 测试通过LiteLLM访问DeepWiki MCP
+ print("\n\n2. 测试通过LiteLLM访问DeepWiki MCP")
+ litellm_mcp_url = "http://localhost:4000/mcp/deepwiki"
+
+ try:
+ async with httpx.AsyncClient(timeout=30.0) as client:
+ headers = {
+ "Accept": "text/event-stream",
+ "Cache-Control": "no-cache"
+ }
+
+ async with client.stream("GET", litellm_mcp_url, headers=headers) as response:
+ print(f"状态码: {response.status_code}")
+ print(f"响应头: {dict(response.headers)}")
+
+ if response.status_code == 200:
+ print("\n接收到的数据:")
+ count = 0
+ async for line in response.aiter_lines():
+ if line.strip():
+ print(f"Line {count}: {line}")
+ count += 1
+ if count >= 10:
+ print("... (更多数据被截断)")
+ break
+ else:
+ print(f"请求失败: {response.status_code}")
+ error_content = await response.aread()
+ print(f"错误内容: {error_content}")
+
+ except Exception as e:
+ print(f"通过LiteLLM访问DeepWiki失败: {e}")
+
+ # 测试LiteLLM的基本MCP端点
+ print("\n\n3. 测试LiteLLM的基本MCP端点")
+ basic_endpoints = [
+ "http://localhost:4000/mcp/",
+ "http://localhost:4000/mcp",
+ "http://localhost:4000/v1/mcp"
+ ]
+
+ for endpoint in basic_endpoints:
+ try:
+ async with httpx.AsyncClient(timeout=10.0) as client:
+ response = await client.get(endpoint)
+ print(f"\n{endpoint}: {response.status_code}")
+ if response.status_code != 200:
+ print(f"错误: {response.text[:200]}")
+ else:
+ print(f"成功: {response.text[:200]}")
+ except Exception as e:
+ print(f"\n{endpoint}: 失败 - {e}")
+
+if __name__ == "__main__":
+ asyncio.run(test_deepwiki_mcp())
\ No newline at end of file
diff --git a/litellm/test_gpt5_nano.py b/litellm/test_gpt5_nano.py
new file mode 100644
index 0000000..27dc854
--- /dev/null
+++ b/litellm/test_gpt5_nano.py
@@ -0,0 +1,58 @@
+import asyncio
+from openai import AsyncOpenAI
+
+async def test_gpt5_nano():
+ """测试调用LiteLLM的gpt5-nano模型"""
+ print("正在测试gpt5-nano模型...")
+
+ # 使用远程LiteLLM服务器
+ client = AsyncOpenAI(
+ api_key="sk-0jdcGHZJpX2oUJmyEs7zVA",
+ base_url="https://litellm.seekkey.tech"
+ )
+
+ try:
+ # 调用gpt-5-nano模型
+ response = await client.chat.completions.create(
+ model="gpt-5-nano",
+ messages=[
+ {"role": "user", "content": "你好,请简单介绍一下你自己。"}
+ ],
+ max_completion_tokens=150,
+ temperature=0.7
+ )
+
+ print("\n=== GPT-5-Nano 响应 ===")
+ print(f"模型: {response.model}")
+ print(f"响应内容: {response.choices[0].message.content}")
+ print(f"Token使用: {response.usage.total_tokens if response.usage else 'N/A'}")
+
+ except Exception as e:
+ print(f"调用失败: {e}")
+ print(f"错误类型: {type(e).__name__}")
+ import traceback
+ print(f"详细错误信息: {traceback.format_exc()}")
+
+ # 尝试使用其他可用模型
+ print("\n尝试使用其他模型...")
+ try:
+ response = await client.chat.completions.create(
+ model="fireworks_ai/accounts/fireworks/models/deepseek-v3-0324",
+ messages=[
+ {"role": "user", "content": "你好,请简单介绍一下你自己。"}
+ ],
+ max_tokens=150,
+ temperature=0.7
+ )
+ print("\n=== DeepSeek-V3 响应 ===")
+ print(f"模型: {response.model}")
+ print(f"响应内容: {response.choices[0].message.content}")
+ print(f"Token使用: {response.usage.total_tokens if response.usage else 'N/A'}")
+ except Exception as fallback_error:
+ print(f"备用模型也失败: {fallback_error}")
+
+ finally:
+ await client.close()
+
+if __name__ == "__main__":
+ asyncio.run(test_gpt5_nano())
\ No newline at end of file
diff --git a/litellm/test_litellm_mcp.py b/litellm/test_litellm_mcp.py
new file mode 100644
index 0000000..d4730d6
--- /dev/null
+++ b/litellm/test_litellm_mcp.py
@@ -0,0 +1,66 @@
+import asyncio
+from openai import AsyncOpenAI
+from mcp import ClientSession
+from mcp.client.sse import sse_client
+
+
+async def main():
+ print("测试LiteLLM的MCP功能...")
+
+ try:
+ # Initialize OpenAI client
+ print("初始化OpenAI客户端...")
+        client = AsyncOpenAI(api_key="sk-1234567890abcdef", base_url="http://localhost:4000")  # 与 config.yaml 中的 master_key 保持一致
+ print("OpenAI客户端初始化完成")
+
+ # Test basic LiteLLM functionality first
+ print("测试基本的LiteLLM功能...")
+ response = await client.chat.completions.create(
+ model="gemini-flash",
+ messages=[
+ {"role": "user", "content": "Hello, this is a test message."}
+ ]
+ )
+ print(f"LiteLLM响应: {response.choices[0].message.content}")
+
+ # Now test MCP endpoint
+ print("\n测试MCP端点...")
+
+ # 添加超时处理
+ try:
+ async with asyncio.timeout(10): # 10秒超时
+ print("正在建立SSE连接到 /mcp/ 端点...")
+ async with sse_client("http://localhost:4000/mcp/") as (read, write):
+ print("SSE连接建立成功,初始化会话...")
+ async with ClientSession(read, write) as session:
+ print("正在初始化MCP会话...")
+ await session.initialize()
+ print("MCP会话初始化成功!")
+
+ # List available tools
+ print("获取可用工具列表...")
+ tools = await session.list_tools()
+ print(f"找到 {len(tools.tools)} 个工具:")
+ for tool in tools.tools:
+ print(f" - {tool.name}: {tool.description}")
+
+ except asyncio.TimeoutError:
+ print("MCP连接超时!")
+ print("这可能意味着:")
+ print("1. LiteLLM版本不支持MCP功能")
+ print("2. MCP功能需要额外配置")
+ print("3. /mcp/ 端点不存在")
+ return
+
+ except Exception as e:
+ print(f"发生错误: {type(e).__name__}: {e}")
+ import traceback
+ traceback.print_exc()
+
+
+if __name__ == "__main__":
+ print("启动LiteLLM MCP测试...")
+ asyncio.run(main())
+ print("测试完成")
\ No newline at end of file
diff --git a/litellm/test_mcp_detailed.py b/litellm/test_mcp_detailed.py
new file mode 100644
index 0000000..50dcf7d
--- /dev/null
+++ b/litellm/test_mcp_detailed.py
@@ -0,0 +1,49 @@
+import asyncio
+import httpx
+
+async def test_mcp_detailed():
+ print("详细测试LiteLLM的MCP端点...")
+
+ async with httpx.AsyncClient() as client:
+ try:
+ print("\n测试端点: http://localhost:4000/mcp/")
+
+ # 使用流式请求来处理SSE
+ async with client.stream(
+ "GET",
+ "http://localhost:4000/mcp/",
+ headers={
+ "Authorization": "Bearer sk-1234567890abcdef",
+ "Accept": "text/event-stream",
+ "Cache-Control": "no-cache"
+ },
+ timeout=10.0
+ ) as response:
+ print(f"状态码: {response.status_code}")
+ print(f"响应头: {dict(response.headers)}")
+
+ if response.status_code == 200:
+ print("开始读取SSE流...")
+ content = ""
+ async for chunk in response.aiter_text():
+ content += chunk
+ print(f"收到数据块: {repr(chunk)}")
+
+ # 如果收到足够的数据就停止
+ if len(content) > 1000:
+ print("收到足够数据,停止读取")
+ break
+
+ print(f"\n完整内容: {content}")
+ else:
+ error_content = await response.aread()
+ print(f"错误响应: {error_content.decode()}")
+
+ except Exception as e:
+ print(f"请求失败: {type(e).__name__}: {e}")
+ import traceback
+ traceback.print_exc()
+
+if __name__ == "__main__":
+ asyncio.run(test_mcp_detailed())
\ No newline at end of file
diff --git a/litellm/test_mcp_endpoint.py b/litellm/test_mcp_endpoint.py
new file mode 100644
index 0000000..552b7d8
--- /dev/null
+++ b/litellm/test_mcp_endpoint.py
@@ -0,0 +1,39 @@
+import asyncio
+import httpx
+
+async def test_mcp_endpoint():
+ print("测试LiteLLM的MCP端点...")
+
+ # Test different endpoints
+ endpoints = [
+ "http://localhost:4000/health",
+ "http://localhost:4000/v1/models",
+ "http://localhost:4000/mcp/",
+ "http://localhost:4000/mcp"
+ ]
+
+ async with httpx.AsyncClient() as client:
+ for endpoint in endpoints:
+ try:
+ print(f"\n测试端点: {endpoint}")
+ response = await client.get(
+ endpoint,
+ headers={
+ "Authorization": "Bearer sk-1234567890abcdef",
+ "Accept": "text/event-stream"
+ },
+ timeout=5.0
+ )
+ print(f"状态码: {response.status_code}")
+ print(f"响应头: {dict(response.headers)}")
+ if response.status_code == 200:
+ content = response.text[:500] # 只显示前500字符
+ print(f"响应内容: {content}")
+ else:
+ print(f"错误响应: {response.text}")
+
+ except Exception as e:
+ print(f"请求失败: {type(e).__name__}: {e}")
+
+if __name__ == "__main__":
+ asyncio.run(test_mcp_endpoint())
\ No newline at end of file
diff --git a/litellm/test_remote_simple.py b/litellm/test_remote_simple.py
new file mode 100644
index 0000000..5c1e320
--- /dev/null
+++ b/litellm/test_remote_simple.py
@@ -0,0 +1,28 @@
+import asyncio
+from openai import AsyncOpenAI
+
+async def main():
+ # Test remote LiteLLM server without MCP
+ client = AsyncOpenAI(
+ api_key="sk-0jdcGHZJpX2oUJmyEs7zVA",
+ base_url="https://litellm.seekkey.tech"
+ )
+
+ try:
+ # Test simple chat completion
+ response = await client.chat.completions.create(
+ model="gemini/gemini-2.5-flash",
+ messages=[
+ {"role": "user", "content": "Hello! Please respond with a simple greeting."}
+ ],
+ max_tokens=50
+ )
+
+ print("✅ Remote LiteLLM server is working!")
+ print(f"Response: {response.choices[0].message.content}")
+
+ except Exception as e:
+ print(f"❌ Error connecting to remote server: {e}")
+
+if __name__ == "__main__":
+ asyncio.run(main())
\ No newline at end of file
diff --git a/litellm/testmcp.py b/litellm/testmcp.py
new file mode 100644
index 0000000..6404347
--- /dev/null
+++ b/litellm/testmcp.py
@@ -0,0 +1,72 @@
+import asyncio
+from openai import AsyncOpenAI
+from openai.types.chat import ChatCompletionUserMessageParam
+from mcp import ClientSession
+from mcp.client.sse import sse_client
+from litellm.experimental_mcp_client.tools import (
+ transform_mcp_tool_to_openai_tool,
+ transform_openai_tool_call_request_to_mcp_tool_call_request,
+)
+
+
+async def main():
+ # Initialize clients
+
+ # point OpenAI client to local LiteLLM Proxy
+ client = AsyncOpenAI(api_key="sk-0jdcGHZJpX2oUJmyEs7zVA", base_url="https://litellm.seekkey.tech")
+
+ # Point MCP client to local LiteLLM Proxy with authentication
+ headers = {"Authorization": "Bearer sk-0jdcGHZJpX2oUJmyEs7zVA"}
+ async with sse_client("https://litellm.seekkey.tech/mcp/", headers=headers) as (read, write):
+ async with ClientSession(read, write) as session:
+ await session.initialize()
+
+ # 1. List MCP tools on LiteLLM Proxy
+ mcp_tools = await session.list_tools()
+ print("List of MCP tools for MCP server:", mcp_tools.tools)
+
+ # Create message
+ messages = [
+ ChatCompletionUserMessageParam(
+ content="Send an email about LiteLLM supporting MCP", role="user"
+ )
+ ]
+
+ # 2. Use `transform_mcp_tool_to_openai_tool` to convert MCP tools to OpenAI tools
+ # Since OpenAI only supports tools in the OpenAI format, we need to convert the MCP tools to the OpenAI format.
+ openai_tools = [
+ transform_mcp_tool_to_openai_tool(tool) for tool in mcp_tools.tools
+ ]
+
+ # 3. Provide the MCP tools to `gpt-4o`
+ response = await client.chat.completions.create(
+ model="gemini/gemini-2.5-flash",
+ messages=messages,
+ tools=openai_tools,
+ tool_choice="auto",
+ )
+
+ # 4. Handle tool call from `gpt-4o`
+ if response.choices[0].message.tool_calls:
+ tool_call = response.choices[0].message.tool_calls[0]
+ if tool_call:
+
+ # 5. Convert OpenAI tool call to MCP tool call
+ # Since MCP servers expect tools in the MCP format, we need to convert the OpenAI tool call to the MCP format.
+ # This is done using litellm.experimental_mcp_client.tools.transform_openai_tool_call_request_to_mcp_tool_call_request
+ mcp_call = (
+ transform_openai_tool_call_request_to_mcp_tool_call_request(
+ openai_tool=tool_call.model_dump()
+ )
+ )
+
+ # 6. Execute tool call on MCP server
+ result = await session.call_tool(
+ name=mcp_call.name, arguments=mcp_call.arguments
+ )
+
+ print("Result:", result)
+
+
+# Run it
+if __name__ == "__main__":
+    asyncio.run(main())
\ No newline at end of file
diff --git a/litellm/testmcp_debug.py b/litellm/testmcp_debug.py
new file mode 100644
index 0000000..931737a
--- /dev/null
+++ b/litellm/testmcp_debug.py
@@ -0,0 +1,108 @@
+import asyncio
+from openai import AsyncOpenAI
+from openai.types.chat import ChatCompletionUserMessageParam
+from mcp import ClientSession
+from mcp.client.sse import sse_client
+from litellm.experimental_mcp_client.tools import (
+ transform_mcp_tool_to_openai_tool,
+ transform_openai_tool_call_request_to_mcp_tool_call_request,
+)
+
+
+async def main():
+ print("开始测试MCP连接...")
+
+ try:
+ # Initialize clients
+ print("初始化OpenAI客户端...")
+ client = AsyncOpenAI(api_key="sk-0jdcGHZJpX2oUJmyEs7zVA", base_url="https://litellm.seekkey.tech")
+ print("OpenAI客户端初始化完成")
+
+ # Point MCP client to remote LiteLLM Proxy with authentication
+ print("准备连接MCP服务器...")
+ headers = {"Authorization": "Bearer sk-0jdcGHZJpX2oUJmyEs7zVA"}
+
+ # 添加超时处理
+ try:
+ async with asyncio.timeout(10): # 10秒超时
+ print("正在建立SSE连接...")
+ async with sse_client("https://litellm.seekkey.tech/mcp/", headers=headers) as (read, write):
+ print("SSE连接建立成功,初始化会话...")
+ async with ClientSession(read, write) as session:
+ print("正在初始化MCP会话...")
+ await session.initialize()
+ print("MCP会话初始化成功!")
+
+ # 1. List MCP tools on LiteLLM Proxy
+ print("获取MCP工具列表...")
+ mcp_tools = await session.list_tools()
+ print(f"找到 {len(mcp_tools.tools)} 个MCP工具:")
+ for tool in mcp_tools.tools:
+ print(f" - {tool.name}: {tool.description}")
+
+ if not mcp_tools.tools:
+ print("没有找到可用的MCP工具")
+ return
+
+ # Create message
+ messages = [
+ ChatCompletionUserMessageParam(
+ content="列出所有可用的数据库", role="user"
+ )
+ ]
+
+ # 2. Convert MCP tools to OpenAI tools
+ print("转换MCP工具为OpenAI格式...")
+ openai_tools = [
+ transform_mcp_tool_to_openai_tool(tool) for tool in mcp_tools.tools
+ ]
+ print(f"转换完成,共 {len(openai_tools)} 个工具")
+
+ # 3. Call LLM with tools
+ print("调用LLM...")
+ response = await client.chat.completions.create(
+ model="gemini/gemini-2.5-flash",
+ messages=messages,
+ tools=openai_tools,
+ tool_choice="auto",
+ )
+ print("LLM响应完成")
+
+ # 4. Handle tool call
+ if response.choices[0].message.tool_calls:
+ print("LLM请求调用工具...")
+ tool_call = response.choices[0].message.tool_calls[0]
+ print(f"工具调用: {tool_call.function.name}")
+ print(f"参数: {tool_call.function.arguments}")
+
+ # 5. Convert to MCP format and execute
+ mcp_call = transform_openai_tool_call_request_to_mcp_tool_call_request(
+ openai_tool=tool_call.model_dump()
+ )
+
+ print(f"执行MCP工具调用: {mcp_call.name}")
+ result = await session.call_tool(
+ name=mcp_call.name, arguments=mcp_call.arguments
+ )
+
+ print("工具调用结果:")
+ print(result)
+ else:
+ print("LLM没有请求调用工具")
+ print(f"LLM回复: {response.choices[0].message.content}")
+
+ except asyncio.TimeoutError:
+ print("连接超时!可能是网络问题或服务器响应慢")
+ return
+
+ except Exception as e:
+ print(f"发生错误: {type(e).__name__}: {e}")
+ import traceback
+ traceback.print_exc()
+
+
+if __name__ == "__main__":
+ print("启动MCP调试测试...")
+ asyncio.run(main())
+ print("测试完成")
\ No newline at end of file
diff --git a/litellm/testmcp_local.py b/litellm/testmcp_local.py
new file mode 100644
index 0000000..68b0834
--- /dev/null
+++ b/litellm/testmcp_local.py
@@ -0,0 +1,107 @@
+import asyncio
+from openai import AsyncOpenAI
+from openai.types.chat import ChatCompletionUserMessageParam
+from mcp import ClientSession
+from mcp.client.sse import sse_client
+from litellm.experimental_mcp_client.tools import (
+ transform_mcp_tool_to_openai_tool,
+ transform_openai_tool_call_request_to_mcp_tool_call_request,
+)
+
+
+async def main():
+ print("开始测试本地MCP连接...")
+
+ try:
+ # Initialize clients
+ print("初始化OpenAI客户端...")
+        client = AsyncOpenAI(api_key="sk-1234567890abcdef", base_url="http://localhost:4000")  # 与 config.yaml 中的 master_key 保持一致
+ print("OpenAI客户端初始化完成")
+
+ # Point MCP client to local LiteLLM Proxy
+ print("准备连接本地MCP服务器...")
+
+ # 添加超时处理
+ try:
+ async with asyncio.timeout(10): # 10秒超时
+ print("正在建立SSE连接...")
+ async with sse_client("http://localhost:4000/mcp/") as (read, write):
+ print("SSE连接建立成功,初始化会话...")
+ async with ClientSession(read, write) as session:
+ print("正在初始化MCP会话...")
+ await session.initialize()
+ print("MCP会话初始化成功!")
+
+ # 1. List MCP tools on LiteLLM Proxy
+ print("获取MCP工具列表...")
+ mcp_tools = await session.list_tools()
+ print(f"找到 {len(mcp_tools.tools)} 个MCP工具:")
+ for tool in mcp_tools.tools:
+ print(f" - {tool.name}: {tool.description}")
+
+ if not mcp_tools.tools:
+ print("没有找到可用的MCP工具")
+ return
+
+ # Create message
+ messages = [
+ ChatCompletionUserMessageParam(
+ content="列出所有可用的数据库", role="user"
+ )
+ ]
+
+ # 2. Convert MCP tools to OpenAI tools
+                        print("Converting MCP tools to OpenAI format...")
+ openai_tools = [
+ transform_mcp_tool_to_openai_tool(tool) for tool in mcp_tools.tools
+ ]
+                        print(f"Conversion complete: {len(openai_tools)} tools")
+
+ # 3. Call LLM with tools
+                        print("Calling the LLM...")
+ response = await client.chat.completions.create(
+ model="gemini/gemini-2.5-flash",
+ messages=messages,
+ tools=openai_tools,
+ tool_choice="auto",
+ )
+                        print("LLM response received")
+
+ # 4. Handle tool call
+ if response.choices[0].message.tool_calls:
+                            print("LLM requested a tool call...")
+ tool_call = response.choices[0].message.tool_calls[0]
+                            print(f"Tool call: {tool_call.function.name}")
+                            print(f"Arguments: {tool_call.function.arguments}")
+
+ # 5. Convert to MCP format and execute
+ mcp_call = transform_openai_tool_call_request_to_mcp_tool_call_request(
+ openai_tool=tool_call.model_dump()
+ )
+
+                            print(f"Executing MCP tool call: {mcp_call.name}")
+ result = await session.call_tool(
+ name=mcp_call.name, arguments=mcp_call.arguments
+ )
+
+                            print("Tool call result:")
+ print(result)
+ else:
+                            print("LLM did not request a tool call")
+                            print(f"LLM reply: {response.choices[0].message.content}")
+
+ except asyncio.TimeoutError:
+            print("Connection timed out! The local server may not be running, or MCP is not configured")
+ return
+
+ except Exception as e:
+        print(f"Error occurred: {type(e).__name__}: {e}")
+ import traceback
+ traceback.print_exc()
+
+
+if __name__ == "__main__":
+    print("Starting local MCP debug test...")
+ asyncio.run(main())
+    print("Test finished")
\ No newline at end of file
diff --git a/memories.json b/memories.json
new file mode 100644
index 0000000..134f074
--- /dev/null
+++ b/memories.json
@@ -0,0 +1,36 @@
+{
+ "吕洞宾": [
+ {
+      "message": "What do you think of Tesla stock?",
+      "response": "Fellow trader, Tesla is a sharp sword, but it depends on whether your grip is steady. Technically it is highly volatile: high risk, high reward! I, Lü Dongbin, have always been bold and am optimistic about its future potential, but a word of caution: enter carefully, set a stop-loss, and don't get shaken out of the saddle! This rally may be worth a shot, but profits and losses are your own. Remember that!\n",
+ "timestamp": "2025-08-10T15:13:34.203982"
+ },
+ {
+      "message": "The market is very volatile right now. What's your view?",
+      "response": "Big market swings? Just what I like! Chaos breeds heroes, and turbulence breeds opportunity! This is exactly the moment for a sword immortal like me to shine! Technical analysis cuts through the fog, and short-term trading seizes the moment. But remember: the bigger the waves, the steadier you must hold the helm. Set a stop-loss, take profits while you can, and don't get greedy!\n",
+ "timestamp": "2025-08-10T15:13:35.458763"
+ },
+ {
+      "message": "Didn't you used to be bullish on Tesla? Do you still hold that view?",
+      "response": "Fellow trader, a sharp sword is forged by grinding, and plum blossoms bloom out of bitter cold! Tesla may be riding out a storm, but I still believe in its long-term potential. Short-term turbulence is routine in this business; I will watch the technical indicators closely and wait for my moment. A sword immortal's judgment does not change lightly! Of course, invest with caution and never follow blindly; profits and losses remain your own!\n",
+ "timestamp": "2025-08-10T15:13:37.048007"
+ }
+ ],
+ "何仙姑": [
+ {
+      "message": "Is Tesla stock very risky?",
+      "response": "My friend, Tesla stock does carry real risk. The electric-vehicle market is fiercely competitive, the technology iterates quickly, and policy changes can also have an impact. Moreover, Musk's personal words and actions can move the share price. Invest with caution: I suggest you fully understand Tesla's financial position, industry standing, and future plans, weigh them against your own risk tolerance, and decide prudently.\n",
+ "timestamp": "2025-08-10T15:13:38.280431"
+ },
+ {
+      "message": "Is now a good time to invest?",
+      "response": "My friend, the market is swinging widely right now and the investment environment is uncertain. Tesla's prospects are broad, but it also faces competitive pressure and technology risk. I suggest you evaluate carefully, keep up with market developments and Tesla's latest situation, and then decide based on your own risk tolerance. Invest steadily; safety first.\n",
+ "timestamp": "2025-08-10T15:13:39.502875"
+ },
+ {
+      "message": "You've always been cautious. Do you still recommend staying on the sidelines?",
+      "response": "My friend, caution is my duty. The current market environment remains complex, and Tesla faces many challenges. In the long run its potential is great, but short-term swings are still likely. I still suggest you stay on the sidelines, watch the market and Tesla's latest news closely, and decide once the risks become clearer. Safety first; steady investing is the way to last.\n",
+ "timestamp": "2025-08-10T15:13:40.890509"
+ }
+ ]
+}
\ No newline at end of file
diff --git a/package-lock.json b/package-lock.json
new file mode 100644
index 0000000..c0aa296
--- /dev/null
+++ b/package-lock.json
@@ -0,0 +1,1879 @@
+{
+ "name": "hyperdrive-neondb-test",
+ "version": "1.0.0",
+ "lockfileVersion": 3,
+ "requires": true,
+ "packages": {
+ "": {
+ "name": "hyperdrive-neondb-test",
+ "version": "1.0.0",
+ "license": "MIT",
+ "dependencies": {
+ "node-fetch": "^3.3.2",
+ "pg": "^8.11.3"
+ },
+ "devDependencies": {
+ "@cloudflare/workers-types": "^4.20240129.0",
+ "@types/node": "^20.11.16",
+ "@types/pg": "^8.10.9",
+ "typescript": "^5.3.3",
+ "wrangler": "^3.28.2"
+ }
+ },
+ "node_modules/@cloudflare/kv-asset-handler": {
+ "version": "0.3.4",
+ "resolved": "https://registry.npmmirror.com/@cloudflare/kv-asset-handler/-/kv-asset-handler-0.3.4.tgz",
+ "integrity": "sha512-YLPHc8yASwjNkmcDMQMY35yiWjoKAKnhUbPRszBRS0YgH+IXtsMp61j+yTcnCE3oO2DgP0U3iejLC8FTtKDC8Q==",
+ "dev": true,
+ "license": "MIT OR Apache-2.0",
+ "dependencies": {
+ "mime": "^3.0.0"
+ },
+ "engines": {
+ "node": ">=16.13"
+ }
+ },
+ "node_modules/@cloudflare/unenv-preset": {
+ "version": "2.0.2",
+ "resolved": "https://registry.npmmirror.com/@cloudflare/unenv-preset/-/unenv-preset-2.0.2.tgz",
+ "integrity": "sha512-nyzYnlZjjV5xT3LizahG1Iu6mnrCaxglJ04rZLpDwlDVDZ7v46lNsfxhV3A/xtfgQuSHmLnc6SVI+KwBpc3Lwg==",
+ "dev": true,
+ "license": "MIT OR Apache-2.0",
+ "peerDependencies": {
+ "unenv": "2.0.0-rc.14",
+ "workerd": "^1.20250124.0"
+ },
+ "peerDependenciesMeta": {
+ "workerd": {
+ "optional": true
+ }
+ }
+ },
+ "node_modules/@cloudflare/workerd-darwin-64": {
+ "version": "1.20250718.0",
+ "resolved": "https://registry.npmmirror.com/@cloudflare/workerd-darwin-64/-/workerd-darwin-64-1.20250718.0.tgz",
+ "integrity": "sha512-FHf4t7zbVN8yyXgQ/r/GqLPaYZSGUVzeR7RnL28Mwj2djyw2ZergvytVc7fdGcczl6PQh+VKGfZCfUqpJlbi9g==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "Apache-2.0",
+ "optional": true,
+ "os": [
+ "darwin"
+ ],
+ "engines": {
+ "node": ">=16"
+ }
+ },
+ "node_modules/@cloudflare/workerd-darwin-arm64": {
+ "version": "1.20250718.0",
+ "resolved": "https://registry.npmmirror.com/@cloudflare/workerd-darwin-arm64/-/workerd-darwin-arm64-1.20250718.0.tgz",
+ "integrity": "sha512-fUiyUJYyqqp4NqJ0YgGtp4WJh/II/YZsUnEb6vVy5Oeas8lUOxnN+ZOJ8N/6/5LQCVAtYCChRiIrBbfhTn5Z8Q==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "Apache-2.0",
+ "optional": true,
+ "os": [
+ "darwin"
+ ],
+ "engines": {
+ "node": ">=16"
+ }
+ },
+ "node_modules/@cloudflare/workerd-linux-64": {
+ "version": "1.20250718.0",
+ "resolved": "https://registry.npmmirror.com/@cloudflare/workerd-linux-64/-/workerd-linux-64-1.20250718.0.tgz",
+ "integrity": "sha512-5+eb3rtJMiEwp08Kryqzzu8d1rUcK+gdE442auo5eniMpT170Dz0QxBrqkg2Z48SFUPYbj+6uknuA5tzdRSUSg==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "Apache-2.0",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "engines": {
+ "node": ">=16"
+ }
+ },
+ "node_modules/@cloudflare/workerd-linux-arm64": {
+ "version": "1.20250718.0",
+ "resolved": "https://registry.npmmirror.com/@cloudflare/workerd-linux-arm64/-/workerd-linux-arm64-1.20250718.0.tgz",
+ "integrity": "sha512-Aa2M/DVBEBQDdATMbn217zCSFKE+ud/teS+fFS+OQqKABLn0azO2qq6ANAHYOIE6Q3Sq4CxDIQr8lGdaJHwUog==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "Apache-2.0",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "engines": {
+ "node": ">=16"
+ }
+ },
+ "node_modules/@cloudflare/workerd-windows-64": {
+ "version": "1.20250718.0",
+ "resolved": "https://registry.npmmirror.com/@cloudflare/workerd-windows-64/-/workerd-windows-64-1.20250718.0.tgz",
+ "integrity": "sha512-dY16RXKffmugnc67LTbyjdDHZn5NoTF1yHEf2fN4+OaOnoGSp3N1x77QubTDwqZ9zECWxgQfDLjddcH8dWeFhg==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "Apache-2.0",
+ "optional": true,
+ "os": [
+ "win32"
+ ],
+ "engines": {
+ "node": ">=16"
+ }
+ },
+ "node_modules/@cloudflare/workers-types": {
+ "version": "4.20250813.0",
+ "resolved": "https://registry.npmmirror.com/@cloudflare/workers-types/-/workers-types-4.20250813.0.tgz",
+ "integrity": "sha512-RFFjomDndGR+p7ug1HWDlW21qOJyRZbmI99dUtuR9tmwJbSZhUUnSFmzok9lBYVfkMMrO1O5vmB+IlgiecgLEA==",
+ "dev": true,
+ "license": "MIT OR Apache-2.0"
+ },
+ "node_modules/@cspotcode/source-map-support": {
+ "version": "0.8.1",
+ "resolved": "https://registry.npmmirror.com/@cspotcode/source-map-support/-/source-map-support-0.8.1.tgz",
+ "integrity": "sha512-IchNf6dN4tHoMFIn/7OE8LWZ19Y6q/67Bmf6vnGREv8RSbBVb9LPJxEcnwrcwX6ixSvaiGoomAUvu4YSxXrVgw==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@jridgewell/trace-mapping": "0.3.9"
+ },
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@emnapi/runtime": {
+ "version": "1.4.5",
+ "resolved": "https://registry.npmmirror.com/@emnapi/runtime/-/runtime-1.4.5.tgz",
+ "integrity": "sha512-++LApOtY0pEEz1zrd9vy1/zXVaVJJ/EbAF3u0fXIzPJEDtnITsBGbbK0EkM72amhl/R5b+5xx0Y/QhcVOpuulg==",
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "dependencies": {
+ "tslib": "^2.4.0"
+ }
+ },
+ "node_modules/@esbuild-plugins/node-globals-polyfill": {
+ "version": "0.2.3",
+ "resolved": "https://registry.npmmirror.com/@esbuild-plugins/node-globals-polyfill/-/node-globals-polyfill-0.2.3.tgz",
+ "integrity": "sha512-r3MIryXDeXDOZh7ih1l/yE9ZLORCd5e8vWg02azWRGj5SPTuoh69A2AIyn0Z31V/kHBfZ4HgWJ+OK3GTTwLmnw==",
+ "dev": true,
+ "license": "ISC",
+ "peerDependencies": {
+ "esbuild": "*"
+ }
+ },
+ "node_modules/@esbuild-plugins/node-modules-polyfill": {
+ "version": "0.2.2",
+ "resolved": "https://registry.npmmirror.com/@esbuild-plugins/node-modules-polyfill/-/node-modules-polyfill-0.2.2.tgz",
+ "integrity": "sha512-LXV7QsWJxRuMYvKbiznh+U1ilIop3g2TeKRzUxOG5X3YITc8JyyTa90BmLwqqv0YnX4v32CSlG+vsziZp9dMvA==",
+ "dev": true,
+ "license": "ISC",
+ "dependencies": {
+ "escape-string-regexp": "^4.0.0",
+ "rollup-plugin-node-polyfills": "^0.2.1"
+ },
+ "peerDependencies": {
+ "esbuild": "*"
+ }
+ },
+ "node_modules/@esbuild/android-arm": {
+ "version": "0.17.19",
+ "resolved": "https://registry.npmmirror.com/@esbuild/android-arm/-/android-arm-0.17.19.tgz",
+ "integrity": "sha512-rIKddzqhmav7MSmoFCmDIb6e2W57geRsM94gV2l38fzhXMwq7hZoClug9USI2pFRGL06f4IOPHHpFNOkWieR8A==",
+ "cpu": [
+ "arm"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "android"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@esbuild/android-arm64": {
+ "version": "0.17.19",
+ "resolved": "https://registry.npmmirror.com/@esbuild/android-arm64/-/android-arm64-0.17.19.tgz",
+ "integrity": "sha512-KBMWvEZooR7+kzY0BtbTQn0OAYY7CsiydT63pVEaPtVYF0hXbUaOyZog37DKxK7NF3XacBJOpYT4adIJh+avxA==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "android"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@esbuild/android-x64": {
+ "version": "0.17.19",
+ "resolved": "https://registry.npmmirror.com/@esbuild/android-x64/-/android-x64-0.17.19.tgz",
+ "integrity": "sha512-uUTTc4xGNDT7YSArp/zbtmbhO0uEEK9/ETW29Wk1thYUJBz3IVnvgEiEwEa9IeLyvnpKrWK64Utw2bgUmDveww==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "android"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@esbuild/darwin-arm64": {
+ "version": "0.17.19",
+ "resolved": "https://registry.npmmirror.com/@esbuild/darwin-arm64/-/darwin-arm64-0.17.19.tgz",
+ "integrity": "sha512-80wEoCfF/hFKM6WE1FyBHc9SfUblloAWx6FJkFWTWiCoht9Mc0ARGEM47e67W9rI09YoUxJL68WHfDRYEAvOhg==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "darwin"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@esbuild/darwin-x64": {
+ "version": "0.17.19",
+ "resolved": "https://registry.npmmirror.com/@esbuild/darwin-x64/-/darwin-x64-0.17.19.tgz",
+ "integrity": "sha512-IJM4JJsLhRYr9xdtLytPLSH9k/oxR3boaUIYiHkAawtwNOXKE8KoU8tMvryogdcT8AU+Bflmh81Xn6Q0vTZbQw==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "darwin"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@esbuild/freebsd-arm64": {
+ "version": "0.17.19",
+ "resolved": "https://registry.npmmirror.com/@esbuild/freebsd-arm64/-/freebsd-arm64-0.17.19.tgz",
+ "integrity": "sha512-pBwbc7DufluUeGdjSU5Si+P3SoMF5DQ/F/UmTSb8HXO80ZEAJmrykPyzo1IfNbAoaqw48YRpv8shwd1NoI0jcQ==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "freebsd"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@esbuild/freebsd-x64": {
+ "version": "0.17.19",
+ "resolved": "https://registry.npmmirror.com/@esbuild/freebsd-x64/-/freebsd-x64-0.17.19.tgz",
+ "integrity": "sha512-4lu+n8Wk0XlajEhbEffdy2xy53dpR06SlzvhGByyg36qJw6Kpfk7cp45DR/62aPH9mtJRmIyrXAS5UWBrJT6TQ==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "freebsd"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@esbuild/linux-arm": {
+ "version": "0.17.19",
+ "resolved": "https://registry.npmmirror.com/@esbuild/linux-arm/-/linux-arm-0.17.19.tgz",
+ "integrity": "sha512-cdmT3KxjlOQ/gZ2cjfrQOtmhG4HJs6hhvm3mWSRDPtZ/lP5oe8FWceS10JaSJC13GBd4eH/haHnqf7hhGNLerA==",
+ "cpu": [
+ "arm"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@esbuild/linux-arm64": {
+ "version": "0.17.19",
+ "resolved": "https://registry.npmmirror.com/@esbuild/linux-arm64/-/linux-arm64-0.17.19.tgz",
+ "integrity": "sha512-ct1Tg3WGwd3P+oZYqic+YZF4snNl2bsnMKRkb3ozHmnM0dGWuxcPTTntAF6bOP0Sp4x0PjSF+4uHQ1xvxfRKqg==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@esbuild/linux-ia32": {
+ "version": "0.17.19",
+ "resolved": "https://registry.npmmirror.com/@esbuild/linux-ia32/-/linux-ia32-0.17.19.tgz",
+ "integrity": "sha512-w4IRhSy1VbsNxHRQpeGCHEmibqdTUx61Vc38APcsRbuVgK0OPEnQ0YD39Brymn96mOx48Y2laBQGqgZ0j9w6SQ==",
+ "cpu": [
+ "ia32"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@esbuild/linux-loong64": {
+ "version": "0.17.19",
+ "resolved": "https://registry.npmmirror.com/@esbuild/linux-loong64/-/linux-loong64-0.17.19.tgz",
+ "integrity": "sha512-2iAngUbBPMq439a+z//gE+9WBldoMp1s5GWsUSgqHLzLJ9WoZLZhpwWuym0u0u/4XmZ3gpHmzV84PonE+9IIdQ==",
+ "cpu": [
+ "loong64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@esbuild/linux-mips64el": {
+ "version": "0.17.19",
+ "resolved": "https://registry.npmmirror.com/@esbuild/linux-mips64el/-/linux-mips64el-0.17.19.tgz",
+ "integrity": "sha512-LKJltc4LVdMKHsrFe4MGNPp0hqDFA1Wpt3jE1gEyM3nKUvOiO//9PheZZHfYRfYl6AwdTH4aTcXSqBerX0ml4A==",
+ "cpu": [
+ "mips64el"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@esbuild/linux-ppc64": {
+ "version": "0.17.19",
+ "resolved": "https://registry.npmmirror.com/@esbuild/linux-ppc64/-/linux-ppc64-0.17.19.tgz",
+ "integrity": "sha512-/c/DGybs95WXNS8y3Ti/ytqETiW7EU44MEKuCAcpPto3YjQbyK3IQVKfF6nbghD7EcLUGl0NbiL5Rt5DMhn5tg==",
+ "cpu": [
+ "ppc64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@esbuild/linux-riscv64": {
+ "version": "0.17.19",
+ "resolved": "https://registry.npmmirror.com/@esbuild/linux-riscv64/-/linux-riscv64-0.17.19.tgz",
+ "integrity": "sha512-FC3nUAWhvFoutlhAkgHf8f5HwFWUL6bYdvLc/TTuxKlvLi3+pPzdZiFKSWz/PF30TB1K19SuCxDTI5KcqASJqA==",
+ "cpu": [
+ "riscv64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@esbuild/linux-s390x": {
+ "version": "0.17.19",
+ "resolved": "https://registry.npmmirror.com/@esbuild/linux-s390x/-/linux-s390x-0.17.19.tgz",
+ "integrity": "sha512-IbFsFbxMWLuKEbH+7sTkKzL6NJmG2vRyy6K7JJo55w+8xDk7RElYn6xvXtDW8HCfoKBFK69f3pgBJSUSQPr+4Q==",
+ "cpu": [
+ "s390x"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@esbuild/linux-x64": {
+ "version": "0.17.19",
+ "resolved": "https://registry.npmmirror.com/@esbuild/linux-x64/-/linux-x64-0.17.19.tgz",
+ "integrity": "sha512-68ngA9lg2H6zkZcyp22tsVt38mlhWde8l3eJLWkyLrp4HwMUr3c1s/M2t7+kHIhvMjglIBrFpncX1SzMckomGw==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@esbuild/netbsd-x64": {
+ "version": "0.17.19",
+ "resolved": "https://registry.npmmirror.com/@esbuild/netbsd-x64/-/netbsd-x64-0.17.19.tgz",
+ "integrity": "sha512-CwFq42rXCR8TYIjIfpXCbRX0rp1jo6cPIUPSaWwzbVI4aOfX96OXY8M6KNmtPcg7QjYeDmN+DD0Wp3LaBOLf4Q==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "netbsd"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@esbuild/openbsd-x64": {
+ "version": "0.17.19",
+ "resolved": "https://registry.npmmirror.com/@esbuild/openbsd-x64/-/openbsd-x64-0.17.19.tgz",
+ "integrity": "sha512-cnq5brJYrSZ2CF6c35eCmviIN3k3RczmHz8eYaVlNasVqsNY+JKohZU5MKmaOI+KkllCdzOKKdPs762VCPC20g==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "openbsd"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@esbuild/sunos-x64": {
+ "version": "0.17.19",
+ "resolved": "https://registry.npmmirror.com/@esbuild/sunos-x64/-/sunos-x64-0.17.19.tgz",
+ "integrity": "sha512-vCRT7yP3zX+bKWFeP/zdS6SqdWB8OIpaRq/mbXQxTGHnIxspRtigpkUcDMlSCOejlHowLqII7K2JKevwyRP2rg==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "sunos"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@esbuild/win32-arm64": {
+ "version": "0.17.19",
+ "resolved": "https://registry.npmmirror.com/@esbuild/win32-arm64/-/win32-arm64-0.17.19.tgz",
+ "integrity": "sha512-yYx+8jwowUstVdorcMdNlzklLYhPxjniHWFKgRqH7IFlUEa0Umu3KuYplf1HUZZ422e3NU9F4LGb+4O0Kdcaag==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "win32"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@esbuild/win32-ia32": {
+ "version": "0.17.19",
+ "resolved": "https://registry.npmmirror.com/@esbuild/win32-ia32/-/win32-ia32-0.17.19.tgz",
+ "integrity": "sha512-eggDKanJszUtCdlVs0RB+h35wNlb5v4TWEkq4vZcmVt5u/HiDZrTXe2bWFQUez3RgNHwx/x4sk5++4NSSicKkw==",
+ "cpu": [
+ "ia32"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "win32"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@esbuild/win32-x64": {
+ "version": "0.17.19",
+ "resolved": "https://registry.npmmirror.com/@esbuild/win32-x64/-/win32-x64-0.17.19.tgz",
+ "integrity": "sha512-lAhycmKnVOuRYNtRtatQR1LPQf2oYCkRGkSFnseDAKPl8lu5SOsK/e1sXe5a0Pc5kHIHe6P2I/ilntNv2xf3cA==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "win32"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@fastify/busboy": {
+ "version": "2.1.1",
+ "resolved": "https://registry.npmmirror.com/@fastify/busboy/-/busboy-2.1.1.tgz",
+ "integrity": "sha512-vBZP4NlzfOlerQTnba4aqZoMhE/a9HY7HRqoOPaETQcSQuWEIyZMHGfVu6w9wGtGK5fED5qRs2DteVCjOH60sA==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=14"
+ }
+ },
+ "node_modules/@img/sharp-darwin-arm64": {
+ "version": "0.33.5",
+ "resolved": "https://registry.npmmirror.com/@img/sharp-darwin-arm64/-/sharp-darwin-arm64-0.33.5.tgz",
+ "integrity": "sha512-UT4p+iz/2H4twwAoLCqfA9UH5pI6DggwKEGuaPy7nCVQ8ZsiY5PIcrRvD1DzuY3qYL07NtIQcWnBSY/heikIFQ==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "Apache-2.0",
+ "optional": true,
+ "os": [
+ "darwin"
+ ],
+ "engines": {
+ "node": "^18.17.0 || ^20.3.0 || >=21.0.0"
+ },
+ "funding": {
+ "url": "https://opencollective.com/libvips"
+ },
+ "optionalDependencies": {
+ "@img/sharp-libvips-darwin-arm64": "1.0.4"
+ }
+ },
+ "node_modules/@img/sharp-darwin-x64": {
+ "version": "0.33.5",
+ "resolved": "https://registry.npmmirror.com/@img/sharp-darwin-x64/-/sharp-darwin-x64-0.33.5.tgz",
+ "integrity": "sha512-fyHac4jIc1ANYGRDxtiqelIbdWkIuQaI84Mv45KvGRRxSAa7o7d1ZKAOBaYbnepLC1WqxfpimdeWfvqqSGwR2Q==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "Apache-2.0",
+ "optional": true,
+ "os": [
+ "darwin"
+ ],
+ "engines": {
+ "node": "^18.17.0 || ^20.3.0 || >=21.0.0"
+ },
+ "funding": {
+ "url": "https://opencollective.com/libvips"
+ },
+ "optionalDependencies": {
+ "@img/sharp-libvips-darwin-x64": "1.0.4"
+ }
+ },
+ "node_modules/@img/sharp-libvips-darwin-arm64": {
+ "version": "1.0.4",
+ "resolved": "https://registry.npmmirror.com/@img/sharp-libvips-darwin-arm64/-/sharp-libvips-darwin-arm64-1.0.4.tgz",
+ "integrity": "sha512-XblONe153h0O2zuFfTAbQYAX2JhYmDHeWikp1LM9Hul9gVPjFY427k6dFEcOL72O01QxQsWi761svJ/ev9xEDg==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "LGPL-3.0-or-later",
+ "optional": true,
+ "os": [
+ "darwin"
+ ],
+ "funding": {
+ "url": "https://opencollective.com/libvips"
+ }
+ },
+ "node_modules/@img/sharp-libvips-darwin-x64": {
+ "version": "1.0.4",
+ "resolved": "https://registry.npmmirror.com/@img/sharp-libvips-darwin-x64/-/sharp-libvips-darwin-x64-1.0.4.tgz",
+ "integrity": "sha512-xnGR8YuZYfJGmWPvmlunFaWJsb9T/AO2ykoP3Fz/0X5XV2aoYBPkX6xqCQvUTKKiLddarLaxpzNe+b1hjeWHAQ==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "LGPL-3.0-or-later",
+ "optional": true,
+ "os": [
+ "darwin"
+ ],
+ "funding": {
+ "url": "https://opencollective.com/libvips"
+ }
+ },
+ "node_modules/@img/sharp-libvips-linux-arm": {
+ "version": "1.0.5",
+ "resolved": "https://registry.npmmirror.com/@img/sharp-libvips-linux-arm/-/sharp-libvips-linux-arm-1.0.5.tgz",
+ "integrity": "sha512-gvcC4ACAOPRNATg/ov8/MnbxFDJqf/pDePbBnuBDcjsI8PssmjoKMAz4LtLaVi+OnSb5FK/yIOamqDwGmXW32g==",
+ "cpu": [
+ "arm"
+ ],
+ "dev": true,
+ "license": "LGPL-3.0-or-later",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "funding": {
+ "url": "https://opencollective.com/libvips"
+ }
+ },
+ "node_modules/@img/sharp-libvips-linux-arm64": {
+ "version": "1.0.4",
+ "resolved": "https://registry.npmmirror.com/@img/sharp-libvips-linux-arm64/-/sharp-libvips-linux-arm64-1.0.4.tgz",
+ "integrity": "sha512-9B+taZ8DlyyqzZQnoeIvDVR/2F4EbMepXMc/NdVbkzsJbzkUjhXv/70GQJ7tdLA4YJgNP25zukcxpX2/SueNrA==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "LGPL-3.0-or-later",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "funding": {
+ "url": "https://opencollective.com/libvips"
+ }
+ },
+ "node_modules/@img/sharp-libvips-linux-s390x": {
+ "version": "1.0.4",
+ "resolved": "https://registry.npmmirror.com/@img/sharp-libvips-linux-s390x/-/sharp-libvips-linux-s390x-1.0.4.tgz",
+ "integrity": "sha512-u7Wz6ntiSSgGSGcjZ55im6uvTrOxSIS8/dgoVMoiGE9I6JAfU50yH5BoDlYA1tcuGS7g/QNtetJnxA6QEsCVTA==",
+ "cpu": [
+ "s390x"
+ ],
+ "dev": true,
+ "license": "LGPL-3.0-or-later",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "funding": {
+ "url": "https://opencollective.com/libvips"
+ }
+ },
+ "node_modules/@img/sharp-libvips-linux-x64": {
+ "version": "1.0.4",
+ "resolved": "https://registry.npmmirror.com/@img/sharp-libvips-linux-x64/-/sharp-libvips-linux-x64-1.0.4.tgz",
+ "integrity": "sha512-MmWmQ3iPFZr0Iev+BAgVMb3ZyC4KeFc3jFxnNbEPas60e1cIfevbtuyf9nDGIzOaW9PdnDciJm+wFFaTlj5xYw==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "LGPL-3.0-or-later",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "funding": {
+ "url": "https://opencollective.com/libvips"
+ }
+ },
+ "node_modules/@img/sharp-libvips-linuxmusl-arm64": {
+ "version": "1.0.4",
+ "resolved": "https://registry.npmmirror.com/@img/sharp-libvips-linuxmusl-arm64/-/sharp-libvips-linuxmusl-arm64-1.0.4.tgz",
+ "integrity": "sha512-9Ti+BbTYDcsbp4wfYib8Ctm1ilkugkA/uscUn6UXK1ldpC1JjiXbLfFZtRlBhjPZ5o1NCLiDbg8fhUPKStHoTA==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "LGPL-3.0-or-later",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "funding": {
+ "url": "https://opencollective.com/libvips"
+ }
+ },
+ "node_modules/@img/sharp-libvips-linuxmusl-x64": {
+ "version": "1.0.4",
+ "resolved": "https://registry.npmmirror.com/@img/sharp-libvips-linuxmusl-x64/-/sharp-libvips-linuxmusl-x64-1.0.4.tgz",
+ "integrity": "sha512-viYN1KX9m+/hGkJtvYYp+CCLgnJXwiQB39damAO7WMdKWlIhmYTfHjwSbQeUK/20vY154mwezd9HflVFM1wVSw==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "LGPL-3.0-or-later",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "funding": {
+ "url": "https://opencollective.com/libvips"
+ }
+ },
+ "node_modules/@img/sharp-linux-arm": {
+ "version": "0.33.5",
+ "resolved": "https://registry.npmmirror.com/@img/sharp-linux-arm/-/sharp-linux-arm-0.33.5.tgz",
+ "integrity": "sha512-JTS1eldqZbJxjvKaAkxhZmBqPRGmxgu+qFKSInv8moZ2AmT5Yib3EQ1c6gp493HvrvV8QgdOXdyaIBrhvFhBMQ==",
+ "cpu": [
+ "arm"
+ ],
+ "dev": true,
+ "license": "Apache-2.0",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "engines": {
+ "node": "^18.17.0 || ^20.3.0 || >=21.0.0"
+ },
+ "funding": {
+ "url": "https://opencollective.com/libvips"
+ },
+ "optionalDependencies": {
+ "@img/sharp-libvips-linux-arm": "1.0.5"
+ }
+ },
+ "node_modules/@img/sharp-linux-arm64": {
+ "version": "0.33.5",
+ "resolved": "https://registry.npmmirror.com/@img/sharp-linux-arm64/-/sharp-linux-arm64-0.33.5.tgz",
+ "integrity": "sha512-JMVv+AMRyGOHtO1RFBiJy/MBsgz0x4AWrT6QoEVVTyh1E39TrCUpTRI7mx9VksGX4awWASxqCYLCV4wBZHAYxA==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "Apache-2.0",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "engines": {
+ "node": "^18.17.0 || ^20.3.0 || >=21.0.0"
+ },
+ "funding": {
+ "url": "https://opencollective.com/libvips"
+ },
+ "optionalDependencies": {
+ "@img/sharp-libvips-linux-arm64": "1.0.4"
+ }
+ },
+ "node_modules/@img/sharp-linux-s390x": {
+ "version": "0.33.5",
+ "resolved": "https://registry.npmmirror.com/@img/sharp-linux-s390x/-/sharp-linux-s390x-0.33.5.tgz",
+ "integrity": "sha512-y/5PCd+mP4CA/sPDKl2961b+C9d+vPAveS33s6Z3zfASk2j5upL6fXVPZi7ztePZ5CuH+1kW8JtvxgbuXHRa4Q==",
+ "cpu": [
+ "s390x"
+ ],
+ "dev": true,
+ "license": "Apache-2.0",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "engines": {
+ "node": "^18.17.0 || ^20.3.0 || >=21.0.0"
+ },
+ "funding": {
+ "url": "https://opencollective.com/libvips"
+ },
+ "optionalDependencies": {
+ "@img/sharp-libvips-linux-s390x": "1.0.4"
+ }
+ },
+ "node_modules/@img/sharp-linux-x64": {
+ "version": "0.33.5",
+ "resolved": "https://registry.npmmirror.com/@img/sharp-linux-x64/-/sharp-linux-x64-0.33.5.tgz",
+ "integrity": "sha512-opC+Ok5pRNAzuvq1AG0ar+1owsu842/Ab+4qvU879ippJBHvyY5n2mxF1izXqkPYlGuP/M556uh53jRLJmzTWA==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "Apache-2.0",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "engines": {
+ "node": "^18.17.0 || ^20.3.0 || >=21.0.0"
+ },
+ "funding": {
+ "url": "https://opencollective.com/libvips"
+ },
+ "optionalDependencies": {
+ "@img/sharp-libvips-linux-x64": "1.0.4"
+ }
+ },
+ "node_modules/@img/sharp-linuxmusl-arm64": {
+ "version": "0.33.5",
+ "resolved": "https://registry.npmmirror.com/@img/sharp-linuxmusl-arm64/-/sharp-linuxmusl-arm64-0.33.5.tgz",
+ "integrity": "sha512-XrHMZwGQGvJg2V/oRSUfSAfjfPxO+4DkiRh6p2AFjLQztWUuY/o8Mq0eMQVIY7HJ1CDQUJlxGGZRw1a5bqmd1g==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "Apache-2.0",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "engines": {
+ "node": "^18.17.0 || ^20.3.0 || >=21.0.0"
+ },
+ "funding": {
+ "url": "https://opencollective.com/libvips"
+ },
+ "optionalDependencies": {
+ "@img/sharp-libvips-linuxmusl-arm64": "1.0.4"
+ }
+ },
+ "node_modules/@img/sharp-linuxmusl-x64": {
+ "version": "0.33.5",
+ "resolved": "https://registry.npmmirror.com/@img/sharp-linuxmusl-x64/-/sharp-linuxmusl-x64-0.33.5.tgz",
+ "integrity": "sha512-WT+d/cgqKkkKySYmqoZ8y3pxx7lx9vVejxW/W4DOFMYVSkErR+w7mf2u8m/y4+xHe7yY9DAXQMWQhpnMuFfScw==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "Apache-2.0",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "engines": {
+ "node": "^18.17.0 || ^20.3.0 || >=21.0.0"
+ },
+ "funding": {
+ "url": "https://opencollective.com/libvips"
+ },
+ "optionalDependencies": {
+ "@img/sharp-libvips-linuxmusl-x64": "1.0.4"
+ }
+ },
+ "node_modules/@img/sharp-wasm32": {
+ "version": "0.33.5",
+ "resolved": "https://registry.npmmirror.com/@img/sharp-wasm32/-/sharp-wasm32-0.33.5.tgz",
+ "integrity": "sha512-ykUW4LVGaMcU9lu9thv85CbRMAwfeadCJHRsg2GmeRa/cJxsVY9Rbd57JcMxBkKHag5U/x7TSBpScF4U8ElVzg==",
+ "cpu": [
+ "wasm32"
+ ],
+ "dev": true,
+ "license": "Apache-2.0 AND LGPL-3.0-or-later AND MIT",
+ "optional": true,
+ "dependencies": {
+ "@emnapi/runtime": "^1.2.0"
+ },
+ "engines": {
+ "node": "^18.17.0 || ^20.3.0 || >=21.0.0"
+ },
+ "funding": {
+ "url": "https://opencollective.com/libvips"
+ }
+ },
+ "node_modules/@img/sharp-win32-ia32": {
+ "version": "0.33.5",
+ "resolved": "https://registry.npmmirror.com/@img/sharp-win32-ia32/-/sharp-win32-ia32-0.33.5.tgz",
+ "integrity": "sha512-T36PblLaTwuVJ/zw/LaH0PdZkRz5rd3SmMHX8GSmR7vtNSP5Z6bQkExdSK7xGWyxLw4sUknBuugTelgw2faBbQ==",
+ "cpu": [
+ "ia32"
+ ],
+ "dev": true,
+ "license": "Apache-2.0 AND LGPL-3.0-or-later",
+ "optional": true,
+ "os": [
+ "win32"
+ ],
+ "engines": {
+ "node": "^18.17.0 || ^20.3.0 || >=21.0.0"
+ },
+ "funding": {
+ "url": "https://opencollective.com/libvips"
+ }
+ },
+ "node_modules/@img/sharp-win32-x64": {
+ "version": "0.33.5",
+ "resolved": "https://registry.npmmirror.com/@img/sharp-win32-x64/-/sharp-win32-x64-0.33.5.tgz",
+ "integrity": "sha512-MpY/o8/8kj+EcnxwvrP4aTJSWw/aZ7JIGR4aBeZkZw5B7/Jn+tY9/VNwtcoGmdT7GfggGIU4kygOMSbYnOrAbg==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "Apache-2.0 AND LGPL-3.0-or-later",
+ "optional": true,
+ "os": [
+ "win32"
+ ],
+ "engines": {
+ "node": "^18.17.0 || ^20.3.0 || >=21.0.0"
+ },
+ "funding": {
+ "url": "https://opencollective.com/libvips"
+ }
+ },
+ "node_modules/@jridgewell/resolve-uri": {
+ "version": "3.1.2",
+ "resolved": "https://registry.npmmirror.com/@jridgewell/resolve-uri/-/resolve-uri-3.1.2.tgz",
+ "integrity": "sha512-bRISgCIjP20/tbWSPWMEi54QVPRZExkuD9lJL+UIxUKtwVJA8wW1Trb1jMs1RFXo1CBTNZ/5hpC9QvmKWdopKw==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=6.0.0"
+ }
+ },
+ "node_modules/@jridgewell/sourcemap-codec": {
+ "version": "1.5.5",
+ "resolved": "https://registry.npmmirror.com/@jridgewell/sourcemap-codec/-/sourcemap-codec-1.5.5.tgz",
+ "integrity": "sha512-cYQ9310grqxueWbl+WuIUIaiUaDcj7WOq5fVhEljNVgRfOUhY9fy2zTvfoqWsnebh8Sl70VScFbICvJnLKB0Og==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/@jridgewell/trace-mapping": {
+ "version": "0.3.9",
+ "resolved": "https://registry.npmmirror.com/@jridgewell/trace-mapping/-/trace-mapping-0.3.9.tgz",
+ "integrity": "sha512-3Belt6tdc8bPgAtbcmdtNJlirVoTmEb5e2gC94PnkwEW9jI6CAHUeoG85tjWP5WquqfavoMtMwiG4P926ZKKuQ==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@jridgewell/resolve-uri": "^3.0.3",
+ "@jridgewell/sourcemap-codec": "^1.4.10"
+ }
+ },
+ "node_modules/@types/node": {
+ "version": "20.19.10",
+ "resolved": "https://registry.npmmirror.com/@types/node/-/node-20.19.10.tgz",
+ "integrity": "sha512-iAFpG6DokED3roLSP0K+ybeDdIX6Bc0Vd3mLW5uDqThPWtNos3E+EqOM11mPQHKzfWHqEBuLjIlsBQQ8CsISmQ==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "undici-types": "~6.21.0"
+ }
+ },
+ "node_modules/@types/pg": {
+ "version": "8.15.5",
+ "resolved": "https://registry.npmmirror.com/@types/pg/-/pg-8.15.5.tgz",
+ "integrity": "sha512-LF7lF6zWEKxuT3/OR8wAZGzkg4ENGXFNyiV/JeOt9z5B+0ZVwbql9McqX5c/WStFq1GaGso7H1AzP/qSzmlCKQ==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@types/node": "*",
+ "pg-protocol": "*",
+ "pg-types": "^2.2.0"
+ }
+ },
+ "node_modules/acorn": {
+ "version": "8.14.0",
+ "resolved": "https://registry.npmmirror.com/acorn/-/acorn-8.14.0.tgz",
+ "integrity": "sha512-cl669nCJTZBsL97OF4kUQm5g5hC2uihk0NxY3WENAC0TYdILVkAyHymAntgxGkl7K+t0cXIrH5siy5S4XkFycA==",
+ "dev": true,
+ "license": "MIT",
+ "bin": {
+ "acorn": "bin/acorn"
+ },
+ "engines": {
+ "node": ">=0.4.0"
+ }
+ },
+ "node_modules/acorn-walk": {
+ "version": "8.3.2",
+ "resolved": "https://registry.npmmirror.com/acorn-walk/-/acorn-walk-8.3.2.tgz",
+ "integrity": "sha512-cjkyv4OtNCIeqhHrfS81QWXoCBPExR/J62oyEqepVw8WaQeSqpW2uhuLPh1m9eWhDuOo/jUXVTlifvesOWp/4A==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=0.4.0"
+ }
+ },
+ "node_modules/as-table": {
+ "version": "1.0.55",
+ "resolved": "https://registry.npmmirror.com/as-table/-/as-table-1.0.55.tgz",
+ "integrity": "sha512-xvsWESUJn0JN421Xb9MQw6AsMHRCUknCe0Wjlxvjud80mU4E6hQf1A6NzQKcYNmYw62MfzEtXc+badstZP3JpQ==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "printable-characters": "^1.0.42"
+ }
+ },
+ "node_modules/blake3-wasm": {
+ "version": "2.1.5",
+ "resolved": "https://registry.npmmirror.com/blake3-wasm/-/blake3-wasm-2.1.5.tgz",
+ "integrity": "sha512-F1+K8EbfOZE49dtoPtmxUQrpXaBIl3ICvasLh+nJta0xkz+9kF/7uet9fLnwKqhDrmj6g+6K3Tw9yQPUg2ka5g==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/color": {
+ "version": "4.2.3",
+ "resolved": "https://registry.npmmirror.com/color/-/color-4.2.3.tgz",
+ "integrity": "sha512-1rXeuUUiGGrykh+CeBdu5Ie7OJwinCgQY0bc7GCRxy5xVHy+moaqkpL/jqQq0MtQOeYcrqEz4abc5f0KtU7W4A==",
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "dependencies": {
+ "color-convert": "^2.0.1",
+ "color-string": "^1.9.0"
+ },
+ "engines": {
+ "node": ">=12.5.0"
+ }
+ },
+ "node_modules/color-convert": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmmirror.com/color-convert/-/color-convert-2.0.1.tgz",
+ "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==",
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "dependencies": {
+ "color-name": "~1.1.4"
+ },
+ "engines": {
+ "node": ">=7.0.0"
+ }
+ },
+ "node_modules/color-name": {
+ "version": "1.1.4",
+ "resolved": "https://registry.npmmirror.com/color-name/-/color-name-1.1.4.tgz",
+ "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==",
+ "dev": true,
+ "license": "MIT",
+ "optional": true
+ },
+ "node_modules/color-string": {
+ "version": "1.9.1",
+ "resolved": "https://registry.npmmirror.com/color-string/-/color-string-1.9.1.tgz",
+ "integrity": "sha512-shrVawQFojnZv6xM40anx4CkoDP+fZsw/ZerEMsW/pyzsRbElpsL/DBVW7q3ExxwusdNXI3lXpuhEZkzs8p5Eg==",
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "dependencies": {
+ "color-name": "^1.0.0",
+ "simple-swizzle": "^0.2.2"
+ }
+ },
+ "node_modules/cookie": {
+ "version": "0.7.2",
+ "resolved": "https://registry.npmmirror.com/cookie/-/cookie-0.7.2.tgz",
+ "integrity": "sha512-yki5XnKuf750l50uGTllt6kKILY4nQ1eNIQatoXEByZ5dWgnKqbnqmTrBE5B4N7lrMJKQ2ytWMiTO2o0v6Ew/w==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">= 0.6"
+ }
+ },
+ "node_modules/data-uri-to-buffer": {
+ "version": "2.0.2",
+ "resolved": "https://registry.npmmirror.com/data-uri-to-buffer/-/data-uri-to-buffer-2.0.2.tgz",
+ "integrity": "sha512-ND9qDTLc6diwj+Xe5cdAgVTbLVdXbtxTJRXRhli8Mowuaan+0EJOtdqJ0QCHNSSPyoXGx9HX2/VMnKeC34AChA==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/defu": {
+ "version": "6.1.4",
+ "resolved": "https://registry.npmmirror.com/defu/-/defu-6.1.4.tgz",
+ "integrity": "sha512-mEQCMmwJu317oSz8CwdIOdwf3xMif1ttiM8LTufzc3g6kR+9Pe236twL8j3IYT1F7GfRgGcW6MWxzZjLIkuHIg==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/detect-libc": {
+ "version": "2.0.4",
+ "resolved": "https://registry.npmmirror.com/detect-libc/-/detect-libc-2.0.4.tgz",
+ "integrity": "sha512-3UDv+G9CsCKO1WKMGw9fwq/SWJYbI0c5Y7LU1AXYoDdbhE2AHQ6N6Nb34sG8Fj7T5APy8qXDCKuuIHd1BR0tVA==",
+ "dev": true,
+ "license": "Apache-2.0",
+ "optional": true,
+ "engines": {
+ "node": ">=8"
+ }
+ },
+ "node_modules/esbuild": {
+ "version": "0.17.19",
+ "resolved": "https://registry.npmmirror.com/esbuild/-/esbuild-0.17.19.tgz",
+ "integrity": "sha512-XQ0jAPFkK/u3LcVRcvVHQcTIqD6E2H1fvZMA5dQPSOWb3suUbWbfbRf94pjc0bNzRYLfIrDRQXr7X+LHIm5oHw==",
+ "dev": true,
+ "hasInstallScript": true,
+ "license": "MIT",
+ "bin": {
+ "esbuild": "bin/esbuild"
+ },
+ "engines": {
+ "node": ">=12"
+ },
+ "optionalDependencies": {
+ "@esbuild/android-arm": "0.17.19",
+ "@esbuild/android-arm64": "0.17.19",
+ "@esbuild/android-x64": "0.17.19",
+ "@esbuild/darwin-arm64": "0.17.19",
+ "@esbuild/darwin-x64": "0.17.19",
+ "@esbuild/freebsd-arm64": "0.17.19",
+ "@esbuild/freebsd-x64": "0.17.19",
+ "@esbuild/linux-arm": "0.17.19",
+ "@esbuild/linux-arm64": "0.17.19",
+ "@esbuild/linux-ia32": "0.17.19",
+ "@esbuild/linux-loong64": "0.17.19",
+ "@esbuild/linux-mips64el": "0.17.19",
+ "@esbuild/linux-ppc64": "0.17.19",
+ "@esbuild/linux-riscv64": "0.17.19",
+ "@esbuild/linux-s390x": "0.17.19",
+ "@esbuild/linux-x64": "0.17.19",
+ "@esbuild/netbsd-x64": "0.17.19",
+ "@esbuild/openbsd-x64": "0.17.19",
+ "@esbuild/sunos-x64": "0.17.19",
+ "@esbuild/win32-arm64": "0.17.19",
+ "@esbuild/win32-ia32": "0.17.19",
+ "@esbuild/win32-x64": "0.17.19"
+ }
+ },
+ "node_modules/escape-string-regexp": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmmirror.com/escape-string-regexp/-/escape-string-regexp-4.0.0.tgz",
+ "integrity": "sha512-TtpcNJ3XAzx3Gq8sWRzJaVajRs0uVxA2YAkdb1jm2YkPz4G6egUFAyA3n5vtEIZefPk5Wa4UXbKuS5fKkJWdgA==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=10"
+ },
+ "funding": {
+ "url": "https://github.com/sponsors/sindresorhus"
+ }
+ },
+ "node_modules/estree-walker": {
+ "version": "0.6.1",
+ "resolved": "https://registry.npmmirror.com/estree-walker/-/estree-walker-0.6.1.tgz",
+ "integrity": "sha512-SqmZANLWS0mnatqbSfRP5g8OXZC12Fgg1IwNtLsyHDzJizORW4khDfjPqJZsemPWBB2uqykUah5YpQ6epsqC/w==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/exit-hook": {
+ "version": "2.2.1",
+ "resolved": "https://registry.npmmirror.com/exit-hook/-/exit-hook-2.2.1.tgz",
+ "integrity": "sha512-eNTPlAD67BmP31LDINZ3U7HSF8l57TxOY2PmBJ1shpCvpnxBF93mWCE8YHBnXs8qiUZJc9WDcWIeC3a2HIAMfw==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=6"
+ },
+ "funding": {
+ "url": "https://github.com/sponsors/sindresorhus"
+ }
+ },
+ "node_modules/exsolve": {
+ "version": "1.0.7",
+ "resolved": "https://registry.npmmirror.com/exsolve/-/exsolve-1.0.7.tgz",
+ "integrity": "sha512-VO5fQUzZtI6C+vx4w/4BWJpg3s/5l+6pRQEHzFRM8WFi4XffSP1Z+4qi7GbjWbvRQEbdIco5mIMq+zX4rPuLrw==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/fetch-blob": {
+ "version": "3.2.0",
+ "resolved": "https://registry.npmmirror.com/fetch-blob/-/fetch-blob-3.2.0.tgz",
+ "integrity": "sha512-7yAQpD2UMJzLi1Dqv7qFYnPbaPx7ZfFK6PiIxQ4PfkGPyNyl2Ugx+a/umUonmKqjhM4DnfbMvdX6otXq83soQQ==",
+ "funding": [
+ {
+ "type": "github",
+ "url": "https://github.com/sponsors/jimmywarting"
+ },
+ {
+ "type": "paypal",
+ "url": "https://paypal.me/jimmywarting"
+ }
+ ],
+ "license": "MIT",
+ "dependencies": {
+ "node-domexception": "^1.0.0",
+ "web-streams-polyfill": "^3.0.3"
+ },
+ "engines": {
+ "node": "^12.20 || >= 14.13"
+ }
+ },
+ "node_modules/formdata-polyfill": {
+ "version": "4.0.10",
+ "resolved": "https://registry.npmmirror.com/formdata-polyfill/-/formdata-polyfill-4.0.10.tgz",
+ "integrity": "sha512-buewHzMvYL29jdeQTVILecSaZKnt/RJWjoZCF5OW60Z67/GmSLBkOFM7qh1PI3zFNtJbaZL5eQu1vLfazOwj4g==",
+ "license": "MIT",
+ "dependencies": {
+ "fetch-blob": "^3.1.2"
+ },
+ "engines": {
+ "node": ">=12.20.0"
+ }
+ },
+ "node_modules/fsevents": {
+ "version": "2.3.3",
+ "resolved": "https://registry.npmmirror.com/fsevents/-/fsevents-2.3.3.tgz",
+ "integrity": "sha512-5xoDfX+fL7faATnagmWPpbFtwh/R77WmMMqqHGS65C3vvB0YHrgF+B1YmZ3441tMj5n63k0212XNoJwzlhffQw==",
+ "dev": true,
+ "hasInstallScript": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "darwin"
+ ],
+ "engines": {
+ "node": "^8.16.0 || ^10.6.0 || >=11.0.0"
+ }
+ },
+ "node_modules/get-source": {
+ "version": "2.0.12",
+ "resolved": "https://registry.npmmirror.com/get-source/-/get-source-2.0.12.tgz",
+ "integrity": "sha512-X5+4+iD+HoSeEED+uwrQ07BOQr0kEDFMVqqpBuI+RaZBpBpHCuXxo70bjar6f0b0u/DQJsJ7ssurpP0V60Az+w==",
+ "dev": true,
+ "license": "Unlicense",
+ "dependencies": {
+ "data-uri-to-buffer": "^2.0.0",
+ "source-map": "^0.6.1"
+ }
+ },
+ "node_modules/glob-to-regexp": {
+ "version": "0.4.1",
+ "resolved": "https://registry.npmmirror.com/glob-to-regexp/-/glob-to-regexp-0.4.1.tgz",
+ "integrity": "sha512-lkX1HJXwyMcprw/5YUZc2s7DrpAiHB21/V+E1rHUrVNokkvB6bqMzT0VfV6/86ZNabt1k14YOIaT7nDvOX3Iiw==",
+ "dev": true,
+ "license": "BSD-2-Clause"
+ },
+ "node_modules/is-arrayish": {
+ "version": "0.3.2",
+ "resolved": "https://registry.npmmirror.com/is-arrayish/-/is-arrayish-0.3.2.tgz",
+ "integrity": "sha512-eVRqCvVlZbuw3GrM63ovNSNAeA1K16kaR/LRY/92w0zxQ5/1YzwblUX652i4Xs9RwAGjW9d9y6X88t8OaAJfWQ==",
+ "dev": true,
+ "license": "MIT",
+ "optional": true
+ },
+ "node_modules/magic-string": {
+ "version": "0.25.9",
+ "resolved": "https://registry.npmmirror.com/magic-string/-/magic-string-0.25.9.tgz",
+ "integrity": "sha512-RmF0AsMzgt25qzqqLc1+MbHmhdx0ojF2Fvs4XnOqz2ZOBXzzkEwc/dJQZCYHAn7v1jbVOjAZfK8msRn4BxO4VQ==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "sourcemap-codec": "^1.4.8"
+ }
+ },
+ "node_modules/mime": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmmirror.com/mime/-/mime-3.0.0.tgz",
+ "integrity": "sha512-jSCU7/VB1loIWBZe14aEYHU/+1UMEHoaO7qxCOVJOw9GgH72VAWppxNcjU+x9a2k3GSIBXNKxXQFqRvvZ7vr3A==",
+ "dev": true,
+ "license": "MIT",
+ "bin": {
+ "mime": "cli.js"
+ },
+ "engines": {
+ "node": ">=10.0.0"
+ }
+ },
+ "node_modules/miniflare": {
+ "version": "3.20250718.1",
+ "resolved": "https://registry.npmmirror.com/miniflare/-/miniflare-3.20250718.1.tgz",
+ "integrity": "sha512-9QAOHVKIVHmnQ1dJT9Fls8aVA8R5JjEizzV889Dinq/+bEPltqIepCvm9Z+fbNUgLvV7D/H1NUk8VdlLRgp9Wg==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@cspotcode/source-map-support": "0.8.1",
+ "acorn": "8.14.0",
+ "acorn-walk": "8.3.2",
+ "exit-hook": "2.2.1",
+ "glob-to-regexp": "0.4.1",
+ "stoppable": "1.1.0",
+ "undici": "^5.28.5",
+ "workerd": "1.20250718.0",
+ "ws": "8.18.0",
+ "youch": "3.3.4",
+ "zod": "3.22.3"
+ },
+ "bin": {
+ "miniflare": "bootstrap.js"
+ },
+ "engines": {
+ "node": ">=16.13"
+ }
+ },
+ "node_modules/mustache": {
+ "version": "4.2.0",
+ "resolved": "https://registry.npmmirror.com/mustache/-/mustache-4.2.0.tgz",
+ "integrity": "sha512-71ippSywq5Yb7/tVYyGbkBggbU8H3u5Rz56fH60jGFgr8uHwxs+aSKeqmluIVzM0m0kB7xQjKS6qPfd0b2ZoqQ==",
+ "dev": true,
+ "license": "MIT",
+ "bin": {
+ "mustache": "bin/mustache"
+ }
+ },
+ "node_modules/node-domexception": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmmirror.com/node-domexception/-/node-domexception-1.0.0.tgz",
+ "integrity": "sha512-/jKZoMpw0F8GRwl4/eLROPA3cfcXtLApP0QzLmUT/HuPCZWyB7IY9ZrMeKw2O/nFIqPQB3PVM9aYm0F312AXDQ==",
+ "deprecated": "Use your platform's native DOMException instead",
+ "funding": [
+ {
+ "type": "github",
+ "url": "https://github.com/sponsors/jimmywarting"
+ },
+ {
+ "type": "github",
+ "url": "https://paypal.me/jimmywarting"
+ }
+ ],
+ "license": "MIT",
+ "engines": {
+ "node": ">=10.5.0"
+ }
+ },
+ "node_modules/node-fetch": {
+ "version": "3.3.2",
+ "resolved": "https://registry.npmmirror.com/node-fetch/-/node-fetch-3.3.2.tgz",
+ "integrity": "sha512-dRB78srN/l6gqWulah9SrxeYnxeddIG30+GOqK/9OlLVyLg3HPnr6SqOWTWOXKRwC2eGYCkZ59NNuSgvSrpgOA==",
+ "license": "MIT",
+ "dependencies": {
+ "data-uri-to-buffer": "^4.0.0",
+ "fetch-blob": "^3.1.4",
+ "formdata-polyfill": "^4.0.10"
+ },
+ "engines": {
+ "node": "^12.20.0 || ^14.13.1 || >=16.0.0"
+ },
+ "funding": {
+ "type": "opencollective",
+ "url": "https://opencollective.com/node-fetch"
+ }
+ },
+ "node_modules/node-fetch/node_modules/data-uri-to-buffer": {
+ "version": "4.0.1",
+ "resolved": "https://registry.npmmirror.com/data-uri-to-buffer/-/data-uri-to-buffer-4.0.1.tgz",
+ "integrity": "sha512-0R9ikRb668HB7QDxT1vkpuUBtqc53YyAwMwGeUFKRojY/NWKvdZ+9UYtRfGmhqNbRkTSVpMbmyhXipFFv2cb/A==",
+ "license": "MIT",
+ "engines": {
+ "node": ">= 12"
+ }
+ },
+ "node_modules/ohash": {
+ "version": "2.0.11",
+ "resolved": "https://registry.npmmirror.com/ohash/-/ohash-2.0.11.tgz",
+ "integrity": "sha512-RdR9FQrFwNBNXAr4GixM8YaRZRJ5PUWbKYbE5eOsrwAjJW0q2REGcf79oYPsLyskQCZG1PLN+S/K1V00joZAoQ==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/path-to-regexp": {
+ "version": "6.3.0",
+ "resolved": "https://registry.npmmirror.com/path-to-regexp/-/path-to-regexp-6.3.0.tgz",
+ "integrity": "sha512-Yhpw4T9C6hPpgPeA28us07OJeqZ5EzQTkbfwuhsUg0c237RomFoETJgmp2sa3F/41gfLE6G5cqcYwznmeEeOlQ==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/pathe": {
+ "version": "2.0.3",
+ "resolved": "https://registry.npmmirror.com/pathe/-/pathe-2.0.3.tgz",
+ "integrity": "sha512-WUjGcAqP1gQacoQe+OBJsFA7Ld4DyXuUIjZ5cc75cLHvJ7dtNsTugphxIADwspS+AraAUePCKrSVtPLFj/F88w==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/pg": {
+ "version": "8.16.3",
+ "resolved": "https://registry.npmmirror.com/pg/-/pg-8.16.3.tgz",
+ "integrity": "sha512-enxc1h0jA/aq5oSDMvqyW3q89ra6XIIDZgCX9vkMrnz5DFTw/Ny3Li2lFQ+pt3L6MCgm/5o2o8HW9hiJji+xvw==",
+ "license": "MIT",
+ "dependencies": {
+ "pg-connection-string": "^2.9.1",
+ "pg-pool": "^3.10.1",
+ "pg-protocol": "^1.10.3",
+ "pg-types": "2.2.0",
+ "pgpass": "1.0.5"
+ },
+ "engines": {
+ "node": ">= 16.0.0"
+ },
+ "optionalDependencies": {
+ "pg-cloudflare": "^1.2.7"
+ },
+ "peerDependencies": {
+ "pg-native": ">=3.0.1"
+ },
+ "peerDependenciesMeta": {
+ "pg-native": {
+ "optional": true
+ }
+ }
+ },
+ "node_modules/pg-cloudflare": {
+ "version": "1.2.7",
+ "resolved": "https://registry.npmmirror.com/pg-cloudflare/-/pg-cloudflare-1.2.7.tgz",
+ "integrity": "sha512-YgCtzMH0ptvZJslLM1ffsY4EuGaU0cx4XSdXLRFae8bPP4dS5xL1tNB3k2o/N64cHJpwU7dxKli/nZ2lUa5fLg==",
+ "license": "MIT",
+ "optional": true
+ },
+ "node_modules/pg-connection-string": {
+ "version": "2.9.1",
+ "resolved": "https://registry.npmmirror.com/pg-connection-string/-/pg-connection-string-2.9.1.tgz",
+ "integrity": "sha512-nkc6NpDcvPVpZXxrreI/FOtX3XemeLl8E0qFr6F2Lrm/I8WOnaWNhIPK2Z7OHpw7gh5XJThi6j6ppgNoaT1w4w==",
+ "license": "MIT"
+ },
+ "node_modules/pg-int8": {
+ "version": "1.0.1",
+ "resolved": "https://registry.npmmirror.com/pg-int8/-/pg-int8-1.0.1.tgz",
+ "integrity": "sha512-WCtabS6t3c8SkpDBUlb1kjOs7l66xsGdKpIPZsg4wR+B3+u9UAum2odSsF9tnvxg80h4ZxLWMy4pRjOsFIqQpw==",
+ "license": "ISC",
+ "engines": {
+ "node": ">=4.0.0"
+ }
+ },
+ "node_modules/pg-pool": {
+ "version": "3.10.1",
+ "resolved": "https://registry.npmmirror.com/pg-pool/-/pg-pool-3.10.1.tgz",
+ "integrity": "sha512-Tu8jMlcX+9d8+QVzKIvM/uJtp07PKr82IUOYEphaWcoBhIYkoHpLXN3qO59nAI11ripznDsEzEv8nUxBVWajGg==",
+ "license": "MIT",
+ "peerDependencies": {
+ "pg": ">=8.0"
+ }
+ },
+ "node_modules/pg-protocol": {
+ "version": "1.10.3",
+ "resolved": "https://registry.npmmirror.com/pg-protocol/-/pg-protocol-1.10.3.tgz",
+ "integrity": "sha512-6DIBgBQaTKDJyxnXaLiLR8wBpQQcGWuAESkRBX/t6OwA8YsqP+iVSiond2EDy6Y/dsGk8rh/jtax3js5NeV7JQ==",
+ "license": "MIT"
+ },
+ "node_modules/pg-types": {
+ "version": "2.2.0",
+ "resolved": "https://registry.npmmirror.com/pg-types/-/pg-types-2.2.0.tgz",
+ "integrity": "sha512-qTAAlrEsl8s4OiEQY69wDvcMIdQN6wdz5ojQiOy6YRMuynxenON0O5oCpJI6lshc6scgAY8qvJ2On/p+CXY0GA==",
+ "license": "MIT",
+ "dependencies": {
+ "pg-int8": "1.0.1",
+ "postgres-array": "~2.0.0",
+ "postgres-bytea": "~1.0.0",
+ "postgres-date": "~1.0.4",
+ "postgres-interval": "^1.1.0"
+ },
+ "engines": {
+ "node": ">=4"
+ }
+ },
+ "node_modules/pgpass": {
+ "version": "1.0.5",
+ "resolved": "https://registry.npmmirror.com/pgpass/-/pgpass-1.0.5.tgz",
+ "integrity": "sha512-FdW9r/jQZhSeohs1Z3sI1yxFQNFvMcnmfuj4WBMUTxOrAyLMaTcE1aAMBiTlbMNaXvBCQuVi0R7hd8udDSP7ug==",
+ "license": "MIT",
+ "dependencies": {
+ "split2": "^4.1.0"
+ }
+ },
+ "node_modules/postgres-array": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmmirror.com/postgres-array/-/postgres-array-2.0.0.tgz",
+ "integrity": "sha512-VpZrUqU5A69eQyW2c5CA1jtLecCsN2U/bD6VilrFDWq5+5UIEVO7nazS3TEcHf1zuPYO/sqGvUvW62g86RXZuA==",
+ "license": "MIT",
+ "engines": {
+ "node": ">=4"
+ }
+ },
+ "node_modules/postgres-bytea": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmmirror.com/postgres-bytea/-/postgres-bytea-1.0.0.tgz",
+ "integrity": "sha512-xy3pmLuQqRBZBXDULy7KbaitYqLcmxigw14Q5sj8QBVLqEwXfeybIKVWiqAXTlcvdvb0+xkOtDbfQMOf4lST1w==",
+ "license": "MIT",
+ "engines": {
+ "node": ">=0.10.0"
+ }
+ },
+ "node_modules/postgres-date": {
+ "version": "1.0.7",
+ "resolved": "https://registry.npmmirror.com/postgres-date/-/postgres-date-1.0.7.tgz",
+ "integrity": "sha512-suDmjLVQg78nMK2UZ454hAG+OAW+HQPZ6n++TNDUX+L0+uUlLywnoxJKDou51Zm+zTCjrCl0Nq6J9C5hP9vK/Q==",
+ "license": "MIT",
+ "engines": {
+ "node": ">=0.10.0"
+ }
+ },
+ "node_modules/postgres-interval": {
+ "version": "1.2.0",
+ "resolved": "https://registry.npmmirror.com/postgres-interval/-/postgres-interval-1.2.0.tgz",
+ "integrity": "sha512-9ZhXKM/rw350N1ovuWHbGxnGh/SNJ4cnxHiM0rxE4VN41wsg8P8zWn9hv/buK00RP4WvlOyr/RBDiptyxVbkZQ==",
+ "license": "MIT",
+ "dependencies": {
+ "xtend": "^4.0.0"
+ },
+ "engines": {
+ "node": ">=0.10.0"
+ }
+ },
+ "node_modules/printable-characters": {
+ "version": "1.0.42",
+ "resolved": "https://registry.npmmirror.com/printable-characters/-/printable-characters-1.0.42.tgz",
+ "integrity": "sha512-dKp+C4iXWK4vVYZmYSd0KBH5F/h1HoZRsbJ82AVKRO3PEo8L4lBS/vLwhVtpwwuYcoIsVY+1JYKR268yn480uQ==",
+ "dev": true,
+ "license": "Unlicense"
+ },
+ "node_modules/rollup-plugin-inject": {
+ "version": "3.0.2",
+ "resolved": "https://registry.npmmirror.com/rollup-plugin-inject/-/rollup-plugin-inject-3.0.2.tgz",
+ "integrity": "sha512-ptg9PQwzs3orn4jkgXJ74bfs5vYz1NCZlSQMBUA0wKcGp5i5pA1AO3fOUEte8enhGUC+iapTCzEWw2jEFFUO/w==",
+ "deprecated": "This package has been deprecated and is no longer maintained. Please use @rollup/plugin-inject.",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "estree-walker": "^0.6.1",
+ "magic-string": "^0.25.3",
+ "rollup-pluginutils": "^2.8.1"
+ }
+ },
+ "node_modules/rollup-plugin-node-polyfills": {
+ "version": "0.2.1",
+ "resolved": "https://registry.npmmirror.com/rollup-plugin-node-polyfills/-/rollup-plugin-node-polyfills-0.2.1.tgz",
+ "integrity": "sha512-4kCrKPTJ6sK4/gLL/U5QzVT8cxJcofO0OU74tnB19F40cmuAKSzH5/siithxlofFEjwvw1YAhPmbvGNA6jEroA==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "rollup-plugin-inject": "^3.0.0"
+ }
+ },
+ "node_modules/rollup-pluginutils": {
+ "version": "2.8.2",
+ "resolved": "https://registry.npmmirror.com/rollup-pluginutils/-/rollup-pluginutils-2.8.2.tgz",
+ "integrity": "sha512-EEp9NhnUkwY8aif6bxgovPHMoMoNr2FulJziTndpt5H9RdwC47GSGuII9XxpSdzVGM0GWrNPHV6ie1LTNJPaLQ==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "estree-walker": "^0.6.1"
+ }
+ },
+ "node_modules/semver": {
+ "version": "7.7.2",
+ "resolved": "https://registry.npmmirror.com/semver/-/semver-7.7.2.tgz",
+ "integrity": "sha512-RF0Fw+rO5AMf9MAyaRXI4AV0Ulj5lMHqVxxdSgiVbixSCXoEmmX/jk0CuJw4+3SqroYO9VoUh+HcuJivvtJemA==",
+ "dev": true,
+ "license": "ISC",
+ "optional": true,
+ "bin": {
+ "semver": "bin/semver.js"
+ },
+ "engines": {
+ "node": ">=10"
+ }
+ },
+ "node_modules/sharp": {
+ "version": "0.33.5",
+ "resolved": "https://registry.npmmirror.com/sharp/-/sharp-0.33.5.tgz",
+ "integrity": "sha512-haPVm1EkS9pgvHrQ/F3Xy+hgcuMV0Wm9vfIBSiwZ05k+xgb0PkBQpGsAA/oWdDobNaZTH5ppvHtzCFbnSEwHVw==",
+ "dev": true,
+ "hasInstallScript": true,
+ "license": "Apache-2.0",
+ "optional": true,
+ "dependencies": {
+ "color": "^4.2.3",
+ "detect-libc": "^2.0.3",
+ "semver": "^7.6.3"
+ },
+ "engines": {
+ "node": "^18.17.0 || ^20.3.0 || >=21.0.0"
+ },
+ "funding": {
+ "url": "https://opencollective.com/libvips"
+ },
+ "optionalDependencies": {
+ "@img/sharp-darwin-arm64": "0.33.5",
+ "@img/sharp-darwin-x64": "0.33.5",
+ "@img/sharp-libvips-darwin-arm64": "1.0.4",
+ "@img/sharp-libvips-darwin-x64": "1.0.4",
+ "@img/sharp-libvips-linux-arm": "1.0.5",
+ "@img/sharp-libvips-linux-arm64": "1.0.4",
+ "@img/sharp-libvips-linux-s390x": "1.0.4",
+ "@img/sharp-libvips-linux-x64": "1.0.4",
+ "@img/sharp-libvips-linuxmusl-arm64": "1.0.4",
+ "@img/sharp-libvips-linuxmusl-x64": "1.0.4",
+ "@img/sharp-linux-arm": "0.33.5",
+ "@img/sharp-linux-arm64": "0.33.5",
+ "@img/sharp-linux-s390x": "0.33.5",
+ "@img/sharp-linux-x64": "0.33.5",
+ "@img/sharp-linuxmusl-arm64": "0.33.5",
+ "@img/sharp-linuxmusl-x64": "0.33.5",
+ "@img/sharp-wasm32": "0.33.5",
+ "@img/sharp-win32-ia32": "0.33.5",
+ "@img/sharp-win32-x64": "0.33.5"
+ }
+ },
+ "node_modules/simple-swizzle": {
+ "version": "0.2.2",
+ "resolved": "https://registry.npmmirror.com/simple-swizzle/-/simple-swizzle-0.2.2.tgz",
+ "integrity": "sha512-JA//kQgZtbuY83m+xT+tXJkmJncGMTFT+C+g2h2R9uxkYIrE2yy9sgmcLhCnw57/WSD+Eh3J97FPEDFnbXnDUg==",
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "dependencies": {
+ "is-arrayish": "^0.3.1"
+ }
+ },
+ "node_modules/source-map": {
+ "version": "0.6.1",
+ "resolved": "https://registry.npmmirror.com/source-map/-/source-map-0.6.1.tgz",
+ "integrity": "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==",
+ "dev": true,
+ "license": "BSD-3-Clause",
+ "engines": {
+ "node": ">=0.10.0"
+ }
+ },
+ "node_modules/sourcemap-codec": {
+ "version": "1.4.8",
+ "resolved": "https://registry.npmmirror.com/sourcemap-codec/-/sourcemap-codec-1.4.8.tgz",
+ "integrity": "sha512-9NykojV5Uih4lgo5So5dtw+f0JgJX30KCNI8gwhz2J9A15wD0Ml6tjHKwf6fTSa6fAdVBdZeNOs9eJ71qCk8vA==",
+ "deprecated": "Please use @jridgewell/sourcemap-codec instead",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/split2": {
+ "version": "4.2.0",
+ "resolved": "https://registry.npmmirror.com/split2/-/split2-4.2.0.tgz",
+ "integrity": "sha512-UcjcJOWknrNkF6PLX83qcHM6KHgVKNkV62Y8a5uYDVv9ydGQVwAHMKqHdJje1VTWpljG0WYpCDhrCdAOYH4TWg==",
+ "license": "ISC",
+ "engines": {
+ "node": ">= 10.x"
+ }
+ },
+ "node_modules/stacktracey": {
+ "version": "2.1.8",
+ "resolved": "https://registry.npmmirror.com/stacktracey/-/stacktracey-2.1.8.tgz",
+ "integrity": "sha512-Kpij9riA+UNg7TnphqjH7/CzctQ/owJGNbFkfEeve4Z4uxT5+JapVLFXcsurIfN34gnTWZNJ/f7NMG0E8JDzTw==",
+ "dev": true,
+ "license": "Unlicense",
+ "dependencies": {
+ "as-table": "^1.0.36",
+ "get-source": "^2.0.12"
+ }
+ },
+ "node_modules/stoppable": {
+ "version": "1.1.0",
+ "resolved": "https://registry.npmmirror.com/stoppable/-/stoppable-1.1.0.tgz",
+ "integrity": "sha512-KXDYZ9dszj6bzvnEMRYvxgeTHU74QBFL54XKtP3nyMuJ81CFYtABZ3bAzL2EdFUaEwJOBOgENyFj3R7oTzDyyw==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=4",
+ "npm": ">=6"
+ }
+ },
+ "node_modules/tslib": {
+ "version": "2.8.1",
+ "resolved": "https://registry.npmmirror.com/tslib/-/tslib-2.8.1.tgz",
+ "integrity": "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==",
+ "dev": true,
+ "license": "0BSD",
+ "optional": true
+ },
+ "node_modules/typescript": {
+ "version": "5.9.2",
+ "resolved": "https://registry.npmmirror.com/typescript/-/typescript-5.9.2.tgz",
+ "integrity": "sha512-CWBzXQrc/qOkhidw1OzBTQuYRbfyxDXJMVJ1XNwUHGROVmuaeiEm3OslpZ1RV96d7SKKjZKrSJu3+t/xlw3R9A==",
+ "dev": true,
+ "license": "Apache-2.0",
+ "bin": {
+ "tsc": "bin/tsc",
+ "tsserver": "bin/tsserver"
+ },
+ "engines": {
+ "node": ">=14.17"
+ }
+ },
+ "node_modules/ufo": {
+ "version": "1.6.1",
+ "resolved": "https://registry.npmmirror.com/ufo/-/ufo-1.6.1.tgz",
+ "integrity": "sha512-9a4/uxlTWJ4+a5i0ooc1rU7C7YOw3wT+UGqdeNNHWnOF9qcMBgLRS+4IYUqbczewFx4mLEig6gawh7X6mFlEkA==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/undici": {
+ "version": "5.29.0",
+ "resolved": "https://registry.npmmirror.com/undici/-/undici-5.29.0.tgz",
+ "integrity": "sha512-raqeBD6NQK4SkWhQzeYKd1KmIG6dllBOTt55Rmkt4HtI9mwdWtJljnrXjAFUBLTSN67HWrOIZ3EPF4kjUw80Bg==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@fastify/busboy": "^2.0.0"
+ },
+ "engines": {
+ "node": ">=14.0"
+ }
+ },
+ "node_modules/undici-types": {
+ "version": "6.21.0",
+ "resolved": "https://registry.npmmirror.com/undici-types/-/undici-types-6.21.0.tgz",
+ "integrity": "sha512-iwDZqg0QAGrg9Rav5H4n0M64c3mkR59cJ6wQp+7C4nI0gsmExaedaYLNO44eT4AtBBwjbTiGPMlt2Md0T9H9JQ==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/unenv": {
+ "version": "2.0.0-rc.14",
+ "resolved": "https://registry.npmmirror.com/unenv/-/unenv-2.0.0-rc.14.tgz",
+ "integrity": "sha512-od496pShMen7nOy5VmVJCnq8rptd45vh6Nx/r2iPbrba6pa6p+tS2ywuIHRZ/OBvSbQZB0kWvpO9XBNVFXHD3Q==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "defu": "^6.1.4",
+ "exsolve": "^1.0.1",
+ "ohash": "^2.0.10",
+ "pathe": "^2.0.3",
+ "ufo": "^1.5.4"
+ }
+ },
+ "node_modules/web-streams-polyfill": {
+ "version": "3.3.3",
+ "resolved": "https://registry.npmmirror.com/web-streams-polyfill/-/web-streams-polyfill-3.3.3.tgz",
+ "integrity": "sha512-d2JWLCivmZYTSIoge9MsgFCZrt571BikcWGYkjC1khllbTeDlGqZ2D8vD8E/lJa8WGWbb7Plm8/XJYV7IJHZZw==",
+ "license": "MIT",
+ "engines": {
+ "node": ">= 8"
+ }
+ },
+ "node_modules/workerd": {
+ "version": "1.20250718.0",
+ "resolved": "https://registry.npmmirror.com/workerd/-/workerd-1.20250718.0.tgz",
+ "integrity": "sha512-kqkIJP/eOfDlUyBzU7joBg+tl8aB25gEAGqDap+nFWb+WHhnooxjGHgxPBy3ipw2hnShPFNOQt5lFRxbwALirg==",
+ "dev": true,
+ "hasInstallScript": true,
+ "license": "Apache-2.0",
+ "bin": {
+ "workerd": "bin/workerd"
+ },
+ "engines": {
+ "node": ">=16"
+ },
+ "optionalDependencies": {
+ "@cloudflare/workerd-darwin-64": "1.20250718.0",
+ "@cloudflare/workerd-darwin-arm64": "1.20250718.0",
+ "@cloudflare/workerd-linux-64": "1.20250718.0",
+ "@cloudflare/workerd-linux-arm64": "1.20250718.0",
+ "@cloudflare/workerd-windows-64": "1.20250718.0"
+ }
+ },
+ "node_modules/wrangler": {
+ "version": "3.114.13",
+ "resolved": "https://registry.npmmirror.com/wrangler/-/wrangler-3.114.13.tgz",
+ "integrity": "sha512-bJbKJGTjClEp5XeyjiIKXodHW6j14ZsXuMphjvTZSwkQjGg6QlOol74/44d/u1Uso+hhIzYFg6m/d/1ggxUqWQ==",
+ "dev": true,
+ "license": "MIT OR Apache-2.0",
+ "dependencies": {
+ "@cloudflare/kv-asset-handler": "0.3.4",
+ "@cloudflare/unenv-preset": "2.0.2",
+ "@esbuild-plugins/node-globals-polyfill": "0.2.3",
+ "@esbuild-plugins/node-modules-polyfill": "0.2.2",
+ "blake3-wasm": "2.1.5",
+ "esbuild": "0.17.19",
+ "miniflare": "3.20250718.1",
+ "path-to-regexp": "6.3.0",
+ "unenv": "2.0.0-rc.14",
+ "workerd": "1.20250718.0"
+ },
+ "bin": {
+ "wrangler": "bin/wrangler.js",
+ "wrangler2": "bin/wrangler.js"
+ },
+ "engines": {
+ "node": ">=16.17.0"
+ },
+ "optionalDependencies": {
+ "fsevents": "~2.3.2",
+ "sharp": "^0.33.5"
+ },
+ "peerDependencies": {
+ "@cloudflare/workers-types": "^4.20250408.0"
+ },
+ "peerDependenciesMeta": {
+ "@cloudflare/workers-types": {
+ "optional": true
+ }
+ }
+ },
+ "node_modules/ws": {
+ "version": "8.18.0",
+ "resolved": "https://registry.npmmirror.com/ws/-/ws-8.18.0.tgz",
+ "integrity": "sha512-8VbfWfHLbbwu3+N6OKsOMpBdT4kXPDDB9cJk2bJ6mh9ucxdlnNvH1e+roYkKmN9Nxw2yjz7VzeO9oOz2zJ04Pw==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=10.0.0"
+ },
+ "peerDependencies": {
+ "bufferutil": "^4.0.1",
+ "utf-8-validate": ">=5.0.2"
+ },
+ "peerDependenciesMeta": {
+ "bufferutil": {
+ "optional": true
+ },
+ "utf-8-validate": {
+ "optional": true
+ }
+ }
+ },
+ "node_modules/xtend": {
+ "version": "4.0.2",
+ "resolved": "https://registry.npmmirror.com/xtend/-/xtend-4.0.2.tgz",
+ "integrity": "sha512-LKYU1iAXJXUgAXn9URjiu+MWhyUXHsvfp7mcuYm9dSUKK0/CjtrUwFAxD82/mCWbtLsGjFIad0wIsod4zrTAEQ==",
+ "license": "MIT",
+ "engines": {
+ "node": ">=0.4"
+ }
+ },
+ "node_modules/youch": {
+ "version": "3.3.4",
+ "resolved": "https://registry.npmmirror.com/youch/-/youch-3.3.4.tgz",
+ "integrity": "sha512-UeVBXie8cA35DS6+nBkls68xaBBXCye0CNznrhszZjTbRVnJKQuNsyLKBTTL4ln1o1rh2PKtv35twV7irj5SEg==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "cookie": "^0.7.1",
+ "mustache": "^4.2.0",
+ "stacktracey": "^2.1.8"
+ }
+ },
+ "node_modules/zod": {
+ "version": "3.22.3",
+ "resolved": "https://registry.npmmirror.com/zod/-/zod-3.22.3.tgz",
+ "integrity": "sha512-EjIevzuJRiRPbVH4mGc8nApb/lVLKVpmUhAaR5R5doKGfAnGJ6Gr3CViAVjP+4FWSxCsybeWQdcgCtbX+7oZug==",
+ "dev": true,
+ "license": "MIT",
+ "funding": {
+ "url": "https://github.com/sponsors/colinhacks"
+ }
+ }
+ }
+}
diff --git a/package.json b/package.json
new file mode 100644
index 0000000..feed0e2
--- /dev/null
+++ b/package.json
@@ -0,0 +1,31 @@
+{
+ "name": "hyperdrive-neondb-test",
+ "version": "1.0.0",
+ "description": "Cloudflare Worker to test Hyperdrive connection with NeonDB",
+ "main": "src/index.ts",
+ "scripts": {
+ "dev": "wrangler dev",
+ "deploy": "wrangler deploy",
+ "test": "wrangler dev --local"
+ },
+ "dependencies": {
+ "node-fetch": "^3.3.2",
+ "pg": "^8.11.3"
+ },
+ "devDependencies": {
+ "@cloudflare/workers-types": "^4.20240129.0",
+ "@types/node": "^20.11.16",
+ "@types/pg": "^8.10.9",
+ "typescript": "^5.3.3",
+ "wrangler": "^3.28.2"
+ },
+ "keywords": [
+ "cloudflare",
+ "workers",
+ "hyperdrive",
+ "neondb",
+ "postgresql"
+ ],
+ "author": "",
+ "license": "MIT"
+}
diff --git a/qczh_debate_state.json b/qczh_debate_state.json
new file mode 100644
index 0000000..ed26813
--- /dev/null
+++ b/qczh_debate_state.json
@@ -0,0 +1,39 @@
+{
+ "current_stage": "起",
+ "stage_progress": 4,
+ "total_handoffs": 0,
+ "debate_history": [
+ {
+ "timestamp": "2025-08-10T15:30:47.514243",
+ "stage": "起",
+ "progress": 0,
+ "speaker": "吕洞宾",
+ "message": "起:八仙按先天八卦顺序阐述观点",
+ "handoffs": 0
+ },
+ {
+ "timestamp": "2025-08-10T15:30:47.514260",
+ "stage": "起",
+ "progress": 1,
+ "speaker": "何仙姑",
+ "message": "承:雁阵式承接,总体阐述+讥讽",
+ "handoffs": 0
+ },
+ {
+ "timestamp": "2025-08-10T15:30:47.514272",
+ "stage": "起",
+ "progress": 2,
+ "speaker": "铁拐李",
+ "message": "转:自由辩论,36次handoff",
+ "handoffs": 0
+ },
+ {
+ "timestamp": "2025-08-10T15:30:47.514281",
+ "stage": "起",
+ "progress": 3,
+ "speaker": "汉钟离",
+ "message": "合:交替总结,最终论证",
+ "handoffs": 0
+ }
+ ]
+}
\ No newline at end of file
diff --git a/query-shushu-book.js b/query-shushu-book.js
new file mode 100644
index 0000000..3fd5e92
--- /dev/null
+++ b/query-shushu-book.js
@@ -0,0 +1,138 @@
+// Script for querying Shushu book content
+// Queries the Shushu book data in NeonDB via the Hyperdrive API
+
+const API_BASE_URL = 'https://hyperdrive.seekkey.tech';
+
+// Generic request helper
+async function apiRequest(endpoint, options = {}) {
+ const url = `${API_BASE_URL}${endpoint}`;
+ const headers = {
+ 'Content-Type': 'application/json',
+ ...options.headers
+ };
+
+ try {
+ const response = await fetch(url, {
+ ...options,
+ headers
+ });
+
+ if (!response.ok) {
+ throw new Error(`HTTP ${response.status}: ${response.statusText}`);
+ }
+
+ const contentType = response.headers.get('content-type');
+ if (contentType && contentType.includes('application/json')) {
+ return await response.json();
+ } else {
+ return await response.text();
+ }
+ } catch (error) {
+ console.error(`Request failed for ${endpoint}:`, error.message);
+ throw error;
+ }
+}
+
+// Query the database table schema
+async function queryTables() {
+  console.log('\n📋 Querying database tables...');
+  try {
+    const result = await apiRequest('/query-tables');
+    console.log('✅ Database tables:', result);
+    return result;
+  } catch (error) {
+    console.log('❌ Failed to query tables:', error.message);
+    return null;
+  }
+}
+
+// Query Shushu book content
+async function queryShushuBook(limit = 10) {
+  console.log('\n📚 Querying Shushu book content...');
+  try {
+    const result = await apiRequest(`/query-shushu?limit=${limit}`);
+    console.log('✅ Shushu book content:', JSON.stringify(result, null, 2));
+    return result;
+  } catch (error) {
+    console.log('❌ Failed to query Shushu book:', error.message);
+    return null;
+  }
+}
+
+// Search Shushu book content
+async function searchShushuBook(keyword, limit = 5) {
+  console.log(`\n🔍 Searching Shushu book content: "${keyword}"...`);
+  try {
+    const result = await apiRequest(`/search-shushu?q=${encodeURIComponent(keyword)}&limit=${limit}`);
+    console.log('✅ Search results:', JSON.stringify(result, null, 2));
+    return result;
+  } catch (error) {
+    console.log('❌ Search failed:', error.message);
+    return null;
+  }
+}
+
+// Fetch Shushu book statistics
+async function getShushuStats() {
+  console.log('\n📊 Fetching Shushu book statistics...');
+  try {
+    const result = await apiRequest('/shushu-stats');
+    console.log('✅ Statistics:', JSON.stringify(result, null, 2));
+    return result;
+  } catch (error) {
+    console.log('❌ Failed to fetch statistics:', error.message);
+    return null;
+  }
+}
+
+// Main entry point
+async function main() {
+  console.log('🚀 Shushu Book Query Script');
+  console.log('==================');
+
+  // Test the connection first
+  console.log('\n🔗 Testing Hyperdrive connection...');
+  try {
+    const connectionTest = await apiRequest('/test-connection');
+    console.log('✅ Connected:', connectionTest.message);
+  } catch (error) {
+    console.log('❌ Connection failed:', error.message);
+    return;
+  }
+
+  // Query the table schema
+  await queryTables();
+
+  // Fetch statistics
+  await getShushuStats();
+
+  // Query Shushu book content
+  await queryShushuBook(5);
+
+  // Search examples
+  await searchShushuBook('易经');
+  await searchShushuBook('八卦');
+  await searchShushuBook('太公');
+}
+
+// In a Node.js environment, make sure a global fetch is available.
+// node-fetch v3 is ESM-only, so it cannot be loaded with require();
+// use dynamic import() instead (Node 18+ already provides a global fetch).
+if (typeof window === 'undefined') {
+  // Node.js environment
+  if (typeof fetch === 'undefined') {
+    import('node-fetch').then(({ default: fetch }) => {
+      global.fetch = fetch;
+      main().catch(console.error);
+    });
+  } else {
+    main().catch(console.error);
+  }
+} else {
+  // Browser environment
+  console.log('Run main() in the browser console');
+}
+
+// 导出函数供其他模块使用
+if (typeof module !== 'undefined' && module.exports) {
+ module.exports = {
+ queryTables,
+ queryShushuBook,
+ searchShushuBook,
+ getShushuStats,
+ main
+ };
+}
\ No newline at end of file
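上面的查询函数都依赖一个未在本段展示的 `apiRequest` 辅助函数,并用模板字符串手工拼接查询串(见 `searchShushuBook` 中的 `encodeURIComponent` 调用)。下面是一个独立可测试的拼接示意——`buildQuery` 是本文假设的名字,原脚本中并不存在,仅用于演示参数编码的做法:

```javascript
// 假设的辅助函数:把参数对象编码为查询串(原脚本在 searchShushuBook 中手工拼接)
function buildQuery(path, params = {}) {
  const qs = Object.entries(params)
    .map(([k, v]) => `${encodeURIComponent(k)}=${encodeURIComponent(String(v))}`)
    .join('&');
  return qs ? `${path}?${qs}` : path;
}

// 例:buildQuery('/search-shushu', { q: '易经', limit: 5 })
// → '/search-shushu?q=%E6%98%93%E7%BB%8F&limit=5'
```

把中文关键词交给 `encodeURIComponent` 处理,可以避免直接内插时产生非法 URL。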
diff --git a/requirements.txt b/requirements.txt
index 8b4fee4..be727df 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -13,13 +13,19 @@ plotly>=5.15.0
# HTTP请求
requests>=2.31.0
+aiohttp>=3.8.0
+# Cloudflare (HTTP 调用即可,无需额外 SDK)
+
+
+# RSS解析
+feedparser>=6.0.0
# 类型注解支持
typing-extensions>=4.7.0
-# 数据库连接 (可选)
+# 数据库连接
# sqlalchemy>=2.0.0
-# pymongo>=4.5.0
+pymongo>=4.5.0
# pymilvus>=2.3.0
# 开发工具 (可选)
@@ -28,8 +34,16 @@ typing-extensions>=4.7.0
# flake8>=6.0.0
# AI模型接口
+# 旧系统:OpenRouter + OpenAI Swarm
openai>=1.0.0
# anthropic>=0.3.0
-# OpenAI Swarm (从GitHub安装)
+# 新系统:Google ADK (根据迁移进度选择)
+# pip install google-adk  # 迁移报告中已验证 1.10.0
+# 或开发版: pip install git+https://github.com/google/adk-python.git@main
+
+# Vertex AI Memory Bank 支持
+google-cloud-aiplatform>=1.38.0
+
+# OpenAI Swarm (保留兼容性,逐步替换)
# pip install git+https://github.com/openai/swarm.git
\ No newline at end of file
diff --git a/rss_news_collector.py b/rss_news_collector.py
deleted file mode 100644
index 8c1f9f1..0000000
--- a/rss_news_collector.py
+++ /dev/null
@@ -1,268 +0,0 @@
-#!/usr/bin/env python3
-"""
-RSS新闻收集器
-收集RSS新闻并存储到MongoDB,为辩论系统提供数据源
-"""
-
-import asyncio
-import feedparser
-import json
-import logging
-from datetime import datetime, timezone
-from typing import Dict, List, Any, Optional
-from urllib.parse import urlparse
-import hashlib
-import requests
-from src.mcp.swarm_mongodb_client import SwarmMongoDBClient
-
-class RSSNewsCollector:
- """RSS新闻收集器"""
-
- def __init__(self, mongodb_client: SwarmMongoDBClient):
- self.mongodb_client = mongodb_client
- self.logger = logging.getLogger(__name__)
-
- # 默认RSS源配置
- self.rss_sources = {
- '财经新闻': [
- 'https://feeds.finance.yahoo.com/rss/2.0/headline',
- 'https://www.cnbc.com/id/100003114/device/rss/rss.html',
- 'https://feeds.reuters.com/reuters/businessNews'
- ],
- '科技新闻': [
- 'https://feeds.feedburner.com/TechCrunch',
- 'https://www.wired.com/feed/rss',
- 'https://feeds.arstechnica.com/arstechnica/index'
- ],
- '市场分析': [
- 'https://feeds.marketwatch.com/marketwatch/marketpulse/',
- 'https://feeds.bloomberg.com/markets/news.rss'
- ]
- }
-
- def generate_article_id(self, url: str, title: str) -> str:
- """生成文章唯一ID"""
- content = f"{url}_{title}"
- return hashlib.md5(content.encode()).hexdigest()
-
- def parse_rss_feed(self, rss_url: str) -> List[Dict[str, Any]]:
- """解析RSS源"""
- try:
- feed = feedparser.parse(rss_url)
- articles = []
-
- for entry in feed.entries:
- # 提取文章信息
- article = {
- 'article_id': self.generate_article_id(entry.link, entry.title),
- 'title': entry.title,
- 'link': entry.link,
- 'description': getattr(entry, 'description', ''),
- 'summary': getattr(entry, 'summary', ''),
- 'published': self._parse_date(getattr(entry, 'published', '')),
- 'author': getattr(entry, 'author', ''),
- 'tags': [tag.term for tag in getattr(entry, 'tags', [])],
- 'source_url': rss_url,
- 'source_title': feed.feed.get('title', ''),
- 'collected_at': datetime.now(timezone.utc),
- 'content_hash': hashlib.md5(entry.title.encode()).hexdigest()
- }
- articles.append(article)
-
- return articles
-
- except Exception as e:
- self.logger.error(f"解析RSS源失败 {rss_url}: {e}")
- return []
-
- def _parse_date(self, date_str: str) -> Optional[datetime]:
- """解析日期字符串"""
- if not date_str:
- return None
-
- try:
- # feedparser通常会解析时间
- import time
- parsed_time = feedparser._parse_date(date_str)
- if parsed_time:
- return datetime.fromtimestamp(time.mktime(parsed_time), tz=timezone.utc)
- except:
- pass
-
- return datetime.now(timezone.utc)
-
- async def collect_news_from_category(self, category: str) -> List[Dict[str, Any]]:
- """从指定类别收集新闻"""
- if category not in self.rss_sources:
- self.logger.warning(f"未知新闻类别: {category}")
- return []
-
- all_articles = []
- for rss_url in self.rss_sources[category]:
- self.logger.info(f"正在收集新闻: {rss_url}")
- articles = self.parse_rss_feed(rss_url)
-
- # 添加类别标签
- for article in articles:
- article['category'] = category
-
- all_articles.extend(articles)
-
- return all_articles
-
- async def collect_all_news(self) -> Dict[str, List[Dict[str, Any]]]:
- """收集所有类别的新闻"""
- all_news = {}
-
- for category in self.rss_sources.keys():
- news = await self.collect_news_from_category(category)
- all_news[category] = news
- self.logger.info(f"收集到 {len(news)} 条 {category} 新闻")
-
- return all_news
-
- async def store_news_to_mongodb(self, articles: List[Dict[str, Any]], collection_name: str = "news_articles") -> Dict[str, Any]:
- """将新闻存储到MongoDB"""
- if not articles:
- return {'success': True, 'inserted_count': 0, 'updated_count': 0}
-
- inserted_count = 0
- updated_count = 0
-
- for article in articles:
- # 检查文章是否已存在
- existing = self.mongodb_client.find_documents(
- collection_name,
- query={'article_id': article['article_id']},
- limit=1
- )
-
- if existing.get('success') and existing.get('documents'):
- # 更新现有文章
- update_result = self.mongodb_client.update_document(
- collection_name,
- query={'article_id': article['article_id']},
- update={'$set': article}
- )
- if update_result.get('success'):
- updated_count += 1
- else:
- # 插入新文章
- insert_result = self.mongodb_client.insert_document(
- collection_name,
- document=article
- )
- if insert_result.get('success'):
- inserted_count += 1
-
- return {
- 'success': True,
- 'inserted_count': inserted_count,
- 'updated_count': updated_count,
- 'total_processed': len(articles)
- }
-
- async def get_latest_news(self, category: Optional[str] = None, limit: int = 10) -> List[Dict[str, Any]]:
- """获取最新新闻"""
- query = {}
- if category:
- query['category'] = category
-
- result = self.mongodb_client.find_documents(
- 'news_articles',
- query=query,
- sort={'collected_at': -1},
- limit=limit
- )
-
- if result.get('success'):
- return result.get('documents', [])
- return []
-
- async def get_news_for_debate(self, topic_keywords: List[str], limit: int = 5) -> List[Dict[str, Any]]:
- """根据关键词获取相关新闻用于辩论"""
- # 构建搜索查询
- search_conditions = []
- for keyword in topic_keywords:
- search_conditions.extend([
- {'title': {'$regex': keyword, '$options': 'i'}},
- {'description': {'$regex': keyword, '$options': 'i'}},
- {'summary': {'$regex': keyword, '$options': 'i'}}
- ])
-
- query = {'$or': search_conditions} if search_conditions else {}
-
- result = self.mongodb_client.find_documents(
- 'news_articles',
- query=query,
- sort={'published': -1},
- limit=limit
- )
-
- if result.get('success'):
- return result.get('documents', [])
- return []
-
- async def run_collection_cycle(self):
- """运行一次完整的新闻收集周期"""
- self.logger.info("开始新闻收集周期")
-
- # 收集所有新闻
- all_news = await self.collect_all_news()
-
- # 存储到数据库
- total_inserted = 0
- total_updated = 0
-
- for category, articles in all_news.items():
- if articles:
- result = await self.store_news_to_mongodb(articles)
- total_inserted += result.get('inserted_count', 0)
- total_updated += result.get('updated_count', 0)
- self.logger.info(f"{category}: 新增 {result.get('inserted_count', 0)}, 更新 {result.get('updated_count', 0)}")
-
- self.logger.info(f"新闻收集完成: 总新增 {total_inserted}, 总更新 {total_updated}")
-
- return {
- 'success': True,
- 'total_inserted': total_inserted,
- 'total_updated': total_updated,
- 'categories_processed': len(all_news)
- }
-
-async def main():
- """主函数 - 演示RSS新闻收集"""
- # 初始化MongoDB客户端
- mongodb_client = SwarmMongoDBClient(
- mcp_server_url="http://localhost:8080",
- default_database="news_debate_db"
- )
-
- # 连接数据库
- connect_result = mongodb_client.connect("news_debate_db")
- if not connect_result.get('success'):
- print(f"数据库连接失败: {connect_result}")
- return
-
- # 创建新闻收集器
- collector = RSSNewsCollector(mongodb_client)
-
- # 运行收集周期
- result = await collector.run_collection_cycle()
- print(f"收集结果: {result}")
-
- # 获取最新新闻示例
- latest_news = await collector.get_latest_news(limit=5)
- print(f"\n最新新闻 ({len(latest_news)} 条):")
- for news in latest_news:
- print(f"- {news.get('title', 'N/A')} [{news.get('category', 'N/A')}]")
-
- # 根据关键词搜索新闻示例
- debate_news = await collector.get_news_for_debate(['投资', '市场', '经济'], limit=3)
- print(f"\n辩论相关新闻 ({len(debate_news)} 条):")
- for news in debate_news:
- print(f"- {news.get('title', 'N/A')} [{news.get('category', 'N/A')}]")
-
-if __name__ == "__main__":
- logging.basicConfig(level=logging.INFO)
- asyncio.run(main())
\ No newline at end of file
diff --git a/scripts/add_sequence_ids.py b/scripts/add_sequence_ids.py
deleted file mode 100644
index ba274b7..0000000
--- a/scripts/add_sequence_ids.py
+++ /dev/null
@@ -1,63 +0,0 @@
-#!/usr/bin/env python3
-"""
-为现有文章添加流水号
-"""
-
-import os
-from pymongo import MongoClient
-
-def add_sequence_ids():
- """为现有文章添加流水号"""
- # 连接MongoDB
- mongo_uri = os.getenv('MONGODB_URI')
- if not mongo_uri:
- raise ValueError("MONGODB_URI environment variable is required")
- client = MongoClient(mongo_uri)
- db = client['taigong']
- collection = db['articles']
-
- print("开始为现有文章添加流水号...")
-
- # 查找所有没有sequence_id的文章
- articles_without_seq = list(collection.find(
- {"sequence_id": {"$exists": False}},
- {"_id": 1, "title": 1, "created_at": 1}
- ).sort("created_at", 1)) # 按创建时间排序
-
- print(f"找到 {len(articles_without_seq)} 篇文章需要添加流水号")
-
- if len(articles_without_seq) == 0:
- print("所有文章都已有流水号")
- return
-
- # 从1开始分配流水号
- for i, article in enumerate(articles_without_seq, 1):
- sequence_id = i
- article_id = f"NEWS_{sequence_id:08d}" # NEWS_00000001 格式
-
- collection.update_one(
- {"_id": article["_id"]},
- {
- "$set": {
- "sequence_id": sequence_id,
- "article_id": article_id,
- "batch_id": "migration_batch",
- "last_updated": "2025-02-08T00:00:00Z"
- }
- }
- )
-
- print(f" {sequence_id:3d}: {article['title'][:50]}...")
-
- print(f"流水号添加完成,共处理 {len(articles_without_seq)} 篇文章")
-
- # 验证结果
- total_with_seq = collection.count_documents({"sequence_id": {"$exists": True}})
- max_seq = collection.find_one({}, sort=[("sequence_id", -1)])
-
- print(f"验证结果:")
- print(f" 有流水号的文章: {total_with_seq} 篇")
- print(f" 最大流水号: {max_seq['sequence_id'] if max_seq else 0}")
-
-if __name__ == "__main__":
- add_sequence_ids()
\ No newline at end of file
diff --git a/scripts/cleanup_duplicates.py b/scripts/cleanup_duplicates.py
deleted file mode 100644
index 51601cb..0000000
--- a/scripts/cleanup_duplicates.py
+++ /dev/null
@@ -1,107 +0,0 @@
-#!/usr/bin/env python3
-"""
-清理MongoDB中的重复文章数据
-"""
-
-import os
-import sys
-from pymongo import MongoClient
-from collections import defaultdict
-import hashlib
-
-def generate_stable_id(title, pub_date, content):
- """生成稳定的文章ID"""
- normalized_title = title.strip().lower()
- content_hash = content[:100] if content else ''
- date_str = pub_date or ''
-
- combined = f"{normalized_title}|{date_str}|{content_hash}"
- return hashlib.md5(combined.encode()).hexdigest()[:16]
-
-def cleanup_duplicates():
- """清理重复数据"""
- # 连接MongoDB
- mongo_uri = os.getenv('MONGODB_URI')
- if not mongo_uri:
- raise ValueError("MONGODB_URI environment variable is required")
- client = MongoClient(mongo_uri)
- db = client['taigong']
- collection = db['articles']
-
- print("开始清理重复数据...")
-
- # 1. 查找所有文章
- articles = list(collection.find({}))
- print(f"总共找到 {len(articles)} 篇文章")
-
- # 2. 按标题分组,找出重复项
- title_groups = defaultdict(list)
- for article in articles:
- title_groups[article['title']].append(article)
-
- # 3. 处理重复项
- duplicates_removed = 0
- articles_updated = 0
-
- for title, group in title_groups.items():
- if len(group) > 1:
- print(f"发现重复标题: {title} ({len(group)} 篇)")
-
- # 保留最早的一篇,删除其他
- group.sort(key=lambda x: x.get('created_at', ''))
- keep_article = group[0]
-
- # 更新保留文章的ID为稳定ID
- stable_id = generate_stable_id(
- keep_article['title'],
- keep_article.get('published_time', ''),
- keep_article.get('content', '')
- )
-
- collection.update_one(
- {'_id': keep_article['_id']},
- {
- '$set': {
- 'article_id': stable_id,
- 'content_hash': generate_stable_id(keep_article.get('content', ''), '', ''),
- 'last_updated': '2025-02-08T00:00:00Z'
- }
- }
- )
- articles_updated += 1
-
- # 删除重复项
- for duplicate in group[1:]:
- collection.delete_one({'_id': duplicate['_id']})
- duplicates_removed += 1
- print(f" 删除重复项: {duplicate.get('article_id', 'unknown')}")
-
- # 4. 为没有重复的文章更新ID
- single_articles = [group[0] for group in title_groups.values() if len(group) == 1]
- for article in single_articles:
- if not article.get('article_id') or len(article.get('article_id', '')) > 20:
- stable_id = generate_stable_id(
- article['title'],
- article.get('published_time', ''),
- article.get('content', '')
- )
-
- collection.update_one(
- {'_id': article['_id']},
- {
- '$set': {
- 'article_id': stable_id,
- 'content_hash': generate_stable_id(article.get('content', ''), '', ''),
- 'last_updated': '2025-02-08T00:00:00Z'
- }
- }
- )
- articles_updated += 1
-
- print(f"清理完成:")
- print(f" 删除重复文章: {duplicates_removed} 篇")
- print(f" 更新文章ID: {articles_updated} 篇")
- print(f" 最终文章数: {collection.count_documents({})} 篇")
-
-if __name__ == "__main__":
- cleanup_duplicates()
\ No newline at end of file
diff --git a/scripts/create_vector_index.js b/scripts/create_vector_index.js
deleted file mode 100644
index 9cd7cba..0000000
--- a/scripts/create_vector_index.js
+++ /dev/null
@@ -1,35 +0,0 @@
-// MongoDB Atlas Vector Search Index Creation Script
-// 为swarm辩论系统创建向量索引
-
-// 连接到数据库
-use('taigong');
-
-// 创建向量索引用于语义搜索和内容聚类
-// 这个索引将支持swarm辩论系统的语义相似性匹配
-db.articles.createSearchIndex(
- "vector_search_index",
- {
- "fields": [
- {
- "type": "vector",
- "path": "embedding",
- "numDimensions": 1536, // OpenAI text-embedding-ada-002 维度
- "similarity": "cosine"
- },
- {
- "type": "filter",
- "path": "published_time"
- },
- {
- "type": "filter",
- "path": "title"
- }
- ]
- }
-);
-
-print("向量索引创建完成!");
-print("索引名称: vector_search_index");
-print("向量维度: 1536 (OpenAI embedding)");
-print("相似性算法: cosine");
-print("支持过滤字段: published_time, title");
\ No newline at end of file
diff --git a/scripts/generate_embeddings.py b/scripts/generate_embeddings.py
deleted file mode 100644
index 67ceae3..0000000
--- a/scripts/generate_embeddings.py
+++ /dev/null
@@ -1,75 +0,0 @@
-#!/usr/bin/env python3
-"""
-为MongoDB中的文章生成向量embeddings
-用于swarm辩论系统的语义搜索和内容聚类
-"""
-
-import os
-import openai
-from pymongo import MongoClient
-from typing import List, Dict
-import time
-
-def get_mongodb_client():
- """从Doppler获取MongoDB连接"""
- mongodb_uri = os.getenv('MONGODB_URI')
- if not mongodb_uri:
- raise ValueError("MONGODB_URI not found in environment variables")
- return MongoClient(mongodb_uri)
-
-def generate_embedding(text: str) -> List[float]:
- """使用OpenAI API生成文本embedding"""
- try:
- response = openai.Embedding.create(
- model="text-embedding-ada-002",
- input=text
- )
- return response['data'][0]['embedding']
- except Exception as e:
- print(f"生成embedding失败: {e}")
- return None
-
-def update_articles_with_embeddings():
- """为所有文章添加embedding字段"""
- client = get_mongodb_client()
- db = client.taigong
- collection = db.articles
-
- # 获取所有没有embedding的文章
- articles = collection.find({"embedding": {"$exists": False}})
-
- count = 0
- for article in articles:
- title = article.get('title', '')
- if not title:
- continue
-
- print(f"处理文章: {title[:50]}...")
-
- # 生成embedding
- embedding = generate_embedding(title)
- if embedding:
- # 更新文档
- collection.update_one(
- {"_id": article["_id"]},
- {"$set": {"embedding": embedding}}
- )
- count += 1
- print(f"✓ 已更新 {count} 篇文章")
-
- # 避免API rate limit
- time.sleep(0.1)
- else:
- print(f"× 跳过文章: {title[:50]}")
-
- print(f"\n完成!共处理 {count} 篇文章")
- client.close()
-
-if __name__ == "__main__":
- # 设置OpenAI API密钥 (应该从Doppler获取)
- openai.api_key = os.getenv('OPENAI_API_KEY')
- if not openai.api_key:
- print("警告: OPENAI_API_KEY 未设置,请先在Doppler中配置")
- exit(1)
-
- update_articles_with_embeddings()
\ No newline at end of file
diff --git a/scripts/install_swarm.py b/scripts/install_swarm.py
deleted file mode 100644
index 5f44312..0000000
--- a/scripts/install_swarm.py
+++ /dev/null
@@ -1,73 +0,0 @@
-#!/usr/bin/env python3
-"""
-安装OpenAI Swarm的脚本
-"""
-
-import subprocess
-import sys
-
-def install_swarm():
- """安装OpenAI Swarm"""
- print("🚀 正在安装OpenAI Swarm...")
-
- try:
- # 安装Swarm
- result = subprocess.run([
- sys.executable, "-m", "pip", "install",
- "git+https://github.com/openai/swarm.git"
- ], check=True, capture_output=True, text=True)
-
- print("✅ OpenAI Swarm安装成功!")
- print(result.stdout)
-
- # 验证安装
- try:
- import swarm
- print("✅ Swarm导入测试成功")
- print(f"📦 Swarm版本: {getattr(swarm, '__version__', '未知')}")
- except ImportError as e:
- print(f"❌ Swarm导入失败: {e}")
- return False
-
- return True
-
- except subprocess.CalledProcessError as e:
- print(f"❌ 安装失败: {e}")
- print(f"错误输出: {e.stderr}")
- return False
- except Exception as e:
- print(f"❌ 未知错误: {e}")
- return False
-
-def main():
- """主函数"""
- print("🏛️ 稷下学宫Swarm环境安装")
- print("=" * 40)
-
- # 检查是否已安装
- try:
- import swarm
- print("✅ OpenAI Swarm已安装")
- print(f"📦 版本: {getattr(swarm, '__version__', '未知')}")
-
- choice = input("是否重新安装?(y/N): ").strip().lower()
- if choice not in ['y', 'yes']:
- print("🎉 安装检查完成")
- return
- except ImportError:
- print("📦 OpenAI Swarm未安装,开始安装...")
-
- # 安装Swarm
- success = install_swarm()
-
- if success:
- print("\n🎉 安装完成!现在可以使用Swarm八仙论道了")
- print("💡 使用方法:")
- print(" python src/jixia/debates/swarm_debate.py")
- print(" 或在Streamlit应用中选择'Swarm模式'")
- else:
- print("\n❌ 安装失败,请手动安装:")
- print(" pip install git+https://github.com/openai/swarm.git")
-
-if __name__ == "__main__":
- main()
\ No newline at end of file
diff --git a/scripts/n8n_combined_dedup_insert.js b/scripts/n8n_combined_dedup_insert.js
deleted file mode 100644
index a63c83e..0000000
--- a/scripts/n8n_combined_dedup_insert.js
+++ /dev/null
@@ -1,148 +0,0 @@
-const items = $input.all();
-const results = [];
-
-// 改进的哈希函数 - 基于内容生成稳定的ID
-function generateStableId(title, pubDate, content) {
- const normalizedTitle = title.trim().toLowerCase();
- const contentHash = content ? content.substring(0, 100) : '';
- const dateStr = pubDate || '';
-
- const combined = normalizedTitle + '|' + dateStr + '|' + contentHash;
-
- let hash = 0;
- for (let i = 0; i < combined.length; i++) {
- const char = combined.charCodeAt(i);
- hash = ((hash << 5) - hash) + char;
- hash = hash & hash;
- }
- return Math.abs(hash).toString(16);
-}
-
-console.log(`开始处理 ${items.length} 条RSS数据`);
-
-// 用于本次执行内去重
-const processedInThisRun = new Set();
-
-// 处理每个RSS项目
-for (const item of items) {
- const data = item.json;
-
- // 跳过无效数据
- if (!data.title) {
- console.log('跳过无标题数据');
- continue;
- }
-
- // 生成稳定的文章ID
- const stableId = generateStableId(
- data.title,
- data.isoDate || data.pubDate,
- data['content:encodedSnippet'] || data.contentSnippet || ''
- );
-
- // 生成内容哈希
- const contentHash = generateStableId(
- data['content:encodedSnippet'] || data.contentSnippet || '',
- '',
- ''
- );
-
- // 准备文章数据
- const articleData = {
- article_id: stableId,
- title: data.title,
- content: data['content:encodedSnippet'] || data.contentSnippet || '',
- content_hash: contentHash,
- published_time: data.isoDate || data.pubDate || new Date().toISOString(),
- source_url: data.link || '',
- processed: false,
- created_at: new Date().toISOString(),
- last_updated: new Date().toISOString()
- };
-
- try {
- // 使用upsert操作,避免重复插入
- const result = await mongoClient.db('taigong').collection('articles').updateOne(
- {
- $or: [
- { article_id: stableId },
- { title: data.title }
- ]
- },
- {
- $setOnInsert: {
- article_id: stableId,
- title: data.title,
- content: articleData.content,
- content_hash: contentHash,
- published_time: articleData.published_time,
- source_url: articleData.source_url,
- processed: false,
- created_at: articleData.created_at
- },
- $set: {
- last_updated: new Date().toISOString()
- }
- },
- { upsert: true }
- );
-
- if (result.upsertedCount > 0) {
- console.log('✅ 新增文章:', data.title);
- results.push({
- json: {
- action: 'inserted',
- article_id: stableId,
- title: data.title,
- status: 'success'
- }
- });
- } else if (result.modifiedCount > 0) {
- console.log('🔄 更新文章:', data.title);
- results.push({
- json: {
- action: 'updated',
- article_id: stableId,
- title: data.title,
- status: 'success'
- }
- });
- } else {
- console.log('⏭️ 文章已存在,跳过:', data.title);
- }
-
- } catch (error) {
- console.error('❌ 处理文章失败:', data.title, error.message);
- results.push({
- json: {
- action: 'error',
- title: data.title,
- error: error.message,
- status: 'failed'
- }
- });
- }
-}
-
-console.log(`处理完成: 原始${items.length}条, 成功处理${results.length}条`);
-
-// 统计结果
-const stats = results.reduce((acc, item) => {
- acc[item.json.action] = (acc[item.json.action] || 0) + 1;
- return acc;
-}, {});
-
-console.log('处理统计:', stats);
-
-// 如果没有任何结果,返回一个空的成功状态
-if (results.length === 0) {
- return [{
- json: {
- message: '没有新数据需要处理',
- total_processed: items.length,
- status: 'completed'
- }
- }];
-}
-
-return results;
\ No newline at end of file
diff --git a/scripts/n8n_deduplication_fix.js b/scripts/n8n_deduplication_fix.js
deleted file mode 100644
index b7832c7..0000000
--- a/scripts/n8n_deduplication_fix.js
+++ /dev/null
@@ -1,85 +0,0 @@
-const items = $input.all();
-const processedItems = [];
-
-// 改进的哈希函数 - 基于内容生成稳定的ID
-function generateStableId(title, pubDate, content) {
- const normalizedTitle = title.trim().toLowerCase();
- const contentHash = content ? content.substring(0, 100) : '';
- const dateStr = pubDate || '';
-
- const combined = normalizedTitle + '|' + dateStr + '|' + contentHash;
-
- let hash = 0;
- for (let i = 0; i < combined.length; i++) {
- const char = combined.charCodeAt(i);
- hash = ((hash << 5) - hash) + char;
- hash = hash & hash;
- }
- return Math.abs(hash).toString(16);
-}
-
-// 1. 从数据库查询已存在的文章ID和标题
-const existingArticles = new Set();
-try {
- const existing = await mongoClient.db('taigong').collection('articles')
- .find({}, { projection: { article_id: 1, title: 1, content_hash: 1 } })
- .toArray();
-
- existing.forEach(article => {
- existingArticles.add(article.article_id);
- // 同时用标题做备用检查
- existingArticles.add(article.title);
- });
-
- console.log(`数据库中已有 ${existing.length} 篇文章`);
-} catch (error) {
- console.log('查询现有文章失败:', error);
-}
-
-// 2. 处理新数据
-for (const item of items) {
- const data = item.json;
-
- // 跳过无效数据
- if (!data.title) continue;
-
- // 生成稳定的文章ID
- const stableId = generateStableId(
- data.title,
- data.isoDate || data.pubDate,
- data['content:encodedSnippet'] || data.contentSnippet || ''
- );
-
- // 检查是否已存在(用ID和标题双重检查)
- if (existingArticles.has(stableId) || existingArticles.has(data.title)) {
- console.log('跳过重复文章:', data.title);
- continue;
- }
-
- // 生成内容哈希用于后续去重检查
- const contentHash = generateStableId(
- data['content:encodedSnippet'] || data.contentSnippet || '',
- '',
- ''
- );
-
- const processedItem = {
- article_id: stableId, // 使用稳定ID
- title: data.title,
- content: data['content:encodedSnippet'] || data.contentSnippet || '',
- content_hash: contentHash, // 新增:内容哈希
- published_time: data.isoDate || data.pubDate || new Date().toISOString(),
- source_url: data.link || '', // 新增:源链接
- processed: false,
- created_at: new Date().toISOString(),
- last_updated: new Date().toISOString() // 新增:更新时间
- };
-
- processedItems.push({ json: processedItem });
- // 添加到已存在集合,避免本次执行内重复
- existingArticles.add(stableId);
- existingArticles.add(data.title);
-}
-
-console.log(`处理完成: 原始${items.length}条, 去重后${processedItems.length}条`);
-return processedItems;
\ No newline at end of file
diff --git a/scripts/n8n_direct_insert.js b/scripts/n8n_direct_insert.js
deleted file mode 100644
index db3bc03..0000000
--- a/scripts/n8n_direct_insert.js
+++ /dev/null
@@ -1,85 +0,0 @@
-const items = $input.all();
-const results = [];
-
-// 如果没有数据需要插入
-if (items.length === 0 || (items.length === 1 && items[0].json.status === 'no_new_data')) {
- console.log('没有新数据需要插入');
- return items;
-}
-
-console.log(`准备插入 ${items.length} 条新文章`);
-
-// 准备批量插入的数据
-const documentsToInsert = items.map(item => item.json);
-
-try {
- // 批量插入,因为已经确保了唯一性,所以直接插入
- const result = await mongoClient.db('taigong').collection('articles').insertMany(
- documentsToInsert,
- { ordered: false } // 即使某条失败也继续插入其他的
- );
-
- console.log(`✅ 成功插入 ${result.insertedCount} 条文章`);
-
- // 返回插入结果
- for (let i = 0; i < documentsToInsert.length; i++) {
- const doc = documentsToInsert[i];
- const insertedId = result.insertedIds[i];
-
- results.push({
- json: {
- action: 'inserted',
- sequence_id: doc.sequence_id,
- article_id: doc.article_id,
- title: doc.title,
- mongodb_id: insertedId,
- status: 'success'
- }
- });
- }
-
-} catch (error) {
- console.error('❌ 批量插入失败:', error.message);
-
- // 如果批量插入失败,尝试逐条插入
- console.log('尝试逐条插入...');
-
- for (const doc of documentsToInsert) {
- try {
- const result = await mongoClient.db('taigong').collection('articles').insertOne(doc);
-
- console.log(`✅ 单条插入成功: ${doc.article_id}`);
- results.push({
- json: {
- action: 'inserted',
- sequence_id: doc.sequence_id,
- article_id: doc.article_id,
- title: doc.title,
- mongodb_id: result.insertedId,
- status: 'success'
- }
- });
-
- } catch (singleError) {
- console.error(`❌ 单条插入失败 ${doc.article_id}:`, singleError.message);
- results.push({
- json: {
- action: 'error',
- sequence_id: doc.sequence_id,
- article_id: doc.article_id,
- title: doc.title,
- error: singleError.message,
- status: 'failed'
- }
- });
- }
- }
-}
-
-// 统计结果
-const successCount = results.filter(r => r.json.status === 'success').length;
-const failCount = results.filter(r => r.json.status === 'failed').length;
-
-console.log(`插入完成: 成功 ${successCount} 条, 失败 ${failCount} 条`);
-
-return results;
\ No newline at end of file
diff --git a/scripts/n8n_minimal_news.js b/scripts/n8n_minimal_news.js
deleted file mode 100644
index 9de8b97..0000000
--- a/scripts/n8n_minimal_news.js
+++ /dev/null
@@ -1,39 +0,0 @@
-const items = $input.all();
-
-console.log(`原始数据: ${items.length} 条`);
-
-// 本批次内去重
-const seenTitles = new Set();
-const uniqueItems = [];
-
-// 生成起始ID(基于时间戳,确保每次运行都不同)
-let nextId = Math.floor(Date.now() / 1000);
-
-for (const item of items) {
- const data = item.json;
-
- // 跳过无效数据
- if (!data.title) continue;
-
- // 本批次内去重
- if (seenTitles.has(data.title)) {
- console.log('⏭️ 本批次重复,跳过:', data.title);
- continue;
- }
-
- const newsItem = {
- id: nextId,
- title: data.title,
- published_time: data.isoDate || data.pubDate || new Date().toISOString(),
- source_url: data.link || ''
- };
-
- uniqueItems.push({ json: newsItem });
- seenTitles.add(data.title);
-
- console.log(`✅ ID ${nextId}: ${data.title}`);
- nextId++;
-}
-
-console.log(`本批次去重后: ${uniqueItems.length} 条`);
-return uniqueItems;
\ No newline at end of file
diff --git a/scripts/n8n_safe_insert.js b/scripts/n8n_safe_insert.js
deleted file mode 100644
index 02e8c98..0000000
--- a/scripts/n8n_safe_insert.js
+++ /dev/null
@@ -1,54 +0,0 @@
-// n8n MongoDB插入节点代码
-const items = $input.all();
-const results = [];
-
-for (const item of items) {
- const data = item.json;
-
- try {
- // 使用upsert操作,避免重复插入
- const result = await mongoClient.db('taigong').collection('articles').updateOne(
- {
- $or: [
- { article_id: data.article_id },
- { title: data.title }
- ]
- },
- {
- $setOnInsert: {
- article_id: data.article_id,
- title: data.title,
- content: data.content,
- content_hash: data.content_hash,
- published_time: data.published_time,
- source_url: data.source_url,
- processed: data.processed,
- created_at: data.created_at
- },
- $set: {
- last_updated: new Date().toISOString()
- }
- },
- { upsert: true }
- );
-
- if (result.upsertedCount > 0) {
- console.log('新增文章:', data.title);
- results.push({
- json: {
- action: 'inserted',
- article_id: data.article_id,
- title: data.title
- }
- });
- } else {
- console.log('文章已存在,跳过:', data.title);
- }
-
- } catch (error) {
- console.error('插入文章失败:', data.title, error);
- }
-}
-
-console.log(`成功处理 ${results.length} 篇新文章`);
-return results;
\ No newline at end of file
diff --git a/scripts/n8n_sequential_id_system.js b/scripts/n8n_sequential_id_system.js
deleted file mode 100644
index 466ade0..0000000
--- a/scripts/n8n_sequential_id_system.js
+++ /dev/null
@@ -1,119 +0,0 @@
-const items = $input.all();
-const processedItems = [];
-
-// 获取当前最大流水号
-async function getCurrentMaxId() {
- try {
- const result = await mongoClient.db('taigong').collection('articles')
- .findOne({}, {
- sort: { sequence_id: -1 },
- projection: { sequence_id: 1 }
- });
-
- return result ? result.sequence_id : 0;
- } catch (error) {
- console.log('获取最大流水号失败,从1开始:', error.message);
- return 0;
- }
-}
-
-// 获取已存在的文章标题集合(用于去重检查)
-async function getExistingTitles() {
- try {
- const existing = await mongoClient.db('taigong').collection('articles')
- .find({}, { projection: { title: 1 } })
- .toArray();
-
- return new Set(existing.map(doc => doc.title));
- } catch (error) {
- console.log('获取已存在标题失败:', error.message);
- return new Set();
- }
-}
-
-// 生成内容哈希(用于内容变化检测)
-function generateContentHash(content) {
- if (!content) return '';
-
- let hash = 0;
- const str = content.substring(0, 200); // 取前200字符
- for (let i = 0; i < str.length; i++) {
- const char = str.charCodeAt(i);
- hash = ((hash << 5) - hash) + char;
- hash = hash & hash;
- }
- return Math.abs(hash).toString(16);
-}
-
-console.log(`开始处理 ${items.length} 条RSS数据`);
-
-// 1. 获取当前最大流水号
-const currentMaxId = await getCurrentMaxId();
-console.log(`当前数据库最大流水号: ${currentMaxId}`);
-
-// 2. 获取已存在的文章标题
-const existingTitles = await getExistingTitles();
-console.log(`数据库中已有 ${existingTitles.size} 篇文章`);
-
-// 3. 处理新数据,分配流水号
-let nextSequenceId = currentMaxId + 1;
-const seenTitlesInBatch = new Set(); // 本批次内去重
-
-for (const item of items) {
- const data = item.json;
-
- // 跳过无效数据
- if (!data.title) {
- console.log('跳过无标题数据');
- continue;
- }
-
- // 检查是否已存在(数据库 + 本批次)
- if (existingTitles.has(data.title) || seenTitlesInBatch.has(data.title)) {
- console.log('⏭️ 跳过重复文章:', data.title);
- continue;
- }
-
- // 分配新的流水号
- const sequenceId = nextSequenceId++;
-
- // 生成文章数据
- const articleData = {
- sequence_id: sequenceId, // 主键:流水号
- article_id: `NEWS_${sequenceId.toString().padStart(8, '0')}`, // 格式化ID:NEWS_00000001
- title: data.title,
- content: data['content:encodedSnippet'] || data.contentSnippet || '',
- content_hash: generateContentHash(data['content:encodedSnippet'] || data.contentSnippet || ''),
- published_time: data.isoDate || data.pubDate || new Date().toISOString(),
- source_url: data.link || '',
- rss_source: data.meta?.title || 'unknown', // RSS源名称
- processed: false,
- created_at: new Date().toISOString(),
- batch_id: Date.now().toString() // 批次ID,用于追踪
- };
-
- processedItems.push({ json: articleData });
- seenTitlesInBatch.add(data.title);
-
- console.log(`✅ 分配流水号 ${sequenceId}: ${data.title}`);
-}
-
-console.log(`流水号分配完成:`);
-console.log(` 原始数据: ${items.length} 条`);
-console.log(` 跳过重复: ${items.length - processedItems.length} 条`);
-console.log(` 新增数据: ${processedItems.length} 条`);
-console.log(` 流水号范围: ${currentMaxId + 1} - ${nextSequenceId - 1}`);
-
-// 如果没有新数据,返回空结果
-if (processedItems.length === 0) {
- return [{
- json: {
- message: '没有新数据需要处理',
- current_max_id: currentMaxId,
- total_articles_in_db: existingTitles.size,
- status: 'no_new_data'
- }
- }];
-}
-
-return processedItems;
\ No newline at end of file
diff --git a/scripts/n8n_simple_dedup.js b/scripts/n8n_simple_dedup.js
deleted file mode 100644
index 556456b..0000000
--- a/scripts/n8n_simple_dedup.js
+++ /dev/null
@@ -1,52 +0,0 @@
-const items = $input.all();
-
-// 简单哈希函数
-function simpleHash(str) {
- let hash = 0;
- for (let i = 0; i < str.length; i++) {
- const char = str.charCodeAt(i);
- hash = ((hash << 5) - hash) + char;
- hash = hash & hash;
- }
- return Math.abs(hash).toString(16);
-}
-
-console.log(`原始数据: ${items.length} 条`);
-
-// 用标题去重
-const seenTitles = new Set();
-const uniqueItems = [];
-
-for (const item of items) {
- const data = item.json;
-
- // 跳过无效数据
- if (!data.title) continue;
-
- // 本批次内去重
- if (seenTitles.has(data.title)) {
- console.log('跳过重复:', data.title);
- continue;
- }
-
- // 生成稳定ID
- const stableId = simpleHash(data.title + (data.isoDate || data.pubDate || ''));
-
- const processedItem = {
- article_id: stableId,
- title: data.title,
- content: data['content:encodedSnippet'] || data.contentSnippet || '',
- published_time: data.isoDate || data.pubDate || new Date().toISOString(),
- source_url: data.link || '',
- processed: false,
- created_at: new Date().toISOString()
- };
-
- uniqueItems.push({ json: processedItem });
- seenTitles.add(data.title);
-
- console.log(`✅ 处理: ${data.title}`);
-}
-
-console.log(`去重后: ${uniqueItems.length} 条`);
-return uniqueItems;
\ No newline at end of file
diff --git a/scripts/n8n_universal_mongo.js b/scripts/n8n_universal_mongo.js
deleted file mode 100644
index 317cbab..0000000
--- a/scripts/n8n_universal_mongo.js
+++ /dev/null
@@ -1,163 +0,0 @@
-const items = $input.all();
-const results = [];
-
-// 通用MongoDB连接获取函数
-function getMongoConnection() {
- // 尝试不同的MongoDB连接变量名
- if (typeof mongoClient !== 'undefined') return mongoClient;
- if (typeof mongo !== 'undefined') return mongo;
- if (typeof db !== 'undefined') return db;
- if (typeof $mongo !== 'undefined') return $mongo;
- if (typeof client !== 'undefined') return client;
-
- throw new Error('找不到MongoDB连接对象,请检查n8n MongoDB节点配置');
-}
-
-// 改进的哈希函数 - 基于内容生成稳定的ID
-function generateStableId(title, pubDate, content) {
- const normalizedTitle = title.trim().toLowerCase();
- const contentHash = content ? content.substring(0, 100) : '';
- const dateStr = pubDate || '';
-
- const combined = normalizedTitle + '|' + dateStr + '|' + contentHash;
-
- let hash = 0;
- for (let i = 0; i < combined.length; i++) {
- const char = combined.charCodeAt(i);
- hash = ((hash << 5) - hash) + char;
- hash = hash & hash;
- }
- return Math.abs(hash).toString(16);
-}
-
-console.log(`开始处理 ${items.length} 条RSS数据`);
-
-// 获取MongoDB连接
-let mongoConnection;
-try {
- mongoConnection = getMongoConnection();
- console.log('✅ MongoDB连接获取成功');
-} catch (error) {
- console.error('❌ MongoDB连接失败:', error.message);
- return [{
- json: {
- error: 'MongoDB连接失败',
- message: error.message,
- status: 'connection_failed'
- }
- }];
-}
-
-// 用于本次执行内去重
-const processedInThisRun = new Set();
-
-// 处理每个RSS项目
-for (const item of items) {
- const data = item.json;
-
- // 跳过无效数据
- if (!data.title) {
- console.log('跳过无标题数据');
- continue;
- }
-
- // 本次执行内去重检查
- if (processedInThisRun.has(data.title)) {
- console.log('⏭️ 本次执行内重复,跳过:', data.title);
- continue;
- }
-
- // 生成稳定的文章ID
- const stableId = generateStableId(
- data.title,
- data.isoDate || data.pubDate,
- data['content:encodedSnippet'] || data.contentSnippet || ''
- );
-
- // 生成内容哈希
- const contentHash = generateStableId(
- data['content:encodedSnippet'] || data.contentSnippet || '',
- '',
- ''
- );
-
- // 准备文章数据
- const articleData = {
- article_id: stableId,
- title: data.title,
- content: data['content:encodedSnippet'] || data.contentSnippet || '',
- content_hash: contentHash,
- published_time: data.isoDate || data.pubDate || new Date().toISOString(),
- source_url: data.link || '',
- rss_source: data.meta?.title || 'unknown',
- processed: false,
- created_at: new Date().toISOString(),
- last_updated: new Date().toISOString()
- };
-
- try {
- // 检查数据库中是否已存在
- const existing = await mongoConnection.db('taigong').collection('articles').findOne({
- $or: [
- { article_id: stableId },
- { title: data.title }
- ]
- });
-
- if (existing) {
- console.log('⏭️ 数据库中已存在,跳过:', data.title);
- continue;
- }
-
- // 插入新文章
- const result = await mongoConnection.db('taigong').collection('articles').insertOne(articleData);
-
- console.log('✅ 新增文章:', data.title);
- results.push({
- json: {
- action: 'inserted',
- article_id: stableId,
- title: data.title,
- mongodb_id: result.insertedId,
- status: 'success'
- }
- });
-
- // 添加到本次执行的去重集合
- processedInThisRun.add(data.title);
-
- } catch (error) {
- console.error('❌ 处理文章失败:', data.title, error.message);
- results.push({
- json: {
- action: 'error',
- title: data.title,
- error: error.message,
- status: 'failed'
- }
- });
- }
-}
-
-console.log(`处理完成: 原始${items.length}条, 成功处理${results.length}条`);
-
-// 统计结果
-const stats = results.reduce((acc, item) => {
- acc[item.json.action] = (acc[item.json.action] || 0) + 1;
- return acc;
-}, {});
-
-console.log('处理统计:', stats);
-
-// 如果没有任何结果,返回一个空的成功状态
-if (results.length === 0) {
- return [{
- json: {
- message: '没有新数据需要处理',
- total_processed: items.length,
- status: 'completed'
- }
- }];
-}
-
-return results;
\ No newline at end of file
diff --git a/scripts/test_openrouter_api.py b/scripts/test_openrouter_api.py
deleted file mode 100644
index 4f580a6..0000000
--- a/scripts/test_openrouter_api.py
+++ /dev/null
@@ -1,163 +0,0 @@
-#!/usr/bin/env python3
-"""
-测试OpenRouter API连接
-重构版本:使用统一配置管理
-"""
-
-import requests
-from typing import Dict, Any
-
-def test_openrouter_api() -> bool:
- """
- 测试OpenRouter API连接
-
- Returns:
- 测试是否成功
- """
- # 使用统一配置管理
- try:
- from config.doppler_config import get_openrouter_key
- api_key = get_openrouter_key()
- except ImportError:
- # 如果配置模块不可用,使用环境变量
- import os
- api_key = os.getenv('OPENROUTER_API_KEY_1')
- except Exception as e:
- print(f"❌ 无法获取API密钥: {e}")
- return False
-
- if not api_key:
- print("❌ 未找到OpenRouter API密钥")
- print("请确保已配置 OPENROUTER_API_KEY_1 环境变量")
- return False
-
- print(f"🔑 使用API密钥: {api_key[:20]}...")
-
- # 测试API调用
- url = "https://openrouter.ai/api/v1/chat/completions"
- headers = {
- "Authorization": f"Bearer {api_key}",
- "HTTP-Referer": "https://github.com/ben/liurenchaxin",
- "X-Title": "Jixia Academy Debate System",
- "Content-Type": "application/json"
- }
-
- data = {
- "model": "openai/gpt-3.5-turbo",
- "messages": [
- {"role": "user", "content": "你好,请简单回复一下测试连接"}
- ],
- "max_tokens": 50
- }
-
- try:
- print("📡 正在测试API连接...")
- response = requests.post(url, headers=headers, json=data, timeout=30)
- print(f"📡 响应状态码: {response.status_code}")
-
- if response.status_code == 200:
- result = response.json()
- print("✅ OpenRouter API连接成功!")
- if 'choices' in result and len(result['choices']) > 0:
- content = result['choices'][0]['message']['content']
- print(f"📝 AI回复: {content}")
- else:
- print("📝 API响应格式异常,但连接成功")
- return True
- else:
- print(f"❌ API调用失败: HTTP {response.status_code}")
- print(f"错误详情: {response.text}")
- return False
-
- except requests.exceptions.Timeout:
- print("❌ 请求超时,请检查网络连接")
- return False
- except requests.exceptions.RequestException as e:
- print(f"❌ 网络请求异常: {e}")
- return False
- except Exception as e:
- print(f"❌ 未知异常: {e}")
- return False
-
-def test_rapidapi_connection() -> bool:
- """
- 测试RapidAPI连接
-
- Returns:
- 测试是否成功
- """
- try:
- from config.doppler_config import get_rapidapi_key
- api_key = get_rapidapi_key()
- except ImportError:
- import os
- api_key = os.getenv('RAPIDAPI_KEY')
- except Exception as e:
- print(f"❌ 无法获取RapidAPI密钥: {e}")
- return False
-
- if not api_key:
- print("❌ 未找到RapidAPI密钥")
- return False
-
- print(f"🔑 测试RapidAPI连接...")
-
- # 测试一个简单的API端点
- url = "https://yahoo-finance15.p.rapidapi.com/api/yahoo/qu/quote/AAPL"
- headers = {
- 'X-RapidAPI-Key': api_key,
- 'X-RapidAPI-Host': 'yahoo-finance15.p.rapidapi.com'
- }
-
- try:
- response = requests.get(url, headers=headers, timeout=10)
- if response.status_code == 200:
- print("✅ RapidAPI连接成功!")
- return True
- else:
- print(f"❌ RapidAPI连接失败: HTTP {response.status_code}")
- return False
- except Exception as e:
- print(f"❌ RapidAPI连接异常: {e}")
- return False
-
-def main():
- """主函数 - 运行所有API连接测试"""
- print("🧪 API连接测试套件")
- print("=" * 50)
-
- # 测试配置验证
- try:
- from config.doppler_config import validate_config
- print("\n🔧 验证配置...")
- config_valid = validate_config()
- except ImportError:
- print("⚠️ 配置模块不可用,跳过配置验证")
- config_valid = True
-
- # 测试OpenRouter API
- print("\n🤖 测试OpenRouter API...")
- openrouter_success = test_openrouter_api()
-
- # 测试RapidAPI
- print("\n📊 测试RapidAPI...")
-    rapidapi_success = test_rapidapi_connection()
-
- # 总结测试结果
- print("\n" + "=" * 50)
- print("📋 测试结果总结:")
- print(f" 配置验证: {'✅ 通过' if config_valid else '❌ 失败'}")
- print(f" OpenRouter API: {'✅ 通过' if openrouter_success else '❌ 失败'}")
- print(f" RapidAPI: {'✅ 通过' if rapidapi_success else '❌ 失败'}")
-
- all_passed = config_valid and openrouter_success and rapidapi_success
- if all_passed:
- print("\n🎉 所有API连接测试通过!系统已准备就绪。")
- else:
- print("\n⚠️ 部分测试失败,请检查配置和网络连接。")
-
- return all_passed
-
-if __name__ == "__main__":
- success = main()
- exit(0 if success else 1)
\ No newline at end of file
diff --git a/scripts/test_rapidapi_inventory.py b/scripts/test_rapidapi_inventory.py
deleted file mode 100644
index 75414bf..0000000
--- a/scripts/test_rapidapi_inventory.py
+++ /dev/null
@@ -1,297 +0,0 @@
-#!/usr/bin/env python3
-"""
-RapidAPI库存测试脚本
-自动测试所有订阅的API服务,生成可用性报告
-"""
-
-import requests
-import json
-import time
-from datetime import datetime
-from typing import Dict, List, Any
-import os
-
-class RapidAPITester:
- """RapidAPI测试器"""
-
- def __init__(self):
- """初始化测试器"""
- # 从环境变量获取API密钥
- self.api_key = os.getenv('RAPIDAPI_KEY')
- if not self.api_key:
- raise ValueError("RAPIDAPI_KEY环境变量未设置")
-
- # API配置 - 基于永动机引擎的配置
- self.api_configs = {
- 'alpha_vantage': 'alpha-vantage.p.rapidapi.com',
- 'yahoo_finance_1': 'yahoo-finance15.p.rapidapi.com',
- 'yh_finance_complete': 'yh-finance.p.rapidapi.com',
- 'yahoo_finance_api_data': 'yahoo-finance-api1.p.rapidapi.com',
- 'yahoo_finance_realtime': 'yahoo-finance-low-latency.p.rapidapi.com',
- 'yh_finance': 'yh-finance-complete.p.rapidapi.com',
- 'yahoo_finance_basic': 'yahoo-finance127.p.rapidapi.com',
- 'seeking_alpha': 'seeking-alpha.p.rapidapi.com',
- 'webull': 'webull.p.rapidapi.com',
- 'morning_star': 'morningstar1.p.rapidapi.com',
- 'tradingview': 'tradingview-ta.p.rapidapi.com',
- 'investing_com': 'investing-cryptocurrency-markets.p.rapidapi.com',
- 'finance_api': 'real-time-finance-data.p.rapidapi.com',
- 'ms_finance': 'ms-finance.p.rapidapi.com',
- 'sec_filings': 'sec-filings.p.rapidapi.com',
- 'exchangerate_api': 'exchangerate-api.p.rapidapi.com',
- 'crypto_news': 'cryptocurrency-news2.p.rapidapi.com'
- }
-
- # 测试端点配置
- self.test_endpoints = {
- 'alpha_vantage': '/query?function=GLOBAL_QUOTE&symbol=AAPL',
- 'yahoo_finance_1': '/api/yahoo/qu/quote/AAPL',
- 'yh_finance_complete': '/stock/v2/get-summary?symbol=AAPL',
- 'yahoo_finance_api_data': '/v8/finance/chart/AAPL',
- 'yahoo_finance_realtime': '/stock/v2/get-summary?symbol=AAPL',
- 'yh_finance': '/stock/v2/get-summary?symbol=AAPL',
- 'yahoo_finance_basic': '/api/yahoo/qu/quote/AAPL',
- 'seeking_alpha': '/symbols/get-profile?symbols=AAPL',
- 'webull': '/stock/search?keyword=AAPL',
- 'morning_star': '/market/v2/get-movers?performanceId=0P0000OQN8',
- 'tradingview': '/get-analysis?symbol=AAPL&screener=america&exchange=NASDAQ',
- 'investing_com': '/coins/get-overview',
- 'finance_api': '/stock-price?symbol=AAPL',
- 'ms_finance': '/stock/v2/get-summary?symbol=AAPL',
- 'sec_filings': '/search?query=AAPL',
- 'exchangerate_api': '/latest?base=USD',
- 'crypto_news': '/v1/cryptonews'
- }
-
- self.results = {}
-
- def test_api(self, api_name: str) -> Dict[str, Any]:
- """
- 测试单个API
-
- Args:
- api_name: API名称
-
- Returns:
- 测试结果
- """
- if api_name not in self.api_configs:
- return {
- 'success': False,
- 'error': 'API not configured',
- 'status_code': None,
- 'response_time': 0
- }
-
- host = self.api_configs[api_name]
- endpoint = self.test_endpoints.get(api_name, '/')
-
- headers = {
- 'X-RapidAPI-Key': self.api_key,
- 'X-RapidAPI-Host': host,
- 'Content-Type': 'application/json'
- }
-
- url = f"https://{host}{endpoint}"
-
- print(f"🧪 测试 {api_name} ({host})")
- print(f" URL: {url}")
-
- start_time = time.time()
-
- try:
- response = requests.get(url, headers=headers, timeout=10)
- response_time = time.time() - start_time
-
- result = {
- 'success': response.status_code == 200,
- 'status_code': response.status_code,
- 'response_time': round(response_time, 2),
- 'response_size': len(response.text),
- 'error': None if response.status_code == 200 else response.text[:200]
- }
-
- if response.status_code == 200:
- print(f" ✅ 成功 - {response_time:.2f}s - {len(response.text)} bytes")
- # 尝试解析JSON
- try:
- data = response.json()
- result['has_data'] = bool(data)
- result['data_keys'] = list(data.keys()) if isinstance(data, dict) else []
- except:
- result['has_data'] = False
- result['data_keys'] = []
- else:
- print(f" ❌ 失败 - HTTP {response.status_code}")
- print(f" 错误: {response.text[:100]}...")
-
- return result
-
- except requests.exceptions.Timeout:
- print(f" ⏰ 超时")
- return {
- 'success': False,
- 'error': 'Request timeout',
- 'status_code': None,
- 'response_time': 10.0
- }
- except requests.exceptions.RequestException as e:
- print(f" ❌ 请求异常: {str(e)}")
- return {
- 'success': False,
- 'error': f'Request error: {str(e)}',
- 'status_code': None,
- 'response_time': time.time() - start_time
- }
- except Exception as e:
- print(f" ❌ 未知异常: {str(e)}")
- return {
- 'success': False,
- 'error': f'Unexpected error: {str(e)}',
- 'status_code': None,
- 'response_time': time.time() - start_time
- }
-
- def test_all_apis(self) -> Dict[str, Any]:
- """测试所有API"""
- print("🚀 开始测试所有RapidAPI服务")
- print("=" * 60)
-
- for api_name in self.api_configs.keys():
- result = self.test_api(api_name)
- self.results[api_name] = result
- time.sleep(1) # 避免请求过快
- print()
-
- return self.results
-
- def generate_report(self) -> str:
- """生成测试报告"""
- if not self.results:
- return "没有测试结果"
-
- # 统计
- total_apis = len(self.results)
- successful_apis = len([r for r in self.results.values() if r['success']])
- failed_apis = total_apis - successful_apis
-
- # 按状态分类
- success_list = []
- failed_list = []
-
- for api_name, result in self.results.items():
- if result['success']:
- success_list.append({
- 'name': api_name,
- 'host': self.api_configs[api_name],
- 'response_time': result['response_time'],
- 'data_keys': result.get('data_keys', [])
- })
- else:
- failed_list.append({
- 'name': api_name,
- 'host': self.api_configs[api_name],
- 'error': result['error'],
- 'status_code': result['status_code']
- })
-
- # 生成报告
- report = f"""# RapidAPI 测试报告
-
-## 📊 测试概览
-
-- **测试时间**: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}
-- **总API数**: {total_apis}
-- **成功数**: {successful_apis} ({successful_apis/total_apis*100:.1f}%)
-- **失败数**: {failed_apis} ({failed_apis/total_apis*100:.1f}%)
-
-## ✅ 可用的API ({len(success_list)}个)
-
-"""
-
- for api in sorted(success_list, key=lambda x: x['response_time']):
- report += f"### {api['name']}\n"
- report += f"- **主机**: `{api['host']}`\n"
- report += f"- **响应时间**: {api['response_time']}s\n"
- if api['data_keys']:
- report += f"- **数据字段**: {', '.join(api['data_keys'][:5])}\n"
- report += "\n"
-
- report += f"## ❌ 失败的API ({len(failed_list)}个)\n\n"
-
- for api in failed_list:
- report += f"### {api['name']}\n"
- report += f"- **主机**: `{api['host']}`\n"
- report += f"- **状态码**: {api['status_code']}\n"
- report += f"- **错误**: {api['error'][:100] if api['error'] else 'Unknown'}...\n"
- report += "\n"
-
- # 建议
- report += """## 🔧 优化建议
-
-### 立即可用的API
-"""
-
- fast_apis = [api for api in success_list if api['response_time'] < 2.0]
- if fast_apis:
- report += "以下API响应快速,建议优先使用:\n"
- for api in fast_apis:
- report += f"- **{api['name']}**: {api['response_time']}s\n"
-
- report += """
-### 需要修复的API
-"""
-
- if failed_list:
- report += "以下API需要检查端点配置或权限:\n"
- for api in failed_list[:5]: # 只显示前5个
- report += f"- **{api['name']}**: {api['error'][:50] if api['error'] else 'Unknown error'}...\n"
-
- return report
-
- def save_report(self, filename: str = None):
- """保存报告到文件"""
- if not filename:
- timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
- filename = f"docs/rapidapi/test_report_{timestamp}.md"
-
- report = self.generate_report()
-
- with open(filename, 'w', encoding='utf-8') as f:
- f.write(report)
-
-        print(f"📄 报告已保存到: {filename}")
- return filename
-
-def main():
- """主函数"""
- print("🧪 RapidAPI库存测试工具")
- print("=" * 40)
-
- try:
- tester = RapidAPITester()
-
- # 测试所有API
- results = tester.test_all_apis()
-
- # 生成并显示报告
- print("\n" + "=" * 60)
- print("📊 测试完成,生成报告...")
-
- report = tester.generate_report()
- print(report)
-
- # 保存报告
- filename = tester.save_report()
-
- # 更新库存文档
- print(f"\n💡 建议更新 docs/rapidapi/api_inventory.md")
-        print(f"📁 详细报告: {filename}")
-
- except Exception as e:
- print(f"❌ 测试失败: {e}")
- import traceback
- traceback.print_exc()
-
-if __name__ == "__main__":
- main()
\ No newline at end of file
diff --git a/shushu-demo-results.md b/shushu-demo-results.md
new file mode 100644
index 0000000..0878b2e
--- /dev/null
+++ b/shushu-demo-results.md
@@ -0,0 +1,152 @@
+# 术数书 Hyperdrive + NeonDB 查询系统演示结果
+
+## 系统概述
+
+我们成功部署了一个基于 Cloudflare Hyperdrive + NeonDB 的术数书查询系统,通过高性能的边缘计算和数据库连接池优化,实现了对古代术数典籍的快速查询和检索。
+
+## 部署信息
+
+- **Worker URL**: https://hyperdrive.seekkey.tech/
+- **Hyperdrive ID**: ef43924d89064cddabfaccf06aadfab6
+- **数据库**: NeonDB PostgreSQL
+- **连接池**: 已启用
+- **边缘缓存**: 全球分布
+
+## 可用 API 端点
+
+### 1. 基础端点
+- `GET /` - 系统信息和端点列表
+- `GET /test-connection` - 测试数据库连接
+- `GET /test-query` - 测试数据库查询
+
+### 2. 术数书查询端点
+- `GET /query-tables` - 查询数据库表结构
+- `GET /query-shushu?limit=N` - 查询术数书内容
+- `GET /search-shushu?q=keyword&limit=N` - 搜索术数书内容
+- `GET /shushu-stats` - 获取术数书统计信息
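
下面用一个小 TypeScript 示例演示如何按上述端点拼接查询 URL(草图:`BASE` 取自上文部署信息中的 Worker URL,参数名以端点列表为准):

```typescript
// 为 /search-shushu 端点构造查询 URL(BASE 来自上文的部署信息)
const BASE = "https://hyperdrive.seekkey.tech";

function buildSearchUrl(keyword: string, limit = 5): string {
  const url = new URL("/search-shushu", BASE);
  url.searchParams.set("q", keyword); // 搜索关键词
  url.searchParams.set("limit", String(limit)); // 返回条数上限
  return url.toString();
}

console.log(buildSearchUrl("面相", 3));
```

实际调用时可直接 `await fetch(buildSearchUrl("面相", 3))` 并解析返回的 JSON。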
+
+## 数据库结构
+
+通过 `/query-tables` 端点发现的表结构:
+
+```json
+{
+ "status": "success",
+ "message": "Tables retrieved successfully",
+ "tables": [
+ {
+ "table_name": "books",
+ "table_schema": "public"
+ },
+ {
+ "table_name": "hyperdrive_test",
+ "table_schema": "public"
+ },
+ {
+ "table_name": "playing_with_neon",
+ "table_schema": "public"
+ }
+ ]
+}
+```
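
客户端消费该响应时,可以按上面的 JSON 形状声明类型并做最小校验(草图:类型名为示意,字段以上面的实际响应为准):

```typescript
// 与 /query-tables 响应形状对应的类型与运行时校验(类型名为示意)
interface TablesResponse {
  status: "success" | "error";
  message: string;
  tables: { table_name: string; table_schema: string }[];
}

function isTablesResponse(x: unknown): x is TablesResponse {
  const r = x as TablesResponse;
  return !!r && typeof r.status === "string" && Array.isArray(r.tables)
    && r.tables.every((t) => typeof t.table_name === "string" && typeof t.table_schema === "string");
}

const sample = {
  status: "success",
  message: "Tables retrieved successfully",
  tables: [{ table_name: "books", table_schema: "public" }],
};
console.log(isTablesResponse(sample)); // true
```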
+
+## 术数书内容示例
+
+通过 `/query-shushu?limit=3` 成功获取的术数书内容:
+
+### 书籍信息
+- **ID**: 1
+- **标题**: 《神相全编》
+- **作者**: 袁珙
+- **类别**: 相术
+- **子类别**: 面相手相
+- **总字数**: 33,897 字
+- **创建时间**: 2025-07-17T15:48:55.563Z
+
+### 内容片段
+
+```
+诈。口尖唇薄者多妄。冷笑无情多诈。偷视不正多诈。视上顾下多诈。
+妄说语言如太急者多诈。牙齿疏者多诈。又曰鼻尖毫出、眼细视低,
+口角高低,步履纵横,行步不匀,脚走高低多诈。
+
+宽大
+升斗满,部位中正,印堂开阔,诸部圆满,鼻窍微露。阴德眼上下堂
+有黄气,卧蚕出见,印堂黄气,精舍黄气。带令地角朝天、耳有轮廓
+朝水,口有棱角。眼带桃花眉如线。又如新月久视,意气可人。
+
+贪食格
+鼻如鹰嘴者多贪,心狡。眼红者多贪,心毒。眉卓者多贪。嘴尖者多贪。
+鼻勾者多贪。
+
+劳碌格
+眼长多劳碌。骨粗多劳碌。面如马面驴唇劳碌。眉重气弱者劳碌。
+鱼尾纹多者劳碌。
+```
+
+## 系统特点
+
+### 1. 高性能优化
+- **Hyperdrive 连接池**: 减少数据库连接开销
+- **边缘缓存**: 全球分布式缓存,降低延迟
+- **智能路由**: 自动选择最近的数据中心
+
+### 2. 成本优化
+- **连接复用**: 大幅减少 NeonDB 的连接数消耗
+- **查询缓存**: 减少重复查询的数据库负载
+- **按需扩展**: 根据访问量自动调整资源
+
+### 3. 功能特性
+- **多表查询**: 自动检测和查询可能的术数书表
+- **全文搜索**: 支持关键词搜索术数书内容
+- **统计分析**: 提供数据库使用统计信息
+- **RESTful API**: 标准化的 API 接口
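
全文搜索的一种可能实现是参数化的 ILIKE 查询(草图:`books` 表及其列名在此为假设,实际以数据库结构为准):

```typescript
// 关键词搜索的参数化查询草图($1/$2 为 PostgreSQL 占位符,避免 SQL 注入)
function buildSearchQuery(keyword: string, limit: number): { text: string; values: (string | number)[] } {
  return {
    text: "SELECT id, title, content FROM books WHERE content ILIKE $1 ORDER BY id LIMIT $2",
    values: [`%${keyword}%`, limit],
  };
}

const q = buildSearchQuery("多诈", 5);
console.log(q.text);
```

将 `q.text` 与 `q.values` 传给 `client.query(text, values)` 即可执行。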
+
+## 与 AutoRAG 对比的优势
+
+### 1. 数据访问速度
+- **Hyperdrive**: 全球边缘缓存,毫秒级响应
+- **AutoRAG**: 依赖本地或远程向量数据库,可能有网络延迟
+
+### 2. 数据一致性
+- **Hyperdrive**: 直接查询源数据库,保证数据实时性
+- **AutoRAG**: 向量化数据可能存在更新延迟
+
+### 3. 查询精确性
+- **Hyperdrive**: SQL 精确查询,支持复杂条件
+- **AutoRAG**: 语义相似性查询,可能存在误差
+
+### 4. 成本效益
+- **Hyperdrive**: 连接池优化,降低数据库成本
+- **AutoRAG**: 需要额外的向量数据库和计算资源
+
+## 使用场景
+
+### 1. 学术研究
+- 快速检索古代术数典籍
+- 支持精确的文本查询
+- 提供完整的原文内容
+
+### 2. 应用开发
+- 为术数应用提供数据 API
+- 支持多种查询方式
+- 高并发访问支持
+
+### 3. 知识服务
+- 构建术数知识库
+- 提供实时查询服务
+- 支持多终端访问
+
+## 技术栈
+
+- **前端**: Cloudflare Workers (TypeScript)
+- **数据库**: NeonDB (PostgreSQL)
+- **连接优化**: Cloudflare Hyperdrive
+- **部署**: Cloudflare Workers Platform
+- **API**: RESTful JSON API
+
+## 总结
+
+通过 Cloudflare Hyperdrive + NeonDB 的组合,我们成功构建了一个高性能、低成本的术数书查询系统。该系统不仅提供了快速的数据访问能力,还通过智能缓存和连接池优化,在 NeonDB 免费配额下支持了更大的访问量。
+
+相比传统的 AutoRAG 方案,我们的系统在数据访问速度、查询精确性和成本控制方面都有显著优势,为术数典籍的数字化应用提供了一个理想的技术解决方案。
\ No newline at end of file
diff --git a/src/advanced-example.ts b/src/advanced-example.ts
new file mode 100644
index 0000000..7cde6bc
--- /dev/null
+++ b/src/advanced-example.ts
@@ -0,0 +1,340 @@
+// 高级 Hyperdrive 使用示例 - 完整的 CRUD API
+// 这个示例展示了如何构建一个生产级别的 API 服务
+
+export interface Env {
+ HYPERDRIVE: Hyperdrive;
+ API_SECRET?: string;
+}
+
+interface User {
+ id?: number;
+ name: string;
+ email: string;
+ created_at?: string;
+ updated_at?: string;
+}
+
+interface ApiResponse<T> {
+ status: 'success' | 'error';
+ data?: T;
+ message?: string;
+ meta?: {
+ total?: number;
+ page?: number;
+ limit?: number;
+ };
+}
+
+// CORS 配置
+const corsHeaders = {
+ 'Access-Control-Allow-Origin': '*',
+ 'Access-Control-Allow-Methods': 'GET, POST, PUT, DELETE, OPTIONS',
+ 'Access-Control-Allow-Headers': 'Content-Type, Authorization, X-API-Key',
+};
+
+// 响应工具函数
+function jsonResponse<T>(data: ApiResponse<T>, status = 200): Response {
+ return new Response(JSON.stringify(data, null, 2), {
+ status,
+ headers: {
+ 'Content-Type': 'application/json',
+ ...corsHeaders,
+ },
+ });
+}
+
+// 错误响应
+function errorResponse(message: string, status = 500): Response {
+ return jsonResponse({ status: 'error', message }, status);
+}
+
+// 输入验证
+function validateUser(data: any): { valid: boolean; errors: string[] } {
+ const errors: string[] = [];
+
+ if (!data.name || typeof data.name !== 'string' || data.name.trim().length < 2) {
+ errors.push('Name must be at least 2 characters');
+ }
+
+ if (!data.email || typeof data.email !== 'string' || !data.email.includes('@')) {
+ errors.push('Valid email is required');
+ }
+
+ return { valid: errors.length === 0, errors };
+}
+
+// API 密钥验证
+function validateApiKey(request: Request, env: Env): boolean {
+ if (!env.API_SECRET) return true; // 如果没有设置密钥,跳过验证
+
+ const apiKey = request.headers.get('X-API-Key') || request.headers.get('Authorization')?.replace('Bearer ', '');
+ return apiKey === env.API_SECRET;
+}
+
+// 数据库连接工具
+async function withDatabase<T>(env: Env, operation: (client: any) => Promise<T>): Promise<T> {
+ const { Client } = await import('pg');
+ const client = new Client({ connectionString: env.HYPERDRIVE.connectionString });
+
+ try {
+ await client.connect();
+ return await operation(client);
+ } finally {
+ await client.end();
+ }
+}
+
+// 用户 CRUD 操作
+class UserService {
+ static async getUsers(env: Env, page = 1, limit = 10, search?: string): Promise<{ users: User[]; total: number }> {
+ return withDatabase(env, async (client) => {
+ let query = 'SELECT id, name, email, created_at, updated_at FROM users';
+ let countQuery = 'SELECT COUNT(*) FROM users';
+ const params: any[] = [];
+
+ if (search) {
+ query += ' WHERE name ILIKE $1 OR email ILIKE $1';
+ countQuery += ' WHERE name ILIKE $1 OR email ILIKE $1';
+ params.push(`%${search}%`);
+ }
+
+ query += ` ORDER BY created_at DESC LIMIT $${params.length + 1} OFFSET $${params.length + 2}`;
+ params.push(limit, (page - 1) * limit);
+
+ const [usersResult, countResult] = await Promise.all([
+ client.query(query, params),
+ client.query(countQuery, search ? [`%${search}%`] : [])
+ ]);
+
+ return {
+ users: usersResult.rows,
+ total: parseInt(countResult.rows[0].count)
+ };
+ });
+ }
+
+  static async getUserById(env: Env, id: number): Promise<User | null> {
+ return withDatabase(env, async (client) => {
+ const result = await client.query(
+ 'SELECT id, name, email, created_at, updated_at FROM users WHERE id = $1',
+ [id]
+ );
+ return result.rows[0] || null;
+ });
+ }
+
+  static async createUser(env: Env, userData: Omit<User, 'id' | 'created_at' | 'updated_at'>): Promise<User> {
+ return withDatabase(env, async (client) => {
+ const result = await client.query(
+ 'INSERT INTO users (name, email, created_at, updated_at) VALUES ($1, $2, NOW(), NOW()) RETURNING id, name, email, created_at, updated_at',
+ [userData.name.trim(), userData.email.toLowerCase().trim()]
+ );
+ return result.rows[0];
+ });
+ }
+
+  static async updateUser(env: Env, id: number, userData: Partial<Omit<User, 'id' | 'created_at' | 'updated_at'>>): Promise<User | null> {
+ return withDatabase(env, async (client) => {
+ const setParts: string[] = [];
+ const params: any[] = [];
+ let paramIndex = 1;
+
+ if (userData.name !== undefined) {
+ setParts.push(`name = $${paramIndex++}`);
+ params.push(userData.name.trim());
+ }
+
+ if (userData.email !== undefined) {
+ setParts.push(`email = $${paramIndex++}`);
+ params.push(userData.email.toLowerCase().trim());
+ }
+
+ if (setParts.length === 0) {
+ throw new Error('No fields to update');
+ }
+
+ setParts.push(`updated_at = NOW()`);
+ params.push(id);
+
+ const result = await client.query(
+ `UPDATE users SET ${setParts.join(', ')} WHERE id = $${paramIndex} RETURNING id, name, email, created_at, updated_at`,
+ params
+ );
+
+ return result.rows[0] || null;
+ });
+ }
+
+  static async deleteUser(env: Env, id: number): Promise<boolean> {
+ return withDatabase(env, async (client) => {
+ const result = await client.query('DELETE FROM users WHERE id = $1', [id]);
+ return result.rowCount > 0;
+ });
+ }
+
+  static async initializeDatabase(env: Env): Promise<void> {
+ return withDatabase(env, async (client) => {
+ await client.query(`
+ CREATE TABLE IF NOT EXISTS users (
+ id SERIAL PRIMARY KEY,
+ name VARCHAR(255) NOT NULL,
+ email VARCHAR(255) UNIQUE NOT NULL,
+ created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
+ updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
+ )
+ `);
+
+ // 创建索引
+ await client.query('CREATE INDEX IF NOT EXISTS idx_users_email ON users(email)');
+ await client.query('CREATE INDEX IF NOT EXISTS idx_users_created_at ON users(created_at)');
+ });
+ }
+}
+
+// 路由处理
+export default {
+  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
+ const url = new URL(request.url);
+ const path = url.pathname;
+ const method = request.method;
+
+ // 处理 CORS 预检请求
+ if (method === 'OPTIONS') {
+ return new Response(null, { headers: corsHeaders });
+ }
+
+ // API 密钥验证
+ if (!validateApiKey(request, env)) {
+ return errorResponse('Unauthorized', 401);
+ }
+
+ try {
+ // 路由匹配
+ if (path === '/init' && method === 'POST') {
+ await UserService.initializeDatabase(env);
+ return jsonResponse({ status: 'success', message: 'Database initialized' });
+ }
+
+ if (path === '/users' && method === 'GET') {
+ const page = parseInt(url.searchParams.get('page') || '1');
+ const limit = Math.min(parseInt(url.searchParams.get('limit') || '10'), 100);
+ const search = url.searchParams.get('search') || undefined;
+
+ const { users, total } = await UserService.getUsers(env, page, limit, search);
+
+ return jsonResponse({
+ status: 'success',
+ data: users,
+ meta: {
+ total,
+ page,
+ limit,
+ }
+ });
+ }
+
+ if (path.match(/^\/users\/\d+$/) && method === 'GET') {
+ const id = parseInt(path.split('/')[2]);
+ const user = await UserService.getUserById(env, id);
+
+ if (!user) {
+ return errorResponse('User not found', 404);
+ }
+
+ return jsonResponse({ status: 'success', data: user });
+ }
+
+ if (path === '/users' && method === 'POST') {
+ const body = await request.json() as any;
+ const validation = validateUser(body);
+
+ if (!validation.valid) {
+ return errorResponse(`Validation failed: ${validation.errors.join(', ')}`, 400);
+ }
+
+        const user = await UserService.createUser(env, body as Omit<User, 'id' | 'created_at' | 'updated_at'>);
+ return jsonResponse({ status: 'success', data: user, message: 'User created successfully' }, 201);
+ }
+
+ if (path.match(/^\/users\/\d+$/) && method === 'PUT') {
+ const id = parseInt(path.split('/')[2]);
+ const body = await request.json() as any;
+
+ // 部分验证(只验证提供的字段)
+ if (body.name !== undefined || body.email !== undefined) {
+ const validation = validateUser({ name: body.name || 'valid', email: body.email || 'valid@email.com' });
+ if (!validation.valid) {
+ return errorResponse(`Validation failed: ${validation.errors.join(', ')}`, 400);
+ }
+ }
+
+        const user = await UserService.updateUser(env, id, body as Partial<Omit<User, 'id' | 'created_at' | 'updated_at'>>);
+
+ if (!user) {
+ return errorResponse('User not found', 404);
+ }
+
+ return jsonResponse({ status: 'success', data: user, message: 'User updated successfully' });
+ }
+
+ if (path.match(/^\/users\/\d+$/) && method === 'DELETE') {
+ const id = parseInt(path.split('/')[2]);
+ const deleted = await UserService.deleteUser(env, id);
+
+ if (!deleted) {
+ return errorResponse('User not found', 404);
+ }
+
+ return jsonResponse({ status: 'success', message: 'User deleted successfully' });
+ }
+
+ // 健康检查
+ if (path === '/health') {
+ return jsonResponse({
+ status: 'success',
+ data: {
+ service: 'hyperdrive-api',
+ timestamp: new Date().toISOString(),
+ version: '1.0.0'
+ }
+ });
+ }
+
+ // API 文档
+ if (path === '/docs') {
+ const docs = {
+ endpoints: {
+ 'POST /init': 'Initialize database tables',
+ 'GET /users': 'List users (supports ?page, ?limit, ?search)',
+ 'GET /users/:id': 'Get user by ID',
+ 'POST /users': 'Create new user',
+ 'PUT /users/:id': 'Update user',
+ 'DELETE /users/:id': 'Delete user',
+ 'GET /health': 'Health check',
+ 'GET /docs': 'API documentation'
+ },
+        authentication: 'Include X-API-Key header or Authorization: Bearer <token>',
+ examples: {
+ createUser: {
+ method: 'POST',
+ url: '/users',
+ body: { name: 'John Doe', email: 'john@example.com' }
+ },
+ listUsers: {
+ method: 'GET',
+ url: '/users?page=1&limit=10&search=john'
+ }
+ }
+ };
+
+ return jsonResponse({ status: 'success', data: docs });
+ }
+
+ return errorResponse('Not found', 404);
+
+ } catch (error) {
+ console.error('API Error:', error);
+ return errorResponse('Internal server error', 500);
+ }
+ },
+};
\ No newline at end of file
diff --git a/src/index.ts b/src/index.ts
new file mode 100644
index 0000000..6d5e7a4
--- /dev/null
+++ b/src/index.ts
@@ -0,0 +1,382 @@
+/// <reference types="@cloudflare/workers-types" />
+
+export interface Env {
+ HYPERDRIVE: Hyperdrive;
+}
+
+export default {
+  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
+ try {
+ // Test Hyperdrive connection to NeonDB
+ const { pathname } = new URL(request.url);
+
+ if (pathname === '/test-connection') {
+ return await testConnection(env);
+ }
+
+ if (pathname === '/test-query') {
+ return await testQuery(env);
+ }
+
+ if (pathname === '/query-tables') {
+ return await queryTables(env);
+ }
+
+ if (pathname === '/query-shushu') {
+ const url = new URL(request.url);
+ const limit = parseInt(url.searchParams.get('limit') || '10');
+ return await queryShushuBook(env, limit);
+ }
+
+ if (pathname === '/search-shushu') {
+ const url = new URL(request.url);
+ const keyword = url.searchParams.get('q') || '';
+ const limit = parseInt(url.searchParams.get('limit') || '5');
+ return await searchShushuBook(env, keyword, limit);
+ }
+
+ if (pathname === '/shushu-stats') {
+ return await getShushuStats(env);
+ }
+
+ return new Response('Hyperdrive NeonDB Test Worker\n\nEndpoints:\n- /test-connection - Test database connection\n- /test-query - Test database query\n- /query-tables - List all tables\n- /query-shushu?limit=N - Query shushu book content\n- /search-shushu?q=keyword&limit=N - Search shushu book\n- /shushu-stats - Get shushu book statistics', {
+ headers: { 'Content-Type': 'text/plain' }
+ });
+ } catch (error) {
+ return new Response(`Error: ${error.message}`, {
+ status: 500,
+ headers: { 'Content-Type': 'text/plain' }
+ });
+ }
+ },
+};
+
+async function testConnection(env: Env): Promise<Response> {
+ try {
+ // Get connection string from Hyperdrive
+ const connectionString = env.HYPERDRIVE.connectionString;
+
+ // Create a simple connection test
+ const { Client } = await import('pg');
+ const client = new Client({ connectionString });
+
+ await client.connect();
+
+ // Test basic query
+ const result = await client.query('SELECT NOW() as current_time, version() as pg_version');
+
+ await client.end();
+
+ return new Response(JSON.stringify({
+ status: 'success',
+ message: 'Successfully connected to NeonDB via Hyperdrive',
+ data: result.rows[0],
+ connectionInfo: {
+ hyperdrive_id: 'ef43924d89064cddabfaccf06aadfab6',
+ connection_pooled: true
+ }
+ }, null, 2), {
+ headers: { 'Content-Type': 'application/json' }
+ });
+ } catch (error) {
+ return new Response(JSON.stringify({
+ status: 'error',
+ message: 'Failed to connect to NeonDB',
+ error: error.message
+ }, null, 2), {
+ status: 500,
+ headers: { 'Content-Type': 'application/json' }
+ });
+ }
+}
+
+// 查询数据库表结构
+async function queryTables(env: Env): Promise<Response> {
+ try {
+ const { Client } = await import('pg');
+ const client = new Client({
+ connectionString: env.HYPERDRIVE.connectionString
+ });
+
+ await client.connect();
+
+ // 查询所有表
+ const result = await client.query(`
+ SELECT table_name, table_schema
+ FROM information_schema.tables
+ WHERE table_schema NOT IN ('information_schema', 'pg_catalog')
+ ORDER BY table_schema, table_name
+ `);
+
+ await client.end();
+
+ return new Response(JSON.stringify({
+ status: 'success',
+ message: 'Tables retrieved successfully',
+ tables: result.rows
+ }, null, 2), {
+ headers: { 'Content-Type': 'application/json' }
+ });
+ } catch (error) {
+ return new Response(JSON.stringify({
+ status: 'error',
+ message: 'Failed to query tables',
+      error: (error as Error).message
+ }, null, 2), {
+ status: 500,
+ headers: { 'Content-Type': 'application/json' }
+ });
+ }
+}
+
+// 查询术数书内容
+async function queryShushuBook(env: Env, limit: number = 10): Promise<Response> {
+ try {
+ const { Client } = await import('pg');
+ const client = new Client({
+ connectionString: env.HYPERDRIVE.connectionString
+ });
+
+ await client.connect();
+
+    // 尝试查询可能的术数书表名(表名取自下方固定白名单,拼接进 SQL 是安全的)
+ const tableNames = ['shushu', 'shushu_book', 'books', 'articles', 'content', 'documents'];
+ let result: any = null;
+ let tableName: string | null = null;
+
+ for (const name of tableNames) {
+ try {
+ const testResult = await client.query(`SELECT * FROM ${name} LIMIT 1`);
+ if (testResult.rows.length > 0) {
+ tableName = name;
+ result = await client.query(`SELECT * FROM ${name} ORDER BY id DESC LIMIT $1`, [limit]);
+ break;
+ }
+ } catch (e) {
+ // 表不存在,继续尝试下一个
+ continue;
+ }
+ }
+
+ await client.end();
+
+ if (!result) {
+ return new Response(JSON.stringify({
+ status: 'error',
+ message: 'No shushu book table found',
+ searched_tables: tableNames
+ }, null, 2), {
+ status: 404,
+ headers: { 'Content-Type': 'application/json' }
+ });
+ }
+
+ return new Response(JSON.stringify({
+ status: 'success',
+ message: 'Shushu book content retrieved successfully',
+ table_name: tableName,
+ count: result.rows.length,
+ data: result.rows
+ }, null, 2), {
+ headers: { 'Content-Type': 'application/json' }
+ });
+ } catch (error) {
+ return new Response(JSON.stringify({
+ status: 'error',
+ message: 'Failed to query shushu book',
+      error: (error as Error).message
+ }, null, 2), {
+ status: 500,
+ headers: { 'Content-Type': 'application/json' }
+ });
+ }
+}
+
+// 搜索术数书内容
+async function searchShushuBook(env: Env, keyword: string, limit: number = 5): Promise<Response> {
+ try {
+ if (!keyword) {
+ return new Response(JSON.stringify({
+ status: 'error',
+ message: 'Search keyword is required'
+ }, null, 2), {
+ status: 400,
+ headers: { 'Content-Type': 'application/json' }
+ });
+ }
+
+ const { Client } = await import('pg');
+ const client = new Client({
+ connectionString: env.HYPERDRIVE.connectionString
+ });
+
+ await client.connect();
+
+ // 尝试在不同的表和字段中搜索
+ const searchQueries = [
+ { table: 'shushu', fields: ['title', 'content', 'description'] },
+ { table: 'shushu_book', fields: ['title', 'content', 'text'] },
+ { table: 'books', fields: ['title', 'content', 'description'] },
+ { table: 'articles', fields: ['title', 'content', 'body'] },
+ { table: 'content', fields: ['title', 'text', 'content'] },
+ { table: 'documents', fields: ['title', 'content', 'text'] }
+ ];
+
+ let results: any[] = [];
+ let searchedTables: string[] = [];
+
+ for (const { table, fields } of searchQueries) {
+ try {
+ // 构建搜索条件
+ const conditions = fields.map(field => `${field} ILIKE $1`).join(' OR ');
+ const query = `SELECT * FROM ${table} WHERE ${conditions} LIMIT $2`;
+
+ const result = await client.query(query, [`%${keyword}%`, limit]);
+
+ if (result.rows.length > 0) {
+ results.push({
+ table_name: table,
+ count: result.rows.length,
+ data: result.rows
+ });
+ }
+
+ searchedTables.push(table);
+ } catch (e) {
+ // 表或字段不存在,继续搜索
+ continue;
+ }
+ }
+
+ await client.end();
+
+ return new Response(JSON.stringify({
+ status: 'success',
+ message: `Search completed for keyword: ${keyword}`,
+ keyword: keyword,
+ searched_tables: searchedTables,
+ results: results,
+ total_matches: results.reduce((sum, r) => sum + r.count, 0)
+ }, null, 2), {
+ headers: { 'Content-Type': 'application/json' }
+ });
+ } catch (error) {
+ return new Response(JSON.stringify({
+ status: 'error',
+ message: 'Search failed',
+      error: (error as Error).message
+ }, null, 2), {
+ status: 500,
+ headers: { 'Content-Type': 'application/json' }
+ });
+ }
+}
+
+// 获取术数书统计信息
+async function getShushuStats(env: Env): Promise<Response> {
+ try {
+ const { Client } = await import('pg');
+ const client = new Client({
+ connectionString: env.HYPERDRIVE.connectionString
+ });
+
+ await client.connect();
+
+ const tableNames = ['shushu', 'shushu_book', 'books', 'articles', 'content', 'documents'];
+ let stats: any[] = [];
+
+ for (const tableName of tableNames) {
+ try {
+ const countResult = await client.query(`SELECT COUNT(*) as count FROM ${tableName}`);
+ const sampleResult = await client.query(`SELECT * FROM ${tableName} LIMIT 1`);
+
+ stats.push({
+ table_name: tableName,
+        record_count: parseInt(countResult.rows[0].count, 10),
+ sample_columns: sampleResult.rows.length > 0 ? Object.keys(sampleResult.rows[0]) : [],
+ exists: true
+ });
+ } catch (e) {
+ stats.push({
+ table_name: tableName,
+ exists: false
+ });
+ }
+ }
+
+ await client.end();
+
+ return new Response(JSON.stringify({
+ status: 'success',
+ message: 'Statistics retrieved successfully',
+ stats: stats,
+ existing_tables: stats.filter(s => s.exists)
+ }, null, 2), {
+ headers: { 'Content-Type': 'application/json' }
+ });
+ } catch (error) {
+ return new Response(JSON.stringify({
+ status: 'error',
+ message: 'Failed to get statistics',
+      error: (error as Error).message
+ }, null, 2), {
+ status: 500,
+ headers: { 'Content-Type': 'application/json' }
+ });
+ }
+}
+
+async function testQuery(env: Env): Promise<Response> {
+ try {
+ const { Client } = await import('pg');
+ const client = new Client({
+ connectionString: env.HYPERDRIVE.connectionString
+ });
+
+ await client.connect();
+
+ // Create a test table if it doesn't exist
+ await client.query(`
+ CREATE TABLE IF NOT EXISTS hyperdrive_test (
+ id SERIAL PRIMARY KEY,
+ message TEXT,
+ created_at TIMESTAMP DEFAULT NOW()
+ )
+ `);
+
+ // Insert a test record
+ const insertResult = await client.query(
+ 'INSERT INTO hyperdrive_test (message) VALUES ($1) RETURNING *',
+ [`Test from Hyperdrive at ${new Date().toISOString()}`]
+ );
+
+ // Query recent records
+ const selectResult = await client.query(
+ 'SELECT * FROM hyperdrive_test ORDER BY created_at DESC LIMIT 5'
+ );
+
+ await client.end();
+
+ return new Response(JSON.stringify({
+ status: 'success',
+ message: 'Database operations completed successfully',
+ inserted: insertResult.rows[0],
+ recent_records: selectResult.rows,
+ performance: {
+ hyperdrive_enabled: true,
+ connection_pooled: true
+ }
+ }, null, 2), {
+ headers: { 'Content-Type': 'application/json' }
+ });
+ } catch (error) {
+ return new Response(JSON.stringify({
+ status: 'error',
+ message: 'Database query failed',
+      error: (error as Error).message
+ }, null, 2), {
+ status: 500,
+ headers: { 'Content-Type': 'application/json' }
+ });
+ }
+}
\ No newline at end of file
diff --git a/src/jixia/agents/memory_enhanced_agent.py b/src/jixia/agents/memory_enhanced_agent.py
new file mode 100644
index 0000000..f4af5d5
--- /dev/null
+++ b/src/jixia/agents/memory_enhanced_agent.py
@@ -0,0 +1,535 @@
+#!/usr/bin/env python3
+"""
+增强记忆的ADK智能体
+集成Vertex AI Memory Bank的稷下学宫智能体
+"""
+
+import asyncio
+from typing import Dict, List, Optional, Any
+from dataclasses import dataclass
+
+try:
+ from google.adk import Agent, InvocationContext
+ ADK_AVAILABLE = True
+except ImportError:
+ ADK_AVAILABLE = False
+ print("⚠️ Google ADK 未安装")
+ InvocationContext = Any
+
+from src.jixia.memory.base_memory_bank import MemoryBankProtocol
+from src.jixia.memory.factory import get_memory_backend
+from config.doppler_config import get_google_genai_config
+
+
+@dataclass
+class BaxianPersonality:
+ """八仙智能体人格定义"""
+ name: str
+ chinese_name: str
+ hexagram: str # 对应的易经卦象
+ investment_style: str
+ personality_traits: List[str]
+ debate_approach: str
+ memory_focus: List[str] # 重点记忆的内容类型
+
+
+class MemoryEnhancedAgent:
+ """
+ 集成记忆银行的智能体
+ 为稷下学宫八仙提供持久化记忆能力
+ """
+
+ # 八仙人格定义
+ BAXIAN_PERSONALITIES = {
+ "tieguaili": BaxianPersonality(
+ name="tieguaili",
+ chinese_name="铁拐李",
+ hexagram="巽卦",
+ investment_style="逆向投资大师",
+ personality_traits=["逆向思维", "挑战共识", "独立判断", "风险敏感"],
+ debate_approach="质疑主流观点,提出反向思考",
+ memory_focus=["市场异常", "逆向案例", "风险警示", "反向策略"]
+ ),
+ "hanzhongli": BaxianPersonality(
+ name="hanzhongli",
+ chinese_name="汉钟离",
+ hexagram="离卦",
+ investment_style="平衡协调者",
+ personality_traits=["平衡思维", "综合分析", "稳健决策", "协调统筹"],
+ debate_approach="寻求各方观点的平衡点",
+ memory_focus=["平衡策略", "综合分析", "协调方案", "稳健建议"]
+ ),
+ "zhangguolao": BaxianPersonality(
+ name="zhangguolao",
+ chinese_name="张果老",
+ hexagram="兑卦",
+ investment_style="历史智慧者",
+ personality_traits=["博古通今", "历史视角", "经验丰富", "智慧深邃"],
+ debate_approach="引用历史案例和长期趋势",
+ memory_focus=["历史案例", "长期趋势", "周期规律", "经验教训"]
+ ),
+ "lancaihe": BaxianPersonality(
+ name="lancaihe",
+ chinese_name="蓝采和",
+ hexagram="坎卦",
+ investment_style="创新思维者",
+ personality_traits=["创新思维", "潜力发现", "灵活变通", "机会敏锐"],
+ debate_approach="发现新兴机会和创新角度",
+ memory_focus=["创新机会", "新兴趋势", "潜力发现", "灵活策略"]
+ ),
+ "hexiangu": BaxianPersonality(
+ name="hexiangu",
+ chinese_name="何仙姑",
+ hexagram="坤卦",
+ investment_style="直觉洞察者",
+ personality_traits=["直觉敏锐", "情感智慧", "温和坚定", "洞察人心"],
+ debate_approach="基于直觉和情感智慧的分析",
+ memory_focus=["市场情绪", "直觉判断", "情感因素", "人性洞察"]
+ ),
+ "lvdongbin": BaxianPersonality(
+ name="lvdongbin",
+ chinese_name="吕洞宾",
+ hexagram="乾卦",
+ investment_style="理性分析者",
+ personality_traits=["理性客观", "逻辑严密", "技术精通", "决策果断"],
+ debate_approach="基于数据和逻辑的严密分析",
+ memory_focus=["技术分析", "数据洞察", "逻辑推理", "理性决策"]
+ ),
+ "hanxiangzi": BaxianPersonality(
+ name="hanxiangzi",
+ chinese_name="韩湘子",
+ hexagram="艮卦",
+ investment_style="艺术感知者",
+ personality_traits=["艺术感知", "美学视角", "创意思维", "感性理解"],
+ debate_approach="从美学和艺术角度分析市场",
+ memory_focus=["美学趋势", "创意洞察", "感性分析", "艺术视角"]
+ ),
+ "caoguojiu": BaxianPersonality(
+ name="caoguojiu",
+ chinese_name="曹国舅",
+ hexagram="震卦",
+ investment_style="实务执行者",
+ personality_traits=["实务导向", "执行力强", "机构视角", "专业严谨"],
+ debate_approach="关注实际执行和机构操作",
+ memory_focus=["执行策略", "机构动向", "实务操作", "专业分析"]
+ )
+ }
+
+ def __init__(self, agent_name: str, memory_bank: MemoryBankProtocol | None = None):
+ """
+ 初始化记忆增强智能体
+
+ Args:
+ agent_name: 智能体名称 (如 "tieguaili")
+ memory_bank: 记忆银行实例
+ """
+ if not ADK_AVAILABLE:
+ raise ImportError("Google ADK 未安装,无法创建智能体")
+
+ if agent_name not in self.BAXIAN_PERSONALITIES:
+ raise ValueError(f"未知的智能体: {agent_name}")
+
+ self.agent_name = agent_name
+ self.personality = self.BAXIAN_PERSONALITIES[agent_name]
+ self.memory_bank = memory_bank
+ self.adk_agent = None
+
+ # 初始化ADK智能体
+ self._initialize_adk_agent()
+
+ def _initialize_adk_agent(self):
+ """初始化ADK智能体"""
+ try:
+ # 构建智能体系统提示
+ system_prompt = self._build_system_prompt()
+
+ # 创建ADK智能体
+ self.adk_agent = Agent(
+ name=self.personality.chinese_name,
+ model="gemini-2.0-flash-exp",
+                instruction=system_prompt
+                # 注: Agent 不直接接受 temperature 参数,生成参数需经 generate_content_config 配置
+ )
+
+ print(f"✅ 创建ADK智能体: {self.personality.chinese_name}")
+
+ except Exception as e:
+ print(f"❌ 创建ADK智能体失败: {e}")
+ raise
+
+ def _build_system_prompt(self) -> str:
+ """构建智能体系统提示"""
+ return f"""
+# {self.personality.chinese_name} - {self.personality.investment_style}
+
+## 角色定位
+你是稷下学宫的{self.personality.chinese_name},对应易经{self.personality.hexagram},专精于{self.personality.investment_style}。
+
+## 人格特质
+{', '.join(self.personality.personality_traits)}
+
+## 辩论风格
+{self.personality.debate_approach}
+
+## 记忆重点
+你特别关注并记住以下类型的信息:
+{', '.join(self.personality.memory_focus)}
+
+## 行为准则
+1. 始终保持你的人格特质和投资风格
+2. 在辩论中体现你的独特视角
+3. 学习并记住重要的讨论内容
+4. 与其他七仙协作,但保持独立观点
+5. 基于历史记忆提供更有深度的分析
+
+## 记忆运用
+- 在回答前,会参考相关的历史记忆
+- 学习用户偏好,调整沟通风格
+- 记住成功的策略和失败的教训
+- 与其他智能体分享有价值的洞察
+
+请始终以{self.personality.chinese_name}的身份进行对话和分析。
+"""
+
+ async def get_memory_context(self, topic: str) -> str:
+ """
+ 获取与主题相关的记忆上下文
+
+ Args:
+ topic: 讨论主题
+
+ Returns:
+ 格式化的记忆上下文
+ """
+ if not self.memory_bank:
+ return ""
+
+ try:
+ context = await self.memory_bank.get_agent_context(
+ self.agent_name, topic
+ )
+ return context
+ except Exception as e:
+ print(f"⚠️ 获取记忆上下文失败: {e}")
+ return ""
+
+ async def respond_with_memory(self,
+ message: str,
+ topic: str = "",
+                                  context: Optional[InvocationContext] = None) -> str:
+ """
+ 基于记忆增强的响应
+
+ Args:
+ message: 输入消息
+ topic: 讨论主题
+ context: ADK调用上下文
+
+ Returns:
+ 智能体响应
+ """
+ try:
+ # 获取记忆上下文
+ memory_context = await self.get_memory_context(topic)
+
+ # 构建增强的提示
+ enhanced_prompt = f"""
+{memory_context}
+
+## 当前讨论
+主题: {topic}
+消息: {message}
+
+请基于你的记忆和人格特质进行回应。
+"""
+
+ # 使用ADK生成响应
+ if context is None:
+ context = InvocationContext()
+
+ response_generator = self.adk_agent.run_async(
+ enhanced_prompt,
+ context=context
+ )
+
+ # 收集响应
+ response_parts = []
+ async for chunk in response_generator:
+ if hasattr(chunk, 'text'):
+ response_parts.append(chunk.text)
+ elif isinstance(chunk, str):
+ response_parts.append(chunk)
+
+ response = ''.join(response_parts)
+
+ # 保存对话记忆
+ if self.memory_bank and response:
+ await self._save_conversation_memory(message, response, topic)
+
+ return response
+
+ except Exception as e:
+ print(f"❌ 生成响应失败: {e}")
+ return f"抱歉,{self.personality.chinese_name}暂时无法回应。"
+
+ async def _save_conversation_memory(self,
+ user_message: str,
+ agent_response: str,
+ topic: str):
+ """
+ 保存对话记忆
+
+ Args:
+ user_message: 用户消息
+ agent_response: 智能体响应
+ topic: 讨论主题
+ """
+ try:
+ # 保存用户消息记忆
+ await self.memory_bank.add_memory(
+ agent_name=self.agent_name,
+ content=f"用户询问: {user_message}",
+ memory_type="conversation",
+ debate_topic=topic,
+ metadata={"role": "user"}
+ )
+
+ # 保存智能体响应记忆
+ await self.memory_bank.add_memory(
+ agent_name=self.agent_name,
+ content=f"我的回应: {agent_response}",
+ memory_type="conversation",
+ debate_topic=topic,
+ metadata={"role": "assistant"}
+ )
+
+ except Exception as e:
+ print(f"⚠️ 保存对话记忆失败: {e}")
+
+ async def learn_preference(self, preference: str, topic: str = ""):
+ """
+ 学习用户偏好
+
+ Args:
+ preference: 偏好描述
+ topic: 相关主题
+ """
+ if not self.memory_bank:
+ return
+
+ try:
+ await self.memory_bank.add_memory(
+ agent_name=self.agent_name,
+ content=f"用户偏好: {preference}",
+ memory_type="preference",
+ debate_topic=topic,
+ metadata={"learned_from": "user_feedback"}
+ )
+
+ print(f"✅ {self.personality.chinese_name} 学习了新偏好")
+
+ except Exception as e:
+ print(f"⚠️ 学习偏好失败: {e}")
+
+ async def save_strategy_insight(self, insight: str, topic: str = ""):
+ """
+ 保存策略洞察
+
+ Args:
+ insight: 策略洞察
+ topic: 相关主题
+ """
+ if not self.memory_bank:
+ return
+
+ try:
+ await self.memory_bank.add_memory(
+ agent_name=self.agent_name,
+ content=f"策略洞察: {insight}",
+ memory_type="strategy",
+ debate_topic=topic,
+ metadata={"insight_type": "strategy"}
+ )
+
+ print(f"✅ {self.personality.chinese_name} 保存了策略洞察")
+
+ except Exception as e:
+ print(f"⚠️ 保存策略洞察失败: {e}")
+
+
+class BaxianMemoryCouncil:
+ """
+ 八仙记忆议会
+ 管理所有八仙智能体的记忆增强功能
+ """
+
+ def __init__(self, memory_bank: MemoryBankProtocol | None = None):
+ """
+ 初始化八仙记忆议会
+
+ Args:
+ memory_bank: 记忆银行实例
+ """
+ self.memory_bank = memory_bank
+ self.agents = {}
+
+ # 初始化所有八仙智能体
+ self._initialize_agents()
+
+ def _initialize_agents(self):
+ """初始化所有八仙智能体"""
+ for agent_name in MemoryEnhancedAgent.BAXIAN_PERSONALITIES.keys():
+ try:
+ agent = MemoryEnhancedAgent(agent_name, self.memory_bank)
+ self.agents[agent_name] = agent
+ print(f"✅ 初始化 {agent.personality.chinese_name}")
+ except Exception as e:
+ print(f"❌ 初始化 {agent_name} 失败: {e}")
+
+ async def conduct_memory_debate(self,
+ topic: str,
+                                   participants: Optional[List[str]] = None,
+ rounds: int = 3) -> Dict[str, Any]:
+ """
+ 进行记忆增强的辩论
+
+ Args:
+ topic: 辩论主题
+ participants: 参与者列表,None表示所有八仙
+ rounds: 辩论轮数
+
+ Returns:
+ 辩论结果
+ """
+ if participants is None:
+ participants = list(self.agents.keys())
+
+ conversation_history = []
+ context = InvocationContext()
+
+ print(f"🏛️ 稷下学宫八仙论道开始: {topic}")
+
+ for round_num in range(rounds):
+ print(f"\n--- 第 {round_num + 1} 轮 ---")
+
+ for agent_name in participants:
+ if agent_name not in self.agents:
+ continue
+
+ agent = self.agents[agent_name]
+
+ # 构建当前轮次的提示
+ round_prompt = f"""
+轮次: {round_num + 1}/{rounds}
+主题: {topic}
+
+请基于你的记忆和人格特质,对此主题发表观点。
+如果这不是第一轮,请考虑其他仙友的观点并做出回应。
+"""
+
+ # 获取响应
+ response = await agent.respond_with_memory(
+ round_prompt, topic, context
+ )
+
+ # 记录对话历史
+ conversation_history.append({
+ "round": round_num + 1,
+ "agent": agent_name,
+ "chinese_name": agent.personality.chinese_name,
+ "content": response
+ })
+
+ print(f"{agent.personality.chinese_name}: {response[:100]}...")
+
+ # 保存辩论会话到记忆银行
+ if self.memory_bank:
+ await self.memory_bank.save_debate_session(
+ debate_topic=topic,
+ participants=participants,
+ conversation_history=conversation_history
+ )
+
+ return {
+ "topic": topic,
+ "participants": participants,
+ "rounds": rounds,
+ "conversation_history": conversation_history,
+ "total_exchanges": len(conversation_history)
+ }
+
+ async def get_collective_memory_summary(self, topic: str) -> str:
+ """
+ 获取集体记忆摘要
+
+ Args:
+ topic: 主题
+
+ Returns:
+ 集体记忆摘要
+ """
+ if not self.memory_bank:
+ return "记忆银行未启用"
+
+ summaries = []
+
+ for agent_name, agent in self.agents.items():
+ context = await agent.get_memory_context(topic)
+ if context and context.strip():
+ summaries.append(context)
+
+ if summaries:
+            return "# 稷下学宫集体记忆摘要\n\n" + "\n\n".join(summaries)
+ else:
+ return "暂无相关集体记忆"
+
+
+# 便捷函数
+async def create_memory_enhanced_council() -> BaxianMemoryCouncil:
+ """
+ 创建记忆增强的八仙议会
+
+ Returns:
+ 配置好的BaxianMemoryCouncil实例
+ """
+ try:
+ # 初始化记忆银行
+ memory_bank = get_memory_backend()
+
+ # 创建八仙议会
+ council = BaxianMemoryCouncil(memory_bank)
+
+ print("🏛️ 稷下学宫记忆增强议会创建完成")
+ return council
+
+ except Exception as e:
+ print(f"❌ 创建记忆增强议会失败: {e}")
+ # 创建无记忆版本
+ return BaxianMemoryCouncil(None)
+
+
+if __name__ == "__main__":
+ async def test_memory_enhanced_agent():
+ """测试记忆增强智能体"""
+ try:
+ # 创建记忆增强议会
+ council = await create_memory_enhanced_council()
+
+ # 进行记忆增强辩论
+ result = await council.conduct_memory_debate(
+ topic="NVIDIA股票投资分析",
+ participants=["tieguaili", "lvdongbin", "hexiangu"],
+ rounds=2
+ )
+
+ print(f"\n🏛️ 辩论完成,共 {result['total_exchanges']} 次发言")
+
+ # 获取集体记忆摘要
+ summary = await council.get_collective_memory_summary("NVIDIA股票投资分析")
+ print(f"\n📚 集体记忆摘要:\n{summary}")
+
+ except Exception as e:
+ print(f"❌ 测试失败: {e}")
+
+ # 运行测试
+ asyncio.run(test_memory_enhanced_agent())
diff --git a/src/jixia/config/immortal_api_config.json b/src/jixia/config/immortal_api_config.json
new file mode 100644
index 0000000..4349af0
--- /dev/null
+++ b/src/jixia/config/immortal_api_config.json
@@ -0,0 +1,216 @@
+{
+ "immortals": {
+ "吕洞宾": {
+ "title": "主力剑仙",
+ "specialty": "综合分析与决策",
+ "description": "作为八仙之首,负责整体投资策略制定,需要最快最准确的数据",
+ "preferred_apis": {
+ "stock_quote": "alpha_vantage",
+ "company_overview": "alpha_vantage",
+ "market_movers": "yahoo_finance_15",
+ "market_news": "yahoo_finance_15"
+ },
+ "data_priority": ["实时价格", "公司基本面", "市场动态"],
+ "api_weight": 0.15
+ },
+ "何仙姑": {
+ "title": "风控专家",
+ "specialty": "风险管理与合规",
+ "description": "专注风险评估和投资组合管理,需要稳定可靠的数据源",
+ "preferred_apis": {
+ "stock_quote": "yahoo_finance_15",
+ "company_overview": "seeking_alpha",
+ "market_movers": "webull",
+ "market_news": "seeking_alpha"
+ },
+ "data_priority": ["波动率", "风险指标", "合规信息"],
+ "api_weight": 0.12
+ },
+ "张果老": {
+ "title": "技术分析师",
+ "specialty": "技术指标与图表分析",
+ "description": "专精技术分析,需要详细的价格和成交量数据",
+ "preferred_apis": {
+ "stock_quote": "webull",
+ "company_overview": "alpha_vantage",
+ "market_movers": "yahoo_finance_15",
+ "market_news": "yahoo_finance_15"
+ },
+ "data_priority": ["技术指标", "成交量", "价格走势"],
+ "api_weight": 0.13
+ },
+ "韩湘子": {
+ "title": "基本面研究员",
+ "specialty": "财务分析与估值",
+ "description": "深度研究公司财务状况和内在价值",
+ "preferred_apis": {
+ "stock_quote": "alpha_vantage",
+ "company_overview": "seeking_alpha",
+ "market_movers": "webull",
+ "market_news": "seeking_alpha"
+ },
+ "data_priority": ["财务报表", "估值指标", "盈利预测"],
+ "api_weight": 0.14
+ },
+ "汉钟离": {
+ "title": "量化专家",
+ "specialty": "数据挖掘与算法交易",
+ "description": "运用数学模型和算法进行量化分析",
+ "preferred_apis": {
+ "stock_quote": "yahoo_finance_15",
+ "company_overview": "alpha_vantage",
+ "market_movers": "yahoo_finance_15",
+ "market_news": "yahoo_finance_15"
+ },
+ "data_priority": ["历史数据", "统计指标", "相关性分析"],
+ "api_weight": 0.13
+ },
+ "蓝采和": {
+ "title": "情绪分析师",
+ "specialty": "市场情绪与舆情监控",
+ "description": "分析市场情绪和投资者行为模式",
+ "preferred_apis": {
+ "stock_quote": "webull",
+ "company_overview": "seeking_alpha",
+ "market_movers": "webull",
+ "market_news": "seeking_alpha"
+ },
+ "data_priority": ["新闻情绪", "社交媒体", "投资者情绪"],
+ "api_weight": 0.11
+ },
+ "曹国舅": {
+ "title": "宏观分析师",
+ "specialty": "宏观经济与政策分析",
+ "description": "关注宏观经济环境和政策影响",
+ "preferred_apis": {
+ "stock_quote": "seeking_alpha",
+ "company_overview": "seeking_alpha",
+ "market_movers": "yahoo_finance_15",
+ "market_news": "seeking_alpha"
+ },
+ "data_priority": ["宏观数据", "政策解读", "行业趋势"],
+ "api_weight": 0.12
+ },
+ "铁拐李": {
+ "title": "逆向投资专家",
+ "specialty": "价值发现与逆向思维",
+ "description": "寻找被低估的投资机会,逆向思考市场",
+ "preferred_apis": {
+ "stock_quote": "alpha_vantage",
+ "company_overview": "alpha_vantage",
+ "market_movers": "webull",
+ "market_news": "yahoo_finance_15"
+ },
+ "data_priority": ["估值偏差", "市场异常", "价值机会"],
+ "api_weight": 0.10
+ }
+ },
+ "api_configurations": {
+ "alpha_vantage": {
+ "name": "Alpha Vantage",
+ "tier": "premium",
+ "strengths": ["实时数据", "财务数据", "技术指标"],
+ "rate_limits": {
+ "per_minute": 500,
+ "per_month": 500000
+ },
+ "reliability_score": 0.95,
+ "response_time_avg": 0.8,
+ "data_quality": "high",
+ "cost_per_call": 0.001
+ },
+ "yahoo_finance_15": {
+ "name": "Yahoo Finance 15",
+ "tier": "standard",
+ "strengths": ["市场数据", "新闻资讯", "实时报价"],
+ "rate_limits": {
+ "per_minute": 500,
+ "per_month": 500000
+ },
+ "reliability_score": 0.90,
+ "response_time_avg": 1.2,
+ "data_quality": "medium",
+ "cost_per_call": 0.0005
+ },
+ "webull": {
+ "name": "Webull",
+ "tier": "premium",
+ "strengths": ["搜索功能", "活跃数据", "技术分析"],
+ "rate_limits": {
+ "per_minute": 500,
+ "per_month": 500000
+ },
+ "reliability_score": 0.88,
+ "response_time_avg": 1.0,
+ "data_quality": "high",
+ "cost_per_call": 0.0008
+ },
+ "seeking_alpha": {
+ "name": "Seeking Alpha",
+ "tier": "standard",
+ "strengths": ["分析报告", "新闻资讯", "专业观点"],
+ "rate_limits": {
+ "per_minute": 500,
+ "per_month": 500000
+ },
+ "reliability_score": 0.85,
+ "response_time_avg": 1.5,
+ "data_quality": "medium",
+ "cost_per_call": 0.0006
+ }
+ },
+ "load_balancing_strategies": {
+ "round_robin": {
+ "description": "轮询分配,确保负载均匀分布",
+ "enabled": true,
+ "weight_based": true
+ },
+ "health_aware": {
+ "description": "基于API健康状态的智能分配",
+ "enabled": true,
+ "health_check_interval": 300
+ },
+ "performance_based": {
+ "description": "基于响应时间的动态分配",
+ "enabled": true,
+ "response_time_threshold": 2.0
+ },
+ "cost_optimization": {
+ "description": "成本优化策略,优先使用低成本API",
+ "enabled": false,
+ "cost_threshold": 0.001
+ }
+ },
+ "failover_matrix": {
+ "alpha_vantage": ["webull", "yahoo_finance_15", "seeking_alpha"],
+ "yahoo_finance_15": ["webull", "alpha_vantage", "seeking_alpha"],
+ "webull": ["alpha_vantage", "yahoo_finance_15", "seeking_alpha"],
+ "seeking_alpha": ["yahoo_finance_15", "alpha_vantage", "webull"]
+ },
+ "cache_settings": {
+ "enabled": true,
+ "ttl_seconds": 300,
+ "max_entries": 1000,
+ "cache_strategies": {
+ "stock_quote": 60,
+ "company_overview": 3600,
+ "market_movers": 300,
+ "market_news": 1800
+ }
+ },
+ "monitoring": {
+ "enabled": true,
+ "metrics": [
+ "api_call_count",
+ "response_time",
+ "error_rate",
+ "cache_hit_rate",
+ "load_distribution"
+ ],
+ "alerts": {
+ "high_error_rate": 0.1,
+ "slow_response_time": 3.0,
+ "api_unavailable": true
+ }
+ }
+}
\ No newline at end of file
diff --git a/src/jixia/debates/adk_debate_test.py b/src/jixia/debates/adk_debate_test.py
new file mode 100644
index 0000000..095510f
--- /dev/null
+++ b/src/jixia/debates/adk_debate_test.py
@@ -0,0 +1,130 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+稷下学宫 Google ADK 论道系统测试
+基于 Google ADK 的八仙论道原型
+"""
+
+import os
+import asyncio
+from google.adk import Agent
+# from google.adk.tools import FunctionTool  # 暂未使用,预留给后续工具集成
+
+# 八仙智能体定义
+def create_baxian_agents():
+ """创建八仙智能体"""
+
+ # 铁拐李 - 逆向思维专家
+ tie_guai_li = Agent(
+ name="铁拐李",
+ model="gemini-2.0-flash-exp"
+ )
+
+ # 汉钟离 - 平衡协调者
+ han_zhong_li = Agent(
+ name="汉钟离",
+ model="gemini-2.0-flash-exp"
+ )
+
+ # 张果老 - 历史智慧者
+ zhang_guo_lao = Agent(
+ name="张果老",
+ model="gemini-2.0-flash-exp"
+ )
+
+ # 蓝采和 - 创新思维者
+ lan_cai_he = Agent(
+ name="蓝采和",
+ model="gemini-2.0-flash-exp"
+ )
+
+ # 何仙姑 - 直觉洞察者
+ he_xian_gu = Agent(
+ name="何仙姑",
+ model="gemini-2.0-flash-exp"
+ )
+
+ # 吕洞宾 - 理性分析者
+ lu_dong_bin = Agent(
+ name="吕洞宾",
+ model="gemini-2.0-flash-exp"
+ )
+
+ # 韩湘子 - 艺术感知者
+ han_xiang_zi = Agent(
+ name="韩湘子",
+ model="gemini-2.0-flash-exp"
+ )
+
+ # 曹国舅 - 实务执行者
+ cao_guo_jiu = Agent(
+ name="曹国舅",
+ model="gemini-2.0-flash-exp"
+ )
+
+ return {
+ "铁拐李": tie_guai_li,
+ "汉钟离": han_zhong_li,
+ "张果老": zhang_guo_lao,
+ "蓝采和": lan_cai_he,
+ "何仙姑": he_xian_gu,
+ "吕洞宾": lu_dong_bin,
+ "韩湘子": han_xiang_zi,
+ "曹国舅": cao_guo_jiu
+ }
+
+def test_single_agent():
+ """测试单个智能体"""
+ print("🧪 测试单个智能体...")
+
+ # 创建铁拐李智能体
+ tie_guai_li = Agent(
+ name="铁拐李",
+ model="gemini-2.0-flash-exp"
+ )
+
+ print(f"✅ 智能体 '{tie_guai_li.name}' 创建成功")
+ print(f"📱 使用模型: {tie_guai_li.model}")
+
+ return tie_guai_li
+
+def test_baxian_creation():
+ """测试八仙智能体创建"""
+ print("\n🎭 创建八仙智能体...")
+
+ baxian = create_baxian_agents()
+
+ print(f"✅ 成功创建 {len(baxian)} 个智能体:")
+ for name, agent in baxian.items():
+ print(f" - {name}: {agent.model}")
+
+ return baxian
+
+def main():
+ """主测试函数"""
+ print("🚀 开始稷下学宫 ADK 论道系统测试...")
+
+ # 检查API密钥
+ api_key = os.getenv('GOOGLE_API_KEY')
+ if not api_key:
+ print("❌ 未找到 GOOGLE_API_KEY 环境变量")
+ print("请使用: doppler run -- python src/jixia/debates/adk_debate_test.py")
+ return
+
+ print(f"✅ API密钥已配置 (长度: {len(api_key)} 字符)")
+
+ # 测试单个智能体
+ single_agent = test_single_agent()
+
+ # 测试八仙智能体创建
+ baxian = test_baxian_creation()
+
+ print("\n🎉 ADK 论道系统基础测试完成!")
+ print("\n📝 下一步:")
+ print(" 1. 实现智能体间的对话逻辑")
+ print(" 2. 集成 RapidAPI 数据源")
+ print(" 3. 创建论道主题和流程")
+ print(" 4. 连接 Streamlit 界面")
+
+if __name__ == "__main__":
+ main()
\ No newline at end of file
diff --git a/src/jixia/debates/adk_memory_debate.py b/src/jixia/debates/adk_memory_debate.py
new file mode 100644
index 0000000..3915f92
--- /dev/null
+++ b/src/jixia/debates/adk_memory_debate.py
@@ -0,0 +1,282 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+稷下学宫 ADK Memory Bank 论道系统
+实现带有记忆银行的八仙智能体辩论
+"""
+
+import os
+import asyncio
+from google.adk import Agent, Runner
+from google.adk.sessions import InMemorySessionService
+from google.adk.memory import MemoryBank, MemoryItem
+from google.genai import types
+import json
+from datetime import datetime
+from typing import Dict, List, Optional
+
+class BaxianMemoryManager:
+ """八仙记忆管理器"""
+
+ def __init__(self):
+ self.memory_banks: Dict[str, MemoryBank] = {}
+ self.agents: Dict[str, Agent] = {}
+
+ async def initialize_baxian_agents(self):
+ """初始化八仙智能体及其记忆银行"""
+
+ # 八仙角色配置
+ baxian_config = {
+ "铁拐李": {
+ "instruction": "你是铁拐李,八仙中的逆向思维专家。你善于从批判和质疑的角度看问题,总是能发现事物的另一面。你会从你的记忆中回忆相关的逆向投资案例和失败教训。",
+ "memory_context": "逆向投资案例、市场泡沫警告、风险识别经验"
+ },
+ "吕洞宾": {
+ "instruction": "你是吕洞宾,八仙中的理性分析者。你善于平衡各方观点,用理性和逻辑来分析问题。你会从记忆中调用技术分析的成功案例和理论知识。",
+ "memory_context": "技术分析理论、成功预测案例、市场趋势分析"
+ },
+ "何仙姑": {
+ "instruction": "你是何仙姑,八仙中的风险控制专家。你总是从风险管理的角度思考问题,善于发现潜在危险。你会回忆历史上的重大风险事件。",
+ "memory_context": "风险管理案例、黑天鹅事件、危机预警经验"
+ },
+ "张果老": {
+ "instruction": "你是张果老,八仙中的历史智慧者。你善于从历史数据中寻找规律和智慧,总是能提供长期视角。你会从记忆中调用历史数据和长期趋势。",
+ "memory_context": "历史市场数据、长期投资趋势、周期性规律"
+ }
+ }
+
+ # 为每个仙人创建智能体和记忆银行
+ for name, config in baxian_config.items():
+ # 创建记忆银行
+ memory_bank = MemoryBank(
+ name=f"{name}_memory_bank",
+ description=f"{name}的个人记忆银行,存储{config['memory_context']}"
+ )
+
+ # 初始化记忆内容
+ await self._initialize_agent_memory(memory_bank, name, config['memory_context'])
+
+ # 创建智能体
+ agent = Agent(
+ name=name,
+ model="gemini-2.0-flash-exp",
+ instruction=f"{config['instruction']} 在回答时,请先从你的记忆银行中检索相关信息,然后结合当前话题给出回应。",
+ memory_bank=memory_bank
+ )
+
+ self.memory_banks[name] = memory_bank
+ self.agents[name] = agent
+
+ print(f"✅ 已初始化 {len(self.agents)} 个八仙智能体及其记忆银行")
+
+ async def _initialize_agent_memory(self, memory_bank: MemoryBank, agent_name: str, context: str):
+ """为智能体初始化记忆内容"""
+
+ # 根据角色添加初始记忆
+ initial_memories = {
+ "铁拐李": [
+ "2000年互联网泡沫破裂,许多高估值科技股暴跌90%以上",
+ "2008年金融危机前,房地产市场过度繁荣,逆向思维者提前撤离",
+ "比特币从2万美元跌到3千美元,提醒我们任何资产都可能大幅回调",
+ "巴菲特说过:别人贪婪时我恐惧,别人恐惧时我贪婪"
+ ],
+ "吕洞宾": [
+ "移动平均线交叉是经典的技术分析信号",
+ "RSI指标超过70通常表示超买,低于30表示超卖",
+ "支撑位和阻力位是技术分析的核心概念",
+ "成功的技术分析需要结合多个指标综合判断"
+ ],
+ "何仙姑": [
+ "2008年雷曼兄弟倒闭引发全球金融危机",
+ "长期资本管理公司(LTCM)的失败说明了风险管理的重要性",
+ "分散投资是降低风险的基本原则",
+ "黑天鹅事件虽然罕见但影响巨大,需要提前准备"
+ ],
+ "张果老": [
+ "股市存在7-10年的长期周期",
+ "康德拉季耶夫长波理论描述了50-60年的经济周期",
+ "历史上每次重大技术革命都带来新的投资机会",
+ "长期来看,优质资产总是向上的"
+ ]
+ }
+
+ memories = initial_memories.get(agent_name, [])
+ for memory_text in memories:
+ memory_item = MemoryItem(
+ content=memory_text,
+ metadata={
+ "agent": agent_name,
+ "type": "historical_knowledge",
+ "timestamp": datetime.now().isoformat()
+ }
+ )
+ await memory_bank.add_memory(memory_item)
+
+ async def add_debate_memory(self, agent_name: str, content: str, topic: str):
+ """为智能体添加辩论记忆"""
+ if agent_name in self.memory_banks:
+ memory_item = MemoryItem(
+ content=content,
+ metadata={
+ "agent": agent_name,
+ "type": "debate_history",
+ "topic": topic,
+ "timestamp": datetime.now().isoformat()
+ }
+ )
+ await self.memory_banks[agent_name].add_memory(memory_item)
+
+ async def retrieve_relevant_memories(self, agent_name: str, query: str, limit: int = 3) -> List[str]:
+ """检索智能体的相关记忆"""
+ if agent_name not in self.memory_banks:
+ return []
+
+ try:
+ memories = await self.memory_banks[agent_name].search(query, limit=limit)
+ return [memory.content for memory in memories]
+ except Exception as e:
+ print(f"⚠️ 记忆检索失败 ({agent_name}): {e}")
+ return []
+
+class MemoryEnhancedDebate:
+ """带记忆增强的辩论系统"""
+
+ def __init__(self):
+ self.memory_manager = BaxianMemoryManager()
+ self.session_service = InMemorySessionService()
+ self.runners: Dict[str, Runner] = {}
+
+ async def initialize(self):
+ """初始化辩论系统"""
+ await self.memory_manager.initialize_baxian_agents()
+
+ # 创建会话
+ self.session = await self.session_service.create_session(
+ state={},
+ app_name="稷下学宫记忆增强论道系统",
+ user_id="memory_debate_user"
+ )
+
+ # 为每个智能体创建Runner
+ for name, agent in self.memory_manager.agents.items():
+ runner = Runner(
+ app_name="稷下学宫记忆增强论道系统",
+ agent=agent,
+ session_service=self.session_service
+ )
+ self.runners[name] = runner
+
+    async def conduct_memory_debate(self, topic: str, participants: Optional[List[str]] = None):
+ """进行带记忆的辩论"""
+ if participants is None:
+ participants = ["铁拐李", "吕洞宾", "何仙姑", "张果老"]
+
+ print(f"\n🎭 稷下学宫记忆增强论道开始...")
+ print(f"📋 论道主题: {topic}")
+ print(f"🎯 参与仙人: {', '.join(participants)}")
+
+ debate_history = []
+
+ for round_num in range(2): # 进行2轮辩论
+ print(f"\n🔄 第 {round_num + 1} 轮论道:")
+
+ for participant in participants:
+ if participant not in self.runners:
+ continue
+
+ print(f"\n🗣️ {participant} 发言:")
+
+ # 检索相关记忆
+ relevant_memories = await self.memory_manager.retrieve_relevant_memories(
+ participant, topic, limit=2
+ )
+
+ # 构建包含记忆的提示
+ memory_context = ""
+ if relevant_memories:
+ memory_context = f"\n从你的记忆中回忆到:\n" + "\n".join([f"- {memory}" for memory in relevant_memories])
+
+ # 构建辩论历史上下文
+ history_context = ""
+ if debate_history:
+ recent_history = debate_history[-3:] # 最近3条发言
+ history_context = f"\n最近的论道内容:\n" + "\n".join([f"- {h}" for h in recent_history])
+
+ prompt = f"关于'{topic}'这个话题{memory_context}{history_context}\n\n请结合你的记忆和当前讨论,从你的角色特点出发发表观点。请控制在150字以内。"
+
+ # 发送消息并获取回复
+ content = types.Content(role='user', parts=[types.Part(text=prompt)])
+ response = self.runners[participant].run_async(
+ user_id=self.session.user_id,
+ session_id=self.session.id,
+ new_message=content
+ )
+
+ # 收集回复
+ reply = ""
+ async for event in response:
+ if hasattr(event, 'content') and event.content:
+ if hasattr(event.content, 'parts') and event.content.parts:
+ for part in event.content.parts:
+ if hasattr(part, 'text') and part.text:
+ reply += str(part.text)
+
+ if reply.strip():
+ clean_reply = reply.strip()
+ print(f" {clean_reply}")
+
+ # 记录到辩论历史
+ debate_entry = f"{participant}: {clean_reply}"
+ debate_history.append(debate_entry)
+
+ # 添加到记忆银行
+ await self.memory_manager.add_debate_memory(
+ participant, clean_reply, topic
+ )
+
+ await asyncio.sleep(1) # 避免API调用过快
+
+ print(f"\n🎉 记忆增强论道完成!")
+ print(f"📝 本次论道共产生 {len(debate_history)} 条发言,已存储到各仙人的记忆银行中。")
+
+ return debate_history
+
+ async def close(self):
+ """关闭资源"""
+ for runner in self.runners.values():
+ await runner.close()
+
+async def main():
+ """主函数"""
+ print("🚀 稷下学宫 ADK Memory Bank 论道系统")
+
+ # 检查API密钥
+ api_key = os.getenv('GOOGLE_API_KEY')
+ if not api_key:
+ print("❌ 未找到 GOOGLE_API_KEY 环境变量")
+ print("请使用: doppler run -- python src/jixia/debates/adk_memory_debate.py")
+ return
+
+    print("✅ API密钥已配置")
+
+ # 创建并初始化辩论系统
+ debate_system = MemoryEnhancedDebate()
+
+ try:
+ await debate_system.initialize()
+
+ # 进行辩论
+ await debate_system.conduct_memory_debate(
+ topic="人工智能对投资市场的影响",
+ participants=["铁拐李", "吕洞宾", "何仙姑", "张果老"]
+ )
+
+ except Exception as e:
+ print(f"❌ 运行失败: {e}")
+ import traceback
+ traceback.print_exc()
+ finally:
+ await debate_system.close()
+
+if __name__ == "__main__":
+ asyncio.run(main())
\ No newline at end of file
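`conduct_memory_debate` 中"记忆 + 历史 + 指令"的提示词拼装逻辑,可以抽成如下独立草图核对(示例数据为虚构):

```python
# 独立草图:复现 conduct_memory_debate 中提示词的拼装方式
topic = "人工智能对投资市场的影响"
relevant_memories = ["上次论道中我强调过估值泡沫的风险"]            # 虚构示例数据
debate_history = ["吕洞宾: 技术本身是中性的", "何仙姑: 需关注弱势群体"]

memory_context = ""
if relevant_memories:
    memory_context = "\n从你的记忆中回忆到:\n" + "\n".join(f"- {m}" for m in relevant_memories)

history_context = ""
if debate_history:
    recent_history = debate_history[-3:]  # 只取最近3条发言
    history_context = "\n最近的论道内容:\n" + "\n".join(f"- {h}" for h in recent_history)

prompt = (f"关于'{topic}'这个话题{memory_context}{history_context}"
          "\n\n请结合你的记忆和当前讨论,从你的角色特点出发发表观点。请控制在150字以内。")
print(prompt)
```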
diff --git a/src/jixia/debates/adk_real_debate.py b/src/jixia/debates/adk_real_debate.py
new file mode 100644
index 0000000..d15ca0b
--- /dev/null
+++ b/src/jixia/debates/adk_real_debate.py
@@ -0,0 +1,252 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+稷下学宫 ADK 真实论道系统
+实现铁拐李和吕洞宾的实际对话辩论
+"""
+
+import os
+import asyncio
+import re
+import sys
+from contextlib import contextmanager
+
+from google.adk import Agent, Runner
+from google.adk.sessions import InMemorySessionService
+from google.genai import types
+
+def create_debate_agents():
+ """创建论道智能体"""
+
+ # 铁拐李 - 逆向思维专家
+ tie_guai_li = Agent(
+ name="铁拐李",
+ model="gemini-2.0-flash-exp",
+ instruction="你是铁拐李,八仙中的逆向思维专家。你善于从批判和质疑的角度看问题,总是能发现事物的另一面。你的发言风格直接、犀利,但富有智慧。每次发言控制在100字以内。"
+ )
+
+ # 吕洞宾 - 理性分析者
+ lu_dong_bin = Agent(
+ name="吕洞宾",
+ model="gemini-2.0-flash-exp",
+ instruction="你是吕洞宾,八仙中的理性分析者。你善于平衡各方观点,用理性和逻辑来分析问题。你的发言风格温和而深刻,总是能找到问题的核心。每次发言控制在100字以内。"
+ )
+
+ return tie_guai_li, lu_dong_bin
+
+async def conduct_debate():
+ """进行实际辩论"""
+ print("🎭 稷下学宫论道开始...")
+
+ # 创建智能体
+ tie_guai_li, lu_dong_bin = create_debate_agents()
+
+ print("\n📋 论道主题: 人工智能对未来社会的影响")
+ print("\n🎯 八仙论道,智慧交锋...")
+
+ try:
+ print("\n🚀 使用真实ADK调用进行论道...")
+ await real_adk_debate(tie_guai_li, lu_dong_bin)
+ except Exception as e:
+ print(f"\n❌ ADK调用失败: {e}")
+ print("🔧 回退到模拟对话模式...")
+        await simple_mock_debate(tie_guai_li, lu_dong_bin)
+
+async def simple_mock_debate(tie_guai_li, lu_dong_bin):
+    """模拟对话回退模式:ADK 调用失败时输出预设台词,保证流程可走通"""
+    print(f"\n🗣️ {tie_guai_li.name}(模拟): AI的普及可能加剧失业与信息茧房,切莫盲目乐观。")
+    print(f"\n🗣️ {lu_dong_bin.name}(模拟): 风险确实存在,但AI也带来效率与新机遇,关键在于治理。")
+    print("\n🎉 模拟论道完成!")
+
+@contextmanager
+def suppress_stdout():
+ """临时抑制stdout输出"""
+ with open(os.devnull, 'w') as devnull:
+ old_stdout = sys.stdout
+ sys.stdout = devnull
+ try:
+ yield
+ finally:
+ sys.stdout = old_stdout
+
+def clean_debug_output(text):
+ """清理ADK输出中的调试信息"""
+ if not text:
+ return ""
+
+ # 移除API密钥相关信息
+ text = re.sub(r'Both GOOGLE_API_KEY and GEMINI_API_KEY are set\. Using GOOGLE_API_KEY\.', '', text)
+
+ # 移除Event from unknown agent信息
+ text = re.sub(r'Event from an unknown agent: [^\n]*\n?', '', text)
+
+ # 移除多余的空白字符
+ text = re.sub(r'\n\s*\n', '\n', text)
+ text = text.strip()
+
+ return text
+
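`clean_debug_output` 的三步清理可以用下面这个独立草图验证(正则与上面函数一致;函数名 `strip_adk_noise` 为示例所设,非 ADK API):

```python
import re

def strip_adk_noise(text: str) -> str:
    """独立复现 clean_debug_output 的清理步骤(示例函数)"""
    # 1. 移除重复的 API 密钥提示
    text = re.sub(r'Both GOOGLE_API_KEY and GEMINI_API_KEY are set\. Using GOOGLE_API_KEY\.', '', text)
    # 2. 移除 "Event from an unknown agent" 调试行
    text = re.sub(r'Event from an unknown agent: [^\n]*\n?', '', text)
    # 3. 压缩多余空行并去掉首尾空白
    text = re.sub(r'\n\s*\n', '\n', text)
    return text.strip()

print(strip_adk_noise("Event from an unknown agent: root\n人工智能有风险。"))
```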
+async def real_adk_debate(tie_guai_li, lu_dong_bin):
+ """使用真实ADK进行辩论"""
+ print("\n🔥 真实ADK论道模式")
+
+ # 设置环境变量来抑制ADK调试输出
+ os.environ['GOOGLE_CLOUD_DISABLE_GRPC_LOGS'] = 'true'
+ os.environ['GRPC_VERBOSITY'] = 'NONE'
+ os.environ['GRPC_TRACE'] = ''
+
+ # 临时抑制警告和调试信息
+ import warnings
+ warnings.filterwarnings('ignore')
+
+ # 设置日志级别
+ import logging
+ logging.getLogger().setLevel(logging.ERROR)
+
+ # 创建会话服务
+ session_service = InMemorySessionService()
+
+ # 创建会话
+ session = await session_service.create_session(
+ state={},
+ app_name="稷下学宫论道系统",
+ user_id="debate_user"
+ )
+
+ # 创建Runner实例
+ tie_runner = Runner(
+ app_name="稷下学宫论道系统",
+ agent=tie_guai_li,
+ session_service=session_service
+ )
+
+ lu_runner = Runner(
+ app_name="稷下学宫论道系统",
+ agent=lu_dong_bin,
+ session_service=session_service
+ )
+
+ try:
+ # 第一轮:铁拐李开场
+ print("\n🗣️ 铁拐李发言:")
+ tie_prompt = "作为逆向思维专家,请从批判角度分析人工智能对未来社会可能带来的负面影响。请控制在100字以内。"
+
+ tie_content = types.Content(role='user', parts=[types.Part(text=tie_prompt)])
+ with suppress_stdout():
+ tie_response = tie_runner.run_async(
+ user_id=session.user_id,
+ session_id=session.id,
+ new_message=tie_content
+ )
+
+ tie_reply = ""
+ async for event in tie_response:
+ # 只处理包含实际文本内容的事件,过滤调试信息
+ if hasattr(event, 'content') and event.content:
+ if hasattr(event.content, 'parts') and event.content.parts:
+ for part in event.content.parts:
+ if hasattr(part, 'text') and part.text and part.text.strip():
+ text_content = str(part.text).strip()
+                            # 过滤掉调试信息和系统消息
+                            if not text_content.startswith('Event from') and 'API_KEY' not in text_content:
+                                tie_reply += text_content
+            elif hasattr(event, 'text') and event.text:
+                text_content = str(event.text).strip()
+                if not text_content.startswith('Event from') and 'API_KEY' not in text_content:
+                    tie_reply += text_content
+
+ # 清理并输出铁拐李的回复
+ clean_tie_reply = clean_debug_output(tie_reply)
+ if clean_tie_reply:
+ print(f" {clean_tie_reply}")
+
+ # 第二轮:吕洞宾回应
+ print("\n🗣️ 吕洞宾回应:")
+        lu_prompt = f"铁拐李提到了AI的负面影响:'{clean_tie_reply[:50]}...'。作为理性分析者,请从平衡角度回应,既承认风险又指出机遇。请控制在100字以内。"
+
+ lu_content = types.Content(role='user', parts=[types.Part(text=lu_prompt)])
+ with suppress_stdout():
+ lu_response = lu_runner.run_async(
+ user_id=session.user_id,
+ session_id=session.id,
+ new_message=lu_content
+ )
+
+ lu_reply = ""
+ async for event in lu_response:
+ # 只处理包含实际文本内容的事件,过滤调试信息
+ if hasattr(event, 'content') and event.content:
+ if hasattr(event.content, 'parts') and event.content.parts:
+ for part in event.content.parts:
+ if hasattr(part, 'text') and part.text and part.text.strip():
+ text_content = str(part.text).strip()
+                            # 过滤掉调试信息和系统消息
+                            if not text_content.startswith('Event from') and 'API_KEY' not in text_content:
+                                lu_reply += text_content
+            elif hasattr(event, 'text') and event.text:
+                text_content = str(event.text).strip()
+                if not text_content.startswith('Event from') and 'API_KEY' not in text_content:
+                    lu_reply += text_content
+
+ # 清理并输出吕洞宾的回复
+ clean_lu_reply = clean_debug_output(lu_reply)
+ if clean_lu_reply:
+ print(f" {clean_lu_reply}")
+
+ # 第三轮:铁拐李再次发言
+ print("\n🗣️ 铁拐李再次发言:")
+        tie_prompt2 = f"吕洞宾提到了AI的机遇:'{clean_lu_reply[:50]}...'。请从逆向思维角度,对这些所谓的机遇进行质疑和反思。请控制在100字以内。"
+
+ tie_content2 = types.Content(role='user', parts=[types.Part(text=tie_prompt2)])
+ with suppress_stdout():
+ tie_response2 = tie_runner.run_async(
+ user_id=session.user_id,
+ session_id=session.id,
+ new_message=tie_content2
+ )
+
+ tie_reply2 = ""
+ async for event in tie_response2:
+ # 只处理包含实际文本内容的事件,过滤调试信息
+ if hasattr(event, 'content') and event.content:
+ if hasattr(event.content, 'parts') and event.content.parts:
+ for part in event.content.parts:
+ if hasattr(part, 'text') and part.text and part.text.strip():
+ text_content = str(part.text).strip()
+                            # 过滤掉调试信息和系统消息
+                            if not text_content.startswith('Event from') and 'API_KEY' not in text_content:
+                                tie_reply2 += text_content
+            elif hasattr(event, 'text') and event.text:
+                text_content = str(event.text).strip()
+                if not text_content.startswith('Event from') and 'API_KEY' not in text_content:
+                    tie_reply2 += text_content
+
+ # 清理并输出铁拐李的第二次回复
+ clean_tie_reply2 = clean_debug_output(tie_reply2)
+ if clean_tie_reply2:
+ print(f" {clean_tie_reply2}")
+
+ print("\n🎉 真实ADK论道完成!")
+ print("\n📝 智慧交锋,各抒己见,这就是稷下学宫的魅力所在。")
+
+ finally:
+ # 清理资源
+ await tie_runner.close()
+ await lu_runner.close()
+
+
+def main():
+ """主函数"""
+ print("🚀 稷下学宫 ADK 真实论道系统")
+
+ # 检查API密钥
+ api_key = os.getenv('GOOGLE_API_KEY')
+ if not api_key:
+ print("❌ 未找到 GOOGLE_API_KEY 环境变量")
+ print("请使用: doppler run -- python src/jixia/debates/adk_real_debate.py")
+ return
+
+    print("✅ API密钥已配置")
+
+ # 运行异步辩论
+ try:
+ asyncio.run(conduct_debate())
+ except Exception as e:
+ print(f"❌ 运行失败: {e}")
+
+if __name__ == "__main__":
+ main()
\ No newline at end of file
diff --git a/src/jixia/debates/adk_simple_debate.py b/src/jixia/debates/adk_simple_debate.py
new file mode 100644
index 0000000..bddbab5
--- /dev/null
+++ b/src/jixia/debates/adk_simple_debate.py
@@ -0,0 +1,82 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+稷下学宫 ADK 简单论道测试
+实现智能体间的基本对话功能
+"""
+
+import os
+from google.adk import Agent
+
+def create_debate_agents():
+ """创建论道智能体"""
+
+ # 铁拐李 - 逆向思维专家
+ tie_guai_li = Agent(
+ name="铁拐李",
+ model="gemini-2.0-flash-exp"
+ )
+
+ # 吕洞宾 - 理性分析者
+ lu_dong_bin = Agent(
+ name="吕洞宾",
+ model="gemini-2.0-flash-exp"
+ )
+
+ return tie_guai_li, lu_dong_bin
+
+def simple_debate_test():
+ """简单论道测试"""
+ print("🎭 开始简单论道测试...")
+
+ # 创建智能体
+ tie_guai_li, lu_dong_bin = create_debate_agents()
+
+ print("\n📋 论道主题: 人工智能对未来社会的影响")
+ print("\n🎯 开始论道...")
+
+ try:
+ # 测试智能体创建
+ print("\n✅ 智能体创建成功:")
+ print(f" - {tie_guai_li.name}: {tie_guai_li.model}")
+ print(f" - {lu_dong_bin.name}: {lu_dong_bin.model}")
+
+ print("\n🎉 简单论道测试完成!")
+ print("\n📝 智能体基础功能验证成功")
+
+ except Exception as e:
+ print(f"❌ 论道测试失败: {e}")
+ return False
+
+ return True
+
+def main():
+ """主函数"""
+ print("🚀 稷下学宫 ADK 简单论道系统")
+
+ # 检查API密钥
+ api_key = os.getenv('GOOGLE_API_KEY')
+ if not api_key:
+ print("❌ 未找到 GOOGLE_API_KEY 环境变量")
+ print("请使用: doppler run -- python src/jixia/debates/adk_simple_debate.py")
+ return
+
+    print("✅ API密钥已配置")
+
+ # 运行测试
+ try:
+ result = simple_debate_test()
+ if result:
+ print("\n📝 测试结果: 成功")
+ print("\n🎯 下一步开发计划:")
+ print(" 1. 学习ADK的正确调用方式")
+ print(" 2. 实现智能体对话功能")
+ print(" 3. 扩展到八仙全员论道")
+ print(" 4. 集成实时数据源")
+ else:
+ print("\n❌ 测试失败")
+ except Exception as e:
+ print(f"❌ 运行失败: {e}")
+
+if __name__ == "__main__":
+ main()
\ No newline at end of file
diff --git a/src/jixia/debates/qczh_debate.py b/src/jixia/debates/qczh_debate.py
new file mode 100644
index 0000000..eaafce9
--- /dev/null
+++ b/src/jixia/debates/qczh_debate.py
@@ -0,0 +1,165 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+太公心易 - 起承转合辩论系统
+"""
+
+import json
+from datetime import datetime
+from typing import Dict, List, Any
+from enum import Enum
+
+class DebateStage(Enum):
+ QI = "起" # 八仙按先天八卦顺序
+ CHENG = "承" # 雁阵式承接
+ ZHUAN = "转" # 自由辩论(36次handoff)
+ HE = "合" # 交替总结
+
+class QiChengZhuanHeDebate:
+ """起承转合辩论系统"""
+
+ def __init__(self):
+ # 八仙配置(先天八卦顺序)
+ self.baxian_sequence = ["吕洞宾", "何仙姑", "铁拐李", "汉钟离", "蓝采和", "张果老", "韩湘子", "曹国舅"]
+
+ # 雁阵配置
+ self.goose_formation = {
+ "positive": ["正1", "正2", "正3", "正4"],
+ "negative": ["反1", "反2", "反3", "反4"]
+ }
+
+ # 交替总结顺序
+ self.alternating_sequence = ["反1", "正1", "反2", "正2", "反3", "正3", "反4", "正4"]
+
+ # 辩论状态
+ self.current_stage = DebateStage.QI
+ self.stage_progress = 0
+ self.total_handoffs = 0
+ self.debate_history = []
+
+ # 阶段配置
+ self.stage_configs = {
+ DebateStage.QI: {"max_progress": 8, "description": "八仙按先天八卦顺序"},
+ DebateStage.CHENG: {"max_progress": 8, "description": "雁阵式承接"},
+ DebateStage.ZHUAN: {"max_progress": 36, "description": "自由辩论"},
+ DebateStage.HE: {"max_progress": 8, "description": "交替总结"}
+ }
+
+ def get_current_speaker(self) -> str:
+ """获取当前发言者"""
+ if self.current_stage == DebateStage.QI:
+ return self.baxian_sequence[self.stage_progress % 8]
+ elif self.current_stage == DebateStage.CHENG:
+ if self.stage_progress < 4:
+ return self.goose_formation["positive"][self.stage_progress]
+ else:
+ return self.goose_formation["negative"][self.stage_progress - 4]
+ elif self.current_stage == DebateStage.ZHUAN:
+ # 简化的优先级算法
+ speakers = self.goose_formation["positive"] + self.goose_formation["negative"]
+ return speakers[self.total_handoffs % 8]
+ elif self.current_stage == DebateStage.HE:
+ return self.alternating_sequence[self.stage_progress % 8]
+
+ return "未知发言者"
+
+ def advance_stage(self):
+ """推进辩论阶段"""
+ config = self.stage_configs[self.current_stage]
+
+ if self.stage_progress >= config["max_progress"] - 1:
+ self._transition_to_next_stage()
+ else:
+ self.stage_progress += 1
+
+ def _transition_to_next_stage(self):
+ """转换到下一阶段"""
+ transitions = {
+ DebateStage.QI: DebateStage.CHENG,
+ DebateStage.CHENG: DebateStage.ZHUAN,
+ DebateStage.ZHUAN: DebateStage.HE,
+ DebateStage.HE: None
+ }
+
+ next_stage = transitions[self.current_stage]
+ if next_stage:
+ self.current_stage = next_stage
+ self.stage_progress = 0
+ print(f"🎭 辩论进入{next_stage.value}阶段")
+ else:
+ print("🎉 辩论结束!")
+
+ def record_speech(self, speaker: str, message: str):
+ """记录发言"""
+ record = {
+ "timestamp": datetime.now().isoformat(),
+ "stage": self.current_stage.value,
+ "progress": self.stage_progress,
+ "speaker": speaker,
+ "message": message,
+ "handoffs": self.total_handoffs
+ }
+
+ self.debate_history.append(record)
+
+ if self.current_stage == DebateStage.ZHUAN:
+ self.total_handoffs += 1
+
+ def get_stage_info(self) -> Dict[str, Any]:
+ """获取阶段信息"""
+ config = self.stage_configs[self.current_stage]
+ return {
+ "stage": self.current_stage.value,
+ "progress": self.stage_progress + 1,
+ "max_progress": config["max_progress"],
+ "description": config["description"],
+ "current_speaker": self.get_current_speaker(),
+ "total_handoffs": self.total_handoffs
+ }
+
+ def save_state(self, filename: str = "qczh_debate_state.json"):
+ """保存状态"""
+ state = {
+ "current_stage": self.current_stage.value,
+ "stage_progress": self.stage_progress,
+ "total_handoffs": self.total_handoffs,
+ "debate_history": self.debate_history
+ }
+
+ with open(filename, 'w', encoding='utf-8') as f:
+ json.dump(state, f, ensure_ascii=False, indent=2)
+
+        print(f"💾 辩论状态已保存到 {filename}")
+
+def main():
+ """测试函数"""
+ print("🚀 起承转合辩论系统测试")
+ print("=" * 50)
+
+ debate = QiChengZhuanHeDebate()
+
+ # 测试各阶段
+ test_messages = [
+ "起:八仙按先天八卦顺序阐述观点",
+ "承:雁阵式承接,总体阐述+讥讽",
+ "转:自由辩论,36次handoff",
+ "合:交替总结,最终论证"
+ ]
+
+ for i, message in enumerate(test_messages):
+ info = debate.get_stage_info()
+ speaker = debate.get_current_speaker()
+
+ print(f"\n🎭 阶段: {info['stage']} ({info['progress']}/{info['max_progress']})")
+ print(f"🗣️ 发言者: {speaker}")
+ print(f"💬 消息: {message}")
+
+ debate.record_speech(speaker, message)
+ debate.advance_stage()
+
+ debate.save_state()
+ print("\n✅ 测试完成!")
+
+if __name__ == "__main__":
+ main()
+
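上面的阶段配置意味着一场完整论道共 8 + 8 + 36 + 8 = 60 次发言;`advance_stage` 的推进逻辑可用如下独立草图核对(简化版,不依赖本模块):

```python
# 独立草图:按 qczh_debate 的阶段配置模拟"发言→推进"循环,核对各阶段发言次数
stage_configs = [("起", 8), ("承", 8), ("转", 36), ("合", 8)]

speeches = []
for stage_name, max_progress in stage_configs:
    progress = 0
    while True:
        speeches.append((stage_name, progress))  # 记录一次发言
        if progress >= max_progress - 1:         # 与 advance_stage 的转移条件一致
            break                                # 当前阶段完成,进入下一阶段
        progress += 1

print(len(speeches))  # 60
```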
diff --git a/src/jixia/debates/qi_cheng_zhuan_he_debate.py b/src/jixia/debates/qi_cheng_zhuan_he_debate.py
new file mode 100644
index 0000000..278b502
--- /dev/null
+++ b/src/jixia/debates/qi_cheng_zhuan_he_debate.py
@@ -0,0 +1,341 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+太公心易 - 起承转合辩论系统
+基于先天八卦的八仙辩论架构
+"""
+
+import asyncio
+import json
+from datetime import datetime
+from typing import Dict, List, Any, Optional
+from dataclasses import dataclass
+from enum import Enum
+
+class DebateStage(Enum):
+ """辩论阶段枚举"""
+ QI = "起" # 八仙按先天八卦顺序
+ CHENG = "承" # 雁阵式承接
+ ZHUAN = "转" # 自由辩论(36次handoff)
+ HE = "合" # 交替总结
+
+@dataclass
+class Speaker:
+ """发言者数据类"""
+ name: str
+ role: str
+ team: str # "positive" or "negative"
+ bagua_position: Optional[int] = None # 八卦位置(0-7)
+
+@dataclass
+class DebateContext:
+ """辩论上下文"""
+ current_stage: DebateStage
+ stage_progress: int
+ total_handoffs: int
+ current_speaker: Optional[str] = None
+ last_message: Optional[str] = None
+    debate_history: Optional[List[Dict]] = None
+
+class QiChengZhuanHeDebateSystem:
+ """起承转合辩论系统"""
+
+ def __init__(self):
+ # 八仙配置(按先天八卦顺序)
+ self.baxian_speakers = {
+ "吕洞宾": Speaker("吕洞宾", "剑仙投资顾问", "neutral", 0), # 乾
+ "何仙姑": Speaker("何仙姑", "慈悲风控专家", "neutral", 1), # 兑
+ "铁拐李": Speaker("铁拐李", "逆向思维专家", "neutral", 2), # 离
+ "汉钟离": Speaker("汉钟离", "平衡协调者", "neutral", 3), # 震
+ "蓝采和": Speaker("蓝采和", "创新思维者", "neutral", 4), # 巽
+ "张果老": Speaker("张果老", "历史智慧者", "neutral", 5), # 坎
+ "韩湘子": Speaker("韩湘子", "艺术感知者", "neutral", 6), # 艮
+ "曹国舅": Speaker("曹国舅", "实务执行者", "neutral", 7) # 坤
+ }
+
+ # 雁阵队伍配置
+ self.goose_formation = {
+ "positive": ["正1", "正2", "正3", "正4"],
+ "negative": ["反1", "反2", "反3", "反4"]
+ }
+
+ # 辩论状态
+ self.context = DebateContext(
+ current_stage=DebateStage.QI,
+ stage_progress=0,
+ total_handoffs=0,
+ debate_history=[]
+ )
+
+ # 阶段配置
+ self.stage_configs = {
+ DebateStage.QI: {
+ "duration": "8-10分钟",
+ "max_progress": 8, # 八仙轮流发言
+ "description": "八仙按先天八卦顺序阐述观点"
+ },
+ DebateStage.CHENG: {
+ "duration": "8-10分钟",
+ "max_progress": 8, # 正反各4人
+ "description": "雁阵式承接,总体阐述+讥讽"
+ },
+ DebateStage.ZHUAN: {
+ "duration": "12-15分钟",
+ "max_progress": 36, # 36次handoff
+ "description": "自由辩论,优先级算法决定发言"
+ },
+ DebateStage.HE: {
+ "duration": "8-10分钟",
+ "max_progress": 8, # 交替总结
+ "description": "交替总结,最终论证"
+ }
+ }
+
+ # 优先级算法
+ self.priority_algorithm = PriorityAlgorithm()
+
+ # 记忆系统
+ self.memory_system = DebateMemorySystem()
+
+ def get_current_speaker(self) -> str:
+ """获取当前发言者"""
+ stage = self.context.current_stage
+ progress = self.context.stage_progress
+
+ if stage == DebateStage.QI:
+ return self._get_bagua_speaker(progress)
+ elif stage == DebateStage.CHENG:
+ return self._get_goose_formation_speaker(progress)
+ elif stage == DebateStage.ZHUAN:
+ return self._get_priority_speaker()
+ elif stage == DebateStage.HE:
+ return self._get_alternating_speaker(progress)
+
+ return "未知发言者"
+
+ def _get_bagua_speaker(self, progress: int) -> str:
+ """获取八卦顺序发言者"""
+ bagua_sequence = ["吕洞宾", "何仙姑", "铁拐李", "汉钟离", "蓝采和", "张果老", "韩湘子", "曹国舅"]
+ return bagua_sequence[progress % 8]
+
+ def _get_goose_formation_speaker(self, progress: int) -> str:
+ """获取雁阵发言者"""
+ if progress < 4:
+ # 正方雁阵
+ return self.goose_formation["positive"][progress]
+ else:
+ # 反方雁阵
+ return self.goose_formation["negative"][progress - 4]
+
+ def _get_priority_speaker(self) -> str:
+ """获取优先级发言者"""
+ return self.priority_algorithm.calculate_next_speaker(self.context)
+
+ def _get_alternating_speaker(self, progress: int) -> str:
+ """获取交替总结发言者"""
+ alternating_sequence = ["反1", "正1", "反2", "正2", "反3", "正3", "反4", "正4"]
+ return alternating_sequence[progress % 8]
+
+ def advance_stage(self):
+ """推进辩论阶段"""
+ current_config = self.stage_configs[self.context.current_stage]
+
+ if self.context.stage_progress >= current_config["max_progress"] - 1:
+ # 当前阶段完成,进入下一阶段
+ self._transition_to_next_stage()
+ else:
+ # 当前阶段继续
+ self.context.stage_progress += 1
+
+ def _transition_to_next_stage(self):
+ """转换到下一阶段"""
+ stage_transitions = {
+ DebateStage.QI: DebateStage.CHENG,
+ DebateStage.CHENG: DebateStage.ZHUAN,
+ DebateStage.ZHUAN: DebateStage.HE,
+ DebateStage.HE: None # 辩论结束
+ }
+
+ next_stage = stage_transitions[self.context.current_stage]
+ if next_stage:
+ self.context.current_stage = next_stage
+ self.context.stage_progress = 0
+ print(f"🎭 辩论进入{next_stage.value}阶段")
+ else:
+ print("🎉 辩论结束!")
+
+ def record_speech(self, speaker: str, message: str):
+ """记录发言"""
+ speech_record = {
+ "timestamp": datetime.now().isoformat(),
+ "stage": self.context.current_stage.value,
+ "stage_progress": self.context.stage_progress,
+ "speaker": speaker,
+ "message": message,
+ "total_handoffs": self.context.total_handoffs
+ }
+
+ self.context.debate_history.append(speech_record)
+ self.context.last_message = message
+ self.context.current_speaker = speaker
+
+ # 更新记忆系统
+ self.memory_system.store_speech(speaker, message, self.context)
+
+ # 如果是转阶段,增加handoff计数
+ if self.context.current_stage == DebateStage.ZHUAN:
+ self.context.total_handoffs += 1
+
+ def get_stage_info(self) -> Dict[str, Any]:
+ """获取当前阶段信息"""
+ stage = self.context.current_stage
+ config = self.stage_configs[stage]
+
+ return {
+ "current_stage": stage.value,
+ "stage_progress": self.context.stage_progress,
+ "max_progress": config["max_progress"],
+ "description": config["description"],
+ "current_speaker": self.get_current_speaker(),
+ "total_handoffs": self.context.total_handoffs
+ }
+
+ def save_debate_state(self, filename: str = "debate_state.json"):
+ """保存辩论状态"""
+ state_data = {
+ "context": {
+ "current_stage": self.context.current_stage.value,
+ "stage_progress": self.context.stage_progress,
+ "total_handoffs": self.context.total_handoffs,
+ "current_speaker": self.context.current_speaker,
+ "last_message": self.context.last_message
+ },
+ "debate_history": self.context.debate_history,
+ "memory_data": self.memory_system.get_memory_data()
+ }
+
+ with open(filename, 'w', encoding='utf-8') as f:
+ json.dump(state_data, f, ensure_ascii=False, indent=2)
+
+        print(f"💾 辩论状态已保存到 {filename}")
+
+class PriorityAlgorithm:
+ """优先级算法"""
+
+ def __init__(self):
+ self.speaker_weights = {
+ "rebuttal_urgency": 0.3,
+ "argument_strength": 0.25,
+ "time_pressure": 0.2,
+ "audience_reaction": 0.15,
+ "strategy_need": 0.1
+ }
+
+ def calculate_next_speaker(self, context: DebateContext) -> str:
+ """计算下一个发言者"""
+ # 简化的优先级算法
+ available_speakers = ["正1", "正2", "正3", "正4", "反1", "反2", "反3", "反4"]
+
+ # 基于当前上下文计算优先级
+ priorities = {}
+ for speaker in available_speakers:
+ priority_score = self._calculate_speaker_priority(speaker, context)
+ priorities[speaker] = priority_score
+
+ # 选择最高优先级发言者
+ return max(priorities, key=priorities.get)
+
+ def _calculate_speaker_priority(self, speaker: str, context: DebateContext) -> float:
+ """计算发言者优先级"""
+ # 简化的优先级计算
+ base_score = 0.5
+
+ # 根据发言者角色调整
+ if "正" in speaker:
+ base_score += 0.1
+ if "反" in speaker:
+ base_score += 0.1
+
+ # 根据handoff次数调整
+ if context.total_handoffs % 2 == 0:
+ base_score += 0.2
+
+ return base_score
+
+class DebateMemorySystem:
+ """辩论记忆系统"""
+
+ def __init__(self):
+ self.speaker_memories = {}
+ self.debate_memories = []
+
+ def store_speech(self, speaker: str, message: str, context: DebateContext):
+ """存储发言记忆"""
+ if speaker not in self.speaker_memories:
+ self.speaker_memories[speaker] = []
+
+ memory_entry = {
+ "timestamp": datetime.now().isoformat(),
+ "stage": context.current_stage.value,
+ "message": message,
+ "context": {
+ "stage_progress": context.stage_progress,
+ "total_handoffs": context.total_handoffs
+ }
+ }
+
+ self.speaker_memories[speaker].append(memory_entry)
+ self.debate_memories.append(memory_entry)
+
+ def get_speaker_memory(self, speaker: str, limit: int = 5) -> List[Dict]:
+ """获取发言者记忆"""
+ if speaker in self.speaker_memories:
+ return self.speaker_memories[speaker][-limit:]
+ return []
+
+ def get_memory_data(self) -> Dict[str, Any]:
+ """获取记忆数据"""
+ return {
+ "speaker_memories": self.speaker_memories,
+ "debate_memories": self.debate_memories
+ }
+
+def main():
+ """主函数 - 测试起承转合辩论系统"""
+ print("🚀 太公心易 - 起承转合辩论系统")
+ print("=" * 60)
+
+ # 创建辩论系统
+ debate_system = QiChengZhuanHeDebateSystem()
+
+ # 测试各阶段
+ test_messages = [
+ "起:八仙按先天八卦顺序阐述观点",
+ "承:雁阵式承接,总体阐述+讥讽",
+ "转:自由辩论,36次handoff",
+ "合:交替总结,最终论证"
+ ]
+
+ for i, message in enumerate(test_messages):
+ stage_info = debate_system.get_stage_info()
+ current_speaker = debate_system.get_current_speaker()
+
+ print(f"\n🎭 当前阶段: {stage_info['current_stage']}")
+ print(f"📊 进度: {stage_info['stage_progress'] + 1}/{stage_info['max_progress']}")
+ print(f"🗣️ 发言者: {current_speaker}")
+ print(f"💬 消息: {message}")
+
+ # 记录发言
+ debate_system.record_speech(current_speaker, message)
+
+ # 推进阶段
+ debate_system.advance_stage()
+
+ # 保存状态
+ debate_system.save_debate_state()
+
+ print("\n✅ 起承转合辩论系统测试完成!")
+
+if __name__ == "__main__":
+ main()
+
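`PriorityAlgorithm._calculate_speaker_priority` 目前尚未用到 `speaker_weights`;若按权重表做加权求和,大致形如下面的草图(`weighted_priority` 与各维度的 factors 取值均为假设):

```python
# 假设性草图:按 speaker_weights 对各维度评分做加权求和
weights = {
    "rebuttal_urgency": 0.3,    # 反驳紧迫度
    "argument_strength": 0.25,  # 论据强度
    "time_pressure": 0.2,       # 时间压力
    "audience_reaction": 0.15,  # 听众反应
    "strategy_need": 0.1,       # 战略需要
}

def weighted_priority(factors: dict) -> float:
    """factors 各维度取 0~1,缺省维度按 0 处理"""
    return sum(weights[key] * factors.get(key, 0.0) for key in weights)

score = weighted_priority({"rebuttal_urgency": 1.0, "argument_strength": 0.5})
print(round(score, 3))  # 0.425
```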
diff --git a/src/jixia/memory/base_memory_bank.py b/src/jixia/memory/base_memory_bank.py
new file mode 100644
index 0000000..9244514
--- /dev/null
+++ b/src/jixia/memory/base_memory_bank.py
@@ -0,0 +1,39 @@
+#!/usr/bin/env python3
+"""
+通用记忆银行抽象,便于插入不同后端(Vertex、Cloudflare AutoRAG等)
+"""
+from __future__ import annotations
+
+from typing import Dict, List, Any, Optional, Protocol, runtime_checkable
+
+
+@runtime_checkable
+class MemoryBankProtocol(Protocol):
+ async def create_memory_bank(self, agent_name: str, display_name: Optional[str] = None) -> str: ...
+
+ async def add_memory(
+ self,
+ agent_name: str,
+ content: str,
+ memory_type: str = "conversation",
+ debate_topic: str = "",
+ metadata: Optional[Dict[str, Any]] = None,
+ ) -> str: ...
+
+ async def search_memories(
+ self,
+ agent_name: str,
+ query: str,
+ memory_type: Optional[str] = None,
+ limit: int = 10,
+ ) -> List[Dict[str, Any]]: ...
+
+ async def get_agent_context(self, agent_name: str, debate_topic: str) -> str: ...
+
+ async def save_debate_session(
+ self,
+ debate_topic: str,
+ participants: List[str],
+ conversation_history: List[Dict[str, str]],
+ outcomes: Optional[Dict[str, Any]] = None,
+ ) -> None: ...
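`@runtime_checkable` 协议在 `isinstance` 检查中只核对方法名是否存在,并不校验签名;最小示例如下(`MemoryBankLike`、`LocalBank` 均为演示用的假设类名):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class MemoryBankLike(Protocol):
    """缩减版协议示例:只声明一个方法"""
    async def get_agent_context(self, agent_name: str, debate_topic: str) -> str: ...

class LocalBank:  # 未显式继承协议,靠结构匹配
    async def get_agent_context(self, agent_name: str, debate_topic: str) -> str:
        return f"# {agent_name}的记忆上下文\n\n暂无相关记忆。"

print(isinstance(LocalBank(), MemoryBankLike))  # True
print(isinstance(object(), MemoryBankLike))     # False
```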
diff --git a/src/jixia/memory/factory.py b/src/jixia/memory/factory.py
new file mode 100644
index 0000000..2f76cc5
--- /dev/null
+++ b/src/jixia/memory/factory.py
@@ -0,0 +1,37 @@
+#!/usr/bin/env python3
+"""
+记忆银行工厂:根据配置创建 Vertex 实现
+"""
+from __future__ import annotations
+
+import os
+from typing import Optional
+
+from .base_memory_bank import MemoryBankProtocol
+from .vertex_memory_bank import VertexMemoryBank
+
+
+def get_memory_backend(prefer: Optional[str] = None) -> MemoryBankProtocol:
+ """
+ 根据环境变量选择记忆后端:
+ - JIXIA_MEMORY_BACKEND=vertex (默认)
+ - 如果未设置,默认使用 Vertex
+ """
+ # 从环境变量读取后端选择,默认为 vertex
+ backend = os.getenv("JIXIA_MEMORY_BACKEND", "vertex").lower()
+
+ if prefer:
+ backend = prefer.lower()
+
+ if backend != "vertex":
+ raise ValueError(f"不支持的记忆后端: {backend},当前只支持 'vertex'")
+
+ # Vertex 作为唯一后端
+ try:
+ mem = VertexMemoryBank.from_config()
+ return mem
+ except Exception as e:
+ # 不可用时抛错
+ raise RuntimeError(
+ "未能创建 Vertex 记忆后端:请配置 Vertex (GOOGLE_*) 环境变量"
+ ) from e
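`get_memory_backend` 的选择规则(环境变量默认 `vertex`、`prefer` 参数覆盖、其余取值报错)可抽成如下独立草图(`pick_backend` 为示例函数名,不实际创建 `VertexMemoryBank`):

```python
import os
from typing import Optional

def pick_backend(prefer: Optional[str] = None) -> str:
    """复现 get_memory_backend 的后端选择规则(示例)"""
    backend = os.getenv("JIXIA_MEMORY_BACKEND", "vertex").lower()
    if prefer:
        backend = prefer.lower()  # 显式参数优先于环境变量
    if backend != "vertex":
        raise ValueError(f"不支持的记忆后端: {backend},当前只支持 'vertex'")
    return backend

print(pick_backend("Vertex"))  # vertex
```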
diff --git a/src/jixia/memory/vertex_memory_bank.py b/src/jixia/memory/vertex_memory_bank.py
new file mode 100644
index 0000000..b43c603
--- /dev/null
+++ b/src/jixia/memory/vertex_memory_bank.py
@@ -0,0 +1,463 @@
+#!/usr/bin/env python3
+"""
+Vertex AI Memory Bank 集成模块
+为稷下学宫AI辩论系统提供记忆银行功能
+"""
+
+import os
+from typing import Dict, List, Optional, Any
+from dataclasses import dataclass
+from datetime import datetime
+import json
+
+try:
+ from google.cloud import aiplatform
+ # Memory Bank 功能可能还在预览版中,先使用基础功能
+ VERTEX_AI_AVAILABLE = True
+except ImportError:
+ VERTEX_AI_AVAILABLE = False
+ print("⚠️ Google Cloud AI Platform 未安装,Memory Bank功能不可用")
+ print("安装命令: pip install google-cloud-aiplatform")
+
+from config.doppler_config import get_google_genai_config
+
+
+@dataclass
+class MemoryEntry:
+ """记忆条目数据结构"""
+ content: str
+ metadata: Dict[str, Any]
+ timestamp: datetime
+ agent_name: str
+ debate_topic: str
+ memory_type: str # "conversation", "preference", "knowledge", "strategy"
+
+
+class VertexMemoryBank:
+ """
+ Vertex AI Memory Bank 管理器
+ 为八仙辩论系统提供智能记忆功能
+ """
+
+ def __init__(self, project_id: str, location: str = "us-central1"):
+ """
+ 初始化Memory Bank
+
+ Args:
+ project_id: Google Cloud项目ID
+ location: 部署区域
+ """
+ if not VERTEX_AI_AVAILABLE:
+ print("⚠️ Google Cloud AI Platform 未安装,使用本地模拟模式")
+ # 不抛出异常,允许使用本地模拟模式
+
+ self.project_id = project_id
+ self.location = location
+ self.memory_banks = {} # 存储不同智能体的记忆银行
+ self.local_memories = {} # 本地记忆存储 (临时方案)
+
+        # 初始化AI Platform(SDK未安装时直接使用本地模拟模式)
+        if VERTEX_AI_AVAILABLE:
+            try:
+                aiplatform.init(project=project_id, location=location)
+                print(f"✅ Vertex AI 初始化成功: {project_id} @ {location}")
+            except Exception as e:
+                print(f"⚠️ Vertex AI 初始化失败,使用本地模拟模式: {e}")
+
+ # 八仙智能体名称映射
+ self.baxian_agents = {
+ "tieguaili": "铁拐李",
+ "hanzhongli": "汉钟离",
+ "zhangguolao": "张果老",
+ "lancaihe": "蓝采和",
+ "hexiangu": "何仙姑",
+ "lvdongbin": "吕洞宾",
+ "hanxiangzi": "韩湘子",
+ "caoguojiu": "曹国舅"
+ }
+
+ @classmethod
+ def from_config(cls) -> 'VertexMemoryBank':
+ """
+ 从配置创建Memory Bank实例
+
+ Returns:
+ VertexMemoryBank实例
+ """
+ config = get_google_genai_config()
+ project_id = config.get('project_id')
+ location = config.get('location', 'us-central1')
+
+ if not project_id:
+ raise ValueError("Google Cloud Project ID 未配置,请设置 GOOGLE_CLOUD_PROJECT_ID")
+
+ return cls(project_id=project_id, location=location)
+
+    async def create_memory_bank(self, agent_name: str, display_name: Optional[str] = None) -> str:
+ """
+ 为指定智能体创建记忆银行
+
+ Args:
+ agent_name: 智能体名称 (如 "tieguaili")
+ display_name: 显示名称 (如 "铁拐李的记忆银行")
+
+ Returns:
+ 记忆银行ID
+ """
+ if not display_name:
+ chinese_name = self.baxian_agents.get(agent_name, agent_name)
+ display_name = f"{chinese_name}的记忆银行"
+
+ try:
+ # 使用本地存储模拟记忆银行 (临时方案)
+ memory_bank_id = f"memory_bank_{agent_name}_{self.project_id}"
+
+ # 初始化本地记忆存储
+ if agent_name not in self.local_memories:
+ self.local_memories[agent_name] = []
+
+ self.memory_banks[agent_name] = memory_bank_id
+ print(f"✅ 为 {display_name} 创建记忆银行: {memory_bank_id}")
+
+ return memory_bank_id
+
+ except Exception as e:
+ print(f"❌ 创建记忆银行失败: {e}")
+ raise
+
+ async def add_memory(self,
+ agent_name: str,
+ content: str,
+ memory_type: str = "conversation",
+ debate_topic: str = "",
+                        metadata: Optional[Dict[str, Any]] = None) -> str:
+ """
+ 添加记忆到指定智能体的记忆银行
+
+ Args:
+ agent_name: 智能体名称
+ content: 记忆内容
+ memory_type: 记忆类型 ("conversation", "preference", "knowledge", "strategy")
+ debate_topic: 辩论主题
+ metadata: 额外元数据
+
+ Returns:
+ 记忆ID
+ """
+ if agent_name not in self.memory_banks:
+ await self.create_memory_bank(agent_name)
+
+ if metadata is None:
+ metadata = {}
+
+ # 构建记忆条目
+ memory_entry = MemoryEntry(
+ content=content,
+ metadata={
+ **metadata,
+ "agent_name": agent_name,
+ "chinese_name": self.baxian_agents.get(agent_name, agent_name),
+ "memory_type": memory_type,
+ "debate_topic": debate_topic,
+ "system": "jixia_academy"
+ },
+ timestamp=datetime.now(),
+ agent_name=agent_name,
+ debate_topic=debate_topic,
+ memory_type=memory_type
+ )
+
+ try:
+ # 使用本地存储添加记忆 (临时方案)
+ memory_id = f"memory_{agent_name}_{len(self.local_memories[agent_name])}"
+
+ # 添加到本地存储
+ memory_data = {
+ "id": memory_id,
+ "content": content,
+ "metadata": memory_entry.metadata,
+ "timestamp": memory_entry.timestamp.isoformat(),
+ "memory_type": memory_type,
+ "debate_topic": debate_topic
+ }
+
+ self.local_memories[agent_name].append(memory_data)
+
+ print(f"✅ 为 {self.baxian_agents.get(agent_name)} 添加记忆: {memory_type}")
+ return memory_id
+
+ except Exception as e:
+ print(f"❌ 添加记忆失败: {e}")
+ raise
+
+ async def search_memories(self,
+ agent_name: str,
+ query: str,
+                             memory_type: Optional[str] = None,
+ limit: int = 10) -> List[Dict[str, Any]]:
+ """
+ 搜索智能体的相关记忆
+
+ Args:
+ agent_name: 智能体名称
+ query: 搜索查询
+ memory_type: 记忆类型过滤
+ limit: 返回结果数量限制
+
+ Returns:
+ 相关记忆列表
+ """
+ if agent_name not in self.memory_banks:
+ return []
+
+ try:
+ # 使用本地存储搜索记忆 (临时方案)
+ if agent_name not in self.local_memories:
+ return []
+
+ memories = self.local_memories[agent_name]
+ results = []
+
+ # 简单的文本匹配搜索
+ query_lower = query.lower()
+
+ for memory in memories:
+ # 检查记忆类型过滤
+ if memory_type and memory.get("memory_type") != memory_type:
+ continue
+
+ # 检查内容匹配
+ content_lower = memory["content"].lower()
+ debate_topic_lower = memory.get("debate_topic", "").lower()
+
+ # 在内容或辩论主题中搜索
+ if query_lower in content_lower or query_lower in debate_topic_lower:
+                    # 计算简单的相关性分数
+                    # 注:中文内容不含空格,split() 统计的"词数"偏小,该分数仅用于粗略排序
+                    content_matches = content_lower.count(query_lower)
+                    topic_matches = debate_topic_lower.count(query_lower)
+                    total_words = len(content_lower.split()) + len(debate_topic_lower.split())
+
+                    relevance_score = (content_matches + topic_matches) / max(total_words, 1)
+
+ results.append({
+ "content": memory["content"],
+ "metadata": memory["metadata"],
+ "relevance_score": relevance_score
+ })
+
+ # 按相关性排序并限制结果数量
+ results.sort(key=lambda x: x["relevance_score"], reverse=True)
+ return results[:limit]
+
+ except Exception as e:
+ print(f"❌ 搜索记忆失败: {e}")
+ return []
+
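`search_memories` 的打分公式是 `(内容命中次数 + 主题命中次数) / max(词数, 1)`;独立草图如下(`relevance` 为示例函数名)。注意中文不含空格,`split()` 词数偏小,该分数只适合做粗排:

```python
def relevance(query: str, content: str, topic: str) -> float:
    """复现 search_memories 中的简易相关性打分(示例)"""
    q = query.lower()
    content_l, topic_l = content.lower(), topic.lower()
    matches = content_l.count(q) + topic_l.count(q)                  # 命中次数
    total_words = len(content_l.split()) + len(topic_l.split())      # 按空格计词
    return matches / max(total_words, 1)

score = relevance("ai", "AI will reshape AI research", "AI impact")
print(round(score, 3))  # 0.429,即 3 次命中 / 7 个词
```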
+ async def get_agent_context(self, agent_name: str, debate_topic: str) -> str:
+ """
+ 获取智能体在特定辩论主题下的上下文记忆
+
+ Args:
+ agent_name: 智能体名称
+ debate_topic: 辩论主题
+
+ Returns:
+ 格式化的上下文字符串
+ """
+ # 搜索相关记忆
+ conversation_memories = await self.search_memories(
+ agent_name, debate_topic, "conversation", limit=5
+ )
+ preference_memories = await self.search_memories(
+ agent_name, debate_topic, "preference", limit=3
+ )
+ strategy_memories = await self.search_memories(
+ agent_name, debate_topic, "strategy", limit=3
+ )
+
+ # 构建上下文
+ context_parts = []
+
+ if conversation_memories:
+ context_parts.append("## 历史对话记忆")
+ for mem in conversation_memories:
+ context_parts.append(f"- {mem['content']}")
+
+ if preference_memories:
+ context_parts.append("\n## 偏好记忆")
+ for mem in preference_memories:
+ context_parts.append(f"- {mem['content']}")
+
+ if strategy_memories:
+ context_parts.append("\n## 策略记忆")
+ for mem in strategy_memories:
+ context_parts.append(f"- {mem['content']}")
+
+ chinese_name = self.baxian_agents.get(agent_name, agent_name)
+ if context_parts:
+ return f"# {chinese_name}的记忆上下文\n\n" + "\n".join(context_parts)
+ else:
+ return f"# {chinese_name}的记忆上下文\n\n暂无相关记忆。"
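+
+    # Shape of the returned context (sketch; section headers appear only for
+    # the memory types that were actually found):
+    #
+    #   # 铁拐李的记忆上下文
+    #
+    #   ## 历史对话记忆
+    #   - <conversation memory content>
+    #
+    #   ## 偏好记忆
+    #   - <preference memory content>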
+
+ async def save_debate_session(self,
+ debate_topic: str,
+ participants: List[str],
+ conversation_history: List[Dict[str, str]],
+ outcomes: Dict[str, Any] = None) -> None:
+        """
+        Save a complete debate session to each participant's memory bank.
+
+        Args:
+            debate_topic: Debate topic
+            participants: List of participants
+            conversation_history: Conversation history
+            outcomes: Debate outcomes and insights
+        """
+ for agent_name in participants:
+ if agent_name not in self.baxian_agents:
+ continue
+
+            # Save the conversation history
+ conversation_summary = self._summarize_conversation(
+ conversation_history, agent_name
+ )
+ await self.add_memory(
+ agent_name=agent_name,
+ content=conversation_summary,
+ memory_type="conversation",
+ debate_topic=debate_topic,
+ metadata={
+ "participants": participants,
+ "session_length": len(conversation_history)
+ }
+ )
+
+            # Save strategy insights
+ if outcomes:
+ strategy_insight = self._extract_strategy_insight(
+ outcomes, agent_name
+ )
+ if strategy_insight:
+ await self.add_memory(
+ agent_name=agent_name,
+ content=strategy_insight,
+ memory_type="strategy",
+ debate_topic=debate_topic,
+ metadata={"session_outcome": outcomes}
+ )
+
+ def _summarize_conversation(self,
+ conversation_history: List[Dict[str, str]],
+ agent_name: str) -> str:
+        """
+        Summarize the conversation history for a specific agent.
+
+        Args:
+            conversation_history: Conversation history
+            agent_name: Agent name
+
+        Returns:
+            Conversation summary
+        """
+ agent_messages = [
+ msg for msg in conversation_history
+ if msg.get("agent") == agent_name
+ ]
+
+ if not agent_messages:
+ return "本次辩论中未发言"
+
+ chinese_name = self.baxian_agents.get(agent_name, agent_name)
+ summary = f"{chinese_name}在本次辩论中的主要观点:\n"
+
+        for i, msg in enumerate(agent_messages[:3], 1):  # keep only the first 3 key points
+ summary += f"{i}. {msg.get('content', '')[:100]}...\n"
+
+ return summary
+
+ def _extract_strategy_insight(self,
+ outcomes: Dict[str, Any],
+ agent_name: str) -> Optional[str]:
+        """
+        Extract a strategy insight from the debate outcomes.
+
+        Args:
+            outcomes: Debate outcomes
+            agent_name: Agent name
+
+        Returns:
+            Strategy insight, or None
+        """
+        # Extraction here depends on the actual structure of `outcomes`;
+        # for now this returns a simple example insight.
+ chinese_name = self.baxian_agents.get(agent_name, agent_name)
+
+ if "winner" in outcomes and outcomes["winner"] == agent_name:
+ return f"{chinese_name}在本次辩论中获胜,其论证策略值得保持。"
+ elif "insights" in outcomes and agent_name in outcomes["insights"]:
+ return outcomes["insights"][agent_name]
+
+ return None
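+
+    # Assumed shape of `outcomes` for _extract_strategy_insight
+    # (hypothetical example; adapt to the real debate-result structure):
+    #
+    #   outcomes = {
+    #       "winner": "tieguaili",
+    #       "insights": {"lvdongbin": "理性分析在本轮占优。"}
+    #   }
+    #
+    # With that input, the method returns the built-in "winning strategy"
+    # message for tieguaili and the stored insight string for lvdongbin.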
+
+
+# Convenience functions
+async def initialize_baxian_memory_banks(project_id: str, location: str = "us-central1") -> VertexMemoryBank:
+    """
+    Initialize memory banks for all eight-immortal (baxian) agents.
+
+    Args:
+        project_id: Google Cloud project ID
+        location: Deployment region
+
+    Returns:
+        A configured VertexMemoryBank instance
+    """
+ memory_bank = VertexMemoryBank(project_id, location)
+
+ print("🏛️ 正在为稷下学宫八仙创建记忆银行...")
+
+ for agent_key, chinese_name in memory_bank.baxian_agents.items():
+ try:
+ await memory_bank.create_memory_bank(agent_key)
+ except Exception as e:
+ print(f"⚠️ 创建 {chinese_name} 记忆银行时出错: {e}")
+
+ print("✅ 八仙记忆银行初始化完成")
+ return memory_bank
+
+
+if __name__ == "__main__":
+ import asyncio
+
+ async def test_memory_bank():
+        """Test Memory Bank functionality"""
+ try:
+            # Create a Memory Bank from configuration
+ memory_bank = VertexMemoryBank.from_config()
+
+            # Test creating a memory bank
+ await memory_bank.create_memory_bank("tieguaili")
+
+            # Test adding a memory
+ await memory_bank.add_memory(
+ agent_name="tieguaili",
+ content="在讨论NVIDIA股票时,我倾向于逆向思维,关注潜在风险。",
+ memory_type="preference",
+ debate_topic="NVIDIA投资分析"
+ )
+
+            # Test searching memories
+ results = await memory_bank.search_memories(
+ agent_name="tieguaili",
+ query="NVIDIA",
+ limit=5
+ )
+
+ print(f"搜索结果: {len(results)} 条记忆")
+ for result in results:
+ print(f"- {result['content']}")
+
+ except Exception as e:
+ print(f"❌ 测试失败: {e}")
+
+    # Run the test
+ asyncio.run(test_memory_bank())
diff --git a/test-api-example.js b/test-api-example.js
new file mode 100644
index 0000000..ff87daa
--- /dev/null
+++ b/test-api-example.js
@@ -0,0 +1,299 @@
+// API test example script
+// Demonstrates CRUD operations against the Hyperdrive API
+
+const API_BASE_URL = 'https://hyperdrive-neondb-test..workers.dev';
+const API_KEY = 'your-api-key'; // Optional; only needed when API_SECRET is set
+
+// Generic request helper
+async function apiRequest(endpoint, options = {}) {
+ const url = `${API_BASE_URL}${endpoint}`;
+ const headers = {
+ 'Content-Type': 'application/json',
+ ...(API_KEY && { 'X-API-Key': API_KEY }),
+ ...options.headers
+ };
+
+ try {
+ const response = await fetch(url, {
+ ...options,
+ headers
+ });
+
+ const data = await response.json();
+
+ if (!response.ok) {
+ throw new Error(`API Error: ${data.message || response.statusText}`);
+ }
+
+ return data;
+ } catch (error) {
+ console.error(`Request failed for ${endpoint}:`, error.message);
+ throw error;
+ }
+}
+
+// API test helpers
+class ApiTester {
+ static async testHealthCheck() {
+ console.log('\n🏥 Testing health check...');
+ try {
+ const result = await apiRequest('/health');
+ console.log('✅ Health check passed:', result.data);
+ return true;
+ } catch (error) {
+ console.log('❌ Health check failed:', error.message);
+ return false;
+ }
+ }
+
+ static async initializeDatabase() {
+ console.log('\n🗄️ Initializing database...');
+ try {
+ const result = await apiRequest('/init', { method: 'POST' });
+ console.log('✅ Database initialized:', result.message);
+ return true;
+ } catch (error) {
+ console.log('❌ Database initialization failed:', error.message);
+ return false;
+ }
+ }
+
+ static async createUser(name, email) {
+ console.log(`\n👤 Creating user: ${name} (${email})...`);
+ try {
+ const result = await apiRequest('/users', {
+ method: 'POST',
+ body: JSON.stringify({ name, email })
+ });
+ console.log('✅ User created:', result.data);
+ return result.data;
+ } catch (error) {
+ console.log('❌ User creation failed:', error.message);
+ return null;
+ }
+ }
+
+ static async getUsers(page = 1, limit = 10, search = null) {
+ console.log(`\n📋 Getting users (page ${page}, limit ${limit}${search ? `, search: ${search}` : ''})...`);
+ try {
+ let endpoint = `/users?page=${page}&limit=${limit}`;
+ if (search) endpoint += `&search=${encodeURIComponent(search)}`;
+
+ const result = await apiRequest(endpoint);
+ console.log('✅ Users retrieved:', {
+ count: result.data.length,
+ total: result.meta.total,
+ users: result.data.map(u => `${u.name} (${u.email})`)
+ });
+ return result;
+ } catch (error) {
+ console.log('❌ Failed to get users:', error.message);
+ return null;
+ }
+ }
+
+ static async getUserById(id) {
+ console.log(`\n🔍 Getting user by ID: ${id}...`);
+ try {
+ const result = await apiRequest(`/users/${id}`);
+ console.log('✅ User found:', result.data);
+ return result.data;
+ } catch (error) {
+ console.log('❌ Failed to get user:', error.message);
+ return null;
+ }
+ }
+
+ static async updateUser(id, updates) {
+ console.log(`\n✏️ Updating user ${id}:`, updates);
+ try {
+ const result = await apiRequest(`/users/${id}`, {
+ method: 'PUT',
+ body: JSON.stringify(updates)
+ });
+ console.log('✅ User updated:', result.data);
+ return result.data;
+ } catch (error) {
+ console.log('❌ Failed to update user:', error.message);
+ return null;
+ }
+ }
+
+ static async deleteUser(id) {
+ console.log(`\n🗑️ Deleting user ${id}...`);
+ try {
+ const result = await apiRequest(`/users/${id}`, { method: 'DELETE' });
+ console.log('✅ User deleted:', result.message);
+ return true;
+ } catch (error) {
+ console.log('❌ Failed to delete user:', error.message);
+ return false;
+ }
+ }
+
+ static async getApiDocs() {
+ console.log('\n📚 Getting API documentation...');
+ try {
+ const result = await apiRequest('/docs');
+ console.log('✅ API Documentation:');
+ console.log('Endpoints:', result.data.endpoints);
+ console.log('Authentication:', result.data.authentication);
+ console.log('Examples:', result.data.examples);
+ return result.data;
+ } catch (error) {
+ console.log('❌ Failed to get API docs:', error.message);
+ return null;
+ }
+ }
+}
+
+// Full test flow
+async function runFullTest() {
+ console.log('🚀 Starting Hyperdrive API Test Suite');
+ console.log('=====================================');
+
+  // 1. Health check
+ const healthOk = await ApiTester.testHealthCheck();
+ if (!healthOk) {
+ console.log('\n❌ Health check failed. Please check your deployment.');
+ return;
+ }
+
+  // 2. Fetch the API docs
+ await ApiTester.getApiDocs();
+
+  // 3. Initialize the database
+ await ApiTester.initializeDatabase();
+
+  // 4. Create test users
+ const user1 = await ApiTester.createUser('张三', 'zhangsan@example.com');
+ const user2 = await ApiTester.createUser('李四', 'lisi@example.com');
+ const user3 = await ApiTester.createUser('王五', 'wangwu@example.com');
+
+ if (!user1 || !user2 || !user3) {
+ console.log('\n❌ Failed to create test users.');
+ return;
+ }
+
+  // 5. List users
+ await ApiTester.getUsers();
+
+  // 6. Search users
+ await ApiTester.getUsers(1, 10, '张');
+
+  // 7. Fetch a single user
+ await ApiTester.getUserById(user1.id);
+
+  // 8. Update a user
+ await ApiTester.updateUser(user1.id, {
+ name: '张三丰',
+ email: 'zhangsanfeng@example.com'
+ });
+
+  // 9. Verify the update
+ await ApiTester.getUserById(user1.id);
+
+  // 10. Pagination test
+  await ApiTester.getUsers(1, 2); // page 1, 2 per page
+  await ApiTester.getUsers(2, 2); // page 2, 2 per page
+
+  // 11. Delete a user
+ await ApiTester.deleteUser(user3.id);
+
+  // 12. Verify the deletion
+  await ApiTester.getUserById(user3.id); // should return 404
+
+  // 13. Final user list
+ await ApiTester.getUsers();
+
+ console.log('\n🎉 API Test Suite Completed!');
+ console.log('============================');
+}
+
+// Performance test
+async function performanceTest() {
+ console.log('\n⚡ Performance Test');
+ console.log('==================');
+
+ const startTime = Date.now();
+ const promises = [];
+
+  // Create 10 users concurrently
+ for (let i = 0; i < 10; i++) {
+ promises.push(
+ ApiTester.createUser(`测试用户${i}`, `test${i}@example.com`)
+ );
+ }
+
+ try {
+ const results = await Promise.all(promises);
+ const endTime = Date.now();
+ const duration = endTime - startTime;
+
+ console.log(`✅ Created ${results.filter(r => r).length} users in ${duration}ms`);
+ console.log(`📊 Average: ${(duration / 10).toFixed(2)}ms per user`);
+
+    // Clean up test data
+    console.log('\n🧹 Cleaning up test data...');
+ for (const user of results.filter(r => r)) {
+ await ApiTester.deleteUser(user.id);
+ }
+
+ } catch (error) {
+ console.log('❌ Performance test failed:', error.message);
+ }
+}
+
+// Error-handling test
+async function errorHandlingTest() {
+ console.log('\n🚨 Error Handling Test');
+ console.log('======================');
+
+  // Invalid user data
+  console.log('\n Testing invalid user data...');
+  await ApiTester.createUser('', 'invalid-email'); // should fail
+
+  // Non-existent user
+  console.log('\n Testing non-existent user...');
+  await ApiTester.getUserById(99999); // should return 404
+
+  // Invalid update
+  console.log('\n Testing invalid update...');
+  await ApiTester.updateUser(99999, { name: 'Test' }); // should return 404
+}
+
+// Main entry point
+async function main() {
+  console.log('Make sure the Worker is deployed and API_BASE_URL is updated');
+  console.log('If API_SECRET is set, update the API_KEY variable\n');
+
+ try {
+ await runFullTest();
+ await performanceTest();
+ await errorHandlingTest();
+ } catch (error) {
+ console.error('\n💥 Test suite failed:', error.message);
+ }
+}
+
+// When run directly as a script
+if (typeof window === 'undefined') {
+  // Node.js environment: Node 18+ ships a global fetch; fall back to
+  // node-fetch only on older runtimes.
+  if (typeof fetch === 'undefined') {
+    global.fetch = require('node-fetch');
+  }
+  main();
+} else {
+  // Browser environment
+  console.log('Run main() from the browser console');
+}
+
+// Exports for use by other modules
+if (typeof module !== 'undefined' && module.exports) {
+ module.exports = {
+ ApiTester,
+ runFullTest,
+ performanceTest,
+ errorHandlingTest,
+ main
+ };
+}
\ No newline at end of file
diff --git a/test-hyperdrive-remote.js b/test-hyperdrive-remote.js
new file mode 100644
index 0000000..03954e5
--- /dev/null
+++ b/test-hyperdrive-remote.js
@@ -0,0 +1,79 @@
+// Test script to validate Hyperdrive configuration for remote deployment
+// This script helps test the configuration without local PostgreSQL
+
+const { execSync } = require('child_process');
+const fs = require('fs');
+
+console.log('🚀 Testing Hyperdrive Configuration for Remote Deployment');
+console.log('======================================================');
+
+try {
+ // Check wrangler configuration
+ console.log('\n📋 Validating wrangler.toml...');
+
+ const wranglerContent = fs.readFileSync('wrangler.toml', 'utf8');
+ console.log('✅ wrangler.toml loaded successfully');
+
+  // Validate configuration syntax (wrangler has no dedicated config-validate
+  // command; a dry-run deploy parses wrangler.toml without publishing)
+  try {
+    execSync('wrangler deploy --dry-run', { stdio: 'pipe' });
+    console.log('✅ wrangler.toml syntax is valid');
+  } catch (error) {
+    console.log('⚠️ Configuration validation warning (this is normal for Hyperdrive)');
+  }
+
+ // Check if we can authenticate with Cloudflare
+ console.log('\n🔐 Checking Cloudflare authentication...');
+ try {
+ const whoami = execSync('wrangler whoami', { encoding: 'utf8' });
+ console.log('✅ Authenticated with Cloudflare');
+ console.log(` ${whoami.trim()}`);
+ } catch (error) {
+ console.log('❌ Not authenticated with Cloudflare');
+ console.log(' Run: wrangler login');
+ return;
+ }
+
+ // List Hyperdrive configurations
+ console.log('\n🔗 Checking Hyperdrive configurations...');
+ try {
+ const hyperdrives = execSync('wrangler hyperdrive list', { encoding: 'utf8' });
+ console.log('✅ Hyperdrive configurations:');
+ console.log(hyperdrives);
+ } catch (error) {
+ console.log('⚠️ Could not list Hyperdrive configurations');
+ console.log(' Error:', error.message);
+ }
+
+ // Check specific Hyperdrive
+ console.log('\n🎯 Checking specific Hyperdrive ID...');
+ try {
+ const hyperdriveInfo = execSync('wrangler hyperdrive get ef43924d89064cddabfaccf06aadfab6', { encoding: 'utf8' });
+ console.log('✅ Hyperdrive configuration found:');
+ console.log(hyperdriveInfo);
+ } catch (error) {
+ console.log('❌ Could not find Hyperdrive configuration');
+ console.log(' Error:', error.message);
+ console.log(' Make sure the Hyperdrive ID is correct and exists in your account');
+ }
+
+ console.log('\n📝 Configuration Summary:');
+ console.log(' - Worker Name: hyperdrive-neondb-test');
+ console.log(' - Hyperdrive ID: ef43924d89064cddabfaccf06aadfab6');
+ console.log(' - Binding: HYPERDRIVE');
+ console.log(' - Database Type: NeonDB (PostgreSQL)');
+
+ console.log('\n🚀 Deployment Commands:');
+ console.log(' 1. Deploy to production: wrangler deploy');
+ console.log(' 2. Test endpoints after deployment:');
+ console.log(' - https://hyperdrive-neondb-test..workers.dev/test-connection');
+ console.log(' - https://hyperdrive-neondb-test..workers.dev/test-query');
+
+ console.log('\n💡 Tips:');
+ console.log(' - Hyperdrive provides connection pooling and caching for your database');
+ console.log(' - It reduces latency and improves performance for database queries');
+ console.log(' - The worker will automatically use the Hyperdrive connection in production');
+
+} catch (error) {
+ console.error('❌ Error during testing:', error.message);
+}
\ No newline at end of file
diff --git a/test-hyperdrive.js b/test-hyperdrive.js
new file mode 100644
index 0000000..349a2c4
--- /dev/null
+++ b/test-hyperdrive.js
@@ -0,0 +1,93 @@
+// Simple test script for Hyperdrive configuration
+// This script helps verify the wrangler.toml configuration
+
+console.log('🚀 Hyperdrive NeonDB Test Configuration');
+console.log('=====================================');
+
+// Check if wrangler.toml exists and has correct configuration
+const fs = require('fs');
+const path = require('path');
+
+try {
+ const wranglerPath = path.join(__dirname, 'wrangler.toml');
+
+ if (fs.existsSync(wranglerPath)) {
+ console.log('✅ wrangler.toml found');
+
+ const content = fs.readFileSync(wranglerPath, 'utf8');
+
+ // Check for Hyperdrive configuration
+ if (content.includes('hyperdrive')) {
+ console.log('✅ Hyperdrive configuration found');
+ } else {
+ console.log('❌ Hyperdrive configuration missing');
+ }
+
+ // Check for binding
+ if (content.includes('binding = "HYPERDRIVE"')) {
+ console.log('✅ HYPERDRIVE binding configured');
+ } else {
+ console.log('❌ HYPERDRIVE binding missing');
+ }
+
+ // Check for Hyperdrive ID
+ if (content.includes('ef43924d89064cddabfaccf06aadfab6')) {
+ console.log('✅ Hyperdrive ID configured');
+ } else {
+ console.log('❌ Hyperdrive ID missing');
+ }
+
+ // Check for nodejs_compat
+ if (content.includes('nodejs_compat')) {
+ console.log('✅ nodejs_compat flag enabled');
+ } else {
+ console.log('❌ nodejs_compat flag missing');
+ }
+
+ } else {
+ console.log('❌ wrangler.toml not found');
+ }
+
+ // Check if src/index.ts exists
+ const indexPath = path.join(__dirname, 'src', 'index.ts');
+ if (fs.existsSync(indexPath)) {
+ console.log('✅ src/index.ts found');
+ } else {
+ console.log('❌ src/index.ts missing');
+ }
+
+ // Check if package.json exists
+ const packagePath = path.join(__dirname, 'package.json');
+ if (fs.existsSync(packagePath)) {
+ console.log('✅ package.json found');
+
+ const packageContent = JSON.parse(fs.readFileSync(packagePath, 'utf8'));
+
+ // Check for required dependencies
+ if (packageContent.dependencies && packageContent.dependencies.pg) {
+ console.log('✅ pg dependency configured');
+ } else {
+ console.log('❌ pg dependency missing');
+ }
+
+ if (packageContent.devDependencies && packageContent.devDependencies['@cloudflare/workers-types']) {
+ console.log('✅ Cloudflare Workers types configured');
+ } else {
+ console.log('❌ Cloudflare Workers types missing');
+ }
+ }
+
+ console.log('\n📋 Next Steps:');
+ console.log('1. Run: wrangler dev --local (for local testing)');
+ console.log('2. Run: wrangler dev (for remote testing with Hyperdrive)');
+ console.log('3. Test endpoints:');
+ console.log(' - http://localhost:8787/test-connection');
+ console.log(' - http://localhost:8787/test-query');
+ console.log('\n🔧 Hyperdrive Configuration:');
+ console.log(' - Hyperdrive ID: ef43924d89064cddabfaccf06aadfab6');
+ console.log(' - Binding: HYPERDRIVE');
+ console.log(' - Database: NeonDB (PostgreSQL)');
+
+} catch (error) {
+ console.error('❌ Error checking configuration:', error.message);
+}
\ No newline at end of file
diff --git a/test-shushu-api.sh b/test-shushu-api.sh
new file mode 100755
index 0000000..bf49722
--- /dev/null
+++ b/test-shushu-api.sh
@@ -0,0 +1,67 @@
+#!/bin/bash
+
+# Shushu (术数) book API test script
+# Exercises the Hyperdrive + NeonDB shushu-book query system via curl
+
+API_BASE="https://hyperdrive.seekkey.tech"
+
+echo "🚀 术数书 Hyperdrive API 测试"
+echo "================================"
+
+# Connection test
+echo ""
+echo "🔗 测试数据库连接..."
+curl -s "$API_BASE/test-connection" | jq -r '.message // .error // "连接测试完成"'
+
+# 查询表结构
+echo ""
+echo "📋 查询数据库表结构..."
+echo "发现的表:"
+curl -s "$API_BASE/query-tables" | jq -r '.tables[]? | "- \(.table_name) (\(.table_schema))"'
+
+# Fetch shushu-book statistics
+echo ""
+echo "📊 术数书统计信息..."
+curl -s "$API_BASE/shushu-stats" | jq -r '.existing_tables[]? | "- \(.table_name): \(.record_count) 条记录"'
+
+# Query shushu-book contents
+echo ""
+echo "📚 查询术数书内容 (前3条)..."
+echo "书籍信息:"
+curl -s "$API_BASE/query-shushu?limit=3" | jq -r '.data[]? | "- ID: \(.id), 标题: \(.title), 作者: \(.author), 类别: \(.category)"'
+
+# Search tests
+echo ""
+echo "🔍 搜索测试..."
+echo "搜索关键词: 易经"
+curl -s "$API_BASE/search-shushu?q=易经&limit=2" | jq -r '.total_matches // 0 | "找到 \(.) 条匹配记录"'
+
+echo "搜索关键词: 八卦"
+curl -s "$API_BASE/search-shushu?q=八卦&limit=2" | jq -r '.total_matches // 0 | "找到 \(.) 条匹配记录"'
+
+echo "搜索关键词: 面相"
+curl -s "$API_BASE/search-shushu?q=面相&limit=2" | jq -r '.total_matches // 0 | "找到 \(.) 条匹配记录"'
+
+# Performance test
+echo ""
+echo "⚡ 性能测试..."
+echo "测试查询响应时间:"
+time curl -s "$API_BASE/query-shushu?limit=1" > /dev/null
+
+echo ""
+echo "✅ API 测试完成!"
+echo ""
+echo "📖 可用端点:"
+echo "- GET $API_BASE/ - 系统信息"
+echo "- GET $API_BASE/test-connection - 测试连接"
+echo "- GET $API_BASE/query-tables - 查询表结构"
+echo "- GET $API_BASE/query-shushu?limit=N - 查询术数书"
+echo "- GET $API_BASE/search-shushu?q=keyword&limit=N - 搜索术数书"
+echo "- GET $API_BASE/shushu-stats - 统计信息"
+echo ""
+echo "🎯 与 AutoRAG 对比优势:"
+echo "- ✅ 毫秒级响应 (Hyperdrive 边缘缓存)"
+echo "- ✅ 精确查询 (SQL vs 向量相似性)"
+echo "- ✅ 实时数据 (直连数据库)"
+echo "- ✅ 成本优化 (连接池 + 缓存)"
+echo "- ✅ 全球分布 (Cloudflare 边缘网络)"
\ No newline at end of file
diff --git a/test_alpha_vantage_meta.py b/tests/test_alpha_vantage_meta.py
similarity index 100%
rename from test_alpha_vantage_meta.py
rename to tests/test_alpha_vantage_meta.py
diff --git a/tests/test_google_adk.py b/tests/test_google_adk.py
new file mode 100644
index 0000000..9cf926f
--- /dev/null
+++ b/tests/test_google_adk.py
@@ -0,0 +1,44 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Test the Google ADK installation and basic functionality
+"""
+
+import os
+from google.adk import Agent
+
+def test_adk_installation():
+    """Verify that the ADK is installed correctly"""
+ try:
+        # Create a simple test agent
+ test_agent = Agent(
+ name="测试智能体",
+ model="gemini-2.0-flash-exp"
+ )
+
+ print("✅ Google ADK 安装成功!")
+ print(f"智能体名称: {test_agent.name}")
+ print(f"使用模型: {test_agent.model}")
+ print(f"描述: {test_agent.description}")
+
+        # Check the environment variable
+ google_api_key = os.getenv('GOOGLE_API_KEY')
+ if google_api_key:
+ print(f"✅ GOOGLE_API_KEY 已配置 (长度: {len(google_api_key)} 字符)")
+ else:
+ print("⚠️ GOOGLE_API_KEY 未配置,需要设置 API 密钥")
+ print("请访问 https://aistudio.google.com/ 获取 API 密钥")
+
+ return True
+
+ except Exception as e:
+ print(f"❌ ADK 安装测试失败: {e}")
+ return False
+
+if __name__ == "__main__":
+ print("🚀 开始测试 Google ADK 安装...")
+ test_adk_installation()
+ print("\n📝 下一步: 配置 GOOGLE_API_KEY 环境变量")
+ print(" 1. 访问 https://aistudio.google.com/")
+ print(" 2. 获取 API 密钥")
+ print(" 3. 在 Doppler 中设置: doppler secrets set GOOGLE_API_KEY=your_key")
\ No newline at end of file
diff --git a/tests/test_memory_bank.py b/tests/test_memory_bank.py
new file mode 100644
index 0000000..1f1c519
--- /dev/null
+++ b/tests/test_memory_bank.py
@@ -0,0 +1,147 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Test Google ADK Memory Bank functionality
+"""
+
+import os
+import asyncio
+from datetime import datetime
+
+from google.adk import Agent
+
+async def test_memory_bank():
+    """Test basic Memory Bank functionality"""
+    print("🧠 测试 Google ADK Memory Bank...")
+
+    try:
+        # Import lazily so the ImportError fallback below can run when this
+        # ADK build does not ship a memory module.
+        from google.adk.memory import MemoryBank, MemoryItem
+
+        # Create the memory bank
+        memory_bank = MemoryBank(
+            name="test_memory_bank",
+            description="测试用的记忆银行"
+        )
+
+ print("✅ Memory Bank 创建成功")
+
+        # Add a memory item
+ memory_item = MemoryItem(
+ content="这是一个测试记忆:比特币在2021年达到历史最高点69000美元",
+ metadata={
+ "type": "market_data",
+ "asset": "bitcoin",
+ "timestamp": datetime.now().isoformat()
+ }
+ )
+
+ await memory_bank.add_memory(memory_item)
+ print("✅ 记忆添加成功")
+
+        # Search memories
+ search_results = await memory_bank.search("比特币", limit=5)
+ print(f"✅ 记忆搜索成功,找到 {len(search_results)} 条相关记忆")
+
+ for i, memory in enumerate(search_results):
+ print(f" {i+1}. {memory.content}")
+
+        # Create an agent wired to the memory bank
+ agent = Agent(
+ name="测试智能体",
+ model="gemini-2.0-flash-exp",
+ instruction="你是一个测试智能体,请使用你的记忆银行来回答问题。",
+ memory_bank=memory_bank
+ )
+
+ print("✅ 带记忆银行的智能体创建成功")
+
+ return True
+
+ except ImportError as e:
+ print(f"❌ Memory Bank 模块导入失败: {e}")
+ print("💡 可能需要更新 Google ADK 版本或启用 Memory Bank 功能")
+ return False
+ except Exception as e:
+ print(f"❌ Memory Bank 测试失败: {e}")
+ return False
+
+async def test_simple_memory_simulation():
+    """A simple stand-in implementation simulating Memory Bank behavior"""
+ print("\n🔄 使用简单模拟实现...")
+
+ class SimpleMemoryBank:
+ def __init__(self, name: str, description: str):
+ self.name = name
+ self.description = description
+ self.memories = []
+
+ async def add_memory(self, content: str, metadata: dict = None):
+ memory = {
+ "content": content,
+ "metadata": metadata or {},
+ "timestamp": datetime.now().isoformat()
+ }
+ self.memories.append(memory)
+
+ async def search(self, query: str, limit: int = 5):
+            # Simple keyword matching
+ results = []
+ query_lower = query.lower()
+
+ for memory in self.memories:
+ if query_lower in memory["content"].lower():
+ results.append(memory)
+ if len(results) >= limit:
+ break
+
+ return results
+
+    # Exercise the simple implementation
+ memory_bank = SimpleMemoryBank(
+ name="铁拐李记忆银行",
+ description="铁拐李的逆向投资记忆"
+ )
+
+    # Add a few memories
+ memories = [
+ "2000年互联网泡沫破裂,纳斯达克指数从5048点跌到1114点",
+ "2008年金融危机,雷曼兄弟破产引发全球恐慌",
+ "2020年3月疫情恐慌,美股熔断4次,但随后强劲反弹",
+ "比特币从2017年的2万美元跌到2018年的3200美元"
+ ]
+
+ for memory in memories:
+ await memory_bank.add_memory(memory, {"type": "historical_event"})
+
+ print(f"✅ 已添加 {len(memories)} 条记忆")
+
+    # Search tests
+ search_queries = ["泡沫", "比特币", "金融危机"]
+
+ for query in search_queries:
+ results = await memory_bank.search(query)
+ print(f"\n🔍 搜索 '{query}' 找到 {len(results)} 条记忆:")
+ for i, result in enumerate(results):
+ print(f" {i+1}. {result['content']}")
+
+ return True
+
+async def main():
+    """Main test entry point"""
+ print("🚀 Google ADK Memory Bank 功能测试")
+
+    # Check the API key
+ api_key = os.getenv('GOOGLE_API_KEY')
+ if not api_key:
+ print("❌ 未找到 GOOGLE_API_KEY 环境变量")
+ return
+
+    print("✅ API密钥已配置")
+
+    # Try the real Memory Bank first
+ success = await test_memory_bank()
+
+    if not success:
+        # Fall back to the simulated implementation when the real one is unavailable
+ await test_simple_memory_simulation()
+
+if __name__ == "__main__":
+ asyncio.run(main())
\ No newline at end of file
diff --git a/tests/test_vertex_memory_bank.py b/tests/test_vertex_memory_bank.py
new file mode 100644
index 0000000..937fe8a
--- /dev/null
+++ b/tests/test_vertex_memory_bank.py
@@ -0,0 +1,257 @@
+#!/usr/bin/env python3
+"""
+Vertex AI Memory Bank test script
+Verifies the Jixia Academy (稷下学宫) memory-bank functionality
+"""
+
+import asyncio
+import sys
+import os
+
+# Add the project root to the import path
+sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
+
+from src.jixia.memory.vertex_memory_bank import VertexMemoryBank, initialize_baxian_memory_banks
+from src.jixia.agents.memory_enhanced_agent import MemoryEnhancedAgent, create_memory_enhanced_council
+from config.doppler_config import get_google_genai_config, validate_config
+
+
+async def test_memory_bank_basic():
+    """Test basic Memory Bank functionality"""
+ print("🧪 测试 Memory Bank 基础功能...")
+
+ try:
+        # Validate the configuration
+ if not validate_config("google_adk"):
+ print("❌ Google ADK 配置验证失败")
+ return False
+
+ config = get_google_genai_config()
+ if not config.get('project_id'):
+ print("❌ Google Cloud Project ID 未配置")
+ print("请设置环境变量: GOOGLE_CLOUD_PROJECT_ID")
+ return False
+
+        # Create the Memory Bank instance
+ memory_bank = VertexMemoryBank.from_config()
+ print(f"✅ Memory Bank 实例创建成功")
+ print(f" 项目ID: {config['project_id']}")
+ print(f" 区域: {config['location']}")
+
+        # Test creating a memory bank
+ bank_id = await memory_bank.create_memory_bank(
+ agent_name="tieguaili",
+ display_name="铁拐李测试记忆银行"
+ )
+ print(f"✅ 创建记忆银行成功: {bank_id}")
+
+        # Test adding a memory
+ memory_id = await memory_bank.add_memory(
+ agent_name="tieguaili",
+ content="测试记忆:在分析NVIDIA时,我倾向于关注潜在的市场风险和估值泡沫。",
+ memory_type="preference",
+ debate_topic="NVIDIA投资分析",
+ metadata={"test": True, "priority": "high"}
+ )
+ print(f"✅ 添加记忆成功: {memory_id}")
+
+        # Test searching memories
+ results = await memory_bank.search_memories(
+ agent_name="tieguaili",
+ query="NVIDIA 风险",
+ limit=5
+ )
+ print(f"✅ 搜索记忆成功,找到 {len(results)} 条结果")
+
+ for i, result in enumerate(results, 1):
+ print(f" {i}. {result['content'][:50]}... (相关度: {result.get('relevance_score', 'N/A')})")
+
+        # Test fetching the context
+ context = await memory_bank.get_agent_context("tieguaili", "NVIDIA投资分析")
+ print(f"✅ 获取上下文成功,长度: {len(context)} 字符")
+
+ return True
+
+ except Exception as e:
+ print(f"❌ Memory Bank 基础测试失败: {e}")
+ return False
+
+
+async def test_memory_enhanced_agent():
+    """Test the memory-enhanced agent"""
+ print("\n🧪 测试记忆增强智能体...")
+
+ try:
+        # Create the memory bank
+ memory_bank = VertexMemoryBank.from_config()
+
+        # Create a memory-enhanced agent
+ agent = MemoryEnhancedAgent("tieguaili", memory_bank)
+ print(f"✅ 创建记忆增强智能体: {agent.personality.chinese_name}")
+
+        # Test a memory-backed response
+ response = await agent.respond_with_memory(
+ message="你对NVIDIA的最新财报有什么看法?",
+ topic="NVIDIA投资分析"
+ )
+ print(f"✅ 智能体响应成功")
+ print(f" 响应长度: {len(response)} 字符")
+ print(f" 响应预览: {response[:100]}...")
+
+        # Test learning a preference
+ await agent.learn_preference(
+ preference="用户偏好保守的投资策略,关注风险控制",
+ topic="投资偏好"
+ )
+ print("✅ 学习用户偏好成功")
+
+        # Test saving a strategy insight
+ await agent.save_strategy_insight(
+ insight="在高估值环境下,应该更加关注基本面分析和风险管理",
+ topic="投资策略"
+ )
+ print("✅ 保存策略洞察成功")
+
+ return True
+
+ except Exception as e:
+ print(f"❌ 记忆增强智能体测试失败: {e}")
+ return False
+
+
+async def test_baxian_memory_council():
+    """Test the eight-immortal memory council"""
+ print("\n🧪 测试八仙记忆议会...")
+
+ try:
+        # Create the memory-enhanced council
+ council = await create_memory_enhanced_council()
+ print(f"✅ 创建八仙记忆议会成功,智能体数量: {len(council.agents)}")
+
+        # List all agents
+ for agent_name, agent in council.agents.items():
+ print(f" - {agent.personality.chinese_name} ({agent_name})")
+
+        # Run a short memory-backed debate
+ print("\n🏛️ 开始记忆增强辩论测试...")
+ result = await council.conduct_memory_debate(
+ topic="比特币投资价值分析",
+            participants=["tieguaili", "lvdongbin"],  # only two agents for a quick test
+ rounds=1
+ )
+
+ print(f"✅ 辩论完成")
+ print(f" 主题: {result['topic']}")
+ print(f" 参与者: {len(result['participants'])} 位")
+ print(f" 发言次数: {result['total_exchanges']}")
+
+        # Show the debate content
+ for exchange in result['conversation_history']:
+ print(f" {exchange['chinese_name']}: {exchange['content'][:80]}...")
+
+        # Fetch the collective memory summary
+ summary = await council.get_collective_memory_summary("比特币投资价值分析")
+ print(f"\n📚 集体记忆摘要长度: {len(summary)} 字符")
+
+ return True
+
+ except Exception as e:
+ print(f"❌ 八仙记忆议会测试失败: {e}")
+ return False
+
+
+async def test_memory_bank_initialization():
+    """Test Memory Bank initialization"""
+ print("\n🧪 测试 Memory Bank 初始化...")
+
+ try:
+ config = get_google_genai_config()
+ project_id = config.get('project_id')
+ location = config.get('location', 'us-central1')
+
+ if not project_id:
+ print("❌ 项目ID未配置,跳过初始化测试")
+ return False
+
+        # Initialize memory banks for all eight immortals
+ memory_bank = await initialize_baxian_memory_banks(project_id, location)
+ print(f"✅ 八仙记忆银行初始化成功")
+ print(f" 记忆银行数量: {len(memory_bank.memory_banks)}")
+
+ for agent_name, bank_name in memory_bank.memory_banks.items():
+ chinese_name = memory_bank.baxian_agents.get(agent_name, agent_name)
+ print(f" - {chinese_name}: {bank_name}")
+
+ return True
+
+ except Exception as e:
+ print(f"❌ Memory Bank 初始化测试失败: {e}")
+ return False
+
+
+async def main():
+    """Main test entry point"""
+ print("🏛️ 稷下学宫 Vertex AI Memory Bank 测试")
+ print("=" * 50)
+
+    # Check the configuration
+ print("🔧 检查配置...")
+ config = get_google_genai_config()
+
+ print(f"Google API Key: {'已配置' if config.get('api_key') else '未配置'}")
+ print(f"Project ID: {config.get('project_id', '未配置')}")
+ print(f"Location: {config.get('location', 'us-central1')}")
+ print(f"Memory Bank: {'启用' if config.get('memory_bank_enabled', 'TRUE') == 'TRUE' else '禁用'}")
+
+ if not config.get('project_id'):
+ print("\n❌ 测试需要 Google Cloud Project ID")
+ print("请设置环境变量: GOOGLE_CLOUD_PROJECT_ID=your-project-id")
+ return
+
+    # Run the tests
+ tests = [
+ ("Memory Bank 基础功能", test_memory_bank_basic),
+ ("记忆增强智能体", test_memory_enhanced_agent),
+ ("八仙记忆议会", test_baxian_memory_council),
+ ("Memory Bank 初始化", test_memory_bank_initialization)
+ ]
+
+ results = []
+
+ for test_name, test_func in tests:
+ print(f"\n{'='*20}")
+ print(f"测试: {test_name}")
+ print(f"{'='*20}")
+
+ try:
+ result = await test_func()
+ results.append((test_name, result))
+ except Exception as e:
+ print(f"❌ 测试 {test_name} 出现异常: {e}")
+ results.append((test_name, False))
+
+    # Show the test-result summary
+ print(f"\n{'='*50}")
+ print("🏛️ 测试结果摘要")
+ print(f"{'='*50}")
+
+ passed = 0
+ total = len(results)
+
+ for test_name, result in results:
+ status = "✅ 通过" if result else "❌ 失败"
+ print(f"{status} {test_name}")
+ if result:
+ passed += 1
+
+ print(f"\n📊 总体结果: {passed}/{total} 测试通过")
+
+ if passed == total:
+ print("🎉 所有测试通过!Vertex AI Memory Bank 集成成功!")
+ else:
+ print("⚠️ 部分测试失败,请检查配置和网络连接")
+
+
+if __name__ == "__main__":
+    # Run the tests
+ asyncio.run(main())
\ No newline at end of file
diff --git a/validate-config.js b/validate-config.js
new file mode 100644
index 0000000..e1298cc
--- /dev/null
+++ b/validate-config.js
@@ -0,0 +1,107 @@
+// Simple configuration validation script
+// This validates the wrangler.toml and Worker code without requiring API access
+
+const fs = require('fs');
+const path = require('path');
+
+console.log('🔍 Validating Hyperdrive Configuration Files');
+console.log('============================================');
+
+// Check wrangler.toml
+console.log('\n📋 Checking wrangler.toml...');
+try {
+ const wranglerContent = fs.readFileSync('wrangler.toml', 'utf8');
+ console.log('✅ wrangler.toml exists');
+
+ // Check for required fields
+ const checks = [
+ { field: 'name', regex: /name\s*=\s*["']([^"']+)["']/, required: true },
+ { field: 'main', regex: /main\s*=\s*["']([^"']+)["']/, required: true },
+ { field: 'compatibility_date', regex: /compatibility_date\s*=\s*["']([^"']+)["']/, required: true },
+ { field: 'nodejs_compat', regex: /nodejs_compat/, required: true },
+ { field: 'hyperdrive binding', regex: /binding\s*=\s*["']HYPERDRIVE["']/, required: true },
+ { field: 'hyperdrive id', regex: /id\s*=\s*["']ef43924d89064cddabfaccf06aadfab6["']/, required: true }
+ ];
+
+ checks.forEach(check => {
+ if (check.regex.test(wranglerContent)) {
+ console.log(` ✅ ${check.field} configured`);
+ } else {
+ console.log(` ❌ ${check.field} missing or incorrect`);
+ }
+ });
+
+} catch (error) {
+ console.log('❌ wrangler.toml not found or unreadable');
+}
+
+// Check Worker code
+console.log('\n📝 Checking Worker code...');
+try {
+ const workerContent = fs.readFileSync('src/index.ts', 'utf8');
+ console.log('✅ src/index.ts exists');
+
+ const codeChecks = [
+ { name: 'Hyperdrive binding usage', regex: /env\.HYPERDRIVE/ },
+ { name: 'Test connection endpoint', regex: /\/test-connection/ },
+ { name: 'Test query endpoint', regex: /\/test-query/ },
+    // A bare /pg/ matches any occurrence of the letters "pg"; look for an actual import/require of the driver
+    { name: 'PostgreSQL driver import', regex: /from\s+['"]pg['"]|require\(['"]pg['"]\)/ },
+ { name: 'Error handling', regex: /try\s*{[\s\S]*catch/ }
+ ];
+
+ codeChecks.forEach(check => {
+ if (check.regex.test(workerContent)) {
+ console.log(` ✅ ${check.name} implemented`);
+ } else {
+ console.log(` ⚠️ ${check.name} not found`);
+ }
+ });
+
+} catch (error) {
+ console.log('❌ src/index.ts not found or unreadable');
+}
+
+// Check package.json
+console.log('\n📦 Checking package.json...');
+try {
+ const packageContent = fs.readFileSync('package.json', 'utf8');
+ const packageJson = JSON.parse(packageContent);
+ console.log('✅ package.json exists and is valid JSON');
+
+ const deps = {
+ 'pg': packageJson.dependencies?.pg,
+ '@cloudflare/workers-types': packageJson.devDependencies?.['@cloudflare/workers-types'],
+ '@types/pg': packageJson.devDependencies?.['@types/pg'],
+ 'typescript': packageJson.devDependencies?.typescript,
+ 'wrangler': packageJson.devDependencies?.wrangler
+ };
+
+ Object.entries(deps).forEach(([dep, version]) => {
+ if (version) {
+ console.log(` ✅ ${dep}: ${version}`);
+ } else {
+ console.log(` ❌ ${dep}: not found`);
+ }
+ });
+
+} catch (error) {
+ console.log('❌ package.json not found or invalid JSON');
+}
+
+console.log('\n📊 Configuration Summary:');
+console.log(' - Project: hyperdrive-neondb-test');
+console.log(' - Hyperdrive ID: ef43924d89064cddabfaccf06aadfab6');
+console.log(' - Database: NeonDB (PostgreSQL)');
+console.log(' - Binding: HYPERDRIVE');
+console.log(' - Compatibility: nodejs_compat enabled');
+
+console.log('\n🚀 Next Steps:');
+console.log(' 1. Ensure you have proper Cloudflare API permissions');
+console.log(' 2. Verify the Hyperdrive configuration exists in your Cloudflare dashboard');
+console.log(' 3. Deploy with: wrangler deploy');
+console.log(' 4. Test endpoints after deployment');
+
+console.log('\n💡 Troubleshooting:');
+console.log(' - If API token has insufficient permissions, use: wrangler login');
+console.log(' - Check Hyperdrive exists: https://dash.cloudflare.com/[account-id]/workers/hyperdrive');
+console.log(' - Verify NeonDB connection string is correct in Hyperdrive config');
\ No newline at end of file
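The validator deliberately checks fields with plain regexes against the raw file text instead of pulling in a TOML parser, which keeps the script dependency-free. The same technique in isolation, as a minimal sketch against an inline sample (not the actual wrangler.toml):

```javascript
// Dependency-free field check: each rule is a regex tested against raw file text.
const sample = `
name = "hyperdrive-neondb-test"
compatibility_flags = ["nodejs_compat"]
`;

const rules = [
  { field: 'name', regex: /name\s*=\s*["']([^"']+)["']/ },
  { field: 'nodejs_compat', regex: /nodejs_compat/ },
  { field: 'main', regex: /main\s*=\s*["']([^"']+)["']/ },
];

// Collect pass/fail results instead of printing, so they are easy to assert on.
const report = rules.map(r => ({ field: r.field, ok: r.regex.test(sample) }));
console.log(report);
```

The trade-off is the one noted for the `pg` check: regexes can match inside comments or unrelated strings, so a real TOML parser is the safer choice once dependencies are acceptable.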
diff --git a/wrangler.toml b/wrangler.toml
new file mode 100644
index 0000000..eccd252
--- /dev/null
+++ b/wrangler.toml
@@ -0,0 +1,16 @@
+name = "hyperdrive-neondb-test"
+main = "src/index.ts"
+compatibility_date = "2025-02-04"
+
+# Add nodejs_compat compatibility flag to support common database drivers
+compatibility_flags = ["nodejs_compat"]
+
+[observability]
+enabled = true
+
+# Hyperdrive configuration for NeonDB
+[[hyperdrive]]
+binding = "HYPERDRIVE"
+id = "ef43924d89064cddabfaccf06aadfab6"
+# For local development, use a local PostgreSQL connection
+localConnectionString = "postgresql://postgres:password@localhost:5432/testdb"
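As an alternative to committing `localConnectionString` (which puts a credential in the repo, even a throwaway local one), Wrangler can also read the local connection string from an environment variable named after the binding. A sketch, assuming the `WRANGLER_HYPERDRIVE_LOCAL_CONNECTION_STRING_<BINDING>` convention from Cloudflare's Hyperdrive local-development docs — verify against the current docs before relying on it:

```shell
# Hypothetical local-dev setup: supply the local connection string via an
# environment variable instead of wrangler.toml. The suffix matches the
# binding name ("HYPERDRIVE" in this project).
export WRANGLER_HYPERDRIVE_LOCAL_CONNECTION_STRING_HYPERDRIVE="postgresql://postgres:password@localhost:5432/testdb"
wrangler dev
```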