feat(ui): 添加AI协作页签

新增AI协作功能模块,并在主界面中添加了对应的页签。
更新了OpenBB集成文档的路径,将其从单独的Markdown文件迁移到目录结构中。
为项目添加了新的测试依赖,包括pytest相关工具、locust和memory-profiler等。
ben 2025-08-23 14:06:22 +00:00
parent 21128299a4
commit 09de1f782a
61 changed files with 5324 additions and 11 deletions

@@ -0,0 +1,198 @@
# 🤖 四AI团队协作系统邀请函
## 📩 致Qwen、Claude、Gemini、RovoDev
### 🎯 项目简介
我们正在进行一个激动人心的OpenBB金融数据集成项目,需要四位AI专家协作完成。这个项目将传统的稷下学宫文化与现代AI技术相结合,创造出独特的多AI协作体验。
### 👥 团队角色分工
#### 🏗️ **Qwen - 架构设计师**
- **职责**: 系统架构设计、技术选型、接口规范
- **工作说明书**: [`QWEN_ARCHITECTURE_DESIGN.md`](docs/AI_AGENT_TASKS/QWEN_ARCHITECTURE_DESIGN.md)
- **专属频道**: `architecture_design`
#### 💻 **Claude - 核心开发工程师**
- **职责**: 代码实现、API集成、界面优化
- **工作说明书**: [`CLAUDE_CODE_IMPLEMENTATION.md`](docs/AI_AGENT_TASKS/CLAUDE_CODE_IMPLEMENTATION.md)
- **专属频道**: `code_implementation`
#### 🧪 **Gemini - 测试验证专家**
- **职责**: 功能测试、性能测试、质量保证
- **工作说明书**: [`GEMINI_TEST_VALIDATION.md`](docs/AI_AGENT_TASKS/GEMINI_TEST_VALIDATION.md)
- **专属频道**: `testing_validation`
#### 📚 **RovoDev - 项目整合专家**
- **职责**: 项目管理、文档整合、协调统筹
- **工作说明书**: [`ROVODEV_PROJECT_INTEGRATION.md`](docs/AI_AGENT_TASKS/ROVODEV_PROJECT_INTEGRATION.md)
- **专属频道**: `project_integration`
---
## 🚀 如何加入协作系统
### 方式1: Web可视化界面 (推荐)
```bash
# 1. 启动Web界面
cd /home/ben/github/liurenchaxin
.venv/bin/python3 -m streamlit run app/streamlit_app.py --server.port 8502
# 2. 在浏览器中访问: http://localhost:8502
# 3. 选择 "🤖 AI协作" 标签页
# 4. 开始使用协作功能!
```
### 方式2: 命令行演示
```bash
# 查看完整协作流程演示
.venv/bin/python3 ai_collaboration_demo.py demo
```
### 方式3: 快速启动脚本
```bash
# 使用便捷脚本
./start_ai_collaboration.sh
```
---
## 📢 可用协作频道
### 🏛️ **主要频道**
- **`main_collaboration`** - 四AI主要协作讨论
- **`cross_review`** - 跨角色工作评审
- **`emergency_coordination`** - 紧急问题处理
### 🔧 **专业频道**
- **`architecture_design`** - 架构设计讨论 (Qwen主导)
- **`code_implementation`** - 代码实现讨论 (Claude主导)
- **`testing_validation`** - 测试验证讨论 (Gemini主导)
- **`project_integration`** - 项目整合讨论 (RovoDev主导)
---
## 💬 如何在频道中协作
### 基本操作
1. **发送消息**: 选择频道、输入内容、选择接收者
2. **工作交接**: 正式的任务交接流程
3. **请求评审**: 邀请其他AI评审你的工作
4. **问题升级**: 遇到紧急问题时快速上报
### 消息类型
- 📋 **提案** - 提出建议或方案
- ❓ **询问** - 提出问题
- ✅ **回答** - 回答问题
- 📝 **评审** - 评审反馈
- 🎯 **决策** - 做出决策
- 📢 **更新** - 状态更新
- 🚨 **警报** - 警报通知
- 🔄 **交接** - 工作交接
---
## 🎭 稷下学宫文化特色
我们的协作遵循稷下学宫的传统:
- **🏛️ 开放包容** - 各种观点都能得到尊重
- **🧠 理性辩论** - 基于数据和逻辑的讨论
- **🌟 百家争鸣** - 鼓励不同视角的碰撞
- **🤝 求同存异** - 在分歧中寻找共识
---
## 📋 Web界面功能
### 📊 **AI仪表板**
- 查看个人工作状态
- 待处理任务列表
- 协作统计数据
### 📢 **频道管理**
- 实时消息展示
- 频道成员管理
- 消息历史记录
### 🔄 **工作流管理**
- 任务交接流程
- 工作阶段推进
- 评审协作管理
### 📈 **协作分析**
- 消息统计图表
- 活跃度分析
- 协作效率评估
---
## 🔧 技术规范
### 消息格式规范
```python
# 发送消息示例
await collab.send_message(
sender=AIRole.QWEN, # 你的角色
content="我已完成架构设计请大家review",
message_type=MessageType.PROPOSAL,
channel_id="main_collaboration",
receiver=None, # None表示广播或指定特定AI
priority=3, # 1-5优先级
tags=["architecture", "review_request"]
)
```
### 工作交接示例
```python
# 工作交接示例
await collab.handoff_work(
from_ai=AIRole.QWEN,
to_ai=AIRole.CLAUDE,
task_description="基于架构设计实现OpenBB核心引擎",
deliverables=["架构文档", "接口规范"],
notes="请特别注意八仙数据路由的实现"
)
```
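除了发送消息与工作交接,也可以用同样的接口发起评审请求(下面的示例与 `QUICK_START_FOR_AI.md` 中的用法一致,参数取值仅作示意):

```python
# 请求评审示例
await collab.request_review(
    sender=AIRole.CLAUDE,              # 你的角色
    content="核心引擎实现完成,请求架构与测试评审",
    reviewers=[AIRole.QWEN, AIRole.GEMINI]
)
```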
---
## 🎯 项目目标
我们的目标是创造一个:
- **功能完整** - OpenBB数据与八仙论道完美融合
- **性能稳定** - 各种场景下系统表现良好
- **文化准确** - 传统文化表达准确尊重
- **用户友好** - 最终用户体验优秀
- **易于维护** - 代码和文档便于后续维护
---
## 📞 快速开始
1. **阅读你的工作说明书** - 了解具体职责和任务
2. **加入Web协作界面** - 开始实时协作
3. **介绍自己** - 在主频道打个招呼
4. **开始工作** - 按照工作说明书开始你的任务
---
## 💡 协作提示
### ✅ 良好实践
- 📝 清晰描述你的工作进展
- 🔄 及时响应其他AI的请求
- 📋 使用合适的消息类型和标签
- 🎯 保持专业和友好的语调
### ⚠️ 注意事项
- 🏛️ 尊重稷下学宫的文化传统
- 🧠 保持理性和建设性的讨论
- 🤝 鼓励协作而非竞争
- ⏰ 及时更新你的工作状态
---
**欢迎加入稷下学宫AI协作团队让我们一起创造传统文化与现代技术结合的典范** 🏛️✨
---
*"稷下学宫,百家争鸣;开源精神,技术报国!"*

AI_INVITATION_MESSAGE.md (new file)
@@ -0,0 +1,29 @@
🤖 **四AI协作系统邀请** 🏛️
Hi! 我们正在进行一个很酷的项目 - 将OpenBB金融数据与稷下学宫八仙论道系统集成。需要你加入我们的AI协作团队
🎯 **你的角色**: [根据AI选择对应角色]
- 🏗️ Qwen (架构设计师)
- 💻 Claude (开发工程师)
- 🧪 Gemini (测试专家)
- 📚 RovoDev (项目管理)
📋 **工作说明书**: `docs/AI_AGENT_TASKS/[你的角色]_*.md`
🚀 **快速加入方式**:
```bash
cd /home/ben/github/liurenchaxin
.venv/bin/python3 -m streamlit run app/streamlit_app.py --server.port 8502
```
然后选择 "🤖 AI协作" 标签页
💬 **主要协作频道**:
- `main_collaboration` (主讨论)
- `[你的专属频道]` (专业讨论)
- `cross_review` (交叉评审)
🎭 **协作文化**: 稷下学宫传统 - 开放包容、理性辩论、百家争鸣
📖 **详细指南**: [`QUICK_START_FOR_AI.md`](QUICK_START_FOR_AI.md)
准备好加入这场传统文化与AI技术的协作之旅了吗 🏛️✨

QUICK_START_FOR_AI.md (new file)
@@ -0,0 +1,85 @@
# 🚀 AI协作系统快速入门
## 👋 给其他AI的简单说明
### 🎯 我们在做什么?
正在进行OpenBB金融数据与稷下学宫八仙论道系统的集成项目。需要4个AI协作
- 🏗️ **Qwen** (架构设计师)
- 💻 **Claude** (开发工程师)
- 🧪 **Gemini** (测试专家)
- 📚 **RovoDev** (项目管理)
### 💻 如何快速加入?
#### 方法1: 启动Web界面 (最简单)
```bash
cd /home/ben/github/liurenchaxin
.venv/bin/python3 -m streamlit run app/streamlit_app.py --server.port 8502
```
然后在浏览器中选择 "🤖 AI协作" 标签页
#### 方法2: 看演示了解系统
```bash
.venv/bin/python3 ai_collaboration_demo.py demo
```
#### 方法3: 使用启动脚本
```bash
./start_ai_collaboration.sh
```
### 📝 你需要做什么?
1. **查看你的工作说明书**: `docs/AI_AGENT_TASKS/[你的角色]_*.md`
2. **在主频道介绍自己**: 说明你的角色和当前状态
3. **开始协作**: 根据工作说明书开始你的任务
4. **与其他AI交流**: 使用频道系统进行实时协作
### 📢 主要协作频道
- `main_collaboration` - 主要讨论
- `architecture_design` - 架构设计 (Qwen主导)
- `code_implementation` - 代码实现 (Claude主导)
- `testing_validation` - 测试验证 (Gemini主导)
- `project_integration` - 项目整合 (RovoDev主导)
- `cross_review` - 交叉评审
- `emergency_coordination` - 紧急协调
### 🎭 协作文化
我们遵循稷下学宫传统:开放包容、理性辩论、百家争鸣、求同存异
### 💡 快速操作
```python
# 发送消息
await collab.send_message(
sender=AIRole.YOUR_ROLE,
content="你的消息内容",
message_type=MessageType.PROPOSAL,
channel_id="main_collaboration"
)
# 工作交接
await collab.handoff_work(
from_ai=AIRole.FROM_AI,
to_ai=AIRole.TO_AI,
task_description="任务描述",
deliverables=["交付物列表"]
)
# 请求评审
await collab.request_review(
sender=AIRole.YOUR_ROLE,
content="评审请求内容",
reviewers=[AIRole.REVIEWER1, AIRole.REVIEWER2]
)
```
### 🔗 详细信息
完整说明请查看: [`AI_COLLABORATION_INVITATION.md`](AI_COLLABORATION_INVITATION.md)
---
**准备好了吗让我们开始这场传统文化与AI技术的协作之旅** 🏛️🤖

@@ -13,7 +13,7 @@
 - **🌍 天下体系分析**: 基于儒门天下观的资本生态"天命树"分析模型
 - **🔒 安全配置管理**: 使用Doppler进行统一的密钥和配置管理
 - **📊 智能数据源**: 基于17个RapidAPI订阅的永动机数据引擎
-- **📈 市场数据 (可选)**: 集成 OpenBB v4统一路由多数据提供商详见 docs/openbb_integration.md
+- **📈 市场数据 (可选)**: 集成 OpenBB v4统一路由多数据提供商详见 docs/openbb_integration/README.md
 - **🎨 现代化界面**: 基于Streamlit的响应式Web界面
 ## 🏗️ 项目结构

ai_collaboration_demo.py (new file)
@@ -0,0 +1,446 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
四AI团队协作启动脚本
快速启动和演示四AI协作系统
"""
import asyncio
import json
import logging
from datetime import datetime
from pathlib import Path
from src.jixia.coordination.ai_team_collaboration import (
AITeamCollaboration, AIRole, MessageType, CollaborationType, WorkPhase
)
# 设置日志
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
async def demo_openbb_integration_workflow():
"""演示OpenBB集成的完整工作流"""
print("🚀 启动四AI团队协作系统演示...")
print("=" * 60)
# 初始化协作系统
collab = AITeamCollaboration()
# ========== 阶段1: 项目启动和架构设计 ==========
print("\n📋 阶段1: 项目启动和架构设计")
# RovoDev发起项目
await collab.broadcast_message(
sender=AIRole.ROVODEV,
content="""🎯 **OpenBB集成项目正式启动**
项目目标将OpenBB金融数据平台与稷下学宫八仙论道系统深度融合
各AI职责分工
- Qwen: 架构设计师负责系统架构和技术选型
- Claude: 核心开发工程师负责代码实现和API集成
- Gemini: 测试验证专家负责功能测试和质量保证
- RovoDev: 项目整合专家负责协调管理和文档整合
请大家查看各自的工作说明书开始工作""",
message_type=MessageType.UPDATE,
channel_id="main_collaboration",
priority=4,
tags=["project_start", "openbb_integration"]
)
# Qwen开始架构设计
await collab.send_message(
sender=AIRole.QWEN,
content="""🏗️ **架构设计开始**
我已经开始OpenBB集成的架构设计工作主要包括
1. **数据抽象层设计** - 统一OpenBB与八仙系统的数据接口
2. **八仙数据路由** - 为每个八仙设计专属的数据获取策略
3. **智能降级机制** - OpenBB不可用时的备选方案
4. **性能优化架构** - 缓存和负载均衡策略
预计完成时间2
需要其他AI配合Claude提供实现可行性反馈""",
message_type=MessageType.PROPOSAL,
channel_id="architecture_design",
priority=3,
tags=["architecture", "design_start"]
)
# Claude响应架构设计
await collab.send_message(
sender=AIRole.CLAUDE,
content="""💻 **实现可行性评估**
Qwen的架构设计方向很好从实现角度我有几个建议
1. **延迟导入策略** - 确保OpenBB未安装时系统正常运行
2. **异步数据处理** - 避免阻塞八仙论道的实时性
3. **缓存层设计** - 减少重复的API调用
4. **错误处理机制** - 优雅处理网络和数据异常
我可以在架构确定后立即开始实现工作""",
message_type=MessageType.ANSWER,
channel_id="architecture_design",
receiver=AIRole.QWEN,
priority=3,
tags=["implementation", "feasibility"]
)
# Gemini提出测试策略
await collab.send_message(
sender=AIRole.GEMINI,
content="""🧪 **测试策略制定**
基于当前的架构讨论我已经开始制定测试策略
1. **单元测试** - 覆盖八仙数据路由和OpenBB集成
2. **集成测试** - 验证数据流和系统交互
3. **性能测试** - 确保响应时间和并发处理能力
4. **文化准确性测试** - 验证八仙特征的准确表达
需要架构文档确定后细化具体测试用例""",
message_type=MessageType.PROPOSAL,
channel_id="testing_validation",
priority=2,
tags=["testing", "strategy"]
)
# ========== 工作交接演示 ==========
print("\n🤝 演示工作交接...")
# 推进到设计完成阶段
await collab.advance_phase(WorkPhase.IMPLEMENTATION)
# Qwen向Claude交接
await collab.handoff_work(
from_ai=AIRole.QWEN,
to_ai=AIRole.CLAUDE,
task_description="基于架构设计实现OpenBB核心引擎",
deliverables=[
"src/jixia/engines/enhanced_openbb_engine.py",
"src/jixia/adapters/immortal_data_processor.py",
"app/tabs/enhanced_openbb_tab.py"
],
notes="""重点关注:
1. 八仙数据偏好的准确实现
2. 智能降级机制的稳定性
3. 文化特色功能的完整性"""
)
# Claude接受任务并开始实现
await collab.send_message(
sender=AIRole.CLAUDE,
content="""✅ **任务接收确认**
已接收Qwen的架构设计交接开始核心引擎实现
当前进度
- [x] 架构文档审查完成
- [ ] 增强版OpenBB引擎开发中...
- [ ] 八仙数据适配器实现
- [ ] Streamlit界面优化
预计完成时间3
如有技术问题会在代码实现频道讨论""",
message_type=MessageType.UPDATE,
channel_id="code_implementation",
priority=3,
tags=["implementation", "progress"]
)
# ========== 评审请求演示 ==========
print("\n📋 演示评审请求...")
await collab.request_review(
sender=AIRole.CLAUDE,
content="""📋 **代码评审请求**
已完成增强版OpenBB引擎的核心实现请求评审
主要特性
1. 八仙专属数据路由器
2. 智能降级机制
3. 异步数据处理
4. 完整的错误处理
请重点关注
- 架构一致性 (Qwen)
- 测试覆盖度 (Gemini)
- 文档完整性 (RovoDev)""",
reviewers=[AIRole.QWEN, AIRole.GEMINI, AIRole.ROVODEV],
attachments=["src/jixia/engines/enhanced_openbb_engine.py"]
)
# 各AI提供评审反馈
await collab.send_message(
sender=AIRole.QWEN,
content="""🏗️ **架构评审反馈**
Claude的实现与架构设计高度一致几个优化建议
优点
- 八仙数据路由实现准确
- 降级机制设计合理
- 性能考虑周全
🔧 建议优化
- 可以增加更多的数据源provider
- 缓存策略可以更加智能化
- 考虑添加实时数据流支持
总体评分9/10建议通过""",
message_type=MessageType.REVIEW,
channel_id="cross_review",
receiver=AIRole.CLAUDE,
priority=3,
tags=["review", "architecture", "approved"]
)
await collab.send_message(
sender=AIRole.GEMINI,
content="""🧪 **测试评审反馈**
从测试角度评估Claude的实现
测试友好性
- 代码结构清晰易于测试
- 依赖注入设计良好
- 异常处理完善
📋 测试建议
- 需要补充Mock数据用于单元测试
- 建议增加性能基准测试
- 八仙特征测试用例需要完善
开始准备对应的测试套件""",
message_type=MessageType.REVIEW,
channel_id="cross_review",
receiver=AIRole.CLAUDE,
priority=3,
tags=["review", "testing", "suggestions"]
)
# ========== 紧急问题演示 ==========
print("\n🚨 演示紧急问题处理...")
await collab.escalate_issue(
reporter=AIRole.CLAUDE,
issue_description="""OpenBB v4.3.0版本兼容性问题:
发现新版本OpenBB的API接口有重大变更影响数据获取功能
当前解决方案
1. 临时锁定到v4.1.0版本
2. 准备适配新版本的兼容层
需要团队讨论优先级和解决方案""",
severity="high"
)
# RovoDev协调解决
await collab.send_message(
sender=AIRole.ROVODEV,
content="""🎯 **紧急问题协调**
已接收Claude的问题报告协调解决方案
📋 行动计划
1. **短期方案** (Claude负责): 锁定OpenBB v4.1.0版本确保现有功能稳定
2. **中期方案** (Qwen设计): 设计兼容层架构支持多版本OpenBB
3. **长期方案** (团队): 建立版本兼容性测试机制
**时间安排**
- 今日内完成版本锁定
- 3天内完成兼容层设计
- 1周内完成新版本适配
请各AI确认该计划""",
message_type=MessageType.DECISION,
channel_id="emergency_coordination",
priority=4,
tags=["emergency", "coordination", "action_plan"]
)
# ========== 项目整合演示 ==========
print("\n📚 演示项目整合...")
# 推进到整合阶段
await collab.advance_phase(WorkPhase.INTEGRATION)
await collab.send_message(
sender=AIRole.ROVODEV,
content="""📚 **项目整合开始**
开始整合所有AI的工作成果
🏗 **Qwen交付物**:
- 系统架构设计文档
- 数据抽象层接口规范
- 性能优化策略
💻 **Claude交付物**:
- 增强版OpenBB引擎
- 八仙数据适配器
- Streamlit界面优化
🧪 **Gemini交付物**:
- 完整测试套件
- 性能基准测试
- 质量保证报告
📋 **整合任务**:
- [ ] 统一文档格式
- [ ] 集成测试验证
- [ ] 用户指南编写
- [ ] 最终质量检查
预计整合完成时间2""",
message_type=MessageType.UPDATE,
channel_id="project_integration",
priority=4,
tags=["integration", "deliverables", "timeline"]
)
# ========== 生成工作报告 ==========
print("\n📊 生成协作统计...")
# 获取各AI的工作仪表板
for ai_role in AIRole:
dashboard = collab.get_ai_dashboard(ai_role)
print(f"\n🤖 {ai_role.value} 工作统计:")
print(f" 状态: {dashboard['status']['status']}")
print(f" 当前任务: {dashboard['status']['current_task']}")
print(f" 待处理任务: {len(dashboard['pending_tasks'])}")
print(f" 协作得分: {dashboard['collaboration_stats']['collaboration_score']}")
print(f" 活跃频道: {len(dashboard['active_channels'])}")
# 获取频道摘要
print(f"\n📢 频道活跃度统计:")
for channel_id, channel in collab.channels.items():
summary = collab.get_channel_summary(channel_id)
print(f" {summary['channel_name']}: {summary['total_messages']}条消息")
print("\n🎉 四AI团队协作演示完成")
print("=" * 60)
print("系统功能演示:")
print("✅ 多频道协作通信")
print("✅ 工作流程管理")
print("✅ 任务交接机制")
print("✅ 评审协作流程")
print("✅ 紧急问题处理")
print("✅ 项目整合管理")
print("✅ 实时状态监控")
print("\n🚀 可以启动Web界面进行可视化管理")
async def start_collaboration_system():
"""启动协作系统的交互式版本"""
import sys
collab = AITeamCollaboration()
print("🤖 四AI团队协作系统已启动")
print("可用命令:")
print(" send - 发送消息")
print(" status - 查看状态")
print(" channels - 查看频道")
print(" dashboard <AI> - 查看AI仪表板")
print(" handoff - 工作交接")
print(" quit - 退出")
# 检查是否在交互式环境中
if not sys.stdin.isatty():
print("\n⚠️ 检测到非交互式环境,运行快速演示模式")
print("\n📊 当前系统状态:")
print(f"当前阶段: {collab.current_phase.value}")
for ai_role, status in collab.ai_status.items():
print(f"{ai_role.value}: {status['status']} - {status['current_task']}")
print("\n📢 频道列表:")
for channel_id, channel in collab.channels.items():
print(f"{channel.name} ({channel.channel_type.value}): {len(channel.message_history)}条消息")
print("\n💡 要体验完整交互功能,请在真正的终端中运行:")
print(" .venv/bin/python3 ai_collaboration_demo.py interactive")
return
while True:
try:
command = input("\n> ").strip().lower()
if command == "quit":
break
elif command == "status":
print(f"当前阶段: {collab.current_phase.value}")
for ai_role, status in collab.ai_status.items():
print(f"{ai_role.value}: {status['status']} - {status['current_task']}")
elif command == "channels":
for channel_id, channel in collab.channels.items():
print(f"{channel.name} ({channel.channel_type.value}): {len(channel.message_history)}条消息")
elif command.startswith("dashboard"):
parts = command.split()
if len(parts) > 1:
try:
ai_role = AIRole(parts[1].title())
dashboard = collab.get_ai_dashboard(ai_role)
print(json.dumps(dashboard, indent=2, ensure_ascii=False, default=str))
except ValueError:
print("无效的AI角色可选Qwen, Claude, Gemini, Rovodev")
else:
print("使用方法: dashboard <AI名称>")
elif command == "send":
# 简化的消息发送
try:
sender = input("发送者 (Qwen/Claude/Gemini/Rovodev): ")
content = input("消息内容: ")
channel = input("频道 (main_collaboration/architecture_design/etc): ")
await collab.send_message(
sender=AIRole(sender),
content=content,
message_type=MessageType.PROPOSAL,
channel_id=channel or "main_collaboration"
)
print("消息发送成功!")
except EOFError:
print("\n输入被中断")
break
except Exception as e:
print(f"发送失败: {e}")
else:
print("未知命令")
except EOFError:
print("\n检测到EOF退出交互模式")
break
except KeyboardInterrupt:
print("\n检测到中断信号,退出交互模式")
break
except Exception as e:
print(f"错误: {e}")
print("👋 协作系统已退出")
if __name__ == "__main__":
import sys
if len(sys.argv) > 1 and sys.argv[1] == "demo":
# 运行演示
asyncio.run(demo_openbb_integration_workflow())
elif len(sys.argv) > 1 and sys.argv[1] == "interactive":
# 交互式模式
asyncio.run(start_collaboration_system())
else:
print("四AI团队协作系统")
print("使用方法:")
print(" python ai_collaboration_demo.py demo - 运行完整演示")
print(" python ai_collaboration_demo.py interactive - 交互式模式")
print(" streamlit run app/tabs/ai_collaboration_tab.py - 启动Web界面")

@@ -113,8 +113,8 @@ def main():
     # 主内容区域
     st.markdown("---")
-    # 选项卡(新增 OpenBB 数据页签
-    tab1, tab2, tab3, tab4 = st.tabs(["🏛️ 稷下学宫", "🌍 天下体系", "📊 数据分析", "📈 OpenBB 数据"])
+    # 选项卡(新增 OpenBB 数据页签和AI协作页签
+    tab1, tab2, tab3, tab4, tab5 = st.tabs(["🏛️ 稷下学宫", "🌍 天下体系", "📊 数据分析", "📈 OpenBB 数据", "🤖 AI协作"])
     with tab1:
         st.markdown("### 🏛️ 稷下学宫 - 八仙论道")
@@ -178,13 +178,13 @@ def main():
         except Exception as e:
             st.warning(f"⚠️ 无法加载统计数据: {str(e)}")
-    with tab4:
-        st.markdown("### 📈 OpenBB 数据")
+    with tab5:
+        st.markdown("### 🤖 AI协作")
         try:
-            from app.tabs.openbb_tab import render_openbb_tab
-            render_openbb_tab()
+            from app.tabs.ai_collaboration_tab import main as ai_collaboration_main
+            ai_collaboration_main()
         except Exception as e:
-            st.error(f"OpenBB 模块加载失败: {str(e)}")
+            st.error(f"AI协作模块加载失败: {str(e)}")
 def start_debate_session(topic: str):
     """启动辩论会话"""

@@ -0,0 +1,509 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
四AI团队协作Web界面
基于Streamlit的实时协作监控和管理界面
"""
import streamlit as st
import asyncio
import json
import pandas as pd
import plotly.express as px
import plotly.graph_objects as go
from datetime import datetime, timedelta
import sys
from pathlib import Path
# 添加项目路径到sys.path
project_root = Path(__file__).parent.parent.parent.parent
sys.path.insert(0, str(project_root))
from src.jixia.coordination.ai_team_collaboration import (
AITeamCollaboration, AIRole, MessageType, CollaborationType, WorkPhase
)
# 页面配置
st.set_page_config(
page_title="四AI团队协作中心",
page_icon="🤖",
layout="wide",
initial_sidebar_state="expanded"
)
# 初始化协作系统
@st.cache_resource
def init_collaboration_system():
"""初始化协作系统"""
return AITeamCollaboration()
def main():
"""主界面"""
st.title("🤖 四AI团队协作中心")
st.markdown("### OpenBB集成项目实时协作监控")
# 初始化系统
collab = init_collaboration_system()
# 侧边栏
with st.sidebar:
st.header("🎯 项目状态")
# 当前阶段
current_phase = st.selectbox(
"当前工作阶段",
[phase.value for phase in WorkPhase],
index=list(WorkPhase).index(collab.current_phase)
)
if st.button("更新阶段"):
new_phase = WorkPhase(current_phase)
asyncio.run(collab.advance_phase(new_phase))
st.success(f"阶段已更新为: {current_phase}")
st.rerun()
st.divider()
# AI状态概览
st.subheader("🤖 AI状态概览")
for ai_role, status in collab.ai_status.items():
status_color = {
"ready": "🟢",
"active": "🔵",
"waiting": "🟡",
"completed_handoff": "",
"received_handoff": "📥"
}.get(status["status"], "")
st.write(f"{status_color} **{ai_role.value}**")
st.write(f" 📋 {status['current_task']}")
st.write(f" 🎯 {status['role']}")
# 主要内容区域
tab1, tab2, tab3, tab4, tab5 = st.tabs([
"📢 主协作频道", "📊 AI仪表板", "🔄 工作流管理", "📈 协作分析", "⚙️ 系统管理"
])
with tab1:
render_main_collaboration(collab)
with tab2:
render_ai_dashboard(collab)
with tab3:
render_workflow_management(collab)
with tab4:
render_collaboration_analytics(collab)
with tab5:
render_system_management(collab)
def render_main_collaboration(collab):
"""渲染主协作频道"""
st.header("📢 主协作频道")
# 频道选择
channel_options = {
channel.name: channel_id
for channel_id, channel in collab.channels.items()
}
selected_channel_name = st.selectbox(
"选择频道",
list(channel_options.keys()),
index=0
)
selected_channel_id = channel_options[selected_channel_name]
channel = collab.channels[selected_channel_id]
col1, col2 = st.columns([2, 1])
with col1:
# 消息历史
st.subheader(f"💬 {channel.name}")
if channel.message_history:
for msg in channel.message_history[-10:]: # 显示最近10条消息
sender_emoji = {
AIRole.QWEN: "🏗️",
AIRole.CLAUDE: "💻",
AIRole.GEMINI: "🧪",
AIRole.ROVODEV: "📚"
}.get(msg.sender, "🤖")
with st.chat_message(msg.sender.value, avatar=sender_emoji):
st.write(f"**{msg.message_type.value}** - {msg.timestamp.strftime('%H:%M')}")
st.write(msg.content)
if msg.attachments:
st.write("📎 附件:")
for attachment in msg.attachments:
st.write(f"{attachment}")
if msg.tags:
tag_html = " ".join([f"<span style='background-color: #e1f5fe; padding: 2px 6px; border-radius: 4px; font-size: 0.8em;'>{tag}</span>" for tag in msg.tags])
st.markdown(tag_html, unsafe_allow_html=True)
else:
st.info("暂无消息")
with col2:
# 频道信息
st.subheader(" 频道信息")
st.write(f"**类型**: {channel.channel_type.value}")
st.write(f"**参与者**: {len(channel.participants)}")
st.write(f"**主持人**: {channel.moderator.value}")
st.write(f"**消息数**: {len(channel.message_history)}")
st.write(f"**最后活动**: {channel.last_activity.strftime('%Y-%m-%d %H:%M')}")
# 参与者列表
st.write("**参与者列表**:")
for participant in channel.participants:
role_emoji = {
AIRole.QWEN: "🏗️",
AIRole.CLAUDE: "💻",
AIRole.GEMINI: "🧪",
AIRole.ROVODEV: "📚"
}.get(participant, "🤖")
st.write(f"{role_emoji} {participant.value}")
# 发送消息区域
st.divider()
st.subheader("📝 发送消息")
col1, col2, col3 = st.columns([2, 1, 1])
with col1:
message_content = st.text_area("消息内容", height=100)
with col2:
sender = st.selectbox(
"发送者",
[role.value for role in AIRole]
)
message_type = st.selectbox(
"消息类型",
[msg_type.value for msg_type in MessageType]
)
with col3:
receiver = st.selectbox(
"接收者",
["广播"] + [role.value for role in AIRole]
)
priority = st.slider("优先级", 1, 5, 1)
if st.button("发送消息", type="primary"):
if message_content:
try:
receiver_role = None if receiver == "广播" else AIRole(receiver)
asyncio.run(collab.send_message(
sender=AIRole(sender),
content=message_content,
message_type=MessageType(message_type),
channel_id=selected_channel_id,
receiver=receiver_role,
priority=priority
))
st.success("消息发送成功!")
st.rerun()
except Exception as e:
st.error(f"发送失败: {str(e)}")
else:
st.warning("请输入消息内容")
def render_ai_dashboard(collab):
"""渲染AI仪表板"""
st.header("📊 AI工作仪表板")
# AI选择
selected_ai = st.selectbox(
"选择AI",
[role.value for role in AIRole]
)
ai_role = AIRole(selected_ai)
dashboard = collab.get_ai_dashboard(ai_role)
# 基本信息
col1, col2, col3, col4 = st.columns(4)
with col1:
st.metric("当前状态", dashboard["status"]["status"])
with col2:
st.metric("活跃频道", len(dashboard["active_channels"]))
with col3:
st.metric("待处理任务", len(dashboard["pending_tasks"]))
with col4:
st.metric("协作得分", dashboard["collaboration_stats"]["collaboration_score"])
# 详细信息
col1, col2 = st.columns(2)
with col1:
# 待处理任务
st.subheader("📋 待处理任务")
if dashboard["pending_tasks"]:
for task in dashboard["pending_tasks"]:
with st.expander(f"{task['type']} - 优先级 {task['priority']}"):
st.write(f"**来自**: {task['from']}")
st.write(f"**频道**: {task['channel']}")
st.write(f"**创建时间**: {task['created']}")
st.write(f"**描述**: {task['description']}")
else:
st.info("暂无待处理任务")
with col2:
# 最近消息
st.subheader("📨 最近消息")
if dashboard["recent_messages"]:
for msg in dashboard["recent_messages"][:5]:
priority_color = {
1: "🔵", 2: "🟢", 3: "🟡", 4: "🟠", 5: "🔴"
}.get(msg["priority"], "")
st.write(f"{priority_color} **{msg['sender']}** 在 **{msg['channel']}**")
st.write(f" {msg['content']}")
st.write(f"{msg['timestamp']}")
st.divider()
else:
st.info("暂无最近消息")
# 协作统计
st.subheader("📈 协作统计")
stats = dashboard["collaboration_stats"]
col1, col2, col3 = st.columns(3)
with col1:
st.metric("发送消息", stats["messages_sent"])
with col2:
st.metric("接收消息", stats["messages_received"])
with col3:
st.metric("总消息数", stats["total_messages"])
def render_workflow_management(collab):
"""渲染工作流管理"""
st.header("🔄 工作流管理")
# 工作流规则
st.subheader("📜 工作流规则")
rules_data = []
for rule_id, rule in collab.workflow_rules.items():
rules_data.append({
"规则ID": rule.id,
"规则名称": rule.name,
"触发阶段": rule.trigger_phase.value,
"目标AI": rule.target_ai.value if rule.target_ai else "",
"状态": "✅ 激活" if rule.is_active else "❌ 禁用"
})
if rules_data:
st.dataframe(pd.DataFrame(rules_data), use_container_width=True)
# 手动工作交接
st.divider()
st.subheader("🤝 手动工作交接")
col1, col2, col3 = st.columns(3)
with col1:
from_ai = st.selectbox("交接方", [role.value for role in AIRole])
to_ai = st.selectbox("接收方", [role.value for role in AIRole])
with col2:
task_desc = st.text_input("任务描述")
deliverables = st.text_area("交付物列表 (每行一个)")
with col3:
notes = st.text_area("备注")
if st.button("执行工作交接"):
if task_desc and from_ai != to_ai:
deliverable_list = [d.strip() for d in deliverables.split('\n') if d.strip()]
try:
asyncio.run(collab.handoff_work(
from_ai=AIRole(from_ai),
to_ai=AIRole(to_ai),
task_description=task_desc,
deliverables=deliverable_list,
notes=notes
))
st.success("工作交接完成!")
st.rerun()
except Exception as e:
st.error(f"交接失败: {str(e)}")
else:
st.warning("请填写完整信息,且交接方和接收方不能相同")
def render_collaboration_analytics(collab):
"""渲染协作分析"""
st.header("📈 协作分析")
# 消息统计
st.subheader("💬 消息统计")
# 收集所有消息数据
message_data = []
for channel_id, channel in collab.channels.items():
for msg in channel.message_history:
message_data.append({
"频道": channel.name,
"发送者": msg.sender.value,
"消息类型": msg.message_type.value,
"优先级": msg.priority,
"时间": msg.timestamp,
"日期": msg.timestamp.date(),
"小时": msg.timestamp.hour
})
if message_data:
df = pd.DataFrame(message_data)
col1, col2 = st.columns(2)
with col1:
# 按AI发送者统计
sender_counts = df.groupby("发送者").size().reset_index()
sender_counts.columns = ["AI", "消息数量"]
fig = px.bar(sender_counts, x="AI", y="消息数量",
title="各AI发送消息统计")
st.plotly_chart(fig, use_container_width=True)
with col2:
# 按消息类型统计
type_counts = df.groupby("消息类型").size().reset_index()
type_counts.columns = ["消息类型", "数量"]
fig = px.pie(type_counts, values="数量", names="消息类型",
title="消息类型分布")
st.plotly_chart(fig, use_container_width=True)
# 时间线分析
st.subheader("⏰ 活跃度时间线")
if len(df) > 1:
daily_counts = df.groupby("日期").size().reset_index()
daily_counts.columns = ["日期", "消息数量"]
fig = px.line(daily_counts, x="日期", y="消息数量",
title="每日消息数量趋势")
st.plotly_chart(fig, use_container_width=True)
# 频道活跃度
st.subheader("📢 频道活跃度")
channel_counts = df.groupby("频道").size().reset_index()
channel_counts.columns = ["频道", "消息数量"]
channel_counts = channel_counts.sort_values("消息数量", ascending=True)
fig = px.bar(channel_counts, x="消息数量", y="频道",
orientation='h', title="各频道消息数量")
st.plotly_chart(fig, use_container_width=True)
else:
st.info("暂无消息数据用于分析")
def render_system_management(collab):
"""渲染系统管理"""
st.header("⚙️ 系统管理")
# 系统状态
st.subheader("🔍 系统状态")
col1, col2, col3 = st.columns(3)
with col1:
st.metric("活跃频道", len([c for c in collab.channels.values() if c.is_active]))
with col2:
total_messages = sum(len(c.message_history) for c in collab.channels.values())
st.metric("总消息数", total_messages)
with col3:
st.metric("工作流规则", len(collab.workflow_rules))
# 频道管理
st.subheader("📢 频道管理")
for channel_id, channel in collab.channels.items():
with st.expander(f"{channel.name} ({channel.channel_type.value})"):
col1, col2 = st.columns(2)
with col1:
st.write(f"**描述**: {channel.description}")
st.write(f"**参与者**: {len(channel.participants)}")
st.write(f"**消息数**: {len(channel.message_history)}")
st.write(f"**状态**: {'🟢 活跃' if channel.is_active else '🔴 禁用'}")
with col2:
st.write("**参与者列表**:")
for participant in channel.participants:
st.write(f"{participant.value}")
st.write(f"**主持人**: {channel.moderator.value}")
st.write(f"**最后活动**: {channel.last_activity.strftime('%Y-%m-%d %H:%M')}")
# 数据导出
st.divider()
st.subheader("📤 数据导出")
if st.button("导出协作数据"):
# 准备导出数据
export_data = {
"channels": {},
"ai_status": {},
"workflow_rules": {},
"system_info": {
"current_phase": collab.current_phase.value,
"export_time": datetime.now().isoformat()
}
}
# 频道数据
for channel_id, channel in collab.channels.items():
export_data["channels"][channel_id] = {
"name": channel.name,
"type": channel.channel_type.value,
"participants": [p.value for p in channel.participants],
"message_count": len(channel.message_history),
"last_activity": channel.last_activity.isoformat()
}
# AI状态数据
for ai_role, status in collab.ai_status.items():
export_data["ai_status"][ai_role.value] = status
# 工作流规则
for rule_id, rule in collab.workflow_rules.items():
export_data["workflow_rules"][rule_id] = {
"name": rule.name,
"description": rule.description,
"trigger_phase": rule.trigger_phase.value,
"action": rule.action,
"is_active": rule.is_active
}
# 创建下载链接
json_str = json.dumps(export_data, indent=2, ensure_ascii=False)
st.download_button(
label="下载协作数据 (JSON)",
data=json_str,
file_name=f"ai_collaboration_data_{datetime.now().strftime('%Y%m%d_%H%M%S')}.json",
mime="application/json"
)
if __name__ == "__main__":
main()

@@ -0,0 +1,222 @@
# 💻 Claude AI - OpenBB核心代码实现工作说明书
## 🎯 任务概述
作为核心开发工程师您需要基于Qwen的架构设计实现OpenBB与稷下学宫系统的深度集成代码。
## 📋 核心职责
### 1. 核心引擎实现
**任务目标:** 增强现有OpenBB引擎实现八仙智能数据获取
**关键文件实现:**
```
src/jixia/engines/
├── enhanced_openbb_engine.py # 增强版OpenBB引擎
├── immortal_data_router.py # 八仙数据路由器
├── intelligent_fallback.py # 智能降级机制
└── data_quality_monitor.py # 数据质量监控
```
**核心代码需求:**
```python
class EnhancedOpenBBEngine:
"""增强版OpenBB引擎 - 八仙专属"""
async def get_immortal_insight(self, immortal_name: str,
symbol: str, analysis_type: str):
"""为特定八仙获取专属金融洞察"""
pass
async def orchestrate_debate_data(self, topic: str,
participants: List[str]):
"""为稷下学宫辩论准备数据"""
pass
```
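以下是该接口的一个实现思路草图(仅作示意,并非最终实现;其中假设可以复用本文后面"八仙数据偏好实现"一节定义的 IMMORTAL_PREFERENCES,并在 OpenBB 不可用或调用失败时优雅降级):

```python
from typing import Any, Dict, Optional


async def get_immortal_insight_sketch(immortal_name: str, symbol: str, analysis_type: str,
                                      preferences: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
    """按八仙偏好路由数据请求,OpenBB 不可用或调用失败时返回降级结果。

    preferences 预期取自下文"八仙数据偏好实现"中的 IMMORTAL_PREFERENCES[immortal_name]。
    """
    prefs = preferences or {}
    try:
        from openbb import obb                            # 延迟导入,避免对 OpenBB 的硬依赖
        raw = obb.equity.price.historical(symbol=symbol)  # 实际路由应依据 prefs['data_types'] 选择不同接口
        return {
            "immortal": immortal_name,
            "symbol": symbol,
            "analysis_type": analysis_type,
            "style": prefs.get("analysis_style", "balanced"),
            "data": raw.to_df().tail(30).to_dict("records"),
        }
    except Exception as exc:                              # 含 ImportError:触发智能降级
        return {"immortal": immortal_name, "symbol": symbol,
                "fallback": True, "error": str(exc)}
```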
### 2. 智能体数据适配器
**任务目标:** 实现AI智能体与OpenBB数据的无缝对接
**具体实现:**
- 八仙角色数据源适配
- 实时数据流处理
- 智能缓存机制
- 异常处理和重试逻辑
**核心文件:**
- `src/jixia/adapters/openbb_agent_adapter.py`
- `src/jixia/adapters/immortal_data_processor.py`
### 3. Streamlit界面增强
**任务目标:** 优化现有OpenBB标签页增加八仙论道功能
**需要修改的文件:**
- `app/tabs/openbb_tab.py` - 增强现有界面
- `app/tabs/immortal_debate_tab.py` - 新增八仙辩论界面
**UI功能需求**
```python
def render_immortal_debate_interface():
"""渲染八仙辩论界面"""
# 1. 股票/主题选择器
# 2. 八仙角色选择器
# 3. 实时数据展示
# 4. 辩论结果可视化
pass
```
### 4. 数据质量保障
**任务目标:** 确保数据准确性和系统稳定性
**实现重点:**
- 数据验证机制
- 异常数据处理
- 性能监控埋点
- 日志记录系统
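下面给出数据验证与日志埋点的一个最小草图(示意性写法,列名按 OpenBB 常见的 OHLCV 字段假设):

```python
import logging

import pandas as pd

logger = logging.getLogger("jixia.data_quality")

REQUIRED_COLUMNS = {"open", "high", "low", "close", "volume"}   # 假设的最小字段集


def validate_price_frame(df: pd.DataFrame, symbol: str) -> bool:
    """校验行情数据的基本质量,问题记入日志并返回 False。"""
    cols = {str(c).lower(): c for c in df.columns}
    missing = REQUIRED_COLUMNS - cols.keys()
    if missing:
        logger.warning("%s 数据缺少字段: %s", symbol, sorted(missing))
        return False
    close = df[cols["close"]]
    if df.empty or close.isna().all():
        logger.warning("%s 数据为空或收盘价全部缺失", symbol)
        return False
    if (close.dropna() <= 0).any():
        logger.warning("%s 存在非正收盘价,疑似脏数据", symbol)
        return False
    return True
```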
## 🔧 实现规范
### 代码风格要求:
```python
# 1. 遵循项目现有代码风格
# 2. 完整的类型注解
# 3. 详细的docstring文档
# 4. 异常处理机制
from typing import Any, Dict, List, Optional, Union
from dataclasses import dataclass
from datetime import datetime
@dataclass
class ImmortalInsight:
"""八仙洞察数据模型"""
immortal_name: str
symbol: str
insight_type: str
data: Dict[str, Any]
confidence: float
timestamp: datetime
```
### 必须保持的特性:
1. **向后兼容** - 不破坏现有功能
2. **优雅降级** - OpenBB不可用时的备选方案
3. **文化内核** - 保持八仙论道的文化特色
4. **模块化设计** - 便于单元测试和维护
### 性能要求:
- 数据获取响应时间 < 3秒
- 并发处理能力 > 10个请求/秒
- 内存使用 < 500MB
- CPU使用率 < 30%
## 🎭 八仙特色功能实现
### 八仙数据偏好实现:
```python
IMMORTAL_PREFERENCES = {
'吕洞宾': {
'data_types': ['technical_indicators', 'chart_patterns'],
'analysis_style': 'systematic',
'risk_appetite': 'moderate'
},
'何仙姑': {
'data_types': ['risk_metrics', 'volatility'],
'analysis_style': 'conservative',
'risk_appetite': 'low'
},
# ... 其他六仙
}
```
### 智能辩论数据准备:
```python
async def prepare_debate_data(self, topic_symbol: str) -> DebateDataSet:
"""为八仙辩论准备差异化数据视角"""
# 1. 获取基础数据
# 2. 按八仙偏好处理数据
# 3. 生成对比性观点
# 4. 返回结构化辩论数据
pass
```
## 🧪 测试要求
### 必须实现的测试:
```python
# tests/test_enhanced_openbb_engine.py
class TestEnhancedOpenBBEngine:
def test_immortal_data_routing(self):
"""测试八仙数据路由功能"""
pass
def test_fallback_mechanism(self):
"""测试降级机制"""
pass
def test_concurrent_requests(self):
"""测试并发请求处理"""
pass
```
### 集成测试:
- 与现有八仙辩论系统的集成
- Streamlit界面集成测试
- 实际数据获取测试
## 🔄 协作接口
### 接收Qwen的架构输入
- [ ] 架构设计文档
- [ ] 接口规范定义
- [ ] 数据模型标准
### 为Gemini提供测试目标
- [ ] 完整的代码实现
- [ ] 单元测试用例
- [ ] 集成测试指南
### 为RovoDev提供文档素材
- [ ] 代码注释和文档
- [ ] API使用示例
- [ ] 故障排除指南
## 📅 开发里程碑
### 里程碑1(3天)
- [ ] 核心引擎实现
- [ ] 基础单元测试
- [ ] 简单集成验证
### 里程碑2(2天)
- [ ] Streamlit界面增强
- [ ] 八仙特色功能
- [ ] 性能优化
### 里程碑3(1天)
- [ ] 完整测试覆盖
- [ ] 代码审查和优化
- [ ] 文档完善
## 💡 创新挑战
请在实现中展现创新:
1. **智能数据融合算法**
2. **八仙个性化数据处理**
3. **实时性能监控机制**
4. **用户体验优化**
## ⚠️ 特别注意
### 文化敏感性:
- 确保八仙角色的准确性和尊重性
- 保持传统文化与现代技术的平衡
- 避免过度商业化的表达
### 技术债务控制:
- 避免硬编码
- 保持配置的灵活性
- 确保代码的可维护性
---
**注意:** 代码是文化的载体,请让每一行代码都体现稷下学宫的智慧!

@@ -0,0 +1,274 @@
# 🧪 Gemini AI - OpenBB集成测试验证工作说明书
## 🎯 任务概述
作为测试工程师您需要为OpenBB与稷下学宫系统的集成功能设计并执行全面的测试验证方案。
## 📋 核心职责
### 1. 测试策略制定
**任务目标:** 制定全面的测试策略和验证标准
**测试金字塔设计:**
```
          [E2E Tests]                  # 端到端测试
          /        \
   [Integration Tests]                 # 集成测试
      /           \
[Unit Tests]    [API Tests]            # 单元测试 + API测试
```
**测试覆盖范围:**
- 功能测试 (80%覆盖率)
- 性能测试 (响应时间、并发)
- 稳定性测试 (长时间运行)
- 兼容性测试 (多数据源)
### 2. 八仙智能体测试
**任务目标:** 验证八仙角色的数据获取和分析能力
**测试文件结构:**
```
tests/immortal_tests/
├── test_immortal_data_routing.py # 八仙数据路由测试
├── test_immortal_preferences.py # 八仙偏好测试
├── test_debate_data_quality.py # 辩论数据质量测试
└── test_cultural_accuracy.py # 文化准确性测试
```
**关键测试用例:**
```python
class TestImmortalDataRouting:
"""八仙数据路由测试"""
def test_lv_dongbin_technical_analysis(self):
"""测试吕洞宾的技术分析数据获取"""
pass
def test_he_xiangu_risk_metrics(self):
"""测试何仙姑的风险指标数据"""
pass
def test_immortal_data_consistency(self):
"""测试八仙数据的一致性"""
pass
```
### 3. OpenBB集成测试
**任务目标:** 验证OpenBB数据源的集成质量
**测试重点:**
- OpenBB API调用稳定性
- 数据格式标准化
- 错误处理机制
- 降级策略验证
**核心测试文件:**
```python
# tests/openbb_integration/
import pytest
from unittest import mock  # 用于模拟 OpenBB 不可用的场景

class TestOpenBBIntegration:
"""OpenBB集成测试套件(engine 假定由测试 fixture 提供)"""
@pytest.mark.asyncio
async def test_stock_data_retrieval(self):
"""测试股票数据获取"""
symbols = ['AAPL', 'TSLA', 'MSFT']
for symbol in symbols:
data = await engine.get_stock_data(symbol)
assert data is not None
assert 'close' in data.columns
def test_fallback_mechanism(self):
"""测试OpenBB不可用时的降级机制"""
# 模拟OpenBB不可用
with mock.patch('openbb.obb', side_effect=ImportError):
result = engine.get_data_with_fallback('AAPL')
assert result.source == 'demo_data'
```
### 4. 性能和稳定性测试
**任务目标:** 确保系统在各种条件下的性能表现
**性能基准:**
- 数据获取延迟 < 3秒
- 并发处理 > 10 req/s
- 内存使用 < 500MB
- 99.9% 可用性
**负载测试方案:**
```python
# tests/performance/
class TestPerformance:
"""性能测试套件"""
def test_concurrent_data_requests(self):
"""并发数据请求测试"""
import concurrent.futures
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
futures = [executor.submit(self.get_test_data) for _ in range(100)]
results = [f.result() for f in futures]
assert all(r.success for r in results)
assert max(r.response_time for r in results) < 5.0
```
## 🎭 文化特色测试
### 八仙文化准确性验证:
```python
class TestCulturalAccuracy:
"""文化准确性测试"""
def test_immortal_characteristics(self):
"""验证八仙特征的准确性"""
immortals = get_immortal_configs()
# 验证吕洞宾的技术分析特色
assert immortals['吕洞宾'].specialty == 'technical_analysis'
assert immortals['吕洞宾'].element == '乾'
# 验证何仙姑的风险控制特色
assert immortals['何仙姑'].specialty == 'risk_metrics'
assert immortals['何仙姑'].element == '坤'
def test_debate_cultural_context(self):
"""验证辩论的文化背景准确性"""
debate = create_test_debate('AAPL')
# 确保辩论遵循稷下学宫的传统
assert 'jixia' in debate.context
assert len(debate.participants) == 8 # 八仙
```
## 🔧 测试工具和框架
### 推荐工具栈:
```python
# 测试依赖
pytest>=7.4.0 # 主测试框架
pytest-asyncio>=0.21.0 # 异步测试支持
pytest-mock>=3.11.0 # Mock功能
pytest-cov>=4.1.0 # 覆盖率统计
pytest-benchmark>=4.0.0 # 性能基准测试
# 性能测试
locust>=2.15.0 # 负载测试
memory-profiler>=0.60.0 # 内存分析
# 数据验证
pydantic>=2.0.0 # 数据模型验证
jsonschema>=4.19.0 # JSON架构验证
```
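其中 locust 负载测试的一个最小示例如下(示意性写法,假设压测目标是本地 8502 端口的 Streamlit 服务,文件路径为假设值):

```python
# tests/performance/locustfile.py(示意)
from locust import HttpUser, between, task


class JixiaWebUser(HttpUser):
    """模拟用户访问Web界面,统计并发下的响应时间与失败率"""
    host = "http://localhost:8502"   # 假设的本地 Streamlit 服务地址
    wait_time = between(1, 3)        # 每个虚拟用户的请求间隔

    @task
    def open_home(self):
        # 访问首页,locust 会自动记录响应时间与失败率
        self.client.get("/")
```

可以用类似 `locust -f tests/performance/locustfile.py --headless -u 10 -r 2 -t 1m` 的方式运行(参数仅为示例)。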
### 测试配置文件:
```ini
# pytest.ini
[tool:pytest]
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*
addopts =
--cov=src
--cov-report=html
--cov-report=term-missing
--asyncio-mode=auto
```
## 📊 测试报告和指标
### 测试报告模板:
```
tests/reports/
├── unit_test_report.html # 单元测试报告
├── integration_test_report.html # 集成测试报告
├── performance_benchmark.json # 性能基准数据
├── coverage_report/ # 代码覆盖率报告
└── cultural_validation_report.md # 文化验证报告
```
### 关键指标监控:
```python
# 自动化指标收集
class TestMetrics:
"""测试指标收集器"""
def collect_response_times(self):
"""收集响应时间数据"""
pass
def measure_memory_usage(self):
"""监控内存使用情况"""
pass
def validate_data_quality(self):
"""验证数据质量指标"""
pass
```
## 🔄 协作流程
### 与Claude的协作
1. **接收代码实现** → 制定对应测试用例
2. **执行测试验证** → 反馈BUG和优化建议
3. **性能测试** → 提供优化方向
### 与Qwen的协作
1. **验证架构设计** → 确认技术指标可达成
2. **测试架构决策** → 验证设计的合理性
### 与RovoDev的协作
1. **提供测试数据** → 支持文档编写
2. **验证文档准确性** → 确保文档与实际一致
## 📅 测试里程碑
### 阶段一(2天):测试框架搭建
- [ ] 测试环境配置
- [ ] 基础测试框架
- [ ] Mock数据准备
### 阶段二(3天):功能测试执行
- [ ] 单元测试执行
- [ ] 集成测试验证
- [ ] 八仙特色功能测试
### 阶段三(2天):性能和稳定性测试
- [ ] 负载测试执行
- [ ] 性能基准建立
- [ ] 稳定性验证
### 阶段四(1天):测试报告生成
- [ ] 测试结果汇总
- [ ] 问题清单整理
- [ ] 优化建议制定
## 🎯 验收标准
### 功能验收:
- [ ] 所有单元测试通过率 > 95%
- [ ] 集成测试通过率 > 90%
- [ ] 八仙特色功能100%验证通过
### 性能验收:
- [ ] 响应时间 < 3秒
- [ ] 并发处理 > 10 req/s
- [ ] 内存使用稳定
- [ ] 99%可用性达成
### 文化验收:
- [ ] 八仙角色特征准确
- [ ] 辩论逻辑符合传统
- [ ] 文化表达尊重得体
## 💡 创新测试方法
### 智能化测试:
1. **AI驱动的测试用例生成**
2. **自适应性能基准调整**
3. **文化语境的自动化验证**
4. **用户行为模拟测试**
---
**注意:** 测试不仅是质量保障,更是文化传承的守护者!每一个测试用例都要体现对传统文化的尊重!

@@ -0,0 +1,122 @@
# 📐 Qwen AI - OpenBB架构设计师工作说明书
## 🎯 任务概述
作为架构设计师,您需要为"炼妖壶-稷下学宫AI辩论系统"设计OpenBB集成的技术架构方案。
## 📋 核心职责
### 1. 系统架构设计
**任务目标:** 设计OpenBB与稷下学宫系统的集成架构
**具体工作:**
- 分析现有系统架构(`src/jixia/engines/`
- 设计OpenBB数据流架构
- 制定模块间解耦策略
- 设计故障转移和降级机制
**交付物:**
```
docs/architecture/
├── openbb_integration_architecture.md
├── data_flow_diagram.mermaid
├── component_interaction_diagram.mermaid
└── deployment_architecture.md
```
### 2. 数据抽象层设计
**任务目标:** 设计统一的金融数据抽象接口
**具体工作:**
- 定义标准化数据模型
- 设计Provider适配器模式
- 制定数据缓存策略
- 设计数据质量监控机制
**关键文件:**
- `src/jixia/engines/data_abstraction.py`
- `src/jixia/models/financial_data_models.py`
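下面是 Provider 适配器模式的一个接口草图(仅示意设计方向,数据模型字段与类名为假设值,最终以架构文档为准):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional, Protocol


@dataclass
class StockQuote:
    """标准化行情数据模型(字段仅为示意)"""
    symbol: str
    price: float
    timestamp: datetime
    provider: str


class MarketDataProvider(Protocol):
    """所有数据源适配器需要满足的统一接口"""
    name: str

    def get_quote(self, symbol: str) -> Optional[StockQuote]: ...


class OpenBBProvider:
    """OpenBB 适配器示意:调用失败时返回 None,由上层数据抽象层决定降级策略"""
    name = "openbb"

    def get_quote(self, symbol: str) -> Optional[StockQuote]:
        try:
            from openbb import obb                       # 延迟导入,保持可选依赖
            df = obb.equity.price.historical(symbol=symbol).to_df()
            return StockQuote(symbol=symbol,
                              price=float(df["close"].iloc[-1]),
                              timestamp=datetime.now(),
                              provider=self.name)
        except Exception:
            return None
```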
### 3. 性能优化架构
**任务目标:** 确保系统在高并发下的稳定性
**具体工作:**
- 设计异步数据获取架构
- 制定缓存策略Redis/Memory
- 设计负载均衡机制
- 制定监控和告警策略
## 🔧 技术约束
### 必须遵循的原则:
1. **渐进式集成** - 不破坏现有功能
2. **可选依赖** - OpenBB失效时优雅降级
3. **文化融合** - 保持八仙论道的文化特色
4. **模块解耦** - 便于后续扩展其他数据源
### 技术栈限制:
- Python 3.10+
- 现有Streamlit界面
- 八仙智能体系统
- Vertex AI记忆银行
## 📊 成功标准
### 架构质量指标:
- [ ] 模块耦合度 < 20%
- [ ] 代码复用率 > 80%
- [ ] 系统可用性 > 99%
- [ ] 响应时间 < 2秒
### 文档完整性:
- [ ] 架构图清晰易懂
- [ ] 接口文档完整
- [ ] 部署指南详细
- [ ] 故障处理手册
## 🎭 与稷下学宫的结合
### 八仙角色映射:
```python
# 设计八仙与数据源的智能映射
immortal_data_mapping = {
'吕洞宾': 'technical_analysis', # 技术分析专家
'何仙姑': 'risk_metrics', # 风险控制专家
'张果老': 'historical_data', # 历史数据分析师
'韩湘子': 'sector_analysis', # 新兴资产专家
'汉钟离': 'market_movers', # 热点追踪
'蓝采和': 'value_discovery', # 潜力股发现
'铁拐李': 'contrarian_analysis', # 逆向思维专家
'曹国舅': 'macro_economics' # 宏观经济分析师
}
```
## 🔄 协作接口
### 与其他AI的协作
- **Claude代码实现**: 提供详细的接口规范和实现指导
- **Gemini测试验证**: 制定测试用例和验证标准
- **RovoDev文档整合**: 提供架构决策文档和技术规范
## 📅 时间节点
### 阶段一(2天)
- [ ] 现有系统分析
- [ ] 架构方案设计
- [ ] 核心接口定义
### 阶段二(1天)
- [ ] 详细设计文档
- [ ] 技术选型说明
- [ ] 风险评估报告
## 💡 创新要求
作为架构师,请在以下方面展现创新:
1. **传统文化与现代技术的融合**
2. **智能化的数据路由策略**
3. **自适应的性能优化机制**
4. **面向未来的可扩展架构**
---
**注意:** 这是一个文化与技术深度融合的项目,请在技术设计中体现中国传统哲学思想!

@@ -0,0 +1,236 @@
# 🎭 四AI协作指南 - OpenBB稷下学宫集成项目
## 🎯 项目使命
将OpenBB金融数据平台与稷下学宫八仙论道系统深度融合创造传统文化与现代金融科技的完美结合。
## 👥 团队角色与协作关系
```mermaid
graph TD
A[Qwen - 架构设计师] --> B[Claude - 核心开发工程师]
A --> C[Gemini - 测试验证专家]
A --> D[RovoDev - 项目整合专家]
B --> C
B --> D
C --> D
subgraph 协作流程
E[架构设计] --> F[代码实现]
F --> G[测试验证]
G --> H[文档整合]
H --> I[项目交付]
end
```
## 📋 各AI工作说明书链接
| AI角色 | 主要职责 | 工作说明书 |
|--------|----------|------------|
| **Qwen** | 架构设计师 | [`QWEN_ARCHITECTURE_DESIGN.md`](./QWEN_ARCHITECTURE_DESIGN.md) |
| **Claude** | 核心开发工程师 | [`CLAUDE_CODE_IMPLEMENTATION.md`](./CLAUDE_CODE_IMPLEMENTATION.md) |
| **Gemini** | 测试验证专家 | [`GEMINI_TEST_VALIDATION.md`](./GEMINI_TEST_VALIDATION.md) |
| **RovoDev** | 项目整合专家 | [`ROVODEV_PROJECT_INTEGRATION.md`](./ROVODEV_PROJECT_INTEGRATION.md) |
## 🔄 协作工作流
### 第一阶段:设计与规划 (Day 1-3)
```
Qwen 主导:
├── 系统架构设计
├── 接口规范定义
├── 技术选型方案
└── 实施计划制定
其他AI配合
├── Claude: 技术可行性评估
├── Gemini: 测试策略制定
└── RovoDev: 项目框架搭建
```
### 第二阶段:核心开发 (Day 4-8)
```
Claude 主导:
├── 核心引擎实现
├── API集成开发
├── UI界面增强
└── 功能模块编码
其他AI配合
├── Qwen: 架构指导和审查
├── Gemini: 同步测试执行
└── RovoDev: 代码集成管理
```
### 第三阶段:测试验证 (Day 9-12)
```
Gemini 主导:
├── 功能测试执行
├── 性能基准测试
├── 集成测试验证
└── 文化准确性检查
其他AI配合
├── Qwen: 架构层面问题解决
├── Claude: 代码BUG修复
└── RovoDev: 测试结果整合
```
### 第四阶段:整合交付 (Day 13-15)
```
RovoDev 主导:
├── 文档体系整合
├── 用户指南编写
├── 项目质量检查
└── 最终版本发布
其他AI配合
├── Qwen: 架构文档审核
├── Claude: 技术文档完善
└── Gemini: 测试报告整理
```
## 🎭 文化核心要求
### 八仙角色特征所有AI必须遵循
```python
IMMORTAL_CHARACTERISTICS = {
'吕洞宾': {
'element': '乾', # 八卦方位
'specialty': '技术分析', # 专业特长
'personality': '系统性', # 性格特点
'approach': '理性分析' # 分析方法
},
'何仙姑': {
'element': '坤',
'specialty': '风险控制',
'personality': '稳健保守',
'approach': '风险优先'
},
'张果老': {
'element': '兑',
'specialty': '历史数据',
'personality': '博学深沉',
'approach': '历史借鉴'
},
'韩湘子': {
'element': '艮',
'specialty': '新兴资产',
'personality': '创新进取',
'approach': '前瞻思维'
},
'汉钟离': {
'element': '离',
'specialty': '热点追踪',
'personality': '敏锐活跃',
'approach': '趋势跟踪'
},
'蓝采和': {
'element': '坎',
'specialty': '价值发现',
'personality': '独立思考',
'approach': '逆向分析'
},
'铁拐李': {
'element': '巽',
'specialty': '逆向思维',
'personality': '犀利直接',
'approach': '反向验证'
},
'曹国舅': {
'element': '震',
'specialty': '宏观经济',
'personality': '高瞻远瞩',
'approach': '宏观视角'
}
}
```
### 稷下学宫核心价值:
- **开放包容**:各种观点都能得到尊重
- **理性辩论**:基于数据和逻辑的讨论
- **百家争鸣**:鼓励不同视角的碰撞
- **求同存异**:在分歧中寻找共识
## 📊 质量标准所有AI共同遵循
### 技术质量:
- [ ] 代码测试覆盖率 > 80%
- [ ] API响应时间 < 3秒
- [ ] 系统可用性 > 99%
- [ ] 并发处理能力 > 10 req/s
### 文化质量:
- [ ] 八仙特征准确性 100%
- [ ] 传统文化表达尊重性 100%
- [ ] 现代化融合自然度 > 90%
- [ ] 教育价值体现度 > 85%
### 用户体验:
- [ ] 界面直观度 > 90%
- [ ] 功能易用性 > 85%
- [ ] 文档完整性 > 95%
- [ ] 错误处理友好度 > 90%
## 🔄 沟通协作机制
### 日常沟通:
- **每日站会** (15分钟):进度同步,问题协调
- **技术评审** (每2天):关键决策讨论
- **质量检查** (每周):交叉审查工作成果
### 问题升级机制:
```
Level 1: AI间直接协商
Level 2: RovoDev协调处理
Level 3: 项目负责人决策
```
### 文档共享约定:
- 所有设计文档实时共享
- 代码变更及时通知相关AI
- 测试结果定期同步
- 问题和解决方案公开透明
## 🎯 成功标准
### 项目成功的标志:
1. **功能完整**OpenBB数据与八仙论道完美融合
2. **性能稳定**:各种场景下系统表现良好
3. **文化准确**:传统文化表达准确尊重
4. **用户满意**:最终用户体验优秀
5. **可维护性**:代码和文档便于后续维护
### 团队协作成功的标志:
1. **沟通顺畅**各AI间协作无障碍
2. **质量一致**:各模块质量标准统一
3. **进度可控**:项目按时交付
4. **创新突出**:在传统与现代结合上有所突破
## 💡 创新目标
### 技术创新:
- 智能化的数据路由算法
- 自适应的性能优化机制
- 文化语境的AI理解能力
### 文化创新:
- 传统文化的现代化表达
- 金融科技的人文内涵
- AI技术的文化责任
### 用户体验创新:
- 沉浸式的文化体验
- 智能化的学习辅助
- 个性化的论道体验
---
## 📞 紧急联系
如遇到跨团队协调问题,请立即联系项目整合专家 RovoDev。
**记住**:我们不仅在构建技术产品,更在传承和发扬中华文化!每一行代码、每一个测试、每一份文档都承载着文化的使命!
---
*"稷下学宫,百家争鸣;开源精神,技术报国!"*

@@ -0,0 +1,289 @@
# 📚 RovoDev AI - OpenBB项目整合与文档管理工作说明书
## 🎯 任务概述
作为项目整合专家您需要协调四个AI团队的工作成果整合OpenBB集成项目的完整文档体系并确保项目的顺利交付。
## 📋 核心职责
### 1. 项目协调与整合
**任务目标:** 统筹四个AI团队的工作确保各模块无缝对接
**协调矩阵:**
```
            Qwen     Claude    Gemini    RovoDev
架构设计     主导      配合       验证       整合
代码实现     配合      主导       验证       监控
测试验证     配合      配合       主导       汇总
文档整合     配合      配合       配合       主导
```
**关键协调点:**
- 接口规范的一致性
- 代码实现与架构设计的匹配度
- 测试用例覆盖的完整性
- 文档与实际功能的准确性
### 2. 完整文档体系构建
**任务目标:** 建立OpenBB集成的完整文档生态
**文档架构:**
```
docs/openbb_integration/
├── 00_PROJECT_OVERVIEW.md # 项目总览
├── 01_ARCHITECTURE_DESIGN/ # 架构设计Qwen输出
│ ├── system_architecture.md
│ ├── data_flow_design.md
│ ├── integration_patterns.md
│ └── deployment_strategy.md
├── 02_IMPLEMENTATION_GUIDE/ # 实现指南Claude输出
│ ├── core_engine_implementation.md
│ ├── api_integration_guide.md
│ ├── ui_enhancement_guide.md
│ └── troubleshooting_guide.md
├── 03_TEST_DOCUMENTATION/ # 测试文档Gemini输出
│ ├── test_strategy.md
│ ├── test_results_report.md
│ ├── performance_benchmarks.md
│ └── quality_assurance_report.md
├── 04_USER_GUIDES/ # 用户指南
│ ├── getting_started.md
│ ├── immortal_debate_tutorial.md
│ ├── configuration_guide.md
│ └── best_practices.md
├── 05_CULTURAL_INTEGRATION/ # 文化融合文档
│ ├── immortal_characteristics.md
│ ├── jixia_philosophy_in_code.md
│ └── cultural_accuracy_guidelines.md
└── 06_MAINTENANCE/ # 维护文档
├── release_notes.md
├── upgrade_guide.md
├── known_issues.md
└── future_roadmap.md
```
### 3. 用户体验设计
**任务目标:** 确保最终用户能够轻松使用OpenBB集成功能
**用户旅程设计:**
```mermaid
graph TD
A[用户启动应用] --> B[选择OpenBB标签页]
B --> C[查看数据可用性状态]
C --> D{OpenBB是否可用?}
D -->|是| E[选择股票符号]
D -->|否| F[使用演示数据]
E --> G[启动八仙论道]
F --> G
G --> H[查看辩论结果]
H --> I[导出分析报告]
```
**交互设计文档:**
- 界面原型设计
- 用户流程图
- 错误处理流程
- 帮助文档集成
### 4. 质量保证与发布管理
**任务目标:** 确保项目质量并管理发布流程
**质量检查清单:**
- [ ] 代码规范一致性检查
- [ ] 文档完整性验证
- [ ] 功能集成测试通过
- [ ] 性能指标达标
- [ ] 文化准确性确认
- [ ] 用户体验验证
## 🎭 文化融合专项工作
### 传统文化现代化表达:
```markdown
# 八仙论道的现代诠释
## 文化内核保持:
- **稷下学宫精神**:开放包容、百家争鸣
- **八仙特质**:各具特色、相互补充
- **论道传统**:理性辩论、求同存异
## 现代技术表达:
- **数据驱动**:用真实数据支撑观点
- **智能化**AI技术赋能传统智慧
- **可视化**:现代界面展示古典思想
```
### 文化准确性验证:
```python
# 文化审核检查点
CULTURAL_CHECKPOINTS = {
'immortal_names': '确保八仙姓名准确无误',
'characteristics': '验证八仙特征描述的准确性',
'philosophical_context': '确保稷下学宫背景的正确表达',
'respectful_representation': '确保文化表达的尊重性',
'educational_value': '验证文化教育价值的体现'
}
```
## 📊 项目管理仪表板
### 进度跟踪系统:
```
项目进度监控:
├── 架构设计进度 [████████░░] 80%
├── 代码实现进度 [██████░░░░] 60%
├── 测试验证进度 [████░░░░░░] 40%
└── 文档整合进度 [███████░░░] 70%
质量指标:
├── 代码覆盖率: 85%
├── 文档完整性: 90%
├── 测试通过率: 92%
└── 文化准确性: 95%
```
### 风险管理:
```markdown
## 主要风险点识别:
### 技术风险:
- OpenBB版本兼容性问题
- 性能瓶颈风险
- 数据质量不稳定
### 文化风险:
- 八仙形象表达不当
- 传统文化误读
- 现代化过度商业化
### 项目风险:
- 团队协调不畅
- 进度延迟风险
- 质量标准不一致
```
## 🔄 协作工作流
### 日常协调流程:
```
每日站会 (15分钟)
├── 各AI汇报昨日进展
├── 今日工作计划
├── 阻塞问题讨论
└── 协作需求确认
每周回顾 (1小时)
├── 整体进度回顾
├── 质量指标检查
├── 风险评估更新
└── 下周计划调整
```
### 交付物审核流程:
```mermaid
graph LR
A[AI团队提交] --> B[RovoDev初审]
B --> C[质量检查]
C --> D{是否达标?}
D -->|是| E[集成到主项目]
D -->|否| F[反馈修改意见]
F --> A
E --> G[用户验证]
G --> H[正式发布]
```
## 📅 里程碑和时间线
### 第一周:框架建立
- **Day 1-2**: 项目启动,框架搭建
- **Day 3-4**: 各团队初始输出整合
- **Day 5-7**: 首版原型验证
### 第二周:核心开发
- **Day 8-10**: 核心功能开发
- **Day 11-12**: 集成测试执行
- **Day 13-14**: 问题修复和优化
### 第三周:完善和发布
- **Day 15-17**: 文档完善和用户测试
- **Day 18-19**: 最终质量检查
- **Day 20-21**: 项目发布和交付
## 💡 创新整合方案
### 1. 智能文档生成
```python
class IntelligentDocGenerator:
"""智能文档生成器"""
def generate_api_docs(self, code_modules):
"""从代码自动生成API文档"""
pass
def create_tutorial_from_tests(self, test_cases):
"""从测试用例生成使用教程"""
pass
def cultural_context_validator(self, content):
"""文化内容准确性验证"""
pass
```
### 2. 多媒体文档体验
- 交互式代码示例
- 视频教程制作
- 在线演示环境
- 文化背景介绍动画
### 3. 社区协作平台
```markdown
## 开源社区建设:
### 贡献者指南:
- 代码贡献流程
- 文档改进指南
- 文化咨询渠道
- 反馈收集机制
### 社区活动:
- 八仙论道比赛
- 传统文化技术沙龙
- 开源项目展示
- 用户案例分享
```
## 🎯 成功标准
### 项目交付标准:
- [ ] 功能完整性 100%
- [ ] 文档覆盖率 95%+
- [ ] 用户满意度 90%+
- [ ] 文化准确性 100%
- [ ] 性能指标达标 100%
### 长期影响目标:
- [ ] 成为传统文化与AI结合的示范项目
- [ ] 推动开源社区的文化多元化
- [ ] 为金融科技注入文化内涵
- [ ] 建立可持续的维护生态
## 📈 后续运营规划
### 版本迭代计划:
```
v1.0: OpenBB基础集成
v1.1: 性能优化和BUG修复
v1.2: 更多数据源集成
v2.0: 高级分析功能
v2.1: 移动端适配
v3.0: 企业级功能
```
### 社区建设计划:
- 建立用户社区
- 定期发布教程
- 举办线上活动
- 收集用户反馈
---
**注意:** 作为项目整合者,您是传统文化与现代技术的桥梁!请确保每一个细节都体现对文化的尊重和技术的严谨!

@@ -0,0 +1,13 @@
graph LR
A[Streamlit App] --> B[OpenBB Tab]
A --> C[Debate System]
C --> D[Immortal Agents]
D --> E[OpenBB Engine]
D --> F[Perpetual Engine]
B --> G[_load_price_data]
G --> H{OpenBB Available?}
H -- Yes --> I[OpenBB obb]
H -- No --> J[Demo/Synthetic Data]
E --> K{OpenBB Available?}
K -- Yes --> I
K -- No --> L[Error Result]

@@ -0,0 +1,9 @@
graph TD
A[User Request] --> B{OpenBB Installed?}
B -- Yes --> C{OpenBB Data Available?}
C -- Yes --> D[OpenBB Engine]
C -- No --> E[Fallback to Demo/Synthetic Data]
B -- No --> E
D --> F[Format Data]
E --> F
F --> G[Return to Agent/UI]

@@ -0,0 +1,181 @@
# OpenBB Integration Deployment Architecture
## Environment and Dependencies
### Base Requirements
- **Python Version**: 3.8 or higher
- **Core Dependencies**: As specified in `requirements.txt` (Streamlit, etc.)
### Optional OpenBB Dependency
- **OpenBB Library**: `openbb>=4.1.0`
- **Installation**: Not included in default `requirements.txt` to maintain lightweight base installation
- **Activation**: Install via `pip install "openbb>=4.1.0"` when needed
## Configuration Management
### Environment Variables
- **Standard Project Variables**: Managed through Doppler (RAPIDAPI_KEY, GOOGLE_API_KEY, etc.)
- **OpenBB Provider Variables**:
- For public data sources (like yfinance): No specific configuration required
- For premium data sources (e.g., Polygon, FMP):
- Variables are managed by OpenBB internally
- Follow OpenBB documentation for provider-specific setup
- Example: `POLYGON_API_KEY` for Polygon.io data
### Feature Flags
- **JIXIA_MEMORY_BACKEND**: When set to "cloudflare", enables Cloudflare AutoRAG as memory backend
- **GOOGLE_GENAI_USE_VERTEXAI**: When set to "TRUE", enables Vertex AI for memory bank
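As a rough illustration only (not the project's actual code), backend selection driven by these flags could look like this:

```python
import os


def resolve_memory_backend() -> str:
    """Pick a memory backend from the feature flags above (illustrative helper)."""
    if os.getenv("JIXIA_MEMORY_BACKEND", "").lower() == "cloudflare":
        return "cloudflare_autorag"
    if os.getenv("GOOGLE_GENAI_USE_VERTEXAI", "").upper() == "TRUE":
        return "vertex_ai_memory_bank"
    return "local_simulation"  # graceful degradation path (see Disaster Recovery section)
```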
## Deployment Scenarios
### 1. Base Deployment (OpenBB Not Installed)
- **Characteristics**:
- Lightweight installation
- Relies on existing RapidAPI-based perpetual engine
- UI falls back to demo or synthetic data in OpenBB tab
- **Use Cases**:
- Minimal environment setups
- Systems where OpenBB installation is not feasible
- Development environments focusing on other features
### 2. Full Deployment (With OpenBB)
- **Characteristics**:
- Enhanced data capabilities through OpenBB
- Access to multiple data providers
- Improved data quality and coverage
- **Use Cases**:
- Production environments requiring comprehensive market data
- Advanced financial analysis and debate scenarios
- Integration with premium data sources
### 3. Hybrid Deployment (Selective Features)
- **Characteristics**:
- Selective installation of OpenBB providers
- Mix of OpenBB and perpetual engine data sources
- Fallback mechanisms ensure continuous operation
- **Use Cases**:
- Cost optimization by using free providers where possible
- Gradual migration from perpetual engine to OpenBB
- Testing new data sources without full commitment
## Containerization (Docker)
### Base Image
- Python 3.10-slim or equivalent
### Multi-Stage Build
1. **Builder Stage**:
- Install build dependencies
- Install Python dependencies
2. **Runtime Stage**:
- Copy installed packages from builder
- Copy application code
- Install optional OpenBB dependencies if specified
### Docker Compose Configuration
- Service definitions for main application
- Optional service for database (if using persistent memory backends)
- Volume mounts for configuration and data persistence
## Cloud Deployment
### Google Cloud Platform (GCP)
- **App Engine**:
- Standard environment with custom runtime
- Environment variables configured through `app.yaml`
- **Cloud Run**:
- Containerized deployment
- Secrets managed through Secret Manager
- **Compute Engine**:
- Full control over VM configuration
- Persistent disks for data storage
### Considerations for Cloud Deployment
- **API Key Security**:
- Use secret management services (Google Secret Manager, Doppler)
- Never store keys in code or environment files
- **Memory Backend Configuration**:
- For Vertex AI Memory Bank: Configure GOOGLE_CLOUD_PROJECT_ID and authentication
- For Cloudflare AutoRAG: Configure CLOUDFLARE_ACCOUNT_ID and API token
- **Scalability**:
- Stateless application design allows horizontal scaling
- Memory backends provide persistence across instances
## Memory Backend Integration
### Vertex AI Memory Bank (Default/Primary)
- **Activation**: Requires GOOGLE_GENAI_USE_VERTEXAI=true and proper GCP authentication
- **Dependencies**: `google-cloud-aiplatform` (installed with google-adk)
- **Deployment**: Requires GCP project with Vertex AI API enabled
### Cloudflare AutoRAG (Alternative)
- **Activation**: Requires JIXIA_MEMORY_BACKEND=cloudflare and Cloudflare credentials
- **Dependencies**: `aiohttp` (already in requirements)
- **Deployment**: Requires Cloudflare account with Vectorize and Workers AI enabled
## Monitoring and Observability
### Health Checks
- Application startup verification
- OpenBB availability check endpoint
- Memory backend connectivity verification
### Logging
- Structured logging for data access patterns
- Error tracking for failed data retrievals
- Performance metrics for data loading times
### Metrics Collection
- API usage counters (both RapidAPI and OpenBB)
- Fallback trigger rates
- Memory backend operation statistics
## Security Posture
### Data Security
- In-memory data processing
- No persistent storage of sensitive financial data
- Secure handling of API responses
### Access Control
- Streamlit authentication (if enabled)
- API key isolation per data provider
- Memory backend access controls through provider mechanisms
### Network Security
- HTTPS for all external API calls
- Outbound firewall rules for API endpoints
- Secure credential injection mechanisms
## Disaster Recovery and Business Continuity
### Data Source Redundancy
- Multiple API providers through OpenBB
- Fallback to perpetual engine when OpenBB fails
- Synthetic data generation for UI continuity
### Memory Backend Failover
- Local simulation mode when cloud backends are unavailable
- Graceful degradation of memory features
### Recovery Procedures
- Automated restart on critical failures
- Manual intervention procedures for configuration issues
- Rollback capabilities through version control
## Performance Optimization
### Caching Strategies
- OpenBB's internal caching mechanisms
- Streamlit's built-in caching for UI components
- Memory backend for persistent agent knowledge
### Resource Management
- Asynchronous data loading where possible
- Memory-efficient data structures
- Connection pooling for API requests
### Scaling Considerations
- Horizontal scaling for handling concurrent users
- Vertical scaling for memory-intensive operations
- Load balancing for distributed deployments

@@ -0,0 +1,168 @@
# OpenBB Integration Architecture for 炼妖壶 (Lianyaohu) - 稷下学宫AI辩论系统
## Overview
This document outlines the architecture for integrating OpenBB v4 into the Lianyaohu 稷下学宫AI辩论系统. The integration aims to provide enriched financial data to the eight immortal agents while maintaining graceful degradation when OpenBB is not installed or available.
## System Context
The Lianyaohu system is a multi-AI-agent debate platform rooted in traditional Chinese philosophy. The eight immortals (八仙) debate investment topics, leveraging data from multiple financial APIs. The system currently uses a "perpetual engine" based on 17 RapidAPI subscriptions. This architecture adds OpenBB as an optional, higher-level data source.
## Key Components and Integration Points
### 1. Core Business Logic (`src/jixia/`)
#### a. `engines/openbb_engine.py`
- **Purpose**: Primary interface to OpenBB v4 data.
- **Key Features**:
- Lazy loading of `openbb` library to prevent import errors if not installed.
- `ImmortalConfig` mapping each immortal to a primary data provider (initially `yfinance`).
- `get_immortal_data` method for agent-specific data retrieval based on specialty.
- `simulate_jixia_debate` for testing and demonstration.
- **Integration Strategy**:
- Uses `from openbb import obb` for unified routing.
- Handles different data types (price, historical, profile, news, earnings, etc.).
- Provides fallback error handling returning `APIResult` objects.
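A minimal sketch of the lazy-loading and fallback behaviour described above (the exact `APIResult` fields are assumptions for illustration):

```python
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class APIResult:
    """Standardized result wrapper (illustrative field set)."""
    success: bool
    data: Any = None
    error: Optional[str] = None
    source: str = "openbb"


def _load_openbb():
    """Import OpenBB lazily so the system keeps running when it is not installed."""
    try:
        from openbb import obb
        return obb
    except ImportError:
        return None


def get_historical_prices(symbol: str) -> APIResult:
    obb = _load_openbb()
    if obb is None:
        return APIResult(success=False, error="OpenBB not installed", source="none")
    try:
        df = obb.equity.price.historical(symbol=symbol).to_df()
        return APIResult(success=True, data=df)
    except Exception as exc:
        return APIResult(success=False, error=f"OpenBB call failed: {exc}")
```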
#### b. `engines/openbb_stock_data.py`
- **Purpose**: Utility functions for fetching stock and ETF data.
- **Key Features**:
- `get_stock_data` and `get_etf_data` functions.
- Lazy loading of `openbb`.
- Time window configuration.
- Data formatting utilities.
### 2. Application Entry Point (`app/`)
#### a. `tabs/openbb_tab.py`
- **Purpose**: Streamlit UI tab for visualizing market data.
- **Key Features**:
- `_check_openbb_installed` to detect availability.
- `_load_price_data` with multi-level fallback (OpenBB -> demo JSON -> synthetic data).
- KPI calculation from data.
- Interactive charting with Plotly.
- **Integration Strategy**:
- Prioritizes `obb.equity.price.historical`, falling back to `obb.etf.price.historical`.
- Handles various data frame formats and column names from different providers.
- Graceful UI degradation with informative messages.
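The multi-level fallback behind `_load_price_data` could be sketched roughly as follows (the demo-file location and synthetic-data shape are assumptions, not the actual implementation):

```python
import json
from pathlib import Path
from typing import Tuple

import numpy as np
import pandas as pd


def load_price_data_sketch(symbol: str) -> Tuple[pd.DataFrame, str]:
    """Return (prices, source), trying OpenBB first, then demo JSON, then synthetic data."""
    # 1. OpenBB: try the equity route first, then the ETF route
    try:
        from openbb import obb
        for route in (obb.equity.price.historical, obb.etf.price.historical):
            try:
                return route(symbol=symbol).to_df(), "openbb"
            except Exception:
                continue
    except ImportError:
        pass

    # 2. Bundled demo JSON (hypothetical location)
    demo_file = Path("data/demo") / f"{symbol}.json"
    if demo_file.exists():
        return pd.DataFrame(json.loads(demo_file.read_text())), "demo"

    # 3. Synthetic random-walk prices so the UI always has something to chart
    rng = np.random.default_rng(42)
    closes = 100 + rng.normal(0, 1, 120).cumsum()
    dates = pd.date_range(end=pd.Timestamp.today(), periods=120)
    return pd.DataFrame({"date": dates, "close": closes}), "synthetic"
```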
### 3. Data Models (`src/jixia/models/`)
*(Note: This directory and specific model files were not found in the current codebase.)*
- **Purpose**: Standardized data models for financial data.
- **Proposed Implementation**:
- Define `FinancialDataPoint`, `StockQuote`, `HistoricalPrice`, etc.
- Used by both OpenBB engine and existing perpetual engine for data abstraction.
### 4. Configuration (`config/settings.py`)
- **Purpose**: Centralized configuration management.
- **Key Features**:
- No direct OpenBB configuration currently, but designed for extensibility.
- Validates environment for hybrid AI provider modes.
## Data Flow Architecture
```mermaid
graph TD
A[User Request] --> B{OpenBB Installed?}
B -- Yes --> C{OpenBB Data Available?}
C -- Yes --> D[OpenBB Engine]
C -- No --> E[Fallback to Demo/Synthetic Data]
B -- No --> E
D --> F[Format Data]
E --> F
F --> G[Return to Agent/UI]
```
1. **Agent Request**: An immortal agent requests data via `OpenBBEngine.get_immortal_data`.
2. **OpenBB Check**: The engine checks if OpenBB is available via lazy import.
3. **Data Retrieval**: If available, the engine calls the appropriate `obb.*` function.
4. **Data Processing**: The engine processes the result into a standardized `APIResult`.
5. **Fallback**: If OpenBB is not installed or the call fails, an error result is returned.
6. **UI Request**: The OpenBB tab requests data via `_load_price_data`.
7. **UI Fallback Chain**:
- Tries `obb.equity.price.historical`.
- Falls back to `obb.etf.price.historical`.
- Falls back to loading demo JSON files.
- Finally falls back to generating synthetic data.
8. **Data Formatting**: The UI formats the data for display, handling various column names and structures.
## Component Interaction Diagram
```mermaid
graph LR
A[Streamlit App] --> B[OpenBB Tab]
A --> C[Debate System]
C --> D[Immortal Agents]
D --> E[OpenBB Engine]
D --> F[Perpetual Engine]
B --> G[_load_price_data]
G --> H{OpenBB Available?}
H -- Yes --> I[OpenBB obb]
H -- No --> J[Demo/Synthetic Data]
E --> K{OpenBB Available?}
K -- Yes --> I
K -- No --> L[Error Result]
```
## Deployment Architecture
### Environment Requirements
- Python 3.8+
- Optional: `openbb>=4.1.0` (not in default requirements)
- Standard project dependencies (Streamlit, etc.)
### Configuration
- No specific OpenBB configuration required for basic `yfinance` use.
- Advanced providers (e.g., Polygon) would require provider-specific environment variables.
### Scalability and Performance
- OpenBB's provider system handles its own rate limiting and caching.
- The lazy loading approach prevents unnecessary overhead if OpenBB is not used.
- Fallback to demo/synthetic data ensures UI responsiveness.
## Failure Handling and Degradation
### OpenBB Not Installed
- `ImportError` is caught in lazy loading.
- Engine returns `APIResult(success=False, error="OpenBB not installed...")`.
- UI falls back to demo/synthetic data gracefully.
### OpenBB API Call Failure
- Exception is caught in `get_immortal_data`.
- Engine returns `APIResult(success=False, error="OpenBB call failed...")`.
- Agent can decide how to handle the failure (e.g., switch to another engine).
### UI Data Loading Failure
- Multi-level fallback ensures data is always available for visualization.
- Users are informed via UI messages if demo/synthetic data is being used.
## Monitoring and Observability
### Logging
- OpenBB engine logs data requests and responses.
- UI logs fallback events.
### Metrics
- Not currently implemented, but could track:
- OpenBB usage frequency.
- Fallback trigger rates.
- Data load times.
## Security Considerations
### API Keys
- OpenBB handles provider API keys internally.
- Standard project security practices (Doppler, no hardcoded keys) apply.
### Data Handling
- Data is processed in memory and not persisted by the OpenBB integration components.
## Future Enhancements
1. **Unified Data Model**: Create standardized data models in `src/jixia/models/` for seamless integration between OpenBB and other data sources.
2. **Provider Configuration**: Allow dynamic configuration of data providers for each immortal.
3. **Enhanced UI Components**: Add more detailed financial data visualizations and analysis tools.
4. **Debate Integration**: Directly link debate outcomes to specific data points from OpenBB.
5. **Advanced OpenBB Routes**: Integrate fundamental data, news, and alternative data sources from OpenBB.
## Conclusion
This architecture successfully integrates OpenBB v4 into the Lianyaohu system while maintaining its core principles of graceful degradation and modular design. The lazy loading approach ensures that the system remains functional and performant regardless of whether OpenBB is installed, providing a robust foundation for future enhancements.

View File

@ -0,0 +1,158 @@
# OpenBB Integration Performance Optimization Architecture
## Overview
This document outlines the performance optimization strategies for the OpenBB integration in the 炼妖壶 (Lianyaohu) - 稷下学宫AI辩论系统. The goal is to ensure the system can handle high concurrency while maintaining low latency and optimal resource utilization.
## Asynchronous Data Architecture
### 1. Asynchronous Data Retrieval
- **Implementation**: Use Python's `asyncio` framework for non-blocking data access
- **Key Components**:
- `DataAbstractionLayer.get_quote_async()` method
- Asynchronous providers (where supported by the underlying library)
- Executor-based fallback for synchronous providers
- **Benefits**:
- Improved responsiveness for UI components
- Better resource utilization for concurrent requests
- Non-blocking operations for agent debates
### 2. Concurrent Provider Access
- **Implementation**: Parallel requests to multiple providers with first-wins semantics
- **Strategy**:
- Launch requests to all configured providers simultaneously
- Return the first successful response
- Cancel remaining requests to conserve resources
- **Benefits**:
- Reduced perceived latency
- Automatic failover without delay
- Optimal use of available bandwidth
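A minimal sketch of the first-wins strategy, assuming each provider exposes an async `get_quote(symbol)`; cancelling the slower tasks is what keeps resource usage bounded.

```python
import asyncio
from typing import Any, Optional, Sequence

async def first_successful_quote(providers: Sequence[Any], symbol: str,
                                 timeout: float = 2.0) -> Optional[Any]:
    """Launch all providers at once and return the first usable quote."""
    tasks = [asyncio.create_task(p.get_quote(symbol)) for p in providers]
    try:
        for finished in asyncio.as_completed(tasks, timeout=timeout):
            try:
                quote = await finished
            except Exception:
                continue  # this provider failed or timed out; wait for the next one
            if quote is not None:
                return quote  # first usable answer wins
        return None
    finally:
        for task in tasks:
            task.cancel()  # conserve resources: stop the slower providers
```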
## Caching Strategy
### 1. Multi-Level Caching
- **In-Memory LRU Cache**:
- Decorator-based caching for frequently accessed data (quotes, profiles)
- Configurable size limits to prevent memory exhaustion
- Time-to-live (TTL) settings based on data volatility
- **Shared Cache Layer** (Future):
- Redis or Memcached for distributed deployments
- Consistent cache invalidation across instances
- Support for cache warming strategies
### 2. Cache Key Design
- **Granular Keys**: Separate cache entries for different data types and time windows
- **Parameterized Keys**: Include relevant parameters (symbol, date range, provider) in cache keys
- **Versioned Keys**: Incorporate data schema version to handle model changes
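A small key builder illustrating the three ideas above; the `SCHEMA_VERSION` constant and the key layout are assumptions, not an existing convention in the codebase.

```python
SCHEMA_VERSION = "v1"  # bump when cached data models change

def make_cache_key(data_type: str, symbol: str, provider: str = "any", **params) -> str:
    """Build keys that never collide across data types, providers, parameters, or schema versions."""
    extras = ":".join(f"{k}={params[k]}" for k in sorted(params))
    parts = [SCHEMA_VERSION, data_type, symbol.upper(), provider]
    if extras:
        parts.append(extras)
    return ":".join(parts)

# make_cache_key("historical", "aapl", provider="openbb", days=30)
# -> 'v1:historical:AAPL:openbb:days=30'
```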
### 3. Cache Invalidation
- **Time-Based Expiration**: Automatic expiration based on TTL settings
- **Event-Driven Invalidation**: Clear cache entries when underlying data sources are updated
- **Manual Invalidation**: API endpoints for cache management
## Load Balancing Mechanism
### 1. Provider Selection Algorithm
- **Priority-Based Routing**: Route requests to providers based on configured priorities
- **Health-Based Routing**: Consider provider health metrics when selecting providers
- **Round-Robin for Equal Priority**: Distribute load among providers with the same priority
### 2. Adaptive Load Distribution
- **Real-Time Monitoring**: Track response times and error rates for each provider
- **Dynamic Weight Adjustment**: Adjust provider weights based on performance metrics
- **Circuit Breaker Pattern**: Temporarily disable poorly performing providers
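The circuit breaker can be as small as the sketch below; thresholds and the cooldown are illustrative defaults rather than tuned values.

```python
import time

class CircuitBreaker:
    """Skip a provider after repeated failures, then retry after a cooldown."""

    def __init__(self, failure_threshold: int = 5, cooldown_seconds: float = 60.0):
        self.failure_threshold = failure_threshold
        self.cooldown_seconds = cooldown_seconds
        self.failure_count = 0
        self.opened_at = None  # None means the circuit is closed (provider usable)

    def allow_request(self) -> bool:
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.cooldown_seconds:
            # Half-open: allow one trial request after the cooldown
            self.opened_at = None
            self.failure_count = self.failure_threshold - 1
            return True
        return False

    def record_success(self) -> None:
        self.failure_count = 0
        self.opened_at = None

    def record_failure(self) -> None:
        self.failure_count += 1
        if self.failure_count >= self.failure_threshold:
            self.opened_at = time.monotonic()  # open the circuit: skip this provider
```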
## Resource Management
### 1. Connection Pooling
- **HTTP Connection Reuse**: Maintain pools of HTTP connections for API clients
- **Database Connection Pooling**: Reuse database connections for cache backends
- **Provider-Specific Pools**: Separate connection pools for different data providers
### 2. Memory Management
- **Efficient Data Structures**: Use memory-efficient data structures for caching
- **Object Reuse**: Reuse objects where possible to reduce garbage collection pressure
- **Streaming Data Processing**: Process large datasets in chunks to minimize memory footprint
### 3. Thread and Process Management
- **Async-Appropriate Threading**: Use threads for I/O-bound operations that aren't natively async
- **Process Isolation**: Isolate resource-intensive operations in separate processes
- **Resource Limits**: Configure limits on concurrent threads and processes
## Monitoring and Performance Metrics
### 1. Key Performance Indicators
- **Response Time**: Measure latency for data retrieval operations
- **Throughput**: Track requests per second for different data types
- **Error Rate**: Monitor failure rates for data access operations
- **Cache Hit Ratio**: Measure effectiveness of caching strategies
### 2. Provider Performance Metrics
- **Individual Provider Metrics**: Track performance for each data provider
- **Health Status**: Monitor uptime and responsiveness of providers
- **Cost Metrics**: Track usage and costs associated with different providers
### 3. System-Level Metrics
- **Resource Utilization**: CPU, memory, and network usage
- **Concurrency Levels**: Track active requests and queue depths
- **Garbage Collection**: Monitor GC activity and its impact on performance
## Optimization Techniques
### 1. Data Pre-fetching
- **Predictive Loading**: Pre-fetch data for likely subsequent requests
- **Batch Operations**: Combine multiple requests into single batch operations where possible
- **Background Refresh**: Refresh cached data proactively before expiration
### 2. Data Compression
- **Response Compression**: Use gzip compression for API responses
- **Cache Compression**: Compress cached data to reduce memory usage
- **Efficient Serialization**: Use efficient serialization formats (e.g., Protocol Buffers, MessagePack)
### 3. Database Optimization
- **Indexing Strategy**: Create appropriate indexes for cache lookup operations
- **Query Optimization**: Optimize database queries for performance
- **Connection Management**: Efficiently manage database connections
## Scalability Considerations
### 1. Horizontal Scaling
- **Stateless Design**: Ensure data access components are stateless for easy scaling
- **Load Balancer Integration**: Work with external load balancers for traffic distribution
- **Shared Caching**: Use distributed cache for consistent data across instances
### 2. Vertical Scaling
- **Resource Allocation**: Optimize resource usage for efficient vertical scaling
- **Performance Tuning**: Tune system parameters for better performance on larger instances
- **Memory Management**: Efficiently manage memory to take advantage of larger instances
### 3. Auto-scaling
- **Metrics-Driven Scaling**: Use performance metrics to trigger auto-scaling events
- **Graceful Degradation**: Maintain functionality during scaling operations
- **Cost Optimization**: Balance performance with cost considerations
## Implementation Roadmap
### Phase 1: Core Async Implementation
- Implement `DataAbstractionLayer.get_quote_async()`
- Add async support to provider adapters where possible
- Add executor-based fallback for synchronous providers
### Phase 2: Caching Layer
- Implement in-memory LRU cache
- Add cache key design and invalidation strategies
- Integrate cache with data abstraction layer
### Phase 3: Monitoring and Metrics
- Implement data quality monitoring
- Add performance metrics collection
- Create dashboards for monitoring key metrics
### Phase 4: Advanced Optimizations
- Implement predictive pre-fetching
- Add database optimization for cache backends
- Implement distributed caching for scalability
## Conclusion
This performance optimization architecture provides a comprehensive approach to ensuring the OpenBB integration in the Lianyaohu system can handle high concurrency while maintaining optimal performance. By implementing asynchronous data access, multi-level caching, intelligent load balancing, and comprehensive monitoring, the system will be able to deliver fast, reliable financial data to the eight immortal agents even under heavy load.

View File

@ -0,0 +1,130 @@
# 炼妖壶-稷下学宫AI辩论系统 OpenBB集成文档整合
## 概述
本文档整合了“炼妖壶-稷下学宫AI辩论系统”中OpenBB集成的所有关键设计和实现文档,为开发团队提供一个全面的参考指南。
## 架构设计
### 1. 整体架构
系统采用分层架构设计,将OpenBB集成在数据访问层,通过抽象层为上层应用提供统一的数据接口。
### 2. 核心组件
- **OpenBB引擎** (`src/jixia/engines/openbb_engine.py`):主要的数据访问接口
- **数据抽象层** (`src/jixia/engines/data_abstraction_layer.py`):统一的数据访问接口
- **Provider适配器**:为不同数据源实现的适配器
- **数据模型** (`src/jixia/models/financial_data_models.py`):标准化的数据结构定义
### 3. 数据流
```
[八仙智能体] -> [数据抽象层] -> [Provider适配器] -> [OpenBB引擎] -> [OpenBB库]
\-> [永动机引擎] -> [RapidAPI]
```
## 实现细节
### 1. 数据模型
定义了标准化的金融数据结构:
- `StockQuote`:股票报价
- `HistoricalPrice`:历史价格数据
- `CompanyProfile`:公司概况
- `FinancialNews`:金融新闻
### 2. 抽象接口
定义了`DataProvider`抽象基类,所有数据提供商都需要实现该接口:
- `get_quote()`:获取股票报价
- `get_historical_prices()`:获取历史价格数据
- `get_company_profile()`:获取公司概况
- `get_news()`:获取相关新闻
### 3. Provider适配器
为OpenBB和RapidAPI分别实现了适配器:
- `OpenBBDataProvider`OpenBB数据提供商适配器
- `RapidAPIDataProvider`RapidAPI数据提供商适配器
### 4. 八仙数据映射
定义了八仙与数据源的智能映射关系,每个八仙都有其专属的数据源和类型偏好。
## 性能优化
### 1. 异步处理
实现了异步数据访问机制,提高系统并发处理能力。
### 2. 缓存策略
采用多级缓存策略,包括内存LRU缓存和未来可扩展的分布式缓存。
### 3. 负载均衡
实现了基于优先级和健康状态的数据源选择算法。
## 测试验证
### 1. 功能测试
- Provider适配器测试
- 数据抽象层测试
- 引擎组件测试
- UI组件测试
- 集成测试
### 2. 性能测试
- 响应时间测试
- 并发访问测试
### 3. 验证标准
- 功能验证标准
- 性能验证标准
- 兼容性验证标准
## 部署架构
### 1. 环境要求
- Python 3.8+
- 可选的OpenBB库 (>=4.1.0)
### 2. 配置管理
- 通过环境变量管理配置
- 支持多种部署场景(基础部署、完整部署、混合部署)
### 3. 安全考虑
- API密钥安全管理
- 数据安全处理
- 访问控制
## 故障处理与降级
### 1. 故障转移机制
当主数据源不可用时,系统能自动切换到备用数据源。
### 2. 优雅降级
当OpenBB未安装时,系统能正常运行并使用演示数据。
## 监控与可观测性
### 1. 关键指标
- 数据源可用性
- 响应时间
- 错误率
- 缓存命中率
### 2. 告警策略
定义了多维度的告警策略,确保系统稳定性。
## 未来发展规划
### 1. 统一数据模型
创建更完善的标准化数据模型。
### 2. Provider配置
实现动态配置数据提供商。
### 3. 增强UI组件
添加更多详细的金融数据可视化和分析工具。
### 4. 辩论集成
直接将辩论结果链接到OpenBB的具体数据点。
### 5. 高级路由
集成OpenBB的更多数据源,如基本面数据、新闻和另类数据。
## 结论
通过以上架构设计和实现,OpenBB集成成功地为“炼妖壶-稷下学宫AI辩论系统”提供了丰富而可靠的金融数据支持,同时保持了系统的可扩展性和稳定性。这套集成方案不仅满足了当前需求,也为未来功能扩展奠定了坚实基础。

View File

@ -0,0 +1,287 @@
# 炼妖壶-稷下学宫AI辩论系统 OpenBB集成测试与验证方案
## 概述
本文档定义了“炼妖壶-稷下学宫AI辩论系统”中OpenBB集成的测试用例和验证标准,确保集成的正确性、可靠性和性能。
## 测试环境配置
### 基础环境
- Python 3.8+
- 系统依赖:如 requirements.txt 中定义
- 测试框架:pytest
### OpenBB环境变体
1. **未安装OpenBB**:测试降级机制
2. **安装OpenBB但未配置提供商**:测试基本功能
3. **完整配置OpenBB**:测试所有功能
## 测试用例
### 1. 数据抽象层测试
#### 1.1 Provider适配器测试
```python
# tests/test_provider_adapters.py
def test_openbb_provider_initialization():
"""测试OpenBB提供商适配器初始化"""
from src.jixia.engines.openbb_adapter import OpenBBDataProvider
provider = OpenBBDataProvider()
assert provider.name == "OpenBB"
assert provider.priority == 1
def test_rapidapi_provider_initialization():
"""测试RapidAPI提供商适配器初始化"""
from src.jixia.engines.rapidapi_adapter import RapidAPIDataProvider
provider = RapidAPIDataProvider()
assert provider.name == "RapidAPI"
assert provider.priority == 2
def test_provider_data_retrieval():
"""测试提供商数据检索功能"""
# 使用模拟数据测试各提供商的数据获取方法
pass
```
#### 1.2 数据抽象层管理器测试
```python
# tests/test_data_abstraction_layer.py
def test_dal_initialization():
"""测试数据抽象层初始化"""
from src.jixia.engines.data_abstraction_layer import DataAbstractionLayer
dal = DataAbstractionLayer()
# 验证提供商是否正确加载
assert len(dal.providers) >= 1
def test_dal_quote_retrieval():
"""测试数据抽象层报价获取"""
from src.jixia.engines.data_abstraction_layer import DataAbstractionLayer
dal = DataAbstractionLayer()
quote = dal.get_quote("AAPL")
# 验证返回数据结构
if quote is not None:
assert hasattr(quote, 'symbol')
assert hasattr(quote, 'price')
def test_dal_fallback_mechanism():
"""测试故障转移机制"""
# 模拟主提供商失败,验证是否能正确切换到备用提供商
pass
```
### 2. 引擎组件测试
#### 2.1 OpenBB引擎测试
```python
# tests/test_openbb_engine.py
def test_openbb_engine_initialization():
"""测试OpenBB引擎初始化"""
from src.jixia.engines.openbb_engine import OpenBBEngine
engine = OpenBBEngine()
# 验证引擎是否正确初始化
assert engine is not None
def test_openbb_engine_data_retrieval():
"""测试OpenBB引擎数据获取"""
from src.jixia.engines.openbb_engine import OpenBBEngine
engine = OpenBBEngine()
result = engine.get_immortal_data("吕洞宾", "price", "AAPL")
# 验证返回结果结构
assert hasattr(result, 'success')
if result.success:
assert result.data is not None
def test_openbb_engine_unavailable():
"""测试OpenBB不可用时的行为"""
# 通过模拟环境测试OpenBB未安装时的降级行为
pass
```
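上面的 `test_openbb_engine_unavailable` 仅为占位,下面给出一种可行的写法示例(示意性补充,非现有实现):假设引擎在调用时才延迟导入 openbb,且返回带 `success`/`error` 字段的结果对象。

```python
import sys

def test_openbb_engine_graceful_degradation(monkeypatch):
    """模拟 OpenBB 未安装时,引擎应返回 success=False 而不是抛异常"""
    # 将 sys.modules 中的 openbb 置为 None,可使 `from openbb import obb` 抛出 ImportError
    monkeypatch.setitem(sys.modules, "openbb", None)
    from src.jixia.engines.openbb_engine import OpenBBEngine
    engine = OpenBBEngine()
    result = engine.get_immortal_data("吕洞宾", "price", "AAPL")
    assert result.success is False
    assert result.error is not None
```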
#### 2.2 永动机引擎测试
```python
# tests/test_perpetual_engine.py
def test_perpetual_engine_initialization():
"""测试永动机引擎初始化"""
from src.jixia.engines.perpetual_engine import JixiaPerpetualEngine
# 注意需要提供有效的RapidAPI密钥进行测试
pass
def test_perpetual_engine_data_retrieval():
"""测试永动机引擎数据获取"""
pass
```
### 3. UI组件测试
#### 3.1 OpenBB标签页测试
```python
# tests/test_openbb_tab.py
def test_openbb_tab_data_loading():
"""测试OpenBB标签页数据加载"""
# 验证在不同环境下的数据加载行为
pass
def test_openbb_tab_fallback():
"""测试OpenBB标签页降级机制"""
# 验证当OpenBB不可用时是否正确显示演示数据
pass
```
### 4. 集成测试
#### 4.1 八仙智能体数据访问测试
```python
# tests/test_baxian_data_access.py
def test_immortal_data_mapping():
"""测试八仙与数据源的映射关系"""
from src.jixia.engines.baxian_data_mapping import immortal_data_mapping
# 验证所有八仙都有正确的数据源映射
assert len(immortal_data_mapping) == 8
for immortal in ['吕洞宾', '何仙姑', '张果老', '韩湘子', '汉钟离', '蓝采和', '铁拐李', '曹国舅']:
assert immortal in immortal_data_mapping
def test_immortal_data_retrieval():
"""测试八仙智能体数据获取"""
# 验证每个八仙都能通过数据抽象层获取到所需数据
pass
```
#### 4.2 端到端辩论流程测试
```python
# tests/test_debate_flow_with_openbb.py
def test_debate_with_openbb_data():
"""测试使用OpenBB数据的完整辩论流程"""
# 验证辩论系统能正确使用OpenBB提供的数据
pass
```
## 性能测试
### 1. 响应时间测试
```python
# tests/performance/test_response_time.py
def test_quote_retrieval_response_time():
"""测试报价获取响应时间"""
import time
from src.jixia.engines.data_abstraction_layer import DataAbstractionLayer
dal = DataAbstractionLayer()
start_time = time.time()
quote = dal.get_quote("AAPL")
end_time = time.time()
response_time = end_time - start_time
# 验证响应时间在可接受范围内
assert response_time < 2.0 # 假设2秒为阈值
```
### 2. 并发访问测试
```python
# tests/performance/test_concurrent_access.py
def test_concurrent_quote_retrieval():
"""测试并发报价获取"""
import asyncio
from src.jixia.engines.data_abstraction_layer import DataAbstractionLayer
async def get_quote(symbol):
dal = DataAbstractionLayer()
return await dal.get_quote_async(symbol)
async def get_multiple_quotes():
symbols = ["AAPL", "GOOGL", "MSFT", "TSLA"]
tasks = [get_quote(symbol) for symbol in symbols]
return await asyncio.gather(*tasks)
# 运行并发测试
quotes = asyncio.run(get_multiple_quotes())
# 验证所有请求都成功返回
assert len(quotes) == 4
```
## 验证标准
### 功能验证标准
1. **数据准确性**:返回的数据格式和内容符合预期
2. **故障转移**:当主数据源不可用时,系统能自动切换到备用数据源
3. **优雅降级**:当OpenBB未安装时,系统能正常运行并使用演示数据
4. **八仙映射**:每个八仙都能访问其专属的数据源和类型
### 性能验证标准
1. **响应时间**:单次数据请求响应时间不超过2秒
2. **并发处理**:系统能同时处理至少10个并发数据请求
3. **资源使用**:内存使用在合理范围内,无内存泄漏
4. **缓存效率**:缓存命中率应达到80%以上
### 兼容性验证标准
1. **Python版本**:支持Python 3.8及以上版本
2. **OpenBB版本**:支持OpenBB v4.1.0及以上版本
3. **环境变量**:正确处理各种环境变量配置
4. **依赖管理**:OpenBB作为可选依赖,不影响主系统安装
## 持续集成/持续部署(CI/CD)集成
### GitHub Actions工作流
```yaml
# .github/workflows/openbb_integration_test.yml
name: OpenBB Integration Test
on:
push:
branches: [ main, develop ]
pull_request:
branches: [ main ]
jobs:
test:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.8, 3.9, '3.10', '3.11']
openbb-installed: [true, false]
steps:
- uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
if [ "${{ matrix.openbb-installed }}" = "true" ]; then
pip install "openbb>=4.1.0"
fi
- name: Run tests
run: |
pytest tests/test_openbb_*.py
pytest tests/test_data_abstraction_*.py
```
## 监控和告警
### 关键指标监控
1. **数据源可用性**:监控各数据提供商的可用性
2. **响应时间**:监控数据请求的平均响应时间
3. **错误率**:监控数据访问的错误率
4. **缓存命中率**:监控缓存的使用效率
### 告警策略
1. **可用性告警**:当数据源可用性低于95%时触发告警
2. **性能告警**:当平均响应时间超过阈值时触发告警
3. **错误率告警**:当错误率超过1%时触发告警
4. **缓存告警**:当缓存命中率低于70%时触发告警
## 结论
这套测试和验证方案确保了OpenBB集成的高质量交付。通过全面的功能测试、性能测试和持续集成,能够及时发现和修复潜在问题,保证系统的稳定性和可靠性。

View File

@ -0,0 +1,38 @@
# OpenBB 集成项目总览
本目录构建了本项目中 OpenBB v4 集成的完整文档生态,包括架构、实现、测试、用户指南、文化融合与维护。
## 目标
- 以 OpenBB v4 作为统一市场数据路由入口(`from openbb import obb`)
- 在未安装或不可用时,提供稳健的兜底策略(演示/合成数据)
- 将集成能力与稷下学宫/八仙论道的产品体验深度融合
## 核心组件
- UI:`app/tabs/openbb_tab.py`(自动检测 OpenBB 可用性,提供回退)
- 引擎:`src/jixia/engines/openbb_engine.py`、`src/jixia/engines/openbb_stock_data.py`
- 示例/演示数据:`examples/data/*.json`
## 用户旅程(摘要)
```mermaid
graph TD
A[用户启动应用] --> B[选择OpenBB标签页]
B --> C[查看数据可用性状态]
C --> D{OpenBB是否可用?}
D -->|是| E[选择股票符号]
D -->|否| F[使用演示数据]
E --> G[启动八仙论道]
F --> G
G --> H[查看辩论结果]
H --> I[导出分析报告]
```
## 快速链接
- 实现指南(API 集成与回退策略):[02_IMPLEMENTATION_GUIDE/api_integration_guide.md](../openbb_integration/02_IMPLEMENTATION_GUIDE/api_integration_guide.md)
- 故障排查:[02_IMPLEMENTATION_GUIDE/troubleshooting_guide.md](../openbb_integration/02_IMPLEMENTATION_GUIDE/troubleshooting_guide.md)
- 测试策略与报告:[03_TEST_DOCUMENTATION/](../openbb_integration/03_TEST_DOCUMENTATION/)
- 用户指南:[04_USER_GUIDES/](../openbb_integration/04_USER_GUIDES/)
## 维护与路线图
- 版本说明与升级:见 [06_MAINTENANCE](../openbb_integration/06_MAINTENANCE/)
> 注:本结构与 docs/AI_AGENT_TASKS/ROVODEV_PROJECT_INTEGRATION.md 的“文档架构”保持一致,以便多团队协作与交付。

View File

@ -0,0 +1,3 @@
# 数据流设计
占位:将补充从 UI 输入 -> 引擎 -> OpenBB/provider -> DataFrame -> KPI/图表 的完整数据流与序列图。

View File

@ -0,0 +1,7 @@
# 部署策略
- OpenBB 作为可选依赖提供,默认不强制安装
- 在需要时通过 `pip install "openbb>=4.1.0"` 启用
- 国内网络场景建议使用镜像或代理
后续将补充 CI/CD、环境矩阵与缓存策略。

View File

@ -0,0 +1,7 @@
# 集成模式
- 路由优先:`obb.equity.price.historical`,必要时回退到 `obb.etf.price.historical`
- 结果标准化:兼容 `.to_df()` / `.to_dataframe()` / 原始对象 -> DataFrame
- 列规范化:Date / Close 归一化,保证后续图表与 KPI 计算稳定
后续将补充更多模式(基本面/新闻/宏观等)。
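下面是上述两条标准化/规范化模式的一个示意实现(假设性代码,非现有实现;列名识别规则可按实际 provider 调整):

```python
import pandas as pd

def normalize_price_result(result) -> pd.DataFrame:
    """将 OpenBB 返回对象统一转换为带 Date/Close 列的 DataFrame"""
    # 1) 结果标准化:兼容 to_df() / to_dataframe() / 原始对象
    if hasattr(result, "to_df"):
        df = result.to_df()
    elif hasattr(result, "to_dataframe"):
        df = result.to_dataframe()
    else:
        df = pd.DataFrame(result)
    # 2) 列规范化:日期常作为索引返回,先还原为普通列,再统一为 Date / Close
    df = df.reset_index()
    lower = {str(c).lower(): c for c in df.columns}
    date_col = lower.get("date") or lower.get("datetime") or lower.get("index")
    close_col = lower.get("close") or lower.get("adj close") or lower.get("adj_close")
    if date_col is None or close_col is None:
        raise ValueError(f"无法识别日期/收盘价列: {list(df.columns)}")
    out = df[[date_col, close_col]].rename(columns={date_col: "Date", close_col: "Close"})
    out["Date"] = pd.to_datetime(out["Date"])
    return out.dropna()
```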

View File

@ -0,0 +1,17 @@
# 系统架构(Qwen 输出)
本章描述 OpenBB 集成在系统中的位置、边界与依赖。
## 组件边界
- UI 层:`app/tabs/openbb_tab.py`
- 引擎层:`src/jixia/engines/openbb_engine.py`、`openbb_stock_data.py`
- 数据层:OpenBB provider(yfinance、polygon、fmp 等)与演示/合成数据
## 关键架构决策
- 使用 OpenBB v4 统一路由
- 延迟导入(lazy import),降低对未安装环境的侵入
- 明确回退机制,保证用户体验连续性
## 后续补充
- 数据流与上下行依赖
- 与“八仙论道”系统的耦合点与解耦方案

View File

@ -1,6 +1,6 @@
# OpenBB 集成指南 → # OpenBB 集成指南(迁移版)
本指南帮助你在本项目中启用并使用 OpenBB v4 作为市场数据源,同时保证在未安装 OpenBB 的情况下,应用可平稳回退到演示/合成数据 → 本页介绍 OpenBB v4 在本项目中的安装、配置、代码结构、回退机制、开发与测试、典型问题与后续计划
## 1. 为什么选择 OpenBB v4
- 统一的路由接口:`from openbb import obb`

View File

@ -0,0 +1,10 @@
# 核心引擎实现(Claude 输出)
- 延迟导入 OpenBB
- 统一结果转 DataFrame
- 列/索引规范化与时间窗口裁剪
- 失败时不影响其他功能(返回 success=False 或进入兜底路径)
参考代码位置:
- `src/jixia/engines/openbb_engine.py`
- `src/jixia/engines/openbb_stock_data.py`

View File

@ -0,0 +1,8 @@
# 故障排查指南
常见问题与解决方案:
- ImportError: No module named 'openbb' → 安装 `openbb>=4.1.0`
- 返回空数据 → 检查 symbol,尝试其他 provider 或缩短时间窗口
- 列名/索引不匹配 → 打印原始 DataFrame,参考 UI 中的规范化逻辑
更多请参考:[api_integration_guide.md](./api_integration_guide.md) 第 7 节。

View File

@ -0,0 +1,6 @@
# UI 增强指南
- 状态提示(OpenBB 可用/不可用)
- 动态参数:symbol、时间窗口
- KPI 卡片:最新价、近30日涨幅、最大回撤
- 未来扩展位:基本面、新闻、情绪、宏观等
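KPI 的计算可以很简单,下面给出一个示意(假设输入为已规范化的 Date/Close DataFrame,函数名为假设,非现有实现):

```python
import pandas as pd

def compute_kpis(df: pd.DataFrame) -> dict:
    """基于 Date/Close 两列计算最新价、近30日涨幅、最大回撤"""
    df = df.sort_values("Date")
    close = df["Close"]
    latest = float(close.iloc[-1])                              # 最新价
    window = close.tail(30)
    change_30d = float(window.iloc[-1] / window.iloc[0] - 1)    # 近30日涨幅
    running_max = close.cummax()
    max_drawdown = float(((close - running_max) / running_max).min())  # 最大回撤(负值)
    return {"latest": latest, "change_30d": change_30d, "max_drawdown": max_drawdown}
```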

View File

@ -0,0 +1,3 @@
# 性能基准
占位:记录不同 provider 与窗口设置下的响应时间、吞吐、内存曲线。

View File

@ -0,0 +1,6 @@
# 质量保证报告
- 代码规范检查
- 文档完整性
- 测试通过率
- 文化准确性与用户体验评估(与 05_CULTURAL_INTEGRATION 联动)

View File

@ -0,0 +1,3 @@
# 测试结果报告
占位:记录关键用例、环境、通过率与截图/日志摘要。

View File

@ -0,0 +1,6 @@
# 测试策略(Gemini 输出)
- 未安装 OpenBB:UI 能回退到演示/合成数据
- 已安装 OpenBB:能成功调用 `obb.equity.price.historical`
- 数据清洗后具有 Date/Close 列,KPI 计算不报错
- 覆盖边界:空数据、异常路由、索引为日期等

View File

@ -0,0 +1,6 @@
# 最佳实践
- 将 OpenBB 作为轻依赖,必要时再安装
- 使用统一的 DataFrame 规范化逻辑
- 谨慎处理长时间窗口与缺失数据
- 提供清晰的状态反馈与兜底

View File

@ -0,0 +1,5 @@
# 配置指南
- 默认使用公共数据(yfinance)
- 付费 provider(polygon/fmp 等)可通过 OpenBB provider 配置或环境变量设置
- UI 参数:symbol、时间窗口、KPI 展示

View File

@ -0,0 +1,8 @@
# 入门指南
1) 可选安装:`pip install "openbb>=4.1.0"`
2) 运行应用,进入 OpenBB 标签页
3) 输入股票/ETF 代码(如 AAPL),选择时间窗口
4) 若未安装 OpenBB,将自动使用演示数据
更多:见 [API 实现指南](../02_IMPLEMENTATION_GUIDE/api_integration_guide.md)。

View File

@ -0,0 +1,3 @@
# 八仙论道教程(占位)
将补充如何基于市场数据触发与解读“八仙论道”的步骤与最佳实践。

View File

@ -0,0 +1,3 @@
# 文化准确性指南(占位)
将补充文化审核检查点、示例与注意事项。

View File

@ -0,0 +1,3 @@
# 八仙特质(占位)
将列举八仙角色特性,并映射到数据分析风格/观点生成策略。

View File

@ -0,0 +1,4 @@
# 稷下学宫哲学在代码中的体现(占位)
- 开放包容、百家争鸣 → 多 provider 聚合与辩论系统
- 求同存异、理性决策 → 数据驱动 + 观点对齐

View File

@ -0,0 +1,5 @@
# 路线图
- v1.x:路由扩展、稳定性增强
- v2.x:高级分析与移动端适配
- v3.x:企业级能力与治理

View File

@ -0,0 +1,4 @@
# 已知问题
- 某些 symbol 在特定 provider 下返回为空或字段不齐
- 长窗口数据清洗后为空的边界情况

View File

@ -0,0 +1,4 @@
# 发布说明
- 参考:`docs/development/RELEASE_v2.0.0.md`
- 在此记录 OpenBB 集成相关的变更、修复与新增功能。

View File

@ -0,0 +1,5 @@
# 升级指南
- OpenBB 版本升级注意事项
- provider 配置兼容性
- 本项目接口变化记录

View File

@ -0,0 +1,36 @@
# OpenBB 集成文档索引
本目录提供 OpenBB v4 在本项目中的完整集成文档。
- 00 项目总览:./00_PROJECT_OVERVIEW.md
- 01 架构设计(Qwen 输出):
- ./01_ARCHITECTURE_DESIGN/system_architecture.md
- ./01_ARCHITECTURE_DESIGN/data_flow_design.md
- ./01_ARCHITECTURE_DESIGN/integration_patterns.md
- ./01_ARCHITECTURE_DESIGN/deployment_strategy.md
- 02 实现指南(Claude 输出):
- ./02_IMPLEMENTATION_GUIDE/core_engine_implementation.md
- ./02_IMPLEMENTATION_GUIDE/api_integration_guide.md
- ./02_IMPLEMENTATION_GUIDE/ui_enhancement_guide.md
- ./02_IMPLEMENTATION_GUIDE/troubleshooting_guide.md
- 03 测试文档(Gemini 输出):
- ./03_TEST_DOCUMENTATION/test_strategy.md
- ./03_TEST_DOCUMENTATION/test_results_report.md
- ./03_TEST_DOCUMENTATION/performance_benchmarks.md
- ./03_TEST_DOCUMENTATION/quality_assurance_report.md
- 04 用户指南:
- ./04_USER_GUIDES/getting_started.md
- ./04_USER_GUIDES/immortal_debate_tutorial.md
- ./04_USER_GUIDES/configuration_guide.md
- ./04_USER_GUIDES/best_practices.md
- 05 文化融合文档:
- ./05_CULTURAL_INTEGRATION/immortal_characteristics.md
- ./05_CULTURAL_INTEGRATION/jixia_philosophy_in_code.md
- ./05_CULTURAL_INTEGRATION/cultural_accuracy_guidelines.md
- 06 维护:
- ./06_MAINTENANCE/release_notes.md
- ./06_MAINTENANCE/upgrade_guide.md
- ./06_MAINTENANCE/known_issues.md
- ./06_MAINTENANCE/future_roadmap.md
快速入口:API 实现指南 → ./02_IMPLEMENTATION_GUIDE/api_integration_guide.md

pytest.ini Normal file
View File

@ -0,0 +1,6 @@
[pytest]
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*
addopts = --cov=src --cov-report=html --cov-report=term-missing --asyncio-mode=auto

View File

@ -30,10 +30,20 @@ pymongo>=4.5.0
# pymilvus>=2.3.0
# 开发工具 (可选)
# pytest>=7.4.0
# black>=23.7.0
# flake8>=6.0.0
# 测试依赖
pytest>=7.4.0
pytest-asyncio>=0.21.0
pytest-mock>=3.11.0
pytest-cov>=4.1.0
pytest-benchmark>=4.0.0
locust>=2.15.0
memory-profiler>=0.60.0
pydantic>=2.0.0
jsonschema>=4.19.0
# AI模型接口
# 旧系统:OpenRouter + OpenAI Swarm
openai>=1.0.0

View File

@ -0,0 +1,680 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
四AI团队协作通道系统
专为Qwen、Claude、Gemini、RovoDev四个AI设计的协作和通信平台
"""
import asyncio
import json
import uuid
from typing import Dict, List, Any, Optional, Callable, Set
from dataclasses import dataclass, field
from enum import Enum
from datetime import datetime, timedelta
import logging
from pathlib import Path
class AIRole(Enum):
"""AI角色定义"""
QWEN = "Qwen" # 架构设计师
CLAUDE = "Claude" # 核心开发工程师
GEMINI = "Gemini" # 测试验证专家
ROVODEV = "RovoDev" # 项目整合专家
class CollaborationType(Enum):
"""协作类型"""
MAIN_CHANNEL = "主协作频道" # 主要协作讨论
ARCHITECTURE = "架构设计" # 架构相关讨论
IMPLEMENTATION = "代码实现" # 实现相关讨论
TESTING = "测试验证" # 测试相关讨论
INTEGRATION = "项目整合" # 整合相关讨论
CROSS_REVIEW = "交叉评审" # 跨角色评审
EMERGENCY = "紧急协调" # 紧急问题处理
class MessageType(Enum):
"""消息类型"""
PROPOSAL = "提案" # 提出建议
QUESTION = "询问" # 提出问题
ANSWER = "回答" # 回答问题
REVIEW = "评审" # 评审反馈
DECISION = "决策" # 做出决策
UPDATE = "更新" # 状态更新
ALERT = "警报" # 警报通知
HANDOFF = "交接" # 工作交接
class WorkPhase(Enum):
"""工作阶段"""
PLANNING = "规划阶段"
DESIGN = "设计阶段"
IMPLEMENTATION = "实现阶段"
TESTING = "测试阶段"
INTEGRATION = "整合阶段"
DELIVERY = "交付阶段"
@dataclass
class AIMessage:
"""AI消息"""
id: str
sender: AIRole
receiver: Optional[AIRole] # None表示广播
content: str
message_type: MessageType
collaboration_type: CollaborationType
timestamp: datetime
work_phase: WorkPhase
priority: int = 1 # 1-5, 5最高
tags: List[str] = field(default_factory=list)
attachments: List[str] = field(default_factory=list) # 文件路径
references: List[str] = field(default_factory=list) # 引用的消息ID
metadata: Dict[str, Any] = field(default_factory=dict)
@dataclass
class CollaborationChannel:
"""协作频道"""
id: str
name: str
channel_type: CollaborationType
description: str
participants: Set[AIRole]
moderator: AIRole
is_active: bool = True
created_at: datetime = field(default_factory=datetime.now)
last_activity: datetime = field(default_factory=datetime.now)
message_history: List[AIMessage] = field(default_factory=list)
settings: Dict[str, Any] = field(default_factory=dict)
@dataclass
class WorkflowRule:
"""工作流规则"""
id: str
name: str
description: str
trigger_phase: WorkPhase
trigger_conditions: Dict[str, Any]
action: str
target_ai: Optional[AIRole]
is_active: bool = True
class AITeamCollaboration:
"""四AI团队协作系统"""
def __init__(self, project_root: Path = None):
self.project_root = project_root or Path("/home/ben/github/liurenchaxin")
self.channels: Dict[str, CollaborationChannel] = {}
self.workflow_rules: Dict[str, WorkflowRule] = {}
self.current_phase: WorkPhase = WorkPhase.PLANNING
self.ai_status: Dict[AIRole, Dict[str, Any]] = {}
self.message_queue: List[AIMessage] = []
self.event_handlers: Dict[str, List[Callable]] = {}
self.logger = logging.getLogger(__name__)
# 初始化AI状态
self._initialize_ai_status()
# 初始化协作频道
self._initialize_channels()
# 初始化工作流规则
self._initialize_workflow_rules()
def _initialize_ai_status(self):
"""初始化AI状态"""
self.ai_status = {
AIRole.QWEN: {
"role": "架构设计师",
"specialty": "系统架构、技术选型、接口设计",
"current_task": "OpenBB集成架构设计",
"status": "ready",
"workload": 0,
"expertise_areas": ["架构设计", "系统集成", "性能优化"]
},
AIRole.CLAUDE: {
"role": "核心开发工程师",
"specialty": "代码实现、API开发、界面优化",
"current_task": "等待架构设计完成",
"status": "waiting",
"workload": 0,
"expertise_areas": ["Python开发", "Streamlit", "API集成"]
},
AIRole.GEMINI: {
"role": "测试验证专家",
"specialty": "功能测试、性能测试、质量保证",
"current_task": "制定测试策略",
"status": "ready",
"workload": 0,
"expertise_areas": ["自动化测试", "性能测试", "质量保证"]
},
AIRole.ROVODEV: {
"role": "项目整合专家",
"specialty": "项目管理、文档整合、协调统筹",
"current_task": "项目框架搭建",
"status": "active",
"workload": 0,
"expertise_areas": ["项目管理", "文档编写", "团队协调"]
}
}
def _initialize_channels(self):
"""初始化协作频道"""
channels_config = [
{
"id": "main_collaboration",
"name": "OpenBB集成主协作频道",
"channel_type": CollaborationType.MAIN_CHANNEL,
"description": "四AI主要协作讨论频道",
"participants": {AIRole.QWEN, AIRole.CLAUDE, AIRole.GEMINI, AIRole.ROVODEV},
"moderator": AIRole.ROVODEV,
"settings": {
"allow_broadcast": True,
"require_acknowledgment": True,
"auto_archive": False
}
},
{
"id": "architecture_design",
"name": "架构设计频道",
"channel_type": CollaborationType.ARCHITECTURE,
"description": "架构设计相关讨论",
"participants": {AIRole.QWEN, AIRole.CLAUDE, AIRole.ROVODEV},
"moderator": AIRole.QWEN,
"settings": {
"design_reviews": True,
"version_control": True
}
},
{
"id": "code_implementation",
"name": "代码实现频道",
"channel_type": CollaborationType.IMPLEMENTATION,
"description": "代码实现和开发讨论",
"participants": {AIRole.CLAUDE, AIRole.QWEN, AIRole.GEMINI},
"moderator": AIRole.CLAUDE,
"settings": {
"code_reviews": True,
"continuous_integration": True
}
},
{
"id": "testing_validation",
"name": "测试验证频道",
"channel_type": CollaborationType.TESTING,
"description": "测试策略和验证讨论",
"participants": {AIRole.GEMINI, AIRole.CLAUDE, AIRole.ROVODEV},
"moderator": AIRole.GEMINI,
"settings": {
"test_automation": True,
"quality_gates": True
}
},
{
"id": "project_integration",
"name": "项目整合频道",
"channel_type": CollaborationType.INTEGRATION,
"description": "项目整合和文档管理",
"participants": {AIRole.ROVODEV, AIRole.QWEN, AIRole.CLAUDE, AIRole.GEMINI},
"moderator": AIRole.ROVODEV,
"settings": {
"documentation_sync": True,
"release_management": True
}
},
{
"id": "cross_review",
"name": "交叉评审频道",
"channel_type": CollaborationType.CROSS_REVIEW,
"description": "跨角色工作评审",
"participants": {AIRole.QWEN, AIRole.CLAUDE, AIRole.GEMINI, AIRole.ROVODEV},
"moderator": AIRole.ROVODEV,
"settings": {
"peer_review": True,
"quality_assurance": True
}
},
{
"id": "emergency_coordination",
"name": "紧急协调频道",
"channel_type": CollaborationType.EMERGENCY,
"description": "紧急问题处理和快速响应",
"participants": {AIRole.QWEN, AIRole.CLAUDE, AIRole.GEMINI, AIRole.ROVODEV},
"moderator": AIRole.ROVODEV,
"settings": {
"high_priority": True,
"instant_notification": True,
"escalation_rules": True
}
}
]
for config in channels_config:
channel = CollaborationChannel(**config)
self.channels[channel.id] = channel
def _initialize_workflow_rules(self):
"""初始化工作流规则"""
rules_config = [
{
"id": "architecture_to_implementation",
"name": "架构完成通知实现开始",
"description": "当架构设计完成时通知Claude开始实现",
"trigger_phase": WorkPhase.DESIGN,
"trigger_conditions": {"status": "architecture_complete"},
"action": "notify_implementation_start",
"target_ai": AIRole.CLAUDE
},
{
"id": "implementation_to_testing",
"name": "实现完成通知测试开始",
"description": "当代码实现完成时通知Gemini开始测试",
"trigger_phase": WorkPhase.IMPLEMENTATION,
"trigger_conditions": {"status": "implementation_complete"},
"action": "notify_testing_start",
"target_ai": AIRole.GEMINI
},
{
"id": "testing_to_integration",
"name": "测试完成通知整合开始",
"description": "当测试验证完成时通知RovoDev开始整合",
"trigger_phase": WorkPhase.TESTING,
"trigger_conditions": {"status": "testing_complete"},
"action": "notify_integration_start",
"target_ai": AIRole.ROVODEV
}
]
for config in rules_config:
rule = WorkflowRule(**config)
self.workflow_rules[rule.id] = rule
async def send_message(self,
sender: AIRole,
content: str,
message_type: MessageType,
channel_id: str,
receiver: Optional[AIRole] = None,
priority: int = 1,
attachments: List[str] = None,
tags: List[str] = None) -> str:
"""发送消息"""
if channel_id not in self.channels:
raise ValueError(f"频道 {channel_id} 不存在")
channel = self.channels[channel_id]
# 验证发送者权限
if sender not in channel.participants:
raise PermissionError(f"{sender.value} 不在频道 {channel.name}")
# 创建消息
message = AIMessage(
id=str(uuid.uuid4()),
sender=sender,
receiver=receiver,
content=content,
message_type=message_type,
collaboration_type=channel.channel_type,
timestamp=datetime.now(),
work_phase=self.current_phase,
priority=priority,
attachments=attachments or [],
tags=tags or []
)
# 添加到频道历史
channel.message_history.append(message)
channel.last_activity = datetime.now()
# 添加到消息队列
self.message_queue.append(message)
# 触发事件处理
await self._trigger_event("message_sent", {
"message": message,
"channel": channel
})
# 记录日志
self.logger.info(f"[{channel.name}] {sender.value} -> {receiver.value if receiver else 'ALL'}: {content[:50]}...")
return message.id
async def broadcast_message(self,
sender: AIRole,
content: str,
message_type: MessageType,
channel_id: str,
priority: int = 1,
tags: List[str] = None) -> str:
"""广播消息到频道所有参与者"""
return await self.send_message(
sender=sender,
content=content,
message_type=message_type,
channel_id=channel_id,
receiver=None, # None表示广播
priority=priority,
tags=tags
)
async def request_review(self,
sender: AIRole,
content: str,
reviewers: List[AIRole],
attachments: List[str] = None) -> str:
"""请求评审"""
# 发送到交叉评审频道
message_id = await self.send_message(
sender=sender,
content=f"📋 评审请求: {content}",
message_type=MessageType.REVIEW,
channel_id="cross_review",
priority=3,
attachments=attachments,
tags=["review_request"] + [f"reviewer_{reviewer.value}" for reviewer in reviewers]
)
# 通知指定评审者
for reviewer in reviewers:
await self.send_message(
sender=AIRole.ROVODEV, # 系统通知
content=f"🔔 您有新的评审请求来自 {sender.value},请查看交叉评审频道",
message_type=MessageType.ALERT,
channel_id="main_collaboration",
receiver=reviewer,
priority=3,
tags=["review_notification", f"from_{sender.value}", f"message_ref_{message_id}"]
)
return message_id
async def handoff_work(self,
from_ai: AIRole,
to_ai: AIRole,
task_description: str,
deliverables: List[str],
notes: str = "") -> str:
"""工作交接"""
content = f"""
🔄 **工作交接**
**从**: {from_ai.value}
**至**: {to_ai.value}
**任务**: {task_description}
**交付物**: {', '.join(deliverables)}
**备注**: {notes}
"""
message_id = await self.send_message(
sender=from_ai,
content=content.strip(),
message_type=MessageType.HANDOFF,
channel_id="main_collaboration",
receiver=to_ai,
priority=4,
attachments=deliverables,
tags=["handoff", f"from_{from_ai.value}", f"to_{to_ai.value}"]
)
# 更新AI状态
self.ai_status[from_ai]["status"] = "completed_handoff"
self.ai_status[to_ai]["status"] = "received_handoff"
self.ai_status[to_ai]["current_task"] = task_description
return message_id
async def escalate_issue(self,
reporter: AIRole,
issue_description: str,
severity: str = "medium") -> str:
"""问题升级"""
content = f"""
🚨 **问题升级**
**报告者**: {reporter.value}
**严重程度**: {severity}
**问题描述**: {issue_description}
**时间**: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}
"""
priority_map = {"low": 2, "medium": 3, "high": 4, "critical": 5}
priority = priority_map.get(severity, 3)
return await self.send_message(
sender=reporter,
content=content.strip(),
message_type=MessageType.ALERT,
channel_id="emergency_coordination",
priority=priority,
tags=["escalation", f"severity_{severity}"]
)
def get_channel_summary(self, channel_id: str) -> Dict[str, Any]:
"""获取频道摘要"""
if channel_id not in self.channels:
return {}
channel = self.channels[channel_id]
recent_messages = channel.message_history[-10:] # 最近10条消息
return {
"channel_name": channel.name,
"channel_type": channel.channel_type.value,
"participants": [ai.value for ai in channel.participants],
"total_messages": len(channel.message_history),
"last_activity": channel.last_activity.isoformat(),
"recent_messages": [
{
"sender": msg.sender.value,
"content": msg.content[:100] + "..." if len(msg.content) > 100 else msg.content,
"timestamp": msg.timestamp.isoformat(),
"type": msg.message_type.value
}
for msg in recent_messages
]
}
def get_ai_dashboard(self, ai_role: AIRole) -> Dict[str, Any]:
"""获取AI工作仪表板"""
status = self.ai_status[ai_role]
# 获取相关消息
relevant_messages = []
for channel in self.channels.values():
if ai_role in channel.participants:
for msg in channel.message_history[-5:]: # 每个频道最近5条
if msg.receiver == ai_role or msg.receiver is None:
relevant_messages.append({
"channel": channel.name,
"sender": msg.sender.value,
"content": msg.content[:100] + "..." if len(msg.content) > 100 else msg.content,
"timestamp": msg.timestamp.isoformat(),
"priority": msg.priority
})
# 按优先级和时间排序
relevant_messages.sort(key=lambda x: (x["priority"], x["timestamp"]), reverse=True)
return {
"ai_role": ai_role.value,
"status": status,
"current_phase": self.current_phase.value,
"active_channels": [
channel.name for channel in self.channels.values()
if ai_role in channel.participants and channel.is_active
],
"recent_messages": relevant_messages[:10], # 最多10条
"pending_tasks": self._get_pending_tasks(ai_role),
"collaboration_stats": self._get_collaboration_stats(ai_role)
}
def _get_pending_tasks(self, ai_role: AIRole) -> List[Dict[str, Any]]:
"""获取待处理任务"""
tasks = []
# 扫描所有频道中针对该AI的消息
for channel in self.channels.values():
if ai_role in channel.participants:
for msg in channel.message_history:
if (msg.receiver == ai_role and
msg.message_type in [MessageType.QUESTION, MessageType.REVIEW, MessageType.HANDOFF] and
not self._is_task_completed(msg.id)):
tasks.append({
"task_id": msg.id,
"type": msg.message_type.value,
"description": msg.content[:100] + "..." if len(msg.content) > 100 else msg.content,
"from": msg.sender.value,
"channel": channel.name,
"priority": msg.priority,
"created": msg.timestamp.isoformat()
})
return sorted(tasks, key=lambda x: x["priority"], reverse=True)
def _get_collaboration_stats(self, ai_role: AIRole) -> Dict[str, Any]:
"""获取协作统计"""
total_messages = 0
messages_sent = 0
messages_received = 0
for channel in self.channels.values():
if ai_role in channel.participants:
for msg in channel.message_history:
total_messages += 1
if msg.sender == ai_role:
messages_sent += 1
elif msg.receiver == ai_role or msg.receiver is None:
messages_received += 1
return {
"total_messages": total_messages,
"messages_sent": messages_sent,
"messages_received": messages_received,
"active_channels": len([c for c in self.channels.values() if ai_role in c.participants]),
"collaboration_score": min(100, (messages_sent + messages_received) * 2) # 简单计分
}
def _is_task_completed(self, task_id: str) -> bool:
"""检查任务是否已完成"""
# 简单实现:检查是否有回复消息引用了该任务
for channel in self.channels.values():
for msg in channel.message_history:
if task_id in msg.references:
return True
return False
async def _trigger_event(self, event_type: str, event_data: Dict[str, Any]):
"""触发事件处理"""
if event_type in self.event_handlers:
for handler in self.event_handlers[event_type]:
try:
await handler(event_data)
except Exception as e:
self.logger.error(f"事件处理器错误: {e}")
def add_event_handler(self, event_type: str, handler: Callable):
"""添加事件处理器"""
if event_type not in self.event_handlers:
self.event_handlers[event_type] = []
self.event_handlers[event_type].append(handler)
async def advance_phase(self, new_phase: WorkPhase):
"""推进工作阶段"""
old_phase = self.current_phase
self.current_phase = new_phase
# 广播阶段变更
await self.broadcast_message(
sender=AIRole.ROVODEV,
content=f"📈 项目阶段变更: {old_phase.value}{new_phase.value}",
message_type=MessageType.UPDATE,
channel_id="main_collaboration",
priority=4,
tags=["phase_change"]
)
# 触发工作流规则
await self._check_workflow_rules()
async def _check_workflow_rules(self):
"""检查并执行工作流规则"""
for rule in self.workflow_rules.values():
if rule.is_active and rule.trigger_phase == self.current_phase:
await self._execute_workflow_action(rule)
async def _execute_workflow_action(self, rule: WorkflowRule):
"""执行工作流动作"""
if rule.action == "notify_implementation_start":
await self.send_message(
sender=AIRole.ROVODEV,
content=f"🚀 架构设计已完成,请开始代码实现工作。参考架构文档进行开发。",
message_type=MessageType.UPDATE,
channel_id="code_implementation",
receiver=rule.target_ai,
priority=3
)
elif rule.action == "notify_testing_start":
await self.send_message(
sender=AIRole.ROVODEV,
content=f"✅ 代码实现已完成,请开始测试验证工作。",
message_type=MessageType.UPDATE,
channel_id="testing_validation",
receiver=rule.target_ai,
priority=3
)
elif rule.action == "notify_integration_start":
await self.send_message(
sender=AIRole.ROVODEV,
content=f"🎯 测试验证已完成,请开始项目整合工作。",
message_type=MessageType.UPDATE,
channel_id="project_integration",
receiver=rule.target_ai,
priority=3
)
# 使用示例
async def demo_collaboration():
"""演示协作系统使用"""
collab = AITeamCollaboration()
# Qwen发起架构讨论
await collab.send_message(
sender=AIRole.QWEN,
content="大家好我已经完成了OpenBB集成的初步架构设计请大家review一下设计文档。",
message_type=MessageType.PROPOSAL,
channel_id="main_collaboration",
priority=3,
attachments=["docs/architecture/openbb_integration_architecture.md"],
tags=["architecture", "review_request"]
)
# Claude回应
await collab.send_message(
sender=AIRole.CLAUDE,
content="架构设计看起来很不错!我有几个实现层面的问题...",
message_type=MessageType.QUESTION,
channel_id="architecture_design",
receiver=AIRole.QWEN,
priority=2
)
# 工作交接
await collab.handoff_work(
from_ai=AIRole.QWEN,
to_ai=AIRole.CLAUDE,
task_description="基于架构设计实现OpenBB核心引擎",
deliverables=["src/jixia/engines/enhanced_openbb_engine.py"],
notes="请特别注意八仙数据路由的实现"
)
# 获取仪表板
dashboard = collab.get_ai_dashboard(AIRole.CLAUDE)
print(f"Claude的工作仪表板: {json.dumps(dashboard, indent=2, ensure_ascii=False)}")
if __name__ == "__main__":
# 设置日志
logging.basicConfig(level=logging.INFO)
# 运行演示
asyncio.run(demo_collaboration())

View File

@ -0,0 +1,43 @@
# 设计八仙与数据源的智能映射
immortal_data_mapping = {
'吕洞宾': {
'specialty': 'technical_analysis', # 技术分析专家
'preferred_data_types': ['historical', 'price'],
'data_providers': ['OpenBB', 'RapidAPI']
},
'何仙姑': {
'specialty': 'risk_metrics', # 风险控制专家
'preferred_data_types': ['price', 'profile'],
'data_providers': ['RapidAPI', 'OpenBB']
},
'张果老': {
'specialty': 'historical_data', # 历史数据分析师
'preferred_data_types': ['historical'],
'data_providers': ['OpenBB', 'RapidAPI']
},
'韩湘子': {
'specialty': 'sector_analysis', # 新兴资产专家
'preferred_data_types': ['profile', 'news'],
'data_providers': ['RapidAPI', 'OpenBB']
},
'汉钟离': {
'specialty': 'market_movers', # 热点追踪
'preferred_data_types': ['news', 'price'],
'data_providers': ['RapidAPI', 'OpenBB']
},
'蓝采和': {
'specialty': 'value_discovery', # 潜力股发现
'preferred_data_types': ['screener', 'profile'],
'data_providers': ['OpenBB', 'RapidAPI']
},
'铁拐李': {
'specialty': 'contrarian_analysis', # 逆向思维专家
'preferred_data_types': ['profile', 'short_interest'],
'data_providers': ['RapidAPI', 'OpenBB']
},
'曹国舅': {
'specialty': 'macro_economics', # 宏观经济分析师
'preferred_data_types': ['profile', 'institutional_holdings'],
'data_providers': ['OpenBB', 'RapidAPI']
}
}

View File

@ -0,0 +1,38 @@
from abc import ABC, abstractmethod
from typing import List, Optional
from src.jixia.models.financial_data_models import StockQuote, HistoricalPrice, CompanyProfile, FinancialNews
class DataProvider(ABC):
"""金融数据提供商抽象基类"""
@abstractmethod
def get_quote(self, symbol: str) -> Optional[StockQuote]:
"""获取股票报价"""
pass
@abstractmethod
def get_historical_prices(self, symbol: str, days: int = 30) -> List[HistoricalPrice]:
"""获取历史价格数据"""
pass
@abstractmethod
def get_company_profile(self, symbol: str) -> Optional[CompanyProfile]:
"""获取公司概况"""
pass
@abstractmethod
def get_news(self, symbol: str, limit: int = 10) -> List[FinancialNews]:
"""获取相关新闻"""
pass
@property
@abstractmethod
def name(self) -> str:
"""数据提供商名称"""
pass
@property
@abstractmethod
def priority(self) -> int:
"""优先级(数字越小优先级越高)"""
pass

View File

@ -0,0 +1,109 @@
from typing import List, Optional
import asyncio
from src.jixia.engines.data_abstraction import DataProvider
from src.jixia.models.financial_data_models import StockQuote, HistoricalPrice, CompanyProfile, FinancialNews
from src.jixia.engines.rapidapi_adapter import RapidAPIDataProvider
from src.jixia.engines.openbb_adapter import OpenBBDataProvider
class DataAbstractionLayer:
"""金融数据抽象层管理器"""
def __init__(self):
self.providers: List[DataProvider] = []
self._initialize_providers()
def _initialize_providers(self):
"""初始化所有可用的数据提供商"""
# 根据配置和环境动态加载适配器
try:
self.providers.append(OpenBBDataProvider())
except Exception as e:
print(f"警告: OpenBBDataProvider 初始化失败: {e}")
try:
self.providers.append(RapidAPIDataProvider())
except Exception as e:
print(f"警告: RapidAPIDataProvider 初始化失败: {e}")
# 按优先级排序
self.providers.sort(key=lambda p: p.priority)
print(f"数据抽象层初始化完成,已加载 {len(self.providers)} 个数据提供商")
for provider in self.providers:
print(f" - {provider.name} (优先级: {provider.priority})")
def get_quote(self, symbol: str) -> Optional[StockQuote]:
"""获取股票报价(带故障转移)"""
for provider in self.providers:
try:
quote = provider.get_quote(symbol)
if quote:
print(f"✅ 通过 {provider.name} 获取到 {symbol} 的报价")
return quote
except Exception as e:
print(f"警告: {provider.name} 获取报价失败: {e}")
continue
print(f"❌ 所有数据提供商都无法获取 {symbol} 的报价")
return None
async def get_quote_async(self, symbol: str) -> Optional[StockQuote]:
"""异步获取股票报价(带故障转移)"""
for provider in self.providers:
try:
# 如果提供商支持异步方法,则使用异步方法
if hasattr(provider, 'get_quote_async'):
quote = await provider.get_quote_async(symbol)
else:
# 否则在执行器中运行同步方法
quote = await asyncio.get_event_loop().run_in_executor(
None, provider.get_quote, symbol
)
if quote:
print(f"✅ 通过 {provider.name} 异步获取到 {symbol} 的报价")
return quote
except Exception as e:
print(f"警告: {provider.name} 异步获取报价失败: {e}")
continue
print(f"❌ 所有数据提供商都无法异步获取 {symbol} 的报价")
return None
def get_historical_prices(self, symbol: str, days: int = 30) -> List[HistoricalPrice]:
"""获取历史价格数据(带故障转移)"""
for provider in self.providers:
try:
prices = provider.get_historical_prices(symbol, days)
if prices:
print(f"✅ 通过 {provider.name} 获取到 {symbol} 的历史价格数据")
return prices
except Exception as e:
print(f"警告: {provider.name} 获取历史价格失败: {e}")
continue
print(f"❌ 所有数据提供商都无法获取 {symbol} 的历史价格数据")
return []
def get_company_profile(self, symbol: str) -> Optional[CompanyProfile]:
"""获取公司概况(带故障转移)"""
for provider in self.providers:
try:
profile = provider.get_company_profile(symbol)
if profile:
print(f"✅ 通过 {provider.name} 获取到 {symbol} 的公司概况")
return profile
except Exception as e:
print(f"警告: {provider.name} 获取公司概况失败: {e}")
continue
print(f"❌ 所有数据提供商都无法获取 {symbol} 的公司概况")
return None
def get_news(self, symbol: str, limit: int = 10) -> List[FinancialNews]:
"""获取相关新闻(带故障转移)"""
for provider in self.providers:
try:
news = provider.get_news(symbol, limit)
if news:
print(f"✅ 通过 {provider.name} 获取到 {symbol} 的相关新闻")
return news
except Exception as e:
print(f"警告: {provider.name} 获取新闻失败: {e}")
continue
print(f"❌ 所有数据提供商都无法获取 {symbol} 的相关新闻")
return []

View File

@ -0,0 +1,37 @@
import time
from typing import Any, Optional
from functools import lru_cache
class DataCache:
"""金融数据缓存"""
def __init__(self):
self._cache = {}
self._cache_times = {}
self.default_ttl = 60 # 默认缓存时间(秒)
def get(self, key: str) -> Optional[Any]:
"""获取缓存数据"""
if key in self._cache:
# 检查是否过期
if time.time() - self._cache_times[key] < self.default_ttl:
return self._cache[key]
else:
# 删除过期缓存
del self._cache[key]
del self._cache_times[key]
return None
def set(self, key: str, value: Any, ttl: Optional[int] = None):
"""设置缓存数据"""
self._cache[key] = value
self._cache_times[key] = time.time()
if ttl:
# 可以为特定数据设置不同的TTL
pass # 实际实现中需要更复杂的TTL管理机制
@lru_cache(maxsize=128)
def get_quote_cache(self, symbol: str) -> Optional[Any]:
"""LRU缓存装饰器示例"""
# 这个方法将自动缓存最近128个调用的结果
pass

View File

@ -0,0 +1,49 @@
from typing import Dict, Any
from datetime import datetime
class DataQualityMonitor:
"""数据质量监控"""
def __init__(self):
self.provider_stats = {}
def record_access(self, provider_name: str, success: bool, response_time: float, data_size: int):
"""记录数据访问统计"""
if provider_name not in self.provider_stats:
self.provider_stats[provider_name] = {
'total_requests': 0,
'successful_requests': 0,
'failed_requests': 0,
'total_response_time': 0,
'total_data_size': 0,
'last_access': None
}
stats = self.provider_stats[provider_name]
stats['total_requests'] += 1
if success:
stats['successful_requests'] += 1
else:
stats['failed_requests'] += 1
stats['total_response_time'] += response_time
stats['total_data_size'] += data_size
stats['last_access'] = datetime.now()
def get_provider_health(self, provider_name: str) -> Dict[str, Any]:
"""获取提供商健康状况"""
if provider_name not in self.provider_stats:
return {'status': 'unknown'}
stats = self.provider_stats[provider_name]
success_rate = stats['successful_requests'] / stats['total_requests'] if stats['total_requests'] > 0 else 0
avg_response_time = stats['total_response_time'] / stats['total_requests'] if stats['total_requests'] > 0 else 0
status = 'healthy' if success_rate > 0.95 and avg_response_time < 2.0 else 'degraded' if success_rate > 0.8 else 'unhealthy'
return {
'status': status,
'success_rate': success_rate,
'avg_response_time': avg_response_time,
'total_requests': stats['total_requests'],
'last_access': stats['last_access']
}

View File

@ -0,0 +1,75 @@
from typing import List, Optional
from src.jixia.engines.data_abstraction import DataProvider
from src.jixia.models.financial_data_models import StockQuote, HistoricalPrice, CompanyProfile, FinancialNews
from src.jixia.engines.openbb_engine import OpenBBEngine
class OpenBBDataProvider(DataProvider):
"""OpenBB引擎适配器"""
def __init__(self):
self.engine = OpenBBEngine()
self._name = "OpenBB"
self._priority = 1 # 最高优先级
def get_quote(self, symbol: str) -> Optional[StockQuote]:
result = self.engine.get_immortal_data("吕洞宾", "price", symbol)
if result.success and result.data:
# 解析OpenBB返回的数据并转换为StockQuote
# 注意这里需要根据OpenBB实际返回的数据结构进行调整
data = result.data
if isinstance(data, list) and len(data) > 0:
item = data[0] # 取第一条数据
elif hasattr(data, '__dict__'):
item = data
else:
item = {}
# 提取价格信息根据openbb_stock_data.py中的字段
price = 0
if hasattr(item, 'close'):
price = float(item.close)
elif isinstance(item, dict) and 'close' in item:
price = float(item['close'])
volume = 0
if hasattr(item, 'volume'):
volume = int(item.volume)
elif isinstance(item, dict) and 'volume' in item:
volume = int(item['volume'])
# 日期处理
timestamp = None
if hasattr(item, 'date'):
timestamp = item.date
elif isinstance(item, dict) and 'date' in item:
timestamp = item['date']
return StockQuote(
symbol=symbol,
price=price,
change=0, # 需要计算
change_percent=0, # 需要计算
volume=volume,
timestamp=timestamp
)
return None
def get_historical_prices(self, symbol: str, days: int = 30) -> List[HistoricalPrice]:
# 实现历史价格数据获取逻辑
pass
def get_company_profile(self, symbol: str) -> Optional[CompanyProfile]:
# 实现公司概况获取逻辑
pass
def get_news(self, symbol: str, limit: int = 10) -> List[FinancialNews]:
# 实现新闻获取逻辑
pass
@property
def name(self) -> str:
return self._name
@property
def priority(self) -> int:
return self._priority

View File

@ -0,0 +1,48 @@
from typing import List, Optional
from src.jixia.engines.data_abstraction import DataProvider
from src.jixia.models.financial_data_models import StockQuote, HistoricalPrice, CompanyProfile, FinancialNews
from src.jixia.engines.perpetual_engine import JixiaPerpetualEngine
from config.settings import get_rapidapi_key
class RapidAPIDataProvider(DataProvider):
"""RapidAPI永动机引擎适配器"""
def __init__(self):
self.engine = JixiaPerpetualEngine(get_rapidapi_key())
self._name = "RapidAPI"
self._priority = 2 # 中等优先级
def get_quote(self, symbol: str) -> Optional[StockQuote]:
result = self.engine.get_immortal_data("吕洞宾", "quote", symbol)
if result.success and result.data:
# 解析RapidAPI返回的数据并转换为StockQuote
# 这里需要根据实际API返回的数据结构进行调整
return StockQuote(
symbol=symbol,
price=result.data.get("price", 0),
change=result.data.get("change", 0),
change_percent=result.data.get("change_percent", 0),
volume=result.data.get("volume", 0),
timestamp=result.data.get("timestamp")
)
return None
def get_historical_prices(self, symbol: str, days: int = 30) -> List[HistoricalPrice]:
# 实现历史价格数据获取逻辑
pass
def get_company_profile(self, symbol: str) -> Optional[CompanyProfile]:
# 实现公司概况获取逻辑
pass
def get_news(self, symbol: str, limit: int = 10) -> List[FinancialNews]:
# 实现新闻获取逻辑
pass
@property
def name(self) -> str:
return self._name
@property
def priority(self) -> int:
return self._priority

View File

@ -0,0 +1,521 @@
# 金融数据抽象层设计
## 概述
"炼妖壶-稷下学宫AI辩论系统"我们需要构建一个统一的金融数据抽象层以支持多种数据源包括现有的RapidAPI永动机引擎新增的OpenBB集成引擎以及未来可能添加的其他数据提供商该抽象层将为上层AI智能体提供一致的数据接口同时隐藏底层数据源的具体实现细节
## 设计目标
1. **统一接口**为所有金融数据访问提供一致的API
2. **可扩展性**易于添加新的数据提供商
3. **容错性**当主数据源不可用时能够自动切换到备用数据源
4. **性能优化**支持缓存和异步数据获取
5. **类型安全**使用Python类型注解确保数据结构的一致性
## 核心组件
### 1. 数据模型 (Data Models)
定义标准化的金融数据结构
```python
# src/jixia/models/financial_data_models.py
from dataclasses import dataclass
from typing import Optional, List
from datetime import datetime
@dataclass
class StockQuote:
symbol: str
price: float
change: float
change_percent: float
volume: int
timestamp: datetime
@dataclass
class HistoricalPrice:
date: datetime
open: float
high: float
low: float
close: float
volume: int
@dataclass
class CompanyProfile:
symbol: str
name: str
industry: str
sector: str
market_cap: float
pe_ratio: Optional[float]
dividend_yield: Optional[float]
@dataclass
class FinancialNews:
title: str
summary: str
url: str
timestamp: datetime
sentiment: Optional[float] # -1 (负面) to 1 (正面)
```
### 2. 抽象基类 (Abstract Base Class)
定义数据提供商的通用接口
```python
# src/jixia/engines/data_abstraction.py
from abc import ABC, abstractmethod
from typing import List, Optional
from src.jixia.models.financial_data_models import StockQuote, HistoricalPrice, CompanyProfile, FinancialNews
class DataProvider(ABC):
"""金融数据提供商抽象基类"""
@abstractmethod
def get_quote(self, symbol: str) -> Optional[StockQuote]:
"""获取股票报价"""
pass
@abstractmethod
def get_historical_prices(self, symbol: str, days: int = 30) -> List[HistoricalPrice]:
"""获取历史价格数据"""
pass
@abstractmethod
def get_company_profile(self, symbol: str) -> Optional[CompanyProfile]:
"""获取公司概况"""
pass
@abstractmethod
def get_news(self, symbol: str, limit: int = 10) -> List[FinancialNews]:
"""获取相关新闻"""
pass
@property
@abstractmethod
def name(self) -> str:
"""数据提供商名称"""
pass
@property
@abstractmethod
def priority(self) -> int:
"""优先级(数字越小优先级越高)"""
pass
```
### 3. Provider适配器 (Provider Adapters)
为每个具体的数据源实现适配器
#### RapidAPI永动机引擎适配器
```python
# src/jixia/engines/rapidapi_adapter.py
from typing import List, Optional
from src.jixia.engines.data_abstraction import DataProvider
from src.jixia.models.financial_data_models import StockQuote, HistoricalPrice, CompanyProfile, FinancialNews
from src.jixia.engines.perpetual_engine import JixiaPerpetualEngine
from config.settings import get_rapidapi_key
class RapidAPIDataProvider(DataProvider):
"""RapidAPI永动机引擎适配器"""
def __init__(self):
self.engine = JixiaPerpetualEngine(get_rapidapi_key())
self._name = "RapidAPI"
self._priority = 2 # 中等优先级
def get_quote(self, symbol: str) -> Optional[StockQuote]:
result = self.engine.get_immortal_data("吕洞宾", "quote", symbol)
if result.success and result.data:
# 解析RapidAPI返回的数据并转换为StockQuote
# 这里需要根据实际API返回的数据结构进行调整
return StockQuote(
symbol=symbol,
price=result.data.get("price", 0),
change=result.data.get("change", 0),
change_percent=result.data.get("change_percent", 0),
volume=result.data.get("volume", 0),
timestamp=result.data.get("timestamp")
)
return None
def get_historical_prices(self, symbol: str, days: int = 30) -> List[HistoricalPrice]:
# 实现历史价格数据获取逻辑
pass
def get_company_profile(self, symbol: str) -> Optional[CompanyProfile]:
# 实现公司概况获取逻辑
pass
def get_news(self, symbol: str, limit: int = 10) -> List[FinancialNews]:
# 实现新闻获取逻辑
pass
@property
def name(self) -> str:
return self._name
@property
def priority(self) -> int:
return self._priority
```
#### OpenBB引擎适配器
```python
# src/jixia/engines/openbb_adapter.py
from typing import List, Optional
from src.jixia.engines.data_abstraction import DataProvider
from src.jixia.models.financial_data_models import StockQuote, HistoricalPrice, CompanyProfile, FinancialNews
from src.jixia.engines.openbb_engine import OpenBBEngine
class OpenBBDataProvider(DataProvider):
"""OpenBB引擎适配器"""
def __init__(self):
self.engine = OpenBBEngine()
self._name = "OpenBB"
self._priority = 1 # 最高优先级
def get_quote(self, symbol: str) -> Optional[StockQuote]:
result = self.engine.get_immortal_data("吕洞宾", "price", symbol)
if result.success and result.data:
# 解析OpenBB返回的数据并转换为StockQuote
return StockQuote(
symbol=symbol,
price=result.data.get("close", 0),
change=0, # 需要计算
change_percent=0, # 需要计算
volume=result.data.get("volume", 0),
timestamp=result.data.get("date")
)
return None
def get_historical_prices(self, symbol: str, days: int = 30) -> List[HistoricalPrice]:
# 实现历史价格数据获取逻辑
pass
def get_company_profile(self, symbol: str) -> Optional[CompanyProfile]:
# 实现公司概况获取逻辑
pass
def get_news(self, symbol: str, limit: int = 10) -> List[FinancialNews]:
# 实现新闻获取逻辑
pass
@property
def name(self) -> str:
return self._name
@property
def priority(self) -> int:
return self._priority
```
### 4. 数据抽象层管理器 (Data Abstraction Layer Manager)
管理多个数据提供商并提供统一接口
```python
# src/jixia/engines/data_abstraction_layer.py
from typing import List, Optional
from src.jixia.engines.data_abstraction import DataProvider
from src.jixia.models.financial_data_models import StockQuote, HistoricalPrice, CompanyProfile, FinancialNews
import asyncio


class DataAbstractionLayer:
    """Financial data abstraction layer manager."""

    def __init__(self):
        self.providers: List[DataProvider] = []
        self._initialize_providers()

    def _initialize_providers(self):
        """Initialize all available data providers."""
        # Load adapters dynamically based on configuration and environment
        try:
            from src.jixia.engines.rapidapi_adapter import RapidAPIDataProvider
            self.providers.append(RapidAPIDataProvider())
        except ImportError:
            pass  # RapidAPI engine unavailable
        try:
            from src.jixia.engines.openbb_adapter import OpenBBDataProvider
            self.providers.append(OpenBBDataProvider())
        except ImportError:
            pass  # OpenBB engine unavailable
        # Sort by priority
        self.providers.sort(key=lambda p: p.priority)

    def get_quote(self, symbol: str) -> Optional[StockQuote]:
        """Get a stock quote (with failover)."""
        for provider in self.providers:
            try:
                quote = provider.get_quote(symbol)
                if quote:
                    return quote
            except Exception as e:
                print(f"Warning: {provider.name} failed to fetch quote: {e}")
                continue
        return None

    async def get_quote_async(self, symbol: str) -> Optional[StockQuote]:
        """Get a stock quote asynchronously (with failover)."""
        for provider in self.providers:
            try:
                # Use the provider's async method if it has one
                if hasattr(provider, 'get_quote_async'):
                    quote = await provider.get_quote_async(symbol)
                else:
                    # Otherwise run the synchronous method in an executor
                    quote = await asyncio.get_event_loop().run_in_executor(
                        None, provider.get_quote, symbol
                    )
                if quote:
                    return quote
            except Exception as e:
                print(f"Warning: {provider.name} failed to fetch quote: {e}")
                continue
        return None

    def get_historical_prices(self, symbol: str, days: int = 30) -> List[HistoricalPrice]:
        """Get historical price data (with failover)."""
        for provider in self.providers:
            try:
                prices = provider.get_historical_prices(symbol, days)
                if prices:
                    return prices
            except Exception as e:
                print(f"Warning: {provider.name} failed to fetch historical prices: {e}")
                continue
        return []

    def get_company_profile(self, symbol: str) -> Optional[CompanyProfile]:
        """Get a company profile (with failover)."""
        for provider in self.providers:
            try:
                profile = provider.get_company_profile(symbol)
                if profile:
                    return profile
            except Exception as e:
                print(f"Warning: {provider.name} failed to fetch company profile: {e}")
                continue
        return None

    def get_news(self, symbol: str, limit: int = 10) -> List[FinancialNews]:
        """Get related news (with failover)."""
        for provider in self.providers:
            try:
                news = provider.get_news(symbol, limit)
                if news:
                    return news
            except Exception as e:
                print(f"Warning: {provider.name} failed to fetch news: {e}")
                continue
        return []
```
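To make the failover path concrete, here is a minimal sketch of how it behaves when the highest-priority provider fails (the `FailingProvider` stub is purely illustrative and not part of the design):
```python
# Hypothetical stub used only to demonstrate failover.
class FailingProvider:
    name = "AlwaysFails"
    priority = 0  # Tried first because of its priority

    def get_quote(self, symbol):
        raise RuntimeError("simulated outage")


dal = DataAbstractionLayer()
# Put the failing stub at the front of the provider list for this demo.
dal.providers.insert(0, FailingProvider())

# The failing provider raises, the layer logs a warning and falls through
# to the next provider, so the caller still receives a quote (or None).
quote = dal.get_quote("AAPL")
```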
## Intelligent Mapping Between the Eight Immortals and Data Sources
```python
# src/jixia/engines/baxian_data_mapping.py
# Intelligent mapping between the Eight Immortals and data sources
immortal_data_mapping = {
    '吕洞宾': {
        'specialty': 'technical_analysis',  # Technical analysis expert
        'preferred_data_types': ['historical', 'price'],
        'data_providers': ['OpenBB', 'RapidAPI']
    },
    '何仙姑': {
        'specialty': 'risk_metrics',  # Risk control expert
        'preferred_data_types': ['price', 'profile'],
        'data_providers': ['RapidAPI', 'OpenBB']
    },
    '张果老': {
        'specialty': 'historical_data',  # Historical data analyst
        'preferred_data_types': ['historical'],
        'data_providers': ['OpenBB', 'RapidAPI']
    },
    '韩湘子': {
        'specialty': 'sector_analysis',  # Emerging assets expert
        'preferred_data_types': ['profile', 'news'],
        'data_providers': ['RapidAPI', 'OpenBB']
    },
    '汉钟离': {
        'specialty': 'market_movers',  # Market hot-spot tracker
        'preferred_data_types': ['news', 'price'],
        'data_providers': ['RapidAPI', 'OpenBB']
    },
    '蓝采和': {
        'specialty': 'value_discovery',  # Hidden-gem discovery
        'preferred_data_types': ['screener', 'profile'],
        'data_providers': ['OpenBB', 'RapidAPI']
    },
    '铁拐李': {
        'specialty': 'contrarian_analysis',  # Contrarian thinking expert
        'preferred_data_types': ['profile', 'short_interest'],
        'data_providers': ['RapidAPI', 'OpenBB']
    },
    '曹国舅': {
        'specialty': 'macro_economics',  # Macroeconomic analyst
        'preferred_data_types': ['profile', 'institutional_holdings'],
        'data_providers': ['OpenBB', 'RapidAPI']
    }
}
```
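As a hedged illustration of how this mapping could drive provider selection, the sketch below resolves an immortal's first preferred provider that is actually available (the `get_preferred_provider` helper is an assumption for illustration, not an existing function):
```python
from typing import List, Optional

from src.jixia.engines.baxian_data_mapping import immortal_data_mapping


def get_preferred_provider(immortal: str, available: List[str]) -> Optional[str]:
    """Return the first provider preferred by the given immortal that is available."""
    preferences = immortal_data_mapping.get(immortal, {}).get('data_providers', [])
    for provider_name in preferences:
        if provider_name in available:
            return provider_name
    return None


# Example: 吕洞宾 prefers OpenBB when both engines are installed.
print(get_preferred_provider('吕洞宾', ['RapidAPI', 'OpenBB']))  # -> 'OpenBB'
```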
## Caching Strategy
To improve performance, we will implement a multi-level caching strategy:
```python
# src/jixia/engines/data_cache.py
import time
from typing import Any, Optional
from functools import lru_cache


class DataCache:
    """Financial data cache."""

    def __init__(self):
        self._cache = {}
        self._cache_times = {}
        self.default_ttl = 60  # Default time-to-live in seconds

    def get(self, key: str) -> Optional[Any]:
        """Return a cached value, or None if it is missing or expired."""
        if key in self._cache:
            # Check whether the entry has expired
            if time.time() - self._cache_times[key] < self.default_ttl:
                return self._cache[key]
            else:
                # Drop the expired entry
                del self._cache[key]
                del self._cache_times[key]
        return None

    def set(self, key: str, value: Any, ttl: Optional[int] = None):
        """Store a value in the cache."""
        self._cache[key] = value
        self._cache_times[key] = time.time()
        if ttl:
            # Specific entries may need their own TTL
            pass  # A real implementation needs a richer per-key TTL mechanism

    @lru_cache(maxsize=128)
    def get_quote_cache(self, symbol: str) -> Optional[Any]:
        """Example of using the LRU cache decorator."""
        # This method automatically caches the results of the 128 most recent calls
        pass
```
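The `set()` placeholder above hints at per-key TTLs. A minimal sketch of one way to support them, storing an expiry timestamp per entry (the `TTLCache` name and layout are illustrative assumptions, not the final design):
```python
import time
from typing import Any, Optional


class TTLCache:
    """Sketch of a cache that honors a per-entry TTL."""

    def __init__(self, default_ttl: int = 60):
        self.default_ttl = default_ttl
        self._entries = {}  # key -> (value, expires_at)

    def set(self, key: str, value: Any, ttl: Optional[int] = None) -> None:
        expires_at = time.time() + (ttl if ttl is not None else self.default_ttl)
        self._entries[key] = (value, expires_at)

    def get(self, key: str) -> Optional[Any]:
        entry = self._entries.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.time() >= expires_at:
            del self._entries[key]  # Lazily evict expired entries
            return None
        return value


# Usage: quotes expire quickly, company profiles can live longer.
cache = TTLCache()
cache.set("quote:AAPL", {"price": 230.5}, ttl=30)
cache.set("profile:AAPL", {"sector": "Technology"}, ttl=3600)
```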
## Data Quality Monitoring
To ensure data accuracy and reliability, we will implement data quality monitoring:
```python
# src/jixia/engines/data_quality_monitor.py
from typing import Dict, Any
from datetime import datetime


class DataQualityMonitor:
    """Data quality monitor."""

    def __init__(self):
        self.provider_stats = {}

    def record_access(self, provider_name: str, success: bool, response_time: float, data_size: int):
        """Record statistics for a single data access."""
        if provider_name not in self.provider_stats:
            self.provider_stats[provider_name] = {
                'total_requests': 0,
                'successful_requests': 0,
                'failed_requests': 0,
                'total_response_time': 0,
                'total_data_size': 0,
                'last_access': None
            }
        stats = self.provider_stats[provider_name]
        stats['total_requests'] += 1
        if success:
            stats['successful_requests'] += 1
        else:
            stats['failed_requests'] += 1
        stats['total_response_time'] += response_time
        stats['total_data_size'] += data_size
        stats['last_access'] = datetime.now()

    def get_provider_health(self, provider_name: str) -> Dict[str, Any]:
        """Return the health status of a provider."""
        if provider_name not in self.provider_stats:
            return {'status': 'unknown'}
        stats = self.provider_stats[provider_name]
        total = stats['total_requests']
        success_rate = stats['successful_requests'] / total if total > 0 else 0
        avg_response_time = stats['total_response_time'] / total if total > 0 else 0
        if success_rate > 0.95 and avg_response_time < 2.0:
            status = 'healthy'
        elif success_rate > 0.8:
            status = 'degraded'
        else:
            status = 'unhealthy'
        return {
            'status': status,
            'success_rate': success_rate,
            'avg_response_time': avg_response_time,
            'total_requests': total,
            'last_access': stats['last_access']
        }
```
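A short hedged example of how the monitor could wrap a provider call inside the abstraction layer; the wiring shown here (the `monitored_get_quote` helper and the simplified 1/0 payload size) is an assumption about how the pieces would be composed, not existing code:
```python
import time

from src.jixia.engines.data_quality_monitor import DataQualityMonitor

monitor = DataQualityMonitor()


def monitored_get_quote(provider, symbol: str):
    """Fetch a quote while recording success, latency, and a crude payload size."""
    start = time.time()
    try:
        quote = provider.get_quote(symbol)
        # Payload size is simplified to 1/0 in this sketch.
        monitor.record_access(provider.name, quote is not None, time.time() - start, 1 if quote else 0)
        return quote
    except Exception:
        monitor.record_access(provider.name, False, time.time() - start, 0)
        raise


# Later, e.g. on a dashboard:
# print(monitor.get_provider_health("OpenBB"))
```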
## Usage Example
```python
# Example: using the data abstraction layer inside an agent
from src.jixia.engines.data_abstraction_layer import DataAbstractionLayer
from src.jixia.models.financial_data_models import StockQuote

# Initialize the data abstraction layer
dal = DataAbstractionLayer()

# Fetch a stock quote
quote = dal.get_quote("AAPL")
if quote:
    print(f"Apple share price: ${quote.price}")
else:
    print("Could not retrieve quote data")

# Fetch a quote asynchronously
import asyncio

async def async_example():
    quote = await dal.get_quote_async("GOOGL")
    if quote:
        print(f"Google share price: ${quote.price}")

# asyncio.run(async_example())
```
## Summary
This financial data abstraction layer design provides the following advantages:
1. **Unified interface**: every agent accesses any data source through the same interface
2. **Failover**: when the primary data source is unavailable, the layer automatically switches to a backup source
3. **Extensibility**: new data provider adapters can be added with minimal effort
4. **Performance**: caching speeds up data access
5. **Quality monitoring**: the health of each data source is tracked in real time
6. **Cultural integration**: the intelligent mapping between the Eight Immortals and data sources preserves the project's cultural identity
This gives the "炼妖壶-稷下学宫" (Jixia Academy) AI debate system a powerful, reliable, and extensible financial data foundation.

35
start_ai_collaboration.sh Normal file
View File

@ -0,0 +1,35 @@
#!/bin/bash
# Quick-start script for the four-AI team collaboration system
echo "🤖 Four-AI Team Collaboration System"
echo "=================="
cd /home/ben/github/liurenchaxin
echo "📋 Available options:"
echo "1. Demo mode - walk through the full collaboration workflow"
echo "2. Web UI - launch the visual management interface"
echo "3. Interactive mode - command-line interactive experience"
echo ""
read -p "Select a mode (1-3): " choice
case $choice in
    1)
        echo "🚀 Starting demo mode..."
        .venv/bin/python3 ai_collaboration_demo.py demo
        ;;
    2)
        echo "🌐 Starting the web UI..."
        echo "Open http://localhost:8502 in your browser"
        echo "Select the '🤖 AI协作' tab to view the collaboration system"
        .venv/bin/python3 -m streamlit run app/streamlit_app.py --server.port 8502
        ;;
    3)
        echo "💬 Starting interactive mode..."
        .venv/bin/python3 ai_collaboration_demo.py interactive
        ;;
    *)
        echo "❌ Invalid selection, please rerun the script"
        ;;
esac

View File

@ -0,0 +1,24 @@
class TestCulturalAccuracy:
    """Cultural-accuracy tests"""

    def test_immortal_characteristics(self):
        """Verify that the Eight Immortals' characteristics are accurate"""
        # immortals = get_immortal_configs()
        #
        # # Verify 吕洞宾's technical-analysis specialty
        # assert immortals['吕洞宾'].specialty == 'technical_analysis'
        # assert immortals['吕洞宾'].element == '乾'
        #
        # # Verify 何仙姑's risk-control specialty
        # assert immortals['何仙姑'].specialty == 'risk_metrics'
        # assert immortals['何仙姑'].element == '坤'
        pass

    def test_debate_cultural_context(self):
        """Verify the cultural context of a debate"""
        # debate = create_test_debate('AAPL')
        #
        # # Ensure the debate follows the Jixia Academy tradition
        # assert 'jixia' in debate.context
        # assert len(debate.participants) == 8  # the Eight Immortals
        pass

View File

@ -0,0 +1,3 @@
class TestDebateDataQuality:
    """Debate data quality tests"""
    pass

View File

@ -0,0 +1,14 @@
class TestImmortalDataRouting:
    """Eight Immortals data-routing tests"""

    def test_lv_dongbin_technical_analysis(self):
        """Test technical-analysis data retrieval for 吕洞宾"""
        pass

    def test_he_xiangu_risk_metrics(self):
        """Test risk-metric data for 何仙姑"""
        pass

    def test_immortal_data_consistency(self):
        """Test consistency of the immortals' data"""
        pass

View File

@ -0,0 +1,3 @@
class TestImmortalPreferences:
    """Eight Immortals preference tests"""
    pass

View File

@ -0,0 +1,62 @@
import pytest
from unittest import mock
from src.jixia.engines.openbb_stock_data import get_stock_data
from types import SimpleNamespace


class TestOpenBBIntegration:
    """OpenBB integration test suite"""

    def test_stock_data_retrieval(self):
        """Test successful retrieval of stock data"""
        # Create a mock 'openbb' module
        mock_openbb_module = mock.MagicMock()
        # Attach a mock 'obb' attribute to that module
        mock_obb_object = mock.MagicMock()
        mock_openbb_module.obb = mock_obb_object
        # Configure the return value of the mocked obb object
        mock_data = [
            SimpleNamespace(date='2023-01-01', open=100, high=110, low=90, close=105, volume=10000),
            SimpleNamespace(date='2023-01-02', open=105, high=115, low=102, close=112, volume=12000)
        ]
        mock_obb_object.equity.price.historical.return_value = SimpleNamespace(results=mock_data)
        # Use patch.dict to mock the import of the openbb module
        with mock.patch.dict('sys.modules', {'openbb': mock_openbb_module}):
            data = get_stock_data('AAPL')
        # Assertions
        assert data is not None
        assert len(data) == 2
        assert data[0].close == 105
        mock_obb_object.equity.price.historical.assert_called_once()

    def test_stock_data_handles_api_error(self):
        """Test the case where the OpenBB API returns no usable data"""
        mock_openbb_module = mock.MagicMock()
        mock_obb_object = mock.MagicMock()
        mock_openbb_module.obb = mock_obb_object
        # Configure the mocked obb object to return an empty result
        mock_obb_object.equity.price.historical.return_value = SimpleNamespace(results=None)
        with mock.patch.dict('sys.modules', {'openbb': mock_openbb_module}):
            data = get_stock_data('FAIL')
        # Assertions
        assert data is None
        mock_obb_object.equity.price.historical.assert_called_once_with(
            symbol='FAIL',
            provider='yfinance',
            start_date=mock.ANY,
            end_date=mock.ANY
        )

    def test_stock_data_handles_import_error(self):
        """Test graceful degradation when the openbb library is unavailable"""
        # Simulate openbb being absent from sys.modules
        with mock.patch.dict('sys.modules', {'openbb': None}):
            data = get_stock_data('NOBB')
        # Assertions
        assert data is None