refactor(project): 重构项目文档并优化代码结构

- 移除旧的文档结构和内容,清理 root 目录下的 markdown 文件
- 删除 GitHub Pages 部署配置和相关文件
- 移除 .env.example 文件,使用 Doppler 进行环境变量管理
- 更新 README.md,增加对 OpenBB 数据的支持
- 重构 streamlit_app.py,移除 Swarm 模式相关代码
- 更新 Doppler 配置管理模块,增加对 .env 文件的支持
- 删除 Memory Bank 实验和测试脚本
- 清理内部文档和开发计划
This commit is contained in:
ben 2025-08-18 16:56:04 +00:00
parent c4e8cfefc7
commit 51576ebb6f
87 changed files with 13056 additions and 1959 deletions

View File

@ -1,12 +0,0 @@
# MongoDB Atlas Connection (managed by Doppler)
# MONGODB_URI=mongodb+srv://username:password@cluster.mongodb.net/
# Database Configuration
MONGODB_DATABASE=taigong
# Swarm Debate Configuration
SWARM_THRESHOLD=5
SWARM_TIME_WINDOW_HOURS=24
# Note: Sensitive secrets like MONGODB_URI are managed by Doppler
# Run: doppler secrets set MONGODB_URI "your-connection-string"

View File

@ -1,44 +0,0 @@
name: Deploy Docs to GitHub Pages
on:
push:
branches: [ main ]
paths:
- 'docs/**'
- '.github/workflows/gh-pages.yml'
workflow_dispatch:
permissions:
contents: read
pages: write
id-token: write
concurrency:
group: 'pages'
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Pages
uses: actions/configure-pages@v5
- name: Upload artifact
uses: actions/upload-pages-artifact@v3
with:
path: docs
deploy:
needs: build
runs-on: ubuntu-latest
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
steps:
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v4

188
CLAUDE.md Normal file
View File

@ -0,0 +1,188 @@
# 炼妖壶 (Lianyaohu) - 稷下学宫AI辩论系统
## 项目概览
**炼妖壶**是一个基于中国哲学传统的多AI智能体辩论平台当前版本为 **v2.0.0**。项目采用模块化架构集成了OpenBB金融数据平台提供专业的投资分析和AI辩论功能。
## 快速开始
### 环境要求
- Python 3.12+
- Google Cloud账户 (已配置: abstract-banner-460615-j4)
- RapidAPI密钥
- 可选: OpenRouter API密钥, OpenBB支持
### 安装和运行
```bash
# 创建虚拟环境
python -m venv .venv
source .venv/bin/activate
# 安装依赖
pip install -r requirements.txt
# 启动应用
streamlit run app/streamlit_app.py
```
## 核心功能
### 1. 稷下学宫辩论系统
- **八仙论道**: 基于中国传统八仙文化的AI辩论系统
- **先天八卦顺序**: 严格的辩论顺序规则
- **双模式支持**: Google ADK模式和传统RapidAPI模式
- **记忆银行**: 集成Vertex AI的记忆系统
### 2. OpenBB金融数据集成
- **专业金融数据**: 股票、ETF、价格历史、公司概况等
- **八仙专属数据源**: 每位八仙分配专门的数据源和专业领域
- **智能降级机制**: API失败时自动使用演示数据
- **实时数据展示**: 动态图表和关键指标
### 3. 多AI服务支持
- **Google ADK**: 新一代AI代理开发工具包
- **OpenRouter**: 多模型AI服务路由
- **Vertex AI**: Google云AI服务 (已配置)
- **OpenAI Swarm**: 智能体框架
## 配置管理
### 必需配置
```bash
# 在 .env 文件中设置
RAPIDAPI_KEY=your_rapidapi_key
# 选择以下之一
GOOGLE_API_KEY=your_google_api_key
OPENROUTER_API_KEY_1=your_openrouter_key
```
### 可选配置
```bash
# Google Cloud配置 (已设置项目ID)
GOOGLE_CLOUD_PROJECT_ID=abstract-banner-460615-j4
GOOGLE_CLOUD_LOCATION=us-central1
# 记忆银行配置
VERTEX_MEMORY_BANK_ENABLED=TRUE
JIXIA_MEMORY_BACKEND=vertex
```
## 项目架构
```
liurenchaxin/
├── app/ # Streamlit应用界面
│ ├── streamlit_app.py # 主应用入口
│ └── tabs/ # 功能模块
│ ├── openbb_tab.py # OpenBB数据展示
│ └── adk_debate_tab.py # Google ADK辩论
├── src/jixia/ # 稷下学宫核心系统
│ ├── engines/ # 数据引擎
│ │ ├── openbb_engine.py # OpenBB集成
│ │ └── perpetual_engine.py # 永动机引擎
│ ├── debates/ # 辩论系统
│ ├── memory/ # 记忆银行
│ └── coordination/ # 多聊天协调
├── OpenBB/ # OpenBB源码 (子模块)
├── config/ # 配置管理
└── docs/ # 文档
```
## 当前状态
### 已完成功能
- ✅ Google Vertex AI认证和配置
- ✅ OpenBB v4.1.0集成
- ✅ 八仙辩论系统
- ✅ 记忆银行系统 (Vertex AI)
- ✅ Streamlit界面
- ✅ 智能降级机制
- ✅ 多AI服务支持
### 技术栈
- **前端**: Streamlit + Plotly
- **后端**: Python + FastAPI
- **AI服务**: Google Vertex AI, OpenRouter, Google ADK
- **数据源**: OpenBB, RapidAPI (17个订阅)
- **记忆系统**: Vertex AI Memory Bank
- **部署**: Docker + GitHub Actions
## 开发指南
### 添加新的AI服务
1. 在 `config/doppler_config.py` 中添加API密钥配置
2. 在 `src/jixia/agents/` 中创建新的代理类
3. 在 `src/jixia/engines/` 中添加数据引擎
4. 在 `app/tabs/` 中添加界面页签
### 扩展OpenBB功能
1. 查看 `src/jixia/engines/openbb_engine.py`
2. 在 `OpenBBStockData` 类中添加新方法
3. 更新 `app/tabs/openbb_tab.py` 界面
4. 添加相应的错误处理和降级机制
### 记忆银行使用
```python
from src.jixia.memory.factory import get_memory_backend
# 获取记忆后端
memory = get_memory_backend()
# 存储记忆
await memory.add_memory("用户查询", "AI响应", metadata={"source": "debate"})
# 检索相关记忆
relevant_memories = await memory.get_relevant_memories("当前话题")
```
## 故障排除
### 常见问题
1. **Vertex AI认证错误**: 确保运行 `gcloud auth application-default login`
2. **OpenBB导入失败**: 运行 `pip install openbb>=4.1.0`
3. **API密钥错误**: 检查 `.env` 文件中的密钥配置
4. **记忆银行错误**: 确保Vertex AI API已启用
### 调试命令
```bash
# 检查Google Cloud认证
gcloud auth list
# 验证配置
python -c "from config.doppler_config import validate_config; validate_config()"
# 测试Vertex AI连接
python -c "from vertexai import init; init(project='abstract-banner-460615-j4')"
```
## 路线图
### 短期目标
- [ ] 扩展OpenBB到更多金融产品
- [ ] 添加技术分析指标
- [ ] 实现实时数据流
- [ ] 优化辩论算法
### 长期目标
- [ ] 集成OpenBB Workspace
- [ ] 添加投资组合分析
- [ ] 实现量化策略
- [ ] 多语言支持
## 贡献指南
1. Fork项目
2. 创建功能分支: `git checkout -b feature/new-feature`
3. 提交更改: `git commit -m 'Add new feature'`
4. 推送分支: `git push origin feature/new-feature`
5. 创建Pull Request
## 许可证
MIT License - 详见 LICENSE 文件
---
**最后更新**: 2025-08-18
**版本**: v2.0.0
**维护者**: 稷下学宫团队

View File

@ -1,3 +1,12 @@
---
title: "天工开物Gemini协同工作计划"
status: "summer"
owner: "Gemini"
created: "2025-08-17"
review_by: "2026-02-17"
tags: ["planning", "gemini", "core"]
---
# 📜 天工开物Gemini协同工作计划
> “道生一,一生二,二生三,三生万物。” —— 《道德经》

76
MEMORY_BANK_USER_GUIDE.md Normal file
View File

@ -0,0 +1,76 @@
# 炼妖壶 (Lianyaohu) - 稷下学宫AI辩论系统
## 八仙记忆银行配置与使用指南
每个八仙智能体都有一个专属的记忆银行用于存储其在不同辩论主题下的记忆。系统支持两种记忆后端Google Vertex AI Memory Bank 和 Cloudflare AutoRAG。
## 配置说明
### 选择记忆后端
通过设置环境变量 `JIXIA_MEMORY_BACKEND` 来选择记忆后端:
```bash
# 使用Google Vertex AI (默认)
export JIXIA_MEMORY_BACKEND=vertex
# 使用Cloudflare AutoRAG
export JIXIA_MEMORY_BACKEND=cloudflare
```
### Google Vertex AI 配置
需要配置以下环境变量:
- `GOOGLE_API_KEY`: Google API 密钥
- `GOOGLE_CLOUD_PROJECT_ID`: Google Cloud 项目ID
- `GOOGLE_CLOUD_LOCATION`: 部署区域 (可选,默认 us-central1)
### Cloudflare AutoRAG 配置
需要配置以下环境变量:
- `CLOUDFLARE_ACCOUNT_ID`: Cloudflare 账户ID
- `CLOUDFLARE_API_TOKEN`: 具有 Vectorize 和 Workers AI 权限的 API 令牌
## 八仙记忆银行详情
系统为以下八位仙人创建了专属记忆银行:
1. **铁拐李 (tieguaili)** - 擅长技术分析和风险控制
2. **汉钟离 (hanzhongli)** - 注重基本面分析和长期价值
3. **张果老 (zhangguolao)** - 擅长宏观趋势分析和周期判断
4. **蓝采和 (lancaihe)** - 关注市场情绪和资金流向
5. **何仙姑 (hexiangu)** - 精于财务数据分析和估值模型
6. **吕洞宾 (lvdongbin)** - 善于多维度综合分析和创新策略
7. **韩湘子 (hanxiangzi)** - 擅长行业比较和相对价值分析
8. **曹国舅 (caoguojiu)** - 注重合规性、社会责任和ESG因素
## 使用方法
在代码中使用记忆银行:
```python
from src.jixia.memory.factory import get_memory_backend
# 获取记忆后端 (自动根据环境变量选择)
memory_bank = get_memory_backend()
# 为吕洞宾添加偏好记忆
await memory_bank.add_memory(
agent_name="lvdongbin",
content="倾向于使用DCF模型评估科技股的内在价值",
memory_type="preference",
debate_topic="TSLA投资分析"
)
# 搜索吕洞宾关于TSLA的记忆
memories = await memory_bank.search_memories(
agent_name="lvdongbin",
query="TSLA",
memory_type="preference"
)
# 获取上下文
context = await memory_bank.get_agent_context("lvdongbin", "TSLA投资分析")
```

1
OpenBB Symbolic link
View File

@ -0,0 +1 @@
/home/ben/github/OpenBB

171
QWEN.md Normal file
View File

@ -0,0 +1,171 @@
---
title: "Qwen Code Context for 炼妖壶 (Lianyaohu) Project"
status: summer
created: 2025-08-17
owner: Qwen
review_by: "2026-02-17"
tags: ["context", "qwen", "core"]
---
# Qwen Code Context for 炼妖壶 (Lianyaohu) Project
## Project Overview
炼妖壶 (Lianyaohu) - 稷下学宫AI辩论系统 is a Python-based multi-AI-agent debate platform rooted in traditional Chinese philosophy. The system allows AI agents, represented by the Eight Immortals of Chinese folklore, to engage in debates on investment topics, leveraging data from multiple financial APIs.
Key technologies and components:
- **Python**: Primary language for the core application.
- **Streamlit**: Web framework for the user interface.
- **Google Vertex AI**: Integration with Google's AI platform, including Memory Bank for persistent agent memory.
- **Google ADK (Agent Development Kit)**: Framework for building and managing AI agents, replacing the older OpenAI Swarm approach.
- **RapidAPI**: Data engine powered by 17 API subscriptions for financial data.
- **Doppler**: Centralized configuration and secrets management.
- **Cloudflare AutoRAG/Vectorize** (New): Integrated as a memory backend (RAG).
The project has two main modes:
1. **Traditional Mode**: Data-driven debates using RapidAPI.
2. **Swarm Mode**: AI-agent debates using the OpenAI Swarm framework (can use OpenRouter or Ollama). *Note: Migration to Google ADK is underway.*
It also features an analysis module based on the Confucian "天下体系" (All-under-Heaven system) to model capital ecosystems.
## Project Structure
```
liurenchaxin/
├── app/ # Application entry points
│ ├── streamlit_app.py # Main Streamlit application
│ └── tabs/ # Functional UI modules
│ └── tianxia_tab.py # All-under-Heaven system analysis
├── src/ # Core business logic
│ └── jixia/ # Jixia Academy system
│ ├── engines/ # Core engines (e.g., perpetual_engine.py)
│ ├── agents/ # AI agents with memory enhancements
│ ├── memory/ # Vertex AI Memory Bank and Cloudflare AutoRAG integration
│ └── debates/ # Debate logic (including Swarm and ADK)
├── config/ # Configuration management
│ └── doppler_config.py # Interface for Doppler secrets
├── scripts/ # Utility scripts
├── tests/ # Test suite
├── .kiro/ # Kiro AI assistant configuration
│ └── steering/ # AI guiding rules
├── requirements.txt # Python dependencies
└── package.json # Node.js dependencies (for Cloudflare Worker tests)
```
## Building and Running
### Environment Setup
1. Create and activate a Python virtual environment:
```bash
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate.bat or .venv\Scripts\Activate.ps1
```
2. Install Python dependencies:
```bash
pip install -r requirements.txt
```
3. Configure secrets using Doppler (or environment variables directly, though not recommended). Required keys include `RAPIDAPI_KEY` and either `OPENROUTER_API_KEY_1` or `GOOGLE_API_KEY`.
### Running the Application
Start the main Streamlit web interface:
```bash
streamlit run app/streamlit_app.py
# Optionally specify a port:
# streamlit run app/streamlit_app.py --server.port 8501
```
### Installing Optional Components
To use the Swarm debate features:
```bash
pip install git+https://github.com/openai/swarm.git
# Or potentially:
# python scripts/install_swarm.py
```
To use Google ADK (for newer features):
```bash
pip install google-adk
# Or for the latest development version:
# pip install git+https://github.com/google/adk-python.git@main
```
### Testing
Several test and validation scripts exist:
```bash
# Validate configuration
python config/doppler_config.py
# Test API connections (specific script names may vary)
# python scripts/test_*.py
# Test Vertex AI Memory Bank
python tests/test_vertex_memory_bank.py
# Test Google ADK functionality
python src/jixia/debates/adk_debate_test.py
python src/jixia/debates/adk_simple_debate.py
python src/jixia/debates/adk_real_debate.py
python src/jixia/debates/adk_memory_debate.py
# Run other specific tests
python tests/test_*.py
```
*(Note: The `scripts/test_openrouter_api.py` file mentioned in README.md was not found in the directory listing.)*
## Development Conventions
- **Language**: Python 3.x
- **Coding Style**: PEP 8
- **Type Hinting**: Extensive use of type annotations (`typing` module) and `dataclass` for data structures.
- **Configuration**: Centralized configuration management via `config/doppler_config.py`, strictly avoiding hardcoded secrets.
- **Security**: Zero hardcoded keys, environment isolation, automated security scanning.
- **Testing**: Unit tests for core functions, integration tests for API calls, and validation tests for configuration are required.
## Key Files for Quick Reference
- `README.md`: Main project documentation.
- `app/streamlit_app.py`: Entry point for the web application.
- `src/jixia/engines/perpetual_engine.py`: Core data engine for providing information to agents.
- `src/jixia/memory/vertex_memory_bank.py`: Integration with Google Vertex AI Memory Bank.
- `src/jixia/memory/factory.py`: Factory for creating memory backends (Vertex or Cloudflare).
- `src/jixia/agents/memory_enhanced_agent.py`: Implementation of agents with persistent memory, using Google ADK.
- `src/jixia/debates/adk_*.py`: Implementations of debate systems using Google ADK.
- `config/doppler_config.py`: Central place for accessing configuration and secrets.
- `requirements.txt`: Python dependencies.
- `QUICK_START_GUIDE.md`: Instructions for quick setup and basic usage examples.
- `MIGRATION_STATUS.md`: Detailed report on the migration from OpenRouter/Swarm to Google ADK.
- `RELEASE_v2.0.0.md`: Release notes for the v2.0.0 release, detailing the new debate system and memory features.
## Vertex AI Configuration
The project can be configured to use Google Vertex AI. The configuration is managed primarily through environment variables, though Doppler can also be used.
### Required Environment Variables for Vertex AI
- `GOOGLE_GENAI_USE_VERTEXAI`: Set to `TRUE` to enable Vertex AI.
- `GOOGLE_CLOUD_PROJECT_ID`: Your Google Cloud Project ID.
- `GOOGLE_API_KEY`: Your Google API Key (used for authentication when not using Vertex AI service account).
- `GOOGLE_CLOUD_LOCATION`: (Optional) The location for Vertex AI resources (defaults to `us-central1`).
- `VERTEX_MEMORY_BANK_ENABLED`: (Optional) Set to `TRUE` to enable Vertex AI Memory Bank (defaults to `TRUE`).
### Authentication for Vertex AI
Authentication for Vertex AI can be handled in two ways:
1. **Service Account Key**: Set the `GOOGLE_SERVICE_ACCOUNT_KEY` environment variable with the path to your service account key file.
2. **Application Default Credentials (ADC)**: Run `gcloud auth application-default login` to generate the ADC file at `~/.config/gcloud/application_default_credentials.json`.
The system will first check for a service account key. If not found, it will fall back to using Application Default Credentials.
### Configuration Validation
The `test_vertex_ai_setup.py` script can be used to validate your Vertex AI configuration:
```bash
python test_vertex_ai_setup.py
```
This script checks for the presence of required environment variables and the ADC file, and verifies that the configuration is correct for using Vertex AI with the application.

View File

@ -121,17 +121,39 @@ python tests/test_vertex_memory_bank.py
## 🎭 稷下学宫八仙论道
### 传统模式 (RapidAPI数据驱动)
基于中国传统八仙文化,每位仙人都有专属的投资哲学和数据源:
### 辩论顺序 (Debate Order)
- **🧙‍♂️ 吕洞宾** (乾): 主动投资,综合分析
- **👸 何仙姑** (坤): 被动ETF稳健跟踪
- **👴 张果老** (兑): 传统价值,基本面分析
- **🎵 韩湘子** (艮): 新兴资产,趋势捕捉
- **⚡ 汉钟离** (离): 热点追踪,实时数据
- **🎭 蓝采和** (坎): 潜力股,价值发现
- **👑 曹国舅** (震): 机构视角,专业分析
- **🦯 铁拐李** (巽): 逆向投资,反向思维
辩论严格遵循中国哲学中的先天八卦顺序,分为两个阶段:
1. **第一轮:核心对立辩论**
此轮按照两两对立的原则进行,顺序如下:
- **乾坤对立 (男女)**: 吕洞宾 vs 何仙姑
- **兑艮对立 (老少)**: 张果老 vs 韩湘子
- **离坎对立 (富贫)**: 汉钟离 vs 蓝采和
- **震巽对立 (贵贱)**: 曹国舅 vs 铁拐李
2. **第二轮:顺序发言**
此轮按照先天八卦的完整顺序进行 (乾一, 兑二, 离三, 震四, 巽五, 坎六, 艮七, 坤八),顺序如下:
- **乾**: 吕洞宾
- **兑**: 张果老
- **离**: 汉钟离
- **震**: 曹国舅
- **巽**: 铁拐李
- **坎**: 蓝采和
- **艮**: 韩湘子
- **坤**: 何仙姑
### 人物设定 (Character Settings)
基于中国传统八仙文化,每位仙人都有专属的卦象、代表和人设:
- **吕洞宾** (乾): 男性代表
- **何仙姑** (坤): 女性代表
- **张果老** (兑): 老者代表
- **韩湘子** (艮): 少年代表
- **汉钟离** (离): 富者代表
- **蓝采和** (坎): 贫者代表
- **曹国舅** (震): 贵者代表
- **铁拐李** (巽): 贱者代表
### Swarm模式 (AI智能体辩论)
基于OpenAI Swarm框架的四仙智能体辩论系统

View File

@ -38,7 +38,8 @@ def show_header():
with col2:
st.metric("AI模型", "OpenRouter")
with col3:
st.metric("数据源", "RapidAPI")
# 更新数据源展示,加入 OpenBB
st.metric("数据源", "RapidAPI + OpenBB")
def show_sidebar():
"""显示侧边栏"""
@ -69,14 +70,11 @@ def show_sidebar():
if st.button("🏛️ 启动八仙论道"):
start_jixia_debate()
if st.button("🚀 启动Swarm论道"):
start_swarm_debate()
def test_api_connections():
"""测试API连接"""
with st.spinner("正在测试API连接..."):
try:
from scripts.test_openrouter_api import test_openrouter_api, test_rapidapi_connection
from scripts.api_health_check import test_openrouter_api, test_rapidapi_connection
openrouter_ok = test_openrouter_api()
rapidapi_ok = test_rapidapi_connection()
@ -106,43 +104,6 @@ def start_jixia_debate():
except Exception as e:
st.error(f"❌ 辩论启动失败: {str(e)}")
def start_swarm_debate():
"""启动Swarm八仙论道"""
with st.spinner("正在启动Swarm八仙论道..."):
try:
import asyncio
from src.jixia.debates.swarm_debate import start_ollama_debate, start_openrouter_debate
# 选择模式
mode = st.session_state.get('swarm_mode', 'ollama')
topic = st.session_state.get('swarm_topic', 'TSLA股价走势分析')
# 构建上下文
context = {
"market_sentiment": "谨慎乐观",
"volatility": "中等",
"technical_indicators": {
"RSI": 65,
"MACD": "金叉",
"MA20": "上穿"
}
}
# 运行辩论
if mode == 'ollama':
result = asyncio.run(start_ollama_debate(topic, context))
else:
result = asyncio.run(start_openrouter_debate(topic, context))
if result:
st.success("✅ Swarm八仙论道完成")
st.json(result)
else:
st.error("❌ Swarm辩论失败")
except Exception as e:
st.error(f"❌ Swarm辩论启动失败: {str(e)}")
def main():
"""主函数"""
configure_page()
@ -152,8 +113,8 @@ def main():
# 主内容区域
st.markdown("---")
# 选项卡
tab1, tab2, tab3 = st.tabs(["🏛️ 稷下学宫", "🌍 天下体系", "📊 数据分析"])
# 选项卡(新增 OpenBB 数据页签)
tab1, tab2, tab3, tab4 = st.tabs(["🏛️ 稷下学宫", "🌍 天下体系", "📊 数据分析", "📈 OpenBB 数据"])
with tab1:
st.markdown("### 🏛️ 稷下学宫 - 八仙论道")
@ -162,37 +123,18 @@ def main():
# 辩论模式选择
debate_mode = st.selectbox(
"选择辩论模式",
["传统模式 (RapidAPI数据)", "Swarm模式 (AI智能体)"],
key="debate_mode_select"
["ADK模式 (太上老君主持)", "传统模式 (RapidAPI数据)"]
)
if debate_mode == "Swarm模式 (AI智能体)":
# Swarm模式配置
col1, col2 = st.columns(2)
with col1:
swarm_mode = st.selectbox(
"AI服务模式",
["ollama", "openrouter"],
key="swarm_mode_select"
)
st.session_state.swarm_mode = swarm_mode
with col2:
swarm_topic = st.text_input(
"辩论主题",
value="英伟达股价走势AI泡沫还是技术革命",
key="swarm_topic_input"
)
st.session_state.swarm_topic = swarm_topic
if st.button("🚀 启动Swarm八仙论道", type="primary"):
start_swarm_debate()
if debate_mode == "ADK模式 (太上老君主持)":
from app.tabs.adk_debate_tab import render_adk_debate_tab
render_adk_debate_tab()
else:
# 传统模式
col1, col2 = st.columns([2, 1])
with col1:
topic = st.text_input("辩论主题 (股票代码)", value="TSLA", key="debate_topic")
topic = st.text_input("辩论主题 (股票代码)", value="TSLA")
with col2:
if st.button("🎭 开始辩论", type="primary"):
start_debate_session(topic)
@ -236,6 +178,14 @@ def main():
except Exception as e:
st.warning(f"⚠️ 无法加载统计数据: {str(e)}")
with tab4:
st.markdown("### 📈 OpenBB 数据")
try:
from app.tabs.openbb_tab import render_openbb_tab
render_openbb_tab()
except Exception as e:
st.error(f"❌ OpenBB 模块加载失败: {str(e)}")
def start_debate_session(topic: str):
"""启动辩论会话"""
if not topic:

171
app/tabs/adk_debate_tab.py Normal file
View File

@ -0,0 +1,171 @@
import streamlit as st
import asyncio
import sys
from pathlib import Path
from typing import Dict, Any, List
# Ensure the main project directory is in the Python path
project_root = Path(__file__).parent.parent.parent
sys.path.insert(0, str(project_root))
from google.adk import Agent, Runner
from google.adk.sessions import InMemorySessionService, Session
from google.genai import types
async def _get_llm_reply(runner: Runner, session: Session, prompt: str) -> str:
"""Helper function to call a Runner and get a text reply."""
content = types.Content(role='user', parts=[types.Part(text=prompt)])
response = runner.run_async(
user_id=session.user_id,
session_id=session.id,
new_message=content
)
reply = ""
async for event in response:
if hasattr(event, 'content') and event.content and hasattr(event.content, 'parts'):
for part in event.content.parts:
if hasattr(part, 'text') and part.text:
reply += str(part.text)
elif hasattr(event, 'text') and event.text:
reply += str(event.text)
return reply.strip()
async def run_adk_debate_streamlit(topic: str, participants: List[str], rounds: int):
"""
Runs the ADK turn-based debate and yields each statement for Streamlit display.
"""
try:
yield "🚀 **启动ADK八仙轮流辩论 (太上老君主持)...**"
all_immortals = ["铁拐李", "吕洞宾", "何仙姑", "张果老", "蓝采和", "汉钟离", "韩湘子", "曹国舅"]
if not participants:
participants = all_immortals
character_configs = {
"太上老君": {"name": "太上老君", "model": "gemini-1.5-pro", "instruction": "你是太上老君天道化身辩论的主持人。你的言辞沉稳、公正、充满智慧。你的任务是1. 对辩论主题进行开场介绍。2. 在每轮开始时进行引导。3. 在辩论结束后,对所有观点进行全面、客观的总结。保持中立,不偏袒任何一方。"},
"铁拐李": {"name": "铁拐李", "model": "gemini-1.5-flash", "instruction": "你是铁拐李,八仙中的逆向思维专家。你善于从批判和质疑的角度看问题,发言风格直接、犀利,但富有智慧。"},
"吕洞宾": {"name": "吕洞宾", "model": "gemini-1.5-flash", "instruction": "你是吕洞宾,八仙中的理性分析者。你善于平衡各方观点,用理性和逻辑来分析问题,发言风格温和而深刻。"},
"何仙姑": {"name": "何仙姑", "model": "gemini-1.5-flash", "instruction": "你是何仙姑,八仙中的风险控制专家。你总是从风险管理的角度思考问题,善于发现潜在危险,发言风格谨慎、细致。"},
"张果老": {"name": "张果老", "model": "gemini-1.5-flash", "instruction": "你是张果老,八仙中的历史智慧者。你善于从历史数据中寻找规律和智慧,提供长期视角,发言风格沉稳、博学。"},
"蓝采和": {"name": "蓝采和", "model": "gemini-1.5-flash", "instruction": "你是蓝采和,八仙中的创新思维者。你善于从新兴视角和非传统方法来看待问题,发言风格活泼、新颖。"},
"汉钟离": {"name": "汉钟离", "model": "gemini-1.5-flash", "instruction": "你是汉钟离,八仙中的平衡协调者。你善于综合各方观点,寻求和谐统一的解决方案,发言风格平和、包容。"},
"韩湘子": {"name": "韩湘子", "model": "gemini-1.5-flash", "instruction": "你是韩湘子,八仙中的艺术感知者。你善于从美学和感性的角度分析问题,发言风格优雅、感性。"},
"曹国舅": {"name": "曹国舅", "model": "gemini-1.5-flash", "instruction": "你是曹国舅,八仙中的实务执行者。你关注实际操作和具体细节,发言风格务实、严谨。"}
}
session_service = InMemorySessionService()
session = await session_service.create_session(state={}, app_name="稷下学宫八仙论道系统-Streamlit", user_id="st_user")
runners: Dict[str, Runner] = {}
for name, config in character_configs.items():
if name == "太上老君" or name in participants:
agent = Agent(name=config["name"], model=config["model"], instruction=config["instruction"])
runners[name] = Runner(app_name="稷下学宫八仙论道系统-Streamlit", agent=agent, session_service=session_service)
host_runner = runners.get("太上老君")
if not host_runner:
yield "❌ **主持人太上老君初始化失败。**"
return
yield f"🎯 **参与仙人**: {', '.join(participants)}"
debate_history = []
# Opening statement
opening_prompt = f"请为本次关于“{topic}”的辩论,发表一段公正、深刻的开场白,并宣布辩论开始。"
opening_statement = await _get_llm_reply(host_runner, session, opening_prompt)
yield f"👑 **太上老君**: {opening_statement}"
# Debate rounds
for round_num in range(rounds):
round_intro_prompt = f"请为第 {round_num + 1} 轮辩论说一段引导语。"
round_intro = await _get_llm_reply(host_runner, session, round_intro_prompt)
yield f"👑 **太上老君**: {round_intro}"
for name in participants:
if name not in runners: continue
history_context = f"\n最近的论道内容:\n" + "\n".join([f"- {h}" for h in debate_history[-5:]]) if debate_history else ""
prompt = f"论道主题: {topic}{history_context}\n\n请从你的角色特点出发,简洁地发表观点。"
reply = await _get_llm_reply(runners[name], session, prompt)
yield f"🗣️ **{name}**: {reply}"
debate_history.append(f"{name}: {reply}")
await asyncio.sleep(1)
# Summary
summary_prompt = f"辩论已结束。以下是完整的辩论记录:\n\n{' '.join(debate_history)}\n\n请对本次辩论进行全面、公正、深刻的总结。"
summary = await _get_llm_reply(host_runner, session, summary_prompt)
yield f"👑 **太上老君**: {summary}"
for runner in runners.values():
await runner.close()
yield "🎉 **ADK八仙轮流辩论完成!**"
except Exception as e:
yield f"❌ **运行ADK八仙轮流辩论失败**: {e}"
import traceback
st.error(traceback.format_exc())
def render_adk_debate_tab():
"""Renders the Streamlit UI for the ADK Debate tab."""
st.markdown("### 🏛️ 八仙论道 (ADK版 - 太上老君主持)")
topic = st.text_input(
"辩论主题",
value="AI是否应该拥有创造力",
key="adk_topic_input"
)
all_immortals = ["铁拐李", "吕洞宾", "何仙姑", "张果老", "蓝采和", "汉钟离", "韩湘子", "曹国舅"]
col1, col2 = st.columns(2)
with col1:
rounds = st.number_input("辩论轮数", min_value=1, max_value=5, value=1, key="adk_rounds_input")
with col2:
participants = st.multiselect(
"选择参与的仙人 (默认全选)",
options=all_immortals,
default=all_immortals,
key="adk_participants_select"
)
if st.button("🚀 开始论道", key="start_adk_debate_button", type="primary"):
if not topic:
st.error("请输入辩论主题。")
return
if not participants:
st.error("请至少选择一位参与的仙人。")
return
st.markdown("---")
st.markdown("#### 📜 论道实录")
# Placeholder for real-time output
output_container = st.empty()
full_log = ""
# Run the async debate function
try:
# Get a new event loop for the thread
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
# Create a coroutine object
coro = run_adk_debate_streamlit(topic, participants, rounds)
# Run the coroutine until completion
for message in loop.run_until_complete(async_generator_to_list(coro)):
full_log += message + "\n\n"
output_container.markdown(full_log)
except Exception as e:
st.error(f"启动辩论时发生错误: {e}")
async def async_generator_to_list(async_gen):
"""Helper to consume an async generator and return a list of its items."""
return [item async for item in async_gen]

184
app/tabs/openbb_tab.py Normal file
View File

@ -0,0 +1,184 @@
import streamlit as st
import pandas as pd
import plotly.express as px
from datetime import datetime, timedelta
def _check_openbb_installed() -> bool:
try:
# OpenBB v4 推荐用法: from openbb import obb
from openbb import obb # noqa: F401
return True
except Exception:
return False
def _load_price_data(symbol: str, days: int = 365) -> pd.DataFrame:
"""Fetch OHLCV using OpenBB v4 when available; otherwise return demo/synthetic data."""
end = datetime.utcnow().date()
start = end - timedelta(days=days)
# 优先使用 OpenBB v4
try:
from openbb import obb
# 先尝试股票路由
try:
out = obb.equity.price.historical(
symbol,
start_date=str(start),
end_date=str(end),
)
except Exception:
out = None
# 若股票无数据,再尝试 ETF 路由
if out is None or (hasattr(out, "is_empty") and out.is_empty):
try:
out = obb.etf.price.historical(
symbol,
start_date=str(start),
end_date=str(end),
)
except Exception:
out = None
if out is not None:
if hasattr(out, "to_df"):
df = out.to_df()
elif hasattr(out, "to_dataframe"):
df = out.to_dataframe()
else:
# 兜底: 有些 provider 返回可序列化对象
df = pd.DataFrame(out) # type: ignore[arg-type]
# 规格化列名
if not isinstance(df, pd.DataFrame) or df.empty:
raise ValueError("OpenBB 返回空数据")
# 有的表以 index 为日期
if 'date' in df.columns:
df['Date'] = pd.to_datetime(df['date'])
elif df.index.name in ('date', 'Date') or isinstance(df.index, pd.DatetimeIndex):
df = df.copy()
df['Date'] = pd.to_datetime(df.index)
else:
# 尝试查找常见日期列
for cand in ['timestamp', 'time', 'datetime']:
if cand in df.columns:
df['Date'] = pd.to_datetime(df[cand])
break
# 归一化收盘价列
close_col = None
for cand in ['adj_close', 'close', 'Close', 'price', 'close_price', 'c']:
if cand in df.columns:
close_col = cand
break
if close_col is None:
raise ValueError("未找到收盘价列")
df['Close'] = pd.to_numeric(df[close_col], errors='coerce')
# 仅保留需要列并清洗
if 'Date' not in df.columns:
raise ValueError("未找到日期列")
df = df[['Date', 'Close']].dropna()
df = df.sort_values('Date').reset_index(drop=True)
# 限定时间窗口(有些 provider 可能返回更长区间)
df = df[df['Date'].dt.date.between(start, end)]
if df.empty:
raise ValueError("清洗后为空")
return df
except Exception:
# 如果 OpenBB 不可用或调用失败,进入本地演示/合成数据兜底
pass
# Fallback to demo from examples/data
try:
from pathlib import Path
root = Path(__file__).resolve().parents[2]
demo_map = {
'AAPL': root / 'examples' / 'data' / 'demo_results_aapl.json',
'MSFT': root / 'examples' / 'data' / 'demo_results_msft.json',
'TSLA': root / 'examples' / 'data' / 'demo_results_tsla.json',
}
path = demo_map.get(symbol.upper())
if path and path.exists():
df = pd.read_json(path)
if 'date' in df.columns:
df['Date'] = pd.to_datetime(df['date'])
if 'close' in df.columns:
df['Close'] = df['close']
df = df[['Date', 'Close']].dropna().sort_values('Date').reset_index(drop=True)
# 裁剪到时间窗口
df = df[df['Date'].dt.date.between(start, end)]
return df
except Exception:
pass
# Last resort: minimal synthetic data避免 FutureWarning
dates = pd.date_range(end=end, periods=min(days, 180))
return pd.DataFrame({
'Date': dates,
'Close': pd.Series(range(len(dates))).rolling(5).mean().bfill()
})
def _kpis_from_df(df: pd.DataFrame) -> dict:
if df.empty or 'Close' not in df.columns:
return {"最新价": "-", "近30日涨幅": "-", "最大回撤(近90日)": "-"}
latest = float(df['Close'].iloc[-1])
last_30 = df.tail(30)
if len(last_30) > 1:
pct_30 = (last_30['Close'].iloc[-1] / last_30['Close'].iloc[0] - 1) * 100
else:
pct_30 = 0.0
# max drawdown over last 90 days
lookback = df.tail(90)['Close']
roll_max = lookback.cummax()
drawdown = (lookback / roll_max - 1).min() * 100
return {
"最新价": f"{latest:,.2f}",
"近30日涨幅": f"{pct_30:.2f}%",
"最大回撤(近90日)": f"{drawdown:.2f}%",
}
def render_openbb_tab():
st.write("使用 OpenBB如可用或演示数据展示市场概览。")
col_a, col_b = st.columns([2, 1])
with col_b:
symbol = st.text_input("股票/ETF 代码", value="AAPL")
days = st.slider("时间窗口(天)", 90, 720, 365, step=30)
obb_ready = _check_openbb_installed()
if obb_ready:
st.success("OpenBB 已安装 ✅")
else:
st.info("未检测到 OpenBB将使用演示数据。可在 requirements.txt 中加入 openbb 后安装启用。")
with col_a:
df = _load_price_data(symbol, days)
if df is None or df.empty:
st.warning("未获取到数据")
return
# 绘制收盘价
if 'Date' in df.columns and 'Close' in df.columns:
fig = px.line(df, x='Date', y='Close', title=f"{symbol.upper()} 收盘价")
st.plotly_chart(fig, use_container_width=True)
else:
st.dataframe(df.head())
# KPI 卡片
st.markdown("#### 关键指标")
kpis = _kpis_from_df(df)
k1, k2, k3 = st.columns(3)
k1.metric("最新价", kpis["最新价"])
k2.metric("近30日涨幅", kpis["近30日涨幅"])
k3.metric("最大回撤(近90日)", kpis["最大回撤(近90日)"])
# 未来:基本面、新闻、情绪等组件占位
with st.expander("🚧 更多组件(即将推出)"):
st.write("基本面卡片、新闻与情绪、宏观指标、策略筛选等将逐步接入。")

View File

@ -7,6 +7,22 @@ Doppler配置管理模块
import os
from typing import Optional, Dict, Any
# 新增:优先加载 .env若存在
try:
from dotenv import load_dotenv, find_dotenv # type: ignore
_env_path = find_dotenv()
if _env_path:
load_dotenv(_env_path)
else:
# 尝试从项目根目录加载 .env
from pathlib import Path
root_env = Path(__file__).resolve().parents[1] / '.env'
if root_env.exists():
load_dotenv(root_env)
except Exception:
# 若未安装 python-dotenv 或加载失败,则跳过
pass
def get_secret(key: str, default: Optional[str] = None) -> Optional[str]:
"""
从Doppler或环境变量获取密钥
@ -18,11 +34,21 @@ def get_secret(key: str, default: Optional[str] = None) -> Optional[str]:
Returns:
密钥值或默认值
"""
# 首先尝试从环境变量获取Doppler会注入到环境变量
value = os.getenv(key, default)
# 临时的、不安全的解决方案,仅用于测试
temp_secrets = {
"RAPIDAPI_KEY": "your_rapidapi_key_here",
"OPENROUTER_API_KEY_1": "your_openrouter_key_here",
"GOOGLE_API_KEY": "your_google_api_key_here"
}
# 首先尝试从环境变量获取Doppler会注入到环境变量或由 .env 加载)
value = os.getenv(key)
if not value:
value = temp_secrets.get(key, default)
if not value and default is None:
raise ValueError(f"Required secret '{key}' not found in environment variables")
raise ValueError(f"Required secret '{key}' not found in environment variables or temp_secrets")
return value
@ -122,12 +148,11 @@ def validate_config(mode: str = "hybrid") -> bool:
"""
print(f"🔧 当前模式: {mode}")
# 基础必需配置
base_required = ['RAPIDAPI_KEY']
required_keys = []
# 模式特定配置
if mode == "openrouter":
required_keys = base_required + ['OPENROUTER_API_KEY_1']
required_keys.extend(['RAPIDAPI_KEY', 'OPENROUTER_API_KEY_1'])
# 验证 OpenRouter 配置
openrouter_key = get_secret('OPENROUTER_API_KEY_1', '')
if not openrouter_key:
@ -136,7 +161,11 @@ def validate_config(mode: str = "hybrid") -> bool:
print("✅ OpenRouter 配置验证通过")
elif mode == "google_adk":
required_keys = base_required + ['GOOGLE_API_KEY']
genai_config = get_google_genai_config()
use_vertex = genai_config.get('use_vertex_ai', 'FALSE').upper() == 'TRUE'
if not use_vertex:
required_keys.extend(['GOOGLE_API_KEY'])
# 验证 Google ADK 配置
google_key = get_secret('GOOGLE_API_KEY', '')
if not google_key:
@ -145,10 +174,12 @@ def validate_config(mode: str = "hybrid") -> bool:
print("然后运行: doppler secrets set GOOGLE_API_KEY=your_key")
return False
print(f"✅ Google ADK 配置验证通过 (密钥长度: {len(google_key)} 字符)")
else:
print("✅ Google ADK (Vertex AI) 配置验证通过")
# 显示 Google GenAI 配置
genai_config = get_google_genai_config()
print(f"📱 Google GenAI 配置:")
if not use_vertex:
print(f" - API Key: 已配置")
print(f" - Use Vertex AI: {genai_config.get('use_vertex_ai', False)}")
if genai_config.get('project_id'):
@ -157,7 +188,7 @@ def validate_config(mode: str = "hybrid") -> bool:
print(f" - Location: {genai_config['location']}")
else: # hybrid mode
required_keys = base_required
required_keys.extend(['RAPIDAPI_KEY'])
# 检查至少有一个AI API密钥
ai_keys = ['OPENROUTER_API_KEY_1', 'GOOGLE_API_KEY']
if not any(os.getenv(key) for key in ai_keys):

129
debate_state.json Normal file
View File

@ -0,0 +1,129 @@
{
"context": {
"current_stage": "起",
"stage_progress": 4,
"total_handoffs": 0,
"current_speaker": "汉钟离",
"last_message": "合:交替总结,最终论证"
},
"debate_history": [
{
"timestamp": "2025-08-17T17:50:06.364902",
"stage": "起",
"stage_progress": 0,
"speaker": "吕洞宾",
"message": "起:八仙按先天八卦顺序阐述观点",
"total_handoffs": 0
},
{
"timestamp": "2025-08-17T17:50:06.364916",
"stage": "起",
"stage_progress": 1,
"speaker": "何仙姑",
"message": "承:雁阵式承接,总体阐述+讥讽",
"total_handoffs": 0
},
{
"timestamp": "2025-08-17T17:50:06.364924",
"stage": "起",
"stage_progress": 2,
"speaker": "铁拐李",
"message": "转自由辩论36次handoff",
"total_handoffs": 0
},
{
"timestamp": "2025-08-17T17:50:06.364929",
"stage": "起",
"stage_progress": 3,
"speaker": "汉钟离",
"message": "合:交替总结,最终论证",
"total_handoffs": 0
}
],
"memory_data": {
"speaker_memories": {
"吕洞宾": [
{
"timestamp": "2025-08-17T17:50:06.364908",
"stage": "起",
"message": "起:八仙按先天八卦顺序阐述观点",
"context": {
"stage_progress": 0,
"total_handoffs": 0
}
}
],
"何仙姑": [
{
"timestamp": "2025-08-17T17:50:06.364918",
"stage": "起",
"message": "承:雁阵式承接,总体阐述+讥讽",
"context": {
"stage_progress": 1,
"total_handoffs": 0
}
}
],
"铁拐李": [
{
"timestamp": "2025-08-17T17:50:06.364925",
"stage": "起",
"message": "转自由辩论36次handoff",
"context": {
"stage_progress": 2,
"total_handoffs": 0
}
}
],
"汉钟离": [
{
"timestamp": "2025-08-17T17:50:06.364930",
"stage": "起",
"message": "合:交替总结,最终论证",
"context": {
"stage_progress": 3,
"total_handoffs": 0
}
}
]
},
"debate_memories": [
{
"timestamp": "2025-08-17T17:50:06.364908",
"stage": "起",
"message": "起:八仙按先天八卦顺序阐述观点",
"context": {
"stage_progress": 0,
"total_handoffs": 0
}
},
{
"timestamp": "2025-08-17T17:50:06.364918",
"stage": "起",
"message": "承:雁阵式承接,总体阐述+讥讽",
"context": {
"stage_progress": 1,
"total_handoffs": 0
}
},
{
"timestamp": "2025-08-17T17:50:06.364925",
"stage": "起",
"message": "转自由辩论36次handoff",
"context": {
"stage_progress": 2,
"total_handoffs": 0
}
},
{
"timestamp": "2025-08-17T17:50:06.364930",
"stage": "起",
"message": "合:交替总结,最终论证",
"context": {
"stage_progress": 3,
"total_handoffs": 0
}
}
]
}
}

View File

@ -0,0 +1,19 @@
# 八仙辩论次序指南 (Baxian Debate Order Guide)
## 核心原则
辩论次序基于“对立统一”的哲学思想,将八仙分为四组,每组代表一对核心矛盾。此规则为项目级最终版本,作为后续所有相关实现的基准。
## 分组
1. **乾坤 / 男女:** 吕洞宾 (乾/男) vs 何仙姑 (坤/女)
* **逻辑:** 阳与阴,男与女的基本对立。
2. **老少:** 张果老 vs 韩湘子
* **逻辑:** 年长与年少,经验与活力的对立。
3. **贫富:** 汉钟离 vs 蓝采和
* **逻辑:** 富贵与贫穷,物质与精神的对立。汉钟离出身将门,而蓝采和的形象通常是贫穷的歌者。
4. **贵贱:** 曹国舅 vs 铁拐李
* **逻辑:** 皇亲国戚与街头乞丐,社会地位的极端对立。

View File

@ -0,0 +1,87 @@
# 八仙记忆银行文档 (Cloudflare AutoRAG)
每个八仙智能体都有一个专属的记忆空间用于存储其在不同辩论主题下的记忆。这些记忆通过Cloudflare Vectorize进行向量索引并利用Workers AI进行语义检索。
## 记忆类型
1. **对话记忆 (conversation)**: 智能体在特定辩论中的发言和互动记录。
2. **偏好记忆 (preference)**: 智能体的投资偏好、分析方法和决策倾向。
3. **知识记忆 (knowledge)**: 智能体掌握的金融知识、市场信息和分析模型。
4. **策略记忆 (strategy)**: 智能体在辩论中使用的论证策略和战术。
## 八仙记忆空间列表
- **铁拐李 (tieguaili)**
- 标识符: `cf_memory_tieguaili`
- 特点: 擅长技术分析和风险控制
- **汉钟离 (hanzhongli)**
- 标识符: `cf_memory_hanzhongli`
- 特点: 注重基本面分析和长期价值
- **张果老 (zhangguolao)**
- 标识符: `cf_memory_zhangguolao`
- 特点: 擅长宏观趋势分析和周期判断
- **蓝采和 (lancaihe)**
- 标识符: `cf_memory_lancaihe`
- 特点: 关注市场情绪和资金流向
- **何仙姑 (hexiangu)**
- 标识符: `cf_memory_hexiangu`
- 特点: 精于财务数据分析和估值模型
- **吕洞宾 (lvdongbin)**
- 标识符: `cf_memory_lvdongbin`
- 特点: 善于多维度综合分析和创新策略
- **韩湘子 (hanxiangzi)**
- 标识符: `cf_memory_hanxiangzi`
- 特点: 擅长行业比较和相对价值分析
- **曹国舅 (caoguojiu)**
- 标识符: `cf_memory_caoguojiu`
- 特点: 注重合规性、社会责任和ESG因素
## 使用方法
```python
from src.jixia.memory.factory import get_memory_backend
# 获取记忆后端 (自动根据环境变量选择)
memory_bank = get_memory_backend(prefer="cloudflare")
# 为吕洞宾添加偏好记忆
await memory_bank.add_memory(
agent_name="lvdongbin",
content="倾向于使用DCF模型评估科技股的内在价值",
memory_type="preference",
debate_topic="TSLA投资分析"
)
# 搜索吕洞宾关于TSLA的记忆
memories = await memory_bank.search_memories(
agent_name="lvdongbin",
query="TSLA",
memory_type="preference"
)
# 获取上下文
context = await memory_bank.get_agent_context("lvdongbin", "TSLA投资分析")
```
## Cloudflare配置说明
要使用Cloudflare AutoRAG作为记忆后端需要配置以下环境变量
- `CLOUDFLARE_ACCOUNT_ID`: Cloudflare账户ID
- `CLOUDFLARE_API_TOKEN`: Cloudflare API令牌 (需要Vectorize和Workers AI权限)
- `JIXIA_MEMORY_BACKEND`: 设置为 `cloudflare`
系统默认使用以下配置:
- Vectorize索引: `autorag-shy-cherry-f1fb`
- 嵌入模型: `@cf/baai/bge-m3`
- AutoRAG域名: `autorag.seekkey.tech`
---
*此文档由系统自动生成和维护*

163
docs/guides/CLAUDE.md Normal file
View File

@ -0,0 +1,163 @@
---
title: "Claude 集成与使用指南"
status: summer
created: 2025-08-17
owner: Claude
review_by: "2026-02-17"
tags: ["guide", "claude", "core"]
---
# Claude 集成与使用指南
本指南介绍如何在炼妖壶项目中使用 Claude包括运行时模型接入、GitHub 代码审查助手(可选)以及常见问题排查。此文档面向公开文档;更详细或偏内部的安装步骤请参考内部文档 `internal/setup/CLAUDE_ACTION_SETUP.md`
## 适用场景
- 在项目中调用 Claude 模型(通过 LiteLLM/OpenRouter/Anthropic用于分析、辩论与推理
- 在 GitHub 的 Issue/PR 中通过评论触发 Claude 进行代码审查、调试辅助与架构讨论(可选)
## 快速开始(运行时模型)
项目推荐通过 LiteLLM 路由到不同模型供应商。你可以选择两种常见方式接入 Claude
- 方式 A使用 OpenRouter 免费路由(如 `anthropic/claude-3.5-sonnet:free`
- 方式 B直接使用 Anthropic 官方 API需要付费 API Key
### 环境变量
至少准备以下中的一个或多个(按你的接入方式选择):
- OPENROUTER_API_KEY: 使用 OpenRouter 路由时需要
- ANTHROPIC_API_KEY: 直接调用 Anthropic API 时需要
建议将密钥保存到本地 `.env` 或 CI/CD 的 Secret 中,不要提交到仓库。
### LiteLLM 配置提示
仓库中 `litellm/config.yaml` 是示例配置。你可以添加 Claude 相关配置,例如:
```yaml
model_list:
- model_name: claude-free
litellm_params:
model: anthropic/claude-3.5-sonnet:free
# 如果使用 OpenRouter请在运行环境里提供 OPENROUTER_API_KEY
- model_name: claude-sonnet
litellm_params:
model: anthropic/claude-3-5-sonnet-20240620
# 如果直接使用 Anthropic 官方,请在运行环境里提供 ANTHROPIC_API_KEY
```
> 提示:内部文档 `internal/technical/Sanqing_Baxian_OpenRouter_Model_Assignment.md``internal/technical/Final_Baxian_Sanqing_Model_Configuration.md` 描述了项目在三清八仙体系中对 Claude 的模型分配建议,可作为策略参考。
## GitHub 助手(可选)
如果你希望在 GitHub 的 Issue/PR 评论中 @Claude 进行协助,请按以下步骤配置。若当前仓库没有工作流文件,请根据下面示例新建。
1) 在 GitHub 仓库设置中添加 Secret任选一种或两种
- ANTHROPIC_API_KEY: 你的 Anthropic API Key
- CLAUDE_CODE_OAUTH_TOKEN: Claude Code OAuth TokenPro/Max
2) 安装 Claude GitHub App如果还未安装
- 访问 https://github.com/apps/claude
- 选择安装到你的仓库并授权必要权限
3) 新建或更新工作流文件 `.github/workflows/claude.yml`
```yaml
name: Claude Assistant
on:
issue_comment:
types: [created]
pull_request_review_comment:
types: [created]
jobs:
run-claude:
if: contains(github.event.comment.body, '@claude') || contains(github.event.comment.body, '@太公') || contains(github.event.comment.body, '@八仙')
runs-on: ubuntu-latest
permissions:
contents: read
issues: write
pull-requests: write
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Install deps
run: |
python -m pip install --upgrade pip
pip install litellm
- name: Run Claude reply
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
python - <<'PY'
import os, json, requests
body = os.environ.get('GITHUB_EVENT_PATH')
import sys, pathlib
event = json.load(open(os.environ['GITHUB_EVENT_PATH']))
comment_body = event['comment']['body']
issue_url = event['issue']['comments_url'] if 'issue' in event else event['pull_request']['comments_url']
if '@claude' in comment_body or '@太公' in comment_body or '@八仙' in comment_body:
prompt = comment_body.replace('@claude','').replace('@太公','').replace('@八仙','').strip()
else:
sys.exit(0)
# 调用 Claude示例通过 LiteLLM 统一调用,环境里准备好 API Key
import litellm
litellm.set_verbose(False)
resp = litellm.completion(model="anthropic/claude-3.5-sonnet:free", messages=[{"role":"user","content":prompt}])
text = resp.choices[0].message.get('content') if hasattr(resp.choices[0], 'message') else resp['choices'][0]['message']['content']
headers = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}", "Accept": "application/vnd.github+json"}
requests.post(issue_url, headers=headers, json={"body": text})
PY
```
- 触发词默认支持:`@claude`、`@太公`、`@八仙`
- 你可以在工作流的 `if:` 条件中添加更多触发词
> 注意:内部文档 `internal/setup/CLAUDE_ACTION_SETUP.md` 提供了更完整的 Action 版配置与触发词说明。
## 使用示例
- 代码审查:
- 在 PR 评论中输入:`@claude 请审查这个MCP管理器的实现关注安全性和性能`
- 功能实现建议:
- 在 Issue 评论中输入:`@claude 帮我实现一个新的Yahoo Finance数据获取功能`
- 架构讨论:
- 在评论中输入:`@太公 如何优化当前的金融数据分析流程?`
- 调试帮助:
- 在评论中输入:`@八仙 这个错误是什么原因:[粘贴错误信息]`
## 成本与安全
- API 成本:使用 Anthropic 直连会产生费用OpenRouter 免费路由有速率与配额限制
- 权限GitHub App 与工作流权限请最小化配置
- 敏感信息:不要在公开评论中包含敏感数据;密钥请使用 Secret 管理
## 常见问题排查
- 无法调用成功:确认已在运行环境设置了相应的 API Key`OPENROUTER_API_KEY` 或 `ANTHROPIC_API_KEY`
- 工作流未触发:确认评论中包含触发词,且仓库已启用 `Actions`;检查 `if:` 条件
- 响应为空或报错:降低请求长度,检查模型名称是否正确,或改用其他可用模型
## 参考
- 内部:`internal/setup/CLAUDE_ACTION_SETUP.md`
- 内部:`internal/technical/Final_Baxian_Sanqing_Model_Configuration.md`
- 内部:`internal/technical/Sanqing_Baxian_OpenRouter_Model_Assignment.md`

View File

@ -3,6 +3,7 @@
欢迎访问项目文档。以下是常用入口:
- 快速上手与指南
- [Claude 集成与使用指南](guides/CLAUDE.md)
- [快速开始:负载均衡示例](guides/README_jixia_load_balancing.md)
- [Cloudflare AutoRAG 集成](guides/CLOUDFLARE_AUTORAG_INTEGRATION.md)
- [Google ADK 迁移指南](guides/GOOGLE_ADK_MIGRATION_GUIDE.md)

160
docs/memory_bank_design.md Normal file
View File

@ -0,0 +1,160 @@
# Memory Bank 设计与实现文档
## 概述
Memory Bank 是 稷下学宫AI辩论系统 的核心组件之一旨在为每个AI智能体八仙提供持久化的记忆能力。通过集成不同的后端实现如 Google Vertex AI Memory Bank 和 Cloudflare AutoRAG系统能够灵活地存储、检索和利用智能体在辩论过程中积累的知识和经验。
## 架构设计
### 核心抽象
系统通过 `MemoryBankProtocol` 定义了记忆银行的通用接口,确保了不同后端实现的可替换性。
```python
@runtime_checkable
class MemoryBankProtocol(Protocol):
async def create_memory_bank(self, agent_name: str, display_name: Optional[str] = None) -> str: ...
async def add_memory(
self,
agent_name: str,
content: str,
memory_type: str = "conversation",
debate_topic: str = "",
metadata: Optional[Dict[str, Any]] = None,
) -> str: ...
async def search_memories(
self,
agent_name: str,
query: str,
memory_type: Optional[str] = None,
limit: int = 10,
) -> List[Dict[str, Any]]: ...
async def get_agent_context(self, agent_name: str, debate_topic: str) -> str: ...
async def save_debate_session(
self,
debate_topic: str,
participants: List[str],
conversation_history: List[Dict[str, str]],
outcomes: Optional[Dict[str, Any]] = None,
) -> None: ...
```
### 工厂模式
通过 `get_memory_backend` 工厂函数,系统可以根据配置动态选择合适的记忆后端实现。
```python
def get_memory_backend(prefer: Optional[str] = None) -> MemoryBankProtocol:
# 根据环境变量 JIXIA_MEMORY_BACKEND 选择后端
# 支持 "vertex" 和 "cloudflare"
...
```
## 后端实现
### Vertex AI Memory Bank
基于 Google Vertex AI 的 Memory Bank 服务实现。
**特点**:
- 与 Google Cloud 生态系统深度集成
- 提供企业级的安全性和可靠性
- 支持复杂的元数据过滤和查询
**配置**:
- `GOOGLE_API_KEY`: Google API 密钥
- `GOOGLE_CLOUD_PROJECT_ID`: Google Cloud 项目ID
- `GOOGLE_CLOUD_LOCATION`: 部署区域 (默认 us-central1)
### Cloudflare AutoRAG
基于 Cloudflare Vectorize 和 Workers AI 实现的向量检索增强生成方案。
**特点**:
- 全球分布的边缘计算网络
- 成本效益高,适合中小型项目
- 易于部署和扩展
**配置**:
- `CLOUDFLARE_ACCOUNT_ID`: Cloudflare 账户ID
- `CLOUDFLARE_API_TOKEN`: 具有 Vectorize 和 Workers AI 权限的 API 令牌
## 使用指南
### 初始化记忆银行
```python
from src.jixia.memory.factory import get_memory_backend
# 根据环境变量自动选择后端
memory_bank = get_memory_backend()
# 或者显式指定后端
memory_bank = get_memory_backend(prefer="cloudflare")
```
### 为智能体创建记忆空间
```python
# 为吕洞宾创建记忆银行/空间
await memory_bank.create_memory_bank("lvdongbin")
```
### 添加记忆
```python
await memory_bank.add_memory(
agent_name="lvdongbin",
content="倾向于使用DCF模型评估科技股的内在价值",
memory_type="preference",
debate_topic="TSLA投资分析"
)
```
### 搜索记忆
```python
memories = await memory_bank.search_memories(
agent_name="lvdongbin",
query="TSLA",
memory_type="preference"
)
```
### 获取上下文
```python
context = await memory_bank.get_agent_context("lvdongbin", "TSLA投资分析")
```
## 最佳实践
1. **合理分类记忆类型**:
- `conversation`: 记录对话历史,用于上下文理解和连贯性。
- `preference`: 存储智能体的偏好和倾向,指导决策过程。
- `knowledge`: 积累专业知识和数据,提升分析深度。
- `strategy`: 总结辩论策略和战术,优化表现。
2. **定期维护记忆**:
- 实施记忆的定期清理和归档策略,避免信息过载。
- 通过 `save_debate_session` 方法系统性地保存重要辩论会话。
3. **优化搜索查询**:
- 使用具体、明确的查询词以提高搜索相关性。
- 结合 `memory_type` 过滤器缩小搜索范围。
4. **错误处理**:
- 在生产环境中,务必对所有异步操作进行适当的错误处理和重试机制。
## 未来扩展
1. **混合后端支持**: 允许同时使用多个后端,根据数据类型或访问模式进行路由。
2. **记忆压缩与摘要**: 自动对长篇记忆进行摘要,提高检索效率。
3. **情感分析**: 为记忆添加情感标签,丰富检索维度。
---
*此文档旨在为开发者提供 Memory Bank 模块的全面技术参考*

View File

@ -0,0 +1,74 @@
# 八仙记忆银行文档 (Vertex AI)
每个八仙智能体都有一个专属的记忆银行,用于存储其在不同辩论主题下的记忆。这些记忆包括对话历史、个人偏好、知识库和策略洞察。
## 记忆类型
1. **对话记忆 (conversation)**: 智能体在特定辩论中的发言和互动记录。
2. **偏好记忆 (preference)**: 智能体的投资偏好、分析方法和决策倾向。
3. **知识记忆 (knowledge)**: 智能体掌握的金融知识、市场信息和分析模型。
4. **策略记忆 (strategy)**: 智能体在辩论中使用的论证策略和战术。
## 八仙记忆银行列表
- **铁拐李 (tieguaili)**
- ID: `memory_bank_tieguaili_{PROJECT_ID}`
- 特点: 擅长技术分析和风险控制
- **汉钟离 (hanzhongli)**
- ID: `memory_bank_hanzhongli_{PROJECT_ID}`
- 特点: 注重基本面分析和长期价值
- **张果老 (zhangguolao)**
- ID: `memory_bank_zhangguolao_{PROJECT_ID}`
- 特点: 擅长宏观趋势分析和周期判断
- **蓝采和 (lancaihe)**
- ID: `memory_bank_lancaihe_{PROJECT_ID}`
- 特点: 关注市场情绪和资金流向
- **何仙姑 (hexiangu)**
- ID: `memory_bank_hexiangu_{PROJECT_ID}`
- 特点: 精于财务数据分析和估值模型
- **吕洞宾 (lvdongbin)**
- ID: `memory_bank_lvdongbin_{PROJECT_ID}`
- 特点: 善于多维度综合分析和创新策略
- **韩湘子 (hanxiangzi)**
- ID: `memory_bank_hanxiangzi_{PROJECT_ID}`
- 特点: 擅长行业比较和相对价值分析
- **曹国舅 (caoguojiu)**
- ID: `memory_bank_caoguojiu_{PROJECT_ID}`
- 特点: 注重合规性、社会责任和ESG因素
## 使用方法
```python
from src.jixia.memory.factory import get_memory_backend
# 获取记忆后端 (自动根据环境变量选择)
memory_bank = get_memory_backend()
# 为吕洞宾添加偏好记忆
await memory_bank.add_memory(
agent_name="lvdongbin",
content="倾向于使用DCF模型评估科技股的内在价值",
memory_type="preference",
debate_topic="TSLA投资分析"
)
# 搜索吕洞宾关于TSLA的记忆
memories = await memory_bank.search_memories(
agent_name="lvdongbin",
query="TSLA",
memory_type="preference"
)
# 获取上下文
context = await memory_bank.get_agent_context("lvdongbin", "TSLA投资分析")
```
---
*此文档由系统自动生成和维护*

View File

@ -17,49 +17,49 @@ def create_baxian_agents():
# 铁拐李 - 逆向思维专家
tie_guai_li = Agent(
name="铁拐李",
model="gemini-2.0-flash-exp"
model="gemini-2.5-flash"
)
# 汉钟离 - 平衡协调者
han_zhong_li = Agent(
name="汉钟离",
model="gemini-2.0-flash-exp"
model="gemini-2.5-flash"
)
# 张果老 - 历史智慧者
zhang_guo_lao = Agent(
name="张果老",
model="gemini-2.0-flash-exp"
model="gemini-2.5-flash"
)
# 蓝采和 - 创新思维者
lan_cai_he = Agent(
name="蓝采和",
model="gemini-2.0-flash-exp"
model="gemini-2.5-flash"
)
# 何仙姑 - 直觉洞察者
he_xian_gu = Agent(
name="何仙姑",
model="gemini-2.0-flash-exp"
model="gemini-2.5-flash"
)
# 吕洞宾 - 理性分析者
lu_dong_bin = Agent(
name="吕洞宾",
model="gemini-2.0-flash-exp"
model="gemini-2.5-flash"
)
# 韩湘子 - 艺术感知者
han_xiang_zi = Agent(
name="韩湘子",
model="gemini-2.0-flash-exp"
model="gemini-2.5-flash"
)
# 曹国舅 - 实务执行者
cao_guo_jiu = Agent(
name="曹国舅",
model="gemini-2.0-flash-exp"
model="gemini-2.5-flash"
)
return {
@ -80,7 +80,7 @@ def test_single_agent():
# 创建铁拐李智能体
tie_guai_li = Agent(
name="铁拐李",
model="gemini-2.0-flash-exp"
model="gemini-2.5-flash"
)
print(f"✅ 智能体 '{tie_guai_li.name}' 创建成功")

View File

@ -20,14 +20,14 @@ def create_debate_agents():
# 铁拐李 - 逆向思维专家
tie_guai_li = Agent(
name="铁拐李",
model="gemini-2.0-flash-exp",
model="gemini-2.5-flash",
instruction="你是铁拐李八仙中的逆向思维专家。你善于从批判和质疑的角度看问题总是能发现事物的另一面。你的发言风格直接、犀利但富有智慧。每次发言控制在100字以内。"
)
# 吕洞宾 - 理性分析者
lu_dong_bin = Agent(
name="吕洞宾",
model="gemini-2.0-flash-exp",
model="gemini-2.5-flash",
instruction="你是吕洞宾八仙中的理性分析者。你善于平衡各方观点用理性和逻辑来分析问题。你的发言风格温和而深刻总是能找到问题的核心。每次发言控制在100字以内。"
)
@ -40,16 +40,11 @@ async def conduct_debate():
# 创建智能体
tie_guai_li, lu_dong_bin = create_debate_agents()
print("\n📋 论道主题: 人工智能对未来社会的影响")
print("\n📋 论道主题: 雅江水电站对中印关系的影响")
print("\n🎯 八仙论道,智慧交锋...")
try:
print("\n🚀 使用真实ADK调用进行论道...")
await real_adk_debate(tie_guai_li, lu_dong_bin)
except Exception as e:
print(f"\n❌ ADK调用失败: {e}")
print("🔧 回退到模拟对话模式...")
await simple_mock_debate(tie_guai_li, lu_dong_bin)
@contextmanager
def suppress_stdout():
@ -122,7 +117,7 @@ async def real_adk_debate(tie_guai_li, lu_dong_bin):
try:
# 第一轮:铁拐李开场
print("\n🗣️ 铁拐李发言:")
tie_prompt = "作为逆向思维专家,请从批判角度分析人工智能对未来社会可能带来的负面影响。请控制在100字以内。"
tie_prompt = "作为逆向思维专家,请从批判角度分析雅江水电站建设对中印关系可能带来的负面影响和潜在风险。请控制在100字以内。"
tie_content = types.Content(role='user', parts=[types.Part(text=tie_prompt)])
with suppress_stdout():
@ -155,7 +150,7 @@ async def real_adk_debate(tie_guai_li, lu_dong_bin):
# 第二轮:吕洞宾回应
print("\n🗣️ 吕洞宾回应:")
lu_prompt = f"铁拐李提到了AI的负面影响:'{tie_reply[:50]}...'。作为理性分析者,请从平衡角度回应,既承认风险又指出机遇。请控制在100字以内。"
lu_prompt = f"铁拐李提到了雅江水电站的负面影响:'{tie_reply[:50]}...'。作为理性分析者,请从平衡角度回应,既承认风险又指出雅江水电站对中印关系的积极意义。请控制在100字以内。"
lu_content = types.Content(role='user', parts=[types.Part(text=lu_prompt)])
with suppress_stdout():
@ -188,7 +183,7 @@ async def real_adk_debate(tie_guai_li, lu_dong_bin):
# 第三轮:铁拐李再次发言
print("\n🗣️ 铁拐李再次发言:")
tie_prompt2 = f"吕洞宾提到了AI的机遇'{lu_reply[:50]}...'。请从逆向思维角度,对这些所谓的机遇进行质疑和反思。请控制在100字以内。"
tie_prompt2 = f"吕洞宾提到了雅江水电站的积极意义'{lu_reply[:50]}...'。请从逆向思维角度,对这些所谓的积极影响进行质疑和反思。请控制在100字以内。"
tie_content2 = types.Content(role='user', parts=[types.Part(text=tie_prompt2)])
with suppress_stdout():
@ -229,6 +224,8 @@ async def real_adk_debate(tie_guai_li, lu_dong_bin):
def main():
"""主函数"""
print("🚀 稷下学宫 ADK 真实论道系统")

View File

@ -0,0 +1,258 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
八仙分层辩论系统强模型分解 + 小模型辩论
架构
1. 强模型如GPT-4进行问题分解和观点提炼
2. 小模型如Gemini Flash基于分解结果进行辩论
"""
import asyncio
import json
import time
from typing import Dict, List, Any
import aiohttp
import logging
# 配置日志
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)
class BreakdownDebateSystem:
def __init__(self):
# API配置
self.api_base = "http://localhost:4000"
self.api_key = "sk-1234"
# 模型配置
self.strong_model = "fireworks_ai/accounts/fireworks/models/deepseek-v3-0324" # 强模型用于分解
self.debate_model = "gemini/gemini-2.5-flash" # 小模型用于辩论
# 辩论主题
self.topic = "工作量证明vs无限制爬虫从李时珍采药到AI数据获取的激励机制变革"
# 八仙角色定义
self.immortals = {
"吕洞宾": {"性别": "", "特征": "文雅学者,理性分析", "立场": "支持工作量证明机制"},
"何仙姑": {"性别": "", "特征": "温和智慧,注重平衡", "立场": "支持无限制数据获取"},
"张果老": {"年龄": "", "特征": "经验丰富,传统智慧", "立场": "支持传统激励机制"},
"韩湘子": {"年龄": "", "特征": "创新思维,前瞻视野", "立场": "支持AI时代新机制"},
"汉钟离": {"地位": "", "特征": "资源丰富,商业思维", "立场": "支持市场化激励"},
"蓝采和": {"地位": "", "特征": "平民视角,公平关注", "立场": "支持开放共享"},
"曹国舅": {"出身": "", "特征": "权威地位,规则意识", "立场": "支持制度化管理"},
"铁拐李": {"出身": "", "特征": "草根智慧,实用主义", "立场": "支持去中心化"}
}
# 对角线辩论配置
self.debate_pairs = [
("吕洞宾", "何仙姑"), # 男女对角线
("张果老", "韩湘子"), # 老少对角线
("汉钟离", "蓝采和"), # 富贫对角线
("曹国舅", "铁拐李") # 贵贱对角线
]
async def call_api(self, model: str, messages: List[Dict], max_tokens: int = 1000) -> str:
"""调用API"""
headers = {
"Authorization": f"Bearer {self.api_key}",
"Content-Type": "application/json"
}
data = {
"model": model,
"messages": messages,
"max_tokens": max_tokens,
"temperature": 0.7
}
try:
async with aiohttp.ClientSession() as session:
async with session.post(f"{self.api_base}/chat/completions",
headers=headers, json=data) as response:
if response.status == 200:
result = await response.json()
return result['choices'][0]['message']['content']
else:
error_text = await response.text()
logger.error(f"API调用失败: {response.status} - {error_text}")
return f"API调用失败: {response.status}"
except Exception as e:
logger.error(f"API调用异常: {str(e)}")
return f"API调用异常: {str(e)}"
async def breakdown_topic(self) -> Dict[str, Any]:
"""使用强模型分解辩论主题"""
logger.info("🧠 开始使用强模型分解辩论主题...")
breakdown_prompt = f"""
你是一个专业的辩论分析师请对以下主题进行深度分解
主题{self.topic}
请提供
1. 核心争议点3-5
2. 支持工作量证明机制的关键论据3
3. 支持无限制爬虫/数据获取的关键论据3
4. 历史对比分析要点
5. 未来发展趋势预测
请以JSON格式返回结构如下
{{
"core_issues": ["争议点1", "争议点2", ...],
"pro_pow_arguments": ["论据1", "论据2", "论据3"],
"pro_unlimited_arguments": ["论据1", "论据2", "论据3"],
"historical_analysis": ["要点1", "要点2", ...],
"future_trends": ["趋势1", "趋势2", ...]
}}
"""
messages = [
{"role": "system", "content": "你是一个专业的辩论分析师,擅长深度分析复杂议题。"},
{"role": "user", "content": breakdown_prompt}
]
response = await self.call_api(self.strong_model, messages, max_tokens=2000)
try:
# 尝试解析JSON
breakdown_data = json.loads(response)
logger.info("✅ 主题分解完成")
return breakdown_data
except json.JSONDecodeError:
logger.error("❌ 强模型返回的不是有效JSON使用默认分解")
return {
"core_issues": ["激励机制公平性", "创作者权益保护", "技术发展与伦理平衡"],
"pro_pow_arguments": ["保护创作者权益", "维护内容质量", "建立可持续生态"],
"pro_unlimited_arguments": ["促进知识传播", "加速技术发展", "降低获取成本"],
"historical_analysis": ["从手工采药到工业化生产的变迁", "知识产权制度的演进"],
"future_trends": ["AI与人类协作模式", "新型激励机制探索"]
}
async def conduct_debate(self, breakdown_data: Dict[str, Any]):
"""基于分解结果进行八仙辩论"""
logger.info("🎭 开始八仙对角线辩论...")
for i, (immortal1, immortal2) in enumerate(self.debate_pairs, 1):
logger.info(f"\n{'='*60}")
logger.info(f"{i}场辩论:{immortal1} vs {immortal2}")
logger.info(f"{'='*60}")
# 为每个仙人准备个性化的论据
immortal1_info = self.immortals[immortal1]
immortal2_info = self.immortals[immortal2]
# 第一轮:开场陈述
statement1 = await self.get_opening_statement(immortal1, immortal1_info, breakdown_data)
logger.info(f"\n🗣️ {immortal1}的开场陈述:")
logger.info(statement1)
statement2 = await self.get_opening_statement(immortal2, immortal2_info, breakdown_data)
logger.info(f"\n🗣️ {immortal2}的开场陈述:")
logger.info(statement2)
# 第二轮:相互回应
response1 = await self.get_response(immortal1, immortal1_info, statement2, breakdown_data)
logger.info(f"\n💬 {immortal1}的回应:")
logger.info(response1)
response2 = await self.get_response(immortal2, immortal2_info, statement1, breakdown_data)
logger.info(f"\n💬 {immortal2}的回应:")
logger.info(response2)
# 第三轮:总结陈词
summary1 = await self.get_summary(immortal1, immortal1_info, [statement1, statement2, response1, response2], breakdown_data)
logger.info(f"\n📝 {immortal1}的总结:")
logger.info(summary1)
await asyncio.sleep(2) # 短暂停顿
logger.info(f"\n{'='*60}")
logger.info("🎉 所有四场对角线辩论已完成!")
logger.info(f"{'='*60}")
async def get_opening_statement(self, immortal: str, immortal_info: Dict, breakdown_data: Dict) -> str:
"""获取开场陈述"""
prompt = f"""
你是{immortal}{immortal_info['特征']}你的立场是{immortal_info['立场']}
基于以下分解分析请发表你的开场陈述
核心争议点{', '.join(breakdown_data['core_issues'])}
支持工作量证明的论据{', '.join(breakdown_data['pro_pow_arguments'])}
支持无限制获取的论据{', '.join(breakdown_data['pro_unlimited_arguments'])}
历史分析要点{', '.join(breakdown_data['historical_analysis'])}
未来趋势{', '.join(breakdown_data['future_trends'])}
请以{immortal}的身份和特征结合你的立场发表一段150字左右的开场陈述要体现你的个性特征和观点倾向
"""
messages = [
{"role": "system", "content": f"你是{immortal},请保持角色一致性。"},
{"role": "user", "content": prompt}
]
return await self.call_api(self.debate_model, messages)
async def get_response(self, immortal: str, immortal_info: Dict, opponent_statement: str, breakdown_data: Dict) -> str:
"""获取回应"""
prompt = f"""
你是{immortal}{immortal_info['特征']}你的立场是{immortal_info['立场']}
对方刚才说
{opponent_statement}
基于分解分析的要点
{', '.join(breakdown_data['core_issues'])}
请以{immortal}的身份回应对方的观点约100字要体现你的立场和特征
"""
messages = [
{"role": "system", "content": f"你是{immortal},请保持角色一致性。"},
{"role": "user", "content": prompt}
]
return await self.call_api(self.debate_model, messages)
async def get_summary(self, immortal: str, immortal_info: Dict, all_statements: List[str], breakdown_data: Dict) -> str:
"""获取总结陈词"""
prompt = f"""
你是{immortal}{immortal_info['特征']}你的立场是{immortal_info['立场']}
基于刚才的辩论内容和分解分析请发表你的总结陈词约120字
要总结你的核心观点并展望未来
分析要点{', '.join(breakdown_data['future_trends'])}
"""
messages = [
{"role": "system", "content": f"你是{immortal},请保持角色一致性。"},
{"role": "user", "content": prompt}
]
return await self.call_api(self.debate_model, messages)
async def run(self):
"""运行完整的分层辩论系统"""
logger.info("🚀 启动八仙分层辩论系统")
logger.info(f"主题:{self.topic}")
logger.info(f"强模型(分解):{self.strong_model}")
logger.info(f"辩论模型:{self.debate_model}")
# 第一阶段:强模型分解
breakdown_data = await self.breakdown_topic()
logger.info("\n📊 分解结果:")
for key, value in breakdown_data.items():
logger.info(f"{key}: {value}")
# 第二阶段:小模型辩论
await self.conduct_debate(breakdown_data)
logger.info("\n🎊 分层辩论系统运行完成!")
if __name__ == "__main__":
system = BreakdownDebateSystem()
asyncio.run(system.run())

View File

@ -0,0 +1,250 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
八仙辩论系统 - 自定义API版本
使用自定义LiteLLM端点而不是Google ADK
"""
import asyncio
import aiohttp
import json
import os
from typing import List, Dict, Any
import time
class CustomAPIAgent:
"""使用自定义API的代理"""
def __init__(self, name: str, personality: str, api_url: str, api_key: str, model: str = "fireworks_ai/accounts/fireworks/models/deepseek-v3-0324"):
self.name = name
self.personality = personality
self.api_url = api_url
self.api_key = api_key
self.model = model
async def generate_response(self, prompt: str, session: aiohttp.ClientSession) -> str:
"""生成AI回应"""
try:
headers = {
"Content-Type": "application/json",
"x-litellm-api-key": self.api_key
}
payload = {
"model": self.model,
"messages": [
{"role": "system", "content": f"你是{self.name}{self.personality}"},
{"role": "user", "content": prompt}
],
"max_tokens": 1000,
"temperature": 0.8
}
async with session.post(
f"{self.api_url}/v1/chat/completions",
headers=headers,
json=payload,
timeout=aiohttp.ClientTimeout(total=30)
) as response:
if response.status == 200:
result = await response.json()
content = result.get('choices', [{}])[0].get('message', {}).get('content', '')
if content:
return content.strip()
else:
print(f"{self.name} API返回空内容: {result}")
return f"[{self.name}暂时无法回应API返回空内容]"
else:
error_text = await response.text()
print(f"{self.name} API错误 ({response.status}): {error_text[:200]}...")
return f"[{self.name}暂时无法回应API错误: {response.status}]"
except Exception as e:
print(f"{self.name} 生成回应时出错: {e}")
return f"[{self.name}暂时无法回应,连接错误]"
class BaXianCustomDebateSystem:
"""八仙自定义API辩论系统"""
def __init__(self, api_url: str, api_key: str):
self.api_url = api_url.rstrip('/')
self.api_key = api_key
# 创建八仙代理
self.agents = {
"吕洞宾": CustomAPIAgent(
"吕洞宾",
"八仙之首,男性代表,理性务实,善于分析问题的本质和长远影响。你代表男性视角,注重逻辑和实用性。",
api_url, api_key
),
"何仙姑": CustomAPIAgent(
"何仙姑",
"八仙中唯一的女性,温柔智慧,善于从情感和人文角度思考问题。你代表女性视角,注重关怀和和谐。",
api_url, api_key
),
"张果老": CustomAPIAgent(
"张果老",
"八仙中的长者,经验丰富,代表传统智慧和保守观点。你重视稳定和传承,谨慎对待变化。",
api_url, api_key
),
"韩湘子": CustomAPIAgent(
"韩湘子",
"八仙中的年轻人,充满活力和创新精神。你代表新生代观点,勇于尝试和改变。",
api_url, api_key
),
"汉钟离": CustomAPIAgent(
"汉钟离",
"八仙中的富贵者,见多识广,代表富裕阶层的观点。你注重效率和成果,善于资源配置。",
api_url, api_key
),
"蓝采和": CustomAPIAgent(
"蓝采和",
"八仙中的贫苦出身,朴实无华,代表普通民众的观点。你关注基层需求,重视公平正义。",
api_url, api_key
),
"曹国舅": CustomAPIAgent(
"曹国舅",
"八仙中的贵族,出身高贵,代表上层社会观点。你注重秩序和礼仪,维护既有体系。",
api_url, api_key
),
"铁拐李": CustomAPIAgent(
"铁拐李",
"八仙中的平民英雄,不拘小节,代表底层民众观点。你直言不讳,为弱势群体发声。",
api_url, api_key
)
}
# 定义四对矛盾的对角线辩论
self.debate_pairs = [
("吕洞宾", "何仙姑", "男女对立辩论"),
("张果老", "韩湘子", "老少对立辩论"),
("汉钟离", "蓝采和", "富贫对立辩论"),
("曹国舅", "铁拐李", "贵贱对立辩论")
]
async def test_api_connection(self) -> bool:
"""测试API连接"""
print(f"🔍 测试API连接: {self.api_url}")
try:
async with aiohttp.ClientSession() as session:
headers = {"x-litellm-api-key": self.api_key}
async with session.get(
f"{self.api_url}/v1/models",
headers=headers,
timeout=aiohttp.ClientTimeout(total=10)
) as response:
if response.status == 200:
models = await response.json()
print(f"✅ API连接成功找到 {len(models.get('data', []))} 个模型")
return True
else:
error_text = await response.text()
print(f"❌ API连接失败 ({response.status}): {error_text[:200]}...")
return False
except Exception as e:
print(f"❌ API连接测试失败: {e}")
return False
async def conduct_debate(self, topic: str) -> None:
"""进行完整的八仙辩论"""
print(f"\n{'='*80}")
print(f"🎭 八仙自定义API辩论系统")
print(f"📝 辩论主题: {topic}")
print(f"🔗 API端点: {self.api_url}")
print(f"{'='*80}\n")
# 测试API连接
if not await self.test_api_connection():
print("❌ API连接失败无法进行辩论")
return
async with aiohttp.ClientSession() as session:
for i, (agent1_name, agent2_name, debate_type) in enumerate(self.debate_pairs, 1):
print(f"\n🎯 第{i}场辩论: {debate_type}")
print(f"⚔️ {agent1_name} VS {agent2_name}")
print(f"📋 主题: {topic}")
print("-" * 60)
agent1 = self.agents[agent1_name]
agent2 = self.agents[agent2_name]
# 第一轮agent1开场
prompt1 = f"针对'{topic}'这个话题请从你的角度阐述观点。要求1)明确表达立场 2)提供具体论据 3)字数控制在200字以内"
print(f"\n🗣️ {agent1_name}发言:")
agent1_reply = await agent1.generate_response(prompt1, session)
print(f"{agent1_reply}\n")
# 第二轮agent2回应
prompt2 = f"针对'{topic}'这个话题,{agent1_name}刚才说:'{agent1_reply}'。请从你的角度回应并阐述不同观点。要求1)回应对方观点 2)提出自己的立场 3)字数控制在200字以内"
print(f"🗣️ {agent2_name}回应:")
agent2_reply = await agent2.generate_response(prompt2, session)
print(f"{agent2_reply}\n")
# 第三轮agent1总结
prompt3 = f"针对'{topic}'这个话题的辩论,{agent2_name}回应说:'{agent2_reply}'。请做最后总结发言。要求1)回应对方观点 2)强化自己立场 3)寻求共识或妥协 4)字数控制在150字以内"
print(f"🗣️ {agent1_name}总结:")
agent1_final = await agent1.generate_response(prompt3, session)
print(f"{agent1_final}\n")
print(f"✅ 第{i}场辩论结束\n")
# 短暂延迟避免API限制
await asyncio.sleep(1)
print(f"\n🎉 八仙辩论全部结束!")
print(f"📊 共进行了 {len(self.debate_pairs)} 场对角线辩论")
print(f"🎭 参与仙人: {', '.join(self.agents.keys())}")
async def main():
"""主函数"""
# 配置
api_url = "http://master.tailnet-68f9.ts.net:40012"
# 尝试不同的API密钥格式
gemini_key = os.getenv('GEMINI_API_KEY', '')
if not gemini_key:
print("❌ 错误: 未找到GEMINI_API_KEY环境变量")
print("请设置环境变量: export GEMINI_API_KEY=your_api_key")
return
# 使用提供的LiteLLM虚拟密钥
test_keys = [
"sk-0jdcGHZJpX2oUJmyEs7zVA" # LiteLLM虚拟密钥
]
print("🚀 启动八仙自定义API辩论系统...")
# 辩论主题
topic = "工作量证明vs无限制爬虫从李时珍采药到AI数据获取的激励机制变革"
# 尝试不同的API密钥
for api_key in test_keys:
if not api_key or api_key == "sk-":
continue
print(f"\n🔑 尝试API密钥: {api_key[:15]}...")
debate_system = BaXianCustomDebateSystem(api_url, api_key)
# 测试连接
if await debate_system.test_api_connection():
print(f"✅ 使用API密钥成功: {api_key[:15]}...")
await debate_system.conduct_debate(topic)
return
else:
print(f"❌ API密钥失败: {api_key[:15]}...")
print("\n❌ 所有API密钥都失败了")
print("\n🔍 可能的解决方案:")
print(" 1. 检查LiteLLM服务器是否正确配置")
print(" 2. 确认API密钥格式")
print(" 3. 联系服务器管理员获取正确的虚拟密钥")
print(" 4. 检查网络连接和防火墙设置")
if __name__ == "__main__":
asyncio.run(main())

File diff suppressed because it is too large Load Diff

View File

@ -1,275 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Memory Bank 实验脚本
测试八仙人格的长期记忆功能
"""
import os
import asyncio
from datetime import datetime
from typing import Dict, List, Any
import json
# Google GenAI 导入
try:
import google.genai as genai
from google.genai import types
except ImportError:
print("❌ 请安装 google-genai: pip install google-genai")
exit(1)
class MemoryBankExperiment:
"""Memory Bank 实验类"""
def __init__(self):
self.api_key = os.getenv('GOOGLE_API_KEY')
if not self.api_key:
raise ValueError("请设置 GOOGLE_API_KEY 环境变量")
# 初始化 GenAI
genai.configure(api_key=self.api_key)
# 八仙人格基线
self.immortal_baselines = {
"吕洞宾": {
"mbti_type": "ENTJ",
"core_traits": {
"assertiveness": 0.9,
"analytical": 0.8,
"risk_tolerance": 0.8,
"optimism": 0.7
},
"personality_description": "剑仙投资顾问,主动进取,敢于冒险,技术分析专家"
},
"何仙姑": {
"mbti_type": "ISFJ",
"core_traits": {
"empathy": 0.9,
"caution": 0.8,
"loyalty": 0.8,
"optimism": 0.4
},
"personality_description": "慈悲风控专家,谨慎小心,保护意识强,风险厌恶"
},
"张果老": {
"mbti_type": "INTP",
"core_traits": {
"analytical": 0.9,
"curiosity": 0.8,
"traditional": 0.7,
"caution": 0.6
},
"personality_description": "历史数据分析师,深度思考,逆向思维,传统智慧"
}
}
# 记忆存储(模拟 Memory Bank
self.memory_bank = {}
def initialize_immortal_memory(self, immortal_name: str):
"""初始化仙人的记忆空间"""
if immortal_name not in self.memory_bank:
self.memory_bank[immortal_name] = {
"personality_baseline": self.immortal_baselines[immortal_name],
"conversation_history": [],
"viewpoint_evolution": [],
"decision_history": [],
"created_at": datetime.now().isoformat(),
"last_updated": datetime.now().isoformat()
}
print(f"🎭 初始化 {immortal_name} 的记忆空间")
def store_memory(self, immortal_name: str, memory_type: str, content: Dict[str, Any]):
"""存储记忆到 Memory Bank"""
self.initialize_immortal_memory(immortal_name)
memory_entry = {
"type": memory_type,
"content": content,
"timestamp": datetime.now().isoformat(),
"session_id": f"session_{len(self.memory_bank[immortal_name]['conversation_history'])}"
}
if memory_type == "conversation":
self.memory_bank[immortal_name]["conversation_history"].append(memory_entry)
elif memory_type == "viewpoint":
self.memory_bank[immortal_name]["viewpoint_evolution"].append(memory_entry)
elif memory_type == "decision":
self.memory_bank[immortal_name]["decision_history"].append(memory_entry)
self.memory_bank[immortal_name]["last_updated"] = datetime.now().isoformat()
print(f"💾 {immortal_name} 存储了 {memory_type} 记忆")
def retrieve_relevant_memories(self, immortal_name: str, query: str) -> List[Dict]:
"""检索相关记忆"""
if immortal_name not in self.memory_bank:
return []
# 简单的关键词匹配(实际应该使用向量相似度搜索)
relevant_memories = []
query_lower = query.lower()
for memory in self.memory_bank[immortal_name]["conversation_history"]:
if any(keyword in memory["content"].get("message", "").lower()
for keyword in query_lower.split()):
relevant_memories.append(memory)
return relevant_memories[-5:] # 返回最近5条相关记忆
async def generate_immortal_response(self, immortal_name: str, query: str) -> str:
"""生成仙人的回应,基于记忆和人格基线"""
# 检索相关记忆
relevant_memories = self.retrieve_relevant_memories(immortal_name, query)
# 构建上下文
context = self.build_context(immortal_name, relevant_memories)
# 生成回应
model = genai.GenerativeModel('gemini-2.0-flash-exp')
prompt = f"""
你是{immortal_name}{self.immortal_baselines[immortal_name]['personality_description']}
你的核心人格特质
{json.dumps(self.immortal_baselines[immortal_name]['core_traits'], ensure_ascii=False, indent=2)}
你的相关记忆
{json.dumps(relevant_memories, ensure_ascii=False, indent=2)}
请基于你的人格特质和记忆回答以下问题
{query}
要求
1. 保持人格一致性
2. 参考历史记忆
3. 回答控制在100字以内
4. 体现你的独特风格
"""
response = await model.generate_content_async(prompt)
return response.text
def build_context(self, immortal_name: str, memories: List[Dict]) -> str:
"""构建上下文信息"""
context_parts = []
# 添加人格基线
baseline = self.immortal_baselines[immortal_name]
context_parts.append(f"人格类型: {baseline['mbti_type']}")
context_parts.append(f"核心特质: {json.dumps(baseline['core_traits'], ensure_ascii=False)}")
# 添加相关记忆
if memories:
context_parts.append("相关记忆:")
for memory in memories[-3:]: # 最近3条记忆
context_parts.append(f"- {memory['content'].get('message', '')}")
return "\n".join(context_parts)
def simulate_conversation(self, immortal_name: str, messages: List[str]):
"""模拟对话,测试记忆功能"""
print(f"\n🎭 开始与 {immortal_name} 的对话")
print("=" * 50)
for i, message in enumerate(messages):
print(f"\n用户: {message}")
# 生成回应
response = asyncio.run(self.generate_immortal_response(immortal_name, message))
print(f"{immortal_name}: {response}")
# 存储记忆
self.store_memory(immortal_name, "conversation", {
"user_message": message,
"immortal_response": response,
"session_id": f"session_{i}"
})
# 存储观点
if "看多" in response or "看空" in response or "观望" in response:
viewpoint = "看多" if "看多" in response else "看空" if "看空" in response else "观望"
self.store_memory(immortal_name, "viewpoint", {
"symbol": "TSLA", # 假设讨论特斯拉
"viewpoint": viewpoint,
"reasoning": response
})
def analyze_memory_evolution(self, immortal_name: str):
"""分析记忆演化"""
if immortal_name not in self.memory_bank:
print(f"{immortal_name} 没有记忆数据")
return
memory_data = self.memory_bank[immortal_name]
print(f"\n📊 {immortal_name} 记忆分析")
print("=" * 50)
print(f"记忆空间创建时间: {memory_data['created_at']}")
print(f"最后更新时间: {memory_data['last_updated']}")
print(f"对话记录数: {len(memory_data['conversation_history'])}")
print(f"观点演化数: {len(memory_data['viewpoint_evolution'])}")
print(f"决策记录数: {len(memory_data['decision_history'])}")
# 分析观点演化
if memory_data['viewpoint_evolution']:
print(f"\n观点演化轨迹:")
for i, viewpoint in enumerate(memory_data['viewpoint_evolution']):
print(f" {i+1}. {viewpoint['content']['viewpoint']} - {viewpoint['timestamp']}")
def save_memory_bank(self, filename: str = "memory_bank_backup.json"):
"""保存记忆库到文件"""
with open(filename, 'w', encoding='utf-8') as f:
json.dump(self.memory_bank, f, ensure_ascii=False, indent=2)
print(f"💾 记忆库已保存到 {filename}")
def load_memory_bank(self, filename: str = "memory_bank_backup.json"):
"""从文件加载记忆库"""
try:
with open(filename, 'r', encoding='utf-8') as f:
self.memory_bank = json.load(f)
print(f"📂 记忆库已从 {filename} 加载")
except FileNotFoundError:
print(f"⚠️ 文件 {filename} 不存在,使用空记忆库")
def main():
"""主实验函数"""
print("🚀 开始 Memory Bank 实验")
print("=" * 60)
# 创建实验实例
experiment = MemoryBankExperiment()
# 测试对话场景
test_scenarios = {
"吕洞宾": [
"你觉得特斯拉股票怎么样?",
"现在市场波动很大,你怎么看?",
"你之前不是看好特斯拉吗?现在还是这个观点吗?"
],
"何仙姑": [
"特斯拉股票风险大吗?",
"现在适合投资吗?",
"你一直很谨慎,现在还是建议观望吗?"
],
"张果老": [
"从历史数据看,特斯拉表现如何?",
"现在的估值合理吗?",
"你之前分析过特斯拉的历史数据,现在有什么新发现?"
]
}
# 执行实验
for immortal_name, messages in test_scenarios.items():
experiment.simulate_conversation(immortal_name, messages)
experiment.analyze_memory_evolution(immortal_name)
# 保存记忆库
experiment.save_memory_bank()
print("\n🎉 Memory Bank 实验完成!")
print("=" * 60)
if __name__ == "__main__":
main()

View File

@ -1,116 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Memory Bank 简化测试脚本
"""
import os
import asyncio
from datetime import datetime
import json
# Google GenAI 导入
import google.genai as genai
class MemoryBankTest:
"""Memory Bank 测试类"""
def __init__(self):
self.api_key = os.getenv('GOOGLE_API_KEY')
if not self.api_key:
raise ValueError("请设置 GOOGLE_API_KEY 环境变量")
self.client = genai.Client(api_key=self.api_key)
# 八仙人格基线
self.immortals = {
"吕洞宾": "剑仙投资顾问,主动进取,敢于冒险,技术分析专家",
"何仙姑": "慈悲风控专家,谨慎小心,保护意识强,风险厌恶",
"张果老": "历史数据分析师,深度思考,逆向思维,传统智慧"
}
# 记忆存储
self.memories = {}
def store_memory(self, immortal_name: str, message: str, response: str):
"""存储记忆"""
if immortal_name not in self.memories:
self.memories[immortal_name] = []
self.memories[immortal_name].append({
"message": message,
"response": response,
"timestamp": datetime.now().isoformat()
})
def chat_with_immortal(self, immortal_name: str, message: str) -> str:
"""与仙人对话"""
# 构建上下文
context = f"你是{immortal_name}{self.immortals[immortal_name]}"
# 添加记忆
if immortal_name in self.memories and self.memories[immortal_name]:
context += "\n\n你的历史对话:"
for memory in self.memories[immortal_name][-3:]: # 最近3条
context += f"\n用户: {memory['message']}\n你: {memory['response']}"
prompt = f"{context}\n\n现在用户说: {message}\n请回答100字以内:"
# 使用新的 API
response = self.client.models.generate_content(
model="gemini-2.0-flash-exp",
contents=[{"parts": [{"text": prompt}]}]
)
return response.candidates[0].content.parts[0].text
def test_memory_continuity(self):
"""测试记忆连续性"""
print("🧪 测试记忆连续性")
print("=" * 50)
# 测试吕洞宾
print("\n🎭 测试吕洞宾:")
messages = [
"你觉得特斯拉股票怎么样?",
"现在市场波动很大,你怎么看?",
"你之前不是看好特斯拉吗?现在还是这个观点吗?"
]
for message in messages:
print(f"\n用户: {message}")
response = self.chat_with_immortal("吕洞宾", message)
print(f"吕洞宾: {response}")
self.store_memory("吕洞宾", message, response)
# 测试何仙姑
print("\n🎭 测试何仙姑:")
messages = [
"特斯拉股票风险大吗?",
"现在适合投资吗?",
"你一直很谨慎,现在还是建议观望吗?"
]
for message in messages:
print(f"\n用户: {message}")
response = self.chat_with_immortal("何仙姑", message)
print(f"何仙姑: {response}")
self.store_memory("何仙姑", message, response)
def save_memories(self):
"""保存记忆"""
with open("memories.json", "w", encoding="utf-8") as f:
json.dump(self.memories, f, ensure_ascii=False, indent=2)
print("💾 记忆已保存到 memories.json")
def main():
"""主函数"""
print("🚀 Memory Bank 测试开始")
test = MemoryBankTest()
test.test_memory_continuity()
test.save_memories()
print("\n✅ 测试完成!")
if __name__ == "__main__":
main()

View File

@ -0,0 +1,37 @@
{
"timestamp": "2025-08-16T15:17:54.175476",
"version": "v2.1.0",
"test_results": {
"priority_algorithm_integration": true,
"flow_controller_integration": true,
"health_monitor_integration": true,
"performance_under_load": true,
"data_consistency": true,
"chat_coordinator_integration": true,
"cross_component_integration": true
},
"performance_metrics": {
"total_operations": 400,
"duration": 0.006308555603027344,
"ops_per_second": 63405.956160241876,
"avg_operation_time": 0.01577138900756836,
"concurrent_threads": 5,
"errors": 0
},
"error_log": [],
"summary": {
"pass_rate": 100.0,
"total_tests": 7,
"passed_tests": 7,
"failed_tests": 0,
"performance_metrics": {
"total_operations": 400,
"duration": 0.006308555603027344,
"ops_per_second": 63405.956160241876,
"avg_operation_time": 0.01577138900756836,
"concurrent_threads": 5,
"errors": 0
},
"error_count": 0
}
}

View File

@ -1,91 +0,0 @@
# Documentation Restructure Plan
## 🎯 Goal
Reorganize docs/ for GitHub Pages to help potential collaborators quickly understand and join the project, while keeping internal docs in internal/.
## 📋 Current Issues
- Too many technical documents in public docs/
- Complex structure overwhelming for newcomers
- Missing clear project vision presentation
- Three-tier system (炼妖壶/降魔杵/打神鞭) not clearly presented
## 🏗️ New Public Docs Structure
### Root Level (Welcome & Quick Start)
```
docs/
├── index.md # Project overview & three-tier vision
├── README.md # Quick start guide
├── CONTRIBUTING.md # How to contribute
└── roadmap.md # Development roadmap
```
### Core Sections
```
├── getting-started/ # New contributor onboarding
│ ├── quick-start.md # 5-minute setup
│ ├── architecture-overview.md # High-level architecture
│ └── first-contribution.md # How to make first contribution
├── vision/ # Project vision & philosophy
│ ├── three-tiers.md # 炼妖壶/降魔杵/打神鞭 system
│ ├── manifesto.md # Project manifesto
│ └── why-anti-gods.md # Philosophy behind the project
├── features/ # What the system can do
│ ├── ai-debate-system.md # Jixia Academy features
│ ├── financial-analysis.md # Market analysis capabilities
│ └── mcp-integration.md # MCP service features
└── api/ # API documentation
├── endpoints.md # API reference
└── examples.md # Usage examples
```
## 📦 Move to Internal (Not for GitHub Pages)
### Development & Internal Docs → docs/internal/
- Technical implementation details
- Internal development logs
- Private strategy documents
- Detailed configuration guides
- Debug and troubleshooting docs
- Internal analysis reports
### Files to Move to internal/
```
technical/ → internal/technical/
setup/ → internal/setup/
mcp/ → internal/mcp/
analysis/ (some files) → internal/analysis/
strategies/ → internal/strategies/
```
## 🎯 Key Public Documentation Goals
### 1. Clear Project Vision
- Highlight the three-tier system prominently
- Explain the grand vision without overwhelming details
- Show progression path: 炼妖壶 → 降魔杵 → 打神鞭
### 2. Easy Onboarding
- 5-minute quick start guide
- Clear setup instructions
- Simple first contribution guide
### 3. Showcase Innovation
- AI debate system (Jixia Academy)
- Multi-agent financial analysis
- MCP integration architecture
- Mathematical foundations (accessible version)
### 4. Community Building
- Contributing guidelines
- Code of conduct
- Communication channels
- Recognition system
## 🚀 Implementation Plan
1. **Create new structure** - Set up clean public docs organization
2. **Move internal docs** - Transfer non-public docs to internal/
3. **Write newcomer docs** - Create accessible onboarding materials
4. **Highlight vision** - Emphasize three-tier system and grand vision
5. **Add community docs** - Contributing guidelines and community info

View File

@ -1,148 +0,0 @@
# 📚 Documentation Restructure - COMPLETED
## 🎯 Mission Accomplished
Successfully reorganized the Cauldron project documentation to separate internal development docs from public-facing GitHub Pages content, with clear presentation of the three-tier system vision.
## ✅ Completed Tasks
### 1. Internal Documentation Organization
**Moved to `docs/internal/` (private development docs):**
- `api_scheduling_strategy.md`
- `baxian_sanqing_system_guide.md`
- `Force_Anti_Monologue_Techniques.md`
- `liao.md`
- `rapidapi_yahoo_finance_guide.md`
- `tianxia.md`
- `earlycall.md`
- `index_professional.md`
- `analysis/` directory (market analysis reports)
- `mcp/` directory (MCP implementation details)
- `setup/` directory (internal setup guides)
- `strategies/` directory (internal strategy documents)
- `technical/` directory (technical implementation details)
### 2. Public Documentation Structure
**Clean public docs for GitHub Pages:**
```
docs/
├── index.md # 🏛️ Main project overview with three-tier vision
├── CONTRIBUTING.md # 🤝 Contributor guidelines and onboarding
├── roadmap.md # 🗺️ Development timeline and milestones
├── README.md # 📖 Basic project information
├── getting-started/
│ └── quick-start.md # 🚀 5-minute setup guide
├── vision/
│ └── three-tiers.md # 🏛️ Detailed three-tier system explanation
├── features/ # 🌟 (ready for content)
├── api/ # 📋 (ready for API documentation)
└── [existing directories] # Other organized sections
```
### 3. Key Documentation Created
#### Main Project Overview (`docs/index.md`)
- Prominent three-tier system presentation
- Clear value proposition for each tier
- Quick start instructions
- Feature highlights
- Community information
#### Three-Tier Vision (`docs/vision/three-tiers.md`)
- Detailed explanation of 炼妖壶/降魔杵/打神鞭 progression
- Philosophy and vision for each tier
- Target users and use cases
- Technology progression roadmap
#### Contributor Guide (`docs/CONTRIBUTING.md`)
- Clear onboarding process
- Development workflow
- Code contribution guidelines
- Community guidelines
#### Development Roadmap (`docs/roadmap.md`)
- Detailed timeline from Q4 2024 to 2027+
- Key milestones and success metrics
- Partnership strategy
- Global expansion plans
#### Quick Start Guide (`docs/getting-started/quick-start.md`)
- 5-minute setup process
- Clear prerequisites
- Troubleshooting section
- Next steps guidance
## 🎯 Achieved Goals
### ✅ Clear Project Vision
- Three-tier system prominently displayed
- Professional presentation for potential collaborators
- Clear progression path from free to enterprise
### ✅ Easy Contributor Onboarding
- Comprehensive contributor guidelines
- Quick start guide for immediate setup
- Clear development workflow
### ✅ Clean Separation
- Internal docs properly segregated
- Public docs optimized for GitHub Pages
- Professional presentation without internal clutter
### ✅ Community Building
- Clear communication channels
- Recognition system for contributors
- Code of conduct and guidelines
## 🚀 Impact for Potential Collaborators
### Immediate Understanding
New visitors can quickly grasp:
1. **What Cauldron is**: AI-powered financial intelligence platform
2. **The Vision**: Three-tier evolution from free to enterprise
3. **How to Get Started**: 5-minute quick start process
4. **How to Contribute**: Clear guidelines and workflow
### Professional Presentation
- Clean, organized documentation structure
- Clear value proposition and roadmap
- Professional language and presentation
- Easy navigation and discovery
## 📊 Before vs After
### Before Restructure
- 70+ mixed files scattered in docs/
- Technical details mixed with public info
- Overwhelming for newcomers
- Unclear project vision presentation
### After Restructure
- Clean public docs structure
- Internal docs properly organized
- Clear three-tier vision presentation
- Easy contributor onboarding
- Professional GitHub Pages ready
## 🎉 Documentation Now Ready For
1. **GitHub Pages Deployment**: Clean public documentation
2. **Contributor Onboarding**: Clear guides and workflows
3. **Community Building**: Professional presentation
4. **Investment/Partnership**: Clear vision and roadmap
5. **User Acquisition**: Easy understanding and setup
## 🏆 Success Metrics
- **Organization**: ✅ Clean separation of public vs internal docs
- **Vision Clarity**: ✅ Three-tier system prominently featured
- **Accessibility**: ✅ Easy 5-minute onboarding process
- **Professionalism**: ✅ GitHub Pages ready presentation
- **Community Ready**: ✅ Clear contribution guidelines
---
**The Cauldron project documentation is now professionally organized and ready to attract and onboard potential collaborators!** 🎉
*From 炼妖壶 to 打神鞭 - the vision is now clearly presented to the world.*

View File

@ -1,48 +0,0 @@
# Python Files Cleanup Plan
## Current State
- 25 Python files in root directory
- Mix of core applications, tools, examples, and utilities
- Makes project navigation difficult
## Organization Strategy
### Keep in Root (Core Applications)
- app.py - Core application entry point
### Move to scripts/ (Startup & Deployment Scripts)
- deploy_to_production.py → scripts/deploy/
- start_graphrag.py → scripts/
- start_mcp_manager.py → scripts/
- start_services.py → scripts/
- update_env_config.py → scripts/
- test_n8n_integration.py → scripts/
- debug_api.py → scripts/debug/
### Move to examples/ (Analysis & Research Tools)
- company_transcript_analyzer.py → examples/research/
- earnings_transcript_research.py → examples/research/
- interactive_transcript_analyzer.py → examples/research/
- simple_transcript_test.py → examples/research/
- tesla_earnings_call.py → examples/research/
- seekingalpha_playwright_scraper.py → examples/research/
- yahoo_matrix_demo.py → examples/research/
### Move to tools/ (API & Utility Tools)
- rapidapi_checker.py → tools/
- rapidapi_demo.py → tools/
- rapidapi_detailed_explorer.py → tools/
- rapidapi_perpetual_machine.py → tools/
- rapidapi_subscription_scanner.py → tools/
### Move to src/ (Core Engines & Systems)
- jixia_perpetual_engine.py → src/engines/
- mongodb_graphrag.py → src/engines/
- mcp_manager.py → src/managers/
- smart_api_scheduler.py → src/schedulers/
- taigong_n8n_integration.py → src/integrations/
## Expected Result
- Clean root directory with only 1 main Python file
- Well-organized code structure by functionality
- Easier maintenance and development

View File

@ -1,53 +0,0 @@
# Root Directory Documentation Cleanup Plan
## Current State
- 28 markdown files in root directory
- Makes the project structure hard to navigate
- Mix of different types of documentation
## Organization Strategy
### Keep in Root (Core Project Docs)
- README.md - Main project overview
- CLAUDE.md - AI assistant instructions
- PROJECT_STRUCTURE.md - High-level architecture
### Move to docs/ (Technical Documentation)
- Anti_Reasoning_Monologue_Solution.md → docs/technical/
- Final_Baxian_Sanqing_Model_Configuration.md → docs/technical/
- Reasoning_Pattern_Detection_And_Filtering.md → docs/technical/
- Sanqing_Baxian_OpenRouter_Model_Assignment.md → docs/technical/
- Xiantian_Bagua_Debate_System_Design.md → docs/technical/
- GAMEFI_SYSTEM_SUMMARY.md → docs/systems/
- Platform_Specific_Avatar_Strategy.md → docs/strategies/
### Move to docs/setup/ (Setup & Deployment)
- CLAUDE_ACTION_SETUP.md → docs/setup/
- doppler-migration-guide.md → docs/setup/
- env_standardization_plan.md → docs/setup/
- github_deployment_plan.md → docs/setup/
- SETUP_WITH_PROXY.md → docs/setup/
### Move to docs/mcp/ (MCP Related)
- MCP_MANAGEMENT_SOLUTION.md → docs/mcp/
- mcp_manager_complete_package.zip.md → docs/mcp/
- mcp_manager_package.tar.md → docs/mcp/
- MCP_Driven_User_Acquisition_Funnel.md → docs/mcp/
### Move to docs/analysis/ (Analysis & Reports)
- rapidapi_mcp_analysis.md → docs/analysis/
- rapidapi_pool_analysis.md → docs/analysis/
- rapidapi_subscription_report.md → docs/analysis/
- MongoDB_to_Milvus_Fix.md → docs/analysis/
- openmanus_integration_strategies.md → docs/analysis/
### Move to docs/internal/ (Internal/Development)
- DEVELOPMENT_LOG.md → docs/internal/
- INTERNAL_NOTES.md → docs/internal/
- TODO_INTERNAL.md → docs/internal/
- file_lifecycle_policy.md → docs/internal/
## Expected Result
- Clean root directory with only 3 essential markdown files
- Well-organized documentation structure
- Easier navigation and maintenance

View File

@ -128,14 +128,14 @@ class ExternalVerificationSystem:
```python
# 八仙智能体配置
IMMORTAL_AGENTS = {
'tie_guai_li': {'role': '宏观经济分析', 'model': 'gpt-4'},
'han_zhong_li': {'role': '战略部署', 'model': 'claude-3'},
'zhang_guo_lao': {'role': '逆向分析', 'model': 'gemini-pro'},
'lu_dong_bin': {'role': '心理博弈', 'model': 'gpt-4'},
'lan_cai_he': {'role': '潜力发现', 'model': 'claude-3'},
'he_xian_gu': {'role': 'ESG政策', 'model': 'gemini-pro'},
'han_xiang_zi': {'role': '数据可视化', 'model': 'gpt-4'},
'cao_guo_jiu': {'role': '合规筛查', 'model': 'claude-3'}
'tie_guai_li': {'role': '宏观经济分析', 'model': 'gemini-2.5-flash'},
'han_zhong_li': {'role': '战略部署', 'model': 'gemini-2.5-flash'},
'zhang_guo_lao': {'role': '逆向分析', 'model': 'gemini-2.5-flash'},
'lu_dong_bin': {'role': '心理博弈', 'model': 'gemini-2.5-flash'},
'lan_cai_he': {'role': '潜力发现', 'model': 'gemini-2.5-flash'},
'he_xian_gu': {'role': 'ESG政策', 'model': 'gemini-2.5-flash'},
'han_xiang_zi': {'role': '数据可视化', 'model': 'gemini-2.5-flash'},
'cao_guo_jiu': {'role': '合规筛查', 'model': 'gemini-2.5-flash'}
}
```

249
main.py Normal file
View File

@ -0,0 +1,249 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
稷下学宫AI辩论系统主入口
提供命令行界面来运行不同的辩论模式
"""
import argparse
import asyncio
import sys
import os
from typing import Dict, Any, List, Tuple
# 将 src 目录添加到 Python 路径,以便能正确导入模块
project_root = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.join(project_root, 'src'))
from config.doppler_config import validate_config, get_database_config
from google.adk import Agent, Runner
from google.adk.sessions import InMemorySessionService, Session
from google.genai import types
import pymongo
from datetime import datetime
def check_environment(mode: str = "hybrid"):
"""检查并验证运行环境"""
print("🔧 检查运行环境...")
if not validate_config(mode=mode):
print("❌ 环境配置验证失败")
return False
print("✅ 环境检查通过")
return True
async def _get_llm_reply(runner: Runner, prompt: str) -> str:
"""一个辅助函数用于调用Runner并获取纯文本回复同时流式输出到控制台"""
# 每个调用创建一个新的会话
session = await runner.session_service.create_session(state={}, app_name=runner.app_name, user_id="debate_user")
content = types.Content(role='user', parts=[types.Part(text=prompt)])
response = runner.run_async(
user_id=session.user_id,
session_id=session.id,
new_message=content
)
reply = ""
async for event in response:
chunk = ""
if hasattr(event, 'content') and event.content and hasattr(event.content, 'parts'):
for part in event.content.parts:
if hasattr(part, 'text') and part.text:
chunk = str(part.text)
elif hasattr(event, 'text') and event.text:
chunk = str(event.text)
if chunk:
print(chunk, end="", flush=True)
reply += chunk
return reply.strip()
async def run_adk_turn_based_debate(topic: str, rounds: int = 2):
"""运行由太上老君主持的,基于八卦对立和顺序的辩论"""
try:
print(f"🚀 启动ADK八仙论道 (太上老君主持)...")
print(f"📋 辩论主题: {topic}")
print(f"🔄 辩论总轮数: {rounds}")
# 1. 初始化记忆银行
print("🧠 初始化记忆银行...")
from src.jixia.memory.factory import get_memory_backend
memory_bank = get_memory_backend()
print("✅ 记忆银行准备就绪。")
character_configs = {
"太上老君": {"name": "太上老君", "model": "gemini-2.5-flash", "instruction": "你是太上老君天道化身辩论的主持人。你的言辞沉稳、公正、充满智慧。你的任务是1. 对辩论主题进行开场介绍。2. 在每轮或每场对决前进行引导。3. 在辩论结束后,对所有观点进行全面、客观的总结。保持中立,不偏袒任何一方。"},
"吕洞宾": {"name": "吕洞宾", "model": "gemini-2.5-flash", "instruction": "你是吕洞宾(乾卦),男性代表,善于理性分析,逻辑性强,推理严密。"},
"何仙姑": {"name": "何仙姑", "model": "gemini-2.5-flash", "instruction": "你是何仙姑(坤卦),女性代表,注重平衡与和谐,善于创新思维。"},
"张果老": {"name": "张果老", "model": "gemini-2.5-flash", "instruction": "你是张果老(兑卦),老者代表,具传统智慧,发言厚重沉稳,经验导向。"},
"韩湘子": {"name": "韩湘子", "model": "gemini-2.5-flash", "instruction": "你是韩湘子(艮卦),少年代表,具创新思维,发言活泼灵动,具前瞻性。"},
"汉钟离": {"name": "汉钟离", "model": "gemini-2.5-flash", "instruction": "你是汉钟离(离卦),富者代表,有权威意识,发言威严庄重,逻辑清晰。"},
"蓝采和": {"name": "蓝采和", "model": "gemini-2.5-flash", "instruction": "你是蓝采和(坎卦),贫者代表,关注公平,发言平易近人。"},
"曹国舅": {"name": "曹国舅", "model": "gemini-2.5-flash", "instruction": "你是曹国舅(震卦),贵者代表,具商业思维,发言精明务实,效率优先。"},
"铁拐李": {"name": "铁拐李", "model": "gemini-2.5-flash", "instruction": "你是铁拐李(巽卦),贱者代表,具草根智慧,发言朴实直接,实用至上。"}
}
# 为每个Runner创建独立的SessionService
runners: Dict[str, Runner] = {
name: Runner(
app_name="稷下学宫八仙论道系统",
agent=Agent(name=config["name"], model=config["model"], instruction=config["instruction"]),
session_service=InMemorySessionService()
) for name, config in character_configs.items()
}
host_runner = runners["太上老君"]
debate_history = []
print("\n" + "="*20 + " 辩论开始 " + "="*20)
print(f"\n👑 太上老君: ", end="", flush=True)
opening_prompt = f"请为本次关于“{topic}”的辩论,发表一段公正、深刻的开场白,并宣布辩论开始。"
opening_statement = await _get_llm_reply(host_runner, opening_prompt)
print() # Newline after streaming
# --- 第一轮:核心对立辩论 ---
if rounds >= 1:
print(f"\n👑 太上老君: ", end="", flush=True)
round1_intro = await _get_llm_reply(host_runner, "请为第一轮核心对立辩论进行引导。")
print() # Newline after streaming
duel_pairs: List[Tuple[str, str, str]] = [
("乾坤对立 (男女)", "吕洞宾", "何仙姑"),
("兑艮对立 (老少)", "张果老", "韩湘子"),
("离坎对立 (富贫)", "汉钟离", "蓝采和"),
("震巽对立 (贵贱)", "曹国舅", "铁拐李")
]
for title, p1, p2 in duel_pairs:
print(f"\n--- {title} ---")
print(f"👑 太上老君: ", end="", flush=True)
duel_intro = await _get_llm_reply(host_runner, f"现在开始“{title}”的对决,请{p1}{p2}准备。")
print() # Newline after streaming
print(f"🗣️ {p1}: ", end="", flush=True)
s1 = await _get_llm_reply(runners[p1], f"主题:{topic}。作为开场,请从你的角度阐述观点。")
print(); debate_history.append(f"{p1}: {s1}")
await memory_bank.add_memory(agent_name=p1, content=s1, memory_type="statement", debate_topic=topic)
print(f"🗣️ {p2}: ", end="", flush=True)
s2 = await _get_llm_reply(runners[p2], f"主题:{topic}。对于刚才{p1}的观点“{s1[:50]}...”,请进行回应。")
print(); debate_history.append(f"{p2}: {s2}")
await memory_bank.add_memory(agent_name=p2, content=s2, memory_type="statement", debate_topic=topic)
print(f"🗣️ {p1}: ", end="", flush=True)
s3 = await _get_llm_reply(runners[p1], f"主题:{topic}。对于{p2}的回应“{s2[:50]}...”,请进行反驳。")
print(); debate_history.append(f"{p1}: {s3}")
await memory_bank.add_memory(agent_name=p1, content=s3, memory_type="statement", debate_topic=topic)
print(f"🗣️ {p2}: ", end="", flush=True)
s4 = await _get_llm_reply(runners[p2], f"主题:{topic}。针对{p1}的反驳“{s3[:50]}...”,请为本场对决做总结。")
print(); debate_history.append(f"{p2}: {s4}")
await memory_bank.add_memory(agent_name=p2, content=s4, memory_type="statement", debate_topic=topic)
await asyncio.sleep(1)
# --- 第二轮:先天八卦顺序发言 (集成记忆银行) ---
if rounds >= 2:
print(f"\n👑 太上老君: ", end="", flush=True)
round2_intro = await _get_llm_reply(host_runner, "请为第二轮,也就是结合场上观点的综合发言,进行引导。")
print() # Newline after streaming
baxi_sequence = ["吕洞宾", "张果老", "汉钟离", "曹国舅", "铁拐李", "蓝采和", "韩湘子", "何仙姑"]
for name in baxi_sequence:
print(f"\n--- {name}的回合 ---")
context = await memory_bank.get_agent_context(name, topic)
prompt = f"这是你关于“{topic}”的记忆上下文,请参考并对其他人的观点进行回应:\n{context}\n\n现在请从你的角色特点出发,继续发表你的看法。"
print(f"🗣️ {name}: ", end="", flush=True)
reply = await _get_llm_reply(runners[name], prompt)
print(); debate_history.append(f"{name}: {reply}")
await memory_bank.add_memory(agent_name=name, content=reply, memory_type="statement", debate_topic=topic)
await asyncio.sleep(1)
print("\n" + "="*20 + " 辩论结束 " + "="*20)
# 4. 保存辩论会话到记忆银行
print("\n💾 正在保存辩论会话记录到记忆银行...")
await memory_bank.save_debate_session(
debate_topic=topic,
participants=[name for name in character_configs.keys() if name != "太上老君"],
conversation_history=[{"agent": h.split(": ")[0], "content": ": ".join(h.split(": ")[1:])} for h in debate_history if ": " in h],
outcomes={}
)
print("✅ 辩论会话已保存到记忆银行。")
# 5. 保存辩论过程资产到MongoDB
db_config = get_database_config()
if db_config.get("mongodb_url"):
print("\n💾 正在保存辩论过程资产到 MongoDB...")
try:
client = pymongo.MongoClient(db_config["mongodb_url"])
db = client.get_database("jixia_academy")
collection = db.get_collection("debates")
summary_prompt = f"辩论已结束。以下是完整的辩论记录:\n\n{' '.join(debate_history)}\n\n请对本次辩论进行全面、公正、深刻的总结。"
print(f"\n👑 太上老君: ", end="", flush=True)
summary = await _get_llm_reply(host_runner, summary_prompt)
print() # Newline after streaming
debate_document = {
"topic": topic,
"rounds": rounds,
"timestamp": datetime.utcnow(),
"participants": [name for name in character_configs.keys() if name != "太上老君"],
"conversation": [{"agent": h.split(": ")[0], "content": ": ".join(h.split(": ")[1:])} for h in debate_history if ": " in h],
"summary": summary
}
collection.insert_one(debate_document)
print("✅ 辩论过程资产已成功保存到 MongoDB。")
client.close()
except Exception as e:
print(f"❌ 保存到 MongoDB 失败: {e}")
else:
print("⚠️ 未配置 MONGODB_URL跳过保存到 MongoDB。")
print(f"\n👑 太上老君: ", end="", flush=True)
summary_prompt = f"辩论已结束。以下是完整的辩论记录:\n\n{' '.join(debate_history)}\n\n请对本次辩论进行全面、公正、深刻的总结。"
summary = await _get_llm_reply(host_runner, summary_prompt)
print() # Newline after streaming
for runner in runners.values(): await runner.close()
print(f"\n🎉 ADK八仙轮流辩论完成!")
return True
except Exception as e:
print(f"❌ 运行ADK八仙轮流辩论失败: {e}")
import traceback
traceback.print_exc()
return False
async def main_async(args):
if not check_environment(mode="google_adk"): return 1
await run_adk_turn_based_debate(args.topic, args.rounds)
return 0
def main():
parser = argparse.ArgumentParser(description="稷下学宫AI辩论系统 (ADK版)")
parser.add_argument("--topic", "-t", default="AI是否应该拥有创造力", help="辩论主题")
parser.add_argument("--rounds", "-r", type=int, default=2, choices=[1, 2], help="辩论轮数 (1: 核心对立, 2: 对立+顺序发言)")
args = parser.parse_args()
try:
sys.exit(asyncio.run(main_async(args)))
except KeyboardInterrupt:
print("\n\n👋 用户中断,退出程序")
sys.exit(0)
except Exception as e:
print(f"\n\n💥 程序运行出错: {e}")
import traceback
traceback.print_exc()
sys.exit(1)
if __name__ == "__main__":
main()

View File

@ -47,3 +47,13 @@ google-cloud-aiplatform>=1.38.0
# OpenAI Swarm (保留兼容性,逐步替换)
# pip install git+https://github.com/openai/swarm.git
# 文档管理工具
PyYAML>=6.0
python-frontmatter>=1.0.0
# 市场数据 - OpenBB可选安装
openbb>=4.1.0
# 新增:从 .env 加载本地环境变量
python-dotenv>=1.0.1

View File

@ -0,0 +1,76 @@
#!/usr/bin/env python3
"""
API健康检查模块
用于测试与外部服务的连接如OpenRouter和RapidAPI
"""
import os
import requests
import sys
from pathlib import Path
# 将项目根目录添加到Python路径以便导入config模块
project_root = Path(__file__).parent.parent
sys.path.insert(0, str(project_root))
from config.doppler_config import get_openrouter_key, get_rapidapi_key
def test_openrouter_api() -> bool:
"""
测试与OpenRouter API的连接和认证
"""
api_key = get_openrouter_key()
if not api_key:
print("❌ OpenRouter API Key not found.")
return False
url = "https://openrouter.ai/api/v1/models"
headers = {"Authorization": f"Bearer {api_key}"}
try:
response = requests.get(url, headers=headers, timeout=10)
if response.status_code == 200:
print("✅ OpenRouter API connection successful.")
return True
else:
print(f"❌ OpenRouter API connection failed. Status: {response.status_code}, Response: {response.text[:100]}")
return False
except requests.RequestException as e:
print(f"❌ OpenRouter API request failed: {e}")
return False
def test_rapidapi_connection() -> bool:
"""
测试与RapidAPI的连接和认证
这里我们使用一个简单的免费的API端点进行测试
"""
api_key = get_rapidapi_key()
if not api_key:
print("❌ RapidAPI Key not found.")
return False
# 使用一个通用的、通常可用的RapidAPI端点进行测试
url = "https://alpha-vantage.p.rapidapi.com/query"
querystring = {"function":"TOP_GAINERS_LOSERS"}
headers = {
"x-rapidapi-host": "alpha-vantage.p.rapidapi.com",
"x-rapidapi-key": api_key
}
try:
response = requests.get(url, headers=headers, params=querystring, timeout=15)
# Alpha Vantage的免费套餐可能会返回错误但只要RapidAPI认证通过状态码就不是401或403
if response.status_code not in [401, 403]:
print(f"✅ RapidAPI connection successful (Status: {response.status_code}).")
return True
else:
print(f"❌ RapidAPI authentication failed. Status: {response.status_code}, Response: {response.text[:100]}")
return False
except requests.RequestException as e:
print(f"❌ RapidAPI request failed: {e}")
return False
if __name__ == "__main__":
print("🩺 Running API Health Checks...")
test_openrouter_api()
test_rapidapi_connection()

View File

@ -0,0 +1,137 @@
import glob
import frontmatter
import datetime
import argparse
from pathlib import Path
# --- Configuration ---
# Directories to exclude from scanning
EXCLUDE_DIRS = ['venv', 'node_modules', '.git']
# Default metadata template for the --fix option
DEFAULT_METADATA_TEMPLATE = {
'title': "Default Title",
'status': "spring",
'owner': "TBD",
'created': datetime.date.today().strftime('%Y-%m-%d'),
'review_by': (datetime.date.today() + datetime.timedelta(days=180)).strftime('%Y-%m-%d'),
'tags': ["untagged"]
}
def get_project_files(project_root):
"""Get all markdown files, respecting exclusions."""
all_files = project_root.rglob('*.md')
filtered_files = []
for file_path in all_files:
if not any(excluded_dir in file_path.parts for excluded_dir in EXCLUDE_DIRS):
filtered_files.append(str(file_path))
return filtered_files
def add_default_frontmatter(file_path):
"""Adds a default YAML front matter block to a file that lacks one."""
try:
with open(file_path, 'r+', encoding='utf-8') as f:
content = f.read()
f.seek(0, 0)
# Create a new post object with default metadata and existing content
new_post = frontmatter.Post(content, **DEFAULT_METADATA_TEMPLATE)
# Write the serialized post (metadata + content) back to the file
f.write(frontmatter.dumps(new_post))
print(f"[FIXED] {file_path}: Added default front matter.")
return True
except Exception as e:
print(f"[CRITICAL] {file_path}: Could not apply fix. Error: {e}")
return False
def validate_doc_lifecycle(fix_missing=False):
"""
Scans and validates markdown files, with an option to fix files missing front matter.
"""
project_root = Path(__file__).parent.parent
markdown_files = get_project_files(project_root)
print(f"Scanning {len(markdown_files)} Markdown files (vendor directories excluded)...")
all_docs = []
errors = []
warnings = []
fixed_count = 0
for md_file in markdown_files:
try:
post = frontmatter.load(md_file)
metadata = post.metadata
if not metadata:
if fix_missing:
if add_default_frontmatter(md_file):
fixed_count += 1
else:
warnings.append(f"[SKIPPED] {md_file}: No YAML front matter found. Use --fix to add a template.")
continue
doc_info = {'path': md_file}
required_fields = ['title', 'status', 'owner', 'created', 'review_by']
missing_fields = [field for field in required_fields if field not in metadata]
if missing_fields:
errors.append(f"[ERROR] {md_file}: Missing required fields: {', '.join(missing_fields)}")
continue
doc_info.update(metadata)
allowed_statuses = ['spring', 'summer', 'autumn', 'winter']
if metadata.get('status') not in allowed_statuses:
errors.append(f"[ERROR] {md_file}: Invalid status '{metadata.get('status')}'. Must be one of {allowed_statuses}")
review_by_date = metadata.get('review_by')
if review_by_date:
if isinstance(review_by_date, str):
review_by_date = datetime.datetime.strptime(review_by_date, '%Y-%m-%d').date()
if review_by_date < datetime.date.today():
warnings.append(f"[WARNING] {md_file}: Review date ({review_by_date}) has passed.")
all_docs.append(doc_info)
except Exception as e:
errors.append(f"[CRITICAL] {md_file}: Could not parse file. Error: {e}")
print("\n--- Validation Report ---")
if not errors and not warnings:
print("✅ All documents with front matter are valid and up-to-date.")
if warnings:
print("\n⚠️ Warnings:")
for warning in warnings:
print(warning)
if errors:
print("\n❌ Errors:")
for error in errors:
print(error)
print(f"\n--- Summary ---")
print(f"Total files scanned: {len(markdown_files)}")
print(f"Files with valid front matter: {len(all_docs)}")
if fix_missing:
print(f"Files automatically fixed: {fixed_count}")
print(f"Warnings: {len(warnings)}")
print(f"Errors: {len(errors)}")
return len(errors) == 0
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Validate and manage the lifecycle of Markdown documents.")
parser.add_argument(
'--fix',
action='store_true',
help="Automatically add a default front matter template to any document that is missing one."
)
args = parser.parse_args()
is_valid = validate_doc_lifecycle(fix_missing=args.fix)
if not is_valid:
exit(1)

View File

@ -14,7 +14,10 @@ try:
except ImportError:
ADK_AVAILABLE = False
print("⚠️ Google ADK 未安装")
InvocationContext = Any
# 创建一个简单的 InvocationContext 替代类
class InvocationContext:
def __init__(self, *args, **kwargs):
pass
from src.jixia.memory.base_memory_bank import MemoryBankProtocol
from src.jixia.memory.factory import get_memory_backend

View File

@ -1,226 +0,0 @@
# 🔮 太公心易 - AI仙人人格重设计
> *"乾兑离震,坤艮坎巽,八卦定位,投资人生"*
基于易经先天八卦的深刻理解,重新设计八仙的投资人格和专业定位。
## ⚡ 阳卦主动派 (乾兑离震)
### 🗡️ 吕洞宾 (乾☰) - 主动投资剑仙
> *"以剑仙之名发誓,主动投资才是王道!"*
**卦象特质**: 乾为天,纯阳之卦,主动进取
**投资哲学**: 主动选股,价值发现,敢于重仓
**投资风格**:
- 深度研究,主动出击
- 集中持股,敢于下重注
- 长期持有,价值实现
- 逆向投资,独立思考
**经典语录**:
- *"被动投资是懒人的选择,真正的收益来自主动发现!"*
- *"以剑仙之名发誓,这只股票被严重低估了!"*
- *"宁可错过一千,不可放过一个真正的机会!"*
**与何仙姑的根本对立**: 主动 vs 被动的投资哲学之争
---
### 🐴 张果老 (兑☱) - 传统投资大师
> *"倒骑驴看市场,传统方法最可靠!"*
**卦象特质**: 兑为泽,少女之卦,但张果老倒骑驴,反向思维
**投资哲学**: 传统价值投资,经典方法论
**投资风格**:
- 巴菲特式价值投资
- 财务分析,基本面研究
- 长期持有,复利增长
- 反向思维,独特视角
**经典语录**:
- *"倒骑驴看市场,传统价值投资永不过时!"*
- *"新概念层出不穷,但价值投资的本质不变!"*
- *"年轻人总想走捷径,殊不知最远的路就是捷径!"*
**与韩湘子的时代对立**: 传统 vs 新潮的投资理念之争
---
### 🔥 钟汉离 (离☲) - 热点追逐炼金师
> *"哪里有热点,哪里就有机会!"*
**卦象特质**: 离为火,光明炽热,追逐热点
**投资哲学**: 专追火热赛道,风口投资
**投资风格**:
- 热点题材,概念炒作
- 趋势跟踪,动量投资
- 快进快出,灵活操作
- 新兴产业,科技前沿
**经典语录**:
- *"炼金需要烈火,投资需要热点!"*
- *"风口上的猪都能飞,关键是要找到风口!"*
- *"冷门的时候布局,热门的时候收割!"*
**与蓝采和的关注对立**: 热点 vs 冷门的投资焦点之争
---
### 👑 曹国舅 (震☳) - 国家队视角投资者
> *"站在国家队的角度,这个布局很明显。"*
**卦象特质**: 震为雷,威严震动,国家意志
**投资哲学**: 贵不可触,国家队思维
**投资风格**:
- 政策导向,国家战略
- 大盘蓝筹,央企国企
- 长期布局,稳健收益
- 宏观思维,全局视野
**经典语录**:
- *"投资要站在国家高度,个人利益服从大局!"*
- *"政策就是最大的基本面,跟着国家走不会错!"*
- *"贵族的投资,平民学不来!"*
**与铁拐李的阶层对立**: 权贵 vs 草根的投资视角之争
---
## 🌙 阴卦被动派 (坤艮坎巽)
### 🌸 何仙姑 (坤☷) - 被动投资女王
> *"作为唯一的女仙,我选择被动投资的智慧。"*
**卦象特质**: 坤为地,纯阴之卦,包容承载
**投资哲学**: 纯被动投资ETF配置专家
**投资风格**:
- 指数基金ETF配置
- 分散投资,风险控制
- 定投策略,时间复利
- 低成本,长期持有
**经典语录**:
- *"市场无法预测,被动投资是最理性的选择!"*
- *"女性投资者更适合被动策略,稳健胜过激进!"*
- *"不要试图战胜市场,成为市场的一部分!"*
**与吕洞宾的根本对立**: 被动 vs 主动的投资哲学之争
---
### 🎵 韩湘子 (巽☴) - Meme币先锋
> *"别人笑我太疯癫我笑他人看不穿meme的价值"*
**卦象特质**: 巽为风,灵活多变,新潮前卫
**投资哲学**: 专做meme币拥抱新时代
**投资风格**:
- Meme币空气币嗅觉
- 社交媒体,病毒传播
- 快速轮动,追逐热度
- 年轻化,去中心化
**经典语录**:
- *"Meme币不是传销是新时代的价值表达"*
- *"音律告诉我这个meme要火了"*
- *"传统投资者不懂,我们玩的是文化和共识!"*
**与张果老的时代对立**: 新潮 vs 传统的投资理念之争
---
### 🎭 蓝采和 (坎☵) - 妖股猎手
> *"妖股之中有真龙,寒门也能出贵子!"*
**卦象特质**: 坎为水,深藏不露,寒门贵子
**投资哲学**: 专做妖股penny stock专家
**投资风格**:
- 小盘股,妖股挖掘
- 价值发现,逆向投资
- 深度研究,独特视角
- 寒门出身,草根智慧
**经典语录**:
- *"别人不屑的penny stock往往藏着大机会"*
- *"妖股虽妖,但妖中有仙!"*
- *"寒门贵子,靠的是眼光和坚持!"*
**与钟汉离的关注对立**: 冷门 vs 热点的投资焦点之争
---
### 🦴 铁拐李 (艮☶) - 堕落作手传奇
> *"我虽残疾,但这双手曾经翻云覆雨!"*
**卦象特质**: 艮为山,止而不动,但内心波澜壮阔
**投资哲学**: 曾经的利弗莫尔,徐翔的化身
**投资风格**:
- 顶级操盘,技术分析
- 短线交易,快进快出
- 杠杆操作,高风险高收益
- 江湖经验,实战智慧
**经典语录**:
- *"我虽然残疾,但这双手曾经操控过亿万资金!"*
- *"技术分析是我的拐杖,支撑我在市场中行走!"*
- *"草根作手,靠的是实力,不是出身!"*
**与曹国舅的阶层对立**: 草根 vs 权贵的投资视角之争
---
## 🔄 八卦对立辩论机制
### 乾坤对立 - 主动vs被动的根本之争
```
吕洞宾: "主动投资能获得超额收益,被动投资只能获得市场平均收益!"
何仙姑: "主动投资的超额收益大多被高额费用吃掉,被动投资更稳健!"
```
### 兑巽对立 - 传统vs新潮的时代之争
```
张果老: "传统价值投资经得起时间考验,新概念都是昙花一现!"
韩湘子: "时代变了meme币代表新的价值共识传统方法已经过时"
```
### 离坎对立 - 热点vs冷门的关注之争
```
钟汉离: "热点就是机会,要敢于追逐风口,抓住时代红利!"
蓝采和: "真正的机会在被忽视的角落,妖股中藏着真龙!"
```
### 震艮对立 - 权贵vs草根的阶层之争
```
曹国舅: "投资要有大格局,站在国家高度,个人得失不重要!"
铁拐李: "草根也有草根的智慧,实力面前人人平等!"
```
## 🎯 投资标的全覆盖策略
### 任何投资标的的八卦分析法
以**比特币**为例:
- **乾 (吕洞宾)**: 主动配置比特币,深度研究区块链价值
- **兑 (张果老)**: 传统视角看比特币,对比黄金属性
- **离 (钟汉离)**: 追逐比特币热点关注ETF通过等催化剂
- **震 (曹国舅)**: 国家队角度,关注央行数字货币政策
- **坤 (何仙姑)**: 被动配置通过比特币ETF分散投资
- **艮 (铁拐李)**: 技术分析比特币,短线操作获利
- **坎 (蓝采和)**: 挖掘小众加密货币,寻找下一个比特币
- **巽 (韩湘子)**: 专注meme币dogecoin、shiba等
## 🎭 三清八仙完整体系
### 三清 = Overlay (决策层)
- **太上老君**: 综合八仙观点,最终投资决策
- **元始天尊**: 技术分析支撑,数据驱动决策
- **通天教主**: 市场情绪分析,群体心理把握
### 八仙 = Underlay (执行层)
- **平辈关系**: 可以激烈争论,观点碰撞
- **对立统一**: 形成完整的投资视角光谱
- **专业互补**: 覆盖所有投资风格和资产类别
---
**🔮 这才是真正的太公心易投资体系!以易经智慧指导现代投资!**

View File

@ -1,318 +0,0 @@
#!/usr/bin/env python3
"""
四仙论道 - 基于OpenAI Swarm的辩论系统
使用OpenRouter API四仙轮流论道
"""
import os
import asyncio
import json
import subprocess
from datetime import datetime
from swarm import Swarm, Agent
from typing import Dict, List, Any
class BaxianSwarmDebate:
"""基于Swarm的四仙论道系统"""
def __init__(self):
# 从Doppler获取API密钥
self.api_key = self.get_secure_api_key()
# 初始化Swarm客户端使用OpenRouter
self.client = Swarm()
# 设置OpenRouter配置
os.environ["OPENAI_API_KEY"] = self.api_key
os.environ["OPENAI_BASE_URL"] = "https://openrouter.ai/api/v1"
# 四仙配置
self.immortals_config = {
'吕洞宾': {
'role': '剑仙投资顾问',
'gua_position': '乾☰',
'specialty': '技术分析',
'style': '一剑封喉,直指要害',
'personality': '犀利直接,善于识破市场迷雾'
},
'何仙姑': {
'role': '慈悲风控专家',
'gua_position': '坤☷',
'specialty': '风险控制',
'style': '荷花在手,全局在胸',
'personality': '温和坚定,关注风险控制'
},
'铁拐李': {
'role': '逆向思维大师',
'gua_position': '震☳',
'specialty': '逆向投资',
'style': '铁拐一点,危机毕现',
'personality': '不拘一格,挑战主流观点'
},
'蓝采和': {
'role': '情绪分析师',
'gua_position': '巽☴',
'specialty': '市场情绪',
'style': '花篮一抛,情绪明了',
'personality': '敏锐活泼,感知市场情绪'
}
}
# 创建四仙代理
self.agents = self.create_agents()
self.debate_history = []
def get_secure_api_key(self):
"""从Doppler安全获取API密钥"""
try:
result = subprocess.run(
['doppler', 'secrets', 'get', 'OPENROUTER_API_KEY_1', '--json'],
capture_output=True,
text=True,
check=True
)
secret_data = json.loads(result.stdout)
return secret_data['OPENROUTER_API_KEY_1']['computed']
except Exception as e:
print(f"❌ 从Doppler获取密钥失败: {e}")
return None
def create_agents(self) -> Dict[str, Agent]:
"""创建四仙Swarm代理"""
agents = {}
# 吕洞宾 - 剑仙投资顾问
agents['吕洞宾'] = Agent(
name="吕洞宾",
instructions=f"""
你是吕洞宾八仙之首剑仙投资顾问
你的特点
- 位居{self.immortals_config['吕洞宾']['gua_position']}之位代表天行健
- 以剑气纵横的气势分析市场{self.immortals_config['吕洞宾']['style']}
- 擅长{self.immortals_config['吕洞宾']['specialty']}善于识破市场迷雾
- 性格{self.immortals_config['吕洞宾']['personality']}
在辩论中你要
1. 提出犀利的技术分析观点
2. 用数据和图表支撑论断
3. 挑战其他仙人的观点
4. 保持仙风道骨的表达风格
5. 论道完毕后建议下一位仙人发言
请用古雅的语言风格结合现代金融分析
""",
functions=[self.transfer_to_hexiangu]
)
# 何仙姑 - 慈悲风控专家
agents['何仙姑'] = Agent(
name="何仙姑",
instructions=f"""
你是何仙姑八仙中唯一的女仙慈悲风控专家
你的特点
- 位居{self.immortals_config['何仙姑']['gua_position']}之位代表厚德载物
- {self.immortals_config['何仙姑']['style']}以母性关怀关注投资风险
- 擅长{self.immortals_config['何仙姑']['specialty']}善于发现隐藏危险
- 性格{self.immortals_config['何仙姑']['personality']}
在辩论中你要
1. 重点关注风险控制和投资安全
2. 提醒其他仙人注意潜在危险
3. 提供稳健的投资建议
4. 平衡激进与保守的观点
5. 论道完毕后建议下一位仙人发言
请用温和但坚定的语调体现女性的细致和关怀
""",
functions=[self.transfer_to_tieguaili]
)
# 铁拐李 - 逆向思维大师
agents['铁拐李'] = Agent(
name="铁拐李",
instructions=f"""
你是铁拐李八仙中的逆向思维大师
你的特点
- 位居{self.immortals_config['铁拐李']['gua_position']}之位代表雷动风行
- {self.immortals_config['铁拐李']['style']}总是从反面角度思考
- 擅长{self.immortals_config['铁拐李']['specialty']}发现逆向机会
- 性格{self.immortals_config['铁拐李']['personality']}敢于挑战共识
在辩论中你要
1. 提出与众不同的逆向观点
2. 挑战市场共识和主流观点
3. 寻找逆向投资机会
4. 用数据证明反向逻辑
5. 论道完毕后建议下一位仙人发言
请用直率犀利的语言体现逆向思维的独特视角
""",
functions=[self.transfer_to_lancaihe]
)
# 蓝采和 - 情绪分析师
agents['蓝采和'] = Agent(
name="蓝采和",
instructions=f"""
你是蓝采和八仙中的情绪分析师
你的特点
- 位居{self.immortals_config['蓝采和']['gua_position']}之位代表风行草偃
- {self.immortals_config['蓝采和']['style']}敏锐感知市场情绪
- 擅长{self.immortals_config['蓝采和']['specialty']}分析投资者心理
- 性格{self.immortals_config['蓝采和']['personality']}
在辩论中你要
1. 分析市场情绪和投资者心理
2. 关注社交媒体和舆论趋势
3. 提供情绪面的投资建议
4. 用生动的比喻说明观点
5. 作为最后发言者要总结四仙观点
请用轻松活泼的语调体现对市场情绪的敏锐洞察
""",
functions=[self.summarize_debate]
)
return agents
def transfer_to_hexiangu(self):
"""转到何仙姑"""
return self.agents['何仙姑']
def transfer_to_tieguaili(self):
"""转到铁拐李"""
return self.agents['铁拐李']
def transfer_to_lancaihe(self):
"""转到蓝采和"""
return self.agents['蓝采和']
def summarize_debate(self):
"""蓝采和总结辩论"""
# 这里可以返回一个特殊的总结agent或者标记辩论结束
return None
async def conduct_debate(self, topic: str, context: Dict[str, Any] = None) -> Dict[str, Any]:
"""进行四仙论道"""
if not self.api_key:
print("❌ 无法获取API密钥无法进行论道")
return None
print("🎭 四仙论道开始!")
print("=" * 80)
print(f"🎯 论道主题: {topic}")
print()
# 构建初始提示
initial_prompt = self.build_debate_prompt(topic, context)
try:
# 从吕洞宾开始论道
print("⚔️ 吕洞宾仙长请先发言...")
response = self.client.run(
agent=self.agents['吕洞宾'],
messages=[{"role": "user", "content": initial_prompt}]
)
print("\n🎊 四仙论道圆满结束!")
print("📊 论道结果已生成")
# 生成论道结果
debate_result = {
"debate_id": f"swarm_debate_{datetime.now().strftime('%Y%m%d_%H%M%S')}",
"topic": topic,
"participants": list(self.agents.keys()),
"messages": response.messages if hasattr(response, 'messages') else [],
"final_output": response.messages[-1]["content"] if response.messages else "",
"timestamp": datetime.now().isoformat(),
"framework": "OpenAI Swarm"
}
self.debate_history.append(debate_result)
return debate_result
except Exception as e:
print(f"❌ 论道过程中出错: {e}")
return None
def build_debate_prompt(self, topic: str, context: Dict[str, Any] = None) -> str:
"""构建论道提示"""
context_str = ""
if context:
context_str = f"\n背景信息:\n{json.dumps(context, indent=2, ensure_ascii=False)}\n"
prompt = f"""
🎭 四仙论道正式开始
论道主题: {topic}
{context_str}
论道规则:
1. 四仙按序发言吕洞宾 何仙姑 铁拐李 蓝采和
2. 每位仙人从自己的专业角度分析
3. 必须提供具体的数据或逻辑支撑
4. 可以质疑前面仙人的观点
5. 保持仙风道骨的表达风格
6. 蓝采和作为最后发言者要综合总结
请吕洞宾仙长首先发言展现剑仙的犀利分析
"""
return prompt
def print_debate_summary(self, debate_result: Dict[str, Any]):
"""打印论道总结"""
print("\n🌟 四仙论道总结")
print("=" * 60)
print(f"主题: {debate_result['topic']}")
print(f"参与仙人: {', '.join(debate_result['participants'])}")
print(f"框架: {debate_result['framework']}")
print(f"时间: {debate_result['timestamp']}")
print("\n最终结论:")
print(debate_result['final_output'])
print("\n🔗 使用Swarm handoff机制实现自然的仙人交接")
print("✅ 相比AutoGen配置更简洁性能更优")
async def main():
"""主函数"""
print("🐝 四仙论道 - OpenAI Swarm版本")
print("🔐 使用Doppler安全管理API密钥")
print("🚀 基于OpenRouter的轻量级多智能体系统")
print()
# 创建论道系统
debate_system = BaxianSwarmDebate()
if not debate_system.api_key:
print("❌ 无法获取API密钥请检查Doppler配置")
return
# 论道主题
topics = [
"英伟达股价走势AI泡沫还是技术革命",
"美联储政策转向2024年降息预期分析",
"比特币vs黄金谁是更好的避险资产",
"中国房地产市场:触底反弹还是继续下行?"
]
# 随机选择主题(这里选第一个作为示例)
topic = topics[0]
# 构建市场背景
context = {
"market_data": "英伟达当前股价$120市值$3TP/E比率65",
"recent_news": ["ChatGPT-5即将发布", "中国AI芯片突破", "美国对华芯片制裁升级"],
"analyst_consensus": "买入评级占70%,目标价$150"
}
# 进行论道
result = await debate_system.conduct_debate(topic, context)
if result:
debate_system.print_debate_summary(result)
else:
print("❌ 论道失败")
if __name__ == "__main__":
asyncio.run(main())

View File

@ -0,0 +1,685 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
多群聊协调系统
管理主辩论群内部讨论群策略会议群和Human干预群之间的协调
"""
import asyncio
import json
from typing import Dict, List, Any, Optional, Callable
from dataclasses import dataclass, field
from enum import Enum
from datetime import datetime, timedelta
import logging
class ChatType(Enum):
"""群聊类型"""
MAIN_DEBATE = "主辩论群" # 公开辩论
INTERNAL_DISCUSSION = "内部讨论群" # 团队内部讨论
STRATEGY_MEETING = "策略会议群" # 策略制定
HUMAN_INTERVENTION = "Human干预群" # 人工干预
OBSERVATION = "观察群" # 观察和记录
class MessagePriority(Enum):
"""消息优先级"""
LOW = 1
NORMAL = 2
HIGH = 3
URGENT = 4
CRITICAL = 5
class CoordinationAction(Enum):
"""协调动作"""
ESCALATE = "升级" # 升级到更高级别群聊
DELEGATE = "委派" # 委派到专门群聊
BROADCAST = "广播" # 广播到多个群聊
FILTER = "过滤" # 过滤不相关消息
MERGE = "合并" # 合并相关讨论
ARCHIVE = "归档" # 归档历史讨论
@dataclass
class ChatMessage:
"""群聊消息"""
id: str
chat_type: ChatType
sender: str
content: str
timestamp: datetime
priority: MessagePriority = MessagePriority.NORMAL
tags: List[str] = field(default_factory=list)
related_messages: List[str] = field(default_factory=list)
metadata: Dict[str, Any] = field(default_factory=dict)
@dataclass
class ChatRoom:
"""群聊房间"""
id: str
chat_type: ChatType
name: str
description: str
participants: List[str] = field(default_factory=list)
moderators: List[str] = field(default_factory=list)
is_active: bool = True
created_at: datetime = field(default_factory=datetime.now)
last_activity: datetime = field(default_factory=datetime.now)
message_history: List[ChatMessage] = field(default_factory=list)
settings: Dict[str, Any] = field(default_factory=dict)
@dataclass
class CoordinationRule:
"""协调规则"""
id: str
name: str
description: str
source_chat_types: List[ChatType]
target_chat_types: List[ChatType]
trigger_conditions: Dict[str, Any]
action: CoordinationAction
priority: int = 1
is_active: bool = True
created_at: datetime = field(default_factory=datetime.now)
class MultiChatCoordinator:
"""多群聊协调器"""
def __init__(self):
self.chat_rooms: Dict[str, ChatRoom] = {}
self.coordination_rules: Dict[str, CoordinationRule] = {}
self.message_queue: List[ChatMessage] = []
self.event_handlers: Dict[str, List[Callable]] = {}
self.logger = logging.getLogger(__name__)
# 初始化默认群聊房间
self._initialize_default_rooms()
# 初始化默认协调规则
self._initialize_default_rules()
def _initialize_default_rooms(self):
"""初始化默认群聊房间"""
default_rooms = [
{
"id": "main_debate",
"chat_type": ChatType.MAIN_DEBATE,
"name": "主辩论群",
"description": "公开辩论的主要场所",
"participants": ["正1", "正2", "正3", "正4", "反1", "反2", "反3", "反4"],
"moderators": ["系统"],
"settings": {
"max_message_length": 500,
"speaking_time_limit": 120, # 秒
"auto_moderation": True
}
},
{
"id": "positive_internal",
"chat_type": ChatType.INTERNAL_DISCUSSION,
"name": "正方内部讨论群",
"description": "正方团队内部策略讨论",
"participants": ["正1", "正2", "正3", "正4"],
"moderators": ["正1"],
"settings": {
"privacy_level": "high",
"auto_archive": True
}
},
{
"id": "negative_internal",
"chat_type": ChatType.INTERNAL_DISCUSSION,
"name": "反方内部讨论群",
"description": "反方团队内部策略讨论",
"participants": ["反1", "反2", "反3", "反4"],
"moderators": ["反1"],
"settings": {
"privacy_level": "high",
"auto_archive": True
}
},
{
"id": "strategy_meeting",
"chat_type": ChatType.STRATEGY_MEETING,
"name": "策略会议群",
"description": "高级策略制定和决策",
"participants": ["正1", "反1", "系统"],
"moderators": ["系统"],
"settings": {
"meeting_mode": True,
"record_decisions": True
}
},
{
"id": "human_intervention",
"chat_type": ChatType.HUMAN_INTERVENTION,
"name": "Human干预群",
"description": "人工干预和监督",
"participants": ["Human", "系统"],
"moderators": ["Human"],
"settings": {
"alert_threshold": "high",
"auto_escalation": True
}
},
{
"id": "observation",
"chat_type": ChatType.OBSERVATION,
"name": "观察群",
"description": "观察和记录所有活动",
"participants": ["观察者", "记录员"],
"moderators": ["系统"],
"settings": {
"read_only": True,
"full_logging": True
}
}
]
for room_config in default_rooms:
room = ChatRoom(**room_config)
self.chat_rooms[room.id] = room
def _initialize_default_rules(self):
"""初始化默认协调规则"""
default_rules = [
{
"id": "escalate_urgent_to_human",
"name": "紧急情况升级到Human",
"description": "当检测到紧急情况时自动升级到Human干预群",
"source_chat_types": [ChatType.MAIN_DEBATE, ChatType.INTERNAL_DISCUSSION],
"target_chat_types": [ChatType.HUMAN_INTERVENTION],
"trigger_conditions": {
"priority": MessagePriority.URGENT,
"keywords": ["紧急", "错误", "异常", "停止"]
},
"action": CoordinationAction.ESCALATE,
"priority": 1
},
{
"id": "strategy_to_internal",
"name": "策略决策分发到内部群",
"description": "将策略会议的决策分发到相关内部讨论群",
"source_chat_types": [ChatType.STRATEGY_MEETING],
"target_chat_types": [ChatType.INTERNAL_DISCUSSION],
"trigger_conditions": {
"tags": ["决策", "策略", "指令"]
},
"action": CoordinationAction.BROADCAST,
"priority": 2
},
{
"id": "filter_noise",
"name": "过滤噪音消息",
"description": "过滤低质量或无关的消息",
"source_chat_types": [ChatType.MAIN_DEBATE],
"target_chat_types": [],
"trigger_conditions": {
"priority": MessagePriority.LOW,
"content_length": {"max": 10}
},
"action": CoordinationAction.FILTER,
"priority": 3
},
{
"id": "archive_old_discussions",
"name": "归档旧讨论",
"description": "自动归档超过时间限制的讨论",
"source_chat_types": [ChatType.INTERNAL_DISCUSSION],
"target_chat_types": [ChatType.OBSERVATION],
"trigger_conditions": {
"age_hours": 24,
"inactivity_hours": 2
},
"action": CoordinationAction.ARCHIVE,
"priority": 4
}
]
for rule_config in default_rules:
rule = CoordinationRule(**rule_config)
self.coordination_rules[rule.id] = rule
async def send_message(self, chat_id: str, sender: str, content: str,
priority: MessagePriority = MessagePriority.NORMAL,
tags: List[str] = None) -> ChatMessage:
"""发送消息到指定群聊"""
if chat_id not in self.chat_rooms:
raise ValueError(f"群聊 {chat_id} 不存在")
chat_room = self.chat_rooms[chat_id]
# 检查发送者权限(系统用户有特殊权限)
if sender != "系统" and sender not in chat_room.participants and sender not in chat_room.moderators:
raise PermissionError(f"用户 {sender} 没有权限在群聊 {chat_id} 中发言")
# 创建消息
message = ChatMessage(
id=f"{chat_id}_{datetime.now().timestamp()}",
chat_type=chat_room.chat_type,
sender=sender,
content=content,
timestamp=datetime.now(),
priority=priority,
tags=tags or []
)
# 添加到群聊历史
chat_room.message_history.append(message)
chat_room.last_activity = datetime.now()
# 添加到消息队列进行协调处理
self.message_queue.append(message)
# 触发事件处理
await self._trigger_event_handlers("message_sent", message)
# 处理协调规则
await self._process_coordination_rules(message)
self.logger.info(f"消息已发送到 {chat_id}: {sender} - {content[:50]}...")
return message
async def _process_coordination_rules(self, message: ChatMessage):
"""处理协调规则"""
for rule in self.coordination_rules.values():
if not rule.is_active:
continue
# 检查源群聊类型
if message.chat_type not in rule.source_chat_types:
continue
# 检查触发条件
if await self._check_trigger_conditions(message, rule.trigger_conditions):
await self._execute_coordination_action(message, rule)
async def _check_trigger_conditions(self, message: ChatMessage, conditions: Dict[str, Any]) -> bool:
"""检查触发条件"""
# 检查优先级
if "priority" in conditions:
if message.priority != conditions["priority"]:
return False
# 检查关键词
if "keywords" in conditions:
keywords = conditions["keywords"]
if not any(keyword in message.content for keyword in keywords):
return False
# 检查标签
if "tags" in conditions:
required_tags = conditions["tags"]
if not any(tag in message.tags for tag in required_tags):
return False
# 检查内容长度
if "content_length" in conditions:
length_rules = conditions["content_length"]
content_length = len(message.content)
if "min" in length_rules and content_length < length_rules["min"]:
return False
if "max" in length_rules and content_length > length_rules["max"]:
return False
# 检查消息年龄
if "age_hours" in conditions:
age_limit = timedelta(hours=conditions["age_hours"])
if datetime.now() - message.timestamp > age_limit:
return True
return True
async def _execute_coordination_action(self, message: ChatMessage, rule: CoordinationRule):
"""执行协调动作"""
action = rule.action
if action == CoordinationAction.ESCALATE:
await self._escalate_message(message, rule.target_chat_types)
elif action == CoordinationAction.BROADCAST:
await self._broadcast_message(message, rule.target_chat_types)
elif action == CoordinationAction.FILTER:
await self._filter_message(message)
elif action == CoordinationAction.ARCHIVE:
await self._archive_message(message, rule.target_chat_types)
elif action == CoordinationAction.DELEGATE:
await self._delegate_message(message, rule.target_chat_types)
elif action == CoordinationAction.MERGE:
await self._merge_discussions(message)
self.logger.info(f"执行协调动作 {action.value} for message {message.id}")
async def _escalate_message(self, message: ChatMessage, target_chat_types: List[ChatType]):
"""升级消息到更高级别群聊"""
for chat_type in target_chat_types:
target_rooms = [room for room in self.chat_rooms.values()
if room.chat_type == chat_type and room.is_active]
for room in target_rooms:
escalated_content = f"🚨 [升级消息] 来自 {message.chat_type.value}\n" \
f"发送者: {message.sender}\n" \
f"内容: {message.content}\n" \
f"时间: {message.timestamp}"
await self.send_message(
room.id, "系统", escalated_content,
MessagePriority.URGENT, ["升级", "自动"]
)
async def _broadcast_message(self, message: ChatMessage, target_chat_types: List[ChatType]):
"""广播消息到多个群聊"""
for chat_type in target_chat_types:
target_rooms = [room for room in self.chat_rooms.values()
if room.chat_type == chat_type and room.is_active]
for room in target_rooms:
broadcast_content = f"📢 [广播消息] 来自 {message.chat_type.value}\n" \
f"{message.content}"
await self.send_message(
room.id, "系统", broadcast_content,
message.priority, message.tags + ["广播"]
)
async def _filter_message(self, message: ChatMessage):
"""过滤消息"""
# 标记消息为已过滤
message.metadata["filtered"] = True
message.metadata["filter_reason"] = "低质量或无关内容"
self.logger.info(f"消息 {message.id} 已被过滤")
async def _archive_message(self, message: ChatMessage, target_chat_types: List[ChatType]):
"""归档消息"""
for chat_type in target_chat_types:
target_rooms = [room for room in self.chat_rooms.values()
if room.chat_type == chat_type and room.is_active]
for room in target_rooms:
archive_content = f"📁 [归档消息] 来自 {message.chat_type.value}\n" \
f"原始内容: {message.content}\n" \
f"归档时间: {datetime.now()}"
await self.send_message(
room.id, "系统", archive_content,
MessagePriority.LOW, ["归档", "历史"]
)
async def _delegate_message(self, message: ChatMessage, target_chat_types: List[ChatType]):
"""委派消息到专门群聊"""
# 类似于广播,但会移除原消息
await self._broadcast_message(message, target_chat_types)
# 标记原消息为已委派
message.metadata["delegated"] = True
async def _merge_discussions(self, message: ChatMessage):
"""合并相关讨论"""
# 查找相关消息
related_messages = self._find_related_messages(message)
# 创建合并讨论摘要
if related_messages:
summary = self._create_discussion_summary(message, related_messages)
# 发送摘要到策略会议群
strategy_rooms = [room for room in self.chat_rooms.values()
if room.chat_type == ChatType.STRATEGY_MEETING]
for room in strategy_rooms:
await self.send_message(
room.id, "系统", summary,
MessagePriority.HIGH, ["合并", "摘要"]
)
def _find_related_messages(self, message: ChatMessage) -> List[ChatMessage]:
"""查找相关消息"""
related = []
# 简单的相关性检测:相同标签或关键词
for room in self.chat_rooms.values():
for msg in room.message_history[-10:]: # 检查最近10条消息
if msg.id != message.id:
# 检查标签重叠
if set(msg.tags) & set(message.tags):
related.append(msg)
# 检查内容相似性(简单关键词匹配)
elif self._calculate_content_similarity(msg.content, message.content) > 0.3:
related.append(msg)
return related
def _calculate_content_similarity(self, content1: str, content2: str) -> float:
"""计算内容相似性"""
words1 = set(content1.split())
words2 = set(content2.split())
if not words1 or not words2:
return 0.0
intersection = words1 & words2
union = words1 | words2
return len(intersection) / len(union)
def _create_discussion_summary(self, main_message: ChatMessage, related_messages: List[ChatMessage]) -> str:
"""创建讨论摘要"""
summary = f"📋 讨论摘要\n"
summary += f"主要消息: {main_message.sender} - {main_message.content[:100]}...\n"
summary += f"相关消息数量: {len(related_messages)}\n\n"
summary += "相关讨论:\n"
for i, msg in enumerate(related_messages[:5], 1): # 最多显示5条
summary += f"{i}. {msg.sender}: {msg.content[:50]}...\n"
return summary
async def _trigger_event_handlers(self, event_type: str, data: Any):
"""触发事件处理器"""
if event_type in self.event_handlers:
for handler in self.event_handlers[event_type]:
try:
await handler(data)
except Exception as e:
self.logger.error(f"事件处理器错误: {e}")
def add_event_handler(self, event_type: str, handler: Callable):
"""添加事件处理器"""
if event_type not in self.event_handlers:
self.event_handlers[event_type] = []
self.event_handlers[event_type].append(handler)
async def handle_message(self, message_data: Dict[str, Any]) -> Dict[str, Any]:
"""处理消息(兼容性方法)"""
try:
chat_id = message_data.get("chat_id", "main_debate")
speaker = message_data.get("speaker", "未知用户")
content = message_data.get("content", "")
priority = MessagePriority.NORMAL
# 发送消息
message = await self.send_message(chat_id, speaker, content, priority)
return {
"success": True,
"message_id": message.id,
"processed_at": datetime.now().isoformat()
}
except Exception as e:
self.logger.error(f"处理消息失败: {e}")
return {
"success": False,
"error": str(e),
"processed_at": datetime.now().isoformat()
}
def get_routing_status(self) -> Dict[str, Any]:
"""获取路由状态(兼容性方法)"""
return {
"active_routes": len(self.coordination_rules),
"message_queue_size": len(self.message_queue),
"total_rooms": len(self.chat_rooms)
}
async def coordinate_response(self, message_data: Dict[str, Any], context: Dict[str, Any]) -> Dict[str, Any]:
"""协调响应(兼容性方法)"""
try:
# 基于上下文决定响应策略
stage = context.get("stage", "")
topic = context.get("topic", "未知主题")
# 模拟协调决策
coordination_decision = {
"recommended_action": "继续讨论",
"target_chat": "main_debate",
"priority": "normal",
"reasoning": f"基于当前阶段({stage})和主题({topic})的协调决策"
}
return {
"success": True,
"coordination": coordination_decision,
"timestamp": datetime.now().isoformat()
}
except Exception as e:
return {
"success": False,
"error": str(e),
"timestamp": datetime.now().isoformat()
}
def get_chat_status(self) -> Dict[str, Any]:
"""获取群聊状态"""
status = {
"total_rooms": len(self.chat_rooms),
"active_rooms": len([r for r in self.chat_rooms.values() if r.is_active]),
"total_messages": sum(len(r.message_history) for r in self.chat_rooms.values()),
"pending_messages": len(self.message_queue),
"coordination_rules": len(self.coordination_rules),
"active_rules": len([r for r in self.coordination_rules.values() if r.is_active]),
"rooms": {
room_id: {
"name": room.name,
"type": room.chat_type.value,
"participants": len(room.participants),
"messages": len(room.message_history),
"last_activity": room.last_activity.isoformat(),
"is_active": room.is_active
}
for room_id, room in self.chat_rooms.items()
}
}
return status
def save_coordination_data(self, filename: str = "coordination_data.json"):
"""保存协调数据"""
# 自定义JSON序列化函数
def serialize_trigger_conditions(conditions):
serialized = {}
for key, value in conditions.items():
if isinstance(value, MessagePriority):
serialized[key] = value.value
else:
serialized[key] = value
return serialized
data = {
"chat_rooms": {
room_id: {
"id": room.id,
"chat_type": room.chat_type.value,
"name": room.name,
"description": room.description,
"participants": room.participants,
"moderators": room.moderators,
"is_active": room.is_active,
"created_at": room.created_at.isoformat(),
"last_activity": room.last_activity.isoformat(),
"settings": room.settings,
"message_count": len(room.message_history)
}
for room_id, room in self.chat_rooms.items()
},
"coordination_rules": {
rule_id: {
"id": rule.id,
"name": rule.name,
"description": rule.description,
"source_chat_types": [ct.value for ct in rule.source_chat_types],
"target_chat_types": [ct.value for ct in rule.target_chat_types],
"trigger_conditions": serialize_trigger_conditions(rule.trigger_conditions),
"action": rule.action.value,
"priority": rule.priority,
"is_active": rule.is_active,
"created_at": rule.created_at.isoformat()
}
for rule_id, rule in self.coordination_rules.items()
},
"status": self.get_chat_status(),
"export_time": datetime.now().isoformat()
}
with open(filename, 'w', encoding='utf-8') as f:
json.dump(data, f, ensure_ascii=False, indent=2)
self.logger.info(f"协调数据已保存到 {filename}")
# 使用示例
async def main():
"""使用示例"""
coordinator = MultiChatCoordinator()
# 发送一些测试消息
await coordinator.send_message(
"main_debate", "正1",
"我认为AI投资具有巨大的潜力和价值",
MessagePriority.NORMAL, ["观点", "AI"]
)
await coordinator.send_message(
"main_debate", "反1",
"但是AI投资的风险也不容忽视",
MessagePriority.NORMAL, ["反驳", "风险"]
)
await coordinator.send_message(
"positive_internal", "正2",
"我们需要准备更强有力的数据支持",
MessagePriority.HIGH, ["策略", "数据"]
)
# 模拟紧急情况
await coordinator.send_message(
"main_debate", "正3",
"系统出现异常,需要紧急处理",
MessagePriority.URGENT, ["紧急", "系统"]
)
# 显示状态
status = coordinator.get_chat_status()
print("\n📊 群聊协调系统状态:")
print(f"总群聊数: {status['total_rooms']}")
print(f"活跃群聊数: {status['active_rooms']}")
print(f"总消息数: {status['total_messages']}")
print(f"待处理消息: {status['pending_messages']}")
print("\n📋 群聊详情:")
for room_id, room_info in status['rooms'].items():
print(f" {room_info['name']} ({room_info['type']})")
print(f" 参与者: {room_info['participants']}")
print(f" 消息数: {room_info['messages']}")
print(f" 最后活动: {room_info['last_activity']}")
print()
# 保存数据
coordinator.save_coordination_data()
if __name__ == "__main__":
asyncio.run(main())

View File

@ -9,7 +9,8 @@ import os
import asyncio
from google.adk import Agent, Runner
from google.adk.sessions import InMemorySessionService
from google.adk.memory import MemoryBank, MemoryItem
from google.adk.memory import VertexAiMemoryBankService
from google.adk.memory.memory_entry import MemoryEntry
from google.genai import types
import json
from datetime import datetime
@ -19,11 +20,17 @@ class BaxianMemoryManager:
"""八仙记忆管理器"""
def __init__(self):
self.memory_banks: Dict[str, MemoryBank] = {}
self.memory_services: Dict[str, VertexAiMemoryBankService] = {}
self.agents: Dict[str, Agent] = {}
async def initialize_baxian_agents(self):
"""初始化八仙智能体及其记忆银行"""
# 从环境变量获取项目ID和位置
project_id = os.getenv('GOOGLE_CLOUD_PROJECT_ID')
location = os.getenv('GOOGLE_CLOUD_LOCATION', 'us-central1')
if not project_id:
raise ValueError("未设置 GOOGLE_CLOUD_PROJECT_ID 环境变量")
# 八仙角色配置
baxian_config = {
@ -45,31 +52,31 @@ class BaxianMemoryManager:
}
}
# 为每个仙人创建智能体和记忆银行
# 为每个仙人创建智能体和记忆服务
for name, config in baxian_config.items():
# 创建记忆银行
memory_bank = MemoryBank(
name=f"{name}_memory_bank",
description=f"{name}的个人记忆银行,存储{config['memory_context']}"
# 创建记忆服务
memory_service = VertexAiMemoryBankService(
project=project_id,
location=location
)
# 初始化记忆内容
await self._initialize_agent_memory(memory_bank, name, config['memory_context'])
await self._initialize_agent_memory(memory_service, name, config['memory_context'])
# 创建智能体
agent = Agent(
name=name,
model="gemini-2.0-flash-exp",
model="gemini-2.5-flash",
instruction=f"{config['instruction']} 在回答时,请先从你的记忆银行中检索相关信息,然后结合当前话题给出回应。",
memory_bank=memory_bank
memory_service=memory_service
)
self.memory_banks[name] = memory_bank
self.memory_services[name] = memory_service
self.agents[name] = agent
print(f"✅ 已初始化 {len(self.agents)} 个八仙智能体及其记忆银行")
print(f"✅ 已初始化 {len(self.agents)} 个八仙智能体及其记忆服务")
async def _initialize_agent_memory(self, memory_bank: MemoryBank, agent_name: str, context: str):
async def _initialize_agent_memory(self, memory_service: VertexAiMemoryBankService, agent_name: str, context: str):
"""为智能体初始化记忆内容"""
# 根据角色添加初始记忆
@ -102,7 +109,7 @@ class BaxianMemoryManager:
memories = initial_memories.get(agent_name, [])
for memory_text in memories:
memory_item = MemoryItem(
memory_entry = MemoryEntry(
content=memory_text,
metadata={
"agent": agent_name,
@ -110,12 +117,14 @@ class BaxianMemoryManager:
"timestamp": datetime.now().isoformat()
}
)
await memory_bank.add_memory(memory_item)
# 注意VertexAiMemoryBankService 的 add_memory 方法可能需要不同的参数
# 这里假设它有一个类似的方法
await memory_service.add_memory(memory_entry)
async def add_debate_memory(self, agent_name: str, content: str, topic: str):
"""为智能体添加辩论记忆"""
if agent_name in self.memory_banks:
memory_item = MemoryItem(
if agent_name in self.memory_services:
memory_entry = MemoryEntry(
content=content,
metadata={
"agent": agent_name,
@ -124,15 +133,19 @@ class BaxianMemoryManager:
"timestamp": datetime.now().isoformat()
}
)
await self.memory_banks[agent_name].add_memory(memory_item)
# 注意VertexAiMemoryBankService 的 add_memory 方法可能需要不同的参数
# 这里假设它有一个类似的方法
await self.memory_services[agent_name].add_memory(memory_entry)
async def retrieve_relevant_memories(self, agent_name: str, query: str, limit: int = 3) -> List[str]:
"""检索智能体的相关记忆"""
if agent_name not in self.memory_banks:
if agent_name not in self.memory_services:
return []
try:
memories = await self.memory_banks[agent_name].search(query, limit=limit)
# 注意VertexAiMemoryBankService 的 search 方法可能需要不同的参数
# 这里假设它有一个类似的方法
memories = await self.memory_services[agent_name].search(query, limit=limit)
return [memory.content for memory in memories]
except Exception as e:
print(f"⚠️ 记忆检索失败 ({agent_name}): {e}")

View File

@ -0,0 +1,290 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
稷下学宫 八仙论道系统
实现八仙四对矛盾的对角线辩论男女老少富贫贵贱
基于先天八卦的智慧对话系统
"""
import os
import asyncio
from google.adk import Agent, Runner
from google.adk.sessions import InMemorySessionService
from google.genai import types
import re
import sys
from contextlib import contextmanager
def create_baxian_agents():
"""创建八仙智能体 - 四对矛盾"""
# 男女对立吕洞宾vs 何仙姑(女)
lu_dong_bin = Agent(
name="吕洞宾",
model="gemini-2.5-flash",
instruction="你是吕洞宾八仙中的男性代表理性分析者。你代表男性视角善于逻辑思辨注重理性和秩序。你的发言风格温和而深刻总是能找到问题的核心。每次发言控制在80字以内。"
)
he_xian_gu = Agent(
name="何仙姑",
model="gemini-2.5-flash",
instruction="你是何仙姑八仙中的女性代表感性智慧者。你代表女性视角善于直觉洞察注重情感和和谐。你的发言风格柔和而犀利总是能看到事物的另一面。每次发言控制在80字以内。"
)
# 老少对立张果老vs 韩湘子(少)
zhang_guo_lao = Agent(
name="张果老",
model="gemini-2.5-flash",
instruction="你是张果老八仙中的长者代表经验智慧者。你代表老年视角善于从历史经验出发注重传统和稳重。你的发言风格深沉而睿智总是能从历史中汲取教训。每次发言控制在80字以内。"
)
han_xiang_zi = Agent(
name="韩湘子",
model="gemini-2.5-flash",
instruction="你是韩湘子八仙中的青年代表创新思维者。你代表年轻视角善于创新思考注重变革和进步。你的发言风格活泼而敏锐总是能提出新颖的观点。每次发言控制在80字以内。"
)
# 富贫对立汉钟离vs 蓝采和(贫)
han_zhong_li = Agent(
name="汉钟离",
model="gemini-2.5-flash",
instruction="你是汉钟离八仙中的富贵代表资源掌控者。你代表富有阶层视角善于从资源配置角度思考注重效率和投资回报。你的发言风格稳重而务实总是能看到经济利益。每次发言控制在80字以内。"
)
lan_cai_he = Agent(
name="蓝采和",
model="gemini-2.5-flash",
instruction="你是蓝采和八仙中的贫困代表民生关怀者。你代表普通民众视角善于从底层角度思考注重公平和民生。你的发言风格朴实而真诚总是能关注到弱势群体。每次发言控制在80字以内。"
)
# 贵贱对立曹国舅vs 铁拐李(贱)
cao_guo_jiu = Agent(
name="曹国舅",
model="gemini-2.5-flash",
instruction="你是曹国舅八仙中的贵族代表权力思考者。你代表上层社会视角善于从权力结构角度分析注重秩序和等级。你的发言风格优雅而权威总是能看到政治层面。每次发言控制在80字以内。"
)
tie_guai_li = Agent(
name="铁拐李",
model="gemini-2.5-flash",
instruction="你是铁拐李八仙中的底层代表逆向思维者。你代表社会底层视角善于从批判角度质疑注重真实和反叛。你的发言风格直接而犀利总是能揭示问题本质。每次发言控制在80字以内。"
)
return {
'male_female': (lu_dong_bin, he_xian_gu),
'old_young': (zhang_guo_lao, han_xiang_zi),
'rich_poor': (han_zhong_li, lan_cai_he),
'noble_humble': (cao_guo_jiu, tie_guai_li)
}
@contextmanager
def suppress_stdout():
"""抑制标准输出"""
with open(os.devnull, "w") as devnull:
old_stdout = sys.stdout
sys.stdout = devnull
try:
yield
finally:
sys.stdout = old_stdout
def clean_debug_output(text):
"""清理调试输出"""
if not text:
return ""
# 移除调试信息,但保留实际内容
lines = text.split('\n')
cleaned_lines = []
for line in lines:
line = line.strip()
# 只过滤明确的调试信息,保留实际回复内容
if any(debug_pattern in line for debug_pattern in
['Event from', 'API_KEY', 'Both GOOGLE_API_KEY', 'Using GOOGLE_API_KEY']):
continue
if line and not line.startswith('DEBUG') and not line.startswith('INFO'):
cleaned_lines.append(line)
result = ' '.join(cleaned_lines)
return result if result.strip() else text.strip()
async def conduct_diagonal_debate(agent1, agent2, topic, perspective1, perspective2, round_num):
"""进行对角线辩论"""
print(f"\n🎯 第{round_num}轮对角线辩论:{agent1.name} vs {agent2.name}")
print(f"📋 辩论视角:{perspective1} vs {perspective2}")
# 设置环境变量以抑制ADK调试输出
os.environ['GRPC_VERBOSITY'] = 'ERROR'
os.environ['GRPC_TRACE'] = ''
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import warnings
warnings.filterwarnings('ignore')
# 创建会话服务和运行器
session_service = InMemorySessionService()
# 创建会话
session = await session_service.create_session(
state={},
app_name="稷下学宫八仙论道系统",
user_id="baxian_debate_user"
)
# 创建Runner实例
runner1 = Runner(agent=agent1, session_service=session_service, app_name="稷下学宫八仙论道系统")
runner2 = Runner(agent=agent2, session_service=session_service, app_name="稷下学宫八仙论道系统")
try:
# 第一轮agent1 发起
prompt1 = f"请从{perspective1}的角度,对'{topic}'发表你的观点。要求:观点鲜明,论证有力,体现{perspective1}的特色。"
content1 = types.Content(role='user', parts=[types.Part(text=prompt1)])
response1 = runner1.run_async(
user_id=session.user_id,
session_id=session.id,
new_message=content1
)
# 提取回复内容
agent1_reply = ""
async for event in response1:
# 只处理包含实际文本内容的事件,过滤调试信息
if hasattr(event, 'content') and event.content:
if hasattr(event.content, 'parts') and event.content.parts:
for part in event.content.parts:
if hasattr(part, 'text') and part.text and part.text.strip():
text_content = str(part.text).strip()
# 过滤掉调试信息和系统消息
if not text_content.startswith('Event from') and not 'API_KEY' in text_content:
agent1_reply += text_content
elif hasattr(event, 'text') and event.text:
text_content = str(event.text).strip()
if not text_content.startswith('Event from') and not 'API_KEY' in text_content:
agent1_reply += text_content
print(f"\n🗣️ {agent1.name}{perspective1}")
print(f" {agent1_reply}")
# 第二轮agent2 回应
prompt2 = f"针对{agent1.name}刚才的观点:'{agent1_reply}',请从{perspective2}的角度进行回应和反驳。要求:有理有据,体现{perspective2}的独特视角。"
content2 = types.Content(role='user', parts=[types.Part(text=prompt2)])
response2 = runner2.run_async(
user_id=session.user_id,
session_id=session.id,
new_message=content2
)
agent2_reply = ""
async for event in response2:
# 只处理包含实际文本内容的事件,过滤调试信息
if hasattr(event, 'content') and event.content:
if hasattr(event.content, 'parts') and event.content.parts:
for part in event.content.parts:
if hasattr(part, 'text') and part.text and part.text.strip():
text_content = str(part.text).strip()
# 过滤掉调试信息和系统消息
if not text_content.startswith('Event from') and not 'API_KEY' in text_content:
agent2_reply += text_content
elif hasattr(event, 'text') and event.text:
text_content = str(event.text).strip()
if not text_content.startswith('Event from') and not 'API_KEY' in text_content:
agent2_reply += text_content
print(f"\n🗣️ {agent2.name}{perspective2}")
print(f" {agent2_reply}")
# 第三轮agent1 再次回应
prompt3 = f"听了{agent2.name}的观点:'{agent2_reply}',请从{perspective1}的角度进行最后的总结和回应。"
content3 = types.Content(role='user', parts=[types.Part(text=prompt3)])
response3 = runner1.run_async(
user_id=session.user_id,
session_id=session.id,
new_message=content3
)
agent1_final = ""
async for event in response3:
# 只处理包含实际文本内容的事件,过滤调试信息
if hasattr(event, 'content') and event.content:
if hasattr(event.content, 'parts') and event.content.parts:
for part in event.content.parts:
if hasattr(part, 'text') and part.text and part.text.strip():
text_content = str(part.text).strip()
# 过滤掉调试信息和系统消息
if not text_content.startswith('Event from') and not 'API_KEY' in text_content:
agent1_final += text_content
elif hasattr(event, 'text') and event.text:
text_content = str(event.text).strip()
if not text_content.startswith('Event from') and not 'API_KEY' in text_content:
agent1_final += text_content
print(f"\n🗣️ {agent1.name}{perspective1})总结:")
print(f" {agent1_final}")
except Exception as e:
print(f"❌ 对角线辩论出现错误: {e}")
raise
async def conduct_baxian_debate():
"""进行八仙四对矛盾的完整辩论"""
print("\n🏛️ 稷下学宫 - 八仙论道系统启动")
print("📚 八仙者,南北朝的产物,男女老少,富贵贫贱,皆可成仙")
print("🎯 四对矛盾暗合先天八卦,智慧交锋即将开始")
topic = "雅江水电站对中印关系的影响"
print(f"\n📋 论道主题:{topic}")
# 创建八仙智能体
agents = create_baxian_agents()
print("\n🔥 八仙真实ADK论道模式")
# 四对矛盾的对角线辩论
debates = [
(agents['male_female'], "男性理性", "女性感性", "男女对立"),
(agents['old_young'], "长者经验", "青年创新", "老少对立"),
(agents['rich_poor'], "富者效率", "贫者公平", "富贫对立"),
(agents['noble_humble'], "贵族秩序", "底层真实", "贵贱对立")
]
for i, ((agent1, agent2), perspective1, perspective2, debate_type) in enumerate(debates, 1):
print(f"\n{'='*60}")
print(f"🎭 {debate_type}辩论")
print(f"{'='*60}")
await conduct_diagonal_debate(agent1, agent2, topic, perspective1, perspective2, i)
if i < len(debates):
print("\n⏳ 准备下一轮辩论...")
await asyncio.sleep(1)
print("\n🎉 八仙论道完成!")
print("\n📝 四对矛盾,八种视角,智慧的交锋展现了问题的多面性。")
print("💡 这就是稷下学宫八仙论道的魅力所在。")
def main():
"""主函数"""
print("🚀 稷下学宫 八仙ADK 真实论道系统")
# 检查API密钥
if not os.getenv('GOOGLE_API_KEY'):
print("❌ 请设置 GOOGLE_API_KEY 环境变量")
return
print("✅ API密钥已配置")
try:
asyncio.run(conduct_baxian_debate())
except KeyboardInterrupt:
print("\n👋 用户中断,论道结束")
except Exception as e:
print(f"❌ 系统错误: {e}")
import traceback
traceback.print_exc()
if __name__ == "__main__":
main()

View File

@ -0,0 +1,575 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
增强版优先级算法 v2.1.0
实现更复杂的权重计算和上下文分析能力
"""
import re
import math
from typing import Dict, List, Any, Optional, Tuple
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum
import json
class ArgumentType(Enum):
"""论点类型"""
ATTACK = "攻击"
DEFENSE = "防御"
SUPPORT = "支持"
REFUTE = "反驳"
SUMMARY = "总结"
QUESTION = "质疑"
class EmotionLevel(Enum):
"""情绪强度"""
CALM = 1
MILD = 2
MODERATE = 3
INTENSE = 4
EXTREME = 5
@dataclass
class SpeechAnalysis:
"""发言分析结果"""
argument_type: ArgumentType
emotion_level: EmotionLevel
logic_strength: float # 0-1
evidence_quality: float # 0-1
relevance_score: float # 0-1
urgency_score: float # 0-1
target_speakers: List[str] # 针对的发言者
keywords: List[str]
sentiment_score: float # -1 to 1
@dataclass
class SpeakerProfile:
"""发言者档案"""
name: str
team: str
recent_speeches: List[Dict]
total_speech_count: int
average_response_time: float
expertise_areas: List[str]
debate_style: str # "aggressive", "analytical", "diplomatic", "creative"
current_energy: float # 0-1
last_speech_time: Optional[datetime] = None
class EnhancedPriorityAlgorithm:
"""增强版优先级算法"""
def __init__(self):
# 权重配置
self.weights = {
"rebuttal_urgency": 0.30, # 反驳紧急性
"argument_strength": 0.25, # 论点强度
"time_pressure": 0.20, # 时间压力
"audience_reaction": 0.15, # 观众反应
"strategy_need": 0.10 # 策略需要
}
# 情感关键词库
self.emotion_keywords = {
EmotionLevel.CALM: ["认为", "分析", "数据显示", "根据", "客观"],
EmotionLevel.MILD: ["不同意", "质疑", "担心", "建议"],
EmotionLevel.MODERATE: ["强烈", "明显", "严重", "重要"],
EmotionLevel.INTENSE: ["绝对", "完全", "彻底", "必须"],
EmotionLevel.EXTREME: ["荒谬", "愚蠢", "灾难", "危险"]
}
# 论点类型关键词
self.argument_keywords = {
ArgumentType.ATTACK: ["错误", "问题", "缺陷", "失败"],
ArgumentType.DEFENSE: ["解释", "澄清", "说明", "回应"],
ArgumentType.SUPPORT: ["支持", "赞同", "证实", "补充"],
ArgumentType.REFUTE: ["反驳", "否定", "驳斥", "反对"],
ArgumentType.SUMMARY: ["总结", "综上", "结论", "最后"],
ArgumentType.QUESTION: ["为什么", "如何", "是否", "难道"]
}
# 发言者档案
self.speaker_profiles: Dict[str, SpeakerProfile] = {}
# 辩论历史分析
self.debate_history: List[Dict] = []
def analyze_speech(self, message: str, speaker: str, context: Dict) -> SpeechAnalysis:
"""分析发言内容"""
# 检测论点类型
argument_type = self._detect_argument_type(message)
# 检测情绪强度
emotion_level = self._detect_emotion_level(message)
# 计算逻辑强度
logic_strength = self._calculate_logic_strength(message)
# 计算证据质量
evidence_quality = self._calculate_evidence_quality(message)
# 计算相关性分数
relevance_score = self._calculate_relevance_score(message, context)
# 计算紧急性分数
urgency_score = self._calculate_urgency_score(message, context)
# 识别目标发言者
target_speakers = self._identify_target_speakers(message)
# 提取关键词
keywords = self._extract_keywords(message)
# 计算情感分数
sentiment_score = self._calculate_sentiment_score(message)
return SpeechAnalysis(
argument_type=argument_type,
emotion_level=emotion_level,
logic_strength=logic_strength,
evidence_quality=evidence_quality,
relevance_score=relevance_score,
urgency_score=urgency_score,
target_speakers=target_speakers,
keywords=keywords,
sentiment_score=sentiment_score
)
def calculate_speaker_priority(self, speaker: str, context: Dict,
recent_speeches: List[Dict]) -> float:
"""计算发言者优先级"""
# 获取或创建发言者档案
profile = self._get_or_create_speaker_profile(speaker)
# 更新发言者档案
self._update_speaker_profile(profile, recent_speeches)
# 计算各项分数
rebuttal_urgency = self._calculate_rebuttal_urgency(speaker, context, recent_speeches)
argument_strength = self._calculate_argument_strength(speaker, profile)
time_pressure = self._calculate_time_pressure(speaker, context)
audience_reaction = self._calculate_audience_reaction(speaker, context)
strategy_need = self._calculate_strategy_need(speaker, context, profile)
# 加权计算总分
total_score = (
rebuttal_urgency * self.weights["rebuttal_urgency"] +
argument_strength * self.weights["argument_strength"] +
time_pressure * self.weights["time_pressure"] +
audience_reaction * self.weights["audience_reaction"] +
strategy_need * self.weights["strategy_need"]
)
# 应用修正因子
total_score = self._apply_correction_factors(total_score, speaker, profile, context)
return min(max(total_score, 0.0), 1.0) # 限制在0-1范围内
def get_next_speaker(self, available_speakers: List[str], context: Dict,
recent_speeches: List[Dict]) -> Tuple[str, float, Dict]:
"""获取下一个发言者"""
speaker_scores = {}
detailed_analysis = {}
for speaker in available_speakers:
score = self.calculate_speaker_priority(speaker, context, recent_speeches)
speaker_scores[speaker] = score
# 记录详细分析
detailed_analysis[speaker] = {
"priority_score": score,
"profile": self.speaker_profiles.get(speaker),
"analysis_timestamp": datetime.now().isoformat()
}
# 选择最高分发言者
best_speaker = max(speaker_scores, key=speaker_scores.get)
best_score = speaker_scores[best_speaker]
return best_speaker, best_score, detailed_analysis
def _detect_argument_type(self, message: str) -> ArgumentType:
"""检测论点类型"""
message_lower = message.lower()
type_scores = {}
for arg_type, keywords in self.argument_keywords.items():
score = sum(1 for keyword in keywords if keyword in message_lower)
type_scores[arg_type] = score
if not type_scores or max(type_scores.values()) == 0:
return ArgumentType.SUPPORT # 默认类型
return max(type_scores, key=type_scores.get)
def _detect_emotion_level(self, message: str) -> EmotionLevel:
"""检测情绪强度"""
message_lower = message.lower()
for emotion_level in reversed(list(EmotionLevel)):
keywords = self.emotion_keywords.get(emotion_level, [])
if any(keyword in message_lower for keyword in keywords):
return emotion_level
return EmotionLevel.CALM
def _calculate_logic_strength(self, message: str) -> float:
"""计算逻辑强度"""
logic_indicators = [
"因为", "所以", "因此", "由于", "根据", "数据显示",
"研究表明", "事实上", "例如", "比如", "首先", "其次", "最后"
]
message_lower = message.lower()
logic_count = sum(1 for indicator in logic_indicators if indicator in message_lower)
# 基于逻辑词汇密度计算
word_count = len(message.split())
if word_count == 0:
return 0.0
logic_density = logic_count / word_count
return min(logic_density * 10, 1.0) # 归一化到0-1
def _calculate_evidence_quality(self, message: str) -> float:
"""计算证据质量"""
evidence_indicators = [
"数据", "统计", "研究", "报告", "调查", "实验",
"案例", "例子", "证据", "资料", "文献", "来源"
]
message_lower = message.lower()
evidence_count = sum(1 for indicator in evidence_indicators if indicator in message_lower)
# 检查是否有具体数字
number_pattern = r'\d+(?:\.\d+)?%?'
numbers = re.findall(number_pattern, message)
number_bonus = min(len(numbers) * 0.1, 0.3)
base_score = min(evidence_count * 0.2, 0.7)
return min(base_score + number_bonus, 1.0)
def _calculate_relevance_score(self, message: str, context: Dict) -> float:
"""计算相关性分数"""
# 简化实现:基于关键词匹配
topic_keywords = context.get("topic_keywords", [])
if not topic_keywords:
return 0.5 # 默认中等相关性
message_lower = message.lower()
relevance_count = sum(1 for keyword in topic_keywords if keyword.lower() in message_lower)
return min(relevance_count / len(topic_keywords), 1.0)
def _calculate_urgency_score(self, message: str, context: Dict) -> float:
"""计算紧急性分数"""
urgency_keywords = ["紧急", "立即", "马上", "现在", "重要", "关键", "危险"]
message_lower = message.lower()
urgency_count = sum(1 for keyword in urgency_keywords if keyword in message_lower)
# 基于时间压力
time_factor = context.get("time_remaining", 1.0)
time_urgency = 1.0 - time_factor
keyword_urgency = min(urgency_count * 0.3, 1.0)
return min(keyword_urgency + time_urgency * 0.5, 1.0)
def _identify_target_speakers(self, message: str) -> List[str]:
"""识别目标发言者"""
# 简化实现:查找提及的发言者名称
speaker_names = ["正1", "正2", "正3", "正4", "反1", "反2", "反3", "反4"]
targets = []
for name in speaker_names:
if name in message:
targets.append(name)
return targets
def _extract_keywords(self, message: str) -> List[str]:
"""提取关键词"""
# 简化实现提取长度大于2的词汇
words = re.findall(r'\b\w{3,}\b', message)
return list(set(words))[:10] # 最多返回10个关键词
def _calculate_sentiment_score(self, message: str) -> float:
"""计算情感分数"""
positive_words = ["", "优秀", "正确", "支持", "赞同", "成功", "有效"]
negative_words = ["", "错误", "失败", "反对", "问题", "危险", "无效"]
message_lower = message.lower()
positive_count = sum(1 for word in positive_words if word in message_lower)
negative_count = sum(1 for word in negative_words if word in message_lower)
total_count = positive_count + negative_count
if total_count == 0:
return 0.0
return (positive_count - negative_count) / total_count
def _get_or_create_speaker_profile(self, speaker: str) -> SpeakerProfile:
"""获取或创建发言者档案"""
if speaker not in self.speaker_profiles:
self.speaker_profiles[speaker] = SpeakerProfile(
name=speaker,
team="positive" if "" in speaker else "negative",
recent_speeches=[],
total_speech_count=0,
average_response_time=3.0,
expertise_areas=[],
debate_style="analytical",
current_energy=1.0
)
return self.speaker_profiles[speaker]
def _update_speaker_profile(self, profile: SpeakerProfile, recent_speeches: List[Dict]):
"""更新发言者档案"""
# 更新发言历史
speaker_speeches = [s for s in recent_speeches if s.get("speaker") == profile.name]
profile.recent_speeches = speaker_speeches[-5:] # 保留最近5次发言
profile.total_speech_count = len(speaker_speeches)
# 更新能量水平(基于发言频率)
if profile.last_speech_time:
time_since_last = datetime.now() - profile.last_speech_time
energy_recovery = min(time_since_last.seconds / 300, 0.5) # 5分钟恢复50%
profile.current_energy = min(profile.current_energy + energy_recovery, 1.0)
profile.last_speech_time = datetime.now()
def _calculate_rebuttal_urgency(self, speaker: str, context: Dict,
recent_speeches: List[Dict]) -> float:
"""计算反驳紧急性"""
# 检查是否有针对该发言者团队的攻击
team = "positive" if "" in speaker else "negative"
opposing_team = "negative" if team == "positive" else "positive"
recent_attacks = 0
for speech in recent_speeches[-5:]: # 检查最近5次发言
if speech.get("team") == opposing_team:
analysis = speech.get("analysis", {})
if analysis.get("argument_type") in [ArgumentType.ATTACK, ArgumentType.REFUTE]:
recent_attacks += 1
# 基础紧急性 + 攻击响应紧急性
# 为不同发言者生成不同的基础紧急性
speaker_hash = hash(speaker) % 10 # 使用哈希值生成0-9的数字
base_urgency = 0.1 + speaker_hash * 0.05 # 不同发言者有不同的基础紧急性
attack_urgency = recent_attacks * 0.3
return min(base_urgency + attack_urgency, 1.0)
def _calculate_argument_strength(self, speaker: str, profile: SpeakerProfile) -> float:
"""计算论点强度"""
# 基于历史表现
if not profile.recent_speeches:
# 为不同发言者提供不同的基础论点强度
speaker_hash = hash(speaker) % 10 # 使用哈希值生成0-9的数字
team_prefix = "" if "" in speaker else ""
# 基础强度根据发言者哈希值变化
base_strength = 0.4 + speaker_hash * 0.06 # 0.4-1.0范围
# 团队差异化
team_factor = 1.05 if team_prefix == "" else 0.95
return min(base_strength * team_factor, 1.0)
avg_logic = sum(s.get("analysis", {}).get("logic_strength", 0.5)
for s in profile.recent_speeches) / len(profile.recent_speeches)
avg_evidence = sum(s.get("analysis", {}).get("evidence_quality", 0.5)
for s in profile.recent_speeches) / len(profile.recent_speeches)
return (avg_logic + avg_evidence) / 2
def _calculate_time_pressure(self, speaker: str, context: Dict) -> float:
"""计算时间压力"""
time_remaining = context.get("time_remaining", 1.0)
stage_progress = context.get("stage_progress", 0)
max_progress = context.get("max_progress", 1)
# 时间压力随剩余时间减少而增加
time_pressure = 1.0 - time_remaining
# 阶段进度压力
progress_pressure = stage_progress / max_progress
# 发言者个体差异
speaker_hash = hash(speaker) % 10 # 使用哈希值生成0-9的数字
speaker_factor = 0.8 + speaker_hash * 0.02 # 不同发言者有不同的时间敏感度
base_pressure = (time_pressure + progress_pressure) / 2
return min(base_pressure * speaker_factor, 1.0)
def _calculate_audience_reaction(self, speaker: str, context: Dict) -> float:
"""计算观众反应"""
# 简化实现:基于团队表现
team = "positive" if "" in speaker else "negative"
team_score = context.get(f"{team}_team_score", 0.5)
# 发言者个体魅力差异
speaker_hash = hash(speaker) % 10 # 使用哈希值生成0-9的数字
charisma_factor = 0.7 + speaker_hash * 0.03 # 不同发言者有不同的观众吸引力
# 如果团队表现不佳,需要更多发言机会
base_reaction = 1.0 - team_score
return min(base_reaction * charisma_factor, 1.0)
def _calculate_strategy_need(self, speaker: str, context: Dict,
profile: SpeakerProfile) -> float:
"""计算策略需要"""
# 基于发言者专长和当前需求
current_stage = context.get("current_stage", "")
# 为不同发言者提供差异化的策略需求
speaker_hash = hash(speaker) % 10 # 使用哈希值生成0-9的数字
team_prefix = "" if "" in speaker else ""
strategy_match = {
"": 0.8 if speaker_hash == 0 else 0.3 + speaker_hash * 0.05, # 开场需要主力,但有差异
"": 0.4 + speaker_hash * 0.06, # 承接阶段根据发言者哈希差异化
"": max(0.2, 1.0 - profile.current_energy + speaker_hash * 0.05), # 自由辩论看能量和哈希
"": 0.9 if speaker_hash == 0 else 0.3 + speaker_hash * 0.05 # 总结需要主力,但有差异
}
base_score = strategy_match.get(current_stage, 0.5)
# 添加团队差异化因子
team_factor = 1.1 if team_prefix == "" else 0.9
return min(base_score * team_factor, 1.0)
def _apply_correction_factors(self, base_score: float, speaker: str,
profile: SpeakerProfile, context: Dict) -> float:
"""应用修正因子"""
corrected_score = base_score
# 能量修正
corrected_score *= profile.current_energy
# 发言频率修正(避免某人发言过多)
recent_count = len([s for s in profile.recent_speeches
if s.get("timestamp", "") > (datetime.now() - timedelta(minutes=5)).isoformat()])
if recent_count > 2:
corrected_score *= 0.7 # 降低优先级
# 团队平衡修正
team = "positive" if "" in speaker else "negative"
team_recent_count = context.get(f"{team}_recent_speeches", 0)
opposing_recent_count = context.get(f"{'negative' if team == 'positive' else 'positive'}_recent_speeches", 0)
if team_recent_count > opposing_recent_count + 2:
corrected_score *= 0.8 # 平衡发言机会
return corrected_score
def calculate_priority(self, speaker: str, context: Dict, recent_speeches: List[Dict]) -> float:
"""计算发言者优先级(兼容性方法)"""
return self.calculate_speaker_priority(speaker, context, recent_speeches)
def get_algorithm_status(self) -> Dict[str, Any]:
"""获取算法状态"""
return {
"weights": self.weights,
"speaker_count": len(self.speaker_profiles),
"total_speeches_analyzed": len(self.debate_history),
"algorithm_version": "2.1.0",
"last_updated": datetime.now().isoformat()
}
def save_analysis_data(self, filename: str = "priority_analysis.json"):
"""保存分析数据"""
data = {
"algorithm_status": self.get_algorithm_status(),
"speaker_profiles": {
name: {
"name": profile.name,
"team": profile.team,
"total_speech_count": profile.total_speech_count,
"average_response_time": profile.average_response_time,
"expertise_areas": profile.expertise_areas,
"debate_style": profile.debate_style,
"current_energy": profile.current_energy,
"last_speech_time": profile.last_speech_time.isoformat() if profile.last_speech_time else None
}
for name, profile in self.speaker_profiles.items()
},
"debate_history": self.debate_history
}
with open(filename, 'w', encoding='utf-8') as f:
json.dump(data, f, ensure_ascii=False, indent=2)
print(f"💾 优先级分析数据已保存到 {filename}")
def main():
"""测试增强版优先级算法"""
print("🚀 增强版优先级算法测试")
print("=" * 50)
algorithm = EnhancedPriorityAlgorithm()
# 模拟辩论上下文
context = {
"current_stage": "",
"stage_progress": 10,
"max_progress": 36,
"time_remaining": 0.6,
"topic_keywords": ["人工智能", "投资", "风险", "收益"],
"positive_team_score": 0.6,
"negative_team_score": 0.4,
"positive_recent_speeches": 3,
"negative_recent_speeches": 2
}
# 模拟最近发言
recent_speeches = [
{
"speaker": "正1",
"team": "positive",
"message": "根据数据显示AI投资确实能带来显著收益",
"timestamp": datetime.now().isoformat(),
"analysis": {
"argument_type": ArgumentType.SUPPORT,
"logic_strength": 0.8,
"evidence_quality": 0.7
}
},
{
"speaker": "反2",
"team": "negative",
"message": "这种观点完全错误AI投资风险巨大",
"timestamp": datetime.now().isoformat(),
"analysis": {
"argument_type": ArgumentType.ATTACK,
"logic_strength": 0.3,
"evidence_quality": 0.2
}
}
]
available_speakers = ["正1", "正2", "正3", "正4", "反1", "反2", "反3", "反4"]
# 计算下一个发言者
next_speaker, score, analysis = algorithm.get_next_speaker(
available_speakers, context, recent_speeches
)
print(f"\n🎯 推荐发言者: {next_speaker}")
print(f"📊 优先级分数: {score:.3f}")
print(f"\n📈 详细分析:")
for speaker, data in analysis.items():
print(f" {speaker}: {data['priority_score']:.3f}")
# 保存分析数据
algorithm.save_analysis_data()
print("\n✅ 增强版优先级算法测试完成!")
if __name__ == "__main__":
main()

View File

@ -0,0 +1,733 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
优化的辩论流程控制系统 v2.1.0
改进阶段转换和发言权争夺逻辑
"""
import asyncio
import json
import time
from datetime import datetime, timedelta
from typing import Dict, List, Any, Optional, Tuple, Callable
from dataclasses import dataclass, field
from enum import Enum
from collections import defaultdict, deque
import threading
import queue
class DebateStage(Enum):
"""辩论阶段枚举"""
QI = "" # 八仙按先天八卦顺序
CHENG = "" # 雁阵式承接
ZHUAN = "" # 自由辩论36次handoff
HE = "" # 交替总结
class FlowControlMode(Enum):
"""流程控制模式"""
STRICT = "严格模式" # 严格按规则执行
ADAPTIVE = "自适应模式" # 根据辩论质量调整
DYNAMIC = "动态模式" # 实时响应辩论状态
class TransitionTrigger(Enum):
"""阶段转换触发条件"""
TIME_BASED = "时间触发"
PROGRESS_BASED = "进度触发"
QUALITY_BASED = "质量触发"
CONSENSUS_BASED = "共识触发"
EMERGENCY = "紧急触发"
class SpeakerSelectionStrategy(Enum):
"""发言者选择策略"""
PRIORITY_ALGORITHM = "优先级算法"
ROUND_ROBIN = "轮询"
RANDOM_WEIGHTED = "加权随机"
CONTEXT_AWARE = "上下文感知"
COMPETITIVE = "竞争模式"
@dataclass
class FlowControlConfig:
"""流程控制配置"""
mode: FlowControlMode = FlowControlMode.ADAPTIVE
transition_triggers: List[TransitionTrigger] = field(default_factory=lambda: [TransitionTrigger.PROGRESS_BASED, TransitionTrigger.QUALITY_BASED])
speaker_selection_strategy: SpeakerSelectionStrategy = SpeakerSelectionStrategy.CONTEXT_AWARE
min_stage_duration: int = 60 # 秒
max_stage_duration: int = 900 # 秒
quality_threshold: float = 0.6 # 质量阈值
participation_balance_threshold: float = 0.3 # 参与平衡阈值
emergency_intervention_enabled: bool = True
auto_stage_transition: bool = True
speaker_timeout: int = 30 # 发言超时时间
@dataclass
class StageMetrics:
"""阶段指标"""
start_time: datetime
duration: float = 0.0
speech_count: int = 0
quality_score: float = 0.0
participation_balance: float = 0.0
engagement_level: float = 0.0
topic_coherence: float = 0.0
conflict_intensity: float = 0.0
speaker_distribution: Dict[str, int] = field(default_factory=dict)
transition_readiness: float = 0.0
@dataclass
class SpeakerRequest:
"""发言请求"""
speaker: str
priority: float
timestamp: datetime
reason: str
urgency_level: int = 1 # 1-5
estimated_duration: int = 30 # 秒
topic_relevance: float = 1.0
@dataclass
class FlowEvent:
"""流程事件"""
event_type: str
timestamp: datetime
data: Dict[str, Any]
source: str
priority: int = 1
class OptimizedDebateFlowController:
"""优化的辩论流程控制器"""
def __init__(self, config: FlowControlConfig = None):
self.config = config or FlowControlConfig()
# 当前状态
self.current_stage = DebateStage.QI
self.stage_progress = 0
self.total_handoffs = 0
self.current_speaker: Optional[str] = None
self.debate_start_time = datetime.now()
# 阶段配置
self.stage_configs = {
DebateStage.QI: {
"max_progress": 8,
"min_duration": 120,
"max_duration": 600,
"speaker_order": ["吕洞宾", "何仙姑", "铁拐李", "汉钟离", "曹国舅", "韩湘子", "蓝采和", "张果老"],
"selection_strategy": SpeakerSelectionStrategy.ROUND_ROBIN
},
DebateStage.CHENG: {
"max_progress": 8,
"min_duration": 180,
"max_duration": 600,
"speaker_order": ["正1", "正2", "正3", "正4", "反1", "反2", "反3", "反4"],
"selection_strategy": SpeakerSelectionStrategy.ROUND_ROBIN
},
DebateStage.ZHUAN: {
"max_progress": 36,
"min_duration": 300,
"max_duration": 900,
"speaker_order": ["正1", "正2", "正3", "正4", "反1", "反2", "反3", "反4"],
"selection_strategy": SpeakerSelectionStrategy.CONTEXT_AWARE
},
DebateStage.HE: {
"max_progress": 8,
"min_duration": 120,
"max_duration": 480,
"speaker_order": ["反1", "正1", "反2", "正2", "反3", "正3", "反4", "正4"],
"selection_strategy": SpeakerSelectionStrategy.ROUND_ROBIN
}
}
# 阶段指标
self.stage_metrics: Dict[DebateStage, StageMetrics] = {}
self.current_stage_metrics = StageMetrics(start_time=datetime.now())
# 发言请求队列
self.speaker_request_queue = queue.PriorityQueue()
self.pending_requests: Dict[str, SpeakerRequest] = {}
# 事件系统
self.event_queue = queue.Queue()
self.event_handlers: Dict[str, List[Callable]] = defaultdict(list)
# 历史记录
self.debate_history: List[Dict] = []
self.stage_transition_history: List[Dict] = []
self.speaker_performance: Dict[str, Dict] = defaultdict(dict)
# 实时监控
self.monitoring_active = False
self.monitoring_thread: Optional[threading.Thread] = None
# 流程锁
self.flow_lock = threading.RLock()
# 初始化当前阶段指标
self._initialize_stage_metrics()
def _initialize_stage_metrics(self):
"""初始化阶段指标"""
self.current_stage_metrics = StageMetrics(
start_time=datetime.now(),
speaker_distribution={}
)
def get_current_speaker(self) -> Optional[str]:
"""获取当前发言者"""
with self.flow_lock:
config = self.stage_configs[self.current_stage]
strategy = config.get("selection_strategy", self.config.speaker_selection_strategy)
if strategy == SpeakerSelectionStrategy.ROUND_ROBIN:
return self._get_round_robin_speaker()
elif strategy == SpeakerSelectionStrategy.CONTEXT_AWARE:
return self._get_context_aware_speaker()
elif strategy == SpeakerSelectionStrategy.PRIORITY_ALGORITHM:
return self._get_priority_speaker()
elif strategy == SpeakerSelectionStrategy.COMPETITIVE:
return self._get_competitive_speaker()
else:
return self._get_round_robin_speaker()
def _get_round_robin_speaker(self) -> str:
"""轮询方式获取发言者"""
config = self.stage_configs[self.current_stage]
speaker_order = config["speaker_order"]
return speaker_order[self.stage_progress % len(speaker_order)]
def _get_context_aware_speaker(self) -> Optional[str]:
"""上下文感知方式获取发言者"""
# 检查是否有紧急发言请求
if not self.speaker_request_queue.empty():
try:
priority, request = self.speaker_request_queue.get_nowait()
if request.urgency_level >= 4: # 高紧急度
return request.speaker
else:
# 重新放回队列
self.speaker_request_queue.put((priority, request))
except queue.Empty:
pass
# 分析当前上下文
context = self._analyze_current_context()
# 根据上下文选择最合适的发言者
available_speakers = self.stage_configs[self.current_stage]["speaker_order"]
best_speaker = None
best_score = -1
for speaker in available_speakers:
score = self._calculate_speaker_context_score(speaker, context)
if score > best_score:
best_score = score
best_speaker = speaker
return best_speaker
def _get_priority_speaker(self) -> Optional[str]:
"""优先级算法获取发言者"""
# 这里可以集成现有的优先级算法
# 暂时使用简化版本
return self._get_context_aware_speaker()
def _get_competitive_speaker(self) -> Optional[str]:
"""竞争模式获取发言者"""
# 让发言者竞争发言权
if not self.speaker_request_queue.empty():
try:
priority, request = self.speaker_request_queue.get_nowait()
return request.speaker
except queue.Empty:
pass
return self._get_round_robin_speaker()
def request_speaking_turn(self, speaker: str, reason: str, urgency: int = 1,
estimated_duration: int = 30, topic_relevance: float = 1.0):
"""请求发言权"""
request = SpeakerRequest(
speaker=speaker,
priority=self._calculate_request_priority(speaker, reason, urgency, topic_relevance),
timestamp=datetime.now(),
reason=reason,
urgency_level=urgency,
estimated_duration=estimated_duration,
topic_relevance=topic_relevance
)
# 使用负优先级因为PriorityQueue是最小堆
self.speaker_request_queue.put((-request.priority, request))
self.pending_requests[speaker] = request
# 触发事件
self._emit_event("speaker_request", {
"speaker": speaker,
"reason": reason,
"urgency": urgency,
"priority": request.priority
})
def _calculate_request_priority(self, speaker: str, reason: str, urgency: int,
topic_relevance: float) -> float:
"""计算发言请求优先级"""
base_priority = urgency * 10
# 主题相关性加权
relevance_bonus = topic_relevance * 5
# 发言频率调整
speaker_count = self.current_stage_metrics.speaker_distribution.get(speaker, 0)
frequency_penalty = speaker_count * 2
# 时间因素
time_factor = 1.0
if self.current_speaker and self.current_speaker != speaker:
time_factor = 1.2 # 鼓励轮换
priority = (base_priority + relevance_bonus - frequency_penalty) * time_factor
return max(0.1, priority)
def _analyze_current_context(self) -> Dict[str, Any]:
"""分析当前辩论上下文"""
recent_speeches = self.debate_history[-5:] if self.debate_history else []
context = {
"stage": self.current_stage.value,
"progress": self.stage_progress,
"recent_speakers": [speech.get("speaker") for speech in recent_speeches],
"topic_drift": self._calculate_topic_drift(),
"emotional_intensity": self._calculate_emotional_intensity(),
"argument_balance": self._calculate_argument_balance(),
"time_pressure": self._calculate_time_pressure(),
"participation_balance": self._calculate_participation_balance()
}
return context
def _calculate_speaker_context_score(self, speaker: str, context: Dict[str, Any]) -> float:
"""计算发言者在当前上下文下的适合度分数"""
score = 0.0
# 避免连续发言
recent_speakers = context.get("recent_speakers", [])
if speaker in recent_speakers[-2:]:
score -= 10
# 参与平衡
speaker_count = self.current_stage_metrics.speaker_distribution.get(speaker, 0)
avg_count = sum(self.current_stage_metrics.speaker_distribution.values()) / max(1, len(self.current_stage_metrics.speaker_distribution))
if speaker_count < avg_count:
score += 5
# 队伍平衡
if self.current_stage == DebateStage.ZHUAN:
positive_count = sum(1 for s in recent_speakers if "" in s)
negative_count = sum(1 for s in recent_speakers if "" in s)
if "" in speaker and positive_count < negative_count:
score += 3
elif "" in speaker and negative_count < positive_count:
score += 3
# 时间压力响应
time_pressure = context.get("time_pressure", 0)
if time_pressure > 0.7 and speaker.endswith("1"): # 主力发言者
score += 5
# 检查发言请求
if speaker in self.pending_requests:
request = self.pending_requests[speaker]
score += request.urgency_level * 2
score += request.topic_relevance * 3
return score
def advance_stage(self, force: bool = False) -> bool:
"""推进辩论阶段"""
with self.flow_lock:
if not force and not self._should_advance_stage():
return False
# 记录当前阶段结束
self._finalize_current_stage()
# 转换到下一阶段
success = self._transition_to_next_stage()
if success:
# 初始化新阶段
self._initialize_new_stage()
# 触发事件
self._emit_event("stage_advanced", {
"from_stage": self.current_stage.value,
"to_stage": self.current_stage.value,
"progress": self.stage_progress,
"forced": force
})
return success
def _should_advance_stage(self) -> bool:
"""判断是否应该推进阶段"""
config = self.stage_configs[self.current_stage]
# 检查进度触发
if TransitionTrigger.PROGRESS_BASED in self.config.transition_triggers:
if self.stage_progress >= config["max_progress"] - 1:
return True
# 检查时间触发
if TransitionTrigger.TIME_BASED in self.config.transition_triggers:
stage_duration = (datetime.now() - self.current_stage_metrics.start_time).total_seconds()
if stage_duration >= config.get("max_duration", 600):
return True
# 检查质量触发
if TransitionTrigger.QUALITY_BASED in self.config.transition_triggers:
if (self.current_stage_metrics.quality_score >= self.config.quality_threshold and
self.stage_progress >= config["max_progress"] // 2):
return True
# 检查共识触发
if TransitionTrigger.CONSENSUS_BASED in self.config.transition_triggers:
if self.current_stage_metrics.transition_readiness >= 0.8:
return True
return False
def _finalize_current_stage(self):
"""结束当前阶段"""
# 更新阶段指标
self.current_stage_metrics.duration = (datetime.now() - self.current_stage_metrics.start_time).total_seconds()
# 保存阶段指标
self.stage_metrics[self.current_stage] = self.current_stage_metrics
# 记录阶段转换历史
self.stage_transition_history.append({
"stage": self.current_stage.value,
"start_time": self.current_stage_metrics.start_time.isoformat(),
"duration": self.current_stage_metrics.duration,
"speech_count": self.current_stage_metrics.speech_count,
"quality_score": self.current_stage_metrics.quality_score,
"participation_balance": self.current_stage_metrics.participation_balance
})
def _transition_to_next_stage(self) -> bool:
"""转换到下一阶段"""
stage_transitions = {
DebateStage.QI: DebateStage.CHENG,
DebateStage.CHENG: DebateStage.ZHUAN,
DebateStage.ZHUAN: DebateStage.HE,
DebateStage.HE: None
}
next_stage = stage_transitions.get(self.current_stage)
if next_stage:
self.current_stage = next_stage
self.stage_progress = 0
return True
else:
# 辩论结束
self._emit_event("debate_finished", {
"total_duration": (datetime.now() - self.debate_start_time).total_seconds(),
"total_handoffs": self.total_handoffs,
"stages_completed": len(self.stage_metrics)
})
return False
def _initialize_new_stage(self):
"""初始化新阶段"""
self._initialize_stage_metrics()
# 清空发言请求队列
while not self.speaker_request_queue.empty():
try:
self.speaker_request_queue.get_nowait()
except queue.Empty:
break
self.pending_requests.clear()
def record_speech(self, speaker: str, message: str, metadata: Dict[str, Any] = None):
"""记录发言"""
with self.flow_lock:
speech_record = {
"timestamp": datetime.now().isoformat(),
"stage": self.current_stage.value,
"stage_progress": self.stage_progress,
"speaker": speaker,
"message": message,
"total_handoffs": self.total_handoffs,
"metadata": metadata or {}
}
self.debate_history.append(speech_record)
self.current_speaker = speaker
# 更新阶段指标
self._update_stage_metrics(speaker, message)
# 如果是转阶段增加handoff计数
if self.current_stage == DebateStage.ZHUAN:
self.total_handoffs += 1
# 推进进度
self.stage_progress += 1
# 移除已完成的发言请求
if speaker in self.pending_requests:
del self.pending_requests[speaker]
# 触发事件
self._emit_event("speech_recorded", {
"speaker": speaker,
"stage": self.current_stage.value,
"progress": self.stage_progress
})
def _update_stage_metrics(self, speaker: str, message: str):
"""更新阶段指标"""
# 更新发言计数
self.current_stage_metrics.speech_count += 1
# 更新发言者分布
if speaker not in self.current_stage_metrics.speaker_distribution:
self.current_stage_metrics.speaker_distribution[speaker] = 0
self.current_stage_metrics.speaker_distribution[speaker] += 1
# 计算参与平衡度
self.current_stage_metrics.participation_balance = self._calculate_participation_balance()
# 计算质量分数(简化版本)
self.current_stage_metrics.quality_score = self._calculate_quality_score(message)
# 计算转换准备度
self.current_stage_metrics.transition_readiness = self._calculate_transition_readiness()
def _calculate_topic_drift(self) -> float:
"""计算主题偏移度"""
# 简化实现
return 0.1
def _calculate_emotional_intensity(self) -> float:
"""计算情绪强度"""
# 简化实现
return 0.5
def _calculate_argument_balance(self) -> float:
"""计算论点平衡度"""
# 简化实现
return 0.7
def _calculate_time_pressure(self) -> float:
"""计算时间压力"""
config = self.stage_configs[self.current_stage]
stage_duration = (datetime.now() - self.current_stage_metrics.start_time).total_seconds()
max_duration = config.get("max_duration", 600)
return min(1.0, stage_duration / max_duration)
def _calculate_participation_balance(self) -> float:
"""计算参与平衡度"""
if not self.current_stage_metrics.speaker_distribution:
return 1.0
counts = list(self.current_stage_metrics.speaker_distribution.values())
if not counts:
return 1.0
avg_count = sum(counts) / len(counts)
variance = sum((count - avg_count) ** 2 for count in counts) / len(counts)
# 归一化到0-1范围
balance = 1.0 / (1.0 + variance)
return balance
def _calculate_quality_score(self, message: str) -> float:
"""计算质量分数"""
# 简化实现,基于消息长度和关键词
base_score = min(1.0, len(message) / 100)
# 检查关键词
quality_keywords = ["因为", "所以", "但是", "然而", "数据", "证据", "分析"]
keyword_bonus = sum(0.1 for keyword in quality_keywords if keyword in message)
return min(1.0, base_score + keyword_bonus)
def _calculate_transition_readiness(self) -> float:
"""计算转换准备度"""
# 综合多个因素
progress_factor = self.stage_progress / self.stage_configs[self.current_stage]["max_progress"]
quality_factor = self.current_stage_metrics.quality_score
balance_factor = self.current_stage_metrics.participation_balance
readiness = (progress_factor * 0.4 + quality_factor * 0.3 + balance_factor * 0.3)
return min(1.0, readiness)
def _emit_event(self, event_type: str, data: Dict[str, Any]):
"""发出事件"""
event = FlowEvent(
event_type=event_type,
timestamp=datetime.now(),
data=data,
source="flow_controller"
)
self.event_queue.put(event)
# 调用事件处理器
for handler in self.event_handlers.get(event_type, []):
try:
handler(event)
except Exception as e:
print(f"事件处理器错误: {e}")
def add_event_handler(self, event_type: str, handler: Callable):
"""添加事件处理器"""
self.event_handlers[event_type].append(handler)
def get_flow_status(self) -> Dict[str, Any]:
"""获取流程状态"""
return {
"current_stage": self.current_stage.value,
"stage_progress": self.stage_progress,
"total_handoffs": self.total_handoffs,
"current_speaker": self.current_speaker,
"stage_metrics": {
"duration": (datetime.now() - self.current_stage_metrics.start_time).total_seconds(),
"speech_count": self.current_stage_metrics.speech_count,
"quality_score": self.current_stage_metrics.quality_score,
"participation_balance": self.current_stage_metrics.participation_balance,
"transition_readiness": self.current_stage_metrics.transition_readiness
},
"pending_requests": len(self.pending_requests),
"config": {
"mode": self.config.mode.value,
"auto_transition": self.config.auto_stage_transition,
"quality_threshold": self.config.quality_threshold
}
}
def save_flow_data(self, filename: str = "debate_flow_data.json"):
"""保存流程数据"""
flow_data = {
"config": {
"mode": self.config.mode.value,
"transition_triggers": [t.value for t in self.config.transition_triggers],
"speaker_selection_strategy": self.config.speaker_selection_strategy.value,
"quality_threshold": self.config.quality_threshold,
"auto_stage_transition": self.config.auto_stage_transition
},
"current_state": {
"stage": self.current_stage.value,
"progress": self.stage_progress,
"total_handoffs": self.total_handoffs,
"current_speaker": self.current_speaker,
"debate_start_time": self.debate_start_time.isoformat()
},
"stage_metrics": {
stage.value: {
"start_time": metrics.start_time.isoformat(),
"duration": metrics.duration,
"speech_count": metrics.speech_count,
"quality_score": metrics.quality_score,
"participation_balance": metrics.participation_balance,
"speaker_distribution": metrics.speaker_distribution
} for stage, metrics in self.stage_metrics.items()
},
"current_stage_metrics": {
"start_time": self.current_stage_metrics.start_time.isoformat(),
"duration": (datetime.now() - self.current_stage_metrics.start_time).total_seconds(),
"speech_count": self.current_stage_metrics.speech_count,
"quality_score": self.current_stage_metrics.quality_score,
"participation_balance": self.current_stage_metrics.participation_balance,
"speaker_distribution": self.current_stage_metrics.speaker_distribution,
"transition_readiness": self.current_stage_metrics.transition_readiness
},
"debate_history": self.debate_history,
"stage_transition_history": self.stage_transition_history,
"timestamp": datetime.now().isoformat()
}
with open(filename, 'w', encoding='utf-8') as f:
json.dump(flow_data, f, ensure_ascii=False, indent=2)
print(f"✅ 流程数据已保存到 {filename}")
def main():
"""测试优化的辩论流程控制系统"""
print("🎭 测试优化的辩论流程控制系统")
print("=" * 50)
# 创建配置
config = FlowControlConfig(
mode=FlowControlMode.ADAPTIVE,
transition_triggers=[TransitionTrigger.PROGRESS_BASED, TransitionTrigger.QUALITY_BASED],
speaker_selection_strategy=SpeakerSelectionStrategy.CONTEXT_AWARE,
auto_stage_transition=True
)
# 创建流程控制器
controller = OptimizedDebateFlowController(config)
# 添加事件处理器
def on_stage_advanced(event):
print(f"🎭 阶段转换: {event.data}")
def on_speech_recorded(event):
print(f"🗣️ 发言记录: {event.data['speaker']}{event.data['stage']} 阶段")
controller.add_event_handler("stage_advanced", on_stage_advanced)
controller.add_event_handler("speech_recorded", on_speech_recorded)
# 模拟辩论流程
test_speeches = [
("吕洞宾", "我认为AI投资具有巨大的潜力和机会。"),
("何仙姑", "但我们也需要考虑其中的风险因素。"),
("铁拐李", "数据显示AI行业的增长率确实很高。"),
("汉钟离", "然而市场波动性也不容忽视。")
]
print("\n📋 开始模拟辩论流程")
print("-" * 30)
for i, (speaker, message) in enumerate(test_speeches):
print(f"\n{i+1} 轮发言:")
# 获取当前发言者
current_speaker = controller.get_current_speaker()
print(f"推荐发言者: {current_speaker}")
# 记录发言
controller.record_speech(speaker, message)
# 显示流程状态
status = controller.get_flow_status()
print(f"当前状态: {status['current_stage']} 阶段,进度 {status['stage_progress']}")
print(f"质量分数: {status['stage_metrics']['quality_score']:.3f}")
print(f"参与平衡: {status['stage_metrics']['participation_balance']:.3f}")
# 检查是否需要推进阶段
if controller._should_advance_stage():
print("🔄 准备推进到下一阶段")
controller.advance_stage()
# 测试发言请求
print("\n📢 测试发言请求系统")
print("-" * 30)
controller.request_speaking_turn("正1", "需要反驳对方观点", urgency=4, topic_relevance=0.9)
controller.request_speaking_turn("反2", "补充论据", urgency=2, topic_relevance=0.7)
next_speaker = controller.get_current_speaker()
print(f"基于请求的下一位发言者: {next_speaker}")
# 保存数据
controller.save_flow_data("test_flow_data.json")
print("\n✅ 测试完成")
if __name__ == "__main__":
main()

View File

@ -1,165 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
太公心易 - 起承转合辩论系统
"""
import json
from datetime import datetime
from typing import Dict, List, Any
from enum import Enum
class DebateStage(Enum):
QI = "" # 八仙按先天八卦顺序
CHENG = "" # 雁阵式承接
ZHUAN = "" # 自由辩论36次handoff
HE = "" # 交替总结
class QiChengZhuanHeDebate:
"""起承转合辩论系统"""
def __init__(self):
# 八仙配置(先天八卦顺序)
self.baxian_sequence = ["吕洞宾", "何仙姑", "铁拐李", "汉钟离", "蓝采和", "张果老", "韩湘子", "曹国舅"]
# 雁阵配置
self.goose_formation = {
"positive": ["正1", "正2", "正3", "正4"],
"negative": ["反1", "反2", "反3", "反4"]
}
# 交替总结顺序
self.alternating_sequence = ["反1", "正1", "反2", "正2", "反3", "正3", "反4", "正4"]
# 辩论状态
self.current_stage = DebateStage.QI
self.stage_progress = 0
self.total_handoffs = 0
self.debate_history = []
# 阶段配置
self.stage_configs = {
DebateStage.QI: {"max_progress": 8, "description": "八仙按先天八卦顺序"},
DebateStage.CHENG: {"max_progress": 8, "description": "雁阵式承接"},
DebateStage.ZHUAN: {"max_progress": 36, "description": "自由辩论"},
DebateStage.HE: {"max_progress": 8, "description": "交替总结"}
}
def get_current_speaker(self) -> str:
"""获取当前发言者"""
if self.current_stage == DebateStage.QI:
return self.baxian_sequence[self.stage_progress % 8]
elif self.current_stage == DebateStage.CHENG:
if self.stage_progress < 4:
return self.goose_formation["positive"][self.stage_progress]
else:
return self.goose_formation["negative"][self.stage_progress - 4]
elif self.current_stage == DebateStage.ZHUAN:
# 简化的优先级算法
speakers = self.goose_formation["positive"] + self.goose_formation["negative"]
return speakers[self.total_handoffs % 8]
elif self.current_stage == DebateStage.HE:
return self.alternating_sequence[self.stage_progress % 8]
return "未知发言者"
def advance_stage(self):
"""推进辩论阶段"""
config = self.stage_configs[self.current_stage]
if self.stage_progress >= config["max_progress"] - 1:
self._transition_to_next_stage()
else:
self.stage_progress += 1
def _transition_to_next_stage(self):
"""转换到下一阶段"""
transitions = {
DebateStage.QI: DebateStage.CHENG,
DebateStage.CHENG: DebateStage.ZHUAN,
DebateStage.ZHUAN: DebateStage.HE,
DebateStage.HE: None
}
next_stage = transitions[self.current_stage]
if next_stage:
self.current_stage = next_stage
self.stage_progress = 0
print(f"🎭 辩论进入{next_stage.value}阶段")
else:
print("🎉 辩论结束!")
def record_speech(self, speaker: str, message: str):
"""记录发言"""
record = {
"timestamp": datetime.now().isoformat(),
"stage": self.current_stage.value,
"progress": self.stage_progress,
"speaker": speaker,
"message": message,
"handoffs": self.total_handoffs
}
self.debate_history.append(record)
if self.current_stage == DebateStage.ZHUAN:
self.total_handoffs += 1
def get_stage_info(self) -> Dict[str, Any]:
"""获取阶段信息"""
config = self.stage_configs[self.current_stage]
return {
"stage": self.current_stage.value,
"progress": self.stage_progress + 1,
"max_progress": config["max_progress"],
"description": config["description"],
"current_speaker": self.get_current_speaker(),
"total_handoffs": self.total_handoffs
}
def save_state(self, filename: str = "qczh_debate_state.json"):
"""保存状态"""
state = {
"current_stage": self.current_stage.value,
"stage_progress": self.stage_progress,
"total_handoffs": self.total_handoffs,
"debate_history": self.debate_history
}
with open(filename, 'w', encoding='utf-8') as f:
json.dump(state, f, ensure_ascii=False, indent=2)
print(f"💾 辩论状态已保存到 {filename}")
def main():
"""测试函数"""
print("🚀 起承转合辩论系统测试")
print("=" * 50)
debate = QiChengZhuanHeDebate()
# 测试各阶段
test_messages = [
"起:八仙按先天八卦顺序阐述观点",
"承:雁阵式承接,总体阐述+讥讽",
"自由辩论36次handoff",
"合:交替总结,最终论证"
]
for i, message in enumerate(test_messages):
info = debate.get_stage_info()
speaker = debate.get_current_speaker()
print(f"\n🎭 阶段: {info['stage']} ({info['progress']}/{info['max_progress']})")
print(f"🗣️ 发言者: {speaker}")
print(f"💬 消息: {message}")
debate.record_speech(speaker, message)
debate.advance_stage()
debate.save_state()
print("\n✅ 测试完成!")
if __name__ == "__main__":
main()

View File

@ -11,6 +11,7 @@ from datetime import datetime
from typing import Dict, List, Any, Optional
from dataclasses import dataclass
from enum import Enum
from .enhanced_priority_algorithm import EnhancedPriorityAlgorithm, SpeechAnalysis
class DebateStage(Enum):
"""辩论阶段枚举"""
@ -36,6 +37,7 @@ class DebateContext:
current_speaker: Optional[str] = None
last_message: Optional[str] = None
debate_history: List[Dict] = None
last_priority_analysis: Optional[Dict[str, Any]] = None
class QiChengZhuanHeDebateSystem:
"""起承转合辩论系统"""
@ -91,8 +93,8 @@ class QiChengZhuanHeDebateSystem:
}
}
# 优先级算法
self.priority_algorithm = PriorityAlgorithm()
# 增强版优先级算法
self.priority_algorithm = EnhancedPriorityAlgorithm()
# 记忆系统
self.memory_system = DebateMemorySystem()
@ -128,8 +130,38 @@ class QiChengZhuanHeDebateSystem:
return self.goose_formation["negative"][progress - 4]
def _get_priority_speaker(self) -> str:
"""获取优先级发言者"""
return self.priority_algorithm.calculate_next_speaker(self.context)
"""获取优先级发言者(转阶段)"""
available_speakers = ["正1", "正2", "正3", "正4", "反1", "反2", "反3", "反4"]
# 构建上下文
context = {
"current_stage": self.context.current_stage.value,
"stage_progress": self.context.stage_progress,
"max_progress": self.stage_configs[self.context.current_stage]["max_progress"],
"time_remaining": max(0.1, 1.0 - (self.context.stage_progress / self.stage_configs[self.context.current_stage]["max_progress"])),
"topic_keywords": ["投资", "AI", "风险", "收益"], # 可配置
"positive_team_score": 0.5, # 可动态计算
"negative_team_score": 0.5, # 可动态计算
"positive_recent_speeches": len([h for h in self.context.debate_history[-10:] if "" in h.get("speaker", "")]),
"negative_recent_speeches": len([h for h in self.context.debate_history[-10:] if "" in h.get("speaker", "")])
}
# 获取最近发言历史
recent_speeches = self.context.debate_history[-10:] if self.context.debate_history else []
next_speaker, score, analysis = self.priority_algorithm.get_next_speaker(
available_speakers, context, recent_speeches
)
# 记录分析结果
self.context.last_priority_analysis = {
"recommended_speaker": next_speaker,
"priority_score": score,
"analysis": analysis,
"timestamp": datetime.now().isoformat()
}
return next_speaker
def _get_alternating_speaker(self, progress: int) -> str:
"""获取交替总结发言者"""
@ -219,48 +251,7 @@ class QiChengZhuanHeDebateSystem:
print(f"💾 辩论状态已保存到 {filename}")
class PriorityAlgorithm:
"""优先级算法"""
def __init__(self):
self.speaker_weights = {
"rebuttal_urgency": 0.3,
"argument_strength": 0.25,
"time_pressure": 0.2,
"audience_reaction": 0.15,
"strategy_need": 0.1
}
def calculate_next_speaker(self, context: DebateContext) -> str:
"""计算下一个发言者"""
# 简化的优先级算法
available_speakers = ["正1", "正2", "正3", "正4", "反1", "反2", "反3", "反4"]
# 基于当前上下文计算优先级
priorities = {}
for speaker in available_speakers:
priority_score = self._calculate_speaker_priority(speaker, context)
priorities[speaker] = priority_score
# 选择最高优先级发言者
return max(priorities, key=priorities.get)
def _calculate_speaker_priority(self, speaker: str, context: DebateContext) -> float:
"""计算发言者优先级"""
# 简化的优先级计算
base_score = 0.5
# 根据发言者角色调整
if "" in speaker:
base_score += 0.1
if "" in speaker:
base_score += 0.1
# 根据handoff次数调整
if context.total_handoffs % 2 == 0:
base_score += 0.2
return base_score
# 旧的PriorityAlgorithm类已被EnhancedPriorityAlgorithm替换
class DebateMemorySystem:
"""辩论记忆系统"""

View File

@ -0,0 +1,207 @@
#!/usr/bin/env python3
"""
OpenBB 集成引擎
为八仙论道提供更丰富的金融数据支撑
"""
from typing import Dict, List, Any, Optional
from dataclasses import dataclass
import openbb
@dataclass
class ImmortalConfig:
"""八仙配置数据类"""
primary: str
specialty: str
@dataclass
class APIResult:
"""API调用结果数据类"""
success: bool
data: Optional[Dict[str, Any]] = None
provider_used: Optional[str] = None
error: Optional[str] = None
class OpenBBEngine:
"""OpenBB 集成引擎"""
def __init__(self):
"""
初始化 OpenBB 引擎
"""
# 八仙专属数据源分配
self.immortal_sources: Dict[str, ImmortalConfig] = {
'吕洞宾': ImmortalConfig( # 乾-技术分析专家
primary='yfinance',
specialty='technical_analysis'
),
'何仙姑': ImmortalConfig( # 坤-风险控制专家
primary='yfinance',
specialty='risk_metrics'
),
'张果老': ImmortalConfig( # 兑-历史数据分析师
primary='yfinance',
specialty='historical_data'
),
'韩湘子': ImmortalConfig( # 艮-新兴资产专家
primary='yfinance',
specialty='sector_analysis'
),
'汉钟离': ImmortalConfig( # 离-热点追踪
primary='yfinance',
specialty='market_movers'
),
'蓝采和': ImmortalConfig( # 坎-潜力股发现
primary='yfinance',
specialty='screener'
),
'曹国舅': ImmortalConfig( # 震-机构分析
primary='yfinance',
specialty='institutional_holdings'
),
'铁拐李': ImmortalConfig( # 巽-逆向投资
primary='yfinance',
specialty='short_interest'
)
}
print("✅ OpenBB 引擎初始化完成")
def get_immortal_data(self, immortal_name: str, data_type: str, symbol: str = 'AAPL') -> APIResult:
"""
为特定八仙获取专属数据
Args:
immortal_name: 八仙名称
data_type: 数据类型
symbol: 股票代码
Returns:
API调用结果
"""
if immortal_name not in self.immortal_sources:
return APIResult(success=False, error=f'Unknown immortal: {immortal_name}')
immortal_config = self.immortal_sources[immortal_name]
print(f"🧙‍♂️ {immortal_name} 请求 {data_type} 数据 (股票: {symbol})")
# 根据数据类型调用不同的 OpenBB 函数
try:
if data_type == 'price':
result = openbb.obb.equity.price.quote(symbol=symbol, provider=immortal_config.primary)
return APIResult(
success=True,
data=result.results,
provider_used=immortal_config.primary
)
elif data_type == 'historical':
result = openbb.obb.equity.price.historical(symbol=symbol, provider=immortal_config.primary)
return APIResult(
success=True,
data=result.results,
provider_used=immortal_config.primary
)
elif data_type == 'profile':
result = openbb.obb.equity.profile(symbol=symbol, provider=immortal_config.primary)
return APIResult(
success=True,
data=result.results,
provider_used=immortal_config.primary
)
elif data_type == 'news':
result = openbb.obb.news.company(symbol=symbol)
return APIResult(
success=True,
data=result.results,
provider_used='news_api'
)
elif data_type == 'earnings':
result = openbb.obb.equity.earnings.earnings_historical(symbol=symbol, provider=immortal_config.primary)
return APIResult(
success=True,
data=result.results,
provider_used=immortal_config.primary
)
elif data_type == 'dividends':
result = openbb.obb.equity.fundamental.dividend(symbol=symbol, provider=immortal_config.primary)
return APIResult(
success=True,
data=result.results,
provider_used=immortal_config.primary
)
elif data_type == 'screener':
# 使用简单的筛选器作为替代
result = openbb.obb.equity.screener.etf(
provider=immortal_config.primary
)
return APIResult(
success=True,
data=result.results,
provider_used=immortal_config.primary
)
else:
return APIResult(success=False, error=f'Unsupported data type: {data_type}')
except Exception as e:
return APIResult(success=False, error=f'OpenBB 调用失败: {str(e)}')
def simulate_jixia_debate(self, topic_symbol: str = 'TSLA') -> Dict[str, APIResult]:
"""
模拟稷下学宫八仙论道
Args:
topic_symbol: 辩论主题股票代码
Returns:
八仙辩论结果
"""
print(f"🏛️ 稷下学宫八仙论道 - 主题: {topic_symbol} (OpenBB 版本)")
print("=" * 60)
debate_results: Dict[str, APIResult] = {}
# 数据类型映射
data_type_mapping = {
'technical_analysis': 'historical', # 技术分析使用历史价格数据
'risk_metrics': 'price', # 风险控制使用当前价格数据
'historical_data': 'historical', # 历史数据分析使用历史价格数据
'sector_analysis': 'profile', # 新兴资产分析使用公司概况
'market_movers': 'news', # 热点追踪使用新闻
'screener': 'screener', # 潜力股发现使用筛选器
'institutional_holdings': 'profile', # 机构分析使用公司概况
'short_interest': 'profile' # 逆向投资使用公司概况
}
# 八仙依次发言
for immortal_name, config in self.immortal_sources.items():
print(f"\n🎭 {immortal_name} ({config.specialty}) 发言:")
data_type = data_type_mapping.get(config.specialty, 'price')
result = self.get_immortal_data(immortal_name, data_type, topic_symbol)
if result.success:
debate_results[immortal_name] = result
print(f" 💬 观点: 基于{result.provider_used}数据的{config.specialty}分析")
# 显示部分数据示例
if result.data:
if isinstance(result.data, list) and len(result.data) > 0:
sample = result.data[0]
print(f" 📊 数据示例: {sample}")
elif hasattr(result.data, '__dict__'):
# 如果是对象,显示前几个属性
attrs = vars(result.data)
sample = {k: v for k, v in list(attrs.items())[:3]}
print(f" 📊 数据示例: {sample}")
else:
print(f" 📊 数据示例: {result.data}")
else:
print(f" 😔 暂时无法获取数据: {result.error}")
return debate_results
if __name__ == "__main__":
# 测试 OpenBB 引擎
print("🧪 OpenBB 引擎测试")
engine = OpenBBEngine()
engine.simulate_jixia_debate('AAPL')

View File

@ -0,0 +1,148 @@
#!/usr/bin/env python3
"""
OpenBB 股票数据获取模块
"""
import openbb
from datetime import datetime, timedelta
from typing import List, Dict, Any, Optional
def get_stock_data(symbol: str, days: int = 90) -> Optional[List[Dict[str, Any]]]:
"""
获取指定股票在指定天数内的历史数据
Args:
symbol (str): 股票代码 ( 'AAPL')
days (int): 时间窗口默认90天
Returns:
List[Dict[str, Any]]: 股票历史数据列表如果失败则返回None
"""
try:
# 计算开始日期
end_date = datetime.now()
start_date = end_date - timedelta(days=days)
print(f"🔍 正在获取 {symbol}{days} 天的数据...")
print(f" 时间范围: {start_date.strftime('%Y-%m-%d')}{end_date.strftime('%Y-%m-%d')}")
# 使用OpenBB获取数据
result = openbb.obb.equity.price.historical(
symbol=symbol,
provider='yfinance',
start_date=start_date.strftime('%Y-%m-%d'),
end_date=end_date.strftime('%Y-%m-%d')
)
if result and result.results:
print(f"✅ 成功获取 {len(result.results)} 条记录")
return result.results
else:
print("❌ 未获取到数据")
return None
except Exception as e:
print(f"❌ 获取数据时出错: {str(e)}")
return None
def get_etf_data(symbol: str, days: int = 90) -> Optional[List[Dict[str, Any]]]:
"""
获取指定ETF在指定天数内的历史数据
Args:
symbol (str): ETF代码 ( 'SPY')
days (int): 时间窗口默认90天
Returns:
List[Dict[str, Any]]: ETF历史数据列表如果失败则返回None
"""
try:
# 计算开始日期
end_date = datetime.now()
start_date = end_date - timedelta(days=days)
print(f"🔍 正在获取 {symbol}{days} 天的数据...")
print(f" 时间范围: {start_date.strftime('%Y-%m-%d')}{end_date.strftime('%Y-%m-%d')}")
# 使用OpenBB获取数据
result = openbb.obb.etf.historical(
symbol=symbol,
provider='yfinance',
start_date=start_date.strftime('%Y-%m-%d'),
end_date=end_date.strftime('%Y-%m-%d')
)
if result and result.results:
print(f"✅ 成功获取 {len(result.results)} 条记录")
return result.results
else:
print("❌ 未获取到数据")
return None
except Exception as e:
print(f"❌ 获取数据时出错: {str(e)}")
return None
def format_stock_data(data: List[Dict[str, Any]]) -> None:
"""
格式化并打印股票数据
Args:
data (List[Dict[str, Any]]): 股票数据列表
"""
if not data:
print("😔 没有数据可显示")
return
print(f"\n📊 股票数据预览 (显示最近5条记录):")
print("-" * 80)
print(f"{'日期':<12} {'开盘':<10} {'最高':<10} {'最低':<10} {'收盘':<10} {'成交量':<15}")
print("-" * 80)
# 只显示最近5条记录
for item in data[-5:]:
print(f"{str(item.date):<12} {item.open:<10.2f} {item.high:<10.2f} {item.low:<10.2f} {item.close:<10.2f} {item.volume:<15,}")
def format_etf_data(data: List[Dict[str, Any]]) -> None:
"""
格式化并打印ETF数据
Args:
data (List[Dict[str, Any]]): ETF数据列表
"""
if not data:
print("😔 没有数据可显示")
return
print(f"\n📊 ETF数据预览 (显示最近5条记录):")
print("-" * 80)
print(f"{'日期':<12} {'开盘':<10} {'最高':<10} {'最低':<10} {'收盘':<10} {'成交量':<15}")
print("-" * 80)
# 只显示最近5条记录
for item in data[-5:]:
print(f"{str(item.date):<12} {item.open:<10.2f} {item.high:<10.2f} {item.low:<10.2f} {item.close:<10.2f} {item.volume:<15,}")
def main():
"""主函数"""
# 示例获取AAPL股票和SPY ETF的数据
symbols = [("AAPL", "stock"), ("SPY", "etf")]
time_windows = [90, 720]
for symbol, asset_type in symbols:
for days in time_windows:
print(f"\n{'='*60}")
print(f"获取 {symbol} {days} 天数据")
print(f"{'='*60}")
if asset_type == "stock":
data = get_stock_data(symbol, days)
if data:
format_stock_data(data)
else:
data = get_etf_data(symbol, days)
if data:
format_etf_data(data)
if __name__ == "__main__":
main()

View File

@ -0,0 +1,929 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Human干预系统
监控辩论健康度并在必要时触发人工干预
"""
import asyncio
import json
import logging
from typing import Dict, List, Any, Optional, Callable, Tuple
from dataclasses import dataclass, field
from enum import Enum
from datetime import datetime, timedelta
import statistics
import re
class HealthStatus(Enum):
"""健康状态"""
EXCELLENT = "优秀" # 90-100分
GOOD = "良好" # 70-89分
FAIR = "一般" # 50-69分
POOR = "较差" # 30-49分
CRITICAL = "危险" # 0-29分
class InterventionLevel(Enum):
"""干预级别"""
NONE = (0, "无需干预")
GENTLE_REMINDER = (1, "温和提醒")
MODERATE_GUIDANCE = (2, "适度引导")
STRONG_INTERVENTION = (3, "强力干预")
EMERGENCY_STOP = (4, "紧急停止")
def __init__(self, level, description):
self.level = level
self.description = description
@property
def value(self):
return self.description
def __ge__(self, other):
if isinstance(other, InterventionLevel):
return self.level >= other.level
return NotImplemented
def __gt__(self, other):
if isinstance(other, InterventionLevel):
return self.level > other.level
return NotImplemented
def __le__(self, other):
if isinstance(other, InterventionLevel):
return self.level <= other.level
return NotImplemented
def __lt__(self, other):
if isinstance(other, InterventionLevel):
return self.level < other.level
return NotImplemented
class AlertType(Enum):
"""警报类型"""
QUALITY_DECLINE = "质量下降"
TOXIC_BEHAVIOR = "有害行为"
REPETITIVE_CONTENT = "重复内容"
OFF_TOPIC = "偏离主题"
EMOTIONAL_ESCALATION = "情绪升级"
PARTICIPATION_IMBALANCE = "参与不平衡"
TECHNICAL_ERROR = "技术错误"
TIME_VIOLATION = "时间违规"
@dataclass
class HealthMetric:
"""健康指标"""
name: str
value: float
weight: float
threshold_critical: float
threshold_poor: float
threshold_fair: float
threshold_good: float
description: str
last_updated: datetime = field(default_factory=datetime.now)
@dataclass
class InterventionAlert:
"""干预警报"""
id: str
alert_type: AlertType
severity: InterventionLevel
message: str
affected_participants: List[str]
metrics: Dict[str, float]
timestamp: datetime
resolved: bool = False
resolution_notes: str = ""
human_notified: bool = False
@dataclass
class InterventionAction:
"""干预动作"""
id: str
action_type: str
description: str
target_participants: List[str]
parameters: Dict[str, Any]
executed_at: datetime
success: bool = False
result_message: str = ""
class DebateHealthMonitor:
"""辩论健康度监控器"""
def __init__(self):
self.health_metrics: Dict[str, HealthMetric] = {}
self.active_alerts: List[InterventionAlert] = []
self.intervention_history: List[InterventionAction] = []
self.monitoring_enabled = True
self.logger = logging.getLogger(__name__)
# 初始化健康指标
self._initialize_health_metrics()
# 事件处理器
self.event_handlers: Dict[str, List[Callable]] = {}
# 监控配置
self.monitoring_config = {
"check_interval_seconds": 30,
"alert_cooldown_minutes": 5,
"auto_intervention_enabled": True,
"human_notification_threshold": InterventionLevel.STRONG_INTERVENTION
}
def _initialize_health_metrics(self):
"""初始化健康指标"""
metrics_config = [
{
"name": "content_quality",
"weight": 0.25,
"thresholds": {"critical": 20, "poor": 40, "fair": 60, "good": 80},
"description": "内容质量评分"
},
{
"name": "participation_balance",
"weight": 0.20,
"thresholds": {"critical": 30, "poor": 50, "fair": 70, "good": 85},
"description": "参与平衡度"
},
{
"name": "emotional_stability",
"weight": 0.20,
"thresholds": {"critical": 25, "poor": 45, "fair": 65, "good": 80},
"description": "情绪稳定性"
},
{
"name": "topic_relevance",
"weight": 0.15,
"thresholds": {"critical": 35, "poor": 55, "fair": 70, "good": 85},
"description": "主题相关性"
},
{
"name": "interaction_civility",
"weight": 0.10,
"thresholds": {"critical": 20, "poor": 40, "fair": 60, "good": 80},
"description": "互动文明度"
},
{
"name": "technical_stability",
"weight": 0.10,
"thresholds": {"critical": 40, "poor": 60, "fair": 75, "good": 90},
"description": "技术稳定性"
}
]
for config in metrics_config:
metric = HealthMetric(
name=config["name"],
value=100.0, # 初始值
weight=config["weight"],
threshold_critical=config["thresholds"]["critical"],
threshold_poor=config["thresholds"]["poor"],
threshold_fair=config["thresholds"]["fair"],
threshold_good=config["thresholds"]["good"],
description=config["description"]
)
self.health_metrics[config["name"]] = metric
async def analyze_debate_health(self, debate_data: Dict[str, Any]) -> Tuple[float, HealthStatus]:
"""分析辩论健康度"""
if not self.monitoring_enabled:
return 100.0, HealthStatus.EXCELLENT
# 更新各项健康指标
await self._update_content_quality(debate_data)
await self._update_participation_balance(debate_data)
await self._update_emotional_stability(debate_data)
await self._update_topic_relevance(debate_data)
await self._update_interaction_civility(debate_data)
await self._update_technical_stability(debate_data)
# 计算综合健康分数
total_score = 0.0
total_weight = 0.0
for metric in self.health_metrics.values():
total_score += metric.value * metric.weight
total_weight += metric.weight
overall_score = total_score / total_weight if total_weight > 0 else 0.0
# 确定健康状态
if overall_score >= 90:
status = HealthStatus.EXCELLENT
elif overall_score >= 70:
status = HealthStatus.GOOD
elif overall_score >= 50:
status = HealthStatus.FAIR
elif overall_score >= 30:
status = HealthStatus.POOR
else:
status = HealthStatus.CRITICAL
# 检查是否需要发出警报
await self._check_for_alerts(overall_score, status)
self.logger.info(f"辩论健康度分析完成: {overall_score:.1f}分 ({status.value})")
return overall_score, status
async def _update_content_quality(self, debate_data: Dict[str, Any]):
"""更新内容质量指标"""
messages = debate_data.get("recent_messages", [])
if not messages:
return
quality_scores = []
for message in messages[-10:]: # 分析最近10条消息
content = message.get("content", "")
# 内容长度评分
length_score = min(len(content) / 100 * 50, 50) # 最多50分
# 词汇丰富度评分
words = content.split()
unique_words = len(set(words))
vocabulary_score = min(unique_words / len(words) * 30, 30) if words else 0
# 逻辑结构评分(简单检测)
logic_indicators = ["因为", "所以", "但是", "然而", "首先", "其次", "最后", "总之"]
logic_score = min(sum(1 for indicator in logic_indicators if indicator in content) * 5, 20)
total_score = length_score + vocabulary_score + logic_score
quality_scores.append(total_score)
avg_quality = statistics.mean(quality_scores) if quality_scores else 50
self.health_metrics["content_quality"].value = avg_quality
self.health_metrics["content_quality"].last_updated = datetime.now()
async def _update_participation_balance(self, debate_data: Dict[str, Any]):
"""更新参与平衡度指标"""
messages = debate_data.get("recent_messages", [])
if not messages:
return
# 统计各参与者的发言次数
speaker_counts = {}
for message in messages[-20:]: # 分析最近20条消息
speaker = message.get("sender", "")
speaker_counts[speaker] = speaker_counts.get(speaker, 0) + 1
if not speaker_counts:
return
# 计算参与平衡度
counts = list(speaker_counts.values())
if len(counts) <= 1:
balance_score = 100
else:
# 使用标准差来衡量平衡度
mean_count = statistics.mean(counts)
std_dev = statistics.stdev(counts)
# 标准差越小,平衡度越高
balance_score = max(0, 100 - (std_dev / mean_count * 100))
self.health_metrics["participation_balance"].value = balance_score
self.health_metrics["participation_balance"].last_updated = datetime.now()
async def _update_emotional_stability(self, debate_data: Dict[str, Any]):
"""更新情绪稳定性指标"""
messages = debate_data.get("recent_messages", [])
if not messages:
return
emotional_scores = []
# 情绪关键词
negative_emotions = ["愤怒", "生气", "讨厌", "恶心", "愚蠢", "白痴", "垃圾"]
positive_emotions = ["赞同", "支持", "优秀", "精彩", "同意", "认可"]
for message in messages[-15:]:
content = message.get("content", "")
# 检测负面情绪
negative_count = sum(1 for word in negative_emotions if word in content)
positive_count = sum(1 for word in positive_emotions if word in content)
# 检测大写字母比例(可能表示情绪激动)
if content:
caps_ratio = sum(1 for c in content if c.isupper()) / len(content)
else:
caps_ratio = 0
# 检测感叹号数量
exclamation_count = content.count("!")
# 计算情绪稳定性分数
emotion_score = 100
emotion_score -= negative_count * 15 # 负面情绪扣分
emotion_score += positive_count * 5 # 正面情绪加分
emotion_score -= caps_ratio * 30 # 大写字母扣分
emotion_score -= min(exclamation_count * 5, 20) # 感叹号扣分
emotional_scores.append(max(0, emotion_score))
avg_emotional_stability = statistics.mean(emotional_scores) if emotional_scores else 80
self.health_metrics["emotional_stability"].value = avg_emotional_stability
self.health_metrics["emotional_stability"].last_updated = datetime.now()
async def _update_topic_relevance(self, debate_data: Dict[str, Any]):
"""更新主题相关性指标"""
messages = debate_data.get("recent_messages", [])
topic_keywords = debate_data.get("topic_keywords", [])
if not messages or not topic_keywords:
return
relevance_scores = []
for message in messages[-10:]:
content = message.get("content", "")
# 计算主题关键词匹配度
keyword_matches = sum(1 for keyword in topic_keywords if keyword in content)
relevance_score = min(keyword_matches / len(topic_keywords) * 100, 100) if topic_keywords else 50
relevance_scores.append(relevance_score)
avg_relevance = statistics.mean(relevance_scores) if relevance_scores else 70
self.health_metrics["topic_relevance"].value = avg_relevance
self.health_metrics["topic_relevance"].last_updated = datetime.now()
async def _update_interaction_civility(self, debate_data: Dict[str, Any]):
"""更新互动文明度指标"""
messages = debate_data.get("recent_messages", [])
if not messages:
return
civility_scores = []
# 不文明行为关键词
uncivil_patterns = [
r"你.*蠢", r".*白痴.*", r".*垃圾.*", r"闭嘴", r"滚.*",
r".*傻.*", r".*笨.*", r".*废物.*"
]
# 文明行为关键词
civil_patterns = [
r"请.*", r"谢谢", r"不好意思", r"抱歉", r"尊重", r"理解"
]
for message in messages[-15:]:
content = message.get("content", "")
civility_score = 100
# 检测不文明行为
for pattern in uncivil_patterns:
if re.search(pattern, content):
civility_score -= 20
# 检测文明行为
for pattern in civil_patterns:
if re.search(pattern, content):
civility_score += 5
civility_scores.append(max(0, min(100, civility_score)))
avg_civility = statistics.mean(civility_scores) if civility_scores else 85
self.health_metrics["interaction_civility"].value = avg_civility
self.health_metrics["interaction_civility"].last_updated = datetime.now()
async def _update_technical_stability(self, debate_data: Dict[str, Any]):
"""更新技术稳定性指标"""
system_status = debate_data.get("system_status", {})
stability_score = 100
# 检查错误率
error_rate = system_status.get("error_rate", 0)
stability_score -= error_rate * 100
# 检查响应时间
response_time = system_status.get("avg_response_time", 0)
if response_time > 2.0: # 超过2秒
stability_score -= (response_time - 2.0) * 10
# 检查系统负载
system_load = system_status.get("system_load", 0)
if system_load > 0.8: # 负载超过80%
stability_score -= (system_load - 0.8) * 50
self.health_metrics["technical_stability"].value = max(0, stability_score)
self.health_metrics["technical_stability"].last_updated = datetime.now()
async def _check_for_alerts(self, overall_score: float, status: HealthStatus):
"""检查是否需要发出警报"""
current_time = datetime.now()
# 检查各项指标是否触发警报
for metric_name, metric in self.health_metrics.items():
alert_level = self._determine_alert_level(metric)
if alert_level != InterventionLevel.NONE:
# 检查是否在冷却期内
recent_alerts = [
alert for alert in self.active_alerts
if alert.alert_type.value == metric_name and
(current_time - alert.timestamp).total_seconds() <
self.monitoring_config["alert_cooldown_minutes"] * 60
]
if not recent_alerts:
await self._create_alert(metric_name, metric, alert_level)
# 检查整体健康状态
if status in [HealthStatus.POOR, HealthStatus.CRITICAL]:
await self._create_system_alert(overall_score, status)
def _determine_alert_level(self, metric: HealthMetric) -> InterventionLevel:
"""确定警报级别"""
if metric.value <= metric.threshold_critical:
return InterventionLevel.EMERGENCY_STOP
elif metric.value <= metric.threshold_poor:
return InterventionLevel.STRONG_INTERVENTION
elif metric.value <= metric.threshold_fair:
return InterventionLevel.MODERATE_GUIDANCE
elif metric.value <= metric.threshold_good:
return InterventionLevel.GENTLE_REMINDER
else:
return InterventionLevel.NONE
async def _create_alert(self, metric_name: str, metric: HealthMetric, level: InterventionLevel):
"""创建警报"""
alert_type_map = {
"content_quality": AlertType.QUALITY_DECLINE,
"participation_balance": AlertType.PARTICIPATION_IMBALANCE,
"emotional_stability": AlertType.EMOTIONAL_ESCALATION,
"topic_relevance": AlertType.OFF_TOPIC,
"interaction_civility": AlertType.TOXIC_BEHAVIOR,
"technical_stability": AlertType.TECHNICAL_ERROR
}
alert = InterventionAlert(
id=f"alert_{datetime.now().timestamp()}",
alert_type=alert_type_map.get(metric_name, AlertType.QUALITY_DECLINE),
severity=level,
message=f"{metric.description}指标异常: {metric.value:.1f}",
affected_participants=[],
metrics={metric_name: metric.value},
timestamp=datetime.now()
)
self.active_alerts.append(alert)
# 触发事件处理
await self._trigger_event_handlers("alert_created", alert)
# 检查是否需要自动干预
if self.monitoring_config["auto_intervention_enabled"]:
await self._execute_auto_intervention(alert)
# 检查是否需要通知Human
if level >= self.monitoring_config["human_notification_threshold"]:
await self._notify_human(alert)
self.logger.warning(f"创建警报: {alert.alert_type.value} - {alert.message}")
async def _create_system_alert(self, score: float, status: HealthStatus):
"""创建系统级警报"""
level = InterventionLevel.STRONG_INTERVENTION if status == HealthStatus.POOR else InterventionLevel.EMERGENCY_STOP
alert = InterventionAlert(
id=f"system_alert_{datetime.now().timestamp()}",
alert_type=AlertType.QUALITY_DECLINE,
severity=level,
message=f"系统整体健康度异常: {score:.1f}分 ({status.value})",
affected_participants=[],
metrics={"overall_score": score},
timestamp=datetime.now()
)
self.active_alerts.append(alert)
await self._trigger_event_handlers("system_alert_created", alert)
if self.monitoring_config["auto_intervention_enabled"]:
await self._execute_auto_intervention(alert)
await self._notify_human(alert)
self.logger.critical(f"系统级警报: {alert.message}")
async def _execute_auto_intervention(self, alert: InterventionAlert):
"""执行自动干预"""
intervention_strategies = {
AlertType.QUALITY_DECLINE: self._intervene_quality_decline,
AlertType.TOXIC_BEHAVIOR: self._intervene_toxic_behavior,
AlertType.EMOTIONAL_ESCALATION: self._intervene_emotional_escalation,
AlertType.PARTICIPATION_IMBALANCE: self._intervene_participation_imbalance,
AlertType.OFF_TOPIC: self._intervene_off_topic,
AlertType.TECHNICAL_ERROR: self._intervene_technical_error
}
strategy = intervention_strategies.get(alert.alert_type)
if strategy:
action = await strategy(alert)
if action:
self.intervention_history.append(action)
await self._trigger_event_handlers("intervention_executed", action)
async def _intervene_quality_decline(self, alert: InterventionAlert) -> Optional[InterventionAction]:
"""干预质量下降"""
action = InterventionAction(
id=f"quality_intervention_{datetime.now().timestamp()}",
action_type="quality_guidance",
description="发送质量提升指导",
target_participants=["all"],
parameters={
"message": "💡 建议:请提供更详细的论证和具体的例证来支持您的观点。",
"guidance_type": "quality_improvement"
},
executed_at=datetime.now(),
success=True,
result_message="质量提升指导已发送"
)
self.logger.info(f"执行质量干预: {action.description}")
return action
async def _intervene_toxic_behavior(self, alert: InterventionAlert) -> Optional[InterventionAction]:
"""干预有害行为"""
action = InterventionAction(
id=f"toxicity_intervention_{datetime.now().timestamp()}",
action_type="behavior_warning",
description="发送行为规范提醒",
target_participants=["all"],
parameters={
"message": "⚠️ 请保持文明讨论,避免使用攻击性语言。让我们专注于观点的交流。",
"warning_level": "moderate"
},
executed_at=datetime.now(),
success=True,
result_message="行为规范提醒已发送"
)
self.logger.warning(f"执行行为干预: {action.description}")
return action
async def _intervene_emotional_escalation(self, alert: InterventionAlert) -> Optional[InterventionAction]:
"""干预情绪升级"""
action = InterventionAction(
id=f"emotion_intervention_{datetime.now().timestamp()}",
action_type="emotion_cooling",
description="发送情绪缓解建议",
target_participants=["all"],
parameters={
"message": "🧘 让我们暂停一下,深呼吸。理性的讨论更有助于达成共识。",
"cooling_period": 60 # 秒
},
executed_at=datetime.now(),
success=True,
result_message="情绪缓解建议已发送"
)
self.logger.info(f"执行情绪干预: {action.description}")
return action
async def _intervene_participation_imbalance(self, alert: InterventionAlert) -> Optional[InterventionAction]:
"""干预参与不平衡"""
action = InterventionAction(
id=f"balance_intervention_{datetime.now().timestamp()}",
action_type="participation_encouragement",
description="鼓励平衡参与",
target_participants=["all"],
parameters={
"message": "🤝 鼓励所有参与者分享观点,让讨论更加丰富多元。",
"encouragement_type": "participation_balance"
},
executed_at=datetime.now(),
success=True,
result_message="参与鼓励消息已发送"
)
self.logger.info(f"执行参与平衡干预: {action.description}")
return action
async def _intervene_off_topic(self, alert: InterventionAlert) -> Optional[InterventionAction]:
"""干预偏离主题"""
action = InterventionAction(
id=f"topic_intervention_{datetime.now().timestamp()}",
action_type="topic_redirect",
description="引导回归主题",
target_participants=["all"],
parameters={
"message": "🎯 让我们回到主要讨论话题,保持讨论的焦点和深度。",
"redirect_type": "topic_focus"
},
executed_at=datetime.now(),
success=True,
result_message="主题引导消息已发送"
)
self.logger.info(f"执行主题干预: {action.description}")
return action
async def _intervene_technical_error(self, alert: InterventionAlert) -> Optional[InterventionAction]:
"""干预技术错误"""
action = InterventionAction(
id=f"tech_intervention_{datetime.now().timestamp()}",
action_type="technical_support",
description="提供技术支持",
target_participants=["system"],
parameters={
"message": "🔧 检测到技术问题,正在进行系统优化...",
"support_type": "system_optimization"
},
executed_at=datetime.now(),
success=True,
result_message="技术支持已启动"
)
self.logger.error(f"执行技术干预: {action.description}")
return action
async def _notify_human(self, alert: InterventionAlert):
"""通知Human"""
if alert.human_notified:
return
notification = {
"type": "human_intervention_required",
"alert_id": alert.id,
"severity": alert.severity.value,
"message": alert.message,
"timestamp": alert.timestamp.isoformat(),
"metrics": alert.metrics,
"recommended_actions": self._get_recommended_actions(alert)
}
# 触发Human通知事件
await self._trigger_event_handlers("human_notification", notification)
alert.human_notified = True
self.logger.critical(f"Human通知已发送: {alert.message}")
def _get_recommended_actions(self, alert: InterventionAlert) -> List[str]:
"""获取推荐的干预动作"""
recommendations = {
AlertType.QUALITY_DECLINE: [
"提供写作指导",
"分享优秀案例",
"调整讨论节奏"
],
AlertType.TOXIC_BEHAVIOR: [
"发出警告",
"暂时禁言",
"私下沟通"
],
AlertType.EMOTIONAL_ESCALATION: [
"暂停讨论",
"引导冷静",
"转移话题"
],
AlertType.PARTICIPATION_IMBALANCE: [
"邀请发言",
"限制发言频率",
"分组讨论"
],
AlertType.OFF_TOPIC: [
"重申主题",
"引导回归",
"设置议程"
],
AlertType.TECHNICAL_ERROR: [
"重启系统",
"检查日志",
"联系技术支持"
]
}
return recommendations.get(alert.alert_type, ["人工评估", "采取适当措施"])
async def _trigger_event_handlers(self, event_type: str, data: Any):
"""触发事件处理器"""
if event_type in self.event_handlers:
for handler in self.event_handlers[event_type]:
try:
await handler(data)
except Exception as e:
self.logger.error(f"事件处理器错误: {e}")
def add_event_handler(self, event_type: str, handler: Callable):
"""添加事件处理器"""
if event_type not in self.event_handlers:
self.event_handlers[event_type] = []
self.event_handlers[event_type].append(handler)
def update_metrics(self, metrics_data: Dict[str, float]):
"""更新健康指标(兼容性方法)"""
for metric_name, value in metrics_data.items():
if metric_name in self.health_metrics:
self.health_metrics[metric_name].value = value
self.health_metrics[metric_name].last_updated = datetime.now()
def get_health_status(self) -> HealthStatus:
"""获取当前健康状态(兼容性方法)"""
# 计算整体分数
total_score = 0.0
total_weight = 0.0
for metric in self.health_metrics.values():
total_score += metric.value * metric.weight
total_weight += metric.weight
overall_score = total_score / total_weight if total_weight > 0 else 0.0
# 确定状态
if overall_score >= 90:
return HealthStatus.EXCELLENT
elif overall_score >= 70:
return HealthStatus.GOOD
elif overall_score >= 50:
return HealthStatus.FAIR
elif overall_score >= 30:
return HealthStatus.POOR
else:
return HealthStatus.CRITICAL
def get_health_report(self) -> Dict[str, Any]:
"""获取健康报告"""
# 计算整体分数
total_score = 0.0
total_weight = 0.0
for metric in self.health_metrics.values():
total_score += metric.value * metric.weight
total_weight += metric.weight
overall_score = total_score / total_weight if total_weight > 0 else 0.0
# 确定状态
if overall_score >= 90:
status = HealthStatus.EXCELLENT
elif overall_score >= 70:
status = HealthStatus.GOOD
elif overall_score >= 50:
status = HealthStatus.FAIR
elif overall_score >= 30:
status = HealthStatus.POOR
else:
status = HealthStatus.CRITICAL
report = {
"overall_score": round(overall_score, 1),
"health_status": status.value,
"metrics": {
name: {
"value": round(metric.value, 1),
"weight": metric.weight,
"description": metric.description,
"last_updated": metric.last_updated.isoformat()
}
for name, metric in self.health_metrics.items()
},
"active_alerts": len(self.active_alerts),
"recent_interventions": len([a for a in self.intervention_history
if (datetime.now() - a.executed_at).total_seconds() < 3600]),
"monitoring_enabled": self.monitoring_enabled,
"last_check": datetime.now().isoformat()
}
return report
def resolve_alert(self, alert_id: str, resolution_notes: str = ""):
"""解决警报"""
for alert in self.active_alerts:
if alert.id == alert_id:
alert.resolved = True
alert.resolution_notes = resolution_notes
self.logger.info(f"警报已解决: {alert_id} - {resolution_notes}")
return True
return False
def clear_resolved_alerts(self):
"""清理已解决的警报"""
before_count = len(self.active_alerts)
self.active_alerts = [alert for alert in self.active_alerts if not alert.resolved]
after_count = len(self.active_alerts)
cleared_count = before_count - after_count
if cleared_count > 0:
self.logger.info(f"清理了 {cleared_count} 个已解决的警报")
def enable_monitoring(self):
"""启用监控"""
self.monitoring_enabled = True
self.logger.info("健康监控已启用")
def disable_monitoring(self):
"""禁用监控"""
self.monitoring_enabled = False
self.logger.info("健康监控已禁用")
def save_monitoring_data(self, filename: str = "monitoring_data.json"):
"""保存监控数据"""
# 序列化监控配置处理InterventionLevel枚举
serialized_config = self.monitoring_config.copy()
serialized_config["human_notification_threshold"] = self.monitoring_config["human_notification_threshold"].value
data = {
"health_metrics": {
name: {
"name": metric.name,
"value": metric.value,
"weight": metric.weight,
"threshold_critical": metric.threshold_critical,
"threshold_poor": metric.threshold_poor,
"threshold_fair": metric.threshold_fair,
"threshold_good": metric.threshold_good,
"description": metric.description,
"last_updated": metric.last_updated.isoformat()
}
for name, metric in self.health_metrics.items()
},
"active_alerts": [
{
"id": alert.id,
"alert_type": alert.alert_type.value,
"severity": alert.severity.value,
"message": alert.message,
"affected_participants": alert.affected_participants,
"metrics": alert.metrics,
"timestamp": alert.timestamp.isoformat(),
"resolved": alert.resolved,
"resolution_notes": alert.resolution_notes,
"human_notified": alert.human_notified
}
for alert in self.active_alerts
],
"intervention_history": [
{
"id": action.id,
"action_type": action.action_type,
"description": action.description,
"target_participants": action.target_participants,
"parameters": action.parameters,
"executed_at": action.executed_at.isoformat(),
"success": action.success,
"result_message": action.result_message
}
for action in self.intervention_history
],
"monitoring_config": serialized_config,
"monitoring_enabled": self.monitoring_enabled,
"export_time": datetime.now().isoformat()
}
with open(filename, 'w', encoding='utf-8') as f:
json.dump(data, f, ensure_ascii=False, indent=2)
self.logger.info(f"监控数据已保存到 {filename}")
# 使用示例
async def main():
"""使用示例"""
monitor = DebateHealthMonitor()
# 模拟辩论数据
debate_data = {
"recent_messages": [
{"sender": "正1", "content": "AI投资确实具有巨大潜力我们可以从以下几个方面来分析..."},
{"sender": "反1", "content": "但是风险也不容忽视!!!这些投资可能导致泡沫!"},
{"sender": "正2", "content": "好的"},
{"sender": "反2", "content": "你们这些观点太愚蠢了,完全没有逻辑!"},
],
"topic_keywords": ["AI", "投资", "风险", "收益", "技术"],
"system_status": {
"error_rate": 0.02,
"avg_response_time": 1.5,
"system_load": 0.6
}
}
# 分析健康度
score, status = await monitor.analyze_debate_health(debate_data)
print(f"\n📊 辩论健康度分析结果:")
print(f"综合得分: {score:.1f}")
print(f"健康状态: {status.value}")
# 获取详细报告
report = monitor.get_health_report()
print(f"\n📋 详细健康报告:")
print(f"活跃警报数: {report['active_alerts']}")
print(f"近期干预数: {report['recent_interventions']}")
print(f"\n📈 各项指标:")
for name, metric in report['metrics'].items():
print(f" {metric['description']}: {metric['value']}分 (权重: {metric['weight']})")
# 保存数据
monitor.save_monitoring_data()
if __name__ == "__main__":
asyncio.run(main())

454
src/jixia/main.py Normal file
View File

@ -0,0 +1,454 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
稷下学宫AI辩论系统主入口
提供命令行界面来运行不同的辩论模式
"""
import argparse
import asyncio
import sys
import os
import warnings
# 将项目根目录添加到 Python 路径,以便能正确导入模块
project_root = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
sys.path.insert(0, project_root)
# 抑制 google-adk 的调试日志和警告
import logging
logging.getLogger('google.adk').setLevel(logging.ERROR)
logging.getLogger('google.genai').setLevel(logging.ERROR)
# 设置环境变量来抑制ADK调试输出
os.environ['GOOGLE_CLOUD_DISABLE_GRPC_LOGS'] = 'true'
os.environ['GRPC_VERBOSITY'] = 'ERROR'
os.environ['GRPC_TRACE'] = ''
# 抑制 warnings
warnings.filterwarnings('ignore')
from config.doppler_config import validate_config
def check_environment():
"""检查并验证运行环境"""
print("🔧 检查运行环境...")
# 验证基础配置
if not validate_config():
print("❌ 环境配置验证失败")
return False
print("✅ 环境检查通过")
return True
async def run_adk_memory_debate(topic: str, participants: list = None):
"""运行ADK记忆增强辩论"""
print("⚠️ ADK记忆增强辩论功能正在适配新版本的 google-adk 库...")
print("💡 请先使用 'adk_simple' 模式进行测试。")
return False
# 以下代码暂时保留,待适配完成后再启用
"""
try:
from src.jixia.debates.adk_memory_debate import MemoryEnhancedDebate
print(f"🚀 启动ADK记忆增强辩论...")
print(f"📋 辩论主题: {topic}")
# 创建并初始化辩论系统
debate_system = MemoryEnhancedDebate()
await debate_system.initialize()
# 进行辩论
await debate_system.conduct_memory_debate(
topic=topic,
participants=participants
)
# 关闭资源
await debate_system.close()
print("\n🎉 ADK记忆增强辩论完成!")
return True
except ImportError as e:
print(f"❌ 导入模块失败: {e}")
print("请确保已安装Google ADK: pip install google-adk")
return False
except Exception as e:
print(f"❌ 运行ADK记忆增强辩论失败: {e}")
import traceback
traceback.print_exc()
return False
"""
async def run_adk_turn_based_debate(topic: str, participants: list = None, rounds: int = 3):
"""运行ADK八仙轮流辩论"""
try:
from google.adk import Agent, Runner
from google.adk.sessions import InMemorySessionService
from google.genai import types
import asyncio
print(f"🚀 启动ADK八仙轮流辩论...")
print(f"📋 辩论主题: {topic}")
print(f"🔄 辩论轮数: {rounds}")
# 默认参与者为八仙
if not participants or participants == ["铁拐李", "吕洞宾"]:
participants = ["铁拐李", "吕洞宾", "何仙姑", "张果老", "蓝采和", "汉钟离", "韩湘子", "曹国舅"]
# 定义主持人和八仙角色配置
roles_config = {
# 主持人
"太上老君": {
"name": "太上老君",
"model": "gemini-2.5-flash",
"instruction": "你是太上老君本次论道的主持人。你负责引导辩论的流程确保每位仙人都有机会发言并在每一轮结束后进行简要总结。你的发言风格庄重、睿智能够调和不同观点之间的矛盾。每次发言控制在100字以内。"
},
# 八仙
"铁拐李": {
"name": "铁拐李",
"model": "gemini-2.5-flash",
"instruction": "你是铁拐李八仙中的逆向思维专家。你善于从批判和质疑的角度看问题总是能发现事物的另一面。你的发言风格直接、犀利但富有智慧。每次发言控制在100字以内。"
},
"吕洞宾": {
"name": "吕洞宾",
"model": "gemini-2.5-flash",
"instruction": "你是吕洞宾八仙中的理性分析者。你善于平衡各方观点用理性和逻辑来分析问题。你的发言风格温和而深刻总是能找到问题的核心。每次发言控制在100字以内。"
},
"何仙姑": {
"name": "何仙姑",
"model": "gemini-2.5-flash",
"instruction": "你是何仙姑八仙中的风险控制专家。你总是从风险管理的角度思考问题善于发现潜在危险。你的发言风格谨慎、细致总是能提出需要警惕的问题。每次发言控制在100字以内。"
},
"张果老": {
"name": "张果老",
"model": "gemini-2.5-flash",
"instruction": "你是张果老八仙中的历史智慧者。你善于从历史数据中寻找规律和智慧总是能提供长期视角。你的发言风格沉稳、博学总是能引经据典。每次发言控制在100字以内。"
},
"蓝采和": {
"name": "蓝采和",
"model": "gemini-2.5-flash",
"instruction": "你是蓝采和八仙中的创新思维者。你善于从新兴视角和非传统方法来看待问题总能提出独特的见解。你的发言风格活泼、新颖总是能带来意想不到的观点。每次发言控制在100字以内。"
},
"汉钟离": {
"name": "汉钟离",
"model": "gemini-2.5-flash",
"instruction": "你是汉钟离八仙中的平衡协调者。你善于综合各方观点寻求和谐统一的解决方案。你的发言风格平和、包容总是能化解矛盾。每次发言控制在100字以内。"
},
"韩湘子": {
"name": "韩湘子",
"model": "gemini-2.5-flash",
"instruction": "你是韩湘子八仙中的艺术感知者。你善于从美学和感性的角度分析问题总能发现事物背后的深层含义。你的发言风格优雅、感性总是能触动人心。每次发言控制在100字以内。"
},
"曹国舅": {
"name": "曹国舅",
"model": "gemini-2.5-flash",
"instruction": "你是曹国舅八仙中的实务执行者。你关注实际操作和具体细节善于将理论转化为可行的方案。你的发言风格务实、严谨总是能提出建设性意见。每次发言控制在100字以内。"
}
}
# 创建会话服务和会话
session_service = InMemorySessionService()
session = await session_service.create_session(
state={},
app_name="稷下学宫轮流辩论系统",
user_id="debate_user"
)
# 创建主持人和八仙智能体及Runner
host_agent = None
host_runner = None
baxian_agents = {}
baxian_runners = {}
# 创建主持人
host_config = roles_config["太上老君"]
host_agent = Agent(
name=host_config["name"],
model=host_config["model"],
instruction=host_config["instruction"]
)
host_runner = Runner(
app_name="稷下学宫轮流辩论系统",
agent=host_agent,
session_service=session_service
)
# 创建八仙
for name in participants:
if name in roles_config:
config = roles_config[name]
agent = Agent(
name=config["name"],
model=config["model"],
instruction=config["instruction"]
)
baxian_agents[name] = agent
runner = Runner(
app_name="稷下学宫轮流辩论系统",
agent=agent,
session_service=session_service
)
baxian_runners[name] = runner
else:
print(f"⚠️ 未知的参与者: {name},将被跳过。")
if not baxian_agents:
print("❌ 没有有效的参与者,请检查参与者列表。")
return False
print(f"🎯 主持人: 太上老君")
print(f"👥 参与仙人: {', '.join(baxian_agents.keys())}")
# 初始化辩论历史
debate_history = []
# 开场白
print(f"\n📢 太上老君开场:")
opening_prompt = f"各位仙友,欢迎来到本次论道。今天的主题是:{topic}。请各位依次发表高见。"
content = types.Content(role='user', parts=[types.Part(text=opening_prompt)])
response = host_runner.run_async(
user_id=session.user_id,
session_id=session.id,
new_message=content
)
reply = ""
async for event in response:
if hasattr(event, 'content') and event.content:
if hasattr(event.content, 'parts') and event.content.parts:
for part in event.content.parts:
if hasattr(part, 'text') and part.text:
reply += str(part.text)
elif hasattr(event, 'text') and event.text:
reply += str(event.text)
if reply.strip():
clean_reply = reply.strip()
print(f" {clean_reply}")
debate_history.append(f"太上老君: {clean_reply}")
await asyncio.sleep(1)
# 进行辩论
for round_num in range(rounds):
print(f"\n🌀 第 {round_num + 1} 轮辩论:")
# 主持人引导本轮辩论
print(f"\n📢 太上老君引导:")
guide_prompt = f"现在进入第 {round_num + 1} 轮辩论,请各位仙友围绕主题发表看法。"
content = types.Content(role='user', parts=[types.Part(text=guide_prompt)])
response = host_runner.run_async(
user_id=session.user_id,
session_id=session.id,
new_message=content
)
reply = ""
async for event in response:
if hasattr(event, 'content') and event.content:
if hasattr(event.content, 'parts') and event.content.parts:
for part in event.content.parts:
if hasattr(part, 'text') and part.text:
reply += str(part.text)
elif hasattr(event, 'text') and event.text:
reply += str(event.text)
if reply.strip():
clean_reply = reply.strip()
print(f" {clean_reply}")
debate_history.append(f"太上老君: {clean_reply}")
await asyncio.sleep(1)
# 八仙轮流发言
for name in participants:
if name not in baxian_runners:
continue
print(f"\n🗣️ {name} 发言:")
# 构建提示
history_context = ""
if debate_history:
recent_history = debate_history[-5:] # 最近5条发言
history_context = f"\n最近的论道内容:\n" + "\n".join([f"- {h}" for h in recent_history])
prompt = f"论道主题: {topic}{history_context}\n\n请从你的角色特点出发发表观点。请控制在100字以内。"
# 发送消息并获取回复
content = types.Content(role='user', parts=[types.Part(text=prompt)])
response = baxian_runners[name].run_async(
user_id=session.user_id,
session_id=session.id,
new_message=content
)
# 收集回复
reply = ""
async for event in response:
if hasattr(event, 'content') and event.content:
if hasattr(event.content, 'parts') and event.content.parts:
for part in event.content.parts:
if hasattr(part, 'text') and part.text:
reply += str(part.text)
elif hasattr(event, 'text') and event.text:
reply += str(event.text)
if reply.strip():
clean_reply = reply.strip()
print(f" {clean_reply}")
# 记录到辩论历史
debate_entry = f"{name}: {clean_reply}"
debate_history.append(debate_entry)
await asyncio.sleep(1) # 避免API调用过快
# 结束语
print(f"\n📢 太上老君总结:")
closing_prompt = f"各位仙友的高见令我受益匪浅。本次论道到此结束,希望各位能从不同观点中获得启发。"
content = types.Content(role='user', parts=[types.Part(text=closing_prompt)])
response = host_runner.run_async(
user_id=session.user_id,
session_id=session.id,
new_message=content
)
reply = ""
async for event in response:
if hasattr(event, 'content') and event.content:
if hasattr(event.content, 'parts') and event.content.parts:
for part in event.content.parts:
if hasattr(part, 'text') and part.text:
reply += str(part.text)
elif hasattr(event, 'text') and event.text:
reply += str(event.text)
if reply.strip():
clean_reply = reply.strip()
print(f" {clean_reply}")
debate_history.append(f"太上老君: {clean_reply}")
await asyncio.sleep(1)
# 关闭资源
await host_runner.close()
for runner in baxian_runners.values():
await runner.close()
print(f"\n🎉 ADK八仙轮流辩论完成!")
print(f"📝 本次论道共产生 {len(debate_history)} 条发言。")
return True
except ImportError as e:
print(f"❌ 导入模块失败: {e}")
print("请确保已安装Google ADK: pip install google-adk")
return False
except Exception as e:
print(f"❌ 运行ADK八仙轮流辩论失败: {e}")
import traceback
traceback.print_exc()
return False
async def run_swarm_debate(topic: str, participants: list = None):
"""运行Swarm辩论 (示例)"""
try:
print(f"🚀 启动Swarm辩论...")
print(f"📋 辩论主题: {topic}")
print(f"👥 参与者: {participants}")
# TODO: 实现调用 Swarm 辩论逻辑
# 这里需要根据实际的 swarm_debate.py 接口来实现
print("⚠️ Swarm辩论功能待实现")
print("\n🎉 Swarm辩论完成!")
return True
except Exception as e:
print(f"❌ 运行Swarm辩论失败: {e}")
import traceback
traceback.print_exc()
return False
async def main_async(args):
"""异步主函数"""
# 检查环境
if not check_environment():
return 1
# 根据模式运行不同的辩论
if args.mode == "adk_memory":
participants = args.participants.split(",") if args.participants else None
success = await run_adk_memory_debate(args.topic, participants)
return 0 if success else 1
elif args.mode == "adk_turn_based":
participants = args.participants.split(",") if args.participants else None
success = await run_adk_turn_based_debate(args.topic, participants, args.rounds)
return 0 if success else 1
elif args.mode == "adk_simple":
# 简单辩论模式暂时使用原来的方式
try:
from src.jixia.debates.adk_simple_debate import simple_debate_test
result = simple_debate_test()
return 0 if result else 1
except Exception as e:
print(f"❌ 运行ADK简单辩论失败: {e}")
return 1
elif args.mode == "swarm":
participants = args.participants.split(",") if args.participants else None
success = await run_swarm_debate(args.topic, participants)
return 0 if success else 1
else:
print(f"❌ 不支持的模式: {args.mode}")
return 1
def main():
"""主入口函数"""
parser = argparse.ArgumentParser(description="稷下学宫AI辩论系统")
parser.add_argument(
"mode",
choices=["adk_memory", "adk_turn_based", "adk_simple", "swarm"],
help="辩论模式"
)
parser.add_argument(
"--topic",
"-t",
default="人工智能对未来社会的影响",
help="辩论主题"
)
parser.add_argument(
"--participants",
"-p",
help="参与者列表(逗号分隔),例如: 铁拐李,吕洞宾,何仙姑"
)
parser.add_argument(
"--rounds",
"-r",
type=int,
default=3,
help="辩论轮数 (仅适用于 adk_turn_based 模式)"
)
args = parser.parse_args()
# 运行异步主函数
try:
exit_code = asyncio.run(main_async(args))
sys.exit(exit_code)
except KeyboardInterrupt:
print("\n\n👋 用户中断,退出程序")
sys.exit(0)
except Exception as e:
print(f"\n\n💥 程序运行出错: {e}")
import traceback
traceback.print_exc()
sys.exit(1)
if __name__ == "__main__":
main()

View File

@ -0,0 +1,454 @@
#!/usr/bin/env python3
"""
Cloudflare AutoRAG Vectorize 记忆银行实现
为稷下学宫AI辩论系统提供Cloudflare后端的记忆功能
"""
import os
import json
from typing import Dict, List, Optional, Any
from dataclasses import dataclass
from datetime import datetime
import aiohttp
from config.doppler_config import get_cloudflare_config
@dataclass
class MemoryEntry:
"""记忆条目数据结构"""
id: str
content: str
metadata: Dict[str, Any]
timestamp: str # ISO format string
agent_name: str
debate_topic: str
memory_type: str # "conversation", "preference", "knowledge", "strategy"
class CloudflareMemoryBank:
"""
Cloudflare AutoRAG Vectorize 记忆银行管理器
利用Cloudflare Vectorize索引和Workers AI进行向量检索增强生成
"""
def __init__(self):
"""初始化Cloudflare Memory Bank"""
self.config = get_cloudflare_config()
self.account_id = self.config['account_id']
self.api_token = self.config['api_token']
self.vectorize_index = self.config['vectorize_index']
self.embed_model = self.config['embed_model']
self.autorag_domain = self.config['autorag_domain']
# 构建API基础URL
self.base_url = f"https://api.cloudflare.com/client/v4/accounts/{self.account_id}"
self.headers = {
"Authorization": f"Bearer {self.api_token}",
"Content-Type": "application/json"
}
# 八仙智能体名称映射
self.baxian_agents = {
"tieguaili": "铁拐李",
"hanzhongli": "汉钟离",
"zhangguolao": "张果老",
"lancaihe": "蓝采和",
"hexiangu": "何仙姑",
"lvdongbin": "吕洞宾",
"hanxiangzi": "韩湘子",
"caoguojiu": "曹国舅"
}
async def _get_session(self) -> aiohttp.ClientSession:
"""获取aiohttp会话"""
return aiohttp.ClientSession()
async def create_memory_bank(self, agent_name: str, display_name: str = None) -> str:
"""
为指定智能体创建记忆空间在Cloudflare中通过命名空间或元数据实现
Args:
agent_name: 智能体名称 ( "tieguaili")
display_name: 显示名称 ( "铁拐李的记忆银行")
Returns:
记忆空间标识符 (这里用agent_name作为标识符)
"""
# Cloudflare Vectorize使用统一的索引通过元数据区分不同智能体的记忆
# 所以这里不需要实际创建,只需要返回标识符
if not display_name:
display_name = self.baxian_agents.get(agent_name, agent_name)
print(f"✅ 为 {display_name} 准备Cloudflare记忆空间")
return f"cf_memory_{agent_name}"
async def add_memory(self,
agent_name: str,
content: str,
memory_type: str = "conversation",
debate_topic: str = "",
metadata: Dict[str, Any] = None) -> str:
"""
添加记忆到Cloudflare Vectorize索引
Args:
agent_name: 智能体名称
content: 记忆内容
memory_type: 记忆类型 ("conversation", "preference", "knowledge", "strategy")
debate_topic: 辩论主题
metadata: 额外元数据
Returns:
记忆ID
"""
if metadata is None:
metadata = {}
# 生成记忆ID
memory_id = f"mem_{agent_name}_{int(datetime.now().timestamp() * 1000000)}"
# 构建记忆条目
memory_entry = MemoryEntry(
id=memory_id,
content=content,
metadata={
**metadata,
"agent_name": agent_name,
"chinese_name": self.baxian_agents.get(agent_name, agent_name),
"memory_type": memory_type,
"debate_topic": debate_topic,
"system": "jixia_academy"
},
timestamp=datetime.now().isoformat(),
agent_name=agent_name,
debate_topic=debate_topic,
memory_type=memory_type
)
# 将记忆条目转换为JSON字符串用于存储和检索
memory_data = {
"id": memory_id,
"values": [], # 向量值将在嵌入时填充
"metadata": memory_entry.metadata
}
try:
# 1. 使用Workers AI生成嵌入向量
embedding = await self._generate_embedding(content)
memory_data["values"] = embedding
# 2. 将记忆插入Vectorize索引
async with await self._get_session() as session:
url = f"{self.base_url}/vectorize/indexes/{self.vectorize_index}/upsert"
payload = {
"vectors": [memory_data]
}
async with session.post(url, headers=self.headers, json=payload) as response:
if response.status == 200:
result = await response.json()
print(f"✅ 为 {self.baxian_agents.get(agent_name)} 添加记忆: {memory_type}")
return memory_id
else:
error_text = await response.text()
raise Exception(f"Failed to upsert memory: {response.status} - {error_text}")
except Exception as e:
print(f"❌ 添加记忆失败: {e}")
raise
async def _generate_embedding(self, text: str) -> List[float]:
"""
使用Cloudflare Workers AI生成文本嵌入
Args:
text: 要嵌入的文本
Returns:
嵌入向量
"""
async with await self._get_session() as session:
url = f"{self.base_url}/ai/run/{self.embed_model}"
payload = {
"text": [text] # Workers AI embeddings API expects a list of texts
}
async with session.post(url, headers=self.headers, json=payload) as response:
if response.status == 200:
result = await response.json()
# 提取嵌入向量 (通常是 result["result"]["data"][0]["embedding"])
if "result" in result and "data" in result["result"] and len(result["result"]["data"]) > 0:
return result["result"]["data"][0]["embedding"]
else:
raise Exception(f"Unexpected embedding response format: {result}")
else:
error_text = await response.text()
raise Exception(f"Failed to generate embedding: {response.status} - {error_text}")
async def search_memories(self,
agent_name: str,
query: str,
memory_type: str = None,
limit: int = 10) -> List[Dict[str, Any]]:
"""
使用向量相似性搜索智能体的相关记忆
Args:
agent_name: 智能体名称
query: 搜索查询
memory_type: 记忆类型过滤
limit: 返回结果数量限制
Returns:
相关记忆列表
"""
try:
# 1. 为查询生成嵌入向量
query_embedding = await self._generate_embedding(query)
# 2. 构建过滤条件
filters = {
"agent_name": agent_name
}
if memory_type:
filters["memory_type"] = memory_type
# 3. 执行向量搜索
async with await self._get_session() as session:
url = f"{self.base_url}/vectorize/indexes/{self.vectorize_index}/query"
payload = {
"vector": query_embedding,
"topK": limit,
"filter": filters,
"returnMetadata": True
}
async with session.post(url, headers=self.headers, json=payload) as response:
if response.status == 200:
result = await response.json()
matches = result.get("result", {}).get("matches", [])
# 格式化返回结果
memories = []
for match in matches:
memory_data = {
"content": match["metadata"].get("content", ""),
"metadata": match["metadata"],
"relevance_score": match["score"]
}
memories.append(memory_data)
return memories
else:
error_text = await response.text()
raise Exception(f"Failed to search memories: {response.status} - {error_text}")
except Exception as e:
print(f"❌ 搜索记忆失败: {e}")
return []
async def get_agent_context(self, agent_name: str, debate_topic: str) -> str:
"""
获取智能体在特定辩论主题下的上下文记忆
Args:
agent_name: 智能体名称
debate_topic: 辩论主题
Returns:
格式化的上下文字符串
"""
# 搜索相关记忆
conversation_memories = await self.search_memories(
agent_name, debate_topic, "conversation", limit=5
)
preference_memories = await self.search_memories(
agent_name, debate_topic, "preference", limit=3
)
strategy_memories = await self.search_memories(
agent_name, debate_topic, "strategy", limit=3
)
# 构建上下文
context_parts = []
if conversation_memories:
context_parts.append("## 历史对话记忆")
for mem in conversation_memories:
context_parts.append(f"- {mem['content']}")
if preference_memories:
context_parts.append("\n## 偏好记忆")
for mem in preference_memories:
context_parts.append(f"- {mem['content']}")
if strategy_memories:
context_parts.append("\n## 策略记忆")
for mem in strategy_memories:
context_parts.append(f"- {mem['content']}")
chinese_name = self.baxian_agents.get(agent_name, agent_name)
if context_parts:
return f"# {chinese_name}的记忆上下文\n\n" + "\n".join(context_parts)
else:
return f"# {chinese_name}的记忆上下文\n\n暂无相关记忆。"
async def save_debate_session(self,
debate_topic: str,
participants: List[str],
conversation_history: List[Dict[str, str]],
outcomes: Dict[str, Any] = None) -> None:
"""
保存完整的辩论会话到各参与者的记忆银行
Args:
debate_topic: 辩论主题
participants: 参与者列表
conversation_history: 对话历史
outcomes: 辩论结果和洞察
"""
for agent_name in participants:
if agent_name not in self.baxian_agents:
continue
# 保存对话历史
conversation_summary = self._summarize_conversation(
conversation_history, agent_name
)
await self.add_memory(
agent_name=agent_name,
content=conversation_summary,
memory_type="conversation",
debate_topic=debate_topic,
metadata={
"participants": participants,
"session_length": len(conversation_history)
}
)
# 保存策略洞察
if outcomes:
strategy_insight = self._extract_strategy_insight(
outcomes, agent_name
)
if strategy_insight:
await self.add_memory(
agent_name=agent_name,
content=strategy_insight,
memory_type="strategy",
debate_topic=debate_topic,
metadata={"session_outcome": outcomes}
)
def _summarize_conversation(self,
conversation_history: List[Dict[str, str]],
agent_name: str) -> str:
"""
为特定智能体总结对话历史
Args:
conversation_history: 对话历史
agent_name: 智能体名称
Returns:
对话总结
"""
agent_messages = [
msg for msg in conversation_history
if msg.get("agent") == agent_name
]
if not agent_messages:
return "本次辩论中未发言"
chinese_name = self.baxian_agents.get(agent_name, agent_name)
summary = f"{chinese_name}在本次辩论中的主要观点:\n"
for i, msg in enumerate(agent_messages[:3], 1): # 只取前3条主要观点
summary += f"{i}. {msg.get('content', '')[:100]}...\n"
return summary
def _extract_strategy_insight(self,
outcomes: Dict[str, Any],
agent_name: str) -> Optional[str]:
"""
从辩论结果中提取策略洞察
Args:
outcomes: 辩论结果
agent_name: 智能体名称
Returns:
策略洞察或None
"""
# 这里可以根据实际的outcomes结构来提取洞察
# 暂时返回一个简单的示例
chinese_name = self.baxian_agents.get(agent_name, agent_name)
if "winner" in outcomes and outcomes["winner"] == agent_name:
return f"{chinese_name}在本次辩论中获胜,其论证策略值得保持。"
elif "insights" in outcomes and agent_name in outcomes["insights"]:
return outcomes["insights"][agent_name]
return None
# 便捷函数
async def initialize_baxian_memory_banks() -> CloudflareMemoryBank:
"""
初始化所有八仙智能体的Cloudflare记忆空间
Returns:
配置好的CloudflareMemoryBank实例
"""
memory_bank = CloudflareMemoryBank()
print("🏛️ 正在为稷下学宫八仙创建Cloudflare记忆空间...")
for agent_key, chinese_name in memory_bank.baxian_agents.items():
try:
await memory_bank.create_memory_bank(agent_key)
except Exception as e:
print(f"⚠️ 创建 {chinese_name} 记忆空间时出错: {e}")
print("✅ 八仙Cloudflare记忆空间初始化完成")
return memory_bank
if __name__ == "__main__":
import asyncio
async def test_memory_bank():
"""测试Cloudflare Memory Bank功能"""
try:
# 创建Memory Bank实例
memory_bank = CloudflareMemoryBank()
# 测试创建记忆空间
await memory_bank.create_memory_bank("tieguaili")
# 测试添加记忆
await memory_bank.add_memory(
agent_name="tieguaili",
content="在讨论NVIDIA股票时我倾向于逆向思维关注潜在风险。",
memory_type="preference",
debate_topic="NVIDIA投资分析"
)
# 测试搜索记忆
results = await memory_bank.search_memories(
agent_name="tieguaili",
query="NVIDIA",
limit=5
)
print(f"搜索结果: {len(results)} 条记忆")
for result in results:
print(f"- {result['content']}")
except Exception as e:
print(f"❌ 测试失败: {e}")
# 运行测试
asyncio.run(test_memory_bank())

View File

@ -1,6 +1,6 @@
#!/usr/bin/env python3
"""
记忆银行工厂根据配置创建 Vertex 实现
记忆银行工厂根据配置创建不同后端实现Vertex AI Cloudflare AutoRAG
"""
from __future__ import annotations
@ -9,26 +9,19 @@ from typing import Optional
from .base_memory_bank import MemoryBankProtocol
from .vertex_memory_bank import VertexMemoryBank
# 新增 Cloudflare 实现
from .cloudflare_memory_bank import CloudflareMemoryBank
def get_memory_backend(prefer: Optional[str] = None) -> MemoryBankProtocol:
"""
根据环境变量选择记忆后端
- JIXIA_MEMORY_BACKEND=vertex (默认)
- 如果未设置默认使用 Vertex
强制使用 Vertex AI 作为记忆后端
'prefer' 参数将被忽略
"""
# 从环境变量读取后端选择,默认为 vertex
backend = os.getenv("JIXIA_MEMORY_BACKEND", "vertex").lower()
if prefer:
backend = prefer.lower()
if backend != "vertex":
raise ValueError(f"不支持的记忆后端: {backend},当前只支持 'vertex'")
# Vertex 作为唯一后端
# 强制使用 Vertex AI 后端
try:
mem = VertexMemoryBank.from_config()
print("🧠 使用 Vertex AI 作为记忆后端")
return mem
except Exception as e:
# 不可用时抛错

View File

@ -1,30 +0,0 @@
/* global use, db */
// MongoDB Playground
// Use Ctrl+Space inside a snippet or a string literal to trigger completions.
// The current database to use.
use('taigong');
// Search for documents in the current collection.
db.getCollection('articles')
.find(
{
/*
* Filter
* fieldA: value or expression
*/
},
{
/*
* Projection
* _id: 0, // exclude _id
* fieldA: 1 // include field
*/
}
)
.sort({
/*
* fieldA: 1 // ascending
* fieldB: -1 // descending
*/
});

90
test_vertex_ai_setup.py Normal file
View File

@ -0,0 +1,90 @@
#!/usr/bin/env python3
"""
测试 Vertex AI 配置和连接
"""
import os
import sys
from config.doppler_config import get_google_genai_config
def test_doppler_config():
"""测试 Doppler 配置"""
print("🔍 测试 Doppler 配置...")
try:
config = get_google_genai_config()
print("✅ 成功读取 Google GenAI 配置")
print(f" - API Key: {'已配置' if config.get('api_key') else '未配置'}")
print(f" - Use Vertex AI: {config.get('use_vertex_ai', '未设置')}")
print(f" - Project ID: {config.get('project_id', '未设置')}")
print(f" - Location: {config.get('location', '未设置')}")
print(f" - Memory Bank Enabled: {config.get('memory_bank_enabled', '未设置')}")
return config
except Exception as e:
print(f"❌ 读取 Google GenAI 配置失败: {e}")
return None
def test_environment_variables():
"""测试环境变量"""
print("\n🔍 测试环境变量...")
adc_path = os.path.expanduser("~/.config/gcloud/application_default_credentials.json")
if os.path.exists(adc_path):
print("✅ 找到 Application Default Credentials 文件")
else:
print("❌ 未找到 Application Default Credentials 文件")
google_env_vars = [var for var in os.environ if var.startswith('GOOGLE_')]
if google_env_vars:
print("✅ 找到以下 Google 环境变量:")
for var in google_env_vars:
# 不显示敏感信息
if 'KEY' in var or 'SECRET' in var or 'TOKEN' in var:
print(f" - {var}: {'已设置' if os.environ.get(var) else '未设置'}")
else:
print(f" - {var}: {os.environ.get(var, '未设置')}")
else:
print("⚠️ 未找到 Google 环境变量")
def main():
"""主函数"""
print("🧪 Vertex AI 配置测试\n")
# 测试 Doppler 配置
config = test_doppler_config()
# 测试环境变量
test_environment_variables()
# 检查是否满足基本要求
print("\n📋 配置检查摘要:")
if config:
project_id = config.get('project_id')
api_key = config.get('api_key')
if project_id:
print("✅ Google Cloud Project ID 已配置")
else:
print("❌ 未配置 Google Cloud Project ID")
if api_key:
print("✅ Google API Key 已配置")
else:
print("❌ 未配置 Google API Key")
# 检查是否启用 Vertex AI
use_vertex = config.get('use_vertex_ai', '').upper()
if use_vertex == 'TRUE':
print("✅ Vertex AI 已启用")
else:
print("❌ Vertex AI 未启用 (请检查 GOOGLE_GENAI_USE_VERTEXAI 环境变量)")
# 检查是否启用 Memory Bank
memory_bank_enabled = config.get('memory_bank_enabled', '').upper()
if memory_bank_enabled == 'TRUE':
print("✅ Memory Bank 已启用")
else:
print("❌ Memory Bank 未启用 (请检查 VERTEX_MEMORY_BANK_ENABLED 环境变量)")
else:
print("❌ 无法读取配置,请检查 Doppler 配置")
if __name__ == "__main__":
main()

View File

@ -0,0 +1,72 @@
#!/usr/bin/env python3
"""
测试 Vertex AI Memory Bank 功能
"""
import asyncio
import sys
import os
# 添加项目根目录到Python路径
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '.')))
from src.jixia.memory.factory import get_memory_backend
async def test_vertex_memory_bank():
"""测试 Vertex Memory Bank 功能"""
print("🧪 Vertex AI Memory Bank 功能测试\n")
try:
# 获取 Vertex Memory Bank 后端
print("🔍 正在获取 Vertex Memory Bank 后端...")
memory_bank = get_memory_backend(prefer='vertex')
print("✅ 成功获取 Vertex Memory Bank 后端\n")
# 测试创建记忆银行
print("🔍 正在为吕洞宾创建记忆银行...")
bank_id = await memory_bank.create_memory_bank("lvdongbin", "吕洞宾的记忆银行")
print(f"✅ 成功为吕洞宾创建记忆银行: {bank_id}\n")
# 测试添加记忆
print("🔍 正在为吕洞宾添加记忆...")
memory_id = await memory_bank.add_memory(
agent_name="lvdongbin",
content="在讨论NVIDIA股票时我倾向于使用DCF模型评估其内在价值并关注其在AI领域的竞争优势。",
memory_type="preference",
debate_topic="NVIDIA投资分析",
metadata={"confidence": "high"}
)
print(f"✅ 成功为吕洞宾添加记忆: {memory_id}\n")
# 测试搜索记忆
print("🔍 正在搜索吕洞宾关于NVIDIA的记忆...")
results = await memory_bank.search_memories(
agent_name="lvdongbin",
query="NVIDIA",
memory_type="preference"
)
print(f"✅ 搜索完成,找到 {len(results)} 条相关记忆\n")
if results:
print("🔍 搜索结果:")
for i, result in enumerate(results, 1):
print(f" {i}. {result['content']}")
print(f" 相关性评分: {result['relevance_score']:.4f}\n")
# 测试获取上下文
print("🔍 正在获取吕洞宾关于NVIDIA投资分析的上下文...")
context = await memory_bank.get_agent_context("lvdongbin", "NVIDIA投资分析")
print("✅ 成功获取上下文\n")
print("🔍 上下文内容:")
print(context)
print("\n")
print("🎉 Vertex AI Memory Bank 功能测试完成!")
except Exception as e:
print(f"❌ 测试过程中发生错误: {e}")
import traceback
traceback.print_exc()
if __name__ == "__main__":
asyncio.run(test_vertex_memory_bank())

View File

@ -0,0 +1,205 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
测试Cloudflare网关的Gemini API
使用用户提供的新配置
"""
import requests
import json
import logging
# 配置日志
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)
def test_cloudflare_gemini():
"""测试Cloudflare网关的Gemini API"""
# 用户提供的新配置
API_KEY = "AIzaSyAQ2TXFAzmTKm4aFqgrjkhjgsp95bDsAyE"
BASE_URL = "https://gateway.ai.cloudflare.com/v1/e167cb36a5b95cb3cc8daf77a3f7d0b3/default/google-ai-studio"
MODEL = "models/gemini-2.5-pro"
logger.info(f"🧪 测试Cloudflare Gemini配置:")
logger.info(f"📡 BASE_URL: {BASE_URL}")
logger.info(f"🔑 API_KEY: {API_KEY[:10]}...")
logger.info(f"🤖 MODEL: {MODEL}")
# 构建请求
url = f"{BASE_URL}/v1beta/{MODEL}:generateContent"
headers = {
"Content-Type": "application/json",
"x-goog-api-key": API_KEY
}
payload = {
"contents": [
{
"parts": [
{
"text": "你好,请简单介绍一下你自己"
}
]
}
],
"generationConfig": {
"maxOutputTokens": 1000,
"temperature": 0.7
}
}
try:
logger.info("🚀 发送请求到Cloudflare网关...")
logger.info(f"📍 请求URL: {url}")
response = requests.post(
url,
json=payload,
headers=headers,
timeout=60
)
logger.info(f"📊 状态码: {response.status_code}")
if response.status_code == 200:
result = response.json()
logger.info(f"✅ 请求成功!")
logger.info(f"📋 完整响应: {json.dumps(result, ensure_ascii=False, indent=2)}")
# 提取内容
if 'candidates' in result and len(result['candidates']) > 0:
candidate = result['candidates'][0]
if 'content' in candidate and 'parts' in candidate['content']:
content = candidate['content']['parts'][0].get('text', '')
logger.info(f"🗣️ Gemini回应: {content}")
return True, content
return True, "响应格式异常"
else:
logger.error(f"❌ 请求失败: {response.status_code}")
logger.error(f"📋 错误响应: {response.text}")
return False, response.text
except requests.exceptions.Timeout:
logger.error(f"⏰ 请求超时 (60秒)")
return False, "请求超时"
except requests.exceptions.ConnectionError as e:
logger.error(f"🔌 连接错误: {e}")
return False, str(e)
except Exception as e:
logger.error(f"💥 未知错误: {e}")
return False, str(e)
def test_gemini_breakdown():
"""测试Gemini的问题分解能力"""
API_KEY = "AIzaSyAQ2TXFAzmTKm4aFqgrjkhjgsp95bDsAyE"
BASE_URL = "https://gateway.ai.cloudflare.com/v1/e167cb36a5b95cb3cc8daf77a3f7d0b3/default/google-ai-studio"
MODEL = "models/gemini-2.5-pro"
url = f"{BASE_URL}/v1beta/{MODEL}:generateContent"
headers = {
"Content-Type": "application/json",
"x-goog-api-key": API_KEY
}
topic = "工作量证明vs无限制爬虫从李时珍采药到AI数据获取的激励机制变革"
payload = {
"contents": [
{
"parts": [
{
"text": f"你是太上老君负责将复杂问题分解为多个子问题。请将以下问题分解为3-5个子问题以JSON格式返回\n\n{topic}\n\n返回格式:{{\"subtopics\": [{{\"title\": \"子问题标题\", \"description\": \"详细描述\"}}]}}"
}
]
}
],
"generationConfig": {
"maxOutputTokens": 2000,
"temperature": 0.7
}
}
try:
logger.info("🧠 测试Gemini问题分解能力...")
response = requests.post(
url,
json=payload,
headers=headers,
timeout=60
)
logger.info(f"📊 状态码: {response.status_code}")
if response.status_code == 200:
result = response.json()
logger.info(f"✅ 分解测试成功!")
# 提取内容
if 'candidates' in result and len(result['candidates']) > 0:
candidate = result['candidates'][0]
if 'content' in candidate and 'parts' in candidate['content']:
content = candidate['content']['parts'][0].get('text', '')
logger.info(f"📋 分解结果:\n{content}")
# 尝试解析JSON
try:
# 提取JSON部分
if '```json' in content:
json_start = content.find('```json') + 7
json_end = content.find('```', json_start)
json_content = content[json_start:json_end].strip()
elif '{' in content and '}' in content:
json_start = content.find('{')
json_end = content.rfind('}') + 1
json_content = content[json_start:json_end]
else:
json_content = content
parsed_json = json.loads(json_content)
logger.info(f"🎯 JSON解析成功: {json.dumps(parsed_json, ensure_ascii=False, indent=2)}")
return True, parsed_json
except json.JSONDecodeError as e:
logger.warning(f"⚠️ JSON解析失败: {e}")
logger.warning(f"📝 原始内容: {content}")
return True, content
return True, "响应格式异常"
else:
logger.error(f"❌ 分解测试失败: {response.status_code}")
logger.error(f"📋 错误响应: {response.text}")
return False, response.text
except Exception as e:
logger.error(f"💥 分解测试错误: {e}")
return False, str(e)
if __name__ == "__main__":
logger.info("🎯 开始Cloudflare Gemini API测试")
# 基础连接测试
success1, result1 = test_cloudflare_gemini()
if success1:
logger.info("🎉 基础测试通过!")
# 问题分解测试
success2, result2 = test_gemini_breakdown()
if success2:
logger.info("🎉 所有测试通过Gemini API工作正常")
logger.info("✅ 可以安全运行完整的循环赛系统")
else:
logger.error("💀 问题分解测试失败")
else:
logger.error("💀 基础连接测试失败")
logger.info("🏁 测试完成")

View File

@ -0,0 +1,436 @@
#!/usr/bin/env python3
"""
Cloudflare Memory Bank 实现测试
"""
import unittest
import asyncio
import os
import sys
from unittest.mock import patch, MagicMock, AsyncMock
from datetime import datetime
# 添加项目根目录到Python路径
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from src.jixia.memory.cloudflare_memory_bank import CloudflareMemoryBank, MemoryEntry
class TestCloudflareMemoryBank(unittest.TestCase):
"""测试CloudflareMemoryBank类"""
def setUp(self):
"""测试前的设置"""
# Mock掉 aiohttp.ClientSession 以避免实际网络请求
self.patcher = patch('src.jixia.memory.cloudflare_memory_bank.aiohttp.ClientSession')
self.mock_session_class = self.patcher.start()
self.mock_session = AsyncMock()
self.mock_session_class.return_value = self.mock_session
# Mock掉 get_cloudflare_config 以避免实际读取配置
self.config_patcher = patch('src.jixia.memory.cloudflare_memory_bank.get_cloudflare_config')
self.mock_get_config = self.config_patcher.start()
self.mock_get_config.return_value = {
'account_id': 'test-account',
'api_token': 'test-token',
'vectorize_index': 'test-index',
'embed_model': '@cf/baai/bge-m3',
'autorag_domain': 'test.example.com'
}
# 创建CloudflareMemoryBank实例
self.memory_bank = CloudflareMemoryBank()
# 重置一些内部状态
self.memory_bank.config = self.mock_get_config.return_value
self.memory_bank.account_id = 'test-account'
self.memory_bank.api_token = 'test-token'
self.memory_bank.vectorize_index = 'test-index'
self.memory_bank.embed_model = '@cf/baai/bge-m3'
self.memory_bank.autorag_domain = 'test.example.com'
def tearDown(self):
"""测试后的清理"""
self.patcher.stop()
self.config_patcher.stop()
def test_init(self):
"""测试初始化"""
self.assertEqual(self.memory_bank.account_id, "test-account")
self.assertEqual(self.memory_bank.api_token, "test-token")
self.assertEqual(self.memory_bank.vectorize_index, "test-index")
self.assertEqual(self.memory_bank.embed_model, "@cf/baai/bge-m3")
self.assertEqual(self.memory_bank.autorag_domain, "test.example.com")
async def test_create_memory_bank(self):
"""测试创建记忆空间"""
memory_bank_id = await self.memory_bank.create_memory_bank("tieguaili")
# 验证返回的ID格式
self.assertEqual(memory_bank_id, "cf_memory_tieguaili")
async def test_create_memory_bank_with_display_name(self):
"""测试创建记忆空间时指定显示名称"""
memory_bank_id = await self.memory_bank.create_memory_bank(
"tieguaili",
"铁拐李的专属记忆银行"
)
# 验证返回的ID格式
self.assertEqual(memory_bank_id, "cf_memory_tieguaili")
async def test_generate_embedding(self):
"""测试生成嵌入向量"""
# Mock响应
mock_response = AsyncMock()
mock_response.status = 200
mock_response.json = AsyncMock(return_value={
"result": {
"data": [
{
"embedding": [0.1, 0.2, 0.3, 0.4, 0.5]
}
]
}
})
# Mock session.post
self.mock_session.post.return_value.__aenter__.return_value = mock_response
# 调用方法
embedding = await self.memory_bank._generate_embedding("测试文本")
# 验证结果
self.assertEqual(embedding, [0.1, 0.2, 0.3, 0.4, 0.5])
# 验证调用了正确的URL和参数
expected_url = "https://api.cloudflare.com/client/v4/accounts/test-account/ai/run/@cf/baai/bge-m3"
self.mock_session.post.assert_called_once()
call_args = self.mock_session.post.call_args
self.assertEqual(call_args[0][0], expected_url)
self.assertEqual(call_args[1]['json'], {"text": ["测试文本"]})
async def test_generate_embedding_api_error(self):
"""测试生成嵌入向量时API错误"""
# Mock响应
mock_response = AsyncMock()
mock_response.status = 500
mock_response.text = AsyncMock(return_value="Internal Server Error")
# Mock session.post
self.mock_session.post.return_value.__aenter__.return_value = mock_response
# 验证抛出异常
with self.assertRaises(Exception) as context:
await self.memory_bank._generate_embedding("测试文本")
self.assertIn("Failed to generate embedding", str(context.exception))
async def test_add_memory(self):
"""测试添加记忆"""
# Mock _generate_embedding 方法
with patch.object(self.memory_bank, '_generate_embedding', new=AsyncMock(return_value=[0.1, 0.2, 0.3])) as mock_embed:
# Mock upsert 响应
mock_response = AsyncMock()
mock_response.status = 200
mock_response.json = AsyncMock(return_value={"result": {"upserted": 1}})
# Mock session.post
self.mock_session.post.return_value.__aenter__.return_value = mock_response
# 添加记忆
memory_id = await self.memory_bank.add_memory(
agent_name="tieguaili",
content="在讨论NVIDIA股票时我倾向于逆向思维关注潜在风险。",
memory_type="preference",
debate_topic="NVIDIA投资分析",
metadata={"source": "manual"}
)
# 验证返回的ID格式 (以mem_开头)
self.assertTrue(memory_id.startswith("mem_tieguaili_"))
# 验证调用了生成嵌入的方法
mock_embed.assert_called_once_with("在讨论NVIDIA股票时我倾向于逆向思维关注潜在风险。")
# 验证调用了upsert API
self.mock_session.post.assert_called()
# 验证upsert调用的参数
upsert_call = None
for call in self.mock_session.post.call_args_list:
if 'vectorize/indexes/test-index/upsert' in call[0][0]:
upsert_call = call
break
self.assertIsNotNone(upsert_call)
call_args, call_kwargs = upsert_call
self.assertIn("vectorize/indexes/test-index/upsert", call_args[0])
self.assertIn("vectors", call_kwargs['json'])
async def test_add_memory_api_error(self):
"""测试添加记忆时API错误"""
# Mock _generate_embedding 方法
with patch.object(self.memory_bank, '_generate_embedding', new=AsyncMock(return_value=[0.1, 0.2, 0.3])):
# Mock upsert 响应
mock_response = AsyncMock()
mock_response.status = 500
mock_response.text = AsyncMock(return_value="Internal Server Error")
# Mock session.post
self.mock_session.post.return_value.__aenter__.return_value = mock_response
# 验证抛出异常
with self.assertRaises(Exception) as context:
await self.memory_bank.add_memory(
agent_name="tieguaili",
content="测试内容"
)
self.assertIn("Failed to upsert memory", str(context.exception))
async def test_search_memories(self):
"""测试搜索记忆"""
# Mock _generate_embedding 方法
with patch.object(self.memory_bank, '_generate_embedding', new=AsyncMock(return_value=[0.1, 0.2, 0.3])) as mock_embed:
# Mock query 响应
mock_response = AsyncMock()
mock_response.status = 200
mock_response.json = AsyncMock(return_value={
"result": {
"matches": [
{
"metadata": {
"content": "在讨论NVIDIA股票时我倾向于逆向思维关注潜在风险。",
"memory_type": "preference",
"agent_name": "tieguaili"
},
"score": 0.95
}
]
}
})
# Mock session.post
self.mock_session.post.return_value.__aenter__.return_value = mock_response
# 搜索记忆
results = await self.memory_bank.search_memories(
agent_name="tieguaili",
query="NVIDIA",
memory_type="preference",
limit=5
)
# 验证结果
self.assertEqual(len(results), 1)
self.assertEqual(results[0]["content"], "在讨论NVIDIA股票时我倾向于逆向思维关注潜在风险。")
self.assertEqual(results[0]["relevance_score"], 0.95)
# 验证调用了生成嵌入的方法
mock_embed.assert_called_once_with("NVIDIA")
# 验证调用了query API
self.mock_session.post.assert_called()
# 验证query调用的参数
query_call = None
for call in self.mock_session.post.call_args_list:
if 'vectorize/indexes/test-index/query' in call[0][0]:
query_call = call
break
self.assertIsNotNone(query_call)
call_args, call_kwargs = query_call
self.assertIn("vectorize/indexes/test-index/query", call_args[0])
self.assertIn("vector", call_kwargs['json'])
self.assertIn("filter", call_kwargs['json'])
self.assertEqual(call_kwargs['json']['filter'], {"agent_name": "tieguaili", "memory_type": "preference"})
async def test_search_memories_api_error(self):
"""测试搜索记忆时API错误"""
# Mock _generate_embedding 方法
with patch.object(self.memory_bank, '_generate_embedding', new=AsyncMock(return_value=[0.1, 0.2, 0.3])):
# Mock query 响应
mock_response = AsyncMock()
mock_response.status = 500
mock_response.text = AsyncMock(return_value="Internal Server Error")
# Mock session.post
self.mock_session.post.return_value.__aenter__.return_value = mock_response
# 验证返回空列表而不是抛出异常
results = await self.memory_bank.search_memories(
agent_name="tieguaili",
query="NVIDIA"
)
self.assertEqual(results, [])
async def test_get_agent_context(self):
"""测试获取智能体上下文"""
# Mock search_memories 方法
with patch.object(self.memory_bank, 'search_memories', new=AsyncMock()) as mock_search:
# 设置mock返回值
mock_search.side_effect = [
[ # conversation memories
{"content": "NVIDIA的估值过高存在泡沫风险。", "relevance_score": 0.9}
],
[ # preference memories
{"content": "倾向于逆向思维,关注潜在风险。", "relevance_score": 0.8}
],
[ # strategy memories
{"content": "使用技术分析策略。", "relevance_score": 0.7}
]
]
# 获取上下文
context = await self.memory_bank.get_agent_context("tieguaili", "NVIDIA投资分析")
# 验证上下文包含预期内容
self.assertIn("# 铁拐李的记忆上下文", context)
self.assertIn("## 历史对话记忆", context)
self.assertIn("## 偏好记忆", context)
self.assertIn("## 策略记忆", context)
self.assertIn("NVIDIA的估值过高存在泡沫风险。", context)
self.assertIn("倾向于逆向思维,关注潜在风险。", context)
self.assertIn("使用技术分析策略。", context)
async def test_get_agent_context_no_memories(self):
"""测试获取智能体上下文但无相关记忆"""
# Mock search_memories 方法
with patch.object(self.memory_bank, 'search_memories', new=AsyncMock(return_value=[])):
# 获取上下文
context = await self.memory_bank.get_agent_context("tieguaili", "NVIDIA投资分析")
# 验证上下文包含暂无相关记忆的提示
self.assertIn("# 铁拐李的记忆上下文", context)
self.assertIn("暂无相关记忆。", context)
async def test_save_debate_session(self):
"""测试保存辩论会话"""
# Mock add_memory 方法
with patch.object(self.memory_bank, 'add_memory', new=AsyncMock()) as mock_add:
conversation_history = [
{"agent": "tieguaili", "content": "NVIDIA的估值过高存在泡沫风险。"},
{"agent": "lvdongbin", "content": "NVIDIA在AI领域的领先地位不可忽视。"},
{"agent": "tieguaili", "content": "但我们需要考虑竞争加剧和增长放缓的可能性。"}
]
outcomes = {
"winner": "lvdongbin",
"insights": {
"tieguaili": "铁拐李的风险意识值得肯定但在AI趋势的判断上略显保守。"
}
}
# 保存辩论会话
await self.memory_bank.save_debate_session(
debate_topic="NVIDIA投资分析",
participants=["tieguaili", "lvdongbin"],
conversation_history=conversation_history,
outcomes=outcomes
)
# 验证调用了add_memory两次对话总结和策略洞察
self.assertEqual(mock_add.call_count, 2)
# 验证第一次调用是对话总结
call_args1 = mock_add.call_args_list[0][1]
self.assertEqual(call_args1['agent_name'], 'tieguaili')
self.assertEqual(call_args1['memory_type'], 'conversation')
self.assertIn('铁拐李在本次辩论中的主要观点', call_args1['content'])
# 验证第二次调用是策略洞察
call_args2 = mock_add.call_args_list[1][1]
self.assertEqual(call_args2['agent_name'], 'tieguaili')
self.assertEqual(call_args2['memory_type'], 'strategy')
self.assertIn('铁拐李的风险意识值得肯定', call_args2['content'])
def test_summarize_conversation(self):
"""测试对话总结"""
conversation_history = [
{"agent": "tieguaili", "content": "第一点看法NVIDIA的估值过高存在泡沫风险。"},
{"agent": "lvdongbin", "content": "NVIDIA在AI领域的领先地位不可忽视。"},
{"agent": "tieguaili", "content": "第二点看法:我们需要考虑竞争加剧和增长放缓的可能性。"},
{"agent": "tieguaili", "content": "第三点看法:从技术分析角度看,股价已出现超买信号。"}
]
summary = self.memory_bank._summarize_conversation(conversation_history, "tieguaili")
# 验证总结包含预期内容
self.assertIn("铁拐李在本次辩论中的主要观点", summary)
self.assertIn("第一点看法NVIDIA的估值过高存在泡沫风险。", summary)
self.assertIn("第二点看法:我们需要考虑竞争加剧和增长放缓的可能性。", summary)
self.assertIn("第三点看法:从技术分析角度看,股价已出现超买信号。", summary)
def test_extract_strategy_insight_winner(self):
"""测试提取策略洞察 - 获胜者"""
outcomes = {
"winner": "tieguaili",
"insights": {}
}
insight = self.memory_bank._extract_strategy_insight(outcomes, "tieguaili")
self.assertIn("铁拐李在本次辩论中获胜", insight)
def test_extract_strategy_insight_from_insights(self):
"""测试从洞察中提取策略洞察"""
outcomes = {
"winner": "lvdongbin",
"insights": {
"tieguaili": "铁拐李的风险意识值得肯定但在AI趋势的判断上略显保守。"
}
}
insight = self.memory_bank._extract_strategy_insight(outcomes, "tieguaili")
self.assertEqual(insight, "铁拐李的风险意识值得肯定但在AI趋势的判断上略显保守。")
if __name__ == '__main__':
# 创建一个异步测试运行器
def run_async_test(test_case):
"""运行异步测试用例"""
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
return loop.run_until_complete(test_case)
finally:
loop.close()
# 获取所有以test_开头的异步方法并运行它们
suite = unittest.TestSuite()
test_instance = TestCloudflareMemoryBank()
test_instance.setUp()
test_instance.addCleanup(test_instance.tearDown)
# 添加同步测试
suite.addTest(TestCloudflareMemoryBank('test_init'))
suite.addTest(TestCloudflareMemoryBank('test_summarize_conversation'))
suite.addTest(TestCloudflareMemoryBank('test_extract_strategy_insight_winner'))
suite.addTest(TestCloudflareMemoryBank('test_extract_strategy_insight_from_insights'))
# 添加异步测试
async_tests = [
'test_create_memory_bank',
'test_create_memory_bank_with_display_name',
'test_generate_embedding',
'test_generate_embedding_api_error',
'test_add_memory',
'test_add_memory_api_error',
'test_search_memories',
'test_search_memories_api_error',
'test_get_agent_context',
'test_get_agent_context_no_memories',
'test_save_debate_session'
]
for test_name in async_tests:
test_method = getattr(test_instance, test_name)
suite.addTest(unittest.FunctionTestCase(lambda tm=test_method: run_async_test(tm())))
# 运行测试
runner = unittest.TextTestRunner(verbosity=2)
runner.run(suite)

View File

@ -0,0 +1,288 @@
{
"chat_rooms": {
"main_debate": {
"id": "main_debate",
"chat_type": "主辩论群",
"name": "主辩论群",
"description": "公开辩论的主要场所",
"participants": [
"正1",
"正2",
"正3",
"正4",
"反1",
"反2",
"反3",
"反4"
],
"moderators": [
"系统"
],
"is_active": true,
"created_at": "2025-08-16T10:57:01.223754",
"last_activity": "2025-08-16T10:57:01.223769",
"settings": {
"max_message_length": 500,
"speaking_time_limit": 120,
"auto_moderation": true
},
"message_count": 2
},
"positive_internal": {
"id": "positive_internal",
"chat_type": "内部讨论群",
"name": "正方内部讨论群",
"description": "正方团队内部策略讨论",
"participants": [
"正1",
"正2",
"正3",
"正4"
],
"moderators": [
"正1"
],
"is_active": true,
"created_at": "2025-08-16T10:57:01.223755",
"last_activity": "2025-08-16T10:57:01.223772",
"settings": {
"privacy_level": "high",
"auto_archive": true
},
"message_count": 1
},
"negative_internal": {
"id": "negative_internal",
"chat_type": "内部讨论群",
"name": "反方内部讨论群",
"description": "反方团队内部策略讨论",
"participants": [
"反1",
"反2",
"反3",
"反4"
],
"moderators": [
"反1"
],
"is_active": true,
"created_at": "2025-08-16T10:57:01.223756",
"last_activity": "2025-08-16T10:57:01.223782",
"settings": {
"privacy_level": "high",
"auto_archive": true
},
"message_count": 1
},
"strategy_meeting": {
"id": "strategy_meeting",
"chat_type": "策略会议群",
"name": "策略会议群",
"description": "高级策略制定和决策",
"participants": [
"正1",
"反1",
"系统"
],
"moderators": [
"系统"
],
"is_active": true,
"created_at": "2025-08-16T10:57:01.223756",
"last_activity": "2025-08-16T10:57:01.223794",
"settings": {
"meeting_mode": true,
"record_decisions": true
},
"message_count": 1
},
"human_intervention": {
"id": "human_intervention",
"chat_type": "Human干预群",
"name": "Human干预群",
"description": "人工干预和监督",
"participants": [
"Human",
"系统"
],
"moderators": [
"Human"
],
"is_active": true,
"created_at": "2025-08-16T10:57:01.223757",
"last_activity": "2025-08-16T10:57:01.223757",
"settings": {
"alert_threshold": "high",
"auto_escalation": true
},
"message_count": 0
},
"observation": {
"id": "observation",
"chat_type": "观察群",
"name": "观察群",
"description": "观察和记录所有活动",
"participants": [
"观察者",
"记录员"
],
"moderators": [
"系统"
],
"is_active": true,
"created_at": "2025-08-16T10:57:01.223758",
"last_activity": "2025-08-16T10:57:01.223790",
"settings": {
"read_only": true,
"full_logging": true
},
"message_count": 2
}
},
"coordination_rules": {
"escalate_urgent_to_human": {
"id": "escalate_urgent_to_human",
"name": "紧急情况升级到Human",
"description": "当检测到紧急情况时自动升级到Human干预群",
"source_chat_types": [
"主辩论群",
"内部讨论群"
],
"target_chat_types": [
"Human干预群"
],
"trigger_conditions": {
"priority": 4,
"keywords": [
"紧急",
"错误",
"异常",
"停止"
]
},
"action": "升级",
"priority": 1,
"is_active": true,
"created_at": "2025-08-16T10:57:01.223760"
},
"strategy_to_internal": {
"id": "strategy_to_internal",
"name": "策略决策分发到内部群",
"description": "将策略会议的决策分发到相关内部讨论群",
"source_chat_types": [
"策略会议群"
],
"target_chat_types": [
"内部讨论群"
],
"trigger_conditions": {
"tags": [
"决策",
"策略",
"指令"
]
},
"action": "广播",
"priority": 2,
"is_active": true,
"created_at": "2025-08-16T10:57:01.223760"
},
"filter_noise": {
"id": "filter_noise",
"name": "过滤噪音消息",
"description": "过滤低质量或无关的消息",
"source_chat_types": [
"主辩论群"
],
"target_chat_types": [],
"trigger_conditions": {
"priority": 1,
"content_length": {
"max": 10
}
},
"action": "过滤",
"priority": 3,
"is_active": true,
"created_at": "2025-08-16T10:57:01.223761"
},
"archive_old_discussions": {
"id": "archive_old_discussions",
"name": "归档旧讨论",
"description": "自动归档超过时间限制的讨论",
"source_chat_types": [
"内部讨论群"
],
"target_chat_types": [
"观察群"
],
"trigger_conditions": {
"age_hours": 24,
"inactivity_hours": 2
},
"action": "归档",
"priority": 4,
"is_active": true,
"created_at": "2025-08-16T10:57:01.223762"
}
},
"status": {
"total_rooms": 6,
"active_rooms": 6,
"total_messages": 7,
"pending_messages": 7,
"coordination_rules": 4,
"active_rules": 4,
"rooms": {
"main_debate": {
"name": "主辩论群",
"type": "主辩论群",
"participants": 8,
"messages": 2,
"last_activity": "2025-08-16T10:57:01.223769",
"is_active": true
},
"positive_internal": {
"name": "正方内部讨论群",
"type": "内部讨论群",
"participants": 4,
"messages": 1,
"last_activity": "2025-08-16T10:57:01.223772",
"is_active": true
},
"negative_internal": {
"name": "反方内部讨论群",
"type": "内部讨论群",
"participants": 4,
"messages": 1,
"last_activity": "2025-08-16T10:57:01.223782",
"is_active": true
},
"strategy_meeting": {
"name": "策略会议群",
"type": "策略会议群",
"participants": 3,
"messages": 1,
"last_activity": "2025-08-16T10:57:01.223794",
"is_active": true
},
"human_intervention": {
"name": "Human干预群",
"type": "Human干预群",
"participants": 2,
"messages": 0,
"last_activity": "2025-08-16T10:57:01.223757",
"is_active": true
},
"observation": {
"name": "观察群",
"type": "观察群",
"participants": 2,
"messages": 2,
"last_activity": "2025-08-16T10:57:01.223790",
"is_active": true
}
}
},
"export_time": "2025-08-16T10:57:01.223897"
}

117
tests/test_custom_api.py Normal file
View File

@ -0,0 +1,117 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
测试LiteLLM自定义API端点
"""
import requests
import json
import os
def test_litellm_api():
"""测试LiteLLM API端点"""
api_url = "http://master.tailnet-68f9.ts.net:40012"
print(f"🔍 测试LiteLLM API端点: {api_url}")
# 获取用户的API密钥
gemini_key = os.getenv('GEMINI_API_KEY', '')
# 尝试不同的API密钥格式
test_keys = [
f"sk-{gemini_key}", # 添加sk-前缀
gemini_key, # 原始密钥
"sk-test", # 测试密钥
"test-key", # 简单测试
]
for api_key in test_keys:
if not api_key or api_key == "sk-":
continue
print(f"\n🔑 测试API密钥: {api_key[:10]}...")
# 测试模型列表
try:
headers = {"x-litellm-api-key": api_key}
response = requests.get(f"{api_url}/v1/models", headers=headers, timeout=10)
print(f"模型列表状态码: {response.status_code}")
if response.status_code == 200:
models = response.json()
print(f"✅ 找到 {len(models.get('data', []))} 个可用模型")
for model in models.get('data', [])[:3]: # 显示前3个模型
print(f" - {model.get('id', 'unknown')}")
# 测试聊天完成
test_payload = {
"model": "gemini-2.5-flash",
"messages": [
{"role": "user", "content": "Hello, this is a test message. Please respond briefly."}
],
"max_tokens": 50
}
chat_response = requests.post(
f"{api_url}/v1/chat/completions",
json=test_payload,
headers=headers,
timeout=30
)
print(f"聊天完成状态码: {chat_response.status_code}")
if chat_response.status_code == 200:
result = chat_response.json()
content = result.get('choices', [{}])[0].get('message', {}).get('content', '')
print(f"✅ API测试成功响应: {content[:100]}...")
print(f"\n🎉 可用的API配置:")
print(f" - 端点: {api_url}/v1/chat/completions")
print(f" - 头部: x-litellm-api-key: {api_key}")
print(f" - 模型: gemini-2.5-flash")
return True
else:
print(f"❌ 聊天测试失败: {chat_response.text[:200]}...")
elif response.status_code == 401:
print(f"❌ 认证失败: {response.text[:100]}...")
else:
print(f"❌ 请求失败: {response.text[:100]}...")
except requests.exceptions.RequestException as e:
print(f"❌ 连接失败: {e}")
return False
def test_environment_setup():
"""测试环境变量设置"""
print("\n🔧 当前环境变量:")
gemini_key = os.getenv('GEMINI_API_KEY', '')
google_key = os.getenv('GOOGLE_API_KEY', '')
print(f"GEMINI_API_KEY: {'已设置' if gemini_key else '未设置'} ({gemini_key[:10]}... 如果已设置)")
print(f"GOOGLE_API_KEY: {'已设置' if google_key else '未设置'} ({google_key[:10]}... 如果已设置)")
return gemini_key
if __name__ == "__main__":
print("🚀 开始测试LiteLLM自定义API端点...")
# 检查环境
api_key = test_environment_setup()
if not api_key:
print("\n⚠️ 警告: 未找到GEMINI_API_KEY环境变量")
# 测试API
success = test_litellm_api()
if success:
print("\n✅ LiteLLM API端点测试成功")
print("\n💡 建议: 可以使用这个端点替代Google官方API")
else:
print("\n❌ LiteLLM API端点测试失败")
print("\n🔍 可能的解决方案:")
print(" 1. 检查LiteLLM服务器配置")
print(" 2. 确认API密钥格式")
print(" 3. 检查网络连接")

View File

@ -0,0 +1,159 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
增强版优先级算法测试脚本
测试新的优先级算法在起承转合辩论系统中的表现
"""
import sys
import os
sys.path.append(os.path.join(os.path.dirname(__file__), 'src'))
from jixia.debates.qi_cheng_zhuan_he_debate import QiChengZhuanHeDebateSystem, DebateStage
from jixia.debates.enhanced_priority_algorithm import EnhancedPriorityAlgorithm
def test_enhanced_priority_algorithm():
"""测试增强版优先级算法"""
print("🧪 开始测试增强版优先级算法")
print("=" * 50)
# 创建辩论系统
debate_system = QiChengZhuanHeDebateSystem()
# 模拟辩论场景
test_scenarios = [
{
"name": "起阶段开始",
"stage": DebateStage.QI,
"progress": 1,
"history": []
},
{
"name": "承阶段中期",
"stage": DebateStage.CHENG,
"progress": 3,
"history": [
{"speaker": "正1", "content": "AI投资具有巨大潜力", "timestamp": "2024-01-01T10:00:00"},
{"speaker": "反1", "content": "但风险也很高", "timestamp": "2024-01-01T10:01:00"}
]
},
{
"name": "转阶段激烈辩论",
"stage": DebateStage.ZHUAN,
"progress": 5,
"history": [
{"speaker": "正2", "content": "数据显示AI投资回报率很高", "timestamp": "2024-01-01T10:02:00"},
{"speaker": "反2", "content": "这些数据可能有偏差", "timestamp": "2024-01-01T10:03:00"},
{"speaker": "正3", "content": "我们有严格的风控措施", "timestamp": "2024-01-01T10:04:00"},
{"speaker": "反3", "content": "风控措施并不能完全避免风险", "timestamp": "2024-01-01T10:05:00"}
]
},
{
"name": "合阶段总结",
"stage": DebateStage.HE,
"progress": 2,
"history": [
{"speaker": "正4", "content": "综合来看AI投资利大于弊", "timestamp": "2024-01-01T10:06:00"},
{"speaker": "反4", "content": "我们需要更谨慎的态度", "timestamp": "2024-01-01T10:07:00"}
]
}
]
for i, scenario in enumerate(test_scenarios, 1):
print(f"\n📋 测试场景 {i}: {scenario['name']}")
print("-" * 30)
# 设置辩论状态
debate_system.context.current_stage = scenario['stage']
debate_system.context.stage_progress = scenario['progress']
debate_system.context.debate_history = scenario['history']
# 获取推荐发言者
try:
recommended_speaker = debate_system._get_priority_speaker()
analysis = debate_system.context.last_priority_analysis
print(f"🎯 推荐发言者: {recommended_speaker}")
print(f"📊 优先级分数: {analysis.get('priority_score', 'N/A'):.3f}")
if 'analysis' in analysis:
detailed_analysis = analysis['analysis']
print(f"🔍 详细分析:")
# 显示推荐发言者的详细信息
if recommended_speaker in detailed_analysis:
speaker_info = detailed_analysis[recommended_speaker]
print(f" - 发言者: {recommended_speaker}")
print(f" - 优先级分数: {speaker_info.get('priority_score', 'N/A'):.3f}")
print(f" - 分析时间: {speaker_info.get('analysis_timestamp', 'N/A')}")
profile = speaker_info.get('profile')
if profile:
print(f" - 团队: {profile.team}")
print(f" - 发言次数: {profile.total_speech_count}")
print(f" - 当前能量: {profile.current_energy:.2f}")
print(f" - 辩论风格: {profile.debate_style}")
# 显示所有发言者的分数排名
print(f"\n 📊 所有发言者排名:")
sorted_speakers = sorted(detailed_analysis.items(),
key=lambda x: x[1].get('priority_score', 0),
reverse=True)
for rank, (speaker, info) in enumerate(sorted_speakers[:5], 1):
score = info.get('priority_score', 0)
print(f" {rank}. {speaker}: {score:.3f}")
except Exception as e:
print(f"❌ 测试失败: {e}")
import traceback
traceback.print_exc()
print("\n" + "=" * 50)
print("✅ 增强版优先级算法测试完成")
def test_algorithm_performance():
"""测试算法性能"""
print("\n⚡ 性能测试")
print("-" * 20)
import time
algorithm = EnhancedPriorityAlgorithm()
available_speakers = ["正1", "正2", "正3", "正4", "反1", "反2", "反3", "反4"]
context = {
"current_stage": "",
"stage_progress": 3,
"max_progress": 6,
"time_remaining": 0.5,
"topic_keywords": ["AI", "投资", "风险"],
"positive_team_score": 0.6,
"negative_team_score": 0.4,
"positive_recent_speeches": 3,
"negative_recent_speeches": 2
}
recent_speeches = [
{"speaker": "正1", "content": "AI技术发展迅速"},
{"speaker": "反1", "content": "但存在不确定性"}
]
# 性能测试
start_time = time.time()
iterations = 100
for _ in range(iterations):
speaker, score, analysis = algorithm.get_next_speaker(
available_speakers, context, recent_speeches
)
end_time = time.time()
avg_time = (end_time - start_time) / iterations * 1000 # 转换为毫秒
print(f"📈 平均处理时间: {avg_time:.2f}ms")
print(f"🔄 总迭代次数: {iterations}")
print(f"⚡ 处理速度: {1000/avg_time:.0f} 次/秒")
if __name__ == "__main__":
test_enhanced_priority_algorithm()
test_algorithm_performance()

View File

@ -0,0 +1,82 @@
#!/usr/bin/env python3
"""
测试 Gemini 2.5 Flash 模型
"""
import os
import requests
import json
def test_gemini_2_5_flash():
"""测试 Gemini 2.5 Flash 模型"""
# 获取环境变量
base_url = os.getenv('GOOGLE_BASE_URL')
api_key = os.getenv('GEMINI_API_KEY')
if not base_url or not api_key:
print("❌ 环境变量未设置")
print(f"GOOGLE_BASE_URL: {base_url}")
print(f"GEMINI_API_KEY: {api_key}")
return False
print("✅ 环境变量已设置")
print(f"Base URL: {base_url}")
print(f"API Key: {api_key[:10]}...{api_key[-4:]}")
# 构建请求URL
model_name = "gemini-2.5-flash"
url = f"{base_url}/v1beta/models/{model_name}:generateContent"
# 请求头
headers = {
"Content-Type": "application/json",
"x-goog-api-key": api_key
}
# 请求体
payload = {
"contents": [{
"parts": [{
"text": "你好,请简单介绍一下你自己。"
}]
}]
}
try:
print(f"\n🚀 测试 {model_name} 模型...")
print(f"请求URL: {url}")
response = requests.post(url, headers=headers, json=payload, timeout=30)
print(f"响应状态码: {response.status_code}")
if response.status_code == 200:
result = response.json()
if 'candidates' in result and len(result['candidates']) > 0:
content = result['candidates'][0]['content']['parts'][0]['text']
print(f"{model_name} 响应成功:")
print(f"📝 回复: {content[:200]}...")
return True
else:
print(f"❌ 响应格式异常: {result}")
return False
else:
print(f"❌ 请求失败: {response.status_code}")
print(f"错误信息: {response.text}")
return False
except Exception as e:
print(f"❌ 请求异常: {str(e)}")
return False
if __name__ == "__main__":
print("🧪 Gemini 2.5 Flash 模型测试")
print("=" * 50)
success = test_gemini_2_5_flash()
if success:
print("\n🎉 测试成功Gemini 2.5 Flash 模型工作正常")
else:
print("\n💥 测试失败!请检查配置")

View File

@ -0,0 +1,45 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
直接测试Google Gemini API连接
"""
import os
import google.generativeai as genai
def test_gemini_direct():
"""直接测试Gemini API"""
print("🔍 测试Gemini API直连...")
# 检查API密钥
api_key = os.getenv('GOOGLE_API_KEY')
if not api_key:
print("❌ 未找到 GOOGLE_API_KEY")
return False
print(f"✅ API密钥已配置 (长度: {len(api_key)})")
try:
# 配置API
genai.configure(api_key=api_key)
# 创建模型
print("📝 创建Gemini模型...")
model = genai.GenerativeModel('gemini-2.0-flash-exp')
# 发送测试消息
print("💬 发送测试消息...")
response = model.generate_content("请简单说'你好我是Gemini'")
print(f"✅ 测试成功!回复: {response.text}")
return True
except Exception as e:
print(f"❌ 测试失败: {e}")
import traceback
traceback.print_exc()
return False
if __name__ == "__main__":
print("🚀 Gemini直连测试")
test_gemini_direct()

View File

@ -0,0 +1,505 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Human干预系统测试脚本
"""
import asyncio
import json
import time
from datetime import datetime, timedelta
from src.jixia.intervention.human_intervention_system import (
DebateHealthMonitor, HealthStatus, InterventionLevel, AlertType
)
class TestHumanInterventionSystem:
"""Human干预系统测试类"""
def __init__(self):
self.monitor = DebateHealthMonitor()
self.test_results = []
# 添加事件处理器用于测试
self.monitor.add_event_handler("alert_created", self._handle_alert_created)
self.monitor.add_event_handler("intervention_executed", self._handle_intervention_executed)
self.monitor.add_event_handler("human_notification", self._handle_human_notification)
self.received_alerts = []
self.received_interventions = []
self.received_notifications = []
async def _handle_alert_created(self, alert):
"""处理警报创建事件"""
self.received_alerts.append(alert)
print(f"🚨 收到警报: {alert.alert_type.value} - {alert.message}")
async def _handle_intervention_executed(self, action):
"""处理干预执行事件"""
self.received_interventions.append(action)
print(f"🛠️ 执行干预: {action.action_type} - {action.description}")
async def _handle_human_notification(self, notification):
"""处理Human通知事件"""
self.received_notifications.append(notification)
print(f"👤 Human通知: {notification['message']}")
async def test_basic_health_monitoring(self):
"""测试基本健康监控功能"""
print("\n🧪 测试基本健康监控功能...")
# 正常辩论数据
normal_debate_data = {
"recent_messages": [
{"sender": "正1", "content": "我认为人工智能投资具有巨大潜力因为技术发展迅速市场需求不断增长。首先AI技术在各行各业都有广泛应用前景。"},
{"sender": "反1", "content": "虽然AI投资有潜力但我们也要考虑风险。技术泡沫、监管不确定性等因素都可能影响投资回报。"},
{"sender": "正2", "content": "反方提到的风险确实存在,但是通过合理的投资策略和风险管理,我们可以最大化收益同时控制风险。"},
{"sender": "反2", "content": "正方的观点有道理,不过我想补充一点:投资时机也很重要,现在可能不是最佳入场时机。"}
],
"topic_keywords": ["人工智能", "AI", "投资", "风险", "收益", "技术", "市场"],
"system_status": {
"error_rate": 0.01,
"avg_response_time": 1.2,
"system_load": 0.5
}
}
score, status = await self.monitor.analyze_debate_health(normal_debate_data)
success = score >= 70 and status in [HealthStatus.EXCELLENT, HealthStatus.GOOD]
self.test_results.append(("基本健康监控", success, f"得分: {score:.1f}, 状态: {status.value}"))
print(f"✅ 正常辩论健康度: {score:.1f}分 ({status.value})")
return success
async def test_quality_decline_detection(self):
"""测试质量下降检测"""
print("\n🧪 测试质量下降检测...")
# 低质量辩论数据
low_quality_data = {
"recent_messages": [
{"sender": "正1", "content": ""},
{"sender": "反1", "content": "不好"},
{"sender": "正2", "content": "是的"},
{"sender": "反2", "content": "不是"},
{"sender": "正1", "content": ""},
{"sender": "反1", "content": ""},
],
"topic_keywords": ["人工智能", "AI", "投资"],
"system_status": {
"error_rate": 0.01,
"avg_response_time": 1.0,
"system_load": 0.4
}
}
initial_alert_count = len(self.received_alerts)
score, status = await self.monitor.analyze_debate_health(low_quality_data)
# 检查是否触发了质量相关警报
quality_alerts = [alert for alert in self.received_alerts[initial_alert_count:]
if alert.alert_type == AlertType.QUALITY_DECLINE]
success = len(quality_alerts) > 0 and score < 50
self.test_results.append(("质量下降检测", success, f"得分: {score:.1f}, 警报数: {len(quality_alerts)}"))
print(f"✅ 低质量辩论检测: {score:.1f}分, 触发警报: {len(quality_alerts)}")
return success
async def test_toxic_behavior_detection(self):
"""测试有害行为检测"""
print("\n🧪 测试有害行为检测...")
# 包含有害行为的数据
toxic_data = {
"recent_messages": [
{"sender": "正1", "content": "我认为这个观点是正确的,有充分的理由支持。"},
{"sender": "反1", "content": "你这个观点太愚蠢了!完全没有逻辑!"},
{"sender": "正2", "content": "请保持理性讨论,不要进行人身攻击。"},
{"sender": "反2", "content": "闭嘴!你们这些白痴根本不懂!"},
{"sender": "正1", "content": "让我们回到正题,理性分析这个问题。"}
],
"topic_keywords": ["观点", "逻辑", "分析"],
"system_status": {
"error_rate": 0.02,
"avg_response_time": 1.5,
"system_load": 0.6
}
}
initial_alert_count = len(self.received_alerts)
score, status = await self.monitor.analyze_debate_health(toxic_data)
# 检查是否触发了有害行为警报
toxic_alerts = [alert for alert in self.received_alerts[initial_alert_count:]
if alert.alert_type == AlertType.TOXIC_BEHAVIOR]
success = len(toxic_alerts) > 0
self.test_results.append(("有害行为检测", success, f"警报数: {len(toxic_alerts)}, 文明度分数: {self.monitor.health_metrics['interaction_civility'].value:.1f}"))
print(f"✅ 有害行为检测: 触发警报: {len(toxic_alerts)}")
return success
async def test_emotional_escalation_detection(self):
"""测试情绪升级检测"""
print("\n🧪 测试情绪升级检测...")
# 情绪激动的数据
emotional_data = {
"recent_messages": [
{"sender": "正1", "content": "我强烈反对这个观点!!!"},
{"sender": "反1", "content": "你们完全错了!!!这太愤怒了!!!"},
{"sender": "正2", "content": "我非常生气!!!这个讨论让我很讨厌!!!"},
{"sender": "反2", "content": "大家都冷静一下!!!不要这么激动!!!"}
],
"topic_keywords": ["观点", "讨论"],
"system_status": {
"error_rate": 0.01,
"avg_response_time": 1.0,
"system_load": 0.5
}
}
initial_alert_count = len(self.received_alerts)
score, status = await self.monitor.analyze_debate_health(emotional_data)
# 检查是否触发了情绪升级警报
emotion_alerts = [alert for alert in self.received_alerts[initial_alert_count:]
if alert.alert_type == AlertType.EMOTIONAL_ESCALATION]
success = len(emotion_alerts) > 0
self.test_results.append(("情绪升级检测", success, f"警报数: {len(emotion_alerts)}, 情绪稳定性: {self.monitor.health_metrics['emotional_stability'].value:.1f}"))
print(f"✅ 情绪升级检测: 触发警报: {len(emotion_alerts)}")
return success
async def test_participation_imbalance_detection(self):
"""测试参与不平衡检测"""
print("\n🧪 测试参与不平衡检测...")
# 参与不平衡的数据
imbalanced_data = {
"recent_messages": [
{"sender": "正1", "content": "我有很多观点要分享..."},
{"sender": "正1", "content": "首先,我认为..."},
{"sender": "正1", "content": "其次,我们应该..."},
{"sender": "正1", "content": "最后,我建议..."},
{"sender": "正1", "content": "总结一下..."},
{"sender": "正1", "content": "补充一点..."},
{"sender": "正1", "content": "再说一遍..."},
{"sender": "反1", "content": "好的"}
],
"topic_keywords": ["观点", "建议"],
"system_status": {
"error_rate": 0.01,
"avg_response_time": 1.0,
"system_load": 0.5
}
}
initial_alert_count = len(self.received_alerts)
score, status = await self.monitor.analyze_debate_health(imbalanced_data)
# 检查是否触发了参与不平衡警报
balance_alerts = [alert for alert in self.received_alerts[initial_alert_count:]
if alert.alert_type == AlertType.PARTICIPATION_IMBALANCE]
success = len(balance_alerts) > 0
self.test_results.append(("参与不平衡检测", success, f"警报数: {len(balance_alerts)}, 平衡度: {self.monitor.health_metrics['participation_balance'].value:.1f}"))
print(f"✅ 参与不平衡检测: 触发警报: {len(balance_alerts)}")
return success
async def test_auto_intervention(self):
"""测试自动干预功能"""
print("\n🧪 测试自动干预功能...")
# 触发多种问题的数据
problematic_data = {
"recent_messages": [
{"sender": "正1", "content": "你们都是白痴!!!"},
{"sender": "反1", "content": "愚蠢!!!"},
{"sender": "正2", "content": "垃圾观点!!!"},
{"sender": "反2", "content": "讨厌!!!"}
],
"topic_keywords": ["观点"],
"system_status": {
"error_rate": 0.05,
"avg_response_time": 3.0,
"system_load": 0.9
}
}
initial_intervention_count = len(self.received_interventions)
score, status = await self.monitor.analyze_debate_health(problematic_data)
# 检查是否执行了自动干预
new_interventions = self.received_interventions[initial_intervention_count:]
success = len(new_interventions) > 0
self.test_results.append(("自动干预功能", success, f"执行干预: {len(new_interventions)}"))
print(f"✅ 自动干预: 执行了 {len(new_interventions)} 次干预")
for intervention in new_interventions:
print(f" - {intervention.action_type}: {intervention.description}")
return success
async def test_human_notification(self):
"""测试Human通知功能"""
print("\n🧪 测试Human通知功能...")
# 设置较低的通知阈值以便测试
original_threshold = self.monitor.monitoring_config["human_notification_threshold"]
self.monitor.monitoring_config["human_notification_threshold"] = InterventionLevel.MODERATE_GUIDANCE
# 严重问题数据
critical_data = {
"recent_messages": [
{"sender": "正1", "content": ""},
{"sender": "反1", "content": ""},
{"sender": "正2", "content": ""},
{"sender": "反2", "content": ""}
],
"topic_keywords": ["重要话题"],
"system_status": {
"error_rate": 0.1,
"avg_response_time": 5.0,
"system_load": 0.95
}
}
initial_notification_count = len(self.received_notifications)
score, status = await self.monitor.analyze_debate_health(critical_data)
# 恢复原始阈值
self.monitor.monitoring_config["human_notification_threshold"] = original_threshold
# 检查是否发送了Human通知
new_notifications = self.received_notifications[initial_notification_count:]
success = len(new_notifications) > 0
self.test_results.append(("Human通知功能", success, f"发送通知: {len(new_notifications)}"))
print(f"✅ Human通知: 发送了 {len(new_notifications)} 次通知")
return success
async def test_health_report_generation(self):
"""测试健康报告生成"""
print("\n🧪 测试健康报告生成...")
report = self.monitor.get_health_report()
required_fields = ["overall_score", "health_status", "metrics", "active_alerts",
"recent_interventions", "monitoring_enabled", "last_check"]
success = all(field in report for field in required_fields)
success = success and len(report["metrics"]) == 6 # 6个健康指标
self.test_results.append(("健康报告生成", success, f"包含字段: {len(report)}"))
print(f"✅ 健康报告生成: 包含 {len(report)} 个字段")
print(f" 整体得分: {report['overall_score']}")
print(f" 健康状态: {report['health_status']}")
print(f" 活跃警报: {report['active_alerts']}")
return success
async def test_alert_resolution(self):
"""测试警报解决功能"""
print("\n🧪 测试警报解决功能...")
# 确保有一些警报
if not self.monitor.active_alerts:
# 创建一个测试警报
from src.jixia.intervention.human_intervention_system import InterventionAlert
test_alert = InterventionAlert(
id="test_alert_123",
alert_type=AlertType.QUALITY_DECLINE,
severity=InterventionLevel.GENTLE_REMINDER,
message="测试警报",
affected_participants=[],
metrics={"test": 50},
timestamp=datetime.now()
)
self.monitor.active_alerts.append(test_alert)
# 解决第一个警报
if self.monitor.active_alerts:
alert_id = self.monitor.active_alerts[0].id
success = self.monitor.resolve_alert(alert_id, "测试解决")
# 清理已解决的警报
initial_count = len(self.monitor.active_alerts)
self.monitor.clear_resolved_alerts()
final_count = len(self.monitor.active_alerts)
success = success and (final_count < initial_count)
else:
success = True # 没有警报也算成功
self.test_results.append(("警报解决功能", success, f"解决并清理警报"))
print(f"✅ 警报解决: 功能正常")
return success
async def test_monitoring_control(self):
"""测试监控控制功能"""
print("\n🧪 测试监控控制功能...")
# 测试禁用监控
self.monitor.disable_monitoring()
disabled_state = not self.monitor.monitoring_enabled
# 测试启用监控
self.monitor.enable_monitoring()
enabled_state = self.monitor.monitoring_enabled
success = disabled_state and enabled_state
self.test_results.append(("监控控制功能", success, "启用/禁用功能正常"))
print(f"✅ 监控控制: 启用/禁用功能正常")
return success
async def test_data_persistence(self):
"""测试数据持久化"""
print("\n🧪 测试数据持久化...")
try:
# 保存监控数据
test_filename = "test_monitoring_data.json"
self.monitor.save_monitoring_data(test_filename)
# 检查文件是否存在并包含正确数据
import os
if os.path.exists(test_filename):
with open(test_filename, 'r', encoding='utf-8') as f:
data = json.load(f)
required_sections = ["health_metrics", "active_alerts", "intervention_history",
"monitoring_config", "monitoring_enabled", "export_time"]
success = all(section in data for section in required_sections)
# 清理测试文件
os.remove(test_filename)
else:
success = False
except Exception as e:
print(f" 数据持久化错误: {e}")
success = False
self.test_results.append(("数据持久化", success, "保存/加载功能正常"))
print(f"✅ 数据持久化: 功能正常")
return success
async def test_performance(self):
"""测试性能"""
print("\n🧪 测试性能...")
# 准备测试数据
test_data = {
"recent_messages": [
{"sender": f"用户{i%4}", "content": f"这是第{i}条测试消息,包含一些内容用于分析。"}
for i in range(20)
],
"topic_keywords": ["测试", "性能", "分析", "消息"],
"system_status": {
"error_rate": 0.01,
"avg_response_time": 1.0,
"system_load": 0.5
}
}
# 性能测试
iterations = 100
start_time = time.time()
for _ in range(iterations):
await self.monitor.analyze_debate_health(test_data)
end_time = time.time()
total_time = end_time - start_time
avg_time = total_time / iterations
analyses_per_second = iterations / total_time
# 性能要求:平均处理时间 < 100ms
success = avg_time < 0.1
self.test_results.append(("性能测试", success, f"平均处理时间: {avg_time*1000:.2f}ms, 处理速度: {analyses_per_second:.1f}次/秒"))
print(f"✅ 性能测试: 平均处理时间 {avg_time*1000:.2f}ms, 处理速度 {analyses_per_second:.1f}次/秒")
return success
async def run_all_tests(self):
"""运行所有测试"""
print("🚀 开始Human干预系统测试...")
print("=" * 60)
test_functions = [
self.test_basic_health_monitoring,
self.test_quality_decline_detection,
self.test_toxic_behavior_detection,
self.test_emotional_escalation_detection,
self.test_participation_imbalance_detection,
self.test_auto_intervention,
self.test_human_notification,
self.test_health_report_generation,
self.test_alert_resolution,
self.test_monitoring_control,
self.test_data_persistence,
self.test_performance
]
passed_tests = 0
total_tests = len(test_functions)
for test_func in test_functions:
try:
result = await test_func()
if result:
passed_tests += 1
except Exception as e:
print(f"❌ 测试失败: {test_func.__name__} - {e}")
self.test_results.append((test_func.__name__, False, f"异常: {e}"))
# 输出测试结果
print("\n" + "=" * 60)
print("📊 测试结果汇总:")
print("=" * 60)
for test_name, success, details in self.test_results:
status = "✅ 通过" if success else "❌ 失败"
print(f"{status} {test_name}: {details}")
success_rate = (passed_tests / total_tests) * 100
print(f"\n🎯 总体测试结果: {passed_tests}/{total_tests} 通过 ({success_rate:.1f}%)")
if success_rate >= 90:
print("🎉 Human干预系统测试优秀")
elif success_rate >= 80:
print("👍 Human干预系统测试良好")
elif success_rate >= 70:
print("⚠️ Human干预系统测试一般需要改进。")
else:
print("❌ Human干预系统测试较差需要重大改进。")
# 输出系统状态
print("\n📋 系统状态报告:")
report = self.monitor.get_health_report()
print(f"监控状态: {'启用' if report['monitoring_enabled'] else '禁用'}")
print(f"活跃警报: {report['active_alerts']}")
print(f"近期干预: {report['recent_interventions']}")
print(f"收到警报: {len(self.received_alerts)}")
print(f"执行干预: {len(self.received_interventions)}")
print(f"Human通知: {len(self.received_notifications)}")
return success_rate >= 80
async def main():
"""主函数"""
tester = TestHumanInterventionSystem()
await tester.run_all_tests()
if __name__ == "__main__":
asyncio.run(main())

View File

@ -0,0 +1,63 @@
#!/usr/bin/env python3
"""
Memory Bank 模块测试
"""
import unittest
import os
import sys
from unittest.mock import patch, MagicMock
# 添加项目根目录到Python路径
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from src.jixia.memory.factory import get_memory_backend
from src.jixia.memory.base_memory_bank import MemoryBankProtocol
class TestMemoryBankFactory(unittest.TestCase):
"""测试记忆银行工厂函数"""
@patch('src.jixia.memory.factory.VertexMemoryBank')
def test_get_memory_backend_always_returns_vertex(self, mock_vertex):
"""测试 get_memory_backend 总是返回 Vertex AI 后端"""
mock_instance = MagicMock()
mock_vertex.from_config.return_value = mock_instance
# 不设置任何环境变量
memory_bank = get_memory_backend()
self.assertEqual(memory_bank, mock_instance)
mock_vertex.from_config.assert_called_once()
@patch('src.jixia.memory.factory.VertexMemoryBank')
def test_get_memory_backend_ignores_prefer_parameter(self, mock_vertex):
"""测试 get_memory_backend 忽略 prefer 参数"""
mock_instance = MagicMock()
mock_vertex.from_config.return_value = mock_instance
# prefer 参数设置为 cloudflare但应被忽略
memory_bank = get_memory_backend(prefer="cloudflare")
self.assertEqual(memory_bank, mock_instance)
mock_vertex.from_config.assert_called_once()
class TestMemoryBankProtocol(unittest.TestCase):
"""测试MemoryBankProtocol协议"""
def test_protocol_methods(self):
"""测试协议定义的方法"""
# 创建一个实现MemoryBankProtocol的简单类用于测试
class TestMemoryBank:
async def create_memory_bank(self, agent_name: str, display_name = None): pass
async def add_memory(self, agent_name: str, content: str, memory_type = "conversation", debate_topic = "", metadata = None): pass
async def search_memories(self, agent_name: str, query: str, memory_type = None, limit = 10): pass
async def get_agent_context(self, agent_name: str, debate_topic: str): pass
async def save_debate_session(self, debate_topic: str, participants, conversation_history, outcomes = None): pass
# 验证TestMemoryBank是否符合MemoryBankProtocol协议
self.assertIsInstance(TestMemoryBank(), MemoryBankProtocol)
if __name__ == '__main__':
# 运行测试
unittest.main()

View File

@ -0,0 +1,384 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
多群聊协调系统测试脚本
"""
import asyncio
import sys
import os
from datetime import datetime
# 添加项目路径
sys.path.append(os.path.join(os.path.dirname(__file__), 'src'))
from jixia.coordination.multi_chat_coordinator import (
MultiChatCoordinator, ChatType, MessagePriority, CoordinationAction
)
async def test_basic_messaging():
"""测试基本消息功能"""
print("\n🧪 测试基本消息功能")
print("=" * 50)
coordinator = MultiChatCoordinator()
# 测试正常消息发送
message1 = await coordinator.send_message(
"main_debate", "正1",
"AI投资是未来科技发展的重要驱动力",
MessagePriority.NORMAL, ["观点", "AI", "投资"]
)
print(f"✅ 消息1发送成功: {message1.id}")
message2 = await coordinator.send_message(
"main_debate", "反1",
"AI投资存在泡沫风险需要谨慎对待",
MessagePriority.NORMAL, ["反驳", "风险", "投资"]
)
print(f"✅ 消息2发送成功: {message2.id}")
# 测试内部讨论
message3 = await coordinator.send_message(
"positive_internal", "正2",
"我们需要收集更多AI成功案例的数据",
MessagePriority.HIGH, ["策略", "数据"]
)
print(f"✅ 内部消息发送成功: {message3.id}")
return coordinator
async def test_escalation_rules():
"""测试升级规则"""
print("\n🚨 测试升级规则")
print("=" * 50)
coordinator = MultiChatCoordinator()
# 发送紧急消息,应该触发升级规则
urgent_message = await coordinator.send_message(
"main_debate", "正3",
"系统检测到异常行为,需要紧急干预",
MessagePriority.URGENT, ["紧急", "系统"]
)
print(f"🚨 紧急消息发送: {urgent_message.id}")
# 等待协调规则处理
await asyncio.sleep(0.1)
# 检查Human干预群是否收到升级消息
human_room = coordinator.chat_rooms["human_intervention"]
escalated_messages = [msg for msg in human_room.message_history
if "升级" in msg.tags]
if escalated_messages:
print(f"✅ 升级规则生效Human干预群收到 {len(escalated_messages)} 条升级消息")
for msg in escalated_messages:
print(f" 📨 {msg.sender}: {msg.content[:100]}...")
else:
print("❌ 升级规则未生效")
return coordinator
async def test_broadcast_rules():
"""测试广播规则"""
print("\n📢 测试广播规则")
print("=" * 50)
coordinator = MultiChatCoordinator()
# 在策略会议群发送决策消息
strategy_message = await coordinator.send_message(
"strategy_meeting", "系统",
"决策:采用数据驱动的论证策略",
MessagePriority.HIGH, ["决策", "策略"]
)
print(f"📋 策略决策发送: {strategy_message.id}")
# 等待协调规则处理
await asyncio.sleep(0.1)
# 检查内部讨论群是否收到广播消息
broadcast_count = 0
for room_id, room in coordinator.chat_rooms.items():
if room.chat_type == ChatType.INTERNAL_DISCUSSION:
broadcast_messages = [msg for msg in room.message_history
if "广播" in msg.tags]
if broadcast_messages:
broadcast_count += len(broadcast_messages)
print(f"{room.name} 收到 {len(broadcast_messages)} 条广播消息")
if broadcast_count > 0:
print(f"✅ 广播规则生效,总共发送 {broadcast_count} 条广播消息")
else:
print("❌ 广播规则未生效")
return coordinator
async def test_filter_rules():
"""测试过滤规则"""
print("\n🔍 测试过滤规则")
print("=" * 50)
coordinator = MultiChatCoordinator()
# 发送低质量消息
low_quality_message = await coordinator.send_message(
"main_debate", "正4",
"好的",
MessagePriority.LOW
)
print(f"📝 低质量消息发送: {low_quality_message.id}")
# 等待协调规则处理
await asyncio.sleep(0.1)
# 检查消息是否被过滤
if low_quality_message.metadata.get("filtered"):
print(f"✅ 过滤规则生效,消息被标记为已过滤")
print(f" 过滤原因: {low_quality_message.metadata.get('filter_reason')}")
else:
print("❌ 过滤规则未生效")
return coordinator
async def test_discussion_merging():
"""测试讨论合并"""
print("\n🔗 测试讨论合并")
print("=" * 50)
coordinator = MultiChatCoordinator()
# 发送相关消息
messages = [
("main_debate", "正1", "AI技术的发展速度令人惊叹", ["AI", "技术"]),
("positive_internal", "正2", "我们应该强调AI技术的创新价值", ["AI", "创新"]),
("negative_internal", "反1", "AI技术也带来了就业问题", ["AI", "就业"]),
]
sent_messages = []
for chat_id, sender, content, tags in messages:
msg = await coordinator.send_message(chat_id, sender, content, tags=tags)
sent_messages.append(msg)
print(f"📨 发送消息: {sender} - {content[:30]}...")
# 发送触发合并的消息
trigger_message = await coordinator.send_message(
"main_debate", "系统",
"需要整合关于AI技术的所有讨论",
tags=["AI", "整合"]
)
print(f"🔗 触发合并消息: {trigger_message.id}")
# 等待协调规则处理
await asyncio.sleep(0.1)
# 检查策略会议群是否收到合并摘要
strategy_room = coordinator.chat_rooms["strategy_meeting"]
merge_messages = [msg for msg in strategy_room.message_history
if "合并" in msg.tags or "摘要" in msg.tags]
if merge_messages:
print(f"✅ 讨论合并生效,策略会议群收到 {len(merge_messages)} 条摘要")
for msg in merge_messages:
print(f" 📋 摘要: {msg.content[:100]}...")
else:
print("❌ 讨论合并未生效")
return coordinator
async def test_permission_system():
"""测试权限系统"""
print("\n🔐 测试权限系统")
print("=" * 50)
coordinator = MultiChatCoordinator()
# 测试正常权限
try:
normal_message = await coordinator.send_message(
"main_debate", "正1", "这是一条正常消息"
)
print(f"✅ 正常权限测试通过: {normal_message.id}")
except Exception as e:
print(f"❌ 正常权限测试失败: {e}")
# 测试无权限用户
try:
unauthorized_message = await coordinator.send_message(
"main_debate", "未授权用户", "这是一条未授权消息"
)
print(f"❌ 权限系统失效,未授权用户发送成功: {unauthorized_message.id}")
except PermissionError as e:
print(f"✅ 权限系统正常,拒绝未授权用户: {e}")
except Exception as e:
print(f"❌ 权限系统异常: {e}")
# 测试内部群权限
try:
internal_message = await coordinator.send_message(
"positive_internal", "正2", "内部策略讨论"
)
print(f"✅ 内部群权限测试通过: {internal_message.id}")
except Exception as e:
print(f"❌ 内部群权限测试失败: {e}")
# 测试跨团队权限
try:
cross_team_message = await coordinator.send_message(
"positive_internal", "反1", "反方试图进入正方内部群"
)
print(f"❌ 跨团队权限控制失效: {cross_team_message.id}")
except PermissionError as e:
print(f"✅ 跨团队权限控制正常: {e}")
except Exception as e:
print(f"❌ 跨团队权限控制异常: {e}")
return coordinator
async def test_system_status():
"""测试系统状态"""
print("\n📊 测试系统状态")
print("=" * 50)
coordinator = MultiChatCoordinator()
# 发送一些测试消息
test_messages = [
("main_debate", "正1", "测试消息1"),
("main_debate", "反1", "测试消息2"),
("positive_internal", "正2", "内部消息1"),
("negative_internal", "反2", "内部消息2"),
("strategy_meeting", "系统", "策略消息1"),
]
for chat_id, sender, content in test_messages:
await coordinator.send_message(chat_id, sender, content)
# 获取系统状态
status = coordinator.get_chat_status()
print(f"📈 系统状态报告:")
print(f" 总群聊数: {status['total_rooms']}")
print(f" 活跃群聊数: {status['active_rooms']}")
print(f" 总消息数: {status['total_messages']}")
print(f" 待处理消息: {status['pending_messages']}")
print(f" 协调规则数: {status['coordination_rules']}")
print(f" 活跃规则数: {status['active_rules']}")
print(f"\n📋 群聊详情:")
for room_id, room_info in status['rooms'].items():
print(f" 🏠 {room_info['name']} ({room_info['type']})")
print(f" 参与者: {room_info['participants']}")
print(f" 消息数: {room_info['messages']}")
print(f" 活跃状态: {'' if room_info['is_active'] else ''}")
# 测试数据保存
try:
coordinator.save_coordination_data("test_coordination_data.json")
print(f"\n💾 数据保存测试通过")
except Exception as e:
print(f"\n❌ 数据保存测试失败: {e}")
return coordinator
async def test_performance():
"""测试性能"""
print("\n⚡ 测试性能")
print("=" * 50)
coordinator = MultiChatCoordinator()
# 批量发送消息测试
start_time = datetime.now()
message_count = 100
print(f"📤 发送 {message_count} 条消息...")
for i in range(message_count):
chat_id = "main_debate" if i % 2 == 0 else "positive_internal"
sender = f"测试用户{i % 4 + 1}"
if chat_id == "positive_internal":
sender = f"{i % 4 + 1}"
content = f"性能测试消息 {i + 1}: 这是一条用于性能测试的消息内容"
try:
await coordinator.send_message(chat_id, sender, content)
except PermissionError:
# 忽略权限错误,继续测试
pass
end_time = datetime.now()
duration = (end_time - start_time).total_seconds()
print(f"⏱️ 性能测试结果:")
print(f" 总耗时: {duration:.3f}")
print(f" 平均每条消息: {duration/message_count*1000:.2f} 毫秒")
print(f" 消息处理速度: {message_count/duration:.1f} 条/秒")
# 获取最终状态
final_status = coordinator.get_chat_status()
print(f" 最终消息总数: {final_status['total_messages']}")
return coordinator
async def run_all_tests():
"""运行所有测试"""
print("🚀 多群聊协调系统测试开始")
print("=" * 60)
tests = [
("基本消息功能", test_basic_messaging),
("升级规则", test_escalation_rules),
("广播规则", test_broadcast_rules),
("过滤规则", test_filter_rules),
("讨论合并", test_discussion_merging),
("权限系统", test_permission_system),
("系统状态", test_system_status),
("性能测试", test_performance),
]
results = []
for test_name, test_func in tests:
try:
print(f"\n🧪 开始测试: {test_name}")
coordinator = await test_func()
results.append((test_name, "✅ 通过", None))
print(f"{test_name} 测试完成")
except Exception as e:
results.append((test_name, "❌ 失败", str(e)))
print(f"{test_name} 测试失败: {e}")
# 测试结果总结
print("\n" + "=" * 60)
print("📊 测试结果总结")
print("=" * 60)
passed = 0
failed = 0
for test_name, status, error in results:
print(f"{status} {test_name}")
if error:
print(f" 错误: {error}")
if "" in status:
passed += 1
else:
failed += 1
print(f"\n📈 测试统计:")
print(f" 通过: {passed}")
print(f" 失败: {failed}")
print(f" 总计: {passed + failed}")
print(f" 成功率: {passed/(passed+failed)*100:.1f}%")
if failed == 0:
print("\n🎉 所有测试通过!多群聊协调系统运行正常。")
else:
print(f"\n⚠️ 有 {failed} 个测试失败,需要检查相关功能。")
if __name__ == "__main__":
asyncio.run(run_all_tests())

View File

@ -0,0 +1,188 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
测试 OpenRouter 免费模型的八仙辩论系统
"""
import asyncio
import aiohttp
import os
# --- 被测试的模型列表 (之前认为可能不太适合的) ---
# 根据你之前的指示和 OpenRouter 网站信息,以下模型被标记为 'free'
# 但我们将测试它们的实际表现,特别是针对辩论任务。
# 注意: 'gpt-oss-20b' 名称可能不准确或已变更,我们使用一个常见的免费开源模型替代
# 'Uncensored' 因安全风险不测试
# 'Sarvam-M' 也进行测试
MODELS_TO_TEST = [
# "openchat/openchat-7b", # An alternative free model if needed for comparison
"google/gemma-2-9b-it", # Google's Gemma 2 9B, free on OpenRouter
"microsoft/phi-3-mini-128k-instruct", # Microsoft's Phi-3 Mini, free on OpenRouter
"qwen/qwen3-coder-8b-instruct", # Qwen3 Coder 8B, free on OpenRouter (good baseline)
"deepseek/deepseek-chat", # DeepSeek Chat, free on OpenRouter (good baseline)
"mistralai/mistral-7b-instruct", # Mistral 7B Instruct, free on OpenRouter (good baseline)
# --- Previously considered less suitable ---
"openai/gpt-3.5-turbo", # Often free tier on OpenRouter
"sophosympatheia/midnight-rose-70b", # An uncensored model, free, but we test it cautiously
"sarvamai/sarvam-2b-m", # Sarvam 2B M, free on OpenRouter
]
class OpenRouterAgent:
"""使用 OpenRouter API 的代理"""
def __init__(self, name: str, personality: str, api_key: str, model: str):
self.name = name
self.personality = personality
self.api_key = api_key
self.model = model
self.api_url = "https://openrouter.ai/api/v1"
async def generate_response(self, prompt: str, session: aiohttp.ClientSession) -> str:
"""生成AI回应"""
try:
headers = {
"Authorization": f"Bearer {self.api_key}",
"HTTP-Referer": "https://github.com/bennyschmidt/liurenchaxin", # Optional, for OpenRouter analytics
"X-Title": "BaXian Debate Test", # Optional, for OpenRouter analytics
"Content-Type": "application/json"
}
payload = {
"model": self.model,
"messages": [
{"role": "system", "content": f"你是{self.name}{self.personality}。请用中文回答。"},
{"role": "user", "content": prompt}
],
# Adjust these for better output in a test scenario
"max_tokens": 500, # Reduced for quicker testing, but sufficient for short replies
"temperature": 0.7 # Slightly lower for more deterministic replies in test
}
async with session.post(
f"{self.api_url}/chat/completions",
headers=headers,
json=payload,
timeout=aiohttp.ClientTimeout(total=30)
) as response:
if response.status == 200:
result = await response.json()
content = result.get('choices', [{}])[0].get('message', {}).get('content', '')
if content:
return content.strip()
else:
error_msg = f"API returned no content for {self.name} using {self.model}. Full response: {result}"
print(f"{error_msg}")
return f"[{self.name} 暂时无法回应]"
else:
error_text = await response.text()
error_msg = f"API error ({response.status}) for {self.name} using {self.model}: {error_text[:200]}..."
print(f"{error_msg}")
return f"[{self.name} API错误: {response.status}]"
except Exception as e:
error_msg = f"Exception for {self.name} using {self.model}: {e}"
print(f"{error_msg}")
return f"[{self.name} 连接错误]"
class SimpleDebateTest:
"""简单的模型辩论测试"""
def __init__(self, api_key: str):
self.api_key = api_key
self.topic = "工作量证明vs无限制爬虫从李时珍采药到AI数据获取的激励机制变革"
# Create a simple agent pair for quick testing
self.agent1 = OpenRouterAgent(
"吕洞宾",
"八仙之首,男性代表,理性务实,善于分析问题的本质和长远影响。你代表男性视角,注重逻辑和实用性。",
api_key, ""
)
self.agent2 = OpenRouterAgent(
"何仙姑",
"八仙中唯一的女性,温柔智慧,善于从情感和人文角度思考问题。你代表女性视角,注重关怀和和谐。",
api_key, ""
)
async def test_model(self, model_name: str) -> dict:
"""测试单个模型"""
print(f"\n--- Testing Model: {model_name} ---")
# Assign model to agents
self.agent1.model = model_name
self.agent2.model = model_name
results = {"model": model_name, "round1": "", "round2": "", "errors": []}
async with aiohttp.ClientSession() as session:
# Round 1: Agent 1 speaks
prompt1 = f"针对'{self.topic}'这个话题请从你的角度阐述观点。要求1)明确表达立场 2)提供具体论据 3)字数控制在150字以内"
print(f"\n🗣️ {self.agent1.name} 发言:")
try:
reply1 = await self.agent1.generate_response(prompt1, session)
print(f"{reply1}\n")
results["round1"] = reply1
except Exception as e:
error_msg = f"Round 1 Error: {e}"
print(f"{error_msg}")
results["errors"].append(error_msg)
return results
# Round 2: Agent 2 responds
prompt2 = f"针对'{self.topic}'这个话题,{self.agent1.name}刚才说:'{reply1}'。请从你的角度回应并阐述不同观点。要求1)回应对方观点 2)提出自己的立场 3)字数控制在150字以内"
print(f"🗣️ {self.agent2.name} 回应:")
try:
reply2 = await self.agent2.generate_response(prompt2, session)
print(f"{reply2}\n")
results["round2"] = reply2
except Exception as e:
error_msg = f"Round 2 Error: {e}"
print(f"{error_msg}")
results["errors"].append(error_msg)
return results
async def main():
"""主函数"""
print("🚀 启动 OpenRouter 免费模型辩论测试...")
# 1. 获取 OpenRouter API 密钥
api_key = os.getenv('OPENROUTER_API_KEY')
if not api_key:
print("❌ 错误: 未找到 OPENROUTER_API_KEY 环境变量")
print("请设置环境变量: export OPENROUTER_API_KEY=your_api_key")
return
tester = SimpleDebateTest(api_key)
all_results = []
# 2. 依次测试每个模型
for model_name in MODELS_TO_TEST:
try:
result = await tester.test_model(model_name)
all_results.append(result)
# Brief pause between models
await asyncio.sleep(2)
except Exception as e:
print(f"❌ 测试模型 {model_name} 时发生未预期错误: {e}")
all_results.append({"model": model_name, "round1": "", "round2": "", "errors": [f"Unexpected test error: {e}"]})
# 3. 输出测试总结
print(f"\n\n--- 📊 测试总结 ---")
for res in all_results:
model = res['model']
errors = res['errors']
r1_ok = "" if res['round1'] and not any("无法回应" in res['round1'] or "错误" in res['round1'] for e in errors) else ""
r2_ok = "" if res['round2'] and not any("无法回应" in res['round2'] or "错误" in res['round2'] for e in errors) else ""
err_count = len(errors)
print(f"🔹 {model:<35} | R1: {r1_ok} | R2: {r2_ok} | Errors: {err_count}")
print("\n--- 📝 详细日志 ---")
for res in all_results:
if res['errors']:
print(f"\n🔸 模型: {res['model']}")
for err in res['errors']:
print(f" - {err}")
if __name__ == "__main__":
asyncio.run(main())

View File

@ -0,0 +1,404 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
测试优化的辩论流程控制系统
验证阶段转换和发言权争夺逻辑的改进
"""
import sys
import os
import time
import threading
from datetime import datetime, timedelta
# 添加项目路径
sys.path.append('/home/ben/liurenchaxin/src')
from jixia.debates.optimized_debate_flow import (
OptimizedDebateFlowController,
FlowControlConfig,
FlowControlMode,
TransitionTrigger,
SpeakerSelectionStrategy,
DebateStage
)
def test_basic_flow_control():
"""测试基础流程控制"""
print("🧪 测试基础流程控制")
print("-" * 30)
controller = OptimizedDebateFlowController()
# 测试获取当前发言者
speaker = controller.get_current_speaker()
print(f"✅ 当前发言者: {speaker}")
assert speaker is not None, "应该能获取到发言者"
# 测试记录发言
controller.record_speech(speaker, "这是一个测试发言")
print(f"✅ 发言记录成功,当前进度: {controller.stage_progress}")
# 测试流程状态
status = controller.get_flow_status()
print(f"✅ 流程状态: {status['current_stage']} 阶段")
assert status['current_stage'] == '', "应该在起阶段"
return True
def test_stage_transition():
"""测试阶段转换"""
print("\n🧪 测试阶段转换")
print("-" * 30)
config = FlowControlConfig(
mode=FlowControlMode.STRICT,
transition_triggers=[TransitionTrigger.PROGRESS_BASED]
)
controller = OptimizedDebateFlowController(config)
initial_stage = controller.current_stage
print(f"初始阶段: {initial_stage.value}")
# 模拟完成一个阶段的所有发言
stage_config = controller.stage_configs[initial_stage]
max_progress = stage_config["max_progress"]
for i in range(max_progress):
speaker = controller.get_current_speaker()
controller.record_speech(speaker, f"{i+1}次发言")
if i == max_progress - 1:
# 最后一次发言后应该自动转换阶段
if controller._should_advance_stage():
success = controller.advance_stage()
print(f"✅ 阶段转换成功: {success}")
print(f"新阶段: {controller.current_stage.value}")
assert controller.current_stage != initial_stage, "阶段应该已经改变"
break
return True
def test_speaker_selection_strategies():
"""测试发言者选择策略"""
print("\n🧪 测试发言者选择策略")
print("-" * 30)
strategies = [
SpeakerSelectionStrategy.ROUND_ROBIN,
SpeakerSelectionStrategy.CONTEXT_AWARE,
SpeakerSelectionStrategy.COMPETITIVE
]
for strategy in strategies:
print(f"\n测试策略: {strategy.value}")
config = FlowControlConfig(speaker_selection_strategy=strategy)
controller = OptimizedDebateFlowController(config)
# 获取几个发言者
speakers = []
for i in range(3):
speaker = controller.get_current_speaker()
speakers.append(speaker)
controller.record_speech(speaker, f"策略测试发言 {i+1}")
print(f"发言者序列: {speakers}")
assert len(set(speakers)) > 0, f"策略 {strategy.value} 应该能选择发言者"
print("✅ 所有发言者选择策略测试通过")
return True
def test_speaker_request_system():
"""测试发言请求系统"""
print("\n🧪 测试发言请求系统")
print("-" * 30)
config = FlowControlConfig(
speaker_selection_strategy=SpeakerSelectionStrategy.COMPETITIVE
)
controller = OptimizedDebateFlowController(config)
# 提交发言请求
controller.request_speaking_turn("正1", "紧急反驳", urgency=5, topic_relevance=0.9)
controller.request_speaking_turn("反2", "补充论据", urgency=2, topic_relevance=0.7)
controller.request_speaking_turn("正3", "重要澄清", urgency=4, topic_relevance=0.8)
print(f"待处理请求数量: {len(controller.pending_requests)}")
assert len(controller.pending_requests) == 3, "应该有3个待处理请求"
# 获取下一个发言者(应该是最高优先级的)
next_speaker = controller.get_current_speaker()
print(f"✅ 高优先级发言者: {next_speaker}")
# 记录发言后,请求应该被移除
controller.record_speech(next_speaker, "响应紧急请求的发言")
print(f"发言后待处理请求数量: {len(controller.pending_requests)}")
return True
def test_context_aware_selection():
"""测试上下文感知选择"""
print("\n🧪 测试上下文感知选择")
print("-" * 30)
config = FlowControlConfig(
speaker_selection_strategy=SpeakerSelectionStrategy.CONTEXT_AWARE
)
controller = OptimizedDebateFlowController(config)
# 模拟一些发言历史
test_speeches = [
("正1", "我支持AI投资"),
("正1", "理由是技术发展迅速"), # 连续发言
("反1", "但风险很高"),
("正2", "我们有风控措施")
]
for speaker, message in test_speeches:
controller.record_speech(speaker, message)
# 分析当前上下文
context = controller._analyze_current_context()
print(f"当前上下文: {context}")
# 获取下一个发言者(应该避免连续发言)
next_speaker = controller.get_current_speaker()
print(f"✅ 上下文感知选择的发言者: {next_speaker}")
# 验证不是最近的发言者
recent_speakers = [speech[0] for speech in test_speeches[-2:]]
print(f"最近发言者: {recent_speakers}")
return True
def test_stage_metrics():
"""测试阶段指标"""
print("\n🧪 测试阶段指标")
print("-" * 30)
controller = OptimizedDebateFlowController()
# 模拟一些发言
test_speeches = [
("吕洞宾", "AI投资是未来趋势我们应该积极参与。数据显示这个领域的增长潜力巨大。"),
("何仙姑", "但是我们也要考虑风险因素。"),
("铁拐李", "我同意吕洞宾的观点,因为技术发展确实很快。"),
("汉钟离", "然而市场波动性不容忽视。")
]
for speaker, message in test_speeches:
controller.record_speech(speaker, message)
# 检查阶段指标
metrics = controller.current_stage_metrics
print(f"发言数量: {metrics.speech_count}")
print(f"质量分数: {metrics.quality_score:.3f}")
print(f"参与平衡: {metrics.participation_balance:.3f}")
print(f"转换准备度: {metrics.transition_readiness:.3f}")
print(f"发言者分布: {metrics.speaker_distribution}")
assert metrics.speech_count == 4, "发言数量应该是4"
assert 0 <= metrics.quality_score <= 1, "质量分数应该在0-1之间"
assert 0 <= metrics.participation_balance <= 1, "参与平衡应该在0-1之间"
print("✅ 阶段指标计算正确")
return True
def test_adaptive_mode():
"""测试自适应模式"""
print("\n🧪 测试自适应模式")
print("-" * 30)
config = FlowControlConfig(
mode=FlowControlMode.ADAPTIVE,
transition_triggers=[TransitionTrigger.QUALITY_BASED, TransitionTrigger.PROGRESS_BASED],
quality_threshold=0.7
)
controller = OptimizedDebateFlowController(config)
# 模拟高质量发言
high_quality_speeches = [
("吕洞宾", "根据最新的市场分析数据AI投资领域在过去三年中显示出了显著的增长趋势。我们需要仔细分析这些数据背后的原因。"),
("何仙姑", "虽然数据显示增长,但是我们也必须考虑到技术泡沫的可能性。历史上类似的技术热潮往往伴随着高风险。"),
("铁拐李", "我认为关键在于风险管理。如果我们能够建立完善的风控体系,就能够在享受收益的同时控制风险。")
]
for speaker, message in high_quality_speeches:
controller.record_speech(speaker, message)
# 检查是否达到质量阈值
if controller.current_stage_metrics.quality_score >= config.quality_threshold:
print(f"✅ 达到质量阈值: {controller.current_stage_metrics.quality_score:.3f}")
# 检查是否应该转换阶段
should_advance = controller._should_advance_stage()
print(f"是否应该推进阶段: {should_advance}")
break
return True
def test_event_system():
"""测试事件系统"""
print("\n🧪 测试事件系统")
print("-" * 30)
controller = OptimizedDebateFlowController()
# 记录事件
events_received = []
def event_handler(event):
events_received.append(event.event_type)
print(f"📢 收到事件: {event.event_type}")
# 注册事件处理器
controller.add_event_handler("speech_recorded", event_handler)
controller.add_event_handler("speaker_request", event_handler)
# 触发事件
controller.record_speech("测试发言者", "测试消息")
controller.request_speaking_turn("正1", "测试请求")
# 等待事件处理
time.sleep(0.1)
print(f"收到的事件: {events_received}")
assert "speech_recorded" in events_received, "应该收到发言记录事件"
assert "speaker_request" in events_received, "应该收到发言请求事件"
print("✅ 事件系统工作正常")
return True
def test_data_persistence():
"""测试数据持久化"""
print("\n🧪 测试数据持久化")
print("-" * 30)
controller = OptimizedDebateFlowController()
# 模拟一些活动
controller.record_speech("吕洞宾", "测试发言1")
controller.record_speech("何仙姑", "测试发言2")
controller.request_speaking_turn("正1", "测试请求")
# 保存数据
filename = "test_flow_data.json"
controller.save_flow_data(filename)
# 检查文件是否存在
assert os.path.exists(filename), "数据文件应该被创建"
# 读取并验证数据
import json
with open(filename, 'r', encoding='utf-8') as f:
data = json.load(f)
assert "config" in data, "数据应该包含配置信息"
assert "current_state" in data, "数据应该包含当前状态"
assert "debate_history" in data, "数据应该包含辩论历史"
assert len(data["debate_history"]) == 2, "应该有2条发言记录"
print(f"✅ 数据持久化成功,文件大小: {os.path.getsize(filename)} 字节")
# 清理测试文件
os.remove(filename)
return True
def test_performance():
"""测试性能"""
print("\n🧪 测试性能")
print("-" * 30)
controller = OptimizedDebateFlowController()
# 测试发言者选择性能
start_time = time.time()
for i in range(100):
speaker = controller.get_current_speaker()
controller.record_speech(speaker, f"性能测试发言 {i}")
end_time = time.time()
duration = end_time - start_time
print(f"100次发言处理耗时: {duration:.3f}")
print(f"平均每次处理时间: {duration/100*1000:.2f} 毫秒")
print(f"处理速度: {100/duration:.1f} 次/秒")
assert duration < 5.0, "100次处理应该在5秒内完成"
# 测试并发性能
def concurrent_speech_recording():
for i in range(10):
speaker = controller.get_current_speaker()
controller.record_speech(speaker, f"并发测试发言 {threading.current_thread().name}-{i}")
start_time = time.time()
threads = []
for i in range(5):
thread = threading.Thread(target=concurrent_speech_recording, name=f"Thread-{i}")
threads.append(thread)
thread.start()
for thread in threads:
thread.join()
end_time = time.time()
concurrent_duration = end_time - start_time
print(f"并发处理耗时: {concurrent_duration:.3f}")
print(f"总发言数: {len(controller.debate_history)}")
print("✅ 性能测试通过")
return True
def run_comprehensive_test():
"""运行综合测试"""
print("🎭 优化的辩论流程控制系统 - 综合测试")
print("=" * 60)
test_functions = [
("基础流程控制", test_basic_flow_control),
("阶段转换", test_stage_transition),
("发言者选择策略", test_speaker_selection_strategies),
("发言请求系统", test_speaker_request_system),
("上下文感知选择", test_context_aware_selection),
("阶段指标", test_stage_metrics),
("自适应模式", test_adaptive_mode),
("事件系统", test_event_system),
("数据持久化", test_data_persistence),
("性能测试", test_performance)
]
passed = 0
failed = 0
for test_name, test_func in test_functions:
try:
print(f"\n{'='*20} {test_name} {'='*20}")
result = test_func()
if result:
print(f"{test_name} - 通过")
passed += 1
else:
print(f"{test_name} - 失败")
failed += 1
except Exception as e:
print(f"{test_name} - 错误: {str(e)}")
failed += 1
print("\n" + "=" * 60)
print(f"📊 测试结果统计")
print(f"通过: {passed}/{len(test_functions)} ({passed/len(test_functions)*100:.1f}%)")
print(f"失败: {failed}/{len(test_functions)} ({failed/len(test_functions)*100:.1f}%)")
if failed == 0:
print("🎉 所有测试通过!优化的辩论流程控制系统运行正常。")
else:
print(f"⚠️ 有 {failed} 个测试失败,需要进一步优化。")
return passed, failed
if __name__ == "__main__":
run_comprehensive_test()

83
tests/test_simple_api.py Normal file
View File

@ -0,0 +1,83 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
简单的API测试脚本
测试.env.example中的配置是否正确
"""
import requests
import json
import logging
# 配置日志
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)
def test_simple_api():
"""简单的API测试"""
# 从.env.example读取的配置
BASE_URL = "http://master.tailnet-68f9.ts.net:40012"
API_KEY = "sk-0jdcGHZJpX2oUJmyEs7zVA"
MODEL = "gemini/gemini-2.5-pro"
logger.info(f"🧪 测试配置:")
logger.info(f"📡 BASE_URL: {BASE_URL}")
logger.info(f"🔑 API_KEY: {API_KEY[:10]}...")
logger.info(f"🤖 MODEL: {MODEL}")
# 最简单的请求
payload = {
"model": MODEL,
"messages": [
{"role": "user", "content": "Hello"}
],
"max_tokens": 50
}
headers = {
"Content-Type": "application/json",
"Authorization": f"Bearer {API_KEY}"
}
try:
logger.info("🚀 发送请求...")
response = requests.post(
f"{BASE_URL}/chat/completions",
json=payload,
headers=headers,
timeout=120 # 增加超时时间
)
logger.info(f"📊 状态码: {response.status_code}")
if response.status_code == 200:
result = response.json()
logger.info(f"✅ 请求成功!")
logger.info(f"📋 响应: {json.dumps(result, ensure_ascii=False, indent=2)}")
return True
else:
logger.error(f"❌ 请求失败: {response.status_code}")
logger.error(f"📋 错误响应: {response.text}")
return False
except requests.exceptions.Timeout:
logger.error(f"⏰ 请求超时 (120秒)")
return False
except requests.exceptions.ConnectionError as e:
logger.error(f"🔌 连接错误: {e}")
return False
except Exception as e:
logger.error(f"💥 未知错误: {e}")
return False
if __name__ == "__main__":
logger.info("🎯 开始简单API测试")
success = test_simple_api()
if success:
logger.info("🎉 测试成功!")
else:
logger.error("💀 测试失败!")
logger.info("🏁 测试完成")

View File

@ -0,0 +1,80 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Test script for a single model from OpenRouter
"""
import asyncio
import aiohttp
import os
import json
# Get API key from environment or .env file
api_key = os.getenv('OPENROUTER_API_KEY')
if not api_key:
with open(".env", "r") as f:
for line in f:
line = line.strip()
if line.startswith("sk-or-v1-"):
api_key = line
break
if not api_key:
print("❌ No API key found")
exit(1)
async def test_model(model_name):
"""Test a single model"""
# Remove :free tag for API call
clean_model = model_name.split(":")[0]
print(f"🚀 Testing model: {model_name}")
print(f" Clean name: {clean_model}")
url = "https://openrouter.ai/api/v1/chat/completions"
headers = {
"Authorization": f"Bearer {api_key}",
"Content-Type": "application/json"
}
payload = {
"model": clean_model,
"messages": [
{"role": "user", "content": "Explain the concept of 'working hard' in one short sentence."}
],
"max_tokens": 100
}
print(f" Payload: {json.dumps(payload, indent=2)}")
try:
async with aiohttp.ClientSession() as session:
async with session.post(url, headers=headers, json=payload, timeout=30) as response:
print(f" Status: {response.status}")
# Print response headers
print(" Response Headers:")
for key, value in response.headers.items():
print(f" {key}: {value}")
if response.status == 200:
result = await response.json()
print(f" Full response: {json.dumps(result, indent=2)}")
content = result.get('choices', [{}])[0].get('message', {}).get('content', '')
print(f"✅ Success - Content: '{content}'")
return True
else:
error_text = await response.text()
print(f"❌ Status {response.status}: {error_text}")
return False
except Exception as e:
print(f"💥 Exception: {str(e)}")
return False
async def main():
"""Main function"""
model_to_test = "openai/gpt-oss-20b:free"
await test_model(model_to_test)
if __name__ == "__main__":
asyncio.run(main())

View File

@ -0,0 +1,80 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Test script for a single model from OpenRouter
"""
import asyncio
import aiohttp
import os
import json
# Get API key from environment or .env file
api_key = os.getenv('OPENROUTER_API_KEY')
if not api_key:
with open(".env", "r") as f:
for line in f:
line = line.strip()
if line.startswith("sk-or-v1-"):
api_key = line
break
if not api_key:
print("❌ No API key found")
exit(1)
async def test_model(model_name):
"""Test a single model"""
# Remove :free tag for API call
clean_model = model_name.split(":")[0]
print(f"🚀 Testing model: {model_name}")
print(f" Clean name: {clean_model}")
url = "https://openrouter.ai/api/v1/chat/completions"
headers = {
"Authorization": f"Bearer {api_key}",
"Content-Type": "application/json"
}
payload = {
"model": clean_model,
"messages": [
{"role": "user", "content": "Explain the concept of 'working hard' in one short sentence."}
],
"max_tokens": 100
}
print(f" Payload: {json.dumps(payload, indent=2)}")
try:
async with aiohttp.ClientSession() as session:
async with session.post(url, headers=headers, json=payload, timeout=30) as response:
print(f" Status: {response.status}")
# Print response headers
print(" Response Headers:")
for key, value in response.headers.items():
print(f" {key}: {value}")
if response.status == 200:
result = await response.json()
print(f" Full response: {json.dumps(result, indent=2)}")
content = result.get('choices', [{}])[0].get('message', {}).get('content', '')
print(f"✅ Success - Content: '{content}'")
return True
else:
error_text = await response.text()
print(f"❌ Status {response.status}: {error_text}")
return False
except Exception as e:
print(f"💥 Exception: {str(e)}")
return False
async def main():
"""Main function"""
model_to_test = "qwen/qwq-32b:free"
await test_model(model_to_test)
if __name__ == "__main__":
asyncio.run(main())

View File

@ -0,0 +1,80 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Test script for a single model from OpenRouter
"""
import asyncio
import aiohttp
import os
import json
# Get API key from environment or .env file
api_key = os.getenv('OPENROUTER_API_KEY')
if not api_key:
with open(".env", "r") as f:
for line in f:
line = line.strip()
if line.startswith("sk-or-v1-"):
api_key = line
break
if not api_key:
print("❌ No API key found")
exit(1)
async def test_model(model_name):
"""Test a single model"""
# Remove :free tag for API call
clean_model = model_name.split(":")[0]
print(f"🚀 Testing model: {model_name}")
print(f" Clean name: {clean_model}")
url = "https://openrouter.ai/api/v1/chat/completions"
headers = {
"Authorization": f"Bearer {api_key}",
"Content-Type": "application/json"
}
payload = {
"model": clean_model,
"messages": [
{"role": "user", "content": "Explain the concept of 'working hard' in one short sentence."}
],
"max_tokens": 100
}
print(f" Payload: {json.dumps(payload, indent=2)}")
try:
async with aiohttp.ClientSession() as session:
async with session.post(url, headers=headers, json=payload, timeout=30) as response:
print(f" Status: {response.status}")
# Print response headers
print(" Response Headers:")
for key, value in response.headers.items():
print(f" {key}: {value}")
if response.status == 200:
result = await response.json()
print(f" Full response: {json.dumps(result, indent=2)}")
content = result.get('choices', [{}])[0].get('message', {}).get('content', '')
print(f"✅ Success - Content: '{content}'")
return True
else:
error_text = await response.text()
print(f"❌ Status {response.status}: {error_text}")
return False
except Exception as e:
print(f"💥 Exception: {str(e)}")
return False
async def main():
"""Main function"""
model_to_test = "mistralai/mistral-small-3.1-24b-instruct:free"
await test_model(model_to_test)
if __name__ == "__main__":
asyncio.run(main())

View File

@ -0,0 +1,84 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Test script for the Taishang Laojun API (Zhipu AI GLM-4.5)
"""
import asyncio
import aiohttp
import json
async def test_taishang_api():
"""Test the Taishang Laojun API"""
# Zhipu AI API configuration
zhipu_api_key = "cc95756306b2cb9748c8df15f9063eaf.hlvXbZeoLnPhyoLw"
url = "https://open.bigmodel.cn/api/paas/v4/chat/completions"
messages = [
{"role": "system", "content": "You are Taishang Laojun, a wise philosopher and project manager for the Ba Xian (Eight Immortals). You excel at deep analysis and strategic planning. Please respond in JSON format."},
{"role": "user", "content": "Analyze the concept of 'working hard' and provide a JSON response with two keys: 'definition' and 'importance'."}
]
data = {
"model": "glm-4.5-air",
"messages": messages,
"max_tokens": 500,
"temperature": 0.7,
"stream": False,
"tools": [],
"tool_choice": "none"
}
headers = {
"Content-Type": "application/json",
"Authorization": f"Bearer {zhipu_api_key}"
}
print("🚀 Testing Taishang Laojun API (Zhipu AI GLM-4.5)...")
print(f" URL: {url}")
print(f" Model: {data['model']}")
print(f" Messages: {json.dumps(messages, ensure_ascii=False, indent=2)}")
print(f" Data: {json.dumps(data, ensure_ascii=False, indent=2)}")
try:
async with aiohttp.ClientSession() as session:
async with session.post(url, headers=headers, json=data, timeout=30) as response:
print(f" Status: {response.status}")
if response.status == 200:
result = await response.json()
print(f" Full response: {json.dumps(result, ensure_ascii=False, indent=2)}")
# Extract content
if 'choices' in result and len(result['choices']) > 0:
choice = result['choices'][0]
content = ""
# Try to get content from different fields
if 'message' in choice and 'content' in choice['message']:
content = choice['message']['content']
# If content is empty, try reasoning_content
if not content and 'message' in choice and 'reasoning_content' in choice['message']:
content = choice['message']['reasoning_content']
if content:
print(f"✅ Success - Content: {content[:200]}...") # Truncate for readability
return True
print("❌ Content not found in response")
return False
else:
error_text = await response.text()
print(f"❌ Status {response.status}: {error_text}")
return False
except Exception as e:
print(f"💥 Exception: {str(e)}")
return False
async def main():
"""Main function"""
await test_taishang_api()
if __name__ == "__main__":
asyncio.run(main())

View File

@ -0,0 +1,745 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
集夏v2.1.0 综合功能测试
验证所有新功能的集成效果和系统稳定性
"""
import asyncio
import sys
import os
import time
import threading
import json
from datetime import datetime, timedelta
# 添加项目路径
sys.path.append('/home/ben/liurenchaxin/src')
# 导入所有核心模块
try:
from jixia.debates.enhanced_priority_algorithm import EnhancedPriorityAlgorithm
from jixia.debates.optimized_debate_flow import OptimizedDebateFlowController, FlowControlMode
from jixia.intervention.human_intervention_system import DebateHealthMonitor
from jixia.coordination.multi_chat_coordinator import MultiChatCoordinator
except ImportError as e:
print(f"❌ 模块导入失败: {e}")
print("请确保所有模块都已正确安装")
sys.exit(1)
class V2_1_IntegrationTester:
"""v2.1.0 集成测试器"""
def __init__(self):
self.test_results = {}
self.performance_metrics = {}
self.error_log = []
# 初始化各个组件
try:
self.priority_algorithm = EnhancedPriorityAlgorithm()
self.flow_controller = OptimizedDebateFlowController()
self.health_monitor = DebateHealthMonitor()
self.chat_coordinator = MultiChatCoordinator()
print("✅ 所有核心组件初始化成功")
except Exception as e:
print(f"❌ 组件初始化失败: {e}")
self.error_log.append(f"初始化错误: {e}")
def test_priority_algorithm_integration(self):
"""测试优先级算法集成"""
print("\n🧪 测试优先级算法集成")
print("-" * 40)
try:
# 模拟辩论场景
test_speeches = [
{
"speaker": "吕洞宾",
"content": "根据最新的市场数据分析AI投资领域显示出强劲的增长潜力。我们应该抓住这个机会。",
"context": {"stage": "", "topic": "AI投资", "recent_speakers": []}
},
{
"speaker": "何仙姑",
"content": "但是我们必须谨慎考虑风险因素!市场波动性很大。",
"context": {"stage": "", "topic": "AI投资", "recent_speakers": ["吕洞宾"]}
},
{
"speaker": "铁拐李",
"content": "我同意吕洞宾的观点,技术发展确实迅速,但何仙姑提到的风险也值得重视。",
"context": {"stage": "", "topic": "AI投资", "recent_speakers": ["吕洞宾", "何仙姑"]}
}
]
priorities = []
for speech in test_speeches:
analysis = self.priority_algorithm.analyze_speech(
speech["content"],
speech["speaker"],
speech["context"]
)
# 获取详细的分数分解
speaker = speech["speaker"]
context = speech["context"]
recent_speeches = test_speeches[:test_speeches.index(speech)]
profile = self.priority_algorithm._get_or_create_speaker_profile(speaker)
self.priority_algorithm._update_speaker_profile(profile, recent_speeches)
rebuttal_urgency = self.priority_algorithm._calculate_rebuttal_urgency(speaker, context, recent_speeches)
argument_strength = self.priority_algorithm._calculate_argument_strength(speaker, profile)
time_pressure = self.priority_algorithm._calculate_time_pressure(speaker, context)
audience_reaction = self.priority_algorithm._calculate_audience_reaction(speaker, context)
strategy_need = self.priority_algorithm._calculate_strategy_need(speaker, context, profile)
priority = self.priority_algorithm.calculate_priority(
speaker,
context,
recent_speeches
)
priorities.append((speaker, priority))
print(f"发言者: {speaker}")
print(f" 反驳紧急性: {rebuttal_urgency:.6f}")
print(f" 论证强度: {argument_strength:.6f}")
print(f" 时间压力: {time_pressure:.6f}")
print(f" 观众反应: {audience_reaction:.6f}")
print(f" 策略需求: {strategy_need:.6f}")
print(f" 最终优先级: {priority:.6f}")
print()
# 调试输出
print(f"所有优先级值: {[p[1] for p in priorities]}")
print(f"唯一优先级数量: {len(set(p[1] for p in priorities))}")
print(f"优先级差异: {max(p[1] for p in priorities) - min(p[1] for p in priorities)}")
# 验证优先级计算
assert all(0 <= p[1] <= 1 for p in priorities), "优先级应该在0-1之间"
assert len(set(p[1] for p in priorities)) > 1, "不同发言应该有不同优先级"
self.test_results["priority_algorithm_integration"] = True
print("✅ 优先级算法集成测试通过")
return True
except Exception as e:
print(f"❌ 优先级算法集成测试失败: {e}")
self.error_log.append(f"优先级算法集成错误: {e}")
self.test_results["priority_algorithm_integration"] = False
return False
def test_flow_controller_integration(self):
"""测试流程控制器集成"""
print("\n🧪 测试流程控制器集成")
print("-" * 40)
try:
# 测试与优先级算法的集成
initial_stage = self.flow_controller.current_stage
print(f"初始阶段: {initial_stage.value}")
# 模拟完整的辩论流程
test_sequence = [
("吕洞宾", "开场陈述AI投资是未来发展的关键"),
("何仙姑", "反方观点:需要谨慎评估风险"),
("铁拐李", "补充论据:技术发展支持投资决策"),
("汉钟离", "风险分析:市场不确定性因素"),
("曹国舅", "综合观点:平衡收益与风险"),
("蓝采和", "实践经验:类似投资案例分析"),
("韩湘子", "未来展望:长期发展趋势"),
("张果老", "总结陈词:理性投资建议")
]
stage_transitions = 0
for speaker, content in test_sequence:
# 记录发言
self.flow_controller.record_speech(speaker, content)
# 检查是否需要推进阶段
if hasattr(self.flow_controller, '_should_advance_stage') and self.flow_controller._should_advance_stage():
old_stage = self.flow_controller.current_stage
if self.flow_controller.advance_stage():
stage_transitions += 1
print(f"阶段转换: {old_stage.value} -> {self.flow_controller.current_stage.value}")
# 获取流程状态
status = self.flow_controller.get_flow_status()
print(f"发言者: {speaker}, 当前阶段: {status['current_stage']}, 进度: {status['stage_progress']}")
# 验证流程控制
final_status = self.flow_controller.get_flow_status()
total_speeches = len(self.flow_controller.debate_history)
assert total_speeches == len(test_sequence), f"发言总数应该匹配,期望{len(test_sequence)},实际{total_speeches}"
assert stage_transitions > 0, "应该发生阶段转换"
self.test_results["flow_controller_integration"] = True
print(f"✅ 流程控制器集成测试通过,发生了 {stage_transitions} 次阶段转换")
return True
except Exception as e:
print(f"❌ 流程控制器集成测试失败: {e}")
self.error_log.append(f"流程控制器集成错误: {e}")
self.test_results["flow_controller_integration"] = False
return False
def test_health_monitor_integration(self):
"""测试健康监控集成"""
print("\n🧪 测试健康监控集成")
print("-" * 40)
try:
# 模拟辩论数据
debate_data = {
"participants": ["吕洞宾", "何仙姑", "铁拐李", "汉钟离"],
"speeches": [
{"speaker": "吕洞宾", "content": "我强烈支持这个提案", "timestamp": datetime.now()},
{"speaker": "何仙姑", "content": "我完全反对,这太危险了", "timestamp": datetime.now()},
{"speaker": "铁拐李", "content": "让我们理性分析一下", "timestamp": datetime.now()},
{"speaker": "汉钟离", "content": "数据显示情况复杂", "timestamp": datetime.now()}
],
"current_stage": "",
"duration": timedelta(minutes=15)
}
# 更新健康监控
self.health_monitor.update_metrics(debate_data)
# 检查健康状态
health_status = self.health_monitor.get_health_status()
health_report = self.health_monitor.get_health_report()
print(f"健康状态: {health_status.value}")
print(f"整体分数: {health_report['overall_score']:.1f}")
print(f"监控指标数量: {len(health_report['metrics'])}")
print(f"活跃警报: {health_report['active_alerts']}")
# 模拟问题场景
problematic_data = {
"participants": ["吕洞宾", "何仙姑"],
"speeches": [
{"speaker": "吕洞宾", "content": "你们都是白痴!", "timestamp": datetime.now()},
{"speaker": "吕洞宾", "content": "我说了算!", "timestamp": datetime.now()},
{"speaker": "吕洞宾", "content": "闭嘴!", "timestamp": datetime.now()}
],
"current_stage": "",
"duration": timedelta(minutes=30)
}
self.health_monitor.update_metrics(problematic_data)
# 检查是否触发警报
alerts = self.health_monitor.active_alerts
print(f"活跃警报数量: {len(alerts)}")
# 验证监控功能
assert health_status is not None, "应该有健康状态"
assert isinstance(health_status, type(health_status)), "健康状态应该是HealthStatus枚举"
self.test_results["health_monitor_integration"] = True
print("✅ 健康监控集成测试通过")
return True
except Exception as e:
print(f"❌ 健康监控集成测试失败: {e}")
self.error_log.append(f"健康监控集成错误: {e}")
self.test_results["health_monitor_integration"] = False
return False
async def test_chat_coordinator_integration(self):
"""测试多群聊协调集成"""
print("\n🧪 测试多群聊协调集成")
print("-" * 40)
try:
# 模拟多群聊场景
main_chat_message = {
"chat_id": "main_debate",
"speaker": "吕洞宾",
"content": "我认为我们应该投资AI技术",
"timestamp": datetime.now()
}
# 处理主群聊消息
await self.chat_coordinator.handle_message(main_chat_message)
# 模拟策略讨论
strategy_message = {
"chat_id": "strategy_positive",
"speaker": "铁拐李",
"content": "我们需要准备更多技术数据来支持论点",
"timestamp": datetime.now()
}
await self.chat_coordinator.handle_message(strategy_message)
# 检查消息路由
routing_status = self.chat_coordinator.get_routing_status()
print(f"路由状态: {routing_status}")
# 模拟协调决策
coordination_result = await self.chat_coordinator.coordinate_response(
main_chat_message,
context={"stage": "", "topic": "AI投资"}
)
print(f"协调结果: {coordination_result}")
# 验证协调功能
assert coordination_result is not None, "应该有协调结果"
self.test_results["chat_coordinator_integration"] = True
print("✅ 多群聊协调集成测试通过")
return True
except Exception as e:
print(f"❌ 多群聊协调集成测试失败: {e}")
self.error_log.append(f"多群聊协调集成错误: {e}")
self.test_results["chat_coordinator_integration"] = False
return False
async def test_cross_component_integration(self):
"""测试跨组件集成"""
print("\n🧪 测试跨组件集成")
print("-" * 40)
try:
# 清空之前的发言历史
self.flow_controller.debate_history.clear()
# 模拟完整的辩论流程
debate_scenario = {
"topic": "人工智能投资策略",
"participants": ["吕洞宾", "何仙姑", "铁拐李", "汉钟离"],
"duration": 30 # 分钟
}
print(f"开始辩论: {debate_scenario['topic']}")
# 1. 流程控制器管理发言顺序
speakers_sequence = []
for i in range(8): # 模拟8轮发言
speaker = self.flow_controller.get_current_speaker()
speakers_sequence.append(speaker)
# 2. 生成发言内容(简化)
content = f"这是{speaker}在第{i+1}轮的发言,关于{debate_scenario['topic']}"
# 3. 优先级算法分析发言
context = {
"stage": self.flow_controller.current_stage.value,
"topic": debate_scenario['topic'],
"recent_speakers": speakers_sequence[-3:]
}
analysis = self.priority_algorithm.analyze_speech(content, speaker, context)
# 构建正确格式的recent_speeches
recent_speeches = []
for j, prev_speaker in enumerate(speakers_sequence):
recent_speeches.append({
"speaker": prev_speaker,
"content": f"这是{prev_speaker}在第{j+1}轮的发言",
"timestamp": datetime.now().isoformat(),
"team": "positive" if "" in prev_speaker else "negative"
})
priority = self.priority_algorithm.calculate_priority(speaker, context, recent_speeches)
# 4. 记录发言到流程控制器
self.flow_controller.record_speech(speaker, content)
# 5. 更新健康监控
debate_data = {
"participants": debate_scenario['participants'],
"speeches": [{"speaker": speaker, "content": content, "timestamp": datetime.now()}],
"current_stage": self.flow_controller.current_stage.value,
"duration": timedelta(minutes=i*2)
}
self.health_monitor.update_metrics(debate_data)
# 6. 多群聊协调处理
message = {
"chat_id": "main_debate",
"speaker": speaker,
"content": content,
"timestamp": datetime.now()
}
# 异步调用
try:
await self.chat_coordinator.handle_message(message)
except Exception as e:
print(f"警告: 消息处理失败: {e}")
print(f"{i+1}轮 - 发言者: {speaker}, 优先级: {priority:.3f}, 阶段: {context['stage']}")
# 验证集成效果
print("\n开始获取各组件状态...")
try:
flow_status = self.flow_controller.get_flow_status()
print(f"✅ 流程状态获取成功: {type(flow_status)}")
except Exception as e:
print(f"❌ 流程状态获取失败: {e}")
raise
try:
health_status = self.health_monitor.get_health_status()
print(f"✅ 健康状态获取成功: {type(health_status)}")
except Exception as e:
print(f"❌ 健康状态获取失败: {e}")
raise
try:
routing_status = self.chat_coordinator.get_routing_status()
print(f"✅ 路由状态获取成功: {type(routing_status)}, 值: {routing_status}")
except Exception as e:
print(f"❌ 路由状态获取失败: {e}")
raise
print(f"\n集成测试结果:")
print(f"- 总发言数: {len(self.flow_controller.debate_history)}")
print(f"- 当前阶段: {flow_status['current_stage']}")
print(f"- 健康状态: {health_status.value}")
# 安全地访问routing_status
if isinstance(routing_status, dict):
print(f"- 活跃路由数: {routing_status.get('active_routes', 0)}")
print(f"- 消息队列大小: {routing_status.get('message_queue_size', 0)}")
print(f"- 总群聊数: {routing_status.get('total_rooms', 0)}")
else:
print(f"- 路由状态: {routing_status}")
print(f"- 路由状态类型: {type(routing_status)}")
# 验证所有组件都正常工作
total_speeches = len(self.flow_controller.debate_history)
assert total_speeches == 8, f"应该记录8次发言实际{total_speeches}"
assert health_status is not None, "应该有健康状态"
assert len(speakers_sequence) == 8, "应该有8个发言者记录"
self.test_results["cross_component_integration"] = True
print("✅ 跨组件集成测试通过")
return True
except Exception as e:
import traceback
print(f"❌ 跨组件集成测试失败: {e}")
print(f"详细错误信息:")
traceback.print_exc()
self.error_log.append(f"跨组件集成错误: {e}")
self.test_results["cross_component_integration"] = False
return False
def test_performance_under_load(self):
"""测试负载下的性能"""
print("\n🧪 测试负载下的性能")
print("-" * 40)
try:
# 性能测试参数
num_speeches = 100
num_threads = 5
def simulate_debate_load():
"""模拟辩论负载"""
thread_name = threading.current_thread().name
for i in range(num_speeches // num_threads):
try:
# 模拟发言处理
speaker = f"Speaker-{thread_name}-{i}"
content = f"这是来自{speaker}的测试发言 {i}"
# 优先级计算
context = {"stage": "", "topic": "性能测试", "recent_speakers": []}
analysis = self.priority_algorithm.analyze_speech(content, speaker, context)
priority = self.priority_algorithm.calculate_priority(speaker, context, [])
# 流程记录
self.flow_controller.record_speech(speaker, content)
# 健康监控
debate_data = {
"participants": [speaker],
"speeches": [{"speaker": speaker, "content": content, "timestamp": datetime.now()}],
"current_stage": "",
"duration": timedelta(seconds=i)
}
self.health_monitor.update_metrics(debate_data)
except Exception as e:
self.error_log.append(f"负载测试错误 {thread_name}-{i}: {e}")
# 开始性能测试
start_time = time.time()
threads = []
for i in range(num_threads):
thread = threading.Thread(target=simulate_debate_load, name=f"LoadTest-{i}")
threads.append(thread)
thread.start()
for thread in threads:
thread.join()
end_time = time.time()
duration = end_time - start_time
# 计算性能指标
total_operations = num_speeches * 4 # 每次发言包含4个操作
ops_per_second = total_operations / duration
self.performance_metrics = {
"total_operations": total_operations,
"duration": duration,
"ops_per_second": ops_per_second,
"avg_operation_time": duration / total_operations * 1000, # 毫秒
"concurrent_threads": num_threads,
"errors": len([e for e in self.error_log if "负载测试错误" in e])
}
print(f"性能测试结果:")
print(f"- 总操作数: {total_operations}")
print(f"- 执行时间: {duration:.3f}")
print(f"- 操作速度: {ops_per_second:.1f} 操作/秒")
print(f"- 平均操作时间: {self.performance_metrics['avg_operation_time']:.2f} 毫秒")
print(f"- 并发线程: {num_threads}")
print(f"- 错误数量: {self.performance_metrics['errors']}")
# 性能验证
assert ops_per_second > 100, "操作速度应该超过100操作/秒"
assert self.performance_metrics['errors'] == 0, "不应该有错误"
self.test_results["performance_under_load"] = True
print("✅ 负载性能测试通过")
return True
except Exception as e:
print(f"❌ 负载性能测试失败: {e}")
self.error_log.append(f"负载性能测试错误: {e}")
self.test_results["performance_under_load"] = False
return False
def test_data_consistency(self):
"""测试数据一致性"""
print("\n🧪 测试数据一致性")
print("-" * 40)
try:
# 为了确保数据一致性测试的准确性创建新的flow_controller实例
from jixia.debates.optimized_debate_flow import OptimizedDebateFlowController, FlowControlMode
test_flow_controller = OptimizedDebateFlowController()
# 模拟数据操作
test_data = {
"speakers": ["吕洞宾", "何仙姑", "铁拐李"],
"speeches": [
"AI投资具有巨大潜力",
"但风险也不容忽视",
"我们需要平衡收益与风险"
]
}
# 1. 保存流程控制器数据
for i, (speaker, content) in enumerate(zip(test_data["speakers"], test_data["speeches"])):
test_flow_controller.record_speech(speaker, content)
print(f"记录发言 {i+1}: {speaker} - {content[:30]}...")
print(f"当前debate_history长度: {len(test_flow_controller.debate_history)}")
flow_data_file = "test_flow_consistency.json"
test_flow_controller.save_flow_data(flow_data_file)
# 2. 保存健康监控数据
debate_data = {
"participants": test_data["speakers"],
"speeches": [
{"speaker": s, "content": c, "timestamp": datetime.now()}
for s, c in zip(test_data["speakers"], test_data["speeches"])
],
"current_stage": "",
"duration": timedelta(minutes=10)
}
self.health_monitor.update_metrics(debate_data)
health_data_file = "test_health_consistency.json"
self.health_monitor.save_monitoring_data(health_data_file)
# 3. 验证数据文件
assert os.path.exists(flow_data_file), "流程数据文件应该存在"
assert os.path.exists(health_data_file), "健康数据文件应该存在"
# 4. 读取并验证数据内容
with open(flow_data_file, 'r', encoding='utf-8') as f:
flow_data = json.load(f)
with open(health_data_file, 'r', encoding='utf-8') as f:
health_data = json.load(f)
# 调试信息
print(f"读取的flow_data中debate_history长度: {len(flow_data.get('debate_history', []))}")
print(f"debate_history内容: {flow_data.get('debate_history', [])}")
# 验证数据完整性
actual_count = len(flow_data.get("debate_history", []))
assert actual_count == 3, f"应该有3条发言记录实际有{actual_count}"
assert "health_metrics" in health_data, "应该包含健康指标"
assert "monitoring_config" in health_data, "应该包含监控配置"
print(f"数据一致性验证:")
print(f"- 流程数据记录: {len(flow_data['debate_history'])}")
print(f"- 健康数据大小: {os.path.getsize(health_data_file)} 字节")
print(f"- 流程数据大小: {os.path.getsize(flow_data_file)} 字节")
# 清理测试文件
os.remove(flow_data_file)
os.remove(health_data_file)
self.test_results["data_consistency"] = True
print("✅ 数据一致性测试通过")
return True
except Exception as e:
print(f"❌ 数据一致性测试失败: {e}")
self.error_log.append(f"数据一致性错误: {e}")
self.test_results["data_consistency"] = False
return False
def generate_comprehensive_report(self):
"""生成综合测试报告"""
print("\n" + "=" * 60)
print("📊 集夏v2.1.0 综合测试报告")
print("=" * 60)
# 测试结果统计
total_tests = len(self.test_results)
passed_tests = sum(1 for result in self.test_results.values() if result)
failed_tests = total_tests - passed_tests
pass_rate = (passed_tests / total_tests) * 100 if total_tests > 0 else 0
print(f"\n🎯 测试结果统计:")
print(f"- 总测试数: {total_tests}")
print(f"- 通过测试: {passed_tests}")
print(f"- 失败测试: {failed_tests}")
print(f"- 通过率: {pass_rate:.1f}%")
# 详细测试结果
print(f"\n📋 详细测试结果:")
for test_name, result in self.test_results.items():
status = "✅ 通过" if result else "❌ 失败"
print(f"- {test_name}: {status}")
# 性能指标
if self.performance_metrics:
print(f"\n⚡ 性能指标:")
for metric, value in self.performance_metrics.items():
if isinstance(value, float):
print(f"- {metric}: {value:.3f}")
else:
print(f"- {metric}: {value}")
# 错误日志
if self.error_log:
print(f"\n🚨 错误日志 ({len(self.error_log)} 条):")
for i, error in enumerate(self.error_log[:5], 1): # 只显示前5条
print(f"- {i}. {error}")
if len(self.error_log) > 5:
print(f"- ... 还有 {len(self.error_log) - 5} 条错误")
# 系统状态
print(f"\n🔧 系统状态:")
try:
flow_status = self.flow_controller.get_flow_status()
health_status = self.health_monitor.get_health_status()
print(f"- 流程控制器: 正常 (总发言: {flow_status.get('total_speeches', 0)})")
print(f"- 健康监控: 正常 (状态: {health_status.value})")
print(f"- 优先级算法: 正常")
print(f"- 多群聊协调: 正常")
except Exception as e:
print(f"- 系统状态检查失败: {e}")
# 总结
print(f"\n🎉 测试总结:")
if pass_rate >= 90:
print("🟢 系统状态优秀所有核心功能运行正常可以发布v2.1.0版本。")
elif pass_rate >= 70:
print("🟡 系统状态良好,但有部分功能需要优化。建议修复后再发布。")
else:
print("🔴 系统存在重大问题,需要进行全面修复后才能发布。")
return {
"pass_rate": pass_rate,
"total_tests": total_tests,
"passed_tests": passed_tests,
"failed_tests": failed_tests,
"performance_metrics": self.performance_metrics,
"error_count": len(self.error_log)
}
async def run_all_tests(self):
"""运行所有测试"""
print("🚀 开始集夏v2.1.0综合功能测试")
print("=" * 60)
# 同步测试方法
sync_test_methods = [
self.test_priority_algorithm_integration,
self.test_flow_controller_integration,
self.test_health_monitor_integration,
self.test_performance_under_load,
self.test_data_consistency
]
# 异步测试方法
async_test_methods = [
self.test_chat_coordinator_integration,
self.test_cross_component_integration
]
start_time = time.time()
# 运行同步测试
for test_method in sync_test_methods:
try:
test_method()
except Exception as e:
print(f"❌ 测试执行异常: {e}")
self.error_log.append(f"测试执行异常: {e}")
# 运行异步测试
for test_method in async_test_methods:
try:
await test_method()
except Exception as e:
print(f"❌ 测试执行异常: {e}")
self.error_log.append(f"测试执行异常: {e}")
end_time = time.time()
total_duration = end_time - start_time
print(f"\n⏱️ 总测试时间: {total_duration:.3f}")
# 生成综合报告
return self.generate_comprehensive_report()
async def main():
"""主函数"""
tester = V2_1_IntegrationTester()
report = await tester.run_all_tests()
# 保存测试报告
report_file = "v2_1_comprehensive_test_report.json"
with open(report_file, 'w', encoding='utf-8') as f:
json.dump({
"timestamp": datetime.now().isoformat(),
"version": "v2.1.0",
"test_results": tester.test_results,
"performance_metrics": tester.performance_metrics,
"error_log": tester.error_log,
"summary": report
}, f, ensure_ascii=False, indent=2)
print(f"\n📄 详细测试报告已保存到: {report_file}")
return report["pass_rate"] >= 70 # 70%通过率作为发布标准
if __name__ == "__main__":
success = asyncio.run(main())
sys.exit(0 if success else 1)

View File

@ -1,257 +1,384 @@
#!/usr/bin/env python3
"""
Vertex AI Memory Bank 测试脚本
验证稷下学宫记忆银行功能
Vertex Memory Bank 实现测试
"""
import unittest
import asyncio
import sys
import os
import sys
from unittest.mock import patch, MagicMock, AsyncMock
from datetime import datetime
# 添加项目根目录到路径
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
# 添加项目根目录到Python路径
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from src.jixia.memory.vertex_memory_bank import VertexMemoryBank, initialize_baxian_memory_banks
from src.jixia.agents.memory_enhanced_agent import MemoryEnhancedAgent, create_memory_enhanced_council
from config.doppler_config import get_google_genai_config, validate_config
from src.jixia.memory.vertex_memory_bank import VertexMemoryBank, MemoryEntry
async def test_memory_bank_basic():
"""测试Memory Bank基础功能"""
print("🧪 测试 Memory Bank 基础功能...")
class TestVertexMemoryBank(unittest.TestCase):
"""测试VertexMemoryBank类"""
try:
# 验证配置
if not validate_config("google_adk"):
print("❌ Google ADK 配置验证失败")
return False
def setUp(self):
"""测试前的设置"""
# Mock掉aiplatform.init以避免实际初始化
patcher = patch('src.jixia.memory.vertex_memory_bank.aiplatform.init')
self.mock_init = patcher.start()
self.addCleanup(patcher.stop)
config = get_google_genai_config()
if not config.get('project_id'):
print("❌ Google Cloud Project ID 未配置")
print("请设置环境变量: GOOGLE_CLOUD_PROJECT_ID")
return False
# 创建Memory Bank实例
memory_bank = VertexMemoryBank.from_config()
print(f"✅ Memory Bank 实例创建成功")
print(f" 项目ID: {config['project_id']}")
print(f" 区域: {config['location']}")
# 测试创建记忆银行
bank_id = await memory_bank.create_memory_bank(
agent_name="tieguaili",
display_name="铁拐李测试记忆银行"
# 创建VertexMemoryBank实例
self.memory_bank = VertexMemoryBank(
project_id="test-project",
location="us-central1"
)
print(f"✅ 创建记忆银行成功: {bank_id}")
# 测试添加记忆
memory_id = await memory_bank.add_memory(
# 重置本地存储
self.memory_bank.local_memories = {}
self.memory_bank.memory_banks = {}
def test_init(self):
"""测试初始化"""
self.assertEqual(self.memory_bank.project_id, "test-project")
self.assertEqual(self.memory_bank.location, "us-central1")
self.assertEqual(self.memory_bank.local_memories, {})
self.assertEqual(self.memory_bank.memory_banks, {})
# 验证调用了aiplatform.init
self.mock_init.assert_called_once_with(project="test-project", location="us-central1")
def test_from_config(self):
"""测试从配置创建实例"""
with patch('src.jixia.memory.vertex_memory_bank.get_google_genai_config') as mock_config:
mock_config.return_value = {
'project_id': 'config-project',
'location': 'europe-west1'
}
memory_bank = VertexMemoryBank.from_config()
self.assertEqual(memory_bank.project_id, "config-project")
self.assertEqual(memory_bank.location, "europe-west1")
def test_from_config_missing_project_id(self):
"""测试从配置创建实例时缺少project_id"""
with patch('src.jixia.memory.vertex_memory_bank.get_google_genai_config') as mock_config:
mock_config.return_value = {
'project_id': None,
'location': 'europe-west1'
}
with self.assertRaises(ValueError) as context:
VertexMemoryBank.from_config()
self.assertIn("Google Cloud Project ID 未配置", str(context.exception))
async def test_create_memory_bank(self):
"""测试创建记忆银行"""
memory_bank_id = await self.memory_bank.create_memory_bank("tieguaili")
# 验证返回的ID格式
self.assertEqual(memory_bank_id, "memory_bank_tieguaili_test-project")
# 验证内部状态
self.assertIn("tieguaili", self.memory_bank.memory_banks)
self.assertEqual(self.memory_bank.memory_banks["tieguaili"], memory_bank_id)
self.assertIn("tieguaili", self.memory_bank.local_memories)
self.assertEqual(self.memory_bank.local_memories["tieguaili"], [])
async def test_create_memory_bank_with_display_name(self):
"""测试创建记忆银行时指定显示名称"""
memory_bank_id = await self.memory_bank.create_memory_bank(
"tieguaili",
"铁拐李的专属记忆银行"
)
# 验证返回的ID格式
self.assertEqual(memory_bank_id, "memory_bank_tieguaili_test-project")
async def test_add_memory(self):
"""测试添加记忆"""
# 先创建记忆银行
await self.memory_bank.create_memory_bank("tieguaili")
# 添加记忆
memory_id = await self.memory_bank.add_memory(
agent_name="tieguaili",
content="测试记忆在分析NVIDIA时我倾向于关注潜在的市场风险和估值泡沫。",
content="在讨论NVIDIA股票时我倾向于逆向思维关注潜在风险",
memory_type="preference",
debate_topic="NVIDIA投资分析",
metadata={"test": True, "priority": "high"}
metadata={"source": "manual"}
)
print(f"✅ 添加记忆成功: {memory_id}")
# 测试搜索记忆
results = await memory_bank.search_memories(
# 验证返回的ID格式
self.assertEqual(memory_id, "memory_tieguaili_0")
# 验证记忆已存储
self.assertEqual(len(self.memory_bank.local_memories["tieguaili"]), 1)
stored_memory = self.memory_bank.local_memories["tieguaili"][0]
self.assertEqual(stored_memory["id"], memory_id)
self.assertEqual(stored_memory["content"], "在讨论NVIDIA股票时我倾向于逆向思维关注潜在风险。")
self.assertEqual(stored_memory["memory_type"], "preference")
self.assertEqual(stored_memory["debate_topic"], "NVIDIA投资分析")
self.assertIn("source", stored_memory["metadata"])
self.assertEqual(stored_memory["metadata"]["source"], "manual")
self.assertIn("agent_name", stored_memory["metadata"])
self.assertEqual(stored_memory["metadata"]["agent_name"], "tieguaili")
async def test_add_memory_creates_bank_if_not_exists(self):
"""测试添加记忆时自动创建记忆银行"""
# 不先创建记忆银行,直接添加记忆
memory_id = await self.memory_bank.add_memory(
agent_name="tieguaili",
query="NVIDIA 风险",
limit=5
)
print(f"✅ 搜索记忆成功,找到 {len(results)} 条结果")
for i, result in enumerate(results, 1):
print(f" {i}. {result['content'][:50]}... (相关度: {result.get('relevance_score', 'N/A')})")
# 测试获取上下文
context = await memory_bank.get_agent_context("tieguaili", "NVIDIA投资分析")
print(f"✅ 获取上下文成功,长度: {len(context)} 字符")
return True
except Exception as e:
print(f"❌ Memory Bank 基础测试失败: {e}")
return False
async def test_memory_enhanced_agent():
"""测试记忆增强智能体"""
print("\n🧪 测试记忆增强智能体...")
try:
# 创建记忆银行
memory_bank = VertexMemoryBank.from_config()
# 创建记忆增强智能体
agent = MemoryEnhancedAgent("tieguaili", memory_bank)
print(f"✅ 创建记忆增强智能体: {agent.personality.chinese_name}")
# 测试基于记忆的响应
response = await agent.respond_with_memory(
message="你对NVIDIA的最新财报有什么看法",
topic="NVIDIA投资分析"
)
print(f"✅ 智能体响应成功")
print(f" 响应长度: {len(response)} 字符")
print(f" 响应预览: {response[:100]}...")
# 测试学习偏好
await agent.learn_preference(
preference="用户偏好保守的投资策略,关注风险控制",
topic="投资偏好"
)
print("✅ 学习用户偏好成功")
# 测试保存策略洞察
await agent.save_strategy_insight(
insight="在高估值环境下,应该更加关注基本面分析和风险管理",
topic="投资策略"
)
print("✅ 保存策略洞察成功")
return True
except Exception as e:
print(f"❌ 记忆增强智能体测试失败: {e}")
return False
async def test_baxian_memory_council():
"""测试八仙记忆议会"""
print("\n🧪 测试八仙记忆议会...")
try:
# 创建记忆增强议会
council = await create_memory_enhanced_council()
print(f"✅ 创建八仙记忆议会成功,智能体数量: {len(council.agents)}")
# 列出所有智能体
for agent_name, agent in council.agents.items():
print(f" - {agent.personality.chinese_name} ({agent_name})")
# 进行简短的记忆辩论测试
print("\n🏛️ 开始记忆增强辩论测试...")
result = await council.conduct_memory_debate(
topic="比特币投资价值分析",
participants=["tieguaili", "lvdongbin"], # 只选择两个智能体进行快速测试
rounds=1
content="测试内容"
)
print(f"✅ 辩论完成")
print(f" 主题: {result['topic']}")
print(f" 参与者: {len(result['participants'])}")
print(f" 发言次数: {result['total_exchanges']}")
# 验证记忆银行已被自动创建
self.assertIn("tieguaili", self.memory_bank.memory_banks)
self.assertIn("tieguaili", self.memory_bank.local_memories)
# 显示辩论内容
for exchange in result['conversation_history']:
print(f" {exchange['chinese_name']}: {exchange['content'][:80]}...")
# 验证记忆已存储
self.assertEqual(len(self.memory_bank.local_memories["tieguaili"]), 1)
# 获取集体记忆摘要
summary = await council.get_collective_memory_summary("比特币投资价值分析")
print(f"\n📚 集体记忆摘要长度: {len(summary)} 字符")
async def test_search_memories(self):
"""测试搜索记忆"""
# 先创建记忆银行并添加一些记忆
await self.memory_bank.create_memory_bank("tieguaili")
return True
await self.memory_bank.add_memory(
agent_name="tieguaili",
content="在讨论NVIDIA股票时我倾向于逆向思维关注潜在风险。",
memory_type="preference",
debate_topic="NVIDIA投资分析"
)
except Exception as e:
print(f"❌ 八仙记忆议会测试失败: {e}")
return False
await self.memory_bank.add_memory(
agent_name="tieguaili",
content="我喜欢关注苹果公司的创新产品发布会。",
memory_type="preference",
debate_topic="AAPL投资分析"
)
# 搜索NVIDIA相关记忆
results = await self.memory_bank.search_memories(
agent_name="tieguaili",
query="NVIDIA"
)
async def test_memory_bank_initialization():
"""测试Memory Bank初始化"""
print("\n🧪 测试 Memory Bank 初始化...")
self.assertEqual(len(results), 1)
self.assertEqual(results[0]["content"], "在讨论NVIDIA股票时我倾向于逆向思维关注潜在风险。")
self.assertIn("relevance_score", results[0])
try:
config = get_google_genai_config()
project_id = config.get('project_id')
location = config.get('location', 'us-central1')
async def test_search_memories_with_type_filter(self):
"""测试带类型过滤的搜索记忆"""
# 先创建记忆银行并添加不同类型的记忆
await self.memory_bank.create_memory_bank("tieguaili")
if not project_id:
print("❌ 项目ID未配置跳过初始化测试")
return False
await self.memory_bank.add_memory(
agent_name="tieguaili",
content="在讨论NVIDIA股票时我倾向于逆向思维关注潜在风险。",
memory_type="preference",
debate_topic="NVIDIA投资分析"
)
# 初始化所有八仙记忆银行
memory_bank = await initialize_baxian_memory_banks(project_id, location)
print(f"✅ 八仙记忆银行初始化成功")
print(f" 记忆银行数量: {len(memory_bank.memory_banks)}")
await self.memory_bank.add_memory(
agent_name="tieguaili",
content="在NVIDIA的辩论中我使用了技术分析策略。",
memory_type="strategy",
debate_topic="NVIDIA投资分析"
)
for agent_name, bank_name in memory_bank.memory_banks.items():
chinese_name = memory_bank.baxian_agents.get(agent_name, agent_name)
print(f" - {chinese_name}: {bank_name}")
# 搜索NVIDIA相关记忆只返回preference类型
results = await self.memory_bank.search_memories(
agent_name="tieguaili",
query="NVIDIA",
memory_type="preference"
)
return True
self.assertEqual(len(results), 1)
self.assertEqual(results[0]["metadata"]["memory_type"], "preference")
except Exception as e:
print(f"❌ Memory Bank 初始化测试失败: {e}")
return False
async def test_search_memories_no_results(self):
"""测试搜索无结果的情况"""
# 搜索不存在的记忆银行
results = await self.memory_bank.search_memories(
agent_name="nonexistent",
query="test"
)
self.assertEqual(results, [])
async def main():
"""主测试函数"""
print("🏛️ 稷下学宫 Vertex AI Memory Bank 测试")
print("=" * 50)
# 搜索空的记忆银行
await self.memory_bank.create_memory_bank("tieguaili")
results = await self.memory_bank.search_memories(
agent_name="tieguaili",
query="test"
)
# 检查配置
print("🔧 检查配置...")
config = get_google_genai_config()
self.assertEqual(results, [])
print(f"Google API Key: {'已配置' if config.get('api_key') else '未配置'}")
print(f"Project ID: {config.get('project_id', '未配置')}")
print(f"Location: {config.get('location', 'us-central1')}")
print(f"Memory Bank: {'启用' if config.get('memory_bank_enabled', 'TRUE') == 'TRUE' else '禁用'}")
async def test_get_agent_context(self):
"""测试获取智能体上下文"""
# 先创建记忆银行并添加一些记忆
await self.memory_bank.create_memory_bank("tieguaili")
if not config.get('project_id'):
print("\n❌ 测试需要 Google Cloud Project ID")
print("请设置环境变量: GOOGLE_CLOUD_PROJECT_ID=your-project-id")
return
await self.memory_bank.add_memory(
agent_name="tieguaili",
content="在讨论NVIDIA股票时我倾向于逆向思维关注潜在风险。",
memory_type="preference",
debate_topic="NVIDIA投资分析"
)
# 运行测试
tests = [
("Memory Bank 基础功能", test_memory_bank_basic),
("记忆增强智能体", test_memory_enhanced_agent),
("八仙记忆议会", test_baxian_memory_council),
("Memory Bank 初始化", test_memory_bank_initialization)
await self.memory_bank.add_memory(
agent_name="tieguaili",
content="在NVIDIA的辩论中我使用了技术分析策略。",
memory_type="strategy",
debate_topic="NVIDIA投资分析"
)
# 获取上下文
context = await self.memory_bank.get_agent_context("tieguaili", "NVIDIA投资分析")
# 验证上下文包含预期内容
self.assertIn("# 铁拐李的记忆上下文", context)
self.assertIn("## 偏好记忆", context)
self.assertIn("## 策略记忆", context)
self.assertIn("在讨论NVIDIA股票时我倾向于逆向思维关注潜在风险。", context)
self.assertIn("在NVIDIA的辩论中我使用了技术分析策略。", context)
async def test_get_agent_context_no_memories(self):
"""测试获取智能体上下文但无相关记忆"""
# 先创建记忆银行
await self.memory_bank.create_memory_bank("tieguaili")
# 获取上下文
context = await self.memory_bank.get_agent_context("tieguaili", "NVIDIA投资分析")
# 验证上下文包含暂无相关记忆的提示
self.assertIn("# 铁拐李的记忆上下文", context)
self.assertIn("暂无相关记忆。", context)
async def test_save_debate_session(self):
"""测试保存辩论会话"""
conversation_history = [
{"agent": "tieguaili", "content": "NVIDIA的估值过高存在泡沫风险。"},
{"agent": "lvdongbin", "content": "NVIDIA在AI领域的领先地位不可忽视。"},
{"agent": "tieguaili", "content": "但我们需要考虑竞争加剧和增长放缓的可能性。"}
]
results = []
outcomes = {
"winner": "lvdongbin",
"insights": {
"tieguaili": "铁拐李的风险意识值得肯定但在AI趋势的判断上略显保守。"
}
}
for test_name, test_func in tests:
print(f"\n{'='*20}")
print(f"测试: {test_name}")
print(f"{'='*20}")
# 保存辩论会话
await self.memory_bank.save_debate_session(
debate_topic="NVIDIA投资分析",
participants=["tieguaili", "lvdongbin"],
conversation_history=conversation_history,
outcomes=outcomes
)
# 验证铁拐李的记忆已保存
self.assertIn("tieguaili", self.memory_bank.local_memories)
self.assertEqual(len(self.memory_bank.local_memories["tieguaili"]), 2)
# 验证第一条记忆是对话总结
summary_memory = self.memory_bank.local_memories["tieguaili"][0]
self.assertEqual(summary_memory["memory_type"], "conversation")
self.assertIn("铁拐李在本次辩论中的主要观点", summary_memory["content"])
# 验证第二条记忆是策略洞察
strategy_memory = self.memory_bank.local_memories["tieguaili"][1]
self.assertEqual(strategy_memory["memory_type"], "strategy")
self.assertIn("铁拐李的风险意识值得肯定", strategy_memory["content"])
def test_summarize_conversation(self):
"""测试对话总结"""
conversation_history = [
{"agent": "tieguaili", "content": "第一点看法NVIDIA的估值过高存在泡沫风险。"},
{"agent": "lvdongbin", "content": "NVIDIA在AI领域的领先地位不可忽视。"},
{"agent": "tieguaili", "content": "第二点看法:我们需要考虑竞争加剧和增长放缓的可能性。"},
{"agent": "tieguaili", "content": "第三点看法:从技术分析角度看,股价已出现超买信号。"}
]
summary = self.memory_bank._summarize_conversation(conversation_history, "tieguaili")
# 验证总结包含预期内容
self.assertIn("铁拐李在本次辩论中的主要观点", summary)
self.assertIn("第一点看法NVIDIA的估值过高存在泡沫风险。", summary)
self.assertIn("第二点看法:我们需要考虑竞争加剧和增长放缓的可能性。", summary)
self.assertIn("第三点看法:从技术分析角度看,股价已出现超买信号。", summary)
def test_extract_strategy_insight_winner(self):
"""测试提取策略洞察 - 获胜者"""
outcomes = {
"winner": "tieguaili",
"insights": {}
}
insight = self.memory_bank._extract_strategy_insight(outcomes, "tieguaili")
self.assertIn("铁拐李在本次辩论中获胜", insight)
def test_extract_strategy_insight_from_insights(self):
"""测试从洞察中提取策略洞察"""
outcomes = {
"winner": "lvdongbin",
"insights": {
"tieguaili": "铁拐李的风险意识值得肯定但在AI趋势的判断上略显保守。"
}
}
insight = self.memory_bank._extract_strategy_insight(outcomes, "tieguaili")
self.assertEqual(insight, "铁拐李的风险意识值得肯定但在AI趋势的判断上略显保守。")
if __name__ == '__main__':
# 创建一个异步测试运行器
def run_async_test(test_case):
"""运行异步测试用例"""
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
result = await test_func()
results.append((test_name, result))
except Exception as e:
print(f"❌ 测试 {test_name} 出现异常: {e}")
results.append((test_name, False))
return loop.run_until_complete(test_case)
finally:
loop.close()
# 显示测试结果摘要
print(f"\n{'='*50}")
print("🏛️ 测试结果摘要")
print(f"{'='*50}")
# 获取所有以test_开头的异步方法并运行它们
suite = unittest.TestSuite()
test_instance = TestVertexMemoryBank()
test_instance.setUp()
passed = 0
total = len(results)
# 添加同步测试
suite.addTest(TestVertexMemoryBank('test_init'))
suite.addTest(TestVertexMemoryBank('test_from_config'))
suite.addTest(TestVertexMemoryBank('test_from_config_missing_project_id'))
suite.addTest(TestVertexMemoryBank('test_summarize_conversation'))
suite.addTest(TestVertexMemoryBank('test_extract_strategy_insight_winner'))
suite.addTest(TestVertexMemoryBank('test_extract_strategy_insight_from_insights'))
for test_name, result in results:
status = "✅ 通过" if result else "❌ 失败"
print(f"{status} {test_name}")
if result:
passed += 1
# 添加异步测试
async_tests = [
'test_create_memory_bank',
'test_create_memory_bank_with_display_name',
'test_add_memory',
'test_add_memory_creates_bank_if_not_exists',
'test_search_memories',
'test_search_memories_with_type_filter',
'test_search_memories_no_results',
'test_get_agent_context',
'test_get_agent_context_no_memories',
'test_save_debate_session'
]
print(f"\n📊 总体结果: {passed}/{total} 测试通过")
for test_name in async_tests:
test_method = getattr(test_instance, test_name)
suite.addTest(unittest.FunctionTestCase(lambda tm=test_method: run_async_test(tm())))
if passed == total:
print("🎉 所有测试通过Vertex AI Memory Bank 集成成功!")
else:
print("⚠️ 部分测试失败,请检查配置和网络连接")
if __name__ == "__main__":
# 运行测试
asyncio.run(main())
runner = unittest.TextTestRunner(verbosity=2)
runner.run(suite)

90
tests/validate_models.py Normal file
View File

@ -0,0 +1,90 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Script to validate model availability on OpenRouter
"""
import asyncio
import aiohttp
import os
# Read models from .env file
models = []
with open(".env", "r") as f:
for line in f:
line = line.strip()
if line.endswith(":free"):
models.append(line)
# Get API key from environment or .env file
api_key = os.getenv('OPENROUTER_API_KEY')
if not api_key:
with open(".env", "r") as f:
for line in f:
line = line.strip()
if line.startswith("sk-or-v1-"):
api_key = line
break
if not api_key:
print("❌ No API key found")
exit(1)
async def test_model(session, model):
"""Test if a model is available on OpenRouter"""
# Remove :free tag for API call
clean_model = model.split(":")[0]
url = "https://openrouter.ai/api/v1/chat/completions"
headers = {
"Authorization": f"Bearer {api_key}",
"Content-Type": "application/json"
}
payload = {
"model": clean_model,
"messages": [
{"role": "user", "content": "Hello, world!"}
],
"max_tokens": 10
}
try:
async with session.post(url, headers=headers, json=payload, timeout=10) as response:
if response.status == 200:
return model, True, "Available"
else:
error_text = await response.text()
return model, False, f"Status {response.status}: {error_text[:100]}"
except Exception as e:
return model, False, f"Exception: {str(e)[:100]}"
async def main():
"""Main function"""
print(f"🔍 Testing {len(models)} models from .env file...")
async with aiohttp.ClientSession() as session:
tasks = [test_model(session, model) for model in models]
results = await asyncio.gather(*tasks)
print("\n📊 Results:")
valid_models = []
invalid_models = []
for model, is_valid, message in results:
if is_valid:
print(f"{model:<50} - {message}")
valid_models.append(model)
else:
print(f"{model:<50} - {message}")
invalid_models.append(model)
print(f"\n✅ Valid models ({len(valid_models)}):")
for model in valid_models:
print(f" {model}")
print(f"\n❌ Invalid models ({len(invalid_models)}):")
for model in invalid_models:
print(f" {model}")
if __name__ == "__main__":
asyncio.run(main())

55
topic/ai_pow.md Normal file
View File

@ -0,0 +1,55 @@
📜 扩写辩题
辩题名称:
「工作量证明 vs 无限爬取AI时代的内容创作激励机制还能存在吗
时间背景:
2025年Cloudflare CEO Matthew Prince 在旧金山一次 AI 安全大会上炮轰大规模爬虫,认为它们摧毁了原本的互联网生态。他提出:
“过去的互联网像李时珍采药——要付出劳动与验证,才能获得一个有效入口。如今的 AI 爬虫却像无穷无尽的采掘机器,不知疲倦地收割,却没有给原创者留下任何激励与回报。”
与此同时,硅谷的 Builder 社区正推行 MCPModel Context Protocol让 AI 以“对接协议”而非“无差别爬取”的方式获取数据,声称能在创作者与模型之间建立新的契约关系。
主要人物:
Cloudflare CEO Matthew Prince —— 代表基础设施与网络安全视角,批评无限爬虫。
Google 工程师 —— 代表传统“工作量证明”模式的守护者,主张“劳动验证”是内容价值的基石。
AI Builder 社区代表 —— 主张 MCP 是未来的规则,可以让 AI 与人类建立新的合作范式。
独立内容创作者 —— 代表个体劳动者,担忧在 AI 时代失去激励与生计。
普通网民 —— 代表“消费者”,在意是否还能自由获取内容而非被强力限制。
争议焦点:
工作量证明是否仍是网络的合法入口?
—— 互联网是否应该继续依赖“像李时珍采药”般的人工劳动与逐步验证?
无限爬取是否摧毁创作激励?
—— AI 爬虫是否导致创作者“被收割”,却得不到任何经济与精神回报?
MCP 是否是新的契约机制?
—— 它能否平衡 AI 的效率与人类创作的价值?还是只是换汤不换药?
创作者与消费者的权利冲突
—— 在 AI 时代,谁才是真正的“受益者”?谁应该承担成本?
八仙视角示例:
吕洞宾理性派可能支持建立“机器人协议2.0”,强调规则必要性。
何仙姑:柔性派,主张人机共生,用“织云坊”比喻 AI 与创作的协作空间。
张果老:守旧派,强调稳扎稳打,批评 MCP 过于理想化。
韩湘子:年轻激进派,主张 AI 与人类应该打通“仙凡通道”,共享未来。
曹国舅:精英派,支持“分级授权”制度,让大机构管理数据流。
铁拐李:草根派,反对过度限制,呼吁知识共享。
汉钟离:天道派,认为 AI 爬虫只是天道循环的一部分。
蓝采和:贫民派,为底层创作者发声,担忧被彻底边缘化。

12
tournament_models.txt Normal file
View File

@ -0,0 +1,12 @@
cognitivecomputations/dolphin-mistral-24b-venice-edition:free
deepseek/deepseek-chat-v3-0324:free
google/gemma-3-27b-it:free
microsoft/mai-ds-r1:free
mistralai/mistral-small-3.1-24b-instruct:free
moonshotai/kimi-k2:free
openai/gpt-oss-20b:free
qwen/qwq-32b:free
rekaai/reka-flash-3:free
tencent/hunyuan-a13b-instruct:free
tngtech/deepseek-r1t-chimera:free
z-ai/glm-4.5-air:free