feat: update OCI Provider to 7.20 and integrate Vault configuration
refactor: rework Terraform configuration to store secrets in Consul and Vault
docs: add Vault implementation docs and configuration guide
chore: remove unused configuration files and scripts
feat: add Nomad cluster leader discovery script and docs
feat: implement the MCP configuration sharing scheme and sync scripts
style: update the network-access notes in the README
test: add a Consul Provider integration test script
parent ad531936dd
commit f72b17a34f
@ -0,0 +1 @@
/mnt/fnsync/mcp/mcp_shared_config.json
@ -0,0 +1,90 @@
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": [
        "-y",
        "@upstash/context7-mcp"
      ],
      "env": {
        "DEFAULT_MINIMUM_TOKENS": ""
      }
    },
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "./"
      ],
      "disabled": false,
      "alwaysAllow": []
    },
    "sequentialthinking": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-sequential-thinking"
      ],
      "alwaysAllow": [
        "sequentialthinking"
      ]
    },
    "git": {
      "command": "uvx",
      "args": [
        "mcp-server-git",
        "--repository",
        "./"
      ],
      "alwaysAllow": [
        "git_status",
        "git_diff_unstaged",
        "git_diff",
        "git_diff_staged",
        "git_commit",
        "git_add",
        "git_reset",
        "git_log",
        "git_create_branch",
        "git_checkout",
        "git_show",
        "git_branch"
      ]
    },
    "time": {
      "command": "uvx",
      "args": [
        "mcp-server-time"
      ],
      "alwaysAllow": [
        "get_current_time",
        "convert_time"
      ]
    },
    "memory": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-memory"
      ],
      "alwaysAllow": [
        "create_entities",
        "create_relations",
        "add_observations",
        "delete_entities",
        "delete_observations"
      ]
    },
    "tavily": {
      "command": "npx",
      "args": [
        "-y",
        "tavily-mcp@0.2.3"
      ],
      "env": {
        "TAVILY_API_KEY": "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"
      }
    }
  }
}
@ -0,0 +1,41 @@
# MCP Configuration Sharing Scheme

This project shares MCP (Model Context Protocol) configuration across multiple IDEs on multiple hosts, using an NFS volume for cross-host synchronization.

## Configuration Layout

- `/root/.mcp/mcp_settings.json` - main MCP config file (symlink to the NFS volume)
- `/mnt/fnsync/mcp/mcp_shared_config.json` - unified config file on the NFS volume (source of truth)
- `mcp_shared_config.json` - symlink to the config file on the NFS volume
- `sync_mcp_config.sh` - sync script that copies the unified config to individual IDEs
- `sync_all_mcp_configs.sh` - full sync script covering all known IDEs and AI assistants
- `.kilocode/mcp.json` - symlink to the shared config
- config files for other IDEs and AI assistants

## Unified Configuration Contents

The following MCP servers are merged:

### Standard servers
- context7: library documentation and code examples
- filesystem: file system access
- sequentialthinking: sequential-thinking tool
- git: Git operations
- time: time-related operations
- memory: knowledge graph and memory management
- tavily: web search

## Usage

1. **Update the config**: edit `/mnt/fnsync/mcp/mcp_shared_config.json` to change MCP server settings (or go through the symlink `/root/.mcp/mcp_settings.json`)
2. **Sync the config**:
   - run `./sync_mcp_config.sh` to sync to a specific IDE
   - run `./sync_all_mcp_configs.sh` to sync to all IDEs and AI assistants
3. **Verify**: confirm MCP features work in each IDE

## Maintenance Notes

- Make all MCP configuration changes in `/mnt/fnsync/mcp/mcp_shared_config.json` (the source of truth)
- `/root/.mcp/mcp_settings.json` is now a symlink to the unified config on the NFS volume
- Because the config lives on an NFS volume, changes propagate across hosts automatically
- To add a new IDE, link or copy its config file from `/mnt/fnsync/mcp/mcp_shared_config.json`
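The sync scripts mentioned above essentially copy the authoritative NFS config over each IDE's local config file. A minimal sketch of that loop, assuming a temp-dir stand-in for the NFS path so it runs anywhere (only `.kilocode/mcp.json` appears in this repo; other targets would be added per IDE):

```shell
#!/bin/bash
# Minimal sketch of a sync_mcp_config.sh-style loop. The authoritative path
# /mnt/fnsync/mcp/mcp_shared_config.json comes from the docs; here we default
# to a temp file so the sketch is runnable anywhere.
set -eu

workdir="$(mktemp -d)"
SOURCE="${SOURCE:-$workdir/mcp_shared_config.json}"
[ -f "$SOURCE" ] || printf '{"mcpServers": {}}\n' > "$SOURCE"

# Copy the authoritative config to one target, creating parent dirs as needed.
sync_one() {
    target="$1"
    mkdir -p "$(dirname "$target")"
    cp "$SOURCE" "$target"
}

# .kilocode/mcp.json is a real target in this repo; others would be listed here.
sync_one "$workdir/.kilocode/mcp.json"
cmp -s "$SOURCE" "$workdir/.kilocode/mcp.json" && echo "synced"
```

In the real setup the targets are symlinks or copies of the NFS file, so a plain `cp` (or `ln -sf` for the symlinked ones) is all the sync step needs.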
README.md +68
@ -115,6 +115,74 @@ nomad job status
nomad node status
```

### ⚠️ Important: Network Access Notes

**Tailscale network access**:
- The Nomad and Consul services in this project are reached over the Tailscale network
- When accessing Nomad (port 4646) and Consul (port 8500), you must use the Tailscale-assigned IP address
- Wrong: `http://127.0.0.1:4646` or `http://localhost:8500` (connection will fail)
- Right: `http://100.x.x.x:4646` or `http://100.x.x.x:8500` (using the Tailscale IP)

**Getting the Tailscale IP**:
```bash
# Show this node's Tailscale IP
tailscale ip -4

# List all nodes in the Tailscale network
tailscale status
```

**Common problems**:
- On a "connection refused" error, first confirm you are using the correct Tailscale IP
- Make sure the Tailscale service is up and running
- Check that network policy allows access to the relevant ports over the Tailscale interface
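Because loopback addresses fail silently here, it can help to reject them before exporting `NOMAD_ADDR`. A sketch of such a guard (the helper name is ours, not part of the repo's scripts):

```shell
#!/bin/bash
# Reject loopback addresses for services that are only reachable over Tailscale.
# Helper name and example address are illustrative.
is_tailscale_addr() {
    case "$1" in
        *127.0.0.1*|*localhost*|*'[::1]'*) return 1 ;;  # loopback: unreachable here
        *) return 0 ;;
    esac
}

addr="http://100.90.159.68:4646"
if is_tailscale_addr "$addr"; then
    export NOMAD_ADDR="$addr"
    echo "NOMAD_ADDR=$NOMAD_ADDR"
else
    echo "refusing loopback address: $addr" >&2
fi
```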

### 🔄 Nomad Cluster Leader Rotation and Access Strategy

**Nomad leader mechanism**:
- Nomad uses the Raft protocol for distributed consensus; the cluster has exactly one leader node
- The leader handles all write operations and coordinates cluster state
- When the leader fails, the cluster automatically elects a new one

**Access strategies during leader rotation**:

1. **Discover the leader dynamically**:
```bash
# Query the current leader
curl -s http://<any-Nomad-server-IP>:4646/v1/status/leader
# Example result: "100.90.159.68:4647" (this is the Raft port; the HTTP API listens on 4646)

# Call the API on the leader's HTTP port
curl -s http://100.90.159.68:4646/v1/nodes
```

2. **Load-balancing options**:
- **DNS load balancing**: use Consul DNS and resolve `nomad.service.consul` to the current leader
- **Proxy-level load balancing**: add health checks in Nginx/HAProxy and route automatically to the active leader
- **Client-side retries**: implement retry logic that falls back to other server nodes on connection failure

3. **Recommended access pattern**:
```bash
# Leader discovery script
#!/bin/bash
# Any Nomad server IP
SERVER_IP="100.116.158.95"
# Query the current leader (returned as "ip:4647", the Raft port)
LEADER=$(curl -s http://${SERVER_IP}:4646/v1/status/leader | sed 's/"//g')
# Swap the Raft port for the HTTP API port before using the address
LEADER_HTTP="${LEADER%:*}:4646"
nomad node status -address=http://${LEADER_HTTP}
```

4. **High availability configuration**:
- Add all Nomad server nodes to the client configuration
- Clients automatically connect to an available server node
- Write operations are automatically forwarded to the leader

**Notes**:
- Leader rotation is automatic and normally needs no manual intervention
- During a leader election the cluster may briefly be unable to process writes
- Implement appropriate retry logic in applications to ride out transient failures during leader changes
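The leader endpoint returns a quoted `ip:4647` Raft address, so client-side logic has to strip the quotes and substitute the HTTP API port 4646 before reusing it. That parsing step can be isolated into a small function (the function name is ours):

```shell
#!/bin/bash
# Convert a /v1/status/leader response ("ip:4647", the Raft port) into an
# HTTP API base URL on port 4646. Function name is illustrative.
leader_http_url() {
    raw="$1"
    raw="${raw%\"}"; raw="${raw#\"}"    # strip surrounding quotes, if any
    host="${raw%:*}"                    # drop the Raft port
    echo "http://${host}:4646"
}

leader_http_url '"100.90.159.68:4647"'   # → http://100.90.159.68:4646
```

In a retry loop, each known server IP would be queried in turn until one answers, and the parsed URL used for all subsequent API calls.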

## 🛠️ Common Commands

| Command | Description |
@ -0,0 +1,58 @@
# Traefik dynamic configuration file
# Dynamic routers, middlewares, etc. go here

# HTTP router example
http:
  routers:
    # Test router
    test-router:
      rule: "Host(`test.service.consul`)"
      service: "test-service"
      entryPoints:
        - "https"
      tls:
        certResolver: "default"

  services:
    # Test service
    test-service:
      loadBalancer:
        servers:
          - url: "http://127.0.0.1:8080"
        passHostHeader: true

  middlewares:
    # Basic-auth middleware
    basic-auth:
      basicAuth:
        users:
          - "test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/"

    # Security-headers middleware
    security-headers:
      headers:
        sslRedirect: true
        stsSeconds: 31536000
        stsIncludeSubdomains: true
        stsPreload: true
        forceSTSHeader: true
        customFrameOptionsValue: "SAMEORIGIN"
        contentTypeNosniff: true
        browserXssFilter: true

# TCP router example
tcp:
  routers:
    # TCP test router
    tcp-test-router:
      rule: "HostSNI(`*`)"
      service: "tcp-test-service"
      entryPoints:
        - "https"

  services:
    # TCP test service
    tcp-test-service:
      loadBalancer:
        servers:
          - address: "127.0.0.1:8080"
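The `basicAuth` user entry in the dynamic config above uses an APR1 (htpasswd-style) hash. Assuming `openssl` is available, a new entry can be generated like this (`test`/`secret` are example credentials, not from the repo):

```shell
#!/bin/bash
# Generate a Traefik basicAuth user entry with an APR1 (htpasswd-style) hash.
# "test"/"secret" are example credentials.
user="test"
hash="$(openssl passwd -apr1 "secret")"
echo "${user}:${hash}"
```

`htpasswd -nb <user> <password>` from apache2-utils produces the same format if openssl is not at hand.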
@ -0,0 +1,38 @@
global:
  scrape_interval: 15s
  evaluation_interval: 15s

rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'openfaas'
    static_configs:
      - targets: ['gateway:8080']
    metrics_path: /metrics
    scrape_interval: 15s
    scrape_timeout: 10s

  - job_name: 'nats'
    static_configs:
      - targets: ['nats:8222']
    metrics_path: /metrics
    scrape_interval: 15s
    scrape_timeout: 10s

  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node-exporter:9100']
    scrape_interval: 15s
    scrape_timeout: 10s

  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']
    scrape_interval: 15s
    scrape_timeout: 10s
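A Prometheus config like the one above is best validated with `promtool check config prometheus.yml` before deployment; a lighter-weight sanity check that the expected scrape jobs are declared can also be scripted (helper name and sample file are illustrative):

```shell
#!/bin/bash
# Check that a Prometheus config declares the expected scrape jobs.
# Helper name and sample config are illustrative.
set -eu

check_jobs() {
    cfg="$1"; shift
    for job in "$@"; do
        grep -q "job_name: '${job}'" "$cfg" || { echo "missing job: $job" >&2; return 1; }
    done
}

cfg="$(mktemp)"
cat > "$cfg" <<'EOF'
scrape_configs:
  - job_name: 'prometheus'
  - job_name: 'openfaas'
  - job_name: 'node-exporter'
EOF

check_jobs "$cfg" prometheus openfaas node-exporter && echo "ok"
```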
@ -0,0 +1,63 @@
# Traefik static configuration file
global:
  sendAnonymousUsage: false

# API and dashboard
api:
  dashboard: true
  insecure: true  # for testing only; use a secure setup in production

# Entry points
entryPoints:
  http:
    address: ":80"
    # Redirect HTTP to HTTPS
    http:
      redirections:
        entryPoint:
          to: https
          scheme: https
  https:
    address: ":443"
  api:
    address: ":8080"

# Providers
providers:
  # Enable the Consul Catalog provider
  consulCatalog:
    exposedByDefault: false
    prefix: "traefik"
    refreshInterval: 15s
    requireConsistent: true
    stale: false
    watch: true
    endpoint:
      address: "http://127.0.0.1:8500"
      scheme: "http"
    connectAware: true
    connectByDefault: false

  # Enable the Nomad provider
  nomad:
    exposedByDefault: false
    prefix: "traefik"
    refreshInterval: 15s
    stale: false
    watch: true
    endpoint:
      address: "http://127.0.0.1:4646"
      scheme: "http"
    allowEmptyServices: true

# Logging
log:
  level: "INFO"
  format: "json"

accessLog:
  format: "json"
  fields:
    defaultMode: "keep"
    headers:
      defaultMode: "keep"
@ -1,177 +0,0 @@
# Nomad Cluster Telegraf Monitoring Deployment Handover

## 📋 Project Overview

**Task**: deploy a Telegraf-based disk monitoring system for the Nomad cluster
**Goal**: monitor disk usage, system performance, and other metrics on every cluster node
**Monitoring stack**: Telegraf + InfluxDB 2.x + Grafana

## 🎯 Current Status

### ✅ Completed Work

#### 1. Container runtime migration
- **ch3 node**: ✅ Docker removed, Podman 4.9.3 + Compose 1.0.6 installed
- **ash2e node**: ✅ Docker removal and Podman installation completed

#### 2. Telegraf monitoring deployment
- **Running nodes**: ash3c, semaphore, master, hcp1, hcp2, hcs (6 nodes)
- **Monitoring data**: already flowing into InfluxDB
- **Configuration mode**: remote configuration URL

#### 3. Monitoring configuration
- **InfluxDB URL**: `http://influxdb1.tailnet-68f9.ts.net:8086`
- **Token**: `VU_dOCVZzqEHb9jSFsDe0bJlEBaVbiG4LqfoczlnmcbfrbmklSt904HJPL4idYGvVi0c2eHkYDi2zCTni7Ay4w==`
- **Organization**: `seekkey`
- **Bucket**: `VPS`
- **Remote config**: `http://influxdb1.tailnet-68f9.ts.net:8086/api/v2/telegrafs/0f8a73496790c000`

## 🔄 Remaining Work

### 1. Telegraf installation on the remaining nodes
**Status**: some nodes still need handling
**Problem nodes**: ch3, ch2, ash1d, syd

**Problem description**:
- These nodes fail while downloading the InfluxData repository key
- Error message: `HTTPSConnection.__init__() got an unexpected keyword argument 'cert_file'`
- Cause: a Python urllib3 version compatibility issue

**Solution**:
A simplified install script `/root/mgmt/configuration/fix-telegraf-simple.sh` was created, with these steps:
1. Download the Telegraf 1.36.1 binary directly
2. Create a simplified startup script
3. Deploy it as `telegraf-simple.service`

### 2. Cluster role configuration
**Current configuration**:
```ini
[nomad_servers]
semaphore, ash2e, ash1d, ch2, ch3 (5 servers)

[nomad_clients]
master, ash3c (2 clients)
```

**Pending**:
- The ash2e, ash1d, and ch2 nodes need the Nomad binary installed
- These nodes currently lack a Nomad installation

## 📁 Important File Locations

### Configuration files
- **Inventory**: `/root/mgmt/configuration/inventories/production/nomad-cluster.ini`
- **Global config**: `/root/mgmt/configuration/inventories/production/group_vars/all.yml`

### Playbooks
- **Telegraf deployment**: `/root/mgmt/configuration/playbooks/setup-disk-monitoring.yml`
- **Docker removal**: `/root/mgmt/configuration/playbooks/remove-docker-install-podman.yml`
- **Nomad configuration**: `/root/mgmt/configuration/playbooks/configure-nomad-tailscale.yml`

### Templates
- **Telegraf main config**: `/root/mgmt/configuration/templates/telegraf.conf.j2`
- **Disk monitoring**: `/root/mgmt/configuration/templates/disk-monitoring.conf.j2`
- **System monitoring**: `/root/mgmt/configuration/templates/system-monitoring.conf.j2`
- **Environment variables**: `/root/mgmt/configuration/templates/telegraf-env.j2`

### Fix scripts
- **Simplified install**: `/root/mgmt/configuration/fix-telegraf-simple.sh`
- **Remote deployment**: `/root/mgmt/configuration/deploy-telegraf-remote.sh`

## 🔧 Technical Details

### Telegraf service configuration
```ini
[Unit]
Description=Telegraf
After=network.target

[Service]
Type=simple
User=telegraf
Group=telegraf
ExecStart=/usr/bin/telegraf --config http://influxdb1.tailnet-68f9.ts.net:8086/api/v2/telegrafs/0f8a73496790c000
Restart=always
RestartSec=5
EnvironmentFile=/etc/default/telegraf

[Install]
WantedBy=multi-user.target
```

### Environment file (/etc/default/telegraf)
```bash
INFLUX_TOKEN=VU_dOCVZzqEHb9jSFsDe0bJlEBaVbiG4LqfoczlnmcbfrbmklSt904HJPL4idYGvVi0c2eHkYDi2zCTni7Ay4w==
INFLUX_ORG=seekkey
INFLUX_BUCKET=VPS
INFLUX_URL=http://influxdb1.tailnet-68f9.ts.net:8086
```

### Monitored metric types
- Disk usage (all mount points: /, /var, /tmp, /opt, /home)
- Disk I/O performance (read/write throughput, IOPS)
- inode usage
- CPU usage (overall + per core)
- Memory usage
- Network interface statistics
- System load and kernel statistics
- Service status (Nomad, Podman, Tailscale, Docker)
- Process monitoring
- Log file size monitoring

## 🚀 Next Steps

### Immediate tasks
1. **Finish Telegraf installation on the remaining nodes**:
```bash
cd /root/mgmt/configuration
./fix-telegraf-simple.sh
```

2. **Verify monitoring data**:
```bash
# Check Telegraf status on all nodes
ansible all -i inventories/production/nomad-cluster.ini -m shell -a "systemctl is-active telegraf" --limit '!mac-laptop,!win-laptop'
```

3. **Verify the data in Grafana**:
- Confirm InfluxDB has data from every node
- Create a disk-monitoring dashboard

### Follow-up optimizations
1. **Set alert rules**:
- Disk usage > 80%: warning
- Disk usage > 90%: critical

2. **Tune the monitoring configuration**:
- Adjust collection intervals to actual needs
- Add more custom metrics

3. **Finish the Nomad installation**:
- Install the Nomad binary on ash2e, ash1d, and ch2
- Configure cluster connectivity

## ❗ Known Issues

1. **Repository key download failure**:
- Affected nodes: ch3, ch2, ash1d, ash2e, ash3c, syd
- Solution: use the simplified install script

2. **Package manager lock conflicts**:
- Concurrent apt operations on multiple nodes caused lock contention
- Solution: process nodes one at a time with `serial: 1`

3. **Missing telegraf user**:
- Some nodes need the telegraf system user created manually
- Solution: `useradd --system --no-create-home --shell /bin/false telegraf`

## 📞 Contact Information

**Handover date**: 2025-09-24
**Current status**: Telegraf running successfully on 6 of 11 nodes
**Key result**: disk monitoring data is flowing into InfluxDB
**Priority**: finish Telegraf installation on the remaining 5 nodes

---

**Note**: all scripts and configuration files have been tested and can be used directly. Follow the steps above in order, completing each step before starting the next.
@ -1,46 +0,0 @@
#!/bin/bash
# Nomad cluster disk-monitoring deployment script
# Uses the existing InfluxDB + Grafana monitoring stack

echo "🚀 Deploying Nomad cluster disk monitoring..."

# Check the configuration file
if [[ ! -f "inventories/production/group_vars/all.yml" ]]; then
    echo "❌ Configuration file missing; configure the InfluxDB connection first"
    exit 1
fi

# Show the configuration
echo "📋 Current monitoring configuration:"
grep -E "influxdb_|disk_usage_|collection_interval" inventories/production/group_vars/all.yml

echo ""
read -p "🤔 Is this configuration correct? (y/N): " confirm
if [[ $confirm != [yY] ]]; then
    echo "❌ Deployment cancelled; adjust the configuration and retry"
    exit 1
fi

# Deploy to all nodes
echo "📦 Installing Telegraf on all nodes..."
ansible-playbook -i inventories/production/nomad-cluster.ini playbooks/setup-disk-monitoring.yml

# Check the result
if [[ $? -eq 0 ]]; then
    echo "✅ Disk monitoring deployed!"
    echo ""
    echo "📊 Monitoring details:"
    echo "- Data is sent to your existing InfluxDB"
    echo "- Build dashboards in Grafana to view it"
    echo "- Local log files are disabled to save disk space"
    echo "- Metrics are collected every 30 seconds"
    echo ""
    echo "🔧 Next steps:"
    echo "1. Create a Nomad cluster monitoring dashboard in Grafana"
    echo "2. Set disk-usage alert rules"
    echo "3. Check monitoring status with:"
    echo "   ansible all -i inventories/production/nomad-cluster.ini -m shell -a 'systemctl status telegraf'"
else
    echo "❌ Deployment failed; check the error output"
    exit 1
fi
@ -1,40 +0,0 @@
#!/bin/bash
# Quick Telegraf monitoring deployment using a remote InfluxDB 2.x configuration

echo "🚀 Deploying Telegraf monitoring with a remote InfluxDB 2.x configuration..."

# Variables
INFLUX_TOKEN="VU_dOCVZzqEHb9jSFsDe0bJlEBaVbiG4LqfoczlnmcbfrbmklSt904HJPL4idYGvVi0c2eHkYDi2zCTni7Ay4w=="
TELEGRAF_CONFIG_URL="http://influxdb1.tailnet-68f9.ts.net:8086/api/v2/telegrafs/0f8a73496790c000"

# Check network connectivity
echo "🔍 Checking the InfluxDB connection..."
if curl -s --max-time 5 "http://influxdb1.tailnet-68f9.ts.net:8086/health" > /dev/null; then
    echo "✅ InfluxDB is reachable"
else
    echo "❌ Cannot reach InfluxDB; check the network"
    exit 1
fi

# Deploy using the remote configuration
echo "📦 Deploying to all nodes..."
ansible-playbook -i inventories/production/nomad-cluster.ini playbooks/setup-disk-monitoring.yml \
  -e "use_remote_config=true" \
  -e "telegraf_config_url=$TELEGRAF_CONFIG_URL" \
  -e "influxdb_token=$INFLUX_TOKEN"

# Check the result
if [[ $? -eq 0 ]]; then
    echo "✅ Telegraf monitoring deployed!"
    echo ""
    echo "📊 Configuration details:"
    echo "- Remote configuration: $TELEGRAF_CONFIG_URL"
    echo "- InfluxDB server: influxdb1.tailnet-68f9.ts.net:8086"
    echo "- Local log files disabled"
    echo ""
    echo "🔧 Verify the deployment:"
    echo "ansible all -i inventories/production/nomad-cluster.ini -m shell -a 'systemctl status telegraf --no-pager'"
else
    echo "❌ Deployment failed; check the error output"
    exit 1
fi
@ -1,53 +0,0 @@
#!/bin/bash
# Simplified Telegraf install script - uses the Ubuntu official repository

echo "🚀 Installing Telegraf via the simplified approach..."

# Nodes that failed (need manual handling)
FAILED_NODES="ch3,ch2,ash1d,ash2e,ash3c,syd"

echo "📦 Step 1: install Telegraf (Ubuntu official build) on the failed nodes..."
ansible $FAILED_NODES -i inventories/production/nomad-cluster.ini -m apt -a "name=telegraf state=present update_cache=yes" --become

if [[ $? -eq 0 ]]; then
    echo "✅ Telegraf installed"
else
    echo "❌ Install failed; trying the manual route..."
    # Manual install
    ansible $FAILED_NODES -i inventories/production/nomad-cluster.ini -m shell -a "apt update && apt install -y telegraf" --become
fi

echo "🔧 Step 2: configure Telegraf to use the remote configuration..."

# Create the environment file
ansible $FAILED_NODES -i inventories/production/nomad-cluster.ini -m copy -a "content='INFLUX_TOKEN=VU_dOCVZzqEHb9jSFsDe0bJlEBaVbiG4LqfoczlnmcbfrbmklSt904HJPL4idYGvVi0c2eHkYDi2zCTni7Ay4w==
INFLUX_ORG=nomad
INFLUX_BUCKET=nomad_monitoring
INFLUX_URL=http://influxdb1.tailnet-68f9.ts.net:8086' dest=/etc/default/telegraf owner=root group=root mode=0600" --become

# Create the systemd service file
ansible $FAILED_NODES -i inventories/production/nomad-cluster.ini -m copy -a "content='[Unit]
Description=Telegraf - node monitoring service
Documentation=https://github.com/influxdata/telegraf
After=network.target

[Service]
Type=notify
User=telegraf
Group=telegraf
ExecStart=/usr/bin/telegraf --config http://influxdb1.tailnet-68f9.ts.net:8086/api/v2/telegrafs/0f8a73496790c000
ExecReload=/bin/kill -HUP \$MAINPID
KillMode=control-group
Restart=on-failure
RestartSec=5
TimeoutStopSec=20
EnvironmentFile=/etc/default/telegraf

[Install]
WantedBy=multi-user.target' dest=/etc/systemd/system/telegraf.service owner=root group=root mode=0644" --become

echo "🔄 Step 3: start the service..."
ansible $FAILED_NODES -i inventories/production/nomad-cluster.ini -m systemd -a "daemon_reload=yes name=telegraf state=started enabled=yes" --become

echo "✅ Checking the result..."
ansible $FAILED_NODES -i inventories/production/nomad-cluster.ini -m shell -a "systemctl status telegraf --no-pager -l | head -5" --become
@ -1,52 +0,0 @@
#!/bin/bash
# Simplified approach: run Telegraf directly from the remote configuration

echo "🚀 Creating the simplified Telegraf service..."

# Failed nodes
FAILED_NODES="ch3,ch2,ash1d,ash2e,syd"

echo "📥 Step 1: download and install the Telegraf binary..."
ansible $FAILED_NODES -i inventories/production/nomad-cluster.ini -m shell -a "
cd /tmp &&
curl -L https://dl.influxdata.com/telegraf/releases/telegraf-1.36.1_linux_amd64.tar.gz -o telegraf.tar.gz &&
tar -xzf telegraf.tar.gz &&
sudo cp telegraf-1.36.1/usr/bin/telegraf /usr/bin/ &&
sudo chmod +x /usr/bin/telegraf &&
telegraf version
" --become

echo "🔧 Step 2: create the simplified startup script..."
ansible $FAILED_NODES -i inventories/production/nomad-cluster.ini -m copy -a "content='#!/bin/bash
export INFLUX_TOKEN=VU_dOCVZzqEHb9jSFsDe0bJlEBaVbiG4LqfoczlnmcbfrbmklSt904HJPL4idYGvVi0c2eHkYDi2zCTni7Ay4w==
export INFLUX_ORG=seekkey
export INFLUX_BUCKET=VPS
export INFLUX_URL=http://influxdb1.tailnet-68f9.ts.net:8086

/usr/bin/telegraf --config http://influxdb1.tailnet-68f9.ts.net:8086/api/v2/telegrafs/0f8a73496790c000
' dest=/usr/local/bin/telegraf-start.sh owner=root group=root mode=0755" --become

echo "🔄 Step 3: stop the old service and start the new simplified one..."
ansible $FAILED_NODES -i inventories/production/nomad-cluster.ini -m systemd -a "name=telegraf state=stopped enabled=no" --become || true

# Create the simplified systemd service
ansible $FAILED_NODES -i inventories/production/nomad-cluster.ini -m copy -a "content='[Unit]
Description=Telegraf (Simplified)
After=network.target

[Service]
Type=simple
User=telegraf
Group=telegraf
ExecStart=/usr/local/bin/telegraf-start.sh
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target' dest=/etc/systemd/system/telegraf-simple.service owner=root group=root mode=0644" --become

echo "🚀 Step 4: start the simplified service..."
ansible $FAILED_NODES -i inventories/production/nomad-cluster.ini -m systemd -a "daemon_reload=yes name=telegraf-simple state=started enabled=yes" --become

echo "✅ Checking the result..."
ansible $FAILED_NODES -i inventories/production/nomad-cluster.ini -m shell -a "systemctl status telegraf-simple --no-pager -l | head -10" --become
@ -1,45 +0,0 @@
# NFS CSI volume configuration
type        = "csi"
id          = "nfs-fnsync"
name        = "nfs-fnsync"
external_id = "nfs-fnsync"

# Plugin configuration
plugin_id    = "nfs"
capacity_min = "1GiB"
capacity_max = "100GiB"

# Mount options
mount_options {
  fs_type     = "nfs4"
  mount_flags = ["rw", "relatime", "vers=4.2"]
}

# Access mode
access_mode     = "single-node-writer"
attachment_mode = "file-system"

# Topology constraints
topology_request {
  preferred {
    topology {
      segments = {
        "rack" = "rack-1"
      }
    }
  }

  required {
    topology {
      segments = {
        "datacenter" = "dc1"
      }
    }
  }
}

# Parameters
parameters {
  server = "snail"
  share  = "/fs/1000/nfs/Fnsync"
}
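A volume spec like the one above is submitted with `nomad volume create <file>` (or `nomad volume register` for pre-provisioned volumes). A quick pre-flight check for the required top-level keys can be scripted before calling the CLI (the helper name and sample spec are illustrative):

```shell
#!/bin/bash
# Sanity-check a Nomad CSI volume spec for required keys before running
# `nomad volume create <file>`. Helper name and sample spec are illustrative.
set -eu

check_volume_spec() {
    f="$1"
    for key in '^type' '^id' '^plugin_id'; do
        grep -Eq "${key}[[:space:]]*=" "$f" || { echo "missing key: $key" >&2; return 1; }
    done
}

spec="$(mktemp)"
cat > "$spec" <<'EOF'
type      = "csi"
id        = "nfs-fnsync"
plugin_id = "nfs"
EOF

check_volume_spec "$spec" && echo "spec ok"
```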
@ -1,394 +0,0 @@
|
|||
'!'=3491531
|
||||
'#'=0
|
||||
'$'=3491516
|
||||
'*'=( )
|
||||
'?'=0
|
||||
-=569JNRXghiks
|
||||
0=/usr/bin/zsh
|
||||
@=( )
|
||||
ADDR=( 'PATH=/root/.trae-server/sdks/workspaces/cced0550/versions/node/current\x3a/root/.trae-server/sdks/versions/node/current\x3a' )
|
||||
AGNOSTER_AWS_BG=green
|
||||
AGNOSTER_AWS_FG=black
|
||||
AGNOSTER_AWS_PROD_BG=red
|
||||
AGNOSTER_AWS_PROD_FG=yellow
|
||||
AGNOSTER_BZR_CLEAN_BG=green
|
||||
AGNOSTER_BZR_CLEAN_FG=black
|
||||
AGNOSTER_BZR_DIRTY_BG=yellow
|
||||
AGNOSTER_BZR_DIRTY_FG=black
|
||||
AGNOSTER_CONTEXT_BG=black
|
||||
AGNOSTER_CONTEXT_FG=default
|
||||
AGNOSTER_DIR_BG=blue
|
||||
AGNOSTER_DIR_FG=black
|
||||
AGNOSTER_GIT_BRANCH_STATUS=true
|
||||
AGNOSTER_GIT_CLEAN_BG=green
|
||||
AGNOSTER_GIT_CLEAN_FG=black
|
||||
AGNOSTER_GIT_DIRTY_BG=yellow
|
||||
AGNOSTER_GIT_DIRTY_FG=black
|
||||
AGNOSTER_GIT_INLINE=false
|
||||
AGNOSTER_HG_CHANGED_BG=yellow
|
||||
AGNOSTER_HG_CHANGED_FG=black
|
||||
AGNOSTER_HG_CLEAN_BG=green
|
||||
AGNOSTER_HG_CLEAN_FG=black
|
||||
AGNOSTER_HG_NEWFILE_BG=red
|
||||
AGNOSTER_HG_NEWFILE_FG=white
|
||||
AGNOSTER_STATUS_BG=black
|
||||
AGNOSTER_STATUS_FG=default
|
||||
AGNOSTER_STATUS_JOB_FG=cyan
|
||||
AGNOSTER_STATUS_RETVAL_FG=red
|
||||
AGNOSTER_STATUS_RETVAL_NUMERIC=false
|
||||
AGNOSTER_STATUS_ROOT_FG=yellow
|
||||
AGNOSTER_VENV_BG=blue
|
||||
AGNOSTER_VENV_FG=black
|
||||
ANTHROPIC_AUTH_TOKEN=sk-M0YtRnZNHJkqFP7DjrNsf3jVDe4INKqiBGN0YgQcisudOYbp
|
||||
ANTHROPIC_BASE_URL=https://anyrouter.top
|
||||
ARCH=x86_64
|
||||
ARGC=0
|
||||
BG
|
||||
BROWSER=/root/.trae-server/bin/stable-0643ffaa788ad4dd46eaa12cec109ac40595c816/bin/helpers/browser.sh
|
||||
BUFFER=''
|
||||
CDPATH=''
|
||||
COLORTERM=truecolor
|
||||
COLUMNS=104
|
||||
CPUTYPE=x86_64
|
||||
CURRENT_BG=NONE
|
||||
CURRENT_DEFAULT_FG=default
|
||||
CURRENT_FG=black
|
||||
CURSOR=''
|
||||
DBUS_SESSION_BUS_ADDRESS='unix:path=/run/user/0/bus'
|
||||
DISTRO_COMMIT=0643ffaa788ad4dd46eaa12cec109ac40595c816
|
||||
DISTRO_QUALITY=stable
|
||||
DISTRO_VERSION=1.100.3
|
||||
DISTRO_VSCODIUM_RELEASE=''
|
||||
DOWNLOAD_RETRY_COUNT=0
|
||||
EDITOR=vim
|
||||
EGID=0
|
||||
EPOCHREALTIME
|
||||
EPOCHSECONDS
|
||||
EUID=0
|
||||
EXTRACT_RETRY_COUNT=0
|
||||
FG
|
||||
FIGNORE=''
|
||||
FPATH=/root/.oh-my-zsh/plugins/z:/root/.oh-my-zsh/plugins/web-search:/root/.oh-my-zsh/plugins/vscode:/root/.oh-my-zsh/plugins/tmux:/root/.oh-my-zsh/plugins/systemd:/root/.oh-my-zsh/plugins/sudo:/root/.oh-my-zsh/plugins/history-substring-search:/root/.oh-my-zsh/plugins/extract:/root/.oh-my-zsh/plugins/command-not-found:/root/.oh-my-zsh/plugins/colored-man-pages:/root/.oh-my-zsh/custom/plugins/zsh-completions:/root/.oh-my-zsh/custom/plugins/zsh-syntax-highlighting:/root/.oh-my-zsh/custom/plugins/zsh-autosuggestions:/root/.oh-my-zsh/plugins/gcloud:/root/.oh-my-zsh/plugins/aws:/root/.oh-my-zsh/plugins/helm:/root/.oh-my-zsh/plugins/kubectl:/root/.oh-my-zsh/plugins/terraform:/root/.oh-my-zsh/plugins/ansible:/root/.oh-my-zsh/plugins/docker-compose:/root/.oh-my-zsh/plugins/docker:/root/.oh-my-zsh/plugins/git:/root/.oh-my-zsh/functions:/root/.oh-my-zsh/completions:/root/.oh-my-zsh/custom/functions:/root/.oh-my-zsh/custom/completions:/root/.oh-my-zsh/cache/completions:/usr/local/share/zsh/site-functions:/usr/share/zsh/vendor-functions:/usr/share/zsh/vendor-completions:/usr/share/zsh/functions/Calendar:/usr/share/zsh/functions/Chpwd:/usr/share/zsh/functions/Completion:/usr/share/zsh/functions/Completion/AIX:/usr/share/zsh/functions/Completion/BSD:/usr/share/zsh/functions/Completion/Base:/usr/share/zsh/functions/Completion/Cygwin:/usr/share/zsh/functions/Completion/Darwin:/usr/share/zsh/functions/Completion/Debian:/usr/share/zsh/functions/Completion/Linux:/usr/share/zsh/functions/Completion/Mandriva:/usr/share/zsh/functions/Completion/Redhat:/usr/share/zsh/functions/Completion/Solaris:/usr/share/zsh/functions/Completion/Unix:/usr/share/zsh/functions/Completion/X:/usr/share/zsh/functions/Completion/Zsh:/usr/share/zsh/functions/Completion/openSUSE:/usr/share/zsh/functions/Exceptions:/usr/share/zsh/functions/MIME:/usr/share/zsh/functions/Math:/usr/share/zsh/functions/Misc:/usr/share/zsh/functions/Newuser:/usr/share/zsh/functions/Prompts:/usr/share/zsh/functions/TCP:/usr/share/zsh/
functions/VCS_Info:/usr/share/zsh/functions/VCS_Info/Backends:/usr/share/zsh/functions/Zftp:/usr/share/zsh/functions/Zle:/root/.oh-my-zsh/custom/plugins/zsh-completions/src
|
||||
FUNCNEST=500
|
||||
FX
|
||||
GID=0
|
||||
GIT_ASKPASS=/root/.trae-server/bin/stable-0643ffaa788ad4dd46eaa12cec109ac40595c816/extensions/git/dist/askpass.sh
|
||||
GIT_PAGER=''
|
||||
HISTCHARS='!^#'
|
||||
HISTCMD=1888
|
||||
HISTFILE=/root/.zsh_history
|
||||
HISTORY_SUBSTRING_SEARCH_ENSURE_UNIQUE=''
|
||||
HISTORY_SUBSTRING_SEARCH_FUZZY=''
|
||||
HISTORY_SUBSTRING_SEARCH_GLOBBING_FLAGS=i
|
||||
HISTORY_SUBSTRING_SEARCH_HIGHLIGHT_FOUND='bg=magenta,fg=white,bold'
|
||||
HISTORY_SUBSTRING_SEARCH_HIGHLIGHT_NOT_FOUND='bg=red,fg=white,bold'
|
||||
HISTORY_SUBSTRING_SEARCH_PREFIXED=''
|
||||
HISTSIZE=10000
|
||||
HOME=/root
|
||||
HOST=semaphore
|
||||
IFS=$' \t\n\C-@'
|
||||
ITEM='PATH=/root/.trae-server/sdks/workspaces/cced0550/versions/node/current\x3a/root/.trae-server/sdks/versions/node/current\x3a'
|
||||
KEYBOARD_HACK=''
|
||||
KEYTIMEOUT=40
|
||||
LANG=C.UTF-8
|
||||
LANGUAGE=en_US.UTF-8
|
||||
LC_ALL=C.UTF-8
|
||||
LESS=-R
|
||||
LINENO=77
|
||||
LINES=40
|
||||
LISTMAX=100
|
||||
LOGNAME=root
|
||||
LSCOLORS=Gxfxcxdxbxegedabagacad
|
||||
LS_COLORS='rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=00:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.avif=01;35:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.webp=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:*~=00;90:*#=00;90:*.bak=00;90:*.old=00;90:*.orig=00;90:*.part=00;90:*.rej=00;90:*.swp=00;90:*.tmp=00;90:*.dpkg-dist=00;90:*.dpkg-old=00;90:*.ucf-dist=00;90:*.ucf-new=00;90:*.ucf-old=00;90:*.rpmnew=00;90:*.rpmorig=00;90:*.rpmsave=00;90:'
|
||||
MACHTYPE=x86_64
|
||||
MAILCHECK=60
|
||||
MAILPATH=''
|
||||
MANAGER_LOGS_DIR=/root/.trae-server/manager-logs/1758782495499_337482
|
||||
MANPATH=''
|
||||
MATCH=''
|
||||
MBEGIN=''
|
||||
MEND=''
|
||||
MODULE_PATH=/usr/lib/x86_64-linux-gnu/zsh/5.9
|
||||
MOTD_SHOWN=pam
|
||||
NEWLINE=$'\n'
|
||||
NOMAD_ADDR=http://100.81.26.3:4646
|
||||
NULLCMD=cat
|
||||
OLDPWD=/root/mgmt
|
||||
OPTARG=''
|
||||
OPTIND=1
|
||||
OSTYPE=linux-gnu
|
||||
OS_RELEASE_ID=debian
|
||||
PAGER=''
|
||||
PATH=/root/.trae-server/sdks/workspaces/cced0550/versions/node/current:/root/.trae-server/sdks/versions/node/current:/root/.trae-server/sdks/workspaces/cced0550/versions/node/current:/root/.trae-server/sdks/versions/node/current:/root/.trae-server/bin/stable-0643ffaa788ad4dd46eaa12cec109ac40595c816/bin/remote-cli:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PLATFORM=linux
POWERLEVEL9K_INSTANT_PROMPT=off
PPID=2438215
PROMPT2='%_> '
PROMPT3='?# '
PROMPT4='+%N:%i> '
PROMPT=$'\n(TraeAI-3) %~ [%?] $ '
PS1=$'\n(TraeAI-3) %~ [%?] $ '
PS2='%_> '
PS3='?# '
PS4='+%N:%i> '
PSVAR=''
PWD=/root/mgmt/configuration
RANDOM=6724
READNULLCMD=/usr/bin/pager
REMOTE_VERSION=1058011568130_8
SAVEHIST=10000
SCRIPT_ID=f227a05726be7c5a36752917
SECONDS=1076
SEGMENT_SEPARATOR=
SERVER_APP_NAME=Trae
SERVER_APP_QUALITY=dev
SERVER_APP_VERSION=''
SERVER_ARCH=x64
SERVER_DATA_DIR=/root/.trae-server
SERVER_DIR=/root/.trae-server/bin/stable-0643ffaa788ad4dd46eaa12cec109ac40595c816
SERVER_DOWNLOAD_PREFIX=https://lf-cdn.trae.com.cn/obj/trae-com-cn/pkg/server/releases/stable/0643ffaa788ad4dd46eaa12cec109ac40595c816/linux/
SERVER_EXTENSIONS_DIR=/root/.trae-server/extensions
SERVER_HOST=127.0.0.1
SERVER_INITIAL_EXTENSIONS='--install-extension gitpod.gitpod-remote-ssh'
SERVER_LISTEN_FLAG='--port=0'
SERVER_LOGFILE=/root/.trae-server/.stable-0643ffaa788ad4dd46eaa12cec109ac40595c816.log
SERVER_LOGS_DIR=/root/.trae-server/logs
SERVER_PACKAGE_NAME=Trae-linux-x64-1058011568130_8.tar.gz
SERVER_PIDFILE=/root/.trae-server/.stable-0643ffaa788ad4dd46eaa12cec109ac40595c816.pid
SERVER_SCRIPT=/root/.trae-server/bin/stable-0643ffaa788ad4dd46eaa12cec109ac40595c816/index_trae.js
SERVER_SCRIPT_PRODUCT=/root/.trae-server/bin/stable-0643ffaa788ad4dd46eaa12cec109ac40595c816/product.json
SERVER_TOKENFILE=/root/.trae-server/.stable-0643ffaa788ad4dd46eaa12cec109ac40595c816.token
SHELL=/usr/bin/zsh
SHLVL=2
SHORT_HOST=semaphore
SPROMPT='zsh: correct '\''%R'\'' to '\''%r'\'' [nyae]? '
SSH_CLIENT='100.86.9.29 49793 22'
SSH_CONNECTION='100.86.9.29 49793 100.116.158.95 22'
TERM=xterm-256color
TERM_PRODUCT=Trae
TERM_PROGRAM=vscode
TERM_PROGRAM_VERSION=1.100.3
TIMEFMT='%J %U user %S system %P cpu %*E total'
TMPPREFIX=/tmp/zsh
TMP_DIR=/run/user/0
TRAE_AI_SHELL_ID=3
TRAE_DETECT_REGION=CN
TRAE_REMOTE_EXTENSION_REGION=cn
TRAE_REMOTE_SKIP_REMOTE_CHECK=''
TRAE_RESOLVE_TYPE=ssh
TRY_BLOCK_ERROR=-1
TRY_BLOCK_INTERRUPT=-1
TTY=/dev/pts/8
TTYIDLE=-1
UID=0
USER=root
USERNAME=root
USER_ZDOTDIR=/root
VARNAME=PATH
VENDOR=debian
VSCODE_GIT_ASKPASS_EXTRA_ARGS=''
VSCODE_GIT_ASKPASS_MAIN=/root/.trae-server/bin/stable-0643ffaa788ad4dd46eaa12cec109ac40595c816/extensions/git/dist/askpass-main.js
VSCODE_GIT_ASKPASS_NODE=/root/.trae-server/bin/stable-0643ffaa788ad4dd46eaa12cec109ac40595c816/node
VSCODE_GIT_IPC_HANDLE=/run/user/0/vscode-git-7caaecb415.sock
VSCODE_INJECTION=1
VSCODE_IPC_HOOK_CLI=/run/user/0/vscode-ipc-47deaf2b-9a7a-4554-a972-d6b4aa5bb388.sock
VSCODE_SHELL_INTEGRATION=1
VSCODE_STABLE=''
VSCODE_ZDOTDIR=/tmp/root-trae-zsh
WATCH
WORDCHARS=''
XDG_RUNTIME_DIR=/run/user/0
XDG_SESSION_CLASS=user
XDG_SESSION_ID=1636
XDG_SESSION_TYPE=tty
ZDOTDIR=/root
ZSH=/root/.oh-my-zsh
ZSHZ=( [CHOWN]=zf_chown [DIRECTORY_REMOVED]=0 [FUNCTIONS]=$'_zshz_usage\n _zshz_add_or_remove_path\n _zshz_update_datafile\n _zshz_legacy_complete\n _zshz_printv\n _zshz_find_common_root\n _zshz_output\n _zshz_find_matches\n zshz\n _zshz_precmd\n _zshz_chpwd\n _zshz' [MV]=zf_mv [PRINTV]=1 [RM]=zf_rm [USE_FLOCK]=1 )
ZSHZ_EXCLUDE_DIRS=( )
ZSH_ARGZERO=/usr/bin/zsh
ZSH_AUTOSUGGEST_ACCEPT_WIDGETS=( forward-char end-of-line vi-forward-char vi-end-of-line vi-add-eol )
ZSH_AUTOSUGGEST_CLEAR_WIDGETS=( history-search-forward history-search-backward history-beginning-search-forward history-beginning-search-backward history-beginning-search-forward-end history-beginning-search-backward-end history-substring-search-up history-substring-search-down up-line-or-beginning-search down-line-or-beginning-search up-line-or-history down-line-or-history accept-line copy-earlier-word )
ZSH_AUTOSUGGEST_COMPLETIONS_PTY_NAME=zsh_autosuggest_completion_pty
ZSH_AUTOSUGGEST_EXECUTE_WIDGETS=( )
ZSH_AUTOSUGGEST_HIGHLIGHT_STYLE='fg=8'
ZSH_AUTOSUGGEST_IGNORE_WIDGETS=( 'orig-*' beep run-help set-local-history which-command yank yank-pop 'zle-*' )
ZSH_AUTOSUGGEST_ORIGINAL_WIDGET_PREFIX=autosuggest-orig-
ZSH_AUTOSUGGEST_PARTIAL_ACCEPT_WIDGETS=( forward-word emacs-forward-word vi-forward-word vi-forward-word-end vi-forward-blank-word vi-forward-blank-word-end vi-find-next-char vi-find-next-char-skip )
ZSH_AUTOSUGGEST_STRATEGY=( history completion )
ZSH_AUTOSUGGEST_USE_ASYNC=''
ZSH_CACHE_DIR=/root/.oh-my-zsh/cache
ZSH_COMPDUMP=/root/.zcompdump-semaphore-5.9
ZSH_CUSTOM=/root/.oh-my-zsh/custom
ZSH_EVAL_CONTEXT=toplevel
ZSH_HIGHLIGHT_DIRS_BLACKLIST=( )
ZSH_HIGHLIGHT_HIGHLIGHTERS=( main brackets pattern cursor )
ZSH_HIGHLIGHT_PATTERNS=( )
ZSH_HIGHLIGHT_REGEXP=( )
ZSH_HIGHLIGHT_REVISION=HEAD
ZSH_HIGHLIGHT_STYLES=( [arg0]='fg=green' [assign]=none [autodirectory]='fg=green,underline' [back-dollar-quoted-argument]='fg=cyan' [back-double-quoted-argument]='fg=cyan' [back-quoted-argument]=none [back-quoted-argument-delimiter]='fg=magenta' [bracket-error]='fg=red,bold' [bracket-level-1]='fg=blue,bold' [bracket-level-2]='fg=green,bold' [bracket-level-3]='fg=magenta,bold' [bracket-level-4]='fg=yellow,bold' [bracket-level-5]='fg=cyan,bold' [command-substitution]=none [command-substitution-delimiter]='fg=magenta' [commandseparator]=none [comment]='fg=black,bold' [cursor]=standout [cursor-matchingbracket]=standout [default]=none [dollar-double-quoted-argument]='fg=cyan' [dollar-quoted-argument]='fg=yellow' [double-hyphen-option]=none [double-quoted-argument]='fg=yellow' [global-alias]='fg=cyan' [globbing]='fg=blue' [history-expansion]='fg=blue' [line]='' [named-fd]=none [numeric-fd]=none [path]=underline [path_pathseparator]='' [path_prefix_pathseparator]='' [precommand]='fg=green,underline' [process-substitution]=none [process-substitution-delimiter]='fg=magenta' [rc-quote]='fg=cyan' [redirection]='fg=yellow' [reserved-word]='fg=yellow' [root]=standout [single-hyphen-option]=none [single-quoted-argument]='fg=yellow' [suffix-alias]='fg=green,underline' [unknown-token]='fg=red,bold' )
ZSH_HIGHLIGHT_VERSION=0.8.1-dev
ZSH_NAME=zsh
ZSH_PATCHLEVEL=debian/5.9-4+b7
ZSH_SUBSHELL=1
ZSH_THEME=agnoster
ZSH_THEME_GIT_PROMPT_CLEAN=''
ZSH_THEME_GIT_PROMPT_DIRTY='*'
ZSH_THEME_GIT_PROMPT_PREFIX='git:('
ZSH_THEME_GIT_PROMPT_SUFFIX=')'
ZSH_THEME_RUBY_PROMPT_PREFIX='('
ZSH_THEME_RUBY_PROMPT_SUFFIX=')'
ZSH_THEME_RVM_PROMPT_OPTIONS='i v g'
ZSH_THEME_TERM_TAB_TITLE_IDLE='%15<..<%~%<<'
ZSH_THEME_TERM_TITLE_IDLE='%n@%m:%~'
ZSH_TMUX_AUTOCONNECT=true
ZSH_TMUX_AUTONAME_SESSION=false
ZSH_TMUX_AUTOQUIT=false
ZSH_TMUX_AUTOREFRESH=false
ZSH_TMUX_AUTOSTART=false
ZSH_TMUX_AUTOSTART_ONCE=true
ZSH_TMUX_CONFIG=/root/.tmux.conf
ZSH_TMUX_DETACHED=false
ZSH_TMUX_FIXTERM=true
ZSH_TMUX_FIXTERM_WITHOUT_256COLOR=screen
ZSH_TMUX_FIXTERM_WITH_256COLOR=screen-256color
ZSH_TMUX_ITERM2=false
ZSH_TMUX_TERM=screen-256color
ZSH_TMUX_UNICODE=false
ZSH_VERSION=5.9
_=set
_OMZ_ASYNC_FDS=( )
_OMZ_ASYNC_OUTPUT=( )
_OMZ_ASYNC_PIDS=( )
_ZSH_AUTOSUGGEST_ASYNC_FD=''
_ZSH_AUTOSUGGEST_BIND_COUNTS=( [accept-and-hold]=1 [accept-and-infer-next-history]=1 [accept-and-menu-complete]=1 [accept-line]=1 [accept-line-and-down-history]=1 [accept-search]=1 [argument-base]=1 [auto-suffix-remove]=1 [auto-suffix-retain]=1 [autosuggest-capture-completion]=1 [backward-char]=1 [backward-delete-char]=1 [backward-delete-word]=1 [backward-kill-line]=1 [backward-kill-word]=1 [backward-word]=1 [beginning-of-buffer-or-history]=1 [beginning-of-history]=1 [beginning-of-line]=1 [beginning-of-line-hist]=1 [bracketed-paste]=1 [capitalize-word]=1 [clear-screen]=1 [complete-word]=1 [copy-prev-shell-word]=1 [copy-prev-word]=1 [copy-region-as-kill]=1 [deactivate-region]=1 [delete-char]=1 [delete-char-or-list]=1 [delete-word]=1 [describe-key-briefly]=1 [digit-argument]=1 [down-case-word]=1 [down-history]=1 [down-line]=1 [down-line-or-beginning-search]=1 [down-line-or-history]=1 [down-line-or-search]=1 [edit-command-line]=1 [emacs-backward-word]=1 [emacs-forward-word]=1 [end-of-buffer-or-history]=1 [end-of-history]=1 [end-of-line]=1 [end-of-line-hist]=1 [end-of-list]=1 [exchange-point-and-mark]=1 [execute-last-named-cmd]=1 [execute-named-cmd]=1 [expand-cmd-path]=1 [expand-history]=1 [expand-or-complete]=1 [expand-or-complete-prefix]=1 [expand-word]=1 [forward-char]=1 [forward-word]=1 [get-line]=1 [gosmacs-transpose-chars]=1 [history-beginning-search-backward]=1 [history-beginning-search-forward]=1 [history-incremental-pattern-search-backward]=1 [history-incremental-pattern-search-forward]=1 [history-incremental-search-backward]=1 [history-incremental-search-forward]=1 [history-search-backward]=1 [history-search-forward]=1 [history-substring-search-down]=1 [history-substring-search-up]=1 [infer-next-history]=1 [insert-last-word]=1 [kill-buffer]=1 [kill-line]=1 [kill-region]=1 [kill-whole-line]=1 [kill-word]=1 [list-choices]=1 [list-expand]=1 [magic-space]=1 [menu-complete]=1 [menu-expand-or-complete]=1 [menu-select]=1 [neg-argument]=1 [overwrite-mode]=1 
[pound-insert]=1 [push-input]=1 [push-line]=1 [push-line-or-edit]=1 [put-replace-selection]=1 [quote-line]=1 [quote-region]=1 [quoted-insert]=1 [read-command]=1 [recursive-edit]=1 [redisplay]=1 [redo]=1 [reset-prompt]=1 [reverse-menu-complete]=1 [select-a-blank-word]=1 [select-a-shell-word]=1 [select-a-word]=1 [select-in-blank-word]=1 [select-in-shell-word]=1 [select-in-word]=1 [self-insert]=1 [self-insert-unmeta]=1 [send-break]=1 [set-mark-command]=1 [spell-word]=1 [split-undo]=1 [sudo-command-line]=1 [transpose-chars]=1 [transpose-words]=1 [undefined-key]=1 [undo]=1 [universal-argument]=1 [up-case-word]=1 [up-history]=1 [up-line]=1 [up-line-or-beginning-search]=1 [up-line-or-history]=1 [up-line-or-search]=1 [user:zle-line-finish]=1 [vi-add-eol]=1 [vi-add-next]=1 [vi-backward-blank-word]=1 [vi-backward-blank-word-end]=1 [vi-backward-char]=1 [vi-backward-delete-char]=1 [vi-backward-kill-word]=1 [vi-backward-word]=1 [vi-backward-word-end]=1 [vi-beginning-of-line]=1 [vi-caps-lock-panic]=1 [vi-change]=1 [vi-change-eol]=1 [vi-change-whole-line]=1 [vi-cmd-mode]=1 [vi-delete]=1 [vi-delete-char]=1 [vi-digit-or-beginning-of-line]=1 [vi-down-case]=1 [vi-down-line-or-history]=1 [vi-end-of-line]=1 [vi-fetch-history]=1 [vi-find-next-char]=1 [vi-find-next-char-skip]=1 [vi-find-prev-char]=1 [vi-find-prev-char-skip]=1 [vi-first-non-blank]=1 [vi-forward-blank-word]=1 [vi-forward-blank-word-end]=1 [vi-forward-char]=1 [vi-forward-word]=1 [vi-forward-word-end]=1 [vi-goto-column]=1 [vi-goto-mark]=1 [vi-goto-mark-line]=1 [vi-history-search-backward]=1 [vi-history-search-forward]=1 [vi-indent]=1 [vi-insert]=1 [vi-insert-bol]=1 [vi-join]=1 [vi-kill-eol]=1 [vi-kill-line]=1 [vi-match-bracket]=1 [vi-open-line-above]=1 [vi-open-line-below]=1 [vi-oper-swap-case]=1 [vi-pound-insert]=1 [vi-put-after]=1 [vi-put-before]=1 [vi-quoted-insert]=1 [vi-repeat-change]=1 [vi-repeat-find]=1 [vi-repeat-search]=1 [vi-replace]=1 [vi-replace-chars]=1 [vi-rev-repeat-find]=1 [vi-rev-repeat-search]=1 
[vi-set-buffer]=1 [vi-set-mark]=1 [vi-substitute]=1 [vi-swap-case]=1 [vi-undo-change]=1 [vi-unindent]=1 [vi-up-case]=1 [vi-up-line-or-history]=1 [vi-yank]=1 [vi-yank-eol]=1 [vi-yank-whole-line]=1 [visual-line-mode]=1 [visual-mode]=1 [what-cursor-position]=1 [where-is]=1 )
_ZSH_AUTOSUGGEST_BUILTIN_ACTIONS=( clear fetch suggest accept execute enable disable toggle )
_ZSH_AUTOSUGGEST_CHILD_PID=3538664
_ZSH_HIGHLIGHT_PRIOR_BUFFER=''
_ZSH_HIGHLIGHT_PRIOR_CURSOR=0
_ZSH_TMUX_FIXED_CONFIG=/root/.oh-my-zsh/plugins/tmux/tmux.only.conf
__colored_man_pages_dir=/root/.oh-my-zsh/plugins/colored-man-pages
__vsc_current_command='set -o pipefail'
__vsc_env_keys=( )
__vsc_env_values=( )
__vsc_in_command_execution=1
__vsc_nonce=8569aa72-4f06-40f4-a830-4084a537236a
__vsc_prior_prompt2='%_> '
__vsc_prior_prompt=$'\n(TraeAI-3) %~ [%?] $ '
__vsc_use_aa=1
__vscode_shell_env_reporting=''
_comp_assocs=( '' )
_comp_dumpfile=/root/.zcompdump
_comp_options
_comp_setup
_compautos
_comps
_history_substring_search_match_index=0
_history_substring_search_matches=( )
_history_substring_search_query=''
_history_substring_search_query_highlight=''
_history_substring_search_query_parts=( )
_history_substring_search_raw_match_index=0
_history_substring_search_raw_matches=( )
_history_substring_search_refresh_display=''
_history_substring_search_result=''
_history_substring_search_unique_filter=( )
_history_substring_search_zsh_5_9=1
_lastcomp
_patcomps
_postpatcomps
_services
_zsh_highlight__highlighter_brackets_cache=( )
_zsh_highlight__highlighter_cursor_cache=( )
_zsh_highlight__highlighter_main_cache=( '0 2 fg=green memo=zsh-syntax-highlighting' '3 6 none memo=zsh-syntax-highlighting' '7 11 underline memo=zsh-syntax-highlighting' '11 12 none memo=zsh-syntax-highlighting' '13 17 fg=green memo=zsh-syntax-highlighting' '18 20 none memo=zsh-syntax-highlighting' '21 57 none memo=zsh-syntax-highlighting' '21 57 fg=yellow memo=zsh-syntax-highlighting' '58 62 underline memo=zsh-syntax-highlighting' )
_zsh_highlight__highlighter_pattern_cache=( )
_zsh_highlight_main__command_type_cache=( )
aliases
argv=( )
bg
bg_bold
bg_no_bold
bold_color
builtins
cdpath=( )
chpwd_functions=( _zshz_chpwd )
color=( [00]=none [01]=bold [02]=faint [03]=italic [04]=underline [05]=blink [07]=reverse [08]=conceal [22]=normal [23]=no-italic [24]=no-underline [25]=no-blink [27]=no-reverse [28]=no-conceal [30]=black [31]=red [32]=green [33]=yellow [34]=blue [35]=magenta [36]=cyan [37]=white [39]=default [40]=bg-black [41]=bg-red [42]=bg-green [43]=bg-yellow [44]=bg-blue [45]=bg-magenta [46]=bg-cyan [47]=bg-white [49]=bg-default [bg-black]=40 [bg-blue]=44 [bg-cyan]=46 [bg-default]=49 [bg-gray]=40 [bg-green]=42 [bg-grey]=40 [bg-magenta]=45 [bg-red]=41 [bg-white]=47 [bg-yellow]=43 [black]=30 [blink]=05 [blue]=34 [bold]=01 [conceal]=08 [cyan]=36 [default]=39 [faint]=02 [fg-black]=30 [fg-blue]=34 [fg-cyan]=36 [fg-default]=39 [fg-gray]=30 [fg-green]=32 [fg-grey]=30 [fg-magenta]=35 [fg-red]=31 [fg-white]=37 [fg-yellow]=33 [gray]=30 [green]=32 [grey]=30 [italic]=03 [magenta]=35 [no-blink]=25 [no-conceal]=28 [no-italic]=23 [no-reverse]=27 [no-underline]=24 [none]=00 [normal]=22 [red]=31 [reverse]=07 [underline]=04 [white]=37 [yellow]=33 )
colour=( [00]=none [01]=bold [02]=faint [03]=italic [04]=underline [05]=blink [07]=reverse [08]=conceal [22]=normal [23]=no-italic [24]=no-underline [25]=no-blink [27]=no-reverse [28]=no-conceal [30]=black [31]=red [32]=green [33]=yellow [34]=blue [35]=magenta [36]=cyan [37]=white [39]=default [40]=bg-black [41]=bg-red [42]=bg-green [43]=bg-yellow [44]=bg-blue [45]=bg-magenta [46]=bg-cyan [47]=bg-white [49]=bg-default [bg-black]=40 [bg-blue]=44 [bg-cyan]=46 [bg-default]=49 [bg-gray]=40 [bg-green]=42 [bg-grey]=40 [bg-magenta]=45 [bg-red]=41 [bg-white]=47 [bg-yellow]=43 [black]=30 [blink]=05 [blue]=34 [bold]=01 [conceal]=08 [cyan]=36 [default]=39 [faint]=02 [fg-black]=30 [fg-blue]=34 [fg-cyan]=36 [fg-default]=39 [fg-gray]=30 [fg-green]=32 [fg-grey]=30 [fg-magenta]=35 [fg-red]=31 [fg-white]=37 [fg-yellow]=33 [gray]=30 [green]=32 [grey]=30 [italic]=03 [magenta]=35 [no-blink]=25 [no-conceal]=28 [no-italic]=23 [no-reverse]=27 [no-underline]=24 [none]=00 [normal]=22 [red]=31 [reverse]=07 [underline]=04 [white]=37 [yellow]=33 )
commands
comppostfuncs=( )
compprefuncs=( )
d=/usr/share/zsh/functions/Zle
debian_missing_features=( )
dirstack
dis_aliases
dis_builtins
dis_functions
dis_functions_source
dis_galiases
dis_patchars
dis_reswords
dis_saliases
envVarsToReport=( '' )
epochtime
errnos
fg
fg_bold
fg_no_bold
fignore=( )
fpath=( /root/.oh-my-zsh/plugins/z /root/.oh-my-zsh/plugins/web-search /root/.oh-my-zsh/plugins/vscode /root/.oh-my-zsh/plugins/tmux /root/.oh-my-zsh/plugins/systemd /root/.oh-my-zsh/plugins/sudo /root/.oh-my-zsh/plugins/history-substring-search /root/.oh-my-zsh/plugins/extract /root/.oh-my-zsh/plugins/command-not-found /root/.oh-my-zsh/plugins/colored-man-pages /root/.oh-my-zsh/custom/plugins/zsh-completions /root/.oh-my-zsh/custom/plugins/zsh-syntax-highlighting /root/.oh-my-zsh/custom/plugins/zsh-autosuggestions /root/.oh-my-zsh/plugins/gcloud /root/.oh-my-zsh/plugins/aws /root/.oh-my-zsh/plugins/helm /root/.oh-my-zsh/plugins/kubectl /root/.oh-my-zsh/plugins/terraform /root/.oh-my-zsh/plugins/ansible /root/.oh-my-zsh/plugins/docker-compose /root/.oh-my-zsh/plugins/docker /root/.oh-my-zsh/plugins/git /root/.oh-my-zsh/functions /root/.oh-my-zsh/completions /root/.oh-my-zsh/custom/functions /root/.oh-my-zsh/custom/completions /root/.oh-my-zsh/cache/completions /usr/local/share/zsh/site-functions /usr/share/zsh/vendor-functions /usr/share/zsh/vendor-completions /usr/share/zsh/functions/Calendar /usr/share/zsh/functions/Chpwd /usr/share/zsh/functions/Completion /usr/share/zsh/functions/Completion/AIX /usr/share/zsh/functions/Completion/BSD /usr/share/zsh/functions/Completion/Base /usr/share/zsh/functions/Completion/Cygwin /usr/share/zsh/functions/Completion/Darwin /usr/share/zsh/functions/Completion/Debian /usr/share/zsh/functions/Completion/Linux /usr/share/zsh/functions/Completion/Mandriva /usr/share/zsh/functions/Completion/Redhat /usr/share/zsh/functions/Completion/Solaris /usr/share/zsh/functions/Completion/Unix /usr/share/zsh/functions/Completion/X /usr/share/zsh/functions/Completion/Zsh /usr/share/zsh/functions/Completion/openSUSE /usr/share/zsh/functions/Exceptions /usr/share/zsh/functions/MIME /usr/share/zsh/functions/Math /usr/share/zsh/functions/Misc /usr/share/zsh/functions/Newuser /usr/share/zsh/functions/Prompts /usr/share/zsh/functions/TCP 
/usr/share/zsh/functions/VCS_Info /usr/share/zsh/functions/VCS_Info/Backends /usr/share/zsh/functions/Zftp /usr/share/zsh/functions/Zle /root/.oh-my-zsh/custom/plugins/zsh-completions/src )
funcfiletrace
funcsourcetrace
funcstack
functions
functions_source
functrace
galiases
histchars='!^#'
history
historywords
jobdirs
jobstates
jobtexts
key=''
keymaps
langinfo
less_termcap
line=''
mailpath=( )
manpath=( )
module_path=( /usr/lib/x86_64-linux-gnu/zsh/5.9 )
modules
nameddirs
node=hcp1
node_id=baea7bb6
node_name=hcp2
options
parameters
patchars
path=( /root/.trae-server/sdks/workspaces/cced0550/versions/node/current /root/.trae-server/sdks/versions/node/current /root/.trae-server/sdks/workspaces/cced0550/versions/node/current /root/.trae-server/sdks/versions/node/current /root/.trae-server/bin/stable-0643ffaa788ad4dd46eaa12cec109ac40595c816/bin/remote-cli /usr/local/sbin /usr/local/bin /usr/sbin /usr/bin /sbin /bin )
pipestatus=( 0 )
plugins=( git docker docker-compose ansible terraform kubectl helm aws gcloud zsh-autosuggestions zsh-syntax-highlighting zsh-completions colored-man-pages command-not-found extract history-substring-search sudo systemd tmux vscode web-search z )
podman_status=false
precmd_functions=( _omz_async_request omz_termsupport_precmd _zsh_autosuggest_start _zsh_highlight_main__precmd_hook _zshz_precmd __vsc_precmd )
preexec_functions=( omz_termsupport_preexec _zsh_highlight_preexec_hook __vsc_preexec )
prompt=$'\n(TraeAI-3) %~ [%?] $ '
psvar=( )
reset_color
reswords
ret=0
saliases
signals=( EXIT HUP INT QUIT ILL TRAP IOT BUS FPE KILL USR1 SEGV USR2 PIPE ALRM TERM STKFLT CHLD CONT STOP TSTP TTIN TTOU URG XCPU XFSZ VTALRM PROF WINCH POLL PWR SYS ZERR DEBUG )
status=0
sysparams
termcap
terminfo
userdirs
usergroups
vsc_aa_env=( )
vscode_base_dir=/root/.trae-server
watch
widgets
zle_bracketed_paste=( $'\C-[[?2004h' $'\C-[[?2004l' )
zsh_eval_context=( toplevel )
zsh_highlight__memo_feature=1
zsh_highlight__pat_static_bug=false
zsh_scheduled_events
Binary file not shown.
@@ -1,30 +0,0 @@
# Proxy Configuration for istoreos.tailnet-68f9.ts.net:1082
# This file contains proxy environment variables for the management system

# HTTP/HTTPS Proxy Settings
export http_proxy=http://istoreos.tailnet-68f9.ts.net:1082
export https_proxy=http://istoreos.tailnet-68f9.ts.net:1082
export HTTP_PROXY=http://istoreos.tailnet-68f9.ts.net:1082
export HTTPS_PROXY=http://istoreos.tailnet-68f9.ts.net:1082

# No Proxy Settings (local networks and services)
export no_proxy=localhost,127.0.0.1,::1,.local,.tailnet-68f9.ts.net
export NO_PROXY=localhost,127.0.0.1,::1,.local,.tailnet-68f9.ts.net

# Additional proxy settings for various tools
export ALL_PROXY=http://istoreos.tailnet-68f9.ts.net:1082
export all_proxy=http://istoreos.tailnet-68f9.ts.net:1082

# Docker proxy settings
export DOCKER_BUILDKIT=1
export BUILDKIT_PROGRESS=plain

# Git proxy settings
export GIT_HTTP_PROXY=http://istoreos.tailnet-68f9.ts.net:1082
export GIT_HTTPS_PROXY=http://istoreos.tailnet-68f9.ts.net:1082

# Curl proxy settings
export CURL_PROXY=http://istoreos.tailnet-68f9.ts.net:1082

# Wget proxy settings
export WGET_PROXY=http://istoreos.tailnet-68f9.ts.net:1082
@@ -0,0 +1,179 @@
# Terraform Consul Provider Integration Guide

This guide explains how to use the Terraform Consul provider to read the Oracle Cloud configuration directly from Consul, so the private key no longer has to be copied to a temporary file by hand.

## Integration Overview

The Consul provider has been integrated into the existing Terraform configuration and provides the following:

1. The Oracle Cloud configuration (tenancy_ocid, user_ocid, fingerprint, and private_key) is read directly from Consul
2. The private key retrieved from Consul is written to a temporary file automatically
3. The OCI provider is initialized with the configuration retrieved from Consul
4. Configurations for multiple regions (Korea and the US) are supported

## Configuration Structure

### 1. Configuration Storage in Consul

The Oracle Cloud configuration is stored under the following Consul paths:

- Korea region: `config/dev/oracle/kr/`
  - `tenancy_ocid`
  - `user_ocid`
  - `fingerprint`
  - `private_key`

- US region: `config/dev/oracle/us/`
  - `tenancy_ocid`
  - `user_ocid`
  - `fingerprint`
  - `private_key`
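For reference, the expected keys can be seeded with `consul kv put`; the OCID, fingerprint, and key-file values below are placeholders, not real credentials:

```shell
# Store the Oracle Cloud credentials for the Korea region (placeholder values)
consul kv put config/dev/oracle/kr/tenancy_ocid "ocid1.tenancy.oc1..example"
consul kv put config/dev/oracle/kr/user_ocid "ocid1.user.oc1..example"
consul kv put config/dev/oracle/kr/fingerprint "aa:bb:cc:dd:ee:ff"
# The @file form reads the value from a file on disk
consul kv put config/dev/oracle/kr/private_key @/path/to/oci_api_key.pem
```

The US region keys under `config/dev/oracle/us/` are seeded the same way.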
### 2. Terraform Configuration

#### Provider Configuration

```hcl
# Consul provider configuration
provider "consul" {
  address    = "localhost:8500"
  scheme     = "http"
  datacenter = "dc1"
}
```

#### Data Source Configuration

```hcl
# Read the Oracle Cloud configuration from Consul
data "consul_keys" "oracle_config" {
  key {
    name = "tenancy_ocid"
    path = "config/dev/oracle/kr/tenancy_ocid"
  }
  key {
    name = "user_ocid"
    path = "config/dev/oracle/kr/user_ocid"
  }
  key {
    name = "fingerprint"
    path = "config/dev/oracle/kr/fingerprint"
  }
  key {
    name = "private_key"
    path = "config/dev/oracle/kr/private_key"
  }
}
```
#### Private Key File Creation

```hcl
# Write the private key retrieved from Consul to a temporary file
resource "local_file" "oci_kr_private_key" {
  content  = data.consul_keys.oracle_config.var.private_key
  filename = "/tmp/oci_kr_private_key.pem"
}
```
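Note that `local_file` creates world-readable files by default; since the content here is key material, it may be worth restricting the file with the resource's `file_permission` argument, as in this sketch:

```hcl
# Same resource, but restricting the key file to the owner
resource "local_file" "oci_kr_private_key" {
  content         = data.consul_keys.oracle_config.var.private_key
  filename        = "/tmp/oci_kr_private_key.pem"
  file_permission = "0600"
}
```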
#### OCI Provider Configuration

```hcl
# OCI provider using the configuration retrieved from Consul
provider "oci" {
  tenancy_ocid     = data.consul_keys.oracle_config.var.tenancy_ocid
  user_ocid        = data.consul_keys.oracle_config.var.user_ocid
  fingerprint      = data.consul_keys.oracle_config.var.fingerprint
  private_key_path = local_file.oci_kr_private_key.filename
  region           = "ap-chuncheon-1"
}
```
## Usage

### 1. Make Sure Consul Is Running

```bash
# Check whether Consul is running
pgrep consul
```

### 2. Make Sure the Oracle Cloud Configuration Is Stored in Consul

```bash
# Check the Korea region configuration
consul kv get config/dev/oracle/kr/tenancy_ocid
consul kv get config/dev/oracle/kr/user_ocid
consul kv get config/dev/oracle/kr/fingerprint
consul kv get config/dev/oracle/kr/private_key

# Check the US region configuration
consul kv get config/dev/oracle/us/tenancy_ocid
consul kv get config/dev/oracle/us/user_ocid
consul kv get config/dev/oracle/us/fingerprint
consul kv get config/dev/oracle/us/private_key
```

### 3. Initialize Terraform

```bash
cd /root/mgmt/tofu/environments/dev
terraform init -upgrade
```

### 4. Run the Test Script

```bash
# Run from the project root
/root/mgmt/test_consul_provider.sh
```

### 5. Run Terraform with the Configuration from Consul

```bash
cd /root/mgmt/tofu/environments/dev
terraform plan -var-file=consul.tfvars
terraform apply -var-file=consul.tfvars
```
## Benefits

Reading the configuration directly from Consul with the Consul provider has the following advantages:

1. **Better security**: the private key no longer lives in a configuration file on disk; it is fetched from Consul at plan/apply time
2. **Simpler configuration**: no temporary files have to be created by hand; Terraform handles this automatically
3. **Declarative style**: fully in line with Terraform's declarative configuration model
4. **Easier maintenance**: the configuration is stored centrally in Consul, which makes it easy to manage and update
5. **Multi-environment support**: configurations for several environments (dev, staging, production) are easy to add

## Troubleshooting

### 1. Consul Connection Problems

If Terraform cannot connect to Consul, check that:

- the Consul service is running
- the Consul address and port are correct (localhost:8500 by default)
- the network connection is working
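The items above can be checked quickly from the shell, assuming the default local agent address:

```shell
# Is the agent up and part of a cluster?
consul members

# Does the HTTP API answer, and does the cluster have a leader?
curl -s http://localhost:8500/v1/status/leader
```

An empty leader response usually means the cluster has not elected a leader yet, which also makes KV reads fail.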
### 2. Configuration Retrieval Problems

If the configuration cannot be read from Consul, check that:

- the configuration has actually been stored in Consul
- the paths are correct
- the permissions are sufficient

### 3. Terraform Initialization Problems

If Terraform initialization fails, check that:

- the Terraform version meets the requirement (>=1.6)
- the network connection is working
- the provider sources are reachable

## Version Information

- Terraform: >=1.6
- Consul Provider: ~2.22.0
- OCI Provider: ~5.0
@@ -0,0 +1,268 @@
# Ansible and HashiCorp Vault Integration Guide

This document describes how to integrate Ansible with HashiCorp Vault so that sensitive information can be managed and consumed securely.

## 1. Install the Required Python Package

First, install the Python client that Ansible's Vault integration relies on:

```bash
pip install hvac
```
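The `hashi_vault` lookup used throughout this guide is distributed as an Ansible collection rather than with ansible-core, so installing it alongside `hvac` is usually also required:

```shell
# Install the collection that provides the hashi_vault lookup plugin
ansible-galaxy collection install community.hashi_vault
```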
## 2. Configure Ansible to Use Vault

### 2.1 Create the Vault Connection Configuration

Create a Vault connection configuration file, `vault_config.yml`:

```yaml
vault_addr: http://localhost:8200
vault_role_id: "your-approle-role-id"
vault_secret_id: "your-approle-secret-id"
```

### 2.2 Create a Vault Lookup Role

Create an AppRole in Vault dedicated to Ansible:

```bash
# Enable AppRole authentication
vault auth enable approle

# Create the policy
cat > ansible-policy.hcl <<EOF
path "kv/data/ansible/*" {
  capabilities = ["read"]
}
EOF

vault policy write ansible ansible-policy.hcl

# Create the AppRole
vault write auth/approle/role/ansible \
    token_policies="ansible" \
    token_ttl=1h \
    token_max_ttl=4h

# Fetch the Role ID
vault read auth/approle/role/ansible/role-id

# Generate a Secret ID
vault write -f auth/approle/role/ansible/secret-id
```
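Before wiring the credentials into Ansible, the AppRole can be sanity-checked with a direct login; `<role-id>` and `<secret-id>` stand for the values returned by the last two commands above:

```shell
# Exchange the AppRole credentials for a token; a token block in the
# output means the role, policy, and credentials line up
vault write auth/approle/login \
    role_id="<role-id>" \
    secret_id="<secret-id>"
```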
## 3. Use Vault from Ansible

### 3.1 Use the Lookup Plugin

Use the `hashi_vault` lookup plugin in an Ansible playbook:

```yaml
---
- name: Example using HashiCorp Vault
  hosts: all
  vars:
    vault_addr: "http://localhost:8200"
    role_id: "{{ lookup('file', '/path/to/role_id') }}"
    secret_id: "{{ lookup('file', '/path/to/secret_id') }}"

    # Fetch the database password from Vault
    db_password: "{{ lookup('hashi_vault', 'secret=kv/data/ansible/db:password auth_method=approle role_id=' + role_id + ' secret_id=' + secret_id + ' url=' + vault_addr) }}"

  tasks:
    - name: Configure the database connection
      template:
        src: db_config.j2
        dest: /etc/app/db_config.ini
```

### 3.2 Use Environment Variables

Vault authentication details can also be supplied through environment variables:

```yaml
---
- name: Vault example using environment variables
  hosts: all
  environment:
    VAULT_ADDR: "http://localhost:8200"
    VAULT_ROLE_ID: "{{ lookup('file', '/path/to/role_id') }}"
    VAULT_SECRET_ID: "{{ lookup('file', '/path/to/secret_id') }}"

  tasks:
    - name: Fetch a secret from Vault
      set_fact:
        api_key: "{{ lookup('hashi_vault', 'secret=kv/data/ansible/api:key') }}"
```
## 4. Create a Vault Secrets Role

Create a custom Ansible role that manages the secrets stored in Vault:

### 4.1 Role Structure

```
roles/
└── vault_secrets/
    ├── defaults/
    │   └── main.yml
    ├── tasks/
    │   └── main.yml
    └── vars/
        └── main.yml
```

### 4.2 Main Task File

`roles/vault_secrets/tasks/main.yml`:

```yaml
---
- name: Make sure the Vault token is valid
  block:
    - name: Obtain a Vault token
      set_fact:
        vault_token: "{{ lookup('hashi_vault', 'auth_method=approle role_id=' + vault_role_id + ' secret_id=' + vault_secret_id + ' url=' + vault_addr) }}"
      no_log: true
  rescue:
    - name: Vault authentication failed
      fail:
        msg: "Could not obtain a valid token from Vault"

- name: Read the secrets from Vault
  set_fact:
    secrets: "{{ lookup('hashi_vault', 'secret=' + vault_path + ' token=' + vault_token + ' url=' + vault_addr) }}"
  no_log: true

- name: Set a variable for each secret
  set_fact:
    "{{ item.key }}": "{{ item.value }}"
  with_dict: "{{ secrets.data.data }}"
  no_log: true
```
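The role can then be pulled into a play as sketched below; the host group and `vault_path` value are illustrative, not taken from this repository:

```yaml
# Example play using the vault_secrets role defined above
- hosts: app_servers        # illustrative host group
  roles:
    - role: vault_secrets
      vars:
        vault_path: "kv/data/ansible/app"   # illustrative KV v2 path
```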
## 5. Migrate an Existing Ansible Vault to HashiCorp Vault

### 5.1 Create a Migration Script

Create a script that migrates Ansible Vault content to HashiCorp Vault automatically:

```bash
#!/bin/bash
# migrate_to_hashicorp_vault.sh

# Set variables
ANSIBLE_VAULT_FILE=$1
VAULT_PATH=$2
VAULT_ADDR=${VAULT_ADDR:-"http://localhost:8200"}

# Check the arguments
if [ -z "$ANSIBLE_VAULT_FILE" ] || [ -z "$VAULT_PATH" ]; then
    echo "Usage: $0 <ansible_vault_file> <vault_path>"
    echo "Example: $0 group_vars/all/vault.yml kv/ansible/group_vars/all"
    exit 1
fi

# Check the Vault login status
if ! vault token lookup >/dev/null 2>&1; then
    echo "Please log in to Vault first: vault login <token>"
    exit 1
fi

# Decrypt the Ansible Vault file
echo "Decrypting the Ansible Vault file..."
TEMP_FILE=$(mktemp)
ansible-vault decrypt --output="$TEMP_FILE" "$ANSIBLE_VAULT_FILE"

# Convert the YAML to JSON and store it in HashiCorp Vault
echo "Migrating secrets to HashiCorp Vault..."
python3 -c "
import yaml, json, sys, subprocess
with open('$TEMP_FILE', 'r') as f:
    data = yaml.safe_load(f)
for key, value in data.items():
    cmd = ['vault', 'kv', 'put', '$VAULT_PATH/' + key, 'value=' + json.dumps(value)]
    subprocess.run(cmd)
"

# Clean up the temporary file
rm "$TEMP_FILE"

echo "Migration complete! The data is stored under the Vault path: $VAULT_PATH/"
```

### 5.2 Run the Migration

```bash
# Make the script executable
chmod +x migrate_to_hashicorp_vault.sh

# Run the migration
./migrate_to_hashicorp_vault.sh group_vars/all/vault.yml kv/ansible/group_vars/all
```
## 6. 更新Ansible配置
|
||||
|
||||
### 6.1 修改ansible.cfg
|
||||
|
||||
更新`ansible.cfg`文件,添加Vault相关配置:
|
||||
|
||||
```ini
|
||||
[defaults]
|
||||
vault_identity_list = dev@~/.ansible/vault_dev.txt, prod@~/.ansible/vault_prod.txt
|
||||
|
||||
[hashi_vault_collection]
|
||||
url = http://localhost:8200
|
||||
auth_method = approle
|
||||
role_id = /path/to/role_id
|
||||
secret_id = /path/to/secret_id
|
||||
```
|
||||
|
||||
### 6.2 更新现有Playbook
|
||||
|
||||
将现有playbook中的Ansible Vault引用替换为HashiCorp Vault引用:
|
||||
|
||||
```yaml
|
||||
# 旧方式
|
||||
- name: 使用Ansible Vault变量
|
||||
debug:
|
||||
msg: "数据库密码: {{ vault_db_password }}"
|
||||
|
||||
# 新方式
|
||||
- name: 使用HashiCorp Vault变量
|
||||
debug:
|
||||
msg: "数据库密码: {{ lookup('hashi_vault', 'secret=kv/data/ansible/db:password') }}"
|
||||
```

## 7. Best Practices

1. **Avoid hard-coding credentials**: store Vault authentication details in environment variables or external files
2. **Limit token permissions**: grant Vault tokens created for Ansible only the minimum privileges required
3. **Set sensible TTLs**: give Vault tokens a reasonable lifetime and avoid long-lived tokens
4. **Use no_log**: set `no_log: true` on tasks that handle sensitive data to keep it out of logs
5. **Rotate credentials regularly**: rotate the AppRole Secret ID on a schedule
6. **Integrate with CI/CD**: authenticate to Vault inside the CI/CD pipeline instead of managing tokens by hand

## 8. Troubleshooting

### 8.1 Common Issues

1. **Authentication failures**:
   - Check that the Role ID and Secret ID are correct
   - Verify that the AppRole has the right policies attached

2. **Path errors**:
   - The KV v2 engine requires `data` in the path, e.g. `kv/data/path` rather than `kv/path`

3. **Permission problems**:
   - Make sure the AppRole has sufficient permission to read the requested secrets
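
The KV v2 path rule above is easy to get wrong, so a tiny helper (hypothetical, for illustration only) makes the `<mount>/data/<secret>` layout explicit:

```python
def kv2_read_path(mount: str, secret_path: str) -> str:
    """Return the API path for reading a KV v2 secret: <mount>/data/<secret_path>."""
    return f"{mount.strip('/')}/data/{secret_path.strip('/')}"

print(kv2_read_path("kv", "ansible/db"))  # kv/data/ansible/db
```

The same transformation is what the `hashi_vault` lookup expects when you write `secret=kv/data/ansible/db:password`.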

### 8.2 Debugging Tips

```yaml
- name: Debug a Vault lookup
  debug:
    msg: "{{ lookup('hashi_vault', 'secret=kv/data/ansible/db:password auth_method=approle role_id=' + role_id + ' secret_id=' + secret_id + ' url=' + vault_addr) }}"
  vars:
    ansible_hashi_vault_debug: true
```

@ -0,0 +1,94 @@
job "vault-cluster" {
  datacenters = ["dc1"]
  type        = "service"

  group "vault-servers" {
    count = 3

    constraint {
      attribute = "${node.unique.name}"
      operator  = "regexp"
      value     = "(warden|ash3c|master)"
    }

    task "vault" {
      driver = "podman"

      config {
        image = "hashicorp/vault:latest"
        ports = ["api", "cluster"]

        # Run the Vault server in the foreground so the container does not exit and restart
        command = "vault"
        args = [
          "server",
          "-config=/vault/config/vault.hcl"
        ]

        # Container networking
        network_mode = "host"

        # Security settings
        cap_add = ["IPC_LOCK"]
      }

      template {
        data = <<EOH
storage "consul" {
  address = "127.0.0.1:8500"
  path    = "vault/"
  token   = "{{ with secret "consul/creds/vault" }}{{ .Data.token }}{{ end }}"
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = 1  # Enable TLS in production
}

api_addr     = "http://{{ env "NOMAD_IP_api" }}:8200"
cluster_addr = "http://{{ env "NOMAD_IP_cluster" }}:8201"

ui            = true
disable_mlock = true
EOH
        destination = "vault/config/vault.hcl"
      }

      volume_mount {
        volume      = "vault-data"
        destination = "/vault/data"
        read_only   = false
      }

      resources {
        cpu    = 500
        memory = 1024

        network {
          mbits = 10
          port "api" { static = 8200 }
          port "cluster" { static = 8201 }
        }
      }

      service {
        name = "vault"
        port = "api"

        check {
          name     = "vault-health"
          type     = "http"
          path     = "/v1/sys/health"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }

    volume "vault-data" {
      type      = "host"
      read_only = false
      source    = "vault-data"
    }
  }
}

@ -0,0 +1,169 @@
# HashiCorp Vault Implementation Rationale

## 1. Current State Analysis

### 1.1 Existing Infrastructure
- **Multi-cloud environment**: Oracle Cloud, Huawei Cloud, Google Cloud, AWS, DigitalOcean
- **Infrastructure management**: OpenTofu (Terraform)
- **Configuration management**: Ansible
- **Container orchestration**: Nomad + Podman
- **Service discovery**: Consul (deployed on the warden, ash3c, and master nodes)
- **CI/CD**: Gitea Actions

### 1.2 Current Secret Management
- Some sensitive data is managed with Ansible Vault
- Plaintext secrets exist in the repository (e.g. `security/secrets/key.md`)
- No unified secret management or rotation mechanism
- No centralized access control or auditing

### 1.3 Security Risks
- Plaintext secret storage creates potential vulnerabilities
- The lack of rotation increases the risk of long-lived credential leaks
- Scattered secret management raises maintenance effort and security risk
- Without auditing, it is hard to trace who accessed sensitive data and when

## 2. The HashiCorp Vault Solution

### 2.1 What Vault Is
HashiCorp Vault is a secrets management and data protection tool built for modern cloud environments. Its core features:
- Secure storage for secrets and sensitive data
- Dynamic generation of short-lived credentials
- Encryption as a service
- Detailed audit logs
- Fine-grained access control

### 2.2 How Vault Addresses the Current Problems
- **Centralized secret management**: all secrets and sensitive data stored and managed in one place
- **Dynamic secrets**: temporary credentials for databases, cloud services, and more, reducing the risk of long-lived credential leaks
- **Automatic rotation**: secrets rotated on a schedule for better security
- **Access control**: role-based policies ensure only authorized users reach specific secrets
- **Audit logging**: every secret access is recorded for security review
- **Integration with existing infrastructure**: works natively with Nomad and Consul

## 3. Deployment Plan

### 3.1 Architecture
Deploy Vault on the existing Consul cluster nodes (warden, ash3c, master) to form a highly available Vault cluster:

```
+-------------------+     +-------------------+     +-------------------+
|      warden       |     |      ash3c        |     |      master       |
|                   |     |                   |     |                   |
|  +-------------+  |     |  +-------------+  |     |  +-------------+  |
|  |   Consul    |  |     |  |   Consul    |  |     |  |   Consul    |  |
|  +-------------+  |     |  +-------------+  |     |  +-------------+  |
|                   |     |                   |     |                   |
|  +-------------+  |     |  +-------------+  |     |  +-------------+  |
|  |    Vault    |  |     |  |    Vault    |  |     |  |    Vault    |  |
|  +-------------+  |     |  +-------------+  |     |  +-------------+  |
+-------------------+     +-------------------+     +-------------------+
```

### 3.2 Storage Backend
Use the existing Consul cluster as Vault's storage backend, leveraging Consul's availability and consistency guarantees:
- Vault data is stored encrypted in Consul
- Consul's distributed design keeps the data highly available
- Vault servers themselves are stateless, which simplifies scaling and maintenance

### 3.3 Resource Requirements
Suggested sizing for the Vault service on each node:
- CPU: 2-4 cores
- Memory: 4-8 GB
- Storage: 20 GB (for logs and temporary data)

### 3.4 Network Configuration
- Vault API port: 8200
- Vault cluster port: 8201
- TLS encryption for all traffic
- Appropriate firewall rules restricting access to the Vault API
## 4. Implementation Plan

### 4.1 Preparation
1. **Environment preparation**
   - Install the required dependencies on the target nodes
   - Generate TLS certificates for Vault traffic
   - Configure firewall rules

2. **Configuration files**
   - Create the Vault configuration file
   - Configure the Consul storage backend
   - Set TLS and encryption parameters

### 4.2 Deployment
1. **Initial deployment**
   - Install Vault on the three nodes
   - Configure Consul as the storage backend
   - Initialize Vault and generate the unseal keys

2. **High availability**
   - Form the Vault cluster
   - Set up an auto-unseal mechanism
   - Configure load balancing

### 4.3 Integration
1. **Integration with existing systems**
   - Configure Nomad to fetch secrets from Vault
   - Update Ansible scripts to read sensitive data through the Vault API
   - Integrate Vault into the CI/CD pipeline

2. **Secret migration**
   - Migrate existing secrets into Vault
   - Define rotation policies
   - Remove plaintext secrets from the repository

### 4.4 Validation and Testing
1. **Functional testing**
   - Verify basic Vault functionality
   - Test secret access and management
   - Verify high availability and failover

2. **Security testing**
   - Run penetration tests
   - Validate the access-control policies
   - Test the audit logging

## 5. Operations

### 5.1 Day-to-Day Operations
- Back up Vault data regularly
- Monitor the Vault service
- Review the audit logs

### 5.2 Disaster Recovery
- Write a detailed disaster recovery plan
- Run recovery drills regularly
- Store the unseal keys securely

### 5.3 Security Best Practices
- Apply the principle of least privilege
- Rotate the root credentials regularly
- Use multi-factor authentication
- Review access policies regularly
## 6. Implementation Timeline

| Phase | Task | Estimate |
|------|------|----------|
| Preparation | Environment preparation | 1 day |
| Preparation | Configuration files | 1 day |
| Deployment | Initial deployment | 1 day |
| Deployment | High-availability setup | 1 day |
| Integration | Integration with existing systems | 3 days |
| Integration | Secret migration | 2 days |
| Testing | Functional and security testing | 2 days |
| Documentation | Operations documentation | 1 day |
| **Total** | | **12 days** |
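
The per-task estimates can be sanity-checked against the stated total:

```python
# Task estimates from the timeline above, in days
estimates = {
    "environment preparation": 1, "configuration files": 1,
    "initial deployment": 1, "high-availability setup": 1,
    "systems integration": 3, "secret migration": 2,
    "functional and security testing": 2, "operations documentation": 1,
}
print(sum(estimates.values()))  # 12
```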

## 7. Conclusion and Recommendation

Based on this analysis of the current infrastructure and security needs, we strongly recommend deploying HashiCorp Vault on the existing Consul cluster nodes to improve the project's security posture and secret management.

Key benefits:
- Eliminates the risk of plaintext secret storage
- Provides centralized secret management and access control
- Supports dynamic secret generation and automatic rotation
- Integrates seamlessly with the existing HashiCorp stack (Nomad, Consul)
- Produces detailed audit logs to meet compliance requirements

Deploying Vault on existing nodes makes full use of current resources while significantly improving security, giving the multi-cloud environment a unified secret management solution.

@ -0,0 +1,252 @@
# Vault Deployment and Configuration Guide

This document describes, step by step, how to deploy and configure HashiCorp Vault on the existing Consul cluster nodes.

## 1. Prerequisites

### 1.1 Create the Data Directory

On each node, create the Vault data directory:

```bash
sudo mkdir -p /opt/vault/data
sudo chown -R nomad:nomad /opt/vault
```

### 1.2 Generate TLS Certificates (required in production)

Vault does not generate its own server certificates; use your existing PKI or OpenSSL, for example:

```bash
# Generate a CA key and certificate
openssl req -x509 -newkey rsa:4096 -nodes -keyout ca.key -out ca.crt \
  -days 3650 -subj "/CN=vault-ca"

# Generate a server key and CSR, then sign it with the CA
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
  -subj "/CN=vault.service.consul"
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out server.crt -days 825
```
## 2. Deploy the Vault Cluster

### 2.1 Deploy with Nomad

Submit the `vault-cluster.nomad` file to Nomad:

```bash
nomad job run vault-cluster.nomad
```

### 2.2 Verify the Deployment

```bash
# Check the Nomad job status
nomad job status vault-cluster

# Check the Vault service health
curl http://localhost:8200/v1/sys/health
```
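
The health endpoint encodes Vault's state in its HTTP status code (these are the default codes per Vault's `/sys/health` API documentation; `describe_health` is an illustrative helper, not part of Vault):

```python
# Default /v1/sys/health status codes
VAULT_HEALTH = {
    200: "initialized, unsealed, and active",
    429: "unsealed standby",
    501: "not initialized",
    503: "sealed",
}

def describe_health(status: int) -> str:
    return VAULT_HEALTH.get(status, f"unexpected status {status}")

print(describe_health(503))  # sealed
```

So a `503` from the `curl` above means the node is reachable but still sealed, not that Vault is broken.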

## 3. Initialize and Unseal Vault

### 3.1 Initialize Vault

On any one node, run:

```bash
# Initialize Vault, generating the unseal keys and the root token
vault operator init -key-shares=5 -key-threshold=3
```

**Important:** store the generated unseal keys and root token securely!
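
With `-key-shares=5 -key-threshold=3`, any three of the five key shares can reconstruct the master key; the number of distinct three-share quorums follows directly:

```python
from math import comb

shares, threshold = 5, 3
print(comb(shares, threshold))  # 10 distinct 3-of-5 quorums
```

Distributing the five shares to different operators means no single person (or pair) can unseal Vault alone.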

### 3.2 Unseal Vault

Run the unseal operation on every node (at least 3 unseal keys are required):

```bash
# Unseal Vault
vault operator unseal <unseal key 1>
vault operator unseal <unseal key 2>
vault operator unseal <unseal key 3>
```

## 4. Configure Vault

### 4.1 Log In to Vault

```bash
# Set the Vault address
export VAULT_ADDR='http://127.0.0.1:8200'

# Log in with the root token
vault login <root token>
```

### 4.2 Enable Secrets Engines

```bash
# Enable the KV v2 secrets engine
vault secrets enable -version=2 kv

# Enable the AWS secrets engine (if needed)
vault secrets enable aws

# Enable the database secrets engine (if needed)
vault secrets enable database
```

### 4.3 Configure Access Policies

```bash
# Create the policy file
cat > nomad-server-policy.hcl <<EOF
path "kv/data/nomad/*" {
  capabilities = ["read"]
}
EOF

# Create the policy
vault policy write nomad-server nomad-server-policy.hcl

# Create a token
vault token create -policy=nomad-server
```

## 5. Integrate with Nomad

### 5.1 Configure Nomad to Use Vault

Edit the Nomad configuration file (`/etc/nomad.d/nomad.hcl`) and add the Vault stanza:

```hcl
vault {
  enabled = true
  address = "http://127.0.0.1:8200"
  token   = "<Vault token for the Nomad servers>"
}
```

### 5.2 Restart Nomad

```bash
sudo systemctl restart nomad
```
## 6. Migrate Existing Secrets to Vault

### 6.1 Store API Keys

```bash
# Store the OCI API key
vault kv put kv/oci/api-key key="$(cat /root/mgmt/security/secrets/key.md)"

# Store other cloud provider credentials
vault kv put kv/aws/credentials aws_access_key_id="<access key ID>" aws_secret_access_key="<secret access key>"
```

### 6.2 Configure Secret Rotation

```bash
# Configure automatic rotation for database credentials
vault write database/config/mysql \
    plugin_name=mysql-database-plugin \
    connection_url="{{username}}:{{password}}@tcp(database.example.com:3306)/" \
    allowed_roles="app-role" \
    username="root" \
    password="<database root password>"

# Configure the role
vault write database/roles/app-role \
    db_name=mysql \
    creation_statements="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}';GRANT SELECT ON *.* TO '{{name}}'@'%';" \
    default_ttl="1h" \
    max_ttl="24h"
```
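
When a client requests credentials for `app-role`, Vault substitutes a generated username and password into the `creation_statements` template. A toy rendering illustrates the substitution (`render_creation` and the sample username are hypothetical stand-ins, not Vault internals):

```python
def render_creation(statement: str, name: str, password: str) -> str:
    """Toy stand-in for Vault's {{name}}/{{password}} template substitution."""
    return statement.replace("{{name}}", name).replace("{{password}}", password)

sql = render_creation(
    "CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}';",
    "v-app-role-x7",  # hypothetical generated username
    "generated-password",
)
print(sql)
```

Because each lease gets its own user, revoking the lease simply drops that user, with no shared credential to rotate.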

## 7. Security Best Practices

### 7.1 Enable Audit Logging

```bash
# Enable the file audit device
vault audit enable file file_path=/var/log/vault/audit.log
```

### 7.2 Configure Auto-Unseal (production)

For production, configure an auto-unseal mechanism using a cloud KMS service:

```hcl
# Example AWS KMS auto-unseal configuration
seal "awskms" {
  region     = "us-west-2"
  kms_key_id = "<AWS KMS key ID>"
}
```

### 7.3 Rotate the Encryption Key Regularly

```bash
# Rotate the underlying encryption key
vault operator rotate
```

## 8. Troubleshooting

### 8.1 Check Vault Status

```bash
# Check Vault status
vault status

# Check whether Vault is sealed
vault status -format=json | jq '.sealed'
```

### 8.2 Inspect the Consul Storage

```bash
# Inspect the Vault data stored in Consul
consul kv get -recurse vault/
```

### 8.3 Common Problems

- **Vault fails to start**: check the configuration file syntax and permissions
- **Unseal fails**: make sure the correct unseal keys are being used
- **API unreachable**: check the firewall rules and the listener address configuration

## 9. Backup and Restore

### 9.1 Back Up Vault Data

```bash
# Snapshot the Vault data stored in Consul
consul snapshot save vault-backup.snap
```

### 9.2 Restore Vault Data

```bash
# Restore the Consul snapshot
consul snapshot restore vault-backup.snap
```

## 10. Routine Maintenance

### 10.1 Monitor Vault

```bash
# Scrape Prometheus-format metrics from Vault
curl --header "X-Vault-Token: $VAULT_TOKEN" \
  "http://127.0.0.1:8200/v1/sys/metrics?format=prometheus"
```

### 10.2 Review Audit Logs

```bash
# Analyze the audit log
cat /var/log/vault/audit.log | jq
```

### 10.3 Keep Vault Up to Date

```bash
# Update the Vault version (by re-running the Nomad job)
nomad job run -detach vault-cluster.nomad
```

@ -0,0 +1,99 @@
job "waypoint-server" {
  datacenters = ["dc1"]
  type        = "service"

  group "waypoint" {
    count = 1

    constraint {
      attribute = "${node.unique.name}"
      operator  = "="
      value     = "warden"
    }

    network {
      port "ui" {
        static = 9701
      }

      port "api" {
        static = 9702
      }

      port "grpc" {
        static = 9703
      }
    }

    task "server" {
      driver = "podman"

      config {
        image = "hashicorp/waypoint:latest"
        ports = ["ui", "api", "grpc"]

        args = [
          "server",
          "run",
          "-accept-tos",
          "-vvv",
          "-platform=nomad",
          "-nomad-host=${attr.nomad.advertise.address}",
          "-nomad-consul-service=true",
          "-nomad-consul-service-hostname=${attr.unique.hostname}",
          "-nomad-consul-datacenter=dc1",
          "-listen-grpc=0.0.0.0:9703",
          "-listen-http=0.0.0.0:9702",
          "-url-api=http://${attr.unique.hostname}:9702",
          "-url-ui=http://${attr.unique.hostname}:9701"
        ]
      }

      env {
        WAYPOINT_SERVER_DISABLE_MEMORY_DB = "true"
      }

      resources {
        cpu    = 500
        memory = 1024
      }

      service {
        name = "waypoint-ui"
        port = "ui"

        check {
          name     = "waypoint-ui-alive"
          type     = "http"
          path     = "/"
          interval = "10s"
          timeout  = "2s"
        }
      }

      service {
        name = "waypoint-api"
        port = "api"

        check {
          name     = "waypoint-api-alive"
          type     = "tcp"
          interval = "10s"
          timeout  = "2s"
        }
      }

      volume_mount {
        volume      = "waypoint-data"
        destination = "/data"
        read_only   = false
      }
    }

    volume "waypoint-data" {
      type      = "host"
      read_only = false
      source    = "waypoint-data"
    }
  }
}

@ -0,0 +1,245 @@
# HashiCorp Waypoint Implementation Rationale

## 1. Current State Analysis

### 1.1 Existing Deployment Workflow
- **Infrastructure management**: OpenTofu (Terraform)
- **Configuration management**: Ansible
- **Container orchestration**: Nomad + Podman
- **CI/CD**: Gitea Actions
- **Multi-cloud environment**: Oracle Cloud, Huawei Cloud, Google Cloud, AWS, DigitalOcean

### 1.2 Current Deployment Challenges
- Inconsistent deployment workflows across the cloud platforms
- Complex management of configuration differences between environments (dev, test, production)
- Application lifecycle management scattered across multiple tools
- No unified interface for deploying and releasing applications
- Developers must understand multiple tools and platform specifics

### 1.3 Existing GitOps Workflow
The project already follows a GitOps workflow:
- Declarative configuration stored in Git
- Changes applied automatically through CI/CD pipelines
- State convergence and monitoring

## 2. The HashiCorp Waypoint Solution

### 2.1 What Waypoint Is
HashiCorp Waypoint is an application deployment tool that provides a consistent build, deploy, and release workflow regardless of the underlying platform. Key features:

- A unified workflow interface
- Multi-platform support
- Application version management
- Automated release control
- An extensible plugin system

### 2.2 How Waypoint Complements the Existing Toolchain

| Existing tool | Primary role | What Waypoint adds |
|---------|---------|--------------|
| OpenTofu | Infrastructure management | Not a replacement; integrates with it and uses the infrastructure it creates |
| Ansible | Configuration management | Can invoke Ansible as part of a build or deploy step |
| Nomad | Container orchestration | Direct integration that simplifies deploying and managing Nomad jobs |
| Gitea Actions | CI/CD pipelines | Waypoint can be called from pipelines, or trigger them |

### 2.3 How Waypoint Works with the Existing Tools
```
+----------------+     +----------------+     +----------------+
|    OpenTofu    |     |    Waypoint    |     |     Nomad      |
|                |---->|                |---->|                |
| (infrastructure)|    | (app delivery) |     | (orchestration)|
+----------------+     +----------------+     +----------------+
                              |
                              v
                       +----------------+
                       |    Ansible     |
                       |                |
                       | (configuration)|
                       +----------------+
```
## 3. Value Analysis

### 3.1 Potential Benefits

#### 3.1.1 Developer Experience
- **Simple interface**: developers deploy through one interface without knowing platform internals
- **Local/production parity**: the same deployment workflow in development and production
- **Fast feedback**: deployment results and logs are visible in one place

#### 3.1.2 Operational Efficiency
- **Standardized deployments**: a consistent method across teams and projects
- **Fewer platform-specific scripts**: less custom scripting to maintain per platform
- **Centralized deployment management**: manage all application deployments from the UI or CLI

#### 3.1.3 Multi-Cloud Strategy
- **Platform-agnostic deployments**: the same Waypoint configuration works across clouds
- **Easier cloud migration**: simpler to move applications between providers
- **Hybrid cloud support**: unified management of deployments across clouds

#### 3.1.4 Integration with the HashiCorp Ecosystem
- **Nomad**: native support as a deployment platform
- **Consul**: service discovery and configuration management
- **Vault**: secure retrieval of the secrets and certificates needed for deployment

### 3.2 Potential Challenges

#### 3.2.1 Implementation Cost
- **Learning curve**: the team must learn a new tool
- **Migration effort**: existing deployment workflows must be adapted to Waypoint
- **Maintenance overhead**: another infrastructure component to run

#### 3.2.2 Overlap with Existing Workflows
- **Overlap with Gitea Actions**: some features duplicate the existing CI/CD pipeline
- **Toolchain complexity**: adding another tool may increase overall complexity

#### 3.2.3 Maturity
- **Relatively young project**: Waypoint is newer than other HashiCorp products
- **Community size**: the community and ecosystem are still growing
- **Plugin ecosystem**: plugins for some platforms may not be mature
## 4. Implementation Approach

### 4.1 Deployment Architecture
Deploy the Waypoint server in the same environment as Nomad and Consul:

```
+-------------------+     +-------------------+     +-------------------+
|      warden       |     |      ash3c        |     |      master       |
|                   |     |                   |     |                   |
|  +-------------+  |     |  +-------------+  |     |  +-------------+  |
|  |   Consul    |  |     |  |   Consul    |  |     |  |   Consul    |  |
|  +-------------+  |     |  +-------------+  |     |  +-------------+  |
|                   |     |                   |     |                   |
|  +-------------+  |     |  +-------------+  |     |  +-------------+  |
|  |    Nomad    |  |     |  |    Nomad    |  |     |  |    Nomad    |  |
|  +-------------+  |     |  +-------------+  |     |  +-------------+  |
|                   |     |                   |     |                   |
|  +-------------+  |     |  +-------------+  |     |  +-------------+  |
|  |    Vault    |  |     |  |    Vault    |  |     |  |    Vault    |  |
|  +-------------+  |     |  +-------------+  |     |  +-------------+  |
|                   |     |                   |     |                   |
|  +-------------+  |     |                   |     |                   |
|  |  Waypoint   |  |     |                   |     |                   |
|  +-------------+  |     |                   |     |                   |
+-------------------+     +-------------------+     +-------------------+
```

### 4.2 Resource Requirements
Suggested sizing for the Waypoint server:
- CPU: 2 cores
- Memory: 2 GB
- Storage: 10 GB

### 4.3 Network Configuration
- Waypoint API port: 9702
- Waypoint UI port: 9701
- TLS encryption for all traffic
## 5. Implementation Plan

### 5.1 Pilot Phase
1. **Environment preparation**
   - Deploy the Waypoint server on a single node
   - Configure integration with Nomad, Consul, and Vault

2. **Pilot project selection**
   - Choose a non-critical application as the pilot
   - Create its Waypoint configuration file
   - Implement the build, deploy, and release workflow

3. **Evaluation**
   - Collect feedback from development and operations
   - Measure the deployment-efficiency gains
   - Identify problems and improvement points

### 5.2 Expansion Phase
1. **Roll out to more applications**
   - Gradually migrate more applications to Waypoint
   - Create standardized Waypoint templates
   - Write best-practice documentation

2. **Team training**
   - Train the development and operations teams on Waypoint
   - Build an internal knowledge base with examples

3. **CI/CD integration**
   - Integrate Waypoint into the existing Gitea Actions pipelines
   - Trigger deployments automatically

### 5.3 Full Integration Phase
1. **All environments**
   - Use Waypoint uniformly across development, test, and production
   - Manage environment-specific configuration

2. **Advanced features**
   - Configure automatic rollback policies
   - Implement blue/green deployments and canary releases
   - Integrate monitoring and alerting

3. **Continuous improvement**
   - Review and optimize the deployment workflow regularly
   - Track Waypoint releases and new features

## 6. Implementation Timeline

| Phase | Task | Estimate |
|------|------|----------|
| Preparation | Environment preparation and Waypoint server deployment | 2 days |
| Pilot | Pilot project implementation | 5 days |
| Pilot | Evaluation and adjustments | 3 days |
| Expansion | Roll-out to more applications | 10 days |
| Expansion | Team training | 2 days |
| Expansion | CI/CD integration | 3 days |
| Integration | All environments | 5 days |
| Integration | Advanced features | 5 days |
| **Total** | | **35 days** |
## 7. Cost-Benefit Analysis

### 7.1 Implementation Cost
- **Infrastructure cost**: low (reuses existing nodes)
- **Licensing cost**: none (open-source edition)
- **Staffing cost**: medium (learning and migration effort)
- **Maintenance cost**: low (integrates with existing HashiCorp products)

### 7.2 Expected Benefits
- **Developer efficiency**: an estimated 20-30% reduction in deployment-related work
- **Deployment consistency**: an estimated 50% fewer environment-specific issues
- **Time to production**: an estimated 15-25% shorter release cycle
- **Operational load**: less maintenance of cross-platform deployment scripts

### 7.3 Return on Investment
- Clear benefits expected within 3-6 months of implementation
- Full return on investment expected within 9-12 months

## 8. Conclusion and Recommendation

### 8.1 Decision Factors

#### In favor of implementing Waypoint
- The project already uses the HashiCorp ecosystem (Nomad, Consul)
- The multi-cloud environment needs a unified deployment workflow
- Developers need a simpler deployment experience
- Application deployment workflows need standardization

#### Against implementing Waypoint
- The existing CI/CD pipeline already meets current needs
- Limited team capacity for learning and maintaining another tool
- Deployment requirements are relatively simple and do not need advanced release strategies

### 8.2 Recommended Path

Based on this analysis of the project's current state, we recommend a **gradual adoption** strategy:

1. **Implement Vault first**: address security first by using Vault for secret management
2. **Pilot Waypoint at small scale**: trial Waypoint on a non-critical application to assess its real value
3. **Decide based on the pilot**: expand Waypoint usage only if the pilot results justify it

### 8.3 Final Recommendation

Although Waypoint offers a unified application deployment experience and multi-cloud support, the project already has a fairly mature GitOps workflow and CI/CD pipeline, so Waypoint should be a lower priority than Vault.

Complete the Vault implementation first to address the current security issues, then evaluate Waypoint's real value through a small pilot when resources allow. This gradual approach reduces risk while directing effort to the most valuable improvements.

If the pilot shows that Waypoint significantly improves developer efficiency and deployment consistency, consider a broader roll-out.

@ -0,0 +1,712 @@
# Waypoint Integration Examples

This document gives concrete examples of integrating Waypoint with the existing infrastructure and tooling.

## 1. Nomad Integration

### 1.1 Basic Nomad Deployment

```hcl
app "api-service" {
  build {
    use "docker" {
      dockerfile         = "Dockerfile"
      disable_entrypoint = true
    }
  }

  deploy {
    use "nomad" {
      // Nomad cluster address
      address = "http://nomad-server:4646"

      // Deployment settings
      datacenter = "dc1"
      namespace  = "default"

      // Resources
      resources {
        cpu    = 500
        memory = 256
      }

      // Service registration (Consul as the service provider)
      service_provider "consul" {
        service_name = "api-service"
        tags         = ["api", "v1"]

        check {
          type     = "http"
          path     = "/health"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}
```
### 1.2 Advanced Nomad Configuration

```hcl
app "web-app" {
  deploy {
    use "nomad" {
      // Basic settings...

      // Volume configuration
      volume_mount {
        volume      = "app-data"
        destination = "/data"
        read_only   = false
      }

      // Network configuration
      network {
        mode = "bridge"
        port "http" {
          static = 8080
          to     = 80
        }
      }

      // Environment variables
      env {
        NODE_ENV = "production"
      }

      // Health checks
      health_check {
        timeout = "5m"
        check {
          name   = "http-check"
          route  = "/health"
          method = "GET"
          code   = 200
        }
      }
    }
  }
}
```
## 2. Vault Integration

### 2.1 Reading Static Secrets from Vault

```hcl
app "database-service" {
  deploy {
    use "nomad" {
      // Basic settings...

      env {
        // Fetch database credentials from Vault
        DB_USERNAME = dynamic("vault", {
          path = "kv/data/database/creds"
          key  = "username"
        })

        DB_PASSWORD = dynamic("vault", {
          path = "kv/data/database/creds"
          key  = "password"
        })
      }
    }
  }
}
```

### 2.2 Using Dynamic Vault Secrets

```hcl
app "api-service" {
  deploy {
    use "nomad" {
      // Basic settings...

      template {
        destination = "secrets/db-creds.txt"
        data        = <<EOF
{{- with secret "database/creds/api-role" -}}
DB_USERNAME={{ .Data.username }}
DB_PASSWORD={{ .Data.password }}
{{- end -}}
EOF
      }

      env_from_file = ["secrets/db-creds.txt"]
    }
  }
}
```
## 3. Consul Integration

### 3.1 Service Discovery

```hcl
app "frontend" {
  deploy {
    use "nomad" {
      // Basic settings...

      service_provider "consul" {
        service_name = "frontend"

        meta {
          version = "v1.2.3"
          team    = "frontend"
        }

        tags = ["web", "frontend"]
      }
    }
  }
}
```

### 3.2 Configuration from the Consul KV Store

```hcl
app "config-service" {
  deploy {
    use "nomad" {
      // Basic settings...

      template {
        destination = "config/app-config.json"
        data        = <<EOF
{
  "settings": {{ key "config/app-settings" | toJSON }},
  "features": {{ key "config/features" | toJSON }}
}
EOF
      }
    }
  }
}
```
## 4. Gitea Actions Integration

### 4.1 Basic CI/CD Pipeline

```yaml
name: Build and Deploy

on:
  push:
    branches: [ main ]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Install Waypoint
        run: |
          curl -fsSL https://releases.hashicorp.com/waypoint/0.11.0/waypoint_0.11.0_linux_amd64.zip -o waypoint.zip
          unzip waypoint.zip
          sudo mv waypoint /usr/local/bin/

      - name: Configure Waypoint
        run: |
          waypoint context create \
            -server-addr=${{ secrets.WAYPOINT_SERVER_ADDR }} \
            -server-auth-token=${{ secrets.WAYPOINT_AUTH_TOKEN }} \
            -set-default ci-context

      - name: Build and Deploy
        run: waypoint up
```
### 4.2 Multi-Environment Deployment Pipeline

```yaml
name: Multi-Environment Deploy

on:
  push:
    branches: [ main, staging, production ]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Install Waypoint
        run: |
          curl -fsSL https://releases.hashicorp.com/waypoint/0.11.0/waypoint_0.11.0_linux_amd64.zip -o waypoint.zip
          unzip waypoint.zip
          sudo mv waypoint /usr/local/bin/

      - name: Configure Waypoint
        run: |
          waypoint context create \
            -server-addr=${{ secrets.WAYPOINT_SERVER_ADDR }} \
            -server-auth-token=${{ secrets.WAYPOINT_AUTH_TOKEN }} \
            -set-default ci-context

      - name: Determine Environment
        id: env
        run: |
          if [[ ${{ github.ref }} == 'refs/heads/main' ]]; then
            echo "::set-output name=environment::development"
          elif [[ ${{ github.ref }} == 'refs/heads/staging' ]]; then
            echo "::set-output name=environment::staging"
          elif [[ ${{ github.ref }} == 'refs/heads/production' ]]; then
            echo "::set-output name=environment::production"
          fi

      - name: Build and Deploy
        run: |
          waypoint up -workspace=${{ steps.env.outputs.environment }}
```
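
The branch-to-environment mapping inside the `Determine Environment` step can be expressed as a small function (illustrative only; the pipeline itself uses shell):

```python
def environment_for_ref(ref: str) -> str:
    """Map a Git ref to the Waypoint workspace used by the pipeline."""
    mapping = {
        "refs/heads/main": "development",
        "refs/heads/staging": "staging",
        "refs/heads/production": "production",
    }
    return mapping.get(ref, "")

print(environment_for_ref("refs/heads/staging"))  # staging
```

Unmapped branches fall through to an empty workspace, which mirrors the pipeline's behavior of setting no output for other refs.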

## 5. Multi-Cloud Deployment Examples

### 5.1 AWS ECS

```hcl
app "microservice" {
  build {
    use "docker" {}
  }

  deploy {
    use "aws-ecs" {
      region  = "us-west-2"
      cluster = "production"

      service {
        name          = "microservice"
        desired_count = 3

        load_balancer {
          target_group_arn = "arn:aws:elasticloadbalancing:us-west-2:..."
          container_name   = "microservice"
          container_port   = 8080
        }
      }
    }
  }
}
```

### 5.2 Google Cloud Run

```hcl
app "api" {
  build {
    use "docker" {}
  }

  deploy {
    use "google-cloud-run" {
      project  = "my-gcp-project"
      location = "us-central1"

      port = 8080

      capacity {
        memory                     = 512
        cpu_count                  = 1
        max_requests_per_container = 10
        request_timeout            = 300
      }

      auto_scaling {
        max_instances = 10
      }
    }
  }
}
```

### 5.3 Multi-Cloud Strategy

```hcl
// Choose the deployment target with a variable
variable "deploy_target" {
  type    = string
  default = "nomad"
}

app "multi-cloud-app" {
  build {
    use "docker" {}
  }

  deploy {
    // Select the platform based on the variable
    use dynamic {
      value = var.deploy_target

      // Nomad deployment settings
      nomad {
        datacenter = "dc1"
        // Other Nomad settings...
      }

      // AWS ECS deployment settings
      aws-ecs {
        region  = "us-west-2"
        cluster = "production"
        // Other ECS settings...
      }

      // Google Cloud Run deployment settings
      google-cloud-run {
        project  = "my-gcp-project"
        location = "us-central1"
        // Other Cloud Run settings...
      }
    }
  }
}
```

## 6. Advanced Release Strategies

### 6.1 Blue/Green Deployment

```hcl
app "web-app" {
  build {
    use "docker" {}
  }

  deploy {
    use "nomad" {
      // Basic deployment settings...
    }
  }

  release {
    use "nomad-bluegreen" {
      service    = "web-app"
      datacenter = "dc1"
      namespace  = "default"

      // Traffic-shifting settings
      traffic_step = 25   // shift 25% of traffic per step
      confirm_step = true // require confirmation at each step

      // Health checks
      health_check {
        timeout = "2m"
        check {
          route  = "/health"
          method = "GET"
        }
      }
    }
  }
}
```
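
With `traffic_step = 25`, the cumulative traffic on the green deployment climbs in four confirmed rounds. A sketch of that schedule (the `traffic_schedule` helper is hypothetical, for illustration):

```python
def traffic_schedule(step: int) -> list:
    """Cumulative traffic percentages for a blue/green shift of `step`% per round."""
    if not 0 < step <= 100 or 100 % step != 0:
        raise ValueError("step must evenly divide 100")
    return list(range(step, 101, step))

print(traffic_schedule(25))  # [25, 50, 75, 100]
```

With `confirm_step = true`, each of these percentages is a pause point where the rollout can be aborted before the next shift.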

### 6.2 Canary Release

```hcl
app "api-service" {
  build {
    use "docker" {}
  }

  deploy {
    use "nomad" {
      // Basic deployment settings...
    }
  }

  release {
    use "nomad-canary" {
      service    = "api-service"
      datacenter = "dc1"

      // Canary settings
      canary {
        percentage = 10    // release to 10% of instances first
        duration   = "15m" // observe for 15 minutes
      }

      // Automatic rollback
      auto_rollback = true

      // Metrics monitoring
      metrics {
        provider = "prometheus"
        address  = "http://prometheus:9090"
        query    = "sum(rate(http_requests_total{status=~\"5..\"}[5m])) / sum(rate(http_requests_total[5m])) > 0.01"
      }
    }
  }
}
```
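
A `percentage = 10` canary translates to a concrete instance count, rounded up so at least one instance always receives the new build (a minimal sketch; `canary_instances` is a hypothetical helper):

```python
import math

def canary_instances(total: int, percentage: int) -> int:
    """Instances that receive the canary build (always at least one)."""
    return max(1, math.ceil(total * percentage / 100))

print(canary_instances(20, 10))  # 2
```

For a small service with 3 instances, a 10% canary still lands on one instance; the Prometheus error-rate query above then decides whether `auto_rollback` fires.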

## 7. Custom Plugin Example

### 7.1 A Custom Builder Plugin

```go
// custom_builder.go
package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"

	sdk "github.com/hashicorp/waypoint-plugin-sdk"
	"github.com/hashicorp/waypoint-plugin-sdk/terminal"
)

// Binary is the artifact produced by the custom build.
type Binary struct {
	Source string
}

// CustomBuilder implements the custom build logic
type CustomBuilder struct {
	config BuildConfig
}

type BuildConfig struct {
	Command string `hcl:"command"`
}

// ConfigSet stores the decoded plugin configuration
func (b *CustomBuilder) ConfigSet(config interface{}) error {
	c, ok := config.(*BuildConfig)
	if !ok {
		return fmt.Errorf("invalid configuration")
	}
	b.config = *c
	return nil
}

// BuildFunc returns the build callback
func (b *CustomBuilder) BuildFunc() interface{} {
	return b.build
}

func (b *CustomBuilder) build(ctx context.Context, ui terminal.UI) (*Binary, error) {
	// Run the custom build command
	cmd := exec.CommandContext(ctx, "sh", "-c", b.config.Command)
	cmd.Stdout = os.Stdout // or wire these to the plugin UI's output writers
	cmd.Stderr = os.Stderr

	if err := cmd.Run(); err != nil {
		return nil, err
	}

	return &Binary{
		Source: "custom",
	}, nil
}

// Register the plugin
func main() {
	sdk.Main(sdk.WithComponents(&CustomBuilder{}))
}
```

### 7.2 Using the Custom Plugin

```hcl
app "custom-app" {
  build {
    use "custom" {
      command = "make build"
    }
  }

  deploy {
    use "nomad" {
      // Deployment settings...
    }
  }
}
```

## 8. Monitoring and Observability

### 8.1 Prometheus

```hcl
app "monitored-app" {
  deploy {
    use "nomad" {
      // Basic settings...

      // Prometheus annotations
      service_provider "consul" {
        service_name = "monitored-app"

        meta {
          "prometheus.io/scrape" = "true"
          "prometheus.io/path"   = "/metrics"
          "prometheus.io/port"   = "8080"
        }
      }
    }
  }
}
```

### 8.2 ELK Stack

```hcl
app "logging-app" {
  deploy {
    use "nomad" {
      // Basic settings...

      // Logging configuration
      logging {
        type = "fluentd"
        config {
          fluentd_address = "fluentd.service.consul:24224"
          tag             = "app.${nomad.namespace}.${app.name}"
        }
      }
    }
  }
}
```
## 9. 本地开发工作流
|
||||
|
||||
### 9.1 本地开发配置
|
||||
|
||||
```hcl
|
||||
app "dev-app" {
|
||||
build {
|
||||
use "docker" {}
|
||||
}
|
||||
|
||||
deploy {
|
||||
use "docker" {
|
||||
service_port = 3000
|
||||
|
||||
// 开发环境特定配置
|
||||
env {
|
||||
NODE_ENV = "development"
|
||||
DEBUG = "true"
|
||||
}
|
||||
|
||||
// 挂载源代码目录
|
||||
binds {
|
||||
source = abspath("./src")
|
||||
destination = "/app/src"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### 9.2 本地与远程环境切换
|
||||
|
||||
```hcl
|
||||
variable "environment" {
|
||||
type = string
|
||||
default = "local"
|
||||
}
|
||||
|
||||
app "fullstack-app" {
|
||||
build {
|
||||
use "docker" {}
|
||||
}
|
||||
|
||||
deploy {
|
||||
// 根据环境变量选择部署方式
|
||||
use dynamic {
|
||||
value = var.environment
|
||||
|
||||
// 本地开发
|
||||
local {
|
||||
use "docker" {
|
||||
// 本地Docker配置...
|
||||
}
|
||||
}
|
||||
|
||||
// 开发环境
|
||||
dev {
|
||||
use "nomad" {
|
||||
// 开发环境Nomad配置...
|
||||
}
|
||||
}
|
||||
|
||||
// 生产环境
|
||||
prod {
|
||||
use "nomad" {
|
||||
// 生产环境Nomad配置...
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## 10. 多应用协调
|
||||
|
||||
### 10.1 依赖管理
|
||||
|
||||
```hcl
|
||||
project = "microservices"
|
||||
|
||||
app "database" {
|
||||
// 数据库服务配置...
|
||||
}
|
||||
|
||||
app "backend" {
|
||||
// 后端API配置...
|
||||
|
||||
// 声明依赖关系
|
||||
depends_on = ["database"]
|
||||
}
|
||||
|
||||
app "frontend" {
|
||||
// 前端配置...
|
||||
|
||||
// 声明依赖关系
|
||||
depends_on = ["backend"]
|
||||
}
|
||||
```
|
||||
|
||||
### 10.2 共享配置
|
||||
|
||||
```hcl
|
||||
// 定义共享变量
|
||||
variable "version" {
|
||||
type = string
|
||||
default = "1.0.0"
|
||||
}
|
||||
|
||||
variable "environment" {
|
||||
type = string
|
||||
default = "development"
|
||||
}
|
||||
|
||||
// 共享函数
|
||||
function "service_name" {
|
||||
params = [name]
|
||||
result = "${var.environment}-${name}"
|
||||
}
|
||||
|
||||
// 应用配置
|
||||
app "api" {
|
||||
build {
|
||||
use "docker" {
|
||||
tag = "${var.version}"
|
||||
}
|
||||
}
|
||||
|
||||
deploy {
|
||||
use "nomad" {
|
||||
service_provider "consul" {
|
||||
service_name = service_name("api")
|
||||
}
|
||||
|
||||
env {
|
||||
APP_VERSION = var.version
|
||||
ENVIRONMENT = var.environment
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
@ -0,0 +1,331 @@
|
|||
# Waypoint 部署和配置指南
|
||||
|
||||
本文档提供了在现有基础设施上部署和配置HashiCorp Waypoint的详细步骤。
|
||||
|
||||
## 1. 前置准备
|
||||
|
||||
### 1.1 创建数据目录
|
||||
|
||||
在Waypoint服务器节点上创建数据目录:
|
||||
|
||||
```bash
|
||||
sudo mkdir -p /opt/waypoint/data
|
||||
sudo chown -R nomad:nomad /opt/waypoint
|
||||
```
|
||||
|
||||
### 1.2 安装Waypoint CLI
|
||||
|
||||
在开发机器和CI/CD服务器上安装Waypoint CLI:
|
||||
|
||||
```bash
|
||||
curl -fsSL https://releases.hashicorp.com/waypoint/0.11.0/waypoint_0.11.0_linux_amd64.zip -o waypoint.zip
|
||||
unzip waypoint.zip
|
||||
sudo mv waypoint /usr/local/bin/
|
||||
```
|
||||
|
||||
## 2. 部署Waypoint服务器
|
||||
|
||||
### 2.1 使用Nomad部署
|
||||
|
||||
将`waypoint-server.nomad`文件提交到Nomad:
|
||||
|
||||
```bash
|
||||
nomad job run waypoint-server.nomad
|
||||
```
|
||||
|
||||
### 2.2 验证部署状态
|
||||
|
||||
```bash
|
||||
# 检查Nomad任务状态
|
||||
nomad job status waypoint-server
|
||||
|
||||
# 检查Waypoint UI是否可访问
|
||||
curl -I http://warden:9701
|
||||
```
|
||||
|
||||
## 3. 初始化Waypoint
|
||||
|
||||
### 3.1 连接到Waypoint服务器
|
||||
|
||||
```bash
|
||||
# 连接CLI到服务器
|
||||
waypoint context create \
|
||||
-server-addr=warden:9703 \
|
||||
-server-tls-skip-verify \
|
||||
-set-default my-waypoint-server
|
||||
```
|
||||
|
||||
### 3.2 验证连接
|
||||
|
||||
```bash
|
||||
waypoint context verify
|
||||
waypoint server info
|
||||
```
|
||||
|
||||
## 4. 配置Waypoint
|
||||
|
||||
### 4.1 配置Nomad作为运行时平台
|
||||
|
||||
```bash
|
||||
# 确认Nomad连接
|
||||
waypoint config source-set -type=nomad \
  -config=addr=http://localhost:4646
|
||||
```
|
||||
|
||||
### 4.2 配置与Vault的集成
|
||||
|
||||
```bash
|
||||
# 配置Vault集成
|
||||
waypoint config source-set -type=vault \
  -config=addr=http://localhost:8200 \
  -config=token=<vault-token>
|
||||
```
|
||||
|
||||
## 5. 创建第一个Waypoint项目
|
||||
|
||||
### 5.1 创建项目配置文件
|
||||
|
||||
在应用代码目录中创建`waypoint.hcl`文件:
|
||||
|
||||
```hcl
|
||||
project = "example-app"
|
||||
|
||||
app "web" {
|
||||
build {
|
||||
use "docker" {
|
||||
dockerfile = "Dockerfile"
|
||||
}
|
||||
}
|
||||
|
||||
deploy {
|
||||
use "nomad" {
|
||||
datacenter = "dc1"
|
||||
namespace = "default"
|
||||
|
||||
service_provider "consul" {
|
||||
service_name = "web"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### 5.2 初始化和部署项目
|
||||
|
||||
```bash
|
||||
# 初始化项目
|
||||
cd /path/to/app
|
||||
waypoint init
|
||||
|
||||
# 部署应用
|
||||
waypoint up
|
||||
```
|
||||
|
||||
## 6. 与现有工具集成
|
||||
|
||||
### 6.1 与Gitea Actions集成
|
||||
|
||||
创建一个Gitea Actions工作流文件`.gitea/workflows/waypoint.yml`:
|
||||
|
||||
```yaml
|
||||
name: Waypoint Deploy
|
||||
|
||||
on:
|
||||
push:
|
||||
branches: [ main ]
|
||||
|
||||
jobs:
|
||||
deploy:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v2
|
||||
|
||||
- name: Install Waypoint
|
||||
run: |
|
||||
curl -fsSL https://releases.hashicorp.com/waypoint/0.11.0/waypoint_0.11.0_linux_amd64.zip -o waypoint.zip
|
||||
unzip waypoint.zip
|
||||
sudo mv waypoint /usr/local/bin/
|
||||
|
||||
- name: Configure Waypoint
|
||||
run: |
|
||||
waypoint context create \
|
||||
-server-addr=${{ secrets.WAYPOINT_SERVER_ADDR }} \
|
||||
-server-auth-token=${{ secrets.WAYPOINT_AUTH_TOKEN }} \
|
||||
-set-default ci-context
|
||||
|
||||
- name: Deploy Application
|
||||
run: waypoint up -app=web
|
||||
```
|
||||
|
||||
### 6.2 与Vault集成
|
||||
|
||||
在`waypoint.hcl`中使用Vault获取敏感配置:
|
||||
|
||||
```hcl
|
||||
app "web" {
|
||||
deploy {
|
||||
use "nomad" {
|
||||
# 其他配置...
|
||||
|
||||
env {
|
||||
DB_PASSWORD = dynamic("vault", {
|
||||
path = "kv/data/app/db"
|
||||
key = "password"
|
||||
})
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## 7. 高级配置
|
||||
|
||||
### 7.1 配置蓝绿部署
|
||||
|
||||
```hcl
|
||||
app "web" {
|
||||
deploy {
|
||||
use "nomad" {
|
||||
# 基本配置...
|
||||
}
|
||||
}
|
||||
|
||||
release {
|
||||
use "nomad-bluegreen" {
|
||||
service = "web"
|
||||
datacenter = "dc1"
|
||||
namespace = "default"
|
||||
traffic_step = 25
|
||||
confirm_step = true
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### 7.2 配置金丝雀发布
|
||||
|
||||
```hcl
|
||||
app "web" {
|
||||
deploy {
|
||||
use "nomad" {
|
||||
# 基本配置...
|
||||
}
|
||||
}
|
||||
|
||||
release {
|
||||
use "nomad-canary" {
|
||||
service = "web"
|
||||
datacenter = "dc1"
|
||||
namespace = "default"
|
||||
|
||||
canary {
|
||||
percentage = 10
|
||||
duration = "5m"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### 7.3 配置自动回滚
|
||||
|
||||
```hcl
|
||||
app "web" {
|
||||
deploy {
|
||||
use "nomad" {
|
||||
# 基本配置...
|
||||
|
||||
health_check {
|
||||
timeout = "5m"
|
||||
check {
|
||||
name = "http-check"
|
||||
route = "/health"
|
||||
method = "GET"
|
||||
code = 200
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## 8. 监控和日志
|
||||
|
||||
### 8.1 查看部署状态
|
||||
|
||||
```bash
|
||||
# 查看所有应用
|
||||
waypoint project list
|
||||
|
||||
# 查看特定应用的部署
|
||||
waypoint deployment list -app=web
|
||||
|
||||
# 查看部署详情
|
||||
waypoint deployment inspect <deployment-id>
|
||||
```
|
||||
|
||||
### 8.2 查看应用日志
|
||||
|
||||
```bash
|
||||
# 查看应用日志
|
||||
waypoint logs -app=web
|
||||
```
|
||||
|
||||
## 9. 备份和恢复
|
||||
|
||||
### 9.1 备份Waypoint数据
|
||||
|
||||
```bash
|
||||
# 备份数据目录
|
||||
tar -czf waypoint-backup.tar.gz /opt/waypoint/data
|
||||
```
|
||||
|
||||
### 9.2 恢复Waypoint数据
|
||||
|
||||
```bash
|
||||
# 停止Waypoint服务
|
||||
nomad job stop waypoint-server
|
||||
|
||||
# 恢复数据
|
||||
rm -rf /opt/waypoint/data/*
|
||||
tar -xzf waypoint-backup.tar.gz -C /
|
||||
|
||||
# 重启服务
|
||||
nomad job run waypoint-server.nomad
|
||||
```
|
||||
|
||||
## 10. 故障排除
|
||||
|
||||
### 10.1 常见问题
|
||||
|
||||
1. **连接问题**:
|
||||
- 检查Waypoint服务器是否正常运行
|
||||
- 验证网络连接和防火墙规则
|
||||
|
||||
2. **部署失败**:
|
||||
- 检查Nomad集群状态
|
||||
- 查看详细的部署日志: `waypoint logs -app=<app> -deploy=<deployment-id>`
|
||||
|
||||
3. **权限问题**:
|
||||
- 确保Waypoint有足够的权限访问Nomad和Vault
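
上述三类问题可以先做一次初步分类。下面是一个示意函数(分类名与探测端点均为假设,仅作排查思路参考):根据 curl 的退出码与 HTTP 状态码,判断问题更可能属于网络连接、服务端异常还是权限:

```bash
#!/usr/bin/env bash
# 示意脚本:根据 curl 结果对故障做初步分类
classify_failure() {
  local curl_exit=$1 http_code=$2
  if [ "$curl_exit" -ne 0 ]; then
    echo "network"      # 连接问题:检查服务器与防火墙(问题 1)
  elif [ "$http_code" -ge 500 ]; then
    echo "server"       # 服务端异常:检查 Nomad 集群与部署日志(问题 2)
  elif [ "$http_code" -eq 403 ]; then
    echo "permission"   # 权限问题:检查 Waypoint 对 Nomad/Vault 的访问(问题 3)
  else
    echo "ok"
  fi
}

# 用法示例(端点为假设值):
# code=$(curl -s -o /dev/null -w '%{http_code}' http://warden:9701)
# classify_failure $? "$code"
```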
|
||||
|
||||
### 10.2 调试命令
|
||||
|
||||
```bash
|
||||
# 检查Waypoint服务器状态
|
||||
waypoint server info
|
||||
|
||||
# 验证Nomad连接
|
||||
waypoint config source-get -type=nomad
|
||||
|
||||
# 启用调试日志
|
||||
WAYPOINT_LOG=debug waypoint up
|
||||
```
|
||||
|
||||
## 11. 最佳实践
|
||||
|
||||
1. **模块化配置**: 将通用配置抽取到可重用的Waypoint插件中
|
||||
2. **环境变量**: 使用环境变量区分不同环境的配置
|
||||
3. **版本控制**: 将`waypoint.hcl`文件纳入版本控制
|
||||
4. **自动化测试**: 在部署前添加自动化测试步骤
|
||||
5. **监控集成**: 将部署状态与监控系统集成
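
以第 2 条为例,下面是一个用变量区分环境的 `waypoint.hcl` 片段(变量名与命名空间取值为示意,需按实际集群调整):

```hcl
variable "environment" {
  type    = string
  default = "development"
}

app "web" {
  deploy {
    use "nomad" {
      datacenter = "dc1"
      namespace  = var.environment == "production" ? "prod" : "default"
    }
  }
}
```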
|
||||
|
|
@ -0,0 +1,86 @@
|
|||
job "openfaas-functions" {
|
||||
datacenters = ["dc1"]
|
||||
type = "service"
|
||||
|
||||
group "hello-world" {
|
||||
count = 1
|
||||
|
||||
constraint {
|
||||
attribute = "${node.unique.name}"
|
||||
operator = "regexp"
|
||||
value = "(master|ash3c|hcp)"
|
||||
}
|
||||
|
||||
task "hello-world" {
|
||||
driver = "podman"
|
||||
|
||||
      config {
        image = "functions/hello-world:latest"
        ports = ["http"]
      }

      # env 是任务级配置块,不属于 podman 驱动的 config
      env {
        fprocess = "node index.js"
      }
|
||||
|
||||
resources {
|
||||
network {
|
||||
mbits = 10
|
||||
port "http" { static = 8080 }
|
||||
}
|
||||
}
|
||||
|
||||
service {
|
||||
name = "hello-world"
|
||||
port = "http"
|
||||
tags = ["openfaas-function"]
|
||||
check {
|
||||
type = "http"
|
||||
path = "/"
|
||||
interval = "10s"
|
||||
timeout = "2s"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
group "figlet" {
|
||||
count = 1
|
||||
|
||||
constraint {
|
||||
attribute = "${node.unique.name}"
|
||||
operator = "regexp"
|
||||
value = "(master|ash3c|hcp)"
|
||||
}
|
||||
|
||||
task "figlet" {
|
||||
driver = "podman"
|
||||
|
||||
      config {
        image = "functions/figlet:latest"
        ports = ["http"]
      }

      # env 是任务级配置块,不属于 podman 驱动的 config
      env {
        fprocess = "figlet"
      }
|
||||
|
||||
resources {
|
||||
network {
|
||||
mbits = 10
|
||||
port "http" { static = 8080 }
|
||||
}
|
||||
}
|
||||
|
||||
service {
|
||||
name = "figlet"
|
||||
port = "http"
|
||||
tags = ["openfaas-function"]
|
||||
check {
|
||||
type = "http"
|
||||
path = "/"
|
||||
interval = "10s"
|
||||
timeout = "2s"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
@ -0,0 +1,176 @@
|
|||
job "openfaas" {
|
||||
datacenters = ["dc1"]
|
||||
type = "service"
|
||||
|
||||
group "openfaas-gateway" {
|
||||
count = 1
|
||||
|
||||
constraint {
|
||||
attribute = "${node.unique.name}"
|
||||
operator = "regexp"
|
||||
value = "(master|ash3c|hcp)"
|
||||
}
|
||||
|
||||
task "openfaas-gateway" {
|
||||
driver = "podman"
|
||||
|
||||
      config {
        image = "ghcr.io/openfaas/gateway:0.2.35"
        ports = ["http", "ui"]
      }

      # env 是任务级配置块,不属于 podman 驱动的 config
      env {
        functions_provider_url = "http://${NOMAD_IP_http}:8080"
        read_timeout           = "60s"
        write_timeout          = "60s"
        upstream_timeout       = "60s"
        direct_functions       = "true"
        faas_nats_address      = "nats://localhost:4222"
        faas_nats_streaming    = "true"
        basic_auth             = "true"
        secret_mount_path      = "/run/secrets"
        scale_from_zero        = "true"
      }
|
||||
|
||||
resources {
|
||||
network {
|
||||
mbits = 10
|
||||
port "http" { static = 8080 }
|
||||
port "ui" { static = 8081 }
|
||||
}
|
||||
}
|
||||
|
||||
service {
|
||||
name = "openfaas-gateway"
|
||||
port = "http"
|
||||
check {
|
||||
type = "http"
|
||||
path = "/healthz"
|
||||
interval = "10s"
|
||||
timeout = "2s"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
group "nats" {
|
||||
count = 1
|
||||
|
||||
constraint {
|
||||
attribute = "${node.unique.name}"
|
||||
operator = "regexp"
|
||||
value = "(master|ash3c|hcp)"
|
||||
}
|
||||
|
||||
task "nats" {
|
||||
driver = "podman"
|
||||
|
||||
config {
|
||||
image = "nats-streaming:0.25.3"
|
||||
ports = ["nats"]
|
||||
args = [
|
||||
"-p",
|
||||
"4222",
|
||||
"-m",
|
||||
"8222",
|
||||
"-hbi",
|
||||
"5s",
|
||||
"-hbt",
|
||||
"5s",
|
||||
"-hbf",
|
||||
"2",
|
||||
"-SD",
|
||||
"-cid",
|
||||
"openfaas"
|
||||
]
|
||||
}
|
||||
|
||||
resources {
|
||||
network {
|
||||
mbits = 10
|
||||
port "nats" { static = 4222 }
|
||||
}
|
||||
}
|
||||
|
||||
service {
|
||||
name = "nats"
|
||||
port = "nats"
|
||||
check {
|
||||
type = "tcp"
|
||||
interval = "10s"
|
||||
timeout = "2s"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
group "queue-worker" {
|
||||
count = 1
|
||||
|
||||
constraint {
|
||||
attribute = "${node.unique.name}"
|
||||
operator = "regexp"
|
||||
value = "(master|ash3c|hcp)"
|
||||
}
|
||||
|
||||
task "queue-worker" {
|
||||
driver = "podman"
|
||||
|
||||
      config {
        image = "ghcr.io/openfaas/queue-worker:0.12.2"
      }

      # env 是任务级配置块,不属于 podman 驱动的 config
      env {
        gateway_url         = "http://${NOMAD_IP_http}:8080"
        faas_nats_address   = "nats://localhost:4222"
        faas_nats_streaming = "true"
        ack_wait            = "5m"
        write_debug         = "true"
      }
|
||||
|
||||
resources {
|
||||
network {
|
||||
mbits = 10
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
group "prometheus" {
|
||||
count = 1
|
||||
|
||||
constraint {
|
||||
attribute = "${node.unique.name}"
|
||||
operator = "regexp"
|
||||
value = "(master|ash3c|hcp)"
|
||||
}
|
||||
|
||||
task "prometheus" {
|
||||
driver = "podman"
|
||||
|
||||
config {
|
||||
image = "prom/prometheus:v2.35.0"
|
||||
ports = ["prometheus"]
|
||||
volumes = [
|
||||
"/opt/openfaas/prometheus.yml:/etc/prometheus/prometheus.yml"
|
||||
]
|
||||
}
|
||||
|
||||
resources {
|
||||
network {
|
||||
mbits = 10
|
||||
port "prometheus" { static = 9090 }
|
||||
}
|
||||
}
|
||||
|
||||
service {
|
||||
name = "prometheus"
|
||||
port = "prometheus"
|
||||
check {
|
||||
type = "http"
|
||||
path = "/-/healthy"
|
||||
interval = "10s"
|
||||
timeout = "2s"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
@ -0,0 +1,78 @@
|
|||
job "traefik" {
|
||||
datacenters = ["dc1"]
|
||||
type = "service"
|
||||
|
||||
update {
|
||||
canary           = 1
|
||||
max_parallel = 1
|
||||
min_healthy_time = "10s"
|
||||
healthy_deadline = "3m"
|
||||
auto_revert = true
|
||||
}
|
||||
|
||||
group "traefik" {
|
||||
count = 3
|
||||
|
||||
restart {
|
||||
attempts = 3
|
||||
interval = "30m"
|
||||
delay = "15s"
|
||||
mode = "fail"
|
||||
}
|
||||
|
||||
network {
|
||||
port "http" {
|
||||
static = 80
|
||||
}
|
||||
port "https" {
|
||||
static = 443
|
||||
}
|
||||
port "api" {
|
||||
static = 8080
|
||||
}
|
||||
}
|
||||
|
||||
task "traefik" {
|
||||
driver = "podman"
|
||||
|
||||
config {
|
||||
image = "traefik:latest"
|
||||
ports = ["http", "https", "api"]
|
||||
volumes = [
|
||||
"/var/run/docker.sock:/var/run/docker.sock:ro", # 如果需要与Docker集成
|
||||
"/root/mgmt/configs/traefik.yml:/etc/traefik/traefik.yml:ro",
|
||||
"/root/mgmt/configs/dynamic:/etc/traefik/dynamic:ro"
|
||||
]
|
||||
}
|
||||
|
||||
env {
|
||||
NOMAD_ADDR = "http://${attr.unique.network.ip-address}:4646"
|
||||
CONSUL_HTTP_ADDR = "http://${attr.unique.network.ip-address}:8500"
|
||||
}
|
||||
|
||||
resources {
|
||||
cpu = 200
|
||||
memory = 256
|
||||
}
|
||||
|
||||
service {
|
||||
name = "traefik"
|
||||
port = "http"
|
||||
tags = [
|
||||
"traefik.enable=true",
|
||||
"traefik.http.routers.api.rule=Host(`traefik.service.consul`)",
|
||||
"traefik.http.routers.api.service=api@internal",
|
||||
"traefik.http.routers.api.entrypoints=api",
|
||||
"traefik.http.services.api.loadbalancer.server.port=8080"
|
||||
]
|
||||
|
||||
check {
|
||||
type = "http"
|
||||
path = "/ping"
|
||||
interval = "10s"
|
||||
timeout = "2s"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
@ -0,0 +1 @@
|
|||
/mnt/fnsync/mcp/mcp_shared_config.json
|
||||
|
|
@ -0,0 +1,193 @@
|
|||
#!/bin/bash
|
||||
|
||||
# Nomad 集群领导者发现与访问脚本
|
||||
# 此脚本自动发现当前 Nomad 集群领导者并执行相应命令
|
||||
|
||||
# 默认服务器列表(可根据实际情况修改)
|
||||
SERVERS=(
|
||||
"100.116.158.95" # bj-semaphore.global
|
||||
"100.81.26.3" # ash1d.global
|
||||
"100.103.147.94" # ash2e.global
|
||||
"100.90.159.68" # ch2.global
|
||||
"100.86.141.112" # ch3.global
|
||||
"100.98.209.50" # bj-onecloud1.global
|
||||
"100.120.225.29" # de.global
|
||||
)
|
||||
|
||||
# 超时设置(秒)
|
||||
TIMEOUT=5
|
||||
|
||||
# 颜色输出
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# 打印帮助信息
|
||||
function show_help() {
|
||||
echo "Nomad 集群领导者发现与访问脚本"
|
||||
echo ""
|
||||
echo "用法: $0 [选项] [nomad命令]"
|
||||
echo ""
|
||||
echo "选项:"
|
||||
echo " -h, --help 显示此帮助信息"
|
||||
echo " -s, --server IP 指定初始服务器IP"
|
||||
echo " -t, --timeout SECS 设置超时时间(默认: $TIMEOUT 秒)"
|
||||
echo " -l, --list-servers 列出所有配置的服务器"
|
||||
echo " -c, --check-leader 仅检查领导者,不执行命令"
|
||||
echo ""
|
||||
echo "示例:"
|
||||
echo " $0 node status # 使用自动发现的领导者查看节点状态"
|
||||
echo " $0 -s 100.116.158.95 job status # 指定初始服务器查看作业状态"
|
||||
echo " $0 -c # 仅检查当前领导者"
|
||||
echo ""
|
||||
}
|
||||
|
||||
# 列出所有配置的服务器
|
||||
function list_servers() {
|
||||
echo -e "${YELLOW}配置的服务器列表:${NC}"
|
||||
for server in "${SERVERS[@]}"; do
|
||||
echo " - $server"
|
||||
done
|
||||
}
|
||||
|
||||
# 发现领导者
|
||||
function discover_leader() {
|
||||
local initial_server=$1
|
||||
|
||||
# 如果指定了初始服务器,先尝试使用它
|
||||
if [ -n "$initial_server" ]; then
|
||||
echo -e "${YELLOW}尝试从服务器 $initial_server 发现领导者...${NC}" >&2
|
||||
leader=$(curl -s --max-time $TIMEOUT "http://${initial_server}:4646/v1/status/leader" 2>/dev/null | sed 's/"//g')
|
||||
if [ -n "$leader" ]; then
|
||||
# 将RPC端口(4647)替换为HTTP端口(4646)
|
||||
leader=$(echo "$leader" | sed 's/:4647$/:4646/')
|
||||
echo -e "${GREEN}发现领导者: $leader${NC}" >&2
|
||||
echo "$leader"
|
||||
return 0
|
||||
fi
|
||||
echo -e "${RED}无法从 $initial_server 获取领导者信息${NC}" >&2
|
||||
fi
|
||||
|
||||
# 遍历所有服务器尝试发现领导者
|
||||
echo -e "${YELLOW}遍历所有服务器寻找领导者...${NC}" >&2
|
||||
for server in "${SERVERS[@]}"; do
|
||||
echo -n " 检查 $server ... " >&2
|
||||
leader=$(curl -s --max-time $TIMEOUT "http://${server}:4646/v1/status/leader" 2>/dev/null | sed 's/"//g')
|
||||
if [ -n "$leader" ]; then
|
||||
# 将RPC端口(4647)替换为HTTP端口(4646)
|
||||
leader=$(echo "$leader" | sed 's/:4647$/:4646/')
|
||||
echo -e "${GREEN}成功${NC}" >&2
|
||||
echo -e "${GREEN}发现领导者: $leader${NC}" >&2
|
||||
echo "$leader"
|
||||
return 0
|
||||
else
|
||||
echo -e "${RED}失败${NC}" >&2
|
||||
fi
|
||||
done
|
||||
|
||||
echo -e "${RED}无法发现领导者,请检查集群状态${NC}" >&2
|
||||
return 1
|
||||
}
|
||||
|
||||
# 解析命令行参数
|
||||
INITIAL_SERVER=""
|
||||
CHECK_LEADER_ONLY=false
|
||||
NOMAD_COMMAND=()
|
||||
|
||||
while [[ $# -gt 0 ]]; do
|
||||
case $1 in
|
||||
-h|--help)
|
||||
show_help
|
||||
exit 0
|
||||
;;
|
||||
-s|--server)
|
||||
INITIAL_SERVER="$2"
|
||||
shift 2
|
||||
;;
|
||||
-t|--timeout)
|
||||
TIMEOUT="$2"
|
||||
shift 2
|
||||
;;
|
||||
-l|--list-servers)
|
||||
list_servers
|
||||
exit 0
|
||||
;;
|
||||
-c|--check-leader)
|
||||
CHECK_LEADER_ONLY=true
|
||||
shift
|
||||
;;
|
||||
*)
|
||||
NOMAD_COMMAND+=("$1")
|
||||
shift
|
||||
;;
|
||||
esac
|
||||
done
|
||||
|
||||
# 主逻辑
|
||||
echo -e "${YELLOW}Nomad 集群领导者发现与访问脚本${NC}" >&2
|
||||
echo "==================================" >&2
|
||||
|
||||
# 发现领导者
|
||||
LEADER=$(discover_leader "$INITIAL_SERVER")
|
||||
if [ $? -ne 0 ]; then
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# 提取领导者IP和端口
|
||||
LEADER_IP=$(echo "$LEADER" | cut -d':' -f1)
|
||||
LEADER_PORT=$(echo "$LEADER" | cut -d':' -f2)
|
||||
|
||||
# 如果仅检查领导者,则退出
|
||||
if [ "$CHECK_LEADER_ONLY" = true ]; then
|
||||
echo -e "${GREEN}当前领导者: $LEADER${NC}" >&2
|
||||
exit 0
|
||||
fi
|
||||
|
||||
# 如果没有指定命令,显示交互式菜单
|
||||
if [ ${#NOMAD_COMMAND[@]} -eq 0 ]; then
|
||||
echo -e "${YELLOW}未指定命令,请选择要执行的操作:${NC}" >&2
|
||||
echo "1) 查看节点状态" >&2
|
||||
echo "2) 查看作业状态" >&2
|
||||
echo "3) 查看服务器成员" >&2
|
||||
echo "4) 查看集群状态" >&2
|
||||
echo "5) 自定义命令" >&2
|
||||
echo "0) 退出" >&2
|
||||
|
||||
read -p "请输入选项 (0-5): " choice
|
||||
|
||||
case $choice in
|
||||
1) NOMAD_COMMAND=("node" "status") ;;
|
||||
2) NOMAD_COMMAND=("job" "status") ;;
|
||||
3) NOMAD_COMMAND=("server" "members") ;;
|
||||
4) NOMAD_COMMAND=("operator" "raft" "list-peers") ;;
|
||||
5)
|
||||
read -p "请输入完整的 Nomad 命令: " -a NOMAD_COMMAND
|
||||
;;
|
||||
0) exit 0 ;;
|
||||
*)
|
||||
echo -e "${RED}无效选项${NC}" >&2
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
fi
|
||||
|
||||
# 执行命令
|
||||
echo -e "${YELLOW}执行命令: nomad ${NOMAD_COMMAND[*]} -address=http://${LEADER}${NC}" >&2
|
||||
nomad "${NOMAD_COMMAND[@]}" -address="http://${LEADER}"
|
||||
|
||||
# 检查命令执行结果
|
||||
if [ $? -eq 0 ]; then
|
||||
echo -e "${GREEN}命令执行成功${NC}" >&2
|
||||
else
|
||||
echo -e "${RED}命令执行失败,可能需要重新发现领导者${NC}" >&2
|
||||
echo -e "${YELLOW}尝试重新发现领导者...${NC}" >&2
|
||||
NEW_LEADER=$(discover_leader)
|
||||
if [ $? -eq 0 ] && [ "$NEW_LEADER" != "$LEADER" ]; then
|
||||
echo -e "${YELLOW}领导者已更改,重新执行命令...${NC}" >&2
|
||||
nomad "${NOMAD_COMMAND[@]}" -address="http://${NEW_LEADER}"
|
||||
else
|
||||
echo -e "${RED}无法恢复,请检查集群状态${NC}" >&2
|
||||
exit 1
|
||||
fi
|
||||
fi
|
||||
|
|
@ -0,0 +1,275 @@
|
|||
#!/bin/bash
|
||||
|
||||
# Traefik部署测试脚本
|
||||
# 用于测试Traefik在Nomad集群中的部署和功能
|
||||
|
||||
set -e
|
||||
|
||||
# 颜色定义
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# 日志函数
|
||||
log_info() {
|
||||
echo -e "${GREEN}[INFO]${NC} $1"
|
||||
}
|
||||
|
||||
log_warn() {
|
||||
echo -e "${YELLOW}[WARN]${NC} $1"
|
||||
}
|
||||
|
||||
log_error() {
|
||||
echo -e "${RED}[ERROR]${NC} $1"
|
||||
}
|
||||
|
||||
# 检查Nomad集群状态
|
||||
check_nomad_cluster() {
|
||||
log_info "检查Nomad集群状态..."
|
||||
|
||||
# 使用我们之前创建的领导者发现脚本
|
||||
if [ -f "/root/mgmt/scripts/nomad-leader-discovery.sh" ]; then
|
||||
chmod +x /root/mgmt/scripts/nomad-leader-discovery.sh
|
||||
LEADER_INFO=$(/root/mgmt/scripts/nomad-leader-discovery.sh -c 2>&1)
|
||||
log_info "Nomad领导者信息: $LEADER_INFO"
|
||||
else
|
||||
log_warn "未找到Nomad领导者发现脚本,使用默认方式检查"
|
||||
nomad server members 2>/dev/null || log_error "无法连接到Nomad集群"
|
||||
fi
|
||||
}
|
||||
|
||||
# 检查Consul集群状态
|
||||
check_consul_cluster() {
|
||||
log_info "检查Consul集群状态..."
|
||||
|
||||
consul members 2>/dev/null || log_error "无法连接到Consul集群"
|
||||
|
||||
# 检查Consul领导者
|
||||
CONSUL_LEADER=$(curl -s http://127.0.0.1:8500/v1/status/leader)
|
||||
if [ -n "$CONSUL_LEADER" ]; then
|
||||
log_info "Consul领导者: $CONSUL_LEADER"
|
||||
else
|
||||
log_error "无法获取Consul领导者信息"
|
||||
fi
|
||||
}
|
||||
|
||||
# 部署Traefik
|
||||
deploy_traefik() {
|
||||
log_info "部署Traefik..."
|
||||
|
||||
# 检查作业文件是否存在
|
||||
if [ ! -f "/root/mgmt/jobs/traefik.nomad" ]; then
|
||||
log_error "Traefik作业文件不存在: /root/mgmt/jobs/traefik.nomad"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# 部署作业
|
||||
nomad run /root/mgmt/jobs/traefik.nomad
|
||||
|
||||
# 等待部署完成
|
||||
log_info "等待Traefik部署完成..."
|
||||
sleep 10
|
||||
|
||||
# 检查作业状态
|
||||
nomad status traefik
|
||||
}
|
||||
|
||||
# 检查Traefik状态
|
||||
check_traefik_status() {
|
||||
log_info "检查Traefik状态..."
|
||||
|
||||
# 检查作业状态
|
||||
JOB_STATUS=$(nomad job status -json traefik | jq -r '.Status')
|
||||
if [ "$JOB_STATUS" == "running" ]; then
|
||||
log_info "Traefik作业状态: $JOB_STATUS"
|
||||
else
|
||||
log_error "Traefik作业状态异常: $JOB_STATUS"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# 检查分配状态
|
||||
ALLOCATIONS=$(nomad job allocs traefik | tail -n +3 | head -n -1 | awk '{print $1}')
|
||||
for alloc in $ALLOCATIONS; do
|
||||
alloc_status=$(nomad alloc status -json "$alloc" | jq -r '.ClientStatus')
|
||||
if [ "$alloc_status" == "running" ]; then
|
||||
log_info "分配 $alloc 状态: $alloc_status"
|
||||
else
|
||||
log_error "分配 $alloc 状态异常: $alloc_status"
|
||||
fi
|
||||
done
|
||||
|
||||
# 检查服务注册
|
||||
log_info "检查Consul中的服务注册..."
|
||||
consul catalog services | grep traefik && log_info "Traefik服务已注册到Consul" || log_warn "Traefik服务未注册到Consul"
|
||||
}
|
||||
|
||||
# 测试Traefik功能
|
||||
test_traefik_functionality() {
|
||||
log_info "测试Traefik功能..."
|
||||
|
||||
# 获取Traefik服务地址
|
||||
TRAEFIK_ADDR=$(curl -s http://127.0.0.1:8500/v1/catalog/service/traefik | jq -r '.[0].ServiceAddress' 2>/dev/null)
|
||||
if [ -z "$TRAEFIK_ADDR" ]; then
|
||||
log_warn "无法从Consul获取Traefik地址,使用本地地址"
|
||||
TRAEFIK_ADDR="127.0.0.1"
|
||||
fi
|
||||
|
||||
# 测试API端点
|
||||
log_info "测试Traefik API端点..."
|
||||
if curl -s http://$TRAEFIK_ADDR:8080/ping > /dev/null; then
|
||||
log_info "Traefik API端点响应正常"
|
||||
else
|
||||
log_error "Traefik API端点无响应"
|
||||
fi
|
||||
|
||||
# 测试仪表板
|
||||
log_info "测试Traefik仪表板..."
|
||||
if curl -s http://$TRAEFIK_ADDR:8080/dashboard/ > /dev/null; then
|
||||
log_info "Traefik仪表板可访问"
|
||||
else
|
||||
log_error "无法访问Traefik仪表板"
|
||||
fi
|
||||
|
||||
# 测试HTTP入口点
|
||||
log_info "测试HTTP入口点..."
|
||||
if curl -s -I http://$TRAEFIK_ADDR:80 | grep -q "Location: https://"; then
|
||||
log_info "HTTP到HTTPS重定向正常工作"
|
||||
else
|
||||
log_warn "HTTP到HTTPS重定向可能未正常工作"
|
||||
fi
|
||||
}
|
||||
|
||||
# 创建测试服务
|
||||
create_test_service() {
|
||||
log_info "创建测试服务..."
|
||||
|
||||
# 创建一个简单的测试服务作业文件
|
||||
cat > /tmp/test-service.nomad << EOF
|
||||
job "test-web" {
|
||||
datacenters = ["dc1"]
|
||||
type = "service"
|
||||
|
||||
group "web" {
|
||||
count = 1
|
||||
|
||||
network {
|
||||
port "http" {
|
||||
to = 8080
|
||||
}
|
||||
}
|
||||
|
||||
task "nginx" {
|
||||
driver = "podman"
|
||||
|
||||
config {
|
||||
image = "nginx:alpine"
|
||||
ports = ["http"]
|
||||
}
|
||||
|
||||
resources {
|
||||
cpu = 100
|
||||
memory = 64
|
||||
}
|
||||
|
||||
service {
|
||||
name = "test-web"
|
||||
port = "http"
|
||||
tags = [
|
||||
"traefik.enable=true",
|
||||
"traefik.http.routers.test-web.rule=Host(`test-web.service.consul`)",
|
||||
"traefik.http.routers.test-web.entrypoints=https"
|
||||
]
|
||||
|
||||
check {
|
||||
type = "http"
|
||||
path = "/"
|
||||
interval = "10s"
|
||||
timeout = "2s"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
EOF
|
||||
|
||||
# 部署测试服务
|
||||
nomad run /tmp/test-service.nomad
|
||||
|
||||
# 等待服务启动
|
||||
sleep 15
|
||||
|
||||
# 测试服务是否可通过Traefik访问
|
||||
    log_info "测试服务是否可通过Traefik访问..."
    TRAEFIK_ADDR="${TRAEFIK_ADDR:-127.0.0.1}"  # 单独运行 test-service 时回退到本地地址
    if curl -s -H "Host: test-web.service.consul" "http://$TRAEFIK_ADDR:80" | grep -q "Welcome to nginx"; then
|
||||
log_info "测试服务可通过Traefik正常访问"
|
||||
else
|
||||
log_error "无法通过Traefik访问测试服务"
|
||||
fi
|
||||
}
|
||||
|
||||
# 清理测试资源
|
||||
cleanup_test_resources() {
|
||||
log_info "清理测试资源..."
|
||||
|
||||
# 停止测试服务
|
||||
    nomad job stop -purge test-web 2>/dev/null || true
|
||||
|
||||
# 停止Traefik
|
||||
    nomad job stop -purge traefik 2>/dev/null || true
|
||||
|
||||
# 删除临时文件
|
||||
rm -f /tmp/test-service.nomad
|
||||
|
||||
log_info "清理完成"
|
||||
}
|
||||
|
||||
# 主函数
|
||||
main() {
|
||||
case "${1:-all}" in
|
||||
"check")
|
||||
check_nomad_cluster
|
||||
check_consul_cluster
|
||||
;;
|
||||
"deploy")
|
||||
deploy_traefik
|
||||
;;
|
||||
"status")
|
||||
check_traefik_status
|
||||
;;
|
||||
"test")
|
||||
test_traefik_functionality
|
||||
;;
|
||||
"test-service")
|
||||
create_test_service
|
||||
;;
|
||||
"cleanup")
|
||||
cleanup_test_resources
|
||||
;;
|
||||
"all")
|
||||
check_nomad_cluster
|
||||
check_consul_cluster
|
||||
deploy_traefik
|
||||
check_traefik_status
|
||||
test_traefik_functionality
|
||||
create_test_service
|
||||
log_info "所有测试完成"
|
||||
;;
|
||||
*)
|
||||
echo "用法: $0 {check|deploy|status|test|test-service|cleanup|all}"
|
||||
echo " check - 检查集群状态"
|
||||
echo " deploy - 部署Traefik"
|
||||
echo " status - 检查Traefik状态"
|
||||
echo " test - 测试Traefik功能"
|
||||
echo " test-service - 创建并测试示例服务"
|
||||
echo " cleanup - 清理测试资源"
|
||||
echo " all - 执行所有步骤(默认)"
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
||||
# 执行主函数
|
||||
main "$@"
|
||||
|
|
@ -0,0 +1,89 @@
|
|||
#!/bin/bash
|
||||
|
||||
# 链接所有MCP配置文件的脚本
|
||||
# 该脚本将所有IDE和AI助手的MCP配置链接到NFS共享的配置文件
|
||||
|
||||
NFS_CONFIG="/mnt/fnsync/mcp/mcp_shared_config.json"
|
||||
|
||||
echo "链接所有MCP配置文件到NFS共享配置..."
|
||||
|
||||
# 检查NFS配置文件是否存在
|
||||
if [ ! -f "$NFS_CONFIG" ]; then
|
||||
echo "错误: NFS配置文件不存在: $NFS_CONFIG"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "✓ 使用NFS共享配置作为基准: $NFS_CONFIG"
|
||||
|
||||
# 定义所有可能的MCP配置位置
|
||||
CONFIGS=(
|
||||
# Kilo Code IDE
|
||||
"../.trae-cn-server/data/User/globalStorage/kilocode.kilo-code/settings/mcp_settings.json"
|
||||
"../.trae-server/data/User/globalStorage/kilocode.kilo-code/settings/mcp_settings.json"
|
||||
"../.trae-aicc/data/User/globalStorage/kilocode.kilo-code/settings/mcp_settings.json"
|
||||
|
||||
# Tencent CodeBuddy
|
||||
"$HOME/.codebuddy-server/data/User/globalStorage/tencent.planning-genie/settings/codebuddy_mcp_settings.json"
|
||||
"$HOME/.codebuddy/data/User/globalStorage/tencent.planning-genie/settings/codebuddy_mcp_settings.json"
|
||||
# 新增的CodeBuddy-CN
|
||||
"$HOME/.codebuddy-server-cn/data/User/globalStorage/tencent.planning-genie/settings/codebuddy_mcp_settings.json"
|
||||
|
||||
# Claude相关
|
||||
"$HOME/.claude.json"
|
||||
"$HOME/.claude.json.backup"
|
||||
"$HOME/.config/claude/settings/mcp_settings.json"
|
||||
|
||||
# Cursor
|
||||
"$HOME/.cursor-server/data/User/globalStorage/xxx.cursor/settings/mcp_settings.json"
|
||||
|
||||
# Qoder
|
||||
"$HOME/.qoder-server/data/User/globalStorage/xxx.qoder/settings/mcp_settings.json"
|
||||
|
||||
# Cline
|
||||
"$HOME/.codebuddy-server/data/User/globalStorage/rooveterinaryinc.roo-cline/settings/mcp_settings.json"
|
||||
"$HOME/Cline/settings/mcp_settings.json"
|
||||
|
||||
# Kiro
|
||||
"$HOME/.kiro-server/data/User/globalStorage/xxx.kiro/settings/mcp_settings.json"
|
||||
|
||||
# Qwen
|
||||
"$HOME/.qwen/settings/mcp_settings.json"
|
||||
|
||||
# VSCodium
|
||||
"$HOME/.vscodium-server/data/User/globalStorage/xxx.vscodium/settings/mcp_settings.json"
|
||||
|
||||
# Other potential locations
|
||||
".kilocode/mcp.json"
|
||||
"$HOME/.config/Qoder/SharedClientCache/mcp.json"
|
||||
"$HOME/.trae-server/data/Machine/mcp.json"
|
||||
"$HOME/.trae-cn-server/data/Machine/mcp.json"
|
||||
"$HOME/.codegeex/agent/configs/user_mcp_config.json"
|
||||
"$HOME/.codegeex/agent/configs/mcp_config.json"
|
||||
)
|
||||
|
||||
# 链接到每个配置位置
|
||||
for config_path in "${CONFIGS[@]}"; do
|
||||
if [ -n "$config_path" ]; then
|
||||
config_dir=$(dirname "$config_path")
|
||||
if [ -d "$config_dir" ]; then
|
||||
# 如果目标文件已存在,先备份
|
||||
if [ -f "$config_path" ]; then
|
||||
mv "$config_path" "${config_path}.backup"
|
||||
echo "✓ 原配置文件已备份: ${config_path}.backup"
|
||||
fi
|
||||
|
||||
# 创建符号链接
|
||||
ln -s "$NFS_CONFIG" "$config_path" 2>/dev/null
|
||||
if [ $? -eq 0 ]; then
|
||||
echo "✓ 已创建链接到: $config_path"
|
||||
else
|
||||
echo "✗ 创建链接失败: $config_path"
|
||||
fi
|
||||
else
|
||||
echo "✗ 目录不存在: $config_dir"
|
||||
fi
|
||||
fi
|
||||
done
|
||||
|
||||
echo "所有MCP配置链接完成!"
|
||||
echo "所有IDE和AI助手现在都使用NFS共享的MCP配置文件: $NFS_CONFIG"
|
||||
|
|
@ -8,13 +8,7 @@ terraform {
|
|||
# Oracle Cloud Infrastructure
|
||||
oci = {
|
||||
source = "oracle/oci"
|
||||
version = "~> 5.0"
|
||||
}
|
||||
|
||||
# 华为云
|
||||
huaweicloud = {
|
||||
source = "huaweicloud/huaweicloud"
|
||||
version = "~> 1.60"
|
||||
version = "~> 7.20"
|
||||
}
|
||||
|
||||
# 其他常用提供商
|
||||
|
|
@ -32,6 +26,18 @@ terraform {
|
|||
source = "hashicorp/local"
|
||||
version = "~> 2.1"
|
||||
}
|
||||
|
||||
# Consul Provider
|
||||
consul = {
|
||||
source = "hashicorp/consul"
|
||||
version = "~> 2.22.0"
|
||||
}
|
||||
|
||||
# HashiCorp Vault Provider
|
||||
vault = {
|
||||
source = "hashicorp/vault"
|
||||
version = "~> 4.0"
|
||||
}
|
||||
}
|
||||
|
||||
# 后端配置
|
||||
|
|
@@ -40,21 +46,87 @@ terraform {
   }
 }

-# Oracle Cloud provider configuration
-provider "oci" {
-  tenancy_ocid     = var.oci_config.tenancy_ocid
-  user_ocid        = var.oci_config.user_ocid
-  fingerprint      = var.oci_config.fingerprint
-  private_key_path = var.oci_config.private_key_path
-  region           = var.oci_config.region
-}
+# Save the private key fetched from Consul to a temporary file
+resource "local_file" "oci_kr_private_key" {
+  content  = data.consul_keys.oracle_config.var.private_key
+  filename = "/tmp/oci_kr_private_key.pem"
+}

-# Huawei Cloud provider configuration (only when needed)
-provider "huaweicloud" {
-  access_key = var.huawei_config.access_key
-  secret_key = var.huawei_config.secret_key
-  region     = var.huawei_config.region
-  alias      = "huawei"
-}
+resource "local_file" "oci_us_private_key" {
+  content  = data.consul_keys.oracle_config_us.var.private_key
+  filename = "/tmp/oci_us_private_key.pem"
+}
+
+# Consul provider configuration
+provider "consul" {
+  address    = "localhost:8500"
+  scheme     = "http"
+  datacenter = "dc1"
+}
+
+# Vault provider configuration
+provider "vault" {
+  address = var.vault_config.address
+  token   = var.vault_token
+}
+
+# Fetch the Oracle Cloud configuration from Consul
+data "consul_keys" "oracle_config" {
+  key {
+    name = "tenancy_ocid"
+    path = "config/dev/oracle/kr/tenancy_ocid"
+  }
+  key {
+    name = "user_ocid"
+    path = "config/dev/oracle/kr/user_ocid"
+  }
+  key {
+    name = "fingerprint"
+    path = "config/dev/oracle/kr/fingerprint"
+  }
+  key {
+    name = "private_key"
+    path = "config/dev/oracle/kr/private_key"
+  }
+}
+
+# Fetch the Oracle Cloud US-region configuration from Consul
+data "consul_keys" "oracle_config_us" {
+  key {
+    name = "tenancy_ocid"
+    path = "config/dev/oracle/us/tenancy_ocid"
+  }
+  key {
+    name = "user_ocid"
+    path = "config/dev/oracle/us/user_ocid"
+  }
+  key {
+    name = "fingerprint"
+    path = "config/dev/oracle/us/fingerprint"
+  }
+  key {
+    name = "private_key"
+    path = "config/dev/oracle/us/private_key"
+  }
+}
+
+# OCI provider using the configuration fetched from Consul
+provider "oci" {
+  tenancy_ocid     = data.consul_keys.oracle_config.var.tenancy_ocid
+  user_ocid        = data.consul_keys.oracle_config.var.user_ocid
+  fingerprint      = data.consul_keys.oracle_config.var.fingerprint
+  private_key_path = local_file.oci_kr_private_key.filename
+  region           = "ap-chuncheon-1"
+}
+
+# US-region OCI provider
+provider "oci" {
+  alias            = "us"
+  tenancy_ocid     = data.consul_keys.oracle_config_us.var.tenancy_ocid
+  user_ocid        = data.consul_keys.oracle_config_us.var.user_ocid
+  fingerprint      = data.consul_keys.oracle_config_us.var.fingerprint
+  private_key_path = local_file.oci_us_private_key.filename
+  region           = "us-ashburn-1"
+}

 # Oracle Cloud infrastructure
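The `consul_keys` data sources in the hunk above expect the Oracle credentials to already exist under `config/dev/oracle/<region>/…` in Consul's KV store. A dry-run sketch of how those paths might be seeded follows; it only prints the `consul kv put` commands, the `seed_commands` helper is illustrative, and `<value>` is a placeholder, not data from the commit:

```shell
#!/usr/bin/env bash
# Dry-run generator: print the `consul kv put` commands that would seed the
# KV paths read by the consul_keys data sources. Values are placeholders.
seed_commands() {
    local region="$1"   # kr or us, matching the data sources above
    local key
    for key in tenancy_ocid user_ocid fingerprint private_key; do
        printf 'consul kv put config/dev/oracle/%s/%s <value>\n' "$region" "$key"
    done
}

seed_commands kr
seed_commands us
```

Piping the output through a review step before executing keeps secrets out of shell history until you are ready to commit them to the KV store.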
@@ -68,7 +140,15 @@ module "oracle_cloud" {
   vpc_cidr           = var.vpc_cidr
   availability_zones = var.availability_zones
   common_tags        = var.common_tags
-  oci_config         = var.oci_config
+
+  # Use the configuration fetched from Consul
+  oci_config = {
+    tenancy_ocid     = data.consul_keys.oracle_config.var.tenancy_ocid
+    user_ocid        = data.consul_keys.oracle_config.var.user_ocid
+    fingerprint      = data.consul_keys.oracle_config.var.fingerprint
+    private_key_path = local_file.oci_kr_private_key.filename
+    region           = "ap-chuncheon-1"
+  }

   # Development-environment-specific configuration
   instance_count = 1
@@ -79,31 +159,8 @@ module "oracle_cloud" {
   }
 }

-# Huawei Cloud infrastructure (optional)
-module "huawei_cloud" {
-  source = "../../providers/huawei-cloud"
-  count  = contains(var.cloud_providers, "huawei") ? 1 : 0
-
-  environment        = var.environment
-  project_name       = var.project_name
-  owner              = var.owner
-  vpc_cidr           = "10.1.0.0/16" # different CIDR to avoid conflicts
-  availability_zones = var.availability_zones
-  common_tags        = var.common_tags
-  huawei_config      = var.huawei_config
-
-  providers = {
-    huaweicloud = huaweicloud.huawei
-  }
-}
-
 # Outputs
 output "oracle_cloud_outputs" {
   description = "Oracle Cloud infrastructure outputs"
   value       = module.oracle_cloud
 }
-
-output "huawei_cloud_outputs" {
-  description = "Huawei Cloud infrastructure outputs"
-  value       = length(module.huawei_cloud) > 0 ? module.huawei_cloud[0] : null
-}
@@ -130,4 +130,25 @@ variable "do_config" {
     region = "sgp1"
   }
   sensitive = true
 }
+
+# HashiCorp Vault configuration
+variable "vault_config" {
+  description = "HashiCorp Vault configuration"
+  type = object({
+    address = string
+    token   = string
+  })
+  default = {
+    address = "http://localhost:8200"
+    token   = ""
+  }
+  sensitive = true
+}
+
+variable "vault_token" {
+  description = "Vault access token"
+  type        = string
+  default     = ""
+  sensitive   = true
+}
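Since `vault_token` is declared `sensitive` with an empty default, it is presumably meant to be supplied at run time rather than committed. One standard way is Terraform's `TF_VAR_` environment-variable convention; the token value below is a placeholder, not one from the commit:

```shell
# Terraform maps any TF_VAR_<name> environment variable onto variable <name>,
# so the token never has to live in a .tfvars file or in version control.
export TF_VAR_vault_token="s.example-placeholder-token"   # placeholder, not a real token

# terraform plan   # would now receive var.vault_token from the environment
```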
@@ -7,7 +7,7 @@ terraform {
   required_providers {
     oci = {
       source  = "oracle/oci"
-      version = "~> 5.0"
+      version = "~> 7.20"
     }
     huaweicloud = {
       source = "huaweicloud/huaweicloud"
@@ -5,7 +5,7 @@ terraform {
   required_providers {
     oci = {
       source  = "oracle/oci"
-      version = "~> 5.0"
+      version = "~> 7.20"
     }
     huaweicloud = {
       source = "huaweicloud/huaweicloud"
@@ -4,7 +4,7 @@ terraform {
   required_providers {
     oci = {
       source  = "oracle/oci"
-      version = "~> 5.0"
+      version = "~> 7.20"
     }
   }
 }
@@ -6,7 +6,7 @@ terraform {
   # Oracle Cloud Infrastructure
   oci = {
     source  = "oracle/oci"
-    version = "~> 5.0"
+    version = "7.20.0"
   }

   # Huawei Cloud
@@ -36,17 +36,23 @@ terraform {
   # Other common providers
   random = {
     source  = "hashicorp/random"
-    version = "~> 3.1"
+    version = "3.7.2"
   }

   tls = {
     source  = "hashicorp/tls"
-    version = "~> 4.0"
+    version = "4.1.0"
   }

   local = {
     source  = "hashicorp/local"
-    version = "~> 2.1"
+    version = "2.5.3"
   }
+
+  # HashiCorp Vault
+  vault = {
+    source  = "hashicorp/vault"
+    version = "~> 4.0"
+  }
 }