feat: migrate infrastructure to Nomad and Podman and restructure configuration

refactor: update Ansible playbooks to support the Nomad cluster
docs: update documentation to reflect the migration from Docker Swarm to Nomad
ci: update Gitea workflows to build with Podman
test: add Nomad job test files
build: update the Makefile to support Podman operations
chore: clean up legacy Docker Swarm files and configuration
Houzhong Xu 2025-09-27 08:04:23 +00:00
parent c0d4cf54dc
commit a06e5e1a00
54 changed files with 2010 additions and 329 deletions

(binary image file changed; 94 KiB preview not shown)

@@ -1,7 +1,7 @@
# Gitea repository settings
repository:
name: mgmt
description: "Infrastructure management project - OpenTofu + Ansible + Docker Swarm"
description: "Infrastructure management project - OpenTofu + Ansible + Nomad + Podman"
website: ""
default_branch: main

@@ -11,20 +11,20 @@ on:
jobs:
build:
runs-on: ubuntu-latest
name: Build Docker Images
name: Build Podman Images
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Set up Podman
run: |
sudo apt-get update
sudo apt-get install -y podman
podman --version
- name: Login to Container Registry
uses: docker/login-action@v3
with:
registry: ${{ secrets.REGISTRY_URL }}
username: ${{ secrets.REGISTRY_USERNAME }}
password: ${{ secrets.REGISTRY_PASSWORD }}
run: |
echo ${{ secrets.REGISTRY_PASSWORD }} | podman login ${{ secrets.REGISTRY_URL }} --username ${{ secrets.REGISTRY_USERNAME }} --password-stdin
- name: Build and push images
run: |
@@ -33,20 +33,21 @@ jobs:
if [ -f "$dockerfile" ]; then
app_name=$(basename $(dirname "$dockerfile"))
echo "Building $app_name"
docker build -t "${{ secrets.REGISTRY_URL }}/$app_name:${{ github.sha }}" -f "$dockerfile" .
docker push "${{ secrets.REGISTRY_URL }}/$app_name:${{ github.sha }}"
podman build -t "${{ secrets.REGISTRY_URL }}/$app_name:${{ github.sha }}" -f "$dockerfile" .
podman push "${{ secrets.REGISTRY_URL }}/$app_name:${{ github.sha }}"
fi
done
deploy-swarm:
deploy-nomad:
runs-on: ubuntu-latest
name: Deploy to Docker Swarm
name: Deploy to Nomad Cluster
needs: build
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Deploy to Swarm
- name: Deploy to Nomad
run: |
# Deployment could be done here over SSH to a Swarm manager node
echo "Deploy to Swarm placeholder"
# Deployment could be done here over SSH to a Nomad server node
echo "Deploy to Nomad placeholder"
# Example command: nomad job run -var "image_tag=${{ github.sha }}" jobs/app.nomad
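A minimal sketch of what that placeholder step could eventually run, assuming key-based SSH from the runner to a Nomad server and that a parameterized job file exists at `/root/mgmt/jobs/app.nomad` on that host (both are assumptions, not values from this commit):

```bash
# Hypothetical deploy step body; "nomad-server" and the job path are placeholders
ssh -o StrictHostKeyChecking=no ben@nomad-server \
  "nomad job run -var image_tag=${GITHUB_SHA} /root/mgmt/jobs/app.nomad"

# Verify the rollout from the same session
ssh ben@nomad-server "nomad job status app"
```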

@@ -38,18 +38,18 @@ ansible-deploy: ## Deploy applications
@echo "📦 部署应用..."
@cd configuration && ansible-playbook -i inventories/production/inventory.ini playbooks/bootstrap/main.yml
# Docker 操作
docker-build: ## 构建 Docker 镜像
@echo "🐳 构建 Docker 镜像..."
@docker-compose -f containers/compose/development/docker-compose.yml build
# Podman 操作
podman-build: ## 构建 Podman 镜像
@echo "📦 构建 Podman 镜像..."
@podman-compose -f containers/compose/development/docker-compose.yml build
docker-up: ## 启动开发环境
podman-up: ## 启动开发环境
@echo "🚀 启动开发环境..."
@docker-compose -f containers/compose/development/docker-compose.yml up -d
@podman-compose -f containers/compose/development/docker-compose.yml up -d
docker-down: ## 停止开发环境
podman-down: ## 停止开发环境
@echo "🛑 停止开发环境..."
@docker-compose -f containers/compose/development/docker-compose.yml down
@podman-compose -f containers/compose/development/docker-compose.yml down
# 测试
test: ## 运行测试
@ -70,7 +70,7 @@ clean: ## 清理临时文件
@echo "🧹 清理临时文件..."
@find . -name "*.tfstate*" -delete
@find . -name ".terraform" -type d -exec rm -rf {} + 2>/dev/null || true
@docker system prune -f
@podman system prune -f
# 备份
backup: ## 创建备份
@ -80,7 +80,7 @@ backup: ## 创建备份
# 监控
monitor: ## 启动监控
@echo "📊 启动监控..."
@docker-compose -f containers/compose/production/monitoring.yml up -d
@podman-compose -f containers/compose/production/monitoring.yml up -d
# 安全扫描
security-scan: ## 安全扫描

@@ -1,13 +1,13 @@
# 🏗️ Infrastructure Management Project
A modern multi-cloud infrastructure management platform focused on the integrated management of OpenTofu, Ansible, and Docker Swarm.
A modern multi-cloud infrastructure management platform focused on the integrated management of OpenTofu, Ansible, and Nomad + Podman.
## 🎯 Features
- **🌩️ Multi-cloud support**: Oracle Cloud, Huawei Cloud, Google Cloud, AWS, DigitalOcean
- **🏗️ Infrastructure as Code**: manage cloud resources with OpenTofu
- **⚙️ Configuration management**: automate configuration and deployment with Ansible
- **🐳 Container orchestration**: Docker Swarm cluster management and service orchestration
- **🐳 Container orchestration**: Nomad cluster management with the Podman container runtime
- **🔄 CI/CD**: automated pipelines with Gitea Actions
- **📊 Monitoring**: Prometheus + Grafana monitoring stack
- **🔐 Security**: multi-layered protection and compliance
@@ -22,10 +22,6 @@ mgmt/
│ ├── modules/ # Reusable modules
│ ├── providers/ # Cloud provider configurations
│ └── shared/ # Shared configuration
├── swarm/ # Docker Swarm configuration
│ ├── stacks/ # Docker Stack definitions
│ ├── configs/ # Traefik and other infrastructure configs
│ └── scripts/ # Swarm management scripts
├── configuration/ # Ansible configuration management
│ ├── inventories/ # Host inventories
│ ├── playbooks/ # Playbooks
@@ -39,6 +35,8 @@ mgmt/
└── Makefile # Project management commands
```
**Note:** The project has migrated from Docker Swarm to Nomad + Podman; the original swarm directory is no longer used.
## 🚀 Quick Start
### 1. Prepare the environment
@@ -78,17 +76,17 @@ vim tofu/environments/dev/terraform.tfvars
cd tofu/environments/dev && tofu apply
```
### 4. Deploy Docker Swarm services
### 4. Deploy Nomad services
```bash
# Initialize Docker Swarm
./mgmt.sh swarm init
# Deploy the Consul cluster
nomad run /root/mgmt/consul-cluster-nomad.nomad
# Deploy the Traefik reverse proxy
./mgmt.sh swarm deploy traefik swarm/stacks/traefik-swarm-stack.yml
# Inspect Nomad jobs
nomad job status
# Deploy the demo services
./mgmt.sh swarm deploy demo swarm/stacks/demo-services-stack.yml
# Check node status
nomad node status
```
## 🛠️ Common Commands
@@ -98,9 +96,10 @@ cd tofu/environments/dev && tofu apply
| `./mgmt.sh status` | Show a project status overview |
| `./mgmt.sh deploy` | Quickly deploy all services |
| `./mgmt.sh cleanup` | Clean up all deployed services |
| `./mgmt.sh swarm <cmd>` | Docker Swarm management commands |
| `./mgmt.sh tofu <cmd>` | OpenTofu management commands |
| `swarm/scripts/swarm-manager.sh help` | Swarm management help |
| `nomad job status` | Show Nomad job status |
| `nomad node status` | Show Nomad node status |
| `podman ps` | List running containers |
| `scripts/setup/setup-opentofu.sh help` | OpenTofu setup help |
## 🌩️ Supported Cloud Providers
@@ -145,10 +144,10 @@ cd tofu/environments/dev && tofu apply
5. **Ansible deployment** → configure and deploy applications
### Application deployment flow
1. **Application code update** → build Docker images
1. **Application code update** → build container images
2. **Image push** → push to the image registry
3. **Compose update** → update service definitions
4. **Swarm deployment** → roll out service updates
3. **Nomad job update** → update the job definition
4. **Nomad deployment** → roll out service updates
5. **Health check** → verify the deployment (a command-level sketch follows this list)
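A command-level sketch of the five steps above, assuming a registry reachable as `$REGISTRY` and a parameterized job file `jobs/app.nomad` registering a job named `myapp` (all placeholders):

```bash
# 1-2. Build the image and push it to the registry
podman build -t "$REGISTRY/myapp:v2" .
podman push "$REGISTRY/myapp:v2"

# 3-4. Update the job definition and trigger a rolling deployment
nomad job plan -var "image_tag=v2" jobs/app.nomad   # dry run: shows the scheduler diff
nomad job run  -var "image_tag=v2" jobs/app.nomad

# 5. Verify health after the rollout
nomad job status myapp
```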
## 📊 Monitoring and Observability

configs/nomad-ash3c.hcl (new file)

@@ -0,0 +1,47 @@
datacenter = "dc1"
data_dir = "/opt/nomad/data"
plugin_dir = "/opt/nomad/plugins"
log_level = "INFO"
bind_addr = "100.116.80.94"
addresses {
http = "100.116.80.94"
rpc = "100.116.80.94"
serf = "100.116.80.94"
}
ports {
http = 4646
rpc = 4647
serf = 4648
}
server {
enabled = false
}
client {
enabled = true
network_interface = "tailscale0"
servers = [
"100.116.158.95:4647", # semaphore
"100.103.147.94:4647", # ash2e
"100.81.26.3:4647", # ash1d
"100.90.159.68:4647" # ch2
]
}
plugin "nomad-driver-podman" {
config {
socket_path = "unix:///run/podman/podman.sock"
volumes {
enabled = true
}
}
}
consul {
address = "100.116.80.94:8500"
}
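A quick sanity check for a client config like this before restarting the agent (a sketch: the config path is an assumption, and `nomad config validate` requires a reasonably recent Nomad release):

```bash
# The podman driver block above expects the system Podman socket to exist
sudo systemctl enable --now podman.socket

# Validate the agent configuration, then restart and confirm the client joins
nomad config validate /etc/nomad.d/nomad.hcl
sudo systemctl restart nomad
nomad node status   # the ash3c client should report "ready"
```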

configs/nomad-master.hcl (new file)

@@ -0,0 +1,47 @@
datacenter = "dc1"
data_dir = "/opt/nomad/data"
plugin_dir = "/opt/nomad/plugins"
log_level = "INFO"
bind_addr = "100.117.106.136"
addresses {
http = "100.117.106.136"
rpc = "100.117.106.136"
serf = "100.117.106.136"
}
ports {
http = 4646
rpc = 4647
serf = 4648
}
server {
enabled = false
}
client {
enabled = true
network_interface = "tailscale0"
servers = [
"100.116.158.95:4647", # semaphore
"100.103.147.94:4647", # ash2e
"100.81.26.3:4647", # ash1d
"100.90.159.68:4647" # ch2
]
}
plugin "nomad-driver-podman" {
config {
socket_path = "unix:///run/podman/podman.sock"
volumes {
enabled = true
}
}
}
consul {
address = "100.117.106.136:8500"
}

@@ -1,10 +1,23 @@
[consul_cluster]
# Consul cluster inventory - three-node setup
[consul_servers]
master ansible_host=master ansible_port=60022 ansible_user=ben ansible_become=yes ansible_become_pass=3131
ash3c ansible_host=ash3c ansible_user=ben ansible_become=yes ansible_become_pass=3131
warden ansible_host=warden ansible_user=ben ansible_become=yes ansible_become_pass=3131
[consul_cluster:vars]
[consul_cluster:children]
consul_servers
[consul_servers:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
consul_version=1.21.4
consul_version=1.21.5
consul_datacenter=dc1
# Generate the encryption key with: consul keygen
vault_consul_encrypt_key=1EvGItLOB8nuHnSA0o+rO0zXzLeJl+U+Jfvuw0+H848=
consul_encrypt_key=1EvGItLOB8nuHnSA0o+rO0zXzLeJl+U+Jfvuw0+H848=
consul_bootstrap_expect=3
consul_server=true
consul_ui_config=true
consul_client_addr=0.0.0.0
consul_bind_addr="{{ ansible_default_ipv4.address }}"
consul_data_dir=/opt/consul/data
consul_config_dir=/etc/consul.d
consul_log_level=INFO
consul_port=8500

@@ -1,6 +1,8 @@
[consul_servers]
master ansible_host=100.117.106.136 ansible_user=ben ansible_become=yes ansible_become_pass=3131
ash3c ansible_host=100.116.80.94 ansible_user=ben ansible_become=yes ansible_become_pass=3131
semaphore ansible_host=100.116.158.95 ansible_user=ben ansible_become=yes ansible_become_pass=3131
# The hcs node will be decommissioned in a month
hcs ansible_host=100.84.197.26 ansible_user=ben ansible_become=yes ansible_become_pass=3131
[consul_servers:vars]

@@ -16,11 +16,16 @@ ash3c ansible_host=ash3c ansible_user=ben ansible_become=yes ansible_become_pass
[huawei]
hcs ansible_host=hcs ansible_user=ben ansible_become=yes ansible_become_pass=3131
# hcs node decommissioned (2025-09-27)
[google]
benwork ansible_host=benwork ansible_user=ben ansible_become=yes ansible_become_pass=3131
[ditigalocean]
# syd ansible_host=syd ansible_user=ben ansible_become=yes ansible_become_pass=3131 # faulty node, isolated
[faulty_cloud_servers]
# Faulty cloud server nodes; to be handled via OpenTofu and Consul
# hcs node decommissioned (2025-09-27)
syd ansible_host=syd ansible_user=ben ansible_become=yes ansible_become_pass=3131
[aws]
@@ -42,7 +47,7 @@ postgresql ansible_host=postgresql ansible_user=root ansible_become=yes ansible_
influxdb ansible_host=influxdb1 ansible_user=root ansible_become=yes ansible_become_pass=313131
warden ansible_host=warden ansible_user=ben ansible_become=yes ansible_become_pass=3131
[semaphore]
semaphoressh ansible_host=semaphore ansible_user=root ansible_become=yes ansible_become_pass=313131
semaphoressh ansible_host=localhost ansible_user=root ansible_become=yes ansible_become_pass=313131 ansible_ssh_pass=313131
[alpine]
# Alpine Linux containers using the apk package manager
@@ -63,9 +68,6 @@ snail ansible_host=snail ansible_user=houzhongxu ansible_ssh_pass=Aa313131@ben a
[armbian]
onecloud1 ansible_host=onecloud1 ansible_user=ben ansible_ssh_pass=3131 ansible_become=yes ansible_become_pass=3131
[germany]
de ansible_host=de ansible_user=ben ansible_ssh_pass=3131 ansible_become=yes ansible_become_pass=3131
[beijing:children]
nomadlxc
hcp
@@ -79,7 +81,6 @@ hcp
oci_a1
huawei
ditigalocean
germany
[nomad_servers:children]
oci_us
oci_kr

@@ -0,0 +1,6 @@
[target_nodes]
master ansible_host=master ansible_port=60022 ansible_user=ben ansible_become=yes ansible_become_pass=3131
ash3c ansible_host=ash3c ansible_user=ben ansible_become=yes ansible_become_pass=3131
[target_nodes:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no'

configuration/pipefail (new file)

@@ -0,0 +1,394 @@
'!'=3491531
'#'=0
'$'=3491516
'*'=( )
'?'=0
-=569JNRXghiks
0=/usr/bin/zsh
@=( )
ADDR=( 'PATH=/root/.trae-server/sdks/workspaces/cced0550/versions/node/current\x3a/root/.trae-server/sdks/versions/node/current\x3a' )
AGNOSTER_AWS_BG=green
AGNOSTER_AWS_FG=black
AGNOSTER_AWS_PROD_BG=red
AGNOSTER_AWS_PROD_FG=yellow
AGNOSTER_BZR_CLEAN_BG=green
AGNOSTER_BZR_CLEAN_FG=black
AGNOSTER_BZR_DIRTY_BG=yellow
AGNOSTER_BZR_DIRTY_FG=black
AGNOSTER_CONTEXT_BG=black
AGNOSTER_CONTEXT_FG=default
AGNOSTER_DIR_BG=blue
AGNOSTER_DIR_FG=black
AGNOSTER_GIT_BRANCH_STATUS=true
AGNOSTER_GIT_CLEAN_BG=green
AGNOSTER_GIT_CLEAN_FG=black
AGNOSTER_GIT_DIRTY_BG=yellow
AGNOSTER_GIT_DIRTY_FG=black
AGNOSTER_GIT_INLINE=false
AGNOSTER_HG_CHANGED_BG=yellow
AGNOSTER_HG_CHANGED_FG=black
AGNOSTER_HG_CLEAN_BG=green
AGNOSTER_HG_CLEAN_FG=black
AGNOSTER_HG_NEWFILE_BG=red
AGNOSTER_HG_NEWFILE_FG=white
AGNOSTER_STATUS_BG=black
AGNOSTER_STATUS_FG=default
AGNOSTER_STATUS_JOB_FG=cyan
AGNOSTER_STATUS_RETVAL_FG=red
AGNOSTER_STATUS_RETVAL_NUMERIC=false
AGNOSTER_STATUS_ROOT_FG=yellow
AGNOSTER_VENV_BG=blue
AGNOSTER_VENV_FG=black
ANTHROPIC_AUTH_TOKEN=sk-M0YtRnZNHJkqFP7DjrNsf3jVDe4INKqiBGN0YgQcisudOYbp
ANTHROPIC_BASE_URL=https://anyrouter.top
ARCH=x86_64
ARGC=0
BG
BROWSER=/root/.trae-server/bin/stable-0643ffaa788ad4dd46eaa12cec109ac40595c816/bin/helpers/browser.sh
BUFFER=''
CDPATH=''
COLORTERM=truecolor
COLUMNS=104
CPUTYPE=x86_64
CURRENT_BG=NONE
CURRENT_DEFAULT_FG=default
CURRENT_FG=black
CURSOR=''
DBUS_SESSION_BUS_ADDRESS='unix:path=/run/user/0/bus'
DISTRO_COMMIT=0643ffaa788ad4dd46eaa12cec109ac40595c816
DISTRO_QUALITY=stable
DISTRO_VERSION=1.100.3
DISTRO_VSCODIUM_RELEASE=''
DOWNLOAD_RETRY_COUNT=0
EDITOR=vim
EGID=0
EPOCHREALTIME
EPOCHSECONDS
EUID=0
EXTRACT_RETRY_COUNT=0
FG
FIGNORE=''
FPATH=/root/.oh-my-zsh/plugins/z:/root/.oh-my-zsh/plugins/web-search:/root/.oh-my-zsh/plugins/vscode:/root/.oh-my-zsh/plugins/tmux:/root/.oh-my-zsh/plugins/systemd:/root/.oh-my-zsh/plugins/sudo:/root/.oh-my-zsh/plugins/history-substring-search:/root/.oh-my-zsh/plugins/extract:/root/.oh-my-zsh/plugins/command-not-found:/root/.oh-my-zsh/plugins/colored-man-pages:/root/.oh-my-zsh/custom/plugins/zsh-completions:/root/.oh-my-zsh/custom/plugins/zsh-syntax-highlighting:/root/.oh-my-zsh/custom/plugins/zsh-autosuggestions:/root/.oh-my-zsh/plugins/gcloud:/root/.oh-my-zsh/plugins/aws:/root/.oh-my-zsh/plugins/helm:/root/.oh-my-zsh/plugins/kubectl:/root/.oh-my-zsh/plugins/terraform:/root/.oh-my-zsh/plugins/ansible:/root/.oh-my-zsh/plugins/docker-compose:/root/.oh-my-zsh/plugins/docker:/root/.oh-my-zsh/plugins/git:/root/.oh-my-zsh/functions:/root/.oh-my-zsh/completions:/root/.oh-my-zsh/custom/functions:/root/.oh-my-zsh/custom/completions:/root/.oh-my-zsh/cache/completions:/usr/local/share/zsh/site-functions:/usr/share/zsh/vendor-functions:/usr/share/zsh/vendor-completions:/usr/share/zsh/functions/Calendar:/usr/share/zsh/functions/Chpwd:/usr/share/zsh/functions/Completion:/usr/share/zsh/functions/Completion/AIX:/usr/share/zsh/functions/Completion/BSD:/usr/share/zsh/functions/Completion/Base:/usr/share/zsh/functions/Completion/Cygwin:/usr/share/zsh/functions/Completion/Darwin:/usr/share/zsh/functions/Completion/Debian:/usr/share/zsh/functions/Completion/Linux:/usr/share/zsh/functions/Completion/Mandriva:/usr/share/zsh/functions/Completion/Redhat:/usr/share/zsh/functions/Completion/Solaris:/usr/share/zsh/functions/Completion/Unix:/usr/share/zsh/functions/Completion/X:/usr/share/zsh/functions/Completion/Zsh:/usr/share/zsh/functions/Completion/openSUSE:/usr/share/zsh/functions/Exceptions:/usr/share/zsh/functions/MIME:/usr/share/zsh/functions/Math:/usr/share/zsh/functions/Misc:/usr/share/zsh/functions/Newuser:/usr/share/zsh/functions/Prompts:/usr/share/zsh/functions/TCP:/usr/share/zsh/functions/VCS_Info:/usr/share/zsh/functions/VCS_Info/Backends:/usr/share/zsh/functions/Zftp:/usr/share/zsh/functions/Zle:/root/.oh-my-zsh/custom/plugins/zsh-completions/src
FUNCNEST=500
FX
GID=0
GIT_ASKPASS=/root/.trae-server/bin/stable-0643ffaa788ad4dd46eaa12cec109ac40595c816/extensions/git/dist/askpass.sh
GIT_PAGER=''
HISTCHARS='!^#'
HISTCMD=1888
HISTFILE=/root/.zsh_history
HISTORY_SUBSTRING_SEARCH_ENSURE_UNIQUE=''
HISTORY_SUBSTRING_SEARCH_FUZZY=''
HISTORY_SUBSTRING_SEARCH_GLOBBING_FLAGS=i
HISTORY_SUBSTRING_SEARCH_HIGHLIGHT_FOUND='bg=magenta,fg=white,bold'
HISTORY_SUBSTRING_SEARCH_HIGHLIGHT_NOT_FOUND='bg=red,fg=white,bold'
HISTORY_SUBSTRING_SEARCH_PREFIXED=''
HISTSIZE=10000
HOME=/root
HOST=semaphore
IFS=$' \t\n\C-@'
ITEM='PATH=/root/.trae-server/sdks/workspaces/cced0550/versions/node/current\x3a/root/.trae-server/sdks/versions/node/current\x3a'
KEYBOARD_HACK=''
KEYTIMEOUT=40
LANG=C.UTF-8
LANGUAGE=en_US.UTF-8
LC_ALL=C.UTF-8
LESS=-R
LINENO=77
LINES=40
LISTMAX=100
LOGNAME=root
LSCOLORS=Gxfxcxdxbxegedabagacad
LS_COLORS='rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=00:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.avif=01;35:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.webp=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:*~=00;90:*#=00;90:*.bak=00;90:*.old=00;90:*.orig=00;90:*.part=00;90:*.rej=00;90:*.swp=00;90:*.tmp=00;90:*.dpkg-dist=00;90:*.dpkg-old=00;90:*.ucf-dist=00;90:*.ucf-new=00;90:*.ucf-old=00;90:*.rpmnew=00;90:*.rpmorig=00;90:*.rpmsave=00;90:'
MACHTYPE=x86_64
MAILCHECK=60
MAILPATH=''
MANAGER_LOGS_DIR=/root/.trae-server/manager-logs/1758782495499_337482
MANPATH=''
MATCH=''
MBEGIN=''
MEND=''
MODULE_PATH=/usr/lib/x86_64-linux-gnu/zsh/5.9
MOTD_SHOWN=pam
NEWLINE=$'\n'
NOMAD_ADDR=http://100.81.26.3:4646
NULLCMD=cat
OLDPWD=/root/mgmt
OPTARG=''
OPTIND=1
OSTYPE=linux-gnu
OS_RELEASE_ID=debian
PAGER=''
PATH=/root/.trae-server/sdks/workspaces/cced0550/versions/node/current:/root/.trae-server/sdks/versions/node/current:/root/.trae-server/sdks/workspaces/cced0550/versions/node/current:/root/.trae-server/sdks/versions/node/current:/root/.trae-server/bin/stable-0643ffaa788ad4dd46eaa12cec109ac40595c816/bin/remote-cli:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PLATFORM=linux
POWERLEVEL9K_INSTANT_PROMPT=off
PPID=2438215
PROMPT2='%_> '
PROMPT3='?# '
PROMPT4='+%N:%i> '
PROMPT=$'\n(TraeAI-3) %~ [%?] $ '
PS1=$'\n(TraeAI-3) %~ [%?] $ '
PS2='%_> '
PS3='?# '
PS4='+%N:%i> '
PSVAR=''
PWD=/root/mgmt/configuration
RANDOM=6724
READNULLCMD=/usr/bin/pager
REMOTE_VERSION=1058011568130_8
SAVEHIST=10000
SCRIPT_ID=f227a05726be7c5a36752917
SECONDS=1076
SEGMENT_SEPARATOR=
SERVER_APP_NAME=Trae
SERVER_APP_QUALITY=dev
SERVER_APP_VERSION=''
SERVER_ARCH=x64
SERVER_DATA_DIR=/root/.trae-server
SERVER_DIR=/root/.trae-server/bin/stable-0643ffaa788ad4dd46eaa12cec109ac40595c816
SERVER_DOWNLOAD_PREFIX=https://lf-cdn.trae.com.cn/obj/trae-com-cn/pkg/server/releases/stable/0643ffaa788ad4dd46eaa12cec109ac40595c816/linux/
SERVER_EXTENSIONS_DIR=/root/.trae-server/extensions
SERVER_HOST=127.0.0.1
SERVER_INITIAL_EXTENSIONS='--install-extension gitpod.gitpod-remote-ssh'
SERVER_LISTEN_FLAG='--port=0'
SERVER_LOGFILE=/root/.trae-server/.stable-0643ffaa788ad4dd46eaa12cec109ac40595c816.log
SERVER_LOGS_DIR=/root/.trae-server/logs
SERVER_PACKAGE_NAME=Trae-linux-x64-1058011568130_8.tar.gz
SERVER_PIDFILE=/root/.trae-server/.stable-0643ffaa788ad4dd46eaa12cec109ac40595c816.pid
SERVER_SCRIPT=/root/.trae-server/bin/stable-0643ffaa788ad4dd46eaa12cec109ac40595c816/index_trae.js
SERVER_SCRIPT_PRODUCT=/root/.trae-server/bin/stable-0643ffaa788ad4dd46eaa12cec109ac40595c816/product.json
SERVER_TOKENFILE=/root/.trae-server/.stable-0643ffaa788ad4dd46eaa12cec109ac40595c816.token
SHELL=/usr/bin/zsh
SHLVL=2
SHORT_HOST=semaphore
SPROMPT='zsh: correct '\''%R'\'' to '\''%r'\'' [nyae]? '
SSH_CLIENT='100.86.9.29 49793 22'
SSH_CONNECTION='100.86.9.29 49793 100.116.158.95 22'
TERM=xterm-256color
TERM_PRODUCT=Trae
TERM_PROGRAM=vscode
TERM_PROGRAM_VERSION=1.100.3
TIMEFMT='%J %U user %S system %P cpu %*E total'
TMPPREFIX=/tmp/zsh
TMP_DIR=/run/user/0
TRAE_AI_SHELL_ID=3
TRAE_DETECT_REGION=CN
TRAE_REMOTE_EXTENSION_REGION=cn
TRAE_REMOTE_SKIP_REMOTE_CHECK=''
TRAE_RESOLVE_TYPE=ssh
TRY_BLOCK_ERROR=-1
TRY_BLOCK_INTERRUPT=-1
TTY=/dev/pts/8
TTYIDLE=-1
UID=0
USER=root
USERNAME=root
USER_ZDOTDIR=/root
VARNAME=PATH
VENDOR=debian
VSCODE_GIT_ASKPASS_EXTRA_ARGS=''
VSCODE_GIT_ASKPASS_MAIN=/root/.trae-server/bin/stable-0643ffaa788ad4dd46eaa12cec109ac40595c816/extensions/git/dist/askpass-main.js
VSCODE_GIT_ASKPASS_NODE=/root/.trae-server/bin/stable-0643ffaa788ad4dd46eaa12cec109ac40595c816/node
VSCODE_GIT_IPC_HANDLE=/run/user/0/vscode-git-7caaecb415.sock
VSCODE_INJECTION=1
VSCODE_IPC_HOOK_CLI=/run/user/0/vscode-ipc-47deaf2b-9a7a-4554-a972-d6b4aa5bb388.sock
VSCODE_SHELL_INTEGRATION=1
VSCODE_STABLE=''
VSCODE_ZDOTDIR=/tmp/root-trae-zsh
WATCH
WORDCHARS=''
XDG_RUNTIME_DIR=/run/user/0
XDG_SESSION_CLASS=user
XDG_SESSION_ID=1636
XDG_SESSION_TYPE=tty
ZDOTDIR=/root
ZSH=/root/.oh-my-zsh
ZSHZ=( [CHOWN]=zf_chown [DIRECTORY_REMOVED]=0 [FUNCTIONS]=$'_zshz_usage\n _zshz_add_or_remove_path\n _zshz_update_datafile\n _zshz_legacy_complete\n _zshz_printv\n _zshz_find_common_root\n _zshz_output\n _zshz_find_matches\n zshz\n _zshz_precmd\n _zshz_chpwd\n _zshz' [MV]=zf_mv [PRINTV]=1 [RM]=zf_rm [USE_FLOCK]=1 )
ZSHZ_EXCLUDE_DIRS=( )
ZSH_ARGZERO=/usr/bin/zsh
ZSH_AUTOSUGGEST_ACCEPT_WIDGETS=( forward-char end-of-line vi-forward-char vi-end-of-line vi-add-eol )
ZSH_AUTOSUGGEST_CLEAR_WIDGETS=( history-search-forward history-search-backward history-beginning-search-forward history-beginning-search-backward history-beginning-search-forward-end history-beginning-search-backward-end history-substring-search-up history-substring-search-down up-line-or-beginning-search down-line-or-beginning-search up-line-or-history down-line-or-history accept-line copy-earlier-word )
ZSH_AUTOSUGGEST_COMPLETIONS_PTY_NAME=zsh_autosuggest_completion_pty
ZSH_AUTOSUGGEST_EXECUTE_WIDGETS=( )
ZSH_AUTOSUGGEST_HIGHLIGHT_STYLE='fg=8'
ZSH_AUTOSUGGEST_IGNORE_WIDGETS=( 'orig-*' beep run-help set-local-history which-command yank yank-pop 'zle-*' )
ZSH_AUTOSUGGEST_ORIGINAL_WIDGET_PREFIX=autosuggest-orig-
ZSH_AUTOSUGGEST_PARTIAL_ACCEPT_WIDGETS=( forward-word emacs-forward-word vi-forward-word vi-forward-word-end vi-forward-blank-word vi-forward-blank-word-end vi-find-next-char vi-find-next-char-skip )
ZSH_AUTOSUGGEST_STRATEGY=( history completion )
ZSH_AUTOSUGGEST_USE_ASYNC=''
ZSH_CACHE_DIR=/root/.oh-my-zsh/cache
ZSH_COMPDUMP=/root/.zcompdump-semaphore-5.9
ZSH_CUSTOM=/root/.oh-my-zsh/custom
ZSH_EVAL_CONTEXT=toplevel
ZSH_HIGHLIGHT_DIRS_BLACKLIST=( )
ZSH_HIGHLIGHT_HIGHLIGHTERS=( main brackets pattern cursor )
ZSH_HIGHLIGHT_PATTERNS=( )
ZSH_HIGHLIGHT_REGEXP=( )
ZSH_HIGHLIGHT_REVISION=HEAD
ZSH_HIGHLIGHT_STYLES=( [arg0]='fg=green' [assign]=none [autodirectory]='fg=green,underline' [back-dollar-quoted-argument]='fg=cyan' [back-double-quoted-argument]='fg=cyan' [back-quoted-argument]=none [back-quoted-argument-delimiter]='fg=magenta' [bracket-error]='fg=red,bold' [bracket-level-1]='fg=blue,bold' [bracket-level-2]='fg=green,bold' [bracket-level-3]='fg=magenta,bold' [bracket-level-4]='fg=yellow,bold' [bracket-level-5]='fg=cyan,bold' [command-substitution]=none [command-substitution-delimiter]='fg=magenta' [commandseparator]=none [comment]='fg=black,bold' [cursor]=standout [cursor-matchingbracket]=standout [default]=none [dollar-double-quoted-argument]='fg=cyan' [dollar-quoted-argument]='fg=yellow' [double-hyphen-option]=none [double-quoted-argument]='fg=yellow' [global-alias]='fg=cyan' [globbing]='fg=blue' [history-expansion]='fg=blue' [line]='' [named-fd]=none [numeric-fd]=none [path]=underline [path_pathseparator]='' [path_prefix_pathseparator]='' [precommand]='fg=green,underline' [process-substitution]=none [process-substitution-delimiter]='fg=magenta' [rc-quote]='fg=cyan' [redirection]='fg=yellow' [reserved-word]='fg=yellow' [root]=standout [single-hyphen-option]=none [single-quoted-argument]='fg=yellow' [suffix-alias]='fg=green,underline' [unknown-token]='fg=red,bold' )
ZSH_HIGHLIGHT_VERSION=0.8.1-dev
ZSH_NAME=zsh
ZSH_PATCHLEVEL=debian/5.9-4+b7
ZSH_SUBSHELL=1
ZSH_THEME=agnoster
ZSH_THEME_GIT_PROMPT_CLEAN=''
ZSH_THEME_GIT_PROMPT_DIRTY='*'
ZSH_THEME_GIT_PROMPT_PREFIX='git:('
ZSH_THEME_GIT_PROMPT_SUFFIX=')'
ZSH_THEME_RUBY_PROMPT_PREFIX='('
ZSH_THEME_RUBY_PROMPT_SUFFIX=')'
ZSH_THEME_RVM_PROMPT_OPTIONS='i v g'
ZSH_THEME_TERM_TAB_TITLE_IDLE='%15<..<%~%<<'
ZSH_THEME_TERM_TITLE_IDLE='%n@%m:%~'
ZSH_TMUX_AUTOCONNECT=true
ZSH_TMUX_AUTONAME_SESSION=false
ZSH_TMUX_AUTOQUIT=false
ZSH_TMUX_AUTOREFRESH=false
ZSH_TMUX_AUTOSTART=false
ZSH_TMUX_AUTOSTART_ONCE=true
ZSH_TMUX_CONFIG=/root/.tmux.conf
ZSH_TMUX_DETACHED=false
ZSH_TMUX_FIXTERM=true
ZSH_TMUX_FIXTERM_WITHOUT_256COLOR=screen
ZSH_TMUX_FIXTERM_WITH_256COLOR=screen-256color
ZSH_TMUX_ITERM2=false
ZSH_TMUX_TERM=screen-256color
ZSH_TMUX_UNICODE=false
ZSH_VERSION=5.9
_=set
_OMZ_ASYNC_FDS=( )
_OMZ_ASYNC_OUTPUT=( )
_OMZ_ASYNC_PIDS=( )
_ZSH_AUTOSUGGEST_ASYNC_FD=''
_ZSH_AUTOSUGGEST_BIND_COUNTS=( [accept-and-hold]=1 [accept-and-infer-next-history]=1 [accept-and-menu-complete]=1 [accept-line]=1 [accept-line-and-down-history]=1 [accept-search]=1 [argument-base]=1 [auto-suffix-remove]=1 [auto-suffix-retain]=1 [autosuggest-capture-completion]=1 [backward-char]=1 [backward-delete-char]=1 [backward-delete-word]=1 [backward-kill-line]=1 [backward-kill-word]=1 [backward-word]=1 [beginning-of-buffer-or-history]=1 [beginning-of-history]=1 [beginning-of-line]=1 [beginning-of-line-hist]=1 [bracketed-paste]=1 [capitalize-word]=1 [clear-screen]=1 [complete-word]=1 [copy-prev-shell-word]=1 [copy-prev-word]=1 [copy-region-as-kill]=1 [deactivate-region]=1 [delete-char]=1 [delete-char-or-list]=1 [delete-word]=1 [describe-key-briefly]=1 [digit-argument]=1 [down-case-word]=1 [down-history]=1 [down-line]=1 [down-line-or-beginning-search]=1 [down-line-or-history]=1 [down-line-or-search]=1 [edit-command-line]=1 [emacs-backward-word]=1 [emacs-forward-word]=1 [end-of-buffer-or-history]=1 [end-of-history]=1 [end-of-line]=1 [end-of-line-hist]=1 [end-of-list]=1 [exchange-point-and-mark]=1 [execute-last-named-cmd]=1 [execute-named-cmd]=1 [expand-cmd-path]=1 [expand-history]=1 [expand-or-complete]=1 [expand-or-complete-prefix]=1 [expand-word]=1 [forward-char]=1 [forward-word]=1 [get-line]=1 [gosmacs-transpose-chars]=1 [history-beginning-search-backward]=1 [history-beginning-search-forward]=1 [history-incremental-pattern-search-backward]=1 [history-incremental-pattern-search-forward]=1 [history-incremental-search-backward]=1 [history-incremental-search-forward]=1 [history-search-backward]=1 [history-search-forward]=1 [history-substring-search-down]=1 [history-substring-search-up]=1 [infer-next-history]=1 [insert-last-word]=1 [kill-buffer]=1 [kill-line]=1 [kill-region]=1 [kill-whole-line]=1 [kill-word]=1 [list-choices]=1 [list-expand]=1 [magic-space]=1 [menu-complete]=1 [menu-expand-or-complete]=1 [menu-select]=1 [neg-argument]=1 [overwrite-mode]=1 [pound-insert]=1 [push-input]=1 [push-line]=1 [push-line-or-edit]=1 [put-replace-selection]=1 [quote-line]=1 [quote-region]=1 [quoted-insert]=1 [read-command]=1 [recursive-edit]=1 [redisplay]=1 [redo]=1 [reset-prompt]=1 [reverse-menu-complete]=1 [select-a-blank-word]=1 [select-a-shell-word]=1 [select-a-word]=1 [select-in-blank-word]=1 [select-in-shell-word]=1 [select-in-word]=1 [self-insert]=1 [self-insert-unmeta]=1 [send-break]=1 [set-mark-command]=1 [spell-word]=1 [split-undo]=1 [sudo-command-line]=1 [transpose-chars]=1 [transpose-words]=1 [undefined-key]=1 [undo]=1 [universal-argument]=1 [up-case-word]=1 [up-history]=1 [up-line]=1 [up-line-or-beginning-search]=1 [up-line-or-history]=1 [up-line-or-search]=1 [user:zle-line-finish]=1 [vi-add-eol]=1 [vi-add-next]=1 [vi-backward-blank-word]=1 [vi-backward-blank-word-end]=1 [vi-backward-char]=1 [vi-backward-delete-char]=1 [vi-backward-kill-word]=1 [vi-backward-word]=1 [vi-backward-word-end]=1 [vi-beginning-of-line]=1 [vi-caps-lock-panic]=1 [vi-change]=1 [vi-change-eol]=1 [vi-change-whole-line]=1 [vi-cmd-mode]=1 [vi-delete]=1 [vi-delete-char]=1 [vi-digit-or-beginning-of-line]=1 [vi-down-case]=1 [vi-down-line-or-history]=1 [vi-end-of-line]=1 [vi-fetch-history]=1 [vi-find-next-char]=1 [vi-find-next-char-skip]=1 [vi-find-prev-char]=1 [vi-find-prev-char-skip]=1 [vi-first-non-blank]=1 [vi-forward-blank-word]=1 [vi-forward-blank-word-end]=1 [vi-forward-char]=1 [vi-forward-word]=1 [vi-forward-word-end]=1 [vi-goto-column]=1 [vi-goto-mark]=1 [vi-goto-mark-line]=1 [vi-history-search-backward]=1 
[vi-history-search-forward]=1 [vi-indent]=1 [vi-insert]=1 [vi-insert-bol]=1 [vi-join]=1 [vi-kill-eol]=1 [vi-kill-line]=1 [vi-match-bracket]=1 [vi-open-line-above]=1 [vi-open-line-below]=1 [vi-oper-swap-case]=1 [vi-pound-insert]=1 [vi-put-after]=1 [vi-put-before]=1 [vi-quoted-insert]=1 [vi-repeat-change]=1 [vi-repeat-find]=1 [vi-repeat-search]=1 [vi-replace]=1 [vi-replace-chars]=1 [vi-rev-repeat-find]=1 [vi-rev-repeat-search]=1 [vi-set-buffer]=1 [vi-set-mark]=1 [vi-substitute]=1 [vi-swap-case]=1 [vi-undo-change]=1 [vi-unindent]=1 [vi-up-case]=1 [vi-up-line-or-history]=1 [vi-yank]=1 [vi-yank-eol]=1 [vi-yank-whole-line]=1 [visual-line-mode]=1 [visual-mode]=1 [what-cursor-position]=1 [where-is]=1 )
_ZSH_AUTOSUGGEST_BUILTIN_ACTIONS=( clear fetch suggest accept execute enable disable toggle )
_ZSH_AUTOSUGGEST_CHILD_PID=3538664
_ZSH_HIGHLIGHT_PRIOR_BUFFER=''
_ZSH_HIGHLIGHT_PRIOR_CURSOR=0
_ZSH_TMUX_FIXED_CONFIG=/root/.oh-my-zsh/plugins/tmux/tmux.only.conf
__colored_man_pages_dir=/root/.oh-my-zsh/plugins/colored-man-pages
__vsc_current_command='set -o pipefail'
__vsc_env_keys=( )
__vsc_env_values=( )
__vsc_in_command_execution=1
__vsc_nonce=8569aa72-4f06-40f4-a830-4084a537236a
__vsc_prior_prompt2='%_> '
__vsc_prior_prompt=$'\n(TraeAI-3) %~ [%?] $ '
__vsc_use_aa=1
__vscode_shell_env_reporting=''
_comp_assocs=( '' )
_comp_dumpfile=/root/.zcompdump
_comp_options
_comp_setup
_compautos
_comps
_history_substring_search_match_index=0
_history_substring_search_matches=( )
_history_substring_search_query=''
_history_substring_search_query_highlight=''
_history_substring_search_query_parts=( )
_history_substring_search_raw_match_index=0
_history_substring_search_raw_matches=( )
_history_substring_search_refresh_display=''
_history_substring_search_result=''
_history_substring_search_unique_filter=( )
_history_substring_search_zsh_5_9=1
_lastcomp
_patcomps
_postpatcomps
_services
_zsh_highlight__highlighter_brackets_cache=( )
_zsh_highlight__highlighter_cursor_cache=( )
_zsh_highlight__highlighter_main_cache=( '0 2 fg=green memo=zsh-syntax-highlighting' '3 6 none memo=zsh-syntax-highlighting' '7 11 underline memo=zsh-syntax-highlighting' '11 12 none memo=zsh-syntax-highlighting' '13 17 fg=green memo=zsh-syntax-highlighting' '18 20 none memo=zsh-syntax-highlighting' '21 57 none memo=zsh-syntax-highlighting' '21 57 fg=yellow memo=zsh-syntax-highlighting' '58 62 underline memo=zsh-syntax-highlighting' )
_zsh_highlight__highlighter_pattern_cache=( )
_zsh_highlight_main__command_type_cache=( )
aliases
argv=( )
bg
bg_bold
bg_no_bold
bold_color
builtins
cdpath=( )
chpwd_functions=( _zshz_chpwd )
color=( [00]=none [01]=bold [02]=faint [03]=italic [04]=underline [05]=blink [07]=reverse [08]=conceal [22]=normal [23]=no-italic [24]=no-underline [25]=no-blink [27]=no-reverse [28]=no-conceal [30]=black [31]=red [32]=green [33]=yellow [34]=blue [35]=magenta [36]=cyan [37]=white [39]=default [40]=bg-black [41]=bg-red [42]=bg-green [43]=bg-yellow [44]=bg-blue [45]=bg-magenta [46]=bg-cyan [47]=bg-white [49]=bg-default [bg-black]=40 [bg-blue]=44 [bg-cyan]=46 [bg-default]=49 [bg-gray]=40 [bg-green]=42 [bg-grey]=40 [bg-magenta]=45 [bg-red]=41 [bg-white]=47 [bg-yellow]=43 [black]=30 [blink]=05 [blue]=34 [bold]=01 [conceal]=08 [cyan]=36 [default]=39 [faint]=02 [fg-black]=30 [fg-blue]=34 [fg-cyan]=36 [fg-default]=39 [fg-gray]=30 [fg-green]=32 [fg-grey]=30 [fg-magenta]=35 [fg-red]=31 [fg-white]=37 [fg-yellow]=33 [gray]=30 [green]=32 [grey]=30 [italic]=03 [magenta]=35 [no-blink]=25 [no-conceal]=28 [no-italic]=23 [no-reverse]=27 [no-underline]=24 [none]=00 [normal]=22 [red]=31 [reverse]=07 [underline]=04 [white]=37 [yellow]=33 )
colour=( [00]=none [01]=bold [02]=faint [03]=italic [04]=underline [05]=blink [07]=reverse [08]=conceal [22]=normal [23]=no-italic [24]=no-underline [25]=no-blink [27]=no-reverse [28]=no-conceal [30]=black [31]=red [32]=green [33]=yellow [34]=blue [35]=magenta [36]=cyan [37]=white [39]=default [40]=bg-black [41]=bg-red [42]=bg-green [43]=bg-yellow [44]=bg-blue [45]=bg-magenta [46]=bg-cyan [47]=bg-white [49]=bg-default [bg-black]=40 [bg-blue]=44 [bg-cyan]=46 [bg-default]=49 [bg-gray]=40 [bg-green]=42 [bg-grey]=40 [bg-magenta]=45 [bg-red]=41 [bg-white]=47 [bg-yellow]=43 [black]=30 [blink]=05 [blue]=34 [bold]=01 [conceal]=08 [cyan]=36 [default]=39 [faint]=02 [fg-black]=30 [fg-blue]=34 [fg-cyan]=36 [fg-default]=39 [fg-gray]=30 [fg-green]=32 [fg-grey]=30 [fg-magenta]=35 [fg-red]=31 [fg-white]=37 [fg-yellow]=33 [gray]=30 [green]=32 [grey]=30 [italic]=03 [magenta]=35 [no-blink]=25 [no-conceal]=28 [no-italic]=23 [no-reverse]=27 [no-underline]=24 [none]=00 [normal]=22 [red]=31 [reverse]=07 [underline]=04 [white]=37 [yellow]=33 )
commands
comppostfuncs=( )
compprefuncs=( )
d=/usr/share/zsh/functions/Zle
debian_missing_features=( )
dirstack
dis_aliases
dis_builtins
dis_functions
dis_functions_source
dis_galiases
dis_patchars
dis_reswords
dis_saliases
envVarsToReport=( '' )
epochtime
errnos
fg
fg_bold
fg_no_bold
fignore=( )
fpath=( /root/.oh-my-zsh/plugins/z /root/.oh-my-zsh/plugins/web-search /root/.oh-my-zsh/plugins/vscode /root/.oh-my-zsh/plugins/tmux /root/.oh-my-zsh/plugins/systemd /root/.oh-my-zsh/plugins/sudo /root/.oh-my-zsh/plugins/history-substring-search /root/.oh-my-zsh/plugins/extract /root/.oh-my-zsh/plugins/command-not-found /root/.oh-my-zsh/plugins/colored-man-pages /root/.oh-my-zsh/custom/plugins/zsh-completions /root/.oh-my-zsh/custom/plugins/zsh-syntax-highlighting /root/.oh-my-zsh/custom/plugins/zsh-autosuggestions /root/.oh-my-zsh/plugins/gcloud /root/.oh-my-zsh/plugins/aws /root/.oh-my-zsh/plugins/helm /root/.oh-my-zsh/plugins/kubectl /root/.oh-my-zsh/plugins/terraform /root/.oh-my-zsh/plugins/ansible /root/.oh-my-zsh/plugins/docker-compose /root/.oh-my-zsh/plugins/docker /root/.oh-my-zsh/plugins/git /root/.oh-my-zsh/functions /root/.oh-my-zsh/completions /root/.oh-my-zsh/custom/functions /root/.oh-my-zsh/custom/completions /root/.oh-my-zsh/cache/completions /usr/local/share/zsh/site-functions /usr/share/zsh/vendor-functions /usr/share/zsh/vendor-completions /usr/share/zsh/functions/Calendar /usr/share/zsh/functions/Chpwd /usr/share/zsh/functions/Completion /usr/share/zsh/functions/Completion/AIX /usr/share/zsh/functions/Completion/BSD /usr/share/zsh/functions/Completion/Base /usr/share/zsh/functions/Completion/Cygwin /usr/share/zsh/functions/Completion/Darwin /usr/share/zsh/functions/Completion/Debian /usr/share/zsh/functions/Completion/Linux /usr/share/zsh/functions/Completion/Mandriva /usr/share/zsh/functions/Completion/Redhat /usr/share/zsh/functions/Completion/Solaris /usr/share/zsh/functions/Completion/Unix /usr/share/zsh/functions/Completion/X /usr/share/zsh/functions/Completion/Zsh /usr/share/zsh/functions/Completion/openSUSE /usr/share/zsh/functions/Exceptions /usr/share/zsh/functions/MIME /usr/share/zsh/functions/Math /usr/share/zsh/functions/Misc /usr/share/zsh/functions/Newuser /usr/share/zsh/functions/Prompts /usr/share/zsh/functions/TCP /usr/share/zsh/functions/VCS_Info /usr/share/zsh/functions/VCS_Info/Backends /usr/share/zsh/functions/Zftp /usr/share/zsh/functions/Zle /root/.oh-my-zsh/custom/plugins/zsh-completions/src )
funcfiletrace
funcsourcetrace
funcstack
functions
functions_source
functrace
galiases
histchars='!^#'
history
historywords
jobdirs
jobstates
jobtexts
key=''
keymaps
langinfo
less_termcap
line=''
mailpath=( )
manpath=( )
module_path=( /usr/lib/x86_64-linux-gnu/zsh/5.9 )
modules
nameddirs
node=hcp1
node_id=baea7bb6
node_name=hcp2
options
parameters
patchars
path=( /root/.trae-server/sdks/workspaces/cced0550/versions/node/current /root/.trae-server/sdks/versions/node/current /root/.trae-server/sdks/workspaces/cced0550/versions/node/current /root/.trae-server/sdks/versions/node/current /root/.trae-server/bin/stable-0643ffaa788ad4dd46eaa12cec109ac40595c816/bin/remote-cli /usr/local/sbin /usr/local/bin /usr/sbin /usr/bin /sbin /bin )
pipestatus=( 0 )
plugins=( git docker docker-compose ansible terraform kubectl helm aws gcloud zsh-autosuggestions zsh-syntax-highlighting zsh-completions colored-man-pages command-not-found extract history-substring-search sudo systemd tmux vscode web-search z )
podman_status=false
precmd_functions=( _omz_async_request omz_termsupport_precmd _zsh_autosuggest_start _zsh_highlight_main__precmd_hook _zshz_precmd __vsc_precmd )
preexec_functions=( omz_termsupport_preexec _zsh_highlight_preexec_hook __vsc_preexec )
prompt=$'\n(TraeAI-3) %~ [%?] $ '
psvar=( )
reset_color
reswords
ret=0
saliases
signals=( EXIT HUP INT QUIT ILL TRAP IOT BUS FPE KILL USR1 SEGV USR2 PIPE ALRM TERM STKFLT CHLD CONT STOP TSTP TTIN TTOU URG XCPU XFSZ VTALRM PROF WINCH POLL PWR SYS ZERR DEBUG )
status=0
sysparams
termcap
terminfo
userdirs
usergroups
vsc_aa_env=( )
vscode_base_dir=/root/.trae-server
watch
widgets
zle_bracketed_paste=( $'\C-[[?2004h' $'\C-[[?2004l' )
zsh_eval_context=( toplevel )
zsh_highlight__memo_feature=1
zsh_highlight__pat_static_bug=false
zsh_scheduled_events

@@ -1,14 +0,0 @@
---
- name: Check for AppArmor or SELinux denials
hosts: germany
become: yes
tasks:
- name: Search journalctl for AppArmor/SELinux messages
shell: 'journalctl -k | grep -i -e apparmor -e selinux -e "avc: denied"'
register: security_logs
changed_when: false
failed_when: false
- name: Display security logs
debug:
var: security_logs.stdout_lines

@@ -0,0 +1,22 @@
---
- name: Clean up HashiCorp APT source backup files
hosts: nomad_cluster
become: yes
tasks:
- name: Find all HashiCorp backup files
find:
paths: "/etc/apt/sources.list.d/"
patterns: "hashicorp.list.backup-*"
register: backup_files
- name: Delete all backup files
file:
path: "{{ item.path }}"
state: absent
loop: "{{ backup_files.files }}"
when: backup_files.files | length > 0
- name: Show the cleanup result
debug:
msg: "Deleted {{ backup_files.files | length }} backup files"

@@ -1,6 +1,6 @@
---
- name: Configure Podman driver for all Nomad client nodes
hosts: nomad_clients,nomad_servers
hosts: target_nodes
become: yes
tasks:

@@ -1,33 +0,0 @@
---
- name: Debug cgroup permissions
hosts: germany
become: yes
tasks:
- name: Check permissions of /sys/fs/cgroup/cpuset/
stat:
path: /sys/fs/cgroup/cpuset/
register: cpuset_dir
- name: Display cpuset dir stats
debug:
var: cpuset_dir.stat
- name: Check for nomad subdir in cpuset
stat:
path: /sys/fs/cgroup/cpuset/nomad
register: nomad_cpuset_dir
ignore_errors: true
- name: Display nomad cpuset dir stats
debug:
var: nomad_cpuset_dir.stat
when: nomad_cpuset_dir.stat.exists is defined and nomad_cpuset_dir.stat.exists
- name: List contents of /sys/fs/cgroup/cpuset/
command: ls -la /sys/fs/cgroup/cpuset/
register: ls_cpuset
changed_when: false
- name: Display contents of /sys/fs/cgroup/cpuset/
debug:
var: ls_cpuset.stdout_lines

@@ -1,14 +0,0 @@
---
- name: Debug Nomad cgroup subdirectory
hosts: germany
become: yes
tasks:
- name: List contents of /sys/fs/cgroup/cpuset/nomad/
command: ls -la /sys/fs/cgroup/cpuset/nomad/
register: ls_nomad_cpuset
changed_when: false
failed_when: false
- name: Display contents of /sys/fs/cgroup/cpuset/nomad/
debug:
var: ls_nomad_cpuset.stdout_lines

@@ -1,24 +0,0 @@
- name: Debug Nomad service on germany
hosts: germany
gather_facts: false
tasks:
- name: Get Nomad service status
command: systemctl status nomad.service --no-pager -l
register: nomad_status
ignore_errors: true
- name: Get Nomad service journal
command: journalctl -xeu nomad.service --no-pager -n 100
register: nomad_journal
ignore_errors: true
- name: Display debug information
debug:
msg: |
--- Nomad Service Status ---
{{ nomad_status.stdout }}
{{ nomad_status.stderr }}
--- Nomad Service Journal ---
{{ nomad_journal.stdout }}
{{ nomad_journal.stderr }}

@@ -1,12 +0,0 @@
- name: Distribute new podman binary to germany
hosts: germany
gather_facts: false
tasks:
- name: Copy new podman binary to /usr/local/bin
copy:
src: /root/mgmt/configuration/podman-remote-static-linux_amd64
dest: /usr/local/bin/podman
owner: root
group: root
mode: '0755'
become: yes

@@ -1,14 +0,0 @@
---
- name: Find Nomad service
hosts: germany
become: yes
tasks:
- name: List systemd services and filter for nomad
shell: systemctl list-unit-files --type=service | grep -i nomad
register: nomad_services
changed_when: false
failed_when: false
- name: Display found services
debug:
var: nomad_services.stdout_lines

@@ -1,19 +0,0 @@
---
- name: Fix cgroup permissions for Nomad
hosts: germany
become: yes
tasks:
- name: Recursively change ownership of nomad cgroup directory
file:
path: /sys/fs/cgroup/cpuset/nomad
state: directory
owner: root
group: root
recurse: yes
- name: Change ownership of the parent cpuset directory
file:
path: /sys/fs/cgroup/cpuset/
state: directory
owner: root
group: root

@@ -4,16 +4,9 @@
become: yes
tasks:
- name: Back up the existing HashiCorp APT source configuration (if present)
copy:
src: "/etc/apt/sources.list.d/hashicorp.list"
dest: "/etc/apt/sources.list.d/hashicorp.list.backup-{{ ansible_date_time.epoch }}"
remote_src: yes
ignore_errors: yes
- name: Create the correct HashiCorp APT source configuration
copy:
content: "deb [trusted=yes] http://apt.releases.hashicorp.com bookworm main\n"
content: "deb [trusted=yes] http://apt.releases.hashicorp.com {{ ansible_distribution_release }} main\n"
dest: "/etc/apt/sources.list.d/hashicorp.list"
owner: root
group: root

@@ -0,0 +1,68 @@
---
- name: Install Consul on the master and ash3c nodes
hosts: master,ash3c
become: yes
vars:
consul_version: "1.21.5"
consul_arch: "arm64" # both nodes are aarch64
tasks:
- name: Check the node architecture
command: uname -m
register: node_arch
changed_when: false
- name: Show the node architecture
debug:
msg: "Node {{ inventory_hostname }} architecture: {{ node_arch.stdout }}"
- name: Check whether consul is already installed
command: which consul
register: consul_check
failed_when: false
changed_when: false
- name: Show the current consul status
debug:
msg: "Consul status: {{ 'already installed' if consul_check.rc == 0 else 'not installed' }}"
- name: Remove the incorrect consul binary (if present)
file:
path: /usr/local/bin/consul
state: absent
when: consul_check.rc == 0
- name: Update the APT cache
apt:
update_cache: yes
ignore_errors: yes
- name: Install consul via APT
apt:
name: consul={{ consul_version }}-1
state: present
- name: Verify the consul installation
command: consul version
register: consul_version_check
changed_when: false
- name: Show the installed consul version
debug:
msg: "Installed Consul version: {{ consul_version_check.stdout_lines[0] }}"
- name: Ensure the consul user exists
user:
name: consul
system: yes
shell: /bin/false
home: /opt/consul
create_home: no
- name: Create the consul data directory
file:
path: /opt/consul
state: directory
owner: consul
group: consul
mode: '0755'

@@ -1,6 +1,6 @@
---
- name: Install Nomad Podman Driver Plugin
hosts: all
hosts: target_nodes
become: yes
vars:
nomad_user: nomad

@@ -1,22 +0,0 @@
---
- name: Manually run Nomad agent for debugging
hosts: germany
become: yes
tasks:
- name: Find Nomad binary path
shell: which nomad || find /usr -name nomad 2>/dev/null | head -1
register: nomad_binary_path
failed_when: nomad_binary_path.stdout == ""
- name: Run nomad agent directly
command: "{{ nomad_binary_path.stdout }} agent -config=/etc/nomad.d/nomad.hcl"
register: nomad_run
failed_when: false
- name: Display Nomad output
debug:
var: nomad_run.stdout
- name: Display Nomad error output
debug:
var: nomad_run.stderr

@@ -1,12 +0,0 @@
- name: Read Nomad config on germany
hosts: germany
gather_facts: false
tasks:
- name: Read nomad.hcl
command: cat /etc/nomad.d/nomad.hcl
register: nomad_config
ignore_errors: true
- name: Display config
debug:
msg: "{{ nomad_config.stdout }}"

@@ -4,47 +4,24 @@
### Issues found
1. **DNS resolution failure**: services cannot discover each other by service name
2. **Network connectivity problems**: abnormal network configuration on the `ash3c` node (address shown as 0.0.0.0)
2. **Network connectivity problems**: abnormal network configuration on the `ash3c` node
3. **Cross-node communication failure**: `no route to host` errors
4. **Cluster cannot form**: persistent "No cluster leader" errors
### Root causes
- Docker Swarm's overlay network has problems with its service-discovery mechanism in cross-node environments
- The `ash3c` node's network configuration may be faulty
- Network configuration problems
- A firewall or network policy may be blocking the Consul cluster communication ports
## Solutions
### Option 1: single-node Consul (temporary workaround)
**File**: `swarm/stacks/consul-single-node.yml`
**Pros**: simple, reliable, immediately usable
**Cons**: no high availability
```bash
docker stack deploy -c swarm/stacks/consul-single-node.yml consul
```
### Option 2: cluster configuration using host networking
**File**: `swarm/stacks/consul-cluster-host-network.yml`
**Pros**: bypasses the overlay network problems
**Cons**: requires manual IP address configuration
### Option 3: repaired overlay network configuration
**File**: `swarm/stacks/consul-cluster-fixed.yml`
**Pros**: uses Docker's native networking
**Cons**: requires solving the underlying network problems
### Option 4: macvlan network configuration
**File**: `swarm/stacks/consul-cluster-macvlan.yml`
**Pros**: uses the physical network directly
**Cons**: requires network-admin privileges and configuration
### Current deployment (Nomad + Podman)
The cluster has migrated from Docker Swarm to Nomad + Podman; the Consul cluster is deployed from the `consul-cluster-nomad.nomad` file, as sketched below.
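A deploy-and-verify sketch for that file (the path matches the README; the job name and the node IP placeholder are assumptions):

```bash
# Deploy the Consul cluster job and watch placement
nomad job run /root/mgmt/consul-cluster-nomad.nomad
nomad job status consul-cluster

# Confirm a leader was elected via any member's HTTP API
curl http://<consul-node-ip>:8500/v1/status/leader
```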
## Network diagnosis steps
### 1. Check node status
```bash
docker node ls
docker node inspect <node-name> --format '{{.Status.Addr}}'
nomad node status
```
### 2. Check network connectivity
@@ -64,10 +41,10 @@ telnet <ash3c-ip> 8301
# 8600: Consul DNS
```
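To probe all of the Consul ports listed above in one pass, a small loop like this works (a sketch; assumes `nc` from netcat is installed):

```bash
# Check the standard Consul ports on a peer node
for port in 8300 8301 8302 8500 8600; do
  nc -zv -w 2 <ash3c-ip> "$port"
done
```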
### 4. Check Docker Swarm networks
### 4. Check Podman networks
```bash
docker network ls
docker network inspect <network-name>
podman network ls
podman network inspect <network-name>
```
## Recommended repair workflow
@@ -108,7 +85,7 @@ docker exec <consul-container> consul operator raft list-peers
## FAQ
### Q: Why isn't service discovery working?
A: Docker Swarm's overlay network can have DNS resolution problems under certain configurations, especially for cross-node communication
A: In the previous Docker Swarm architecture, the overlay network could have DNS resolution problems under certain configurations. The current Nomad + Podman architecture resolves them; a quick check is sketched below
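One way to confirm discovery is healthy under the current setup is to query Consul DNS directly (a sketch; `consul-cluster` matches the service name registered by the Nomad job in this commit, the node IP is a placeholder):

```bash
# Resolve a registered service through Consul's DNS interface (port 8600)
dig @<consul-node-ip> -p 8600 consul-cluster.service.consul SRV
```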
### Q: How do I choose the right network option?
A:

@@ -26,12 +26,10 @@
### 1. Start the Consul cluster
```bash
# Enter the swarm directory
cd swarm
The cluster has migrated from Docker Swarm to Nomad + Podman; deploy the Consul cluster with Nomad instead:
# Start the Consul cluster
docker-compose -f configs/traefik-consul-setup.yml up -d
```bash
nomad run /root/mgmt/consul-cluster-nomad.nomad
```
### 2. Set up the Oracle Cloud configuration

@@ -0,0 +1,87 @@
job "consul-cluster-arm64" {
datacenters = ["dc1"]
type = "service"
# Run only on the ARM64 nodes: master and ash3c
constraint {
attribute = "${attr.unique.hostname}"
operator = "regexp"
value = "(master|ash3c)"
}
group "consul" {
count = 2
# Ensure only one instance runs per node
constraint {
operator = "distinct_hosts"
value = "true"
}
network {
port "http" {
static = 8500
}
port "rpc" {
static = 8400
}
port "serf_lan" {
static = 8301
}
port "serf_wan" {
static = 8302
}
port "server" {
static = 8300
}
port "dns" {
static = 8600
}
}
task "consul" {
driver = "exec"
config {
command = "consul"
args = [
"agent",
"-server",
"-bootstrap-expect=2",
"-data-dir=/tmp/consul-cluster-data",
"-bind=${NOMAD_IP_serf_lan}",
"-client=0.0.0.0",
"-retry-join=100.117.106.136", # master Tailscale IP
"-retry-join=100.116.80.94", # ash3c Tailscale IP
"-ui-config-enabled=true",
"-log-level=INFO",
"-node=${node.unique.name}-consul",
"-datacenter=dc1"
]
}
artifact {
source = "https://releases.hashicorp.com/consul/1.17.0/consul_1.17.0_linux_arm64.zip"
destination = "local/"
}
resources {
cpu = 200
memory = 256
}
service {
name = "consul-cluster-arm64"
port = "http"
check {
type = "http"
path = "/v1/status/leader"
port = "http"
interval = "10s"
timeout = "3s"
}
}
}
}
}

@@ -0,0 +1,88 @@
job "consul-cluster" {
datacenters = ["dc1"]
type = "service"
# Run on three nodes: bj-warden, master, ash3c
constraint {
attribute = "${node.unique.name}"
operator = "regexp"
value = "(bj-warden|master|ash3c)"
}
group "consul" {
count = 3
# Ensure only one instance runs per node
constraint {
operator = "distinct_hosts"
value = "true"
}
network {
port "http" {
static = 8500
}
port "rpc" {
static = 8400
}
port "serf_lan" {
static = 8301
}
port "serf_wan" {
static = 8302
}
port "server" {
static = 8300
}
port "dns" {
static = 8600
}
}
task "consul" {
driver = "exec"
config {
command = "consul"
args = [
"agent",
"-server",
"-bootstrap-expect=3",
"-data-dir=/tmp/consul-cluster-data",
"-bind=${NOMAD_IP_serf_lan}",
"-client=0.0.0.0",
"-retry-join=100.122.197.112", # bj-warden Tailscale IP
"-retry-join=100.117.106.136", # master Tailscale IP
"-retry-join=100.116.80.94", # ash3c Tailscale IP
"-ui-config-enabled=true",
"-log-level=INFO",
"-node=${node.unique.name}-consul",
"-datacenter=dc1"
]
}
artifact {
source = "https://releases.hashicorp.com/consul/1.17.0/consul_1.17.0_linux_arm64.zip"
destination = "local/"
}
resources {
cpu = 200
memory = 256
}
service {
name = "consul-cluster"
port = "http"
check {
type = "http"
path = "/v1/status/leader"
port = "http"
interval = "10s"
timeout = "3s"
}
}
}
}
}

@@ -5,7 +5,7 @@ job "consul-cluster" {
constraint {
attribute = "${node.unique.name}"
operator = "regexp"
value = "^(master|ash3c|hcs)$"
value = "^(master|ash3c|semaphore)$"
}
group "consul" {
@@ -59,7 +59,7 @@ job "consul-cluster" {
"-client=0.0.0.0",
"-retry-join=100.117.106.136",
"-retry-join=100.116.80.94",
"-retry-join=100.84.197.26"
"-retry-join=100.116.158.95"
]
volumes = [

@@ -0,0 +1,157 @@
job "consul-cluster-simple" {
datacenters = ["dc1"]
type = "service"
group "consul-master" {
count = 1
constraint {
attribute = "${node.unique.name}"
value = "master"
}
network {
port "http" {
static = 8500
}
port "rpc" {
static = 8300
}
port "serf_lan" {
static = 8301
}
port "serf_wan" {
static = 8302
}
}
task "consul" {
driver = "exec"
config {
command = "consul"
args = [
"agent",
"-server",
"-bootstrap-expect=3",
"-data-dir=/opt/nomad/data/consul",
"-client=100.64.0.0/10",
"-bind=100.117.106.136",
"-advertise=100.117.106.136",
"-retry-join=100.116.80.94",
"-retry-join=100.122.197.112",
"-ui"
]
}
resources {
cpu = 300
memory = 512
}
}
}
group "consul-ash3c" {
count = 1
constraint {
attribute = "${node.unique.name}"
value = "ash3c"
}
network {
port "http" {
static = 8500
}
port "rpc" {
static = 8300
}
port "serf_lan" {
static = 8301
}
port "serf_wan" {
static = 8302
}
}
task "consul" {
driver = "exec"
config {
command = "consul"
args = [
"agent",
"-server",
"-bootstrap-expect=3",
"-data-dir=/opt/nomad/data/consul",
"-client=100.64.0.0/10",
"-bind=100.116.80.94",
"-advertise=100.116.80.94",
"-retry-join=100.117.106.136",
"-retry-join=100.122.197.112",
"-ui"
]
}
resources {
cpu = 300
memory = 512
}
}
}
group "consul-warden" {
count = 1
constraint {
attribute = "${node.unique.name}"
value = "bj-warden"
}
network {
port "http" {
static = 8500
}
port "rpc" {
static = 8300
}
port "serf_lan" {
static = 8301
}
port "serf_wan" {
static = 8302
}
}
task "consul" {
driver = "exec"
config {
command = "consul"
args = [
"agent",
"-server",
"-bootstrap-expect=3",
"-data-dir=/opt/nomad/data/consul",
"-client=100.64.0.0/10",
"-bind=100.122.197.112",
"-advertise=100.122.197.112",
"-retry-join=100.117.106.136",
"-retry-join=100.116.80.94",
"-ui"
]
}
resources {
cpu = 300
memory = 512
}
}
}
}

@@ -0,0 +1,190 @@
job "consul-cluster-three-nodes" {
datacenters = ["dc1"]
type = "service"
group "consul-master" {
count = 1
constraint {
attribute = "${node.unique.name}"
value = "master"
}
network {
port "http" {
static = 8500
}
port "rpc" {
static = 8300
}
port "serf_lan" {
static = 8301
}
port "serf_wan" {
static = 8302
}
}
task "consul" {
driver = "exec"
config {
command = "consul"
args = [
"agent",
"-server",
"-bootstrap-expect=3",
"-data-dir=/opt/nomad/data/consul",
"-client=0.0.0.0",
"-bind=100.117.106.136",
"-advertise=100.117.106.136",
"-retry-join=100.116.80.94",
"-retry-join=100.122.197.112",
"-ui-config-enabled=true"
]
}
resources {
cpu = 300
memory = 512
}
service {
name = "consul-master"
port = "http"
check {
type = "http"
path = "/v1/status/leader"
port = "http"
interval = "10s"
timeout = "3s"
}
}
}
}
group "consul-ash3c" {
count = 1
constraint {
attribute = "${node.unique.name}"
value = "ash3c"
}
network {
port "http" {
static = 8500
}
port "rpc" {
static = 8300
}
port "serf_lan" {
static = 8301
}
port "serf_wan" {
static = 8302
}
}
task "consul" {
driver = "exec"
config {
command = "consul"
args = [
"agent",
"-server",
"-bootstrap-expect=3",
"-data-dir=/opt/nomad/data/consul",
"-client=0.0.0.0",
"-bind=100.116.80.94",
"-advertise=100.116.80.94",
"-retry-join=100.117.106.136",
"-retry-join=100.122.197.112",
"-ui-config-enabled=true"
]
}
resources {
cpu = 300
memory = 512
}
service {
name = "consul-ash3c"
port = "http"
check {
type = "http"
path = "/v1/status/leader"
port = "http"
interval = "10s"
timeout = "3s"
}
}
}
}
group "consul-warden" {
count = 1
constraint {
attribute = "${node.unique.name}"
value = "bj-warden"
}
network {
port "http" {
static = 8500
}
port "rpc" {
static = 8300
}
port "serf_lan" {
static = 8301
}
port "serf_wan" {
static = 8302
}
}
task "consul" {
driver = "exec"
config {
command = "consul"
args = [
"agent",
"-server",
"-bootstrap-expect=3",
"-data-dir=/opt/nomad/data/consul",
"-client=0.0.0.0",
"-bind=100.122.197.112",
"-advertise=100.122.197.112",
"-retry-join=100.117.106.136",
"-retry-join=100.116.80.94",
"-ui-config-enabled=true"
]
}
resources {
cpu = 300
memory = 512
}
service {
name = "consul-warden"
port = "http"
check {
type = "http"
path = "/v1/status/leader"
port = "http"
interval = "10s"
timeout = "3s"
}
}
}
}
}

@@ -0,0 +1,47 @@
job "consul-single-member" {
datacenters = ["dc1"]
type = "service"
priority = 50
constraint {
attribute = "${node.unique.name}"
value = "warden"
}
group "consul" {
count = 1
task "consul" {
driver = "exec"
config {
command = "consul"
args = ["agent", "-dev", "-client=0.0.0.0", "-data-dir=/tmp/consul-data"]
}
resources {
cpu = 200
memory = 256
network {
mbits = 10
port "http" {
static = 8500
}
}
}
service {
name = "consul"
port = "http"
check {
type = "http"
path = "/v1/status/leader"
port = "http"
interval = "10s"
timeout = "2s"
}
}
}
}
}

@@ -0,0 +1,47 @@
job "consul-single-member" {
datacenters = ["dc1"]
type = "service"
priority = 50
constraint {
attribute = "${node.unique.name}"
value = "warden"
}
group "consul" {
count = 1
task "consul" {
driver = "exec"
config {
command = "consul"
args = ["agent", "-dev", "-client=0.0.0.0", "-data-dir=/tmp/consul-data"]
}
resources {
cpu = 200
memory = 256
network {
mbits = 10
port "http" {
static = 8500
}
}
}
service {
name = "consul"
port = "http"
check {
type = "http"
path = "/v1/status/leader"
port = "http"
interval = "10s"
timeout = "2s"
}
}
}
}
}

@@ -0,0 +1,46 @@
job "consul-test-warden" {
datacenters = ["dc1"]
type = "service"
constraint {
attribute = "${node.unique.name}"
value = "bj-warden"
}
group "consul" {
count = 1
network {
port "http" {
static = 8500
}
}
task "consul" {
driver = "exec"
config {
command = "consul"
args = ["agent", "-dev", "-client=0.0.0.0", "-data-dir=/tmp/consul-test"]
}
resources {
cpu = 200
memory = 256
}
service {
name = "consul-test"
port = "http"
check {
type = "http"
path = "/v1/status/leader"
port = "http"
interval = "10s"
timeout = "2s"
}
}
}
}
}

@@ -0,0 +1,46 @@
job "consul-warden" {
datacenters = ["dc1"]
type = "service"
priority = 50
constraint {
attribute = "${node.unique.name}"
value = "warden"
}
group "consul" {
count = 1
task "consul" {
driver = "exec"
config {
command = "consul"
args = ["agent", "-dev", "-client=0.0.0.0", "-data-dir=/tmp/consul-data"]
}
resources {
cpu = 200
memory = 256
network {
port "http" {
static = 8500
}
}
}
service {
name = "consul"
port = "http"
check {
type = "http"
path = "/v1/status/leader"
port = "http"
interval = "10s"
timeout = "2s"
}
}
}
}
}

@@ -0,0 +1,46 @@
job "service-discovery-warden" {
datacenters = ["dc1"]
type = "service"
constraint {
attribute = "${node.unique.name}"
value = "warden"
}
group "discovery" {
count = 1
network {
port "http" {
static = 8500
}
}
task "discovery" {
driver = "exec"
config {
command = "consul"
args = ["agent", "-dev", "-client=0.0.0.0", "-data-dir=/tmp/discovery-data"]
}
resources {
cpu = 200
memory = 256
}
service {
name = "discovery-service"
port = "http"
check {
type = "http"
path = "/v1/status/leader"
port = "http"
interval = "10s"
timeout = "2s"
}
}
}
}
}

@@ -0,0 +1,52 @@
job "simple-consul-test" {
datacenters = ["dc1"]
type = "service"
constraint {
attribute = "${node.unique.name}"
value = "warden"
}
group "consul" {
count = 1
network {
port "http" {
static = 8500
}
}
task "consul" {
driver = "exec"
config {
command = "consul"
args = [
"agent",
"-dev",
"-client=0.0.0.0",
"-bind=100.122.197.112",
"-data-dir=/tmp/consul-test-data"
]
}
resources {
cpu = 200
memory = 256
}
service {
name = "consul-test"
port = "http"
check {
type = "http"
path = "/v1/status/leader"
port = "http"
interval = "10s"
timeout = "2s"
}
}
}
}
}

@@ -0,0 +1,24 @@
job "test-podman" {
datacenters = ["dc1"]
type = "batch"
group "test" {
count = 1
task "hello" {
driver = "podman"
config {
image = "docker.io/library/hello-world:latest"
logging = {
driver = "journald"
}
}
resources {
cpu = 100
memory = 128
}
}
}
}

@@ -0,0 +1,23 @@
job "test-podman-simple" {
datacenters = ["dc1"]
type = "batch"
group "test" {
count = 1
task "hello" {
driver = "podman"
config {
image = "alpine:latest"
command = "echo"
args = ["Hello from Podman!"]
}
resources {
cpu = 100
memory = 64
}
}
}
}


@ -0,0 +1,31 @@
job "test-private-registry" {
datacenters = ["dc1"]
type = "batch"
group "test" {
count = 1
# 指定运行在北京节点上
constraint {
attribute = "${node.unique.name}"
operator = "regexp"
value = "bj-.*"
}
task "hello" {
driver = "podman"
config {
image = "hello-world:latest"
logging = {
driver = "journald"
}
}
resources {
cpu = 100
memory = 64
}
}
}
}
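The short image name hello-world:latest only resolves against a private registry if the bj-* clients are prepared for it; a hedged sketch, where registry.example.com stands in for the actual registry:

podman login registry.example.com                                       # store pull credentials on the client
grep -A2 unqualified-search-registries /etc/containers/registries.conf  # short-name resolution order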

jobs/test-simple.nomad Normal file

@ -0,0 +1,27 @@
job "test-simple" {
datacenters = ["dc1"]
type = "service"
constraint {
attribute = "${node.unique.name}"
value = "warden"
}
group "test" {
count = 1
task "hello" {
driver = "exec"
config {
command = "echo"
args = ["Hello from warden node!"]
}
resources {
cpu = 100
memory = 64
}
}
}
}

key.md

@ -1,61 +0,0 @@
oci usa
-----BEGIN PRIVATE KEY-----
MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDiDwCO56943GNv
rg2WYQCZTpdgA1YfdDcM6AXugzwGm6zQwuhuDdTABwJj0PfB7q5s0MZFqwpVW+MS
IkTQsk8r+8gBo9FBtH4nneBfXgqjiKhlHhcgqHWALXm8AzDq6MJ+lCDwjgi5PsID
5jbnBUBXRpVXlkEDMZj5yTNiRfLMlFZiqc4mv1j1RSIQeuptt7l2AnXhKSuW2FL3
aQNWspUylIs9IDXuC490r04/fZmX0Iw7Crb4yWWFem1e1x0g6qDiAizwpE3FF0No
AqkH+p+y3Qe7Pew2/UUS4VNRyoGBstpbBJMQ8dXRER9M4KTEdLnSG2EzdM5IGupj
Gwo9PnPPAgMBAAECggEAChaP+HtPvLMJH6NtfnfXEQBi3P6zyd+OfV2gC3mBJO0E
P09ovqXmB/5ywDBD05G/6EyWLJG/ek5eycu3CnaKoJ8x2RuNwRg5m7GooQOPXKZC
mDtJiO7mSia9YgM6caFFh2SQ5mtQPwQVSxtA+U+mBBRlocJWJbsBj/7HSOwaM8BC
wl19kZiW0aOkoGidxvjlJfkPiNer/jTy5RMNKruDpaF8PsF7xIMLwuxT5VQ/gyYA
frXsWfQp+sve/XfUg9/RGP9jJQHNppL56YWYPa8XusC2nJCym9RLDlK56jF9jhYM
iQThksG3TzOXjdGM7MP5Q/SfNckQWy0KTOu7h+N98QKBgQD9l7SFypX5Mgn2PC1A
U3lwiLCvviaKSNbzNXc6pnijbGEEvNpUGRyGwmjXItGLiEoAky0eu42Ipult6PB+
WsjCIGTGI0UBObrjbWfaj+vt6zCcI653tgvrQO97t+F6xTHiQyGRDqIlFQcE9ZKO
EJ+wuC+MbBFGPSc/Zw43/twpzQKBgQDkNGHiuoGvoSuRPpMIB82IsMmtOlzymk8B
ZZMNzHfxFyf7a/1NUhc3UvmZdE67MS4cmoY0LCY4HBW2zxxsnX0cA+vRNLFneJMC
oH2XgQs+mi1Dgem+N6EYO/5PJZqeyUJ5x5WFGrULvsL2GKcSeSp2o2nkFyKSHfhD
7zWSQyNICwKBgQD1lDZD4n3+BxFSndAMnUnbSuQgLPrRq9xNRpeh+piVWl1R4zlj
e7X+YsJ4pMVcZK2VhPGK84IKtekUgSJ0mqIULJ6qqnkmyKtNlyOdqwaFLt+yNXO9
hlRgjE/e9aGr7M90GCKngQ5Q7t4PVWmJnlunHZceW4EXDh217qz8WRkIeQKBgGV0
JFB4Ok+qh4P7HcLkNSwf7Ilm+QuiLp2gWtA3pts4QD42tFY7uLaP3Qer/ZSbOLTe
vetT9WnckorDaQ+gtI5P7/cCRhyKLlFsqGlCpY0fXiA1EYXPlX8ArP7i6OrO7w7U
/FRAm1ytYl+mdiBwXcCAxgLxhh0P1d/d6SMtVfIhAoGBALdQ9tPSkFsJiDJi+d1u
SwPauzmr8O0geaLuey9WxVAxm5DR7eTJTUswZJGqadAZlnDr/H+7YU6lEBRI224Q
0ApgIIeGpMsTZFetRsOq+TUoNQcGVCtspOckCbElW6NMSzuAlf7aI+VqKM3uwlAn
FiTDYcmZIE1yNjYi+uaD8vkU
-----END PRIVATE KEY-----
OCI_API_KEY
oci kr
-----BEGIN PRIVATE KEY-----
MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQCXpRaddq1RzS0J
cBAGHh94DJyqOBti+ASs6Abv1v3gyXu0ssNSe/g/ZBe+aDIIBisL/5nRLDZfeNZg
S5azKgPFPPlukTKVAfatQrgsaCC3uNm4D0NfGSINY90HcpZwX5GCZbUIFZuBZBac
33ze+RXoEo/QJFqniiUh/19A1R6mO2oJchgcXYv3+Fe8db35Fb9HDt6tg8+maO5R
8W149SpQUlU9JWCWuRV2cdKSXP1/z3JH3bq7hezD5qkZrcwt+4e+Pj9moYyA+O8i
n9Z9yQUeipp+HR3SdM++pzUhc1E/NNrgNTKQJi3OvrUx41jc00/5KwZUYyOtvJa3
/GBzycKNAgMBAAECggEAFiooGw3cmWc+3PFHNk2y1c4qG+slfZq4vDkRwn6PDwsE
DM5QJD9AcquDmO4L2gZkxlUuu1cV/3BfDSYfOcK7WFnoL1QDq6nkz0BAQSVbGt9m
2zNH6p92zbQ5+zuxZ21gjEmnYy4dU5U4hOdZjhGkNQ55fLfDlFdpxAVae9RqrWsp
8/9SiuD1G+fgNKbe8x+ASd8Y4rP9gjItDUCqmi2rnJJsR8i1PHMZiLC0S3FbcUo6
R7kwO0dGaXYFWtPUlMB/OPx172HM6yfE+lDmT6gKfWJ2dOKvu7mOtXTGSBIGMREV
MpC9ZhYBsk3jvZKlEju0sEATg3tdDBeX3xRNOkPFFQKBgQDVrmuAOS/Xrz5ZGzwl
nRzgtR02EjdQ+wJrfyJoufmJWZQlC6cAe3RyCBs8c7YiSyvTL773ZW/v9RoVln0R
WDaEdw7L+YLjdv51Dzy3KVY9A4HX1s+xhP5zG2C24Rp5ALwfS6Qf2JDGtAlm821J
z5kwEcY2sSOxGqL8MTA9gysmqwKBgQC1rXBNsW4BVhbqOXMC6ZpwQ/n6BEdHGl8g
1nNonznnZ6ECI/G/TBYA5Pb23FslpQAaFaYgg6pY1FisnIVUpjk23xKd193zY7P4
8Ah7D/gcAtloXeJ6sknaWjkBBYRRyZhlG3M428kKoIgxQveokGZiLvF4xB9/NBM3
oCepbq6bpwKBgQC8KWFEghcdCJYQhSkLvjQls5bLfHL1fnN9EXDNY6bXSehoTsB6
bjv2BillrEcgH62xxAOXet19IgocJG5xjYpET0raVxbpEmmzzv0aFO55v9Lgq6os
mf4ugldB8ysKjpkZvdQCrwOd1f/JhmYgbwxoBd7TXl0doWUQSog+QnkHDQKBgDMd
jDZf0GqR1TqrVT+hiDFD/uYoJAHOWqt7itcJzZnc30Eh6dd/ycUQpqeIEiECTogI
RUhqoxgBDr3p/910My7MDonYfXsIN0+4ATrWoGEJMDAcEiehWAQWVGmEKtl0FeuE
kKOTuvnBdvAdPl7v2c6QFKJ807vPZATHi8ExAfGLAoGBALuFCz/9Xlw5legmGJxP
IgbhcmSFCw9OmGRA6KdRZUN+Zsb6FVj9eCcF1My13iw359xFaYdhBD6hnbPIe3XS
bzVMczdiuRAI9LijXhzGWmw5hlkumaVDqZI3+Sy5lohLhOsV4Erss1vqL3R40zjk
fk2tnbktORYd6/Q0i0FJdO/H
-----END PRIVATE KEY-----
OCI_API_KEY


@ -0,0 +1,69 @@
---
- name: Add Beijing prefix to LXC node names in Nomad configuration
  hosts: beijing
  become: yes
  vars:
    node_prefixes:
      influxdb: "bj-influxdb"
      warden: "bj-warden"
      hcp1: "bj-hcp1"
      hcp2: "bj-hcp2"
    tailscale_ips:
      influxdb: "100.100.7.4"
      warden: "100.122.197.112"
      hcp1: "100.97.62.111"
      hcp2: "100.116.112.45"
  tasks:
    - name: Stop Nomad service
      systemd:
        name: nomad
        state: stopped

    - name: Get current node name from inventory
      set_fact:
        current_node_name: "{{ inventory_hostname }}"
        new_node_name: "{{ node_prefixes[inventory_hostname] }}"
        tailscale_ip: "{{ tailscale_ips[inventory_hostname] }}"

    - name: Display node name change
      debug:
        msg: "Changing node name from {{ current_node_name }} to {{ new_node_name }}, using Tailscale IP {{ tailscale_ip }}"

    - name: Update node name in Nomad configuration
      lineinfile:
        path: /etc/nomad.d/nomad.hcl
        regexp: '^name\s*='
        line: 'name = "{{ new_node_name }}"'
        insertafter: 'datacenter = "dc1"'
        state: present

    - name: Validate Nomad configuration
      shell: nomad config validate /etc/nomad.d/nomad.hcl
      register: config_validation
      failed_when: config_validation.rc != 0

    - name: Start Nomad service
      systemd:
        name: nomad
        state: started

    - name: Wait for Nomad to be ready on Tailscale IP
      wait_for:
        port: 4646
        host: "{{ tailscale_ip }}"
        delay: 10
        timeout: 60

    - name: Wait for node registration
      pause:
        seconds: 15

    - name: Display new configuration
      shell: grep -E '^(datacenter|name|bind_addr)\s*=' /etc/nomad.d/nomad.hcl
      register: nomad_config_check

    - name: Show updated configuration
      debug:
        var: nomad_config_check.stdout_lines
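A dry run shows the rename and the resulting file diff before Nomad is actually restarted; the playbook filename here is assumed:

ansible-playbook -i inventories/production/inventory.ini \
  playbooks/add-beijing-prefix.yml --check --diff --limit beijing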


@ -0,0 +1,56 @@
---
- name: Fix duplicate plugin_dir configuration
  hosts: nomadlxc,hcp
  become: yes
  tasks:
    - name: Stop Nomad service
      systemd:
        name: nomad
        state: stopped

    - name: Remove duplicate plugin_dir lines
      lineinfile:
        path: /etc/nomad.d/nomad.hcl
        regexp: '^plugin_dir = "/opt/nomad/plugins"'
        state: absent

    - name: Ensure only one plugin_dir configuration exists
      lineinfile:
        path: /etc/nomad.d/nomad.hcl
        regexp: '^plugin_dir = "/opt/nomad/data/plugins"'
        line: 'plugin_dir = "/opt/nomad/data/plugins"'
        insertafter: 'data_dir = "/opt/nomad/data"'
        state: present

    - name: Validate Nomad configuration
      shell: nomad config validate /etc/nomad.d/nomad.hcl
      register: config_validation
      failed_when: config_validation.rc != 0

    - name: Start Nomad service
      systemd:
        name: nomad
        state: started

    - name: Wait for Nomad to be ready
      wait_for:
        port: 4646
        host: localhost
        delay: 10
        timeout: 60

    - name: Wait for plugins to load
      pause:
        seconds: 15

    - name: Check driver status
      shell: |
        export NOMAD_ADDR=http://localhost:4646
        nomad node status -self | grep -A 10 "Driver Status"
      register: driver_status
      failed_when: false

    - name: Display driver status
      debug:
        var: driver_status.stdout_lines
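After the playbook runs, exactly one plugin_dir line should remain; a quick check on any affected node:

grep -n '^plugin_dir' /etc/nomad.d/nomad.hcl
# expected single hit: plugin_dir = "/opt/nomad/data/plugins"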


@ -0,0 +1,112 @@
---
- name: Fix Nomad Podman Driver Configuration
  hosts: nomadlxc,hcp
  become: yes
  vars:
    nomad_user: nomad
  tasks:
    - name: Stop Nomad service
      systemd:
        name: nomad
        state: stopped

    - name: Install Podman driver plugin if missing
      block:
        - name: Check if plugin exists
          stat:
            path: /opt/nomad/data/plugins/nomad-driver-podman
          register: plugin_exists

        - name: Download and install Podman driver plugin
          block:
            - name: Download Nomad Podman driver
              get_url:
                url: "https://releases.hashicorp.com/nomad-driver-podman/0.6.1/nomad-driver-podman_0.6.1_linux_amd64.zip"
                dest: "/tmp/nomad-driver-podman.zip"
                mode: '0644'

            - name: Extract Podman driver
              unarchive:
                src: "/tmp/nomad-driver-podman.zip"
                dest: "/tmp"
                remote_src: yes

            - name: Install Podman driver
              copy:
                src: "/tmp/nomad-driver-podman"
                dest: "/opt/nomad/data/plugins/nomad-driver-podman"
                owner: "{{ nomad_user }}"
                group: "{{ nomad_user }}"
                mode: '0755'
                remote_src: yes

            - name: Clean up temporary files
              file:
                path: "{{ item }}"
                state: absent
              loop:
                - "/tmp/nomad-driver-podman.zip"
                - "/tmp/nomad-driver-podman"
          when: not plugin_exists.stat.exists

    - name: Update Nomad configuration with correct plugin name and socket path
      replace:
        path: /etc/nomad.d/nomad.hcl
        regexp: 'plugin "podman" \{'
        replace: 'plugin "nomad-driver-podman" {'

    - name: Update socket path to system socket
      replace:
        path: /etc/nomad.d/nomad.hcl
        regexp: 'socket_path = "unix:///run/user/1001/podman/podman.sock"'
        replace: 'socket_path = "unix:///run/podman/podman.sock"'

    - name: Add plugin_dir configuration if missing
      lineinfile:
        path: /etc/nomad.d/nomad.hcl
        line: 'plugin_dir = "/opt/nomad/data/plugins"'
        insertafter: 'data_dir = "/opt/nomad/data"'
        state: present

    - name: Ensure Podman socket is enabled and running
      systemd:
        name: podman.socket
        enabled: yes
        state: started

    - name: Start Nomad service
      systemd:
        name: nomad
        state: started

    - name: Wait for Nomad to be ready
      wait_for:
        port: 4646
        host: localhost
        delay: 10
        timeout: 60

    - name: Wait for plugins to load
      pause:
        seconds: 20

    - name: Check driver status
      shell: |
        export NOMAD_ADDR=http://localhost:4646
        nomad node status -self | grep -A 10 "Driver Status"
      register: driver_status
      failed_when: false

    - name: Display driver status
      debug:
        var: driver_status.stdout_lines

    - name: Check for Podman driver in logs
      shell: journalctl -u nomad -n 30 --no-pager | grep -E "(podman|plugin)" | tail -10
      register: plugin_logs
      failed_when: false

    - name: Display plugin logs
      debug:
        var: plugin_logs.stdout_lines
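The driver only registers if the plugin block name matches the binary under plugin_dir; a quick verification pass on a node, assuming the default HTTP port:

grep -n 'plugin "nomad-driver-podman"' /etc/nomad.d/nomad.hcl
grep -n 'socket_path' /etc/nomad.d/nomad.hcl
NOMAD_ADDR=http://localhost:4646 nomad node status -self | grep -A 10 "Driver Status"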


@ -0,0 +1,46 @@
---
- name: Fix NFS mounting on warden node
  hosts: warden
  become: yes
  tasks:
    - name: Ensure rpcbind is running
      systemd:
        name: rpcbind
        state: started
        enabled: yes

    - name: Ensure nfs-client.target is active
      systemd:
        name: nfs-client.target
        state: started
        enabled: yes

    - name: Create consul-shared directory
      file:
        path: /opt/consul-shared
        state: directory
        mode: '0755'

    - name: Mount NFS share
      mount:
        path: /opt/consul-shared
        src: snail:/fs/1000/nfs
        fstype: nfs
        opts: rw,sync,vers=3
        state: mounted

    - name: Add to fstab for persistence
      mount:
        path: /opt/consul-shared
        src: snail:/fs/1000/nfs
        fstype: nfs
        opts: rw,sync,vers=3
        state: present

    - name: Verify mount
      command: df -h /opt/consul-shared
      register: mount_result

    - name: Display mount result
      debug:
        var: mount_result.stdout
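To confirm the export is actually published by snail and the mount survived, something like:

showmount -e snail                 # the export list should include /fs/1000/nfs
mount | grep /opt/consul-shared    # verify the active mount and its options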


@ -0,0 +1,75 @@
---
- name: Setup NFS Storage for Consul Cluster
  hosts: localhost
  gather_facts: false
  vars:
    nfs_server: snail
    nfs_export_path: /fs/1000/nfs
    nfs_mount_path: /opt/consul-shared
  tasks:
    # The fstab entry is written through a root shell so that sudo -S can read
    # the password from stdin while the entry itself still reaches the file.
    - name: Install NFS client and mount on master
      ansible.builtin.shell: |
        ssh -o StrictHostKeyChecking=no -p 60022 ben@master '
          echo "3131" | sudo -S apt update &&
          echo "3131" | sudo -S apt install -y nfs-common &&
          echo "3131" | sudo -S mkdir -p {{ nfs_mount_path }} &&
          echo "3131" | sudo -S mount -t nfs {{ nfs_server }}:{{ nfs_export_path }} {{ nfs_mount_path }} &&
          echo "3131" | sudo -S sh -c "echo {{ nfs_server }}:{{ nfs_export_path }} {{ nfs_mount_path }} nfs defaults 0 0 >> /etc/fstab"
        '
      delegate_to: localhost
      register: master_result

    - name: Install NFS client and mount on ash3c
      ansible.builtin.shell: |
        ssh -o StrictHostKeyChecking=no ben@ash3c '
          echo "3131" | sudo -S apt update &&
          echo "3131" | sudo -S apt install -y nfs-common &&
          echo "3131" | sudo -S mkdir -p {{ nfs_mount_path }} &&
          echo "3131" | sudo -S mount -t nfs {{ nfs_server }}:{{ nfs_export_path }} {{ nfs_mount_path }} &&
          echo "3131" | sudo -S sh -c "echo {{ nfs_server }}:{{ nfs_export_path }} {{ nfs_mount_path }} nfs defaults 0 0 >> /etc/fstab"
        '
      delegate_to: localhost
      register: ash3c_result

    - name: Install NFS client and mount on warden
      ansible.builtin.shell: |
        ssh -o StrictHostKeyChecking=no ben@warden '
          echo "3131" | sudo -S apt update &&
          echo "3131" | sudo -S apt install -y nfs-common &&
          echo "3131" | sudo -S mkdir -p {{ nfs_mount_path }} &&
          echo "3131" | sudo -S mount -t nfs {{ nfs_server }}:{{ nfs_export_path }} {{ nfs_mount_path }} &&
          echo "3131" | sudo -S sh -c "echo {{ nfs_server }}:{{ nfs_export_path }} {{ nfs_mount_path }} nfs defaults 0 0 >> /etc/fstab"
        '
      delegate_to: localhost
      register: warden_result

    - name: Test NFS connectivity on all nodes
      ansible.builtin.shell: |
        ssh -o StrictHostKeyChecking=no -p 60022 ben@master 'echo "3131" | sudo -S touch {{ nfs_mount_path }}/test-master-$(date +%s) && ls -la {{ nfs_mount_path }}/'
        ssh -o StrictHostKeyChecking=no ben@ash3c 'echo "3131" | sudo -S touch {{ nfs_mount_path }}/test-ash3c-$(date +%s) && ls -la {{ nfs_mount_path }}/'
        ssh -o StrictHostKeyChecking=no ben@warden 'echo "3131" | sudo -S touch {{ nfs_mount_path }}/test-warden-$(date +%s) && ls -la {{ nfs_mount_path }}/'
      delegate_to: localhost
      register: nfs_test_result

    - name: Display NFS test results
      ansible.builtin.debug:
        var: nfs_test_result.stdout_lines

    - name: Create Consul data directories on NFS
      ansible.builtin.shell: |
        ssh -o StrictHostKeyChecking=no -p 60022 ben@master 'echo "3131" | sudo -S mkdir -p {{ nfs_mount_path }}/consul-master'
        ssh -o StrictHostKeyChecking=no ben@ash3c 'echo "3131" | sudo -S mkdir -p {{ nfs_mount_path }}/consul-ash3c'
        ssh -o StrictHostKeyChecking=no ben@warden 'echo "3131" | sudo -S mkdir -p {{ nfs_mount_path }}/consul-warden'
      delegate_to: localhost
      register: consul_dirs_result

    - name: Display setup completion
      ansible.builtin.debug:
        msg:
          - "NFS setup completed successfully!"
          - "NFS mount point: {{ nfs_mount_path }}"
          - "Consul data directories created:"
          - "  - {{ nfs_mount_path }}/consul-master"
          - "  - {{ nfs_mount_path }}/consul-ash3c"
          - "  - {{ nfs_mount_path }}/consul-warden"


@ -0,0 +1,69 @@
#!/bin/bash
# Retired-node cleanup script
# Created: 2025-09-27
# Scheduled run date: 2025-10-27 (one month later)

set -e

# export so the nomad CLI invoked below actually sees the address
export NOMAD_ADDR=${NOMAD_ADDR:-"http://100.116.158.95:4646"}

echo "=== Retired-node cleanup script ==="
echo "Run time: $(date)"
echo "Nomad address: $NOMAD_ADDR"
echo ""

# Retired node list (id:name:reason)
RETIRED_NODES=(
  "583f1b77:semaphore:converted to a pure server node"
  "06bb8a3a:hcs:Huawei Cloud node retired"
)

echo "The following retired nodes will be cleaned up:"
for node_info in "${RETIRED_NODES[@]}"; do
  IFS=':' read -r node_id node_name reason <<< "$node_info"
  echo "  - $node_name ($node_id): $reason"
done
echo ""

read -p "Proceed with cleaning up these nodes? (y/N): " confirm
if [[ $confirm != [yY] ]]; then
  echo "Operation cancelled"
  exit 0
fi

echo "Starting cleanup of retired nodes..."
for node_info in "${RETIRED_NODES[@]}"; do
  IFS=':' read -r node_id node_name reason <<< "$node_info"
  echo "Processing node: $node_name ($node_id)"

  # Check whether the node is still known to the cluster
  if nomad node status "$node_id" >/dev/null 2>&1; then
    echo "  - Node exists, starting cleanup..."

    # Make sure the node is drained
    echo "  - Ensuring the node is drained..."
    nomad node drain -enable -yes "$node_id" || true

    # Disable scheduling eligibility
    echo "  - Disabling scheduling eligibility..."
    nomad node eligibility -disable "$node_id" || true

    # Give running allocations time to migrate
    echo "  - Waiting for task migration to finish..."
    sleep 10

    echo "  - Node $node_name cleaned up successfully"
  else
    echo "  - Node does not exist or has already been cleaned up"
  fi
  echo ""
done

echo "=== Cleanup finished ==="
echo "Please verify cluster state manually:"
echo "  nomad node status"
echo "  nomad server members"
echo ""
echo "To permanently remove node records, contact an administrator"