A completely new usage mechanism

chenqiang 2025-11-04 16:09:35 +08:00
commit 875beee773
12 changed files with 1921 additions and 0 deletions

3
.gitignore vendored Normal file

@@ -0,0 +1,3 @@
*.log
.envrc*
postgres_password.enc*

249
README.md Normal file

@@ -0,0 +1,249 @@
# PostgreSQL Database Service
This directory provides a Docker-based PostgreSQL database service. It uses direnv and OpenSSL for secure password management, so no sensitive values are hard-coded in configuration files. An interactive initialization script and a unified service-management script simplify deployment and maintenance.
## Directory Structure
```
├── .envrc                 # Environment variables (managed by direnv)
├── .gitignore             # Git ignore rules
├── README.md              # This document
├── clear                  # Script that wipes the database data
├── conf/                  # PostgreSQL configuration files
├── data/                  # PostgreSQL data directory
├── docker-compose.yml     # Docker Compose configuration
├── fulldump               # Full database backup script (supports multiple databases)
├── init                   # Interactive initialization script
├── postgres_password.enc  # Encrypted database password file
├── restore                # Database restore script (with smart backup selection)
└── service                # Unified service-management script (replaces the old start/stop scripts)
```
## Requirements
### Development environment
- macOS
- Docker and Docker Compose
- direnv (`brew install direnv`)
- OpenSSL
### Production environment
- A mainstream Linux distribution (including, but not limited to, Kylin Server OS v10 and Debian 12)
- Docker and Docker Compose (architecture compatibility is handled automatically)
- direnv (`apt-get install direnv`, or your distribution's package manager)
- OpenSSL
> **Note**: Docker handles ARM and x64 compatibility automatically; this setup does not distinguish between hardware architectures.
## Usage
### First-time setup
1. **Install the dependencies**
   macOS:
   ```bash
   # Using Homebrew
   brew install direnv openssl
   ```
   Linux:
   ```bash
   apt-get update
   apt-get install direnv openssl
   ```
2. **Run the interactive initialization**
   ```bash
   cd /path/to/server/database
   chmod +x init service
   ./init
   ```
   The initialization walks you through setting:
   - the database password (the password of the PostgreSQL superuser `postgres`)
   - the encryption master key (used to encrypt/decrypt the password file)
   - the service port (default 5432)
   - the container name (default postgres)
3. **Set file permissions**
   ```bash
   chmod 600 .envrc postgres_password.enc
   chmod +x clear fulldump restore
   ```
> **Note**: The initialization script runs `direnv allow` automatically, so no manual step is needed. If password verification fails during initialization, the script stops immediately.
### Service management commands
Use the unified `service` script to manage the PostgreSQL service:
```bash
cd /path/to/server/database
# Start the service
./service start
# Stop the service
./service stop
# Check the service status
./service status
# Restart the service
./service restart
# Show help
./service help
```
> **Note**: Every command asks for the encryption master key before it runs. If password verification fails, the command stops immediately.
### Connecting to the database
The database listens on the port chosen during initialization (default 5432):
```bash
# Connect with the psql client
psql -h localhost -p 5432 -U postgres
# Enter the database password you set during initialization
```
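Because direnv exports `POSTGRES_PASSWORD` and `POSTGRES_PORT` while you are inside this directory, you can also connect non-interactively. A minimal sketch, assuming a local `psql` client and that `direnv allow` has already been run:
```bash
# Run from inside the project directory so the direnv-managed variables are set
PGPASSWORD="$POSTGRES_PASSWORD" psql -h localhost -p "${POSTGRES_PORT:-5432}" -U postgres -c 'SELECT version();'
```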
## Security Notes
1. **Password management**
   - The database password is stored encrypted with AES-256-CBC, using PBKDF2 key derivation for extra strength (see the decryption sketch after this section)
   - The master key lives only in the operator's memory and is never stored in any file
   - Environment variables are cleared automatically when you leave the working directory (a direnv feature)
   - Every service-management command requires password verification before it runs
2. **File permissions**
   - Sensitive files (.envrc, postgres_password.enc) use strict 600 permissions
   - The data directory inside the container is owned by the PostgreSQL user (mode 700)
3. **Precautions**
   - Keep the master key safe; if it is lost, the database password cannot be recovered
   - Back up the database regularly to avoid data loss
   - Never commit .envrc or postgres_password.enc to version control (both are already listed in .gitignore)
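For reference, `.envrc` recovers the database password by decrypting `postgres_password.enc` with the master key. Running the same command by hand is a quick way to check that you remember the master key correctly; the parameters below mirror the ones used by the `init` script:
```bash
# Prompts for the master key and prints the decrypted database password to stdout
openssl enc -aes-256-cbc -d -pbkdf2 -iter 10000 -in postgres_password.enc
```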
## Custom Configuration
### Changing an existing configuration
To change a configuration that has already been initialized:
1. Stop the service:
   ```bash
   ./service stop
   ```
2. Remove the existing configuration files:
   ```bash
   rm -f .envrc postgres_password.enc
   ```
3. Re-run the initialization script:
   ```bash
   ./init
   ```
4. Start the service again:
   ```bash
   ./service start
   ```
### Adding custom PostgreSQL settings
Put configuration files in the `conf/` directory; when the service starts they are copied into the container's data directory and applied:
```bash
# Example: raise the maximum number of connections by appending to the shipped
# conf/postgresql.conf (for duplicate settings, the last occurrence wins)
echo "max_connections = 200" >> conf/postgresql.conf
# Restart the service to apply the change
./service restart
```
## Backup and Restore
### Running a database backup
Use the `fulldump` script to create backups:
```bash
# Back up all user databases (system databases are excluded); backups older than 15 days are pruned by default
./fulldump
# Back up a single database (backups older than 15 days are pruned by default)
./fulldump <database_name>
# Back up a single database and prune backups older than the given number of days
./fulldump <database_name> <days>
```
Backups are written to the `data/backup/` directory and named `<database_name>_full_<timestamp>`.
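Each backup is a pg_dump directory-format dump: it contains a `toc.dat` table of contents plus one compressed file per table. To inspect what a backup holds, you can list its TOC from inside the container; a minimal sketch, assuming the container created by this setup is running (the backup name is a placeholder):
```bash
# List the table of contents of a directory-format backup
# (./data on the host is mounted at /data inside the container)
docker exec -i "$POSTGRES_CONTAINER_NAME" pg_restore -l /data/backup/<database_name>_full_<timestamp>
```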
### Restoring a database
Use the `restore` script to restore from a backup:
```bash
# Restore the most recent backup into its original database (the target is derived from the backup name)
./restore
# Restore the most recent backup of a specific database (backups matching that name are preferred)
./restore <database_name>
```
The script picks the newest backup matching the target database (when a name is given) and asks for confirmation before restoring, to guard against mistakes.
### Scheduled backups
Add the `fulldump` script to the system crontab to run backups automatically:
```bash
# Edit the crontab
crontab -e
# Add the following line (back up all user databases at 01:00 every day; backups older than 15 days are pruned by default)
0 1 * * * bash /path/to/server/database/fulldump
# Or back up only a specific database, keeping 15 days of backups
0 1 * * * bash /path/to/server/database/fulldump <database_name> 15
```
Note that the retention period is the second argument, so it can only be set together with a database name; without arguments the 15-day default applies.
## Troubleshooting
### Common issues
1. **Password verification fails**
   - Error: "Error: the POSTGRES_PASSWORD environment variable is not set"
   - Fix: make sure you enter the correct encryption master key
2. **direnv is not installed**
   - Error: "Warning: direnv is not installed"
   - Fix: install direnv as described in the Requirements section
     - macOS: `brew install direnv`
     - Linux: `apt-get install direnv`
3. **Container name conflict**
   - Error: "Error response from daemon: Conflict. The container name ... is already in use"
   - Fix: re-run the initialization with a different container name
4. **Port conflict**
   - Error: "Error starting userland proxy: listen tcp4 0.0.0.0:25001: bind: address already in use"
   - Fix: re-run the initialization with a different port; to see what is occupying the port, use the sketch after this list
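A quick way to find out what is holding a port before re-initializing; `lsof` ships with macOS and most Linux distributions, and the port number below is only an example:
```bash
# Show the process listening on the port (example: 25001)
lsof -iTCP:25001 -sTCP:LISTEN
# Or list running containers together with their published ports
docker ps --format '{{.Names}}\t{{.Ports}}'
```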
## Script Reference
|Script|Purpose|Usage|
|---|---|---|
|init|Interactive initialization of the database configuration|./init|
|service|Unified service management (start/stop/status/restart)|./service [command]|
|fulldump|Full database backup|./fulldump [<database> [<days>]]|
|restore|Database restore|./restore [<database>]|
|clear|Wipe all data (the backup directory is kept)|./clear|
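Putting the scripts together, a typical first deployment plus a routine backup/restore cycle looks roughly like this (paths and database names are placeholders):
```bash
cd /path/to/server/database
./init                     # one-time interactive setup (password, master key, port, container name)
./service start            # start the PostgreSQL container
./service status           # verify that it is running
./fulldump                 # back up all user databases
./restore <database_name>  # restore the newest backup of a database when needed
./service stop             # stop the container when required
```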

63
clear Executable file

@@ -0,0 +1,63 @@
#!/bin/bash
# Absolute path of the directory containing this script
SCRIPT_DIR=$(cd "$(dirname "$0")" && pwd)
# Work from the script directory
cd "$SCRIPT_DIR"

# Load environment variables
load_env_variables() {
    if [ -f ".envrc" ]; then
        # Load environment variables through direnv
        if command -v direnv &> /dev/null; then
            eval "$(direnv export bash)"
            # Make sure POSTGRES_PASSWORD was decrypted successfully
            if [ -z "$POSTGRES_PASSWORD" ]; then
                echo "Error: password verification failed, cannot continue"
                return 1
            fi
        else
            echo "Error: direnv is not installed; please install it first"
            return 1
        fi
    else
        echo "Error: .envrc file not found"
        return 1
    fi
    return 0
}

# Load the environment variables
if ! load_env_variables; then
    echo "Failed to load environment variables, exiting"
    exit 1
fi

# Confirmation prompt
read -p "This will stop the postgres service and delete the database and archive files (all backups are kept). Deletion is irreversible. Continue? (YES/no): " confirm
if [ "$confirm" != "YES" ]; then
    echo "Aborting"
    exit 1
fi

container_name=${POSTGRES_CONTAINER_NAME:-postgres}
# Bring the stack down if the container is running
if [ "$(docker ps -q -f name=${container_name})" ]; then
    docker compose down
fi

echo "Removing database files..."
echo "clear ./data/pgdata/ ..."
rm -rf ./data/pgdata/*
echo "Removing archived WAL files..."
echo "clear ./data/archived/ ..."
rm -rf ./data/archived/*
echo "clear ./data/wal_backup/ ..."
rm -rf ./data/wal_backup/*
echo "Backup directory left untouched."

100
conf/pg_hba.conf Normal file

@@ -0,0 +1,100 @@
# PostgreSQL Client Authentication Configuration File
# ===================================================
#
# Refer to the "Client Authentication" section in the PostgreSQL
# documentation for a complete description of this file. A short
# synopsis follows.
#
# This file controls: which hosts are allowed to connect, how clients
# are authenticated, which PostgreSQL user names they can use, which
# databases they can access. Records take one of these forms:
#
# local DATABASE USER METHOD [OPTIONS]
# host DATABASE USER ADDRESS METHOD [OPTIONS]
# hostssl DATABASE USER ADDRESS METHOD [OPTIONS]
# hostnossl DATABASE USER ADDRESS METHOD [OPTIONS]
# hostgssenc DATABASE USER ADDRESS METHOD [OPTIONS]
# hostnogssenc DATABASE USER ADDRESS METHOD [OPTIONS]
#
# (The uppercase items must be replaced by actual values.)
#
# The first field is the connection type:
# - "local" is a Unix-domain socket
# - "host" is a TCP/IP socket (encrypted or not)
# - "hostssl" is a TCP/IP socket that is SSL-encrypted
# - "hostnossl" is a TCP/IP socket that is not SSL-encrypted
# - "hostgssenc" is a TCP/IP socket that is GSSAPI-encrypted
# - "hostnogssenc" is a TCP/IP socket that is not GSSAPI-encrypted
#
# DATABASE can be "all", "sameuser", "samerole", "replication", a
# database name, or a comma-separated list thereof. The "all"
# keyword does not match "replication". Access to replication
# must be enabled in a separate record (see example below).
#
# USER can be "all", a user name, a group name prefixed with "+", or a
# comma-separated list thereof. In both the DATABASE and USER fields
# you can also write a file name prefixed with "@" to include names
# from a separate file.
#
# ADDRESS specifies the set of hosts the record matches. It can be a
# host name, or it is made up of an IP address and a CIDR mask that is
# an integer (between 0 and 32 (IPv4) or 128 (IPv6) inclusive) that
# specifies the number of significant bits in the mask. A host name
# that starts with a dot (.) matches a suffix of the actual host name.
# Alternatively, you can write an IP address and netmask in separate
# columns to specify the set of hosts. Instead of a CIDR-address, you
# can write "samehost" to match any of the server's own IP addresses,
# or "samenet" to match any address in any subnet that the server is
# directly connected to.
#
# METHOD can be "trust", "reject", "md5", "password", "scram-sha-256",
# "gss", "sspi", "ident", "peer", "pam", "ldap", "radius" or "cert".
# Note that "password" sends passwords in clear text; "md5" or
# "scram-sha-256" are preferred since they send encrypted passwords.
#
# OPTIONS are a set of options for the authentication in the format
# NAME=VALUE. The available options depend on the different
# authentication methods -- refer to the "Client Authentication"
# section in the documentation for a list of which options are
# available for which authentication methods.
#
# Database and user names containing spaces, commas, quotes and other
# special characters must be quoted. Quoting one of the keywords
# "all", "sameuser", "samerole" or "replication" makes the name lose
# its special character, and just match a database or username with
# that name.
#
# This file is read on server startup and when the server receives a
# SIGHUP signal. If you edit the file on a running system, you have to
# SIGHUP the server for the changes to take effect, run "pg_ctl reload",
# or execute "SELECT pg_reload_conf()".
#
# Put your actual configuration here
# ----------------------------------
#
# If you want to allow non-local connections, you need to add more
# "host" records. In that case you will also need to make PostgreSQL
# listen on a non-local interface via the listen_addresses
# configuration parameter, or via the -i or -h command line switches.
# CAUTION: Configuring the system for local "trust" authentication
# allows any local user to connect as any PostgreSQL user, including
# the database superuser. If you do not trust all your local users,
# use another authentication method.
# TYPE  DATABASE        USER            ADDRESS                 METHOD
# "local" is for Unix domain socket connections only
local   all             all                                     trust
# IPv4 local connections:
host    all             all             127.0.0.1/32            trust
# IPv6 local connections:
host    all             all             ::1/128                 trust
# Allow replication connections from localhost, by a user with the
# replication privilege.
local   replication     all                                     trust
host    replication     all             127.0.0.1/32            trust
host    replication     all             ::1/128                 trust
host    all             all             all                     scram-sha-256

42
conf/pg_ident.conf Normal file

@@ -0,0 +1,42 @@
# PostgreSQL User Name Maps
# =========================
#
# Refer to the PostgreSQL documentation, chapter "Client
# Authentication" for a complete description. A short synopsis
# follows.
#
# This file controls PostgreSQL user name mapping. It maps external
# user names to their corresponding PostgreSQL user names. Records
# are of the form:
#
# MAPNAME SYSTEM-USERNAME PG-USERNAME
#
# (The uppercase quantities must be replaced by actual values.)
#
# MAPNAME is the (otherwise freely chosen) map name that was used in
# pg_hba.conf. SYSTEM-USERNAME is the detected user name of the
# client. PG-USERNAME is the requested PostgreSQL user name. The
# existence of a record specifies that SYSTEM-USERNAME may connect as
# PG-USERNAME.
#
# If SYSTEM-USERNAME starts with a slash (/), it will be treated as a
# regular expression. Optionally this can contain a capture (a
# parenthesized subexpression). The substring matching the capture
# will be substituted for \1 (backslash-one) if present in
# PG-USERNAME.
#
# Multiple maps may be specified in this file and used by pg_hba.conf.
#
# No map names are defined in the default configuration. If all
# system user names and PostgreSQL user names are the same, you don't
# need anything in this file.
#
# This file is read on server startup and when the postmaster receives
# a SIGHUP signal. If you edit the file on a running system, you have
# to SIGHUP the postmaster for the changes to take effect. You can
# use "pg_ctl reload" to do that.
# Put your actual configuration here
# ----------------------------------
# MAPNAME SYSTEM-USERNAME PG-USERNAME

803
conf/postgresql.conf Normal file

@@ -0,0 +1,803 @@
# -----------------------------
# PostgreSQL configuration file
# -----------------------------
#
# This file consists of lines of the form:
#
# name = value
#
# (The "=" is optional.) Whitespace may be used. Comments are introduced with
# "#" anywhere on a line. The complete list of parameter names and allowed
# values can be found in the PostgreSQL documentation.
#
# The commented-out settings shown in this file represent the default values.
# Re-commenting a setting is NOT sufficient to revert it to the default value;
# you need to reload the server.
#
# This file is read on server startup and when the server receives a SIGHUP
# signal. If you edit the file on a running system, you have to SIGHUP the
# server for the changes to take effect, run "pg_ctl reload", or execute
# "SELECT pg_reload_conf()". Some parameters, which are marked below,
# require a server shutdown and restart to take effect.
#
# Any parameter can also be given as a command-line option to the server, e.g.,
# "postgres -c log_connections=on". Some parameters can be changed at run time
# with the "SET" SQL command.
#
# Memory units: B = bytes Time units: us = microseconds
# kB = kilobytes ms = milliseconds
# MB = megabytes s = seconds
# GB = gigabytes min = minutes
# TB = terabytes h = hours
# d = days
#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------
# The default values of these variables are driven from the -D command-line
# option or PGDATA environment variable, represented here as ConfigDir.
#data_directory = 'ConfigDir' # use data in another directory
# (change requires restart)
#hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file
# (change requires restart)
#ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file
# (change requires restart)
# If external_pid_file is not explicitly set, no extra PID file is written.
#external_pid_file = '' # write an extra PID file
# (change requires restart)
#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------
# - Connection Settings -
listen_addresses = '*'
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for all
# (change requires restart)
port = 5432 # (change requires restart)
max_connections = 100 # (change requires restart)
#superuser_reserved_connections = 3 # (change requires restart)
#unix_socket_directories = '/var/run/postgresql' # comma-separated list of directories
# (change requires restart)
#unix_socket_group = '' # (change requires restart)
#unix_socket_permissions = 0777 # begin with 0 to use octal notation
# (change requires restart)
#bonjour = off # advertise server via Bonjour
# (change requires restart)
#bonjour_name = '' # defaults to the computer name
# (change requires restart)
# - TCP settings -
# see "man tcp" for details
#tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;
# 0 selects the system default
#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;
# 0 selects the system default
#tcp_keepalives_count = 0 # TCP_KEEPCNT;
# 0 selects the system default
#tcp_user_timeout = 0 # TCP_USER_TIMEOUT, in milliseconds;
# 0 selects the system default
#client_connection_check_interval = 0 # time between checks for client
# disconnection while running queries;
# 0 for never
# - Authentication -
#authentication_timeout = 1min # 1s-600s
#password_encryption = scram-sha-256 # scram-sha-256 or md5
#db_user_namespace = off
# GSSAPI using Kerberos
#krb_server_keyfile = 'FILE:${sysconfdir}/krb5.keytab'
#krb_caseins_users = off
# - SSL -
#ssl = off
#ssl_ca_file = ''
#ssl_cert_file = 'server.crt'
#ssl_crl_file = ''
#ssl_crl_dir = ''
#ssl_key_file = 'server.key'
#ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers
#ssl_prefer_server_ciphers = on
#ssl_ecdh_curve = 'prime256v1'
#ssl_min_protocol_version = 'TLSv1.2'
#ssl_max_protocol_version = ''
#ssl_dh_params_file = ''
#ssl_passphrase_command = ''
#ssl_passphrase_command_supports_reload = off
#------------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#------------------------------------------------------------------------------
# - Memory -
shared_buffers = 128MB # min 128kB
# (change requires restart)
#huge_pages = try # on, off, or try
# (change requires restart)
#huge_page_size = 0 # zero for system default
# (change requires restart)
#temp_buffers = 8MB # min 800kB
#max_prepared_transactions = 0 # zero disables the feature
# (change requires restart)
# Caution: it is not advisable to set max_prepared_transactions nonzero unless
# you actively intend to use prepared transactions.
#work_mem = 4MB # min 64kB
#hash_mem_multiplier = 1.0 # 1-1000.0 multiplier on hash table work_mem
#maintenance_work_mem = 64MB # min 1MB
#autovacuum_work_mem = -1 # min 1MB, or -1 to use maintenance_work_mem
#logical_decoding_work_mem = 64MB # min 64kB
#max_stack_depth = 2MB # min 100kB
#shared_memory_type = mmap # the default is the first option
# supported by the operating system:
# mmap
# sysv
# windows
# (change requires restart)
dynamic_shared_memory_type = posix # the default is the first option
# supported by the operating system:
# posix
# sysv
# windows
# mmap
# (change requires restart)
#min_dynamic_shared_memory = 0MB # (change requires restart)
# - Disk -
#temp_file_limit = -1 # limits per-process temp file space
# in kilobytes, or -1 for no limit
# - Kernel Resources -
#max_files_per_process = 1000 # min 64
# (change requires restart)
# - Cost-Based Vacuum Delay -
#vacuum_cost_delay = 0 # 0-100 milliseconds (0 disables)
#vacuum_cost_page_hit = 1 # 0-10000 credits
#vacuum_cost_page_miss = 2 # 0-10000 credits
#vacuum_cost_page_dirty = 20 # 0-10000 credits
#vacuum_cost_limit = 200 # 1-10000 credits
# - Background Writer -
#bgwriter_delay = 200ms # 10-10000ms between rounds
#bgwriter_lru_maxpages = 100 # max buffers written/round, 0 disables
#bgwriter_lru_multiplier = 2.0 # 0-10.0 multiplier on buffers scanned/round
#bgwriter_flush_after = 512kB # measured in pages, 0 disables
# - Asynchronous Behavior -
#backend_flush_after = 0 # measured in pages, 0 disables
#effective_io_concurrency = 1 # 1-1000; 0 disables prefetching
#maintenance_io_concurrency = 10 # 1-1000; 0 disables prefetching
#max_worker_processes = 8 # (change requires restart)
#max_parallel_workers_per_gather = 2 # taken from max_parallel_workers
#max_parallel_maintenance_workers = 2 # taken from max_parallel_workers
#max_parallel_workers = 8 # maximum number of max_worker_processes that
# can be used in parallel operations
#parallel_leader_participation = on
#old_snapshot_threshold = -1 # 1min-60d; -1 disables; 0 is immediate
# (change requires restart)
#------------------------------------------------------------------------------
# WRITE-AHEAD LOG
#------------------------------------------------------------------------------
# - Settings -
wal_level = replica
#wal_level = replica # minimal, replica, or logical
# (change requires restart)
#fsync = on # flush data to disk for crash safety
# (turning this off can cause
# unrecoverable data corruption)
#synchronous_commit = on # synchronization level;
# off, local, remote_write, remote_apply, or on
#wal_sync_method = fsync # the default is the first option
# supported by the operating system:
# open_datasync
# fdatasync (default on Linux and FreeBSD)
# fsync
# fsync_writethrough
# open_sync
#full_page_writes = on # recover from partial page writes
#wal_log_hints = off # also do full page writes of non-critical updates
# (change requires restart)
#wal_compression = off # enable compression of full-page writes
#wal_init_zero = on # zero-fill new WAL files
#wal_recycle = on # recycle WAL files
#wal_buffers = -1 # min 32kB, -1 sets based on shared_buffers
# (change requires restart)
#wal_writer_delay = 200ms # 1-10000 milliseconds
#wal_writer_flush_after = 1MB # measured in pages, 0 disables
#wal_skip_threshold = 2MB
#commit_delay = 0 # range 0-100000, in microseconds
#commit_siblings = 5 # range 1-1000
# - Checkpoints -
#checkpoint_timeout = 5min # range 30s-1d
#checkpoint_completion_target = 0.9 # checkpoint target duration, 0.0 - 1.0
#checkpoint_flush_after = 256kB # measured in pages, 0 disables
#checkpoint_warning = 30s # 0 disables
max_wal_size = 1GB
min_wal_size = 80MB
# - Archiving -
# archive_mode = on
# archive_command = 'test ! -f /data/archived/%f && cp %p /data/archived/%f'
# archive_timeout = 10
#archive_mode = off # enables archiving; off, on, or always
# (change requires restart)
#archive_command = '' # command to use to archive a logfile segment
# placeholders: %p = path of file to archive
# %f = file name only
# e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
#archive_timeout = 0 # force a logfile segment switch after this
# number of seconds; 0 disables
# - Archive Recovery -
# These are only used in recovery mode.
#restore_command = '' # command to use to restore an archived logfile segment
# placeholders: %p = path of file to restore
# %f = file name only
# e.g. 'cp /mnt/server/archivedir/%f %p'
#archive_cleanup_command = '' # command to execute at every restartpoint
#recovery_end_command = '' # command to execute at completion of recovery
# - Recovery Target -
# Set these only when performing a targeted recovery.
#recovery_target = '' # 'immediate' to end recovery as soon as a
# consistent state is reached
# (change requires restart)
#recovery_target_name = '' # the named restore point to which recovery will proceed
# (change requires restart)
#recovery_target_time = '' # the time stamp up to which recovery will proceed
# (change requires restart)
#recovery_target_xid = '' # the transaction ID up to which recovery will proceed
# (change requires restart)
#recovery_target_lsn = '' # the WAL LSN up to which recovery will proceed
# (change requires restart)
#recovery_target_inclusive = on # Specifies whether to stop:
# just after the specified recovery target (on)
# just before the recovery target (off)
# (change requires restart)
#recovery_target_timeline = 'latest' # 'current', 'latest', or timeline ID
# (change requires restart)
#recovery_target_action = 'pause' # 'pause', 'promote', 'shutdown'
# (change requires restart)
#------------------------------------------------------------------------------
# REPLICATION
#------------------------------------------------------------------------------
# - Sending Servers -
# Set these on the primary and on any standby that will send replication data.
#max_wal_senders = 10 # max number of walsender processes
# (change requires restart)
#max_replication_slots = 10 # max number of replication slots
# (change requires restart)
#wal_keep_size = 0 # in megabytes; 0 disables
#max_slot_wal_keep_size = -1 # in megabytes; -1 disables
#wal_sender_timeout = 60s # in milliseconds; 0 disables
#track_commit_timestamp = off # collect timestamp of transaction commit
# (change requires restart)
# - Primary Server -
# These settings are ignored on a standby server.
#synchronous_standby_names = '' # standby servers that provide sync rep
# method to choose sync standbys, number of sync standbys,
# and comma-separated list of application_name
# from standby(s); '*' = all
#vacuum_defer_cleanup_age = 0 # number of xacts by which cleanup is delayed
# - Standby Servers -
# These settings are ignored on a primary server.
#primary_conninfo = '' # connection string to sending server
#primary_slot_name = '' # replication slot on sending server
#promote_trigger_file = '' # file name whose presence ends recovery
hot_standby = on # "off" disallows queries during recovery
#hot_standby = on # "off" disallows queries during recovery
# (change requires restart)
#max_standby_archive_delay = 30s # max delay before canceling queries
# when reading WAL from archive;
# -1 allows indefinite delay
#max_standby_streaming_delay = 30s # max delay before canceling queries
# when reading streaming WAL;
# -1 allows indefinite delay
#wal_receiver_create_temp_slot = off # create temp slot if primary_slot_name
# is not set
#wal_receiver_status_interval = 10s # send replies at least this often
# 0 disables
#hot_standby_feedback = off # send info from standby to prevent
# query conflicts
#wal_receiver_timeout = 60s # time that receiver waits for
# communication from primary
# in milliseconds; 0 disables
#wal_retrieve_retry_interval = 5s # time to wait before retrying to
# retrieve WAL after a failed attempt
#recovery_min_apply_delay = 0 # minimum delay for applying changes during recovery
# - Subscribers -
# These settings are ignored on a publisher.
#max_logical_replication_workers = 4 # taken from max_worker_processes
# (change requires restart)
#max_sync_workers_per_subscription = 2 # taken from max_logical_replication_workers
#------------------------------------------------------------------------------
# QUERY TUNING
#------------------------------------------------------------------------------
# - Planner Method Configuration -
#enable_async_append = on
#enable_bitmapscan = on
#enable_gathermerge = on
#enable_hashagg = on
#enable_hashjoin = on
#enable_incremental_sort = on
#enable_indexscan = on
#enable_indexonlyscan = on
#enable_material = on
#enable_memoize = on
#enable_mergejoin = on
#enable_nestloop = on
#enable_parallel_append = on
#enable_parallel_hash = on
#enable_partition_pruning = on
#enable_partitionwise_join = off
#enable_partitionwise_aggregate = off
#enable_seqscan = on
#enable_sort = on
#enable_tidscan = on
# - Planner Cost Constants -
#seq_page_cost = 1.0 # measured on an arbitrary scale
#random_page_cost = 4.0 # same scale as above
#cpu_tuple_cost = 0.01 # same scale as above
#cpu_index_tuple_cost = 0.005 # same scale as above
#cpu_operator_cost = 0.0025 # same scale as above
#parallel_setup_cost = 1000.0 # same scale as above
#parallel_tuple_cost = 0.1 # same scale as above
#min_parallel_table_scan_size = 8MB
#min_parallel_index_scan_size = 512kB
#effective_cache_size = 4GB
#jit_above_cost = 100000 # perform JIT compilation if available
# and query more expensive than this;
# -1 disables
#jit_inline_above_cost = 500000 # inline small functions if query is
# more expensive than this; -1 disables
#jit_optimize_above_cost = 500000 # use expensive JIT optimizations if
# query is more expensive than this;
# -1 disables
# - Genetic Query Optimizer -
#geqo = on
#geqo_threshold = 12
#geqo_effort = 5 # range 1-10
#geqo_pool_size = 0 # selects default based on effort
#geqo_generations = 0 # selects default based on effort
#geqo_selection_bias = 2.0 # range 1.5-2.0
#geqo_seed = 0.0 # range 0.0-1.0
# - Other Planner Options -
#default_statistics_target = 100 # range 1-10000
#constraint_exclusion = partition # on, off, or partition
#cursor_tuple_fraction = 0.1 # range 0.0-1.0
#from_collapse_limit = 8
#jit = on # allow JIT compilation
#join_collapse_limit = 8 # 1 disables collapsing of explicit
# JOIN clauses
#plan_cache_mode = auto # auto, force_generic_plan or
# force_custom_plan
#------------------------------------------------------------------------------
# REPORTING AND LOGGING
#------------------------------------------------------------------------------
# - Where to Log -
#log_destination = 'stderr' # Valid values are combinations of
# stderr, csvlog, syslog, and eventlog,
# depending on platform. csvlog
# requires logging_collector to be on.
# This is used when logging to stderr:
#logging_collector = off # Enable capturing of stderr and csvlog
# into log files. Required to be on for
# csvlogs.
# (change requires restart)
# These are only used if logging_collector is on:
#log_directory = 'log' # directory where log files are written,
# can be absolute or relative to PGDATA
#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file name pattern,
# can include strftime() escapes
#log_file_mode = 0600 # creation mode for log files,
# begin with 0 to use octal notation
#log_rotation_age = 1d # Automatic rotation of logfiles will
# happen after that time. 0 disables.
#log_rotation_size = 10MB # Automatic rotation of logfiles will
# happen after that much log output.
# 0 disables.
#log_truncate_on_rotation = off # If on, an existing log file with the
# same name as the new log file will be
# truncated rather than appended to.
# But such truncation only occurs on
# time-driven rotation, not on restarts
# or size-driven rotation. Default is
# off, meaning append to existing files
# in all cases.
# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
#syslog_sequence_numbers = on
#syslog_split_messages = on
# This is only relevant when logging to eventlog (Windows):
# (change requires restart)
#event_source = 'PostgreSQL'
# - When to Log -
#log_min_messages = warning # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic
#log_min_error_statement = error # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic (effectively off)
#log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements
# and their durations, > 0 logs only
# statements running at least this number
# of milliseconds
#log_min_duration_sample = -1 # -1 is disabled, 0 logs a sample of statements
# and their durations, > 0 logs only a sample of
# statements running at least this number
# of milliseconds;
# sample fraction is determined by log_statement_sample_rate
#log_statement_sample_rate = 1.0 # fraction of logged statements exceeding
# log_min_duration_sample to be logged;
# 1.0 logs all such statements, 0.0 never logs
#log_transaction_sample_rate = 0.0 # fraction of transactions whose statements
# are logged regardless of their duration; 1.0 logs all
# statements from all transactions, 0.0 never logs
# - What to Log -
#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
#log_autovacuum_min_duration = -1 # log autovacuum activity;
# -1 disables, 0 logs all actions and
# their durations, > 0 logs only
# actions running at least this number
# of milliseconds.
#log_checkpoints = off
#log_connections = off
#log_disconnections = off
#log_duration = off
#log_error_verbosity = default # terse, default, or verbose messages
#log_hostname = off
#log_line_prefix = '%m [%p] ' # special values:
# %a = application name
# %u = user name
# %d = database name
# %r = remote host and port
# %h = remote host
# %b = backend type
# %p = process ID
# %P = process ID of parallel group leader
# %t = timestamp without milliseconds
# %m = timestamp with milliseconds
# %n = timestamp with milliseconds (as a Unix epoch)
# %Q = query ID (0 if none or not computed)
# %i = command tag
# %e = SQL state
# %c = session ID
# %l = session line number
# %s = session start timestamp
# %v = virtual transaction ID
# %x = transaction ID (0 if none)
# %q = stop here in non-session
# processes
# %% = '%'
# e.g. '<%u%%%d> '
#log_lock_waits = off # log lock waits >= deadlock_timeout
#log_recovery_conflict_waits = off # log standby recovery conflict waits
# >= deadlock_timeout
#log_parameter_max_length = -1 # when logging statements, limit logged
# bind-parameter values to N bytes;
# -1 means print in full, 0 disables
#log_parameter_max_length_on_error = 0 # when logging an error, limit logged
# bind-parameter values to N bytes;
# -1 means print in full, 0 disables
#log_statement = 'none' # none, ddl, mod, all
#log_replication_commands = off
#log_temp_files = -1 # log temporary files equal or larger
# than the specified size in kilobytes;
# -1 disables, 0 logs all temp files
log_timezone = 'Asia/Shanghai'
#------------------------------------------------------------------------------
# PROCESS TITLE
#------------------------------------------------------------------------------
#cluster_name = '' # added to process titles if nonempty
# (change requires restart)
#update_process_title = on
#------------------------------------------------------------------------------
# STATISTICS
#------------------------------------------------------------------------------
# - Query and Index Statistics Collector -
#track_activities = on
#track_activity_query_size = 1024 # (change requires restart)
#track_counts = on
#track_io_timing = off
#track_wal_io_timing = off
#track_functions = none # none, pl, all
#stats_temp_directory = 'pg_stat_tmp'
# - Monitoring -
#compute_query_id = auto
#log_statement_stats = off
#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#------------------------------------------------------------------------------
# AUTOVACUUM
#------------------------------------------------------------------------------
#autovacuum = on # Enable autovacuum subprocess? 'on'
# requires track_counts to also be on.
#autovacuum_max_workers = 3 # max number of autovacuum subprocesses
# (change requires restart)
#autovacuum_naptime = 1min # time between autovacuum runs
#autovacuum_vacuum_threshold = 50 # min number of row updates before
# vacuum
#autovacuum_vacuum_insert_threshold = 1000 # min number of row inserts
# before vacuum; -1 disables insert
# vacuums
#autovacuum_analyze_threshold = 50 # min number of row updates before
# analyze
#autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before vacuum
#autovacuum_vacuum_insert_scale_factor = 0.2 # fraction of inserts over table
# size before insert vacuum
#autovacuum_analyze_scale_factor = 0.1 # fraction of table size before analyze
#autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum
# (change requires restart)
#autovacuum_multixact_freeze_max_age = 400000000 # maximum multixact age
# before forced vacuum
# (change requires restart)
#autovacuum_vacuum_cost_delay = 2ms # default vacuum cost delay for
# autovacuum, in milliseconds;
# -1 means use vacuum_cost_delay
#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for
# autovacuum, -1 means use
# vacuum_cost_limit
#------------------------------------------------------------------------------
# CLIENT CONNECTION DEFAULTS
#------------------------------------------------------------------------------
# - Statement Behavior -
#client_min_messages = notice # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# log
# notice
# warning
# error
#search_path = '"$user", public' # schema names
#row_security = on
#default_table_access_method = 'heap'
#default_tablespace = '' # a tablespace name, '' uses the default
#default_toast_compression = 'pglz' # 'pglz' or 'lz4'
#temp_tablespaces = '' # a list of tablespace names, '' uses
# only default tablespace
#check_function_bodies = on
#default_transaction_isolation = 'read committed'
#default_transaction_read_only = off
#default_transaction_deferrable = off
#session_replication_role = 'origin'
#statement_timeout = 0 # in milliseconds, 0 is disabled
#lock_timeout = 0 # in milliseconds, 0 is disabled
#idle_in_transaction_session_timeout = 0 # in milliseconds, 0 is disabled
#idle_session_timeout = 0 # in milliseconds, 0 is disabled
#vacuum_freeze_table_age = 150000000
#vacuum_freeze_min_age = 50000000
#vacuum_failsafe_age = 1600000000
#vacuum_multixact_freeze_table_age = 150000000
#vacuum_multixact_freeze_min_age = 5000000
#vacuum_multixact_failsafe_age = 1600000000
#bytea_output = 'hex' # hex, escape
#xmlbinary = 'base64'
#xmloption = 'content'
#gin_pending_list_limit = 4MB
# - Locale and Formatting -
datestyle = 'iso, mdy'
#intervalstyle = 'postgres'
timezone = 'Asia/Shanghai'
#timezone_abbreviations = 'Default' # Select the set of available time zone
# abbreviations. Currently, there are
# Default
# Australia (historical usage)
# India
# You can create your own file in
# share/timezonesets/.
#extra_float_digits = 1 # min -15, max 3; any value >0 actually
# selects precise output mode
#client_encoding = sql_ascii # actually, defaults to database
# encoding
# These settings are initialized by initdb, but they can be changed.
lc_messages = 'en_US.utf8' # locale for system error message
# strings
lc_monetary = 'en_US.utf8' # locale for monetary formatting
lc_numeric = 'en_US.utf8' # locale for number formatting
lc_time = 'en_US.utf8' # locale for time formatting
# default configuration for text search
default_text_search_config = 'pg_catalog.english'
# - Shared Library Preloading -
#local_preload_libraries = ''
#session_preload_libraries = ''
#shared_preload_libraries = '' # (change requires restart)
#jit_provider = 'llvmjit' # JIT library to use
# - Other Defaults -
#dynamic_library_path = '$libdir'
#extension_destdir = '' # prepend path when loading extensions
# and shared objects (added by Debian)
#gin_fuzzy_search_limit = 0
#------------------------------------------------------------------------------
# LOCK MANAGEMENT
#------------------------------------------------------------------------------
#deadlock_timeout = 1s
#max_locks_per_transaction = 64 # min 10
# (change requires restart)
#max_pred_locks_per_transaction = 64 # min 10
# (change requires restart)
#max_pred_locks_per_relation = -2 # negative values mean
# (max_pred_locks_per_transaction
# / -max_pred_locks_per_relation) - 1
#max_pred_locks_per_page = 2 # min 0
#------------------------------------------------------------------------------
# VERSION AND PLATFORM COMPATIBILITY
#------------------------------------------------------------------------------
# - Previous PostgreSQL Versions -
#array_nulls = on
#backslash_quote = safe_encoding # on, off, or safe_encoding
#escape_string_warning = on
#lo_compat_privileges = off
#quote_all_identifiers = off
#standard_conforming_strings = on
#synchronize_seqscans = on
# - Other Platforms and Clients -
#transform_null_equals = off
#------------------------------------------------------------------------------
# ERROR HANDLING
#------------------------------------------------------------------------------
#exit_on_error = off # terminate session on any error?
#restart_after_crash = on # reinitialize after backend crash?
#data_sync_retry = off # retry or panic on failure to fsync
# data?
# (change requires restart)
#recovery_init_sync_method = fsync # fsync, syncfs (Linux 5.8+)
#------------------------------------------------------------------------------
# CONFIG FILE INCLUDES
#------------------------------------------------------------------------------
# These options allow settings to be loaded from files other than the
# default postgresql.conf. Note that these are directives, not variable
# assignments, so they can usefully be given more than once.
#include_dir = '...' # include files ending in '.conf' from
# a directory, e.g., 'conf.d'
#include_if_exists = '...' # include file only if it exists
#include = '...' # include file
#------------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#------------------------------------------------------------------------------
# Add settings for extensions here

8
data/.gitignore vendored Normal file

@@ -0,0 +1,8 @@
pgdata/*
archived/*
backup/first/*
backup/*
!.gitignore
!pgdata
!backup/first
!archived

13
docker-compose.yml Normal file

@@ -0,0 +1,13 @@
services:
postgres:
image: postgres:17
container_name: ${POSTGRES_CONTAINER_NAME:-postgres}
restart: always
environment:
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-123456}
PGDATA: /data/pgdata
volumes:
- ./data:/data
ports:
- ${POSTGRES_PORT:-5432}:5432
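The `POSTGRES_CONTAINER_NAME`, `POSTGRES_PASSWORD` and `POSTGRES_PORT` variables referenced above are exported by `.envrc` through direnv. To see the values Compose will actually use, a quick check, assuming you are inside the project directory with direnv active:
```bash
# Render the compose file with the environment variables substituted
docker compose config
```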

133
fulldump Executable file

@@ -0,0 +1,133 @@
#!/bin/bash
# Absolute path of the directory containing this script
SCRIPT_DIR=$(cd "$(dirname "$0")" && pwd)
# Work from the script directory
cd "$SCRIPT_DIR"

# Load environment variables
load_env_variables() {
    if [ -f ".envrc" ]; then
        # Load environment variables through direnv
        if command -v direnv &> /dev/null; then
            eval "$(direnv export bash)"
            # Make sure POSTGRES_PASSWORD was decrypted successfully
            if [ -z "$POSTGRES_PASSWORD" ]; then
                echo "Error: password verification failed, cannot continue"
                return 1
            fi
        else
            echo "Error: direnv is not installed; please install it first"
            return 1
        fi
    else
        echo "Error: .envrc file not found"
        return 1
    fi
    return 0
}

# Load the environment variables
if ! load_env_variables; then
    echo "Failed to load environment variables, exiting"
    exit 1
fi

# Variables
container_name=${POSTGRES_CONTAINER_NAME:-postgres}
pg_user=${POSTGRES_USER:-postgres}
backup_dir="./data/backup/"
# Absolute backup path on the host
HOST_BACKUP_DIR="$(pwd)/${backup_dir}"
# Prune backups older than this many days
days=${2:-15}

# Back up a single database
backup_database() {
    local db_name=$1
    local datetime=$(date +"%Y%m%d_%H%M%S")
    local filename="${db_name}_full_${datetime}"
    local backup_path="${backup_dir}${filename}"
    local host_backup_path="${HOST_BACKUP_DIR}${filename}"
    echo "Backing up database $db_name..."
    # Create the new backup (directory format, 4 parallel jobs)
    docker exec -i "$container_name" pg_dump -U "$pg_user" -Fd "$db_name" -f "$backup_path" -j 4
    # Check whether the backup succeeded
    if [ $? -eq 0 ]; then
        echo "Backup finished!"
        echo "Backup path on the host: ${host_backup_path}"
        echo "Backup directory name: ${filename}"
        echo "Note: a PostgreSQL directory-format backup contains several files; toc.dat is the table of contents describing the backup."
        # Show the backup size on the host
        du -sh "${host_backup_path}" 2>/dev/null || echo "Could not determine backup size"
        # Show the number of files in the backup directory
        find "${host_backup_path}" -type f | wc -l | xargs echo "Files in the backup directory:"
        echo ""
        return 0
    else
        echo "Backup failed!"
        return 1
    fi
}

# Main logic
if [ -n "$1" ]; then
    # A database name was given: back up only that database
    pg_database=$1
    echo "Backing up the specified database: $pg_database"
    # Prune old backups of this database
    echo "Pruning backups of $pg_database older than $days days..."
    find $backup_dir -maxdepth 1 -name "${pg_database}*" -mtime +$days -exec sh -c 'echo "$(date): $1" >> cleardump.log; rm -rf "$1"' sh {} \;
    # Back up the specified database
    backup_database "$pg_database"
else
    # No database name given: back up every user database (system databases excluded)
    echo "No database name given; backing up all user databases"
    # List user databases (excluding the postgres, template0 and template1 system databases)
    echo "Fetching the database list..."
    databases=$(docker exec -i "$container_name" psql -U "$pg_user" -t -c "SELECT datname FROM pg_database WHERE datistemplate = false AND datname NOT IN ('postgres', 'template0', 'template1');")
    # Check whether the list was retrieved
    if [ -z "$databases" ]; then
        echo "Warning: no user databases found; backing up the default postgres database"
        # Prune old backups of the postgres database
        echo "Pruning backups of postgres older than $days days..."
        find $backup_dir -maxdepth 1 -name "postgres*" -mtime +$days -exec sh -c 'echo "$(date): $1" >> cleardump.log; rm -rf "$1"' sh {} \;
        # Back up the postgres database
        backup_database "postgres"
    else
        echo "Found the following user databases:"
        echo "$databases"
        echo ""
        # Prune old backups of all databases
        echo "Pruning backups older than $days days..."
        find $backup_dir -maxdepth 1 -name "*_full_*" -mtime +$days -exec sh -c 'echo "$(date): $1" >> cleardump.log; rm -rf "$1"' sh {} \;
        # Back up every user database
        for db in $databases; do
            # Strip whitespace and newlines
            db=$(echo "$db" | tr -d ' \n\r')
            if [ -n "$db" ]; then
                backup_database "$db"
            fi
        done
    fi
fi

# Note: this only reflects the exit status of the last backup command
if [ $? -eq 0 ]; then
    echo "All requested database backups finished!"
    exit 0
else
    echo "Errors occurred during the backup; please check the logs"
    exit 1
fi

171
init Executable file

@@ -0,0 +1,171 @@
#!/bin/bash
# PostgreSQL initialization script
# Initializes (or re-initializes) the PostgreSQL configuration environment

# Absolute path of the directory containing this script
SCRIPT_DIR=$(cd "$(dirname "$0")" && pwd)
# Work from the script directory
cd "$SCRIPT_DIR"

# Check for required commands
check_commands() {
    if ! command -v openssl &> /dev/null; then
        echo "Error: openssl not found; please install it first"
        exit 1
    fi
    if ! command -v direnv &> /dev/null; then
        echo "Warning: direnv not found; installing it is recommended for the best experience"
    fi
}

# Create the required directory structure
create_directories() {
    echo "Creating the required directories..."
    mkdir -p ./data/pgdata
    mkdir -p ./data/archived
    mkdir -p ./data/backup/first
    mkdir -p ./conf
    echo "Directories created"
}

# Initialize the configuration files
initialize_files() {
    # Step 1: ask for the database password
    read -s -p "Enter the PostgreSQL database password: " postgres_password
    echo
    read -s -p "Enter the password again to confirm: " postgres_password_confirm
    echo
    # The two entries must match
    if [ "$postgres_password" != "$postgres_password_confirm" ]; then
        echo "Error: the two passwords do not match"
        return 1
    fi
    # Optional password strength check
    if [ ${#postgres_password} -lt 8 ]; then
        echo "Warning: the password is shorter than 8 characters; a stronger password is recommended"
        read -p "Use this password anyway? (y/n): " continue
        if [ "$continue" != "y" ]; then
            return 1
        fi
    fi
    # Step 2: create the encrypted password file
    echo "Creating the encrypted password file..."
    echo "Choose a password for the encrypted file (the master key):"
    echo -n "$postgres_password" | openssl enc -aes-256-cbc -salt -pbkdf2 -iter 10000 -out postgres_password.enc
    # Check whether encryption succeeded (immediately after the openssl call)
    if [ $? -ne 0 ]; then
        echo "Error: failed to create the encrypted file"
        return 1
    fi
    # Restrict permissions on the encrypted file
    chmod 600 postgres_password.enc
    echo "Encrypted file created; permissions set to 600"
    # Step 3: ask for the published port
    default_port="5432"
    read -p "Enter the PostgreSQL published port [$default_port]: " postgres_port
    # Use the default if the user just presses Enter
    if [ -z "$postgres_port" ]; then
        postgres_port="$default_port"
    fi
    echo "Published port set to: $postgres_port"
    # Step 4: ask for the container name
    default_container_name="postgres"
    read -p "Enter the PostgreSQL container name [$default_container_name]: " postgres_container_name
    # Use the default if the user just presses Enter
    if [ -z "$postgres_container_name" ]; then
        postgres_container_name="$default_container_name"
    fi
    echo "Container name set to: $postgres_container_name"
    # Create the .envrc file
    echo "Creating the .envrc file..."
    cat > .envrc << EOF
# PostgreSQL environment variables
export POSTGRES_PASSWORD=\$(openssl enc -aes-256-cbc -d -pbkdf2 -iter 10000 -in postgres_password.enc)
export POSTGRES_PORT=$postgres_port
export POSTGRES_CONTAINER_NAME=$postgres_container_name
EOF
    # Restrict permissions on .envrc
    chmod 600 .envrc
    echo ".envrc created; permissions set to 600"
    # Run direnv allow automatically and report the result
    if command -v direnv &> /dev/null; then
        echo ""
        echo "📝 Initialization complete! Configuring environment variables..."
        echo "Running direnv allow..."
        if direnv allow > /dev/null 2>&1; then
            echo "✅ direnv allow succeeded! Environment variables are enabled"
        else
            echo "❌ direnv allow failed; please run 'direnv allow' manually to enable the environment variables"
        fi
    else
        echo ""
        echo "Initialization complete! Installing direnv is recommended for a better experience:"
        echo "  macOS: brew install direnv"
        echo "  Linux: apt-get install direnv or yum install direnv"
    fi
    return 0
}

# Main entry point
main() {
    echo "PostgreSQL environment initialization script"
    echo "==================================="
    # Check for required commands
    check_commands
    # Check whether a configuration already exists
    if [ -f "postgres_password.enc" ] && [ -f ".envrc" ]; then
        echo ""
        echo "Existing postgres_password.enc and .envrc files detected"
        read -p "Re-initialize? This will overwrite the current configuration! (y/n): " reinitialize
        if [ "$reinitialize" != "y" ]; then
            echo "Initialization cancelled"
            exit 0
        fi
        # Back up the existing files
        backup_suffix="_bak_$(date +%Y%m%d%H%M%S)"
        echo "Backing up the existing files..."
        cp postgres_password.enc "postgres_password.enc$backup_suffix" 2>/dev/null
        cp .envrc ".envrc$backup_suffix" 2>/dev/null
        echo "Backup done"
    fi
    # Create the directory structure
    create_directories
    # Initialize the configuration files
    while ! initialize_files; do
        echo "Please try entering the passwords again..."
    done
    echo ""
    echo "==================================="
    echo "Initialization succeeded!"
    echo "Usage:"
    echo "1. Use './service start' to start the service"
    echo "2. Use './service stop' to stop the service"
    echo "3. Use './service status' to check the service status"
    echo "4. Use './service restart' to restart the service"
}

# Run the main function
main

158
restore Executable file

@@ -0,0 +1,158 @@
#!/bin/bash
# Absolute path of the directory containing this script
SCRIPT_DIR=$(cd "$(dirname "$0")" && pwd)
# Work from the script directory
cd "$SCRIPT_DIR"

# Load environment variables
load_env_variables() {
    if [ -f ".envrc" ]; then
        # Load environment variables through direnv
        if command -v direnv &> /dev/null; then
            eval "$(direnv export bash)"
            # Make sure POSTGRES_PASSWORD was decrypted successfully
            if [ -z "$POSTGRES_PASSWORD" ]; then
                echo "Error: password verification failed, cannot continue"
                return 1
            fi
        else
            echo "Error: direnv is not installed; please install it first"
            return 1
        fi
    else
        echo "Error: .envrc file not found"
        return 1
    fi
    return 0
}

# Load the environment variables
if ! load_env_variables; then
    echo "Failed to load environment variables, exiting"
    exit 1
fi

# Variables
container_name=${POSTGRES_CONTAINER_NAME:-postgres}
pg_user=${POSTGRES_USER:-postgres}
backup_dir="./data/backup/"
# Absolute backup path on the host
HOST_BACKUP_DIR="$(pwd)/${backup_dir}"

# Find the newest backup
if [ -n "$1" ]; then
    # A database name was given: prefer the newest backup of that database
    target_db=$1
    echo "Looking for the newest backup of database $target_db..."
    latest_backup=$(find "$backup_dir" -maxdepth 1 -type d -name "${target_db}_full_*" | sort -r | head -n 1)
    # Fall back to the database name extracted from the argument itself
    if [ -z "$latest_backup" ]; then
        echo "Warning: no backup found for database $target_db; trying the database name extracted from the backup file name"
        extracted_db_name=$(echo "$1" | sed -E 's/^([^_]+)_full_.+$|^([^_]+)$/\1\2/')
        latest_backup=$(find "$backup_dir" -maxdepth 1 -type d -name "${extracted_db_name}_full_*" | sort -r | head -n 1)
    fi
else
    # No database name given: use the newest backup overall
    echo "No database name given; looking for the newest backup of any database..."
    latest_backup=$(find "$backup_dir" -maxdepth 1 -type d -name "*_full_*" | sort -r | head -n 1)
fi

# Make sure a backup was found
if [ -z "$latest_backup" ]; then
    echo "Error: no backup found under $backup_dir"
    exit 1
fi

# Backup name without the path
backup_name=$(basename "$latest_backup")
HOST_BACKUP_PATH="${HOST_BACKUP_DIR}${backup_name}"
# Extract the database name from the backup name (expected format: databaseName_full_timestamp)
extracted_db_name=$(echo "$backup_name" | sed -E 's/^([^_]+)_full_.+$/\1/')
# Target database: the command-line argument, or the name extracted from the backup
pg_database=${1:-$extracted_db_name}

# Warn if the backup does not match the requested database
if [ -n "$1" ] && [ "$1" != "$extracted_db_name" ]; then
    echo "Note: the backup found is $backup_name, which contains data for database $extracted_db_name"
    echo "You asked to restore into $1; the data from $extracted_db_name will be restored into database $1"
    read -p "Continue? (YES/no): " confirm_match
    if [ "$confirm_match" != "YES" ]; then
        echo "Cancelled by the user, exiting"
        exit 1
    fi
fi

# Show the backup that was found
echo "Newest backup found:"
echo "Backup directory name: $backup_name"
echo "Backup path on the host: $HOST_BACKUP_PATH"

# Check for toc.dat to confirm this is a valid PostgreSQL directory-format backup
if [ ! -f "$latest_backup/toc.dat" ]; then
    echo "Error: the backup directory has no toc.dat file; it may not be a valid PostgreSQL directory-format backup"
    exit 1
fi

# Show the restore target
echo -e "\nRestore target:"
echo "Target database: $pg_database"
echo "Target container: $container_name"

# Confirmation prompt
echo -e "\nWarning: this will restore data into database $pg_database and may overwrite existing data!"
read -p "Proceed with the restore? (YES/no): " confirm
if [ "$confirm" != "YES" ]; then
    echo "Restore cancelled by the user, exiting"
    exit 1
fi

# Make sure the database container is running
if ! docker ps | grep -q "$container_name"; then
    echo "Error: the database container $container_name is not running; please start the service first"
    echo "You can start it with: ./service start"
    exit 1
fi

# Create the target database if it does not exist
if ! docker exec -i "$container_name" psql -U "$pg_user" -lqt | cut -d \| -f 1 | grep -qw "$pg_database"; then
    echo "Warning: target database $pg_database does not exist; it will be created"
    if ! docker exec -i "$container_name" createdb -U "$pg_user" "$pg_database"; then
        echo "Error: failed to create database $pg_database"
        exit 1
    fi
fi

# Run the restore
echo -e "\nRestoring database $pg_database from backup $backup_name..."
echo "The restore may take a while, please wait..."
# Restore with pg_restore (directory format, 4 parallel jobs)
if docker exec -i "$container_name" pg_restore \
    -U "$pg_user" \
    -d "$pg_database" \
    -Fd \
    -j 4 \
    "${backup_dir}${backup_name}"; then
    echo -e "\nDatabase restore succeeded!"
    echo "Restore details:"
    echo "- Backup source: $HOST_BACKUP_PATH"
    echo "- Target database: $pg_database"
    echo "- Target container: $container_name"
    # Optionally show the number of user tables to verify the restore
    table_count=$(docker exec -i "$container_name" psql -U "$pg_user" -d "$pg_database" -c "SELECT COUNT(*) FROM pg_tables WHERE schemaname NOT IN ('pg_catalog', 'information_schema');" -t -A)
    echo "- Tables restored: $table_count"
    exit 0
else
    echo -e "\nDatabase restore failed!"
    echo "Please check the error messages and try again"
    exit 1
fi

178
service Executable file

@@ -0,0 +1,178 @@
#!/bin/bash
# PostgreSQL service management script
# Usage: ./service start|stop|status|restart

# Absolute path of the directory containing this script
SCRIPT_DIR=$(cd "$(dirname "$0")" && pwd)
# Work from the script directory
cd "$SCRIPT_DIR"

# Container name from the environment, defaulting to postgres
container_name=${POSTGRES_CONTAINER_NAME:-postgres}

# Check for and load the environment variables
load_env_variables() {
    if command -v direnv &> /dev/null; then
        echo "Loading environment variables with direnv..."
        # Try to load the environment variables through direnv
        eval "$(direnv export bash)"
    else
        echo "Warning: direnv is not installed; using the current environment variables"
    fi
    # Make sure POSTGRES_PASSWORD is set
    if [ -z "$POSTGRES_PASSWORD" ]; then
        echo "Error: the POSTGRES_PASSWORD environment variable is not set"
        echo "Make sure direnv is installed and 'direnv allow' has been run"
        echo "Hint: the master key may have been entered incorrectly; please try again with the correct key"
        return 1
    fi
    # Refresh the container name now that the environment is loaded
    container_name=${POSTGRES_CONTAINER_NAME:-postgres}
    return 0
}

# Start the service
start_service() {
    echo "Starting the PostgreSQL service..."
    # Load the environment variables
    if ! load_env_variables; then
        exit 1
    fi
    # Variables
    conf_dir="./conf"
    conf_files=($(ls $conf_dir 2>/dev/null || echo ""))
    target_dir="./data/pgdata"
    # Create the directory structure
    mkdir -p ./data/pgdata
    mkdir -p ./data/archived
    mkdir -p ./data/backup/first
    # Start the docker container
    docker compose up -d
    # Wait for the container to initialize, then copy the configuration files
    if [ ${#conf_files[@]} -gt 0 ]; then
        echo "Waiting for the container to initialize before copying configuration files..."
        for file in "${conf_files[@]}"
        do
            while [ ! -f "$target_dir/$file" ]; do
                echo "Waiting for container '$container_name' to initialize..."
                sleep 5
            done
            cp "$conf_dir/$file" "$target_dir/$file"
            echo "Copied $file from $conf_dir to $target_dir"
        done
    fi
    echo "PostgreSQL service started!"
}

# Stop the service
stop_service() {
    echo "Stopping the PostgreSQL service..."
    # The environment (including the password check) must load before continuing
    if ! load_env_variables; then
        echo "Error: password verification failed, cannot continue"
        exit 1
    fi
    # Stop with docker compose down
    docker compose down
    # Fall back to stopping the container by name if docker compose failed
    if [ $? -ne 0 ]; then
        echo "Trying to stop the container directly..."
        docker stop "$container_name" > /dev/null 2>&1
        docker rm "$container_name" > /dev/null 2>&1
    fi
    echo "PostgreSQL service stopped."
}

# Check the service status
status_service() {
    echo "Checking the PostgreSQL service status..."
    # The environment (including the password check) must load before continuing
    if ! load_env_variables; then
        echo "Error: password verification failed, cannot continue"
        exit 1
    fi
    # Re-read the container name to be sure the latest environment value is used
    updated_container_name=${POSTGRES_CONTAINER_NAME:-postgres}
    # Check whether the container is running
    if docker ps | grep -q "$updated_container_name"; then
        echo "The PostgreSQL service is running."
        echo "Address: localhost:${POSTGRES_PORT:-25001}"
        echo "Container name: $updated_container_name"
        return 0
    else
        echo "The PostgreSQL service is not running."
        return 1
    fi
}

# Restart the service
restart_service() {
    echo "Restarting the PostgreSQL service..."
    # Stop the service first (this verifies the password)
    stop_service
    # If the stop succeeded, verify the password again and start the service
    if [ $? -eq 0 ]; then
        echo "Starting the PostgreSQL service..."
        start_service
    else
        echo "Error: the service failed to stop; cannot restart"
        exit 1
    fi
}

# Show help
show_help() {
    echo "Usage: ./service [command]"
    echo "Commands:"
    echo "  start    start the PostgreSQL service"
    echo "  stop     stop the PostgreSQL service"
    echo "  status   check the PostgreSQL service status"
    echo "  restart  restart the PostgreSQL service"
    echo "  help     show this help message"
}

# A command argument is required
if [ $# -eq 0 ]; then
    echo "Error: please specify a command"
    show_help
    exit 1
fi

# Dispatch the command
case "$1" in
    start)
        start_service
        ;;
    stop)
        stop_service
        ;;
    status)
        status_service
        ;;
    restart)
        restart_service
        ;;
    help)
        show_help
        ;;
    *)
        echo "Error: unknown command '$1'"
        show_help
        exit 1
        ;;
esac