Found 202 posts by "cby"
2024-06-16
Deploying Redis in Sentinel Mode
The weakness of plain master-slave mode is that it is not highly available: once the master goes down, Redis can no longer accept writes. Sentinel mode exists to solve this. A "sentinel" is a watchdog process that monitors the health of the Redis replication group. Its main characteristics:

- Sentinel mode is built on top of master-slave replication; with a single Redis node a sentinel is meaningless.
- When the master fails, Sentinel promotes one of the slaves to master and rewrites the configuration files; the other slaves' replication target (slaveof/replicaof) is pointed at the new master.
- When the old master comes back, it does not become master again; it rejoins as a slave and syncs data from the new master.
- Sentinel is itself a process and can crash, so several sentinels are started together as a sentinel cluster.
- When multiple sentinels are configured, they also monitor each other automatically.
- If the master-slave setup uses passwords, Sentinel writes the credentials into its own configuration file as well.
- One sentinel (or sentinel cluster) can manage several master-slave groups, and several sentinels can monitor the same Redis instance.
- Sentinels should not run on the same machines as Redis; otherwise losing one server takes out both.

How it works:

- Each sentinel sends a PING once per second to every master, slave, and other sentinel it knows about.
- If an instance does not return a valid PING reply within down-after-milliseconds, that sentinel marks it as subjectively down.
- If a master is marked subjectively down, every sentinel watching it re-checks that state once per second.
- When enough sentinels (at least the quorum configured in sentinel.conf) confirm the state within the configured time window, the master is marked objectively down.
- Normally each sentinel sends INFO to all known masters and slaves every 10 seconds; once a master is objectively down, the INFO frequency towards that master's slaves rises from every 10 seconds to once per second.
- If not enough sentinels agree the master is down, the objectively-down state is cleared; if the master answers PING again, the subjectively-down state is cleared as well.

(A few commands for inspecting Sentinel's own state are sketched at the end of this post.)

Environment

IP              Role
192.168.1.21    master, sentinel
192.168.1.22    slave1, sentinel
192.168.1.23    slave2, sentinel

Install the build environment

```
# ubuntu
apt install make gcc

# centos
yum install make gcc
```

Install Redis

```
# Check the available Redis versions
http://download.redis.io/releases/

# Download Redis
wget http://download.redis.io/releases/redis-7.2.5.tar.gz

# Unpack
tar xvf redis-7.2.5.tar.gz
cd redis-7.2.5/

# Build and install
make && make install
```

Configure the service

```
# Redis service
cat << EOF > /usr/lib/systemd/system/redis.service
[Unit]
Description=Redis persistent key-value database
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/redis-server /usr/local/redis/redis.conf --supervised systemd
ExecStop=/usr/local/redis/redis-shutdown
Type=forking
User=redis
Group=redis
RuntimeDirectory=redis
RuntimeDirectoryMode=0755
LimitNOFILE=65536
PrivateTmp=true

[Install]
WantedBy=multi-user.target
EOF
```

Configure the shutdown script

```
mkdir /usr/local/redis
vim /usr/local/redis/redis-shutdown

#!/bin/bash
#
# Wrapper to close properly redis and sentinel
test x"$REDIS_DEBUG" != x && set -x

REDIS_CLI=/usr/local/bin/redis-cli

# Retrieve service name
SERVICE_NAME="$1"
if [ -z "$SERVICE_NAME" ]; then
   SERVICE_NAME=redis
fi

# Get the proper config file based on service name
CONFIG_FILE="/usr/local/redis/$SERVICE_NAME.conf"

# Use awk to retrieve host, port from config file
HOST=`awk '/^[[:blank:]]*bind/ { print $2 }' $CONFIG_FILE | tail -n1`
PORT=`awk '/^[[:blank:]]*port/ { print $2 }' $CONFIG_FILE | tail -n1`
PASS=`awk '/^[[:blank:]]*requirepass/ { print $2 }' $CONFIG_FILE | tail -n1`
SOCK=`awk '/^[[:blank:]]*unixsocket\s/ { print $2 }' $CONFIG_FILE | tail -n1`

# Just in case, use default host, port
HOST=${HOST:-127.0.0.1}
if [ "$SERVICE_NAME" = redis ]; then
    PORT=${PORT:-6379}
else
    PORT=${PORT:-26739}
fi

# Setup additional parameters
# e.g password-protected redis instances
[ -z "$PASS" ] || ADDITIONAL_PARAMS="-a $PASS"

# shutdown the service properly
if [ -e "$SOCK" ] ; then
    $REDIS_CLI -s $SOCK $ADDITIONAL_PARAMS shutdown
else
    $REDIS_CLI -h $HOST -p $PORT $ADDITIONAL_PARAMS shutdown
fi
```

Grant permissions and prepare directories

```
chmod +x /usr/local/redis/redis-shutdown
useradd -s /sbin/nologin redis
cp /root/redis-7.2.5/redis.conf /usr/local/redis/ && chown -R redis:redis /usr/local/redis
mkdir -p /usr/local/redis/data && chown -R redis:redis /usr/local/redis/data
mkdir -p /usr/local/redis/sentinel && chown -R redis:redis /usr/local/redis/sentinel
```

Edit the configuration

```
vim /usr/local/redis/redis.conf

# master node configuration
bind 0.0.0.0 -::1                     # listen addresses, separate multiple IPs with spaces
daemonize yes                         # run in the background
logfile "/usr/local/redis/redis.log"  # log path
dir /usr/local/redis/data             # directory for database backup files
masterauth 123123                     # password slaves use to connect to the master; may be omitted on the master
requirepass 123123                    # password required to connect to this node; may be omitted on slaves
appendonly yes                        # create appendonly.aof in /usr/local/redis/data and append every write request to it
```

```
vim /usr/local/redis/redis.conf

# slave1 node configuration
bind 0.0.0.0 -::1                     # listen addresses, separate multiple IPs with spaces
daemonize yes                         # run in the background
logfile "/usr/local/redis/redis.log"  # log path
dir /usr/local/redis/data             # directory for database backup files
replicaof 192.168.1.21 6379           # replicaof makes this node follow another node; the followed node is the master, i.e. this points the slave at the master
masterauth 123123                     # password slaves use to connect to the master; may be omitted on the master
requirepass 123123                    # password required to connect to this node; may be omitted on slaves
appendonly yes                        # create appendonly.aof in /usr/local/redis/data and append every write request to it
```

```
vim /usr/local/redis/redis.conf

# slave2 node configuration (identical to slave1)
bind 0.0.0.0 -::1
daemonize yes
logfile "/usr/local/redis/redis.log"
dir /usr/local/redis/data
replicaof 192.168.1.21 6379
masterauth 123123
requirepass 123123
appendonly yes
```

```
# Run on all three nodes
cat >/usr/local/redis/sentinel.conf<<EOF
port 26379
daemonize yes
logfile "/usr/local/redis/sentinel.log"

# sentinel working directory
dir "/usr/local/redis/sentinel"

# at least 2 sentinels must agree before the master is considered failed;
# the recommended value is n/2+1, where n is the number of sentinels
# sentinel monitor <master-name> <ip> <port> <quorum>
sentinel monitor mymaster 192.168.1.21 6379 2
sentinel auth-pass mymaster 123123

# time after which a master is considered subjectively down, default 30s
sentinel down-after-milliseconds mymaster 30000
EOF
```

Tune the Linux kernel parameter

```
# takes effect immediately
sysctl -w vm.overcommit_memory=1

# persistent
echo 'vm.overcommit_memory=1' >> /etc/sysctl.conf && sysctl -p

### possible values: 0, 1, 2
# 0: the kernel checks whether enough free memory is available; if so the allocation succeeds, otherwise it fails and the error is returned to the process
# 1: the kernel allows allocating all physical memory, regardless of the current memory state
# 2: the kernel allows allocating more memory than the total of physical memory and swap
```

Start Redis

```
systemctl daemon-reload
systemctl enable redis
systemctl stop redis
systemctl start redis
systemctl status redis

# start sentinel
/usr/local/bin/redis-sentinel /usr/local/redis/sentinel.conf

root@cby:~# netstat -anpt|grep 26379
tcp        0      0 0.0.0.0:26379           0.0.0.0:*               LISTEN      9156/redis-sentinel
tcp6       0      0 :::26379                :::*                    LISTEN      9156/redis-sentinel
root@cby:~#
```

Check the cluster

```
redis-cli -h 192.168.1.21 -p 26379 -a 123123

192.168.1.21:26379> info sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_tilt_since_seconds:-1
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
sentinel_simulate_failure_flags:0
master0:name=mymaster,status=ok,address=192.168.1.21:6379,slaves=2,sentinels=3
192.168.1.21:26379>
```

Simulate a failure

```
# stop the master
systemctl stop redis

# check replication state on slave1
redis-cli -h 192.168.1.22 -a 123123 info replication
role:slave
master_host:192.168.1.21
master_port:6379
master_link_status:down            # <-- link to the old master is down
master_last_io_seconds_ago:-1
master_sync_in_progress:0
slave_read_repl_offset:4567834
slave_repl_offset:4567834
master_link_down_since_seconds:0
slave_priority:100
slave_read_only:1
replica_announced:1
connected_slaves:0
master_failover_state:no-failover
master_replid:449440daec10a3eb742b13e690de4adb26b20a07
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:4567834
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:3515695
repl_backlog_histlen:1052140

# check again once the failover has completed
redis-cli -h 192.168.1.22 -a 123123 info replication
role:master                        # <-- slave1 has been promoted
connected_slaves:1
slave0:ip=192.168.1.23,port=6379,state=online,offset=4574293,lag=1
master_failover_state:no-failover
master_replid:70e80f38d396bd5e649b30bd2669b3ae024f7e25
master_replid2:449440daec10a3eb742b13e690de4adb26b20a07
master_repl_offset:4574571
second_repl_offset:4567835
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:3515695
repl_backlog_histlen:1058877

# test read and write on the new master
redis-cli -h 192.168.1.22 -a 123123
192.168.1.22:6379> set k2 v2
OK
192.168.1.22:6379> get k2
"v2"
192.168.1.22:6379>

# bring the old master back
systemctl start redis

redis-cli -h 192.168.1.22 -a 123123 info replication
role:master
connected_slaves:2                 # <-- the old master has rejoined as a slave
slave0:ip=192.168.1.23,port=6379,state=online,offset=4620778,lag=1
slave1:ip=192.168.1.21,port=6379,state=online,offset=4620940,lag=0
master_failover_state:no-failover
master_replid:70e80f38d396bd5e649b30bd2669b3ae024f7e25
master_replid2:449440daec10a3eb742b13e690de4adb26b20a07
master_repl_offset:4620940
second_repl_offset:4567835
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:3556575
repl_backlog_histlen:1064366

# test read and write again
redis-cli -h 192.168.1.22 -a 123123
192.168.1.22:6379> set k3 v3
OK
192.168.1.22:6379> get k3
"v3"
192.168.1.22:6379>
```
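After the deployment above, Sentinel's own view of the replication group can be queried directly. This is a minimal sketch, assuming the master name mymaster, the Sentinel port 26379, and the password 123123 used in this post; the subcommands themselves are standard Sentinel commands.

```
# Which node does Sentinel currently consider the master?
redis-cli -h 192.168.1.21 -p 26379 -a 123123 sentinel get-master-addr-by-name mymaster

# Full state Sentinel keeps for the monitored master (flags, num-slaves, quorum, ...)
redis-cli -h 192.168.1.21 -p 26379 -a 123123 sentinel master mymaster

# Replicas and other sentinels that were discovered automatically
redis-cli -h 192.168.1.21 -p 26379 -a 123123 sentinel replicas mymaster
redis-cli -h 192.168.1.21 -p 26379 -a 123123 sentinel sentinels mymaster

# Are enough sentinels reachable to authorize a failover?
redis-cli -h 192.168.1.21 -p 26379 -a 123123 sentinel ckquorum mymaster
```

If get-master-addr-by-name reports 192.168.1.22 after the failure simulation, the promotion worked as described above.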
2024-06-16 · 87 reads · 0 comments · 0 likes
2024-06-16
Deploying Redis in Master-Slave Mode
Master-slave mode is the simplest of the three Redis cluster modes: one master database and one or more slave databases. Replication in this mode has the following characteristics:

- The master handles both reads and writes; whenever a write changes data, the change is automatically synced to the slaves.
- Slaves are generally read-only and only receive data synced from the master.
- A master can have multiple slaves, but a slave can follow only one master.
- A slave going down does not affect reads on other slaves or reads and writes on the master; after a restart it re-syncs its data from the master.
- If the master goes down, reads on the slaves still work, but Redis stops serving writes until the master is restarted.
- If the master goes down, no slave is automatically promoted to master.

How it works: when a slave starts, it sends a SYNC command to the master. The master then saves a snapshot in the background (RDB persistence) and buffers the write commands received while the snapshot is being taken, and sends both the snapshot file and the buffered commands to the slave. The slave loads the snapshot and replays the buffered commands. After this initial sync, every write command the master receives is forwarded to the slaves, keeping master and slaves consistent. (A short runtime verification sketch follows at the end of this post.)

Environment

IP              Role
192.168.1.21    master
192.168.1.22    slave1
192.168.1.23    slave2

Install the build environment

```
# ubuntu
apt install make gcc

# centos
yum install make gcc
```

Install Redis

```
# Check the available Redis versions
http://download.redis.io/releases/

# Download Redis
wget http://download.redis.io/releases/redis-7.2.5.tar.gz

# Unpack
tar xvf redis-7.2.5.tar.gz
cd redis-7.2.5/

# Build and install
make && make install
```

Configure the service

```
cat << EOF > /usr/lib/systemd/system/redis.service
[Unit]
Description=Redis persistent key-value database
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/redis-server /usr/local/redis/redis.conf --supervised systemd
ExecStop=/usr/local/redis/redis-shutdown
Type=forking
User=redis
Group=redis
RuntimeDirectory=redis
RuntimeDirectoryMode=0755
LimitNOFILE=65536
PrivateTmp=true

[Install]
WantedBy=multi-user.target
EOF
```

Configure the shutdown script

```
mkdir /usr/local/redis
vim /usr/local/redis/redis-shutdown

#!/bin/bash
#
# Wrapper to close properly redis and sentinel
test x"$REDIS_DEBUG" != x && set -x

REDIS_CLI=/usr/local/bin/redis-cli

# Retrieve service name
SERVICE_NAME="$1"
if [ -z "$SERVICE_NAME" ]; then
   SERVICE_NAME=redis
fi

# Get the proper config file based on service name
CONFIG_FILE="/usr/local/redis/$SERVICE_NAME.conf"

# Use awk to retrieve host, port from config file
HOST=`awk '/^[[:blank:]]*bind/ { print $2 }' $CONFIG_FILE | tail -n1`
PORT=`awk '/^[[:blank:]]*port/ { print $2 }' $CONFIG_FILE | tail -n1`
PASS=`awk '/^[[:blank:]]*requirepass/ { print $2 }' $CONFIG_FILE | tail -n1`
SOCK=`awk '/^[[:blank:]]*unixsocket\s/ { print $2 }' $CONFIG_FILE | tail -n1`

# Just in case, use default host, port
HOST=${HOST:-127.0.0.1}
if [ "$SERVICE_NAME" = redis ]; then
    PORT=${PORT:-6379}
else
    PORT=${PORT:-26739}
fi

# Setup additional parameters
# e.g password-protected redis instances
[ -z "$PASS" ] || ADDITIONAL_PARAMS="-a $PASS"

# shutdown the service properly
if [ -e "$SOCK" ] ; then
    $REDIS_CLI -s $SOCK $ADDITIONAL_PARAMS shutdown
else
    $REDIS_CLI -h $HOST -p $PORT $ADDITIONAL_PARAMS shutdown
fi
```

Grant permissions and prepare directories

```
chmod +x /usr/local/redis/redis-shutdown
useradd -s /sbin/nologin redis
cp /root/redis-7.2.5/redis.conf /usr/local/redis/ && chown -R redis:redis /usr/local/redis
mkdir -p /usr/local/redis/data && chown -R redis:redis /usr/local/redis/data
```

Edit the configuration

```
vim /usr/local/redis/redis.conf

# master node configuration
bind 0.0.0.0 -::1                     # listen addresses, separate multiple IPs with spaces
daemonize yes                         # run in the background
logfile "/usr/local/redis/redis.log"  # log path
dir /usr/local/redis/data             # directory for database backup files
masterauth 123123                     # password slaves use to connect to the master; may be omitted on the master
requirepass 123123                    # password required to connect to this node; may be omitted on slaves
appendonly yes                        # create appendonly.aof in /usr/local/redis/data and append every write request to it
```

```
vim /usr/local/redis/redis.conf

# slave1 node configuration
bind 0.0.0.0 -::1                     # listen addresses, separate multiple IPs with spaces
daemonize yes                         # run in the background
logfile "/usr/local/redis/redis.log"  # log path
dir /usr/local/redis/data             # directory for database backup files
replicaof 192.168.1.21 6379           # replicaof makes this node follow another node; the followed node is the master, i.e. this points the slave at the master
masterauth 123123                     # password slaves use to connect to the master; may be omitted on the master
requirepass 123123                    # password required to connect to this node; may be omitted on slaves
appendonly yes                        # create appendonly.aof in /usr/local/redis/data and append every write request to it
```

```
vim /usr/local/redis/redis.conf

# slave2 node configuration (identical to slave1)
bind 0.0.0.0 -::1
daemonize yes
logfile "/usr/local/redis/redis.log"
dir /usr/local/redis/data
replicaof 192.168.1.21 6379
masterauth 123123
requirepass 123123
appendonly yes
```

Tune the Linux kernel parameter

```
# takes effect immediately
sysctl -w vm.overcommit_memory=1

# persistent
echo 'vm.overcommit_memory=1' >> /etc/sysctl.conf && sysctl -p

### possible values: 0, 1, 2
# 0: the kernel checks whether enough free memory is available; if so the allocation succeeds, otherwise it fails and the error is returned to the process
# 1: the kernel allows allocating all physical memory, regardless of the current memory state
# 2: the kernel allows allocating more memory than the total of physical memory and swap
```

Start Redis

```
systemctl daemon-reload
systemctl enable redis
systemctl stop redis
systemctl start redis
systemctl status redis
```

Check the cluster

```
# interactive
redis-cli -h 192.168.1.21 -a 123123
192.168.1.21:6379> info replication
role:master
connected_slaves:2
slave0:ip=192.168.1.22,port=6379,state=online,offset=14,lag=0
slave1:ip=192.168.1.23,port=6379,state=online,offset=14,lag=0
master_failover_state:no-failover
master_replid:449440daec10a3eb742b13e690de4adb26b20a07
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:14
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:14
192.168.1.21:6379>

# interactive, authenticating after connecting
redis-cli -h 192.168.1.21
192.168.1.21:6379> info replication
NOAUTH Authentication required.
192.168.1.21:6379> auth 123123
OK
192.168.1.21:6379> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=192.168.1.22,port=6379,state=online,offset=56,lag=0
slave1:ip=192.168.1.23,port=6379,state=online,offset=56,lag=0
master_failover_state:no-failover
master_replid:449440daec10a3eb742b13e690de4adb26b20a07
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:56
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:56
192.168.1.21:6379>

# non-interactive
redis-cli -h 192.168.1.21 -a 123123 info replication
```

Benchmark

```
root@cby:~# redis-benchmark -t set,get -n 100000 -a 123123 -h 192.168.1.21
====== SET ======
  100000 requests completed in 0.98 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1
  host configuration "save": 3600 1 300 100 60 10000
  host configuration "appendonly": yes
  multi-thread: no

Latency by percentile distribution:
0.000% <= 0.103 milliseconds (cumulative count 8)
50.000% <= 0.343 milliseconds (cumulative count 50319)
75.000% <= 0.399 milliseconds (cumulative count 75102)
87.500% <= 0.431 milliseconds (cumulative count 88783)
93.750% <= 0.447 milliseconds (cumulative count 93936)
96.875% <= 0.463 milliseconds (cumulative count 96878)
98.438% <= 0.487 milliseconds (cumulative count 98770)
99.219% <= 0.503 milliseconds (cumulative count 99227)
99.609% <= 0.615 milliseconds (cumulative count 99619)
99.805% <= 0.815 milliseconds (cumulative count 99807)
99.902% <= 1.071 milliseconds (cumulative count 99906)
99.951% <= 1.175 milliseconds (cumulative count 99954)
99.976% <= 1.247 milliseconds (cumulative count 99976)
99.988% <= 1.295 milliseconds (cumulative count 99989)
99.994% <= 1.319 milliseconds (cumulative count 99995)
99.997% <= 1.327 milliseconds (cumulative count 99997)
99.998% <= 1.335 milliseconds (cumulative count 99999)
99.999% <= 1.343 milliseconds (cumulative count 100000)
100.000% <= 1.343 milliseconds (cumulative count 100000)

Cumulative distribution of latencies:
0.008% <= 0.103 milliseconds (cumulative count 8)
1.338% <= 0.207 milliseconds (cumulative count 1338)
35.037% <= 0.303 milliseconds (cumulative count 35037)
78.556% <= 0.407 milliseconds (cumulative count 78556)
99.227% <= 0.503 milliseconds (cumulative count 99227)
99.604% <= 0.607 milliseconds (cumulative count 99604)
99.736% <= 0.703 milliseconds (cumulative count 99736)
99.804% <= 0.807 milliseconds (cumulative count 99804)
99.842% <= 0.903 milliseconds (cumulative count 99842)
99.884% <= 1.007 milliseconds (cumulative count 99884)
99.922% <= 1.103 milliseconds (cumulative count 99922)
99.966% <= 1.207 milliseconds (cumulative count 99966)
99.991% <= 1.303 milliseconds (cumulative count 99991)
100.000% <= 1.407 milliseconds (cumulative count 100000)

Summary:
  throughput summary: 102249.49 requests per second
  latency summary (msec):
          avg       min       p50       p95       p99       max
        0.343     0.096     0.343     0.455     0.495     1.343

====== GET ======
  100000 requests completed in 0.81 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1
  host configuration "save": 3600 1 300 100 60 10000
  host configuration "appendonly": yes
  multi-thread: no

Latency by percentile distribution:
0.000% <= 0.063 milliseconds (cumulative count 9)
50.000% <= 0.263 milliseconds (cumulative count 52284)
75.000% <= 0.319 milliseconds (cumulative count 77215)
87.500% <= 0.351 milliseconds (cumulative count 90174)
93.750% <= 0.367 milliseconds (cumulative count 95109)
96.875% <= 0.383 milliseconds (cumulative count 97068)
98.438% <= 0.407 milliseconds (cumulative count 98532)
99.219% <= 0.487 milliseconds (cumulative count 99222)
99.609% <= 0.711 milliseconds (cumulative count 99619)
99.805% <= 0.919 milliseconds (cumulative count 99806)
99.902% <= 1.127 milliseconds (cumulative count 99908)
99.951% <= 1.231 milliseconds (cumulative count 99953)
99.976% <= 1.343 milliseconds (cumulative count 99976)
99.988% <= 1.391 milliseconds (cumulative count 99989)
99.994% <= 1.415 milliseconds (cumulative count 99995)
99.997% <= 1.423 milliseconds (cumulative count 99997)
99.998% <= 1.431 milliseconds (cumulative count 99999)
99.999% <= 1.439 milliseconds (cumulative count 100000)
100.000% <= 1.439 milliseconds (cumulative count 100000)

Cumulative distribution of latencies:
0.034% <= 0.103 milliseconds (cumulative count 34)
24.823% <= 0.207 milliseconds (cumulative count 24823)
70.395% <= 0.303 milliseconds (cumulative count 70395)
98.532% <= 0.407 milliseconds (cumulative count 98532)
99.251% <= 0.503 milliseconds (cumulative count 99251)
99.458% <= 0.607 milliseconds (cumulative count 99458)
99.608% <= 0.703 milliseconds (cumulative count 99608)
99.707% <= 0.807 milliseconds (cumulative count 99707)
99.795% <= 0.903 milliseconds (cumulative count 99795)
99.855% <= 1.007 milliseconds (cumulative count 99855)
99.895% <= 1.103 milliseconds (cumulative count 99895)
99.945% <= 1.207 milliseconds (cumulative count 99945)
99.966% <= 1.303 milliseconds (cumulative count 99966)
99.993% <= 1.407 milliseconds (cumulative count 99993)
100.000% <= 1.503 milliseconds (cumulative count 100000)

Summary:
  throughput summary: 122850.12 requests per second
  latency summary (msec):
          avg       min       p50       p95       p99       max
        0.265     0.056     0.263     0.367     0.431     1.439
root@cby:~#
```
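Replication can also be attached and verified at runtime rather than only through redis.conf. This is a minimal sketch, assuming the node addresses and the 123123 password used in this post; REPLICAOF, CONFIG SET, and INFO replication are standard Redis commands, and the runtime change is not persisted until the config file is updated (or CONFIG REWRITE is run on the slave).

```
# Point a running node at the master without editing redis.conf
redis-cli -h 192.168.1.23 -a 123123 config set masterauth 123123
redis-cli -h 192.168.1.23 -a 123123 replicaof 192.168.1.21 6379

# Confirm the link is up on the slave ...
redis-cli -h 192.168.1.23 -a 123123 info replication | grep -E 'role|master_link_status'

# ... and that the master sees the replica
redis-cli -h 192.168.1.21 -a 123123 info replication | grep -E 'role|connected_slaves'

# Write on the master, read on the slave (slaves are read-only by default)
redis-cli -h 192.168.1.21 -a 123123 set k1 v1
redis-cli -h 192.168.1.23 -a 123123 get k1
```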
2024-06-16 · 76 reads · 0 comments · 0 likes
2024-06-16
Installing a Single Redis Instance
Redis (Remote Dictionary Server) is an open-source, in-memory database released under the BSD license. It provides a high-performance key-value store and is commonly used for caching, message queues, session storage, and similar scenarios.

- Very high performance: Redis handles hundreds of thousands of reads and writes per second, which makes it a good fit for high-concurrency workloads that need fast responses, such as caching, session management, and leaderboards.
- Rich data types: beyond plain key-value strings, Redis offers lists, sets, hashes, sorted sets, and more, giving developers flexible data operations for many different scenarios.
- Atomic operations: every Redis operation is atomic; it either fully executes or does not execute at all, which is essential for data consistency and integrity, especially for transactions under high concurrency.
- Persistence: in-memory data can be saved to disk and restored after a restart, so data is not lost because of a system failure.
- Publish/subscribe: Redis ships with Pub/Sub, allowing clients to communicate through messages, which makes it usable as a message queue or for realtime data delivery (a tiny Pub/Sub sketch is included at the end of this post).
- Single-threaded model: although Redis executes commands on a single thread, its efficient event-driven design keeps latency low and throughput high, and it also simplifies concurrency control.
- Master-slave replication: slaves can back up data or take over read traffic, improving availability and scalability.
- Wide range of use cases: caching, session storage, leaderboards, realtime analytics, geospatial indexes, and more.
- Community support: an active developer community provides extensive documentation, tutorials, and third-party libraries.
- Cross-platform: Redis runs on Linux, macOS, and Windows, so it can be deployed in many different stacks.

Install the build environment

```
# ubuntu
apt install make gcc

# centos
yum install make gcc
```

Install Redis

```
# Check the available Redis versions
http://download.redis.io/releases/

# Download Redis
wget http://download.redis.io/releases/redis-7.2.5.tar.gz

# Unpack
tar xvf redis-7.2.5.tar.gz
cd redis-7.2.5/

# Build and install
make && make install
```

Configure the service

```
cat << EOF > /usr/lib/systemd/system/redis.service
[Unit]
Description=Redis persistent key-value database
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/redis-server /usr/local/redis/redis.conf --supervised systemd
ExecStop=/usr/local/redis/redis-shutdown
Type=forking
User=redis
Group=redis
RuntimeDirectory=redis
RuntimeDirectoryMode=0755
LimitNOFILE=65536
PrivateTmp=true

[Install]
WantedBy=multi-user.target
EOF
```

Configure the shutdown script

```
mkdir /usr/local/redis
vim /usr/local/redis/redis-shutdown

#!/bin/bash
#
# Wrapper to close properly redis and sentinel
test x"$REDIS_DEBUG" != x && set -x

REDIS_CLI=/usr/local/bin/redis-cli

# Retrieve service name
SERVICE_NAME="$1"
if [ -z "$SERVICE_NAME" ]; then
   SERVICE_NAME=redis
fi

# Get the proper config file based on service name
CONFIG_FILE="/usr/local/redis/$SERVICE_NAME.conf"

# Use awk to retrieve host, port from config file
HOST=`awk '/^[[:blank:]]*bind/ { print $2 }' $CONFIG_FILE | tail -n1`
PORT=`awk '/^[[:blank:]]*port/ { print $2 }' $CONFIG_FILE | tail -n1`
PASS=`awk '/^[[:blank:]]*requirepass/ { print $2 }' $CONFIG_FILE | tail -n1`
SOCK=`awk '/^[[:blank:]]*unixsocket\s/ { print $2 }' $CONFIG_FILE | tail -n1`

# Just in case, use default host, port
HOST=${HOST:-127.0.0.1}
if [ "$SERVICE_NAME" = redis ]; then
    PORT=${PORT:-6379}
else
    PORT=${PORT:-26739}
fi

# Setup additional parameters
# e.g password-protected redis instances
[ -z "$PASS" ] || ADDITIONAL_PARAMS="-a $PASS"

# shutdown the service properly
if [ -e "$SOCK" ] ; then
    $REDIS_CLI -s $SOCK $ADDITIONAL_PARAMS shutdown
else
    $REDIS_CLI -h $HOST -p $PORT $ADDITIONAL_PARAMS shutdown
fi
```

Grant permissions and prepare directories

```
chmod +x /usr/local/redis/redis-shutdown
useradd -s /sbin/nologin redis
cp /root/redis-7.2.5/redis.conf /usr/local/redis/ && chown -R redis:redis /usr/local/redis
mkdir -p /usr/local/redis/data && chown -R redis:redis /usr/local/redis/data
```

Edit the configuration

```
bind 0.0.0.0 -::1                     # listen addresses, separate multiple IPs with spaces
daemonize yes                         # run in the background
logfile "/usr/local/redis/redis.log"  # log path
dir /usr/local/redis/data             # directory for database backup files
requirepass 123123                    # password required to connect
appendonly yes                        # create appendonly.aof in /usr/local/redis/data and append every write request to it
```

Tune the Linux kernel parameter

```
# takes effect immediately
sysctl -w vm.overcommit_memory=1

# persistent
echo 'vm.overcommit_memory=1' >> /etc/sysctl.conf && sysctl -p

### possible values: 0, 1, 2
# 0: the kernel checks whether enough free memory is available; if so the allocation succeeds, otherwise it fails and the error is returned to the process
# 1: the kernel allows allocating all physical memory, regardless of the current memory state
# 2: the kernel allows allocating more memory than the total of physical memory and swap
```

Start Redis

```
systemctl daemon-reload
systemctl enable redis
systemctl start redis
systemctl status redis
```

Check the instance

```
# interactive
redis-cli -h 192.168.1.21 -a 123123
192.168.1.21:6379> info replication
role:master
connected_slaves:0
master_failover_state:no-failover
master_replid:9d6563f8b2cf7300bc82890838b877eceae2d8bf
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
192.168.1.21:6379>

# interactive, authenticating after connecting
redis-cli -h 192.168.1.21
192.168.1.21:6379> info replication
NOAUTH Authentication required.
192.168.1.21:6379> auth 123123
OK
192.168.1.21:6379> info replication
# Replication
role:master
connected_slaves:0
master_failover_state:no-failover
master_replid:9d6563f8b2cf7300bc82890838b877eceae2d8bf
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
192.168.1.21:6379>

# non-interactive
redis-cli -h 192.168.1.21 -a 123123 info replication
```

Benchmark

```
root@cby:~# redis-benchmark -t set,get -n 100000 -a 123123 -h 192.168.1.21
====== SET ======
  100000 requests completed in 0.85 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1
  host configuration "save": 3600 1 300 100 60 10000
  host configuration "appendonly": yes
  multi-thread: no

Latency by percentile distribution:
0.000% <= 0.095 milliseconds (cumulative count 13)
50.000% <= 0.287 milliseconds (cumulative count 52749)
75.000% <= 0.343 milliseconds (cumulative count 77482)
87.500% <= 0.367 milliseconds (cumulative count 88051)
93.750% <= 0.383 milliseconds (cumulative count 94598)
96.875% <= 0.399 milliseconds (cumulative count 97691)
98.438% <= 0.407 milliseconds (cumulative count 98450)
99.219% <= 0.423 milliseconds (cumulative count 99272)
99.609% <= 0.455 milliseconds (cumulative count 99612)
99.805% <= 0.599 milliseconds (cumulative count 99816)
99.902% <= 0.911 milliseconds (cumulative count 99903)
99.951% <= 1.039 milliseconds (cumulative count 99952)
99.976% <= 1.303 milliseconds (cumulative count 99977)
99.988% <= 1.343 milliseconds (cumulative count 99988)
99.994% <= 1.367 milliseconds (cumulative count 99995)
99.997% <= 1.375 milliseconds (cumulative count 99997)
99.998% <= 1.383 milliseconds (cumulative count 99999)
99.999% <= 1.391 milliseconds (cumulative count 100000)
100.000% <= 1.391 milliseconds (cumulative count 100000)

Cumulative distribution of latencies:
0.016% <= 0.103 milliseconds (cumulative count 16)
13.574% <= 0.207 milliseconds (cumulative count 13574)
59.956% <= 0.303 milliseconds (cumulative count 59956)
98.450% <= 0.407 milliseconds (cumulative count 98450)
99.708% <= 0.503 milliseconds (cumulative count 99708)
99.825% <= 0.607 milliseconds (cumulative count 99825)
99.868% <= 0.703 milliseconds (cumulative count 99868)
99.877% <= 0.807 milliseconds (cumulative count 99877)
99.899% <= 0.903 milliseconds (cumulative count 99899)
99.938% <= 1.007 milliseconds (cumulative count 99938)
99.966% <= 1.103 milliseconds (cumulative count 99966)
99.967% <= 1.207 milliseconds (cumulative count 99967)
99.977% <= 1.303 milliseconds (cumulative count 99977)
100.000% <= 1.407 milliseconds (cumulative count 100000)

Summary:
  throughput summary: 117508.81 requests per second
  latency summary (msec):
          avg       min       p50       p95       p99       max
        0.285     0.088     0.287     0.391     0.423     1.391

====== GET ======
  100000 requests completed in 0.80 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1
  host configuration "save": 3600 1 300 100 60 10000
  host configuration "appendonly": yes
  multi-thread: no

Latency by percentile distribution:
0.000% <= 0.039 milliseconds (cumulative count 1)
50.000% <= 0.255 milliseconds (cumulative count 51084)
75.000% <= 0.311 milliseconds (cumulative count 76787)
87.500% <= 0.343 milliseconds (cumulative count 90043)
93.750% <= 0.359 milliseconds (cumulative count 95251)
96.875% <= 0.375 milliseconds (cumulative count 97337)
98.438% <= 0.391 milliseconds (cumulative count 98520)
99.219% <= 0.415 milliseconds (cumulative count 99259)
99.609% <= 0.519 milliseconds (cumulative count 99611)
99.805% <= 0.639 milliseconds (cumulative count 99808)
99.902% <= 0.911 milliseconds (cumulative count 99903)
99.951% <= 1.895 milliseconds (cumulative count 99952)
99.976% <= 1.991 milliseconds (cumulative count 99977)
99.988% <= 2.031 milliseconds (cumulative count 99988)
99.994% <= 2.055 milliseconds (cumulative count 99994)
99.997% <= 2.071 milliseconds (cumulative count 99998)
99.998% <= 2.079 milliseconds (cumulative count 100000)
100.000% <= 2.079 milliseconds (cumulative count 100000)

Cumulative distribution of latencies:
0.052% <= 0.103 milliseconds (cumulative count 52)
27.094% <= 0.207 milliseconds (cumulative count 27094)
73.309% <= 0.303 milliseconds (cumulative count 73309)
99.140% <= 0.407 milliseconds (cumulative count 99140)
99.577% <= 0.503 milliseconds (cumulative count 99577)
99.780% <= 0.607 milliseconds (cumulative count 99780)
99.832% <= 0.703 milliseconds (cumulative count 99832)
99.855% <= 0.807 milliseconds (cumulative count 99855)
99.899% <= 0.903 milliseconds (cumulative count 99899)
99.933% <= 1.007 milliseconds (cumulative count 99933)
99.938% <= 1.207 milliseconds (cumulative count 99938)
99.947% <= 1.407 milliseconds (cumulative count 99947)
99.950% <= 1.503 milliseconds (cumulative count 99950)
99.954% <= 1.903 milliseconds (cumulative count 99954)
99.981% <= 2.007 milliseconds (cumulative count 99981)
100.000% <= 2.103 milliseconds (cumulative count 100000)

Summary:
  throughput summary: 125628.14 requests per second
  latency summary (msec):
          avg       min       p50       p95       p99       max
        0.259     0.032     0.255     0.359     0.407     2.079
root@cby:~#
```
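As a small illustration of the publish/subscribe feature mentioned in the list above, here is a minimal sketch against the instance installed in this post (192.168.1.21 and password 123123 are the values from the configuration above; the channel name "news" is made up for the example). SUBSCRIBE and PUBLISH are standard Redis commands.

```
# Terminal 1: subscribe to a channel and wait for messages
redis-cli -h 192.168.1.21 -a 123123 subscribe news

# Terminal 2: publish a message; every subscriber of "news" receives it
redis-cli -h 192.168.1.21 -a 123123 publish news "hello from redis"

# Terminal 1 then prints roughly:
# 1) "message"
# 2) "news"
# 3) "hello from redis"
```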
2024-06-16 · 55 reads · 0 comments · 0 likes
2024-06-08
Chenby's Container Image Mirror
Background: because the upstream registries have become unreachable (circumstances beyond my control), I set up a mirror site that proxies container images for multiple platforms. Recommended order when pulling an image: official address > mirror address > Alibaba Cloud address.

Replacement addresses

```
gcr.io               >>>>> gcr.chenby.cn
quay.io              >>>>> quay.chenby.cn
ghcr.io              >>>>> ghcr.chenby.cn
docker.io            >>>>> docker.chenby.cn
k8s.gcr.io           >>>>> k8s.chenby.cn
registry.k8s.io      >>>>> k8s.chenby.cn
docker.elastic.co    >>>>> elastic.chenby.cn
docker.cloudsmith.io >>>>> cloudsmith.chenby.cn
```

Configure Docker

To pull official Docker Hub images through the mirror, add it to daemon.json. Normal pull commands then go through the proxy automatically. (A quick check that the mirror is active is sketched at the end of this post.)

```
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
    "registry-mirrors": ["https://docker.chenby.cn"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
```

Examples

```
# docker.io:
# official:
docker pull docker.io/nginx:latest
docker pull docker.io/calico/node:v3.28.0
# mirror:
docker pull docker.chenby.cn/library/nginx:latest
docker pull docker.chenby.cn/calico/node:v3.28.0

# docker.elastic.co:
# official:
docker pull docker.elastic.co/apm/apm-server:8.14.0
# mirror:
docker pull elastic.chenby.cn/apm/apm-server:8.14.0
# Alibaba Cloud:
docker pull registry.aliyuncs.com/chenby/apm-server:8.14.0

# quay.io:
# official:
docker pull quay.io/ceph/ceph:v18.2.1
# mirror:
docker pull quay.chenby.cn/ceph/ceph:v18.2.1
# Alibaba Cloud:
docker pull registry.aliyuncs.com/chenby/ceph:v18.2.1

# k8s.gcr.io:
# official:
docker pull k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.8.2
# mirror:
docker pull k8s.chenby.cn/kube-state-metrics/kube-state-metrics:v2.8.2
# Alibaba Cloud:
docker pull registry.aliyuncs.com/chenby/kube-state-metrics:v2.8.2

# registry.k8s.io:
# official:
docker pull registry.k8s.io/sig-storage/nfsplugin:v4.2.0
# mirror:
docker pull k8s.chenby.cn/sig-storage/nfsplugin:v4.2.0
# Alibaba Cloud:
docker pull registry.aliyuncs.com/chenby/nfsplugin:v4.2.0

# gcr.io:
# official:
docker pull gcr.io/kaniko-project/executor:v1.23.1
# mirror:
docker pull gcr.chenby.cn/kaniko-project/executor:v1.23.1
# Alibaba Cloud:
docker pull registry.aliyuncs.com/chenby/executor:v1.23.1

# ghcr.io:
# official:
docker pull ghcr.io/coroot/coroot:1.1.0
# mirror:
docker pull ghcr.chenby.cn/coroot/coroot:1.1.0
# Alibaba Cloud:
docker pull registry.aliyuncs.com/chenby/coroot:1.1.0
```
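To confirm that the daemon.json change above took effect, `docker info` lists the configured mirrors. This is a minimal sketch, assuming Docker has already been restarted as shown; busybox is just a conveniently small public image, any Docker Hub image works.

```
# The configured mirror should appear under "Registry Mirrors"
docker info | grep -A 1 "Registry Mirrors"

# Pulling a small Docker Hub image then exercises the mirror end to end
docker pull busybox:latest
```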
2024-06-08 · 353 reads · 0 comments · 0 likes
2024-06-08
Syncing Overseas Images to Alibaba Cloud
If an image cannot be pulled directly, try pulling it from my repository instead. All images are re-synced automatically every eight hours.

Usage

```
docker.elastic.co/kibana/[image-name]:[tag]  ==>  registry.aliyuncs.com/chenby/[image-name]:[tag]
quay.io/csiaddons/[image-name]:[tag]         ==>  registry.aliyuncs.com/chenby/[image-name]:[tag]
k8s.gcr.io/[image-name]:[tag]                ==>  registry.aliyuncs.com/chenby/[image-name]:[tag]
....
```

Pulling an image

```
registry.aliyuncs.com/chenby/[image-name]:[tag]

registry.aliyuncs.com/chenby/kube-apiserver:v1.30.1
```

The following repositories are currently synced; more will be added over time. (A small helper for rewriting image references is sketched at the end of this post.)

```
images:
  docker.elastic.co:
    - elasticsearch/elasticsearch
    - kibana/kibana
    - logstash/logstash
    - beats/filebeat
    - beats/heartbeat
    - beats/packetbeat
    - beats/auditbeat
    - beats/journalbeat
    - beats/metricbeat
    - apm/apm-server
    - app-search/app-search
  quay.io:
    - coreos/flannel
    - ceph/ceph
    - cephcsi/cephcsi
    - csiaddons/k8s-sidecar
    - csiaddons/volumereplication-operator
    - prometheus/prometheus
    - prometheus/alertmanager
    - prometheus/pushgateway
    - prometheus/blackbox-exporter
    - prometheus/node-exporter
    - prometheus-operator/prometheus-config-reloader
    - prometheus-operator/prometheus-operator
    - brancz/kube-rbac-proxy
    - cilium/cilium
    - cilium/tetragon
    - cilium/operator
    - cilium/operator-generic
    - thanos/thanos
    - cilium/certgen
    - cilium/hubble-relay
    - cilium/hubble-ui
    - cilium/hubble-ui-backend
    - cilium/hubble-ui
    - cilium/cilium-envoy
    - cilium/cilium-etcd-operator
    - cilium/operator
    - cilium/startup-script
    - cilium/clustermesh-apiserver
    - coreos/etcd
  k8s.gcr.io:
    - dns/k8s-dns-node-cache
    - metrics-server/metrics-server
    - kube-state-metrics/kube-state-metrics
    - prometheus-adapter/prometheus-adapter
    - sig-storage/nfs-subdir-external-provisioner
    - sig-storage/csi-node-driver-registrar
    - sig-storage/csi-provisioner
    - sig-storage/csi-resizer
    - sig-storage/csi-snapshotter
    - sig-storage/csi-attacher
    - sig-storage/nfsplugin
  registry.k8s.io:
    - pause
    - etcd
    - conformance
    - kube-proxy
    - kube-apiserver
    - kube-scheduler
    - kube-controller-manager
    - coredns/coredns
    - ingress-nginx/controller
    - ingress-nginx/controller-chroot
    - ingress-nginx/kube-webhook-certgen
    - metrics-server/metrics-server
    - dns/k8s-dns-node-cache
    - sig-storage/nfs-subdir-external-provisioner
    - sig-storage/csi-node-driver-registrar
    - sig-storage/csi-provisioner
    - sig-storage/csi-resizer
    - sig-storage/csi-snapshotter
    - sig-storage/csi-attacher
    - kube-state-metrics/kube-state-metrics
    - prometheus-adapter/prometheus-adapter
  gcr.io:
    - kaniko-project/executor
    - google-samples/xtrabackup
  docker.io:
    - calico/node
    - calico/typha
    - calico/cni
    - calico/kube-controllers
    - calico/pod2daemon-flexvol
    - flannel/flannel
    - flannel/flannel-cni-plugin
  ghcr.io:
    - coroot/coroot
    - coroot/coroot-cluster-agent
    - coroot/coroot-node-agent
```
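All of the mappings above keep only the last path component of the upstream reference and move it under the chenby namespace, so the rewrite is easy to script. This is a minimal sketch; the helper name aliyun_ref is made up for the example.

```
# Rewrite an upstream image reference to its registry.aliyuncs.com/chenby equivalent.
# e.g. k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.8.2
#  ->  registry.aliyuncs.com/chenby/kube-state-metrics:v2.8.2
aliyun_ref() {
    local ref="$1"
    echo "registry.aliyuncs.com/chenby/${ref##*/}"   # keep only the trailing name:tag
}

docker pull "$(aliyun_ref registry.k8s.io/kube-apiserver:v1.30.1)"
docker pull "$(aliyun_ref quay.io/ceph/ceph:v18.2.1)"
```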
2024-06-08 · 204 reads · 0 comments · 0 likes