Redis is often referred to as a data structures server. What this means is that Redis provides access to mutable data structures via a set of commands, which are sent using a client-server model with TCP sockets and a simple protocol, so different processes can query and modify the same data structures in a shared way.
The data structures implemented in Redis have a few special properties:
Redis takes care of persisting them to disk, even though they are always served from and modified in server memory. This means that Redis is fast, but also non-volatile.
The implementation of the data structures stresses memory efficiency, so data structures inside Redis will likely use less memory than the same data structures modeled in a high-level programming language.
Redis offers a number of features that are natural to find in a database, like replication, tunable levels of durability, clustering, and high availability.
Another good way to think of Redis is as a more capable version of memcached, where the operations are not just SETs and GETs, but operations that work with complex data types like Lists, Sets, ordered data structures, and so forth.
If you want to know more, this is a list of selected starting points:
################################## INCLUDES ###################################
# This is useful when you have a standard configuration template but each
# Redis server needs its own customized settings.
include /path/to/local.conf
include /path/to/other.conf
################################ GENERAL #####################################
# When a slave loses its connection to the master, or while replication is still in progress, the slave can behave in one of two ways: 1) if slave-serve-stale-data is set to yes (the default), the slave will keep responding to client requests; 2) if slave-serve-stale-data is set to no, every request except INFO and SLAVEOF will return the error "SYNC with master in progress".
slave-serve-stale-data yes
# Whether to replicate data over a socket. Redis replication currently offers two strategies: disk and socket. When a new slave connects, or a reconnecting slave cannot do a partial resync, a full synchronization is performed and the master produces an RDB file. With the disk strategy, the master forks a child process that saves the RDB file to disk, and the file on disk is then transferred to the slaves. With the socket strategy, the master forks a child process that writes the RDB file directly to the slave sockets. With the disk strategy, multiple slaves can share the same RDB file while it is being produced; with the socket strategy, slaves are served one by one, sequentially. The socket strategy is recommended when disks are slow and the network is fast.
repl-diskless-sync no
# Setting one or the other to 0 disables the feature.
# By default min-slaves-to-write is set to 0 (feature disabled) and
# min-slaves-max-lag is set to 10.
# Set the maximum number of client connections Redis will accept. The default is 10000. Since Redis does not distinguish a client connection from internally opened files or a slave connection, maxclients is recommended to be set to at least 32. Once maxclients is exceeded, Redis sends 'max number of clients reached' to new connections and closes them.
# maxclients 10000
############################## APPEND ONLY MODE ###############################
# By default Redis persists with RDB, which is good enough for many applications. However, if Redis crashes, a few minutes of data may be lost, depending on the configured save points. Append Only File is an alternative persistence mode that offers better durability: Redis appends every write it receives to the appendonly.aof file, and on every startup it loads this file into memory first, ignoring the RDB file.
appendonly no
# During an AOF rewrite, or while an RDB file is being written, a large amount of I/O is performed; with the everysec and always AOF fsync policies, fsync may then block for a long time. no-appendfsync-on-rewrite defaults to no. For latency-sensitive applications it can be set to yes; otherwise leave it as no, which is the safer choice for durability. Setting it to yes means that during a rewrite, new writes are not fsynced but kept in memory and written out once the rewrite completes; the default is no, but yes is recommended here. Linux's default fsync policy is every 30 seconds, so up to 30 seconds of data may be lost.
no-appendfsync-on-rewrite no
To configure a master-slave Redis setup for optimum performance, here are some suggestions:
Kernel configuration:
Make sure to set the Linux kernel overcommit memory setting to 1. Add vm.overcommit_memory = 1 to /etc/sysctl.conf, then reboot or run the command sysctl vm.overcommit_memory=1 for it to take effect immediately. This makes Redis assume the server always has enough memory to fork.
Make sure to disable the Linux kernel transparent huge pages feature; it greatly hurts both memory usage and latency. This is accomplished with the following command: echo never > /sys/kernel/mm/transparent_hugepage/enabled. Transparent huge pages change the memory page size from 4KB to 2MB, which increases memory usage under copy-on-write. https://docs.mongodb.com/manual/tutorial/transparent-huge-pages/#red-hat-centos-7
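As a quick sanity check, the active THP setting can be read back from the same sysfs file; the kernel marks the active value with brackets. A minimal sketch of the check, assuming the standard sysfs format (the helper name is illustrative):

```python
def active_thp_setting(sysfs_content: str) -> str:
    """Parse the content of /sys/kernel/mm/transparent_hugepage/enabled,
    e.g. 'always madvise [never]'; the bracketed token is the active value."""
    for token in sysfs_content.split():
        if token.startswith("[") and token.endswith("]"):
            return token[1:-1]
    raise ValueError("no active setting found")

# On a real host you would read the file itself:
# with open("/sys/kernel/mm/transparent_hugepage/enabled") as f:
#     print(active_thp_setting(f.read()))
print(active_thp_setting("always madvise [never]"))  # -> never
```

If the function returns anything other than "never", the echo command above has not taken effect (or was lost on reboot).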
Common Redis settings:
maxclients 50000
tcp-keepalive 300, enabled so that dead TCP connections are detected
daemonize yes, when not using systemd to manage the process
maxmemory = 70% of physical memory, leaving enough headroom for forking during a background save or full replication: the remaining memory absorbs the accumulated changes (which trigger copy-on-write) during a background save or AOF rewrite.
stop-writes-on-bgsave-error no, to keep accepting client writes even when an RDB background save fails, increasing the robustness of the Redis master.
repl-backlog-size 1gb; this should be sized by the write load of the clients. The default is 1mb; we can raise it to 1gb.
repl-backlog-ttl 0, so the backlog buffer is never released
client-output-buffer-limit slave <hard-limit> 0 0, setting the hard limit to the same size as maxmemory and disabling the soft limit (the two trailing zeros). If this is left unlimited and a slave blocks, its output buffer will use up all memory and force the master to evict all keys.
maxmemory-policy volatile-lru, so that only keys with an expire set are evicted.
appendfsync everysec; if both master and slave crash due to a power failure, this ensures at most one second of data is lost on the slave.
no-appendfsync-on-rewrite yes, to make sure the slave is not blocked by a rewrite process when writing the AOF log, so the slave can follow the master closely; we prefer availability over durability.
aof-load-truncated yes, letting Redis fix a truncated AOF by itself.
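The two sizing rules above (maxmemory at 70% of RAM, repl-backlog sized from write load) reduce to simple arithmetic. A sketch under stated assumptions: the 70% ratio, write rate, and outage window below are illustrative figures, not measurements:

```python
def suggested_maxmemory(physical_bytes: int, ratio: float = 0.70) -> int:
    """Leave ~30% headroom for copy-on-write during BGSAVE / AOF rewrite."""
    return int(physical_bytes * ratio)

def suggested_repl_backlog(write_bytes_per_sec: int, max_disconnect_sec: int) -> int:
    """The backlog must hold all writes produced while a slave is disconnected,
    otherwise the reconnecting slave falls back to a full resync."""
    return write_bytes_per_sec * max_disconnect_sec

GiB = 1024 ** 3
print(suggested_maxmemory(128 * GiB) // GiB)        # -> 89 (GiB, on a 128 GiB host)
print(suggested_repl_backlog(10 * 1024 * 1024, 60))  # 10 MiB/s writes, 60 s outage
```

If the computed backlog exceeds the 1gb suggested above, partial resync cannot cover the outage and a full sync (with its fork) becomes unavoidable.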
The repl_backlog is only allocated once at least one replica has connected.
If a slave disconnects from the master and the accumulated writes exceed repl-backlog-size, the master will do a background save. The fork for that save blocks all clients; the child relies on copy-on-write, so extra memory is allocated only for pages changed during the dump. It may not double the memory usage if the memory page size is small and the load is not too high.
There is no need to enable repl-diskless-sync; it also requires a fork, it just writes to a socket instead of to disk.
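The copy-on-write overhead mentioned above can be estimated roughly: only the pages dirtied while the forked child is dumping get duplicated. A back-of-the-envelope sketch, where the 5% dirty fraction is an assumed figure for illustration:

```python
def cow_overhead_bytes(used_memory: int, dirty_fraction: float) -> int:
    """Extra memory consumed during BGSAVE: only the fraction of pages
    written to while the forked child is dumping gets duplicated."""
    return int(used_memory * dirty_fraction)

GiB = 1024 ** 3
# e.g. a 76 GiB dataset with 5% of pages touched during the dump
print(round(cow_overhead_bytes(76 * GiB, 0.05) / GiB, 2))  # -> 3.8
```

This is why huge pages hurt: with 2 MB pages instead of 4 KB, a single dirtied byte duplicates 512x more memory, pushing the dirty fraction up sharply.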
Changing the Python application code to use the hash type instead of plain key-value pairs (for example for Redis tokens) can reduce memory use profoundly:
hash-max-ziplist-value 64
hash-max-ziplist-entries 512
Redis master:
save ""
appendonly no
disable RDB and AOF, since the process fork caused by RDB and AOF saving blocks all commands. Use the INFO command to check the fork time per gigabyte: latest_fork_usec:2568287 us / used_memory_human:76.02G = 33.78 ms per gigabyte, i.e. forking may take about 33 ms for each gigabyte of used memory.
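That per-gigabyte figure is straightforward arithmetic on the two INFO fields quoted above:

```python
def fork_ms_per_gib(latest_fork_usec: int, used_memory_gib: float) -> float:
    """Convert INFO's latest_fork_usec into milliseconds of fork stall
    per GiB of used memory."""
    return latest_fork_usec / 1000 / used_memory_gib

# Values from the INFO output quoted above
print(round(fork_ms_per_gib(2568287, 76.02), 2))  # -> 33.78
```

Multiplying back by the dataset size gives the expected stall for a full fork, which is the number the keepalived health check has to tolerate (see the fail-over notes below).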
Redis slave:
enable RDB dumps:
save 900 1
save 300 10
save 60 10000
enable AOF:
appendonly yes
Be cautious if the master is empty: SLAVEOF can wipe out all of the slave's keys.
Fail-over
config set save "" -- disable RDB on the master
config set appendonly no -- disable AOF on the master
Take note of latest_fork_usec:5934556: this means the Redis master will be blocked for 5.9 seconds. If the keepalived health check mistakenly treats this as downtime, its VIP will fail over to the slave, turning a slave that is still syncing into the master. For this case, do not empty the slave while it is trying to sync with the master, which could otherwise leave an empty 'master' at that moment.
While a slave is syncing with the master, remember to set the keepalived fail count to a much bigger number, to prevent keepalived from falsely triggering an unwanted failover.
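A rough rule for that fail count: the health check's total window (check interval times fall count) should comfortably exceed the worst observed fork stall. A hedged sketch of the arithmetic; the 2-second interval and 2x safety factor are assumed values, and interval/fall refer to keepalived's check-script parameters:

```python
import math

def min_fall_count(worst_fork_sec: float, check_interval_sec: float,
                   safety_factor: float = 2.0) -> int:
    """Smallest keepalived 'fall' count whose total window
    (interval * fall) exceeds the worst fork stall with margin."""
    return math.ceil(worst_fork_sec * safety_factor / check_interval_sec)

# 5.9 s fork stall (latest_fork_usec:5934556), health check every 2 s
print(min_fall_count(5.9, 2.0))  # -> 6
```

With fewer consecutive failures allowed than this, a routine BGSAVE fork could be misread as a dead master and trigger the unwanted VIP failover described above.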
If AOF is enabled, Redis will ignore the dump.rdb file during initial loading.
On an instance running only one Redis, you can use service keepalived restart to fail over; but with two or more instances, you have to run stop_redis.sh to let keepalived fail over to another instance.
Never run stop_redis.sh on the master: it may lose all data if appendonly.aof is empty, and cause a long initial loading time if the data set is big!
[root@sg-gop-10-71-12-78 redis-6389]# cat check_redis.sh
#!/bin/bash
# Check if redis is running, return 1 if not.
# Used by keepalived to initiate a failover in case redis is down

REDIS_STATUS=$(telnet 127.0.0.1 6389 < /dev/null | grep "Connected")
if [ "$REDIS_STATUS" != "" ]
then
    exit 0
else
    logger "REDIS is NOT running. Setting keepalived state to FAULT."
    exit 1
fi