Redis Documentation Translation: LRU Cache (Part 3)

2015-07-24
In Redis 3.0 (currently in beta), the algorithm was improved to also keep a pool of good candidates for eviction. This improved the performance of the algorithm, making it able to approximate more closely the behavior of a real LRU algorithm.
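The general idea can be sketched in a few lines of Python. This is only an illustrative simulation, not Redis's actual C implementation; the pool size and function name below are invented for the example. On every eviction a handful of keys is sampled, the best candidates seen so far are kept in a small pool, and the key with the oldest access time is evicted.

    import random
    import time

    MAXMEMORY_SAMPLES = 5   # plays the role of the maxmemory-samples directive
    POOL_SIZE = 16          # illustrative pool size, not Redis's real constant

    keys = {}               # key -> last access time
    pool = []               # best eviction candidates seen so far

    def evict_one():
        """Sample a few keys, merge them into the candidate pool,
        then evict the key with the oldest access time."""
        sample = random.sample(list(keys), min(MAXMEMORY_SAMPLES, len(keys)))
        for k in sample:
            if k not in pool:
                pool.append(k)
        pool.sort(key=lambda k: keys[k])   # oldest access time first
        del pool[POOL_SIZE:]               # keep only the best candidates
        victim = pool.pop(0)
        del keys[victim]
        return victim

    # usage: fill, re-access the newer half, then evict one key
    for i in range(100):
        keys[f"key:{i}"] = time.monotonic()
    for i in range(50, 100):
        keys[f"key:{i}"] = time.monotonic()
    print(evict_one())                     # very likely one of key:0 .. key:49

The larger the sample and the pool, the closer this behaves to exact LRU, at the cost of more work per eviction.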
What is important about the Redis LRU algorithm is that you are able to tune the precision of the algorithm by changing the number of samples to check for every eviction. This parameter is controlled by the following configuration directive:
maxmemory-samples 5
The reason why Redis does not use a true LRU implementation is that it costs more memory. However, the approximation is virtually equivalent for an application using Redis.
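To see where that memory cost comes from, compare this with what an exact LRU needs: every key must be kept in strict recency order, which means extra per-key bookkeeping (typically a doubly linked list node). Here is a minimal sketch, using Python's OrderedDict purely for illustration:

    from collections import OrderedDict

    class TrueLRU:
        """Exact LRU cache: every access reorders the key, and every key
        carries extra ordering metadata (OrderedDict's internal list node)."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.data = OrderedDict()

        def get(self, key):
            value = self.data.pop(key)     # raises KeyError if missing
            self.data[key] = value         # move to the most recently used end
            return value

        def set(self, key, value):
            if key in self.data:
                self.data.pop(key)
            elif len(self.data) >= self.capacity:
                self.data.popitem(last=False)   # evict the least recently used
            self.data[key] = value

Redis avoids paying for a per-key ordering structure of this kind and instead stores only a small per-key access-time clock, which is why sampling is needed at eviction time.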
The following is a graphical comparison of how the LRU approximation used by Redis compares with true LRU:

[Figure: LRU approximation in Redis 2.8 and Redis 3.0 (5 and 10 samples) compared with a theoretical LRU; dots show evicted, retained, and newly added keys]

The test to generate the above graphs filled a Redis server with a given number of keys. The keys were accessed from the first to the last, so that the first keys are the best candidates for eviction under an LRU algorithm. Later, 50% more keys were added, in order to force half of the old keys to be evicted.
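Assuming a local Redis instance that you are free to reconfigure and an allkeys-lru policy, the experiment can be reproduced roughly like this with the redis-py client (the memory limit and key counts are arbitrary values, chosen only so that the dataset exceeds maxmemory and evictions actually happen):

    import redis

    r = redis.Redis()
    r.config_set("maxmemory", "16mb")            # small limit so eviction kicks in
    r.config_set("maxmemory-policy", "allkeys-lru")
    r.config_set("maxmemory-samples", 5)

    N = 10_000
    for i in range(N):                           # fill the server with N keys
        r.set(f"key:{i}", "x" * 1024)
    for i in range(N):                           # access them from first to last
        r.get(f"key:{i}")
    for i in range(N, N + N // 2):               # add 50% more keys to force evictions
        r.set(f"key:{i}", "x" * 1024)

    survivors = sum(r.exists(f"key:{i}") for i in range(N))
    print(f"{survivors} of the original {N} keys survived eviction")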

You can see three kinds of dots in the graphs, forming three distinct bands:

The light gray band shows objects that were evicted.
The gray band shows objects that were not evicted.
The green band shows objects that were added.

In a theoretical LRU implementation we expect that, among the old keys, the first half will be expired. The Redis LRU algorithm instead only probabilistically expires the older keys.

As you can see, Redis 3.0 does a better job with 5 samples compared to Redis 2.8; however, most objects that are among the latest accessed are still retained by Redis 2.8. Using a sample size of 10 in Redis 3.0, the approximation becomes very close to the behavior of a theoretical LRU.

Note that LRU is just a model to predict how likely a given key will be accessed in the future. Moreover, if your data access pattern closely resembles a power law, most of the accesses will fall on the set of keys that the approximated LRU algorithm handles well.
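A power-law access pattern of this kind can be simulated with a Zipf distribution, for example as sketched below (the exponent and sizes are arbitrary):

    import numpy as np
    from collections import Counter

    rng = np.random.default_rng(42)
    N_KEYS = 10_000
    N_ACCESSES = 100_000

    # Zipf-distributed ranks: a few "hot" keys receive most of the accesses,
    # while the long tail is touched only rarely (a power-law pattern).
    ranks = rng.zipf(a=1.2, size=N_ACCESSES)
    accesses = [f"key:{rank % N_KEYS}" for rank in ranks]

    print(Counter(accesses).most_common(5))      # the handful of hot keys

Because the hot keys are accessed so frequently, even a sampled LRU almost never selects them as eviction candidates, which is why the approximation performs well under this kind of workload.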

In simulations we found that, using a power law access pattern, the difference between true LRU and the Redis approximation was minimal or non-existent.

However, you can raise the sample size to 10, at the cost of some additional CPU usage, in order to closely approximate true LRU, and check whether this makes a difference in your cache miss rate.
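The cache miss rate can be read from the keyspace_hits and keyspace_misses counters in the INFO output; a small sketch with redis-py:

    import redis

    r = redis.Redis()
    stats = r.info("stats")                      # the INFO stats section
    hits = stats["keyspace_hits"]
    misses = stats["keyspace_misses"]
    if hits + misses:
        print(f"miss rate: {misses / (hits + misses):.2%}")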

It is very simple to experiment in production with different values for the sample size by using the CONFIG SET maxmemory-samples command.
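With redis-cli this is simply CONFIG SET maxmemory-samples 10; the same change from the redis-py client looks like this:

    import redis

    r = redis.Redis()
    r.config_set("maxmemory-samples", 10)        # takes effect immediately, no restart
    print(r.config_get("maxmemory-samples"))     # verify the running value

The change takes effect immediately and is not written back to redis.conf unless you also run CONFIG REWRITE.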