LruCache is a generic class. Internally it uses a LinkedHashMap and holds strong references to the cached objects, exposing get and put methods for retrieving and adding cache entries. When the cache is full, LruCache evicts the least recently used entries before new ones are added. Readers who are not yet familiar with the four reference types in Java can look up the relevant material on their own; they are not covered here.
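As a quick orientation before diving into the source, here is a minimal usage sketch (my own example, not part of the original article): an in-memory bitmap cache that measures entries in kilobytes by overriding sizeOf, using the common rule of thumb of one eighth of the available heap as the budget. The wrapper class and its method names are made up for illustration; only the LruCache API itself comes from the platform.

import android.graphics.Bitmap;
import android.util.LruCache;

public class BitmapMemoryCache {
    private final LruCache<String, Bitmap> cache;

    public BitmapMemoryCache() {
        // Budget: roughly 1/8 of the available VM memory, expressed in KB.
        int maxMemoryKb = (int) (Runtime.getRuntime().maxMemory() / 1024);
        int cacheSizeKb = maxMemoryKb / 8;

        cache = new LruCache<String, Bitmap>(cacheSizeKb) {
            @Override
            protected int sizeOf(String key, Bitmap bitmap) {
                // Measure entries in KB so that size/maxSize count kilobytes, not entries.
                return bitmap.getByteCount() / 1024;
            }
        };
    }

    public void add(String key, Bitmap bitmap) {
        if (cache.get(key) == null) {
            cache.put(key, bitmap);
        }
    }

    public Bitmap get(String key) {
        return cache.get(key);
    }
}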
Before walking through the source, let's look at a few characteristics of LinkedHashMap.
 
LinkedHashMap differs from HashMap in that it maintains a doubly linked list running through all of its entries. This linked list defines the iteration order, which can be either insertion order or access order.
LinkedHashMap extends HashMap and stores its elements using a hash table plus a doubly linked list. Its basic operations are similar to those of its parent class HashMap; it implements its linked-list behaviour by overriding the relevant methods of the parent class.
1) The Entry element:
LinkedHashMap uses the same hash algorithm as HashMap, but it redefines the Entry element stored in the table. Besides the reference to the current object, each Entry also keeps references to the previous element (before) and the next element (after), which builds a doubly linked list on top of the hash table.
 
/**
 * The head of the doubly linked list.
 */
private transient Entry<K,V> header;

/**
 * LinkedHashMap's Entry element.
 * Extends HashMap.Entry and additionally keeps references to the previous
 * element (before) and the next element (after).
 */
private static class Entry<K,V> extends HashMap.Entry<K,V> {
    Entry<K,V> before, after;
    ...
}
 
2) Reads:
LinkedHashMap overrides the get method of its parent class HashMap. After looking up the entry, it checks whether access ordering is enabled (accessOrder == true); if so, it records the access order by unlinking the entry from its current position and relinking it at the tail of the doubly linked list. This is what keeps the least recently used entry at the head, the property LRU eviction relies on. Because inserting into and removing from a linked list are constant-time operations, this does not hurt performance.
 
@Override public V get(Object key) {
    /*
     * This method is overridden to eliminate the need for a polymorphic
     * invocation in superclass at the expense of code duplication.
     */
    if (key == null) {
        HashMapEntry<K, V> e = entryForNullKey;
        if (e == null)
            return null;
        if (accessOrder)
            makeTail((LinkedEntry<K, V>) e);
        return e.value;
    }

    int hash = Collections.secondaryHash(key);
    HashMapEntry<K, V>[] tab = table;
    for (HashMapEntry<K, V> e = tab[hash & (tab.length - 1)];
            e != null; e = e.next) {
        K eKey = e.key;
        if (eKey == key || (e.hash == hash && key.equals(eKey))) {
            if (accessOrder)
                makeTail((LinkedEntry<K, V>) e);
            return e.value;
        }
    }
    return null;
}

/**
 * Relinks the given entry to the tail of the list. Under access ordering,
 * this method is invoked whenever the value of a pre-existing entry is
 * read by Map.get or modified by Map.put.
 */
private void makeTail(LinkedEntry<K, V> e) {
    // Unlink e
    e.prv.nxt = e.nxt;
    e.nxt.prv = e.prv;

    // Relink e as tail
    LinkedEntry<K, V> header = this.header;
    LinkedEntry<K, V> oldTail = header.prv;
    e.nxt = header;
    e.prv = oldTail;
    oldTail.nxt = header.prv = e;
    modCount++;
}
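To make the access-ordering behaviour concrete, here is a small stand-alone sketch (my own example, assuming a standard JDK/Android LinkedHashMap): with accessOrder set to true, iterating the map goes from least recently accessed to most recently accessed, which is exactly the ordering LruCache relies on when it evicts the eldest entry.

import java.util.LinkedHashMap;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // accessOrder = true: iteration order is least recently accessed first.
        LinkedHashMap<String, Integer> map = new LinkedHashMap<>(0, 0.75f, true);
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);

        map.get("a"); // touching "a" relinks it at the tail of the list

        // Prints {b=2, c=3, a=1}: "b" is now the eldest and would be evicted first.
        System.out.println(map);
    }
}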

  

Source code analysis
 
public class LruCache<K, V> {
    private final LinkedHashMap<K, V> map;

    /** Size of this cache in units. Not necessarily the number of elements. */
    private int size;          // current size of the cache
    private int maxSize;       // maximum size of the cache
    private int putCount;      // number of put() calls
    private int createCount;   // number of values created by create()
    private int evictionCount; // number of evictions
    private int hitCount;      // number of cache hits
    private int missCount;     // number of cache misses

    /**
     * @param maxSize for caches that do not override {@link #sizeOf}, this is
     *     the maximum number of entries in the cache. For all other caches,
     *     this is the maximum sum of the sizes of the entries in this cache.
     */
    public LruCache(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        this.maxSize = maxSize;
        // accessOrder = true makes the LinkedHashMap access-ordered
        this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
    }

    /**
     * Sets the size of the cache.
     *
     * @param maxSize The new maximum size.
     */
    public void resize(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        synchronized (this) {
            this.maxSize = maxSize;
        }
        trimToSize(maxSize);
    }

    /**
     * Returns the value mapped by {@code key}, creating one via {@link #create}
     * if it does not exist. If a value is returned, it is moved to the head of
     * the queue. Returns null if no value is cached and none can be created.
     */
    public final V get(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }

        V mapValue;
        synchronized (this) {
            mapValue = map.get(key);
            if (mapValue != null) {
                hitCount++;
                return mapValue;
            }
            missCount++;
        }

        /*
         * On a miss, try to create a value. This may take a long time, and the
         * map may have changed by the time create() returns. If a conflicting
         * value was added to the map while create() was working, keep that
         * value and release the one that was just created.
         */
        V createdValue = create(key);
        if (createdValue == null) {
            return null;
        }

        synchronized (this) {
            createCount++;
            mapValue = map.put(key, createdValue);
            if (mapValue != null) {
                // There was a conflict so undo that last put
                map.put(key, mapValue);
            } else {
                size += safeSizeOf(key, createdValue);
            }
        }

        if (mapValue != null) {
            entryRemoved(false, key, createdValue, mapValue);
            return mapValue;
        } else {
            // Evict entries if the cache has grown past its maximum size
            trimToSize(maxSize);
            return createdValue;
        }
    }

    /**
     * Caches {@code value} for {@code key}. The value is moved to the head of
     * the queue.
     *
     * @return the previous value mapped by {@code key}.
     */
    public final V put(K key, V value) {
        if (key == null || value == null) {
            throw new NullPointerException("key == null || value == null");
        }

        V previous;
        synchronized (this) {
            putCount++;
            size += safeSizeOf(key, value);
            previous = map.put(key, value);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }

        if (previous != null) {
            entryRemoved(false, key, previous, value);
        }

        trimToSize(maxSize);
        return previous;
    }

    /**
     * Remove the eldest entries until the total of remaining entries is at or
     * below the requested size.
     *
     * @param maxSize the maximum size of the cache before returning. May be -1
     *     to evict even 0-sized elements.
     */
    public void trimToSize(int maxSize) {
        while (true) {
            K key;
            V value;
            synchronized (this) {
                if (size < 0 || (map.isEmpty() && size != 0)) {
                    throw new IllegalStateException(getClass().getName()
                            + ".sizeOf() is reporting inconsistent results!");
                }

                if (size <= maxSize) {
                    break;
                }

                Map.Entry<K, V> toEvict = map.eldest();
                if (toEvict == null) {
                    break;
                }

                key = toEvict.getKey();
                value = toEvict.getValue();
                map.remove(key);
                size -= safeSizeOf(key, value);
                evictionCount++;
            }

            entryRemoved(true, key, value, null);
        }
    }

    /**
     * Removes the entry for {@code key} if it exists.
     *
     * @return the previous value mapped by {@code key}.
     */
    public final V remove(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }

        V previous;
        synchronized (this) {
            previous = map.remove(key);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }

        if (previous != null) {
            entryRemoved(false, key, previous, null);
        }

        return previous;
    }

    /**
     * Called for entries that have been evicted or removed. This method is
     * invoked when a value is evicted to make space, removed by a call to
     * {@link #remove}, or replaced by a call to {@link #put}. The default
     * implementation does nothing.
     *
     * <p>The method is called without synchronization: other threads may
     * access the cache while this method is executing.
     *
     * @param evicted true if the entry is being removed to make space, false
     *     if the removal was caused by a {@link #put} or {@link #remove}.
     * @param newValue the new value for {@code key}, if it exists. If non-null,
     *     this removal was caused by a {@link #put}. Otherwise it was caused by
     *     an eviction or a {@link #remove}.
     */
    protected void entryRemoved(boolean evicted, K key, V oldValue, V newValue) {}

    /**
     * Called after a cache miss to compute a value for the corresponding key.
     * Returns the computed value or null if no value can be computed. The
     * default implementation returns null.
     *
     * <p>The method is called without synchronization: other threads may
     * access the cache while this method is executing.
     *
     * <p>If a value for {@code key} exists in the cache when this method
     * returns, the created value will be released with {@link #entryRemoved}
     * and discarded. This can occur when multiple threads request the same key
     * at the same time (causing multiple values to be created), or when one
     * thread calls {@link #put} while another is creating a value for the same
     * key.
     */
    protected V create(K key) {
        return null;
    }

    private int safeSizeOf(K key, V value) {
        int result = sizeOf(key, value);
        if (result < 0) {
            throw new IllegalStateException("Negative size: " + key + "=" + value);
        }
        return result;
    }

    /**
     * Returns the size of the entry for {@code key} and {@code value} in
     * user-defined units. The default implementation returns 1 so that size
     * is the number of entries and max size is the maximum number of entries.
     *
     * <p>An entry's size must not change while it is in the cache.
     */
    protected int sizeOf(K key, V value) {
        return 1;
    }

    /**
     * Clear the cache, calling {@link #entryRemoved} on each removed entry.
     */
    public final void evictAll() {
        trimToSize(-1); // -1 will evict 0-sized elements
    }

    /**
     * For caches that do not override {@link #sizeOf}, this returns the number
     * of entries in the cache. For all other caches, this returns the sum of
     * the sizes of the entries in this cache.
     */
    public synchronized final int size() {
        return size;
    }

    /**
     * For caches that do not override {@link #sizeOf}, this returns the maximum
     * number of entries in the cache. For all other caches, this returns the
     * maximum sum of the sizes of the entries in this cache.
     */
    public synchronized final int maxSize() {
        return maxSize;
    }

    /**
     * Returns the number of times {@link #get} returned a value that was
     * already present in the cache.
     */
    public synchronized final int hitCount() {
        return hitCount;
    }

    /**
     * Returns the number of times {@link #get} returned null or required a new
     * value to be created.
     */
    public synchronized final int missCount() {
        return missCount;
    }

    /**
     * Returns the number of times {@link #create(Object)} returned a value.
     */
    public synchronized final int createCount() {
        return createCount;
    }

    /**
     * Returns the number of times {@link #put} was called.
     */
    public synchronized final int putCount() {
        return putCount;
    }

    /**
     * Returns the number of values that have been evicted.
     */
    public synchronized final int evictionCount() {
        return evictionCount;
    }

    /**
     * Returns a copy of the current contents of the cache, ordered from least
     * recently accessed to most recently accessed.
     */
    public synchronized final Map<K, V> snapshot() {
        return new LinkedHashMap<K, V>(map);
    }

    @Override public synchronized final String toString() {
        int accesses = hitCount + missCount;
        int hitPercent = accesses != 0 ? (100 * hitCount / accesses) : 0;
        return String.format("LruCache[maxSize=%d,hits=%d,misses=%d,hitRate=%d%%]",
                maxSize, hitCount, missCount, hitPercent);
    }
}
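As an illustration of the create() hook documented above, here is a sketch of a self-populating cache (my own example, not from the article): on a miss, get() calls create() and inserts the computed value, so callers never see null for a computable key. The SquareCache class is hypothetical.

import android.util.LruCache;

public class SquareCache extends LruCache<Integer, Long> {

    public SquareCache(int maxEntries) {
        super(maxEntries); // default sizeOf() returns 1, so maxSize counts entries
    }

    @Override
    protected Long create(Integer key) {
        // Called by get() after a miss; the result is put into the cache.
        return (long) key * key;
    }
}

// Usage:
// SquareCache cache = new SquareCache(100);
// cache.get(12); // miss -> create(12) -> 144L is cached and returned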
Summary

1. LruCache is a caching mechanism built on top of the LRU (least recently used) algorithm.

2. The LRU algorithm works by evicting the least recently used data, provided the amount of data currently held exceeds the configured maximum.

3. LruCache does not actually free memory; it only removes entries from the map. Actually releasing the underlying memory or resources is still up to the caller (see the sketch below).
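Regarding point 3, a common pattern when the cached values hold resources that must be released explicitly is to override entryRemoved() and release them there. The sketch below is my own illustration and relies on an assumption: recycling an evicted Bitmap is only safe if nothing else still references it.

import android.graphics.Bitmap;
import android.util.LruCache;

public class RecyclingBitmapCache extends LruCache<String, Bitmap> {

    public RecyclingBitmapCache(int maxSizeKb) {
        super(maxSizeKb);
    }

    @Override
    protected int sizeOf(String key, Bitmap value) {
        return value.getByteCount() / 1024; // entry size in KB
    }

    @Override
    protected void entryRemoved(boolean evicted, String key,
                                Bitmap oldValue, Bitmap newValue) {
        // Called after an entry is evicted, removed, or replaced.
        // Release the bitmap's pixel memory; only safe if no one else holds it.
        if (evicted && !oldValue.isRecycled()) {
            oldValue.recycle();
        }
    }
}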
 
