ConcurrentHashMap Source Code Walkthrough -- Java Containers
final Segment<K,V> segmentFor(int hash) {
return segments[(hash >>> segmentShift) & segmentMask];
}
// Find power-of-two sizes best matching arguments
int sshift = 0;
int ssize = 1;
while (ssize < concurrencyLevel) {
++sshift;
ssize <<= 1;
}
segmentShift = 32 - sshift;
segmentMask = ssize - 1;
this.segments = Segment.newArray(ssize);
ssize starts at 1. With concurrencyLevel = 16, ssize is repeatedly shifted left (multiplied by 2), and sshift records how many shifts were performed.
With concurrencyLevel = 16, ssize goes from 1 to 16, which takes 4 shifts.
segmentShift = 32 - sshift = 32 - 4 = 28
segmentMask = ssize - 1 = 16 - 1 = 15, which is 1111 in binary.
So in the segmentFor method above, the hash value is unsigned-right-shifted by segmentShift bits and then ANDed with segmentMask, which extracts the top sshift bits of the original hash. In this example that is the top 4 bits, and 4 bits are exactly enough to address 16 Segments.
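To make the bit arithmetic concrete, here is a small standalone sketch (the hash value is an arbitrary example, not taken from the JDK source) that reproduces this computation:
// minimal sketch, assuming concurrencyLevel = 16 (so sshift = 4, segmentShift = 28, segmentMask = 15)
int concurrencyLevel = 16;
int sshift = 0;
int ssize = 1;
while (ssize < concurrencyLevel) {
    ++sshift;
    ssize <<= 1;
}
int segmentShift = 32 - sshift;                           // 28
int segmentMask = ssize - 1;                              // 15, binary 1111
int hash = 0xA1B2C3D4;                                    // arbitrary example hash
int segmentIndex = (hash >>> segmentShift) & segmentMask; // 0xA = 10, i.e. the top 4 bits of hash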
Now let's look at the rest of the constructor.
if (initialCapacity > MAXIMUM_CAPACITY)
initialCapacity = MAXIMUM_CAPACITY;
int c = initialCapacity / ssize;
if (c * ssize < initialCapacity)
++c;
int cap = 1;
while (cap < c)
cap <<= 1;
for (int i = 0; i < this.segments.length; ++i)
this.segments[i] = new Segment<K,V>(cap, loadFactor);
The first two lines clamp the initial capacity: it may not exceed MAXIMUM_CAPACITY, which is 1 << 30.
Next c is computed by dividing initialCapacity evenly among the ssize segments (ssize is the number of Segments); if c * ssize is still smaller than initialCapacity, c is incremented by one so no capacity is lost to rounding. Straightforward enough.
By default initialCapacity is 16 and ssize is 16, so c == 1 and cap stays at 1.
Then comes another shift loop, which guarantees that the number of slots in each Segment is a power of two: cap is the smallest power of two greater than or equal to c. A quick illustration with example inputs is sketched below.
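As a concrete illustration of the rounding, here is a quick sketch with made-up inputs (initialCapacity = 33 and ssize = 16 are example values, not the defaults):
// illustrative sketch of the per-segment capacity computation
int initialCapacity = 33;
int ssize = 16;
int c = initialCapacity / ssize; // 2
if (c * ssize < initialCapacity) // 32 < 33, so round up
    ++c;                         // c = 3
int cap = 1;
while (cap < c)
    cap <<= 1;                   // cap = 4, the smallest power of two >= c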
Then, in the for loop, each Segment is initialized:
Segment(int initialCapacity, float lf) {
loadFactor = lf;
setTable(HashEntry.<K,V>newArray(initialCapacity));
}
@SuppressWarnings("unchecked")
static final <K,V> HashEntry<K,V>[] newArray(int i) {
return new HashEntry[i];
}
In the Segment constructor, HashEntry's newArray method is called, creating a HashEntry array of size initialCapacity.
Reads in ConcurrentHashMap do not take a lock; this is achieved with volatile. Every write path finishes with a volatile write, and every read path starts with a volatile read. The semantics of volatile guarantee that anything written before the volatile write is visible to a thread that performs the subsequent volatile read: conceptually, the volatile write flushes the writer's cache to main memory, and the volatile read invalidates the reader's cached data and re-reads from memory, so the reader sees the latest values.
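The following minimal sketch illustrates that pattern (the class and field names are my own simplification, not the JDK source): the writer updates the volatile count last, the reader reads count first, so everything written before the count update is visible to the reader.
// simplified sketch of the write-volatile / read-volatile pattern; not the real Segment
class TinySegment<K, V> {
    static final class Node<K, V> {
        final K key;
        final int hash;
        final Node<K, V> next;   // final, like HashEntry.next
        volatile V value;        // volatile, like HashEntry.value
        Node(K key, int hash, Node<K, V> next, V value) {
            this.key = key; this.hash = hash; this.next = next; this.value = value;
        }
    }

    volatile int count;          // updated last by put/remove (write-volatile)
    Node<K, V>[] table;

    @SuppressWarnings("unchecked")
    TinySegment(int capacity) {
        table = (Node<K, V>[]) new Node[capacity];
    }

    V get(Object key, int hash) {
        if (count != 0) {        // read-volatile first
            for (Node<K, V> e = table[hash & (table.length - 1)]; e != null; e = e.next)
                if (e.hash == hash && key.equals(e.key))
                    return e.value;
        }
        return null;
    }
}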
void rehash() {
HashEntry<K,V>[] oldTable = table;
int oldCapacity = oldTable.length;
if (oldCapacity >= MAXIMUM_CAPACITY)
return;
/*
* Reclassify nodes in each list to new Map. Because we are
* using power-of-two expansion, the elements from each bin
* must either stay at same index, or move with a power of two
* offset. We eliminate unnecessary node creation by catching
* cases where old nodes can be reused because their next
* fields won't change. Statistically, at the default
* threshold, only about one-sixth of them need cloning when
* a table doubles. The nodes they replace will be garbage
* collectable as soon as they are no longer referenced by any
* reader thread that may be in the midst of traversing table
* right now.
*/
HashEntry<K,V>[] newTable = HashEntry.newArray(oldCapacity<<1); // note: capacity only doubles, so each element's new index is either the same as before or the old index plus oldCapacity
threshold = (int)(newTable.length * loadFactor);
int sizeMask = newTable.length - 1;
for (int i = 0; i < oldCapacity ; i++) {
// We need to guarantee that any existing reads of old Map can
// proceed. So we cannot yet null out each bin.
HashEntry<K,V> e = oldTable[i]; // head of this bucket's linked list
if (e != null) {
HashEntry<K,V> next = e.next;
int idx = e.hash & sizeMask; // new index idx; note sizeMask is now newTable.length - 1, one more high bit than before
// Single node on list
if (next == null) // a single node in the bucket: we are done
newTable[idx] = e; // place it at the new index directly; no need to check whether newTable[idx] is occupied, because with power-of-two doubling the buckets processed so far can never map to this slot, so it is still null
else {
// Reuse trailing consecutive sequence at same slot
HashEntry<K,V> lastRun = e;
int lastIdx = idx;
for (HashEntry<K,V> last = next; //在链表中遍历
last != null;
last = last.next) {
int k = last.hash & sizeMask; // compute this node's index in the new table
if (k != lastIdx) { // if the index differs from the previous one, start a new run
lastIdx = k;
lastRun = last;
}
}
newTable[lastIdx] = lastRun; // this reuses the trailing run of nodes that all map to the same slot in the new table (at least the last node), linking that tail of the old chain into the new table as-is
// Clone all remaining nodes
for (HashEntry<K,V> p = e; p != lastRun; p = p.next) { // clone the remaining (preceding) nodes
int k = p.hash & sizeMask;
HashEntry<K,V> n = newTable[k]; // each clone is inserted at the head of its bucket's list
newTable[k] = new HashEntry<K,V>(p.key, p.hash,
n, p.value);
}
}
}
}
table = newTable;
}
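The reuse trick above relies on the fact that, after a power-of-two doubling, a node's index either stays the same or moves by exactly oldCapacity. A tiny standalone check (the capacities and hashes are arbitrary examples) makes this explicit:
// sanity check of the power-of-two expansion property used by rehash()
int oldCapacity = 8;
int newCapacity = oldCapacity << 1;
for (int hash : new int[] { 3, 11, 100, -7 }) {
    int oldIdx = hash & (oldCapacity - 1);
    int newIdx = hash & (newCapacity - 1);
    // the new index differs from the old one only in the bit selected by oldCapacity
    assert newIdx == oldIdx || newIdx == oldIdx + oldCapacity;
}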
Segment's remove method:
/**
* Remove; match on key only if value null, else match both.
*/
V remove(Object key, int hash, Object value) {
lock();
try {
int c = count - 1;
HashEntry<K,V>[] tab = table;
int index = hash & (tab.length - 1);
HashEntry<K,V> first = tab[index];
HashEntry<K,V> e = first;
while (e != null && (e.hash != hash || !key.equals(e.key)))
e = e.next;
V oldValue = null;
if (e != null) {
V v = e.value;
if (value == null || value.equals(v)) {
oldValue = v; // because the next pointer is final, all HashEntrys before the removed node must be cloned
// All entries following removed node can stay
// in list, but all preceding ones need to be
// cloned.
++modCount;
HashEntry<K,V> newFirst = e.next; // walk from the head to the node being removed, cloning each node onto the new head
for (HashEntry<K,V> p = first; p != e; p = p.next)
newFirst = new HashEntry<K,V>(p.key, p.hash,
newFirst, p.value);
tab[index] = newFirst;
count = c; // write-volatile
}
}
return oldValue;
} finally {
unlock();
}
}
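The prefix-cloning idea in remove can be isolated into a small sketch (the Node class and removeNode helper below are hypothetical, not the JDK's HashEntry): nodes after the removed entry are shared as-is, while nodes before it are copied onto the new head, ending up in reverse order.
// hypothetical immutable-next list, mimicking HashEntry's final next field
final class Node<T> {
    final T value;
    final Node<T> next;
    Node(T value, Node<T> next) { this.value = value; this.next = next; }
}

final class Lists {
    // remove target from the list headed by first, reusing everything after target
    static <T> Node<T> removeNode(Node<T> first, Node<T> target) {
        Node<T> newFirst = target.next;                // the tail after target is shared unchanged
        for (Node<T> p = first; p != target; p = p.next)
            newFirst = new Node<T>(p.value, newFirst); // clone each preceding node onto the new head
        return newFirst;
    }
}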
public boolean isEmpty() {
final Segment<K,V>[] segments = this.segments;
/*
* We keep track of per-segment modCounts to avoid ABA
* problems in which an element in one segment was added and
* in another removed during traversal, in which case the
* table was never actually empty at any point. Note the
* similar use of modCounts in the size() and containsValue()
* methods, which are the only other methods also susceptible
* to ABA problems.
*/
int[] mc = new int[segments.length];
int mcsum = 0;
for (int i = 0; i < segments.length; ++i) {
if (segments[i].count != 0) // if any segment's count != 0, the map is definitely not empty
return false;
else
mcsum += mc[i] = segments[i].modCount; // record each segment's modCount
}
// If mcsum happens to be zero, then we know we got a snapshot
// before any modifications at all were made. This is
// probably common enough to bother tracking.
if (mcsum != 0) { // if mcsum != 0, modifications have happened at some point, so verify the snapshot again
for (int i = 0; i < segments.length; ++i) {
if (segments[i].count != 0 || // re-check the count
mc[i] != segments[i].modCount) // or the modCount changed while we were scanning
return false;
}
}
return true;
}
/**
* Returns the number of key-value mappings in this map. If the
* map contains more than <tt>Integer.MAX_VALUE</tt> elements, returns
* <tt>Integer.MAX_VALUE</tt>.
*
* @return the number of key-value mappings in this map
*/
public int size() {
final Segment<K,V>[] segments = this.segments;
long sum = 0;
long check = 0;
int[] mc = new int[segments.length];
// Try a few times to get accurate count. On failure due to
// continuous async changes in table, resort to locking.
for (int k = 0; k < RETRIES_BEFORE_LOCK; ++k) { // retry at most RETRIES_BEFORE_LOCK times; if other threads keep modifying the map during the scan, fall back to locking
check = 0;
sum = 0;
int mcsum = 0;
for (int i = 0; i < segments.length; ++i) {
sum += segments[i].count; // sum up the counts
mcsum += mc[i] = segments[i].modCount; // record each segment's modCount
}
if (mcsum != 0) { // modifications happened during the scan
for (int i = 0; i < segments.length; ++i) {
check += segments[i].count; // recompute the total as a check
if (mc[i] != segments[i].modCount) { // if any modCount changed, the snapshot is invalid and we must retry
check = -1; // force retry
break;
}
}
}
if (check == sum) // the two passes agree, we are done
break;
}
if (check != sum) { // Resort to locking all segments and recomputing
sum = 0;
for (int i = 0; i < segments.length; ++i)
segments[i].lock();
for (int i = 0; i < segments.length; ++i)
sum += segments[i].count;
for (int i = 0; i < segments.length; ++i)
segments[i].unlock();
}
if (sum > Integer.MAX_VALUE)
return Integer.MAX_VALUE;
else
return (int)sum;
}
/**
* Removes the key (and its corresponding value) from this map.
* This method does nothing if the key is not in the map.
*
* @param key the key that needs to be removed
* @return the previous value associated with <tt>key</tt>, or
* <tt>null</tt> if there was no mapping for <tt>key</tt>
* @throws NullPointerException if the specified key is null
*/
public V remove(Object key) {
int hash = hash(key.hashCode());
return segmentFor(hash).remove(key, hash, null); // passing null as the value tells Segment.remove to delete the mapping regardless of its current value
}

/**
* {@inheritDoc}
*
* @throws NullPointerException if the specified key is null
*/
public boolean remove(Object key, Object value) {
int hash = hash(key.hashCode());
if (value == null) // a null value is not allowed here, so return false; null is reserved for the unconditional remove shown above
return false;
return segmentFor(hash).remove(key, hash, value) != null;
}
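A short usage example (with made-up keys and values) showing the difference between the two overloads:
import java.util.concurrent.ConcurrentHashMap;

ConcurrentHashMap<String, String> map = new ConcurrentHashMap<String, String>();
map.put("k", "v1");
map.remove("k", "v2"); // returns false: the current value is "v1", nothing is removed
map.remove("k", "v1"); // returns true: the value matches, the mapping is removed
map.put("k", "v1");
map.remove("k");       // returns "v1": the mapping is removed regardless of its value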