go-ethereum Source Code Analysis, Part II: The Consensus Algorithm
We start the notes with the consensus engine, Engine.
Engine is a consensus-engine interface that is independent of any concrete algorithm; each engine (ethash, clique, ...) implements it. Its methods (signatures as in consensus/consensus.go around the geth 1.8 line):
Author(header *types.Header) (common.Address, error) — returns the address of the miner who sealed the block with this header.
VerifyHeader(chain ChainReader, header *types.Header, seal bool) error — checks whether the header conforms to the engine's consensus rules; seal controls whether VerifySeal is done in the same pass.
VerifyHeaders(chain ChainReader, headers []*types.Header, seals []bool) (chan<- struct{}, <-chan error) — the batch version of VerifyHeader; it verifies concurrently and returns a quit channel to abort plus a channel that streams the per-header results.
VerifyUncles(chain ChainReader, block *types.Block) error — verifies the block's uncles against the consensus rules.
VerifySeal(chain ChainReader, header *types.Header) error — checks that the header's seal is valid (for ethash, that the proof-of-work holds).
Prepare(chain ChainReader, header *types.Header) error — initializes the consensus fields of the header. The changes are executed inline.
Finalize(chain ChainReader, header *types.Header, state *state.StateDB, txs []*types.Transaction, uncles []*types.Header, receipts []*types.Receipt) (*types.Block, error) — runs the post-transaction state changes (e.g. block and uncle rewards) and assembles the final block.
Seal(chain ChainReader, block *types.Block, results chan<- *types.Block, stop <-chan struct{}) error — generates a sealing request for the block and pushes the outcome into the results channel. Note that it is asynchronous (the method returns immediately and will send the result async), and depending on the consensus algorithm it may deliver more than one block.
SealHash(header *types.Header) common.Hash — the hash of the header prior to it being sealed.
CalcDifficulty(chain ChainReader, time uint64, parent *types.Header) *big.Int — the difficulty adjustment algorithm; returns the difficulty the new block should have.
APIs(chain ChainReader) []rpc.API — returns the RPC APIs this consensus engine provides.
Next comes ethash, geth's proof-of-work engine. The spec's parameters (values as in the ethash specification):

WORD_BYTES = 4                  # bytes in word
DATASET_BYTES_INIT = 2**30      # bytes in dataset at genesis
DATASET_BYTES_GROWTH = 2**23    # dataset growth per epoch
CACHE_BYTES_INIT = 2**24        # bytes in cache at genesis
CACHE_BYTES_GROWTH = 2**17      # cache growth per epoch
CACHE_MULTIPLIER = 1024         # Size of the DAG relative to the cache
EPOCH_LENGTH = 30000            # blocks per epoch
MIX_BYTES = 128                 # width of mix
HASH_BYTES = 64                 # hash length in bytes
DATASET_PARENTS = 256           # number of parents of each dataset element
CACHE_ROUNDS = 3                # number of rounds in cache production
ACCESSES = 64                   # number of accesses in hashimoto loop
Because the cache and the dataset must grow over time, their sizes are not these bounds themselves: each is the largest size below CACHE_BYTES_INIT + CACHE_BYTES_GROWTH * (block_number // EPOCH_LENGTH) (respectively the DATASET_* bound) whose size measured in HASH_BYTES (respectively MIX_BYTES) units is prime — primality avoids regular, cyclic access patterns.
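The spec computes both sizes with a simple downward search; a sketch (isprime is assumed to come from any primality helper, e.g. sympy):

def get_cache_size(block_number):
    sz = CACHE_BYTES_INIT + CACHE_BYTES_GROWTH * (block_number // EPOCH_LENGTH)
    sz -= HASH_BYTES
    # step down by 2 * HASH_BYTES until the hash count is prime
    while not isprime(sz // HASH_BYTES):
        sz -= 2 * HASH_BYTES
    return sz

def get_full_size(block_number):
    sz = DATASET_BYTES_INIT + DATASET_BYTES_GROWTH * (block_number // EPOCH_LENGTH)
    sz -= MIX_BYTES
    # same idea, in units of MIX_BYTES
    while not isprime(sz // MIX_BYTES):
        sz -= 2 * MIX_BYTES
    return sz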
Generating the seed for mkcache — 32 zero bytes, hashed forward once per elapsed epoch:

def get_seedhash(block):
    s = '\x00' * 32
    for i in range(block.number // EPOCH_LENGTH):
        s = serialize_hash(sha3_256(s))
    return s
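The seed therefore only changes at epoch boundaries, and any node can derive it from the block number alone. A toy check (illustrative only):

EPOCH_LENGTH = 30000

def epoch(block_number):
    return block_number // EPOCH_LENGTH

# blocks 0..29999 share epoch 0's seed (and thus its cache/dataset);
# block 30000 opens epoch 1, so the seed is hashed forward once more
assert epoch(29999) == 0
assert epoch(30000) == 1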
How the cache is generated (the spec's Python 2 pseudocode, where map returns a list). Note that the hash ETH calls sha3 is a variant: it is the original Keccak, which differs from the finalized NIST SHA-3 in its padding rule (a concrete digest comparison follows the code below).
def mkcache(cache_size, seed):
    n = cache_size // HASH_BYTES

    # Sequentially produce the initial dataset
    o = [sha3_512(seed)]
    for i in range(1, n):
        o.append(sha3_512(o[-1]))

    # Use a low-round version of randmemohash
    for _ in range(CACHE_ROUNDS):
        for i in range(n):
            v = o[i][0] % n
            o[i] = sha3_512(map(xor, o[(i-1+n) % n], o[v]))

    return o
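To make the Keccak-vs-SHA-3 point concrete: the two functions disagree on every input because of the padding change. A minimal check, assuming pycryptodome is installed (this snippet is an illustration, not geth code):

from Crypto.Hash import SHA3_256, keccak  # pycryptodome, assumed installed

# Ethereum's "sha3" is pre-standardization Keccak-256
print(keccak.new(digest_bits=256, data=b"").hexdigest())
# c5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470

# NIST SHA3-256 pads differently and yields a different digest
print(SHA3_256.new(data=b"").hexdigest())
# a7ffc6f8bf1ed76651c14756a061d662f580ff4de43b49fa82d80a4b80f8434a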
How the dataset is generated. Every 64-byte item is derived from DATASET_PARENTS = 256 pseudo-randomly chosen cache nodes, which is what lets light clients compute individual items on demand from the small cache while miners precompute the full dataset (the DAG).
FNV_PRIME = 0x01000193

def fnv(v1, v2):
    return ((v1 * FNV_PRIME) ^ v2) % 2**32

def calc_dataset_item(cache, i):
    n = len(cache)
    r = HASH_BYTES // WORD_BYTES
    # initialize the mix
    mix = copy.copy(cache[i % n])
    mix[0] ^= i
    mix = sha3_512(mix)
    # fnv it with a lot of random cache nodes based on i
    for j in range(DATASET_PARENTS):
        cache_index = fnv(i ^ j, mix[j % r])
        mix = map(fnv, mix, cache[cache_index % n])
    return sha3_512(mix)

def calc_dataset(full_size, cache):
    return [calc_dataset_item(cache, i) for i in range(full_size // HASH_BYTES)]
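For a sense of scale (using the genesis constants, before the prime adjustment):

DATASET_BYTES_INIT, HASH_BYTES, DATASET_PARENTS = 2**30, 64, 256

items = DATASET_BYTES_INIT // HASH_BYTES   # 16777216 64-byte items, ~1 GiB
reads = items * DATASET_PARENTS            # ~4.3 billion cache reads to build it
print(items, reads)                        # 16777216 4294967296

This cost is why geth persists generated datasets on disk and keeps recent ones in an in-memory LRU (see the Ethash struct below).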
The main algorithm.
Be careful not to confuse s here — the 64-byte value hashed from header and nonce — with the epoch seed above.
def hashimoto(header, nonce, full_size, dataset_lookup):
    n = full_size // HASH_BYTES
    w = MIX_BYTES // WORD_BYTES
    mixhashes = MIX_BYTES // HASH_BYTES
    # combine header+nonce into a 64 byte seed
    s = sha3_512(header + nonce[::-1])
    # start the mix with replicated s
    mix = []
    for _ in range(MIX_BYTES // HASH_BYTES):
        mix.extend(s)
    # mix in random dataset nodes
    for i in range(ACCESSES):
        p = fnv(i ^ s[0], mix[i % w]) % (n // mixhashes) * mixhashes
        newdata = []
        for j in range(MIX_BYTES // HASH_BYTES):
            newdata.extend(dataset_lookup(p + j))
        mix = map(fnv, mix, newdata)
    # compress mix
    cmix = []
    for i in range(0, len(mix), 4):
        cmix.append(fnv(fnv(fnv(mix[i], mix[i+1]), mix[i+2]), mix[i+3]))
    return {
        "mix digest": serialize_hash(cmix),
        "result": serialize_hash(sha3_256(s + cmix))
    }

def hashimoto_light(full_size, cache, header, nonce):
    return hashimoto(header, nonce, full_size,
                     lambda x: calc_dataset_item(cache, x))

def hashimoto_full(full_size, dataset, header, nonce):
    return hashimoto(header, nonce, full_size, lambda x: dataset[x])
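This light/full split is exactly what makes cheap verification possible: a verifier holding only the ~16 MB cache can check a block with one hashimoto_light call instead of storing the ~1 GB dataset. A sketch of such a check (verify_pow, decode_int and the argument names are illustrative assumptions, not geth identifiers):

def verify_pow(block_number, header_hash, nonce, mix_digest, difficulty, cache):
    full_size = get_full_size(block_number)
    out = hashimoto_light(full_size, cache, header_hash, nonce)
    # the header's mixDigest must match the recomputed mix
    if out["mix digest"] != mix_digest:
        return False
    # geth interprets result as a big-endian 256-bit integer and requires
    # result <= 2**256 // difficulty; decode_int stands in for that here
    return decode_int(out["result"]) <= 2**256 // difficulty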
def mine(full_size, dataset, header, difficulty):
    # zero-pad target to compare with hash on the same digit when reversed
    target = zpad(encode_int(2**256 // difficulty), 64)[::-1]
    from random import randint
    nonce = randint(0, 2**64)
    while hashimoto_full(full_size, dataset, header, nonce)["result"] > target:
        nonce = (nonce + 1) % 2**64
    return nonce
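The loop's exit condition is what prices a block: modeling the result as uniform over [0, 2**256), a single nonce succeeds with probability roughly 1/difficulty, so mining costs about difficulty hashimoto evaluations on average. Illustrative numbers only:

from math import log2

difficulty = 2_000_000_000_000   # hypothetical difficulty, not a real chain value
target = 2**256 // difficulty
# P(result <= target) ~= 1/difficulty => expected trials ~= difficulty
print(f"~{log2(difficulty):.1f} bits of work per sealed block")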
// Ethash is a consensus engine based on proof-of-work implementing the ethash
// algorithm.
type Ethash struct {
    config Config

    caches   *lru // In memory caches to avoid regenerating too often
    datasets *lru // In memory datasets to avoid regenerating too often

    // Mining related fields
    rand     *rand.Rand    // Properly seeded random source for nonces
    threads  int           // Number of threads to mine on if mining
    update   chan struct{} // Notification channel to update mining parameters
    hashrate metrics.Meter // Meter tracking the average hashrate

    // Remote sealer related fields
    workCh       chan *sealTask   // Notification channel to push new work and relative result channel to remote sealer
    fetchWorkCh  chan *sealWork   // Channel used for remote sealer to fetch mining work
    submitWorkCh chan *mineResult // Channel used for remote sealer to submit their mining result
    fetchRateCh  chan chan uint64 // Channel used to gather submitted hash rate for local or remote sealer.
    submitRateCh chan *hashrate   // Channel used for remote sealer to submit their mining hashrate

    // The fields below are hooks for testing
    shared    *Ethash       // Shared PoW verifier to avoid cache regeneration
    fakeFail  uint64        // Block number which fails PoW check even in fake mode
    fakeDelay time.Duration // Time delay to sleep for before returning from verify

    lock      sync.Mutex      // Ensures thread safety for the in-memory caches and mining fields
    closeOnce sync.Once       // Ensures exit channel will not be closed twice.
    exitCh    chan chan error // Notification channel to exiting backend threads
}
Ethash has, among others, the following methods in ethash.go:
1. cache
2. dataset
Both resolve the structure for the requested epoch by first checking the in-memory LRU, then the DAG files on disk, and only generate it from scratch if neither holds a copy.
3. Hashrate
Aggregates the rate of search invocations (per second) over the last minute, from the local miner and from remote peers that have submitted their hashrate.
SUGAR:
Go non-blocking channel operations.
If a channel happens to be ready to send or receive, the corresponding case runs; otherwise select falls through to default. This is the pattern behind the many channels in the Ethash struct above:
select {
case msg := <-messages:
fmt.Println("received message", msg)
case sig := <-signals:
fmt.Println("received signal", sig)
default:
fmt.Println("no activity")
}