Original article: http://www.anandtech.com/show/9969/jedec-publishes-hbm2-specification

The high-bandwidth memory (HBM) technology solves two key problems related to modern DRAM: it substantially increases bandwidth available to computing devices (e.g., GPUs) and reduces power consumption. The first-generation HBM has a number of limitations when it comes to capacity and clock-rates. However, the second-gen HBM promises to eliminate them.

JEDEC, a major semiconductor engineering trade organization that sets standards for DRAM, recently published the final specification of the second-generation HBM (HBM2), which means that members of the organization have ratified the standard. The new memory technology builds upon the foundation of the original JESD235 standard, which describes stacked memory devices interconnected using through-silicon vias (TSVs) with a very wide input/output (I/O) interface operating at moderate data rates. The JESD235A update will help engineers further increase the performance, capacity and capabilities of HBM memory chips. HBM Gen 2 will be particularly useful for the upcoming video cards from AMD and NVIDIA, which, thanks to HBM2, can feature 512 GB/s to 1 TB/s of memory bandwidth and 8, 16 or even 32 GB of memory onboard.

HBM Gen 1: Good, But With Limitations

The original JESD235 standard defines first-generation HBM (HBM1) memory chips with a 1024-bit interface and up to a 1 Gb/s data rate per pin, which stack two, four or eight DRAM devices with two 128-bit channels per device on a base logic die. Each HBM stack (also called a KGSD — known good stacked die) supports up to eight 128-bit channels because its physical interface is limited to 1024 bits. Every channel is essentially a 128-bit DDR interface with a 2n prefetch architecture (256 bits per memory read and write access) that has its own DRAM banks (8 or 16 banks, depending on density), command and data interface, clock rate, timings, etc. Each channel can work independently of the other channels in the stack, even those within the same DRAM die. HBM stacks use passive silicon interposers to connect to host processors (e.g., GPUs). For more information about HBM, check out our article “AMD Dives Deep On High Bandwidth Memory — What Will HBM Bring AMD?”.
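
To make the channel arithmetic concrete, here is a minimal Python sketch that computes the access granularity and peak bandwidth of an HBM1 stack from the JESD235 figures quoted above (the helper names are ours, for illustration only):

```python
# Back-of-the-envelope HBM1 stack arithmetic, using the JESD235 figures
# quoted above. Helper names are illustrative, not from the spec.

CHANNELS_PER_STACK = 8      # up to eight independent channels per KGSD
CHANNEL_WIDTH_BITS = 128    # each channel is a 128-bit DDR interface
PREFETCH = 2                # 2n prefetch architecture

def access_granularity_bits():
    """Bits transferred per read/write access on one channel."""
    return CHANNEL_WIDTH_BITS * PREFETCH                   # 128 * 2 = 256 bits

def stack_bandwidth_gbps(data_rate_gbps_per_pin):
    """Peak stack bandwidth in GB/s at a given per-pin data rate."""
    total_pins = CHANNELS_PER_STACK * CHANNEL_WIDTH_BITS   # 1024-bit interface
    return total_pins * data_rate_gbps_per_pin / 8         # bits -> bytes

print(access_granularity_bits())    # 256 bits per access
print(stack_bandwidth_gbps(1.0))    # 128.0 GB/s at HBM1's 1 Gb/s per pin
```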

HBM Gen 1 memory KGSDs produced by SK Hynix (the only company that makes them commercially) stack four 2 Gb memory dies and operate at a 1 Gb/s data rate per pin. AMD uses these KGSDs, each with 1 GB capacity and 128 GB/s of peak bandwidth, to build its Fiji GPU system-in-packages (SiPs) and the Radeon R9 Fury/R9 Nano video cards. The graphics adapters have 4 GB of VRAM onboard, not a lot for 2016. While AMD's flagship video cards do not seem to have capacity issues right now, 4 GB of memory per graphics adapter is a limitation going forward. AMD's latest graphics cards sport 512 GB/s of memory bandwidth, a massive amount by today's standards, but even that could become a constraint for future high-end GPUs.
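
As a quick sanity check on those numbers, here is the same arithmetic applied to Fiji's configuration (a worked example based on the figures above, not on any vendor documentation):

```python
# Worked example: AMD Fiji's memory subsystem built from four HBM1 KGSDs.
STACKS = 4                # four KGSDs sit on Fiji's interposer
DIES_PER_STACK = 4        # each KGSD stacks four 2 Gb DRAM dies
DIE_CAPACITY_GBIT = 2
BW_PER_STACK_GBPS = 128   # at 1 Gb/s per pin (see the sketch above)

capacity_gb = STACKS * DIES_PER_STACK * DIE_CAPACITY_GBIT / 8  # Gbit -> GB
print(capacity_gb)                  # 4.0 GB of VRAM
print(STACKS * BW_PER_STACK_GBPS)   # 512 GB/s aggregate bandwidth
```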

HBM Gen 2: Good Thing Gets Better

The second-generation HBM (HBM2) technology, which is outlined by the JESD235A standard, inherits the physical 128-bit DDR interface with 2n prefetch architecture, the internal organization, the 1024-bit input/output, and the 1.2 V I/O and core voltages, as well as all the crucial parts of the original tech. Just like its predecessor, HBM2 supports two, four or eight DRAM devices on a base logic die (2Hi, 4Hi or 8Hi stacks) per KGSD. HBM Gen 2 expands the capacity of DRAM devices within a stack to 8 Gb and increases supported data rates to 1.6 Gb/s or even 2 Gb/s per pin. In addition, the new technology brings an important improvement to maximize actual bandwidth.
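
To illustrate how stack height, die density and per-pin data rate combine under JESD235A, consider this small sketch (our own helper; the 8 Gb die and 2 Gb/s defaults are the maximums quoted above):

```python
# HBM2 stack configurations under JESD235A: 2Hi/4Hi/8Hi stacks of up to
# 8 Gb dies at up to 2 Gb/s per pin over the same 1024-bit interface.

def hbm2_stack(dies, die_gbit=8, rate_gbps_per_pin=2.0):
    """Return (capacity in GB, peak bandwidth in GB/s) for one KGSD."""
    assert dies in (2, 4, 8), "JESD235A defines 2Hi, 4Hi and 8Hi stacks"
    capacity_gb = dies * die_gbit / 8               # Gbit -> GB
    bandwidth_gbps = 1024 * rate_gbps_per_pin / 8   # bits -> bytes
    return capacity_gb, bandwidth_gbps

for height in (2, 4, 8):
    print(height, hbm2_stack(height))
# 2Hi -> (2.0 GB, 256.0 GB/s); 4Hi -> (4.0, 256.0); 8Hi -> (8.0, 256.0)
```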

One of the key enhancements of HBM2 is its Pseudo Channel mode, which divides a channel into two individual sub-channels of 64-bit I/O each, providing 128-bit prefetch per memory read and write access for each one. Pseudo channels operate at the same clock rate and share the row and column command bus as well as the CK and CKE inputs. However, they have separate banks, and they decode and execute commands individually. SK Hynix says that the Pseudo Channel mode optimizes memory accesses and lowers latency, which results in higher effective bandwidth.
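
The practical effect on access granularity can be sketched as follows (a deliberately simplified model; real memory controllers are far more involved):

```python
# Access granularity in Legacy vs Pseudo Channel mode (simplified model).
# Legacy: one 128-bit channel with 2n prefetch -> 256 bits per access.
# Pseudo Channel: two 64-bit sub-channels, 128-bit prefetch each.

def granularity_bits(mode):
    if mode == "legacy":
        return 128 * 2   # full channel width times 2n prefetch
    if mode == "pseudo":
        return 64 * 2    # per sub-channel: 64-bit I/O, 128 bits per access
    raise ValueError(mode)

print(granularity_bits("legacy"))   # 256 bits per access
print(granularity_bits("pseudo"))   # 128 bits: finer-grained accesses can
                                    # waste less bandwidth on small requests
```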

If, for some reason, an ASIC developer believes that Pseudo Channel mode is not optimal for their product, then HBM2 chips can also work in Legacy mode. While memory makers expect HBM2 to deliver higher effective bandwidth than its predecessor, how efficient next-generation memory sub-systems turn out to be will depend on the developers of memory controllers. In any case, we will need to test actual hardware before we can confirm that HBM2 is better than HBM1 at the same clock rate.

Additional improvements of HBM2 over first-gen HBM include lane remapping modes for hard and soft repair of lanes (HBM1 supports various DRAM cell test and repair techniques to improve yields of stacks, but not lane remapping), anti-overheating protection (a KGSD can alert memory controllers of unsafe temperatures) and several other features.

The second-generation HBM memory will be produced using newer manufacturing technologies than the first-gen HBM. For example, SK Hynix uses its 29nm process to make DRAM dies for its HBM1 stacks; for HBM2 memory, the company intends to use its 21nm process. Thanks to newer manufacturing technologies and higher effective bandwidth, HBM2 should be more energy efficient than HBM1 even at its higher data rates, but we do not have exact details at this point. In any case, HBM2 is likely to be more energy efficient than GDDR5 and GDDR5X, hence the odds are good that it will be the memory of choice for high-end graphics cards in the future.

Samsung Electronics this week said that it had begun mass production of HBM2 memory, but did not reveal many details. Samsung's HBM2 KGSD features 4 GB of capacity and a 2 Gb/s data rate per pin, and is based on four 8 Gb DRAM dies. The memory chips will let device manufacturers build SiPs with up to 16 GB of memory. It is noteworthy that Samsung decided to use 8 Gb DRAM dies for its HBM2 stacks. Such a decision looks quite logical, since with 8 Gb DRAM ICs the company can relatively easily increase or decrease the capacity of its KGSDs by altering the number of DRAM layers. The DRAM maker uses its 20nm process to produce its HBM2 KGSDs. Unfortunately, Samsung did not reveal the actual power consumption of the new memory stacks.
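
Running Samsung's announced figures through the same arithmetic shows how those SiP numbers come together (a worked example):

```python
# Samsung's announced HBM2 KGSD: four 8 Gb dies at 2 Gb/s per pin.
dies, die_gbit, rate_gbps_per_pin = 4, 8, 2.0

stack_capacity_gb = dies * die_gbit / 8          # 4.0 GB per KGSD
stack_bw_gbps = 1024 * rate_gbps_per_pin / 8     # 256.0 GB/s per KGSD

# A SiP carrying four such stacks:
print(4 * stack_capacity_gb)   # 16.0 GB of memory
print(4 * stack_bw_gbps)       # 1024.0 GB/s, i.e. ~1 TB/s aggregate
```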


HBM2 memory stacks are not only faster and more capacious than HBM1 KGSDs, but they are also larger. SK Hynix's HBM1 package has dimensions of 5.48 mm × 7.29 mm (39.94 mm2), while the company's HBM2 chip will measure 7.75 mm × 11.87 mm (91.99 mm2). In addition, HBM2 stacks will be taller (0.695 mm/0.72 mm/0.745 mm vs. 0.49 mm) than HBM1 KGSDs, which may require developers of ASICs (e.g., GPUs) to install a heat-spreader on their SiPs to compensate for any difference in height between the memory stacks and the GPU die, to protect the DRAM, and to guarantee sufficient cooling for the high bandwidth memory.

The larger footprint of second-gen HBM means that the upcoming SiPs with multiple memory stacks will require larger silicon interposers, and are therefore going to be slightly more expensive than SiPs based on first-gen HBM. Since the geometric parameters of the staggered microbump pattern are the same for HBM1 and HBM2, the complexity of the passive silicon interposers will remain the same for both types of memory. The good news is that only two HBM2 stacks are needed to enable 512 GB/s of bandwidth, which implies that, from a bandwidth-per-mm2 point of view, the new memory tech continues to be very efficient.
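
The bandwidth-per-area comparison behind that claim works out as follows (our own back-of-the-envelope calculation from the package dimensions above):

```python
# Bandwidth per unit of package area, from the dimensions quoted above.
hbm1_area_mm2 = 5.48 * 7.29    # ~39.94 mm2 at 128 GB/s per stack
hbm2_area_mm2 = 7.75 * 11.87   # ~91.99 mm2 at up to 256 GB/s per stack

print(128 / hbm1_area_mm2)     # ~3.2 GB/s per mm2 for HBM1
print(256 / hbm2_area_mm2)     # ~2.8 GB/s per mm2 for HBM2 at 2 Gb/s per pin
# HBM2's per-package density dips slightly at these figures, but two stacks
# occupying under 200 mm2 still deliver the 512 GB/s that takes Fiji four.
```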


A slide by FormFactor and Teradyne from their presentation at Semiconductor Wafer Test Workshop 2015

Since SK Hynix's HBM1 KGSDs are smaller than the company's HBM2 stacks, they will retain an advantage over second-gen high-bandwidth memory in small form-factor SiPs. As a result, the South Korea-based DRAM maker may keep its HBM1 chips in production for some time.

New Use Cases and Industry Support

Thanks to higher capacities and data rates, HBM2 memory stacks will be quite flexible when it comes to configurations. For example, it will be possible to build a 2 GB KGSD with 256 GB/s of bandwidth using only two 8 Gb memory dies. Such a memory stack could be used for graphics adapters designed for notebooks or ultra-small personal computers, or as an external cache for a hybrid microprocessor with built-in graphics (in the same manner as Intel uses its eDRAM cache to boost the performance of its integrated graphics processors). What remains to be seen is the cost of HBM2 stacks that deliver 256 GB/s of bandwidth. If HBM2 and the necessary interposer remain as expensive as HBM1, the technology will likely continue to be used only for premium solutions.

Thanks to the variety of KGSD configurations prepared by DRAM manufacturers, expect new types of devices to start using HBM2. Samsung and SK Hynix believe that, in addition to graphics and HPC (high-performance computing) cards, various server, networking and other applications will utilize the new type of memory. As of September 2015, more than 10 companies were developing system-on-chips (including ASICs, x86 processors, ASSPs and FPGAs) with HBM support, according to SK Hynix.

The first-generation HBM memory delivers great bandwidth and energy efficiency, but it is produced by only one maker of DRAM and is not widely supported by developers of various ASICs. By contrast, Samsung Electronics and SK Hynix, two companies that together control well over 50% of the global DRAM output, will make HBM2. Micron Technology has yet to confirm its plans to build HBM2, but since this is an industry-standard type of memory, the door is open should the company wish to produce it.

Overall, industry support for the high bandwidth memory technology is growing: more than 10 companies are working on SoCs with HBM support, and the leading DRAM makers are gearing up to produce HBM2. The potential of the second-gen HBM seems rather high, but cost remains a major concern. Regardless, it will be extremely interesting to see next-generation graphics cards from AMD and NVIDIA featuring HBM2 DRAM and find out what they are capable of thanks to the new Polaris and Pascal architectures as well as the new type of memory.
