In March 2007, Blaise Aguera y Arcas presented Seadragon & Photosynth at TED, which created quite a buzz around the web. About a year later, in March 2008, Microsoft released Deep Zoom (formerly Seadragon) as a «killer feature» of their Silverlight 2 (Beta) launch at MIX08. Following this event, there was quite some back and forth in the blogosphere (damn, I hate that word) about the true innovation behind Microsoft's Deep Zoom.

Today, I don't want to get into the same kind of discussion, but rather start a series that will give you a «behind the scenes» look at Microsoft's Deep Zoom and similar technologies.

This first part of «Inside Deep Zoom» introduces the main ideas & concepts behind Deep Zoom. In part two, I'll talk about some of the mathematics involved and finally, part three will feature a discussion of the possibilities of this kind of technology and a demo of something you probably haven't seen yet.

Background

As part of my awesome internship at Zoomorama in Paris, I was working on some amazing things (of which you'll hopefully hear soon), and in my spare time I decided to have a closer look at Deep Zoom (formerly Seadragon). That's when I did a lot of research on the topic and had the idea for this series, in which I want to share what I learned.

Introduction

Let's begin with a quote from Blaise Aguera y Arcas' demo of Seadragon at the TED conference [1]:

«…the only thing that ought to limit the performance of a system like this one is the number of pixels on your screen at any given moment.»

What is this supposed to mean? See, I have a 24" screen with a maximum resolution of 1920 x 1200 pixels. Now let's take a photo from my digital camera, which shoots at 10 megapixels. Such a photo is typically 3872 x 2592 pixels. When I get the photo onto my computer, I roughly end up with something that looks like this:

No matter how I put it, I'll never be able to see the entire 10 megapixel photo at 100% magnification on my 2.3 megapixel screen. Although this might seem obvious, let's take the time to look at it from another angle: once we accept this, we no longer care whether an image has 10 megapixels (that is, 10'000'000 pixels) or 10 gigapixels (10'000'000'000 pixels), since the number of pixels we can see at any moment is limited by the resolution of our screen. This, in turn, means that looking at a 10 megapixel image and a 10 gigapixel image on the same computer screen should give the same performance. The same should hold for looking at the same two images on a mobile device such as the iPhone. Note, however, that with reference to the quote above, we might experience a performance difference between the two devices, since they differ in the number of pixels they can display.
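To put some numbers on this, here's a quick back-of-the-envelope sketch (plain Python, using the resolutions mentioned above):

    # Rough numbers from the example above.
    screen = 1920 * 1200          # what my screen can show at once (~2.3 megapixels)
    photo = 3872 * 2592           # the ~10 megapixel photo
    gigapixel = 10_000_000_000    # a hypothetical 10 gigapixel image

    print(f"screen shows at most {screen:,} pixels at a time")
    print(f"photo: {photo / screen:.1f}x more pixels than fit on the screen")
    print(f"10 gigapixel image: {gigapixel / screen:,.0f}x more pixels than fit on the screen")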

So how do we manage to make the performance of displaying image data independent of its resolution? This is where the concept of the image pyramid comes in.

The Image Pyramid

Deep Zoom, or for that matter any other similar technology such as Zoomorama, Zoomify, Google Maps etc., uses something called an image pyramid as a basic building block for displaying large images in an efficient way:

The picture above illustrates the layout of such an image pyramid. A typical image pyramid serves two purposes: it stores an image of any size at many different resolutions (hence the term multi-scale), and it stores each of these resolutions sliced up into many parts, referred to as tiles.
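To make this layout a bit more tangible, here is a small sketch that lists the levels such a pyramid could contain for the 10 megapixel photo from before. The halving of each level and the 256 x 256 pixel tile size are assumptions for illustration, not necessarily Deep Zoom's exact parameters (those are the topic of part two):

    import math

    def pyramid_levels(width, height, tile_size=256):
        """Yield (width, height, columns, rows) for every level, largest first."""
        while True:
            cols = math.ceil(width / tile_size)
            rows = math.ceil(height / tile_size)
            yield width, height, cols, rows
            if width == 1 and height == 1:
                break
            # Each level is assumed to be half the size of the one above it.
            width = max(1, math.ceil(width / 2))
            height = max(1, math.ceil(height / 2))

    for w, h, cols, rows in pyramid_levels(3872, 2592):
        print(f"{w:>5} x {h:<5} pixels -> {cols:>3} x {rows:<3} tiles")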

Because the pyramid stores the original image (redundantly) at different resolutions, we can display the resolution that is closest to the one we need and, in case the entire image doesn't fit on our screen, load only the parts of the image (tiles) that are actually visible. Setting the parameters of our pyramid, such as the number of levels and the tile size, allows us to control the required data transfer.
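As a minimal sketch of what a viewer could do with such a pyramid (pick a suitable level, then load only the tiles that intersect the viewport), consider the following; the level numbering and the halving-per-level scheme are again simplifying assumptions of mine rather than Deep Zoom's exact conventions:

    import math

    def choose_level(zoom, max_level):
        """Pick the lowest-resolution level that still has at least the requested resolution.

        zoom = 1.0 means 100% magnification of the original; every level below the
        top is assumed to hold the image at half the previous level's resolution.
        """
        halvings = math.floor(-math.log2(zoom)) if 0 < zoom < 1 else 0
        return max(0, max_level - halvings)

    def visible_tiles(view_x, view_y, view_w, view_h, tile_size=256):
        """Return (column, row) indices of the tiles that intersect the viewport,
        all expressed in the chosen level's pixel coordinates."""
        first_col, last_col = view_x // tile_size, (view_x + view_w - 1) // tile_size
        first_row, last_row = view_y // tile_size, (view_y + view_h - 1) // tile_size
        return [(col, row) for row in range(first_row, last_row + 1)
                           for col in range(first_col, last_col + 1)]

    # Example: a 1920 x 1200 viewport offset by (1000, 800) into some level.
    tiles = visible_tiles(1000, 800, 1920, 1200)
    print(len(tiles), "tiles to load, e.g.", tiles[:3], "...")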

Image pyramids are the result of a space vs. bandwidth trade-off that is often found in computer science. The image pyramid obviously has a bigger file size than its single-image counterpart (to find out exactly how much, be sure to come back for part two, or see the rough sketch a bit further below) but, as you can see in the illustration below, in terms of bandwidth it's much more efficient at displaying high-resolution images, where most parts of the image are typically not visible anyway (grey area):

As you can see in the picture above, there is still more data loaded (colored area) than is absolutely necessary to display everything that is visible on the screen. This is where the image pyramid parameters I mentioned before come into play: tile size and number of levels determine the relationship between the amount of storage, the number of network connections, and the bandwidth required for displaying high-resolution images.
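To get a rough feel for these relationships, here is one last sketch. It estimates how much larger the pyramid is than the single image it was built from, and then compares a few tile sizes for the 1920 x 1200 viewport from my screen (smaller tiles waste fewer loaded-but-invisible pixels but need more requests). The viewport offset, the tile sizes and the halving-per-level scheme are illustrative assumptions, not Deep Zoom's exact parameters; part two covers the precise numbers:

    # Storage: total pixels in the pyramid vs. the original image. Every level is
    # assumed to hold the image at half the width and height of the level above,
    # so the series 1 + 1/4 + 1/16 + ... approaches 4/3 (about a third extra).
    w, h = 3872, 2592
    total, level_w, level_h = 0, w, h
    while True:
        total += level_w * level_h
        if level_w == 1 and level_h == 1:
            break
        level_w, level_h = max(1, (level_w + 1) // 2), max(1, (level_h + 1) // 2)
    print(f"pyramid stores {total / (w * h):.3f}x the pixels of the single image")

    # Bandwidth vs. number of requests: tiles touched by a 1920 x 1200 viewport
    # offset by (1000, 800) within some level, for a few different tile sizes.
    view_x, view_y, view_w, view_h = 1000, 800, 1920, 1200
    for tile_size in (128, 256, 512):
        cols = (view_x + view_w - 1) // tile_size - view_x // tile_size + 1
        rows = (view_y + view_h - 1) // tile_size - view_y // tile_size + 1
        loaded = cols * rows * tile_size * tile_size
        visible = view_w * view_h
        print(f"tile size {tile_size:>3}: {cols * rows:>3} tile requests, "
              f"{1 - visible / loaded:.0%} of loaded pixels never shown")

In this particular example, going from 128 to 512 pixel tiles cuts the number of requests from 160 to 15, but roughly triples the share of loaded pixels that are never shown.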

Next

Well, this was it for part one of Inside Deep Zoom. I hope you enjoyed this short introduction to image pyramids & multi-scale imaging. If you want to find out more, as usual, I've collected some links in the Further Reading section. Other than that, be sure to come back, as the next part of this series – part two – will discuss the characteristics of the Deep Zoom image pyramid and I will show you some of the mathematics behind it.

Further Reading

References

[1] Blaise Aguera y Arcas: Seadragon and Photosynth demo, TED, March 2007.
