Instead of distinguishing between code, integration and system testing, Google uses the language of small, medium and large tests, emphasizing scope over form. Small tests cover small amounts of code and so on. Each of the three engineering roles may execute any of these types of tests, and they may be performed as automated or manual tests.

Small Tests are mostly (but not always) automated and exercise the code within a single function or module. They are most likely written by a SWE or an SET and may require mocks and faked environments to run, but TEs often pick these tests up when they are trying to diagnose a particular failure. For small tests the focus is on typical functional issues such as data corruption, error conditions and off-by-one errors. The question a small test attempts to answer is: does this code do what it is supposed to do?
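
The post itself contains no code, but a minimal sketch may help picture the scale. The example below uses Python's standard unittest with a mock standing in for the faked environment; the load_user function and its storage dependency are invented for illustration, not anything from Google's codebase.

    import unittest
    from unittest import mock

    # Hypothetical code under test: a loader that depends on a storage client.
    def load_user(storage, user_id):
        record = storage.get("user/%d" % user_id)
        if record is None:
            raise KeyError("no such user: %d" % user_id)
        return record["name"]

    class LoadUserSmallTest(unittest.TestCase):
        """A small test: a single function, with the environment faked out."""

        def test_returns_name_for_known_user(self):
            storage = mock.Mock()
            storage.get.return_value = {"name": "ada"}
            self.assertEqual(load_user(storage, 7), "ada")
            storage.get.assert_called_once_with("user/7")

        def test_raises_for_missing_user(self):
            # Typical small-test focus: error conditions, not integration.
            storage = mock.Mock()
            storage.get.return_value = None
            with self.assertRaises(KeyError):
                load_user(storage, 7)

    if __name__ == "__main__":
        unittest.main()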

Medium Tests can be automated or manual and involve two or more features, specifically covering the interaction between those features. I've heard any number of SETs describe this as "testing a function and its nearest neighbors." SETs drive the development of these tests early in the product cycle as individual features are completed, and SWEs are heavily involved in writing, debugging and maintaining the actual tests. If a test fails or breaks, the developer takes care of it autonomously. Later in the development cycle TEs may perform medium tests either manually (in the event the test is difficult or prohibitively expensive to automate) or with automation. The question a medium test attempts to answer is: does a set of near-neighbor functions interoperate with each other the way they are supposed to?
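
As a hedged illustration of "a function and its nearest neighbors," here is a sketch in the same style: two invented features, a Store and a read-through CachingStore, are exercised together with no mocks between them, and the assertion targets their interaction rather than either feature alone.

    import unittest

    class Store:
        """Hypothetical feature 1: a trivial key/value store."""
        def __init__(self):
            self._data = {}
            self.reads = 0
        def put(self, key, value):
            self._data[key] = value
        def get(self, key):
            self.reads += 1
            return self._data.get(key)

    class CachingStore:
        """Hypothetical feature 2: a read-through cache layered on a Store."""
        def __init__(self, store):
            self._store = store
            self._cache = {}
        def get(self, key):
            if key not in self._cache:
                self._cache[key] = self._store.get(key)
            return self._cache[key]

    class CacheStoreMediumTest(unittest.TestCase):
        """A medium test: real Store, real CachingStore, checking how they interact."""

        def test_second_read_is_served_from_cache(self):
            store = Store()
            store.put("k", "v")
            cached = CachingStore(store)
            self.assertEqual(cached.get("k"), "v")
            self.assertEqual(cached.get("k"), "v")
            # The interaction under test: only one read reached the store.
            self.assertEqual(store.reads, 1)

    if __name__ == "__main__":
        unittest.main()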

Large Tests cover three or more (usually more) features and represent real user scenarios to the extent possible. There is some concern with overall integration of the features, but large tests tend to be more results-driven, i.e., did the software do what the user expects? All three roles are involved in writing large tests, and everything from automation to exploratory testing can be the vehicle to accomplish it. The question a large test attempts to answer is: does the product operate the way a user would expect?
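
A large test is hard to show in a few lines, but the following sketch conveys the shape: an invented toy NotesApp stands in for the product, and the test walks an entire user journey (sign up, save a note, read it back), judging only the end result. A real large test would drive the deployed product through its UI or public API instead of an in-process object.

    import unittest

    # A hypothetical toy application stitching several features together:
    # accounts, note storage, and retrieval.
    class NotesApp:
        def __init__(self):
            self._users = set()
            self._notes = {}
        def sign_up(self, user):
            self._users.add(user)
            self._notes[user] = []
        def add_note(self, user, text):
            if user not in self._users:
                raise PermissionError("unknown user")
            self._notes[user].append(text)
        def list_notes(self, user):
            return list(self._notes.get(user, []))

    class NotesScenarioLargeTest(unittest.TestCase):
        """A large test: a whole user scenario, judged by the end result."""

        def test_new_user_can_save_and_see_a_note(self):
            app = NotesApp()
            app.sign_up("ada")
            app.add_note("ada", "buy milk")
            # Results-driven check: did the software do what the user expects?
            self.assertEqual(app.list_notes("ada"), ["buy milk"])

    if __name__ == "__main__":
        unittest.main()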

The actual language of small, medium and large isn't important. Call them whatever you want. The important thing is that Google testers share a common language to talk about what is getting tested and how those tests are scoped. When some enterprising testers began talking about a fourth class they dubbed "enormous," every other tester in the company could imagine a system-wide test covering nearly every feature and running for a very long time. No additional explanation was necessary.

The primary driver of what gets tested and how much is a very dynamic process and varies wildly from product to product. Google prefers to release often and leans toward getting a product out to users so we can get feedback and iterate. The general idea is that if we have developed some product or a new feature of an existing product, we want to get it out to users as early as possible so they may benefit from it. This requires that we involve users and external developers early in the process so we have a good handle on whether what we are delivering is hitting the mark.

Finally, the mix between automated and manual testing definitely favors the former for all three sizes of tests. If it can be automated and the problem doesn't require human cleverness and intuition, then it should be automated. Only those problems, in any of the above categories, which specifically require human judgment, such as the beauty of a user interface or whether exposing some piece of data constitutes a privacy concern, should remain in the realm of manual testing.

Having said that, it is important to note that Google performs a great deal of manual testing, both scripted and exploratory, but even this testing is done under the watchful eye of automation. Industry-leading recording technology converts manual tests to automated tests to be re-executed build after build to ensure minimal regressions and to keep manual testers always focusing on new issues. We also automate the submission of bug reports and the routing of manual testing tasks. For example, if an automated test breaks, the system determines the last code change that is the most likely culprit, sends email to its authors and files a bug. The ongoing effort to automate to within the “last inch of the human mind” is currently the design spec for the next generation of test engineering tools Google is building.
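
The post doesn't say how that culprit finder works internally. The sketch below shows one plausible shape for such logic, under the assumption that it scans the changes landed since the last passing build and flags the most recent one touching a file the failing test depends on; Change, find_culprit and the file paths are all hypothetical stand-ins, not Google's actual tooling.

    from dataclasses import dataclass

    @dataclass
    class Change:
        number: int   # changelist number, ascending with submit order
        author: str
        files: set    # files touched by this change

    def find_culprit(changes_since_green, failing_test_deps):
        """Return the most recent change overlapping the failing test's dependencies."""
        for change in sorted(changes_since_green, key=lambda c: c.number, reverse=True):
            if change.files & failing_test_deps:
                return change
        return None

    if __name__ == "__main__":
        changes = [
            Change(101, "alice", {"search/query.py"}),
            Change(102, "bob", {"mail/send.py"}),
        ]
        culprit = find_culprit(changes, failing_test_deps={"search/query.py"})
        if culprit:
            # A real system would email the author and file a bug at this point.
            print("suspect CL %d by %s" % (culprit.number, culprit.author))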

Those tools will be highlighted in future posts. However, my next target is going to revolve around The Life of an SET. I hope you keep reading.