Instead of distinguishing between code, integration and system testing, Google uses the language of small, medium and large tests, emphasizing scope over form. Small tests cover small amounts of code and so on. Each of the three engineering roles may execute any of these types of tests, and they may be performed as automated or manual tests.

Small Tests are mostly (but not always) automated and exercise the code within a single function or module. They are most likely written by a SWE or an SET and may require mocks and faked environments to run, but TEs often pick these tests up when they are trying to diagnose a particular failure. For small tests the focus is on typical functional issues such as data corruption, error conditions and off-by-one errors. The question a small test attempts to answer is: does this code do what it is supposed to do?
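
To make the scope concrete, here is a minimal sketch of a small test in Python. The `greeting` function and its injected user store are hypothetical stand-ins, not Google code; the point is that the test exercises one function, replaces its environment with a mock, and probes an error condition directly.

```python
import unittest
from types import SimpleNamespace
from unittest import mock

# Hypothetical function under test: greets a user by name, looking the
# name up through an injected user store (so a small test can fake it).
def greeting(user_store, user_id):
    user = user_store.lookup(user_id)
    if user is None:
        raise KeyError(f"unknown user {user_id}")
    return f"Hello, {user.name}!"

class GreetingTest(unittest.TestCase):
    """A small test: one function, no real environment, runs in milliseconds."""

    def test_greets_known_user(self):
        store = mock.Mock()
        store.lookup.return_value = SimpleNamespace(name="Ada")
        self.assertEqual(greeting(store, 42), "Hello, Ada!")
        store.lookup.assert_called_once_with(42)

    def test_error_condition_for_unknown_user(self):
        store = mock.Mock()
        store.lookup.return_value = None  # the fake simulates a lookup miss
        with self.assertRaises(KeyError):
            greeting(store, 7)

if __name__ == "__main__":
    unittest.main()
```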

Medium Tests can be automated or manual and involve two or more features, specifically covering the interaction between those features. I've heard any number of SETs describe this as "testing a function and its nearest neighbors." SETs drive the development of these tests early in the product cycle as individual features are completed, and SWEs are heavily involved in writing, debugging and maintaining the actual tests. If a test fails or breaks, the developer takes care of it autonomously. Later in the development cycle TEs may perform medium tests either manually (in the event the test is difficult or prohibitively expensive to automate) or with automation. The question a medium test attempts to answer is: does a set of near-neighbor functions interoperate with each other the way they are supposed to?
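
Continuing the sketch (again with hypothetical stand-ins rather than Google code), a medium test wires two real neighboring features together, with no mocks between them, and checks that the interaction behaves as intended:

```python
import unittest

# Two hypothetical neighboring features: an in-memory user store and a
# cache layered on top of it.
class UserStore:
    def __init__(self):
        self._users = {}
        self.lookups = 0  # instrumentation so the test can observe traffic

    def save(self, user_id, name):
        self._users[user_id] = name

    def lookup(self, user_id):
        self.lookups += 1
        return self._users.get(user_id)

class CachingUserStore:
    def __init__(self, backend):
        self._backend = backend
        self._cache = {}

    def lookup(self, user_id):
        if user_id not in self._cache:
            self._cache[user_id] = self._backend.lookup(user_id)
        return self._cache[user_id]

class CacheAndStoreInteropTest(unittest.TestCase):
    """A medium test: real neighbors wired together, no mocks between them."""

    def test_cache_serves_repeat_lookups_without_hitting_store(self):
        store = UserStore()
        store.save(1, "Ada")
        cached = CachingUserStore(store)
        self.assertEqual(cached.lookup(1), "Ada")
        self.assertEqual(cached.lookup(1), "Ada")  # second hit is from cache
        self.assertEqual(store.lookups, 1)

if __name__ == "__main__":
    unittest.main()
```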

Large Tests cover three or more (usually more) features and represent real user scenarios to the extent possible. There is some concern with overall integration of the features, but large tests tend to be more results driven, i.e., did the software do what the user expects? All three roles are involved in writing large tests, and everything from automation to exploratory testing can be the vehicle to accomplish them. The question a large test attempts to answer is: does the product operate the way a user would expect?
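
Rounding out the sketch, a large test scripts an end-to-end user scenario. The toy `AccountSystem` below is hypothetical; a real large test would drive the actual product, but the shape is the same: walk a user journey across several features and check the user-visible result.

```python
import unittest

# A toy end-to-end "system" (hypothetical, not Google code): sign-up,
# login and profile features behind one facade, so the test can walk a
# complete user journey.
class AccountSystem:
    def __init__(self):
        self._passwords = {}
        self._profiles = {}
        self._sessions = {}
        self._next_token = 0

    def sign_up(self, user, password):
        if user in self._passwords:
            raise ValueError("user already exists")
        self._passwords[user] = password
        self._profiles[user] = {"display_name": user}

    def log_in(self, user, password):
        if self._passwords.get(user) != password:
            raise PermissionError("bad credentials")
        self._next_token += 1
        token = f"token-{self._next_token}"
        self._sessions[token] = user
        return token

    def set_display_name(self, token, display_name):
        self._profiles[self._sessions[token]]["display_name"] = display_name

    def profile(self, user):
        return self._profiles[user]

class NewUserJourneyTest(unittest.TestCase):
    """A large test: a user-visible scenario spanning several features."""

    def test_sign_up_log_in_and_edit_profile(self):
        system = AccountSystem()
        system.sign_up("ada", "s3cret")
        token = system.log_in("ada", "s3cret")
        system.set_display_name(token, "Ada L.")
        # Results driven: did the software do what the user expects?
        self.assertEqual(system.profile("ada")["display_name"], "Ada L.")

if __name__ == "__main__":
    unittest.main()
```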

The actual language of small, medium and large isn't important. Call them whatever you want. The important thing is that Google testers share a common language to talk about what is getting tested and how those tests are scoped. When some enterprising testers began talking about a fourth class they dubbed enormous, every other tester in the company could imagine a system-wide test covering nearly every feature and running for a very long time. No additional explanation was necessary.

What gets tested, and how much, is a very dynamic decision that varies wildly from product to product. Google prefers to release often and leans toward getting a product out to users so we can get feedback and iterate. The general idea is that if we have developed some product, or a new feature of an existing product, we want to get it out to users as early as possible so they may benefit from it. This requires that we involve users and external developers early in the process so we have a good handle on whether what we are delivering is hitting the mark.

Finally, the mix between automated and manual testing definitely favors the former for all three sizes of tests. If it can be automated and the problem doesn't require human cleverness and intuition, then it should be automated. Only those problems, in any of the above categories, which specifically require human judgment, such as the beauty of a user interface or whether exposing some piece of data constitutes a privacy concern, should remain in the realm of manual testing.

Having said that, it is important to note that Google performs a great deal of manual testing, both scripted and exploratory, but even this testing is done under the watchful eye of automation. Industry-leading recording technology converts manual tests to automated tests to be re-executed build after build, ensuring minimal regressions and keeping manual testers focused on new issues. We also automate the submission of bug reports and the routing of manual testing tasks. For example, if an automated test breaks, the system determines the last code change that is the most likely culprit, sends email to its authors and files a bug. The ongoing effort to automate to within the "last inch of the human mind" is currently the design spec for the next generation of test engineering tools Google is building.
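
The culprit-finding flow described above might look roughly like the sketch below. This is a guess at the shape of such a tool, not Google's actual infrastructure: the `Change` record, the file-overlap heuristic, and the `notify`/`file_bug` hooks are all hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical sketch of automated culprit finding: when a test breaks,
# pick the most recent change that touched a file the test depends on,
# email the author, and file a bug. Real infrastructure would query the
# build system and version control rather than in-memory lists.
@dataclass
class Change:
    revision: int
    author: str
    touched_files: List[str]

def likely_culprit(changes: List[Change], depended_on_files: List[str]) -> Change:
    """Most recent change overlapping the failing test's dependencies."""
    wanted = set(depended_on_files)
    for change in sorted(changes, key=lambda c: c.revision, reverse=True):
        if wanted & set(change.touched_files):
            return change
    return max(changes, key=lambda c: c.revision)  # fall back to newest

def handle_breakage(test_name: str, changes: List[Change],
                    depended_on_files: List[str],
                    notify: Callable[[str, str], None],
                    file_bug: Callable[[str, str], None]) -> None:
    culprit = likely_culprit(changes, depended_on_files)
    notify(culprit.author, f"{test_name} broke; suspect r{culprit.revision}")
    file_bug(test_name, f"auto-filed against r{culprit.revision} ({culprit.author})")

# Example wiring, with print-based stand-ins for email and the bug tracker:
changes = [Change(101, "alex", ["search/query.py"]),
           Change(102, "pat", ["mail/inbox.py"])]
handle_breakage("SearchQueryTest", changes, ["search/query.py"],
                notify=lambda who, msg: print(f"email {who}: {msg}"),
                file_bug=lambda name, msg: print(f"bug [{name}]: {msg}"))
```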

Those tools will be highlighted in future posts. However, my next target is going to revolve around The Life of an SET. I hope you keep reading.