Quality in the Test Automation Review Process and Design Review Template

About this document

  • Prerequisite knowledge/experience: Software Testing, Test Automation
  • Applicable Microsoft products: Visual Studio Team System, .NET
  • Intended audience: Software Testers

Definitions and Terms

Test automation – test code written to carry out the execution of a test case in an automated or at least semi-automated fashion

Data-driven testing (DDT) – test cases that are executed multiple times, once for each input in a given set of data

Positive testing – testing normal scenarios that should not result in any errors or thrown exceptions

Negative testing – testing failure conditions and edge cases where an error or exception is expected as a result

Test harness – also known as an automated test framework; consists of a test case repository and a test execution engine

Article Summary

This article discusses the need for writing good test automation and presents guidance documents that can help facilitate an automation review process.

Full Article

Are you automating test cases? If so, then your team should have some sort of process that ensures test automation is written well. Many people argue that since test automation is not shipping code, the code quality level is unimportant.

I would argue strongly against that point of view. Here’s why: Sure, the code doesn’t ship to customers, so customers won’t be discovering bugs in it. However, we’re talking about the code that is used to test the code that is shipping to the customer. If the test code has a bug in it, then how do we know that it didn’t miss a bug that was in the shipping code? The bottom line is we don’t. The quality of test automation is critical to validating the quality of shipping code. Furthermore, just like the code that is shipped, test code has a maintenance life of its own. Good design, use of design patterns, and refactoring are just as valuable in test code as they are in shipping code, since someone is going to be modifying or enhancing it somewhere down the line.

For these reasons, the Microsoft.com team (as well as other teams across the company) has a virtual team of engineers focused on test automation with the goal of “increasing test automation efficacy without introducing too much process overhead”. “Increasing test automation efficacy” is extremely broad, so we’ve translated that vision into more specific objectives:

  • Introduce a more planned approach to developing test automation
  • Increase the Return on Investment of test automation
  • Increase the quality of test automation design and code
  • Promote sharing of test automation best practices
  • Increase awareness of test automation code that is available and reusable
  • Decrease maintenance costs of test automation code
  • Ensure test automation plans are comprehensive and cover more than just functional testing (e.g., performance, security)

In order to achieve these objectives, the team’s first priority was to develop a test automation review process that would facilitate the way we create test automation code. What we came up with is a process that includes two major milestones: a test automation design review and a test automation code review.

The section below contains the Test Automation Design Review Template, which is filled out by the tester designing the automation and then submitted to a review team. It helps the tester cover all the bases of a good design and document their intent so that the reviewer can easily understand the automation design and provide feedback accordingly. I hope these documents will be helpful in your own automation review process!

Test Automation Design Review Template

Project Name:
Design Author(s):

<<section guidance>> 
The purpose of this template is two-fold: first, to get you thinking strategically about your automation design for a new project or component and second, to standardize the documentation approach so it is easier for others to review. It is meant to be filled out before you start writing your automation and prior to asking others to review your design. It contains a list of template sections that will help you structure your automation design and address different aspects of it. Please note that this is only meant to spur high-level thinking about the automation design and in no way should replace the rigorous level of detail that goes into identifying specific test cases and execution scenarios. Since the purpose of filling out this document is primarily for the design review, it is not necessarily expected to be a living document that is kept up-to-date at all times.

The text enclosed in the “section guidance” tags explains the purpose of each template section and assists you in filling it out. The section guidance snippets can be deleted once the section has been completed.
<<end of section guidance>>

1. Test Projects

 Questions to answer:

What projects will be created?
What is the intent of each project?

<<section guidance>> 
Definition: A standard method of categorizing test code into different projects, which gives the code structure and makes each project's purpose clear.

Required Projects: 
(More detail is provided in Test Code Layout below) 
1. [ProjectName]Tests - contains test methods (no shared or common libraries) 
2. [ProjectName]TestLibrary - contains test library code (no test methods)

Optional Projects:

This is a list of projects that have been created for current or past releases to organize similar pieces of code that need to be kept separate from the two required projects above.

1. Console app 
Purpose - to allow someone to quickly re-run the same test over and over (call the test from the Main.cs file's Main() routine, then simply hit F5). This also creates a sandbox where the user can experiment with and modify code temporarily without having to check out any test files or libraries.

Creation - A console app is added to the solution, with one simple Main.cs file. This file is checked into source control as "[Main.cs]". Each new enlistment will need to copy it locally to a new file called Main.cs, which is not checked into source control; rather, the file is kept locally and never checked in. The console app project is set as the "Default Startup Project". (A minimal Main.cs sketch appears after the optional projects list below.)

2. Web proxy library 
Purpose - to abstract out complex code and code otherwise unrelated to the other projects. This allows for better organization and clear boundaries around what code does what.

3. WebPage Library or API 
Purpose - to create a programmatic interface for testing a set of webpages that encapsulates common operations frequently used in a number of test cases. Because the WebPage library wraps this functionality, a breaking change only needs to be handled in one place instead of in every test case that executes the same sequence of steps. 
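
As a rough illustration of the console-app sandbox described in item 1 above, a local Main.cs might look like the following minimal sketch; the test class and method names are placeholders for whatever test you are currently iterating on.

using System;

namespace ProjectNameTestConsole
{
    // Main.cs - kept local and never checked in (see the Creation notes above).
    internal class Program
    {
        private static void Main()
        {
            // Feature1LoginTests and Login_ValidCredentials_Succeeds are hypothetical names;
            // construct whichever test class you are debugging and call the test method
            // directly, then press F5 to re-run it as often as needed.
            var tests = new Feature1LoginTests();
            tests.Login_ValidCredentials_Succeeds();

            Console.WriteLine("Test run complete. Press any key to exit.");
            Console.ReadKey();
        }
    }
}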
<<end of section guidance>>

2. Test Code Layout

Questions to answer:

How will projects be structured in source?

<<section guidance>> 
Definition: A uniform organization scheme that allows a user to quickly identify what code belongs where and how to find it. This applies to test code (code automating test cases), test libraries (common code shared by test cases), and other forms of code.

NOTE: The level of detail here will obviously vary according to what is known up front, but the more detail one can include here the better.

General guidelines: 
1.  Reusable test code is in a separate project from test cases 
2. A folder should only have similar items in it 
3. Files should be granular enough that multiple people working on the project at the same time will not need to check out the same file at the same time. 
4. This scheme should be used by all projects so that users have a common framework and do not have to learn a new structure with each new project

Folder structure: 
Main directory just for this solution, preferably off $[Project Name]\Main\Test\ 
Folders: One per project (See below) 
Files: One or more solution files, and a ReadMe.Txt or ReadMe.Docx (containing any special instructions for layout, setting up, or using the automation). 
Sample Project Layout: 
[ProjectName]Tests.csproj 
- [Feature1Folder] 
- - [Feature1][MethodGroupName1]Tests.cs 
- - [Feature1][MethodGroupName2]Tests.cs 
- - [Feature1]DataDrivenTests.xls 
[ProjectName]TestLibrary.csproj 
- Settings.cs or App.config (central place for configurable values; see the sketch after this layout) 
- [Feature1Folder] 
- - [Feature1][MethodGroupName1]Lib.cs 
- - [Feature1][MethodGroupName2]Lib.cs 
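
The layout above lists a Settings.cs (or App.config) in the test library. Below is a minimal sketch of what such a file might contain; the key names are hypothetical, and the intent is simply to keep configurable values such as connection strings, timeouts, and I/O paths in one place (see also the Performance Design Considerations section).

using System.Configuration;

namespace ProjectNameTestLibrary
{
    // Central place for configurable values so individual tests never hard-code them.
    // The keys ("TestDatabase", "SqlCommandTimeoutSeconds", "TestDataRootPath") are
    // assumptions for this sketch; they are read from App.config via System.Configuration.
    public static class Settings
    {
        public static string DatabaseConnectionString
        {
            get { return ConfigurationManager.ConnectionStrings["TestDatabase"].ConnectionString; }
        }

        public static int SqlCommandTimeoutSeconds
        {
            get { return int.Parse(ConfigurationManager.AppSettings["SqlCommandTimeoutSeconds"]); }
        }

        public static string TestDataRootPath
        {
            get { return ConfigurationManager.AppSettings["TestDataRootPath"]; }
        }
    }
}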
<<end of section guidance>>

3. Automation Architecture

<<section guidance>> 
Include architecture diagram of your automation here. 
Note: Required for all major releases 
<<end of section guidance>>

4. Designing for Code Reuse

Questions to answer:

What existing code do you plan to leverage in your design?
What reusable functionality do you plan to contribute and how will this be shared?
What reusable methods do you plan to add to higher-level test libraries (refer to section guidance)?
How do you plan to structure your Project Test Library (one layer, two layers etc.)?

<<section guidance>> 
There are four main levels of test code abstraction that facilitate reuse:

  1. Test Methods
  2. Project Test Library
  3. Customer/Adopter Test Libraries
  4. Shared Test Libraries

Test Methods 
These are the individually implemented test cases. Any duplicated or copied code should instead be refactored and moved up into the Project Test Library.

Project Test Library 
The test library is used for our internal testing. There can be multiple levels of abstraction within the test library itself, especially in UI automation, where there is a logical layer and a physical layer. Functionality that would be useful outside of just the internal testing process (e.g., a customer could use it to run tests) should be moved up to the next level of code abstraction, Customer/Adopter Test Libraries.

Customer/Adopter Test Libraries 
The Customer/Adopter Test Libraries can be used by other people to quickly and easily access functionality in the product from a test automation perspective. These libraries should be scrutinized more rigorously since the intent is to give them away externally. It is also beneficial to provide some accompanying documentation.

Shared Test Libraries 
Shared Test Libraries are automation libraries that are system and project agnostic. These DLLs should have no dependencies other than the .NET Framework. Any library code that is generic enough to apply to multiple projects or scenarios should be added here; for example, common functionality such as SQL helper objects and Event Log checking is a prime candidate for this level of abstraction. 
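
To make the layering concrete, here is a minimal sketch under assumed names (SearchTests, SearchLibrary, SqlHelper, and the Settings class from the Test Code Layout section are all hypothetical): a test method delegates repeated steps to the Project Test Library, which in turn reuses a generic Shared Test Library helper.

using Microsoft.VisualStudio.TestTools.UnitTesting;
using System.Data.SqlClient;

// Level 1: test method - lives in [ProjectName]Tests.
[TestClass]
public class SearchTests
{
    [TestMethod]
    public void Search_KnownTerm_ReturnsAtLeastOneResult()
    {
        // Delegate the repeated steps to the Project Test Library instead of
        // duplicating them in every test method.
        int resultCount = SearchLibrary.ExecuteSearchAndCountResults("contoso");
        Assert.IsTrue(resultCount > 0, "Expected at least one search result.");
    }
}

// Level 2: Project Test Library - lives in [ProjectName]TestLibrary.
public static class SearchLibrary
{
    public static int ExecuteSearchAndCountResults(string searchTerm)
    {
        // ...drive the search page or API here (omitted)...
        // Verify the persisted results by reusing a generic Shared Test Library helper
        // rather than re-implementing the SQL plumbing in every project.
        return SqlHelper.ExecuteScalarInt(
            Settings.DatabaseConnectionString,   // central settings, as in the Settings.cs sketch above
            "SELECT COUNT(*) FROM SearchResult WHERE Term = @term",
            "@term", searchTerm);
    }
}

// Levels 3/4: a Shared Test Library helper - project agnostic, no dependencies beyond the .NET Framework.
public static class SqlHelper
{
    public static int ExecuteScalarInt(string connectionString, string sql,
                                       string parameterName, string parameterValue)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue(parameterName, parameterValue);
            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }
}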
<<end of section guidance>>

5. Security Design Considerations

Questions to answer:

How will your design accommodate the security testing necessary for this project? (If none, please explain why…)

<<section guidance>> 
The purpose of this section is to document any special design considerations in your automation that are related to testing the security of the product. It also contains tips about security-related precautions we need to take to keep our testing environment secure.

Secure Testing Environment 
Personally Identifiable Information (PII) - there are times when production data must be used for testing purposes; for example, a test might require a realistic, production-like data distribution. In such cases, we need to sanitize the data before importing it into our test environment. 
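
As an illustration only, the snippet below sketches one way to mask common PII columns in a staging copy of the data before it is imported; the table and column names (Customer, EmailAddress, FullName, Phone) are assumptions, and the actual sanitization rules should come from your team's data-handling policy.

using System.Data.SqlClient;

public static class PiiSanitizer
{
    // Replaces identifying values with synthetic ones; run against the staging copy,
    // never against production.
    public static void MaskCustomerPii(string stagingConnectionString)
    {
        const string maskSql = @"
            UPDATE Customer
            SET EmailAddress = 'user' + CAST(CustomerId AS varchar(20)) + '@example.com',
                FullName     = 'Test Customer ' + CAST(CustomerId AS varchar(20)),
                Phone        = NULL;";

        using (var connection = new SqlConnection(stagingConnectionString))
        using (var command = new SqlCommand(maskSql, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}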
<<end of section guidance>>

6. Performance Design Considerations

Questions to answer:

How will your design accommodate performance testing?

<<section guidance>> 
Reusing Functional Tests for Performance Testing

    • Displaying Exception Information

      1. We typically do functional testing from the VSTS IDE, which automatically displays information about any exceptions that occur during execution. However, we often do performance testing from the VS Command Prompt, where this information is not automatically displayed. So that the performance tester can see this information without having to break into a debugger, we should create a new test method specifically for performance testing (preferably prefixed with a naming convention that identifies it as such) that wraps the existing functional test method. The body of the new method should contain nothing but a call to the functional test method inside a ‘try’ block. The ‘catch’ block should write the exception to the console (so that the perf tester knows what went wrong) and then re-throw it. (A sketch of this wrapper appears at the end of this list.)

    • Performance Measurement & Logging

      1. In performance testing we are usually interested only in the actual execution time of the unit under test. Overhead code such as test method setup, creation of test data, and verification of the expected versus actual data/state/behavior should not be included in the measurement, since we do not want to corrupt the results. If a particular measurement must instead include some of these steps, the code should be written so that the performance tester can easily change where timing begins and ends.

      2. Extensive performance logging requirements might require a tool other than VSTS for customized logging. One suggestion is to use log4net, which allows multiple outputs such as database, file, console, etc. We may implement a wrapper for this in the future to enable asynchronous logging. Whatever tool is used, there should be a configuration switch to turn logging on or off.

    • SQL Server

      1. Tables used for testing, whether functional or performance, should have the appropriate indexes. Although the performance measurement results shouldn’t be affected by, for instance, the time it takes to get test data out of a table, it could speed up execution time and therefore allow us to run tests with more load if necessary.

    • Other Best Practices (Append to the list whenever new information is known)

      1. This should be self-evident so take it as a reminder. Document your code with comments that include troubleshooting information. This helps the performance tester with known ‘gotchas’ and other things that may come up.

      2. Have one central place to edit configurable values like connection strings, SqlCommandTimeOut, I/O paths.

      3. Be aware that test methods that use data source attributes are usually so inefficient that they cannot be reused as load tests. In other words, they are too slow to be able to generate enough load/RPS.

      4. For data-driven tests, consider buffering data on the client prior to the test run. This helps avoid the overhead of going back and forth between the client and the data source during performance testing.

      5. If the functional tests call ASPX pages, instrument the pages to have QueryString parameters that can be fed test data for specific scenarios during performance runs.
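
A minimal sketch of the exception-displaying wrapper (and of timing only the unit under test, per the measurement guidance above) is shown below; CheckoutTests, CheckoutLibrary, and TestDataBuilder are hypothetical names standing in for your existing functional test and library code.

using System;
using System.Diagnostics;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CheckoutPerfTests
{
    // Wrapper so exceptions are visible when running from the VS Command Prompt.
    [TestMethod]
    public void Perf_Checkout_SubmitOrder()
    {
        try
        {
            new CheckoutTests().Checkout_SubmitOrder();   // existing functional test (assumed name)
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex);   // let the perf tester see what went wrong
            throw;                   // re-throw so the test still fails
        }
    }

    // Time only the unit under test; setup and verification stay outside the measured span
    // so the perf tester can easily move the measurement boundary if needed.
    [TestMethod]
    public void Perf_Checkout_SubmitOrder_Timed()
    {
        var order = TestDataBuilder.CreateSampleOrder();   // setup (not timed); assumed helper

        var stopwatch = Stopwatch.StartNew();
        CheckoutLibrary.SubmitOrder(order);                // unit under test (timed); assumed library call
        stopwatch.Stop();

        Console.WriteLine("SubmitOrder took {0} ms", stopwatch.ElapsedMilliseconds);
        // verification of expected vs. actual state (not timed) would follow here
    }
}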

<<end of section guidance>>

8. Process Checklist

Steps (mark each upon completion)

[  ] Prepare automation design document using template
[  ] Automation design reviewed by manager
[  ] Automation design reviewed by review team
[  ] All action items from design reviews completed

9. Tracking Info

Date | Name | Document Action (Drafted, Updated, Review, etc.)

About the Authors:

Devin A. Rychetnik is currently working as a Software Development Engineer in Test II for the Windows Marketplace for Mobile team. In addition to testing, his nine years of experience in software include development, project management, and security. He is finishing a master's degree in Software Development from the Open University of England and is a certified Six Sigma Green Belt and Project Management Professional (PMP).
