
Tips for Automation Success: Track & Report Test Progress


An automation engineer’s test automation progress is often a black box to the project team and managers, and that is serious “egg on the face” for any automation initiative. One day while automating, I started reminiscing about how I used to monitor and report test case status while doing functional testing, and thought to myself, “How can I do that with my test automation?” Shortly after, a process and a tool were born, and stats were included in my weekly reports. I also had the ability to provide detailed test descriptions. Now others had insight into my goal and my progress, I could estimate a completion date, and the project team could review my test descriptions looking for voids in coverage. A bonus benefit to tracking status is that multiple automation engineers can work on one project and not accidentally stomp on each other’s work. Seems like a no-brainer, right? But more often than not I see automation engineers working in an automation black box, leaving them unaccountable to all.

Here is an example of how I make myself and my test automation accountable:

  1. I stub out my test cases when reviewing requirements (the final number is my goal). Each test case is usually one method in my automation test suite, so one hundred tests equates to 100 methods. I use separate classes to segregate functionality. My method names follow a pattern and are very descriptive, which helps me decipher what they are in large lists and allows for easy alphabetical sorting.
  2. When stubbing the tests/methods, I write the test description/steps with their verification points. For example, in the screenshot below, the “Description” attribute contains these details. [Screenshot: Test description]
  3. I track test/method development status. In the example below you can see the various statuses that I use. Status is the key to monitoring progress! [Screenshot: Test status]
  4. I tie defect IDs or agile task numbers to test cases, which makes for easy searching when I’m doing defect regression. [Screenshot: defect IDs on a test case]
  5. Finally, I use a tool/automation to extract the goal, status, and test descriptions. [Screenshot: Test Stats tab] Note that in the above “Stats” screenshot I have a Test Summary “Count”, which is my goal, a count of tests in each state, and a percentage for each state. The “Completed” percentage is my progress towards the goal. I typically take a screenshot of this tab and paste it into my status report.

    [Screenshot: Test Details tab] Note that in the above “Test Details” screenshot, I have columns for Class and Method, which allow me to sort by them. Then I have the test “Description”, the test “State”, the “Reason” for test blockage, and finally a place for “Comments”. This tab is nice for a quick overview of tests, and it allows sorting, which is handy if you want to, for example, sort by “Blocked”. The view can also be exported to an Excel spreadsheet, and it is VERY helpful when you end up with hundreds of automated tests, because scrolling through hundreds of lines of code can make things easy to miss or get confusing.

 
The five points above were implemented in my .NET test automation environment, which uses a custom attribute I created called “TestProgress”. The reporting GUI uses reflection to extract the class, method, and attribute details. The example is .NET, but this process and pattern could be used in any language that you may be automating in. For example, in a scripting language (e.g. Ruby), you could provide “Test Progress” as a comment above each method and then use regular expressions to parse the files to create your report.

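For example, the Test Progress comment, and a small report script in the spirit of the reflection-based GUI, could look something like the sketch below (the field names, statuses, and paths are illustrative, not a prescribed format):

  # TestProgress
  #   Description: Log in with an expired password; verify the user is
  #     prompted to reset it and lands on the account page afterward.
  #   Status: Completed
  #   Defects: 1234
  def test_login_expired_password_prompts_reset
    # ... navigation, execution, and assertions here ...
  end

  # Report sketch: walk the test files, tally the Status values, and
  # print the counts and percentages for the weekly status report.
  counts = Hash.new(0)
  Dir.glob('test/**/*.rb') do |file|
    File.read(file).scan(/^\s*#\s*Status:\s*(\w+)/) { |(status)| counts[status] += 1 }
  end
  total = counts.values.inject(0) { |sum, n| sum + n }
  counts.each do |status, n|
    puts format('%-12s %4d  (%.1f%%)', status, n, 100.0 * n / total)
  end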


Testing in 2009, a Year in Review


2009. What an eventful year. Eventful in my personal life as well as in my SQA career. A good, eventful year.

I didn’t blog much in 2009, 17 posts in all, and no topics that were SQA groundbreaking. Yeah, I’m pretty much ashamed of myself and have watched my blog fall off people’s radar. If I were to highlight my favorite post, it would be my turn from SQA words to illustrations with Do Loop Until Zero. A hit or a miss, I don’t know; I don’t get comments either way on this blog. But nonetheless, it’s something I enjoy doing. Hopefully you guys will see more of this “comic”; if all works out well, it will be in the first issue of the new and upcoming Software Testing Club magazine.

Though the blog was quiet, my SQA and testing career wasn’t. In the last year I was able to start filling a large gap in my testing experience portfolio. Prior to 2009 I had no experience in the Linux world and the technologies that surround it. Joining a new group within GoDaddy quickly changed this. In 2009 I did a 180-degree turn from my beloved Windows world and submerged myself in Linux in an effort to test and automate a new, internal product. I was scared to make the jump, mostly because my Windows wisdom would be put to little use, and my lack of Linux knowledge would make me a slower tester and automator. Not so enticing when I really pride myself on speed and efficiency (“Hire me, hire ME! I’m two testers for the price of one!”). Scared or not, it was an awesome opportunity to further my skills and help a 1.0 product using my experience with agile practices and automation. With the help of an awesome two-man development team, I was able to learn, automate, and wade through the following technology highlights in 2009:

Product: A storage system (C, MySQL):

  • I used PuTTY as an SSH client to the dev, test, and prod environments, all running the CentOS flavor of Linux
  • I extended developer unit tests and automated API functional and boundary testing with Perl unit testing (Test::Unit)
  • I extended PHPUnit to serve as an automation framework for functional tests (use case, boundary, error handling, etc.). The framework was named TAEL (Test Automation Ecosystem for Linux).

Product: FTP Server that uses the storage system (Perl, MySQL)

  • I automated use cases and FTP functions using TAEL. FTP functionality was tested using PHP’s FTP library. Validation was done through FTP responses and MySQL queries.
  • I performance tested the FTP server and underlying storage system with Apache JMeter. FTP support in JMeter is not very robust, and worse yet it forces a connection open, logon, and close for every request, which is not very realistic. Thankfully it’s open source (Java), so I extended and tweaked it to fit our needs.

Product: User Management Web Service

  • I automated use cases, boundaries, etc. with TAEL. Validation was done by querying MySQL or parsing the Web Service response with XPath queries.

Tool: User Experience Monitor

  • In an effort to monitor response times on an ongoing basis, I wrote a script that executes basic functionality every 15 minutes and stores the timed results in FTP, where they are picked up and processed by a cron job that puts the results in a database. The results are then transformed into XML and viewed in a PHP web page using the chart control XML/SWF Charts. We found some very interesting activity and trends through this test/monitor, and it turned out to be a very interesting, almost real-time QA asset for the team. A rough sketch of the timing step follows below.
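The monitor script’s actual language and internals aren’t shown here; as a rough, hypothetical sketch in Ruby, the timing step might look something like this (the operation, host, credentials, and file path are all made up for illustration):

  require 'benchmark'
  require 'net/ftp'

  # Time one basic operation and append the result where the cron job
  # can pick it up and load it into the database.
  def record_timing(label)
    elapsed = Benchmark.realtime { yield }
    stamp = Time.now.utc.strftime('%Y-%m-%d %H:%M:%S')
    File.open('/tmp/ux_monitor_results.csv', 'a') do |f|
      f << "#{stamp},#{label},#{'%.3f' % elapsed}\n"
    end
  end

  record_timing('ftp_upload_small_file') do
    Net::FTP.open('ftp.example.com', 'monitor', 'secret') do |ftp|  # hypothetical host/credentials
      ftp.puttextfile('sample.txt')                                 # hypothetical local file
    end
  end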

Product: REST service

Automation with Ruby: With a department-wide goal that everybody must do a little automation, I led them down the path of Ruby/Watir (due to cost, and Ruby being pretty easy to learn). The results are looking pretty good, adoption has gone well, and progress is being made. Here are a few details about the framework that I built over a few weekends (a sketch of a test in this style follows the list):

  • Uses a pattern that I call “Test, Navigate, Execute, Assert”
  • Manages tests with the Ruby library: Test::Unit
  • Uses Watir for web page automation
  • Web Service automation is done with soap4r & REXML
  • MySQL database validation with the gem dbd-mysql
  • Data-driven automation from Excel using Roo
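As a hedged sketch of how a test in this framework might read, following the “Test, Navigate, Execute, Assert” pattern (the URL, field names, and expected text are hypothetical, not taken from the real framework):

  require 'test/unit'
  require 'watir'

  class LoginTest < Test::Unit::TestCase
    def setup
      @browser = Watir::IE.new   # Watir drove Internet Explorer at the time
    end

    def teardown
      @browser.close
    end

    # Test: one descriptively named method per scenario
    def test_login_with_valid_credentials
      # Navigate: get the browser to the page under test
      @browser.goto('http://example.com/login')
      # Execute: perform the user actions
      @browser.text_field(:name, 'username').set('testuser')
      @browser.text_field(:name, 'password').set('secret')
      @browser.button(:name, 'submit').click
      # Assert: verify the outcome
      assert(@browser.text.include?('Welcome'), 'Expected a welcome message after login')
    end
  end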

 
Process: Since I’ve been lucky enough to work with a highly motivated, small team of three, our process needs to be, and has been, really light. We’ve been pretty successful at being extremely agile. For project management we followed a scrum-like process for a little over half a year using the tool TargetProcess, but then moved to a Kanban approach with our own home-grown tool. Recently we moved from the home-grown tool to a trial with Jira, while trying to maintain the project in Kanban style. I have to say that I really like Kanban; it works particularly well for our team because it is small. When you’re as small and tight-knit as our team is, everyone always knows what the others are working on, so the more lightweight the better. The largest payoff of these types of processes and tools for our team is tracking backlog items as well as giving management insight into what we’re up to.

What’s in store for me in 2010? Well, I’ll likely be working on the same products, but as far as major learning and growth opportunities go, I’m excited to dive into the awesome new features of Visual Studio 2010 for Testers, as well as to learn and use some C++. Now, if I can just convince myself to blog about those experiences as I go.



User Agent Switcher XML file update


With help from Gerhard, a QAInsight reader, the User Agent Switcher MONSTER XML file has been updated to use the new folder feature. Another iPhone user agent has been added, as well as one for Chrome. As always, the permalink is here, and can be found in the right navigation under the “My Testing Tools” header, link: “User-Agent Info and Tools”.


Firefox “Community Development” model lacks quality


The recent release of Firefox 3.5 on June 30th appears to be riddled with defects, and its woes have hit the press. The article points out 55 open defects; I went out and assessed the mess myself and see 58 defects and 22 enhancements associated with the 3.5 and 3.6a1 milestones. But here is the kicker: all the defects were reported BEFORE the release date of June 30th. Yet they decided to release with the defects anyway. See the summary for yourself here, and notice the dates in the “Opened” column for the 3.6a1 list. Mozilla isn’t shy about their lack of quality either; on the Mozilla QA site they boldly proclaim the upcoming tester’s event “Firefox BugDay – Catch Missed Blocker, Critical, and Major 3.5 bugs!” The jaw-dropping keyword being: MISSED. Wow! At least we have to admire their transparency, right?

Mozilla, these are the things that kill browsers. Firefox has spent a long time acquiring users, and when you have people uninstalling because of issues, well, it’s just not good. The picture is bleak. Releasing with knowledge of critical and major defects, plus a pile of normal ones, is bad software development and makes me question the whole “Community Development” model. Does community development mean to Mozilla “Community QA after a release”? Let’s hope not; I don’t want to see another browser disappear off of the Mozilla branch.


Link roundup for Visual Studio Team System 2010 Test Edition


I’ve been playing around with Microsoft Visual Studio 2010 Team System (Beta 1) the last few weeks, and I have to say that I’m pretty excited about what Microsoft is doing to help tie development, testing, and environments together. The thing that stands out the most to me is the “Test and Lab Manager”. This tool allows me to write manual tests, automate tests, and then configure, control, and run those tests in a specified physical or virtual environment. Although Beta 1 is pretty rough around the edges, what I’m seeing is really exciting. Through my playing around and research I’ve gathered a few links full of information, screenshots, demos, videos, and official documentation. Peruse and enjoy, but before you get started, go get a rag so that you can clean the drool off of the side of your mouth when you’re done.

MSDN documentation for “Testing the Application” in VSTS 2010:
http://msdn.microsoft.com/en-us/library/ms182409(VS.100).aspx

Video: Functional UI Testing with VSTS 2010
http://channel9.msdn.com/shows/10-4/10-4-Episode-18-Functional-UI-Testing/

How to add a VSTS 2010 coded UI test to a build:
http://blogs.msdn.com/mathew_aniyan/archive/2009/05/26/coded-ui-test-in-a-team-build.aspx

Creating and running a VSTS 2010 coded UI test through a Lab Manager project:
http://blogs.msdn.com/jasonz/archive/2009/05/26/vs2010-tutorial-testing-tutorial-step-2.aspx
http://blogs.msdn.com/mathew_aniyan/archive/2009/05/26/coded-ui-test-from-microsoft-test-lab-manager.aspx

Explanation of the various Test tool names and products:
http://blogs.msdn.com/jasonz/archive/2009/05/12/announcing-microsoft-test-and-lab-manager.aspx

VSTS related blogs:
http://blogs.msdn.com/vstsqualitytools/
http://blogs.msdn.com/amit_chatterjee/ 
http://blogs.msdn.com/mathew_aniyan/


Determining browsers’ rendering and JavaScript engine versions


Now that I’ve updated and found a permanent home for the Browser Compatibility Cheat-Sheet, I thought I’d take the time to share the method to my madness for excavating rendering engine and JavaScript data from browsers (these are the details in the cheat-sheet). If you’re asking yourself “Why do I care, Brent?” then I urge you to watch my screencast on Browser Compatibility Testing Risk Analysis. My answer, in a nutshell, is that as a developer or tester you need to care, because fully understanding how browsers work and knowing how similar and different they are lets you quickly assess how a feature or change to a Web application will impact your development, testing, or regression. I’ll leave it at that for now.

Mining the data has never been quick or easy (I seem to do it about once a year); when I reassess the cheat-sheet I typically spend most of the time re-reminding myself of how to gather the info for the various browsers, all the while discovering new sources. Here is my high-level browser info excavating process:

  1. Get the info from the browser itself:
    1. Install it
    2. Look at the browser download site, FAQ, & Release Notes
    3. Look at the user-agent string at browserhawk.com
    4. Look at the installed assembly version numbers and text files
    5. Look at the Help/About section
    6. Look at comments in source control and source code (if available)
  2. Search the web
    1. Look at keywords on Wikipedia (Netscape, V8, WebKit, etc.). Wikipedia has gained a lot of browser data in the last couple of years; some of my favorite and most useful links are comparison of layout engines & Browser Timeline
    2. Search by keywords, and look at articles and comments on the Web

And on to a few lower-level details for the most popular browsers (a small user-agent parsing sketch follows these):

Internet Explorer

  1. Rendering Engine: Trident is the name of the IE rendering engine, and it was once believed that the version of Trident matched the version of the MSHTML.dll found at C:\windows\System32\. But with IE 8.0, the IE team has a reference on their blog that the rendering engine in 8.0 is Trident 4.0, which can now be found in the user-agent string. This version conflicts with prior data I had gathered, but we’ll run with it. Hopefully, from here on out you can just gather the version from the user-agent string; only time will tell. Here is an example user-agent string for IE8: “Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0;)”
  2. JavaScript Engine: JScript is the JavaScript engine in IE; the file version of JScript.dll found in C:\windows\System32\ will give you the version number.

Chrome

  1. Rendering Engine: WebKit is the rendering engine for Chrome. To get the rendering engine version, just type “about:” in the browser URL bar. The version follows the “WebKit:” key. Notice that the User Agent contains the same WebKit number, so the rendering engine version can also be gathered from the User Agent string.
  2. JavaScript Engine: V8 is the JavaScript engine for Chrome. The version is listed in the same place as the rendering engine and is the number following the “V8:” key.


Firefox

  1. Rendering Engine: Firefox uses the Gecko rendering engine, and the version number can be found in the user-agent string (which can be seen by typing “about:” in the URL bar or going to Help->About in the menu). For example, in the string “Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.0.10) Gecko/2009042316 Firefox/3.0.10”, “rv:1.9.0.10” is the version and “2009042316” is the build number.
  2. JavaScript Engine: Firefox’s JavaScript engine is SpiderMonkey, and I’ve never found references to version numbers. From what I understand, the JavaScript engine is compiled to JS3250.dll in \program files\mozilla firefox (js3250.lib on Linux), but the version number never changes even though the date and size do, which makes tracking it there fairly useless. I’ve always gathered ECMA compatibility data from official release notes, as well as from the value of the “JavaScriptVer” key shown when visiting browserhawk.com (click the ‘more’ link in the top-right corner to get this detail).

Safari

  1. Rendering Engine: WebKit is the rendering engine in Safari, and the version can be found in the user-agent string. For example, in the string “Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US) AppleWebKit/528.18 (KHTML, like Gecko) Version/4.0 Safari/528.17”, the version number is 528.18. This can also be seen in the webkit.dll file property “Product Version” in Windows (\program files\safari\).
  2. JavaScript Engine: With Safari 4.0 and up, the JavaScript engine is new and named Nitro. At the time of writing, 4.0 is still in beta and I see no official version for Nitro, but hopefully we’ll find a reference once it comes out of beta. As for past versions of the old engine, JavaScriptCore, I’ve never found version numbers, so I only documented the JavaScript ECMA compatibility found at various places on the web, as well as the value of the “JavaScriptVer” key shown when visiting browserhawk.com (click the ‘more’ link in the top-right corner to get this detail).
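To tie the above together, here is a small Ruby sketch of pulling engine versions out of user-agent strings with regular expressions (the pattern table is illustrative and only covers the engines discussed above):

  # Map each engine to the user-agent fragment that carries its version.
  ENGINE_PATTERNS = {
    'Trident' => /Trident\/([\d.]+)/,       # IE 8+
    'Gecko'   => /rv:([\d.]+)/,             # Firefox
    'WebKit'  => /AppleWebKit\/([\d.]+)/    # Safari and Chrome
  }

  def engine_versions(user_agent)
    ENGINE_PATTERNS.inject({}) do |found, (engine, pattern)|
      match = pattern.match(user_agent)
      found[engine] = match[1] if match
      found
    end
  end

  ua = 'Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US) AppleWebKit/528.18 ' +
       '(KHTML, like Gecko) Version/4.0 Safari/528.17'
  p engine_versions(ua)   # => {"WebKit"=>"528.18"}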

 

And there you have it, the method to my madness. If you have differing or better data please feel free to add it in a comment.


Browser Compatibility Cheat-Sheet


This link/post will serve as the official place for the Browser Compatibility Cheat-Sheet, and the post can be permanently accessed from the side navigation. Any new updates/links to the cheat-sheet will be put in this post.

What is the Browser Compatibility Cheat-Sheet?

The cheat-sheet is a list of browsers’ rendering engine and JavaScript engine versions. This reference provides testers a quick and easy way to view groupings of browsers and their versions to help determine testing redundancy and trim the “browsers to test” list. For further explanation and details watch the screencast: Browser Compatibility Testing Risk Analysis.

Download the latest Browser Compatibility Cheat-Sheet here (zipped Excel file):

BrowserCompatCheatSheet-052509.zip (8.74 KB)


Automated User Interface Testing with VSTS 2010


Thanks to co-worker Julio Verano for passing this on:

Here is a 17 minute video from MIX09 on Automated User Interface Testing with Microsoft Visual Studio Team System 2010.

VSTS 2010 looks like it has great potential for testers and developers, but I think Microsoft is still behind in browser automation and functionality when compared to the great work from Art of Test with WebAii and Design Canvas. I’m excited that Microsoft is actively pursuing and growing in this space though. It is needed badly! Here is the current proposed platform support for VSTS 2010:

[Screenshot: proposed platform support for VSTS 2010]


iPhone Browser Simulator on Windows


Written over a weekend by Sean Sullivan, the CTO of Blackbaud Labs, this iPhone browser simulator is FREE!

Basically, what he did was take the WebKit rendering engine from Safari and embed it in a Windows application. Browser requests are made through Safari’s WebKit using an iPhone mobile user-agent string. Here is an example of the user-agent string that came into my website while I captured the screenshot at the bottom of this post:

Mozilla/5.0+(iPhone;+U;+CPU+iPhone+OS+2_1+like+Mac+OS+X;+en-us)+AppleWebKit/525.18.1+(KHTML,+like+Gecko)+Version/3.1.1+Mobile/5F136

Setup is as simple as installing Safari and then running the .exe from Blackbaud Labs.

Check out the details and a screencast on the Blackbaud Labs site.

BUT, as always, when it comes to browser compatibility testing there is nothing like testing the real browser on the real platform. Keep these things in mind, my dear browser compatibility testing friend:

  1. The rendering results will be based on the version of Safari for Windows you have installed and its WebKit version, which will likely not match the version you intend to test on the iPhone. For example, I’ve installed Safari version 3.2.2, which uses WebKit 525.28.1, so the page I’m testing will utilize that version. Make sure you are aware of the iPhone browser version you need to be testing and the WebKit version that comes along with it; you’ll need to install that version of Windows Safari to be in sync with the iPhone. How do you know which version of Safari/WebKit is running in a given iPhone install? I don’t know! I can find no reference or history of versions on the Web. I suppose it could be gathered easily enough by installing each iPhone version and then either doing a Help->About in Safari or gathering the data from the user-agent string at browserhawk.com. Post a link in the comments if you’ve seen this data somewhere.
  2. Apparently iPhone Safari 3 is not the same code base as Safari 3! The code base was branched, and changes made in that branch could cause your results to vary between the real iPhone browser and the iPhone browser simulator. From the threads I’ve read, the differences seem to be more “shell”-type features, which lowers your risk if you’re just looking for rendering issues.

[Screenshot: the iPhone browser simulator]

