An automation engineer’s test automation progress is often a black box to the project team and managers, and that is serious “egg on the face” for any automation initiative. One day while automating, I started reminiscing about how I used to monitor and report test case status while doing functional testing, and thought to myself, “How can I do that with my test automation?” Shortly after, a process and a tool were born, and stats were included in my weekly reports. I also had the ability to provide detailed test descriptions. Now others had insight into my goal and my progress, I could estimate a completion date, and the project team could review my test descriptions looking for voids in coverage. A bonus benefit to tracking status is that multiple automation engineers can work on one project and not accidentally stomp on each other's work. Seems like a no-brainer, right? But more often than not I see automation engineers working in an automation black box, leaving them unaccountable to all.
Here is an example of how I make myself and my test automation accountable:
I stub out my test cases when reviewing requirements (the final number is my goal). Each test case is usually one method in my automation test suite; one hundred tests equates to 100 methods. I use separate classes to segregate functionality. My method names follow a pattern and are very descriptive, which helps me decipher what they are in large lists and allows for easy alphabetical sorting.
When stubbing the tests/methods, I write the test description/steps with its verification points. For example, in the screenshot below, the “Description” attribute contains these details.
I track test/method development status. In the example below you can see the various statuses that I use. Status is the key to monitoring progress!
I tie defect ids or agile task numbers to test cases, which makes for easy searching when I’m doing defect regression:
Finally, I use a tool/automation to extract the goal, status, and test descriptions. Note that in the “Stats” screenshot above, I have a Test Summary “Count”, which is my goal, a count of each state, and a percentage for each state. The “Completed” percentage is my progress toward the goal. I typically take a screenshot of this tab and paste it into my status report.
Note that in the “Test Details” screenshot above, I have columns for Class and Method, which allow me to sort by them. Then I have the test “Description”, the test “State”, the “Reason” for test blockage, and finally a place for “Comments”. This tab is nice for a quick overview of tests, and it allows sorting, which is handy if you want to, for example, sort by “Blocked”. It can also be exported into an Excel spreadsheet. This view is VERY helpful once you end up with hundreds of automated tests, because scrolling through hundreds of lines of code makes it easy to miss things or get confused.
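The math behind the “Stats” tab is simple enough to sketch. Here is a minimal Ruby version, assuming a small list of test records; the state names and data shape are illustrative, not the actual tool's internals:

```ruby
# Sketch of the "Stats" calculation: count each test's state and report
# "Completed" as progress toward the goal. State names are assumptions.
tests = [
  { method: "LoginValidUser",   state: "Completed" },
  { method: "LoginBadPassword", state: "Completed" },
  { method: "LoginLockedOut",   state: "Blocked"   },
  { method: "LoginEmptyFields", state: "Stubbed"   },
]

goal   = tests.size                                 # total test count = the goal
counts = tests.group_by { |t| t[:state] }
              .transform_values(&:size)             # e.g. {"Completed"=>2, ...}
percentages = counts.transform_values { |n| (100.0 * n / goal).round(1) }

puts "Goal: #{goal}"
counts.each { |state, n| puts "#{state}: #{n} (#{percentages[state]}%)" }
```

With the sample data above, “Completed” works out to 50% progress toward a goal of 4 tests.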
The five points made above were done in my .NET test automation environment, which uses a custom attribute I created called “TestProgress”. The reporting GUI uses reflection to extract the class, method, and attribute details. The example is for .NET, but this process and pattern could be used in any language you may be automating in. For example, in a scripting language (e.g. Ruby), you could provide “Test Progress” as a comment above the method and then use regular expressions to parse the files to create your report. The Test Progress comment could look something like:
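To make the Ruby variant concrete, here is a hedged sketch of such a comment convention and a regex parser for it. The field names (`state`, `desc`) and the comment layout are my own invention for illustration, not the original tool's format:

```ruby
# Illustrative "TestProgress" comment convention for Ruby, plus a regex
# parser that extracts method, state, and description for a report.
source = <<~SRC
  class LoginTests
    # TestProgress: state=Completed; desc=Verify valid user can log in
    def test_login_valid_user
    end

    # TestProgress: state=Blocked; desc=Verify lockout after 3 failures
    def test_login_lockout
    end
  end
SRC

pattern = /# TestProgress: state=(\w+); desc=(.+)\n\s*def (\w+)/
report = source.scan(pattern).map do |state, desc, method|
  { method: method, state: state, description: desc.strip }
end

report.each { |r| puts "#{r[:method]}: #{r[:state]} - #{r[:description]}" }
```

Run over every file in the suite, this yields the same goal/state/description data the .NET reflection approach produces.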
2009. What an eventful year. Eventful in my personal life as well as in my SQA career. A good, eventful year.
I didn’t blog much in 2009, 17 posts in all, and no topics that were SQA groundbreaking. Yeah, I’m pretty much ashamed of myself and have watched my blog fall off people’s radar. If I were to highlight my favorite post, it would be my turn from SQA words to illustrations with Do Loop Until Zero. A hit or a miss, I don’t know; I don’t get comments either way on this blog. But nonetheless, it’s something I enjoy doing. Hopefully you guys will see more of this “comic”; if all works out well, it will be in the 1st issue of the new and upcoming Software Testing Club magazine.
Though the blog was quiet, my SQA and testing career wasn’t. In the last year I had the opportunity to start filling a large gap in my testing experience portfolio. Prior to 2009 I had no experience in the Linux world and the technologies that surround it. Joining a new group within GoDaddy quickly changed this. In 2009 I did a 180-degree turn from my beloved Windows world and submerged myself in Linux in an effort to test and automate a new, internal product. I was scared to make the jump, mostly because my Windows wisdom would be put to little use, and my lack of Linux knowledge would make me a slower tester and automator. Not so enticing when I really pride myself on speed and efficiency (“Hire me, hire ME! I’m two testers for the price of one!”). Scared or not, it was an awesome opportunity to further my skills and help a 1.0 product using my experience with agile practices and automation. With the help of an awesome two-man development team, I was able to learn, automate, and wade through the following technology highlights in 2009:
Product: A storage system (C, mySQL):
I used PuTTY as an SSH client to the dev, test, and prod environments, which ran CentOS as the flavor of Linux
I extended developer unit tests and automated API functional and boundary testing with Perl unit testing (Test::Unit)
I extended PHPUnit to serve as an automation framework for functional tests (use case, boundary, error handling, etc.). The framework was named TAEL (Test Automation Ecosystem for Linux).
Product: FTP Server that uses the storage system (Perl, mySQL)
I automated use cases and FTP functions using TAEL. FTP functionality was tested using PHP’s FTP library. Validation was done through FTP responses and mySQL queries.
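Validating through FTP responses boils down to checking the three-digit reply codes the server sends back. A small sketch of that check, in Ruby for consistency with the other examples here (the reply strings are sample data, not captured from the real server):

```ruby
# Validate FTP server replies by their three-digit codes.
# Per RFC 959: replies begin with a 3-digit code; 2xx means success,
# 3xx intermediate, and 4xx/5xx mean failure.
def ftp_reply_ok?(reply)
  code = reply[/\A\d{3}/]        # leading 3-digit reply code, or nil
  !code.nil? && code.start_with?("2")
end

replies = [
  "230 User logged in, proceed.",
  "226 Transfer complete.",
  "550 Requested action not taken; file unavailable.",
]
replies.each { |r| puts "#{r[0, 3]} ok=#{ftp_reply_ok?(r)}" }
```

An assertion like this after each FTP call (login, put, get, delete) is enough to turn raw protocol replies into pass/fail results.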
I performance tested the FTP server and underlying storage system with Apache JMeter. FTP in JMeter is not very robust, and worse yet forces a connection open, logon and close for every request needed, which is not very realistic. Thankfully it’s open source (Java) so I extended it and tweaked it to fit our needs.
Product: User Management Web Service
I automated use cases, boundaries, etc with TAEL. Validation was done by querying mySQL or parsing the Web Service response using XPATH queries.
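The XPath side of that validation is straightforward to sketch. Here is a Ruby version using the standard-library REXML parser; the XML shape and element names are made up for illustration, not the actual web service's schema:

```ruby
require "rexml/document"

# Sketch of validating a web-service response with XPath queries,
# similar in spirit to the TAEL validation described above.
response = <<~XML
  <userResponse>
    <user id="42">
      <login>jdoe</login>
      <status>active</status>
    </user>
  </userResponse>
XML

doc    = REXML::Document.new(response)
login  = REXML::XPath.first(doc, "//user[@id='42']/login")&.text
status = REXML::XPath.first(doc, "//user/status")&.text

puts "login=#{login} status=#{status}"
```

Each test asserts the queried values against the expected state, with a matching mySQL query as the second source of truth.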
Tool: User Experience Monitor
In an effort to monitor response times on an ongoing basis, I wrote a script that executes basic functionality every 15 minutes and stores the timed results in FTP, where they are picked up and processed by a cron job that puts the results in a database. The cron job then puts the results into an XML format, which is viewed in a PHP web page using the chart control XML/SWF Charts. We found some very interesting activity and trends through this test/monitor. It turned out to be a very interesting, almost real-time QA asset for the team.
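The core of such a monitor is just “time an operation, emit one data row.” A minimal Ruby sketch of that loop body; the operation, element names, and XML shape are placeholders (the real script exercised product functionality and shipped results via FTP and cron):

```ruby
require "time"

# Time an operation and emit one XML row for a charting page.
def timed_sample(label)
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  yield                                           # the monitored operation
  elapsed_ms = ((Process.clock_gettime(Process::CLOCK_MONOTONIC) - start) * 1000).round
  %(<sample name="#{label}" ms="#{elapsed_ms}" at="#{Time.now.utc.iso8601}"/>)
end

row = timed_sample("login") { sleep 0.01 }        # stand-in for real work
puts row
```

Run every 15 minutes, rows like this accumulate into exactly the kind of trend data that surfaced the interesting activity mentioned above.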
Automation with Ruby: With a department-wide goal that everybody must do a little automation, I led them down the path of Ruby/Watir (due to cost, and Ruby being pretty easy to learn). The results are looking pretty good: adoption has gone well and progress is being made. Here are a few details about the framework that I built over a few weekends:
Uses a pattern that I call “Test, Navigate, Execute, Assert”
Process: Since I’ve been lucky enough to work with a highly motivated, small team of three, our process needs to be, and has been, really light. We’ve been pretty successful at being extremely agile. For project management we followed a scrum-like process for a little over half a year using the tool Target Process, but then moved to a Kanban approach with our own home-grown tool. Recently we moved from the home-grown tool to a trial with Jira, while trying to maintain the project in Kanban style. I have to say that I really like Kanban; it works particularly well for our team because it is small. When you’re as small and tight-knit as our team is, everyone always knows what each other is working on, so the more light-weight the better. The largest payoff of these types of processes and tools for our team is tracking backlog items as well as giving management insight into what we’re up to.
What’s in store for me in 2010? Well, I’ll likely be working on the same products, but as far as major learning and growth opportunity I’m excited to dive into the awesome new features of Visual Studio 2010 for Testers as well as to learn and use some C++. Now, if I can just convince myself to blog about those experiences as I go.
With help from Gerhard, a QAInsight reader, the User Agent Switcher MONSTER XML file has been updated to use the new folder feature. Also, another iPhone user agent has been added, as well as one for Chrome. As always, the permalink is here, and it can be found in the right navigation under the “My Testing Tools” header, link: “User-Agent Info and Tools“.
The recent release of Firefox 3.5 on June 30th appears to be riddled with defects, and its woes have hit the press. The article points out 55 open defects; I went out and assessed the mess myself and see 58 defects and 22 enhancements associated with the 3.5 and 3.6a1 milestones. But here is the kicker: all the defects were reported BEFORE the release date of June 30th. Yet they decided to release with the defects anyway. See the summary for yourself here; notice the dates in the “Opened” column for the 3.6a1 list. Mozilla isn’t shy about their lack of quality either: on the Mozilla QA site they boldly proclaim the upcoming tester’s event “Firefox BugDay – Catch Missed Blocker, Critical, and Major 3.5 bugs!“ Jaw-dropping key word being: MISSED. Wow! At least we have to admire their transparency, right?
Mozilla, these are the things that kill browsers. Firefox has spent a long time acquiring users, and when you have people uninstalling because of issues, well, it’s just not good. The picture is bleak. Releasing with knowledge of critical, major, and a pile of normal defects is bad software development and makes me question the whole “Community Development” model. Does community development mean to Mozilla “Community QA after a release”? Let’s hope not; I don’t want to see another browser disappear off of the Mozilla branch.
I’ve been playing around with Microsoft Visual Studio 2010 Team System (Beta 1) the last few weeks, and I have to say that I’m pretty excited about what Microsoft is doing to help tie development, testing, and environments together. The thing that stands out the most to me is the “Test and Lab Manager”. This tool allows me to write manual tests, automate tests, and then configure, control, and run those tests in a specified physical or virtual environment. Although Beta 1 is pretty rough around the edges, what I’m seeing is really exciting. Through my playing around and research I’ve gathered a few links full of information, screenshots, demos, videos, and official documentation. Peruse and enjoy, but before you get started, go get a rag so that you can clean the drool off of the side of your mouth when you’re done.
It’s never really been quick or easy to mine the data (which I seem to do about once a year); when I reassess the cheat-sheet I typically spend most of the time re-reminding myself how to gather the info for the various browsers, all the while discovering new sources. Here is my high-level browser info excavating process:
Get the info from the browser itself:
Look at the browser download site, FAQ, & Release Notes
Search by keywords, and look at articles and comments on the Web
And on to a few lower-level details for a few of the most popular browsers:
Rendering Engine: Trident is the name of the IE rendering engine, and it was once believed that the version of Trident matched the version of the MSHTML.dll found at C:\windows\System32\. But with IE 8.0, the IE team has a reference on their blog that the rendering engine in 8.0 is Trident 4.0, which can now be found in the user-agent string. This version conflicts with prior data I had gathered, but we’ll run with it. Hopefully, from here on out you can just gather the version from the user-agent string; only time will tell. Here is an example user-agent string for IE8: “Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0;)”
Rendering Engine: Webkit is the rendering engine for Chrome. To get the rendering engine version, just type “about:” in the browser URL bar. The version will follow the “Webkit” key. Notice that the user agent contains the same Webkit number, hence the rendering engine version can be gathered from the user-agent string as well.
Rendering Engine: Firefox uses the Gecko rendering engine, and the version number can be found in the user-agent string (which can be seen by typing “about:” in the URL bar or going to Help->About in the menu). For example, in the following string: “Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.0.10) Gecko/2009042316 Firefox/3.0.10”, “rv:1.9.0.10” is the version and “2009042316” is the build number.
Rendering Engine: Webkit is the rendering engine in Safari, and the version can be found in the user-agent string. For example, in the string: “Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US) AppleWebKit/528.18 (KHTML, like Gecko) Version/4.0 Safari/528.17”, the version number is 528.18. This can also be seen in the webkit.dll file property “Product Version” in Windows (\program files\safari\).
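The per-browser notes above all come down to matching a pattern in the user-agent string. A sketch of that extraction in Ruby; the UA strings are the examples from this post, and the patterns are illustrative, not exhaustive:

```ruby
# Pull rendering-engine versions out of user-agent strings with regexes.
# Order matters: the Safari/Chrome UA contains "like Gecko", so the
# Gecko pattern keys on "rv:" rather than the word "Gecko".
ENGINE_PATTERNS = {
  "Trident" => /Trident\/([\d.]+)/,     # IE8+
  "Gecko"   => /rv:([\d.]+)/,           # Firefox
  "WebKit"  => /AppleWebKit\/([\d.]+)/, # Safari, Chrome
}

def engine_version(ua)
  ENGINE_PATTERNS.each do |engine, pattern|
    if (m = ua.match(pattern))
      return [engine, m[1]]
    end
  end
  nil
end

ie8    = "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0;)"
safari = "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US) AppleWebKit/528.18 " \
         "(KHTML, like Gecko) Version/4.0 Safari/528.17"

puts engine_version(ie8).inspect
puts engine_version(safari).inspect
```

The same function works on any UA string your server logs capture, which is handy when compiling the cheat-sheet.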
And there you have it, the method to my madness. If you have differing or better data please feel free to add it in a comment.
This link/post will serve as the official place for the Browser Compatibility Cheat-Sheet and the post can be permanently accessed from the side navigation. Any new updates/links to the cheat-sheet will be put in this post.
What is the Browser Compatibility Cheat-Sheet?
Download the latest Browser Compatibility Cheat-Sheet here (zipped Excel file):
VSTS 2010 looks like it has great potential for testers and developers, but I think Microsoft is still behind in browser automation and functionality compared to the great work from Art of Test with WebAii and Design Canvas. I’m excited that Microsoft is actively pursuing and growing in this space, though. It is needed badly! Here is the current proposed platform support for VSTS 2010:
Sean Sullivan, the CTO of Blackbaud Labs, created an iPhone browser simulator over a weekend, and it’s FREE!
Basically, what he did was take the Webkit rendering engine from Safari and embed it into a Windows application. Browser requests are being made through Safari Webkit using an iPhone mobile user-agent string. Here is an example of the user-agent string that came into my website while capturing the screenshot at the bottom of this post:
BUT, as always, when it comes to browser compatibility testing, there is nothing like testing the real browser on the real platform. Keep these things in mind, my dear browser compatibility testing friend:
The rendering results will be based on the version of Safari for Windows you have installed and its Webkit version. This will likely not match the version you intend to test on the iPhone. For example, I’ve installed Safari version 3.2.2, which uses Webkit 525.28.1, so the page I’m testing will utilize that version. Make sure you are aware of the iPhone browser version you need to be testing and the Webkit version that comes along with it. You’ll need to install that version of Windows Safari to be in sync with the iPhone. How do you know which version of Safari/Webkit is running in an iPhone install? I don’t know! I can find no reference or history of versions on the Web. I suppose it could be gathered easily enough by installing each iPhone version and then either doing a Help->About in Safari, or gathering the data from the user-agent string by going to browserhawk.com. Post a link in the comments if you’ve seen this data somewhere.
Apparently iPhone Safari 3 is not the same code base as Safari 3! The code base was branched. Changes made in that branch could cause your results to vary between the real iPhone browser and the iPhone browser simulator. From the threads I’ve read, the differences seem to be more “shell”-type features, which lowers your risk if you’re just looking for rendering issues.