Archive for the ‘Quality Assurance’ Category

Watir Podcast #32 with guest Brent Strange!


Watir Podcast #32 is out, and in this episode Zeljko and Gregg have me as a guest! In #32, we spend some time talking about how our Hosting Team at GoDaddy uses Watir for website automation, the supporting framework and patterns, and much more. When your ears have a few spare minutes you can get the podcast at WatirPodcast.com and TestingPodcast.com.


Tips for Automation Success: Making Tests Repeatable and Consecutive


Being able to run individual tests repeatedly, and being able to run them all consecutively, is a serious challenge when it comes to automation. I think all automation engineers want it, but some fall prey to not having it due to time constraints, setup and teardown complexity, or system access limitations.

What good is automation that can’t be run repeatedly, much less run all at once? It seems like a dumb question, but I’ve seen plenty of non-repeatable tests and suites written by others, and I’ve occasionally fallen prey myself.

How do I ensure repeatability?

  • When planning the automation time budget, I allocate at least 15% for building infrastructure. Can’t get the time? Fight for it; as an automation engineer, it’s your job. You have to sell the fact that if you don’t have the time to build in the infrastructure for repeatability, your tests will be less useful, take longer to set up due to manual intervention, and will eventually crumble into uselessness over time, because manual intervention is not something you can easily hand off to another automation engineer or developer.
  • I start building in repeatability with test #1. This test can often take the longest to write because I’m usually building the infrastructure needed to run all my tests repeatedly: reusable functions, system access functions (db connections, etc.), and setup and teardown functions. (A minimal sketch of this kind of infrastructure follows this list.)
  • When the test is done being developed, I run it, run it again, and run it again; rinse and repeat until it passes every time.
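Here is a minimal sketch of what that infrastructure can look like, using Ruby’s Test::Unit and DBI (paired with the dbd-mysql driver mentioned later on this page). The table, credentials, and helper method are made up purely for illustration; this is a sketch of the idea, not a production test.

    require "test/unit"
    require "dbi"   # used with the dbd-mysql driver mentioned later on this page

    # A sketch only: setup opens a connection, the test tags the data it creates,
    # and teardown removes that data so the test passes on every run.
    class AccountTests < Test::Unit::TestCase
      TEST_TAG = "autotest_"   # hypothetical marker so cleanup only touches our rows

      def setup
        # Connection string, credentials, and schema are placeholders.
        @dbh = DBI.connect("DBI:Mysql:app_test:test-db-host", "qa_user", "secret")
      end

      def teardown
        # Delete anything this test created, then release the connection,
        # so the next run (or the next test in the list) starts clean.
        @dbh.do("DELETE FROM accounts WHERE name LIKE ?", "#{TEST_TAG}%")
        @dbh.disconnect
      end

      def create_account(name)
        # Prefix test data so teardown can find and remove it reliably.
        @dbh.do("INSERT INTO accounts (name) VALUES (?)", "#{TEST_TAG}#{name}")
      end

      def test_create_account_persists
        create_account("alpha")
        count = @dbh.select_one("SELECT COUNT(*) FROM accounts WHERE name = ?",
                                "#{TEST_TAG}alpha")[0]
        assert_equal(1, count)
      end
    end

Because the cleanup lives in teardown rather than in somebody’s head, handing the suite to another engineer doesn’t require any manual reset steps.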

How do I ensure I can run all my tests consecutively?

  • I build each test so that it can be run independently, which means that when I run them all from a list, no matter which order they are in, they will not interfere with each other. This is why extracting common code into methods, setup, and teardown is instrumental; I reuse it in almost every test. (A tiny runner sketch follows this list.)
  • I try to run all the tests after each new test is developed and completed, to make sure the new test plays nice with the others. This isn’t always possible if you have long-running tests, but when it is, I definitely do it.
  • I run them all at least once a day.
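As for “run them all from a list”, with Ruby’s Test::Unit that can be as small as a runner script that loads every test file. The tests/ directory and the *_test.rb naming convention below are assumptions for illustration:

    # run_all.rb - hypothetical suite runner: load every test file, then let
    # Test::Unit's auto-runner execute all of the loaded TestCase classes.
    require "test/unit"

    Dir.glob(File.expand_path("tests/**/*_test.rb", File.dirname(__FILE__))).sort.each do |test_file|
      require test_file
    end

Because each test cleans up after itself, the order the files load in doesn’t matter.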

1st Edition of STC Magazine is out


The first issue of the Software Testing Club (STC) magazine is out! I had a chance to review it before the release and I have to say it’s a really cool magazine. It’s fun, different, and not so stuffy. SQA talk can get so boring, but STC breaks out of that box. Between the articles written by the community, the artwork, the comic strips, and the smart-ass QA and development quotes on the bottom-left of every other page, I found myself having a great time reading it. I really enjoyed the comics by Andy Glover and found myself laughing out loud over them.

I’d also like to point out that my Do Loop Until 0 comic is in the magazine. Not necessarily funny, but a quick view of the realities of testers and developers in the software development environment. I’ll admit Do Loop Until 0 can be a little deep at times, but if you study the details closely the irony will hit you like a sledgehammer. The more I do the strip, the more I realize how much Spy vs. Spy influenced me as a child.

Take a look for yourself here; I really do think you’d enjoy it.




Tips for Automation Success: Toot Your Horn


Toot your automation horn! “Beep-beep!” Or is that a “HONK-HONK!”? Typically, people don’t know what you’re up to in your little test automation world if you don’t communicate/toot. Communication gets it out there, and getting it out there will allow it to spread: verbally, in status reports, in executive summaries, etc. “What do I toot?”, you say? Toot your successes and your failures:

  1. Toot: Your test stats:
    • Calculate the time saved by running automated tests vs. running them manually. Toot the time saved per test run, per week, per month, per year. (A quick worked example follows this list.)
    • Automated test case count
    • Test assertion count (often 4x the number of tests)
    • Count of and description of defects found
    • Count and description of defects found through early involvement
  2. Toot: Your test framework features and value
    • Code reuse
    • Consistency
    • Shared tests
    • Patterns and practices
  3. Toot: Your failures:
    • So that other automation engineers don’t make the same mistakes
    • To keep things realistic. Positive only is hard to believe!
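As a quick worked example of the time-saved math (numbers invented purely for illustration): if a regression pass takes 8 hours by hand and 20 minutes automated, each run saves roughly 7.7 hours; run it after every daily build and that’s about 38 hours a week, roughly 160 hours a month, and close to 2,000 hours a year. Those are the kinds of numbers worth tooting.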

There is a fine line for tooting automation, “To toot or not to toot?”, that is the question. Don’t be (too) cocky. For example, a good toot is “Automated regression passed! Now that’s nice, the state of the build determined in 2 minutes!”. A bad toot: “This automation is so awesome, you guys would be screwed without it!”. Don’t over toot. Nobody likes an annoying tooter. Toot stats in your status report. Verbally toot once or twice a week to the project/Dev team. Toot your heart out to your fellow automation engineers, they are on the same page.


Audio Podcasts on Testing: TestingPodcast.com


Zeljko Filipin has put together a site that encompasses many testing-related audio podcasts at TestingPodcast.com. It’s amazing to see how audio podcasts have grown in the last year within the testing community. QA and testing voices are literally heard, and that’s pretty cool.

Stay tuned to TestingPodcast.com and you’ll be sure to hear my monotone voice in the next month or so. If you’re a true fan you’ve heard it already in my testing screencasts 🙂


Tips for Automation Success: Track & Report Test Progress


An automation engineer’s test automation progress is often a black box to the project team and managers, and that is serious “egg on the face” for any automation initiative. One day while automating, I started reminiscing about how I used to monitor and report test case status while doing functional testing, and thought to myself, “How can I do that with my test automation?” Shortly after, a process and a tool were born, and stats were included in my weekly reports. I also had the ability to provide detailed test descriptions. Now others had insight into my goal and my progress, I could estimate a completion date, and the project team could review my test descriptions looking for voids in coverage. A bonus benefit to tracking status is that multiple automation engineers can work on one project and not accidentally stomp on each other. Seems like a no-brainer, right? But more often than not I see automation engineers working in an automation black box, leaving them unaccountable to all.

Here is an example of how I make myself and my test automation accountable:

  1. I stub out my test cases when reviewing requirements (the final number is my goal). For example, each test case is usually one method in my automation test suite, so one hundred tests equates to 100 methods. I use separate classes to segregate functionality. My method names follow a pattern and are very descriptive, which helps me decipher what they are when they appear in large lists and allows for easy alphabetical sorting.
  2. When stubbing the test/method, I write the test description/steps with its verification points. For example, in the screenshot below, the “Description” attribute contains these details. [Screenshot: test description]
  3. I track test/method development status. In the example below you can see the various statuses that I use. Status is the key to monitoring progress! [Screenshot: test status]
  4. I tie defect IDs or agile task numbers to test cases, which makes for easy searching when I’m doing defect regression:
    [Screenshot: defect IDs tied to test cases]
  5. Finally, I use a tool/automation to extract goal, status, and test description. [Screenshot: Test Stats tab] Note that in the above “Stats” screenshot I have a Test Summary “Count”, which is my goal, along with a count and a percentage for each of the various states. The “Completed” percentage is my progress toward the goal. I typically take a screenshot of this tab and paste it into my status report.

    [Screenshot: Test Details tab] Note that in the above “Test Details” screenshot, I have columns for Class and Method, which allow me to sort by them. Then I have a test “Description”, the test “State”, the “Reason” for test blockage, and finally a place for “Comments”. This tab is nice for a quick overview of tests, and the sorting is handy if you want to, for example, sort by “Blocked”. It can also be exported into an Excel spreadsheet. This view is VERY helpful once you have hundreds of automated tests, because scrolling through hundreds of lines of code makes things easy to miss or confusing.

 
The five points above were done in my .NET test automation environment, which uses a custom attribute I created called “TestProgress”. The reporting GUI uses reflection to extract the class, method, and attribute details. The example is .NET, but this process and pattern could be used in whatever language you’re automating in. In a scripting language such as Ruby, for example, you could provide “Test Progress” as a comment above the method and then use regular expressions to parse the files and create your report. The Test Progress comment could look something like:

[Screenshot: Ruby Test Progress comment]
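Below is a rough sketch of that comment-plus-regex idea in Ruby. The “TestProgress” comment format, the tests/ directory layout, and the report wording are assumptions made for illustration, not the actual tool described above:

    # test_progress_report.rb - hypothetical report generator: scan the test
    # files for "TestProgress" comments and tally the states.
    #
    # Assumed comment format, one line directly above each test method:
    #
    #   # TestProgress: Completed | Verify a new account can log in over FTP
    #   def test_new_account_ftp_login
    #
    MARKER = /#\s*TestProgress:\s*(\w+)\s*\|\s*(.+?)\s*\n\s*def\s+(test_\w+)/

    counts  = Hash.new(0)
    details = []

    Dir.glob("tests/**/*_test.rb").each do |file|
      File.read(file).scan(MARKER) do |state, description, method|
        counts[state] += 1
        details << [method, state, description]
      end
    end

    # Summary: the total is the goal; per-state counts show progress toward it.
    total = details.size
    puts "Test count (goal): #{total}"
    counts.each do |state, count|
      pct = total.zero? ? 0 : (100.0 * count / total).round
      puts "  #{state}: #{count} (#{pct}%)"
    end

    # Detail: one row per test, handy for pasting into a status report.
    details.sort.each { |method, state, desc| puts "#{method} | #{state} | #{desc}" }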


Testing in 2009, a Year in Review


2009. What an eventful year. Eventful in my personal life as well as in my SQA career. A good, eventful year.

I didn’t blog much in 2009, 17 posts in all, and no topics that were SQA groundbreaking. Yeah, I’m pretty much ashamed of myself and have watched my blog fall off people’s radar. If I were to highlight my favorite post, it would be my turn from SQA words to illustrations with Do Loop Until Zero. A hit or a miss, I don’t know; I don’t get comments either way on this blog. But nonetheless, it’s something I enjoy doing. Hopefully you guys will see more of this “comic”; if all works out well, it will be in the 1st issue of the new and upcoming Software Testing Club magazine.

Though the blog was quiet, my SQA and testing career wasn’t. In the last year I had the ability to start filling a large gap in my testing experience portfolio. Prior to 2009 I had no experience in the Linux world and the technologies that surround it. Joining a new group within GoDaddy quickly changed this. In 2009 I did a 180-degree turn from my beloved Windows world and submerged myself in Linux in an effort to test and automate a new, internal product. I was scared to make the jump, mostly because my Windows wisdom would be put to little use, and my lack of Linux knowledge would make me a slower tester and automator. Not so enticing when I really pride myself on speed and efficiency (“Hire me, Hire ME! I’m two testers for the price of one!”). Scared or not, it was an awesome opportunity to further my skills and help a 1.0 product using my experience with agile practices and automation. With the help of an awesome two-man development team, I was able to learn, automate, and wade through the following technology highlights in 2009:

Product: A storage system (C, MySQL)

  • I used PuTTY as an SSH client to the dev, test, and prod environments running CentOS as our flavor of Linux
  • I extended developer unit tests and automated API functional and boundary testing with Perl unit testing (Test::Unit)
  • I extended PHPUnit to serve as a framework for automating functional tests (use case, boundary, error handling, etc.). The framework was named TAEL (Test Automation Ecosystem for Linux).

Product: FTP Server that uses the storage system (Perl, MySQL)

  • I automated use cases and FTP functions using TAEL. FTP functionality was tested using PHP’s FTP library. Validation was done through FTP responses and MySQL queries.
  • I performance-tested the FTP server and underlying storage system with Apache JMeter. FTP in JMeter is not very robust, and worse yet it forces a connection open, logon, and close for every request, which is not very realistic. Thankfully it’s open source (Java), so I extended and tweaked it to fit our needs.

Product: User Management Web Service

  • I automated use cases, boundaries, etc. with TAEL. Validation was done by querying MySQL or parsing the Web Service response using XPath queries.

Tool: User Experience Monitor

  • In an effort to monitor response times on an ongoing basis, I wrote a script that executes basic functionality every 15 minutes and stores the timed results on an FTP server, where they are picked up and processed by a cron job that puts the results in a database. The cron job then takes the results and puts them into an XML format, which is viewed in a PHP web page using the XML/SWF Charts chart control. We found some very interesting activity and trends through this test/monitor, and it turned out to be a very interesting, almost real-time QA asset for the team. (A rough sketch of the measurement script follows below.)
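Here is a rough sketch of what the measurement half of that monitor could look like, written in Ruby for illustration (the original script’s language isn’t stated above); the host, credentials, timed operation, and file paths are all placeholders:

    require "net/ftp"
    require "benchmark"

    # Hypothetical probe meant to be scheduled every 15 minutes (e.g. via cron).
    HOST = "ftp.example.com"
    USER = "monitor_user"
    PASS = "secret"

    # Time one representative user operation: log in and download a small file.
    elapsed = Benchmark.realtime do
      Net::FTP.open(HOST, USER, PASS) do |ftp|
        ftp.getbinaryfile("healthcheck/sample.bin", "/tmp/sample.bin")
      end
    end

    # Record the measurement as "timestamp,operation,seconds".
    local = "/tmp/uem_#{Time.now.to_i}.csv"
    File.open(local, "w") do |f|
      f.puts "#{Time.now.utc.strftime('%Y-%m-%d %H:%M:%S')},login_and_download,#{'%.3f' % elapsed}"
    end

    # Drop the record on the FTP server, where the cron job described above
    # picks it up, loads it into the database, and feeds the charts.
    Net::FTP.open(HOST, USER, PASS) do |ftp|
      ftp.puttextfile(local, "results/#{File.basename(local)}")
    end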

Product: REST service

Automation with Ruby: With a department-wide goal that everybody must do a little automation, I led them down the path of Ruby/Watir (due to cost, and Ruby being pretty easy to learn). The results are looking pretty good: adoption has gone well and progress is being made. Here are a few details about the framework that I built over a few weekends (a minimal sketch of the pattern follows the list):

  • Uses a pattern that I call “Test, Navigate, Execute, Assert”
  • Manages tests with the Ruby library: Test::Unit
  • Uses Watir for web page automation
  • Web Service automation is done with soap4r & REXML
  • MySQL database validation with the gem dbd-mysql
  • Data driven automation from Excel using Roo
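To give a feel for how those pieces fit together, here is a minimal sketch of the “Test, Navigate, Execute, Assert” shape using classic Watir, Test::Unit, and DBI; the page, fields, table, and assertions are hypothetical and far simpler than the real framework:

    require "test/unit"
    require "watir"   # web page automation (drove Internet Explorer at the time)
    require "dbi"     # database validation, via the dbd-mysql driver

    class AccountSignupTests < Test::Unit::TestCase
      def setup
        @browser = Watir::IE.new
        @dbh = DBI.connect("DBI:Mysql:app_test:test-db-host", "qa_user", "secret")
      end

      def teardown
        # Clean up the test data and close everything down.
        @dbh.do("DELETE FROM accounts WHERE username = ?", "autotest_user")
        @dbh.disconnect
        @browser.close
      end

      # Test: a new account can be created from the signup page.
      def test_signup_creates_account
        # Navigate
        @browser.goto("http://test.example.com/signup")
        # Execute
        @browser.text_field(:name, "username").set("autotest_user")
        @browser.text_field(:name, "password").set("Secret123")
        @browser.button(:value, "Create Account").click
        # Assert - against the page, and against the database
        assert(@browser.text.include?("Account created"))
        count = @dbh.select_one("SELECT COUNT(*) FROM accounts WHERE username = ?",
                                "autotest_user")[0]
        assert_equal(1, count)
      end
    end

In the data-driven tests, the inputs come from an Excel sheet via Roo rather than being hard-coded as they are here.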

 
Process: Since I’ve been lucky enough to work with a highly motivated, small team of three, our process needs to be, and has been, really light. We’ve been pretty successful at being extremely agile. For project management we followed a Scrum-like process for a little over half a year using the tool Target Process, but then moved to a Kanban approach with our own home-grown tool. Recently we moved from the home-grown tool to a trial with Jira, while trying to maintain the project in Kanban style. I have to say that I really like Kanban; it works particularly well for our team because it is small. When you’re as small and tight-knit as our team is, everyone always knows what everyone else is working on, so the more lightweight the better. The largest payoff of these kinds of processes and tools for our team is tracking backlog items, as well as giving management insight into what we’re up to.

What’s in store for me in 2010? Well, I’ll likely be working on the same products, but as far as major learning and growth opportunity I’m excited to dive into the awesome new features of Visual Studio 2010 for Testers as well as to learn and use some C++. Now, if I can just convince myself to blog about those experiences as I go.


User Agent Switcher XML file update


With help from Gerhard, a QAInsight reader, the User Agent Switcher MONSTER XML file has been updated to use the new folder feature. Another iPhone user agent has been added, as well as one for Chrome. As always, the permalink is here, and it can be found in the right navigation under the “My Testing Tools” header, link: “User-Agent Info and Tools”.

