Archive for the ‘Automation’ Category

Watir Podcast #32 with guest Brent Strange!


Watir Podcast #32 is out and in this episode Zeljko and Gregg have me as a guest! In #32, we spend some time talking about how our Hosting Team at GoDaddy uses Watir for website automation, the supporting framework and patterns, and much more. When your ears have a few spare minutes you can get the podcast at WatirPodcast.com and TestingPodcast.com.


Tips for Automation Success: Making Tests Repeatable and Consecutive


Being able to run individual tests repeatedly, and to run them all consecutively, is a serious challenge in automation. I think all automation engineers want it, but some fall prey to not having it due to time constraints, setup and teardown complexity, or system access limitations.

What good is automation when it can't be run repeatedly, let alone run all at once? It seems like a dumb question, but I've seen plenty of non-repeatable tests or suites written by others, and I've occasionally fallen prey myself.

How do I ensure repeatability?

  • When planning the automation time budget, I allocate at least 15% for building infrastructure. Can't get the time? Fight for it; as an automation engineer, that's your job. You have to sell the fact that if you don't have the time to build in the infrastructure for repeatability, your tests will be less useful, will take longer to set up due to manual intervention, and will eventually crumble into uselessness over time, because manual intervention is not something you can easily hand off to another automation engineer or developer.
  • I start building in repeatability with test #1. This test often takes the longest to write because I'm building the infrastructure needed to run all my tests repeatedly (reusable functions, system access functions such as database connections, and setup and teardown functions).
  • When the test is done being developed, I run it, then run it again, and again; rinse and repeat until it passes every time.

How do I ensure I can run all my tests consecutively?

  • I build each test so that it can be run independently, which means that when I run them all from a list, no matter which order they are in, they will not interfere with each other. This is why extracting common code into methods, setup, and teardown is instrumental; I reuse it in almost every test (see the sketch after this list).
  • I try to run all the tests after each new test is developed and completed, to make sure the new test plays nice with the others. This isn't always possible if you have long-running tests, but when the suite runs quickly, I definitely do this.
  • I run them all at least once a day.
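
To make the setup and teardown idea concrete, here is a minimal sketch of what that structure can look like in a .NET test class. The class and data names are purely illustrative (not from a real project); the point is that every test starts from the same known state, no matter which tests ran before it or in what order.

using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CustomerSearchTests
{
    // Stand-in for shared state (a database table, a test account, a temp directory, ...).
    private static readonly List<string> Customers = new List<string>();

    // Runs before EVERY test, so each test starts from the same known state.
    [TestInitialize]
    public void Setup()
    {
        Customers.Clear();          // wipe anything a previous test left behind
        Customers.Add("Jones");     // seed only the data these tests need
        Customers.Add("Smith");
    }

    // Runs after EVERY test, even a failing one, so the next test is not polluted.
    [TestCleanup]
    public void Teardown()
    {
        Customers.Clear();
    }

    [TestMethod]
    public void Search_FindsSeededCustomer()
    {
        Assert.IsTrue(Customers.Contains("Jones"));
    }

    [TestMethod]
    public void Add_ThenSearch_DoesNotLeakIntoOtherTests()
    {
        Customers.Add("Brown");                 // this test mutates the shared state...
        Assert.AreEqual(3, Customers.Count);    // ...but Setup/Teardown keep it isolated
    }
}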

Tips for Automation Success: Toot Your Horn


Toot your automation horn! "Beep-beep!" Or is that a "HONK-HONK!"? Typically, people don't know what you're up to in your little test automation world if you don't communicate/toot. Communication gets it out there, and getting it out there allows it to spread: verbally, in status reports, in executive summaries, etc. "What do I toot?", you say? Toot your success and your failure:

  1. Toot: Your test stats:
    • Calculate time saved by running automated tests vs. manually running the tests. Toot the time saved per test run, per week, per month, per year (see the worked example after this list).
    • Automated test case count
    • Test assertion count (often 4x the number of tests)
    • Count and description of defects found
    • Count and description of defects found through early involvement
  2. Toot: Your test framework features and value
    • Code reuse
    • Consistency
    • Shared tests
    • Patterns and practices
  3. Toot: Your failures:
    • So that other automation engineers don’t make the same mistakes
    • To keep things realistic. Positive only is hard to believe!
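
To make the first stat concrete, here is a purely illustrative back-of-the-envelope example (the numbers are made up, not from the original post): if 200 automated tests replace manual checks that average 3 minutes each, one run saves roughly 200 × 3 = 600 minutes, or about 10 hours. Run the suite every weeknight and that's around 50 hours a week, or roughly 2,500 hours a year, minus whatever time you spend maintaining the suite. Those are the kinds of numbers worth tooting.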

There is a fine line when tooting automation. "To toot or not to toot?", that is the question. Don't be (too) cocky. For example, a good toot is "Automated regression passed! Now that's nice, the state of the build determined in 2 minutes!". A bad toot: "This automation is so awesome, you guys would be screwed without it!". Don't over-toot; nobody likes an annoying tooter. Toot stats in your status report. Verbally toot once or twice a week to the project/dev team. Toot your heart out to your fellow automation engineers; they are on the same page.


Tips for Automation Success: Track & Report Test Progress


An automation engineer's test automation progress is often a black box to the project team and managers, and that is serious "egg on the face" for any automation initiative. One day while automating, I started reminiscing about how I used to monitor and report test case status while doing functional testing, and thought to myself, "How can I do that with my test automation?". Shortly after, a process and a tool were born, and stats were included in my weekly reports. I also had the ability to provide detailed test descriptions. Now others had insight into my goal and my progress, I could estimate a completion date, and the project team could review my test descriptions looking for gaps in coverage. A bonus benefit of tracking status is that multiple automation engineers can work on one project and not accidentally stomp on each other. Seems like a no-brainer, right? But more often than not I see automation engineers working in an automation black box, leaving them unaccountable to all.

Here is an example of how I make myself and my test automation accountable:

  1. I stub out my test cases when reviewing requirements (the final number is my goal). For example, each test case is usually one method in my automation test suite, so one hundred tests equate to 100 methods. I use separate classes to segregate functionality. My method names follow a pattern and are very descriptive, which helps me decipher what they are when they are in large lists and allows for easy alphabetical sorting.
  2. When stubbing the tests/methods, I write the test description/steps with its verification points. For example, in the screenshot below, the "Description" attribute contains these details. [Screenshot: Test description]
  3. I track test/method development status. In the example below you can see the various statuses that I use. Status is the key to monitoring progress! [Screenshot: Test status]
  4. I tie defect IDs or agile task numbers to test cases, which makes for easy searching when I'm doing defect regression:
    [Screenshot: defect IDs tied to a test case]
  5. Finally, I use a tool/automation to extract the goal, status, and test descriptions: [Screenshot: Test Stats tab] Note that in the "Stats" screenshot above, I have a Test Summary "Count", which is my goal, a count of tests in each of the various states, and a percentage for each state. The "Completed" percentage is my progress towards the goal. I typically take a screenshot of this tab and paste it into my status report.

    [Screenshot: Test Details tab] Note that in the "Test Details" screenshot above, I have columns for Class and Method, which allow me to sort by them. Then I have the test "Description", the test "State", the "Reason" for test blockage, and finally a place for "Comments". This tab is nice for a quick overview of tests, and it allows sorting, which is handy if you want to, for example, sort by "Blocked". It can also be exported into an Excel spreadsheet. This view is VERY helpful when you end up having hundreds of automated tests, because scrolling through hundreds of lines of code can make things easy to miss or get confusing.

 
The five points above were done in my .NET test automation environment, which uses a custom attribute I created called "TestProgress". The reporting GUI uses reflection to extract the class, method, and attribute details (a rough C# sketch of this approach follows below). The example is for .NET, but this process and pattern could be used in whatever language you may be automating in. For example, in a scripting language (e.g., Ruby), you could provide "Test Progress" as a comment above each method and then use regular expressions to parse the files to create your report. That Test Progress comment could look something like:

[Image: Ruby "Test Progress" comment example]
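
The post doesn't include the attribute's source, but here is a minimal, hypothetical C# sketch of what a "TestProgress" attribute and the reflection-based extraction might look like. The property names and the sample test are illustrative only; the real implementation may differ.

using System;
using System.Linq;
using System.Reflection;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical sketch of the custom attribute; properties are illustrative.
[AttributeUsage(AttributeTargets.Method)]
public class TestProgressAttribute : Attribute
{
    public string Description { get; set; }  // test steps and verification points
    public string State { get; set; }        // e.g. "Stubbed", "In Progress", "Blocked", "Completed"
    public string Reason { get; set; }       // why the test is blocked, if it is
    public string DefectIds { get; set; }    // defect or agile task numbers for easy searching
}

[TestClass]
public class CheckoutTests
{
    [TestMethod]
    [TestProgress(State = "Stubbed",
        Description = "Add item to cart, check out with a valid card, verify the order row in the database.")]
    public void Checkout_ValidCard_CreatesOrder()
    {
        // Stubbed during requirements review; it still counts toward the goal.
    }
}

public class TestProgressReport
{
    // The reporting GUI uses reflection like this to pull class, method,
    // and attribute details out of the compiled test assembly.
    public static void Print(Assembly testAssembly)
    {
        var entries =
            from type in testAssembly.GetTypes()
            from method in type.GetMethods()
            let progress = method.GetCustomAttribute<TestProgressAttribute>()
            where progress != null
            select new { Class = type.Name, Method = method.Name, progress.State, progress.Description };

        foreach (var e in entries)
            Console.WriteLine("{0}.{1} [{2}]: {3}", e.Class, e.Method, e.State, e.Description);
    }
}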


Testing in 2009, a Year in Review


2009. What an eventful year. Eventful in my personal life as well as in my SQA career. A good, eventful year.

I didn't blog much in 2009, 17 posts in all, and no topics that were SQA groundbreaking. Yeah, I'm pretty much ashamed of myself and have watched my blog fall off people's radar. If I were to highlight my favorite post, it would be my turn from SQA words to illustrations with Do Loop Until Zero. A hit or a miss, I don't know; I don't get comments either way on this blog. But nonetheless, it's something I enjoy doing. Hopefully you guys will see more of this "comic"; if all works out well, it will be in the first issue of the new and upcoming Software Testing Club magazine.

Though the blog was quiet, my SQA and testing career wasn't. In the last year I was able to start filling a large gap in my testing experience portfolio. Prior to 2009 I had no experience in the Linux world and the technologies that surrounded it. Joining a new group within GoDaddy quickly changed this. In 2009 I did a 180-degree turn from my beloved Windows world and submerged myself in Linux in an effort to test and automate a new, internal product. I was scared to make the jump, mostly because my Windows wisdom would be put to little use, and my lack of Linux knowledge would make me a slower tester and automator. Not so enticing when I really pride myself on speed and efficiency ("Hire me, hire ME! I'm two testers for the price of one!"). Scared or not, it was an awesome opportunity to further my skills and help a 1.0 product using my experience with agile practices and automation. With the help of an awesome two-man development team, I was able to learn, automate, and wade through the following technology highlights in 2009:

Product: A storage system (C, MySQL)

  • I used PuTTY as an SSH client to the dev, test, and prod environments, which run CentOS as their flavor of Linux
  • I extended developer unit tests and automated API functional and boundary testing with Perl unit testing (Test::Unit)
  • I extended PHPUnit to serve as an automation framework for functional tests (use case, boundary, error handling, etc.). The framework was named TAEL (Test Automation Ecosystem for Linux).

Product: FTP Server that uses the storage system (Perl, MySQL)

  • I automated use cases and FTP functions using TAEL. FTP functionality was tested using PHP's FTP library. Validation was done through FTP responses and MySQL queries.
  • I performance tested the FTP server and underlying storage system with Apache JMeter. FTP in JMeter is not very robust, and worse yet it forces a connection open, logon, and close for every request, which is not very realistic. Thankfully it's open source (Java), so I extended and tweaked it to fit our needs.

Product: User Management Web Service

  • I automated use cases, boundaries, etc. with TAEL. Validation was done by querying MySQL or parsing the Web Service response using XPath queries.

Tool: User Experience Monitor

  • In an effort to monitor response times on an ongoing basis, I wrote a script that executes basic functionality every 15 minutes and stores the timed results in FTP, where they are picked up and processed by a cron job that puts the results in a database. The cron job also converts the results into an XML format, which is then viewed in a PHP web page using the XML/SWF Charts chart control. We found some very interesting activity and trends through this test/monitor, and it turned out to be a very interesting, almost real-time QA asset for the team.

Product: REST service

Automation with Ruby: With a department-wide goal that everybody must do a little automation, I led the team down the path of Ruby/Watir (due to cost, and Ruby being pretty easy to learn). The results are looking pretty good, adoption has gone well, and progress is being made. Here are a few details about the framework that I built over a few weekends:

  • Uses a pattern that I call “Test, Navigate, Execute, Assert”
  • Manages tests with the Ruby library: Test::Unit
  • Uses Watir for web page automation
  • Web Service automation is done with: soap4r & REXML
  • MySQL database validation with the gem: dbd-mysql
  • Data driven automation from Excel using Roo

 
Process: Since I've been lucky enough to work with a highly motivated, small team of three, our process needs to be, and has been, really light. We've been pretty successful at being extremely agile. For project management we followed a scrum-like process for a little over half a year using the tool Target Process, but then moved to a Kanban approach with our own home-grown tool. Recently we moved from the home-grown tool to a trial with Jira, while trying to maintain the project in Kanban style. I have to say that I really like Kanban; it works particularly well for our team because we are small. When you're as small and tight-knit as our team is, you always know what each other is working on, so the more lightweight the better. It seems the largest payoff of these types of processes and tools for our team is tracking backlog items as well as giving management insight into what we're up to.

What's in store for me in 2010? Well, I'll likely be working on the same products, but as far as major learning and growth opportunities go, I'm excited to dive into the awesome new features of Visual Studio 2010 for Testers as well as to learn and use some C++. Now, if I can just convince myself to blog about those experiences as I go.


Link roundup for Visual Studio Team System 2010 Test Edition


I've been playing around with Microsoft Visual Studio 2010 Team System (Beta 1) the last few weeks, and I have to say that I'm pretty excited about what Microsoft is doing to help tie development, testing, and environments together. The thing that stands out the most to me is the "Test and Lab Manager". This tool allows me to write manual tests, automate tests, and then configure, control, and run those tests in a specified physical or virtual environment. Although Beta 1 is pretty rough around the edges, what I'm seeing is really exciting. Through my playing around and research I've gathered a few links full of information, screenshots, demos, videos, and official documentation. Peruse and enjoy, but before you get started, go get a rag so that you can clean the drool off of the side of your mouth when you're done.

MSDN documentation for “Testing the Application” in VSTS 2010:
http://msdn.microsoft.com/en-us/library/ms182409(VS.100).aspx

Video: Functional UI Testing with VSTS 2010
http://channel9.msdn.com/shows/10-4/10-4-Episode-18-Functional-UI-Testing/

How to add a VSTS 2010 coded UI test to a build:
http://blogs.msdn.com/mathew_aniyan/archive/2009/05/26/coded-ui-test-in-a-team-build.aspx

Creating and running a VSTS 2010 coded UI test through a Lab Manager project:
http://blogs.msdn.com/jasonz/archive/2009/05/26/vs2010-tutorial-testing-tutorial-step-2.aspx
http://blogs.msdn.com/mathew_aniyan/archive/2009/05/26/coded-ui-test-from-microsoft-test-lab-manager.aspx

Explanation of the various Test tool names and products:
http://blogs.msdn.com/jasonz/archive/2009/05/12/announcing-microsoft-test-and-lab-manager.aspx

VSTS related blogs:
http://blogs.msdn.com/vstsqualitytools/
http://blogs.msdn.com/amit_chatterjee/ 
http://blogs.msdn.com/mathew_aniyan/


Automated User Interface Testing with VSTS 2010


Thanks to co-worker Julio Verano for passing this on:

Here is a 17 minute video from MIX09 on Automated User Interface Testing with Microsoft Visual Studio Team System 2010.

VSTS 2010 looks like it has great potential for testers and developers, but I think Microsoft is still behind in browser automation and functionality when compared to the great work from Art of Test with WebAii and Design Canvas. I'm excited that Microsoft is actively pursuing and growing in this space though. It is needed badly! Here is the current proposed platform support for VSTS 2010:

[Image: proposed platform support matrix for VSTS 2010]


20 Reasons to use VSTS 2008 as your Automation Framework


A question from Tobbe Ryber has inspired me to jot down a few things I've been meaning to turn into a more extensive blog post for a long time. But since that hasn't happened yet, I figure it probably never will, so you'll have to settle for my abbreviated, half-assed version.

Twenty reasons to use Visual Studio Team System 2008 Test Edition for your software testing automation framework, ESPECIALLY if your development team is using .NET and Visual Studio:


  1. You are using the .NET platform, a set of stable and robust libraries that lets you do just about anything: make HTTP requests, make Web Service requests, use COM, make database queries; the list goes on and on. Basically, anything your .NET developers are doing, you're going to be able to tap into using the same context. Easily.
  2. You have the ability to easily make calls into several layers of the application under test using a few lines of code, without duct-taping and bailing-wiring a bunch of libraries or technologies together. Imagine automating something in the browser, making a call to a Web service, and then calling the database to validate your results… all in a few lines of code (see the sketch after this list).
  3. There are awesome tools and libraries that are built on .NET that allow you to automate browsers, such as SWEA, WatiN, and HTTPWatch.
  4. There is a great library and Visual Studio add-on that allows you to automate multiple browsers (IE 7 and 8, Firefox; Safari and Chrome any day now), as well as Silverlight. Best yet, the recorder integrates with the IDE: ArtofTest's WebAii and Design Canvas.
  5. Your 'test harness' is built into the IDE, and your tests can also be run from the command line.
  6. The IDE is top notch when it comes to development and debugging (and test development and debugging). I’ve been using VS for automation since VS 2005, and when I’ve had to automate in other worlds (e.g. Linux, PHP, and Perl) I honestly feel like I’m working with tools that equate to a chisel and stone tablet.
  7. Auto-complete in the IDE is a huge timesaver. Your time spent searching the internet or referring to a library’s specifications is far less with auto-complete.
  8. Syntax issues with scripting languages (JavaScript, Ruby, etc.) can be a huge waste of time because they surface at runtime. If you write a test that runs for minutes, hours, or days, it could fail halfway through due to a syntax error. A compiled language is not going to do this.
  9. The Microsoft.VisualStudio.TestTools.UnitTesting namespace is not just for unit testing; it works great for test automation. It feels a lot like NUnit to me.
  10. Integrating your tests with development builds is a cakewalk. Using the mstest command line, it's easy to have your tests run with a build in TFS or CruiseControl.
  11. You have the ability to easily move some of your tests up the chain to run alongside developers' unit tests. By doing this you now have automated acceptance tests, so releases to QA have higher quality.
  12. You are using the same environment/language as your developers, which gives you the next three reasons:
  13. Developers can help you get over the .NET language or VS IDE learning curve.
  14. You learn and use the same language and libraries used for development, giving you a greater technical understanding of what you're testing.
  15. You can easily share and discuss your tests with developers because they are familiar with the language you are using.
  16. Test results are in an XML format (.trx files), which means that if you want to use something other than VSTS to view results, you can easily manage it.
  17. The .NET community is huge. Help, technical examples, and issue-workarounds are an Internet search away.
  18. Examples on MSDN are SUPER helpful, and training video series such as "How Do I" and "VSTS Learn" are a great alternative.
  19. VSTS also does load testing.
  20. .NET, C#, VB.NET, and Visual Studio experience on your resume are technology skills and buzzwords that lure recruiters.
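
To illustrate reasons 2, 9, and 10, here is a minimal, hypothetical sketch of a test that drives two layers of an application (HTTP and the database) from one MSTest method. The URL, connection string, and table name are illustrative placeholders, not from a real project.

using System.Data.SqlClient;
using System.Net;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class OrderServiceTests
{
    // Illustrative placeholders; point these at your own test environment.
    private const string ServiceUrl = "http://test-server/orders/create?sku=ABC123";
    private const string DbConnection = "Server=test-db;Database=Orders;Integrated Security=true";

    [TestMethod]
    public void CreateOrder_ValidSku_WritesOrderRow()
    {
        // Layer 1: exercise the application over HTTP.
        using (var web = new WebClient())
        {
            string response = web.DownloadString(ServiceUrl);
            StringAssert.Contains(response, "success");
        }

        // Layer 2: validate the result directly in the database.
        using (var conn = new SqlConnection(DbConnection))
        {
            conn.Open();
            var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders WHERE Sku = 'ABC123'", conn);
            Assert.AreEqual(1, (int)cmd.ExecuteScalar(), "Expected exactly one order row for the test SKU.");
        }
    }
}

The same test runs from the mstest command line (for example, mstest /testcontainer:OrderServiceTests.dll), which is how it gets wired into a TFS or CruiseControl build.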


C# 4.0 Dynamic Types Allow Better Unit and Automated Tests


I noticed today that C# 4.0 will offer dynamic typing (no, your keyboard will not type magically for you… the other typing). Why mention it here? Well, if I understand the new feature right, this will fix a fairly large unit testing and automation roadblock caused by static types. For example, when writing tests in .NET and trying to consume/use a method on a .NET API or Web Service, your tests are constrained at compile time by the static types of the method's inputs. In other words, in the old C# world, if you had a method that looked like this:


public void testMe(int foo){ }

and you wrote a test that called the method like this:

testMe("poofoo");

the test would fail at compile time due to static typing (the method's input parameter type is an int, but we're trying to pass a string).

Thus the roadblock I was describing… There are a lot of tests to write, and a lot of poor error handling AND hidden defects to find, when you try to pass invalid types into a service's inputs. So, about now you are saying, "Who gives a crap, Brent? If my service's consumers use .NET they'll compile, get the error, and never even get a chance to send a string in where an int should be." There was a day when I thought the same thing (a long time ago), until I consumed a Web Service using a Java application. Say you've exposed a .NET Web Service for the world to use, and I come along and consume it with Java; by doing that I can ignore your static/strong typing and send in that string you weren't prepared for. Strings don't work so well as ints (especially when it's "poofoo", or even better "2147483648"). Most of the time the errors are just plain ugly, making the API or Web Service really hard to work with when trying to do good error handling (imagine getting "you failed, line 67", but then a code change occurs and now it's line 68), but sometimes things fail on a larger scale and make some really cool defects.

Back to the point…

I’m thinking the new dynamic type in C# 4.0 will allow you to avoid that compile error if you’re writing your tests using C#/.NET. Thus, I could send in that string as an int and then see what happens at runtime. That test would look something like this:

dynamic poo = "poofoo";
testMe(poo);


Neither confirmed nor denied at this point, but if true, I'M REALLY EXCITED. This would fix a major issue that I've had with testing .NET services using .NET.
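
For what it's worth, here is a self-contained sketch of what such a test might look like if the feature works as described, using the illustrative testMe method from above. For this local call the runtime failure would surface as a RuntimeBinderException; a real API or Web Service would fail in its own way.

using System;
using Microsoft.CSharp.RuntimeBinder;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DynamicTypeProbeTests
{
    // Stand-in for the API/Web Service method from the example above.
    public void testMe(int foo) { }

    [TestMethod]
    public void TestMe_StringWhereIntExpected_FailsAtRuntimeNotCompileTime()
    {
        dynamic poo = "poofoo";   // type checking for this value is deferred to runtime

        try
        {
            testMe(poo);          // compiles fine; the type mismatch only surfaces here
            Assert.Fail("Expected a runtime failure when passing a string where an int is expected.");
        }
        catch (RuntimeBinderException ex)
        {
            // For a real API or Web Service call, this is where you would assert that the
            // error returned is descriptive enough for a consumer to act on.
            Console.WriteLine(ex.Message);
        }
    }
}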


It's interesting, the conversations that are spawned from the issues that come out of sending invalid types…


Brent: The error message… “You failed, line 67” is poor and confusing, what did I do wrong?

Developer: It choked when you sent in the wrong type. Don’t do that.

Brent: That sucks…so when our customers do the same thing and then call in for help we’ll just tell them that?

Developer: Well, uh..no. Okay, I’ll put in a little error handling and a descriptive error message.

Brent: Sweet!

Brent: (Evil laugh in head) Hey… will we support that error message as part of the API? Because if you make it a message that I, a consumer, can rely on, that means my code will depend on it, meaning that changes to the error are breaking changes. Breaking changes must be tracked and documented.

Developer: (Slowly backing away) I CAN'T HEAR YOU!!!


