Archive for February, 2006

Pay with your cell phone using TextPayMe


Do you want a free $5.00? No really, a FREE $5.00. No catch. You can get that $5.00 just for signing up with a new service called TextPayMe. The service lets you send money to other TextPayMe members from your cell phone using text messaging. No more unpaid IOUs. You know what I'm talking about: you keep forgetting to pay those IOUs back, don't you? C'mon, that's not fair to your friends and co-workers. They did you a favor by temporarily footing the bill, and this is how you treat them in return? Because of your forgetfulness your so-called friend is calling you a flake, cheapskate, mooch, rat, welsher, stiff, swindler, and loser behind your back! Yeah, that hurts. I know, I know… "you forgot". It's not like you were sitting on the extra cash in your bank account earning interest on YOUR FRIEND'S MONEY. You've got a "problem", buddy. Solve your "problem" instantly over the phone with a simple text message such as "pay 3 2065551234". It'll come in handy for things like:

– Split your restaurant bills right then and there
– Pay your team or club dues anywhere, anytime
– Pitch in real money instead of IOUs for a shared gift
– Settle your roommate's rent and utility bills on the spot

Pretty cool…technology and all. Times have changed (or so Grandpa portrays it in his stories about pushing wheelbarrows of gold coins 12 miles uphill through snow to pay off a one-month-old IOU to his bookie at an astonishingly low interest rate of 3.2%).

Seriously, TextPayMe is a pretty handy service. Click the ad below and sign up!

SignUp at TextPayMe

P.S. If I can get 35 people to sign up for free money and a sweet service, I can win an XBOX 360. Make sure to click the ad to give me credit. But I am in no way motivated by this offer. I swear on my Grandpa's wheelbarrow. It sure would be a godsend to get this nice little reward for my poor children, though. My 8 adopted children, who have never had a game console and spend their evenings playing UNO with a deck that's missing 3 cards, because Dad can't afford to buy them a game console on his $5.00-an-hour QA job.

Isn't it exciting to see these reminiscent flashes of the dot-com boom, when companies gave away things to gain customers? Is it a sign that the economy is back to its good ol' self? Maybe good ol' Dad can get bumped up to $6.00 an hour if things keep looking up!

Using HTTPWatch for Web application testing


Are you developing or testing Web applications? If so, I'm pretty sure you've needed a way to look at the HTTP or HTTPS traffic flowing back and forth between your Web browser and server. Have I got the tool for you… For a couple of years now I've been using HTTPWatch (from Simtec) to peek into that HTTP world, on the fly, from Internet Explorer. HTTPWatch runs from within Internet Explorer's Explorer Bar, putting it right at your fingertips while you surf; for Web application testing it doesn't get any better than having the test tool built into IE. HTTPWatch allows you to monitor and peruse:

  • Headers, Cookies and URLs
  • HTTP method (GET, POST, etc…)
  • Time taken to complete a request
  • Size of downloaded page, image or file
  • HTTP status codes or error codes if the request failed
  • Parameters sent in query strings and POSTs
  • Network operations required, such as DNS lookup or socket connects
  • Whether the content was read from the browser cache or downloaded from the server
  • HTML content (rendered)
  • HTML stream (un-rendered/raw)
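To make that list concrete, here is the kind of request/response pair HTTPWatch surfaces for a single image fetch (the host, path, and cookie values below are made up purely for illustration):

```http
GET /images/logo.gif HTTP/1.1
Host: www.example.com
Cookie: SESSIONID=abc123
If-Modified-Since: Tue, 07 Feb 2006 10:00:00 GMT

HTTP/1.1 304 Not Modified
Date: Tue, 14 Feb 2006 10:00:00 GMT
Cache-Control: max-age=3600
```

A 304 like this tells you at a glance that the image was served from the browser cache; a 404 or an unexpected 302 in the same view is exactly the kind of defect worth writing up.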

You say: "Okay…Wow Brent, I can look at HTTP traffic, what's the big deal? How can this tool help with my testing?" Given the above feature list, you can (and I do) find defects like:

  • 404s (small images, hidden pages in frames)
  • Unnecessary 302 redirects
  • Unnecessary page usage
  • Unnecessary page reloads
  • Necessary or unnecessary page and image caching
  • Confidential information in cookies
  • Confidential information in forms
  • Use of form GETs (query strings) instead of POSTs
  • Improper use of HTML encoding (header)

As a Quality Assurance engineer testing Web sites, I find this tool valuable, easy to use, clean, and reliable. That said, nothing is ever good enough for this QA engineer; I'd really like to see the following enhancements:

  • Put it in a Firefox extension too
  • Provide proxy capability where a user can modify the content stream for sends or receives (I’d throw away my favorite proxy tool Paros for this capability)

That aside, the tool does its intended job perfectly and I highly recommend it. You can download the "Basic Edition" for free, but it only lets you use it against a few popular sites. If you intend to use it for testing you'll need to buy the "Professional Edition" at $249 for a single-user license. The prices get better with larger license packages. $249 for a testing tool of this caliber is cheap. Don't cheat yourself with less powerful tools like ieHttpHeaders! Get HTTPWatch and GIT-R-DONE.

Update 3/01/2006: Simon at Simtec told me that my suggested enhancements are on the HTTPWatch "To Do" list. Cool! It's nice to know that Simtec is a company that listens to its customers' needs.

Web Service performance testing, monitoring, and troubleshooting


The IA team is wrapping up our first round of performance and scalability testing for Intelligent Authentication 1.1 here at Corillian, and I've got to tell you, this thing performs! In the past I've seen hints and rants on the Internet about .NET Web Services performance being slow, which made me wary of what I was up against for performance testing. I've got to tell you that .NET Web Services flat out SCREAM! What is "flat out SCREAM"? I'm talking about response times of one-tenth of a second on a loaded web server (CPU at 70%) .. and a little over two-tenths of a second when the CPU averages 98% (about ready to tip over and catch fire). Getting there was a bit of a challenge, but we're there. Whew! I think we all learned a lot; I learned some pretty technical and confusing stuff along the way. The two big lessons: threading and Web Service performance counters. Here is a list of the BIG hurdles and how we got over them:

1st hurdle
At about 80 method requests per second the Web Server started returning 503 errors to SilkPerformer. Method requests were receiving the error:

“[HttpException (0x80004005): Server Too Busy]    System.Web.HttpRuntime.RejectRequestInternal(HttpWorkerRequest wr) +148”

1st hurdle fix
Tune the Machine.config file to the Microsoft performance recommendation for Web Services:
Contention, poor performance, and deadlocks when you make Web service requests from ASP.NET applications
Understand what it all means with:
Chapter 17 – Tuning .NET Application Performance
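For reference, here is roughly what those recommended Machine.config settings look like. These are the per-CPU baseline values from the Microsoft article as I recall them; verify against the article itself and scale the per-CPU values for your processor count:

```xml
<!-- Under <system.web>: ASP.NET thread pool settings (per-CPU baseline values) -->
<processModel maxWorkerThreads="100" maxIoThreads="100" />
<httpRuntime minFreeThreads="88" minLocalRequestFreeThreads="76" />

<!-- Under <system.net>: outbound HTTP connections allowed per remote host -->
<connectionManagement>
  <add address="*" maxconnection="12" />
</connectionManagement>
```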

2nd hurdle
At about 115 method requests per second the Web Server started returning 503 errors to SilkPerformer. Method requests were AGAIN receiving the error:

“[HttpException (0x80004005): Server Too Busy]    System.Web.HttpRuntime.RejectRequestInternal(HttpWorkerRequest wr) +148”

But this time things were a bit different. The Machine.config settings were set to the recommended values and the actual number of threads was maxed out too (maxWorkerThreads and maxIOThreads were both set to their limit of 100). I asked the Corillian Scalability team if they had ever seen such a thing and, lo and behold, they had. It turns out that when they ran the Voyager 70,000-concurrent-user test at the Microsoft Scalability Lab a couple of years ago they hit the same issue.

2nd hurdle fix
According to our friends at Microsoft (an MS engineer in the scalability lab), you need to change the Machine.config default value for appRequestQueueLimit from 100 to 5000. Bam! Issue fixed. We moved on. That value is probably overkill; the right setting for you will vary depending on your hardware, but 5000 all but guarantees this setting won't be your bottleneck anymore.
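In Machine.config terms, that change is a single attribute on the httpRuntime element:

```xml
<!-- default appRequestQueueLimit is 100; raised per the MS scalability lab's advice -->
<httpRuntime appRequestQueueLimit="5000" />
```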

3rd hurdle
The Web Servers were only processing 150 method requests per second no matter how much load we put on them. We had a bottleneck somewhere but couldn't seem to find it. Adding various counters revealed that the ASP.NET request queue was pretty "spikey" and at times pegged around 100. The more load we put on, the higher the queue and the higher the response time. In retrospect this counter was my obvious clue, but I just didn't know enough at the time.

3rd hurdle fix
The fix ended up being a thread limit we had set in our Web Service. The number of threads allowed to write to our Auditlog in SQL was set to 5. Bumping this up solved the issue; twenty-five ended up being the perfect number for our hardware. Poring over Microsoft's performance recommendations several times and trying different Machine.config settings to no avail left me staring at the following picture, walking through the application flow myself several times before concluding (guessing) that the bottleneck had to be the actual Web Service. Monitoring a custom counter in our Web Service revealed a huge pool of requests waiting to write to our log (due to the limited threads). Hitting that pooling threshold in our app caused requests to back up into the ASP.NET request queue. Makes sense now (hindsight is 20/20). Here is that helpful image:

Counters I used the most for Web Service performance testing:
For the most part Microsoft's performance recommendations point you to all the right counters. There are quite a few, but these are the ones I used most:

To monitor my SQL 2000 Database Server:

PhysicalDisk\Avg. Disk Queue Length
Processor\% Processor Time
Memory\Committed Bytes
Network\Bytes Received/sec
Network\Bytes Sent/sec

To monitor my Web Server that was hosting the Web Service:

ASP.NET\Requests Queued
Processor\% Processor Time
Web Service\Total Method Requests/sec
Memory\Committed Bytes
Network\Bytes Received/sec
Network\Bytes Sent/sec
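If you'd rather capture these from the command line than click through Perfmon, the same counters can be fed to Windows' typeperf utility via a counter file. Note the actual Perfmon object for the network counters is "Network Interface", and the wildcard instances here are assumptions about your setup:

```
\ASP.NET\Requests Queued
\Processor(_Total)\% Processor Time
\Web Service\Total Method Requests/sec
\Memory\Committed Bytes
\Network Interface(*)\Bytes Received/sec
\Network Interface(*)\Bytes Sent/sec
```

Saved as counters.txt, a command like `typeperf -cf counters.txt -si 5 -o webserver.csv` samples every 5 seconds into a CSV you can chart afterward.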

When the tests weren't going so well I added pretty much all the counters found in the performance recommendation links I provided above, which obviously helped with troubleshooting. Also helpful to the IA team was the SQL performance tuning book Microsoft SQL Server 2000 Performance Optimization and Tuning Handbook by Ken England.

What’s next? Well, we’re out of hardware here at Corillian so we’ll be heading up to the Microsoft Scalability Labs in about a month to really push the limits for both IA and Voyager on their GIA-HUGEY hardware.

Financial institutions don’t have to encrypt the customer database


Recently a court ruled that under Gramm-Leach-Bliley a financial institution is not required to encrypt its customer database. The lawsuit was against Brazos Higher Education Service Corporation Inc. after one of its employees negligently stored the unencrypted customer database on a laptop that was stolen from the employee's home.

Do you have a bank account? Do you invest? Do you have a loan? This impacts you! It's funny how much time the media has spent focusing on online banking fraud, and now all the financial institutions are scrambling to get measures in place (such as secondary authentication with Corillian's Intelligent Authentication), while this kind of BS is going on in the background. While financial institutions are focusing on protecting you through the online interface, they are giving up your data in mass quantities behind the scenes with unencrypted databases, because it's not required by law…Stupid. It'd be interesting to know the 2005 numbers for how many accounts were compromised due to online fraud versus how many were compromised due to whole databases being stolen. I'm willing to bet on the latter.

This court’s ruling is not in your favor as a customer. Encrypting customer databases won’t solve all our fraud problems but it’s definitely a step in the right direction. My crystal ball tells me that eventually people will wake up and push for this to happen. Wake up. Do you hear me? WAKE UP. Start pushing these financial institutions to do the right thing. Push hard enough, it’ll be in the media. When the media is pushing the issue, a law will follow shortly. Watch. You’ll see. You’ll have to push first though.

SWExplorerAutomation (SWEA) V1.7.4.1 released


Alex, the creator of SWExplorerAutomation (SWEA), has recently released a new version: V1.7.4.1.

The new version contains the following changes:

Feature: Added page/frame script calls. Example: control.Invoke("Script___doPostBack(\"ControlName\", \"ValueChanged:TestValue\")")

Improvement: Improved support for pages with invalid characters.

I recently upgraded from 1.6 to 1.7 and noticed that Alex is now charging $59.00 per copy and has a volume discount of $29.50 for 3 or more copies. It’s about time! Alex has worked hard on creating and supporting this app so he deserves to make a few bucks off of us. Besides that, $59 is dirt cheap. It’s worth every cent. Don’t worry, if you just want to give it a test drive you can still download a non-commercial shareware version.

Automating Web UI testing with SWEA, C#, & NUnit (part 3)


This post is a continuation of the previous posts: Automating Web UI testing with SWEA, C#, & NUnit (part 1 & part 2).

In this 3rd and final post I'll show you how to take the C# output from SWEA (discussed in part 2), build a test framework around it, and run those tests with NUnit. The framework promotes usability and consistency and prevents code duplication. NUnit extends that framework and also opens the door to putting tests into automated build processes. The tutorial describes the contents of the example Visual Studio 2003 project found here. The project uses the Google home page to conduct a search, verify the search results, and click the result link. Again, my purpose here is to discuss how to create a test framework around SWEA and conduct tests using NUnit; I won't be getting into the SWEA API specifics.

Dividing Test Activities
When recording scripts of any kind, the output is always redundant if you navigate through a sequence of pages to get to your destination page. Over the course of several tests you'll have redundant code doing the exact same thing just to get to your testing point, and when something changes on a page you end up updating the change in several places. To avoid this redundancy, what has worked well for me with both SilkTest and SWEA is to take the output scripts and divide them into 4 sections: Tests, Common, Navigate, and Conduct. I divide this by giving each section its own class. In VS it looks like this:

Now, with separate classes, I can take the auto-generated SWEA C# code and divide it amongst them. At a high level the separate classes do the following:


The Navigate class contains methods that navigate to the page where the test will occur.

The Conduct class contains methods that conduct the actual test and verify the result.

The Tests class is where the NUnit test cases reside. Inside Tests I set up SWEA, set up and tear down the browser, and use NUnit to wrap the Navigate and Conduct calls.

The Common class contains definitions and methods that are used by all classes (browser definition, user setup, random string generators, etc).

Now that I’ve covered the 10,000 foot view, let’s dive into the details.

Tests class detail
As mentioned above the Tests class provides our test calls done with NUnit. By integrating with NUnit we have our [TestFixture], [TearDown] and [Test] attributes.

To start with, the main part of the TestFixture is the method openBrowser. This method loads the defined SWEA project and settings from the config file. The purpose of the if statement is not evident in my simple example, but it provides the option to work between multiple SWEA projects (multiple sites) when called from a [Test] method. Following the if statement are the SWEA browser definitions and the actual opening of the browser, along with navigation to the initial URL:

public static void openBrowser(Browser.site site)
{
    string projDir = "";
    string url = "";
    if (site == Browser.site.Google)
    {
        projDir = ConfigurationSettings.AppSettings[PROJECT_DIR];
        if (projDir == null)
            projDir = ".\\Google.htp";
        url = ConfigurationSettings.AppSettings[URL];
        if (url == null)
            url = "http://www.google.com";
    }

    myBrowser = new Browser();
    myBrowser.ExplorerManager = new ExplorerManager();
    myBrowser.ExplorerManager.DialogActivated += new SWExplorerAutomation.Client.DialogActivatedEventHandler(Google.Tests.GoogleTestAutomation.explorerManager_DialogActivated);
    myBrowser.ExplorerManager.DialogDeactivated += new SWExplorerAutomation.Client.DialogDeactivatedEventHandler(Google.Tests.GoogleTestAutomation.explorerManager_DialogDeactivated);
    myBrowser.ExplorerManager.Error += new SWExplorerAutomation.Client.ServerErrorEventHandler(Google.Tests.GoogleTestAutomation.explorerManager_Error);
    //Load the project, open the browser, and navigate to the initial URL
    //(the SWEA calls for this are omitted in this excerpt)
}

The heart of Tests is the actual tests: tests that open the browser, navigate to the page where the test will be conducted, and conduct the actual test case. In the example you'll see the browser open to Google, navigate to the Images page, navigate back to the home page, and then conduct a search:

[Test]
public void A_SearchQAInsight()
{
    //Open browser and load defined site
    openBrowser(Browser.site.Google);
    //Navigate to the "Images" section of the site
    Navigate_Google.Images(myBrowser);
    //Navigate to the "Home" section of the site
    Navigate_Google.Home(myBrowser);
    //Conduct the test case; run the search and validate the result link
    Conduct_Google.Search(myBrowser, "QAInsight", Conduct_Google.TestGoal.Search);
}

Closing the browser after each test is handled by the [TearDown] attribute; in the example one simple method, closeBrowser, does just that:

[TearDown]
public void closeBrowser()
{
    //Close browser
}

Navigate class detail
As mentioned, the Navigate class provides methods that navigate to the page where the test will occur. It is best to have a method to get to each page; typically my navigate methods follow the site navigation pretty closely. In other words, I usually have a method for every menu and submenu item the site has. Using SWEA Designer, the navigation will need to be recorded in every Scene, giving you the power to navigate to the next test case instead of restarting from the base/home page. In the example project you'll see this in the SWEA htp file when you load it in SWEA Designer (notice the same "Nav_" HTMLAnchors in Scene_GoogleHome and Scene_GoogleImages):

The methods in the Navigate class are simple; at a high level they:

  • Navigate to the page (oftentimes requiring multiple clicks)
  • Wait for the Scene to load. This is important because when you Conduct a test case right after navigation you need the page to be fully loaded.

public class Navigate_Google
{
    public static void Home(Browser myBrowser)
    {
        //Define Scene
        myBrowser.Scene = myBrowser.ExplorerManager["Scene_GoogleHome"];
        //Wait for Scene to load (all control properties that have the property isOptional=false)
    }
}

Conduct class detail
The Conduct class contains methods that conduct the actual test and then validate the results. Typically tests require some sort of input that I want to control (i.e. logon name, search term). The provided project has an input parameter on the Search method as an example of this: the string searchTerm is the actual term that will be fed into the search textbox. To promote method reuse I have a 3rd parameter, TestGoal, that lets me conduct slightly different tests on the page while using the same method. The Search method in the example below contains a switch statement that acts depending on the TestGoal. Notice the two tests in the Tests class that call the same Search method but use two different TestGoals: one goal clicks the "Google Search" button and the other clicks the "Feeling Lucky" button, which has a different result.

Once the test is conducted (as far as clicking, submitting forms, etc.), I need to verify the result. For this I use NUnit Asserts, or if you choose you can have the SWEA Scene validate the defined contents by waiting for the resulting Scene to load (this only works if the controls you intend to validate have the isOptional property set to false). Both ways have upsides and downsides. Asserts can get into finer-detail verification (i.e. textbox values) and allow you to fail a validation without stopping the test. Validating with SWEA Scenes keeps the code slim because you only need to wait for the Scene to load, but the downside is that if the Scene load fails the test stops with a SWEA error (I imagine this could be worked around if you really wanted to avoid it). The Conduct class is found below:

public class Conduct_Google
{
    public enum TestGoal {Search, LuckySearch};

    public static void Search(Browser myBrowser, string searchTerm, TestGoal testGoal)
    {
        //Define and wait for the Scene
        myBrowser.Scene = myBrowser.ExplorerManager["Scene_GoogleHome"];
        //Input the string into the textbox and click the Google Search button
        ((HtmlInputText)(myBrowser.Scene["textbox_search"])).Value = searchTerm;
        //Avoid creating overloaded methods by utilizing a TestGoal parameter
        switch (testGoal)
        {
            case TestGoal.Search:
                //Define the current/expected scene
                myBrowser.Scene = myBrowser.ExplorerManager["Scene_GoogleResults"];
                //Wait for the Scene to load
                //Validate the QAInsight link is at the top of the list.
                //We need to manually validate because the property isOptional=TRUE
                //and the Scene load won't validate it. I set this to TRUE to prevent
                //the Scene from throwing an error because the control/link
                //is not found. Instead I choose to do an Assert and send out a
                //friendlier message. If the control/link is not at the top
                //of the list the test fails and I just send a message back to the NUnit console.
                Assert.IsTrue(((HtmlAnchor)(myBrowser.Scene["HtmlAnchor_QAInsight"])).IsActive(), "Your Website is not at the top anymore!");
                //Click the link
                myBrowser.Scene = myBrowser.ExplorerManager["Scene_QAInsight"];
                //Wait for Scene to load
                break;

            case TestGoal.LuckySearch:
                myBrowser.Scene = myBrowser.ExplorerManager["Scene_QAInsight"];
                break;
        }
    }
}

Common class detail
The Common class contains definitions and methods that are used by all classes. In the example I define the browser ExplorerManager and Scene so that they may be used by all classes (Tests, Navigate, and Conduct):

public class Browser
{
    private SWExplorerAutomation.Client.ExplorerManager _explorerManager;
    private SWExplorerAutomation.Client.Scene _scene;
    public enum site {Google, QAInsight}

    public SWExplorerAutomation.Client.ExplorerManager ExplorerManager
    {
        get { return _explorerManager; }
        set { _explorerManager = value; }
    }
    public SWExplorerAutomation.Client.Scene Scene
    {
        get { return _scene; }
        set { _scene = value; }
    }
}

Running the tests with NUnit
So… those are the guts of my test framework using NUnit, C#, and SWEA. Now all I need to do is run the tests with NUnit. You'll notice that the project output is an EXE (GoogleTests.exe). I chose an EXE over a DLL so that I could have the option to run the tests from the command line (the command-line interface is not provided in the example). A command line lets me interface with the test harness and drive the Web browser from other test tools (SOATest, for example). Once the EXE is created we simply point NUnit at it and let 'er rip! Man, I can't tell you how gratifying it is to sit back in your chair with your feet on your desk, watching that browser go a hundred miles an hour testing your site…

It gets even cooler…Integration with NUnit also gives us the power to integrate the tests into the build process (i.e. NAnt). But I'll save that explanation for another day.
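As a teaser, the NAnt side is essentially the <nunit2> task pointed at the test EXE. A minimal target might look something like this (the target name is hypothetical, not from the example project):

```xml
<!-- Hypothetical NAnt target; adjust names and paths for your own build file -->
<target name="run-ui-tests">
  <nunit2>
    <formatter type="Plain" />
    <test assemblyname="GoogleTests.exe" />
  </nunit2>
</target>
```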

There you have it. Not too terribly difficult. I'm a rookie programmer and I pulled it off, so you can too (okay, when I got stuck my co-worker Matt helped me work through the issues). By starting with the example project you should be able to avoid the hurdles I hit. Once you have your framework built and you're familiar with the SWEA API, you'll find that you won't use the script generation of SWEA Designer; SWEA Designer will simply be the tool to record the Scenes/HTML objects (output to the htp file). Once you get to that point things are gravy. Get on the gravy train! Get your site/browser testing automated with SWEA, C#, and NUnit.

Using WatirNUt to create tests to run with NUnit and NAnt


My coworker Dustin Woodhouse has released his utility WatirNUt to the outside world and is anxious to get everybody using it. What is WatirNUt?

“WatirNUt is a utility that creates a portable, testable NUnit binary wrapper around watir test scripts and supporting files. This binary can easily be executed in NAnt’s <nunit2> task, with aggregated results displayed in your web dashboard.”

A while back, when WatirNUt was still in development, Dustin helped me get my Watir tests running in NUnit with his utility. In a recent blog post I failed to mention how I magically got my Ruby/Watir scripts to run in NUnit. That magic was WatirNUt.

If you're writing tests using Ruby and Watir, give WatirNUt a whirl and get those tests integrated into your build process.

StickyMinds.com beta


Calling all testers! StickyMinds.com has released a beta version of their new site. To tickle your testing bone they've issued a beta-testing challenge with prizes! The prize…"US members will receive a t-shirt and all non-US members will receive a digital subscription to Better Software magazine."

I know what you're thinking. "A t-shirt for each defect I submit? I could outfit all the homeless people in downtown Portland with brand spankin' new t-shirts." Not so fast, my eccentric tester; wipe that drool from your lower lip. StickyMinds says: "Only one prize per person will be awarded regardless of the number of bugs submitted."

Confusion between Timer vs. Transaction counts with SilkPerformer


Recently I've been confused about why my Timer count differed from my Transaction count in recent SilkPerformer test results. Having a second reference to compare SilkPerformer numbers against, I had settled on the Timer count as the correct number to report, but I still felt uneasy because I didn't know why this was occurring. I finally sat down today and figured out why the numbers differed. Take a look at the following table of Transaction counts and Timer counts:

Notice that the tables are nearly identical, but the transaction AnswerChallenge is significantly higher in the Transaction count table, making the overall number higher (759476 vs. 258285). Seems odd, but after thinking through my script and transaction flow I finally figured out why AnswerChallenge varies between counters. Before I explain, look at my actual user transactions declaration (user flow):

InitTestCase : begin;
InitUserAndIP : 1;
AuthenticateUser: 1;
ThinkTimeTrxn : 1;
AnswerChallenge: 1; //only runs if Challenge is issued

and then the code for my AnswerChallenge transaction:

transaction AnswerChallenge
begin
  if (isChallenged = "Challenge") or (isForceChallenge = "ForceChallengeResponse") then
    MeasureStart("AnswerChallenge");
    WebHeaderAdd("SOAPAction", "http://www.blah");
    // the post call and URL below are reconstructed; they were elided in the original post
    WebUrlPostBin("http://www.blah",
      "<?xml version=\"1.0\" encoding=\"utf-8\"?>"
      "<soap:Envelope xmlns:soap=\"http://blah\">"
      "</soap:Envelope>", 0, "text/xml; charset=utf-8");
    MeasureStop("AnswerChallenge");
  end;
end AnswerChallenge;

Notice that after the begin statement I check for the presence of two flags:

if (isChallenged = "Challenge") OR (isForceChallenge = "ForceChallengeResponse")

If the strings match the variables then the transaction CONTENTS will run; if they don't, the CONTENTS won't run. Notice that inside the if statement I have my timer (MeasureStart("AnswerChallenge");). AHH! Bingo… Do you see it? The transaction AnswerChallenge is ALWAYS called due to the user transactions definition, but the CONTENTS of the transaction won't be executed if the if statement is false. Thus the Transaction count ALWAYS grows regardless of the if statement's result. Only if the if statement ends up TRUE does the Timer count go up too (and that is the counter I care about).

See the visual difference with a TryScript:

From a programming point of view the behavior is obvious (with hindsight). When perusing SilkPerformer reports.. not so obvious. Moral of the story: if you care about the count of transactions actually sent over the wire, and you run a transaction based on the result of another transaction, look at your Timer counts, not your Transaction counts.

I hate DRM


I HATE DRM (Digital Rights Management). I understand the intent and I think the concern is valid, but consumers are suffering because the industry can't put a system in place that works. My recent music CD purchases, and my attempts to rip them, have been nothing but a pain in my ass; the purchases being Santana's All That I Am and Van Zant's Get Right with the Man (both Sony-BMG). Why is ripping my music important to me? Because I can fit hundreds of songs on a CD and listen to them in the car. My in-dash CD/MP3 player holds 6 CDs, and six CDs of MP3s is VERY convenient in the car. My process is simple: I buy a few CDs, I rip them, I burn them to CD, and then I put the original CDs into a big box out of the way in the garage.

To better understand my frustration let me tell you what I want as a consumer. The list is short:

  • MP3 format. My reason:
    • I don’t want to use a proprietary format.
    • I’ve ripped every CD I’ve acquired over the last 15 years to MP3 already. I want to keep my collection consistent.
  • Bit-rate equal to or greater than 192kbps. My reason:
    • Less than 192kbps can be detected by the human ear; 192kbps or greater can't.

Is that so freakin’ hard?

Attempting to rip the above CDs using my own applications, and a few recommended by friends that I wouldn't normally use, resulted in pauses in all the songs; in some cases the song wouldn't rip at all. The lame DRM alternative? Install and use the MediaMax software conveniently located on the CD. The software gives you the option to rip the CD in WMA format at 128kbps; neither is up to my needs and standards. ARGHHH!

The workaround? Burn it on a Mac with iTunes. Yeah, I said it… Use a Mac. Whatever DRM technique Sony-BMG uses to prevent you from ripping on a PC doesn't work on the Mac. I don't have a Mac, so I have my friend who owns a Mac do it for me. It seems to me that the workaround is self-defeating for DRM. Who's to say my friend didn't make his own copy when he ripped it for me? That potential copyright issue could be prevented IF I COULD JUST FREAKIN' RIP THE CD I PAID FOR TO MP3 @192KBPS!
