
Published: Tuesday, April 24th, 2012, 9pm

The Most Happiest Job ... In the World?

Software Testing
A few weeks ago, a website you've probably never heard of called CareerBliss.com published a list of the "top twenty happiest jobs" in America.

Software Tester, or, more accurately, "Software Quality Assurance Engineer", took the number one spot.

No, the survey did not come out on April first, and no, I'm pretty sure it's not a joke.

Now you may have heard of this study, but if you have, I doubt it's because of CareerBliss - more likely it's because Forbes.com picked up the story and ran with it, and a large number of influential people read Forbes.

The reaction of the test community was, shall we say ... skeptical.

On a purely emotional level, I must say, I totally agree with the story.

Where else can you get paid to criticize someone else's work?

Where else can you say "it doesn't work", point the finger at someone else, and walk away?  (Not that we actually do this; I try to be helpful.  The point is that you can add value to a project by bringing out your inner critic.)

A handful of people can make a living by being professional movie critics or product reviewers, but for the rest of us, there is software testing.

I honestly don't find this surprising.  Think about what happens when the website is down.  Ops is scrambling to get the site up, developers are running around coding patches and worrying about the build or the merge, and we in test are going back to sleep.  The PM asks you what you are doing, and you get to reply "What?  The site. It's broken.  That is the status.  Do you want me to file a ticket or something?"

I kid, but not entirely.  When I think about the various ways I have had pressure forced on me (often in companies with no test role) and the service I help provide now, helping to create a safety net for the programmers, well, I feel a little pride.

But there's more

The article by Forbes has seen a fair bit of air time, including blog entries and a mention on SQAForums.  (I've never been active on the forums, but I have to say, some of the comments are entertaining, including "I don't know Jim, I think they need to switch to a better grade of crack.")

My friend Scott Barber has what I think is probably the most insightful blog post of the bunch.

Earlier I wrote on an emotional level -- reacting to the headline. Scott dug into the details of the Forbes article and took issue with some of the characterization of what the "QA" (don't start) role is and how it is performed, ending with this comment:

Feel free to share your thoughts, but this strikes me as "not *even* wrong" to a degree that I can't seem to even reverse-engineer a single measurement dysfunction that could account for all the ways in which this article strikes me as "just not right."

He's got a point, and it's worth reading the Forbes article just to compare the status quo with what we testers actually do.  (If you agree with the article, please, leave me a comment.  Let's talk.  I'm sure there are more kinds of testers found under heaven than are dreamt of in my philosophy.)

But there's something else going on here, too.

Where did these numbers come from?

The CareerBliss data evaluates the key factors which affect work happiness, including: one’s relationship with their boss and co-workers, their work environment, job resources, compensation, growth opportunities, company culture, company reputation, their daily tasks, and job control over the work that they do on a daily basis. The data accounts for how an employee values each factor as well as how important that factor is to the employee’s overall happiness. Each review is given an average score indicating where the company places between one and five. All assessments are derived from February 1, 2011 through January 31, 2012. CareerBliss assessed a total of 100,467 employee reviews. A minimum of 50 employee reviews was required to be considered for CareerBliss’ 20 Happiest Jobs in America. Executive level jobs were excluded for this study.

This tells me that they surveyed self-selected people on the internet, asked each of them to rate their happiness on ten attributes from one to five, then had them rank those attributes by importance, with some weighting applied from the most important to the least.
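
To make that concrete, here is a rough sketch, in Python, of what that kind of weighted scoring might look like. The factor names come from the CareerBliss description quoted above; the ratings, the importance ranks, and the linear weighting scheme are entirely my own guesses, since CareerBliss doesn't publish the formula.

```python
# A minimal sketch of a weighted happiness score. Factor names come from
# the CareerBliss methodology blurb; the ratings, importance ranks, and
# linear rank-to-weight scheme below are hypothetical.

# One respondent's 1-5 ratings for each factor
ratings = {
    "boss": 4, "coworkers": 5, "environment": 4, "resources": 3,
    "compensation": 3, "growth": 4, "culture": 5, "reputation": 4,
    "daily_tasks": 5, "control": 5,
}

# The same respondent's importance ranking, 1 = most important
importance_rank = {
    "control": 1, "daily_tasks": 2, "compensation": 3, "boss": 4,
    "growth": 5, "coworkers": 6, "culture": 7, "environment": 8,
    "resources": 9, "reputation": 10,
}

# Hypothetical linear weighting: rank 1 gets weight 10, rank 10 gets weight 1
n = len(ratings)
weights = {f: n + 1 - r for f, r in importance_rank.items()}

score = sum(ratings[f] * weights[f] for f in ratings) / sum(weights.values())
print(f"weighted happiness score: {score:.3f}")  # still on the 1-5 scale
```

Swap in a different weighting scheme and the same ratings produce a different score -- which is exactly why the formula matters.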

That's great, now show me the data.

If we had the data, we could slice it six ways.  For example, if control over your own daily tasks is the most important thing to you, you could ignore the "happiest" job and look for the one that offered you that single benefit.  Likewise, you could examine the average deviation for each job -- perhaps software QA has a lot of people who are ecstatic about their job, and a lot that are merely happy, and the average is 4.245.  (I would call that a 'bipolar' distribution.  It also happens in conference talks when your session proposal is too inclusive, and you draw in folks who may be better served attending a different session. But I digress.)

If the data is bipolar, it is possible that outliers are throwing off the numbers.  Teachers, for example, ranked third on the unhappiest list, with a score of 3.595.  If there ever were a field I would suspect of a bipolar distribution, it would be teaching - there might be a whole lot of 4.9's, a whole lot of 2.0's, and the result is an average that represents, well ... no one.
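
Here's a toy illustration in Python of that concern: two invented camps of teachers whose combined average lands near the reported 3.595, even though almost no individual review sits anywhere near it. The group sizes and scores are made up.

```python
# Two invented camps of teacher reviews: one ecstatic, one unhappy.
ecstatic = [4.9] * 45
unhappy = [2.0] * 40
all_reviews = ecstatic + unhappy

mean = sum(all_reviews) / len(all_reviews)
print(f"mean happiness: {mean:.3f}")  # about 3.54 -- near the reported 3.595

# Mean absolute ("average") deviation: how far a typical review is from the mean
mad = sum(abs(r - mean) for r in all_reviews) / len(all_reviews)
print(f"average deviation: {mad:.3f}")  # about 1.44 on a 1-5 scale

# How many reviews actually fall within half a point of the mean?
near = sum(1 for r in all_reviews if abs(r - mean) <= 0.5)
print(f"reviews near the mean: {near} of {len(all_reviews)}")  # zero
```

A mean of 3.54 with an average deviation of 1.44 on a one-to-five scale is a mean that describes nobody.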

The first part of the problem is this idea of an aggregate, of an average response.  With bipolar data, the aggregate is not representative.  

There are ways of dealing with this in the survey world - for example, you might measure the percentage of people who are all 5's across the board - your 'true fans' - and count only those.
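
Here's a quick sketch of that 'true fans' count, again with made-up reviews; the idea is simply to count the respondents who rate every factor a 5 and ignore the rest.

```python
# Count only the "true fans": respondents who rate every factor a 5.
# The review data here is invented for illustration.
reviews = [
    {"boss": 5, "compensation": 5, "control": 5},  # a true fan
    {"boss": 5, "compensation": 3, "control": 4},
    {"boss": 2, "compensation": 2, "control": 1},
    {"boss": 5, "compensation": 5, "control": 5},  # another true fan
]

true_fans = [r for r in reviews if all(v == 5 for v in r.values())]
pct = 100 * len(true_fans) / len(reviews)
print(f"true fans: {pct:.0f}%")  # 50% in this toy sample
```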

Then again, perhaps the data really is bell-shaped, the groupings are tight, and these numbers are representative.  I'm not expecting a scientifically valid survey, with geographic region factored in and margins of error -- but it is possible the data falls mostly in line.  (When we talk about average salary, though, that kind of information would be nice.)

The thing is, we don't know.  The data is closed; all we get are the averages.

And that is my point today: We need the data.  The raw data, the real thing.

Next time you go to a talk, or read a nice blog post, or check out something on slideshare, ask for the data.  The raw data; probably something in Excel or a database.  Slice and dice it five ways.  Interview the folks who worked on it. Find something insightful.  Take it up a notch.

As for me, I just sent a little email off to the good people at CareerBliss.com.

Wish me luck.


 

