Sunday, December 16, 2012

My Trip to India

I returned this past Wednesday from 12 days in India, and I thought I’d post a few musings on the trip here. (I’ve already posted some of the work-related things over at my “day job” blog.)

Without a doubt this was one of the most impactful overseas trips I’ve ever taken, and I’ve been to a fair number of places outside North America over my life: Okinawa, France, Saudi Arabia, Panama, the Philippines, St. Martin, England, Italy, Hungary, Austria, Poland, Bulgaria, and likely a few others I’m forgetting.

I did zero preparation for the trip. None. Normally my wife and I are huge planners. We get all kinds of research done for big trips and have lots of things lined up. This time I figured I’d just go over and jump in the deep end and float wherever currents took me. Good choice.

First, some of the bad things: India is a country where 300 million people live below the poverty line, and in India, unlike the US, poverty means poverty. People sleeping on blankets in the dirt with a torn tarp over their head poverty. 20-story five-star hotels with tent cities next to them poverty. Hideous water giving you instantaneous dysentery and hepatitis poverty. It’s staggering if you’ve been in similar situations before, mind-numbingly shocking if you haven’t. The piles of trash across the entire country are sad—India is a land of amazing beauty and deep, deep spirituality. It was heartbreaking to see such a wonderful place being used as a dumping ground.

But still…

You get past that, or at least find yourself able to move beyond it, and wow, what India has to show you! Noise, color, smells, sounds, people, food. The list goes on and on and on.

The Food

Food in India is a wonderful experience. India, like Italy, Spain, and Mexico, isn’t one style of cuisine. How could it be? Like those other nations, India comes from a very diverse, fragmented history. The food in the south was vastly different from that of the north, and all of it was great. I was lucky to have friends to explain some of the differences to me. As you’ve likely noticed if you’ve followed this blog for long, I’m a pretty serious foodie, so when I run across stuff like this in the street I’m in love!

[Photo: a street food cart]

Or when my colleague takes me to a spot like this for dinner on the trip back from Agra:

[Photo: a roadside restaurant on the trip back from Agra]

This was a dinner from a central region of India which wasn’t typical of Bangalore, but my pal DJ thought I’d enjoy it. How right he was!

[Photo: the central Indian dinner]

Here’s a milk drink we had at a diner-like place in Delhi.

[Photo: a milk drink at a Delhi diner]

My stomach was in fairly good shape the entire trip, but then I eat a lot of spicy food already, so my stomach was somewhat preconditioned. It was pretty amusing how many of my colleagues and fellow diners there would continually ask, “Is this too spicy for you? Are you OK with this?” I suppose I’m a bit atypical in this area, though… (That said, the street food at the cart above did do me in a bit. I knew better, but didn’t care.)

The People

The people are perhaps the happiest, most open, most hospitable I’ve ever come across in any of my travels. Bavarians previously held that spot, but my experiences in India blew even that away. Indians are gracious and full of laughter. I was honored to be invited into two different homes for meals, and they were some of the best times I had. (Being invited into someone’s home for food has a special significance for me, one I’m not able to explain well. Suffice it to say it means a lot.)

Indians are incredibly communal people. You might be taken aback at the depth conversations reach between complete strangers. During our 16-hour day trip to Agra, my bachelor colleague got a grilling on his bachelorhood, advice on why he should marry, and insight on a successful life—all from the middle-aged driver we’d just met that morning at 6am. Get over whatever personal boundaries you might have and just enjoy the fact that Indians love people and want to know more about you.

They’re also extraordinarily happy. I had some great meals with various people through the developer and tester communities, and all were filled with laughter and jokes. And good food.

I was also amazed at the forward-thinking mindset of the testing professionals I spoke with. India earned a reputation (partially justly) for cheap labor of poor quality. You need to lose that perception, right now. The people I spoke with at conferences, user groups, and customer sites were, in the vast majority, serious about taking their work to the next level. They’re looking at the long game, and they’re committed to making serious value-based transformations in how they do their work. I had some amazing conversations there that got me fired up and excited.

The Traffic

Traffic in India is non-stop. It’s simply an insane, chaotic, wonderful experience if you can let go of any notion of patience. It will take you two hours to travel 60 km in Delhi, and 1.5 hours to go 15 km in Bangalore. Getting frustrated and angry won’t solve a thing, so just sit back and enjoy the tuk-tuk (a motorized three-wheeled rickshaw) or cab ride. Don’t freak out that you’ve got a two-inch space between your vehicle and the huge dump truck next to you. The driver’s thinking “I had an inch and a half to spare!”

Scooters are family transportation vehicles in India, and you’ll see amazing sights: pipes, furniture, groceries, and more folks than you would think could fit on a two wheeler. Personal favorite: Dad driving, mom on back, puppy in mom’s lap. Personal record: Dad driving, mom near back, four other kids scattered from handlebars to tail end. Yes, six on a two wheeler…

It seems insane to someone from the West, but it’s similar to what I’ve experienced on my previous trips to Panama, the Philippines, etc. You’d think there would be non-stop wrecks, mayhem, and fatalities, but I didn’t see a single accident while I was there. I saw a lot of scratched-up, dinged-up cars, but not one wreck. Indians understand the implicit rules of the traffic, and they, as with so many other things in their nation, just figure out how to have huge numbers of people co-exist in small spaces. With cars. And scooters. And trucks. And pedestrians. And dogs. And tuk-tuks.

The Horns

Horns require a section of their own. Honking was non-stop in Bangalore, prevalent in my countryside drives, and moderate in Delhi. If you sit back and listen, the honking has an entertaining language all its own.

There’s the single, quick “toot,” which is the equivalent of “Coming up on your right/left. Make way, please!” A bit longer “honk” might be used if the person the driver’s passing didn’t move out of the way quickly. Toots escalate through honks up through blaaats to the rarely used Angry Honk, where the driver’s really frustrated. My first driver, on my day trip to Mysore, had a very light, happy honk. I didn’t like the driver for our Agra trip very much. He was Angry Honk right off the bat, all day.

The Sights

Where to start? India is full of so many strange, wonderful, overwhelming things. Yes, yes, the Taj Mahal is teh awesum, but you expect that. It’s the Taj Mahal.

[Photo: the Taj Mahal]

What was more impactful to me were some of the more intimate, less visited temples. A temple near Mysore, built in 849, really moved me. The palace below, with a shrine to a holy man, is a short hour from Agra, and I had one of the best times of the trip there.

[Photo: the palace with a shrine to a holy man near Agra]

There’s also the crazy IT market in Delhi, which is straight out of some cross between peyote-hammered steampunk and M.C. Escher.

[Photos: the IT market in Delhi]

All of this can just drive you crazy trying to take it all in. I stopped taking pictures very early in my trip, choosing to just absorb a lot and get a few pictures in here and there. I think that was a pretty smart choice.

Wrapping Up

I’m really thankful I got the chance to go to India. I’m already trying to line up a return trip or two. I can’t wait to get back and see new old friends and new old sights.

Tuesday, October 16, 2012

Handling Rejection (From Conferences)

Sometimes it seems like a significant part of my day job is having my submissions to various conferences rejected. Off the top of my head, here’s an incomplete list of conferences that have rejected my submissions over the last year(ish):

  • TechEd
  • DevConnections
  • DevLink
  • Agile Testing Days
  • Agile Dev Practices
  • StarEast/StarWest
  • Some testing conference in London whose name I’ve forgotten

There are a number of other conferences as well, but frankly I’ve lost track.

Rejection stings, for certain, but I have also come to view these rejections as a pretty good learning opportunity. After I get over the pain of rejection, that is.

First off, I always thank the organizers for considering my submissions. I can’t imagine how many hundreds of submissions conferences like DevConnections or StarEast/West get. Taking a moment to thank the content selection crew is simply good manners. (TechEd is different. It’s a totally black-box, impersonal process, so I never get any contact with humans.)

Secondly, I sit back and think about what might have been the cause for getting passed over. If possible, I try to get feedback directly from the selection folks; however, that’s not always possible.

I’ve found there are a number of useful aspects to consider:

  • Content doesn’t fit. Maybe you’ve just missed the mark with your submissions to that conference. Some years ago I tried wedging a testing talk into an open source conference targeted more to business application developers. My abstract simply didn’t make a good case for why the talk would fit in their conference. Make sure what you’re submitting will be useful to the conference organizers.
  • Content lost in the chaff. You need to submit talks that stand out from all the others. “Intro to MVC” is outdated and offers nothing unique compared to the 20 other MVC talks the organizers are looking through. Make a clear case for what value your session brings to the attendees.
  • Content selection crew was overwhelmed. Poorly organized conferences might have too few folks on staff to get a good review in. If you’re not known to the organizers, then you may have simply been lost in the tidal wave of submissions. Networking matters. Experience matters. (I’m very thankful that the CodeMash content chairs work hard to scale out the selection crew every year to avoid just this problem. They still have huge amounts of work.)
  • Poorly written abstract. It happens, even to someone who’s polished and submitted hundreds of abstracts over the last ten years. I’d like to think I’ve learned and don’t do this anymore, but it’s possible. I once wrote another blog post with some thoughts about writing a good abstract.
  • Better submissions from other folks. That happens on occasion, particularly for really large conferences. Look at what did get picked up and compare your submissions to those. Note that you’ll have to do some serious stepping back and viewing things with complete self-honesty and detachment. You can’t let your own pride get in the way with false impressions. Which brings me to…
  • Ego. Yes, sometimes my own ego gets in the way of submissions. Last year I put in four testing talks to a regional conference. None got accepted. Looking back I think I seriously slacked off when writing the abstracts because I felt I was well-known enough that the talks would get picked up anyway. That one stung but good—however, it was a good lesson learned. Respect yourself enough to put aside your ego and care about what you’re putting in. Remember, it’s not about you.
  • Drama. Conference organizers are horribly, insanely busy during planning and especially during execution of the conference. The last thing they need is drama or worries about unreliability. If you’re a Drama Queen or King, or if you’re a flake, then you’ve got a deep, deep hole to dig yourself out of. Getting over that can take years because that sort of trust is hard to rebuild. (As a personal note, seven years ago I bailed from a half-day workshop at a conference put on by my pal Chris Woodruff. I bailed the day before the conference. It’s perhaps the worst I’m a Douchebag moment of my adult life, and I’m still beating myself up about it. Thankfully Chris is an awesome guy who was extraordinarily gracious about it, and I think I’ve somewhat atoned for it by now.)

Rejection’s not easy. I’ve gotten three rejection notices in the last two weeks alone. That said, avoid lashing out; instead, view each rejection as an opportunity to consider how you can improve for the next conference you target.

I’m already working on a few more submissions now…

Tuesday, October 09, 2012

Consolidating Drawings from Your iPad’s Paper app

I really like doodling around on the Paper app from 53. It’s a lot of fun, and lets me come up with some funky, off-the-beaten-path presentations.

The work involved in getting those images out of Paper and into Keynote or PowerPoint is a hassle, though. You have to fire up iExplorer and dive into the Apps folder on your iPad when it’s docked.

[Screenshot: iExplorer showing the Paper app’s Journals folder]

You’ll have to figure out which node under that Journals folder is the book you want. There’s some goo in the model.json files that can help you out, or just look at the Date Modified fields.

Drag that folder over to a working directory on your Mac, then you can start to pull the graphics out. They’re beautiful, high-res PNG files with transparent backgrounds. Unfortunately, they’re all stored under the same filename in GUID-named folders. <sigh/>

The following Ruby script pulls the files out of those GUID-named folders and drops them into a common folder, renaming them on the fly. You’ll need to edit sourceDir and targetDir as appropriate for your environment.

require "FileUtils"
 
count = 0
 
sourceDir = "/Users/jimholmes/Documents/tmp/Pages"
targetDir = "/Users/jimholmes/Documents/tmp/Consolidated/"
 
dirContents = Dir.entries(sourceDir)
dirContents.each do |folder|
  next if folder == '.'
  next if folder == '..'
  next if folder == '.DS_Store'
  next if not File.directory?(folder)
 
  source = File.join(sourceDir, folder, "sketchLayer@2x.png")
  target = File.join(targetDir, "Pages-#{count}.png")
 
  puts "Source: #{source}"
  puts "Target: #{target}"
  FileUtils.cp_r(source, target)
  count += 1
end

Mad props to @rubyist who dug me out of some issues with how I was handling the filenames.

My next step some day would be to figure out how to pull this stuff straight from the iPad’s filesystem; however, I’ll have free time for that in approximately 246 years, I think.

Monday, September 17, 2012

Video on “Solving Common Web UI Automation Problems”

Looking to learn more about creating flexible locators, easing your frustrations over dynamic content, and pulling data from tables in a flexible fashion that’s not tied to sort or column order?

Check out the recording of a webinar I hosted last week on these common web automation problems.

This webinar’s content isn’t tied to any particular tool; as a matter of fact, most of the examples are in C# using WebDriver.
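
If you’d like a taste of the same techniques in another stack, here’s a minimal sketch using the Ruby WebDriver bindings. The page URL and element details are hypothetical stand-ins, not the webinar’s actual demo code: it keys the locator off a stable attribute fragment and finds a table column by header text instead of a hard-coded index.

require 'selenium-webdriver'

driver = Selenium::WebDriver.for :firefox
driver.get 'http://example.com/orders' # hypothetical page

# Flexible locator: match a stable attribute fragment rather than a brittle
# full path, so cosmetic layout changes don't break the test.
grid = driver.find_element(:css, "table[id$='ordersGrid']")

# Find the column by its header text rather than a hard-coded index, so the
# test survives column reordering and re-sorting.
headers = grid.find_elements(:css, 'thead th').map(&:text)
total_column = headers.index('Total')

grid.find_elements(:css, 'tbody tr').each do |row|
  cells = row.find_elements(:css, 'td')
  puts cells[total_column].text unless total_column.nil?
end

driver.quit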

I’ve also got a wrap-up post listing the various demo code I used during the webinar.

I hope you find it useful!

Wednesday, September 12, 2012

Love Training & Coaching Software and Test Teams? Come Work With Me!

Telerik’s Test Studio team is looking for someone to help us make teams using Test Studio awesome. We’re extremely passionate about ensuring our customers succeed over the long term with test automation – and a huge part of that success hinges on getting Test Studio users great training in the early phases of their adoption efforts.

We’re looking for someone to join the Telerik team full time to help deliver training and coaching engagements for our customers. As a trainer you’d be spending 25% – 50% of your time on the road at customer sites. We’ve got upcoming training engagements all across the US and internationally as well. You’d also be delivering training via online webinar sessions, and you’d be helping to evolve our existing training curriculum and materials. You’d also be responsible for helping create other content like blog posts, white papers, and videos.

We’ve got a very different philosophy about training for our customers. Training is NOT a revenue stream for us; it’s explicitly about ensuring our customers’ long-term success with Test Studio. Moreover, our onsite training engagements quickly move from an overview of Test Studio into working with the customer teams to write tests in their own environment. You’ll quickly be flipping the switch from trainer to coach during these sessions!

It’s not just all about Test Studio, either. It’s about becoming an influencer and thought leader in the automated test space. The folks here in this corner of Telerik are awesome about helping improve testing overall, not just via our product. This means you’ll have the opportunity to stay (or get) familiar with other testing tools. (I still spend plenty of time talking about and presenting on WebDriver/Selenium, as well as gabbing about other automation tools, too.)

What kind of person are we looking for? You’ll need to be passionate about helping customers succeed long term with automation. You’ll need to have enough self-confidence to say “I don’t know” when confronted with something new to you – and the self-motivation and drive to follow up on the accompanying “but I’ll find out and get back to you!”  You’ll need to have exposure to development and testing in real environments – but you don’t need to be an expert in either. (Obviously expertise helps…)

We’re looking to staff this position in either Austin, Texas, or Hudson, Ohio. The preference is Austin – the ability to coordinate closely with our Test Studio team down there is a tremendous advantage.

Working for Telerik is amazing. Working with Test Studio at Telerik has evolved into pretty much my dream job. I get to train and coach customers, I get to learn from wicked smart folks in the industry, and I work along with some incredibly awesome product and sales folks.

Interested? Ping me directly at my work email address. I’d love to chat with you.

Saturday, August 11, 2012

Are You a Geek In Dayton? Fill an Opening with the Dayton .NET Developers Group!

The Dayton .NET Developers Group has a need for an enthusiastic person to step up and help lead the group. It’s time for me to step aside after seven or so years with the group, and the current Board of Directors is looking for someone to fill the soon-to-be empty position.

Helping run the DevGroup is a great experience. It takes a couple hours a month of your time, but you’ll get to meet a lot of great folks, and you’ll get tremendous satisfaction in knowing you’re helping people make positive impacts on their careers through networking and skills development.

Moreover, as part of the group’s leadership you’ll have an active hand in determining the group’s direction for topics and special events. That’s pretty exciting stuff! You’ll also build out your network of contacts to include speakers, sponsors, and companies at the regional and national level.

A few of the things you might be involved with during a regular month:

  • Helping run the monthly meetings (raffles, meeting logistics, greeting, etc.)
  • Helping plan out future topics
  • Helping locate speakers for future meetings
  • Helping grow the group’s membership through drives, media contacts, etc.

Getting involved with running a user group is an extraordinarily rewarding experience. You’ll make a lot of new friends and contacts throughout the Heartland region (or farther afield!), you’ll get even more exposure to career-broadening ideas, and you’ll get opportunities you never imagined. I’d never have started my speaking “career” were it not for having to jump in to fill a meeting when the planned speaker had to reschedule at the last minute. Somehow the group survived me…

Interested, or know someone who’d be a good fit? Drop me a mail via the contact link at the blog’s upper right corner.

Get involved! It’s totally worth it.

Thursday, July 19, 2012

Hear Jim Holmes talk Testing with Jesse Liberty

Jesse Liberty, one of Telerik’s great evangelists, had me on his Yet Another Podcast show to talk about, wait for it, testing. Jesse and I cover a lot of things about testing at all levels. It’s a wandering conversation that I think you’ll find entertaining and useful, too.

Yet Another Podcast Episode 70: Jim Holmes & Automated Testing.

Monday, June 11, 2012

Honest Dialog with Stakeholders on Distributed Teams’ Constraints

My talk on “Effective Distributed Teams” always garners polarized feedback. I get kudos from folks who’ve left with a few new ideas, and I always get one or two evals where I’m skewered for talking too much about the human side of teams and the criticality of working hard, VERY HARD, to ensure you get the right folks on your distributed teams. In my talk I spend nearly 30 minutes at the start focusing on hiring your team members, assembling your team, and dealing with the inevitable issue of offshore workers.

(If you’re interested, here’s my deck from StarEast.)

One attendee at my StarEast talk wrote that I was naïve in my assertions about controlling offshore hiring. The same attendee also felt I was unrealistic in my position on working with stakeholders/business partners to ensure the team has proper time to get tasks done. I’ve been called a lot of things in life, many of which I earned, but I’m not sure “naïve” and “unrealistic” are labels that stick…

Here’s why I think it’s absolutely critical for you as a team lead or member to have open and honest dialog with your stakeholders/management/whatever: those folks need to understand the risks to the success of their project. They’re putting constraints on your team by asking you to work in a distributed fashion. They’re also potentially putting additional constraints on you by expecting to realize cost savings by using offshore help.

If they’re putting those constraints on you, then they must be made aware of the impacts of those actions: reduced velocity, and potentially reduced quality of work due to all the things inherent with offshored work. (Differing skills, communication difficulties, cultural differences, and of course timezone barriers.)

If your team members aren’t working out, regardless of whether they’re direct reports, distributed employees, or offshored subcontractors, then you need to be empowered to change the makeup of that team. That should include having input on changing a business partner relationship and terminating contracts with poorly performing individuals or companies. Be clear: this is NOT an easy effort, nor is it generally a quick one. I’ve never said otherwise. To the contrary, I emphasize it may take months to terminate a relationship with a poorly performing team member or subcontractor.

The same concept applies to unreasonable expectations around timelines. There’s a huge amount of writing on this topic by folks who are much smarter and wiser than I. The bottom line is you can’t simply accept the situation; you’ve got to ensure some rational discussion happens around scope, effort, and dates.

You need to make sure you’re clear on what YOU need to flex on too: just because you’re the QA lead doesn’t mean you get to hold up the project until you feel comfortable about quality. Your job is not to be the final voice on shipping or not. Your job is to ensure the stakeholders have a clear picture of the overall quality and risk at any given time. Stakeholders make shipping decisions, not you!

Again, I’ll repeat myself: these conversations are rarely easy, but you need to have them, and you need to have them earlier in the project versus later. (Actually, these conversations need to happen constantly through a project’s lifecycle.)

Your project’s stakeholders, believe it or not, really are looking for a project to succeed and help out their bottom line. If something is jeopardizing the project’s success, then they’re going to want to know about the issue. [1] The conversation may not be an easy one to have, and it will likely take several attempts to get clear, but you need to step up to the plate and get the issues out in the open. (You may also find you’re tilting at windmills with management that’s unwilling to support you. In that case, remember you can change where you work, or you can change where you work.)

I’ve had these conversations a number of times in different organizations and roles. I’ve been part of efforts getting rid of poorly performing workers and subcontractors, and I’ve been part of efforts getting poorly performing workers/subcontractors up to being productive members of our teams. It’s hard work, but it’s worth the effort.

It’s not naïve or unrealistic to think you can change these bad situations. It’s defeatist to think you can’t. Step up to the plate, do the work. You’ll be happy you did.

Really.

[1] Yes, yes, there are rare environments where an organization’s politics are so horrific that management actually looks to sabotage teams and projects. All I can say is that if you’re working in such an environment, you’re there by choice. Flee it right now, or accept that you’re choosing to stay where you’re at.

Wednesday, June 06, 2012

Three Day “Lunch & Learn” on Web UI Automation

Are you looking to start working with web UI automated testing? Are you or your team currently working with functional testing for your web apps, but struggling with writing maintainable, valuable tests? Join me June 26th, 27th, and 28th for a three day series “Getting Started with Web Automated Testing” and I’ll lay out a few pointers to help you through the rough spots.

I’ve written a bit about the series on my blog at Telerik, but the gist of the series is this: three days of webinars, each lasting 30 – 45 minutes. Through the series you’ll learn about fundamentals for web testing:

  • What a web page’s lifecycle looks like, and how automation frameworks monitor that
  • The Document Object Model (DOM)
  • Understanding how a test framework interacts with the page
  • What element locators are, and why it’s critical to choose them carefully
  • Dealing with dynamic content like AJAX
  • Good test case design
  • Good test suite organization
  • Backing APIs

I’ll be using Telerik’s Test Studio to work through most of the examples, but the fundamental points apply regardless of whether you’re using WebDriver, Watir, or another commercial tool.
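
To make the dynamic content point concrete before the series starts, here’s a hedged sketch in the Ruby WebDriver bindings (the URL and element IDs are hypothetical): rather than sleeping a fixed amount of time and hoping the AJAX call finished, you poll until the content shows up.

require 'selenium-webdriver'

driver = Selenium::WebDriver.for :firefox
driver.get 'http://example.com/search' # hypothetical page

driver.find_element(:id, 'searchButton').click # fires an AJAX request

# Explicit wait: poll up to 10 seconds for the results element to appear
# instead of hard-coding a sleep that's either too short or too long.
wait = Selenium::WebDriver::Wait.new(:timeout => 10)
results = wait.until { driver.find_element(:id, 'results') }
puts results.text

driver.quit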

You can register here for the series!

Tuesday, April 24, 2012

It’s Not About You

Too often we get hung up on ourselves. PMs, developers, testers, whomever. We forget, or choose to ignore, the critical fact that we (almost always) do our work for someone else: a stakeholder, a customer, a user.

In a significant number of cases, the person we’re doing our work for may have put their entire career or business on the line. We need to get over ourselves and remember that we’re working to solve a problem, sometimes a crucial one, for those folks.

Instead of focusing on that, too often we get hung up on the stakeholder’s inability to understand technology (developers), think that we’re the final line of death who decides what ships (QA/testers), or get frustrated when the schedule/scope needs to change (PMs).

This deck is something that’s been noodling around in my head for a long time. I finally got my thoughts to Paper on a new iPad I got for work. It’s not perfect, but it sort of lays out my thoughts on this a bit more clearly. I hope you enjoy it!

Thursday, April 19, 2012

My Slides from #StarEast Distributed Test Team Talk

I’ve uploaded my slides from my StarEast talk “Making Distributed Test Teams Work” to Speakerdeck. This deck is slightly different from the earlier version of the talk. I’ve iterated a few things in the talk and deck based on feedback from the last couple presentations.

If you were at StarEast and attended the talk, then thanks very much! I think this was the best session I’ve had for this talk, and I really appreciated all the audience interaction – that always makes talks so much better!

Friday, January 06, 2012

31 Days of Testing—Day 25: Performance Testing, Part 2

Index to all posts in this series is here!

My previous post laid out some overview and planning issues around performance testing. This post points out what you might want to monitor, and lays out some resources I’ve found very useful.

What do I Monitor?

Figuring out which metrics, measurements, and counters to monitor can be extremely daunting—there are hundreds of individual counters in Performance Monitor alone! In most cases you don’t need anywhere near the entire set. A few counters will give you all the information you generally need to start your performance testing work.

Most performance testing gurus will tell you just a few items will get you started in good shape:

    • Processor utilization percentage
    • ASP.NET requests per second
    • SQL Server batch requests per second
    • Memory usage (total usage on the server, caching usage)
    • Disk IO usage
    • Network card IO

If you’re doing load testing you’ll likely be interested in errors per second and queued requests. Oftentimes soak or endurance testing will also look at counters associated with memory leaks and garbage collection—these help you understand how your application holds up over a long period of stress. However, those are different scenarios. The few counters mentioned above will get you started in good shape.
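
As a minimal sketch of what “start with a few counters” can look like in practice: on a Windows host you can sample Performance Monitor counters from the command line with typeperf, wrapped here in a bit of Ruby. The counter paths below are common ones, but they vary by OS version, instance names, and what’s installed, so verify them with typeperf -q on your own box.

# Sample a starter set of Performance Monitor counters via typeperf (Windows).
# The ASP.NET and SQL Server counters exist only where those are installed.
counters = [
  '\Processor(_Total)\% Processor Time',
  '\ASP.NET Applications(__Total__)\Requests/Sec',
  '\SQLServer:SQL Statistics\Batch Requests/sec',
  '\Memory\Available MBytes',
  '\PhysicalDisk(_Total)\Disk Transfers/sec',
  '\Network Interface(*)\Bytes Total/sec'
]

# -si 5: sample every five seconds; -sc 120: take 120 samples (ten minutes);
# -f CSV -o: write the run to a CSV file you can keep around for trending.
quoted = counters.map { |c| "\"#{c}\"" }.join(' ')
system("typeperf #{quoted} -si 5 -sc 120 -f CSV -o perfrun.csv")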

Where Can I Learn More?

Microsoft’s “Performance Testing Guide for Web Applications” is somewhat older, but remains a tremendous resource for learning about performance testing. It’s an extensive, exhaustive discussion of everything around planning, setting up for, executing, and analyzing results from your performance testing. The guide is freely available on Codeplex.

Steve Smith of NimblePros in Kent, Ohio, has been extremely influential in my learning about performance testing. Microsoft appointed Steve a Regional Director because of his technical expertise in many areas. He blogs extensively on many software topics and has great practical examples for performance testing. He also has a commercial online course offered through Pluralsight that’s well worth checking out.

The website Performance Testing has a great number of references to performance testing information across the Web. The site lists blogs, articles, training material, and other highly helpful information.

I’ve recently come across two folks on Twitter from whom I’ve gotten a wealth of information:

  • Ben Simo, aka Quality Frog, writes and Tweets extensively about testing, but also talks specifically about performance issues regularly.
  • Scott Barber has an amazing blog with scads of information on it, plus he Tweets amazingly good reads on a regular basis.

One of the things Scott Tweeted recently was this nice series on web performance optimization. There’s some tremendously valuable information in its articles.

Go! Get Started!

Spend some time planning out your performance testing effort. Work HARD to change only one variable at a time. Don’t get flooded with information; less information is often more helpful at the start.

Performance testing is a tremendous asset to your projects, and it can also be an extremely fun, interesting, and rewarding domain to work in.

Go! Get started!

Thursday, January 05, 2012

31 Days of Testing—Day 24: Getting Serious About Performance

Updated: Fixed wrong day # in title. Duh.

Index to all posts in this series is here!

In this post I’d like to cover something that too often gets ignored: performance testing. I thought I’d take some time to lay down some of my opinions and experiences around performance testing in general.

The phrase “performance testing” can mean a great many things to different people in different scenarios, so covering a few of the different types of tests may be helpful.

Performance Testing is generally an umbrella term covering a number of different, more specific test types. I’ve also used the term to describe a very simple set of scenarios meant to provide a baseline for performance regressions.

Load Testing generally uses a number of concurrent users to see how the system performs and to find bottlenecks.

Stress Testing throws a huge number of concurrent users against your system in order to find “tipping points” – the point where your system rolls over and crashes under the traffic.

Endurance/Soak Testing checks your system’s behavior over long periods to look for things like degradation, memory leaks, etc.

Wikipedia’s Software Performance Testing page has some very readable information on the categories.

You can also look at performance testing as a slice of your system’s performance. You can use a specific scenario to dive down into specific areas of your system, environment, or hardware.

Load, stress, and endurance testing are all that, but turned up to 11. (A reference to Spinal Tap for those who’ve not seen the movie.)

With that in mind, I generally think of performance testing in two categories: testing to ensure the system meets specified performance requirements, and testing to ensure performance regressions haven’t crept into your system. Those two may sound the same, but they’re not.

Performance testing to meet requirements means you’ll need lots of detail around expected hardware configurations, baseline datasets, network configurations, and user load. You’ll also need to ensure you’re getting the hardware and environment to support those requirements. There’s absolutely no getting around the need for infrastructure if your customers/stakeholders are serious about specific performance metrics!

Performance testing to guard against regressions can be a bit more relaxed. I’ve had great successes running a set of baseline tests in a rather skimpy environment, then simply re-running those tests on a regular basis in the exact same environment. You’re not concerned with specific metric datapoints in this situation – you’re concerned about trends. If your test suite shows a sudden degradation in memory usage or IO contention then you know something’s changed in your codebase. This works fine as long as you keep the environment exactly the same from run to run—which is a perfect segue into my next point.

Regardless of whether you’re validating performance requirements, guarding against regressions, or flooding your system in a load test designed to make your database server weep, you absolutely must approach your testing with a logical, empirical mindset. You’ll need to spend some time considering your environment, hardware, baseline datasets, and how to configure your system itself.

Performance testing isn’t something you can slap together and figure out as you go. While you certainly can (and likely will!) adjust your approach as you move through your project, you do indeed need to sit down and get some specifics laid out around your testing effort before you begin working.

First and foremost: set expectations and goals.

Ensure everyone’s clear on why you’re undertaking the performance testing project. If you are looking to meet specific metrics for delivering your system then you’ll need to be extremely detailed and methodical in your initial coordination. Does your system have specific metrics you’re looking to meet? If so, are those metrics clearly understood – and more importantly reasonable?

Keep in mind that your customer/stakeholder may be giving you metrics you think are unreasonable, but they may fit business needs of theirs which you’re unaware of. You have to put in the extra effort to ensure you understand those higher-level needs.

Your customer may also be giving you vague requirements simply due to their lack of experience or understanding. “We want the page to load fast!” is an oft-heard phrase from stakeholders, but what do they really mean?

Define your environment

If those same metrics are critical to your delivery, then they will also need to be defined based on a number of specific environment criteria such as exact hardware setups, network topologies, etc. These environments should be the same exact environment you recommend to your customers. If you’re telling your system’s users they need a database server with four eight-core CPUs, 32 GB of RAM, and a specific RAID configuration for the storage, then you should look to get that same hardware in place for your testing.

(A tangential topic: it’s happened more than once that a server and environment acquired for performance testing somehow gets borrowed or time-shared out to other uses. Timesharing your performance environment can be a highly effective use of expensive resources, but you’ll need to ensure nothing, absolutely nothing, is being utilized on that server once your performance runs start – you have to have dedicated access to the server to ensure your metrics aren’t being skewed by other processes.)

Agree on baseline data

Something that’s commonly overlooked is the impact of your system’s baseline dataset on your performance tests. You likely won’t get anything near an accurate assessment of a reporting or data analysis system if you’ve only got ten or thirty rows of data in your database.

Creating baseline data can be an extremely complex task if your system is sensitive to the “shape” of the data. For example, a reporting system will need its baseline data laid out across different users, different content types, and different date patterns.

Often the easiest route to handle this is to find a live dataset somewhere and use that. I’ve had great success coordinating with users of systems to get their datasets for our testing. You may need to scrub the dataset to clear out any potential sensitive information such as e-mail addresses, usernames, passwords, etc.

If using a live dataset isn’t an option, you’ll need to figure out tooling to generate that dataset for you.
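
Here’s a hedged Ruby sketch of that kind of generator. The schema (users, content types, dates, amounts) is purely hypothetical; the point is shaping the data across users and dates rather than dumping thousands of identical rows.

require 'csv'
require 'date'

# Hypothetical shape: 10,000 orders spread across 500 users, three content
# types, and two years of dates, so reports have a realistic spread to chew on.
CSV.open('baseline_orders.csv', 'w') do |csv|
  csv << %w[user_id content_type created_on amount]
  10_000.times do
    csv << [
      rand(1..500),                        # spread rows across many users
      %w[report invoice statement].sample, # vary the content types
      (Date.today - rand(730)).iso8601,    # scatter dates over two years
      (rand * 500).round(2)                # plausible-looking amounts
    ]
  end
end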

Determine your usage scenarios

Talk through the scenarios you want to measure. Make sure you’re looking to measure the most critical scenarios. Your scenarios might be UI driven, or they could be API driven. Steve Smith has a terrific walkthrough of a real world scenario that gives a great example of this.

Set up your tooling

Once you’ve got a handle on the things I’ve discussed above, look to get your tooling in place. Performance testing utterly relies on an exact, repeatable process. You’ll need to do a large amount of work getting everything set up and configured each time you do a perf run. Avoid doing this work manually; look to tooling to do it for you, for two reasons. One: automating setup cuts out any chance of human error. Two: it’s really boring.

Build servers like Hudson, TeamCity, or TFS can interface with your source control and get your environment properly configured each time you need to run a perf pass. Scripting tools like PowerShell, Ruby, or even good old command files can handle tasks like setting up databases and websites for you.
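
Here’s a hedged sketch of what scripting a perf pass might look like in Ruby. Every command, path, and tool name is a hypothetical stand-in for your own environment’s database, build drop, and load tool; the point is that the whole setup runs exactly the same way every time.

require 'fileutils'

RUN_ID = Time.now.strftime('%Y%m%d-%H%M%S')

# Run a step and bail loudly if it fails, so a broken setup never skews a run.
def step(name, cmd)
  puts "== #{name}"
  system(cmd) or abort "#{name} failed"
end

step 'Restore baseline database',
     'sqlcmd -S perfdb -i restore_baseline.sql'       # hypothetical SQL script
step 'Deploy build to perf site',
     'xcopy \\\\build\\drop C:\\perfsite /E /Y /I'    # hypothetical drop share
step 'Run performance scenario',
     'loadtool --scenario checkout --out results.csv' # stand-in load tool

# Archive every run's raw output so trends and history survive between runs.
FileUtils.mkdir_p("results/#{RUN_ID}")
FileUtils.mv('results.csv', "results/#{RUN_ID}/results.csv")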

You’ll also need to ensure you’re setting up your tooling to handle reporting of your perf test runs. Make sure you’re keeping all the output data from your runs stored so you can keep track of your trends and history.
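
Once the runs are archived, trend-watching can be as simple as comparing averages between a stored baseline run and the latest one. A hedged sketch, assuming the CSV columns hold numeric counter values and treating a 20% swing as an (arbitrary) alarm threshold:

require 'csv'

# Average each column in a run's CSV output; non-numeric columns come out as
# zero and get skipped below.
def averages(file)
  rows = CSV.read(file, :headers => true)
  rows.headers.each_with_object({}) do |metric, avg|
    values = rows.map { |row| row[metric].to_f }
    avg[metric] = values.inject(:+) / values.size
  end
end

baseline = averages('results/baseline/results.csv')
latest   = averages('results/latest/results.csv')

# Flag any metric that moved more than 20% in either direction.
baseline.each do |metric, base|
  next if base.zero?
  change = (latest[metric] - base) / base
  puts format('Check %s: %+.1f%%', metric, change * 100) if change.abs > 0.20
end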

Change only one variable at a time. Compare apples to apples!

It’s critical you take extraordinary care with the execution of your performance testing scenarios! You need to ensure you’re only changing one variable at a time during your test passes, or you won’t understand the impact of your changes.

For example, don’t change your database server’s disk configuration at the same time you push a new build to your test environment. You won’t know if performance changes were due to the disk change or code changes in the build itself.

In a similar vein, ensure no other folks are interacting with the server during your performance run. I alluded to shared servers earlier; it’s great to share expensive servers for multiple uses, but you can’t afford for someone to be running processes of any shape or form while you’re doing your performance passes.

Profiling: Taking the simple route for great information

All the work above can seem extraordinarily intimidating. There’s a lot to consider and take into account when moving through some of the more heavyweight scenarios I laid out in my introductory post.

That said, you can look to simpler performance profiling as a means to get great insight into how your application is behaving. Profiling enables you to use one scenario, or a very small set, and see a slice of how your application’s behaving. Depending on the tooling, you can see performance results back at the browser, or dive into performance metrics on the server (think CPU or disk usage, for example). You may even be able to dig down into the application’s codebase to see detailed metrics around specific components of the system.

Profiling is a great way to start building a history of your application’s performance. You can run regular profiling tests and compare the historical performance to ensure you’re not ending up with performance regressions.

Start small, start smart

As you’ve seen in this post, performance testing can be particularly complex when you’re looking to ensure high performance, reliability, and scalability. You need to approach the effort with good planning, and you need to ensure you’re not changing variables as you move through the testing.

Make sure your performance efforts get you the information you need. Start with small environments and scenarios, ensure you’ve clearly laid out your goals and expectations, and keep a careful eye out as you’re running your tests.

Wednesday, January 04, 2012

31 Days of Testing—Day 23: Acceptance Tests & Criteria in the Real World

UPDATED: I goofed and Andy caught it, thankfully. They’re using Watir, not WatiN in their work. I knew that and still fat-fingered the post. Fixed!

Index to all posts in this series is here!

Today’s post is by Andrew Vida, another smart pal in the Heartland region. I’ve chatted with Andy a number of times at various conferences, and I’ve enjoyed hearing about the work he and Bramha Ghosh do at Grange Insurance in Columbus, OH.

We three have spent a pretty good amount of time moaning about our shared pain in getting great, reliable, valuable functional test suites in place. Andy and Bramha are working in Ruby and Watir, but their issues are my issues are the same issues seen in any technology: dealing with data, environments, timing, and of course the inevitable hardest part: “soft” problems in ensuring clarity of communication between folks on the project team.

Andy offered up the following article for my series based on the work they’ve done trying to get a smooth flow around well-defined acceptance criteria. This is a perfect follow on to yesterday’s post by Jon Kruger!


Using Acceptance Tests to Define Done

Have you ever been on a team and been asked, "What is the definition of done?" You respond by saying, "When all of your automated tests pass, and there are no bugs, then you have satisfied the acceptance criteria. Done!" Which is then met with, "Well, how do I define the acceptance criteria?" Good question!

Understanding the Feature

First things first - you have to understand what feature you'll be building. Building the right product and building the product right takes communication and collaboration between your product owner and your team.

The reason for all of the collaboration is that we're trying to build a shared understanding of what needs to be done and also produce examples that are easy to maintain. There are many ways to work collaboratively and ultimately, you have to decide what works best for your team.

The team I'm currently on has found that smaller workshops work best for us. Those workshops, otherwise known as "Three Amigos", include a business analyst, a developer and a tester who share a similar understanding of the domain.

Let's hypothetically say you're discussing a shopping cart feature for your site. Start by defining the goals of this feature. By starting with the goal, you'll let everyone know why they're spending their time implementing the feature. If you can't come up with a good reason why, then maybe the product owner is wasting everyone's time.

We've used the Feature Injection template from Chris Matts and Liz Keogh to help us successfully describe why:

As a <type of stakeholder>
I want <a feature>
So that <I can meet some goal>

Here's our feature description:

As an online shopper
I want to add items to my shopping cart
So that I can purchase them

Determining Acceptance Criteria

Next, your team needs to determine what the system needs to do to meet those goals: the Acceptance Criteria.

In your Three Amigos meeting, be sure to ask questions to clear up assumptions, such as "Are there any products that cannot be purchased online?" or "Does the shopper need to be authenticated to purchase?"

Remember, the scope of the feature should stay high level; we only want to identify what the application needs to do, not how it's implemented. Leave that part to the people who know how to design software. The team determined that the following items are in scope:

  • Only authenticated shoppers can add items to the shopping cart.
  • Cannot add refrigerators to shopping cart.
  • Only 25 items can be added.
  • Shopper can remove items from shopping cart.
  • Shopper can change quantity of items after adding it to the cart.

Hey, now we have some acceptance criteria!

Acceptance Criteria lead to Acceptance Tests

We've used communication and collaboration to determine why a feature is necessary and what the system needs to do at a high level, so now we can come up with some examples to test our acceptance criteria.

To do this, we'll write some Cucumber scenarios.  We've chosen Cucumber for all of the reasons mentioned in Tim Wingfield's post on Day 15. If you haven't read it, go back and check it out.  It's an excellent post on the benefits of employing Cucumber.

Here are a few scenarios that were created:

Given the shopper is a guest
When they try to add an item to their shopping cart
Then they will receive the error "Only authenticated shoppers can add items to their shopping cart."
 
Given an authenticated shopper
When they click the "Add Item to Cart" button
Then they will have an item in their shopping cart
 
Given an authenticated shopper with an item in their shopping cart
When they click the "Remove Item" button
Then that item is no longer in their shopping cart

These are only a few of the examples that were developed as part of the Three Amigos meeting. On our team, the output of the Three Amigos is a Cucumber feature file. We now have a shared understanding and a definition of done! We can pass our failing acceptance tests to the dev team so they can begin their work. They'll start by creating failing unit tests and writing just enough code to make them pass. Once those are passing, they can run the acceptance tests. Once those pass, the feature is complete. We're done! The acceptance tests will be added to the regression suite to be run anytime to ensure the feature remains done. Now the feature can be demonstrated to the product owner at the next review.
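
For a peek at how scenarios like these get wired up, here's a hedged sketch of step definitions in Ruby and Watir (the stack Andy's team uses). The URL, element IDs, and page details are all hypothetical:

# features/step_definitions/cart_steps.rb
require 'watir-webdriver'

Given /^an authenticated shopper$/ do
  @browser = Watir::Browser.new
  @browser.goto 'http://shop.example.com/login' # hypothetical site
  @browser.text_field(:id => 'username').set 'shopper'
  @browser.text_field(:id => 'password').set 'secret'
  @browser.button(:id => 'login').click
end

When /^they click the "Add Item to Cart" button$/ do
  @browser.button(:value => 'Add Item to Cart').click
end

Then /^they will have an item in their shopping cart$/ do
  cart = @browser.div(:id => 'cart') # hypothetical cart element
  cart.wait_until_present
  raise 'expected an item in the cart' if cart.text.include?('0 items')
end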

What we've just done is take a trip around the Acceptance Test Driven Development cycle. Just remember, it's not about the tools or the technology, but rather the communication and collaboration. Our ultimate goal is to deliver high quality software that functions as the product owner intended. By including QA in the entire process, we catch many of the problems that plague us early enough that they don't make it to production. Quality is not just a QA function; it's a team function.

Tuesday, January 03, 2012

31 Days of Testing—Day 22: Why Collaboration Matters (A Real World Example)

Updated: Index to all posts in this series is here!

Today’s post is reposted from Jon Kruger’s blog. Jon is a tremendously smart, passionate indie working out of Columbus, Ohio. I was lucky enough to work with Jon some years back, and I’ve always had great regard for his views and thoughts.

Jon’s post today really hit home for me because it’s all about communication and collaboration early in the cycle. I can’t jump up and down enough about how critical this is—and Jon’s post is a real-world example of why it’s so important.

I read Jon’s blog this morning and immediately pinged him on IM to see if he’d let me drop his article in to my #31DaysOfTesting series. Thankfully he agreed!

Follow Jon on Twitter, and definitely bookmark or subscribe to his blog. Lots of great stuff in both spots!


Just Another Run of the Mill Wednesday

On my current project, we release every 2 weeks. We do the push to production on Saturday, so we set a deadline of Wednesday night for everything to be developed and tested so that we can have two days for demos and UAT.

I remember a certain Wednesday a couple of months ago where things were chaotic to say the least. We looked at the board on Wednesday in the early afternoon and there were 20 items where testing was not complete. We were running around trying to make sure that everything got tested. The entire development team was helping out with testing. Many people stayed past dinnertime to get everything done.

This past Wednesday was much different. Everyone was very relaxed. There was only one item on the board that was still being tested. We were all working on getting stuff ready for the next iteration. And oh by the way, one of the QA testers was out on vacation and another one had been moved to another project.

I immediately thought back to that chaotic Wednesday a few months ago and thought about everything that has happened since then. We certainly had come a long way to get to the point where things were much more relaxed. So what happened?

The Three Amigos

Before development can start on a feature, we have a “three amigos” meeting where developers, business analysts, and QA people get together and decide on the acceptance criteria for the feature. This helps us all get on the same page and make sure that we know what we’re building. It also gets the QA team involved very early in the process, so when it comes time for them to manually test the feature, they already know it inside out.

Automating acceptance tests

The outcome of the three amigos meeting is acceptance criteria. Developers take these and automate them whenever possible (we use a combination of unit tests and acceptance tests using SpecFlow). The development workflow now looks something like this:

  • Work with the QA team to write out the acceptance tests in SpecFlow
  • Develop all of the components needed to make the feature work, writing unit tests along the way
  • Try and get the acceptance tests to pass, fixing any problems we find along the way

When I’m working on the “try and get the acceptance tests to pass” phase, I’m going to find pretty much all of the coding errors that we made during development. The development ticket is still marked as “In Development” at this point, which is very important. We all take quality seriously, both QA testers and developers. I’m not going to hand it over to be tested by the QA team until I can get all of the acceptance tests to pass.

Almost no bugs

Because we knew what we were building up front and because we automated pretty much all of the testing, the QA team is finding very few bugs in the new features that we’re developing. One of our testers brought this up in our retrospective this past week and mentioned how they got everything tested so much faster because they weren’t finding bugs, writing up bugs, waiting for bugs to be fixed, and retesting bug fixes.

We had looked at the schedule earlier in the week and we had thought that developers might have to help out with testing because one of the testers was on vacation and they had some items to test that we thought would take a long time. In the end, no developers had to help with testing and testing got done ahead of schedule!

Everyone talks about how bugs are a waste of time, how they slow you down, etc., but it was really cool to see it play out. Yeah, getting those acceptance tests to pass takes a little extra time, but now I can hand a completed feature over to QA and have a good chance of not having any bugs. We had two developers working for a week on the feature that we completed, and we did it with no bugs. Not only that, we have automated acceptance tests that will do our regression testing for us.

Recap

A lot of the changes that we’ve made seem to be relatively minor, but they’ve produced huge dividends. Much of it comes down to discipline, not cutting corners, communicating effectively, and taking pride in your work. I’m really excited about what we’re going to be able to do from here and I expect to have even more stories to tell in the near future.
