Friday, March 17, 2017

Things Have Been Quiet Here

As you’ve noticed, things have been quiet here on my blog, especially since January. Things will remain that way for some time, and I wanted to share some high-level information about why.
For those of you who don’t know, my family was struck by incredible tragedy in January which left my wife dead, my daughter severely injured, my mother-in-law (who lives with us) traumatized, and my son in detention facing many years of legal and mental health issues.
We’ve had an amazing outpouring of love and support from family, friends, the Ashland community, and the greater software community I’ve been involved with over the years.
I’ve especially been humbled by the response to the GoFundMe campaign set up by David Giard.
Moving forward, I’ve left full-time work at Pillar Technology so I can focus on my family. I’ll be speaking at StirTrek in Columbus on 5 May; however, other speaking/conference trips will be dramatically fewer than in past years. I’ll continue some part-time writing work on the side, and I’ll be continuing in a part-time role with Pillar focusing on a number of things where I can add value to that amazing company in my new life.
I appreciate that many readers may want to express support/love/condolences, but I’m going to turn off comments on this particular post. Thank you for your understanding.
More importantly, thank you all for the many years of collaboration, friendship, snark, bi-directional learning, and outright joy I’ve gotten from my involvement with the community.
I’ve always said I’ve gotten far more from the community than I’ve ever given. That’s never been more evident than in the last two months. Thank you all.
Finally: Go hug your loved ones. Don’t leave things unsaid. Stop what you’re doing and make sure they know you care. Right. Now. I mean it. Life is fragile and precious. Celebrate it.

Thursday, December 15, 2016

WebDriver / UI Testing: Where to Start

I’ve been asked this question enough times over the years I thought I’d (finally!) write up a quick post to point folks in the right directions for getting started.

What are the best practices for getting started with WebDriver (or some other User Interface functional automation tool)?

So first off, forget “best practices.” There ain’t no such a thing. There are guidelines and lessons learned you need to adapt to your own situation. With that in mind, here are some of my thoughts and references you should consider as starting points for your own journey.

Whatever you do, do not jump straight into throwing code or tooling around in order to try and solve some poorly understood problem! Invest some time thinking, planning, and experimenting. You’ll be much happier with the end result. (Ask me how I know…)

Clearly Define The Problem You’re Trying to Solve

Why do you want to move to WebDriver or some other nifty UI functional test tool? Get together with your entire team (which includes support, product owners, stakeholders, etc.) and talk about the “why?” behind your thinking. Look above the “simple” technical issues to larger process and business problems. Some of those responses might include:

  • We’re seeing a huge spike in support tickets due to unclear functionality or outright defects after every major release
  • We’re missing ship dates due to the time needed for regression testing
  • Work items slip into following iterations due to the time needed for testing…
  • and the corollary: Testers get work items only a day or two before the iteration finishes
  • Regression defects get found a week or two after the root cause
  • We’re not building what I (the stakeholder/PO) wanted

There aren’t necessarily wrong answers or items for this list. Just make sure you understand the problem you’re trying to solve—and you may find different ones to solve after these discussions!

Choose a Tool That Works for Your Team and Situation

WebDriver is a wonderful tool that I love and use all the time. However, it’s one of many tools that might do the job for you. Consider the things you identified above, then have a look at all potential solution toolsets.

Do you want a plain English/Domain Specific Language toolset that helps you clarify specifications and behaviors?

Do you want a record and playback tool that will help you get a starting point built quickly? (Dismissive of record and playback? They used to suck. They’ve grown up. A lot. Have a look and be open-minded.)

Do you want to just write thin wrappers around WebDriver and roll forward?

All of these are considerations you need to keep in mind.

Go read Elisabeth Hendrickson’s wonderful blog post from 2011, “Selecting test automation tools.” Her post is still one of the best I’ve ever read on going through the selection process. You’ll need to update the list of actual frameworks, drivers, and tools, but still…

Plan Your Approach

Once you’ve selected your toolset, spend some time laying out your initial approach for your UI tests. Spend time working with your toolset to understand how it handles things like representing pages/views/etc. Figure out where you can tie in calls to your own custom code for doing things like test setup, invoking heuristics/oracles, environment configuration, etc.

Get clear on how the toolset works and what a test execution lifecycle looks like. For example, let’s say you’re using Cucumber and WebDriver on the Java stack. You’ll also need some form of test execution framework like TestNG, JUnit, etc.

In these cases JUnit (or TestNG) is generally the starting point for any test execution. Details vary, but JUnit likely starts a single test that calls out to the Cucumber runner. Your Cucumber step definitions make WebDriver statements that manipulate a browser while referencing your own custom page objects for details on each page/view/etc. Those step definitions may also invoke your own custom framework to create data, validate database conditions, and tear down test structures.

Understanding this level of detail ahead of time is critical to getting things well-organized from the start. Of course you’ll learn things and adapt as you move forward, but at least you’ll be starting from a good point.
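
To make that lifecycle concrete, here is a minimal sketch assuming JUnit 4, a recent Cucumber-JVM, and the Selenium Java bindings. The package, class, step wording, and page object names (OrderSteps, OrderPage) are illustrative placeholders, not a prescribed layout:

    // RunUiTests.java
    import io.cucumber.junit.Cucumber;
    import io.cucumber.junit.CucumberOptions;
    import org.junit.runner.RunWith;

    // JUnit is the entry point; it hands execution off to the Cucumber runner,
    // which reads the feature files and matches steps to the glue code below.
    @RunWith(Cucumber.class)
    @CucumberOptions(features = "src/test/resources/features", glue = "com.example.steps")
    public class RunUiTests { }

    // OrderSteps.java (separate file)
    import io.cucumber.java.en.When;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    // Step definitions translate Gherkin steps into WebDriver calls, leaning on a
    // custom page object (OrderPage, hypothetical) for locators and page details.
    // Real suites usually manage the driver lifecycle in Before/After hooks.
    public class OrderSteps {
        private final WebDriver driver = new ChromeDriver();
        private final OrderPage orderPage = new OrderPage(driver);

        @When("I create a new sales order for {string}")
        public void iCreateANewSalesOrderFor(String customer) {
            orderPage.createOrderFor(customer);
        }
    }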

Understand Your Process

Successful test automation, especially functional/UI test automation, depends on a good process. Conversations around testing need to happen early, regularly, and frequently.

Begin conversations about functional testing at the release planning stage. Make sure your UI/UX designers are taking testability into account. Start discussing what workflows might look like, and which ones will be critical to automate. Lay out initial plans for test data.

Three Amigos

Three Amigos conversations have become my most-favorite tool for helping teams quickly improve their delivery behaviors. I wrote a post on it earlier this year, and had a great conversation with the folks at the Eat, Sleep, Code podcast on the topic.

Concurrent Testing, Not N+1 (or 2, or 3)

Great Three Amigos conversations are the single most important thing for getting your testing done in the same iteration the development work is completed. A good conversation will let the team start writing tests at the same time the feature is being developed. Mature teams do this as second nature: A quick discussion of the work flow, a quick scribble of field locators, and off everyone goes. It is literally that easy once you get past the very small learning curve.

Clarify Your Coverage

How many UI tests should we have?

As few as possible, and then ten percent less than that.

Seriously, many newer teams rely far, far too much on UI testing. Push as much testing as close to the code as possible. UI tests should not be checking multiple scenarios of algorithms. UI tests should not be checking every parameter for input validation. Those sorts of tests should be handled at the unit test level, either in server-side code via something like JUnit, or in the browser’s JavaScript via something like Jasmine.

Focus your UI tests on high-value, critical flows: Can I create a new sales order? Does a failed timecard properly flag for audit if the number of hours worked is over the business’s max hour limit?

Those are the sorts of high-value scenarios we want validated at the UI level. Handle the other situations at more appropriate levels of unit or integration testing.
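
For example, the timecard hour-limit rule above is a natural fit for plain unit tests; the UI suite only needs one flow proving the audit flag actually shows up on screen. Here is a minimal sketch, assuming JUnit 4 and a hypothetical TimecardValidator class (the 60-hour limit is an example value):

    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    // Exercise the business rule's edge cases here, cheaply, instead of
    // driving them one by one through the browser.
    public class TimecardValidatorTest {
        private final TimecardValidator validator = new TimecardValidator(60); // max hours

        @Test
        public void flagsTimecardOverMaxHours() {
            assertTrue(validator.requiresAudit(61));
        }

        @Test
        public void passesTimecardAtMaxHours() {
            assertFalse(validator.requiresAudit(60));
        }
    }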

Pair Your Developers and Testers

Want the best chance of success for your UI functional tests? Of course you do.

Then burn down silos between testers and developers. Get them pairing together on actual feature work. Have the entire team own the UI automation suite. The benefits from this approach are numerous:

  • Tests rarely have any coverage overlap with other test types because everyone knows what’s getting tested where
  • UI test suites are built and maintained with software craftsmanship principles in mind. Duplication, complexity, dependencies—all are mitigated when folks with different applicable skills are writing the tests.
  • Your system itself becomes more testable as the team designs and builds system code with things like good locators, async support, and configurability

Learn the Common Issues of UI Testing

So now you’re eight hours and 23,942 words into this blog post and I’m finally getting to some specifics around WebDriver. There’s a point to that. If you start way down here and ignore everything above this, well, frankly you’re screwed. Work on all the critical things above first, then when it comes time to handle a few specific things you’ll be in much better shape.

Locators, Locators, Locators

WebDriver (and other UI tools) need a way to find and interact with elements on a page or view. Some tools use different terminology, but the idea’s the same: Find a thing on the page’s DOM (or mobile device’s view, or whatever), then inspect/inject/interact with it somehow.

Here’s a list of things to consider when working with locators:

  • Store locator definitions in a Page Object class or similar abstraction where they’re defined once in the entire codebase. Avoid duplication. All other tests/steps/whatever refer to those abstractions. (See the sketch just after this list.)
  • Prefer using “id” attributes when working with HTML. IDs are fast to work with and are unique on valid HTML pages.
  • Beware of controls that auto-generate IDs. They may be tied to that element’s location, and may not be constant.
  • Look for ways to customize IDs for auto-generated widgets/controls/whatever.
  • Consider jQuery-style (CSS selector) locators next when IDs aren’t appropriate.
  • Avoid XPath locators, except when it makes sense. Start your XPath expressions as close to the element as possible, never from the root of the document.
  • Consider using text content instead of attribute-based locators when dealing with dynamic content, e.g.
    driver.findElement(By.xpath("//table[@id='myDataTable']//tr[contains(.,'Some unique text')]"))
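
Here is a minimal sketch of the first bullet above: a page object that owns its locators, using the Selenium Java bindings. The page, fields, and IDs are made up for illustration; only this class knows how elements are located, and every test or step definition goes through it:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    public class LoginPage {
        // Locators are defined once, here, and nowhere else in the codebase.
        private static final By USERNAME = By.id("username");
        private static final By PASSWORD = By.id("password");
        private static final By SUBMIT   = By.cssSelector("button[type='submit']");

        private final WebDriver driver;

        public LoginPage(WebDriver driver) {
            this.driver = driver;
        }

        public void logInAs(String user, String password) {
            driver.findElement(USERNAME).sendKeys(user);
            driver.findElement(PASSWORD).sendKeys(password);
            driver.findElement(SUBMIT).click();
        }
    }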

Async

Brittle, intermittently failing tests due to asynchronous situations are often what cause many teams to lose faith in their test suites—or abandon them altogether.

Learning to deal with async situations is critical. The first, best step is ensuring a really good Three Amigos conversation as the work item is being discussed. The group needs to explicitly discuss any potential async situations before the tests or system code are written. This ensures the best chance for a stable, reliable test.

Sam: “You’re working this feature to display search results as a user keys in criteria, right?”
Reena: “Yes. The search results change as each character is typed. I know some word wheel searches require two characters to narrow; this one will be on every character.”
Sam: “OK, so that is a callback from each typed character. Does the async action (the callback) complete when the result list updates with the narrowed list, or is there some other action/event?”
Reena: “The list updating signals the callback is complete.”
Sam: “Simple! And this user story is only on narrowing the results, correct? Selecting a result is a different story?”
Reena: “Correct. Do you need any custom IDs for your locators, or will the text content of the list be good enough?”
Sam: “The list will be fine as is, but can you please label the input field with an ID of ‘search’ ?”
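
A conversation like that translates almost directly into a stable wait. Here is a minimal sketch, assuming the Selenium 3-era Java bindings; the “search” ID comes from the dialogue above, while the result list locator and the ten-second timeout are assumptions for illustration:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    public class SearchNarrowingSteps {
        // Type one character, then wait for the narrowed result list to show the
        // expected text; the list updating is the signal the callback completed.
        public void narrowSearchBy(WebDriver driver, String character, String expectedResult) {
            driver.findElement(By.id("search")).sendKeys(character);
            new WebDriverWait(driver, 10) // timeout in seconds; an example value
                .until(ExpectedConditions.textToBePresentInElementLocated(
                        By.id("search-results"), expectedResult)); // hypothetical list ID
        }
    }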

And so it goes. Here are other things to consider to mitigate as much async pain as possible (you’ll never eliminate it…):

  • WebDriverWait is your best friend for async situations. Learn how it works, and learn the different ExpectedConditions. (Learn its equivalent for other toolsets.)
  • Determine the exact condition you need in order to move forward with your test, then set a wait on that condition, e.g.
    • A button must be enabled: WebDriverWait.until( ExpectedConditions.elementToBeClickable( some element locator ) )
    • An element contains text: WebDriverWait.until( ExpectedConditions.textToBePresentInElementLocated( some element locator, some text ) )
    • An element is on the DOM: WebDriverWait.until( ExpectedConditions.presenceOfElementLocated( some element locator ) )
    • and so on
  • Avoid using Thread.sleep() or any variant of it except when troubleshooting. Use of Thread.sleep() in production tests should be limited to the most exceptional of situations, and should never be allowed without discussion with other team members. I’ve had suites of over 15,000 tests with fewer than five Thread.sleep() statements. That’s your benchmark.
  • Situations with multiple async calls do not need complex logic. Simply list out each condition separately. The timing will work out no matter what. Honest.
    WebDriverWait.until( ExpectedConditions.textToBePresentInElementLocated( some element locator, some text ) )
    WebDriverWait.until( ExpectedConditions.textToBePresentInElementLocated( some element locator, some other text ) )
    WebDriverWait.until( ExpectedConditions.elementToBeClickable( some element locator ) )
  • When working with complex async situations, consider building flags that appear on the DOM when an action is complete. The flow might look something like this:
    • Click submit button
    • WebDriverWait.until( ExpectedConditions.presenceOfElementLocated( By.id("update_completed") ) )
    • Complex events begin on server side and in browser
    • Complex events complete
    • Page updates with a hidden <div id='update_completed'>
    • Wait condition triggers, test moves on

You can see an example of that flag in action on line 23 of this KendoGrid example in my GitHub repo.
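
Putting that flag approach into real code, here is a minimal sketch assuming the Selenium 3-era Java bindings; the submit button ID and ten-second timeout are example values, and the “update_completed” ID comes from the flow above:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    public class SubmitOrderSteps {
        // Click submit, then block until the page drops the hidden flag element
        // onto the DOM to signal the server-side and browser work has finished.
        public void submitAndWaitForCompletion(WebDriver driver) {
            driver.findElement(By.id("submit")).click();
            new WebDriverWait(driver, 10) // timeout in seconds; an example value
                .until(ExpectedConditions.presenceOfElementLocated(By.id("update_completed")));
            // Safe to move on; the flag only appears once the async work completes.
        }
    }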

Go Practice

Talk, experiment, fail, cuss, learn.

One of the best places to go practice the simple parts of UI automation is the awesome site set up by some of WebDriver’s community leaders.

Monday, June 27, 2016

Podcast on Three Amigos

The nice folks at Telerik (a Progress company) were kind enough to host me for a discussion on how using the Three Amigos can dramatically improve your teams’ effectiveness. Ed Charbeneau saw my blog post “All In With The Three Amigos” and thought it would make a good conversation.

You can find the show at Telerik’s Developer Network blog.

Listen for yourself and let me know what you think!

Tuesday, June 07, 2016

Video of My 'Automated Testing Beyond the Basics' Talk

Last month I did a new talk at StirTrek: “Automated Testing: Beyond the Basics.”

The great folks at StirTrek recorded the sessions and posted them on YouTube. Because they’re all awesome. Watch the video below, or on YouTube.

My usual gonzo deck is up on SpeakerDeck.

Wednesday, June 01, 2016

All In With Three Amigos

Are you running Three Amigos conversations for each work item/user story your team does? If not, start now. Seriously.

Effective Three Amigos conversations cut confusion, reduce rework, and ensure the entire team clearly understands the why of what they’re building. Teams ensure acceptance criteria are clear, nail down test data, discuss data flows, and clear up a myriad of other potential roadblocks—before a single line of code is written.

Over the years I’ve come to believe Three Amigos conversations are the best thing you can do to improve your software delivery teams’ effectiveness. I used to think it was running good retrospectives; however, I’ve shifted my focus to Three Amigos.

Three Amigos is a small step you can adopt in your delivery process regardless of what methodology you use. It doesn’t matter if you’re using XP, Scrum, waterfall, or chaos. You don’t need some massive change proposal to your organization’s formally approved process—Three Amigos is nothing more than an effective conversation about each work item.

Here’s how you can start if you’re not familiar with them: For every work item—Every. Single. One.—have your developers, BAs, and testers gather together as the work item is pulled off for work.

Start with a small list of questions for the group to discuss. The questions vary from team to team, but here’s a set I’ve used with teams I’ve worked with.

  • WHY are we building this? What is the business value this brings to our users?
  • Are acceptance criteria clear enough?
  • What risks are associated with this? (Regression, dependencies, etc.)
  • Does this item impact existing tests and functionality?
  • What test data is needed?
  • What will be tested in unit, integration, and UI tests? (Who tests what.)

The point of the discussion is to clarify a number of things on each work item in order to avoid confusion, rework, and errors. Well-run Three Amigos generally take five to ten minutes, sometimes shorter. Initially they’ll be longer as your team learns how to work with them.

If there’s any confusion then you have an opportunity to clear it up before you do work. Bad acceptance criteria? Go talk to the product owner and change them. Unsure what test data you need? Take a sidebar and sit down for further discussion. Confusion over who’s testing what? Whiteboard out data flows, workflows, and architectural impacts so the devs and testers can iron out what’s going to be handled in unit, integration, functional, and exploratory testing.

You should have a Three Amigos on every work item, even if it’s just to agree no Three Amigos is needed for that particular work item. Sometimes different team members might have different views about a work item. If you don’t meet, you won’t know.

Again, rolling into Three Amigos is the single most effective, low-friction thing you can do to improve your teams’ ability to deliver great software.

Need a starting point? Grab my template (Word 2013 document) and modify it to suit your needs!

Are you running Three Amigos? Do you use different discussion points? Let me know what works for you in the comments.

I'm Looking for a Tester / Value Stream Compadre

I’m looking for someone to join me at Pillar Technology to help our clients improve their testing, quality, and value work for their delivery teams. Interested? Read on!

We’re filling a position for an Executive Consultant—someone at the mid to senior level. Pillar doesn’t do staff augmentation (generally). We work with clients who are honestly looking to transform how they look at solving problems through software. My efforts with clients to date have been both tactical and strategic.

At the tactical level I’ve been involved in building testers’ basic testing skills, helping them learn appropriate automation, and working to change mindsets about focusing on valuable work. That’s come about through workshops, pairing, ongoing training, and helping get work done.

Pillar’s strategic level work for our clients helps organizations find ways to increase value by adopting new processes, practices, and mindsets. That means learning to effectively communicate with the “business” side of the house around things like value streams, flow of work, whole-team quality practices (No, testers are not the Quality Assurance folks.), modernizing testing skills, and a host of other activities.

We’re looking for mid- or senior-level consultants to come on board and dive in with me. Here’s the “Must Have” list:

  • Empathy. We value our roles as trusted advisors. Empathy to the clients’ situations is crucial.
  • Attitude. You’re doing the job because you want to. You’re helping lift others up because you see the benefits.
  • Aptitude. You’re able to be successful. What you don’t know, you’re willing to dive in and learn.

Some experiences that are helpful:

  • Walk the walk. You’ve been a practitioner of agile and Lean approaches over a number of years. You know how to adjust for situational context.
  • Understand the power of “I don’t know.” Have the confidence to admit weaknesses, because you know where to go find answers and help.
  • Skilled at quickly building up an effective test plan. You know how to focus on risk, value, and what the customer feels is most important.
  • Experience talking value to business. You know the difference between creating epics, features, and user stories that are technical solutions versus items that deliver value to the customer.

Specific tools in your belt/bag that are helpful:

  • Exploratory testing. You’ve done it, and you can explain why it’s different than ad-hoc or monkey testing.
  • Software craftsmanship. You’re able to talk about why it’s important and can provide some guidance around it, even if you’re not a developer by trade.
  • Automated testing. You’ve done it with different tools, you’ve suffered with mistakes you’ve made, and you’ve worked through those mistakes. You know where it fits, where it doesn’t, and how to create a good map of coverage of different testing types. You also understand and deeply believe in the difference between checking and testing, even if you don’t care to waste time arguing about what’s a test or check.

Interested? Drop me a line: JHolmes@PillarTechnology.com. I’d love to chat with you!

Wednesday, May 18, 2016

Telerik Developer Experts Profile

I’ve been a member of the Telerik Developer Experts group since I left Telerik. It’s a group of industry developers who help promote Telerik tools. (Note: Telerik was bought by Progress 18 months or so ago. The Telerik name is sadly going away, but their products are all getting wonderful support!)

Jen Looper, one of Progress’s Developer Advocates, was nice enough to write a small profile about me. I talk quite a bit about the project with one of the most awesome teams I’ve ever worked with, plus I rant a bit about the state of things in huge, slow-moving enterprisey organizations.

You can find the article on Telerik’s blog if you’re interested!

Tuesday, May 17, 2016

We'll Change Right After... No. No, You Won't.

“We’re too busy with the upcoming migration to address our quality problems right now.”

“We can’t handle trying to change our delivery culture until after we ship this next release.”

“All hands on deck to support modifying the system to make this next deal. We don’t have time to add additional testing around that major subsystem.”

Are you, or your management, saying things like this? They’re all variants of “We’ll change right after the Next Big Thing.”

Here’s some tough love for you: No. No, you won’t change.

Addressing culture and quality is a slow, long-term journey—especially if the fundamentals around test and software craftsmanship are new to your team. Addressing those issues takes commitment from everyone involved, from your top-most leadership to the grunts slinging and testing code. (And the DevOps folks deploying things!) It also takes months to see the huge benefits.

There’s always a Next Big Thing. Always. If you’re only focused on putting out the fire in front of you, you’ll never make time to fix the five flat tires on your vehicle. (Because if things are that bad your spare is likely flat too. To badly mix metaphors.)

You can’t change your culture and fix your delivery if you keep rationalizing priorities. Perhaps you can’t go all in on a massive effort, but that doesn’t mean you shouldn’t find a series of small things to improve.

Commit. Take the step.

Otherwise I just don’t believe you’ll change at all.
