Friday, December 19, 2014

Choose Wisely AND Back Out Easily

Alan Cooper (Yes, that Alan Cooper) had a great-sounding Tweet yesterday that made me stop and think:
Instead of choosing the correct path in advance, develop better tools to back out of incorrect paths soonest and without rancor.
I get the point I think Mr. Cooper was trying to make: we need to be able to recover more easily from our mistakes in software. I especially love his point of “without rancor” because we need to be more accepting of the fact that the path to success is littered with unsuccessful tangents.
However, I’ve got a subtle variation of his phrase I like better:
Do your best to choose the correct path at the start, but learn how to recognize when you’re on an incorrect path, and how to reverse course soonest and without rancor.
I completely agree that we need to get in the business and cultural mindset of being OK with failure/mistakes, getting out of them as quickly as possible, and doing it without rancor.
That said, I’d prefer the subtle difference of emphasizing avoiding those mistakes where possible. We should not, repeat NOT, get wrapped up in fear or analysis paralysis that blocks us from decisions that might lead to mistakes or failures, but we should do enough sanity checking to ensure a choice meets business value requirements, fits the team’s ability to deliver, etc.
In my Leadership 101 talk I draw a distinction between smart mistakes and dumb mistakes. Smart mistakes are ones made when you’ve been thoughtful about your approach. Dumb mistakes are ones made when you dive in to dig a 14’x8’x18” rain garden pit, then realize you’ve dug it right next to your house’s foundation, which is not the place you want a lot of water sitting for extended periods. (Ask me how I know about this…)
Backing out of unsuccessful choices is critical. First, though, spend a little time planning so you can avoid as many of those choices as possible.

Wednesday, December 17, 2014

Rethink Your Hiring Criteria

This article on how Google is changing its hiring practices really hit home with me.
I've long viewed learning ability and problem-solving skills as far outweighing where someone graduated from or what their resume history looked like. I've previously blogged about my thoughts on hiring, and my position descriptions have tended to drive HR departments crazy because they weren't checklist oriented.
I've always been more interested in how well someone will approach working as part of a team than in what compiler classes they took or which certifications they've passed.
Technology changes too fast to focus on criteria like "3.6 years working with .NET 4.5" or "Must have graduate degree in artificial intelligence." I'd rather have people on my team who have made some big mistakes, learned from them, and want to share that knowledge with the rest of their team.
Do your hiring criteria look like shopping lists from the technology buffet? Consider reworking them around criteria that focus on your organization's real needs: candidates who can help you quickly and effectively solve problems core to your business.

Monday, December 15, 2014

Hire me!

I'm open for work!
Last week I parted ways with Falafel Software. They’re a great bunch of folks, but we weren’t a good match culturally or philosophically. That’s absolutely OK, and we parted on good terms. I’m happy to continue recommending them and calling them friends.
That means, however, that I’m open and available for new opportunities! I’m actively looking for work as an independent consultant, or for any full-time spots where organizations think I might be helpful.
What sort of things can I help you with? I’m passionate about helping organizations deliver great value to their customers, regardless of whether those customers are external or internal. I’ve helped teams build out their testing skills (not just test automation, but overall testing), smooth out quality issues impacting their organizations, and cut out waste throughout their delivery cycle.
That may sound a bit hand-wavy to you, so for more details check out my LinkedIn profile, visual resume, or more “traditional” (e.g., ‘non-gonzo’) resume.
Want to chat about things in more detail? I’d love to talk with you! Drop me an email at my new Indie digs: jim@GuidepostSystems.com

Friday, November 28, 2014

Xamarin Studio: Publishing to Device Fails

Problem: Attempting to publish an app from Xamarin Studio to an iOS 8.x device fails. You may see various error messages about being unable to write or find Manifest.plist, and you'll likely see an error akin to AMDeviceSecureInstallApplicationBundle returned: 0xe8000097 (kAMDInstallProhibitedError).

Solution (mine, at least): Ensure installing apps is allowed, not restricted. Under Settings, check General | Restrictions and make sure Installing Apps is allowed.

Tuesday, July 29, 2014

Dealing with Legacy Codebases? Find me at ThatConference!

I'm honored to have been selected to speak at ThatConference Aug 11th-13th at the Kalahari Resort at Wisconsin Dells.

I'm giving my talk OMG! This Codebase Sucks! which tries to lay out some ideas to help people fix up problematic codebases while continuing to deliver value via new features, bugfixes, etc. My goal for the talk is to help attendees learn how to decide which parts of the system and environment to focus on, and how to figure out which sections of the codebase to start tearing apart or outright burning down and rebuilding.

All of this has to be done in the context of keeping the system in a state where the team can continue to ship on a regular, if occasionally slightly interrupted, pace. After all, accomplishing the organization's mission doesn't miraculously stop for months so you can focus all your delivery efforts on completely re-writing a codebase! (Occasionally it does, but rarely.)

If you've been around legacy software and struggled through this, then I'd encourage you to attend the talk. You might learn a helpful hint or two, and just as importantly, you might be able to share a gem or three with the other attendees. (Yes, I love interactive audiences in my talks!)

Why ThatConference?

If you're unfamiliar with ThatConference, I encourage you to have a look at it. It's a community-spawned conference that rivals many commercial "big" conferences for content. Moreover, ThatConference (and CodeMash) has an amazing amount of hallway interaction between really smart, passionate folks from widely differing domains. You're able to learn how people from the Ruby, Java, .NET, JavaScript, and other communities solve the same sorts of problems you're running into every. single. day. And you'll learn vastly different approaches that will help you as you move forward in your own domain.

Time's short, but tickets are still available.

Just. Go. Do. It.

Bonus Material

Just like on good movie DVDs, here's some extra content: the deck from my OMG! This Codebase Sucks! talk.

Tuesday, June 03, 2014

Handling State for Browser Testing

Marc Escher asked a great question on Twitter about browser testing and state. He actually wrote a Gist on it, but I thought the question merited a response here.

State in Tests

Handling "state" of a system for any test automation is an involved process that should evolve as your test suite does. All the same good design/engineering practices that we use for building good software (abstraction, DRY, readability, etc.) should be applied to our test automation suites. Because, you know, test code is production code.
Here are some thoughts on Marc's great set of questions:
  1. What are patterns for creating expected state prior to running browser tests?
"State" is a broad term and many folks may think of different definitions. I think of it as data, environment, configuration. There's a difference between setting up background "state" for a test suite, and having each test set up its prerequisites.
Setting up an environment for a test suite run is a conscious decision based on what's needed for that particular suite to succeed. I regularly use a combination of loading baseline datasets, altering system configuration (turning off CAPTCHA, selecting a simple text editor, swapping mail providers, etc.), and leveraging test-specific libraries, infrastructure, and APIs to get the starting environment set up.
Also, it's very, very critical to understand there's a difference between system-wide state (baseline datasets, e.g.) and state needed for individual tests. No test should ever, ever rely on sharing state with another test. The risk of side effects, especially in distributed/parallel testing scenarios, is just too high. You'll regret it if you rely on this. Ask me and Jeremy Miller how we know this... (i.e., the school of painfully learned hard knocks.) The sketch below shows both levels of setup.
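Here's a minimal pytest-flavored sketch of that split. The helpers (load_baseline_dataset, set_config, api_client) are hypothetical stand-ins for whatever your system actually exposes:

import pytest

# Hypothetical helper module; substitute your system's real setup APIs.
from test_helpers import api_client, load_baseline_dataset, set_config

@pytest.fixture(scope="session", autouse=True)
def suite_environment():
    """One-time, suite-wide background state: baseline data and config."""
    load_baseline_dataset("smoke_baseline")  # shared, read-only background data
    set_config("captcha.enabled", False)     # e.g., turn off CAPTCHA for the run
    yield
    set_config("captcha.enabled", True)      # restore config after the run

@pytest.fixture
def fresh_contact():
    """Per-test state: each test creates and cleans up its own record."""
    contact = api_client().create_contact(name="Test User")
    yield contact
    api_client().delete_contact(contact["id"])  # nothing shared between tests

def test_contact_appears_in_search(fresh_contact):
    results = api_client().search_contacts("Test User")
    assert fresh_contact["id"] in [c["id"] for c in results]

The session fixture owns the shared background; each test owns its own prerequisites.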
  2. Is it appropriate for the browser tests to communicate directly with the database of the system under test, creating state in the same manner that you would in an integration test?
I generally prefer to use the system under test's own APIs to create state for a test. Those APIs ensure CRUD operations obey the system's own rules for its data. Using APIs also means you don't duplicate CRUD logic in your tests, so you're not maintaining separate helper methods when, for example, the app's underlying DB structure changes. A minimal sketch of the idea follows.
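For instance, here's a hedged sketch of creating state through the app's API rather than poking its database directly. The URL and payload shape are hypothetical stand-ins for your system's real endpoints:

import requests

BASE_URL = "https://test-env.example.com/api"  # hypothetical test environment

def create_contact(name: str, email: str) -> dict:
    """Create a contact via the app's API so its own validation rules apply."""
    response = requests.post(
        f"{BASE_URL}/contacts",
        json={"name": name, "email": email},
        timeout=10,
    )
    response.raise_for_status()  # fail fast if the setup step itself breaks
    return response.json()       # the created record, including its new ID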
  3. How important is it that browser tests be decoupled from the application under test, such that the only things required to run tests are the tests themselves, which can be pointed at any applicable URL (dev, test, staging)?
There are a couple of different aspects to this, IMO. You want your test suite coupled to the application in the sense that you need to leverage the system's own APIs for setup, teardown, configuration, and parts of your test oracles[1]. That said, those calls to the system should be abstracted into a set of helper libraries. The tests themselves should never, ever know how to communicate with the system itself.
For example, you don't want individual tests that know how to invoke a web service endpoint to create a new contact in your Customer Relations Management system. If that endpoint changed, you'd have to touch every test that used that endpoint. That's a recipe for a lot of extra scotch consumption.
Instead of telling the system how to do something, tests should tell a helper function/library what they want done. That library in turn knows how to call, say, a stored procedure to create the contact. With this approach, no test ever has to be updated if, for example, the system changes from a stored procedure to a web service for creating new contacts. The sketch below shows the shape of that abstraction.
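A minimal sketch of that helper layer, with hypothetical names, endpoint, and connection details, might look like this; only this one module changes when the mechanism does:

import requests

def create_contact(name: str) -> int:
    """Tests say WHAT they want; this helper owns HOW it happens."""
    return _create_contact_via_api(name)  # swap implementations here, not in tests

def _create_contact_via_api(name: str) -> int:
    resp = requests.post(
        "https://test-env.example.com/api/contacts",  # hypothetical endpoint
        json={"name": name},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["id"]

def _create_contact_via_stored_proc(name: str) -> int:
    import pyodbc  # imported here so the API-only path doesn't require it
    with pyodbc.connect("DSN=TestDb") as conn:  # hypothetical DSN
        row = conn.execute("{CALL usp_CreateContact (?)}", name).fetchone()
        return row[0]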

Write Tests Like You Write Code

I've generalized a number of things in the above thoughts, but the bottom line is this: use the same ideas and approaches for your test suites as you do for your production code. Because tests are production code.
[1] For automated tests, an "oracle" or "heuristic" is the final step in your test after you validate the UI is behaving as expected. It's not enough to stop the test at that point. You need to check the database to ensure items were properly created, updated, or deleted, for example. Or you might have an oracle that checks the filesystem to ensure a datafile was properly downloaded.
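As a hedged example, a database-checking oracle might look something like this (the table and column names are made up):

import sqlite3  # stand-in driver; use whatever your system's database needs

def assert_contact_persisted(db_path: str, email: str) -> None:
    """Oracle: confirm the record the UI claimed to create actually exists."""
    with sqlite3.connect(db_path) as conn:
        row = conn.execute(
            "SELECT COUNT(*) FROM contacts WHERE email = ?", (email,)
        ).fetchone()
    assert row[0] == 1, f"Expected one contact for {email}, found {row[0]}"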

Wednesday, April 30, 2014

TIL: Arrowhead Anti-Pattern

Today I Learned: The term “arrowhead anti-pattern” from Jeremy Miller’s tweet. (If you’re interested in software craftsmanship and you’re not following him, change that.)

The “arrowhead” describes the visual pattern made by a nasty set of nested conditionals:

if
   if
     if
       if
         do something
       endif
     endif
   endif
 endif

The above snippet was lifted straight from the great article on the C2 wiki. If you want to take the visual aspect a step further, go see the Daily WTF’s article Coding Like the Tour de France.

Nested conditionals are awful. Avoid them. Read up on cyclomatic complexity and learn to handle things differently in your code. You’ll be happy you did. (See Chris Missal’s article on Los Techies for some other good discussion.)
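If it helps to see the fix in code, here's a small sketch of one common remedy, guard clauses; the order shape and the ship function are hypothetical:

def ship(order):
    """Stand-in for the real shipping call."""
    return f"Shipped: {order}"

# Before: the arrowhead. Each check pushes the real work one level deeper.
def ship_order_arrowhead(order):
    if order is not None:
        if order["is_paid"]:
            if order["items"]:
                if order["address"]:
                    return ship(order)  # the actual work, four levels deep
    return None

# After: guard clauses. Handle each disqualifier up front and bail early.
def ship_order_flat(order):
    if order is None:
        return None
    if not order["is_paid"]:
        return None
    if not order["items"]:
        return None
    if not order["address"]:
        return None
    return ship(order)  # the happy path reads straight down, no nesting

Same behavior, flat structure, and the happy path is obvious at a glance.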

I’ve long known the troubles this sort of code causes. I just didn’t know the cool name for it.

Learn something every day…