Wednesday, December 27, 2017

Never, Ever Joke About Your Teams' Career Safety

Long ago I was on a mission flight with my Air Force E-3 AWACS crew. We had a bit of idle time, and the crew members at the console next to mine were cutting up a bit. I joined in the friendly insults. One of the younger crew members gave her supervisor a really good one, laughter ensued, and I said something like “Oh, she’ll hear about that on her annual performance report (APR)!”

Without missing a beat the supervisor instantly replied “Of course she won’t. I don’t even joke about something like that landing on her APR. Ever.”

The way he said it made it even more impactful: he didn’t get intense, he didn’t yell, he didn’t joke. He just said it emphatically and in a matter-of-fact tone.

That moment has stuck with me vividly over the 30-plus years since it occurred. It’s changed how I interact with the teams I run, and even with people junior to me who aren’t on my teams. The point is a crucial one in helping set a culture of safety and respect.

Those on your teams, those who report to you, those who have any form of accountability to you should know, without a doubt, that their performance reports will be based only on merit and fact, never spite or rumor.

You don’t do performance reports? Fine. Don’t fixate on the mechanics. This is more about the meta concept: safety in one’s career progression.

The other day on Facebook someone posted an article that ran something like “Seven Signs You’re About to Be Fired.” The poster tagged someone on their team and made a joking comment like “Yo, pay attention!”

I got the joke, but it also made me recall the terrific lesson I learned all those years ago.

Some things you just shouldn’t ever joke about. And your teams should know it.

Wednesday, December 20, 2017

No, I Didn't Automate That Test

No, you don’t need to automate tests for every behavior you build into your system. Sometimes you shouldn’t automate a test, because doing so takes on an unreasonable amount of technical debt and overhead.

For example, I’ve got a page using a Kendo UI grid component. From that grid I can display, create, and update data. There’s a modest amount of async goo happening on the server side after a create or update action, so I tacked on a small bit of code to add an empty element to the page’s DOM as a flag signifying the action is complete. That makes it easy to build a Wait condition based on the appearance of that element. The function is small:

requestEnd: function (e) {
    // Clear out any flag elements left over from a previous request
    var node = document.getElementById('flags');
    while (node.firstChild) {
        node.removeChild(node.firstChild);
    }
    // Append an empty element tagged with the request type ('create',
    // 'update', etc.) so tests can Wait on its appearance
    var type = e.type;
    $('#flags').append('<div responseType=\'' + type + '\'/>');
},
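On the test side, the Wait is then trivial to express. Here’s a rough sketch using the selenium-webdriver JavaScript bindings; the driver setup, helper name, and timeout are assumptions for illustration, since the stack my other UI tests actually use isn’t shown here:

const { By, until } = require('selenium-webdriver');

// Hypothetical helper: block until the requestEnd handler has appended a
// flag div for the given request type ('create', 'update', etc.)
async function waitForRequestEnd(driver, requestType) {
    await driver.wait(
        until.elementLocated(
            By.css('#flags div[responseType="' + requestType + '"]')),
        10000);
}

// Usage in a test, e.g. right after triggering an update from the grid:
// await waitForRequestEnd(driver, 'update');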

It’s behavior. Moreover, other automated tests rely on it, so a failure here would break those tests as well! Why did I decide not to write an automated test for it?

Well, this is literally the only custom JavaScript I have on my site at the moment. Think of the work I’d have to do simply to get a test in place for this (a sketch of what even the bare test file would involve follows the list):

  • Figure out which JS testing toolset to use
  • Learn that toolset
  • Integrate that toolset into my current build chain
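And that’s before writing the test itself, which needs a DOM shim and a jQuery instance wired up just to exercise one tiny handler. A minimal sketch, assuming Mocha, JSDOM, and Node’s assert module (none of which exist in my project today), might look like this:

const { JSDOM } = require('jsdom');
const assert = require('assert');

describe('requestEnd flag element', function () {
    it('replaces stale flags with one carrying the request type', function () {
        // Stand up a throwaway DOM and bind a jQuery instance to it
        const dom = new JSDOM('<div id="flags"><div>stale</div></div>');
        global.document = dom.window.document;
        global.$ = require('jquery')(dom.window);

        // Handler under test, lifted from the grid configuration above
        const requestEnd = function (e) {
            var node = document.getElementById('flags');
            while (node.firstChild) {
                node.removeChild(node.firstChild);
            }
            var type = e.type;
            $('#flags').append('<div responseType=\'' + type + '\'/>');
        };

        requestEnd({ type: 'update' });

        // The stale flag is gone and a single flag carries the request type
        const flags = dom.window.document.querySelectorAll('#flags div');
        assert.strictEqual(flags.length, 1);
        assert.strictEqual(flags[0].getAttribute('responsetype'), 'update');
    });
});

Even that sketch drags in jsdom, jQuery as a dev dependency, and a test runner, which is exactly the overhead in question.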

That’s quite a bit of work and complexity. Step back and think through a few questions:

What’s the risk of breaking this behavior?

  • I rarely edit that page, so the likelihood of breaking that behavior is low
  • When I do edit the page, I touch that particular part of its code even more rarely. The likelihood of breakage is lower still.

What damage happens if I break that behavior?

  • Other tests relying on that element will fail
  • Those failures could lead me astray because they’re failing for an unexpected reason. For example, an Update test isn’t failing because the update logic is busted; it’s failing because the flag element never appeared.
  • I’ll spend extra time troubleshooting the failure

How do I make sure the behavior still works without an automated test?

  • A quick manual check whenever I touch that page covers it

It’s a pretty easy discussion at this point: does it make sense to take on the overhead of writing test automation for this particular task? No. Hell no.

It may make sense as I start to flesh out more behavior soon. But not now. A few simple manual tests and it’s good.

Roll on.