Thursday, May 14, 2009

Book Review: Making Things Happen

Making Things Happen: Mastering Project Management by Scott Berkun, pub. O’Reilly, ISBN 0596517718.

This book’s been enjoyable and useful to read. A few sections didn’t give me much value, and some hard trimming would have made for a shorter, tighter book, but overall it’s been a positive addition to my bookshelf.

I don’t line up 100% with Berkun’s approach to project management – he leans heavily on extensive up-front documentation, and he seems tepid about specs serving as a conversation vehicle between stakeholders and developers. I’m adamant that specs need to evolve, with stakeholders, analysts, devs, testers, and other team members all active participants in the process.

Those nits aside, I got great value out of a number of Berkun’s chapters, particularly those around leadership and trust, decision making, and recovering when things go wrong. He’s also got thought-provoking exercises at the end of each chapter. Additionally, there are lots of small, useful details in this book, ranging from writing good e-mails to advice on running meetings.

Overall it’s been a good read and I’d happily recommend it.

Tuesday, May 12, 2009

Book Review: Implementing Automated Software Testing

Implementing Automated Software Testing by Elfriede Dustin, Thom Garrett, and Bernie Gauf, pub. Addison-Wesley, ISBN 0321580516.

This book isn’t for everyone, but everyone can get some value out of it. What I mean by that rather confusing statement is that folks working in Agile environments will likely want to throw the book across the room, while folks in more bureaucratic environments, such as CMMI or other waterfall shops, will likely get a great deal of value from it.

I’m an Agile fanatic, and I had a difficult time with the book’s approach, which emphasizes spending large amounts of time creating documentation such as requirements traceability matrices and detailed test plans. My preferred approach is to have testers working side-by-side as part of the team, creating specifications from user stories/requirements and moving those right into automated test suites via tools like Selenium, Cucumber, or RSpec.
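To show what I mean, here’s a rough sketch of a user story expressed directly as an executable spec. The “standard user upgrades to premium” story and the AccountUpgrade class are completely made up for illustration – they’re not from the book or any real project:

    # upgrade_account_spec.rb -- run with: rspec upgrade_account_spec.rb
    # Hypothetical sketch: the story ("a standard user can upgrade to a
    # premium account") and the AccountUpgrade class are invented here
    # purely to show a story flowing straight into an automated spec.
    class AccountUpgrade
      def initialize(user_type)
        @user_type = user_type
      end

      # Only standard users get upgraded; other account types are untouched.
      def upgrade_to_premium
        @user_type == :standard ? :premium : @user_type
      end
    end

    RSpec.describe AccountUpgrade do
      it "upgrades a standard user to premium" do
        expect(AccountUpgrade.new(:standard).upgrade_to_premium).to eq(:premium)
      end

      it "leaves an admin account alone" do
        expect(AccountUpgrade.new(:admin).upgrade_to_premium).to eq(:admin)
      end
    end

The spec reads nearly as plainly as the story card it came from, and it doubles as living documentation – no separate traceability matrix required.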

That said, I did indeed get some good value from the book. I found the discussions on making hard evaluations about what to test very worthwhile: teams can easily vaporize large amounts of time creating large suites of brittle, unmaintainable automated tests. The book has several really good chapters on using business cases to drive return on investment (ROI) decisions for testing, understanding automated test pitfalls, and adjusting your testing as you progress through your project.
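To give a feel for the ROI math, here’s a back-of-the-napkin version of that kind of calculation. Every number below is hypothetical – I made them up to illustrate the shape of the decision, and they’re not figures from the book:

    # Back-of-the-napkin automation ROI. All figures are hypothetical.
    manual_hours_per_run    = 8.0   # tester hours for one manual regression pass
    automated_hours_per_run = 0.5   # hours to kick off and review the automated run
    automation_investment   = 120.0 # hours to build and maintain the suite
    regression_runs         = 40    # passes expected over the project's life

    savings = (manual_hours_per_run - automated_hours_per_run) * regression_runs
    roi = (savings - automation_investment) / automation_investment
    puts format("ROI: %.0f%%", roi * 100)  # => ROI: 150%

Rerun that with a brittle suite that eats maintenance hours every sprint and the ROI goes negative in a hurry – which is exactly the hard evaluation the book pushes you to make up front.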

Additionally, one of the book’s high points is the chapter on building the test team: “Put the Right People on the Project – Know the Skill Sets Required.” It’s a great chapter emphasizing how to interview test team members – and how those testers’ skills differ greatly from those of the rest of the team.

The book’s very academic, dry tone makes for some difficult reading, and few concrete examples appear until very late in the book. Having spent a large number of years either in the DoD or working for DoD contractors, I quickly recognized that much of the book seems targeted at folks working in those environments – too many dry acronyms are scattered throughout, adding to the difficulty of reading.

The lack of examples using real tools frustrated me. While the appendices compare various tools, the book doesn’t actually show how a real-world testing environment would use them. One appendix, eight or nine pages in length, is touted as a “Case Study” but falls short, in my opinion.

Overall it’s a decent book. The dry tone and lack of real-world examples are balanced out by the excellent coverage of team skills and the emphasis on selecting how and what you test.

Friday, May 08, 2009

.NET Work in Frederick, MD

Normally I don’t use my blog as a venue for job postings, but this one’s a unique opportunity.

My wife works for Syracuse Research Corporation, a great company based out of, wait for it, Syracuse, NY. The company targets support and research work for the Department of Defense, and I’ve really, REALLY liked what I’ve seen of their culture, leadership, and employee support. If you’ve read my blog much then you may know how critically important those three things are to me.

They currently have an opening for a .NET person in Frederick, Maryland. The position requires a Top Secret/SCI clearance, so it’s a tough billet to fill.

If you’re in that area, or are willing to relocate, I’d encourage you to have a look at the job description. If you’re interested, contact me (see link on sidebar) and I can put you in touch with the right folks.

Friday, May 01, 2009

Dayton Code & Coffee, Tuesday 5/5

Following the neat idea of the Columbus Code & Coffee, Chris Schroll, Derek Hubbard, and I are going to meet up at the Panera Bread at Fairfield Commons on Tuesday, 5/5, at 8:00 am.

We’re thinking of working through some of the Ruby Koans from the geeks at EdgeCase, or maybe having a look at Sean Chambers’ SOLID stuff out on Git.

Stop by if you’re interested in gabbing about code or pairing up. Don’t worry about your skill level – just bring some interest!

Remote Retrospectives with Twiddla

I’m a big fan of retrospectives for teams. Retrospectives smooth out bumps in your team’s environment, and they give everyone a sense they’ve had a chance to air their gripes – or offer up praises, even!

Retrospectives for remote teams can be tough, simply because so much value for retrospectives is gained through the interaction of folks in a room. At Telligent the team I’m on is four guys scattered between Ohio and Florida. While I’d really like an excuse to travel down to see Sean near Jacksonville, that won’t quite work.

At yesterday’s retrospective we gave Twiddla a shot as our retrospective board. Twiddla, coupled with Tokbox for video/audio, turned out to be a great no-friction place to put our Good/Bad/Confusing topics. It was also stupid easy to move those topics around as we did our organization phase.

The red/yellow/green dots were my addition after we’d moved and voted on things – I realized we didn’t have a way to remember which topics were in which category (green for good, red for bad, yellow for confusing). I think next retrospective we’ll simply preface each topic with G/B/C to keep those straight.

We had a great retrospective, and it was in part because we found a tool that didn’t get in the way. Low-friction, usable tools, FTW!
