Wednesday, July 30, 2008

Looking for Java Work? We're Looking for You.

Quick Solutions, the great company I work for, has a significant number of openings for sharp folks in the Java domain. We're looking for folks at all skill levels, so we'd be happy to talk with you regardless of whether you're a senior architect or a passionate newcomer.

All slots are in the Columbus, Ohio, area.

Feel free to contact me via the link on the sidebar if you have any questions.

Tuesday, July 29, 2008

The Importance of a Mentor

Mentors are an awfully important part of your growth in any aspect of life: personal, professional, community.  I've been lucky to have a number of solid mentors during different phases of my life, some of whom I didn't realize were performing that role until years later as I looked back in reflection.

I strongly believe a good mentor is a critical part of your success in your professional life.  A good mentor acts as a calm, wise advisor who can help you unmask some of your weaknesses, then help you discover a roadmap to fill those gaps.  All of this can happen on many levels: communications, technical, business sense, etc.

A good mentor's an invaluable asset when you're in over your head with something, or in a situation where you're having problems finding the way out.  A mentor's been there, done that, and has the scars to prove it.  Because they're not right in the middle of your problem, and because they've likely experienced similar messes before, a mentor can help you discover new options which you weren't able to see.  Sometimes it's as simple as pointing out to you that you can call clunky sections of code written by your teammates "technical debt" instead of "badly written poo." The latter gets someone's hackles up; the former acknowledges that success was achieved, but with a less-than-optimal piece of code which needs refactoring in order to move forward.

Looking back I realize my best mentors never spoon fed me.  I remember vividly one Sergeant during my time flying on big radar planes:  "You're not a teapot that I'm going to pour knowledge into.  You need to do the work."  That man's words (and some other choice ones when I screwed up) have stuck with me for a long time.

One of the reasons I joined Quick Solutions was to work under and be mentored by Brian; however, he moved on to a great job, leaving me with some thinking work to do regarding how to find a new mentor.  I'm at a spot on the relatively flat hierarchy at Quick where there's nobody above me suitable as a technical mentor.  Sales and business development?  Covered in spades.  Someone to help me further my technical growth and vision?  Not so much.

It's taken me a bit of time to work through thinking this out, but I've come to realize that I need to morph my concept of mentoring.  I need a fundamental shift to something akin to distributed mentoring.  Heck, splitting work into distributed components works great for software, why not for mentoring?

Not surprisingly, my short list of technical mentors is made up of the groups I'm closely tied to and passionate about: peers at Quick Solutions and peers in the regional developer community.  I'm blessed with being in the middle of two groups where I'm not even close to being the smartest guy in the room.  I'm blessed with being in the middle of two groups whose excitement, passion, and vastly differing viewpoints lift me up when I'm stuck in a funk.

I don't want to specifically list out the folks I consider as mentors (you may not return my calls or IMs anymore), but I wanted to write about the mentor shift I'd come around to. 

It may work, it may not.  I'll re-evaluate where I'm at in four to six months and let you know.

Monday, July 28, 2008

Save The Date! 20 Sept: .NET University in Columbus, Ohio

Jeff Blankenburg and a few other community folks are scrambling around trying to get a .NET University organized for 20 September.  Time and venue are still being nailed down, but it will be held somewhere in the Columbus area.

.NET University events are like mini-Code Camps, but are targeted at newcomers to the .NET platform.  Code Camps tend to dive into 300 and 400 level topics.  .NET U is much more introductory, focusing on fundamental 100 and 200 level concepts.

More details to follow!

Tuesday, July 22, 2008

Paying off Technical Debt Versus Completing Features

I hate technical debt.  Kludgy code, architectural things which looked good then but look like poo now, overly complex or "tricky" sections of a system.  Maybe even low test coverage metrics or shallow tests which get good coverage metrics during stats runs but perhaps don't fully exercise the system.

Technical debt should be paid off as you continue to work on your system, but sometimes, for whatever reasons, we make other choices, skip paying debt down, and let it accumulate as we continue.  This can make a big mess when we hibernate a project for a while, then attempt to quickly pick things up again at a later point, perhaps even with a set of different folks working on the project.

I just finished up a third phase of a project (third-and-a-half, depending on how you look at it) which has been going in fits and starts since last April.  The latest phase of the project was to finish up some custom security infrastructure we developed for the client.

The client had a very small bucket of money for this phase and wasn't going to be able to get all the features she needed completed.  She had enough money for 135 hours of feature development, and her total need was about 210.  Without those 210 she couldn't move on to her global deployment phase, but hours are hours are hours and we don't fudge or cover up the amount of work.  We're completely open with our clients, which is awfully important for building long-term relationships.

Well, it turns out that Phil and I kicked butt on the work and completed the 135 hours in amazingly short order.  Now I was at a decision point: pay off some of the technical debt that had accumulated over the entire lifecycle of the project, or push forward with feature development in order to get the system to a point where the client could deploy it?

I spent several commute trips mulling this over (a long commute gives me a lot of time to mull things over...) and decided to press forward with feature hours instead of paying down debt.  It's not a choice I'm overly happy with, but neither option would have left me 100% satisfied.  Why did I make that particular choice in this instance?  Several factors.

First, risk.  My main target for debt paydown would have been the persistence architecture I inherited when I took over the project.  It's overly reliant on static goo and is overly complex.  Testing of that code during refactoring?  Time consuming, even though it's absolutely worthwhile.  The persistence bits work and are fairly sound as written; it's just hard to extend them for future use. It makes no sense to do such major rework this late in the game, particularly since I'm all over YAGNI and it looks like there isn't much more extension needed in the near future.

The next target would have been some WinForms code littered with goo because we did a bad job of enforcing MVPness early on.  I've been slowly refactoring out pieces of that, but it's slow going.  Again, this is a risky section since getting things decoupled and moved out to better separation impacts a lot of different functionality.  Increased regression testing would have been required by the QA folks, and they didn't have a lot of time available.

Decision for these problem areas?  Leave them and move on.

Second, client needs. The client doesn't understand a hooting thing about how our implementation of NHibernate tracks sessions, uses too many static calls, or loads its configuration.  She doesn't know about MVP patterns enabling better testing and separation of concerns, easing testing and extensibility.  The client only knows she has a need to deploy the system in a few months, and she can't do it without the full security stack in place.

Decision for this issue?  Leave the debt in place, enable the client to hit her latest goals -- in part because we all as a team (us + client) had missed some earlier goals. 

The client's now on track to hit her goals, and she's much happier where she's at.  I'm happy that she's going to meet business needs with this latest delivery; after all, we need to deliver value to the customer.  Obviously I'm bittersweet about having missed an opportunity to clean up past technical debt.  At least I can console myself in that I added no new technical debt to this phase of the work. 

Would I do it again? Highly, highly doubtful. This situation was pretty unique, and the victory I got may have been a bit Pyrrhic in that we'll still need to pay off that debt if we get any sort of follow-on work.

(By the way, don't read this as me putting features ahead of stable code.  That's not what I've been saying at all.  Technical debt is, to me, a different issue than the stability of what you're writing now.)

(And more by the way, we completed 252 feature hours on her budget of 135 feature hours. Our velocity chart kicks ass!)

Wednesday, July 16, 2008

Troubleshooting 101

Troubleshooting and/or debugging is a large part of our job as geeks.  Bad code, misbehaving apps, finicky servers, we have to juggle all that and more.  Trying to wade through complex systems is never fun, particularly when you're under stress for a delivery or trying to get a production server back up.

I've had a number of years' experience troubleshooting different systems, both hardware and software.  I do not claim to be a troubleshooting ninja, but I've picked up a number of solid lessons learned over time.  This post won't be an in-depth treatise on using the debugger or some obscure tools; rather, it's about more high-level approaches to help cut your churn.

First off, get serious and go buy a copy of David Agans' Debugging. This book distills Agans' amazing skills into a highly readable, awesome guide.  Everything I have to say pales in comparison.  Go, buy it right now.  I'll wait.

Now that you've ordered that book (and gotten me at least $0.07 in referral fees, thanks) here are some things I find highly useful for my approach to troubleshooting.  The items aren't listed in any particular order, just a stream of consciousness.  Let's start off with two things which require an incredible amount of self-discipline.


Time box your efforts, right from the start.  Time boxing is perhaps the hardest thing for us geeks to deal with.  "Just five more minutes and I know I'll find the answer!"  Yeah, you said that yesterday morning, dumbass, and now it's 4:30pm the day before delivery. (That would be me, talking to me.)

It sounds silly, but get yourself an egg timer. Honestly. The first step you take before doing anything in a non-trivial problem should be to set yourself a time box.  "I will work on this data transfer issue for no more than one hour before stepping back and reassessing."   Work until that timer goes off and then stop!  Step back, re-evaluate the problem, and look for someone to bounce your assumptions and theories off of.


Sometimes you can get yourself wrapped around too many red herrings and lose sight of the forest for the trees, to mix metaphors in a really ugly way.  You have to discipline yourself to take a break from the issue, then come back to it and look at things again.  Our office has a hallway that runs in a loop around it.  Now folks who work in the office with me may know why I wander that hallway on frequent occasions mumbling like some deranged bag lady.

When you get back from that break, look over the forensic evidence you've gathered, the assumptions you've made, and any bits you've managed to tease out about the problem.

"Asking for help is uncool.  Folks will begin to know how big a putz I really am if I ask for help."  No, they won't, at least they won't if you've made some basic effort before reaching out.  You may not need anything more than a body to repeat your problem statement and assumptions to.

Some book I read at some point had a funny story about a high-level dev in a shop who was plagued by folks busting into his office to get his advice on problems they'd obviously not thought out first.  The constant interruptions crushed his productivity on tasks he was responsible for, and the other devs didn't grow their own skills.  The lead took to putting a stuffed bear in a chair in his office.  Devs would come into his office and his first comment to them would be "Tell it to the bear."  The devs, simply by verbally stating the problem, their assumptions, and theories, would often solve the problem themselves.  Not that I'm telling you to go get a bear and start talking to it at work, but you get the idea.  Bounce ideas off someone, even if it's someone outside your domain.


M as in Messages. Error messages. Dialog boxes. Log files. Console output. Read them carefully. If it's late at night or after a long session, consider re-writing the messages out on paper. Seriously.  I can't count the number of hours I've lost because I blew over a message and let my (bad) assumptions con me into reading something the message didn't say.

A good buddy told me he made one of his mentorees read error boxes to him aloud and verbatim.  Prior to that the mentoree had a bad habit of not taking initial steps himself, or not catching important details which were clearly displayed on the screen in front of him.  A couple weeks of reading those messages aloud finally kicked the mentoree into taking more initial action himself.  (My pal's much more patient than I.  I'd have lost it after a couple days of trying to prod the guy into doing that...)


Look at your system as a pipe of data.  Break that system into halves.  Probe at the halfway point and figure out if your data is good or bad at that point.  Continue breaking the remainder in half until you isolate something you can get your hands around.
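This halving approach is just a binary search over your pipeline's stages. Here's a minimal sketch of the idea; the stage names and the probe function are made up for illustration, and in real life the "probe" is you checking logs, a database table, or a debugger at that point in the pipe:

```python
def find_bad_stage(stages, probe):
    """Return the index of the first stage whose output is bad.

    `stages` is an ordered list of stage names; `probe(i)` returns True
    if the data is still good after stage i has run.
    """
    lo, hi = 0, len(stages) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if probe(mid):       # data still good at the halfway point...
            lo = mid + 1     # ...so the fault lies downstream
        else:
            hi = mid         # data already bad; fault is here or upstream
    return lo

# Pretend the data goes bad in the "transform" stage (index 2):
stages = ["parse", "validate", "transform", "persist"]
bad = find_bad_stage(stages, lambda i: i < 2)
print(stages[bad])  # -> transform
```

Four stages take at most two probes instead of checking every hand-off; the savings grow fast as the pipeline gets longer.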


It doesn't matter if you do test driven development or not.  Use tests (integration, unit, whatever) to drive your system as you're narrowing down the problem.  Don't waste your time inputting data to a web form and trying to catch data on the other side.  Write an integration or unit test to stimulate parts of your system and work from there.  You're locking down valid areas of the system while beginning to isolate the part that's borked.  You get the added benefit of deepening your coverage of the system.  (I'm not talking just code coverage here, but effective testing of your code.  Two different things.)


I use the Jot function of SlickRun as a scratch pad for writing down bits and pieces of the thornier problems I'm chasing.  Use a whiteboard.  Lay out the pipeline of the system and break it in half.  Note your inputs, outputs, and areas of concern.  Visualizing a problematic system can be a great help.


This is last, but it's likely the third-most important tip next to time boxing and stepping back.  Don't just jump into code or whip open a telnet session to your misbehaving server.  Take a moment to look at your assumptions once more and figure out a plan of attack.  Write it down or verbalize it to see if your plan's a solid one:

"OK, so the object coming across the wire from the server to the client is borked when it's saved.  It's showing client local time instead of server time.  I will cut the data pipe in half and start looking at the data on the client side of the web service.  I'll check to see if the correct server time is coming in the data transfer object.  If that's correct then I've isolated it to client side.  If it's not I'll go look at the server side and go from there.  I'll do that first check by writing an integration test to call the web service and get one of the DTOs and validate its timestamp."


Assumptions are a tricky thing.  You need to make some assumptions as you begin chasing the issue, but you may have made wrong ones and may end up chasing a red herring.  Look to the simple answers before diving into the deep end.  The simple solutions are the right fix 87.682% of the time.

If you are spending hours discussing minutiae of timestamp comparisons and getting into the bowels of a widely used framework like NHibernate then you are likely on the wrong path.  If your first thought is that your OutOfMemoryException errors are caused by a service pack bug instead of a code change, maybe you should look back over the history of that module you just updated.  (That would have been me, chasing a herring for an hour last week instead of noticing that a colleague had updated how a namespace was handled in a call to XmlDocument.CreateElement -- at my vague direction.  Yeesh.)


Time boxing is so important I've listed it twice. Avoid letting yourself get too frustrated over a problem. Take a walk, go get some fresh air. Shoot nerf bullets at your co-workers.  If you're working next to a colleague who's been churning for some time then show them some love and get them to take a break.

Simple, but critical.


None of these things I've written down here are rocket science.  You're likely doing all these and more, so please feel free to comment with things you find helpful, or resources you've found useful when trying to improve your troubleshooting skills.


Jeff Hunsaker pointed out one of the most elementary things which I'm shocked I missed:


Take a few minutes and look over what in your system changed since the bug/problem/issue appeared.  Look over the code history. Look over the data.  Look over the environment your system's running in.  Something's likely changed, but you need to be realistic and careful about going too far down a rabbit hole.  See that part about "Look to the Simplest Stuff First" above.

Another Great Open Source Mind Joins Microsoft

Hamilton Verissimo, founder of the Castle Project with all its amazing goodness, will be joining Microsoft as a PM for the Managed Extensibility Framework. As Hamilton notes in his blog post, MS is allowing him to continue his work on Castle.  That's awfully, awfully cool!

This is another sign of the slow, terrific changes afoot at Microsoft.  We need to be honest in our (valid!) criticisms of Microsoft, but we also need to give them props where they're due.  Picking up folks like Hamilton, Phil, and others from the open source community ensures the positive changes at MS continue to move in the right direction, and that certain mindsets inside the company will keep changing for the better.

Very, very cool.  Congrats, Hammett!

(BTW, Hamilton wrote a great article on the Castle bits for James's and my book.)

Tuesday, July 08, 2008

Votes of No Confidence, Voting With Your Feet, or Putting Too Much Faith in False Prophets

I am so tired of hearing the hyperbole coming out of some folks in the .NET community who are far too wrapped up in themselves.  Yeah, Bellware, I'm talking about you and your ilk.  [1]

I'm tired of seeing all of Microsoft described as morally corrupt, or covering their asses, or whatever overly broad, hyperbolic evil empire phrase of the day the fanatics want to throw out.  There is no doubt that some less than helpful folks exist in the corners of Microsoft, but can you honestly throw out those sorts of silly accusations and paint the entire company with the same broad stroke?  Can you give concrete evidence that the company is driven by nefarious goals without any effort inside the company to right past wrongs?  Can you honestly look me straight in the face and make those claims, instead of admitting that there are a tiny few intransigents in Microsoft who are not signing on to some very fundamental, amazing, positive changes the company is making as a whole?


If not, then STFU, calm down, and get an honest, civil discourse going again.  Leave off the self-aggrandizement, the condescending manifestos, and the attacks.  Start up a conversation with folks like the MVC team, who are actively pulling in community input.  Look up the folks at Microsoft like Sara Ford and the CodePlex geeks who are trying to empower the community.  Work with the evangelists like Drew Robbins and Jason Olson who are driving content to be more real world.  Hook up with the Visual Studio team who spent hours picking the brains of MVPs at the summit in an attempt to figure out how best to get community input to drive Visual Studio's feature set.

Does that sound like a bunch of dishonest, evil-minded folks?  Not in my mind.  (No clue how to try and get this same level of involvement from the Entity Framework folks.  I tried in one "open spaces" session at the MVP summit and was stunned by the EF guy ignoring what I and several others were telling him.)

And oh, by the way, I'm always happy to offer up my own criticism of Microsoft when they're due it.  See my comments here, or a separate post on my blog here.

Why am I so beaked about all this?  I'm tired of the many, MANY good folks at Microsoft being slandered.  Yeah, I used that word deliberately.  I'm also tired of being condescended to or outright attacked by a very small group of folks who seem to think I and other geeks can't evaluate things and make intelligent decisions on our own.  Do I need some pompous manifesto to tell me that NHibernate is better than the Entity Framework?  NO!  I need rational, detailed discussion on the topic. 

Unfortunately, rational discussion seems too far a stretch.  Bellware once even wrote a blog post demeaning folks choosing certain Microsoft technologies as performing "Dependent Driven Design" because they're hostage to paying family bills and therefore make design choices to benefit Microsoft sales and pad their own pockets.  I'd point you to his blog post on that topic, but Bellware deleted his entire blog because he was narked about something.  Yeah, that's a mature thing to do.

If you are outside this entire train wreck of an argument and are reading some of these overly zealous conversations then you need to step back and consider what the goal of these one-way rants is.  Do folks who spread this hyperbole honestly expect that it will solve anything?  Is this utter lack of respect how these folks treat those who have differing viewpoints?

Maybe Bellware and the other folks spewing poison should follow a fundamental tenet of Open Space conferences: Vote With Your Feet. 

If those few hyperbolic folks honestly feel that Microsoft is a corrupt, evil, morally bankrupt entity then they should have the courage to live up to the standards they profess and go find work in a different domain.  Go off and write Ruby on the LAMP platform.  There are a bunch of amazingly cool cats in that area.  Why not go hit some Java work instead of .NET?  There are a bunch of wicked smart folks in that domain, too.  Are those folks still holding their MVP awards from Microsoft?  Give 'em up, dudes, and let pass the related business networking (read "increased pipeline and/or salary") that goes along with that award; otherwise you're beholden and corrupt yourself.  (I don't really think that, but you see my point.)

Let me be crystal clear on something: I am in awe of the technical chops of Bellware.  That man passed more brilliant ideas through his lower intestines last week than I will have the rest of the year.   He's brilliant in engineering and design, he's clued in on philosophy (and not just software), and it sounds like he's all over the right kind of processes.  I'm just sick of prima donnas.  I put up with this kind of crap during the many years I spent busting my ass to get great at competitive volleyball while watching all the folks with amazing natural talent spend more effort snarking and deriding others instead of working to lift up the team.  That sucks and it's one of the few things I get really, really pissed about.  (Plus Bellware called me an "arbitrary dick" on Twitter and he hurt my feelings so there.)

So what's the point of this long, somewhat rambling post?  Has Jim gone further around the bend than some of the zealots?  No, I'm just hoping someone who is trying to make a rational decision about various things from Microsoft may stumble across my blog before running in to the EF VONC or some other similarly toned rant.

[1] Do not for one moment think I lump all Alt.NET folks or Entity Framework Vote of No Confidence signatories together as one group of zealots.  I'm talking about a tiny subset of this group.

UPDATE: To be clear.  Bellware didn't just delete one ranting blog post, he deleted his entire blog. That really sucks, because he had a lot of great stuff in there.

Monday, July 07, 2008

NHibernate Entities Not Loading ID Values When Read From The DB

Problem: Your entities aren't loading up their ID properties when you do a find/read/whatever.  Other values for the entities load fine, but not the IDs.

Answer: Check the casing of your properties in your business objects and your mapping files.

<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                   default-access="field.camelcase">
  <!-- default-access value shown here is illustrative; any camel-case
       field strategy will trigger the problem -->

  <id column="groupid" name="GroupID">
    <generator class="identity" />
  </id>

Note the disconnect between the "default-access" attrib of the mapping element and the naming convention in the id element's "name" attrib.  Oops.  Either change the default-access or the casing of the name attrib:

<id column="groupid" name="GroupId">
  <generator class="identity" />
</id>


More on naming strategies in the NHib dox.
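The mechanics of a camel-case field-access strategy (one common setting for default-access) boil down to lower-casing the first character of the property name to find the backing field. A quick sketch of that conversion, in Python rather than NHibernate's actual implementation, shows why "GroupID" silently misses:

```python
def camelcase(name):
    # Rough sketch of a camel-case field naming strategy: derive the
    # backing-field name by lower-casing the property name's first char.
    return name[0].lower() + name[1:]

print(camelcase("GroupId"))  # -> groupId  (matches a field named groupId)
print(camelcase("GroupID"))  # -> groupID  (no such field, so the ID never loads)
```

Since the strategy only touches the first character, an all-caps "ID" suffix survives intact and points at a field that doesn't exist, which is exactly the silent non-loading behavior described above.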

Wednesday, July 02, 2008

New Book: Agile Adoption Patterns

I got a copy of Amr Elssamadisy's Agile Adoption Patterns in the mail today.  I'm excited to have received this for two reasons.  0) I loves me some Agile,  B) I was one of several technical reviewers of this book, Finally) it's a well-done book.  (Yes, I said two and that's three.  It's been one of those days and I'm relaxing with a lovely bit of Caol Ila Islay Scotch as I write this.  Gimme a break.)

Elssamadisy's book is in tough, tough company.  How can you compete in the same space as amazing works like Subramaniam and Hunt's Practices of an Agile Developer or Shore and Warden's The Art of Agile Development? Those are  tough, tough classics to go against when trying to explain how teams/companies should adopt agile practices.

Amr pulls it off by organizing his material in a fresh form which I found very useful.  He hits many of the same points as other works on Agile (smells, process, team empowerment, practices, etc.), but emphasizes the business value of each point.  For example, his chapter on User Story lays out the case that user stories are simple documents in their initial draft.  The value comes from developers having conversations to flesh out the details and implementation of the story.  Product utility is improved, and development costs are reduced.

This same approach is carried on throughout the book, making it very clear what specific benefits you can find from each practice.  Additionally, each practice or chapter follows a nice recipe-like format.  Start off with business value, move on to a sketch describing the practice, follow up with context of the practice and forces impacting it, then look to why you'd want the particular practice, adoption details, and a bit on the practice's cons and variations.

The book starts out with a high-level overview of agile, then moves on to specific patterns/practices.  Each pattern is a short, separate chapter with about 40 patterns in total.  The style of the book is clear, concise, and it's nicely produced.

Another great point about the book is Elssamadisy's ongoing assertion that you don't need to adopt all of the practices.  Rather, find the pain points you have in your environment and look to implement only the patterns which will ease that pain.  This pragmatic approach to agile adoption is a refreshing view in a world where some Agile fanatics insist you must adopt every single practice or you're not doing Agile.  (A fanaticism I emphatically disagree with.)

Overall I think it's a solid addition to the Agile section of your bookshelf.  It's not a replacement for things like Subramaniam's or Shore's works; it's a solid addition to them.

Of course, I'm somewhat biased, having been a tech reviewer on it...
