Paul Tevis

Entries in software (10)

Thursday, March 3, 2011

Change I Can Believe In

I discovered today that I’m not the same programmer I used to be.

Most of my time at work recently has been devoted to either team building or organizational change. Half of my team is away at training this week, however, and the rest are at our Wisconsin location working on uncovering what we need to do to get the next critical bit of functionality working. That means that I’ve needed to step in and pick up some of the programming slack. I’ve noticed that my approach to programming has been subtly and profoundly changed by my exposure to Agile/XP technical practices.

The task I’ve been working on is writing test code to support our porting work. We’ve got an OS abstraction library that most of our code is written to. In theory, all we need to do is port that library and most of our code will work on a new operating system. Of course, we don’t have any tests for this library. So as we port over each module, we’re writing tests to make sure that it behaves the same on both operating systems.

One thing that’s obvious is how James Shore’s Let’s Play TDD screencasts have influenced my approach to writing unit tests. I’m not TDDing1 this code; I’m writing characterization tests to establish the current behavior. Still, it’s useful to pretend as though I’m implementing the code as I write the tests. I start by writing the simplest test I can, to verify the tiniest part of the functionality. I want as little time as possible between test runs, and developing the tests incrementally keeps me focused. I also write failing tests — by inserting test values that I know are wrong — to make sure that the tests are doing what I expect before I fix them and make them pass. And perhaps most importantly, I’m giving my test methods highly descriptive names that tell me what behavior they’re verifying. While testing an atomic increment function today, I noticed that it had a return value that I’d largely been ignoring. I had to look at the documentation to see what it returned. As a result, Unit_Atomics::test_IncrementAddsOne() became Unit_Atomics::test_IncrementAddsOneAndReturnsOldValue(). Six months ago you wouldn’t have been able to read the header file for my tests and tell what the object I’m testing does.
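To make that concrete, here’s a minimal sketch of the style, with plain assert() standing in for our actual test harness and an invented atomic_increment() signature; the real library and tests look different.

    #include <cassert>

    // Stand-in for the real OS-abstraction call (a hypothetical signature,
    // and not actually atomic): adds one and returns the value from
    // *before* the increment.
    int atomic_increment(int* value) { return (*value)++; }

    struct Unit_Atomics {
        // The descriptive name documents the behavior being pinned down.
        static void test_IncrementAddsOneAndReturnsOldValue() {
            int counter = 41;
            int old_value = atomic_increment(&counter);
            // The trick from above: first assert a value you know is wrong
            // (say, 43) to watch the test fail, then fix it to pass.
            assert(counter == 42);    // increment added exactly one
            assert(old_value == 41);  // and returned the pre-increment value
        }
    };

    int main() {
        Unit_Atomics::test_IncrementAddsOneAndReturnsOldValue();
        return 0;
    }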

There’s other stuff too, like my growing impatience with how long our build cycle is, and the slow realization that almost none of the things we call unit tests really are.2 But this is something where it’s not just my awareness that has changed, but my behavior as well. I’m excited to see what’s next.




1 That is, I’m not using Test-Driven Development.

2 Michael Feathers has one of the best descriptions I’ve seen.

Monday, February 28, 2011

Harder and Better

Today was about practicing what I preach.

For the last several months I’ve been telling the team that we need to take our time and do things well. We’ve dug a pretty deep hole by letting ourselves be pushed beyond a reasonable speed. Along the way we’ve accumulated a fair quantity of technical debt, and I’ve been encouraging people to stop rushing through things and to fix problems as we find them.

Today was, of course, the day when I was tempted to do the opposite. Half of our team was out today, which meant I needed to take up some of the technical slack. We’re right at the end of the sprint, so I was looking to make progress as quickly as possible. I was plowing through things left and right this morning. Then, just after lunch, it happened.

As I was trying to port some of our test code to one of our other supported operating systems, I noticed a problem. Because of some variations in hardware, the test needed to have different settings based on which board it was running on. Unfortunately we didn’t have a way of detecting that. I thought about using the slower but safer settings for both. We could just come back later and clean it up. Then I remembered what I’ve been telling the team: Later never comes.

I came out of the afternoon with cleaner code, a stronger feeling that warnings are really errors you haven’t met, and a better understanding of why we need to improve our build cycle times. And, most importantly, a deeper appreciation for why the things we’re trying to do are both hard and worthwhile.

Sunday, January 9, 2011

Game On

Rather than ramble on further about my skiing weekend, here's something that I wrote a month or two back but never got around to posting. Warning: Software geekery ahead.

For a while now I've been telling our team at work that in order to do Scrum well, we need to raise our technical game.1 Scrum makes certain process demands on us that we can't meet without improving our technical practices. We've been pretty aware that slicing User Stories is something we need to get better at2, but recently I've started to think a lot more about Test-Driven Development.

Last week was the perfect storm of influences pushing me to do it. I'd done Alistair Cockburn's Elephant Carpaccio exercise at Agile 2010, and I was curious to see other people on the team take a shot at it. On Wednesday, we ended up using it as an audition piece in a technical interview, pairing me up with the candidate to pair-program it. Matt and Kevin were intrigued enough by it that they tried it themselves on Friday, with me facilitating. In our debrief afterward, we talked quite a bit about Pair Programming3, but we also touched on testing during development. I later gave the exercise a try on my own, trying to TDD it using C#'s System.Diagnostics.Debug.Assert() to write test cases.4 It went okay, but it seemed like there was a lot of overhead.

Fast-forward to Saturday morning, when I'm catching up on posts to the XP mailing list. I noticed a post where someone was asking for pointers to good TDD resources, and someone replied with a link to James Shore's Let's Play TDD screencasts. In fifteen minutes, I was hooked.5

As it just so happens, I've been needing to write a little bit of code. For the RPG design I'm kicking around, I needed to look at some slightly complicated dice odds in order to make some decisions. In the past, I've coded up a quick Monte Carlo dice roller6 to simulate probabilities. This time, I downloaded Visual C# 2010 Express and NUnit so I could try out these things I'd been seeing.
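For flavor, here's the shape of that kind of roller, sketched in C++ rather than my usual quick Python script; the question it answers (the odds that two six-sided dice total nine or more) is a stand-in, not the actual odds I was chasing.

    #include <cstdio>
    #include <random>

    int main() {
        std::mt19937 rng(12345);                      // fixed seed, repeatable runs
        std::uniform_int_distribution<int> d6(1, 6);  // one six-sided die

        const int trials = 1000000;
        int hits = 0;
        for (int i = 0; i < trials; ++i) {
            if (d6(rng) + d6(rng) >= 9) ++hits;       // count qualifying rolls
        }
        // The exact answer is 10/36 (about 0.278), so the estimate lands nearby.
        std::printf("P(2d6 >= 9) ~= %.4f\n", static_cast<double>(hits) / trials);
        return 0;
    }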

Thus far, it's great. It's really pulling together a lot of things I've been reading about recently: good development tool support, tight feedback loops, expressive names, small methods, seams for separating and sensing, the Single Responsibility Principle, etc. I've been seeing how these could potentially apply to projects at work, but actually getting my fingers working on them is a completely different experience. There are some intriguing flow state things happening here that I can't quite articulate yet.7 But the big takeaway is that no matter how much I'm going to be dealing with the people and process side of this software project at work, I need to up my technical game as well.

Addendum: We ended up hiring the interview candidate in question, and she now reports to me. During our one-on-one on Friday, she expressed an interest in learning more about TDD. We agreed to work through Let's Play TDD videos and practice together.




1 I've become particularly aware of this because of things like Martin Fowler's article on Flaccid Scrum.

2 Thankfully, Mark Levison wrote these two articles that are helping quite a bit.

3 Which really deserves its own post.

4 I believe this marks the third time I've used C#.

5 And insanely jealous of the JUnit integration in Eclipse, especially combined with the automated code generation and refactoring tools. Now I feel like I'm living in the dark ages with Visual C++ 2005.

6 Usually in Python.

7 In particular, it's great for working on a project for thirty minutes (or one Pomodoro) at a time, which is about what I can carve out of my schedule.

Monday, January 3, 2011

Link Roundup For 3 January 2011

I've been on a programming nerdery kick recently. Here are some links to articles loosely around the themes of Test-Driven Development and Refactoring.

Wednesday, October 27, 2010

Un-Legacying Your Code, One Step At A Time

Michael Feathers' Working Effectively With Legacy Code starts out with the most useful definition of "legacy code" I've ever seen.

To me, legacy code is simply code without tests. I've gotten some grief for this definition. What do tests have to do with whether code is bad? To me, the answer is straightforward, and it is a point that I elaborate throughout the book: Code without tests is bad code. It doesn't matter how well-written it is; it doesn't matter how pretty or object-oriented or well-encapsulated it is. With tests, we can change the behavior of our code quickly and verifiably. Without them, we really don't know if our code is getting better or worse.

As someone who finds himself working with a lot of code that doesn't have tests, I can say this resonates strongly with me.

So, how do we "work effectively" with code like this? By getting it under test. In particular, by crafting unit tests that (1) run fast1 and (2) help us localize problems. Michael identifies two things we really need in order to write these tests. First, we need separation between modules. Most legacy code bases are hard to write unit tests for because of snarled dependencies. I've struggled through trying to instantiate a class in a test framework only to end up with half of the application in my test, so I understand the complexities involved. I've also dealt with code that we can't test in a framework because of undesired side effects from running it. Second, we need to be able to sense certain values that our code computes so that we know it did the right thing. Both of these are accomplished by what he calls a "seam":

A seam is a place where you can alter the behavior in your program without editing in that place.2
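Here's a toy example of an object seam, the kind the book leans on most heavily; the Transmitter class and its names are mine, invented for illustration. The virtual call inside process() is the seam: a test can substitute a fake that records what was sent (sensing) without touching real hardware (separation), and process() itself never changes.

    #include <cassert>
    #include <string>

    class Transmitter {
    public:
        virtual ~Transmitter() {}
        // The production version writes to a real device.
        virtual void send(const std::string& packet) { /* hardware I/O here */ }
    };

    std::string process(Transmitter& tx) {
        std::string packet = "HDR:payload";  // the value we want to sense
        tx.send(packet);                     // the seam: virtual dispatch decides behavior
        return packet;
    }

    // A fake overrides send() to record the packet instead of transmitting it.
    class FakeTransmitter : public Transmitter {
    public:
        std::string last_sent;
        void send(const std::string& packet) override { last_sent = packet; }
    };

    int main() {
        FakeTransmitter fake;
        process(fake);
        assert(fake.last_sent == "HDR:payload");  // sensed without editing process()
        return 0;
    }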

To exploit this notion of separating and sensing with seams, the book presents a set of dependency-breaking techniques3 that can be applied fairly safely to bring legacy code under test. The majority of the book is structured with familiar problems as chapter titles:

  • I Don't Have Much Time and I Have To Change It
  • I Can't Get This Class Into a Test Harness
  • I Need to Make a Change. What Methods Should I Test?
  • I Don't Understand the Code Well Enough to Change It
  • How Do I Know I'm Not Breaking Anything?

With only a few minor exceptions, each chapter spoke directly to a problem I've personally encountered in one or more of the code bases I've worked on. Each gave me a specific set of techniques to apply to the problem in such a way that I could see how they would work. I've started to do just that in my current project.

The last section of the book is a catalog of the simple but powerful dependency-breaking techniques introduced throughout. Of the two dozen presented, a handful form the core and are used repeatedly, while the rest apply only in particular, less common situations. What struck me most is how much these techniques are about getting back to core design principles. Classes should have one responsibility. Methods should be short and do one thing. When you find code that violates these principles, you can use the refactorings in this book to introduce new methods and classes to break them up (the sketch below shows one of them). Yes, it can take a long time to get to that idyllic land where everything is simple, clean, and under test. But a journey of a thousand miles begins with a single step, and if you take just one step a day, every day, you'll get there sooner than you think.4
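For instance, here's a toy sketch of one catalog entry, Parameterize Constructor, with class names of my own invention: instead of constructing its own database connection, the class accepts one from outside, which opens up a seam for tests.

    #include <cassert>
    #include <string>

    // Hypothetical collaborator; virtual so a test can substitute a fake.
    class DatabaseConnection {
    public:
        virtual ~DatabaseConnection() {}
        virtual void save(const std::string& record) { /* real database write */ }
    };

    class OrderLog {
    public:
        // Before: the constructor did its own "new DatabaseConnection",
        // leaving no seam. After: the dependency arrives from outside.
        explicit OrderLog(DatabaseConnection& db) : db_(db) {}
        void record(const std::string& order) { db_.save("ORDER:" + order); }
    private:
        DatabaseConnection& db_;
    };

    // A recording fake lets a test exercise OrderLog with no database at all.
    class FakeConnection : public DatabaseConnection {
    public:
        std::string last_saved;
        void save(const std::string& record) override { last_saved = record; }
    };

    int main() {
        FakeConnection fake;
        OrderLog log(fake);
        log.record("widget");
        assert(fake.last_saved == "ORDER:widget");
        return 0;
    }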




1 And by fast, he means that each test runs in 1/100th of a second. In practice this usually means (1) no hitting the database, (2) no communicating over the network, (3) no touching the filesystem, and (4) no editing special configuration files to run it. Tests that do these things aren't bad; they just aren't unit tests.

2 While much of the book deals with object-oriented programs, because object seams are one of the easiest types to work with, in languages like C we have link seams and preprocessor seams as well.

3 At Agile 2010, I overheard Martin Fowler describe these as "blind refactorings," because you have to make what you hope are behavior-invariant changes without the benefit of tests.

4 And much sooner than if you never take a step.5

5 And even sooner if you don't take steps backward. Stop writing legacy code.
