I think code reviews are crucially important to the success of a
development team. Here are some of my thoughts on how code reviews
might go for a relatively small team.
I understand that at Google they have a dedicated senior person whose
job it is to run the code reviews. That's probably terrific for their
QA. For smaller teams looking to figure out how to do them, here are
some ideas.
On Java Coding Standards
Adopt the Sun standard. Don't think about it too much. It isn't interesting.
You can find this here: http://java.sun.com/docs/codeconv/
Have all of your programmers write their code to conform with it. Their IDEs
should be set up to automatically format it this way. With Eclipse and IntelliJ
and other IDEs, you can create a template so that everyone's source physically
looks the same.
It is terribly boring to sort of drone on and on about whether your fields
belong at the top or bottom of your class, or whether your curly braces belong
on the same line or their own line.
However.
It is also, in my view, very important that everyone's stuff looks the same.
That is, it doesn't matter _that_ your fields are at the top of the class (uh, where
they should be), as long as everyone on the team has them in the same place.
This is because code must be readable. If 80% of the cost of software is in the
maintenance, your stuff has to be readable by someone else so that they have the
best chance of finding where the problems are.
This assumes there is some clear set of standards regarding program structure.
Code formatting is worthless if there's no shared understanding of refactoring to
patterns among team members.
The Pragmatic Approach
So here are a few pragmatic ways to approach that matter.
Create a set of development guidelines. These include key aspects of,
for example, Bloch's book Effective Java and books like Bitter Java. These are
an obvious minimum if somehow your team is reading-averse.
Identify general highlights from these, and publish them for everyone to access
(say, on your intranet). Make sure no copyrights are being violated. I'm talking
about standard stuff here.
Add your own guidelines to this as people think of them. Have people
understand and follow these general guidelines. These include reminders such as
"Always override hashcode when you override equals" and
"Here is the definition of a Java Bean",
"Don't use the interface enum antipattern" and like that.
If your developers don't all have a strong, shared understanding of
the Gang of Four and the J2EE patterns, it can take a fairly long time to get it.
Have frequent presentations on design patterns. Have everyone in the
group research and present a different pattern or two.
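To give a feel for the size these presentations can be, here is a minimal sketch of one Gang of
Four pattern, Strategy, in Java. The names (PricingStrategy, Order, and so on) are made up for
illustration; the point is that a slide's worth of code is enough to anchor a discussion of when
the pattern applies and when it's overkill.

    // Strategy: the pricing algorithm lives behind an interface so it can
    // vary independently of the Order that uses it.
    interface PricingStrategy {
        double priceFor(double baseAmount);
    }

    class RegularPricing implements PricingStrategy {
        public double priceFor(double baseAmount) {
            return baseAmount;
        }
    }

    class HolidayPricing implements PricingStrategy {
        public double priceFor(double baseAmount) {
            return baseAmount * 0.9; // 10% off for the holiday sale
        }
    }

    class Order {
        private final double baseAmount;
        private final PricingStrategy pricing;

        Order(double baseAmount, PricingStrategy pricing) {
            this.baseAmount = baseAmount;
            this.pricing = pricing;
        }

        double total() {
            // The Order neither knows nor cares which algorithm it was given.
            return pricing.priceFor(baseAmount);
        }
    }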
I have found that if people do these three key things (write to the Sun convention,
follow basic coding guidelines and best practices, and refactor to patterns),
their code will be clean, readable, maintainable, readily understandable,
and will have the best chance of working and being extensible and scalable.
I know that's a lot of neat marketing words, but it's just true in my
experience.
But what does that have to do with a code review? Everything. It makes
the stuff you bring to the review better. You have to minimize the tension
that quickly obtains at code reviews. You also want to focus on interesting
problems and design issues, not curly braces. Set the bar at least that high,
even when your team has some junior and some senior programmers on it,
and you'll have fun and productive code reviews, not bitchy, egomaniacal showdowns.
Setting the bar at least that high also helps move each developer forward at
an appropriate pace.
Code reviews should be something that everyone looks forward to. I love the stuff I write.
I throw half of it away (bad daddy) in refactoring, but it's fun to have interesting conversations
with smart people who care about what they do.
Code reviews should be short, focused, and frequent. What has worked for me is
30 minutes or so every two or three days, first thing in the morning for each employee.
That way, the comments from the reviews can be incorporated into that day's
work, and the reviews will not be general, but be a good learning
opportunity for one or two important things.
At the end of the week, or some similar time frame, the code reviewer
can make some kind of summary statement indicating general things that
the group can focus on. This could be time-consuming, however, and isn't necessary.
It could be tossed on top of some time reporting or billing tool you might be subject to.
What Code Reviews Accomplish
Doing code reviews this way accomplishes a few things:
1. It helps build the team.
2. Everyone gets individual attention and thereby learns more, faster.
3. It keeps the project focused and moving forward, because people
know they will always have to have something new to bring to the
review.
4. It helps identify parts of the code that can be abstracted up,
refactored in some way, reduced, or eliminated.
Finally, reviews have the obvious benefit of being a way of identifying
people who would really rather be astronauts and ballerinas than coders.
Let those people have an opportunity to reveal their true desires by degrees
as they fail their code reviews. Then they can be let go, into the wild, to
pursue their dreams. Now I don't think "grading" code reviews is a
good idea. It feeds a sort of pervasive macho hierarchical aesthetic which I frankly
find distasteful. But there is a clear record anyway of what sorts of things
are being found in people's code.
The architect, or whoever is doing the code review, should use it as his own
review time as well. He can ask himself: is he doing a good job of
identifying and unifying the issues within use cases, refactoring the design along the way, and
shining a light on paths for developers to take to get there?
Additionally, the architect should have his code reviewed by another
member of the team in order to protect the investment that the
organization makes in the project. You don't want code in the
repository that only one person understands. This can be a disaster.
It will also keep the architect on his toes.
Keep Your Truck Factor High
If you're the guy doing the reviews and you also write code and you're such
a genius that no one else might be trusted to understand what you're writing
when left to their own pitiful devices, then it's your job to make at least
one other person understand. Code is more maintainable when your
truck factor is high.
Robert Martin is the person I first heard use the term. Your app's truck factor
is the number of people who can get hit by a truck and the company can still maintain
the app.
Take a good, hard look at your organization. What apps do you have with a truck factor of one?
Get pair programming going, even if you hate XP and adopt no other practices, and get your
truck factor up on your apps. Code reviews help keep truck factors high.
Code Reviews and Contractors
For contractors, there are somewhat different rules. If you have contractors writing
apps for your company, you've got to have their stuff reviewed in some regard. You can't
just accept the app. You have to accept the source.
Have the aim of code reviews for contractors be conformance to the spec,
with the reviewer also ensuring the durability of the design
in a more general way. This is not a learning experience for the
contractors or a team-building exercise. It is clear and simple
protection of your investment.
These reviews needn't happen in person, and shouldn't. It would be a sort of enormous
waste of money and time and wouldn't be any fun. Contractors should be given a
branch of the repository or some similarly locatable place, and your
team should review that code. The architect should review the code to
make sure that the final product will integrate well with the common
architecture of SRPMIC. But the real code review can be done by other
members of the team, so that they get used to reading code for
potential performance problems, deadlocks, general strangeness, and so
on.
I've seen a lot of projects where contractor code goes into production and even the contractors don't
understand the code they committed, because they didn't write it and don't really understand the
language it's written in (Java). Some other guy (for a previous customer, in many
cases) wrote the code; it went to the big database in the sky, and then they started the copy and
paste machine and were off like rabbits.
So, there are two kinds of code reviews with two different purposes. They're both crucial in
my view to your overall success.