Ticket-count analytics should be used only for their intended purpose — managing the list of open tickets. Resist the temptation to read other implications into them, such as employee performance or product quality. In fact, even as a tool for managing the list of open tickets, we need to be careful.
Whenever presented with a metric that implies a certain conclusion, we need to look for confounds. (This goes for all metrics, not just ticket counts.) What other explanations could there be? If none come to mind, then the straightforward explanation is probably correct. But if there are potential confounds, then deeper study is required. Does a high bug count really mean that bugs are getting out of control, or does it merely mean that stakeholders are over-reporting improvement requests and new-feature requests as if they are “bugs,” either mistakenly or in an effort to game the system?
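As a minimal sketch of what that deeper study can look like: if the tracker records both the type a reporter chose and the type assigned at triage, a few lines can show how much of a rising “bug” count is really misfiled requests. The field names below are hypothetical, not features of any particular tracking tool.

```python
from collections import Counter

# Hypothetical ticket data: each record carries the type the reporter chose
# and the type an engineer assigned after actually triaging the ticket.
tickets = [
    {"id": 101, "reported_as": "bug", "triaged_as": "bug"},
    {"id": 102, "reported_as": "bug", "triaged_as": "feature-request"},
    {"id": 103, "reported_as": "bug", "triaged_as": "improvement"},
    {"id": 104, "reported_as": "bug", "triaged_as": "bug"},
    {"id": 105, "reported_as": "feature-request", "triaged_as": "feature-request"},
]

reported_bugs = [t for t in tickets if t["reported_as"] == "bug"]
actual_types = Counter(t["triaged_as"] for t in reported_bugs)

# How much of the "bug" count survives triage?
misfiled = len(reported_bugs) - actual_types["bug"]
print(f"Reported as bugs: {len(reported_bugs)}")
print(f"Confirmed bugs:   {actual_types['bug']}")
print(f"Misfiled:         {misfiled} ({misfiled / len(reported_bugs):.0%})")
```

If half of the “bugs” turn out to be misfiled requests, the alarming trend line says more about how tickets are filed than about product quality.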
Using ticket closure-rate counts as a measure of employee performance is a particularly bad idea. The primary purpose of the bug-tracking tool is to give developers an honest representation of the state of the tickets lodged against the system. Overloading it with the goal of tracking employee performance creates a conflict of interest, and it all but invites employees to game the system. I’ve seen developers who were more interested in making their closure-rate numbers look good than in actually improving the state of the system do all of the following:
- They’ll cherry-pick the low-hanging fruit, knowing that a simple typo-correction ticket counts the same as a complicated, intermittent bug that will require hours or days just to build the right test harness.
- They’ll make a half-hearted attempt at triage and then close the ticket as “not a bug,” knowing that the ticket will probably get reopened later but still collecting credit for the closure in the meantime (the sketch after this list shows how reopen data exposes this).
- They’ll split one ticket into three and collect credit for three closures where one would have done.
- If the daily or weekly target is known in advance, they’ll aim for that and no higher, coasting once they hit it.
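None of these distortions is hard to surface after the fact. As a minimal sketch of the second pattern (the data layout and names here are invented, not from any particular tracker), counting only closures that stayed closed separates closures that stuck from closures that merely scored:

```python
from collections import Counter

# Hypothetical closure events: who closed each ticket, and whether the
# ticket was later reopened (e.g., a hasty "not a bug" that came back).
closures = [
    {"dev": "alice", "ticket": 7,  "reopened": False},
    {"dev": "alice", "ticket": 9,  "reopened": False},
    {"dev": "bob",   "ticket": 12, "reopened": True},
    {"dev": "bob",   "ticket": 13, "reopened": True},
    {"dev": "bob",   "ticket": 14, "reopened": False},
]

raw_closures = Counter(c["dev"] for c in closures)
net_closures = Counter(c["dev"] for c in closures if not c["reopened"])

# A raw closure count rewards the hasty closer; counting only
# closures that stuck does not.
for dev in raw_closures:
    print(f"{dev}: {raw_closures[dev]} closed, {net_closures[dev]} stayed closed")
```

The same idea extends to the other patterns (weighting closures by effort, or collapsing split tickets back to their parent), but no adjusted count removes the underlying conflict of interest.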
Of course, weeding out team members with this kind of attitude matters in its own right, but the wrong metrics for measuring developer effectiveness won’t help you do it. Instead, the metrics in play need to encourage long-range thinking. They need to bring out the team’s best efforts toward meeting the product’s or project’s ultimate goals.
Peter Drucker cautioned us about this sixty years ago:
> Reports and procedures should be the tool of the man who fills them out. They must never themselves become the measure of his performance. A man must never be judged by the quality of the production forms he fills out — unless he be the clerk in charge of these forms. He must always be judged by his production performance. And the only way to make sure of this is by having him fill out no forms, make no reports, except those he needs himself to achieve performance.
>
> — Peter Ferdinand Drucker, *The Practice of Management* (1954), p. 135
In other words, be careful not to confuse the production of paperwork (electronic tickets) with actual production. Measuring the time spent building a product is nowhere near the same thing as measuring the results that a user gets by actually putting the product to good use.