So you are working on a project, or running one, and you need to keep tabs on progress. You have your project plans and timesheets and whatnot, and all of that is very important, but where do you look to see progress day to day? Well, over the past year I’ve been working with hosted Subversion solutions, and this time around I’m running a project using Unfuddle. It has all of the same features I’m used to, such as easy setup and user management, but the reporting and messaging are far better than what I’ve seen in other solutions.

For me, the answer to any question of reporting and analytics is tied to the grittier data, the stuff you either ignore, don’t know about, or dismiss. Really, though, by keeping things simple you can keep your projects from going off the rails. Have your developers commit code, write documentation (via the Notebook feature, which is very wiki-like), and work tickets. Use the Unfuddle dashboard, or install the Mac desktop widget. Watch your email for the activity notifications that come through. The team is either working on code and checking it in or not, and they are either writing documentation or they are not.

And best of all, this communication goes on in a community space, by project, where you can put controls in place to make assignments, review progress, and keep a centralized audit trail. So, today I give props to Unfuddle. Nice work!

screen capture 1


Wrapping up an assessment today, and I’d like to give props to Web Log Explorer, which I’ve been using for some ad hoc analysis of IIS log files. The parsing is a little slow given that I’m running Parallels on a Mac, but not by much, and the options for exporting the data are quite good. I’m not sure where log file analysis is headed; when I started consulting it was all the rage, but today the notion that all things point to "the website" is rather naive. These IIS log files, like so many I come across, weren’t customized or extended, so there isn’t much to work with, and the servers are load balanced, so if I really wanted to dig in I’d have to weave the logs together, and that takes time and processing power.
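If I ever did need to weave the load-balanced logs together, the approach would be simple enough: parse the W3C extended entries from each server and merge them into one time-ordered stream. Here is a minimal Python sketch; it assumes the logs include the standard date and time fields, and the directory layout is just a placeholder I made up for illustration.

    import glob
    from datetime import datetime

    def parse_iis_log(path):
        """Yield (timestamp, raw_line) pairs from a W3C extended format IIS log."""
        fields = []
        with open(path, encoding="utf-8", errors="replace") as f:
            for line in f:
                line = line.rstrip("\n")
                if line.startswith("#Fields:"):
                    fields = line.split()[1:]   # e.g. date time c-ip cs-method cs-uri-stem ...
                    continue
                if not line or line.startswith("#"):
                    continue
                values = dict(zip(fields, line.split()))
                ts = datetime.strptime(values["date"] + " " + values["time"],
                                       "%Y-%m-%d %H:%M:%S")
                yield ts, line

    # Merge the logs from each load-balanced server into one chronological file.
    # The "logs/server*/ex*.log" layout is hypothetical.
    entries = []
    for path in glob.glob("logs/server*/ex*.log"):
        entries.extend(parse_iis_log(path))
    entries.sort(key=lambda pair: pair[0])

    with open("merged.log", "w", encoding="utf-8") as out:
        for _, line in entries:
            out.write(line + "\n")

Nothing fancy, but with the per-server files woven into one timeline you can at least follow a visitor as the load balancer bounces them between machines.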

Even my own tracks today were a bit crazy — in and out of virtual machines and three browsers, downloads and transfers through Remote Desktop, use of TweetDeck instead of going to Facebook and Twitter … things like that. How would you ever track that sort of thing, the new form of "browsing" and interaction? Comments to this question are welcome.

I’ve decided on a return to Microsoft to work on building out solutions for SharePoint/MOSS and to try to get in early on the connection between enterprise content management and analytics. I was up the better part of the night installing software, turning my MacBook into the most versatile of machines.

At first I was building out images using Virtual PC, but by now a lot of my towers are short on RAM and a bit too old for heavy work, and I did not want to buy new towers and deal with driver issues if I was installing, say, Windows 2003 Server. While tinkering a bit, I installed Parallels on my MacBook (which is running with 2 GB of RAM, so decent enough) and found it fabulous for server as well as desktop software. In fact, server software is easier because you avoid the registration key loop in which you activate a product and then all of a sudden Microsoft thinks you are on a new machine and logs you back out, which is a very annoying aspect of the new virtualized world.

At any rate, I installed the following in short order and find they all run very well:

  • Windows 2003 Server with SharePoint 2007
  • Windows XP (the usual workstation)
  • openSUSE Linux 11.1

These will get me started, and I’ll gladly pay the $80 for Parallels to have everything on one machine. The goal now is to get back to MOSS and the analytics side of things, and I’m starting with an assessment of a multi-tenant hosted platform that seems to be having a few issues.

Last week I was working with my project managers on a retroactive analysis of project work in which we were left in the lurch by a developer who was leaving to take a new job. The question came up: could we have had better insight into the risk when the reports we were getting in status meetings and on timesheets all pointed to good progress?

I decided to take a look at Perforce reports, thinking that we should see a pattern of source code check-ins with timesheets and progress on assigned work. What follows is a quick review of what it takes to get reports out of Perforce from a management perspective if you need to watch development.

Perforce has a reporting toolkit available for download from their site, and the Windows version seemed the most established even though they also have a Linux command line utility. I ran the installer, provided my account information, and connecting was pretty easy. I had tinkered with the Linux and BSD utilities for a few minutes on Mac OS X with no luck, so off to Windows I went.

The toolkit gives you some Crystal runtime reports, but the date parameters don’t seem to work and I was getting back comically huge “monthly” reports that went back to the beginning of time. The Crystal report output was rather basic, without good filtering or export capabilities, so I skipped past the lot of them and moved on to the ODBC driver.

The ODBC driver was the valuable thing in the mix, and I set up a quick connection to Excel and pulled in data from the CHANGES table. The basic trick here is to apply the p4options = 'longdesc' filter to the query, which gives you back the full comments in the Description field.
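From Excel it is all point and click, but the equivalent pull from a script looks roughly like the sketch below. The DSN name and the exact column names are assumptions on my part; the p4options clause is the piece that matters.

    import pyodbc

    # "Perforce" is a hypothetical DSN pointing at the Perforce ODBC driver;
    # the column names are guesses based on what shows up in Excel.
    conn = pyodbc.connect("DSN=Perforce")
    cursor = conn.cursor()

    # The p4options = 'longdesc' filter is what returns the full check-in
    # comments in the Description field rather than a truncated version.
    cursor.execute("""
        SELECT user, date, change, description
        FROM changes
        WHERE p4options = 'longdesc'
    """)

    for row in cursor.fetchall():
        print(row.user, row.date, row.change, row.description)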

For a brief time I toyed around with trying to get into the FILES table, wanting the details of all files checked in for a particular change, but for us the FILES table has over 1.6M records, and Perforce doesn’t really support joins in the traditional way, so for now I’m sticking with the information in the CHANGES table.

At the moment I’m pulling a report that sorts line items in the CHANGES table by USER (asc) and DATE (desc) going back a month or so. When compared to timesheet data, this gives us a bit more insight into the work being done on projects and whether or not team members are a) working on files with the right frequency, b) checking in code on a regular basis, and c) working on the right projects in Perforce.
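The query behind that report is just the earlier sketch with a date window and an ordering added (again, the column names are my guesses):

    from datetime import datetime, timedelta

    import pyodbc

    conn = pyodbc.connect("DSN=Perforce")   # same hypothetical DSN as above
    cursor = conn.cursor()

    # Changelists from roughly the last month, sorted by user (ascending) and
    # date (descending) so each person's most recent check-ins come first.
    since = datetime.now() - timedelta(days=30)
    cursor.execute("""
        SELECT user, date, change, description
        FROM changes
        WHERE p4options = 'longdesc' AND date >= ?
        ORDER BY user ASC, date DESC
    """, since)

    rows = cursor.fetchall()

From there the rows drop straight into Excel for the comparison against the timesheet data.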

I’ll give the analysis about a month to see if this is a good way to track progress and assess risk. If it seems helpful, then I’ll probably look at setting up a report in SQL Server Reporting Services and publishing weekly and monthly reports to the management team.

Comments on reporting on Perforce are welcome.

This week I wander off in a slightly new direction, picking up on a survey based on Reichheld’s ultimate question, “would you recommend us to others?”, as a measure of customer satisfaction. Fred has a good point about the simplicity, and he pairs it with an equally simple calculation to translate the Likert scale values into a measure; now I’m looking into the follow-up part.
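For anyone who has not seen it, the calculation is the Net Promoter Score: on the 0-to-10 recommendation question, a 9 or 10 counts as a promoter, 0 through 6 counts as a detractor, and the score is simply the percentage of promoters minus the percentage of detractors. A toy version in Python, just to make the arithmetic concrete:

    def net_promoter_score(scores):
        """Net Promoter Score from 0-10 answers to 'would you recommend us?'.
        Promoters score 9 or 10, detractors 0 through 6; the result is the
        percentage of promoters minus the percentage of detractors."""
        promoters = sum(1 for s in scores if s >= 9)
        detractors = sum(1 for s in scores if s <= 6)
        return 100.0 * (promoters - detractors) / len(scores)

    print(net_promoter_score([10, 9, 9, 7, 6]))  # 40.0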

Do I target the customers who would recommend us first? Do I try to bring the less-than-enthusiastic ones back into the fold and raise their scores? If I do try to raise their scores, how do I measure the steps taken to improve loyalty (a much better word than “satisfaction”, I agree)?

My premise starts with dedication to the customer: before I try to engage anyone in a conversation, I have to build as complete an understanding as I can of their projects and the service they have received. Once I have that background I can move on to the calls, but only with a full plan in place. There is no point in making information-gathering calls if I can’t take the responses and translate them into something good, like more loyalty or more business, something like that.

Kicking off the new blog

January 23, 2009

Well, I must have started 20 blogs by now, none of them to my satisfaction. Today I was down at the Motion Graphics conference in Chicago, working with the crowdSPRING founders, pondering where I want to take myself in 2009, and for the first time the whole thing felt right. Felt like I was getting my voice back, as it were.

So begins this blog. I have always had a keen interest in offbeat data, not like what I was doing at Miller Brewing with point-of-sale analysis (though trips to the company beer store were great). No, I’m the kind of guy who downloaded keycard system data to figure out how much time the developers were spending on smoke breaks. The kind of guy who ran an analysis of project management and time entry data to figure out how much management time was spent on review and approval each week (40 hours for 12 staff members — ouch).

And so today the idea of the data+graphite blog came to me, to deal with the messy side of data. I’ll let Henry Petroski take it from here:

“The pencil, the tool of doodlers, stands for thinking and creativity, but at the same time, as the toy of children, it symbolizes spontaneity and immaturity. Yet the pencil’s graphite is also the ephemeral medium of thinkers, planners, drafters, architects, and engineers, the medium to be erased, revised, smudged, obliterated, lost – or inked over. Ink, on the other hand, whether in a book or on plans or on a contract, signifies finality and supersedes the pencil drafts and sketches. If early pencilings interest collectors, it is often because of their association with the permanent success written or drawn in ink. Unlike graphite, to which paper is like sandpaper, ink flows smoothly and fills in the nooks and crannies of creation. Ink is the cosmetic that ideas will wear when they go out in public. Graphite is their dirty truth.”

So it begins. The fun side of data, the dirty truth.