In class today, we had a heated debate about trustworthy, commercial collaborative software. Not surprisingly, I took the side of SharePoint and joined the group preparing the argument for the Microsoft stack (it was a small-group breakout session). Interestingly, the argument ended in a stalemate: after both sides presented their cases, each agreed that the other has inherent benefits (it's damn hard to come up with an ample argument against the benefits of Open Source) as well as innate faults. From that preliminary argument, though, a new dialogue emerged. Regardless of which stack either group was defending, management-level sponsors of collaborative environment efforts often have difficulty pegging down the value of standing up a collaborative software instance.
The celebrated dramatist Oscar Wilde once said:
A cynic is a man who knows the price of everything but the value of nothing.
This quote means numerous things to me (since it is so open to interpretation), but none more than the simplest one. A person can readily put an empirical cost on an arbitrary object, since that is, well, pragmatic. Getting at something less tangible, such as Realized Business Value (RBV) or Return on Investment (ROI) for a collaborative effort, is remarkably tricky. Although cost will certainly be a parameter when generating your Realized Business Value Docket (RBVD), it is only that: one input sitting alongside other inputs that hold more weight. (This is one of the strengths of RBV, since it provides a basic level of cost abstraction, and for those living in the small-to-medium-size vertical, the purchase of SharePoint can be quite expensive.)
I want to stress that, in my opinion, measuring things like monetized ROI on collaborative software efforts using legacy business-value approaches is a façade, a mirage, an illusion (this might not apply to some of its subsets). Don't even try it; it's self-defeating and will only provide a depressing, dismal night where you are fudging numbers more than harvesting anything from the actual project.
This isn't to say that you can't get value out of output metrics for collaborative software efforts, but it is problematic to present your findings to management, since they can prove abstract and oftentimes contain a heavy dose of inference. At the most basic level, a manager expects to be able to acquire an object for a cost from someone, somewhere, and from those parameters, plus insight into the owning organization, an ROI can be determined and implemented. In the end, managers tend to be good at looking at a particular project through a pair of monetization spectacles, compensating for human factors like apathy toward technology. Taking that monetizing approach when trying to appreciate the value of SharePoint, or of collaborative software in general, is simply not practical, however.
Why? The outputs are different. Plain and simple. Whereas traditional software development efforts allow the kind of surveying of project results described previously, this is unquestionably not the case with SharePoint. Collaborative software has outputs such as:
Augmented Information Worker Knowledge
Improved Employee Productivity
Enhanced Customer Service
And while I can name these out just fine, and there may be some methods to garner metrics from them (notably the Enhanced Customer Service entry), how does one place a value on something like augmented information worker knowledge? We can certainly see whether productivity varies, but that reporting is in essence circumstantial, because we are really assembling a collection of facts rather than a conclusive series. Too many factors could cause discrepancies in any of these measures during that process, as well as in the aggregate output.
So, stuff like this is somewhat conjectural at the moment. I think the best way to measure something like this would be rolling, time-adjusted observation of growth in collaborative software utilization, paired with visualization of the data being stored and delivered. I don't think that is enough on its own, though, because it captures only a small part of the measurement. It would also have to include the more human factors, such as how often a person is involved in face-to-face interactions, movement patterns within the workspace, etc. This type of research would be further constrained by the need to compensate for dependencies on legacy collaborative tools, such as email. I am not going to go into the full approach I would use; I will save that for another post.
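Just to make the utilization-growth piece a little more concrete, here is a toy sketch of what that one slice of the measurement might look like. Everything here is hypothetical: the event format and the function are my own illustration, not part of any SharePoint API or reporting tool. The idea is simply to bucket usage events by month and compute month-over-month change in distinct active users.

```python
from collections import defaultdict
from datetime import date

def utilization_growth(events):
    """Given hypothetical (user, date) usage events, return a list of
    ((year, month), active_users, growth) tuples, where growth is the
    month-over-month fractional change in distinct active users."""
    monthly = defaultdict(set)
    for user, day in events:
        monthly[(day.year, day.month)].add(user)  # distinct users per month

    report = []
    prev = None
    for month in sorted(monthly):
        active = len(monthly[month])
        growth = None if prev is None else (active - prev) / prev
        report.append((month, active, growth))
        prev = active
    return report

# Hypothetical sample data: three information workers over two months.
events = [
    ("alice", date(2008, 1, 5)), ("bob", date(2008, 1, 9)),
    ("alice", date(2008, 2, 2)), ("bob", date(2008, 2, 3)),
    ("carol", date(2008, 2, 20)),
]
print(utilization_growth(events))
# [((2008, 1), 2, None), ((2008, 2), 3, 0.5)]
```

Even this trivial sketch shows the limitation I'm describing: a 50% jump in active users tells you the tool is being opened, not that anyone's knowledge or productivity actually improved, which is exactly why the human factors above would have to be folded in.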
I guess the takeaway, particularly if you are a manager, is that this is a new concept requiring a new type of recognition. And while the benefits might not be immediately visible to conventional brick-and-mortar industries, they are there. There just aren't any mature, intelligent tools yet that deliver those types of reports inclusively.
Yet… (and the gears start turning). Stay tuned!