Economics of SharePoint Governance – Part 21 – Relevant Governance Costs And Management of Governance Costs

Most of the data for cost analysis will have been compiled by formal accounting systems, which may have been designed for tax-reporting rather than managerial decision-making purposes. One can of course employ the straight accounting data as reported to estimate what I shall call an "accounting cost" function, but the cost decision maker is urged to recognize and remember when using it for simulation purposes that it overstates some costs and understates others. Also, any cost function estimated from historic data will be valid for simulation of future circumstances only if historic patterns persist into the future.

The analyst will have to make adjustments to the accounting cost data to estimate a relevant-cost function. No hard data exist at all about opportunity costs (what the firm is not presently doing but could be doing); only the managerial decision maker can estimate the opportunity costs based upon awareness and perceptions of the foregone alternatives. Needless to say, different managers will have different perceptions and assessments of the available opportunities, and some managers will be more successful than others because of their successful recognition of the pertinent opportunities. The well-designed cost accounting system should recognize temporal cost-output mismatching, but if it does not then some effort must be exerted to allocate certain costs (e.g., repair and maintenance expenses) to the proper time periods.

If the objective is to estimate a short-run cost function, care must be taken to exclude all overhead costs. This may be especially difficult if the cost-accounting system obscures the distinction between some direct and some overhead costs. For example, maintenance and repair expenses often are lumped together (they are typically performed by the same in-house crews or outside contractors), but regular maintenance expenses (presumably at approximately the same expense levels period after period) should be treated as fixed costs. However, repair expenses, to the extent that they are not regular occurrences, should be treated as variable costs. Then there is the problem of matching the repair expense to the appropriate time period since the circumstances culminating in the need for the repair may have spanned several production periods.

Depreciation is an especially knotty problem for the determination of relevant costs. Should it be treated as a variable cost to be included in the estimation of a short-run cost relationship? I have already made reference to the likely divergence between the accounting allowance for decrease in the value of the capital stock and the real phenomenon of capital consumption. Real capital consumption surely does vary with the rate of production, which may be quite uneven from period to period. But since it is not possible to know with certainty in advance what will be the actual physical life of the plant or equipment, it may not be possible to meaningfully impute a cost value to the amount of capital consumption to be allocated as a direct cost to a particular production period.

The accountant handles this problem by selecting a depreciable life (on the basis of past experience, as permitted by the tax authority's rules, or simply arbitrarily), and designing an allowance schedule which may be straight-line, accelerated, or retarded (again, as permitted by the tax authority's rules). If a straight-line allocation method (the same proportion of the capital value charged as cost during each period) is used, then a case can be made for excluding depreciation from the list of costs relevant to the short run, because even though it represents a direct cost it does not vary with output. But even if an accelerated or retarded depreciation schedule is used, the proportions of the capital value to be allowed as costs change smoothly, predictably, and monotonically (a mathematical term meaning in one direction only), and hence are unlikely to correspond to real variations in output and the capital consumed in producing it. Given these problems, then, a case can be made for excluding depreciation from the estimation of a short-run cost function, but including repair expenses (properly matched to time periods) as mirroring the real consumption of capital, i.e., the expenditure necessary to restore the real decrease in productive capacity caused by use of the capital.

Another short-run cost estimation problem occurs in time series data if there has been non-trivial variation in the prices of the variable inputs. Short-run costs certainly vary with the prices of inputs, so the analyst must make a methodological choice: either choose to “deflate” each of the TVC components (i.e., TLVC, TMVC, etc.) by a price index appropriate to that cost, or choose to include as other independent variables (i.e., the X1, X2,…, Xn in equation C6-1) the prices of the inputs which are thought to be varying significantly. The regression analysis will reveal which in fact are statistically significant determinants of TVC.
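To make that methodological choice concrete, here is a minimal sketch of both approaches. All series, index values, and variable names are hypothetical and purely illustrative; they are not drawn from equation C6-1 or any data in this series.

```python
import numpy as np

# Hypothetical quarterly observations: total variable cost, output, and an
# input-price index (base period = 1.0). All numbers are illustrative only.
tvc   = np.array([120.0, 135.0, 150.0, 170.0, 185.0, 210.0])
q     = np.array([100.0, 110.0, 118.0, 130.0, 138.0, 152.0])
p_idx = np.array([1.00, 1.02, 1.05, 1.08, 1.10, 1.14])

# Approach 1: deflate the cost series by the price index, then regress on output.
real_tvc = tvc / p_idx
X1 = np.column_stack([np.ones_like(q), q])
b1, *_ = np.linalg.lstsq(X1, real_tvc, rcond=None)

# Approach 2: leave costs nominal and include the input price as a regressor.
X2 = np.column_stack([np.ones_like(q), q, p_idx])
b2, *_ = np.linalg.lstsq(X2, tvc, rcond=None)

print("deflated model  TVC* = %.2f + %.2f Q" % tuple(b1))
print("nominal model   TVC  = %.2f + %.2f Q + %.2f P" % tuple(b2))
```

In the second approach, the usual regression diagnostics would indicate whether the price variable is in fact a statistically significant determinant of TVC.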

Theoretically, the physical units produced (or some multiple) should be used for Q in the regression model. A problem arises if the firm produces a multiplicity of products, some of which are jointly produced. In this case, Q may have to be the “product mix” normally resulting from the production process. If the market values of the jointly-produced products differ significantly, it may be necessary to use a composite index of the jointly-produced outputs where the weights are the current market prices of the products. Finally, if the objective is to estimate a firm-wide cost curve where the firm is producing multiple products, it may be desirable to use data for value of output evaluated at current market prices. If time series data are employed, both the output and input value data should be deflated by appropriate price indexes.  

The alternative heading for this section might be “Getting Enough Useable Data.” In sourcing his data, the analyst is first confronted with a choice between time-series and cross-sectional data. A short-run cost function assumes given technology, managerial capacity, and entrepreneurial abilities. Much recommends the cross-sectional choice for short-run cost estimation since the data are taken across firms or plants, but at a point in time. There is thus no problem of dealing with changes of plant size or of technology, managerial capacity, or entrepreneurial ability. It is perhaps easier to identify the use of the same technology than to find comparable amounts of managerial capacity and entrepreneurial ability in different firms. In this regard, the analyst will simply have to exercise judgment. Alas, there remains a critical problem with cross-sectional data for short-run cost estimation that is difficult to surmount for most firms: data are required from different firms or plants, but competitors often are reluctant to share such sensitive information.

The use of time series data for the firm’s own plant(s) obviates the need to solicit data from competitors. Here the analyst must take care to choose a time span which is not so long that there have occurred changes in technology, management, or entrepreneurship; else data will be for points on different cost functions. This may be a period covering several years, but may be as short as a few months. Then the analyst must divide the period into an adequate number of data collection intervals to yield enough observations for estimation of a statistically-significant cost function. Usually twenty to thirty observations are adequate for this purpose. The duration of the data-collection period may then dictate daily, weekly, or monthly observation intervals. A new short-run cost function must be estimated every time that technology, management, or entrepreneurial ability changes.

In the estimation of a long-run cost function, all costs should be included. It may not be feasible, however, to use a time-series approach since the intervals should be long enough to permit variation in plant size (but still no changes in management or entrepreneurship). In order to get enough long-interval observations, the duration of data collection may have to span several years. Alterations in management or entrepreneurship during the data-collection period will result in points on different long-run cost functions. These conditions may be heroic at best. The use of cross-sectional data, if they can be obtained from other firms and plants, may accommodate requisite variations in plant size, but care must be taken in selection of subjects to avoid different technologies, managerial capacities, or entrepreneurial abilities. Again, these may be heroic conditions.

Whether the analyst is attempting to estimate short- or long-run cost functions, the necessary premise is that each plant is being operated efficiently, with no significant waste of any resources, i.e., at an appropriate point on its cost curve. If at any time or in any plant included in the sample there are wasteful conditions, the observed data will be for points above the locus of the firm's true cost curve. If this circumstance occurs in more than a few instances, the estimated cost function will lie above its theoretically true (efficient) location, and will yield erroneous conclusions in simulation exercises.

Whether the management of the firm develops a cost-function simulation model or not, the cost-related job involves many facets:

(1) In the long run, selecting the most efficient technology for producing the enterprise's selected products;

(2) With that technology, selecting the scale of plant with an output range which is most compatible with current and expected future levels of demand for the products;

(3) Given the right scale of plant, selecting the appropriate output level to meet the enterprise’s goals (profit maximization, cost minimization, optimization, etc.);

(4) For the target level of output, selecting the appropriate internal allocation of the enterprise’s resources, i.e., the most efficient combination of inputs, given the available input prices;

(5) Operating efficiently and without waste, i.e., to operate at points on the enterprise’s production function surface (not below it) and on its respective cost curve (not above it).

Furthermore, economists can argue that if the goal of the enterprise is to maximize profits, and if it does so operate to maximize profits without monopolizing its markets or exploiting its resources, it will also meet a desirable social objective of efficiency in the allocation of resources among industries, among firms within the industries, and between products. Assuming that monopolization and exploitation are averted (admittedly a serious problem), this happy circumstance can be expected to emerge even if social well-being and social economic efficiency are not explicitly managerial goals of the enterprise.

 


Introducing Free SharePoint Governance Software – Riadenia SharePoint Governance Automation – Part 1

Disclaimer – This post is simply an introduction to Riadenia, a SharePoint governance software package that will be released shortly, once it has been QAed.

SharePoint governance has been a subject that people have discussed forever, but it really didn't seem to become such an important buzzword until the 2007 platform was released. I don't know why that was, but I never heard it come up previously. There has been a lot written about it, from scripted guidance to tooling. Interestingly though, SharePoint governance, like computing governance generally, is for the most part arbitrary, so standards that attempt to define any "best practice" tend to fall woefully short. Most of the time they don't even make sense in terms of pragmatic application. For any meaningful progress in SharePoint governance, the objective of reform must first be defined with regard to the standards that an organization wishes to achieve. As I see it, any undertaking is only of value to an organization if its ultimate aim is to establish a framework that allows for rules of governance. By this I mean a system to which everyone is subject, but which shelters everyone from arbitrary governance standards.

In this way it should be stressed that firms should look at governance tooling and guidance not as a completed solution but as a means of enabling them to better apply a SharePoint governance framework. This framework, importantly, would need to remain mutable. SharePoint governance, and its related tooling, is neither a project nor a technology. It provides a control framework for safeguarding your organization at a level that strikes a balance between business needs and protection needs. Basically, your firm needs to have a solid framework in place before any governance automation technology can make a difference. These tools are created to enhance your systems, not develop them.

Some might argue that implementing governance in SharePoint is as simple as setting basic IT SLAs in place, pointing to some of the inherent features that constitute the wider system of administering collaborative (SharePoint) software, but an honest and objective assessment makes it patently clear that this is no longer the case. Serious doubts have been cast over the competence and integrity of leveraging such basic features alone. No less significant is the very low level of confidence in the system as a whole, such confidence being a necessary prerequisite to its effectiveness. In view of this, the intention to address those factors has led to the belief that governance feels arbitrary precisely because there is no effective governance system. It also becomes apparent that the governance reform initiative must be approached on at least two equally important levels: the SharePoint framework and, for want of a better term, the human resource.

So what's the problem? The real crux of SharePoint governance issues arises from the fact that people take canned SharePoint governance advice and attempt implementation without tailoring it to very crucial enterprise aspects such as SharePoint deployment intention, company culture, and industry. Rarely will SharePoint governance guidance, outside of the most generic counsel, translate well. These needs are not met satisfactorily by a method tailored around informal recommendations made behind closed doors. These factors underscore a need for a mechanism that in my view would best be embodied in an independent commission, automated and managed within the framework itself, and ultimately operating in an automated fashion.

So what does all this esoteric crap actually mean? It entails balancing the practical with the not so practical. There must be pragmatic objectives for each object being governed, and these must in turn contain relevant thresholds that define the characteristics, and in a larger sense the limits, of that object. In terms of SharePoint, it is pretty easy to grasp what this should be shooting for. For each object within the context of Riadenia – SharePoint Governance Automation, this means limits must be placed on sites (SPWebs). The reason SPWebs are a practical target is that they represent a good middle-tier proxy object: an SPWeb isn't as vast and untargetable as an SPFarm, SPWebApplication, or SPSite, but it isn't as specific and narrow as a collection of governance-worthy objects like an SPList. Roping this back in, this problem, and the overall advised approach, has nothing to do with the version of the software. Rather, the problem spans multiple versions of it, and the objects mentioned above are consistent with those present in the current and previous releases (2003 didn't have, for example, the SPWebApplication and SPFarm objects).

The thresholds themselves are nothing fantastic and mind-blowing. Ideally, to build profiles, a model can be constructed that tells you, for example, that x site administrators per securable site object is "good"; you would average these metrics across sites and add (e.g.) one standard deviation above the average to help identify outliers. Through application of the central limit theorem (which describes the conditions under which the mean of a sufficiently large number of independent random variables, each with finite mean and variance, will be approximately normally distributed), you can adjust your threshold metrics to select different sets. However, this is beyond the scope of my simple application! I will take it there one day, though, when I get the chance to get some feedback on the current state.
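As a minimal sketch of that averaging idea, here is how a "mean plus one standard deviation" threshold could be computed. The metric name and the sample values are hypothetical; nothing here is taken from the Riadenia code itself.

```python
import statistics

# Hypothetical metric: site administrators per securable site object,
# sampled across existing SPWeb sites.
admins_per_securable_object = [0.8, 1.2, 1.0, 1.5, 0.9, 2.1, 1.1, 1.3]

mean = statistics.mean(admins_per_securable_object)
sd = statistics.stdev(admins_per_securable_object)

# Flag sites that sit more than one standard deviation above the average.
threshold = mean + sd
print("average = %.2f, threshold = %.2f" % (mean, threshold))

flagged = [m for m in admins_per_securable_object if m > threshold]
print("observations above threshold:", flagged)
```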

One important piece of adaptive governance procedures is the introduction of some method of forecasting. The main problem with this effort in large-scale SharePoint projects is the existence of optimism bias and strategic misrepresentation among project promoters. A consequence of such bias is a high incidence of cost overruns and benefit shortfalls in projects. Thus a number of measures aimed at eliminating, or at least reducing, optimism bias and strategic misrepresentation in governance development must be introduced. The measures include changed governance structures and better planning methods. The aim is to ensure that decisions on whether or not to build projects are based on valid information about costs and benefits, instead of being based on misinformation, as is often the case today.

This is not to say that costs and benefits are or should be the only basis for deciding whether to build large projects. Clearly, forms of rationality other than economic rationality are at work in most projects and are balanced in the broader frame of public deliberation and decision making. But the costs and benefits of large-scale projects often run in the hundreds of millions of dollars, with risks correspondingly high. Without knowledge of such risks, decisions are likely to be flawed.

When contemplating what planners can do to improve decision making, we need to distinguish between two fundamentally different situations: (1) planners and promoters consider it important to get forecasts of costs, benefits, and risks right, and (2) planners and promoters do not consider it important to get forecasts right, because optimistic forecasts are seen as a necessary means to getting projects started. The first situation is the easier one to deal with, and here better methodology will go a long way in improving planning and decision making. The second situation is more difficult; here changed incentives are essential in order to reward honesty and punish deception, where today's incentives often do the exact opposite. Thus two main measures of reform will be considered below: (1) better forecasting methods, and (2) improved incentive structures, with the latter being more important.

Thus the following types of forecasting are introduced for each SharePoint object under the governance umbrella.

Naïve Bayes

The Naive Bayes algorithm is based on conditional probabilities. It uses Bayes’ Theorem, a formula that calculates a probability by counting the frequency of values and combinations of values in the historical data.

Bayes’ Theorem finds the probability of an event occurring given the probability of another event that has already occurred. If B represents the dependent event and A represents the prior event, Bayes’ theorem can be stated as follows.

Prob(B given A) = Prob(A and B)/Prob(A)

To calculate the probability of B given A, the algorithm counts the number of cases where A and B occur together and divides it by the number of cases where A occurs alone.
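A minimal sketch of that counting step follows. The events and the records are hypothetical (think of A as a prior condition observed on a site and B as a dependent outcome), and the example simply estimates Prob(B given A) from frequencies as described above.

```python
# Each record is (A, B): A = prior event observed, B = dependent event observed.
# The events and data are purely illustrative.
history = [
    (True, True), (True, False), (True, True), (False, False),
    (False, True), (True, True), (False, False), (True, False),
]

count_a = sum(1 for a, b in history if a)
count_a_and_b = sum(1 for a, b in history if a and b)

# Prob(B given A) = Prob(A and B) / Prob(A), estimated by counting.
prob_b_given_a = count_a_and_b / count_a
print("P(B | A) = %d/%d = %.2f" % (count_a_and_b, count_a, prob_b_given_a))
```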

Simple Moving Average

A simple moving average is the easiest and most popular technical indicator.

The simple moving average is calculated by taking the arithmetic mean of a given set of data values. For example, the basic 5-day moving average of 5, 6, 7, 8, 9 is (5+6+7+8+9)/5 = 35/5 = 7.0.

As new values become available, the oldest data points must be dropped from the set and new data points come in to replace them. For example, when the new value 4 arrives and the oldest value 9 is dropped, the 5-day moving average becomes (4+5+6+7+8)/5 = 30/5 = 6.0.

4 is the newest data point, and it has replaced 9. Thus, the data set is constantly "moving" to account for new data as it becomes available. This ensures that only the most current information is being accounted for.
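A minimal sketch of that rolling window is shown below; the values mirror the example above, with the oldest observation first so it is the first to be dropped.

```python
from collections import deque

def simple_moving_average(values, window=5):
    """Yield the moving average once the window is full; oldest values drop out."""
    buf = deque(maxlen=window)
    for v in values:
        buf.append(v)
        if len(buf) == window:
            yield sum(buf) / window

print(list(simple_moving_average([9, 8, 7, 6, 5, 4])))  # [7.0, 6.0]
```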

Weighted Moving Average

A weighted moving average is simply a moving average that is weighted so that more recent values are more heavily weighted than values further in the past.

The commonest type of weighted moving average is exponential smoothing. The calculation is quite simple:

P₀ + αP₁ + α²P₂ + α³P₃ + ⋯ + αⁿPₙ + ⋯

where α, the smoothing factor, is more than zero and less than one, P₀ is the latest value on which the moving average is being calculated, and Pᵢ is the value i periods previously (usually i days ago).
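A minimal sketch of that weighted sum follows. Note one assumption: the sketch divides by the sum of the weights so the result stays on the same scale as the data; the formula above shows only the weighted sum itself.

```python
def weighted_moving_average(values, alpha=0.5):
    """values[0] is the latest observation P0; values[i] is the value i periods ago.

    Computes P0 + a*P1 + a^2*P2 + ... and divides by the sum of the weights
    (a common normalization; the raw weighted sum is simply the numerator).
    """
    weights = [alpha ** i for i in range(len(values))]
    weighted_sum = sum(w * p for w, p in zip(weights, values))
    return weighted_sum / sum(weights)

# Latest value first: P0 = 9, P1 = 8, and so on.
print(round(weighted_moving_average([9, 8, 7, 6, 5], alpha=0.5), 3))
```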

Exponential Smoothing

This is a very popular scheme for producing a smoothed time series. Whereas in simple moving averages the past observations are weighted equally, exponential smoothing assigns exponentially decreasing weights as the observations get older.

In other words, recent observations are given relatively more weight in forecasting than the older observations.

In the case of moving averages, the weights assigned to the observations are the same and are equal to 1/N. In exponential smoothing, however, there are one or more smoothing parameters to be determined (or estimated) and these choices determine the weights assigned to the observations.

This smoothing scheme begins by setting S₂ to y₁, where Sᵢ stands for the smoothed observation (or EWMA) and y stands for the original observation. The subscripts refer to the time periods 1, 2, …, n. For the third period, S₃ = αy₂ + (1 − α)S₂; and so on, with Sₜ = αyₜ₋₁ + (1 − α)Sₜ₋₁ in general. There is no S₁; the smoothed series starts with the smoothed version of the second observation.
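A minimal sketch of that recursion, with an arbitrary sample series and smoothing factor:

```python
def exponential_smoothing(y, alpha=0.3):
    """Return the smoothed series S2..Sn for observations y1..yn (y is a 0-indexed list)."""
    s = [y[0]]                                           # S2 = y1
    for t in range(1, len(y) - 1):
        s.append(alpha * y[t] + (1 - alpha) * s[-1])     # St = a*y(t-1) + (1-a)*S(t-1)
    return s

obs = [10, 12, 13, 12, 15, 16, 14]
print([round(v, 2) for v in exponential_smoothing(obs, alpha=0.5)])
```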

Adaptive Rate Smoothing

Adaptive rate smoothing is a statistical forecasting technique that takes variations into account through a coefficient. This coefficient is allowed to fluctuate with time to reflect significant changes in the pattern of the activity or phenomenon being studied. Adaptive exponential smoothing is an extended version of exponential smoothing.
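One common formulation of this idea is an adaptive-response-rate scheme in the style of Trigg and Leach, where the smoothing rate follows a tracking signal built from recent forecast errors. This particular scheme is an assumption on my part (the description above does not name a specific method), and the data are illustrative.

```python
def adaptive_rate_smoothing(y, phi=0.2):
    """Adaptive-response-rate exponential smoothing (Trigg & Leach style sketch).

    The smoothing rate alpha_t is recomputed each period from a tracking
    signal, so it rises when forecasts drift off pattern and falls when the
    series is stable.
    """
    forecast = y[0]            # seed the first forecast with the first observation
    smoothed_err = 0.0
    smoothed_abs_err = 1e-9    # avoid division by zero before errors accumulate
    forecasts = [forecast]
    for actual in y[1:]:
        err = actual - forecast
        smoothed_err = phi * err + (1 - phi) * smoothed_err
        smoothed_abs_err = phi * abs(err) + (1 - phi) * smoothed_abs_err
        alpha = abs(smoothed_err / smoothed_abs_err)   # tracking signal in [0, 1]
        forecast = forecast + alpha * err              # = alpha*actual + (1-alpha)*forecast
        forecasts.append(forecast)
    return forecasts

print([round(f, 2) for f in adaptive_rate_smoothing([10, 12, 11, 13, 20, 22, 23])])
```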

All these types of averages must be, and are, baked into the final code for the software to be complete. By keeping the forecasting approaches generic, each of the relevant SharePoint governable objects can be targeted.

Next Post In Series >> Leveraged Metric Constraints And Building Governance Profiles (coming soon!)

Upcoming Posts In Series >> Using Riadenia™SharePoint Governance Automation (coming soon!)

Read More About Initial SharePoint Governance Software Experiments
