Economics of SharePoint Governance – Part 21 – Relevant Governance Costs And Management of Governance Costs

Most of the data for cost analysis will have been compiled by formal accounting systems which may have been designed for tax-reporting rather than managerial decision-making purposes. One can of course employ the straight accounting data as reported to estimate what I shall call an “accounting cost” function, but the decision maker is urged to recognize and remember, when using it for simulation purposes, that it overstates some costs and understates others. Also, any cost function estimated from historic data will be valid for simulation of future circumstances only if historic patterns persist into the future.

The analyst will have to make adjustments to the accounting cost data to estimate a relevant-cost function. No hard data exist at all about opportunity costs (what the firm is not presently doing but could be doing); only the managerial decision maker can estimate the opportunity costs based upon awareness and perceptions of the foregone alternatives. Needless to say, different managers will have different perceptions and assessments of the available opportunities, and some managers will be more successful than others because of their successful recognition of the pertinent opportunities. The well-designed cost accounting system should recognize temporal cost-output mismatching, but if it does not then some effort must be exerted to allocate certain costs (e.g., repair and maintenance expenses) to the proper time periods.

If the objective is to estimate a short-run cost function, care must be taken to exclude all overhead costs. This may be especially difficult if the cost-accounting system obscures the distinction between some direct and some overhead costs. For example, maintenance and repair expenses often are lumped together (they are typically performed by the same in-house crews or outside contractors), but regular maintenance expenses (presumably at approximately the same expense levels period after period) should be treated as fixed costs. However, repair expenses, to the extent that they are not regular occurrences, should be treated as variable costs. Then there is the problem of matching the repair expense to the appropriate time period since the circumstances culminating in the need for the repair may have spanned several production periods.

Depreciation is an especially knotty problem for the determination of relevant costs. Should it be treated as a variable cost to be included in the estimation of a short-run cost relationship? I have already made reference to the likely divergence between the accounting allowance for decrease in the value of the capital stock and the real phenomenon of capital consumption. Real capital consumption surely does vary with the rate of production, which may be quite uneven from period to period. But since it is not possible to know with certainty in advance what will be the actual physical life of the plant or equipment, it may not be possible to meaningfully impute a cost value to the amount of capital consumption to be allocated as a direct cost to a particular production period.

The accountant handles this problem by selecting a depreciable life (on the basis of past experience, as permitted by the tax authority’s rules, or simply arbitrarily), and designing an allowance schedule which may be straight-line, accelerated, or retarded (again, as permitted by the tax authority’s rules). If a straight-line allocation method (the same proportion of the capital value charged as cost during each period) is used, then a case can be made for excluding depreciation from the list of costs relevant to the short run, because even though it represents a direct cost it does not vary with output. But even if an accelerated or retarded depreciation schedule is used, the proportions of the capital value to be allowed as costs change smoothly, predictably, and monotonically (a mathematical term meaning in one direction only), and hence are unlikely to correspond to real variations in output and the capital consumed in producing it. Given these problems, then, a case can be made for excluding depreciation from the estimation of a short-run cost function, but including repair expenses (properly matched to time periods) as mirroring the real consumption of capital, i.e., the expenditure necessary to restore the real decrease in productive capacity caused by use of the capital.
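
To make the contrast concrete, here is a minimal sketch (in Python) of a straight-line schedule versus an accelerated double-declining-balance schedule. The capital value and depreciable life are hypothetical, and note that neither schedule depends on the output actually produced in any period.

```python
# Minimal sketch: straight-line vs. double-declining-balance depreciation.
# The capital value and life are hypothetical illustrations.
capital_value = 100_000.0
life = 5  # depreciable life in periods

straight_line = [capital_value / life] * life   # same proportion every period

ddb, book = [], capital_value
rate = 2.0 / life                               # accelerated (double-declining) rate
for _ in range(life):
    charge = book * rate
    ddb.append(charge)
    book -= charge

print([round(c) for c in straight_line])  # [20000, 20000, 20000, 20000, 20000]
print([round(c) for c in ddb])            # [40000, 24000, 14400, 8640, 5184]
```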

Another short-run cost estimation problem occurs in time series data if there has been non-trivial variation in the prices of the variable inputs. Short-run costs certainly vary with the prices of inputs, so the analyst must make a methodological choice: either choose to “deflate” each of the TVC components (i.e., TLVC, TMVC, etc.) by a price index appropriate to that cost, or choose to include as other independent variables (i.e., the X1, X2,…, Xn in equation C6-1) the prices of the inputs which are thought to be varying significantly. The regression analysis will reveal which in fact are statistically significant determinants of TVC.
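
As a rough illustration of the two methodological choices, the sketch below (Python, with entirely hypothetical series names and values such as tlvc, tmvc, wage_index, and materials_index) deflates each TVC component by its own price index in one specification and includes the input price indexes as additional regressors in the other; the regression p-values then suggest which prices appear to be statistically significant determinants of TVC.

```python
# A minimal sketch of the two approaches described above; all series are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 30                                   # e.g., 30 monthly observations
q = rng.uniform(100, 200, n)             # output
wage_index = np.linspace(1.00, 1.15, n)  # input price indexes (base = 1.00)
materials_index = np.linspace(1.00, 1.25, n)
tlvc = 5.0 * q * wage_index + rng.normal(0, 20, n)       # total labor variable cost
tmvc = 3.0 * q * materials_index + rng.normal(0, 20, n)  # total materials variable cost

# Approach 1: deflate each TVC component by its own price index, then regress on Q.
tvc_real = tlvc / wage_index + tmvc / materials_index
X1 = sm.add_constant(q)
print(sm.OLS(tvc_real, X1).fit().summary())

# Approach 2: leave TVC in nominal terms and include the input prices as regressors;
# the p-values indicate which prices are statistically significant determinants.
tvc_nominal = tlvc + tmvc
X2 = sm.add_constant(np.column_stack([q, wage_index, materials_index]))
print(sm.OLS(tvc_nominal, X2).fit().pvalues)
```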

Theoretically, the physical units produced (or some multiple) should be used for Q in the regression model. A problem arises if the firm produces a multiplicity of products, some of which are jointly produced. In this case, Q may have to be the “product mix” normally resulting from the production process. If the market values of the jointly-produced products differ significantly, it may be necessary to use a composite index of the jointly-produced outputs where the weights are the current market prices of the products. Finally, if the objective is to estimate a firm-wide cost curve where the firm is producing multiple products, it may be desirable to use data for value of output evaluated at current market prices. If time series data are employed, both the output and input value data should be deflated by appropriate price indexes.  
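
A minimal sketch of the composite-index idea, with hypothetical products and prices: each period’s output quantities are weighted by current market prices to yield a single Q series.

```python
# Price-weighted composite output index for jointly produced products (hypothetical data).
import numpy as np

quantities = np.array([
    [120, 45, 300],   # period 1 output of products A, B, C
    [135, 40, 310],   # period 2
    [128, 50, 295],   # period 3
])
current_prices = np.array([10.0, 25.0, 2.5])  # current market prices used as weights

composite_q = quantities @ current_prices      # one Q value per period
print(composite_q)
```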

The alternative heading for this section might be “Getting Enough Useable Data.” In sourcing his data, the analyst is first confronted with a choice between time-series and cross-sectional data. A short-run cost function assumes given technology, managerial capacity, and entrepreneurial abilities. Much recommends the cross-sectional choice for short-run cost estimation since the data are taken across firms or plants, but at a point in time. There is thus no problem of dealing with changes of plant size or of technology, managerial capacity, or entrepreneurial ability. It is perhaps easier to identify the use of the same technology than to find comparable amounts of managerial capacity and entrepreneurial ability in different firms. In this regard, the analyst will simply have to exercise judgment. Alas, there remains a critical problem with cross-sectional data for short-run cost estimation that is difficult to surmount for most firms: data are required from different firms or plants, but competitors often are reluctant to share such sensitive information.

The use of time series data for the firm’s own plant(s) obviates the need to solicit data from competitors. Here the analyst must take care to choose a time span which is not so long that there have occurred changes in technology, management, or entrepreneurship; else data will be for points on different cost functions. This may be a period covering several years, but may be as short as a few months. Then the analyst must divide the period into an adequate number of data collection intervals to yield enough observations for estimation of a statistically-significant cost function. Usually twenty to thirty observations are adequate for this purpose. The duration of the data-collection period may then dictate daily, weekly, or monthly observation intervals. A new short-run cost function must be estimated every time that technology, management, or entrepreneurial ability changes.

In the estimation of a long-run cost function, all costs should be included. It may not be feasible, however, to use a time-series approach since the intervals should be long enough to permit variation in plant size (but still no changes in management or entrepreneurship). In order to get enough long-interval observations, the duration of data collection may have to span several years. Alterations in management or entrepreneurship during the data-collection period will result in points on different long-run cost functions. These conditions may be heroic at best. The use of cross-sectional data, if they can be obtained from other firms and plants, may accommodate requisite variations in plant size, but care must be taken in selection of subjects to avoid different technologies, managerial capacities, or entrepreneurial abilities. Again, these may be heroic conditions.

Whether the analyst is attempting to estimate short- or long-run cost functions, the necessary premise is that each plant is being operated efficiently, with no significant waste of any resources, i.e., at an appropriate point on its cost curve. If at any time or in any plant included in the sample there are wasteful conditions, the observed data will be for points above the locus of the firm’s true cost curve. If this circumstance occurs in more than a few instances, the estimated cost function will lie above its theoretically true (efficient) location, and will yield erroneous conclusions in simulation exercises.

Whether the management of the firm develops a cost-function simulation model or not, the cost-related job involves many facets:

(1) In the long run, selecting the most efficient technology for producing the enterprise’s selected products;

(2) With that technology, selecting the scale of plant with an output range which is most compatible with current and expected future levels of demand for the products;

(3) Given the right scale of plant, selecting the appropriate output level to meet the enterprise’s goals (profit maximization, cost minimization, optimization, etc.);

(4) For the target level of output, selecting the appropriate internal allocation of the enterprise’s resources, i.e., the most efficient combination of inputs, given the available input prices;

(5) Operating efficiently and without waste, i.e., operating at points on the enterprise’s production function surface (not below it) and on its respective cost curve (not above it).

Furthermore, economists can argue that if the goal of the enterprise is to maximize profits, and if it does so operate to maximize profits without monopolizing its markets or exploiting its resources, it will also meet a desirable social objective of efficiency in the allocation of resources among industries, among firms within the industries, and between products. Assuming that monopolization and exploitation are averted (admittedly a serious problem), this happy circumstance can be expected to emerge even if social well-being and social economic efficiency are not explicitly managerial goals of the enterprise.

 


Economics of SharePoint Governance – Part 14 – The Existence and Strength of the Relationship (Multicollinearity, Autocorrelation, And Heteroskedasticity)

For our purposes, the evaluation criteria developed by the inference analysis will be divided into two groups. The first, usually described as “analysis of variance,” yields four evaluation criteria that may serve as bases for inferences about the existence, strength, and validity of a regression model:

(a) The correlation coefficient, r or R, measures the degree of association or covariation between the dependent variable and an independent variable in the regression model. One such simple correlation coefficient may be computed between the dependent variable and each independent variable in the model, and in a multivariate model between each pair of independent variables. Computerized statistical systems usually produce a matrix of such simple correlation coefficients so that the analyst may ascertain the correlations between the dependent and each of the independent variables, as well as among the independent variables (as illustrated on pages following). If the model contains more than a single independent variable, a so-called multiple correlation coefficient, R, may also be computed to assess the over-all association between the dependent variable and all of the included independent variables taken together. The domain of r is from -1 to +1, with positive values implying direct relationships, and negative values indicating inverse relationships. Values of r near the extremes of this range imply near perfect inverse or direct relationship between the two variables, depending upon sign. Values of r in the neighborhood of zero (positive or negative), however, imply no statistically identifiable relationship between the variables.
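
For readers who want to see such a matrix produced, here is a minimal sketch using hypothetical data; the off-diagonal entries are the simple correlation coefficients between each pair of variables.

```python
# Correlation matrix for a dependent variable and two independent variables (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
n = 30
x1 = rng.normal(size=n)                       # independent variable 1
x2 = 0.5 * x1 + rng.normal(size=n)            # independent variable 2 (partly related to x1)
y = 2.0 * x1 - 1.0 * x2 + rng.normal(size=n)  # dependent variable

# Rows/columns are ordered y, x1, x2; each off-diagonal entry is a simple r.
corr_matrix = np.corrcoef(np.vstack([y, x1, x2]))
print(np.round(corr_matrix, 3))
```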

(b) The coefficient of determination, r2, is interpreted as the proportion of the variation in the dependent variable data that can be statistically explained by data for the independent variable for which the r2 is computed. In a multivariate regression model, a coefficient of multiple determination, R2, may be computed; if there is only one independent variable in the model, the computed R2 will be equal to the only simple r2. Since r2 is computed as the squared value of the correlation coefficient, r2 is unsigned, and falls within the range of zero to +1. Although computed values of r and r2 contain essentially the same information (except for differences in sign), and each implies a value of the other, many analysts prefer to focus attention on r2 because of its determination interpretation.

The interpretation of any computed r2 statistic is subjective, and hence open to dispute. For example, how high (toward unity) does the r2 have to be in order for the analyst to infer the existence and strength of a relationship? How low (toward zero) can an r2 statistic be before the analyst may draw the inference that no statistically identifiable relationship exists between the dependent and an independent variable? Analysts in the natural sciences often expect r2 values in excess of 0.9 (or even higher) to indicate the existence of a useable relationship.

Because of the degree of randomness, capriciousness, and ignorance that may characterize human decision making and behavior in the aggregate, a social scientist may defensibly judge an r2 that is in excess of 0.7 (or perhaps even somewhat lower) to be indicative of a statistically meaningful relationship. But most analysts are skeptical of the existence of a statistically meaningful relationship if the r2 between the dependent and independent variables is below 0.3 (in statistical jargon, the null hypothesis, i.e., that there is no relationship, is supported). For our purposes, the r2 values of 0.3 and 0.7 will be taken as evaluation benchmarks: values of r2 in excess of 0.7 are sufficient to reject the null hypothesis; values below 0.3 support the null hypothesis. But the reader should be aware that both of these values are rather arbitrarily selected and are subject to challenge.

Assuming that these values will serve satisfactorily as evaluation criteria, what of the r2 range between 0.3 and 0.7? This constitutes a statistical “no-man’s land” wherein no strong inferences can be drawn about either the existence or the non-existence of a statistically meaningful relationship between two variables. The interpretation of r2 values below 0.3 or in excess of 0.7 may be further refined with reference to the statistical significance of the regression model and particular variables within it.

(c) The computed F-statistic may be used as a basis for drawing an inference about the statistical significance of a regression model. Once the F-value is computed, the analyst must consult an F-distribution table (which may be found in any statistical source book or college-level statistics text). For the particular regression model under consideration, its computed F-value may be compared with F-distribution table values in the column corresponding to the number of degrees of freedom in the numerator (the number of independent variables in the model), and on the row corresponding to the number of degrees of freedom (DF) of the regression model. The DF of a regression model is the number of observations less the number of variables (dependent and independent) in the model. For a regression analysis including a dependent variable and two independent variables conducted with 60 observations, the DF is 57. If the regression model’s computed F-value exceeds the F-distribution value read from the appropriate column and row in the table, the analyst may infer that the model is statistically significant at the level indicated in the heading of the table for the F-distribution (usually .05, i.e., only 5 chances in 100 that the model is spurious).

Suppose that the computed F-value for a regression model is 4.73, with 60 degrees of freedom. An F-distribution table reveals that the F-value required for significance at the .05 level is 4.00; the F-value required for the .01 significance level is 7.08. These findings support the inference that the regression model is statistically significant at the .05 level, but not at the .01 level.
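
The same lookup can be performed without a printed table; the sketch below uses scipy’s F-distribution to reproduce the critical values in the example (the 4.73 figure and the degrees of freedom are taken from the example above, assuming one numerator degree of freedom).

```python
# Reproducing the F-table lookup from the worked example with scipy.
from scipy import stats

f_computed = 4.73
df_num, df_den = 1, 60                    # numerator and denominator degrees of freedom

f_crit_05 = stats.f.ppf(1 - 0.05, df_num, df_den)  # ~4.00
f_crit_01 = stats.f.ppf(1 - 0.01, df_num, df_den)  # ~7.08

print(f_computed > f_crit_05)  # True  -> significant at the .05 level
print(f_computed > f_crit_01)  # False -> not significant at the .01 level
```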

For many less-consequential forecasting purposes, most analysts probably would be willing to accept (though with hesitancy) a regression model with r2 of 0.7 and statistical significance at the .05 level. If truly consequential decisions are to be based upon the regression model forecasts, the analyst may not be willing to use any model for which r2 is less than some very high value (like 0.9 or 0.95), with statistical significance below some very low level (like 0.01 or 0.001).

(d) The standard error of the estimate (SEE) may be used to specify various confidence intervals for forecasts made with the regression model. Realistically speaking, the likelihood that the actual value at some forecast-target date will fall precisely on the value estimated in a regression model is nearly zero. In other words, the forecasted value is a “point estimate” that the analyst hopes will be close to the actual value when it finally occurs. The computed SEE specifies a numeric range on either side of the point estimate within which there is an approximate 66 percent chance that the actual value will fall. Two other confidence intervals are also conventionally prescribed. There is a 95 percent probability that the actual value will lie within a range of two standard errors of the forecasted point estimate, and a 99 percent probability that the actual value will lie within three SEEs of the forecasted value. For example, suppose that a regression model forecasts a point estimate of 732 for the target date, with an SEE of 27. The 66 percent confidence interval may thus be computed as the range from 705 to 759 (i.e., 732 +/- 27); the 95 percent confidence interval is from 678 to 786; and the 99 percent confidence interval is from 651 to 813. It should be apparent that the higher the required confidence in the forecasts made with a regression model, the wider will be the range within which the actual value likely will fall. As a general rule, the SEE will be smaller the higher the r2 and the lower (better) the significance level of the regression model. Other things remaining the same, a regression model with a smaller SEE is preferable to one with a larger SEE. In any case, the analyst reporting the results of a regression model forecast would do better to specify confidence intervals rather than a single-valued point estimate that will almost certainly not occur.
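
The interval arithmetic in the example can be expressed in a few lines; this is only the 732 +/- k × 27 calculation above, not a general forecasting routine.

```python
# Confidence intervals around the point estimate in the worked example.
point_estimate = 732
see = 27

for k, label in [(1, "~66%"), (2, "~95%"), (3, "~99%")]:
    low, high = point_estimate - k * see, point_estimate + k * see
    print(f"{label} confidence interval: {low} to {high}")
# ~66%: 705 to 759, ~95%: 678 to 786, ~99%: 651 to 813
```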

Certain inference statistics are also computed for purposes of assessing the statistical significance of the regression coefficient (b, the estimated slope parameter) of each included independent variable.

(a) The standard error of the regression coefficient, SEC, is computed for each of the slope parameters estimated by the regression procedure. Unless the entire universe of values (all that have or can exist) is available for all variables included in a regression model, the regression analysis can do no more than construct an estimate of the true slope parameter value from the sample of data currently available. By its very nature, time series regression analysis could never encompass the entire span of time from the “beginning” through all eternity. Data for various finite time spans will thus yield differing estimates of the true parameter of relationship between any two variables. The hope of the analyst, and one of the premises upon which regression analysis is erected, is that all such estimated parameter values will exhibit a central tendency to converge upon the true value of relationship, and that any single estimated regression coefficient will not be very far from the true value.

All such regression coefficient estimates are presumed to constitute a normally-distributed population for which a standard deviation (a measure of average dispersion about the mean) may be computed. This particular standard deviation is called the standard error of the regression coefficient. It may be used to specify the 66, 95, and 99 percent confidence intervals within which the true coefficient value is likely to lie. As a general rule, the smaller the value of the SEC relative to its regression coefficient, the more reliable is the estimate of the regression coefficient.

(b) The t-value may be computed for each regression coefficient. In generalized inferential analysis, the “student’s t-test” may be used to test for the significance of the difference between two sample means. Applied to regression analysis, the t-value may be used to test for the significance of the difference between the estimated regression coefficient and the mean of all such regression coefficients that could be estimated. Since the latter is unknowable, the t-value is usually computed for the difference between the estimated regression coefficient and zero. As such, it can only be used to ascertain the likelihood that the estimated regression coefficient is non-zero.

Once the t-value for a regression coefficient is computed, the analyst may consult a student’s t-distribution table on the appropriate DF row to see where the computed t-value would lie. The t-table value just below the computed t-value identifies the column in the t-distribution table that specifies the significance level of the test. Suppose that the absolute value (unsigned) of the computed t-value for an estimated slope parameter is 1.73, with 60 degrees of freedom. A t-distribution table would show 1.73 lying between the values 1.671 and 2.000 on the 60 degree-of-freedom row. The heading of the column containing the value 1.671 is the 0.1 significance level, implying that there is only one chance in ten that the estimated regression coefficient is not different from zero.

As a general rule, the lower the significance level of a regression coefficient, the more reliable are the forecasts that can be made using the model containing the independent variable for which the regression coefficient was estimated. For especially consequential decision making, the analyst may not be willing to retain any term in a regression forecasting model that is statistically significant above the 0.01 level. Since the t-value may be computed as the ratio of the estimated regression coefficient to its computed SEC, a rule of thumb may be prescribed that permits the analyst to avoid reference to a t-distribution table. If the absolute value of an estimated regression coefficient exceeds its computed SEC, the analyst may infer that the regression coefficient is statistically significant at the 0.33 level or lower. If the regression coefficient is more than twice the magnitude of its SEC, this implies a 0.05 significance level for the coefficient. Likewise, if b exceeds its SEC by a factor of 3, the implied significance level is below 0.01.
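
A minimal sketch of the rule of thumb, together with the exact significance level implied by the earlier example (t = 1.73 with 60 degrees of freedom); the coefficient and SEC values themselves are hypothetical.

```python
# Rule of thumb (|b| relative to its SEC) plus the exact lookup via scipy.
from scipy import stats

b, sec = 1.73, 1.0          # hypothetical coefficient and its standard error
t_value = abs(b / sec)

# |b| > SEC  -> roughly the 0.33 level; > 2*SEC -> ~0.05; > 3*SEC -> below 0.01.
print(t_value > 1, t_value > 2, t_value > 3)

# Exact two-tailed significance level implied by the computed t-value:
df = 60
p_two_tailed = 2 * (1 - stats.t.cdf(t_value, df))
print(round(p_two_tailed, 3))   # ~0.09, consistent with the 0.1-level table lookup above
```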

There are several possible problems that may emerge in the multiple regression model context. All are consequences of violation of one or another of the assumptions or premises that underlie the multiple regression environment. Adjustments may be made to the data or the analysis to deal with some of the problems, but in other cases the analyst should simply be aware of the likely effects.

The most fundamental of the multiple regression assumptions is that the independent variables are truly independent of one another. Multicollinearity may be identified by the presence of non-trivial correlation between pairs of the independent variables included within the model. Multicollinearity may be detected by examining the correlation matrix for all of the variables contained in the model.

Multicollinearity is almost certain to be present in any autoregressive or polynomial regression model of order higher than 1st. Because the successive terms in a kth-order autoregressive model use essentially the same data as the first term, except shifted by some number of rows, the assumption of independence among the “independent” variables is clearly violated. Likewise, because the successive terms in a kth-order polynomial model employ the same data as the first term, except as raised to successively higher powers, the assumption of independence again is clearly violated.

Multicollinearity may also be present among the different independent variables included in the model, even if they are not autoregressive with the dependent variable, and even if they are each only 1st order. If two independent variables are linearly similar, i.e., highly correlated with each other, it is as if the same variable were included two times in the model, thereby contributing its explanatory power twice, and thus amounting to so much “deck stacking.” The usual effect of the presence of non-trivial multicollinearity is to inflate the standard errors of the coefficients of the collinear independent variables, rendering their computed t-values too low and implying excessively high (i.e., worse) significance levels for those coefficients (bad, since the lower the significance level the better).

Some statisticians prefer to remedy the presence of any non-trivial multicollinearity by removal of one or the other of the two collinear variables from the model. Others suggest that if there are good conceptual reasons for including both independent variables, they should both be retained in the model unless the multicollinearity is extreme (i.e., the correlation between the collinear independent variables approaches 1.00 in absolute value), or unless the analyst is particularly concerned about the statistical significance of either of the collinear independent variables. In this latter case, if the independent variables are time series, the analyst might try differencing the collinear independent variables and respecifying the model with the differenced series in place of the raw data series to see if significant information is contained in either of the collinear independent variables that is not also contained in the other.
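
A minimal sketch of the differencing remedy, using two hypothetical trending series that are nearly collinear in raw form but not after differencing.

```python
# Differencing two collinear time series to see whether each carries distinct information.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(40)
x1 = 10 + 0.8 * t + rng.normal(0, 1, 40)   # trending series
x2 = 5 + 0.8 * t + rng.normal(0, 1, 40)    # nearly collinear with x1 (shared trend)

print(round(np.corrcoef(x1, x2)[0, 1], 3))      # raw series: r near 1.0
dx1, dx2 = np.diff(x1), np.diff(x2)
print(round(np.corrcoef(dx1, dx2)[0, 1], 3))    # differenced series: r near 0
# If the differenced series still help explain the dependent variable, each contains
# information not contained in the other; if not, one of them can be dropped.
```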

Another of the premises underlying multiple regression modeling is that the forecast errors constitute an independent random variable, i.e., a random noise series. If there is a discernible pattern in the forecast error series, then autocorrelation is present in the dependent variable series. Autocorrelation may be detected by computing autocorrelation coefficients to some specified order of autocorrelation. Alternately, the analyst may construct a sequence plot of the forecast error series. The statistical software system may facilitate this procedure by allowing the user to have the forecast error series written to the next available empty column in the data matrix so that the sequence plot of that column may be constructed. If the forecast error series exhibits alternation of points above and below its mean, then the object series is negatively autocorrelated. Positive autocorrelation is present if the error series exhibits “runs” of points above the mean alternating with runs of points below the mean in a cyclical (or seasonal) fashion. The expected number of runs if the series is truly random noise may be estimated for comparison with the actual (by count) number of runs exhibited by the series. If the actual number of runs is smaller than the expected number, then autocorrelation almost surely is present in the dependent variable series.
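
The runs comparison can be made concrete with a short sketch; the residual series here is hypothetical random noise standing in for the model’s actual forecast errors, and the expected-runs formula is the standard one for a random two-category sequence.

```python
# Comparing the actual number of runs in a residual series with the number expected
# for a truly random series.
import numpy as np

rng = np.random.default_rng(3)
errors = rng.normal(size=60)             # replace with the model's residual series

above = errors > errors.mean()
actual_runs = 1 + np.count_nonzero(above[1:] != above[:-1])

n1, n2 = above.sum(), (~above).sum()
expected_runs = 1 + 2 * n1 * n2 / (n1 + n2)   # expected runs for a random series

print(actual_runs, round(expected_runs, 1))
# Noticeably fewer runs than expected suggests positive autocorrelation;
# noticeably more runs than expected suggests negative autocorrelation.
```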

The effect of the presence of autocorrelation within the dependent variable series is to render the r, F, and t statistics unreliable. In particular, the presence of autocorrelation will likely result in understated standard errors of the regression coefficients, thus causing overstatement of the t-values, implying better (i.e., lower) significance levels for the estimated regression coefficients than warranted. Although the estimated regression coefficients themselves are unbiased (i.e., not unduly specific to the particular data set), autocorrelation results in computed confidence intervals that are narrower than they should be.

Some degree of autocorrelation is likely present in every economic or business time series, and the analyst should probably ignore it unless it is extreme. As noted earlier, one or more autoregressive terms may constitute or be included in the regression model as the primary explanatory independent variables. If the analyst discovers the presence of non-trivial autocorrelation in a regression model that was specified without autoregressive terms, he might consider respecifying it to include one or more such terms as the means of handling the autocorrelation. The approach in this case is to try to use the autocorrelated information rather than purge it from the model.

The problem of heteroskedasticity occurs if there is a systematic pattern between the forecast error series and any of the independent variable series. Homoskedasticity is the absence of such a pattern. Whether the model exhibits the property of heteroskedasticity may be discerned by having the forecast errors plotted against data for each of the independent variables in scatter diagrams. If the scatter of the plotted points exhibits any discernible path, then heteroskedasticity is present within the model.
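
A minimal sketch of the residual-versus-regressor inspection, using hypothetical data constructed so that the error variance grows with x; the “fan” shape in the scatter is the telltale pattern.

```python
# Plotting forecast errors against an independent variable to look for heteroskedasticity.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
x = np.linspace(1, 100, 80)
y = 3.0 * x + rng.normal(0, 0.5 * x)      # noise grows with x -> heteroskedastic

slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)

plt.scatter(x, residuals)                 # a "fan" shape signals heteroskedasticity
plt.axhline(0)
plt.xlabel("independent variable x")
plt.ylabel("forecast error")
plt.show()
```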

If the regression model is non-trivially heteroskedastic, the mean squared error and the standard error of the estimate will be specific to the particular data set; another data set may yield inference statistics that diverge widely from those computed from the first data set. Likewise, the inference statistics associated with the particular independent variables (SEC, t, and significance level) will also be specific to the data set. I shall leave the matter of heteroskedasticity with a warning to the analyst about the likely consequences for his model, i.e., that its usefulness for modeling may be strictly limited to the range of data included in the object series.

This series has a lot of parts whose pieces I am quasi-using for an academic research paper, so bear with me if it gets too esoteric. Or read the other governance articles available within the SharePoint Security category on the main site (available through the parent menu).


Economics of SharePoint Governance – Part 9 – Analysis of Value And Risk

Much of the substance of governance economics is about marginal decision making affecting operations in the short-run time frame. Most of the rest of these segments elaborate the marginal criteria for short-run decision making; here, however, I am going to turn to long-run decision considerations and the analysis of risk.

In addition to the differences noted in previous posts between long and short runs, a major distinction remains to set the long-run decision problem apart from short-run decision making. Since the effects of the long-run decision can be expected to impact the enterprise in the future, some recognition must be made of the remoteness of those effects. The sense of the problem is that the expectation of a benefit to be realized in the future is worth less to the decision maker than an equivalent benefit received immediately. Specialists in finance often refer to this phenomenon as the “time value of money,” but the phenomenon would exist even in a barter economy (one which does not use money). The phenomenon is described by the ancient adage that “a bird in the hand is worth two in the bush.” The problem pertains to costs as well as to benefits: costs which must be paid at some future time are also less pressing to the decision maker than those which must be paid today, although the wise decision maker should plan and make careful arrangements to cover expected future costs.

Since future possibilities are worth less than present realities, economists have acknowledged this phenomenon by discounting the expected future values at an appropriate rate. This rate is usually taken to be the best market rate of interest for which the enterprise can qualify. The interest rate is used as a discount rate on the premise that the equivalent of the expected future value, less the cost of interest, can be had at present by borrowing the future sum. This relationship should be true whether the borrowed principal and the interest to be repaid are expressed in barter or monetary terms. If we let the symbol i stand for the interest rate which will serve as discount rate, the present value of the expected future outcome can be expressed as

PV = V1/(1 + i) + V2/(1 + i)^2 + … + Vn/(1 + i)^n,     (1)

where PV is the “present value” of an expected future income stream, and V is a predicted future net income (or return) in each of n future periods (it is “expected” if it is a probability-weighted average of all possible outcomes which may occur). The value of V is taken to be the net of the difference between the future benefit, b, and the cost of realizing it, c, or V = (b – c). PV is less than the sum of the Vs because each V is divided by a number greater than 1, i.e., 1 plus the discount rate i.

If there are no risk considerations to be taken into account, the appropriate decision procedure is to first compute, using formula (1), the present values of the expected future net benefits for each of the k outcome possibilities for each of the decision alternatives. Then the expected values of possible outcomes of each decision alternative can be computed as their probability-weighted averages using previously described formulas into which the computed present values, PV, are substituted for the outcomes, V. The proper decision criterion is to choose the alternative with the largest present expected value of the possible outcomes.
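
A minimal sketch of this two-step procedure, with entirely hypothetical probabilities and return streams: each possible outcome stream is discounted with formula (1), and the resulting PVs are probability-weighted for each alternative.

```python
# Expected present value of each decision alternative (hypothetical figures).
def present_value(net_returns, i):
    """PV = sum of V_t / (1 + i)^t for t = 1..n, as in formula (1)."""
    return sum(v / (1 + i) ** t for t, v in enumerate(net_returns, start=1))

i = 0.08
alternatives = {
    # each alternative: list of (probability, stream of net returns V1..Vn)
    "A": [(0.6, [100, 110, 120]), (0.4, [60, 60, 60])],
    "B": [(0.5, [150, 90, 50]),  (0.5, [80, 80, 80])],
}

for name, outcomes in alternatives.items():
    expected_pv = sum(p * present_value(v, i) for p, v in outcomes)
    print(name, round(expected_pv, 2))
# Criterion: choose the alternative with the largest expected present value.
```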

In the present value formula (1), values for V and i are known or estimated, and the value of PV is computed. An alternative version of the present value formula permits computing the so-called internal rate of return:

K = V1/(1 + r) + V2/(1 + r)^2 + … + Vn/(1 + r)^n.     (2)

In formula (2), K is the capital outlay required to acquire and install an investment opportunity (since it is known today, it is the opportunity’s “present value”), and r is the internal rate of return on the investment opportunity. In this approach, the value of K and the values for V are known (or can be estimated), but the value of r is to be computed. Actually, equation (2) cannot be easily solved for r directly, but r may be estimated by successive approximation from annuity tables.

If we may abstract from the possibility of a scrap value of the capital at the end of its useful life, n, the conceptual sense of r is the discount rate which would be just sufficient to make the sum of the returns, V1…Vn, just equal to the capital outlay, K. The discount rate, r, is interpreted as a rate of return because if all of the net amounts represented by V were invested at an interest rate equal to r, they and their interest earnings would add up to the capital outlay, K. The investment criterion which justifies undertaking the investment opportunity is that its rate of return, r, is at least as great as the best market interest rate, i, for which the enterprise can qualify, or r >= i.  
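
Since equation (2) cannot be solved directly for r, a short successive-approximation (bisection) routine can stand in for the annuity tables; the outlay and returns below are hypothetical.

```python
# Estimating the internal rate of return r by successive approximation (bisection):
# the discount rate at which the discounted returns just equal the capital outlay K.
def npv_gap(rate, outlay, returns):
    return sum(v / (1 + rate) ** t for t, v in enumerate(returns, start=1)) - outlay

def internal_rate_of_return(outlay, returns, lo=0.0, hi=1.0, tol=1e-6):
    # assumes the gap is positive at lo and negative at hi
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv_gap(mid, outlay, returns) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

K = 1000.0
V = [400.0, 400.0, 400.0]           # V1..Vn
r = internal_rate_of_return(K, V)
print(round(r, 4))                   # ~0.097; undertake the opportunity only if r >= i
```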

If we may assume that the predicted net returns, V1…Vn, can be estimated, the internal rates of return can be computed for each prospective investment opportunity currently under consideration by the enterprise’s management. For example, suppose that the management of an enterprise is considering five prospective investment opportunities, designated A through E, each requiring a known capital outlay and promising a particular internal rate of return, and that these opportunities have been arrayed in descending order of internal rates of return. When the capital outlays are “stacked” from left to right on a set of coordinate axes with capital expenditures on the horizontal axis and internal rates of return on the vertical axis, the plotted points A through E constitute a downward sloping path which John Maynard Keynes called the “marginal efficiency of capital” (The General Theory of Employment, Interest, and Money, Harcourt, Brace & World, Inc., 1964, p. 135). Suppose that the enterprise can borrow funds from the capital markets at an interest rate i = 8 percent, which would appear as a horizontal line on the same diagram. The marginal investment criterion then is to undertake additional investment opportunities in descending order of rates of return as long as the rates of return are at least as high as the interest rate, i, or r >= i. If, say, opportunities A, B, and C promise rates of return above 8 percent while D and E do not, then A, B, and C ought to be undertaken; D and E ought to be rejected because their rates of return would not be sufficient to pay the interest on borrowing to finance them.

We should note that this long-run investment decision criterion is appropriate even if the enterprise can finance its capital outlays from internal sources. The rationale is that funds accumulated internally through depreciation allowances and retained earnings have been (or could have been) “invested” on financial capital markets earning the interest rate, i, which must now be foregone if the funds are used to finance the prospective investment opportunities instead.

