These findings should be helpful to valuation analysts. By showing that MA influences the relationship between firm value and accounting fundamentals, we hope to help analysts incorporate MA into the valuation process in a more systematic way. More specifically, valuation analysts can potentially increase the accuracy of their value estimates by estimating regression models by MA group. To demonstrate this increase in accuracy, we provide two examples in the appendix, which is found online at

Davit Adut, PhD, an assistant professor of accounting at the Albers School of Business and Economics, Seattle University, has held research appointments at American University and the University of Cincinnati. His research focuses on the effect of accounting information on executive compensation, analysts’ forecasts, and corporate governance. His publications appear in The Accounting Review, Journal of Accounting and Public Policy, and Advances in Accounting. He also serves as the senior editor for International Accounting, Auditing, and Taxation.

Marinilka Barros Kimbro, PhD, is an associate professor of accounting at the Albers School of Business and Economics, Seattle University. Her research focuses on the effect of accounting information on firm risk, shareholders’ activism, executive compensation, corruption, management accounting and logistics, and advanced techniques for fraud detection. Her publications appear in the Journal of Accounting and Public Policy, Journal of Accounting, Auditing and Finance, Asia-Pacific Journal of Accounting and Economics, Journal of International Financial Management and Accounting, International Journal of Business Performance Management, and Journal of Forensic and Investigative Accounting, among others. Her research has been covered by The Wall Street Journal, South China Morning Post, and Seattle Business Magazine, among others.

Marc Picconi, PhD, an associate professor of accounting at William and Mary’s Raymond A. Mason School of Business, served in the United States Navy for six years as a submarine officer and instructor in leadership training. Prior to joining William and Mary, he taught for seven years at Indiana University’s Kelley School of Business. His research explores how investors and analysts process accounting information, particularly in the areas of pensions and audit fees. His work has been published in The Accounting Review, the Journal of Finance, Contemporary Accounting Research, and Issues in Accounting Education. His paper, The Perils of Pensions: Does Pension Accounting Lead Analysts Astray?, won the prestigious American Accounting Association Competitive Manuscript Award in 2005.

Philipp Schaberl, PhD, an assistant professor of accounting in the Monfort College of Business at the University of Northern Colorado, has over a decade of experience teaching financial accounting to undergraduate, MACC, and MBA students. Prior to joining UNC, he taught at the University of Cincinnati and the University of Denver. His research interests include empirical archival capital markets research, financial reporting and analysis, valuation, and the role of information intermediaries. Dr. Schaberl has published his work in journals such as Financial Management, European Financial Management, Advances in Accounting, and The Value Examiner, among others.
E-mail:

Letters to the Editor

RESPONSE: How Not to Use Duff & Phelps Data, May/June 2019

By Joshua Feldman, CPA, CFE, CVA, AIAF

I read the article “How Not to Use Duff & Phelps Data.” I agree with some of Grabowski, d’Almeida, and Jacobs’s (D&P) observations, but certainly not all, as my article (Rethinking Using Arithmetic Mean Returns in Calculating Small Company Risk Premiums, The Value Examiner, November/December 2018) would indicate.

I agree that the rationale for using a one percent size premium based upon the company’s “stable history and lack of debt” is nonsensical. This would be an argument for an adjustment to the company-specific risk premium (C-SRP) and would apply equally to pre- and post-construction. I do not have much of a problem with the equity risk premium being 5.5 percent, because it is not sensitive to the method of its computation, be it arithmetic mean or geometric mean. The two percent contingency added to the post-construction rate not only seems unjustified, but unjustifiable. Let us assume there is a pipeline explosion, leak, or another mishap. Would not the owner have a separate cause of action against the pipeline to recover their damages? Not only does the probability of such an occurrence seem low, but the expected loss should also be very small if the tort system is functioning properly.

I disagree with D&P on several points. As I wrote last year, the use of their 5.60 percent size premium, based on arithmetic means and smoothing, is logically and mathematically unsupportable. People’s expectations are based on historical experience, and nobody uses arithmetic means when computing historical returns. The model used to justify arithmetic means is flawed: it treats each period as independent of the next, which may be true for periods of a year or less, but is not true for long periods. Using geometric means from 12/31/1925 to 12/31/2016, I compute a risk premium of 3.27 percent. I believe that this overstates the risk premium, based on market behavior after World War II. The post-war geometric size risk premium is 1.17 percent based on the 2019 SBBI Classic Yearbook data.

The expert’s exclusion of a C-SRP for pre-construction is probably inappropriate. The subject company is tiny, with annual revenues below half a million dollars. I have rarely seen a company of this size not have specific company risk, but your experience should be instructive. One abundantly clear risk is that the company has a single location. Forest fire, flood, and road construction are just a few of the existential risks created by one location. The financials were based on tax returns, and there is no indication that the company has an annual audit. There is no need for one since it carries almost no debt, but this makes the veracity of its results a greater risk. How are its computer systems operated? How much managerial depth does it have? I could ask a bunch of additional questions, but the point here is that, besides having no debt and decent profit margins, there is plenty of reason to believe that zero percent is unrealistically low. Increasing the discount rate for both pre-construction and post-construction with a C-SRP would lower the damages.
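To make the arithmetic-versus-geometric distinction raised above concrete, here is a minimal sketch in Python; the return series is hypothetical, not SBBI data. Compounding the same series always yields a geometric mean at or below the arithmetic mean, which is why the choice of mean matters so much for measured premiums:

    returns = [0.50, -0.30, 0.20, -0.10]  # hypothetical annual returns

    # Arithmetic mean: simple average of the period returns.
    arithmetic_mean = sum(returns) / len(returns)  # 0.075, i.e., 7.5 percent

    # Geometric mean: the constant return that compounds to the same ending wealth.
    wealth = 1.0
    for r in returns:
        wealth *= 1.0 + r  # compound wealth through each year
    geometric_mean = wealth ** (1.0 / len(returns)) - 1.0  # ~0.032, about 3.2 percent

    print(arithmetic_mean, geometric_mean)

A premium computed by arithmetic averaging will therefore generally exceed one computed geometrically, which is the direction of the gap between the arithmetic-based and geometric-based premiums discussed above.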
To illustrate the C-SRP point: a five percent C-SRP would double the five percent pre-construction cap rate and thereby halve the indicated value. D&P’s CAPM argument holds no water here. The tree company is not public, so there is no reliable way to compute its value using CAPM. Furthermore, the company’s size, industry, and location make it an unlikely target for a public company or private equity group to acquire. The significance of that is that the hypothetical buyer cannot be assumed to be able to fully diversify away the company’s specific risks.

It is curious how D&P supports their overblown small company risk premium but pooh-poohs specific company risk. In defense of size premiums, we see: “Small firms have risk characteristics that differ from those of large firms, including the ability to enter the market, take market share and respond to changes in the market. Large firms generally have more resources to weather economic downturns, spend more on advertising and R&D, hire top talent, and access capital and a larger customer base.” On C-SRP, they quote the Delaware Court of Chancery: “…However, the Build-up Method typically incorporates heavy dollops of what is called ‘company-specific risk,’ the very sort of unsystematic risk that the CAPM believes is not rewarded by capital markets and should not be considered in calculating the cost of capital…” You as valuators should ask yourselves: where is the risk differential bigger, between a one-hundred-billion-dollar public company and a one-hundred-million-dollar public company, or between a one-hundred-million-dollar public company and a one-million-dollar private company? I put my money on the latter, but you can decide for yourselves.

D&P’s continued recommendation of a “normalized” risk-free rate as opposed to the spot rate is without any theoretical foundation. The risk-reward concept is based upon substitutability. You cannot substitute 3.5 percent bond rates when they do not exist in today’s market. Sorry. After more than ten years of advocating this methodology while consistently over-projecting interest rates, D&P may want to consider abandoning it.

No discussion is made of the analyst’s use of a presumed five percent growth rate. Wayne County, Ohio’s population grew from 111,564 in 2000 to 114,520 as of April 1, 2010, or 0.262 percent annual growth. It can be noted that annual population growth from 2010–2018 has been about 0.15 percent. U.S. population growth was 0.931 percent per annum between 2000 and 2010. So, the growth rate should be reduced by at least 0.67 percentage points based on population growth alone. The regional economy is significant in this industry because trees are too costly to ship long distances due to their weight, bulk, and relatively low value. The use of the national average is not a reasonable estimate for this business.
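For reference, the county figure follows from the standard compound annual growth rate over the ten years between the census counts (decennial census counts are as of April 1):

$$g=\left(\frac{114{,}520}{111{,}564}\right)^{1/10}-1\approx 0.00262=0.262\%\ \text{per annum}$$

The same formula applied to U.S. totals yields the 0.931 percent figure, and the difference (0.931 − 0.262 ≈ 0.67) is the percentage-point reduction argued for above.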
TAKEAWAYS

1. Support the use of a size risk premium based upon a reasonable method of computation, such as a geometric mean or logarithmic regression, rather than arithmetic-based concoctions resting on flawed rationalizations.

2. Company-specific risk premiums belong in companies that might, or probably would, be acquired by someone without a fully diversified portfolio.

3. Be cognizant that the use of means or medians is not always appropriate. Not every characteristic of every business is near the middle of a normal curve.

4. It requires awareness and judgment to identify when central tendencies apply.

5. Avoid using CAPM when it is near impossible to properly select a justifiable comparative, let alone explain its computation to a judge or jury. Basing CAPM solely on capitalization size ignores the broad distribution of betas within a decile range, particularly for the smallest companies (see takeaway three above).

6. “Normalized” risk-free rates should be avoided. Theoretically bankrupt and historically inaccurate, they should be a source of embarrassment for anybody who uses them. Maybe someday rates will get high enough that this becomes irrelevant.

Joshua Feldman, CPA, CFE, CVA, AIAF, is a solo practitioner in Cleveland, Ohio, focusing on business valuation, litigation support, and fraud examination. Prior to establishing his practice, Mr. Feldman served as a financial executive in the property/casualty insurance industry and worked in a traditional public accounting firm. Mr. Feldman graduated from the Wharton School of the University of Pennsylvania with a BS in economics with concentrations in finance and political science. E-mail:

RESPONSE: Vasicek and Blume Betas: Back to the Future (Parts I and II), January/February 2019; March/April 2019

By Prof. Dr. Leonhard Knoll; Prof. em. Dr. Dr. h.c. Lutz Kruschwitz; Prof. Dr. Andreas Löffler; and Prof. Dr. Daniela Lorenz

In the first two issues of The Value Examiner in 2019,1 Diana Raicov and Richard Trafford offered a further contribution to the ongoing studies on the modification of CAPM and, in particular, on the empirical estimation of Beta. They compare different methods for estimating Beta and evaluate them according to the criteria of unbiasedness, stability, and predictive ability.2 We would like to make some comments on their procedures and results.

1 Diana Raicov and Richard Trafford, The Value Examiner, January/February 2019, pp. 13–22 (Part I), and March/April 2019, pp. 12–22 (Part II).

2 Cf. Raicov, D. and Trafford, R., The Value Examiner, January/February 2019, p. 16.

THE TWO ROLES OF ECONOMETRICS

From the beginning of the CAPM in the sixties,3 econometrics has played two different roles. The first was to test the model itself, and the results over some decades were quite mixed.4 The other was the best possible estimation of the parameters for a direct application of the model. In doing so, one assumes the CAPM to be correct, and this has a serious (and unfortunately often neglected) consequence: the estimation model must correspond to the CAPM (as the theoretical model), or at least the structure of the estimation model must not contradict the structure of the CAPM. This is all the more important for the CAPM because its fathers did not only create a consistent model, but found themselves in the comfortable situation that a very simple econometric structure corresponded with their theory: the market model.5 There was just one problem: the statistical quality of the results was (as in the case of testing the CAPM itself) not the best. Thus, investors and valuators had, and still have, two alternatives: either they throw away the CAPM and use another theoretical base for valuation, or they try to modify the estimation procedure.
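For readers who want the mechanics behind that “very simple econometric structure,” here is a minimal sketch of a market-model regression in Python, followed by a Blume-type shrinkage of the kind examined below. The monthly return series are hypothetical stand-ins (in practice one would use, say, 48–60 months of actual data), and the one-third/two-thirds weights are the common practitioner convention, not Blume’s exact period-specific estimates:

    # Market model: r_stock = alpha + beta * r_market + error, estimated by OLS.
    stock  = [0.04, -0.02, 0.03, 0.01, -0.03, 0.05]   # hypothetical monthly returns
    market = [0.03, -0.01, 0.02, 0.01, -0.02, 0.04]

    n = len(stock)
    mean_s = sum(stock) / n
    mean_m = sum(market) / n

    # OLS slope = Cov(r_stock, r_market) / Var(r_market)
    cov_sm = sum((s - mean_s) * (m - mean_m) for s, m in zip(stock, market)) / (n - 1)
    var_m  = sum((m - mean_m) ** 2 for m in market) / (n - 1)

    beta  = cov_sm / var_m           # ~1.40 for these toy numbers
    alpha = mean_s - beta * mean_m

    # Blume-type adjustment (discussed below): shrink the raw Beta toward 1.0.
    # 1/3-2/3 weights are the common practitioner convention (illustrative only).
    blume_beta = (1.0 / 3.0) * 1.0 + (2.0 / 3.0) * beta  # ~1.27

    print(beta, alpha, blume_beta)

The shrinkage step pulls high Betas down toward one and low Betas up toward one; this built-in mean reversion is precisely what the letter identifies as being in tension with the information-efficiency basis of the CAPM.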
Both alternatives have been tried extensively since the seventies, but only the second one is relevant to the article by Raicov and Trafford.

VASICEK, BLUME, AND INFORMATION EFFICIENCY

The list of trials of this second alternative is quite long and includes prominent protagonists, but only a few have gained attention and practical application up to now. The approaches by Vasicek6 and especially by Blume,7 which were selected for examination by Raicov and Trafford, are indeed the ones most commonly used. We will not discuss their empirical strength, nor do we criticize the results of Raicov and Trafford. Nevertheless, we want to recall that both approaches contradict the CAPM in a fundamental sense: since they rely on more than contemporary share prices, they reflect a situation of missing information efficiency,8 a supporting basis of the CAPM.9 If one believes in lagged information processing or a kind of mean reversion within the relatively short time windows used for estimating Beta, one should seriously question one’s confidence in the CAPM itself. In our view, this is an even greater problem for using such approaches than the results of studies that were in favor of semi-strong information efficiency.10

FILTERING ADJUSTMENTS

Compared with that, the filtering adjustments considered by Raicov and Trafford play a minor fundamental role. Looking at their use in practice, it is nevertheless understandable that the authors tested them. As in their obvious paradigm, the study by Gray et al.,11 they concede that filtering adjustments may cause biases, especially upward biases. Raicov and Trafford stress this so often12 that we should investigate why they do not immediately discard such procedures.

3 Beside the Nobel Prize-awarded work by Sharpe, W.F. (1964). Capital Asset Prices: A Theory of Market Equilibrium under Conditions of Risk. The Journal of Finance 19, pp. 425–442, especially Lintner, J. (1965). The Valuation of Risky Assets and the Selection of Risky Investments in Stock Portfolios and Capital Budgets. The Review of Economics and Statistics 47, pp. 13–37, and Mossin, J. (1966). Equilibrium in a Capital Asset Market. Econometrica 34, pp. 768–783. J.L. Treynor’s (1961) preceding manuscript Towards a Theory of the Market Value of Risky Assets remained unpublished for a long time; for a reprint, cf. Asset Pricing and Portfolio Performance: Models, Strategy, and Performance Metrics, edited by Korajczyk, R.A. (1999), pp. 15–22.

4 Cf. e.g. Copeland, T.E., Weston, J.F., and Shastri, K. (2005). Financial Theory and Corporate Policy, 4th ed., pp. 164–171. In 2013, the well-known journal Abacus dedicated a whole supplement to discussions about the CAPM and its half-century existence.

5 This was introduced by one of these fathers; cf. Sharpe, W.F. (1963). A Simplified Model for Portfolio Analysis. Management Science 9 (2), pp. 277–293.

6 Vasicek, O. (1973). A Note on Using Cross-sectional Information in Bayesian Estimation of Security Betas. The Journal of Finance 28 (5), pp. 1233–1239.

7 Blume, M.E. (1971). On the Assessment of Risk. The Journal of Finance 26 (1), pp. 1–10; Blume, M.E. (1975). Betas and their Regression Tendencies. The Journal of Finance 30 (3), pp. 785–795; and Blume, M.E. (1979). Betas and their Regression Tendencies: Some Further Evidence. The Journal of Finance 34 (1), pp. 265–267.
8 The concept of information efficiency is strongly connected with Fama, E.F. (1965). The Behavior of Stock Market Prices. Journal of Business 38 (1), pp. 34–105; and Fama, E.F. (1970). Efficient Capital Markets: A Review of Theory and Empirical Work. The Journal of Finance 25 (2), pp. 383–417; although there were many preceding and contemporaneous authors, cf. Albrecht, P. and Maurer, R. (2016). Investment- und Risikomanagement, 4th ed., pp. 286–287; and Copeland, T.E., Weston, J.F., and Shastri, K. (2005). Financial Theory and Corporate Policy, 4th ed., pp. 353–372.

9 The most serious critique on the testability of the CAPM itself came from Roll, R. (1977). A Critique of the Asset Pricing Theory’s Tests, Part I: On Past and Potential Testability of the Theory. Journal of Financial Economics 4 (2), pp. 129–176. He rightly stated that the CAPM is based on efficient capital markets. If empirical numbers deviate from the model’s predictions, this may be caused by the fact that you have chosen an inefficient representative for the market portfolio in your regression.

10 Just for the sake of integrity, we want to mention that returns mostly show a mean reversion when observed over very long horizons (some decades); cf. for an overview Knoll, L. (2010). Anmerkungen zur Mittelungsproblematik historischer Marktrisikoprämien, in Königsmaier, H. and Rabel, K. (eds.): Unternehmensbewertung. Theoretische Grundlagen – Praktische Anwendung. Festschrift für Gerwald Mandl zum 70. Geburtstag, Wien, pp. 325–344 (336–339). This is a topic in the measurement of the equity risk premium which we do not need to highlight here.

11 Gray, S., Hall, J., Klease, D., and McCrystal, A. (2009). Bias, stability, and predictive ability in the measurement of systematic risk. Accounting Research Journal, 22 (3), pp. 220–236.

12 Cf. pp. 17, 20, and 21 in part I and pp. 17 and 20 in part II.

In doing so, we find a statement on p. 22 of part I (The Value Examiner, January/February 2019) that might give some reason for their examination: “In the last period, the filters eliminate approximately twelve percent of the sample. This percentage is reasonable, and it does not permit the filters to introduce imprecision. The filtered observations display higher OLS Beta than the unfiltered statistics. This can be the result of systematically eliminating firms with very low market capitalization which may also exhibit low Betas. The second filter seems to shift the estimates closer to the absolute mean and proves to be more accurate.”

Finally, in the Raicov/Trafford paper, there is no convincing justification for the use of mechanisms that produce biases. In particular, the following questions arise:

1. If twelve percent is reasonable, why is that so, and where is the exact limit of reasonableness?

2. Why should firms with low market capitalization be eliminated, and why are low Betas bad?

3. Why does shifting the estimates closer to the mean prove to be more accurate when we do not know the true values?

The problems with the filtering mechanisms do not stop here. In what follows, we concentrate on R² and the t-statistic, because there are strict mathematical relationships between them that are not open to personal interpretation. Gray et al. at least showed the analytical reasons why the R²- and t-statistic-filters produce an upward bias,13 but they do not see that both do essentially the same thing and can be transferred into one another mathematically:14

$$R^2=\frac{t^2}{t^2+n-2}\qquad(1)$$

$$t^2=\frac{(n-2)\,R^2}{1-R^2}\qquad(2)$$

with n = number of observations per firm (here 48 months).
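As a quick numerical check of these relationships, here is a minimal sketch in Python using the thresholds from the study (t = 2, R² = 0.1, n = 48):

    import math

    n = 48  # monthly observations per firm

    def r2_from_t(t, n):
        # Eq. (1): the R-squared implied by a given t-statistic
        return t ** 2 / (t ** 2 + n - 2)

    def t_from_r2(r2, n):
        # Eq. (2): the (absolute) t-statistic implied by a given R-squared
        return math.sqrt((n - 2) * r2 / (1 - r2))

    print(r2_from_t(2.0, n))  # 0.08  -> a t-filter at 2 corresponds to R-squared of 0.08
    print(t_from_r2(0.1, n))  # ~2.26 -> an R-squared filter at 0.1 corresponds to |t| of about 2.26

Because 0.08 < 0.1, any Beta passing the R²-filter also passes the t-filter (barring t-statistics below −2.26), so the R²-filter is the binding restriction, as argued next.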
This relationship between the two filters also has the consequence that one cannot set critical values for R² and the t-statistic independently, because only one of them will be a binding restriction. To typify that for the study at hand, insert 2 for t and 48 for n in eq. (1), or 0.1 for R² and 48 for n in eq. (2): the former yields R² = 0.08 < 0.1, the latter |t| ≈ 2.26 > 2. Consequently, the critical value 2 for the t-statistic cannot be binding, as long as there are no t-statistics of less than −2.26. As that result is very seldom observed in practice,15 we ignore it for the moment, but will return to this possibility soon. Taking this together with the upward-bias result of (not only)16 Gray et al., we should expect two effects on the results:

(a) The filter on the t-statistic (here t-stat > 2) should never exclude more companies than the filter on R² (here R² > 0.1).

(b) As the upward bias should be stronger under the binding filter restriction, R²-filtered Beta means should by trend17 be higher than means of measurements filtered by the t-statistic.

The fact that one cannot exclude equal effects in (a) and (b) is caused by the discrete composition of the industry groups; i.e., in some groups there is no Beta with a t-statistic between 2 and 2.26, and, therefore, the firms under consideration and their Beta mean must be the same.

13 Cf. Gray, S., Hall, J., Klease, D., and McCrystal, A. (2009). Bias, stability, and predictive ability in the measurement of systematic risk. Accounting Research Journal, 22 (3), p. 223.

14 Cf. for this demonstration Knoll, L., Ehrhardt, J., and Bohnet, F. (2007). Kleines Beta – kleines Bestimmtheitsmaß: großes Problem?. CFO aktuell, 6, pp. 210–213; Knoll, L. (2010). Äquivalenz zwischen signifikanten Werten des Beta-Faktors und des Bestimmtheitsmaßes. Die Wirtschaftsprüfung, 63, pp. 1106–1109; and Ziemer (2018). Der Betafaktor, Wiesbaden, p. 180. The intuition behind the result is quite simple: the RHS of eq. (2) is the F-statistic for a univariate linear regression, cf. e.g. Wooldridge, J.M. (2017). Introductory Econometrics, 6th ed., p. 135, and the F-statistic is the square of the t-statistic in univariate cases, cf. ibid., p. 132.

15 Negative Betas are precluded neither by theory, cf. Berk, J. and DeMarzo, P. (2017). Corporate Finance, Global Edition, 4th ed., Prentice Hall, p. 421, nor by empirical findings. While this itself is a reason against the use of a positive t-filter, one must admit that share Betas with t-statistics of that negative magnitude are almost never reported.

16 Cf. e.g. Knoll, L., Ehrhardt, J., and Bohnet, F. (2007). Kleines Beta – kleines Bestimmtheitsmaß: großes Problem?. CFO aktuell, 6, pp. 210–213.

17 The positive relationships Beta/R² and Beta/t-statistic are strict only c.p. While the volatility of the market return is of no importance, the positive relationships can be disturbed if smaller Betas are accompanied by smaller volatilities of the share returns or, respectively, smaller dispersions of the disturbance terms; cf. e.g. the formulas in Gray, S., Hall, J., Klease, D., and McCrystal, A. (2009). Bias, stability, and predictive ability in the measurement of systematic risk. Accounting Research Journal, 22 (3), p. 223. Cf. the empirical results of that study, p. 227, and Knoll, L., Ehrhardt, J., and Bohnet, F. (2007). Kleines Beta – kleines Bestimmtheitsmaß: großes Problem?. CFO aktuell, 6, pp. 210–213 (212), for such a trend.
Now let us look at Table 2 of part I (p. 21) to test these propositions. If we compare the columns “R sq > 0.1” and “t-stat. > 2”, we observe the following results:

Ad (a): In all but one industry group, the number of companies under the t-statistic filter is greater than or equal to the number under the R² filter; by trend, this accords with the proposition, but as (a) reflects a strict mathematical relationship, the result in the “Telecommunication Services” line is in high need of an explanation. Did Raicov/Trafford observe any t-statistics of less than −2.26?

Ad (b): In no industry group is the Beta mean lower under the t-statistic filter than under the R² filter; in most lines it is greater. This is a perfect contradiction of the proposition.

As the last finding contradicts not only the mathematical relationship but also prior empirical findings,18 it is highly recommended that it be discussed. The first possible explanation is that the “Mean” columns for “R sq > 0.1” and “t-stat. > 2” have been confounded. When asked this question by e-mail, Mr. Trafford advised that this was not the case. Looking at the section “R sq > 0.1 & Mkt Cap > $100m”, it is understandable that a simple confounding of the mentioned kind cannot be the (only) reason. As this double restriction excludes even more firms with, by empirical trend though not in a strict mathematical manner, lower Betas, its mean Betas should be the highest compared to the two alternatives. Indeed, they mostly, but not always, are located between the values in “R sq > 0.1” and “t-stat. > 2”.

At this point, we must stop our deliberations, because the only persons who can clarify these relationships are the authors of the original paper. As long as those puzzles are not solved, it makes no sense to discuss further results which may rely on the summary statistics in question, or others which cannot be discussed by an outsider just because of obvious structural contradictions.

18 Cf. again Gray, S., Hall, J., Klease, D., and McCrystal, A. (2009). Bias, stability, and predictive ability in the measurement of systematic risk. Accounting Research Journal, 22 (3), pp. 223–236 (table 1 on p. 227), and Knoll, L., Ehrhardt, J., and Bohnet, F. (2007). Kleines Beta – kleines Bestimmtheitsmaß: großes Problem?. CFO aktuell, 6, pp. 210–213 (212).

PROVISIONAL CONCLUSIONS

Thus, for the time being, we must wait for a statement from the authors concerning these econometric riddles. Concerning the general approach of searching for the best Beta, we have a certain understanding. In practice, valuation means working with the least evil whenever you have no perfect theory (and when do you ever have one?). Many experts try to improve this situation by looking for a Beta with better empirical properties. Despite our critique, we find that Diana Raicov and Richard Trafford do this job in a serious way, but, like many others, they do not ask whether their estimation models correspond with the theoretical model, i.e., the CAPM, forming the basis for the statistical analysis.

Some decades ago, Lord Peter Bauer published a harsh critique under the title The Disregard of Reality.19 His philippic against the overuse of mathematical models in economics culminated in a comparison with a prominent tale: “What we see is an inversion of the familiar Hans Andersen story of the Emperor’s new clothes. Here there are new clothes, and at times they are haute couture.
But all too often there is no Emperor within.”20

Today, the analogous comparison can be made looking at the relationship between the (missing) Emperor, economic theory, and the more or less haute couture clothes, econometrics. In general, we should be careful about deviating from the use of a share’s own simple Beta, and we should avoid talking about “modifications” of the CAPM. If we are not content with the empirical properties of the CAPM and therefore deviate from its basic requirements, we simply should not call our procedure CAPM-based. Beside the question of whether the procedure at hand is an improvement, it will at least be straightforward.

Dr. Leonhard Knoll is an independent consultant and professor at the University of Würzburg (faculty of economics, department of business administration), where he also received his academic degrees Diplom-Kaufmann, Dr. rer. pol., and the venia legendi in business administration. He is an academic member of the EACVA, the European partner institution of NACVA. E-mail:

Lutz Kruschwitz is professor emeritus of business administration at Free University of Berlin. He graduated in 1968 and received his doctorate in 1970. In 1975, he received the venia legendi for business administration. From 1975 to 2010, he served as a professor at the Technical University of Berlin, the University of Lüneburg, and the Free University of Berlin. He is an honorary professor at the University of Vienna and received an honorary doctorate from the University of Tübingen in 2006.

Daniela Lorenz is full professor of corporate finance at Julius-Maximilians University Würzburg. She studied business administration at Université de Lausanne and Free University of Berlin, where she graduated in 2007 and became a research and teaching assistant afterwards. During her doctoral studies, she also conducted research at Yale University and New York University. In 2011, she received her doctorate from Free University of Berlin, where she held a junior professorship of corporate finance and business taxation from 2011 until 2018.

Andreas Löffler studied mathematics at Leipzig University and the Academy of Science in Berlin (major: differential equations and dynamical systems). He obtained his PhD in mathematics from Leipzig University in 1993, and a second PhD in economics in 1995. In 2000, he became a tenured professor at the Chair of Banking and Finance at the University of Hannover. In 2006, he was appointed to the Chair of Banking and Finance at Friedrich-Alexander University Erlangen-Nuremberg, and in 2008, he moved to Paderborn as a professor of finance and investment. Since 2012, he has held the Chair of Banking and Finance at Free University of Berlin.

19 Bauer, P. (1987). The Disregard of Reality. The Cato Journal 7, pp. 29–42.

20 Bauer, P. (1987). The Disregard of Reality. The Cato Journal 7, pp. 29–42 (36), emphasis in original.
Academic Review

ROUND TABLE DISCUSSION: EDUCATING TOMORROW’S LEADERS

With: Peter Lohrey, PhD, CVA, CDBV; Lari Masten, MSA, CPA, ABV, CFF, CPVA, CVA, MAFF, ABAR; Danny Pannese, MST, CPA, ABV, CVA, CSEP; Keith Sellers, CPA, ABV; and Richard Trafford, MSc, CVA, CFE, MAFF, FAIA, FCT, FHEA

Moderated by Nancy McCarthy, Senior Editor, The Value Examiner

Over the course of the year, this column is expertly written by Peter Lohrey. However, in the summer months, Dr. Lohrey takes a well-deserved break. Last month, our guest editor was Matthew Crane, DBA, ASA, CPA. This month, Dr. Lohrey has been joined by several members of The Value Examiner (TVE) editorial board for a lively discussion on the challenges and needs of students who one day hope to enter the valuation profession. In addition to running or being an integral part of a practice, each member of the discussion group teaches at a college or university: Lari Masten and Keith Sellers teach at the University of Denver in Denver, CO; Peter Lohrey is an assistant professor of accounting at Montclair State University in Montclair, NJ; Danny Pannese teaches at Sacred Heart University in Hartford, CT; and Richard Trafford is a Visiting Fellow at Portsmouth University, Portsmouth, UK. In spite of work or vacation schedules, the members of this discussion group made the time to discuss what they see as the pitfalls and positives of how we are academically preparing future generations of accounting students and, hopefully, the next crop of CVAs.

TVE: Welcome, everyone; I am so pleased you could make it. Our topic today is “Educating Tomorrow’s Leaders.” I read a statistic recently from the Bureau of Labor Statistics stating that employment of accountants and auditors is projected to grow ten percent from 2016 to 2026, faster than the average for all occupations. How well are we preparing students to assume these roles?

Richard Trafford: I would say, in the UK at least, we are doing a fairly credible job of it. But I am not so sure we are facing some realities.

Lari Masten: I think we are doing a credible job in the U.S., too, but there are some issues for which we need to develop solutions.

Richard Trafford: And some realities we need to face.

TVE: Such as?

Richard Trafford: There are a few areas, such as how we present the material: students are so tech-savvy that it is hard to pry them from their iPhones at times. Then there is the sheer amount of data to deal with, and finally, I think artificial intelligence, or AI, is going to be a big game-changer, not just in our profession but in all professions.

Lari Masten: I agree. I think there are some global issues facing everyone in this profession or studying to become part of it.

TVE: Let’s break this down a bit. How is a student being “tech-savvy” a problem in the classroom?

Lari Masten: That’s a good question. There is nothing wrong with knowing, using, and understanding technology. And I think most colleges and universities are doing a good job in terms of course selection.
However, we need to incorporate technology better in terms of delivering instruction.

Danny Pannese: This is true. The old “learn it by rote” system is dying. Giving students a thirty-page outline and a bunch of questions just doesn’t cut it. We need to help students think outside the box, and use technology less as a tool for “getting the right answer” and more as an application for getting the answer right.

Lari Masten: Right. The lecture-test method is not doing it anymore. We need to incorporate a more dynamic system of learning. Accounting is a complex topic, so how do we become more interactive? Currently, you can only speak for about ten minutes before you lose the students completely. We can’t afford that.

Peter Lohrey: I can attest to that. Students are so used to getting their answers from Google that they are losing the ability to think critically. We need to bring that component back into the equation. It’s not that students can’t do it; it’s that they have not been taught that it is important. In one of my courses, I require both an oral presentation and a written essay. These projects are painful to grade. Students will grab information from the internet and think they have mollified me.

TVE: I think you all have identified a problem, but what is the solution?

Keith Sellers: I think the most important thing to teach, aside from the basics, is how to make professional judgments. While it is important to keep students focused, they do need to rely on their insights, experience, and education. The more you rely on a black box, the less you are making real decisions.

Peter Lohrey: What I find troubling is the reliance on tools. Somehow, we need to show students that, while tools are important in making calculations, you cannot leave out the human part of the equation.

Keith Sellers: If, after a few years of practicing as a CPA, a student wants to go on to business valuation, they will find they need to have honed their judgment. And it’s not just apps and programs that are distractions. For example, although fair value reporting is now part of mainstream financial reporting, most textbooks provide somewhat shallow coverage of the topic. Let’s face it, accounting textbooks are very strong on teaching rules but struggle more with topics that require increasing levels of professional judgment. Few full-time professors have experience with valuation, so they “teach the book.”

Lari Masten: We have got to add a component to the training that introduces the human judgment aspect. When students are first learning accounting principles, they may lean on automated tools. We need to take the crutch away at some point and emphasize decision-making ability.

TVE: You’ve brought up “big data” as an issue. Can you explain how that affects students?

Danny Pannese: There is so much information out there. So much. And access to it is not difficult. But not all of it is important or valuable in specific situations. I am concerned that, with their reliance on automated programs, students do not see the “small picture” as it pertains to a client. My concern is that students will not value the information they can access. If they learn they can send off a set of financial data and be “ok” with the canned response, that is troubling and devalues our profession.
Keith Sellers: The biggest new thing out there is the push toward data analytics, especially “big data.” I recently received information from a Big Four firm outlining their “needs” from new hires. The list included programming in Python, database skills in SQL, financial modeling, etc. The addition of these topics to accounting curriculums can only reduce the time allocated to valuation-related studies.

Peter Lohrey: It comes back to human judgment. The amount of data that can be managed, and the decisions left to a valuator, become a bigger issue. We need to contemplate the role of human interaction. Whatever roles we play, they will still include a step for humans. We need to think more about what that step will be.

TVE: Let’s look at AI. Do you feel it will help or hurt the profession?

Richard Trafford: Well, as I’ve said, it is a huge game-changer, and we are just at the beginning of the trend. I am concerned that AI will turn our business into a machine-based profession. AI will reduce the demand for certain repetitive tasks, which could be a positive thing. But it has the potential to take out that human quality we have been discussing.

Lari Masten: Given how machine learning can reduce errors and perform certain functions faster than we humans can, there is a very real concern about its impact. If we can learn to harness the power of AI instead of succumbing to it, we could see an amazing future.

Richard Trafford: All true. But it could also eliminate jobs. Machine intelligence is massive in terms of speed and scale. It can identify patterns and be programmed to make informed decisions. How do we address that aspect?

Peter Lohrey: For me, the concern about AI is a bit like the concept of self-driving cars: it is coming, but it isn’t here yet, and we have some time to determine how to utilize the process to our best advantage.

Danny Pannese: No time like the present. I read an article that quotes AICPA CEO Barry Melancon predicting that “the accounting industry could be negatively affected by changes in technology, losing more than one million jobs.” Whether that is an overstatement or not, accountants will have to adopt this technology, just as they had to adopt the computer or the internet. And, as educators, we need to be aware of these changes.

TVE: Why am I thinking of HAL from Stanley Kubrick’s “2001: A Space Odyssey”?

Keith Sellers: I am less afraid of AI at this point than I am of the structure of higher education. Let’s face it, we are geared to having students pass a CPA exam. And hiring is almost all driven by the Big Four. I don’t know how it is in the UK, Richard, but the Big Four try to get students to commit to a position with their firm by the end of their sophomore or junior year. Right after an internship, they are given an offer, which is usually in the audit department. Later, these seniors and master’s students worry about what will happen if they back out of their accepted positions. In other words, people who would like to do valuation work are locked up long before they graduate to do an audit or tax job. We are trying to avoid this lockstep. Another problem is designing academic curriculums around the desires of firms. Local recruiters tell us they want warm bodies in the audit field who have good people skills and can pass the CPA exam.
At the same time, we hear from the top people at these same firms that we should focus on broader decision-making, quantitative, and problem-solving skills.

Danny Pannese: Well, there is a job that is expected to be significantly impacted by AI: audit.

Richard Trafford: Maybe not so terrible, either. In the UK, we get direction from the Big Four, also. They are looking for individuals who can communicate at a much higher level. We take a middle path. In the master in accounting [program], one module is business valuation. What we find is that a lot of people in the Big Four want to come out of the audit function. But we do look at things a little differently; for example, the program administrators try to determine where the profession will be ten years hence, which I find a bit problematic: in ten years, if past is prologue, it will be a different world. How do you teach what may be?

Keith Sellers: Most U.S. programs don’t have that flexibility. Adding courses that may be helpful is tough due to a combination of faculty resources and curriculum limitations. In my program at the University of Denver, we decreased the number of required courses, which hurt programs that were popular and important. The CPA exam largely drives students’ choices for electives.

Danny Pannese: I am going to circle back. AI, in my opinion at this point, will make inroads in audit first. If that is true, the Big Four will not need warm bodies. If that is the case, students will be freed up to pursue courses that will help with judgment and insight.

Peter Lohrey: I attended a conference recently, and in one of the breakout seminars, the presenter discussed the various inroads made with a certain software package. There are some, like Tableau, that let users spend more time thinking and reasoning rather than inputting data. I think AI will make more inroads in audit. Eventually, that will change what the Big Four looks for and, as Danny says, put a spotlight on other functions.

Danny Pannese: AI will have the same impact on taxation. In one firm of which I am aware, the actual data processing is sent over to India, and the processed information comes back overnight. Input is all AI.

Peter Lohrey: It is our value as thinkers that challenges AI.

Richard Trafford: Given that, it seems critical thinking should be part of all curriculums.

TVE: You are singing my song. In my teaching experience, students are woefully underprepared in that area.

Lari Masten: It does scare me that we are embracing technology in a profession that needs to make critical, serious judgments. And one-size solutions do not fit all.

Danny Pannese: I agree; I am afraid we are encouraging students to apply a formula and peddle the results. We need a little more art and maybe a bit less science in our programs.

Keith Sellers: I am big on the science side; I think it just gives more info. But it is only a tool.

Lari Masten: Right, but I think we rely too much on the science side.

Richard Trafford: Given the trend of students in general wanting to know only what facts they need to learn to pass a course, there needs to be a much greater emphasis on teaching critical thinking as an employability skill. Providing students with the ability to think in depth about what is necessary to make a decision is really important.

Lari Masten: Exactly.
As an example, when we were students learning material related to the cost of capital, we read the book every year. Now there is an online program that purports to present the answer. Consequently, students think they do not need to understand the concepts.