Wednesday, August 16, 2017

Autonomous Cars: Altering One in Nine Jobs

It seems clear that driverless vehicles are coming, although the timeline for their arrival remains unclear. David Beede, Regina Powers and Cassandra Ingram of the Economics and Statistics Administration at the US Department of Commerce look at one aspect, "The Employment Impact of Autonomous Vehicles," in ESA Issue Brief #05-17 (August 11, 2017). They set the stage this way: 

"In September 2016, the U.S. Department of Transportation and the National Highway Traffic Safety Administration (NHTSA) published policy guidelines for AVs [autonomous vehicles], recognizing their potential as “the greatest personal transportation revolution since the popularization of the personal automobile nearly a century ago” (NHTSA 2016). ... The worldwide number of advanced driver-assistance systems (ADAS), such as backup cameras and adaptive cruise control, increased from 90 million to 140 million units between 2014 and 2016. Consumers have indicated a willingness to pay $500-$2,500 per vehicle for ADAS. Sensor technologies are rapidly advancing to provide sophisticated information to vehicle operating systems about the surrounding environment, such as road conditions and the location of other nearby vehicles. However, slower progress has been made in developing software that can mimic human driver decision-making, so that fully autonomous vehicles may not be introduced for another ten or more years ..."  
Autonomous vehicles could lead to sweeping changes in personal mobility, car ownership, parking arrangements, traffic congestion, road safety, and more. I ran through some of the main effects in an earlier post on "Driverless Cars" (October 31, 2012).

The focus of Beede, Powers, and Ingram is on jobs that involve a substantial amount of driving. They write:
"In 2015, 15.5 million U.S. workers were employed in occupations that could be affected (to varying degrees) by the introduction of automated vehicles. This represents about one in nine workers. We divide these occupations into “motor vehicle operators” and “other on-the-job drivers.” Motor vehicle operators are occupations for which driving vehicles to transport persons and goods is a primary activity, are more likely to be displaced by AVs [autonomous vehicles] than other driving-related occupations. In 2015, there were 3.8 million workers in these occupations. These workers were predominately male, older, less educated, and compensated less than the typical worker. Motor vehicle operator jobs are most concentrated in the transportation and warehousing sector. Other on-the-job drivers use roadway motor vehicles to deliver services or to travel to work sites, such as first responders, construction trades, repair and installation, and personal home care aides. In 2015, there were 11.7 million workers in these occupations and they are mostly concentrated in construction, administrative and waste management, health care, and government. Other-on-the-job drivers may be more likely to benefit from greater productivity and better working conditions offered by AVs than motor vehicle operator occupations." 
When they break down these jobs by industry, I was interested to note that "government" is the area where the greatest number of jobs will potentially be affected by driverless cars. This suggests that certain governments might play a leading role in offering examples of how driverless vehicles could work. Or not!
Many of those whose jobs would be affected by autonomous vehicles are likely to push back. When tallying up the costs and benefits, it's worth noting that those who spend a lot of time driving are actually in relatively hazardous jobs, because of the risk of motor vehicle accidents. "[T]he fatality rate (per 100,000 full-time equivalent workers) for motor vehicle operators from on-the-job roadway incidents involving motor vehicles is ten times the rate for all workers, and the numbers of roadway motor vehicle occupational injuries resulting in lost work time per 100,000 full-time equivalent workers is 8.7 times as large as that of all workers."

Any innovation which directly affects the jobs of about one-ninth of all US workers has the potential to be a dislocating shock of some force. Some types of workers who spend a good portion of every day in a vehicle will have a harder time adjusting to the change; for others, autonomous vehicles may come as a relief, by freeing them up to focus on other parts of their job. The authors note:
"Workers in motor vehicle operator jobs are older, less educated, and for the most part have fewer transferable skills than other workers, especially the kinds of skills required for non-routine cognitive tasks. ... [I]n contrast to the workers in the occupations we classify as motor vehicle operators, other on-the-job drivers, of which there are about triple the number of motor vehicle operators, have a more diversified set of work activities, knowledge, and skills. For this group, although driving is an important work activity, it is only one of many important work activities, many of which already require the kinds of non-routine cognitive skills that are becoming increasingly in demand in our economy. Such workers are likely to be able to adapt to the widespread adoption of AVs."

Tuesday, August 15, 2017

Adam Smith: The Plight of the Impartial Spectator in Times of Faction

Adam Smith's first great book, the Theory of Moral Sentiments, relies heavily in places on the idea of an "impartial spectator." Smith's notion is that our beliefs about morality are closely related to our notion of how a hypothetical "impartial spectator" would react to a given situation. (Here's a quick overview of Smith's argument.) But what happens to a person trying to think like an impartial spectator--that is, a person trying to preserve the integrity of their own personal judgment--in a time of faction?  Smith argues that anyone trying to act in this way is likely to be marginalized by all competing factions.

Here is Smith's comment from the 1759 Theory of Moral Sentiments (Book III, Ch. 1, paragraph 85). As is my wont, I quote here from the online version of the book at the Library of Economics and Liberty website. Smith wrote:
"The animosity of hostile factions, whether civil or ecclesiastical, is often still more furious than that of hostile nations; and their conduct towards one another is often still more atrocious. ...  In a nation distracted by faction, there are, no doubt, always a few, though commonly but a very few, who preserve their judgment untainted by the general contagion. They seldom amount to more than, here and there, a solitary individual, without any influence, excluded, by his own candour, from the confidence of either party, and who, though he may be one of the wisest, is necessarily, upon that very account, one of the most insignificant men in the society. All such people are held in contempt and derision, frequently in detestation, by the furious zealots of both parties. A true party-man hates and despises candour; and, in reality, there is no vice which could so effectually disqualify him for the trade of a party-man as that single virtue. The real, revered, and impartial spectator, therefore, is, upon no occasion, at a greater distance than amidst the violence and rage of contending parties. To them, it may be said, that such a spectator scarce exists any where in the universe. Even to the great Judge of the universe, they impute all their own prejudices, and often view that Divine Being as animated by all their own vindictive and implacable passions. Of all the corrupters of moral sentiments, therefore, faction and fanaticism have always been by far the greatest."

Monday, August 14, 2017

Misallocation and Productivity: International Perspective

In pretty much every industry in pretty much every country, the firms exhibit a range of productivity: that is, some well-run and efficient firms produce more output given their levels of labor and capital, while others produce less. What ought to happen in a well-functioning economy is that the lagging-productivity firms should either be catching up with the leading-productivity firms over time, or the laggard firms should shrink in size while the leading firms grow in size. This process has been demonstrably important to economic growth in the past.

However, a wide range of taxes, rules, and institutions may act to inhibit reallocation of resources, and thus to slow down productivity growth. For example, if smaller farms are less efficient than larger farms, but the land use rules in a developing economy keep farm sizes small, then agricultural resources will not be reallocated. Economists refer to this as an issue of "misallocation."

Diego Restuccia and Richard Rogerson discuss "The Causes and Costs of Misallocation" in the Summer 2017 issue of the Journal of Economic Perspectives (31:3, pp. 151-174). The IMF discusses the role of tax policy in creating and sustaining misallocation in the April 2017 IMF Fiscal Monitor, with an overall theme of "Achieving More with Less." The discussion of reallocation is in "Chapter 2: Upgrading the Tax System to Boost Productivity." The IMF researchers write:
"Resource misallocation manifests itself in a wide dispersion in productivity levels across firms, even within narrowly defined industries. High dispersion in firm productivities reveals that some businesses in each country have managed to achieve high levels of efficiency, possibly close to those of the world frontier in that industry. This implies that existing conditions within a country are compatible with higher levels of productivity. Therefore, countries can reap substantial TFP [total factor productivity] gains from reducing resource misallocation, allowing firms to catch up with the high-productivity firms in their own economies. In some cases, however, the least productive businesses will need to exit the market, releasing resources for the more productive ones. For example, Baily, Hulten, and Campbell (1992) find that 50 percent of manufacturing productivity growth in the United States during the 1980s can be attributed to the reallocation of factors across plants and to firm entry and exit. Similarly, Barnett and others (2014) find that labor reallocation across firms explained 48 percent of labor productivity growth for most sectors in the U.K. economy in the five years prior to 2007.
Resource misallocation is often the result of a large number of poorly designed economic policies and market failures that prevent the expansion of efficient firms and promote the survival of inefficient ones. Reducing misallocation is therefore a complex and multidimensional task that requires the use of all policy levers. Structural reforms play a crucial role, in particular because the opportunity cost of poorly designed economic policies is much greater now in the context of anemic productivity growth. Financial, labor, and product market reforms have been identified as important contributors (see Banerjee and Duflo 2005; Andrews and Cingano 2014; Gamberoni, Giordano, and Lopez-Garcia 2016; and Lashitew 2016). This chapter makes the case that upgrading the tax system is also key to boosting productivity by reducing distortions that prevent resources from going to where they are most productive. ... 
Potential TFP gains from reducing resource misallocation are substantial and could lift the annual real GDP growth rate by roughly 1 percentage point. Payoffs are higher for emerging market and low-income developing countries than for advanced economies, with considerable variation across countries. ...
Many emerging market economies have a relatively small number of leading firms and a large number of laggards. If the distribution of leaders and laggards in these markets became more equal, similar to the distribution between leaders and laggard firms in US industries, the productivity gains could be large. By the IMF's calculations, total factor productivity "would increase by 30 to 50 percent in China and by 40 to 60 percent in India."

In their JEP essay, Restuccia and Rogerson provide a useful overview of what can cause misallocation, and how economists have sought to measure the potential gains from reducing misallocation. For a flavor of the issues and analysis, here are a few of the studies mentioned in the paper:
"Government regulation can also hinder the reallocation of individuals across space. Hsieh and Moretti (2015) study misallocation of individuals across 220 US metropolitan areas from 1964 to 2009. They document a doubling in the dispersion of wages across US cities during the sample period. Using a model of spatial reallocation, they show that the increase in wage dispersion across US cities represents a misallocation that contributed to a loss in aggregate GDP per capita of 13.5 percent. They argue that across-city labor misallocation is directly related to housing regulations and the associated constraints on housing supply. ..."
"Tombe and Zhu (2015) provide direct evidence on the frictions of labor (and goods) mobility across space and sectors in China and quantify the role of these internal frictions and their changes over time on aggregate productivity. The reduction of internal migration frictions is key and together with internal trade restrictions account for about half of the growth in China between 2000 and 2005. ..."
"Restuccia and Santaeulalia-Llopis (2017) study misallocation across household farms in Malawi. They have data on the physical quantity of outputs and inputs as well as measures of transitory shocks and so are able to measure farm-level total factor productivity. They find that the allocation of inputs is relatively constant across farms despite large differences in measured total factor productivity, suggesting a large amount of misallocation. In fact, they found that aggregate agricultural output would increase by a remarkable factor of 3.6 if inputs were allocated efficiently. Their analysis also suggests that institutional factors that affect land allocation are likely playing a key role. Specifically, they compare misallocation within groups of farmers that are differentially influenced by restrictive land markets. Whereas most farmers in Malawi operate a given allocation of land, other farmers have access to marketed land (in most cases through informal rentals). Using this source of variation, Restuccia and Santaeulalia-Llopis find that misallocation is much larger for the group of farmers without access to marketed land: specifically, the potential output gains from removing misallocation are 2.6 times larger in this group relative to the gains for the group of farms with marketed land." 
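The mechanism behind estimates like these can be illustrated with a deliberately simple sketch. The Python toy model below (my own construction, not from any of the papers above) gives each farm a decreasing-returns technology y = A * x^alpha and compares total output when a fixed input is split equally against the output-maximizing allocation, in which the first-order conditions direct more input toward higher-TFP farms. The TFP values, the input endowment, and alpha are all invented for illustration:

```python
# Toy illustration of misallocation gains (hypothetical numbers, not
# drawn from the Malawi study): reallocating a fixed input across
# farms with heterogeneous TFP under y_i = A_i * x_i**alpha, alpha < 1.

def total_output(tfp, inputs, alpha=0.7):
    """Aggregate output across farms under Cobb-Douglas technology."""
    return sum(a * x**alpha for a, x in zip(tfp, inputs))

def efficient_allocation(tfp, total_input, alpha=0.7):
    """Output-maximizing split of a fixed input across farms.

    The first-order conditions imply x_i proportional to A_i**(1/(1-alpha)),
    so higher-TFP farms get disproportionately more input.
    """
    weights = [a ** (1 / (1 - alpha)) for a in tfp]
    scale = total_input / sum(weights)
    return [w * scale for w in weights]

tfp = [1.0, 2.0, 4.0]        # hypothetical farm-level TFP draws
total_input = 3.0            # fixed aggregate input (e.g., land)

equal = [total_input / len(tfp)] * len(tfp)   # "misallocated": equal shares
efficient = efficient_allocation(tfp, total_input)

gain = total_output(tfp, efficient) / total_output(tfp, equal)
print(f"Output gain factor from reallocation: {gain:.2f}")
```

With these made-up numbers the gain factor is modest (roughly 1.3); the striking factor of 3.6 reported for Malawi reflects far larger measured TFP dispersion across farms than this toy example assumes.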
There will always be leading and lagging firms, and various hindrances to reallocation of resources across places and firms. In that sense, misallocation is never going away. But studying misallocation offers a useful reminder that productivity growth and economic growth are driven (or not!) by the dynamic forces of competition in a reasonably flexible economic setting.

Moreover, a better understanding of the gaps between leading and lagging companies--why such gaps persist, what might help to close them--may even help to explain one of the really big questions in the global economy, which is the overall slowdown in productivity growth. A 2015 study done by the OECD found that productivity growth among leading companies in various industries has not been slowing down; instead, the gap between leading and lagging companies has expanded, as if lagging companies are having a harder time keeping pace.

Friday, August 11, 2017

NAFTA in a Multipolar World Economy

Discussions of globalization often seem rooted in an assumption that the main choices, either for the US or for other countries, are either national or global. But there is another possibility, which is that the world economy evolves to a "multipolar" setting, based primarily on regional agglomerations of cross-national trade. In this situation, the issue for the US economy is whether it will be part of its geographically natural multipolar group, here in the Americas, or whether it will try to view itself as a group of one, competing in the global economy with multipolar groups in Europe and in Asia. Your attitude toward the North American Free Trade Agreement, for example, may vary according to whether you see it as one of many trade deals in a globalizing economy, or whether you see it as the specific trade deal for building a US-centered trading bloc in a multipolar economy.

Michael O’Sullivan and Krithika Subramanian lay out the case for the multipolarity hypothesis in "Getting Over Globalization," written as a report for Credit Suisse Research (January 2017). They write:
"Globalization is running out of steam. We can see this in various ways. Our measure for tracking globalization – made up of flows of trade, finance, services and people – has ebbed in the past year, and has slipped backwards over the course of the past three years so that it has dropped below the levels reached in 2012–2013 to about the same level as crisis-ridden 2009–2010 ... . Perhaps the most basic representation of globalization is trade, and this is sluggish or according to many measures it is plateauing. ... Other indicators of globalization paint a more negative picture – cross-border flows of financial assets (relative to GDP) have continued downward from their pre-financial-crisis peak, most likely because of the effects of regulation and the general shrinking of the banking sector. Trade liberalization, as measured by the Fraser Institute’s economic freedom of the world indices, has been slowly declining since its peak in 2000, although it is still at a relatively healthy level. ... It should be said that the extent of globalization/multipolarity is still at a historically high level, although it is hard not to have the impression that it is on the verge of a downward correction, especially once we consider some of the underlying dynamics. ...

"One of the notable sub-trends of globalization has been a much better distribution of the world’s economic output, led by what were once regarded as overly populous, third world countries such as India and China. This has fueled multipolarity – the rise of regions that are now distinct in terms of their economic size, political power, approaches to democracy and liberty, and their cultural norms. ...
We believe that the world is now leaving globalization behind it and moving to a more distinct multipolar setting. ...

"The ... scenario is based on the rise of Asia and a stabilization of the Eurozone so that the world economy rests, broadly speaking, on three pillars – the Americas, Europe and Asia (led by China). In detail, we would expect to see the development of new world or regional institutions that surpass the likes of the World Bank, the rise of “managed democracy” and more regionalized versions of the rule of law – migration becomes more regional and more urban rather than cross-border, regional financial centers develop and banking and finance develop in new ways. At the corporate level, the significant change would be the rise of regional champions, which in many cases would supplant multinationals. We would also expect to see uneven improvements in human development leading to more stable, wealthier local economies on the back of a continuation of the emerging market consumer trend. ...

"An interesting and intuitive way of seeing how the world has evolved from a unipolar one (i.e. USA) to a more multipolar one is to look at the location of the world’s 100 tallest buildings. The construction of skyscrapers (200 meters plus in height) is a nice way of measuring hubris and economic machismo, in our opinion. Between 1930 and 1970, at least 90% of the world’s tallest buildings could be found in the USA, with a few exceptions in South America and Europe. In the 1980s and 1990s, the USA continued to dominate the tallest tower league tables, but by the 2000s there was a radical change, with Middle Eastern and Asian skyscrapers rising up. Today about 50% of the world’s tallest buildings are in Asia, with another 30% in the Middle East, and a meager 16% in the USA, together with a handful in Europe. In more detail, three-quarters of all skyscraper completions in 2015 were located in Asia (China and Indonesia principally), followed by the UAE and Russia. Panama had more skyscraper completions than the USA."
If one believes that the US should view its economy as part of an emerging American bloc in a multipolar world economy, the North American Free Trade Agreement between the US, Canada, and Mexico is the foundation for that bloc. C. Fred Bergsten and Monica de Bolle have edited an e-book titled A Path Forward for NAFTA, a collection of 11 short essays discussing NAFTA "modernization," "renegotiation," and "updating" from various national, industry, and foreign policy perspectives (Peterson Institute for International Economics Briefing 17-2, July 2017). They give some sense of the possibilities for cooperation and agreement, and the unlikeliness that such an agreement will address bilateral trade deficits, in the "Overview" essay:
"The overarching goal of negotiators from the three participating countries must be to boost the competitiveness of North America as a whole, liberalizing and reforming commercial relations between the three partner countries and responding to the many changes in the world economy since NAFTA went into effect in 1994. These changes include the digital transformation of commerce, which has enabled sophisticated new production methods employing elaborate supply chains, transforming North America into a trinational manufacturing and services hub. But concerns about labor, the environment, climate change and energy resources, and currency issues have become more acute than they were at the time NAFTA started. Commerce Secretary Wilbur Ross was thus correct when he said that NAFTA “didn’t really address our economy or theirs [Mexico and Canada] in the way they are today.” ...

"The broadest consistent goal shared by the NAFTA countries should be to strengthen the international position of North America as a whole in a world of tough competition from China and others. Beyond that objective, the negotiators can take steps toward achieving regional energy independence, since all three countries are large consumers and producers of different kinds of energy, from those based on fossil fuels to those derived from new technologies and renewable sources. There is also plenty of room for additional or indeed full liberalization of key sectors, such as financial services and telecommunications, to the benefit of all three economies.

"The new NAFTA could borrow some of the TPP’s innovative approaches and embrace cutting-edge standards for issues such as e-commerce, state-owned enterprises (SOEs), and other sectors that have become central to international trade and investment. The North American partners might be able to help resolve a politically inflammatory issue plaguing trade agreements worldwide: incorporating dispute settlement mechanisms that will make their provisions enforceable and thus credible without being perceived as undermining national sovereignty and widely shared concepts of fairness. Another step in this direction would be to work out a North American competition policy that would enable the three countries to disavow the use of antidumping and countervailing duties against each other, as Australia and New Zealand have done. The NAFTA partners might also strive to achieve a degree of regulatory coherence that has so far eluded the United States and the European Union in their efforts to forge a transatlantic agreement. NAFTA negotiators could permit like-minded countries, notably the members of the Pacific Alliance (Chile, Colombia, and Peru, as well as Mexico), all of which are already free trade agreement partners of the United States, to join NAFTA. ...

"[T]rade agreements are inappropriate and ineffective vehicles for attempting to reduce trade imbalances. The reason is that external imbalances are created by internal macroeconomic imbalances and can be remedied only by changes in the latter. Hence continued US insistence on cutting its trade deficit, especially via bilateral efforts with Mexico, would almost surely lead to dissatisfaction with the outcome and a potential blowup of the entire agreement. Taking the concern about trade deficits at face value, moreover, is a prescription for deadlock with Canada and Mexico, both of which run global trade and current account deficits on the same order of magnitude as the United States. Hence they properly view themselves as deficit countries that need to strengthen, not weaken, their external economic positions. They are most unlikely to accede to US demands to strengthen its external position at their expense, even if the economics were to make that possible, and can in fact be expected to argue (correctly) that the three North American deficit countries should work together to improve their joint and several external positions with the rest of the world.
Those interested in NAFTA and the possibility of an emerging multipolar world economy might wish to check some earlier posts:

Thursday, August 10, 2017

The US Fiscal Outlook

Pretty much everyone agrees that the US fiscal outlook for the long run--a few decades into the future--looks grim unless changes are made. Here are estimates of the ratio of accumulated federal debt/GDP throughout US history, and projected up through 2050, from a Congressional Budget Office report in March 2017. The spikes of government debt during wartime, the Great Depression, and the Reagan and Obama administrations are clear. The trajectory forecast would take US government debt outside past experience.


Alan J. Auerbach and William G. Gale set the stage for the discussion that needs to happen in "The Fiscal Outlook In a Period of Policy Uncertainty," written for the Tax Policy Center (August 7, 2017). Douglas W. Elmendorf and Louise M. Sheiner also tackle these issues in "Federal Budget Policy with an Aging Population and Persistently Low Interest Rates," in the Summer 2017 issue of the Journal of Economic Perspectives (31:3, pp. 175-194).

Auerbach and Gale summarize their theme straightforwardly:

"Budget deficits appear manageable in the short run, but the nation’s debt-GDP ratio is already high relative to historical norms, and even under optimistic assumptions,  both measures will rise in the future. Sustained deficits and rising federal debt will crowd out  future investment, reduce prospects for economic growth, and impose burdens on future generations. ...
"For example, we find that just to ensure that the debt-GDP ratio in 2047 does not exceed the current level would require a combination of immediate and permanent spending cuts and/or tax increases totaling 3.2 percent of GDP. This represents about a 16 percent cut in non-interest spending or a 19 percent increase in tax revenues relative to current levels. To return the debt-GDP ratio in 2047 to 36 percent, its average in the 50 years preceding the Great Recession in 2007-9, would require immediate and permanent spending cuts or tax increases of 4.6 percent of GDP. The longer policy makers wait to institute fiscal adjustments, the larger those adjustments would have to be to reach a given debt-GDP ratio target in a given year. While the numbers above are projections, not predictions, they nonetheless constitute the fiscal backdrop against which potentially ambitious new tax and spending proposals should be considered." 
Auerbach and Gale go through a variety of ways of projecting future deficits, but the overall message that there is a long-run reason for concern keeps coming through. They also offer a useful reminder that even if the proximate cause of the higher federal debt burden is projections for higher spending on entitlement and especially health programs, there are a number of cases in which addressing a problem by reversing its cause doesn't make sense. As I sometimes say, "When someone is hit by a car, you don't fix their injury by reversing the cause--that is, backing up the car over their body." As Auerbach and Gale write:
"Looking toward policy solutions, it is useful to emphasize that even if the main driver of long-term fiscal imbalances is the growth of entitlement benefits, this does not mean that the only solutions are some combination of benefit cuts now and benefit cuts in the future. For example, when budget surpluses began to emerge in the late 1990s, President Clinton devised a plan to use the funds to “Save Social Security First.” Without judging the merits of that particular plan, our point is that Clinton recognized that Social Security faced long-term shortfalls and, rather than ignoring those shortfalls, aimed to address the problem in a way that went beyond simply cutting benefits. A more general point is that addressing entitlement funding imbalances can be justified precisely because one wants to preserve and enhance the programs, not just because one might want to reduce the size of the programs. Likewise, addressing these imbalances may involve reforming the structure of other spending, raising or restructuring revenues, or creating new programs, as well as simply cutting existing benefits. Nor do spending cuts or tax changes need to be across the board. Policy makers should make choices among programs. For example, more investment in infrastructure or children’s programs could be provided, even in the context of overall spending reductions." 
Elmendorf and Sheiner tackle a different aspect of the same question. They agree that federal deficits are on "an unsustainable path." However, they also note that interest rates are very low, which offers an opportunity for federal borrowing aimed at infrastructure and long-run investments. They write:
"Both market readings and detailed analyses by a number of researchers suggest that Treasury interest rates are likely to remain well below their historical norms for years to come, which represents a sea change for budget policy. We argue that many—though not all—of the factors that may be contributing to the historically low level of interest rates imply that both federal debt and federal investment should be substantially larger than they would be otherwise. We conclude that, although significant policy changes to reduce federal budget deficits ultimately will be needed, they do not have to be implemented right away. Instead, the focus of federal budget policy over the coming decade should be to increase federal investment while enacting changes in federal spending and taxes that will reduce deficits gradually over time."
As a focused argument in economic reasoning, Elmendorf and Sheiner make a strong case. As a matter of political economy, it's trickier, because one can raise at least three questions.

1) If the US political system decides not to focus on deficit-reduction now, is it capable of focusing the additional spending on investment that will raise long-run growth, or will the additional budget flexibility just lead to more transfer payments?

2) If the US political system doesn't focus on deficit reduction in the near-term, then in the medium-term, roughly a decade from now, it will need to preside over even greater budget changes (as Auerbach and Gale explain) to avoid the outcome that everyone agrees is unsustainable. It would be a hard political U-turn to shift from larger deficits for investment in the present to taking budgetary steps in the future that will offset that additional borrowing, and more besides.

3) The theoretical case for enacting changes now that will have the effect of holding down the increase in deficits in the long run is strong. But in practical terms, just what these changes should be is less clear. For example, Congress could pass a law which places limits on, say, the level of government health care spending from 2030 to 2040, but there's no reason to believe that those limits will have any actual force when those years arrive. There are a few changes, like phasing in an older retirement age or a change in benefit formulas for Social Security, that might have a better chance of persisting. It seems useful to think more about budgetary policies that could be enacted in the present, but would have most of their effect after a long-term phase-in, and would be relatively resistant to future political tinkering.

It's not exactly news that democratic political systems are continually enticed to focus on the present and push costs into the future in a wide range of contexts: public borrowing, pensions, environment, and others.

Wednesday, August 9, 2017

William Playfair: Inventor of the Bar Graph, Line Graph, and Pie Chart

William Playfair (1759-1823) wasn't sure himself whether he had actually invented the bar graph and the line graph. So after he had published The Commercial and Political Atlas in 1786, he kept an eye out for other examples. After more than a decade of looking, but not finding any predecessors, he declared himself to be the inventor in his 1798 book Lineal Arithmetic, where he wrote (pp. 6-7):
"I confess I was very anxious to find out if I was actually the first who applied the principles of geometry to matters of finance, as it had long before been applied to chronology with great success. I am now satisfied, upon due enquiry, that I was the first; for during 11 years I have never been able to learn that anything of a similar nature had ever before been produced.

"To those who have studied geography, or any branch of mathematics, these charts will be perfectly intelligible. To such, however, as have not, a short explanation may be necessary.

"The advantage proposed by those charts, is not that of giving a more accurate statement than by figures, but it is to give a more simple and permanent idea of the gradual progress and comparative amounts, at different periods, by presenting to the eye a figure, the proportions of which correspond with the amount of the sums intended to be expressed.

"As the eye is the best judge of proportion, being able to estimate it with more quickness and accuracy than any of our other organs, it follows that wherever relative quantities are in question, a gradual increase or decrease of any revenue, receipt or expenditure of money, or other value, is to be stated, this mode of representing it is peculiarly applicable; it gives a simple, accurate, and permanent idea, by giving form and shape to a number of separate ideas, which are otherwise abstract and unconnected. In a numerical table there are as many distinct ideas given, and to be remembered, as there are sums, the order and progression of those sums, therefore, are also to be recollected by another effort of memory, while this mode unites proportion, progression, and quantity all under one simple impression of vision, and consequently one act of memory."
Cara Giaimo provides an overview of Playfair's story in "The Scottish Scoundrel Who Changed How We See Data: When he wasn’t blackmailing lords and being sued for libel, William Playfair invented the pie chart, the bar graph, and the line graph," appearing in Atlas Obscura (June 28, 2016). Giaimo describes Playfair as a "near-criminal rascal." He apprenticed with James Watt, of steam engine fame, failed at silversmithing, falsely claimed to have invented the semaphore telegraph, tried blackmailing a Scottish lord, sold tracts of American land he didn't actually own to French nobility, and died in poverty and obscurity. For some additional detail on Playfair's colorful life, Giaimo links to a 1997 article, "Who Was Playfair?" by Ian Spence and Howard Wainer.

But for social scientists, what's interesting is that Playfair pushed back against the style of argument of his time--mainly verbal persuasion and perhaps a few tables--and invented these graphs. For example, here's the first bar chart, showing Scotland's trading partners.



Here's an early line graph from Playfair's 1786 atlas, showing England's imports and exports to Denmark & Norway in the 18th century.

And Playfair wasn't done. In his 1801 book The Statistical Breviary, he invented the pie chart. It appears in the middle of a group of other circular charts, and shows Turkish land holdings. Moreover, Playfair hand-colored the "slices" of the pie, thus initiating the idea of color-coding. Here's the overall page, followed by a close-up of the pie chart itself.

The first pie chart, drawn among other circular charts by Playfair in 1801, and illustrating the Turkish Empire's land holdings.


I suspect that the line graph, bar graph, and pie chart were--like so many inventions--something that would have been invented during this time frame by someone, sooner rather than later. But Playfair was first, and deserves the credit.

Homage: I ran across the Giaimo article thanks to Tyler Cowen and the always-intriguing Marginal Revolution blog.

Tuesday, August 8, 2017

Negative Interest Rates: Evidence and Practicalities

Seven central banks around the world have lowered the interest rate that they use to implement monetary policy to a negative rate: along with the very prominent European Central Bank and Bank of Japan, the others include the central banks of Bulgaria, Denmark, Hungary, Sweden, and Switzerland. How is this working out? When (not if) the next recession hits, are negative interest rates a tool that might be used by the US Federal Reserve? The IMF has issued a staff report on "Negative Interest Rate Policies--Initial Experiences and Assessments" (August 2017). In the Summer 2017 issue of the Journal of Economic Perspectives, Kenneth Rogoff explores the arguments for negative interest rates (as opposed to other policy options) and practical methods of moving toward such a policy in "Dealing with Monetary Paralysis at the Zero Bound" (31:3, pp. 46-77).

When (not if) the next recession comes, monetary policy is likely to face a hard problem. For most of the last few decades, the standard response of central banks during a recession has been to reduce the policy interest rate under their control by 4-5 percentage points. For example, this is how the US Federal Reserve cut its interest rates in response to the recessions that started in 1990, 2001, and 2007.

The problem is that when (not if) the next recession hits, reducing interest rates in this traditional way will not be practical. As you can see, the policy interest rate has crept up to about 1%, but that's not high enough to allow for an interest rate cut of 4-5 percentage points without running into the "zero lower bound."

The problem of the zero lower bound seems unlikely to go away. A nominal interest rate can be divided into the amount that reflects inflation and the remaining "real" interest rate--and both are low. Inflation has been rock-bottom now for about 20 years, even as the economy has moved up and down, leading even Fed chair Janet Yellen to propose that economists need to study "What determines inflation?" Real interest rates have been falling, and seem likely to remain low. The Fed is slowly raising its federal funds interest rate, but there is no current prospect that it will move back to the range of, say, 4-5% or more. Thus, when (not if) the next recession hits, it will be impossible to use standard monetary tools to cut that interest rate by the usual 4-5 percentage points.

What macroeconomic policy tools will the government have when (not if) the next recession hits? Fiscal policy tools like cutting taxes or raising spending remain possible, although with the Congressional Budget Office forecasting a future of government debt rising to unsustainable levels during the next few decades, this tool may need to be used with care. Hitting the zero bound is why the Fed and other central banks turned to "quantitative easing," where the central bank buys assets like government or private debt, although this raises obvious problems of what assets to buy, how much of these assets to buy--and the likelihood of political intervention in these decisions.

Thus, some central banks have taken their policy interest rates into negative territory. As the figure shows, the Bank of Denmark went negative in 2012, while a number of others did so in 2014 and 2015.

There are a number of concerns with negative interest rates. Will negative interest rates be transmitted through the economy in a similar way to traditional reductions in interest rates? Will negative interest rates weaken the banking sector? What sort of financial innovations might happen as investors seek to avoid being affected by negative rates? The IMF staff report argues that so far, the evidence is reasonably positive:
"There is some evidence of a decline in loan and bond rates following the implementation of  NIRPs [negative interest rate policies]. Banks’ profit margins have remained mostly unchanged. And there have not been significant shifts to physical cash. That said, deeper cuts are likely to entail diminishing returns, as interest rates reach their “true” lower bound (at which point agents shift into cash holdings). And pressure on banks may prove greater; especially in systems with larger shares of indexed loans and where banks compete more directly with bond markets and non-bank credit providers. ... On balance, the limits to NIRPs point to the need to rely more on fiscal policy, structural reforms, and financial sector policies to stimulate aggregate demand, safeguard financial stability, and strengthen monetary policy transmission."
For those who instinctively recoil from the notion of a negative interest rate, it's perhaps useful to remember that it has occurred quite often in recent decades. Any time someone is locked into paying or receiving a fixed rate of interest, and then sees inflation move up, a negative real interest rate results. Thus, back in the 1970s and early 1980s, lots of Americans were receiving negative interest rates if they had money in bank accounts or Treasury bonds, and were paying negative interest rates if they already had a fixed-rate mortgage. In short, the innovation here isn't that real, inflation-adjusted interest rates can be negative, but rather that a nominal interest rate is negative.
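The nominal/real distinction in this paragraph is just the standard Fisher relation. Here is a minimal sketch in Python (the function name and the sample rates are mine, chosen to echo the 1970s scenario, not historical data):

```python
def real_rate(nominal: float, inflation: float) -> float:
    """Exact Fisher relation: (1 + real) = (1 + nominal) / (1 + inflation)."""
    return (1 + nominal) / (1 + inflation) - 1

# A 1970s-style saver: a positive nominal rate on a bank account (5%)
# combined with higher inflation (9%) yields a negative real return.
print(f"real return: {real_rate(0.05, 0.09):.2%}")

# The recent innovation: even with zero inflation, the *nominal* rate
# itself is set below zero.
print(f"real return: {real_rate(-0.005, 0.0):.2%}")
```

The first case shows the familiar 1970s pattern (negative real, positive nominal); the second shows why today's policy is novel (negative nominal outright).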

It's also worth remembering that this policy interest rate is related to the everyday interest rates that people and firms pay and receive, but it's not the same. The interest rates for borrowers, for example, are also affected by underlying factors like risk and collateral. In short, negative policy interest rates do mean downward pressure on interest rates, but they don't mean that the credit card company is going to be paying you if you charge more on your credit card, or that negative interest will start eating away your home mortgage.

Thus, the existing evidence on negative interest rates to this point shows that holding the policy interest rate a few tenths of a percentage point below zero is possible, and can be sustained for several years. It doesn't show in a direct way how banks, households, and the economy would react if negative nominal interest rates became larger and more widespread through the economy.

An obvious issue with negative interest rates, and a focus of the IMF report, is what happens if people and firms decide to hold massive amounts of cash, which pays a zero interest rate, to avoid the negative interest rate. In Kenneth Rogoff's paper in the Summer 2017 issue of JEP, he makes the case for the practicality of moving gradually to a dual-currency system, where electronic money is the "real" currency and paper money trades with electronic money at a certain "exchange rate."  Rogoff writes:
"The idea of one country having two different currencies with an exchange rate between them may seem implausible, but the basics are not difficult to explain. The first step in setting up a dual currency system would be for the government to declare that the “real” currency is electronic bank reserves and that all government contracts, taxes, and payments are to be denominated in electronic dollars. As we have already noted, paying negative interest on electronic money or bank reserves is a nonissue. Say then that the government wants to set a policy interest rate of negative 3 percent to combat a financial crisis. To stop a run into paper currency, it would simultaneously announce that the exchange rate on paper currency in terms of electronic bank reserves would depreciate at 3 percent per year. For example, after a year, the central bank would give only .97 electronic dollars for one paper dollar; after two years, it would give back only .94. ...

"In most advanced countries, private agents are free to contract on whatever indexation scheme they prefer; this is not a condition that can be imposed by fiat. If the private sector does not convert to electronic currency, the zero bound would re-emerge since it still exists for paper currency. Finally, one must consider that after a period of negative interest rates, paper and electronic currency would no longer trade at par, which would be an inconvenience in normal times. Restoring par would require a period of paying positive interest rates on electronic reserves, which might potentially interfere with other monetary goals."
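The exchange-rate arithmetic in Rogoff's example is simple compound depreciation of paper currency against electronic reserves. A minimal sketch, assuming his illustrative -3 percent policy rate (the function name `paper_value` is my label, not Rogoff's):

```python
def paper_value(policy_rate: float, years: int) -> float:
    """Electronic dollars received per paper dollar after `years` of
    depreciation at the (negative) policy rate, e.g. -0.03 for -3%."""
    return (1 + policy_rate) ** years

# Rogoff's example: a -3% policy rate.
print(round(paper_value(-0.03, 1), 2))  # 0.97 after one year
print(round(paper_value(-0.03, 2), 2))  # 0.94 after two years
```

The compounding reproduces the figures in the quoted passage: 0.97 after one year, and 0.97 squared (about 0.94) after two, which is what removes the incentive to flee into paper cash.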
Rogoff recognizes that negative interest rates raise a number of practical and economic problems, including issues of regulatory, accounting, and tax policy. But from his perspective, negative interest rates are the best of the alternatives when a central bank faces the problem of a zero lower bound on interest rates. For example, quantitative easing only seems to have mild effects, while exposing the central bank to political pressures about who gets the loans from the central bank. Re-setting the central bank inflation target from 2% to 4% might help push up nominal interest rates, and thus allow those rates to be cut in a future recession while remaining above-zero, but given that central banks have spent decades establishing their goal of 2% inflation in the minds and expectations of financial markets, such a shift isn't to be contemplated lightly. Looking at these and other policy options--like all countries simultaneously trying to weaken their currencies in order to boost exports--Rogoff argues that negative interest rates are the simplest and cleanest option, with the best chance of working well.

From my own point of view, negative policy interest rates are one of those subjects that literally never crossed my mind up until about 2009. When the central banks of smaller economies like Denmark and Switzerland first used negative policy interest rates, the main goal seemed to be to ensure that the exchange rates of their currencies didn't soar. I wasn't quite ready to draw lessons for the US Federal Reserve from the Swiss National Bank or Danmarks Nationalbank. But when the Bank of Japan and the European Central Bank started employing mildly negative interest rates, and it seemed to be working without major glitches, it became clear that more serious attention needed to be paid. I remain dubious about interest rates in the range of negative 3-5%, but my reasons are less about technical economics and more about potential counterreactions.

Back in the 1970s, people put up with the idea that the inflation rate was higher than the interest on their bank account or on Treasury bonds, but the nominal interest rate they received was still positive. Maybe the public in other countries would accept a situation in which their bank accounts were eroded by 3-5% per year by negative interest rates, but I have a hard time imagining that this would fly in a US political context. In an economy where negative interest rates are common, I would also expect large financial institutions like pension funds, insurance companies, and banks to make strenuous efforts to sidestep their effects. I've reached the point where I'm willing to consider negative interest rates as a serious possibility, but I suspect that the practical problems and issues of substantially negative interest rates are at this point underestimated.