Sunday, July 23, 2017

The Price of Health Care

Even if you are only a little bit familiar with the different health care systems around the world, you probably know that America spends more on health care than any other country in the OECD, both per capita and as a percentage of GDP. With such high spending, you would expect outcomes -- such as life expectancy or amenable mortality (basically preventable deaths) -- to be much better than in countries that spend less. Strangely, as data from a recent paper on the German health care system shows, this is not the case.
In spite of massive spending increases and a relatively high baseline in 2000, the US remains significantly behind other developed countries in terms of preventable deaths. On top of this, the improvement in amenable mortality for each dollar of new spending is a lot lower than in the other countries.

This is where purchasing power parities (PPPs) come in. High prices for various health care related goods and services such as prescription medication or MRI scans could explain much of America's elevated health care costs, rather than high quantity/quality of care. If this were the case, that would explain why American health care spending continues to rise rapidly without significant improvement in outcomes.

Finding PPP data for different countries would shed light on this because it would give us a good comparison of the quantity of health care that each country consumes as opposed to the amount of money it spends. If the quantity of health care per capita in the US was similar to or less than other countries, then that would explain the lackluster outcomes it experiences.
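To make the idea concrete, here's a quick sketch of the adjustment (the function and all of the numbers are hypothetical, purely for illustration): dividing per capita spending by a health-specific price level index gives a rough "volume" of care actually consumed.

```python
def ppp_adjusted_volume(spending_per_capita, price_index):
    """Deflate nominal health spending by a health-specific PPP price index
    (1.0 = the reference price level, e.g. the OECD average)."""
    return spending_per_capita / price_index

# Made-up numbers: high US prices shrink the apparent quantity gap
us_volume = ppp_adjusted_volume(9000.0, 1.30)     # high spending, high prices
other_volume = ppp_adjusted_volume(5000.0, 0.90)  # lower spending, lower prices
```

Even with these invented figures, the US still consumes more care in volume terms, which matches the report's finding that prices explain only part of the spending gap.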

Until recently there was no data that I could find for health care specific PPPs outside of Europe, but apparently in May the OECD and Eurostat published a report that updated the previous estimates with data from the US and a few other non-European countries. Figure 4 in the report shows that higher prices explain some, but by no means all, of the discrepancy between outcomes and spending in the US health care system.
Alternative explanations are needed for why the quality of health care lags spending so much in the US. Wasteful spending brought on by the gratuitous use of expensive tests, procedures, and drugs probably makes a big difference here. Also, if there were a single payer insurance market, the government would have significant leverage to lower prices, but it's unclear how much can be gained from fixing incentives and switching to single payer.

Health care spending in America is also highly concentrated among high spenders, suggesting that programs that increase spending on people who currently don't have insurance (and therefore don't spend much right now) won't necessarily do much to solve the problem. Reducing total spending might require curtailing superfluous spending on things like cosmetic surgery and rationing expensive procedures that many people depend on.

Ultimately, the US has a lot to gain from health care reform that increases coverage and -- hopefully -- reduces costs, but we should all be wary of thinking we can get a free lunch on health care.

Friday, July 14, 2017

East Asia and Economic Convergence

Japan's impressive post-WWII economic growth is a prime example of economic convergence -- Japan's GDP per capita went from a little under 40% of the US level in 1960 to over 90% in the early 1990s. This is a classic prediction of the Solow growth model: poorer countries grow quickly as they invest in capital and slowly catch up to rich countries like the United States.
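This catch-up logic is easy to sketch with the textbook Solow accumulation equation (the parameter values below are illustrative, not calibrated to Japan or the US):

```python
def solow_path(k0, s=0.3, delta=0.05, alpha=0.3, periods=50):
    """Capital per worker under k' = (1 - delta) * k + s * k**alpha."""
    path = [k0]
    for _ in range(periods):
        k = path[-1]
        path.append((1 - delta) * k + s * k ** alpha)
    return path

# A country starting far below the steady state grows much faster at first,
# so its gap with a richer country shrinks over time.
poor = solow_path(1.0)
rich = solow_path(10.0)
```

Both paths head toward the same steady state, but the poorer economy's early growth rates are far higher -- exactly the convergence pattern Japan traced out until the 1990s.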
Then, all of a sudden, a recession in 1996 and the Asian Financial Crisis in 1997 hit, and Japan has been stuck at roughly 73% of US GDP per capita ever since (the data from FRED end in 2010, but the World Bank has data from 1990 to 2015 that show the same thing). A lot of ink has been spilled in pursuit of an answer to the question of why Japan has settled into a permanently poorer equilibrium, and I'm not sure if I have much to add. I am highly skeptical that demand side factors can depress an economy for more than two decades, especially when better cyclical indicators like the unemployment and employment rates tell the opposite story. Japan's demographic transition is also pretty important -- the working age population has been shrinking since the mid '90s, meaning that the number of workers per person (and consequently GDP per person) has been under downward pressure for quite a long time.

Regardless, a 20% reduction in GDP per capita relative to the US is pretty huge, and it makes me question my expectation that technologically advanced countries whose institutions don't prevent growth from taking place (unlike, say, North Korea or Zimbabwe) will unconditionally converge with the wealthiest countries. Thinking about this led me to the other major wealthy East Asian economies: Taiwan, Hong Kong, and South Korea (Hong Kong is technically a special administrative region of China, but some combination of capitalism and former British rule makes it both rich and free enough to count as a separate country in this case).
As you can see, these three countries look a lot like Japan did at various points in the past. If you compare the year at which each country was in about the same position as Japan in 1960 (that is, about 40% of US GDP per capita), you can see how far behind Japan they are in terms of convergence. Hong Kong is the furthest along, although it's about 15 years behind Japan in its process of convergence, while Taiwan and Korea come in about 16 and 20 years behind Hong Kong, respectively.

Hong Kong and Japan are the two more interesting cases here: both experienced large slumps in the late '90s that lasted well into the 2000s, but then things start to diverge. In the mid 2000s Hong Kong starts to take off while Japan keeps plugging along at around three quarters of US GDP per capita. The real question is which is the exception and which is the rule. As a resident of Japan, a small selfish part of me hopes that Taiwan and Korea will eventually get stuck at around the same level as Japan, but it's really more likely that Japan is mired in its own problems and will continue to stagnate while the other countries grow.

This is easier to see when you look at labor productivity -- GDP per hour worked -- instead of GDP per capita, because things that affect hours worked per employee or the overall employment rate can actually misrepresent the state of convergence.
Labor productivity tells a slightly different story than GDP per capita. While Japan still shows stagnation at around 70% of US productivity after 1996, Hong Kong's recent impressive growth in GDP per capita seems to have been driven by a large increase in either employment rates or hours worked, and Koreans have made up for slower productivity growth relative to Taiwan by working longer hours.

Japan's collapse in GDP per capita in the late '90s seems to reflect labor market problems unique to Japan rather than evidence against convergence. Average hours worked in Japan have been declining for decades as people unable to find full time employment switch to low paying part time jobs ("バイト").
This is probably a symptom of an economy that has been weak for more than 20 years -- the unemployment rate only recently fell back to the level of the late 1990s -- and has little to do with Hong Kong, Korea, or Taiwan. All four regions do face low fertility rates and will likely start being affected by the same demographic transition as Japan over the next few decades, but as long as they avoid a mass transition to part time employment they should look forward to some combination of fewer hours and higher GDP.

The reason for Japan's slowdown in productivity growth still evades me. I find it hard to believe that it's normal for a country to just stop converging with productivity 30% lower than the US, but I guess Hong Kong, Korea, and Taiwan will be a test of this as they either continue to grow or stagnate relative to America over the next few years.

Monday, June 26, 2017

How Healthy is the US Labor Market?

The plunge in labor force participation since the great recession in 2008 has led many to rightly question how well the unemployment rate -- the percent of the labor force that has looked for work in the last 4 weeks -- measures the true level of "slack" in the labor market.

Much (most) of the decline in labor force participation can be explained by the retirement of baby boomers, the oldest of which turned 65 in 2010, and by people between the ages of 16 and 24 choosing to focus on education instead of working.
The decline in labor force participation among young people is something that was occurring before the great recession. Even though the recession seems to have sped up the decline, I'm pretty certain that most of the people between 16 and 24 who left the labor force in 2008 and 2009, or simply haven't joined since then (I'm in that group), wouldn't rejoin even if there were no slack in the labor market.

While the labor force participation rate for Americans 55 and older didn't actually decline in the great recession, it stopped a decades-long trend upward and has flattened out since then.
The fact that the participation rate for older Americans has settled at a lower level than participation for the general public, and that the share of the civilian noninstitutional population (basically everyone above the age of 16 who isn't deployed in the military or in prison) that is older than 55 years is increasing, has put significant downward pressure on the total labor force participation rate.
That being said, it's hard to be certain whether or not the recession has had lasting cyclical impacts on labor force participation, which warrants using statistics other than the unemployment rate to gauge the strength of the labor market.

One such popular measure is the broadest measure of underemployment put out by the Bureau of Labor Statistics (BLS) -- total unemployed, plus all marginally attached workers plus total employed part time for economic reasons -- or the U-6 unemployment rate.
I don't really like this as a measure of labor market strength, though: it doesn't count people who retired earlier than they wanted to as a result of the recession; people are only counted as "marginally attached" if they have searched for work in the last 12 months; and part time workers aren't really unemployed (there is an alternative measure that excludes involuntary part time workers, but it still has the other problems I mentioned).

Some people like to look at the employment rate for the so called "prime age" population between the ages of 25 and 54 (I don't know why the cutoff is 54; going up to 65 makes way more sense to me) in order to weed out the effects of aging and lower youth participation on the labor market.
This is a relatively good solution for the years after 1990, and it does show that the labor market is considerably weaker than the unemployment rate suggests (although not so weak that we need to "prime the pump"), but it has trouble before 1990 because women were still joining the labor force en masse for most of the latter half of the 20th century.

Basically every statistic that you can easily get from BLS data has a problem like this, so it's really hard to get a good measure of how healthy the labor market is, but there is a solution. The Congressional Budget Office (CBO) looks at the demographic composition of the working age population and comes up with what it thinks the labor force participation rate would be at full employment. It calls this measure the "potential labor force", which tries to estimate the movement in labor force participation caused by gender and aging and can then be used to estimate the cyclical component of labor force participation.
It is then possible to find the "adjusted" unemployment rate with the CBO's estimate of the potential labor force. The above chart shows the actual unemployment rate reported by the BLS as well as my calculation of the adjusted unemployment rate using the potential labor force from the CBO's 2007 and 2017 data for "Potential GDP and Underlying Inputs" (all CBO data is available here). The dashed grey line is the natural rate of unemployment -- that is the unemployment rate that is consistent with full employment -- according to the CBO.
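The adjustment itself is just arithmetic: treat the CBO's potential labor force as the denominator, so that people who dropped out for cyclical reasons count as unemployed. A sketch with made-up magnitudes (these are not actual CBO or BLS figures):

```python
def adjusted_unemployment_rate(employed, potential_labor_force):
    """Unemployment rate if the labor force were at its full-employment size,
    so discouraged workers who dropped out count as unemployed."""
    return 1.0 - employed / potential_labor_force

# Hypothetical magnitudes (millions): a depressed actual labor force hides slack
employed = 150.0
actual_labor_force = 155.0
potential_labor_force = 158.0

official_rate = 1.0 - employed / actual_labor_force                          # ~3.2%
adjusted_rate = adjusted_unemployment_rate(employed, potential_labor_force)  # ~5.1%
```

The gap between the two rates is exactly the cyclical shortfall in participation showing up as hidden unemployment.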

Since I was only able to find annual "potential labor force" figures, the estimates only extend to 2016, but both the 2007 and 2017 figures are broadly consistent with the prime age employment rate: the unemployment rate overstates the health of the labor market by between 1 and 2 percentage points (depending on whether you use the 2007 or 2017 value for the potential labor force). This is similar to where we were in 2003 or 1994, so while there's no real cause to worry about joblessness right now, the recovery isn't completely over yet either. As a side note, tax cuts are even more of a stupid idea now than they were in 2003, but that issue deserves a whole post of its own.

Wednesday, June 21, 2017

Is Raising the Inflation Target Possible (Right Now)?

Ever since a group of 22 economists wrote an open letter to the Federal Reserve advocating for an increase in the inflation target, economics blogs have been abuzz with discussion about the merits of targeting an inflation rate greater than 2%.

While I don't have much to say about the value of changing the inflation target (I pretty much agree with the authors of the letter), I do think there are several practical issues that the Fed would have to deal with if it did want to start targeting e.g. 4% PCE inflation.

As I see it there are currently two obstacles that make it practically hard for the Fed to increase its inflation target: 1) the Fed doesn't have enough credibility to raise inflation, and would squander what little it has if it tried, and 2) interest rates are so low that providing the necessary stimulus to get inflation to 4% is basically impossible in the short to medium term.

The Fed adopted a 2% inflation target in 2012 after unofficially targeting 2% inflation for years up until that point, but ironically core PCE inflation has not reached 2% since March of 2012 (the headline rate was briefly 2.15% this February).

This consistent undershoot of the inflation target has begun to take its toll on the Fed's credibility, with both expected inflation as measured by the University of Michigan Survey of Consumers and the spread between normal treasury securities and inflation protected ones falling below normal levels after 2014.
Since the Fed can't even meet its own low inflation target, what makes anyone think it can meet a higher one? Before we even think about raising the inflation target, we should make sure the Fed is actually capable and willing to let inflation reach 2%. If that doesn't happen, I'm skeptical that markets and consumers (who are probably too backward looking to expect inflation until they've been experiencing it for a while anyway) will take an increased inflation target seriously.

Beyond just announcing that it is now targeting a higher inflation rate, say 4%, the Fed would have to take concrete action to raise inflation to its new target. This would involve lowering interest rates considerably, because inflation would now be about 2.4% below target instead of just 0.4%. A useful way of thinking about this is comparing a Taylor Rule with a 2% inflation target and one with a 4% inflation target.

Here the blue line is the actual Federal Funds rate, while the red line is what a Taylor Rule -- with a coefficient of 1.5 on inflation and 1 on the "output gap" (in this case defined as the difference between the prime age employment rate and its "full employment" level of 80%) -- would suggest the interest rate be with a 2% inflation target. The green line shows what interest rate we should expect the Fed to set given a 4% inflation target. Basically, in order to commit to raising inflation to 4%, the Fed would have to either find a way to make interest rates significantly negative or otherwise go back to the zero lower bound for the foreseeable future.
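That comparison can be sketched as follows; the 2% neutral real rate and the sample inputs are assumptions of mine for illustration, not estimates:

```python
def taylor_rate(inflation, prime_age_emp_rate, target, r_star=2.0):
    """Taylor rule with a coefficient of 1.5 on the inflation gap and
    1.0 on the output gap (prime age employment rate minus 80%)."""
    output_gap = prime_age_emp_rate - 80.0
    return r_star + inflation + 1.5 * (inflation - target) + 1.0 * output_gap

# With, say, 1.6% inflation and a 78.5% prime age employment rate:
rate_2pct_target = taylor_rate(1.6, 78.5, target=2.0)  # modestly positive
rate_4pct_target = taylor_rate(1.6, 78.5, target=4.0)  # well below zero
```

Raising the target mechanically pulls the prescribed rate down by 1.5 times the increase in the inflation gap, which is why the green line sits so far below the red one.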

I know that even the people who signed the letter weren't suggesting an immediate switch to a higher inflation target, but to the extent that they want the change to happen in the next few years, during which the economic climate will probably remain about the same, it's unclear whether a quick increase in inflation is actually possible.

Thursday, June 15, 2017

Microfoundations, the Euler Equation, and the Phillips Curve

Late last night (early this morning for everyone in America?) I had a conversation on Twitter with Noah Smith that got a lot of attention, and later turned into a discussion with Jonathan Hyde about the New Keynesian Phillips Curve.

The actual thread is pretty long and confusing, so I'll just summarize it here. First in a sort of tongue-in-cheek way I asked Noah why economists model agents rationally when they are evidently not rational. He and I then went back and forth about empirical evidence, and I said "Well the Euler equation at least is patently wrong, and the NKPC [New Keynesian Phillips Curve] has trouble explaining inflation since 2008," which led to everyone's favorite tweet of the night (after my joke about how the models in Physics actually work):

This is where Jonathan came to the defense of the NKPC:


While his comment is not wrong, I think it points to one of the main problems with macro: the insistence on modelling the underlying factors behind observed phenomena like the Phillips Curve, aka the obsession with microfoundations.

The expectations augmented Phillips Curve has been around since the late 1960s, but microfoundations meant that economists spent years trying to figure out why rational optimizing firms would not adjust their prices instantly -- for a while the debate was between Taylor contracts and menu costs as the source of nominal rigidity -- before eventually settling on the mathematically tractable hand wave that is Calvo pricing.

The Calvo model basically says that firms face a constant and exogenous probability that they will not be able to change prices in the next period, so they might set their price higher or lower this period than they otherwise would so that they won't be stuck with an extremely suboptimal price next period. This obviously doesn't qualify as a "microfoundation" in that it doesn't appeal to a friction that actually exists in the real world.
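A toy simulation makes the mechanism concrete (theta and the price levels here are arbitrary): each period every firm independently gets to reset its price with probability 1 - theta, so the average price only closes the gap to the new optimum gradually.

```python
import random

def calvo_price_path(theta, old_price, optimal_price, periods, n_firms=10000, seed=42):
    """Average price across firms when each firm may reset with prob. 1 - theta."""
    rng = random.Random(seed)
    prices = [old_price] * n_firms
    path = []
    for _ in range(periods):
        # Each firm independently draws its chance to re-optimize this period
        prices = [optimal_price if rng.random() > theta else p for p in prices]
        path.append(sum(prices) / n_firms)
    return path

# With theta = 0.75, only ~25% of firms adjust each period, so the average
# price closes about a quarter of the remaining gap per period.
path = calvo_price_path(0.75, old_price=1.0, optimal_price=2.0, periods=8)
```

That geometric adjustment is what gives the NKPC its tractable form -- nothing in the simulation corresponds to a real-world friction, which is the point of the complaint above.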

In principle, microfoundations might seem like a good idea since they help deal with the Lucas Critique that relationships that appear in the aggregate data -- like the relation between inflation and unemployment -- might disappear when they are exploited. The problem comes when you realize that models with perfectly rational utility/profit maximizing agents don't work empirically. When that fact becomes clear, there are two options: add a whole bunch of implausible assumptions about agents (like Calvo pricing or habit formation) to make the model fit the data or just go back to modelling things ad hoc.

Economics has mostly gone with the first approach in the last 30 years, adding tons of parameters (e.g., the fraction of firms that cannot change their prices in a given period, the degree of habit formation, the elasticity of demand for individual monopolistically competitive firms, etc.) to a model just so they can come up with models that behave a lot like the old ad hoc models.

I don't think that economic incentives shouldn't be considered when making modelling decisions, but frequently models with completely rational agents and utility maximization produce results that go too far. A key example of this is consumption. In my last post I showed that consumption is pretty much entirely explained by income and wealth, which is a prediction of the Permanent Income Hypothesis (PIH). Essentially, agents try to "smooth" consumption in the face of shocks to income, so they consume out of their wealth when their income falls and save when income is high. A slight amount of consumption smoothing also occurs when people become unemployed. The consumption Euler equation also predicts consumption smoothing, but to an absurd degree. Whereas Jason Smith and I found that consumption is highly responsive to changes in income, the Euler equation predicts complete consumption smoothing.

A more ad hoc consideration of economic incentives and the economic data would lead to a highly qualified version of the PIH that does a much better job empirically than a strict utility maximizing approach. The same thing goes for the Phillips Curve, where assuming rational expectations makes the relationship statistically insignificant (see the t statistic on u_gap, the gap between the unemployment rate and the Congressional Budget Office's estimate of the natural rate of unemployment, in the table below):


Ultimately, in my opinion, the empirical failures of microfounded models show that trying to rigorously model the underlying causes of relationships we see in the aggregate data is a waste of time. Business cycle theory in particular should be more empirically oriented and less focused on logical consistency.

Update: here's a link to the code where I compare the New Keynesian Phillips Curve and a backwards looking Phillips Curve to the data.

Tuesday, June 13, 2017

Modelling Consumption

I was running over things to write about over the next few weeks, and I decided to just casually see how much income and wealth explain consumption. I didn't expect to see anything spectacular, but I pulled the quarterly personal consumption expenditures series as well as disposable personal income and household and nonprofit net worth and ran a regression. The IPython notebook is here.
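Here's a minimal sketch of the kind of regression I ran, with synthetic stand-in series (the trends, coefficients, and noise are all invented) in place of the actual FRED data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Trending synthetic stand-ins for disposable income (DPI) and net worth (TNWBSHNO)
income = np.linspace(2000.0, 14000.0, 260)
wealth = 5.0 * income + rng.normal(0.0, 500.0, 260)
# "Consumption" built from both plus noise, standing in for PCEC
consumption = 0.6 * income + 0.04 * wealth + rng.normal(0.0, 50.0, 260)

# OLS of consumption on a constant, income, and wealth
X = np.column_stack([np.ones_like(income), income, wealth])
beta, *_ = np.linalg.lstsq(X, consumption, rcond=None)
resid = consumption - X @ beta
r_squared = 1.0 - (resid ** 2).sum() / ((consumption - consumption.mean()) ** 2).sum()
```

Note that strongly trending levels like these push the R squared very close to 1 almost mechanically, which foreshadows the result below.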

At first I ran the regression on all the data from 1952 on, and the result actually shocked me -- the first time I think I can say that while referring to statistics -- the R squared value was literally 1. I was too surprised to look at the p values or t-statistics for each variable, but I decided to only look at data after 1990 to see if that changed anything.

Here's the output for the regression on data after 1990:
I still suspect that I did something wrong here, but the fit is, in a word, impressive.

I also plotted the prediction given the coefficients from the regression against the actual data:

For good measure, these are the series IDs from FRED I used: Personal Consumption Expenditures: PCEC, Disposable Personal Income: DPI, Households and Nonprofit Organizations; Net Worth, Level: TNWBSHNO.

Update: I removed wealth from the regression, and the fit is still extremely high. I guess now the question is, if disposable income is such a good predictor of consumption, why does the Old Keynesian consumption function get routinely bashed for being inaccurate? Yes, I know the difference between average and marginal propensity to consume is important, but why insist on the consumption Euler equation given its almost comical level of inaccuracy while ignoring the startlingly accurate Keynesian consumption function?

The results of the regression without household wealth:

Update 2: At Jason Smith's suggestion, I looked at the correlation with first differences (really 4 quarter growth rates) and found that the R squared went down to more normal levels, but that adding wealth does actually improve the fit considerably. The importance of wealth here is a partial win for the Permanent Income Hypothesis but ultimately still a loss for the consumption Euler equation. Here are charts for the new regression:
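(As an aside, the 4-quarter growth-rate transformation is just x_t / x_{t-4} - 1; a quick sketch with a helper of my own, not from the notebook:)

```python
def four_quarter_growth(series):
    """Year-over-year growth for quarterly data: x_t / x_{t-4} - 1."""
    return [x / x_lag - 1.0 for x_lag, x in zip(series, series[4:])]

# Differencing removes the shared trend that inflates R squared in levels
growth = four_quarter_growth([100.0, 101.0, 102.0, 103.0, 104.0, 105.0])
```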


Saturday, June 10, 2017

Monetarism and the Neo-Wicksellian Framework

I know I'm about a year late to the party, but recently I have been listening to David Beckworth's Macro Musings podcast. Two interviews that particularly caught my attention were the one with Nick Rowe and the one with Brad DeLong. In the ten months since Brad and Nick were on the podcast, the world has been too preoccupied with Donald Trump's antics and, more recently, the snap general election in the UK to do much discussion of economics, but now I want to talk a little bit about the relationship between monetarism and new Keynesianism.

Both Brad and Nick argue when talking to David that new Keynesians are really all monetarists, or, more specifically, that in the 1990's everyone agreed that economic fluctuations were caused by disruptions in the demand and supply for money and that the big question was what rule should replace Milton Friedman's k% rule for monetary policy. According to Nick, the absence of money in new Keynesian models is really just implicit because if the model had no money (i.e. if it were a barter economy) agents would just "barter their way back to full employment."

I think the idea that recessions are just excess demand for money is interesting, especially since it applies even in the very Keynesian context of IS-LM. With the LM curve relating real money demand to output and the interest rate, it is clear that shifts in the IS curve are the same thing as higher demand for money at a given interest rate. If we write the LM curve as M/P = L(Y,r) = aY - br and the IS curve as Y = c - dr, then we can figure out what happens if a recession hits -- in this case that means c falls. In a normal IS-LM diagram, the IS curve would shift left, and both r and Y would fall, but if you look at it slightly differently by holding Y constant given the fall in c, you can see that a shift in the IS curve just changes the level of r for a given Y: r = (c - Y)/d. Substituting this back into the LM curve shows that money demand for a given level of Y increases when c falls: M/P = aY - br = aY - (b/d)(c - Y) = (a + b/d)Y - (b/d)c.

That might be really confusing, and I apologize for not being able to explain things as succinctly as Nick can, but basically I'm saying that recessions (drops in aggregate demand) are just increases in the demand for money (given the level of real GDP) that are not met with increases in the supply of money. Since the quantity of money demanded must equal the quantity supplied, output falls to reduce money demand to the appropriate level. That's why if c falls by, for example, 10 in the model above, then the real money supply must increase by b/d*10 for output to remain constant.
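Here's that arithmetic as a quick sketch, with made-up parameter values:

```python
def is_lm_equilibrium(real_money_supply, a, b, c, d):
    """Solve M/P = aY - br (LM) and Y = c - dr (IS) for (Y, r).
    Substituting r = (c - Y)/d into LM gives Y = (M/P + (b/d)c) / (a + b/d)."""
    Y = (real_money_supply + (b / d) * c) / (a + b / d)
    r = (c - Y) / d
    return Y, r

a, b, c, d = 0.5, 2.0, 100.0, 4.0
Y0, r0 = is_lm_equilibrium(40.0, a, b, c, d)
# Demand shock: c falls by 10; offsetting it takes (b/d) * 10 = 5 more real money
Y1, r1 = is_lm_equilibrium(40.0 + (b / d) * 10.0, a, b, c - 10.0, d)
# Output is unchanged; only the interest rate ends up lower
```

With these numbers the demand shock alone would drag output down, but expanding the real money supply by exactly (b/d) times the shock holds Y fixed while r absorbs the adjustment.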

Bringing expectations into the money demand function makes things a little more interesting. Let's say money demand increases when inflation expectations are low because, per the Fisher equation, low inflation expectations mean low nominal interest rates, and low interest rates give people less incentive to hold interest bearing assets in place of money. In this case, if something causes inflation expectations to fall precipitously, money demand will increase and, absent central bank action to increase the money supply, a recession will occur. If, as is true in most cases, central banks have control over inflation expectations in the medium to long term, they effectively have the power to shift around the demand for money. Thus, monetary policy can take two forms: open market operations and expectations management.

This is where the Neo-Wicksellian framework comes in. I've written a little bit about this before, but monetarists usually prefer the money demand/supply description of monetary policy because they see interest rates as a bad indicator of economic conditions (never mind that wild swings in money demand also make most measures of the money supply bad indicators of economic conditions). I, however, think that the monetary explanation of business cycles and the Neo-Wicksellian description are very similar, but that using interest rates has a couple of distinct advantages.

In its simplest form, the Neo-Wicksellian framework just says that if the central bank sets a nominal interest rate above the natural interest rate, there will be a recession, and vice versa. Thus, with a relatively constant natural interest rate, high interest rates lower aggregate demand while low interest rates raise it. The problem is that the natural rate of interest -- which you could think of as the level of r that keeps Y constant in the IS curve above -- fluctuates a lot, which makes interest rates look procyclical.

Another important thing to notice is that central banks can influence the natural rate by changing inflation expectations, which is basically the same as how they change money demand. In a way, the Neo-Wicksellian framework, in which recessions are equivalent to interest rates above the natural rate of interest, and the "monetarist" view, in which recessions are excess demand for money, are basically the same thing, just with a focus on different variables.

The Neo-Wicksellian view does have one advantage in my opinion, though: it more accurately shows the constraints on monetary policy. In the monetarist view, the solution to a shortfall in aggregate demand is always more money, but the Neo-Wicksellian view shows that there are constraints to monetary policy, at least in the present. If the natural interest rate falls below zero, the central bank no longer has the ability to just cut interest rates/increase the money supply to ward off a recession. At this point, the only way a central bank can end the shortfall in aggregate demand is to increase expected inflation so that the natural interest rate is no longer negative. This is why increasing the monetary base from about $800 billion to about $1.7 trillion didn't stop GDP from collapsing in 2008.

Given that the Neo-Wicksellian and monetarist frameworks both work as ways of looking at the same IS-LM model, I don't know how valid it is to say that everyone became a monetarist or that everyone became a "new Keynesian" in the 1980's and 1990's. It seems like there really isn't that much difference between the two groups in the first place. At least over the last few years, the real split in economics seems to be between people like John Cochrane -- who are skeptical of sticky prices and doubt that fiscal policy can raise aggregate demand at all -- and everyone else.
