The Robots, AI, and Unemployment Anti-FAQ

https://arbital.com/p/EliezerYudkowsky.RobotsAIUnemploymentAntiFAQ

by Eliezer Yudkowsky Mar 30 2015 updated Mar 30 2015


Q. Are the current high levels of unemployment being caused by advances in Artificial Intelligence automating away human jobs?

A. Conventional economic theory says this shouldn't happen. Suppose it costs 2 units of labor to produce a hot dog and 1 unit of labor to produce a bun, and that 30 units of labor are producing 10 hot dogs in 10 buns. If automation makes it possible to produce a hot dog using 1 unit of labor instead, conventional economics says that some people should shift from making hot dogs to buns, and the new equilibrium should be 15 hot dogs in 15 buns. On standard economic theory, improved productivity - including from automating away some jobs - should produce increased standards of living, not long-term unemployment.
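The arithmetic in that answer can be sketched in a few lines of Python (using the stylized numbers above; "units of labor" is of course a toy quantity):

```python
# Hot-dog-and-bun arithmetic from the answer above.
labor = 30                 # total units of labor available

# Before automation: 2 units per hot dog, 1 per bun, consumed in matched pairs.
cost_per_pair_before = 2 + 1
print(labor // cost_per_pair_before)   # -> 10 hot dogs in buns

# After automation halves the labor cost of a hot dog:
cost_per_pair_after = 1 + 1
print(labor // cost_per_pair_after)    # -> 15 hot dogs in buns
```

The same 30 units of labor now buy more goods, which is why the standard model predicts rising living standards rather than unemployment.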

Q. Sounds like a lovely theory. As the proverb goes, the tragedy of science is a beautiful theory slain by an ugly fact. Experiment trumps theory, and in reality unemployment is rising.

A. Sure. Except that the happy equilibrium with 15 hot dogs in buns is exactly what happened over the last four centuries, as we went from 95% of the population being farmers to 2% of the population being farmers (in agriculturally self-sufficient developed countries). We don't live in a world where 93% of the people are unemployed because 93% of the jobs went away. The naive picture - automation removes a job, so the economy has one fewer job - has not been the way the world has worked since the Industrial Revolution. The parable of the hot dog in the bun is how economies really, actually worked in real life for centuries. Automation followed by re-employment went on for literally centuries in exactly the way the standard lovely economic model said it should. The idea that there's a fixed amount of work which is destroyed by automation is known in economics as the "lump of labour" fallacy.

Q. But now people aren't being reemployed. The jobs that went away in the Great Recession aren't coming back, even as the stock market and corporate profits rise again.

A. Yes. And that's a new problem. We didn't get that when the Model T automobile mechanized the entire horse-and-buggy industry out of existence. The difficulty with supposing that automation is producing unemployment is that automation isn't new, so how can you use it to explain this new phenomenon of increasing long-term unemployment?

[Image: Baxter robot]

Q. Maybe we've finally reached the point where there's no work left to be done, or where all the jobs that people can easily be retrained into can be even more easily automated.

A. You talked about jobs going away in the Great Recession and then not coming back. Well, the Great Recession wasn't produced by a sudden increase in productivity, it was produced by… I don't want to use fancy terms like "aggregate demand shock" so let's just call it problems in the financial system. The point is, in previous recessions the jobs came back strongly once NGDP rose again. (Nominal Gross Domestic Product - roughly the total amount of money being spent in face-value dollars.) Now there's been a recession and the jobs aren't coming back (in the US and EU), even though NGDP has risen back to its previous level (at least in the US). If the problem is automation, and we didn't experience any sudden leap in automation in 2008, then why can't people get back at least the jobs they used to have, as they did in previous recessions? Something has gone wrong with the engine of reemployment.

Q. And you don't think that what's gone wrong with the engine of reemployment is that it's easier to automate the lost jobs than to hire someone new?

A. No. That's something you could say just as easily about the 'lost' jobs from hand-weaving when mechanical looms came along. Some new obstacle is preventing jobs lost in the 2008 recession from coming back. Which may indeed mean that jobs eliminated by automation are also not coming back. And new high school and college graduates entering the labor market, likewise usually a good thing for an economy, will just end up being sad and unemployed. But this must mean something new and awful is happening to the processes of employment - it's not because the kind of automation that's happening today is different from automation in the 1990s, 1980s, 1920s, or 1870s; there were skilled jobs lost then, too. It should also be noted that automation has been a comparatively small force this decade next to shifts in global trade - which have also been going on for centuries and have also previously been a hugely positive economic force. But if something is generally wrong with reemployment, then it might be possible for increased trade with China to result in permanently lost jobs within the US, in direct contrast to the way it's worked over all previous economic history. But just like new college graduates ending up unemployed, something else must be going very wrong - that wasn't going wrong in 1960 - for anything so unusual to happen!

Q. What if what's changed is that we're out of new jobs to create? What if we've already got enough hot dog buns, for every kind of hot dog bun there is in the labor market, and now AI is automating away the last jobs and the last of the demand for labor?

A. This does not square with our being unable to recover the jobs that existed before the Great Recession. Or with much of the world still living in poverty. To see how far we are from running out of possible jobs, consider that there was a time when professionals usually had personal cooks and maids - as Agatha Christie said, "When I was young I never expected to be so poor that I could not afford a servant, or so rich that I could afford a motor car."

Many people would hire personal cooks or maids if they could afford them, which is the sort of new service that ought to come into existence if other jobs were eliminated - the reason maids became less common is that they were offered better jobs, not because demand for that form of human labor stopped existing. Or to be less extreme, there are lots of businesses that would take on nearly-free employees in various occupations, if those employees could be hired at literally minimum wage and legal liability weren't an issue. Right now we haven't run out of want or use for human labor, so how could "The End of Demand" be producing unemployment right now? The fundamental fact that's driven employment over the course of previous human history is that it is a very strange state of affairs for somebody sitting around doing nothing to have nothing better to do. We do not literally have nothing better for unemployed workers to do. Our civilization is not that advanced. So we must be doing something wrong (which we weren't doing wrong in 1950).

Q. So what is wrong with "reemployment", then?

A. I know less about macroeconomics than I know about AI, but even I can see all sorts of changed circumstances which are much more plausible sources of novel employment dysfunction than the relatively steady progress of automation. Among developed countries that seem to be doing okay on reemployment, Australia hasn't had any drops in employment, and its monetary policy has kept nominal GDP growth on a much steadier keel - using the central bank to regularize the number of face-value Australian dollars being spent - which an increasing number of influential econbloggers think the US, and even more so the EU, have been getting catastrophically wrong. Though that's a long story.[1] Germany saw unemployment drop from 11% to 5% from 2006 to 2012 after implementing a series of labor market reforms, though there were other things going on during that time. (Germany has [twice the number of robots per capita][4] as the US, which probably isn't significant to its larger macroeconomic trends, but would be a strange fact if robots were the leading cause of unemployment.) Labor markets and monetary policy are both major, obvious, widely-discussed candidates for what could have changed between the 1950s and now to make reemployment harder. And though I'm not a leading econblogger, some other obvious-seeming thoughts that occur to me are:

Q. Some of those ideas sounded more plausible than others, I have to say.

A. Well, it's not like they could all be true simultaneously. There's only a fixed effect size of unemployment to be explained, so the more likely it is that any one of these factors played a big role, the less we need to suppose that all the other factors were important; and perhaps what's Really Going On is something else entirely. Furthermore, the 'real cause' isn't always the factor you want to fix. If the European Union's unemployment problems were 'originally caused' by labor market regulation, there's no rule saying that those problems couldn't be mostly fixed by instituting an NGDP level targeting regime. This might or might not work, but the point is that there's no law saying that to fix a problem you have to fix its original historical cause.

Q. Regardless, if the engine of re-employment is broken for whatever reason, then AI really is killing jobs - a marginal job automated away by advances in AI algorithms won't come back.

A. Then it's odd to see so many news articles talking about AI killing jobs, when plain old non-AI computer programming and the Internet have affected many more jobs than that. The buyer ordering books over the Internet, the spreadsheet replacing the accountant - these processes are not strongly relying on the sort of algorithms that we would usually call 'AI' or 'machine learning' or 'robotics'. The main role I can think of for actual AI algorithms being involved, is in computer vision enabling more automation. And many manufacturing jobs were already automated by robotic arms even before robotic vision came along. Most computer programming is not AI programming, and most automation is not AI-driven. And then on near-term scales, like changes over the last five years, trade shifts and financial shocks and new labor market entrants are more powerful economic forces than the slow continuing march of computer programming. (Automation is a weak economic force in any given year, but cumulative and directional over decades. Trade shifts and financial shocks are stronger forces in any single year, but might go in the opposite direction the next decade. Thus, even generalized automation via computer programming is still an unlikely culprit for any sudden drop in employment as occurred in the Great Recession.)

Q. Okay, you've persuaded me that it's ridiculous to point to AI while talking about modern-day unemployment. What about future unemployment?

A. Like after the next ten years? We might or might not see robot-driven cars, which would be genuinely based in improved AI algorithms, and would automate away another bite of human labor. Even then, the total number of people driving cars for money would just be a small part of the total global economy; most humans are not paid to drive cars most of the time. Also again: for AI or productivity growth or increased trade or immigration or graduating students to increase unemployment, instead of resulting in more hot dogs and buns for everyone, you must be doing something terribly wrong that you weren't doing wrong in 1950.

Q. How about timescales longer than ten years? There was one class of laborers permanently unemployed by the automobile revolution, namely horses. There are a lot fewer horses nowadays because there is literally nothing left for horses to do that machines can't do better; horses' marginal labor productivity dropped below their cost of living. Could that happen to humans too, if AI advanced far enough that it could do all the labor?

A. If we imagine that in future decades machine intelligence is slowly going past the equivalent of IQ 70, 80, 90, eating up more and more jobs along the way… then I defer to Robin Hanson's analysis in Economic Growth Given Machine Intelligence, in which, as the abstract says, "Machines complement human labor when [humans] become more productive at the jobs they perform, but machines also substitute for human labor by taking over human jobs. At first, complementary effects dominate, and human wages rise with computer productivity. But eventually substitution can dominate, making wages fall as fast as computer prices now do."
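That complement-then-substitute pattern can be illustrated with a deliberately toy model - my own sketch, not Hanson's actual equations: cheaper machines raise a human's marginal product (complementarity), but once machines become perfect substitutes, no employer pays a human more than the machine's rental price.

```python
# Toy illustration (illustrative only, not Hanson's model): human wages
# as machine prices fall.
def wage(machine_price, base_mp=1.0):
    # Complement effect: cheaper machine tools raise human marginal product.
    human_mp = base_mp * (1 + 1.0 / machine_price)
    # Substitution cap: a perfect machine substitute caps the human wage
    # at the machine's own price.
    return min(human_mp, machine_price)

for p in [8, 4, 2, 1, 0.5, 0.1]:
    print(p, round(wage(p), 3))
# Wages first rise (1.125, 1.25, 1.5), then fall exactly as fast as
# machine prices do (1, 0.5, 0.1).
```

The crossover point is where the complement curve meets the price line; below it, the wage simply tracks the machine price downward, which is the qualitative behavior the abstract describes.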

Q. Could we already be in this substitution regime -

A. No, no, a dozen times no, for the dozen reasons already mentioned. That sentence in Hanson's paper has nothing to do with what is going on right now. The future cannot be a cause of the past. Future scenarios, even if they seem to associate the concept of AI with the concept of unemployment, cannot rationally increase the probability that current AI is responsible for current unemployment.

Q. But AI will inevitably become a problem later?

A. Not necessarily. We only get the Hansonian scenario if AI is broadly, steadily going past IQ 70, 80, 90, etc., making an increasingly large portion of the population fully obsolete in the sense that there is literally no job anywhere on Earth for them to do instead of nothing, because for every task they could do there is an AI algorithm or robot which does it more cheaply. That scenario isn't the only possibility.

Q. What other possibilities are there?

A. Lots, since what Hanson is talking about is a new unprecedented phenomenon extrapolated over new future circumstances which have never been seen before and there are all kinds of things which could potentially go differently within that. Hanson's paper may be the first obvious extrapolation from conventional macroeconomics and steady AI trendlines, but that's hardly a sure bet. Accurate prediction is hard, especially about the future, and I'm pretty sure Hanson would agree with that.

Q. I see. Yeah, when you put it that way, there are other possibilities. Like, Ray Kurzweil would predict that brain-computer interfaces would let humans keep up with computers, and then we wouldn't get mass unemployment.

A. The future would be more uncertain than that, even granting Kurzweil's hypotheses - it's not as simple as picking one futurist and assuming that their favorite assumptions correspond to their favorite outcome. You might get mass unemployment anyway, if humans with brain-computer interfaces are more expensive or less effective than pure automated systems. With today's technology we could perhaps design robotic rigs to amplify a horse's muscle power (we're still working on that tech for humans), but it took around an extra century after the Model T to get to that point, and a plain old car is much cheaper.

Q. Bah, anyone can nod wisely and say "Uncertain, the future is." Stick your neck out, Yoda, and state your opinion clearly enough that you can later be proven wrong. Do you think we will eventually get to the point where AI produces mass unemployment?

A. My own guess is a moderately strong 'No', but for reasons that would sound like a complete subject change relative to all the macroeconomic phenomena we've been discussing so far. In particular I refer you to "[Intelligence Explosion Microeconomics: Returns on cognitive reinvestment][8]", a paper [recently referenced][9] on Scott Sumner's blog as relevant to this issue.

Q. Hold on, let me read the abstract and… what the heck is this?

A. It's an argument that you don't get the Hansonian scenario or the Kurzweilian scenario, because if you look at the historical course of hominid evolution and try to assess the inputs of marginally increased cumulative evolutionary selection pressure versus the cognitive outputs of hominid brains, and infer the corresponding curve of returns, then ask about a reinvestment scenario -

Q. English.

A. Arguably, what you get is I. J. Good's scenario where once an AI goes over some threshold of sufficient intelligence, it can self-improve and increase in intelligence far past the human level. This scenario is formally termed an 'intelligence explosion', informally 'hard takeoff' or 'AI-go-FOOM'. The resulting predictions are strongly distinct from traditional economic models of accelerating technological growth (we're not talking about Moore's Law here). Since it should take advanced general AI to automate away most or all humanly possible labor, my guess is that AI will intelligence-explode to superhuman intelligence before there's time for moderately-advanced AIs to crowd humans out of the global economy. (See also section 3.10 of the aforementioned [paper][8].) Widespread economic adoption of a technology comes with a delay factor that wouldn't slow down an AI rewriting its own source code. This means we don't see the scenario of human programmers gradually improving broad AI technology past the 90, 100, 110-IQ threshold. An explosion of AI self-improvement utterly derails that scenario, and sends us onto a completely different track which confronts us with wholly dissimilar questions.

Q. Okay. What effect do you think a superhumanly intelligent self-improving AI would have on unemployment, especially the bottom 25% who are already struggling now? Should we really be trying to create this technological wonder of self-improving AI, if the end result is to make the world's poor even poorer? How is someone with a high-school education supposed to compete with a machine superintelligence for jobs?

A. I think you're asking an overly narrow question there.

Q. How so?

A. You might be thinking about 'intelligence' in terms of the contrast between a human college professor and a human janitor, rather than the contrast between a human and a chimpanzee. Human intelligence more or less created the entire modern world, including our invention of money; twenty thousand years ago we were just running around with bows and arrows. And yet on a biological level, human intelligence has stayed roughly the same since the invention of agriculture. Going past human-level intelligence is change on a scale much larger than the Industrial Revolution, or even the Agricultural Revolution, both of which took place at a constant level of intelligence; human nature didn't change. As Vinge observed, building something smarter than you implies a future that is fundamentally different in a way that you wouldn't get from better medicine or interplanetary travel.

Q. But what does happen to people who were already economically disadvantaged, who don't have investments in the stock market and who aren't sharing in the profits of the corporations that own these superintelligences?

A. Um… we appear to be using substantially different background assumptions. The notion of a 'superintelligence' is not that it sits around in Goldman Sachs's basement trading stocks for its corporate masters. The concrete illustration I often use is that a superintelligence asks itself what the fastest possible route is to increasing its real-world power, and then, rather than bothering with the digital counters that humans call money, the superintelligence solves the [protein structure prediction problem][10], emails some DNA sequences to online peptide synthesis labs, and gets back a batch of proteins which it can mix together to create an acoustically controlled equivalent of an artificial ribosome, which it can use to make second-stage nanotechnology, which manufactures third-stage nanotechnology, which manufactures diamondoid molecular nanotechnology, and then… well, it doesn't really matter from our perspective what comes after that, because from a human perspective any technology more advanced than molecular nanotech is just overkill. A superintelligence with molecular nanotech does not wait for you to buy things from it in order for it to acquire money. It just moves atoms around into whatever molecular structures or large-scale structures it wants.

Q. How would it get the energy to move those atoms, if not by buying electricity from existing power plants? Solar power?

A. Indeed, one popular speculation is that optimal use of a star system's resources is to disassemble local gas giants (Jupiter in our case) for the raw materials to build a Dyson Sphere, an enclosure that captures all of a star's energy output. This does not involve buying solar panels from human manufacturers, rather it involves self-replicating machinery which builds copies of itself on a rapid exponential curve -

Q. Yeah, I think I'm starting to get a picture of your background assumptions. So let me expand the question. If we grant that scenario rather than the Hansonian scenario or the Kurzweilian scenario, what sort of effect does that have on humans?

A. That depends on the exact initial design of the first AI which undergoes an intelligence explosion. Imagine a vast space containing all possible mind designs. Now imagine that humans - who all have a brain with a cerebellum, thalamus, a cerebral cortex organized into roughly the same areas, neurons firing at a top speed of 200 spikes per second, and so on - are one tiny little dot within this space of all possible minds. Different kinds of AIs can be vastly more different from each other than you are from a chimpanzee. What happens after AI depends on what kind of AI you build - the exact selected point in mind design space. If you can solve the technical problems and wisdom problems associated with building an AI that is nice to humans, or nice to sentient beings in general, then we all live [happily ever afterward][11]. If you build the AI incorrectly… well, the AI is unlikely to end up with a specific hate for humans. But such an AI won't attach a positive value to us either. "The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else." The human species would end up disassembled for spare atoms, after which human unemployment would be zero. In neither alternative do we end up with poverty-stricken unemployed humans hanging around being sad because they can't get jobs as janitors now that star-striding nanotech-wielding superintelligences are taking all the janitorial jobs. And so I conclude that advanced AI causing mass human unemployment is, all things considered, unlikely.

Q. Some of the background assumptions you used to arrive at that conclusion strike me as requiring additional support beyond the arguments you listed here.

A. I recommend [Intelligence Explosion: Evidence and Import][12] for an overview of the general issues and literature, [Artificial Intelligence as a positive and negative factor in global risk][13] for a summary of some of the issues around building AI correctly or incorrectly, and the aforementioned [Intelligence Explosion Microeconomics][8] for some ideas about analyzing the scenario of an AI investing cognitive labor in improving its own cognition. The last in particular is an important open problem in economics, if you're a smart young economist reading this - although, since the fate of the entire human species could well depend on the answer, you would be foolish to expect as many papers to be published about it as about squirrel migration patterns. Nonetheless, bright young economists who want to say something important about AI should consider analyzing the microeconomics of returns on cognitive (re)investments, rather than post-AI macroeconomics which may not actually exist, depending on the answer to the first question. Oh, and [Nick Bostrom][14] at the [Oxford Future of Humanity Institute][15] is supposed to have a forthcoming book on the intelligence explosion; that book isn't out yet, so I can't link to it, but [Bostrom][14] personally and [FHI][15] generally have published some excellent academic papers already.

Q. But to sum up, you think that AI is definitely not the issue we should be talking about with respect to unemployment.

A. Right. From an economic perspective, AI is a completely odd place to focus your concern about modern-day unemployment. From an AI perspective, modern-day unemployment trends are a moderately odd reason to be worried about AI. Still, it is scarily true that increased automation, like increased global trade or new graduates or anything else that ought properly to produce a stream of employable labor to the benefit of all, might perversely operate to increase unemployment if the broken reemployment engine is not fixed.

Q. And with respect to future AI… what is it you think, exactly?

A. I think that with respect to moderately more advanced AI, we probably won't see intrinsic unavoidable mass unemployment in the economic world as we know it. If re-employment stays broken and new college graduates continue to have trouble finding jobs, then there are plausible stories where future AI advances far enough (but not too far) to be a significant part of what's freeing up new employable labor which bizarrely cannot be employed. I wouldn't consider this my main-line, average-case guess; I wouldn't expect to see it in the next 15 years or as the result of just robotic cars; and if it did happen, I wouldn't call AI the 'problem' while central banks still hadn't adopted NGDP level targeting. And then with respect to very advanced AI, the sort that might be produced by AI self-improving and going FOOM, asking about the effect of machine superintelligence on the conventional human labor market is like asking how US-Chinese trade patterns would be affected by the Moon crashing into the Earth. There would indeed be effects, but you'd be missing the point.

Q. Thanks for clearing that up.

A. No problem.


ADDED 8/30/13: Tyler Cowen's reply to this was one I hadn't listed:

Think of [the machines of the industrial revolution][16] as getting underway sometime in the 1770s or 1780s. The big wage gains for British workers [don't really come until the 1840s][17]. Depending on your exact starting point, that is over fifty years of labor market problems from automation.

See [here][18] for the rest of Tyler's reply.

Taken at face value this might suggest that if we wait 50 years everything will be all right. [Kevin Drum][19] replies that in 50 years there might be no human jobs left - which is possible, but would be a prediction of novel things yet to come rather than an effect we have already seen.

Though Tyler also says, "A second point is that now we have a much more extensive network of government benefits and also regulations which increase the fixed cost of hiring labor" and this of course was already on my list of things that could be trashing modern reemployment unlike-in-the-1840s.

'Brett' in MR's comments section also counter-claims:

The spread of steam-powered machinery and industrialization from textiles/mining/steel to all manner of British industries didn't really get going until the 1830s and 1840s. Before that, it was mostly piece-meal, with some areas picking up the technology faster than others, while the overall economy didn't change that drastically (hence the minimal changes in overall wages).


[1] The core idea in market monetarism is very roughly something like this: A central bank can control the total amount of money and thereby control any single economic variable measured in money, i.e., control one nominal variable. A central bank can't directly control how many people are employed, because that's a real variable. You could, however, try to control Nominal Gross Domestic Income (NGDI), the total amount that people have available to spend (as measured in your currency). If the central bank commits to an NGDI level target, then any shortfalls are made up in the following year - if your NGDI growth target is 5% and you only get 4% in one year, then you try for 6% the year after that. NGDI level targeting would mean that all companies would know that, collectively, all the customers in the country would have 5% more money (measured in dollars) to spend in the next year than the previous year. This is usually called "NGDP level targeting" for historical reasons (NGDP is the other side of the equation - what the earned dollars are being spent on), but the most advanced modern form of the idea is probably "level-targeting a market forecast of per-capita NGDI". Why this is the best nominal variable for central banks to control is a longer story, and for that you'll have to read up on market monetarism.

I will note that if you were worried about hyperinflation back when the Federal Reserve started dropping US interest rates to almost zero and buying government bonds by printing money… well, you really should note that (a) most economists said this wouldn't happen, (b) the market spreads on inflation-protected Treasuries said that the market was anticipating very low inflation, and (c) we then actually got inflation below the Fed's 2% target. You can argue with economists. You can even argue with the market forecast, though in this case you ought to bet money on your beliefs.
But when your fears of hyperinflation are disagreed with by economists, the market forecast and observed reality, it's time to give up on the theory that generated the false prediction. In this case, market monetarists would have told you not to expect hyperinflation because NGDP/NGDI was collapsing and this constituted (overly) tight money regardless of what interest rates or the monetary base looked like.
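The catch-up arithmetic behind level targeting can be sketched numerically (illustrative numbers only, with starting NGDI normalized to 100):

```python
# Level targeting vs. growth-rate targeting, per the footnote's 5%/4%/6% example.
TARGET_GROWTH = 0.05

def level_path(start, years):
    # The target *level* path that NGDI is supposed to stay on.
    return [start * (1 + TARGET_GROWTH) ** t for t in range(years + 1)]

path = level_path(100.0, 2)      # targets: 100, 105, 110.25
ngdi_year1 = 100.0 * 1.04        # only 4% growth actually happens
catch_up = path[2] / ngdi_year1 - 1
print(round(catch_up * 100, 2))  # -> 6.01: the next year aims for ~6%, not 5%
```

Under plain growth-rate targeting, by contrast, the 1% shortfall would never be made up; the level path is what anchors expectations about total future spending.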

[2] Call me a wacky utopian idealist, but I wonder if it might be genuinely politically feasible to reduce marginal taxes on the bottom 20%, if economists on both sides of the usual political divide got together behind the idea that income taxes (including payroll taxes) on the bottom 20% are (a) immoral and (b) do economic harm far out of proportion to government revenue generated. This would also require some amount of decreased taxes on the next quintile in order to avoid high _marginal_ tax rates, i.e., if you suddenly start paying $2000/year in taxes as soon as your income goes from $19,000/year to $20,000/year, then that was a 200% tax rate on that particular extra $1000 earned. The lost tax revenue must be made up somewhere else. In the current political environment this probably requires higher income taxes on higher wealth brackets rather than anything more creative. But if we allow ourselves to discuss economic dreamworlds, then income taxes, corporate income taxes, and capital-gains taxes are all very inefficient compared to consumption taxes, land taxes, and basically anything but income and corporate taxes. This is true even from the perspective of equality; a rich person who earns lots of money, but invests it all instead of spending it, is benefiting the economy rather than themselves, and should not be taxed until they try to spend the money on a yacht, at which point you charge a consumption tax or luxury tax (even if that yacht is listed as a business expense, which should make no difference; consumption is not more moral when done by businesses instead of individuals). If I were given unlimited powers to try to fix the unemployment thing, I'd be reforming the entire tax code from scratch to present the minimum possible obstacles to exchanging one's labor for money, and as a second priority minimize obstacles to compound reinvestment of wealth.
But trying to change anything on this scale is probably not politically feasible relative to a simpler, more understandable crusade to "Stop taxing the bottom 20%, it harms our economy because they're customers of all those other companies and it's immoral because they get a raw enough deal already."
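The cliff in that footnote's hypothetical can be made concrete (numbers from the footnote, not a real tax schedule):

```python
# Hypothetical tax cliff from footnote [2]: $2000 of tax switches on at $20,000.
def tax(income):
    return 2000 if income >= 20000 else 0

extra_earned = 20000 - 19000
extra_tax = tax(20000) - tax(19000)
print(extra_tax / extra_earned)  # -> 2.0, i.e. a 200% marginal rate on that $1000
```

Earning the extra $1000 costs $2000 in tax, so the worker is strictly worse off for working more - the kind of incentive a smoother schedule on the next quintile is meant to avoid.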

[3] Two possible forces for significant technological change in the 21st century would be robotic cars and electric cars. Imagine a city with an all-robotic, all-electric car fleet, dispatching light cars with only the battery sizes needed for the journey, traveling at much higher speeds with no crash risk and much lower fuel costs… and lowering rents by greatly extending the effective area of a city, i.e., extending the physical distance you can live from the center of the action while still getting to work on time, because your average speed is 75 mph. What comes to mind when you think of robotic cars? Google's prototype robotic cars. What comes to mind when you think of electric cars? Tesla. In both cases we're talking about ascended, post-exit Silicon Valley moguls trying to create industrial progress out of the goodness of their hearts, using money they earned from Internet startups. Can you sustain a whole economy based on what Elon Musk and Larry Page decide are cool?

[4] Currently the conversation among economists is more like "Why has total factor productivity growth slowed down in developed countries?" than "Is productivity growing so fast due to automation that we'll run out of jobs?" Ask them the latter question and they will, with justice, give you very strange looks. Productivity isn't growing at high rates, and if it were that ought to cause employment rather than unemployment. This is why the Great Stagnation in productivity is one possible explanatory factor in unemployment, albeit (as mentioned) not a very good explanation for why we can't get back the jobs lost in the Great Recession. The idea would have to be that some natural rate of productivity growth and sectoral shift is necessary for re-employment to happen after recessions, and we've lost that natural rate; but so far as I know this is not conventional macroeconomics.

Cross-posted with permission from original source.

