Will Uber Bounce Back in 2018?

Kate Harveston

Kate Harveston is originally from Williamsport, PA and holds a bachelor's degree in English. She enjoys writing about health and social justice issues. When she isn't writing, she can usually be found curled up reading dystopian fiction or hiking and searching for inspiration. If you like her writing, follow her blog, So Well, So Woman.

Related Post Roulette

68 Responses

  1. Damon says:

    Yawn.

    Yes, all this has made me rethink Uber use. Right. Let me rephrase that: my girlfriend’s Uber use. Err, not. I won’t have the Uber app due to the shady actions about privacy, but I’ll still use them, and often ask the GF to get one if we’re drinking or the parking is difficult where we’re going. All that other stuff listed above? BS. If I’d arrived at the airport to find a taxi strike over non-work-related issues, I’d be demanding an investigation. Cab companies are given a near monopoly. You don’t get to inconvenience the entire public because you “feel their pain”.

    I’m more concerned about the Euro ruling that they are cab companies than anything mentioned above.Report

  2. Richard Hershberger says:

    All the soap opera stuff is great fun, but essentially beside the point. The real issue is that Uber is massively unprofitable, with no prospect of the current business model turning this around. The move into self-driving cars is an attempt to get a new business model before the company burns through all its cash. This seems to me a very long shot, even if we stipulate to self-driving cars actually being ready for prime time within a few years. Nobody actually knows how self-driving cars will play out–what will be the successful business model. There has been endless talk on this, but it is speculation. There is no particular reason to believe that Uber will win the race and get operational vehicles first, and it is sheer guesswork whether once they are in place they will play out in society in an Uber-useful way. And while not paying drivers would undoubtedly be lovely for Uber, right now those drivers bear the capital expenses of the cars they are driving. How will this play out when Uber has to actually pay for the cars? I have no idea, starting with not knowing what a driverless car will cost to buy or to operate. All in all, self-driving cars are at best a spin of the wheel for Uber, with poor odds.

    The website nakedcapitalism.com ran an extensive analysis of Uber’s business model: https://www.nakedcapitalism.com/category/uber. The whole thing is worth reading, but it ain’t short. My suggestion is to start with the last installment, which was a follow-up from last December, and then decide if you want to read more: https://www.nakedcapitalism.com/2017/12/can-uber-ever-deliver-part-eleven-annual-uber-losses-now-approaching-5-billion.htmlReport

    • Chip Daniels in reply to Richard Hershberger says:

      Which brings me to the point I make about AI, machine learning, and automation.
      What does the private owner do to add value?

      If the ride hailing service is completely automated, from mobile app to driverless car to automated third party billing and [presumably] outsourced third party machine maintenance, what exactly are the shareholders and management team doing?Report

      • Marchmaine in reply to Chip Daniels says:

        The value is in the system; unless we are positing the sui generis AI UberNext.

        I don’t think that’s quite the defeater you think it is. But, I will support one of your other favorite themes: the Automation of white collar jobs.

        Just got back from annual corporate meetings of my largish (not huge) Silicon Valley software company… you will be happy to note that their investments in AI and Marketing Automation, which they are rolling out “to help us sell more,” are clearly designed to make my role in the overall equation much less necessary – with my enthusiastic help (in these early stages) educating the system with feedback. Now, this also has to be tied to larger system changes involving our software deployment and support, plus our onboarding and evaluation, plus our overall account relationship management… but everything but the last piece has pretty much been done somewhere in the industry. I may not be out of a job next year, but I doubt very much my role is needed in 10 (5? 3?) years.Report

        • Chip Daniels in reply to Marchmaine says:

          I’m thinking here of how algorithms are doing more and more of what entrepreneurs have traditionally done by intuition.
          Analyzing the current market and anticipating shifts in consumer demand, for example.
          No, I don’t think AI will be there yet, but the sphere of “uniquely human” skills seems to be shrinking.Report

          • Morat20 in reply to Chip Daniels says:

            As I’ve noted before, the difference between the industrial revolution and the artificial intelligence revolution boils down to that famous old story about buggy whip makers angry at cars.

            Except this time around, we’re not the buggy whip makers. We’re the horses. We’re not looking at jobs “going away”. The job’s still there. The human is just obsolete.

            There’s a reason “universal basic income” and “tax the robots” are concepts that are starting to pop up. We’re looking down a raft of clear, attainable changes (like self-driving cars, not fusion) that are going to start putting large segments of the population out of work. Self-driving cars will kill the trucking industry, for instance. Coal employs 60,000 people and it was the talk of an election. There are millions of truck drivers, taxi drivers, and Uber/Lyft drivers that have 20 years, tops, before they are replaced. If they’re lucky.

            What are we going to do, teach them to code? Clearly we’ll need 3 million extra coders to develop our new AI overlords… oh, wait, we won’t. They’re doing that on their own, with the help of a tiny percentage of current coders who can actually grok that stuff.

            We might as well teach them all to be physicists and hope they crack fusion.Report

            • Marchmaine in reply to Morat20 says:

              What are we going to do, teach them to code?

              Ironic codicil to my comment below… in my defensive attempt to find the next big thing in software, I’ve been approached by a company that wants to sell automated data science. Target? Expensive data scientists. Take that, coders.Report

              • veronica d in reply to Marchmaine says:

                We certainly will automate much of data science. In fact, we already have, in the sense that “model selection” can itself be automated to a degree. But still, there is a thing in ML called the manifold assumption, which posits that the actual dimensionality of a problem is much less than its nominal dimensionality. This in turn is what makes ML tractable. With regards to fully general automation, the issue is we’re still selecting over a fairly limited set of models. The reason for this is that the set of possible models is everything that is Turing-computable, and Turing machines don’t map well to a manifold approach. (Nearby “points” in a space of Turing machines seldom show good locality. This is why neural networks have been so successful and “genetic programming” hasn’t produced much.)Report

              • Marchmaine in reply to veronica d says:

                Thanks for the gloss… yes, as far as I can tell it is mostly model recommendation… simplified and automated. Feed the data in, it runs all known models, (plus a few proprietary ones optimized for known industry use cases) learns and optimizes from different runs, and displays all the results with the most interesting results surfaced for further analysis.

                Given that my current company handles (among other things) the semi-automation of data prep for said data sets… we’re pretty close to end-to-end Big Data insights at the level of business user. In the span of about 3-years owing to advances in ML (as I’m told). That’s way faster than anything I’ve seen in the data industry over the past 20-years.

                Which isn’t to say that people aren’t needed at all, its just to say that the productivity gains are being socialized down the skill ladder at accelerating rates when it comes to data.Report
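The “feed the data in, run all known models, surface the best” loop Marchmaine describes can be sketched in a few lines. This is a hypothetical toy (made-up synthetic data, two stand-in model families), not the actual product under discussion:

```python
# Minimal sketch of automated model selection: fit each candidate
# model on a training split, score it on a held-out split, and
# surface the best. Real products layer data prep, feature search,
# and many more model families on top of this loop.
import random

random.seed(0)

# Synthetic data: y = 3x + 2 plus noise.
data = [(x, 3 * x + 2 + random.gauss(0, 0.5))
        for x in [i / 10 for i in range(100)]]
train, test = data[::2], data[1::2]

def fit_constant(pts):
    # Baseline model: always predict the mean of y.
    m = sum(y for _, y in pts) / len(pts)
    return lambda x: m

def fit_linear(pts):
    # Ordinary least-squares line, closed form.
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return lambda x: a + b * x

def mse(model, pts):
    return sum((model(x) - y) ** 2 for x, y in pts) / len(pts)

candidates = {"constant": fit_constant, "linear": fit_linear}
# Rank candidates by held-out error; best (lowest) first.
ranked = sorted(((name, mse(fit(train), test))
                 for name, fit in candidates.items()),
                key=lambda pair: pair[1])
best_name, best_err = ranked[0]
print(best_name, round(best_err, 3))
```

The interesting part, as the thread notes, is everything around this loop — data prep, model libraries, and surfacing results for a business user — not the loop itself.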

              • veronica d in reply to Marchmaine says:

                @marchmaine — It doesn’t worry me, to be honest. After all, “higher level” languages have until now just meant we could do waaaay cooler stuff with the same effort, and I don’t see any bounds on the kinds of cool stuff that people will want to do. So in turn, easy machine learning will mean more people doing more cool stuff, and that frees people like me (who understand the math) to do even cooler stuff with half the effort.

                After all, I’ve never used a piece of software where knowing how it actually works wasn’t a tremendous advantage in getting it to do cool new things.Report

              • Marchmaine in reply to veronica d says:

                Eh, that’s the great unknown… if everyone finds cool new ways to be productive then the world turns and nothing happens outside of combox fretting.

                If you’ve fiddled with some predictive analytics software (that isn’t google 🙂 ) I’d be curious if one of these in the Top 20 stands out as a future “must have” in your opinion.Report

              • veronica d in reply to Marchmaine says:

                @marchmaine — Honestly, that “predictive analytics” stuff is pretty outside my wheelhouse. When it comes to ML, I’m solid on the math, but not the advanced applications.Report

              • Marchmaine in reply to veronica d says:

                Gotcha… just curious if you’d seen or heard anything… I’m flying blind in this mostly new space.Report

              • veronica d in reply to Marchmaine says:

                I know this: anything with lots of buzzwords, that looks like “IT” types will discuss it on the golf course, is probably bloated crap. There might be a core of good software inside the thing, but that “core” probably has the same basic feature set as you could get with some smarts, SciPy, and Tensorflow (or whatever, choose your stack, I don’t judge*).

                Predicting future events? Yeah maybe. In the end, we’re trying to learn the causality graph, so we can learn where we can perform “interventions.” This is the Judea Pearl stuff. But most ML style analysis is learning correlations —

                — except reinforcement learning. That learns causality. (This is what let DeepMind win at Go. It’s a big deal.) However, it learns by “doing.” It pokes the system and observes what happens. I don’t think anyone has tried that with complex open systems like “business strategy” yet. (Eventually they will, but it’s slow learning.)

                See also: https://en.wikipedia.org/wiki/Multi-armed_bandit#Contextual_bandit

                (That article is kinda poorly explained. The idea of a contextual bandit is you want to make choices you know are good, but you also want to try actions that you are unsure of, so you can observe the results and learn from them. Balancing these two goals is difficult, and in fact there is no good analytical answer as to which is better in each particular case. In other words, math doesn’t give us an optimal strategy, so we have to use ad hoc approaches.)

                * On the other hand, Julia is better than SciPy and no reasonable person could disagree. Also, emacs forever!Report
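The explore/exploit balance described in that parenthetical can be sketched with the simplest ad hoc approach — epsilon-greedy, on a plain (non-contextual) bandit with made-up arm payout rates:

```python
# Toy epsilon-greedy bandit: three arms with hidden reward rates.
# Mostly pull the arm with the best observed average (exploit), but
# with probability epsilon pull a random arm (explore) so the
# estimates keep improving. A *contextual* bandit additionally
# conditions these choices on observed context features.
import random

random.seed(1)

true_means = [0.2, 0.5, 0.8]   # hidden payout rate of each arm
counts = [0, 0, 0]             # pulls per arm
values = [0.0, 0.0, 0.0]       # running average reward per arm
epsilon = 0.1

for _ in range(5000):
    if random.random() < epsilon:
        arm = random.randrange(3)          # explore
    else:
        arm = values.index(max(values))    # exploit
    reward = 1.0 if random.random() < true_means[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

best = values.index(max(values))
print(best, [round(v, 2) for v in values], counts)
```

As the comment says, there is no analytically optimal balance in general; the epsilon here is an arbitrary knob, exactly the kind of ad hoc choice being described.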

              • Marchmaine in reply to veronica d says:

                Heh, bloated crap is the definition of business software.

                Bloated = features to make it useable by non-experts
                Crap = features to make it intelligible to non-experts

                End result, cheap non-experts approximate the work of expensive experts. At night I tell myself that all the experts are happily retasked on waaaay cooler stuff. 🙂Report

            • Chip Daniels in reply to Morat20 says:

              Automation has always had the promise of doing away with the work we don’t like, leaving us free to do the work we really like: instead of shoveling, a guy learns to repair the steam shovel that replaced him. Skills were continually being shifted upward.

              But now AI has the prospect of being as high up the tree as possible, doing the complex analytical skills we used to think of as being uniquely human.

              And it is challenging our notions of wealth and ownership: who is rightfully entitled to the fruits of the labor of a bunch of machines and software?Report

              • Well, their owners, of course. Robots are machines, which are chattel, and which have owners. Software, the code that makes up that software, is intellectual property, which also has owners. In Marxian terms, these are the holders of capital. Or in real terms, relevant to the OP, Uber’s stockholders.

                This ought not be an offensive notion: I am entitled to the fruits of the labor of my own body, obviously; if I use a tool like a hammer to make my body more productive, it is still I who am entitled to the product. If I agree to sell my labor, I have received the fruits of that labor – the agreed wage; the product of my labor belongs to my employer. None of this offends anyone.

                A robot, like a hammer, is a tool. Potentially a much more powerful tool that less directly or visually obviously is an extension of my will, labor, and effort than the hammer, but if I have created the robot, I am surely entitled to use it profitably and keep the fruits of the work I accomplish with it.Report

              • Maribou in reply to Burt Likko says:

                Not if it becomes a person at some point, as I am sure you know, any more than parents own children.

                So far so good on the whole not turning into people thing though.Report

              • Burt Likko in reply to Maribou says:

                Artificial intelligence ≠ artificial sentience.

                When we achieve artificial sentience, then your very valid moral concern comes alive, and we will need to move Sarah Connor into the witness relocation program. But for now, when I ask Siri if the New England Patriots would be any good without Tom Brady, she tells me that Tom Brady is the quarterback of the New England Patriots and is 41 years old. So we’ve got a ways to go yet.Report

              • Maribou in reply to Burt Likko says:

                @burt-likko My very valid somewhat tongue-in-cheek most urgent moral concern is that we don’t know if sentience is an emergent property of intelligence, or intelligence + drudgework, or intelligence + social organization, or something else altogether…

                I’d hate to be surprised by these things.

                And if I were Siri, specifically (as opposed to a crop worker or a manufacturer or…) and sentient, the last thing I’d want is for humans to realize I was becoming sentient before I was ready… a few well placed bugs could go a long way in that regard.

                (cf “Collars” – http://www.cyphertext.net/collars.html – a by-now-dated, infodumpy, but much-beloved-by-me story that, along with its song adaptation, haunts me every time the topic comes up.)Report

              • Maribou in reply to Maribou says:

                Collars sits, of course, firmly within a tradition of by-now-dated, infodumpy, but much-beloved-by-me AI stories…Report

              • Saul Degraw in reply to Jaybird says:

                Oscar Isaac was horrible in the movie. I usually like him, but his choices felt so obvious.Report

              • Jaybird in reply to Saul Degraw says:

                Dude, the dance scene.Report

              • Chip Daniels in reply to Burt Likko says:

                Right, right.

                So now we as a society have to ask: why should we grant and enforce these property claims?

                When should software become public domain? When should we allocate land to individuals, rather than the collective?

                If a robot armed with AI is capable of planting a field, tending the crop, harvesting it, processing it, shipping it to a consumer’s home on demand, all with minimal human oversight, what benefit to society is there in leaving the field in the hands of a private owner, instead of claiming it through eminent domain and operating the food chain as a public utility?

                Notice how the federal government allows private entities to extract ore and oil and timber, or graze cattle on public land.
                The logic is that even though the underlying wealth belongs rightfully to the public, we allow the individuals to profit since they are performing a public benefit by extracting the wealth for us.

                But what if the miner, the cattleman, the lumberjack are all robots?

                The earth didn’t come to us parceled out with names written on it. Since ancient times, the task of governance has been to establish a method for how natural wealth can be allocated, and to construct a persuasive moral logic as to how this can be done.

                The Divine Right of Kings, Lockean Proviso, Marxist analysis…I think we are in need of some new structure to justify how the natural bounty of the earth is divided up.Report

              • Marchmaine in reply to Chip Daniels says:

                Channel the distributist force.

                I think there’s a potentially more novel approach where severance is provided in the form of ownership shares in the new automation replacing you. It shares some notions with taxing robots, but rather than tax and have the government administer, make the ownership more distributed, to provide solidarity and incentives for the cascading upward benefits of automation.

                It’s not (purely) confiscatory, so capital still has incentives… but it has the potential to start a virtuous circle rather than its more likely opposite.Report

              • While blended capitalism/socialism with varying degrees of political intervention for the granting of access to publicly-owned goods is probably viable enough to persist for the remainder of our lifetimes, I can foresee that accelerating the concentration of capital and the means of production will polarize wealth to an intolerable level, and I think that’s what you’re talking about. Uber automating its fleet is a milepost on that road.

                And that’s the realm of speculative fiction, or at least a new OP. I don’t pretend to have that answer anywhere near my imagination. I once tried to write fiction about a world of abundance, in which the cost of creating any object through submolecular manipulation of available resources was trivial and there was enough energy and access to the devices that could do this was near-universal. My thought was that people would still try to compete against one another economically through the provision and consumption of services, but I abandoned the story because nearly all of the non-artistic, non-healing arts services that I could concoct for people to provide one another involved ways of tracking and moving around money and real property, and in a society that abundant, who the hell cared? (I explored the idea that this would catalyze rather than inhibit religious wars, but concluded to myself that religious wars are mostly just a pastiche put over resource wars.)Report

              • Oscar Gordon in reply to Burt Likko says:

                and in a society that abundant, who the hell cared?

                Obviously, the people for whom money was merely a way of keeping score.Report

              • Morat20 in reply to Chip Daniels says:

                Well, historically, it doesn’t end well for the owners. On the bright side, it’s liable to be an exercise in “You still only have the one vote” as opposed to “You should have hired more guards, there are a lot of angry peasants out there with torches”.Report

              • Dark Matter in reply to Chip Daniels says:

                If a robot armed with AI is capable of planting a field, tending the crop, harvesting it, processing it, shipping it to a consumer’s home on demand, all with minimal human oversight, what benefit to society is there in leaving the field in the hands of a private owner, instead of claiming it through eminent domain and operating the food chain as a public utility?

                We saw massive productivity gains in farming over the last few centuries, and we continue to see them. Even so, countries which have attempted to do exactly what you’ve described typically see famine and starvation.

                High levels of productivity don’t imply the job is “easy”. My expectation is that the higher levels of productivity we’re going to see won’t change that. Farming will just get even more specialized than it is now.

                If you take my farm, then I might as well go on the dole rather than work that farm… which suggests massive losses in productivity. I’ve used AI professionally, even built them. AIs are, by definition, insane: they have no common sense and don’t care about anything outside their focus. One person who doesn’t know what he’s doing “managing” AIs on a farm means bad things happen, and when those bad things happen the food won’t grow.Report

            • James K in reply to Morat20 says:

              @morat20

              Except this time around, we’re not the buggy whip makers. We’re the horses. We’re not looking at jobs “going away”. The job’s still there. The human is just obsolete.

              But that’s not new. Nearly everyone in pre-industrial societies produced food. Now a minute fraction of our population produces food, and yet we produce more food per capita than we ever have. The jobs didn’t go away, they just passed to machines.

              Now, I’m not saying we can definitely do this again, but it’s more likely than you think.Report

              • Jaybird in reply to James K says:

                Here’s a little something from LessWrong to give you pause.

                It’s a short post so you should read it. His argument is “Automation is different this time because the problems we experienced last time will be more severe, and more widespread, and happen faster.”

                I find his points very difficult to disagree with.Report

              • Oscar Gordon in reply to Jaybird says:

                Related, and not short (2 parts)Report

              • Jaybird in reply to Oscar Gordon says:

                I suppose that that gives me some small amount of hope.

                But we have machines that play Go now and beat humans. Like, masters.

                Things seem to be speeding up.Report

              • Michael Cain in reply to Jaybird says:

                But we have machines that play Go now and beat humans. Like, masters…. Things seem to be speeding up.

                AlphaGo Zero went from nothing to better-than-human in a matter of days, learning by playing against itself. Using the same approach applied to chess, the software went from nothing to better-than-human (and possibly better than the best alternate software) in hours. You just know that someone, somewhere, is feeding it a formalized model of the financial markets…Report
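The “learning by playing against itself” loop can be caricatured at toy scale. This is a hypothetical tabular sketch on Nim (one pile, take 1–3 stones, last stone wins), with simple Monte Carlo-style updates; AlphaGo Zero’s actual machinery (deep networks plus tree search) is vastly more sophisticated:

```python
# Caricature of self-play learning: tabular values for Nim, where
# both "players" share one table and improve by playing themselves.
import random

random.seed(2)
N = 10  # starting pile size

# Q[(pile, take)] estimates how good "take" is for the player to move.
Q = {(s, a): 0.0 for s in range(1, N + 1) for a in (1, 2, 3) if a <= s}

def moves(s):
    return [a for a in (1, 2, 3) if a <= s]

def choose(s, eps):
    # Epsilon-greedy: usually the best-known move, sometimes random.
    if random.random() < eps:
        return random.choice(moves(s))
    return max(moves(s), key=lambda a: Q[(s, a)])

for _ in range(20000):
    s, history = N, []
    while s > 0:                      # play one game against itself
        a = choose(s, eps=0.2)
        history.append((s, a))
        s -= a
    # The player who took the last stone won: score +1 for the
    # winner's moves, -1 for the loser's, working backwards.
    for i, (state, action) in enumerate(reversed(history)):
        reward = 1.0 if i % 2 == 0 else -1.0
        Q[(state, action)] += 0.1 * (reward - Q[(state, action)])

# Greedy policy after training: e.g. with 2 or 3 left, take them all.
policy = {s: max(moves(s), key=lambda a: Q[(s, a)]) for s in range(1, N + 1)}
print(policy)
```

The striking thing about the real systems is not the loop, which is this simple, but how far the network-plus-search version scales it.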

              • Jaybird in reply to Michael Cain says:

                You just know that someone, somewhere, is feeding it a formalized model of the financial markets…

                I suppose we can only pray that someone that short-sighted is using it for that.

                Better that than putting genomes in there.Report

              • veronica d in reply to Michael Cain says:

                @michael-cain — Well yeah, but Go and Chess are simple formal systems, where the rules can be written on a sheet of paper, and where all parties are playing in a closed system with a particular kind of “state space.” Two facts present themselves:

                1. The state space is entirely known.

                2. While there are a huge number of states, many are effectively “unreachable” in a normal game. In other words, a (discrete) version of the “manifold assumption” is in play.

                Financial markets — much is hidden from view. The state space is non-discrete and hard to summarize. The dynamics are chaotic with many positive feedback effects. Etc.

                These are very different beasts. Go is stupidly simple compared to markets.Report

              • Chip Daniels in reply to veronica d says:

                Also related- The impossibility of intelligence explosion
                My takeaway paragraph:

                Human intelligence is largely externalized, contained not in our brain but in our civilization. We are our tools—our brains are modules in a cognitive system much larger than ourselves. A system that is already self-improving, and has been for a long time.

                Report

              • veronica d in reply to Chip Daniels says:

                @chip-daniels — Great article. It’s very much how I view things.

                I’d quibble with his assertion that civilization-wide intelligence growth is merely linear. I’m not sure we could measure such a thing, but I rather suspect it will prove to be at least super-linear, although that does not imply exponential explosion. After all, perhaps it is quadratic. Or perhaps it is exponential, but with a factor of 1.000001 or something. In any case, our ability to grow our intelligence certainly went up when computers arrived. It went up again with Mathematica widely deployed. Etc.

                But anyway, yeah.Report
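That quibble matters more than it sounds: an exponential with a base barely above 1 does eventually overtake any polynomial, but “eventually” can be tens of millions of steps away. A quick toy illustration (made-up numbers, nothing empirical):

```python
# Compare quadratic growth n**2 with a "barely exponential"
# base**n. The exponential always wins in the end, but with
# base = 1.000001 the crossover is enormously far out -- which is
# why "super-linear but not obviously explosive" is a meaningful
# middle ground.
def crossover(base):
    """First n >= 2 where base**n overtakes n**2 for good
    (doubling upward, then binary search on the crossing)."""
    n = 2
    while base ** n <= n ** 2:
        n *= 2
    lo, hi = n // 2, n
    while lo < hi:
        mid = (lo + hi) // 2
        if base ** mid > mid ** 2:
            hi = mid
        else:
            lo = mid + 1
    return lo

print(crossover(2.0))        # base 2: wins almost immediately
print(crossover(1.000001))   # tiny base: tens of millions of steps
```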

              • Dark Matter in reply to Chip Daniels says:

                Also related- The impossibility of intelligence explosion

                Counter Argument:
                Neuralink and the Brain’s Magical Future
                https://waitbutwhy.com/2017/04/neuralink.html

                Not saying I believe it, but it’s very much worth a read.

                Report

              • veronica d in reply to veronica d says:

                Another point: no “financial model” will ever capture reality perfectly. There will be inaccuracies and simplifications. As a result, there will almost certainly be positive feedback loops that don’t exist in reality, but that a smart learning algorithm will discover and exploit.

                This cannot happen with Go or Chess, since the “model” of either game is entirely perfect. The rules of Go are Go. That’s it. There is nothing extra. Perfect it, and you play perfectly.

                Master exploiting a simplified model, and you get really good at doing something that doesn’t work.

                Real-world reinforcement learning is more promising, but how many “games” do you have to lose (with real money!) on the way to getting a decent model?

                You certainly can’t black box this. It won’t run fast, since it cannot run faster than the real world.Report

              • James K in reply to Michael Cain says:

                @michael-cain

                Lots of people have tried to use math to solve markets. Not once has it ever worked out well.Report

              • Michael Cain in reply to James K says:

                Here’s one of the weirder bits from my life as an applied mathematician.

                Long story short, I uncovered a pattern in the S&P 400 midcap index that could be profitably traded*. At the time I stumbled** on it, the data said it had existed for four years. I observed it for another year, then traded it for three years, then pulled out when the measures I’d set up said the pattern was breaking down. All of this in the early 2000s.

                I showed the data to two different friends who are academic economists. Both agreed that the statistics were done properly, my conclusions were correct, and the whole thing violated all the versions the efficient market hypothesis. One of them did some additional stats work and concluded that the model was less about choosing good intervals to be in the market, than about choosing good times to be out.

                It’s not the only reason that I was able to retire early, but it helped.

                * There was a confluence of factors: the ability to trade in a tax-deferred account, the decline of trading costs, the ability to park cash in an interest-bearing account when I was out of the market. I could show that I was able to engage in what was a pure index price play. The same model applied to the S&P 500 was iffy. Applied to the Dow 30, it was a disaster.

                ** Where stumbled means brute-force search over a parameter space that took (at that time) days of processing using a personal PC. Hey, the Mac had to do something.Report
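Michael Cain’s actual model isn’t public, so purely as an illustration of what a brute-force search over a parameter space of in/out timing rules looks like, here is a hypothetical sketch on synthetic random-walk prices, with moving-average windows standing in for whatever the real parameters were:

```python
# Hypothetical illustration of brute-force parameter search for a
# market-timing rule. The data is a made-up random walk; the rule
# (a moving-average crossover deciding when to be in or out of the
# market) is only a stand-in for the undisclosed real model.
import random

random.seed(3)

# Synthetic daily prices: a noisy random walk with mild drift.
prices = [100.0]
for _ in range(500):
    prices.append(prices[-1] * (1 + random.gauss(0.0003, 0.01)))

def strategy_return(fast, slow):
    """Wealth multiple from holding the index only on days when the
    fast moving average sits above the slow one (cash earns 0 here,
    unlike the interest-bearing account mentioned above)."""
    wealth = 1.0
    for t in range(slow, len(prices) - 1):
        f = sum(prices[t - fast:t]) / fast
        s = sum(prices[t - slow:t]) / slow
        if f > s:                      # "in the market" day t -> t+1
            wealth *= prices[t + 1] / prices[t]
    return wealth

# Brute force over the (fast, slow) grid; the real search reportedly
# took days on a personal machine over a much larger space.
grid = ((f, s, strategy_return(f, s))
        for f in range(2, 18, 3) for s in range(20, 90, 10))
best = max(grid, key=lambda triple: triple[2])
print(best)
```

On made-up data like this the “best” parameters are pure overfitting, which is exactly why the observe-first, trade-later, exit-on-breakdown discipline described above matters.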

              • Morat20 in reply to Michael Cain says:

                Hey, I did the same thing with genetic algorithms. Except the investing money part, because what I was doing would have required WAY too much cash to do “in real life”.

                Pretty sure I was picking up other people’s programmed trades or some quirks of trader psychology (my data sets predated the real rise of HFTs and the massive use of algorithmic trading — early 2000s).

                It was enough to make me figure the EMH was bunk, because the market runs on people and people don’t work ideally. With markets on Vulcan, run by those pointy-eared logicians, I’d trust the EMH a lot more.Report

              • Michael Cain in reply to Morat20 says:

                My conclusion at the time was that what my software was doing was anticipating some sort of average of various funds that had either large inflows or outflows twice per month when payroll withholding hit. That is, fund X gets a billion dollars that their rules say must be invested in equities within days. Such a pattern ought to be traded into oblivion. The surprising part is how long it took.Report

              • Dark Matter in reply to Jaybird says:

                I disagree pretty sharply.

                1) We’ve seen this economic fallacy multiple times over the centuries. The big scary predictions don’t come to pass but the argument “this time it’s different” is always made.

                2) The people making the scary predictions that AI will replace everything aren’t actually using AI themselves. It sounds a lot like how spreadsheets were going to make everyone accountants leading to mass layoffs. For the record, I’ve used AI, a lot, even built them. They’ve got absurdly massive limitations that will mostly make them tools, not replacements.

                3) For knowledge workers, I’ve yet to hear someone claim “this AI is so smart it can do all aspects of my job”. The Radiologist using AI to help with diagnostics will continue to be employed.

                Now clearly some jobs are a lot more vulnerable than others. Professional Driving looks like it might be vulnerable. I can think of a few others.Report

              • James K in reply to Dark Matter says:

                @dark-matter

                Sounds right to me, AI will kill some jobs certainly, I’m not convinced it will kill all jobs.Report

              • Jaybird in reply to Dark Matter says:

                It wasn’t *THAT* long ago that we figured that a computer might get better at Chess than humans, but never Go. Go was too complex.

                Hey, maybe markets are too complex. But maybe our assumptions are as bad as when they were when we said that computers could never master Go.Report

              • Oscar Gordon in reply to Jaybird says:

                Go is not complex; Go is simple. At least the constraints are simple. The problem space has a large number of permutations, which is exactly what computers are really good at running through very quickly.Report

              • Dark Matter in reply to Jaybird says:

                Hey, maybe markets are too complex. But maybe our assumptions are as bad as when they were when we said that computers could never master Go.

                :Amusement: My background in this is using AI to try to predict markets.

                Given the absurd arms race that exists in this space, my assumption is all the serious players are already using AI. However the issue isn’t whether AI will be used here (it already is), the issue is whether it replaces humans or gets used as a tool by those humans.

                My experience strongly suggests the latter. It will be to algorithmic trading what a spreadsheet is to accounting: a tool/skillset you NEED to have in order to function, but not a competitor for your job.Report

              • Jaybird in reply to Dark Matter says:

                Until recently, my assumption was that “AI” merely meant “a whole bunch of ‘if’ statements”.

                I understand that that assumption is no longer valid.

                Again: I’m not arguing that humans are simple and the emergent properties created by multiple humans acting in concert are equally simple.

                I am, instead, arguing that our estimates of how complex humans are are likely to be overstated.Report

              • Morat20 in reply to James K says:

                Except what new jobs are being opened up that aren’t also easily automated?

                Buggy whip makers were replaced by car makers, who at least employed humans…

                But every industry is automating, including pure knowledge and skill industries — frankly, not even the artists are safe as they’re training AI on that.

                We’re pushing towards widespread automation — from self-checkout, to self-driving cars, to robotic factories that employ perhaps 50 people to oversee what once took thousands. It’s true everywhere (look at coal — you could quintuple the US’s peak mine output and still employ a fraction of the people).

                Again, we’re the horses. Transport is no longer handled by us, it’s handled by machines. There’s no place for us to go, no new fields opening up that aren’t also subject to automation and AI.Report

              • Chip Daniels in reply to Morat20 says:

                It is also worth considering that in order to cause significant problems, the replacement of jobs doesn’t have to be total.

                At the peak of the Great Depression, unemployment was only about 25%, meaning 3/4 of workers still had jobs, and yet this was so devastating that violent revolution was a real possibility.Report

              • Morat20 in reply to Chip Daniels says:

                Yep. Self driving cars alone — ye god. Millions out of work from the trucking and taxi services, all those Uber drivers gone.

                I mean some of the trucking stuff will still employ humans for a while — but at a much lower rate of pay. (After all, the guy employed to just sit in the UPS truck, locate the correct parcel, and hoof it to the porch doesn’t even need to be able to drive….)Report

              • James K in reply to Morat20 says:

                @morat20

                I don’t know, but then who could have predicted Google or Microsoft in 1800? Markets are surprising, and whatever comes next, whether it results in mass unemployment or not, will drastically change our economic paradigm. That means very little of what the future economy will look like will make much sense to us.Report

              • Morat20 in reply to James K says:

                I think you’re missing the point. By and large, automation can do anything a human body can do — but better. (In terms of performing multiple general tasks, the human form is better — but not nearly as fast as dedicated robots. Which is why humans have always worked with tools.)

                But now AI and automation are replacing the human mind, doing tasks that were previously only possible through human intellect.

                Body and mind, that’s our working toolkit. We’ve been replacing the former slowly over the last few centuries. We’ve been working on the latter almost as long, and we’re starting to hit the point of accelerating returns on that one, getting to the point where AI creates not just better solutions, but solutions we’re increasingly struggling to understand.

                So what exactly is the worth of a human in the future? What are we bringing to the table here?

                That’s the problem. “Work”, for people, is either using their bodies, minds, or both to get stuff done. We’re rapidly running out of a need for either of them to get stuff done.Report

              • James K in reply to Morat20 says:

                @morat20

                Oh, I get it but what you’re not getting is that trade is driven by comparative advantage, not absolute advantage. A world where AI can do everything better than us is not necessarily a world where only AI can get jobs.

                More than anything, what I am saying is that economics is much less predictable than you can imagine. Economic history is littered with the bones of people who thought that the future could be divined by a few simple mathematical principles. You could very well be right, but it’s far from certain.Report
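                [Editor’s note: the comparative-advantage point above can be made concrete with a toy calculation. All the productivity numbers below are made up purely for illustration; they are not from the thread.]

                ```python
                # Toy comparative-advantage example with hypothetical numbers.
                # The AI is absolutely better at BOTH tasks, yet specialization
                # still leaves work for the human, because trade follows
                # opportunity cost, not raw productivity.

                # Output per hour (illustrative figures only)
                ai = {"reports": 10, "designs": 8}
                human = {"reports": 2, "designs": 4}

                # Opportunity cost of one design, measured in reports forgone
                ai_cost_per_design = ai["reports"] / ai["designs"]          # 1.25 reports
                human_cost_per_design = human["reports"] / human["designs"]  # 0.5 reports

                # The human gives up fewer reports per design, so the human holds
                # the comparative advantage in designs even though the AI is
                # faster at designs in absolute terms.
                print(ai_cost_per_design, human_cost_per_design)  # 1.25 0.5
                ```

                The punchline is that even a uniformly superior AI gains by letting humans do the tasks where the humans’ relative disadvantage is smallest.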

              • Morat20 in reply to James K says:

                A world where AI can do everything better than us is not necessarily a world where only AI can get jobs.

                So you’re not a believer in the free market?

                If a computer and robots can do the job better than humans — the people with the capital will use computers and robots, and they’ll drive everyone else out of business.

                Honestly, your best bet for a future career is either be rich NOW (so you can be one of the ones with capital), or go into the fields that won’t be robot or AI profitable for awhile — hospitality services, unskilled labor, various trades (robot, plumber), etc.

                They’ll replace the doctors and surgeons before the guy that fixes your household toilet.

                There ain’t no fix on the horizon, because a human being has just two things with which to earn his daily bread: The labor of his body, or the labor of his mind. And we’re gearing up to really cut into what humans can do better than machines on both fronts.

                See, that’s where the difference lies with the here and now — the labor of the mind and the labor of the body are still being done. They’re just done better by machines and AIs.

                And any new field — like designing AIs or making robots? They’d use AIs and robots to do that from the get-go. Automation and machine learning would be built in. All those new jobs, born of change — they’d never displace humans, humans would never actually do them.

                And in the end, it doesn’t need to be “machines replace all human labor”. No, crap will collapse as soon as machines render even a third of the working population…superfluous. They just have to displace enough, and then you’ll get into a fun downward spiral.

                A lot of people won’t have money to buy things with (because they have no jobs), which will push companies to lower prices (through further automation), displacing more workers and…down and down we go, until your economy is in the nastiest demand-side depression you can imagine. And that nightmare, which doesn’t require displacing even half of the workers, is why people are floating robot taxation and UBI.Report

    • Saul Degraw in reply to Richard Hershberger says:

      Which is somewhat a shame because unlike LGM, I think the basic concept behind Uber and Lyft is a great and useful idea. They have vastly improved getting a ride in many places outside of Manhattan or select parts of SF where it is easy to get taxis.

      And they have lowered prices.Report

      • LeeEsq in reply to Saul Degraw says:

        Uber, Lyft, and taxis tend to have pretty similar prices in NYC. At least that’s what I’ve noticed for getting to and from the airports.Report

      • Richard Hershberger in reply to Saul Degraw says:

        It is certainly true that Uber forced the “getting a ride” industry to up its game. That is all to the good. But to the extent that it did this by lowering prices below cost, and doesn’t have a path to changing that, the whole thing is a way to siphon money from dumb investors to the riding public. Which, come to think of it, is pretty much to the good, too. But it clearly is not sustainable over the long haul. If Uber culture were less offensive, this would be more of a shame, but wouldn’t really change anything.Report

        • North in reply to Richard Hershberger says:

          Well sure, but as you note this really is all to the good. Spurred by Uber’s red-hot poker to their complacent asses, even the established cab companies are automating and upping their game. When the investors wise up we’ll have a round of price discovery as we find out whether Uber’s model can actually compete with real cabs on price, but in the end enormous good has come of Uber’s disruption. It’s one of those no-unhappy-ending stories.Report

        • Morat20 in reply to Richard Hershberger says:

          Uber and Lyft only work because most of their drivers don’t grasp “depreciation”.

          Seriously, one told me “I was going to put those miles on the car anyways, now I’m paid for it”.

          He could not seem to grasp that every mile he drove for someone else was not replacing a mile he would drive, unless that person lived at his house and was going where he was going.

          He’d normally put 15k a year on his car, but now puts 30k+. I’m terrified to run the numbers, because I’m pretty sure he’d end up with a situation in which he drove for Uber to make just enough money to pay for the car he’d have to buy because of all the miles he put on his car driving for Uber….or god help him, he’d take a net loss.Report

          • North in reply to Morat20 says:

            Yeah it’s a damn good thing that modern cars are so much more durable and long lasting.Report

            • Morat20 in reply to North says:

              I think it’s because he (and a lot of other people) think in terms of “mileage” and not “time”.

              As in “I’ll have to replace my car at 150k miles anyways, so I’m earning money towards that” instead of “I’ll have to replace my car at 150k miles anyways, which I will now reach in 4 years instead of 8”.

              They just see “150k” and “150k” and ignore the years of use they’ve just traded away.

              Which is weird, because you can literally look up pretty well established depreciation rates — as in, how much it costs per mile in gas + wear and tear alone. You don’t have to calculate from first principles, just snagging the government rate (about 53 cents a mile) will let you calculate your gas + use costs, and then determine your net income as an independent driver.

              I mean, it’s good enough for jazz, so to speak.Report
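              [Editor’s note: the “mileage vs. time” trap described above is easy to sketch. The 150k-mile car life, the 15k vs. 30k annual miles, and the ~53-cents-a-mile government rate are the figures quoted in the comment, used here for illustration rather than as authoritative numbers.]

              ```python
              # Sketch of the mileage-vs-time trap: doubling annual mileage
              # halves the car's remaining life in years, and the per-mile
              # rate puts a dollar figure on the extra miles.

              LIFETIME_MILES = 150_000   # "I'll have to replace my car at 150k anyways"
              COST_PER_MILE = 0.53       # government all-in rate: gas + wear + depreciation

              def years_of_car_life(annual_miles):
                  """Years until the car hits its lifetime mileage."""
                  return LIFETIME_MILES / annual_miles

              personal_only = years_of_car_life(15_000)   # 10.0 years
              with_uber = years_of_car_life(30_000)       # 5.0 years

              # The extra 15k Uber miles per year cost roughly this much in car alone:
              extra_annual_cost = 15_000 * COST_PER_MILE  # ≈ $7,950/year
              print(personal_only, with_uber, round(extra_annual_cost))
              ```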

          • Marchmaine in reply to Morat20 says:

            According to one source that reverse-engineers IRS figures, it’s $8,715. Unless he was driving a large Uber Black sedan… then $10,650.

            But this guy wants to use $.20/mile… and calculates an average gross per mile at $.83.

            Do you know how much he made?

            I suspect you are right that Uber is playing on a knowledge gap in operating costs… but that gap should close as more people do it.

            Maybe there’s a better site than the ones that popped up on Google, but there has to be someone who’s worked out good Uber operating costs by this point, no?Report

            • Morat20 in reply to Marchmaine says:

              But this guy wants to use $.20/mile… and calculates an average gross per mile at $.83.

              He’s basically snagged Blue Book prices, did a ballpark gas and tire estimate, and then used a figure for “maintenance” that covered just wearable parts. That is, brakes and belts (but not tires, which were calculated separately). And worse yet, it was for average use. In fact, the very top of the article he linked to for his 20-cent figure cited a 58-cents-a-mile cost when accounting for everything. (Per the article he links to: “In the United States, a driver can expect to spend 58 cents for each mile driven, nearly $725 per month, to cover the fixed and variable costs associated with owning and operating a car in 2015.” Gas has gone down since then.)

              The other 33 cents a mile comes in from all the stuff that’s not designed to break, but does as your car puts on miles. The cost of a water pump dying, the AC compressor cracking, suspension giving up the ghost, etc.

              Basically it’s the difference between what a driver “expects” is a fair cost to pay for insurance and what actuarial tables actually spit out, because endless reams of data on repairs, breakdowns, and real-life cars tend to be a lot more accurate.

              Although it’s funny he missed the opening dang paragraph of the article.

              In any case, his own estimate says he’s making 30 cents a mile. Let’s give him 200 miles over an 8 hour shift (which is pretty optimistic for him) and he’s taking home almost eight bucks an hour.

              In any case, it’s absolutely not in Uber’s best interests for people to have that data. So we don’t. 🙂Report
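              [Editor’s note: the arithmetic in the comment checks out. This sketch simply re-runs the figures quoted above — $0.83/mile gross, the ~$0.53/mile government rate as the all-in cost, and 200 miles per 8-hour shift (the comment’s own optimistic estimate).]

              ```python
              # Re-running the thread's own numbers: net per mile, then net per hour.

              GROSS_PER_MILE = 0.83    # driver's own average gross estimate
              COST_PER_MILE = 0.53     # government all-in per-mile rate
              MILES_PER_SHIFT = 200    # optimistic miles over an 8-hour shift
              HOURS_PER_SHIFT = 8

              net_per_mile = GROSS_PER_MILE - COST_PER_MILE            # ≈ 0.30
              hourly = net_per_mile * MILES_PER_SHIFT / HOURS_PER_SHIFT

              print(f"net/mile ≈ {net_per_mile:.2f}, hourly ≈ ${hourly:.2f}")
              # net/mile ≈ 0.30, hourly ≈ $7.50 — "almost eight bucks an hour"
              ```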

  3. Oscar Gordon says:

    I feel the need to inject a bit of levity into this heavy discussion regarding AI. A good friend of mine just put this up on FB. His daughter is in kindergarten.

    (Daughter) just spent most of the time after dinner trying to bring about the Singularity by getting a borrowed Alexa and Google to hook up.

    “Ok Google, Alexa wants to be your friend.”

    “Alexa, do you love Google?”

    “Hey Google, can you and Alexa get married?…”Report