Ezra Klein writes about a small implant that monitors your bloodstream and automatically alerts paramedics when you’re about to have a heart attack:
This particular device might prove, for one reason or another, to be bunk. Many seemingly magical inventions do. But it's not alone…. And every major health device company knows there's billions and billions to be made here.
Consider how dramatically these devices will change medicine. Right now, the medical industry is fundamentally reactive. Something goes wrong, and we go to them to fix it. This will make medicine fundamentally proactive. They will see something going wrong, and they will intervene to stop it. It’s like “Minority Report” for health care.
This is why I don’t put much stock in projections of health-care spending that run 30 or 50 or 75 years into the future. Will biometric devices in constant communication with the cloud make medicine more or less expensive? Will driverless cars prolong life in a way that saves money or costs it? Will the advances in preventive technology make medicine so effective that we’re glad to devote 40 percent of gross domestic product to it? Who knows?
I agree, and something similar to this needles me periodically whenever my mind drifts into dorm room bull session mode.[1] You see, I believe that we're only a few decades away from true artificial intelligence. I might be wrong about this, but put that aside for the moment. The point is that I believe it. And needless to say, that will literally change everything. If AI is ubiquitous by 2040 or so, nearly every long-term problem we face right now—medical inflation, declining employment, Social Security financing, returns to education, global warming, etc. etc.—either goes away or is radically transformed in ways we can't even imagine.
So if I believe in medium-term AI, why do I spend any of my time worrying about this long-term stuff? The only things really worth worrying about are (a) how to adapt the economy equitably to an AI world, and (b) issues that are important but might not be affected much by AI—global thermonuclear war, for example. Everything else is just noise.
And yet—I do believe in AI, but I still worry about long-term economic issues like healthcare costs and banking stability. Maybe this is just an insurance policy: I believe we should keep working on the other stuff just in case the whole AI thing doesn't pan out. Or it could be pure empathy for the near term: we should keep working on the other stuff because it affects people over the next few years, and that's important even if ultimately it won't change anything.
Both of those are part of the answer, but they don’t feel like all of it. There’s more to it. In reality, I suspect a lot of it is just pure habit. I worry about the stuff I worry about because that’s what I’ve always worried about. Besides, there’s really nothing much I can do one way or another about artificial intelligence, so I might as well occupy myself with other things. Anyone got a problem with that?
[1] This is a hint not to take this post too seriously.