From the Institute for Progress. There are four of us: Dylan Matthews, Matt Clancy, Jacob Trefethen, and myself. There is a transcript, and here is one very brief excerpt:
Tyler Cowen: I see the longer run risks of economic growth as primarily centered around warfare. There is lots of literature on the Industrial Revolution. People were displaced. Some parts of the country did worse. Those are a bit overstated.
But the more productive power you have, you can quite easily – and almost always do – have more destructive power. The next time there’s a major war, which could be many decades later, more people will be killed, there’ll be higher risks, more political disorder. That’s the other end of the balance sheet. Now, you always hope that the next time we go through this we’ll do a better job. We all hope that, but I don’t know.
And:
Tyler Cowen: But the puzzle is why we don’t have more terror attacks than we do, right? You could imagine people dumping basic poisons into the reservoir or showing up at suburban shopping malls with submachine guns, but it really doesn’t happen much. I’m not sure what the binding constraint is, but since I don’t think it’s science, that’s one factor that makes me more optimistic than many other people in this area.
Dylan Matthews: I’m curious what people’s theories are, since I often think of things that seem like they would have a lot of potential for terrorist attacks. I don’t Google them because after Edward Snowden, that doesn’t seem safe.
I live in DC, and I keep seeing large groups of very powerful people. I ask myself, “Why does everyone feel so safe? Why, given the current state of things, do we not see much more of this?” Tyler, you said you didn’t know what the binding constraint was. Jacob, do you have a theory about what the binding constraint is?
Jacob Trefethen: I don’t think I have a theory that explains the basis.
Tyler Cowen: Management would be mine. For instance, it’d be weird if the greatest risk of GPT models was that they helped terrorists have better management, just giving them basic management tips like those you would get out of a very cheap best-selling management book. That’s my best guess.
I would note that this was recorded some while ago, and on some of the AI safety issues I would put things differently now. Maybe some of that is having changed my mind, but most of all I simply would present the points in a very different context.
The post Metascience podcast on science and safety appeared first on Marginal REVOLUTION.