Future Imperfect: High Points
Saturday I spoke at a Foresight Institute Unconference, using material from my next book, Future Imperfect. Since the audience at their events is already familiar with a lot of odd ideas about the future, I decided to focus on a few things that I thought were interesting and might be unfamiliar. Since I suspect many readers of this blog have similar backgrounds, I thought they might be interested in a very brief precis. For details, see the webbed manuscript of the book.
1. Privacy.
Public key encryption has the potential to give us a level of privacy in cyberspace greater than anything we have ever experienced in realspace. Not only would it be possible to communicate with reasonable confidence that only the intended recipient could read your messages, it would also be possible, using digital signatures, to combine anonymity and reputation--to have an online persona with a provable online identity while controlling the link between that persona and your realspace persona.
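To make the digital signature point concrete, here is a minimal sketch using the Python cryptography library's Ed25519 signatures; the library choice and the sample message are illustrative, not anything prescribed in the book. The persona's public key serves as its durable identity: anyone can check that a new post was signed by the same key as earlier posts, without learning who holds the private key.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The pseudonym keeps the private key secret; the public key *is* the persona.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# A post published under the pseudonym, signed but not tied to a realspace name.
message = b"Post published under the online persona."
signature = private_key.sign(message)

# Anyone holding the public key can confirm this post came from the same persona
# that signed earlier posts -- reputation accumulates without a realspace link.
try:
    public_key.verify(signature, message)
    print("Signature valid: same persona as before.")
except InvalidSignature:
    print("Signature invalid: not the persona it claims to be.")
```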
Surveillance technology, the combination of video cameras on poles, face recognition software, and databases, has the potential to give us a level of privacy in realspace lower than anything we have ever experienced--everything you do in public places not merely recorded but findable. Wait a few years until we can produce video cameras with the size and aerodynamic characteristics of mosquitos, and "public places" become more or less everywhere.
What if we get both? The net result depends on two questions. Can you control the interface between realspace and cyberspace? Strong encryption does you no good if a video mosquito is watching you type. And how important is realspace anyway? The latter question depends on a third technology--virtual reality. In the limit, nothing much of importance is happening in realspace, just bodies in storage lockers being fed nutritious glop which VR turns into sushi and chocolate, while all the real action is in (encrypted) cyberspace.
2. Should we regulate nanotech?
Some of the Foresight people, despite generally libertarian biases, think we should, given the specter of a high school kid in his basement lab destroying the world. I think we need to consider the balance between offensive and defensive technologies. If, in nanotech, offense has a huge advantage, then we're probably done for. If not, it's worth remembering that there will be lots of private demand for defense, but the only people who spend really large sums on finding better ways to kill people and smash stuff are governments. So putting governments in charge of regulating nanotech has a strong feel of setting the fox to guard the henhouse.
3. Can technological progress make us worse off?
Yes. Making human society work depends on very intricate coordination--someone has to make the inputs to make the inputs to make the inputs to what I am producing. The centralized solution to that problem works only on a small scale. The decentralized solution--markets and trade, or something similar--depends on being able to break the world up into pieces (my stuff and your stuff) such that what I do mostly affects my piece (except with your permission) and what you do mostly affects yours. Technological progress can, among other things, increase the scale of what individual humans can do, which might mean that each person's actions have effects spread across a very large number of other people. If so, the number of workable solutions to the coordination problem might be reduced from one to zero.
Comments welcome. Anyone who wants to criticize the above for being only a sketch is invited to first read the longer version.