In conjunction with Foresight Update 51
by J. Storrs Hall, PhD.
Research Fellow, Institute for Molecular Manufacturing
The word “disaster” comes to us from Latin, from “dis”, meaning apart, generally interpreted in this context as meaning the opposite of, and “astrum”, meaning star, generally interpreted as an astrological reference to one’s fortune. “Catastrophe”, though it might appear to contain the same root, does not. It is from Greek: it literally means downturn, and was originally used to refer to the final act of a tragedy. In ancient Greek tradition, tragedy was often the inevitable working out of hubris.
As I write this the fragments of the space shuttle Columbia are still being gathered from eastern Texas and western Louisiana, and no official determination has been made of the cause of the breakup. It is somewhat heartening to note that the hubris of the NASA bureaucracy that caused the Challenger explosion did not seem to be present this time, and the accident appears to have been, as far as anyone can say so far, a case of implacable chance over the level best of a team sincerely dedicated to safety.
It is not clear which accident is cause for greater concern to the nanotechnology community. Both are troubling. Challenger demonstrated once again that bureaucracies will readily abandon common sense and human concerns in the face of political pressures. Columbia demonstrates that accidents will happen in spite of the best that our best and brightest can do.
This is troubling for the nanotechnology community because the general deployment of a new technology carries the potential for more widespread devastation. Unfortunately the current fad is to ignore the benefits of any new technology and reap the mindshare that comes from trumpeting (and often exaggerating) its dangers. This is true not only of politicians but throughout the intelligentsia, the used-car-dealer class of the world of ideas, including columnists, commentators, and writers of thriller novels: in short, anyone who can make money from it.
The fact that the media mindlessly amplify these claims with no filter of understanding or common sense doesn’t help. The day after Columbia was lost, CNN had one of its “helpful” subtitles on the screen as a NASA official was being interviewed. The subtitle explained that Columbia was travelling 18 times the speed of light when the mishap occurred. No wonder they had a problem.
The fact is that if you took a shuttle ride a year, your risk of dying would only be about twice what it is from other causes the rest of the year. For edge-of-the-envelope exploration, this is, in historical human terms, remarkably safe. Charles Lindbergh was called “Lucky Lindy” because everyone who had attempted the solo Atlantic crossing before him perished in the attempt.
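The “about twice” figure can be checked with a back-of-the-envelope calculation. The numbers below are illustrative assumptions, not figures from the article: two shuttle losses in roughly 113 flights as of early 2003, and an assumed baseline annual death risk of about 0.9% for a middle-aged adult.

```python
# Back-of-envelope sanity check of the "about twice" risk claim.
# All inputs are illustrative assumptions, not the author's numbers.

shuttle_losses = 2             # Challenger and Columbia
shuttle_flights = 113          # approximate flights flown through STS-107
per_flight_risk = shuttle_losses / shuttle_flights   # roughly 1.8%

baseline_annual_risk = 0.009   # assumed; varies widely with age and health

ratio = per_flight_risk / baseline_annual_risk
print(f"per-flight risk: {per_flight_risk:.3f}, ratio to baseline: {ratio:.1f}x")
```

Under these assumptions the ratio comes out close to two, consistent with the claim; different baseline assumptions would shift it somewhat in either direction.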
Developing nanotechnology, of course, will involve dangers of a different kind from those of physical exploration. The major difference is that nanotechnology carries the danger of a system getting out of hand and threatening people not directly involved in the effort, or indeed humanity in general. This kind of danger is associated with replicators that can operate in the natural environment.
There have been previous replicator releases associated with exploration or trade, and they have often caused disasters: both disease and pests have been introduced into environments with no resistance to them. The big question is whether that kind of thing can be avoided with nanotechnology, or whether it will happen in spite of our best efforts.
To achieve safety in a space shuttle, one must anticipate all contingencies and make sure an enormously complex system operating at the very edge of technological capabilities works right in every case.
With nanotechnology, we have a much easier problem. We can define a set of conditions under which our system should work, and simply make sure that it doesn’t work otherwise. The simplest example is the use of a fuel or feedstock molecule that doesn’t occur in nature, but which is necessary to the construction and operation of the replicator. By necessary I don’t mean the mechanism checks to see if the special molecule is there and then does something else. I mean that the special molecule is a basic, integral part of the operation that can’t work without it.
The reason this is so compelling an approach is that it will almost certainly be easier to design machines that way. As Eric Drexler pointed out many years ago, we don’t design cars to run on gasoline because that will prevent them from becoming feral and running on wood; we design them that way because it’s the easiest way to get them to work.
Most of the advantages of replicators for specific jobs can be had by “truncated replicators”, ones built around non-replicating kernels. This is the same basic logic as the feedstock limit. Suppose you want a seed to grow into a house. It will need to replicate, say, a trillion nanobots to do so. Let’s assume you want the bulk of the material to come from on-site. You can still provide the seed with a stock of a trillion kernel nanomechanisms. The nanobots replicate everything but the kernel; they have neither know-how nor tools to build kernels. If the kernel is a full cubic micron of nanomachinery (e.g. a computer processor), the original stock of a trillion occupies only a cubic centimeter.
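The volume arithmetic here is easy to verify: a micron is a ten-thousandth of a centimeter, so a cubic centimeter holds a trillion cubic microns.

```python
# Sanity check of the trillion-kernel volume figure from the text:
# 10^12 kernels of one cubic micron each occupy one cubic centimeter.

kernels = 10**12               # one trillion kernel nanomechanisms
kernel_volume_um3 = 1.0        # one cubic micron each

um_per_cm = 10**4              # 1 cm = 10,000 microns
um3_per_cm3 = um_per_cm**3     # 10^12 cubic microns per cubic centimeter

total_cm3 = kernels * kernel_volume_um3 / um3_per_cm3
print(total_cm3)  # -> 1.0
```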
The major temptation to do otherwise will be the desire to operate cheaply by using “free” natural resources as fuel or material. In the long run, this is self-limiting; the natural environment is a finite resource and will be getting more expensive relative to industrial output for the foreseeable future. The best course seems to be to hasten the process by allowing for the manufacture of feedstocks, or kernels, by nanotechnology.
If we trust our safety to a centralized bureaucracy whose edicts, imposed by force, are counter to the perceived interest of actual developers (including, e.g., “rogue states”), unconstrained replicator release is almost certain. On the other hand, early responsible development can create an economic environment where the incentive to “cheat” is reduced or eliminated. In such an environment, normal regulatory and enforcement mechanisms have a much better chance of success.
As long as humans are around, accidents, even disasters, will happen. But with foresight, common sense, and good will, there is good hope that they can remain disasters and not catastrophes.