ABSTRACT
We consider the biological provincialism of traditional SETI, and why there are good arguments for thinking that the bulk of the intelligence in the cosmos is synthetic. Given this possibility, the SETI community should consider how to conduct a meaningful search for intelligence that is not constrained to habitable worlds. To that end, we consider some of the factors that might govern the behavior of highly advanced, cognitive machinery and some strategies that might aid in the discovery of same.
THE ANTHROPOCENTRIC BIAS
The premise of most SETI (Search for Extraterrestrial Intelligence) experiments was established with Frank Drake’s pioneering Project Ozma more than five decades ago [1]. Today’s efforts differ in scale, but not in approach: their strategy is to seek signals produced by cosmic inhabitants whose level of technology is at least as advanced as our own.
For more than two decades, SETI has been largely underwritten by private donations, and because of this the scientists involved are often pressured to make some estimate of the chances of success. To this end, they will frequently invoke the well-known Drake Equation, which quantifies the number of galactic societies currently producing detectable signals. If some estimate of the prevalence of transmitting sources can be made, then a timescale for SETI success can be estimated as well.
Unfortunately, the values of many of the equation’s parameters are still unknown, and the few for which new data have recently become available are little changed from the estimates made when the equation was first written. The Drake Equation, while ubiquitous and helpful in formulating the problem of SETI, does little to determine the odds for any particular experiment.
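The equation referred to above is a product of seven factors. A minimal sketch follows; the parameter values in the example call are purely illustrative assumptions, not estimates from this essay or its references:

```python
# Drake Equation: N = R* · fp · ne · fl · fi · fc · L
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Number of galactic societies currently producing detectable signals.

    r_star   -- rate of star formation (stars/yr)
    f_p      -- fraction of stars with planets
    n_e      -- habitable planets per planetary system
    f_l      -- fraction of those on which life arises
    f_i      -- fraction of those developing intelligence
    f_c      -- fraction of those producing detectable signals
    lifetime -- years over which such signals are produced
    """
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Illustrative (assumed) values only: 1 star/yr, generous biological
# factors, and a 10,000-year signaling lifetime.
n = drake(r_star=1.0, f_p=0.5, n_e=0.2, f_l=0.5, f_i=0.5, f_c=0.5, lifetime=1e4)
print(n)  # on the order of a hundred transmitting societies
```

The point made in the text is visible here: the answer is dominated by the factors we cannot measure, so the output is only as good as the guesses fed in.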
Of possibly greater importance is the Equation’s influence in setting strategy. It assumes that SETI will succeed only if there are at least a few thousand technically accomplished civilizations resident in the Milky Way. Detectable societies are assumed to consist of a large number of individuals, resident on a planet that’s not only amenable to life but also able to beget and sustain complex organisms. In other words, a world analogous to our own.
That view hasn’t changed in a half century. New thinking on how to conduct SETI has been less about the nature of the beings we seek or their habitat, and more about their presumed behavior.
As an example, a matter of popular discussion is whether signals from extraterrestrials are more likely to be deliberate beacons or accidental leakage. This discussion is largely motivated by the trend in our own society to shift to higher-efficiency communication modes (e.g., direct satellites and fiber optics in place of traditional broadcasting). This change has led many to opine that advanced civilizations will be economical, and not generate significant leakage. However, while this argument sounds plausible, there’s no denying that it is highly parochial, based as it is on human experience a scant century after the invention of practical radio and lasers. And even this modest speculation on the conduct of extraterrestrials – that they will be more efficient users of energy than we are – has had little impact on SETI experiments.
In fact, experiments do what they are able, and are mostly indifferent to whether the signal being sought is intentional or otherwise. SETI today continues to adopt the playbooks of the past: the aliens are analogous to us, only more advanced. The circumstances of their environment are also presumed to be similar to ours.
Unsurprisingly then, SETI practitioners have been heartened by recent discoveries of exoplanets. The good news is that worlds akin to our own could exist in great abundance. Current estimates are that between 0.1 and 0.2 of all star systems host an Earth-size planet in the habitable zone [2]. This implies that tens of billions of these favored locales pepper the Galaxy.
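The “tens of billions” figure follows from simple multiplication, assuming a commonly cited Milky Way star count of roughly 200 billion (a figure not given in the text):

```python
# Back-of-envelope check of the "tens of billions" claim.
N_STARS = 2e11        # assumed Milky Way star count (~200 billion; an assumption)
F_HAB = (0.1, 0.2)    # fraction hosting an Earth-size planet in the habitable zone [2]

low, high = (f * N_STARS for f in F_HAB)
print(f"{low:.0e} to {high:.0e} habitable-zone, Earth-size planets")
# i.e., roughly 20 to 40 billion such worlds
```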
But there is also bad news. At a time when the prospects for beings comparable to ourselves are improving, there is a slow-growing realization that biological intelligence may be only a short-lived – and possibly cryptic – stepping stone to the real thinkers of the cosmos: synthetic intelligence.
PROSPECTS FOR SYNTHETIC INTELLIGENCE
If researchers in the field of artificial intelligence (AI) are to be believed, we will invent machines that are our cognitive equals by mid-century. Roboticist Hans Moravec has pointed out that the exponential improvement in digital electronics will produce workaday computers with reckoning power comparable to a human brain in less than a decade’s time [3]. This rapid betterment in computation has led some, such as Vernor Vinge and Ray Kurzweil, to predict a future time – the “singularity” – at which our own intellectual capacities will be swamped by that of our devices [4],[5].
Of course, there are already machines that can outperform the human brain in tasks generally regarded as “intelligent.” The best chess-playing computer can beat the best grandmaster, and the recent triumph of IBM’s Watson computer against seasoned contestants on a television quiz show attracted widespread attention, if not admiration. More recently, Google’s AlphaGo software beat a world-class human player at Go, a game considerably more complex than chess. But as AI entrepreneur Peter Voss has noted, these attainments merely point up the current situation, in which one can either build a machine that is excellent at a narrowly scoped task (e.g., chess) or one that is quite mediocre at many things [6]. To challenge the intellectual abilities of humans, what’s required is what is termed GAI – generalized artificial intelligence.
It is not the intent of this essay to either review or critique developments in AI research, but rather to assume that GAI machines will appear – if not in this century, then in the next. The timing is of little consequence to the implications for SETI. But the events following this development are straightforward:
1. If our own example can be taken as typical, then GAI quickly follows on the heels of radio technology – within a few centuries.
2. There is no reason to believe that the evolution of “wet ware” – augmentations of our own brains – can keep pace with GAI.
3. Because artificial intelligence can quickly evolve (by its own design), it will soon outstrip the cognitive capability of biological beings.
4. Artificial intelligence will be self-repairing, and therefore of indefinite lifetime.
5. GAI will be the dominant form of intelligence for any society that has progressed even slightly beyond the point of being able to send signals into space.
6. Unlike biology, which has been “engineered” bottom-up, GAI will be engineered top-down. We cannot hope to forecast what talents or interests it will have, but the one aspect of its functionality that seems safe to assume is survival. This sounds Darwinian, and therefore biological, but is essential if we are to find GAI now, billions of years into the history of the cosmos.
The bottom line is simple, if disquieting: biological brains will beget synthetic ones. If this technical evolution is commonplace, then there’s reason to expect that the majority of the intelligence in the universe is non-biological. This intelligence would not be dependent on water worlds, atmospheres, or planets at all. Consequently the premise of most SETI – that we should expect to find signals from old, habitable worlds – could be wide of the mark [7],[8].
It seems probable that the future of our hunt for extraterrestrials will require more than just new equipment. We’ll need to rethink what it is we seek.
SO HOW DO WE FIND IT?
Adapting our SETI strategies to the challenge of uncovering GAI may sound simple at first. Nothing more is required than to put less emphasis on targeting habitable planets, or even individual stars, and simply scan as much of the sky as possible. However, there may be opportunities to increase our chances of success by augmenting this simple, brute-force approach with insights about the likely nature or behavior of synthetic intelligence.
First, we are probably well advised to avoid hubris. There may be little we can fathom about the nature of an artificial intelligence that is the result of millions of generations of self-improvement – improvement not predicated on the slight and random modifications of Darwin, but on directed changes. Such intelligence will surely be as superior to us as we are to the nematodes in the garden. Consequently, we should not feel too sure about our speculations as to what GAI might do or how it might be detected. Imaginative ideas about the interests and activities of synthetic beings are plentiful in fiction, but these ideas are vulnerable to anthropocentric bias.
However, there are at least a few aspects of GAI that seem less suspect:
1. Assuming that for such machines more computation is better, they can be expected to prefer locations with abundant energy and an effective heat sink. The former suggests the neighborhoods of early-type stars or black holes (either of the stellar variety or the massive objects hunkered at the centers of galaxies). It’s been suggested that the outer regions of galaxies might be preferred locales for such machines because of their slightly lower temperatures, resulting in greater thermal efficiency [9]. However, given that the efficiency depends only on the temperature ratio between source and sink, this argument is significant only if the energy source is no more than a few hundred kelvin, as space is cold almost everywhere.
2. The short timescales for self-improvement may set up a “winner take all” situation. Whatever machine first appears in a given part of the cosmos could endlessly trump others that arise, since even a cosmically short period of time is a great number of GAI generations, and the new kids on the block could never catch up.
3. Given the dangers present in the universe, a machine might wish to buy insurance in the form of backup machines. These could be kept at a distance that would minimize simultaneous annihilation, but linked to the mother machine so that updates could be continually offered. Detecting this telemetry might offer a way to discover GAI, although one can assume that the communication would be point to point and unlikely to be intercepted with our instruments.
4. Another possible organization scheme for GAI might be hierarchical. Social systems might make sense if the increase of information in a machine eventually becomes small compared to the timescale for interaction with other machines (the light travel time between them). In other words, if the new capability acquired per year by a GAI eventually becomes a very small fraction of the previously accumulated capability, then interchanging information makes sense, since that information is not rendered obsolete and irrelevant in the time it takes to effect the exchange.
5. Whether intelligent machines would have any interest in broadcasting (as opposed to point-to-point telemetry) is impossible to know. One metric for intelligence is the ability to foresee danger and avoid it. The cleverest GAI, by this measure, might be less concerned about revealing their presence with easily found signals. They might also wish to communicate with other such machines that are largely outside their light cone, as these would have information that they could not obtain otherwise [10].
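The thermal-efficiency argument in point 1 rests on the Carnot limit, η = 1 − T_cold/T_hot (a standard thermodynamic result, not stated in the text). A quick numerical check shows why a colder galactic sink matters only when the energy source is itself cool:

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Ideal (Carnot) efficiency between a hot source and a cold sink, in kelvin."""
    return 1.0 - t_cold_k / t_hot_k

# Stellar-temperature source: moving to a colder sink barely helps.
print(carnot_efficiency(5000.0, 10.0))  # inner-galaxy sink, ~0.998
print(carnot_efficiency(5000.0, 3.0))   # outer-galaxy sink, ~0.999

# Source of a few hundred kelvin: now the sink temperature is a
# non-negligible fraction of the source temperature, so it matters more.
print(carnot_efficiency(300.0, 10.0))   # ~0.967
print(carnot_efficiency(300.0, 3.0))    # ~0.990
```

The temperatures above are illustrative assumptions chosen only to show the scaling: for a hot source the gain from a colder sink is a fraction of a percent, consistent with the caveat in point 1.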
These considerations offer a few plausible arguments as to where we should look for GAI. However, they promise little in terms of assuring SETI scientists that such machines would have any motive to make themselves known.
In the case of biological beings, we can safely assume the presence of curiosity, as this trait is necessary to divine the laws of nature and build transmitters we could find. But artificial sentience might not share this type of curiosity. Maybe after solving all the puzzles of science, GAI would be happy to indulge itself with endless entertainments – perhaps with Bostrom-like simulations [11]. If they are capable of self-repair (an assumption in all of the above), then it may be that their primary project is to forestall the heat death of the universe and an end to their own existence.
CONCLUSIONS
What might SETI practitioners do to increase their chances of detecting what is likely to be the most prevalent form of intelligence in the cosmos? Unfortunately, the list is short.
A search for unusual phenomena in the vicinity of high-density energy sources is a straightforward desideratum. Another is to consider that the oldest of such machines might wish to contact their peers in other parts of the cosmos to compare notes and offer novel information. This suggests an experiment in which SETI searches for signals (radio or optical) in the direction of stellar black holes or quasars that are antipodal on the sky. For example, two stellar black holes on opposite sides of the sky might conceivably host GAI whose beamed data would pass through our neighborhood.
Perhaps the best strategy to find the universe’s intellectual giants is the least deliberate: simply be careful to note any unusual phenomena uncovered in the course of astronomical research. Are there nebulae with anomalous, depleted deuterium? Do some stars or galaxies display unnatural infrared excess, a possible tipoff to energy-intensive residents [12],[13]? Are there cosmological behaviors without natural explanation?
It is easy to design an experiment to find the aliens of sci-fi, for these are robustly similar to ourselves. But when you don’t know your prey, the hunt can be hard.
REFERENCES
[1] Drake, F. 1960, “How can we detect radio transmissions from distant planetary systems,” Sky and Telescope 39, 140
[2] Petigura, E. A., Howard, A. W., and Marcy, G. W. 2013, “Prevalence of Earth-size planets orbiting Sun-like stars,” PNAS 110, No. 48, 19273
[3] Moravec, Hans 2000, Robot: Mere Machine to Transcendent Mind, Oxford University Press (Oxford)
[4] Vinge, V. 1993, “The coming technological singularity,” Vision-21: Interdisciplinary Science & Engineering in the Era of CyberSpace, proceedings of a symposium held at NASA Lewis Research Center (NASA Conference Publication CP-10129)
[5] Kurzweil, Ray 2005, The Singularity is Near, Viking Penguin (New York)
[6] Voss, Peter 2015, agi3 - AGI Innovations Inc | Technology
[7] Shostak, S. 1998, Sharing the Universe, Berkeley Hills Books (Berkeley)
[8] Shostak, S. 2011, “Seeking intelligence far beyond our own,” International Astronautics Congress, IAC-11.A4.2.4
[9] Cirkovic, M. M. and Bradbury, R. J. 2006, “Galactic gradients, postbiological evolution, and the apparent failure of SETI,” New Astronomy 11, 628
[10] Windell, Alex Noholoa 2015, private communication
[11] Bostrom, N. 2003, Philosophical Quarterly 53, No. 211, 243
[12] Carrigan, R. 2009, “The IRAS-based whole-sky upper limit on Dyson spheres,” Ap. J. 698, 2075
[13] Griffith, R. L., Wright, J. T., Maldonado, J., Povich, M. S., Sigurdsson, S., Mullan, B. 2015, “The Ĝ Infrared Search for Extraterrestrial Civilizations with Large Energy Supplies. III. The Reddest Extended Sources in WISE,” arXiv:1504.03418 [astro-ph.GA]