Monday, September 17, 2012

Spreading happiness to the stars seems little harder than just spreading

Imagine there are two advanced interstellar civilizations near one another who begin outward colonization around the same time, in an otherwise uninhabited accessible universe. One civilization likes to convert star systems into lots of people leading rich, happy lives full of interest and reward. Call them the Eudaimonians. The other is solely interested in expanding its sphere of colonization as quickly as possible, and produces much less or negative welfare. Call them the Locusts. How much of a competitive advantage do the Locusts have over the Eudaimonians? How much of the cosmic commons, as Robin Hanson calls it, would wind up transformed into worthwhile lives, rather than burned to slightly accelerate colonization efforts? If the Locusts will inevitably capture almost all resources, then little could be done to avert astronomical waste, but an even, waste-free split of the accessible universe would be half as good as a Eudaimonic monopoly.

I would argue that in our universe the Eudaimonians will be almost exactly as competitive as the Locusts in rapidly colonizing the stars. The reason is that the Eudaimonians can also adopt a strategy of near-maximum colonization speed until they reach the most distant accessible galaxies, and only then divert resources to producing welfare. More below the fold.

How much would a eudaimonic payload slow down a von Neumann probe?
It appears that the fastest way to colonize the universe would be with self-replicating von Neumann probes. Seeds must be sent from oasis to oasis (star to star, and then galaxy to galaxy). These probes must be sophisticated enough to travel at speeds near physical limits, and to construct additional probes and fuel or launching apparatus at the new oases they reach. This means that, in addition to fuel, propulsion, and manufacturing plants, they would need artificial intelligence (AI) systems with databases adequate to guide this complicated enterprise.

But an AI's program, large stores of astronomical observations for navigation, and vast databases of technological information would take up an enormous amount of memory and storage space, perhaps many exabytes or more. Given this large body of information, adding additional directives to ensure that the probes eventually turn to producing welfare need only increase storage needs by a very small proportion, e.g. by 1 part in 1 billion. Directives could directly specify the criteria to be eventually optimized, or could simply require compliance with further orders traveling behind the frontier of colonization.
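As a rough illustration of that proportion, here is a minimal back-of-the-envelope sketch; the directive and database sizes are assumptions chosen for the arithmetic, not figures from any actual design:

```python
# Back-of-the-envelope check of the storage-overhead claim, using assumed
# sizes: a ~1 GB directive payload added to a ~1 exabyte probe data store.

GB = 10**9   # bytes
EB = 10**18  # bytes

directive_size = 1 * GB  # assumed size of the welfare directives
probe_store = 1 * EB     # assumed size of maps, tech databases, and AI code

overhead = directive_size / probe_store
print(f"storage overhead: {overhead:.0e}")  # -> 1e-09, i.e. 1 in 1 billion
```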

A von Neumann probe's other components, beyond data storage, would also take up mass and slow travel. Minimum scale for effective construction, navigation, defense against impacts, and similar functions could require that these other components be orders of magnitude greater in total mass and construction time. In that case, copying directives for post-colonization activities might only slow travel by a trillionth, i.e. during the time the Locusts traveled 10,000,000,000 light-years, the Eudaimonians would travel 9,999,999,999.99 light-years. Such a small difference would be negligible compared to random noise in the colonization process, such as variation in the distance between stars, meteorite impacts, supernova irradiation, and so forth.
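The lag arithmetic is simple enough to check directly; the sketch below just restates the numbers in the paragraph above:

```python
# Travel lag from a 1-in-1-trillion speed penalty over a
# 10-billion-light-year colonization run.

penalty = 1e-12              # fractional speed reduction from the payload
distance = 10_000_000_000.0  # light-years traveled by the Locusts

lag = distance * penalty
print(f"Eudaimonian shortfall: {lag} light-years")    # -> 0.01
print(f"Eudaimonians reach: {distance - lag:.2f} ly") # -> 9999999999.99
```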

There isn't room to reach the infinite limit
If dense, unclaimed, accessible resources extended arbitrarily far in all directions, then any speed penalty, however small, would eventually be decisive. If the two civilizations started off 10 light-years apart, then a 1 in 1 trillion speed penalty would be negligible after enough time for probes to travel 10 billion light-years, but overwhelming after enough time for a trillion trillion light-years of travel. The faster probes would enclose the slower, and gain virtually all of the accessible resources (although perhaps leaving much of a sphere billions of light-years across to the slower).
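One can put a number on where "eventually" kicks in; the sketch below, using the 10-light-year head start assumed above, finds the distance at which the lag first exceeds the initial separation:

```python
# Distance at which a 1-in-1-trillion speed penalty produces a lag
# exceeding a 10-light-year initial separation.

penalty = 1e-12    # fractional speed penalty
head_start = 10.0  # light-years of initial separation

crossover = head_start / penalty
print(f"penalty dominates only beyond ~{crossover:.0e} light-years")  # -> 1e+13
# The reachable universe is only billions of light-years deep, far short
# of this, so the slower probes are never actually enclosed.
```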

However, in our actual universe, cosmic expansion means galaxies are receding, and more rapidly with distance. Thus only galaxies within a small number of billions of light-years are potentially reachable, and selection effects on waves of colonization have only a limited opportunity to shape the distribution of colonizers.

Mutation is easier to resist for computers than for animals
Biological life on Earth has evolved through mutation, and the reproductive process introduces significant errors in each generation. However, digital information storage allows for the comparison of redundant copies and the use of error-correcting codes, making substantive mutation many orders of magnitude less likely than in Earthly life.
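As a toy model of this point, here is a minimal sketch of majority voting over redundant copies; real probes would presumably use stronger error-correcting codes, and the per-copy corruption rate is an assumption:

```python
# Probability that a bit is corrupted despite majority voting over k
# independent copies, given per-copy corruption probability p.

from math import comb

def undetected_corruption(p: float, k: int) -> float:
    """Probability that a majority of k copies are flipped."""
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(k // 2 + 1, k + 1))

p = 1e-6  # assumed per-copy corruption rate per replication
for k in (1, 3, 5, 7):
    print(f"{k} copies -> failure rate {undetected_corruption(p, k):.1e}")
# Each added pair of copies multiplies the failure rate by roughly p,
# pushing effective mutation rates far below those of Earthly life.
```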

The opportunity for "commons-burning" is greatest when the cost to colonization is lowest
The idea of burning the cosmic commons is that vast quantities of resources would be used very inefficiently to speed travel. For example, whole solar systems might be disassembled to build many-stage rockets (with exponentially-increasing fuel demands) to rendezvous with other rockets and build still more, for a 1 in 10 trillion chance of reaching the frontier; or clusters of galaxies might be dismantled to produce telescopes and send messages advising the colonizers at the frontier how to increase speed by 1 part in 1 trillion for a short period.
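The "exponentially-increasing fuel demands" are just the Tsiolkovsky rocket equation at work; the sketch below uses an assumed exhaust velocity and ignores relativistic corrections:

```python
# Required initial-to-final mass ratio grows as exp(delta_v / v_exhaust),
# so chasing speed near the limit consumes propellant exponentially.

from math import exp

def mass_ratio(delta_v: float, v_exhaust: float) -> float:
    """Tsiolkovsky rocket equation: initial/final mass for a given delta-v."""
    return exp(delta_v / v_exhaust)

v_e = 0.1  # assumed exhaust velocity as a fraction of c (e.g. a fusion drive)
for dv in (0.1, 0.3, 0.5):  # target delta-v as fractions of c (non-relativistic)
    print(f"delta-v {dv}c -> mass ratio {mass_ratio(dv, v_e):.0f}")
# Small gains in final speed demand whole solar systems of reaction mass.
```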

Any Locust robotic systems that found themselves behind the frontier (e.g. seed construction or laser launch facilities after their early seeds had left) would set to work at such speed-boosting tasks. And to the extent that such support tasks gave noticeable speed boosts, the Eudaimonians would initially do the same, since this would on net increase the amount of resources ultimately converted into worthwhile lives.

However, as the accessible universe fills up, the chance for such activities to make a difference declines, since fewer potential descendant colonies can be created. The Eudaimonians would have no reason to burn up the last oases colonized (what for?), and could greatly cut back on wasteful marginal speedups well before that.

21 comments:

Robin Hanson said...

Yes, if you can control a large fraction of the initial frontier of competitors, you can control a large fraction of the final space of non-burned resources, *if* you are patient enough to first compete as aggressively as the rest, and only use the oases after they have fallen too far behind the frontier to have much of a chance to make a difference there. But that does require a lot of foresight and self-control.

Carl said...

Fortunately, programming self-control into a von Neumann probe isn't the same problem as finding a human with that patience.

Robin Hanson said...

You might look at the "Stay Behind" section in this paper: http://hanson.gmu.edu/hardscra.pdf
It discusses this issue of switching to use oases once you've fallen too far behind the colonization frontier.

Carl said...

Thanks Robin.

Hedonic Treader said...

Thanks for the update! Yes, probes carrying a payload for non-reproductive purposes (hopefully a non-mutation goal system specifying the creation of hedonium with any disposable resources!) don't have to lose out completely. But they are at a disadvantage.

If this disadvantage is small enough to be overpowered by noise, or perhaps by an early launcher advantage, then the expectation value of (non-reproductive) happiness may be non-trivial.

However, this makes several additional assumptions: that there is no mutation in the goal systems that would create non-happiness-maximizing mutants within the colonization front that do sufficiently better, that captured oases can be efficiently defended against attackers (or else all available resources need to be invested in attack or defense to perpetuate mere existence against other phenotypes), and that there will actually be someone who builds a non-mutation goal algorithm into the von Neumann probes with the explicit goal to create hedonium, even though it pays them nothing personally and it would be easier just to create von Neumann probes for reproductive purposes.

Also note that "people leading rich, happy lives full of interest and reward" is not the same thing as hedonium; it is probably more complex to specify (and therefore more expensive), probably significantly less efficient in implementation, and possibly open to new local Darwinian evolution, which may yet again lead to more misery than happiness.

A relevant point here is whether the probes themselves are sentient and, if so, whether they experience more misery than satisfaction during colonization. Even in the non-reproductive payload version, we may still end up with only small amounts of deliberately created happiness in relation to these other system functions (colonization, attack, defense, simulation, etc.). If every failed probe running out of energy feels like a death from starvation, and the happiness-generation is inefficient or has to burn most resources defending oases, the total hedonistic calculus may well be negative (again).

Note that I didn't claim it can't work, just that I'm not convinced we can automatically assume a positive expectation value, let alone a decidedly positive one.

Carl said...

"and that there will actually be someone who builds a non-mutation goal algorithm into the van-Neumann probes with the explicit goal to create hedonium, even though it pays them nothing personally and it would be easier just to create von-Neumann probes for reproductive purposes."

Sending seeds initially requires a huge mobilization of resources. Creating Locust replicators isn't exactly exciting to rally around. On the other hand, probes that eventually unfold to implement a payload promise the chance to provide defense and send back resources (if close) and information (less close) or create other valued things (past the point of no return).

Hedonic Treader said...

I don't quite see the plausibility of returns in physical resources from interstellar travel, unless FTL works. Defense by buffering against alien colonization waves is more plausible; scientific information is more plausible still.

"Other valued things" could appeal not just to utilitarians, but maybe to self-interested investors, e.g. simulations of their minds in their private digital utopias. But even then you're not at hedonium, and there's a risk their personal versions of utopia include more sentient minds who suffer in the process (sadism, wildlife simulations, NPCs, other entertaining violence with real pain etc.)

Without deliberate ethical boundaries, it is still not clear to me why we should expect more pleasure than pain. The majority of people with whom I discussed this topic were frustratingly indifferent to such indirect creation of more suffering. In other words, I see no rational reason to sufficiently trust human nature to create a net-positive hedonistic function.

Carl said...

"I don't quite see the plausibility of returns in physical resources from interstellar travel, unless FTL works."

Within our galaxy, one can take slow orbits over many millions of years to ship stuff back. If you limit travel speeds, you could send back the mass of Jupiter, burning only a moderate portion of it in fusion reactions to power the travel. You could also send stuff back more inefficiently earlier on.
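A rough check of this, under assumed numbers (a ~300 km/s transfer speed and hydrogen fusion's ~0.7% mass-energy yield; neither figure is from the comment):

```python
# Fraction of shipped mass that must be fused to accelerate and then
# decelerate the cargo at a slow interstellar transfer speed.

C = 3.0e8             # speed of light, m/s
FUSION_YIELD = 0.007  # fraction of rest mass-energy released by H fusion

v = 3.0e5  # assumed transfer speed, m/s (~0.001 c)
kinetic_per_kg = 0.5 * v**2  # J per kg of cargo
burn_fraction = 2 * kinetic_per_kg / (FUSION_YIELD * C**2)  # accel + decel

print(f"fraction of cargo burned: {burn_fraction:.1e}")  # -> ~1.4e-04
```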

"(sadism, wildlife simulations, NPCs, other entertaining violence with real pain etc.)"

[I assume here that we're talking about a civilization with human-descended minds in charge.]

If it's cheap, people like to create happiness, and to cover their moral bases, e.g. by buying "carbon offsets" or Catholic indulgences for apparent sins. It would be a very tiny increase in cost to offset these things with some hedonium, and dolorium wouldn't be optimized for entertainment. If this is not legislated and private property continues to exist, a passionate minority could use hedonium (as their ethical lights define it) to offset a much greater quantity of unpleasant sims and NPCs.

I would expect some continuation of existing anti-cruelty trends, bolstered by things like in vitro meat, not to mention "perfect actor" programs which could enjoy producing the appearance of pain in entertainment simulations.

"that captured oases can be efficiently defended against attackers (or else all available resources need to be invested in attack or defense to perpetuate mere existence against other phenotypes)"

Interstellar distances are quite large. I could see defensive needs interfering with very long-term plans (e.g. wanting to slowly extract energy from a system over trillions of years, so that lower temperatures make it possible to do more computation), but one could at least quickly burn through resources doing computation.
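The temperature point follows from the Landauer limit, which sets the minimum energy for erasing one bit at kT·ln 2; a small sketch with illustrative temperatures (none are given in the comment):

```python
# Maximum irreversible bit erasures per joule at a given temperature,
# from the Landauer limit E = k*T*ln(2) per bit.

from math import log

K_B = 1.380649e-23  # Boltzmann constant, J/K

def bits_per_joule(temperature: float) -> float:
    return 1.0 / (K_B * temperature * log(2))

for T in (300.0, 3.0, 0.03):  # room temperature down to far-future cold
    print(f"T = {T:>6} K -> {bits_per_joule(T):.2e} bit erasures/J")
# Every 100x drop in operating temperature buys ~100x more computation
# per unit of stored energy.
```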

"system functions (colonization, attack, defense, simulation etc.)"

Note that probes need to be heavy on propulsion, fuel storage, reaction mass, meteor defense, and so forth. Less room for computation. Industrial machinery for building new seeds (solar panels and antimatter factories, for instance) and weaponry aren't going to have a high density of computation.

davidpearce said...

Nice post. Carl, you've pitted Eudaimonians against Locusts. Locusts produce much less or negative welfare. But who prevails if we pit Eudaimonians against Classical Utilitarians (or safety-conscious Negative Utilitarians, etc.) who launch a utilitronium shockwave?

Hedonic Treader said...

"Note that probes need to be heavy on propulsion, fuel storage, reaction mass, meteor defense, and so forth. Less room for computation."

Yes, and for most of the travel time, it would be on stand-by. Still, it's plausible that the best algorithms contain error signals and make decisions (e.g. you're running out of energy, you'd better use a less promising but physically closer oasis, etc.). The question is how sentient those are.

"Industrial machinery for building new seeds (solar panels and antimatter factories, for instance) and weaponry aren't going to have a high density of computation."

Not the machinery. But the administrative algorithms? Very probably.

But overall, you're probably right. If the payload instructs creating sentient beings for their own sake, and if we expect an echo of currently robust (?) benevolence bias in how those structures unfold, their hedonic states probably dominate the calculus. I now give higher credence to the possibility that I was at least somewhat too pessimistic before.

Carl said...

"Still, it's plausible that the best algorithms contain error signals and make decisions (e.g. you're running out of energy, you'd better use a less promising by physically closer oasis etc.) The question is how sentient those are."

Yes, I was allowing for this. Even assuming that the control algorithms are about as bad as they could be doesn't seem to change the conclusions much because of the differences in allocated resources and conditions (higher operating temperatures, for example).

davidpearce said...

Carl, sorry, it was more of an observation. Here on Earth, the possibility that we might ever convert ourselves into utilitronium - usually assumed to be relatively homogeneous matter and energy optimised for maximum utility - is sociologically implausible. Eudaimonian scenarios are much more credible. But what about the inert matter and energy in the rest of the accessible universe? Are classical utilitarian paths to happiness-creation more likely, or instead complicated, Rube Goldberg paths? The scenario which maximises the abundance of happiness / positive value in an empirical sense, i.e. launching a utilitronium shockwave, also seems technically easier, and more efficient, than messy Eudaimonian scenarios. Or maybe the assumption that "utilitronium" is relatively homogeneous matter and energy is incorrect.

Tim Tyler said...

Those falling behind the leading edge will surely do R&D and gather energy, and then beam the results to the front using lasers.

Tim Tyler said...

I find the idea of attempting to dictate a shift in values of entities many billions of years in the future to be bizarre. Our job is to keep the spark alive. Our descendants will be much better placed to know what they want than we are.

Tim Tyler said...

David, happiness now risks eternal oblivion. It is important for us not to guzzle too much happiness juice - or else we are likely to get fat and slow - and then be eaten by fitter aliens. Conquering the universe is the first step.

davidpearce said...

Tim, a few thoughts...
1) How worried should we really be about being eaten, literally or metaphorically, by fitter aliens? Yes, the principle of mediocrity dictates the existence of other civilisations. But this doesn't mean we're at risk. Quite possibly primordial life originates more than once only in a vanishingly small percentage of life-supporting Hubble volumes. If so, we are typical in being effectively alone. Certainly, all the conjectures I've read about the origin of information-bearing self-replicators contain an extremely thermodynamically improbable step at some stage or other. Presumably we'll know more in the next few decades.

2) Today, negative utilitarians are usually accounted a (potentially) greater existential risk than their classical counterparts owing to their willingness to press the world's notional "off" switch. But arguably it's the classical utilitarian who is the greater threat to intelligent life. For whereas the negative utilitarian believes that once we have permanently phased out the biology of suffering, our ethical duties have been discharged, the classical utilitarian is obliged to keep on striving for ubiquitous maximum bliss. I'd find this worry especially troubling if the Singularity Institute's conception of an Intelligence Explosion - and a severe risk of non-(human)friendly AGI - is correct. For whereas tiling the world with e.g. paperclips is intuitively arbitrary and implausible, the need to tile the world with utilitronium is an implication of a classical utilitarian ethic with which an AGI might plausibly first be programmed, naively or otherwise. However, I'm a sceptic about FOOM scenarios, so I'm not the best person to explore this possibility.

3) I'd agree with you, Tim, that we should try and avoid doing anything irrevocable until we're sure we know what we're doing. Aiming to create minds animated by information-sensitive gradients of well-being [one version of Carl's Eudaimonian life-forms] is more prudent than propagating ultra-intense pure pleasure. How prudent exactly, I'm not sure: discussing unknown unknowns is always a challenge. But by way of example of the case for prudence, normally it goes without saying that we have an ethical responsibility for the future but not the past. Maybe an advanced civilisation wouldn't agree. Within the framework of post-Everett quantum mechanics, physicist Lev Vaidman, for instance, argues ("Backward evolving quantum states") for the two-state vector formalism where causality is time-symmetric:
http://www.tau.ac.il/~vaidman/lvhp/m100.pdf
http://en.wikipedia.org/wiki/Two-state_vector_formalism
http://physicsworld.com/cws/article/news/2012/aug/03/can-the-future-affect-the-past
Such scenarios would make our ultimate ethical responsibilities much wider than is normally supposed.

Tim Tyler said...

David, it is the anthropic principle - not the principle of mediocrity - that applies to our existence. The principle of mediocrity tells us nothing about the existence of aliens. We don't know that we are alone, the chances seem fairly high that we are not, and we probably shouldn't be gambling on having no neighbours - or benevolent ones.

Aliens or no, it seems prudent to look after our existence - and lay off the "happiness juice" - until we are more certain of basic security issues. That seems unlikely to happen for the next billion years or so. My counsel would be to forget about dosing ourselves with superhappiness.

davidpearce said...

The anthropic principle tells us it's unsurprising that we find ourselves in a Hubble volume whose parameters seem fine-tuned to allow the existence of life. But the anthropic principle doesn't, as far as I can tell, say anything about the proportion of life-supporting Hubble volumes where primordial life arises more than once - and hence whose adaptive radiation could lead to cosmological conflict. Note I wasn't claiming that we are really alone. Rather my best guess - and in our current state of ignorance, it really is only a guess - is that googols of civilisations exist elsewhere, but they are either causally inaccessible beyond our cosmological horizon, or (in the case of life in other quasi-classical Everett branches) interfere only to a vanishingly small extent with "us".

I understand your worries about "happiness juice". But recall that unlike a classical utilitarian, I'm not urging that we are ethically obliged to maximise pure bliss, nor am I urging that we do anything to compromise the development of full-spectrum superintelligence. Rather, the existence among intelligent agents of experience severely below "hedonic zero" is, I'd argue, itself a form of existential risk, because some of those agents may decide, not merely that the world is better off without them, but that it would be better if the world didn't exist either. Not everyone who shares David Benatar's bleak diagnosis of life ("Better Never To Have Been") will share his unworkable antinatalism.

By contrast, making sure that we all value life by underwriting gradients of intelligent well-being can potentially ensure that each of us feels we have a vested interest in life's continuation.

Hedonic Treader said...

This comment has been removed by a blog administrator.

Siddharth said...

Two questions:

1) I don't understand why programming in 'maximize welfare' would be easy. Isn't that what the Singularity Institute is trying to come up with - a friendly AI? What if it doesn't have a nice compact description?

2) Wouldn't the disadvantage be in the number of further von Neumann probes sent? If the goal is to increase happiness, then with all the new resources you are mining, you aren't making any new von Neumann probes to send further. Locusts would set up shop and churn out probes, right?

Carl said...

Siddharth:

1) Selecting a payload is a separate problem, but encoding a desired payload should be within the reach of a civilization that can build fast probes (remember that if some send off slow probes early, they can be outpaced by later fast probes or weapons).

2) The point of the post is that those seeking to produce some payload beyond colonization can first focus on probe-building and speed, and eventually switch over to producing their desired payload as the cost to colonization speed declines.