Aggregative hedonistic utilitarians are often concerned with the expected value of pleasure minus pain going forward. For instance, they may wonder how much expected value, the "astronomical waste," would be lost if humanity were rendered extinct by a sudden asteroid impact.
One important consideration is that biological life as we know it appears to generate pain or pleasure at a very low density relative to long-term technological possibilities. As Nick Bostrom's astronomical waste paper notes, the energy output of the Sun is many orders of magnitude greater than the energy that goes into life on Earth, and the gap is larger still for the energy that powers animal nervous systems. Further, ultra-efficient computing substrates could run emulations of animal nervous systems at much lower energy cost.
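As a back-of-the-envelope illustration of the gap, here is a minimal sketch; the figures are rough, commonly cited order-of-magnitude estimates, used purely for illustration:

```python
import math

# Rough order-of-magnitude figures (illustrative estimates only):
SUN_OUTPUT_W = 3.8e26          # total solar luminosity, watts
SUNLIGHT_ON_EARTH_W = 1.7e17   # solar power intercepted by Earth, watts
PHOTOSYNTHESIS_W = 1e14        # ~100 TW captured by global photosynthesis, watts

def orders_of_magnitude(larger: float, smaller: float) -> float:
    """How many powers of ten separate two quantities."""
    return math.log10(larger / smaller)

print(orders_of_magnitude(SUN_OUTPUT_W, SUNLIGHT_ON_EARTH_W))  # ~9.3
print(orders_of_magnitude(SUN_OUTPUT_W, PHOTOSYNTHESIS_W))     # ~12.6
# The energy powering animal nervous systems is a further small fraction
# of the photosynthesis figure, so that gap is larger still.
```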
On particular accounts of the normative importance of pain and pleasure, one could further streamline conscious software programs to have just the right features to maximize the pain or pleasure produced by a given lump of mature computing hardware ("computronium").
Call computronium optimized to produce maximum pleasure per unit of energy "hedonium," and that optimized to produce maximum pain per unit of energy "dolorium," as in "hedonistic" and "dolorous." Civilizations that colonized the galaxy and expended a nontrivial portion of their resources on the production of hedonium or dolorium would have immense impact on the hedonistic utilitarian calculus. Human and other animal life on Earth (or any terraformed planets) would be negligible in the calculation of the total. Even computronium optimized for other tasks would seem to be orders of magnitude less important.
So hedonistic utilitarians could approximate the net pleasure generated in our galaxy by colonization as the expected production of hedonium, multiplied by the "hedons per joule" or "hedons per computation" of hedonium (call this H), minus the expected production of dolorium, multiplied by "dolors per joule" or "dolors per computation" (call this D).
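Restated as a minimal sketch in code (the function name and interface are purely illustrative; the units may be joules or computations, as preferred):

```python
def net_hedonic_value(expected_hedonium, hedons_per_unit,
                      expected_dolorium, dolors_per_unit):
    """Approximate net pleasure from colonization: expected hedonium
    production times H, minus expected dolorium production times D."""
    return expected_hedonium * hedons_per_unit - expected_dolorium * dolors_per_unit
```

Under the symmetry assumption below (H=D), this quantity is positive whenever the expected quantity of hedonium exceeds that of dolorium.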
By symmetry, my default expectation would be that H=D. Insofar as pain and pleasure involve accessibility to conscious reflection, and connections to decision-making and memory, these features impose similar demands for both pain and pleasure. Evolutionary arguments about the distribution of pain and pleasure in the lives of animals, e.g. that in the lifecycle of some organism there are more things that it is important to avoid than to approach, are irrelevant to hedonium and dolorium. Pleasure (or pain) is set to maximum, not allocated to solve a control problem for a reproduction machine.
This is important to remember, since our intuitions and experience may mislead us about the intensities of pain and pleasure that are possible. In humans, the pleasure of orgasm may be less than the pain of deadly injury, since death is a much larger loss of reproductive success than a single sex act is a gain. But there is nothing problematic about the idea of much more intense pleasures, such that their combination with great pains would be satisfying on balance.
So the situation would look good for hedonistic utilitarians of this sort: all that is needed is a moderately higher (absolute as well as relative) expected quantity of hedonium than of dolorium. Even quite weak benevolence, or the personal hedonism of some agents transforming into or forking off hedonium, could suffice for this purpose.
Now, the "measurement" of pain and pleasure brings in definitional and normative premises. Some may say they care more about pleasure than pain or vice versa, while others build into their "unit" of pain or pleasure a moral weighting in various tradeoffs. However, if we make use of data such as the judgments and actions of agents in choice problems, quantity of neuron-equivalents involved, and so forth, the symmetry does seem to hold. I would distinguish traditional and negative-biased hedonistic utilitarians in terms of the tradeoffs they would make between the production of hedonium and dolorium.
Very nice post, Carl. I shall have to think about this some more.
You're basically saying that the only two things that will matter to the overall calculation will be (1) maximally efficient torture and (2) maximally efficient enjoyment.
Even if H=D, I think I still care more about D, as you hint in your last paragraph. Thus, I'm less interested in reducing extinction risks so that more computronium can be produced; I'm more anxious to ensure that the amount of torture that does take place is minimized. What would you recommend for someone with this aim?
Yes, I want to distinguish between the empirical and structural properties on the one hand, and normative valuation on the other (although as noted in the post, I suspect that intuitions are skewed by our relative lack of experience with hedonium-intensity pleasures; as we have discussed elsewhere, this is reason to think that your fully informed self that had experienced both extreme pains and extreme pleasures would be less negative-skewed).
ReplyDelete"What would you recommend for someone with this aim?"
This is a narrow post, and not the best place for this.
Excellent post, very thoughtful and concise, even though I don't share the implied values.
I don't see why we would expect a symmetry in the first place: far more physical states are deadly than are healthy, so why would we expect equally many mental states to be as pleasurable/non-aversive as painful/aversive?
If anything, the neuroscience would seem to suggest that pleasure is hard to induce: http://en.wikipedia.org/wiki/Pleasure_center
In contrast, horrific pain is easy to induce (just stimulate peripheral nerves).
Gwern, those are just the sort of arguments I was referring to in the sixth paragraph.
My point doesn't invoke evolution, though... if anything, it's more of an entropy or thermodynamics argument: if pain corresponds to high-entropy states and pleasure to low-entropy states, then intrinsically the latter will be harder to create and maintain, and will require more negentropy to be used up.
States in which the organism is well-functioning are lower entropy than dysfunctional states. If you have evolutionary pressures to assign pain to dysfunction, then pain will be bound up with higher-entropy states in evolved creatures. But I don't see a need for this assignment to be copied in artificial structures.
Likewise, damage to an organism can be detected in a highly localized way (e.g. severe pressure or heat), and signaling it is useful. So evolutionary pressure supports relatively direct connections between pain and certain nerves, and less direct relations for harder-to-identify goods. Again, this is not something that needs to carry over to artificial minds.
> But I don't see a need for this assignment to be copied in artificial structures.
I think there's a mismatch here. Are we discussing minds isolated from the outside universe or ones connected to it?
If artificial minds, with utility functions over the outside universe, are connected to true info on the outside universe, then it will take as much negentropy to make them realize high utility as it takes to arrange the universe in a high-utility way, which, for any utility function we would care about, is a low-entropy universe (as opposed to a high-entropy heat-soup).
Cross-posted from the dystopic scenarios discussion on felicifia:
I think the assumption of symmetry, on the grounds that H and D aren't constrained by fitness considerations, is a valid one, but it may reduce our expectation value of both H and D in any scenario in which resource use is mostly driven by Darwinian algorithms. Assume a space colonization event resulting in an open evolution of cultures, technologies, space-faring technological and biological phenotypes, etc. How many of them will produce either H or D? Wireheading temptations can locally generate H, and game-theoretic considerations can result in D (threats of supertorture as an extortion instrument). But assuming a relatively low level of global coordination, both H and D will probably only exist in small quantities: there will be ordinary selection effects against wireheads; Darwinism favors reproduction optimizers instead.
Furthermore, the expectation values of H and D seem to be linked: In scenarios in which a high quantity of H can be expected, high quantities of D are also more probable, and vice versa. Assume a scenario in which powerful factions have explicit hedonistic goals and want to produce H. Those are exactly the kinds of scenarios in which we would see rivals credibly threatening to produce large quantities of D in order to extort resource shares for their own fitness from the hedonistic factions. Conversely, if D has no practical use because no one powerful enough will care about it, H is also much less likely because the powerful factions all care about other things than hedonism (probably just survival and reproduction of their idiosyncratic patterns).
If the expectation values of H and D are roughly linked, and open colonization and evolution cause strong selection effects against using resources on H and D, H-D may not dominate the expected utility of a big future after all.
Hedonic Treader, would threats of supertorture be more common than promises of super-pleasure?
Malo Bourgon mentioned to me that the literature on negativity bias seems relevant.
Simon Knutsson wrote a response to this article.