Content warning: Reading this text could be very harmful to mental health, including but not limited to triggering panic attacks.

I think much s-risk research focuses on the possibility of non-human animals or computer (simulation) suffering in the future. However, it is entirely possible that currently existing people (you, I, and our families and friends), or your potential offspring, could suffer astronomically (extremely intensely (e.g., immolation) and/or for a very long time (e.g., trillions of years)) in the future, at the hands of AI or human-controlled AI.

It seems possible that current humans, if their lifespans were radically extended by molecular assemblers or other technologies, could suffer for many billions or trillions of years, possibly with physical pain and mental suffering of the greatest intensity a sentient being could experience.

While strict utilitarians would care about themselves and insects equally, most people care about themselves, their offspring, and their families the most. I think s-risk advocacy should consider awakening people to the possibility that they themselves and their children, families, and friends could suffer for more than a trillion years.

Although the potential children you could have now will not suffer s-risks if you choose not to have them, a future AI could in fact harvest your genetic material or genetic information to create your genetic offspring and torture them for billions or trillions of years.

Astronomical future suffering caused by a sadistic AI or a sadist-controlled AI could be the worst along all three dimensions of a suffering-focused three-dimensional population ethics: P (People), the number of victims; I (Intensity), the intensity of suffering; and D (Duration), how long each victim suffers. For example, regarding P, our universe is said to have the resources to contain 5*10^46 humans (p. 8). For I, the intensity could range from solitary confinement to immolation. For D, the duration could range from one billion to many trillions of years per sentient being.

It seems plausible that, unlike extinction risk, suffering risk could vary in badness across its scenarios. For example, pronatalism-caused astronomical future suffering (AFS) and sadism-caused AFS would arguably involve similar suffering-years, since the maximum possible P*D is ultimately determined by the resource constraints of the universe, and there is no intrinsic reason to think high-intensity suffering computation would require much more resources than low-intensity suffering computation. However, sadism-caused AFS could be much worse than pronatalism-caused AFS along the I (intensity) dimension. Even if sadism-caused s-risk is 10^3 times less likely than pronatalism-caused s-risk, it also seems plausible that sadism-caused AFS could be more than 10^3 times worse than pronatalism-caused AFS in I, and hence in total suffering (P*I*D). If it is true that (probability of pronatalism-caused s-risk) / (probability of sadism-caused s-risk) < (amount of suffering in sadism-caused AFS) / (amount of suffering in pronatalism-caused AFS), there seems to be some reason to also focus on sadism-caused s-risks, unless focusing on them is ineffective or even counterproductive for (expected) suffering reduction.

Although I think that (1) awakening people to the possibility that they themselves and their children/family/friends could be victims of s-risks, and (2) awakening people to the possibility that sadism, not just pronatalism, could be a cause of s-risks, could both raise awareness of s-risk issues, I think sadism-caused s-risk research/advocacy perhaps carries more information-hazard risk than pronatalism-caused s-risk research/advocacy. So I am not sure whether this post has positive NU-EV, nor am I sure it has more than a 50% chance of positive NU-EV (possibly 60-65%?).
What do you think about (1) the advocacy value of awakening people to the possibility that they themselves and their children/family/friends could suffer AFS, possibly for billions or trillions of years; (2) whether sadism-caused s-risk is, given its probability or its badness, an important concern or cause area; and (3) the possible information hazards of sadism-caused s-risk research?

Thoughts on risks of (astronomical) future suffering (a/k/a suffering risk or s-risks)