You have probably seen in the news the collapse of the cryptocurrency exchange FTX, and stories about its drug-addled charlatan founder, Sam Bankman-Fried.
What has come to light is that Bankman-Fried, along with a growing number of influential big tech types, and possibly policymakers, is an adherent of twin philosophical fads called effective altruism and longtermism. Effective altruism is, in simplified terms, the principle that one's altruistic efforts ought to be optimized to produce the greatest good. Longtermism is the principle that one ought to take a very long view of humanity's existence and direct one's efforts toward doing the greatest good over the whole of humanity's future, rather than in the present. So, expending one's resources to help victims of flooding in poor countries today is out. Instead, the greatest moral good is achieved by devoting our efforts to minimizing existential risks to humanity and bringing about space colonization, so that we might build an intergalactic empire supporting exponentially more humans living not-miserable lives, the majority of whom would have only a digital, rather than physical, existence (I am not making this up). Members of the movement are also obsessed with the idea of a global apocalypse brought about by artificial intelligence, as in the Terminator movies, and the ever-present need to take steps to avert it. Bankman-Fried is said to be consumed with this idea.
There is a good article about the movement here:
What the Sam Bankman-Fried debacle can teach us about "longtermism"
I'm not surprised that longtermism led to fraud, corruption and disaster. I'm mostly surprised it wasn't worse
www.salon.com
Ever wondered why Elon Musk is so dead-set on founding an otherwise useless Mars colony? Or why Mark Zuckerberg abruptly changed tack and made the realization of his ridiculous "Metaverse" concept the goal of his company? Longtermism might have the answer.
The leading lights of the movement are two Oxford University philosophers, William MacAskill and Nick Bostrom. Bostrom came up with the "simulation hypothesis" a few years ago (which, incidentally, is another thing Elon Musk has spoken about publicly on multiple occasions and apparently takes seriously). You will note the religious quality of these beliefs about humanity's purpose, with their certainty about where we are headed as a species and about what circumstances are most desirable for future humans to live under.
People holding these beliefs would clearly be dangerous in positions of power, as the beliefs invite a radical departure from traditional morality and can justify deliberate harm to many in the present in order to bring about an imagined utopian future. They are equally repugnant to traditional liberals, conservatives, and socialists alike. There is a suggestion that Bankman-Fried's commitment to these principles led him to see conning people out of large amounts of money as a justifiable means to the end of supporting the goals of the effective altruism movement. He bankrolled many non-profits devoted to effective altruism, and couched his answers to interviewers' questions about his personal philosophy in the jargon of the EA movement.