“Effective altruists” believe their purpose in life is to relieve suffering, but many don’t work for charities. Eilish Grieveson takes a look.
Charitably minded people are channelling earnings from tech and finance careers into charities ranked with an actuary’s eye – and increasingly worrying about robots.
Late-night surfers of YouTube’s shores might stumble upon a channel with videos whose bizarre titles and thumbnails suggest an automatic content generator run amok.
Many have an insect theme: “Sped Up Footage of a Dying Fly” and “Black Beetle Drinking from Band-Aid”. Among them is a 50-minute video narrated by the earnest voice of a quietly spoken American backed by gentle new age music. The title: “Ethical Treatment of Dust Mites”.
Strange as they are, these videos were in fact made by one of the founding members of a growing social movement that has gained support from Bill Gates, Elon Musk and the late Stephen Hawking, as well as researchers at Oxford and Cambridge universities.
The maker is Brian Tomasik, a philosopher and former Microsoft programmer who has donated roughly US$200,000 to the movement, dubbed “effective altruism”.
When Tomasik was 15 he developed chronic esophagitis, resulting in severe pain through his teenage years. Tomasik says the experience “rewired his brain” and awakened him to the scale and intensity of global suffering.
His family encouraged him to plan a career with a non-profit organisation, or perhaps in policy analysis. But, influenced by ethical philosopher Peter Singer, the then 20-year-old argued on his website in 2006 that if he worked for a charity he would be taking that job away from a person who might not be as passionate as he was about reducing suffering, but would nonetheless do the job just as well.
By instead pursuing a career in a lucrative industry, he would not only be able to donate more but he might positively influence the culture of his employers.
His essays gained popularity online and, in 2009, Oxford philosophy postgraduate William MacAskill and two friends founded ‘Giving What We Can’, which uses cost-benefit analysis to evaluate and rank charities on their effectiveness and to promote regular donating. Today, 3300 ‘Giving What We Can’ members have pledged to give 10 percent of their income to charity each year.
MacAskill then went on to found 80,000 Hours, a career advice non-profit organisation that promotes ‘earning to give’ and guides young people towards careers that are a good personal fit and come with reasonably lucrative salaries.
The name comes from the roughly 80,000 hours people will spend at work over their lifetimes.
Earning to give
“Would Bill Gates have done more good if he’d worked at a small non-profit?” the organisation asks on its website. “We don’t normally think of software engineering as a path to doing good, but Gates has saved the lives of millions of children by funding vaccines. That’s a huge amount of good, even if you’re not keen on Microsoft.”
It recommends that people who want to make a difference seek jobs that are highly paid and well-regarded enough that they can “switch out later”.
The best options include jobs in finance, real estate and technology – such as quantitative trading or software engineering. However, the site also points out that tradespeople such as plumbers can earn more than many university graduates.
Following the principles of “earning to give”, Matt Wage, a philosophy graduate, took a job as an arbitrage trader on Wall Street in order to donate over half of his pre-tax salary to the Against Malaria Foundation, which distributes low-cost long-lasting insecticidal nets across sub-Saharan Africa. Since 2000, the nets have prevented an estimated 450 million cases of malaria.
New Zealand’s philanthropic answer is the Effective Altruism NZ charitable trust. It has 185 followers on Facebook, provides advice on recommended charities and supplies free copies of William MacAskill’s guidebook ‘Doing Good Better’.
It ran a Christmas fundraising campaign for the Against Malaria Foundation to match donations up to a total of $17,300.
Members of its Facebook group were asked how they enacted the principles of effective altruism (EA). Nine local members said they are specifically ‘earning to give’, and two are studying courses chosen with the aim of being able to ‘earn to give’. Other members said they were “directly working on improving the world” or in education they hoped would enable them to do so.
The administrator of the social media group, Catherine Low, said the general feeling was that most members could have a greater impact working in a role that directly helped people, unless they had “a particularly great ability to land very high-paying jobs”.
Down to earth vs AI concerns
Globally the “effective altruism” movement has become divided between those who use their earnings to improve lives here and now, and those who believe there is little point in fighting diseases of poverty if genetic engineering can revolutionise medicine.
The majority of effective altruists regard reducing global poverty as their top ethical priority. But increasing philanthropic interest in future risks, such as artificial intelligence, has become a major point of contention within the EA community.
In 2014, the Future of Life Institute was founded by EA-affiliated cosmologist Max Tegmark, focusing particularly on how to safeguard humanity from risks arising from artificial intelligence. Elon Musk and Stephen Hawking joined the board of advisors, with Musk donating US$10 million.
Three years later, the EA grant-making organisation Open Philanthropy Project donated US$3,750,000 to the Machine Intelligence Research Institute (MIRI). That was followed by a US$488,000 grant in August 2018 from Effective Altruism’s ‘Long-Term Future Fund’.
MIRI is a think tank focused on the threat of artificial intelligence. It was founded by Eliezer Yudkowsky, a self-taught logician who popularised his philosophy of science and rationality in a successful work of fan fiction, ‘Harry Potter and the Methods of Rationality’.
On Yudkowsky’s website, LessWrong, members have speculated that a vengeful super-intelligent AI – a ‘basilisk’ – might eternally punish those who attempt to thwart its development. Many on the forum became extremely distressed at the prospect.
In 2013, Michael Anissimov, an alt-right ideological leader and former paid publicist for MIRI, said they were right to be distressed, as it would be “foolish” not to take the basilisk idea seriously. Yudkowsky himself rejects the ‘basilisk’ idea and claims to be “actively hostile” to the alt-right.
Techno-progressives who worry about vengeful, god-like robots and plan for human genetic engineering make for odd bedfellows with big-hearted activists who sacrifice consumer comforts in hopes of ending global poverty, disease and factory farming.
But at the 2017 global EA conference, by and large, the two extremes still managed to find some common ground, with many delegates having a foot in both camps, according to blogger Scott Alexander.
As for Tomasik – who worries even about the welfare of dust mites – when interviewed for the 80,000 Hours blog and asked if his ethical preoccupations reduced his quality of life, he replied with characteristic equanimity. “It’s quite possible that I’m happier and more fulfilled than if I had never thought about suffering and was still playing video games.” But he’s “probably not too much happier”, he added, given the tendency of people to return to their baseline of happiness regardless of good or bad life events.