Measure is unceasing


Effective Altruism No Longer an Expanding Empire. (2023/01/30)

In early 2022, the Effective Altruism movement was triumphant. Sam Bankman-Fried was very utilitarian and very cool, and there was such a wealth of funding that the bottleneck was capable people to implement projects. If you had been in the effective altruism community for a while, funding was relatively easy to acquire. Around me, I saw new organizations pop up like mushrooms.

Now the situation looks different. Samo Burja has this interesting book on [Great Founder Theory][0], from which I’ve gotten the notion of an “expanding empire”. In an expanding empire, like a startup, there are new opportunities and land to conquer, and members can be rewarded with parts of the newly conquered land. The optimal strategy here is unity. EA in 2022 was just that: a united social movement playing together against the cruelty of nature and history.


Imagine the Spanish empire, without the empire.

An in-progress experiment to test how Laplace’s rule of succession performs in practice. (2023/01/30)

Note: Of reduced interest to generalist audiences.

Summary

I compiled a dataset of 206 mathematical conjectures together with the years in which they were posited. Then in a few years, I intend to check whether the probabilities implied by Laplace’s rule—which only depends on the number of years passed since a conjecture was created—are about right.
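The post doesn’t spell out the formula, but the standard rule-of-succession reading is: a conjecture that has remained open for n years without being resolved gets probability 1/(n+2) of being resolved in the next year. A minimal sketch (the function names are mine):

```python
from math import prod

def p_resolved_next_year(years_open: int) -> float:
    # Laplace's rule of succession with zero "successes" (resolutions)
    # observed across `years_open` trials: P(success next trial) = 1/(n + 2).
    return 1 / (years_open + 2)

def p_resolved_within(years_open: int, horizon: int) -> float:
    # Chain the per-year probabilities: the conjecture survives year i
    # with probability 1 - 1/(n + i + 2). The product telescopes to
    # P(resolved within k years) = k / (n + k + 1).
    p_survives = prod(
        1 - p_resolved_next_year(years_open + i) for i in range(horizon)
    )
    return 1 - p_survives
```

For instance, under this reading a conjecture that has stood open for 100 years gets 5/106 ≈ 4.7% probability of being resolved within the next five years.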

My highly personal skepticism braindump on existential risk from artificial intelligence. (2023/01/23)

Summary

This document seeks to outline why I feel uneasy about high existential risk estimates from AGI (e.g., 80% doom by 2070). When I try to verbalize this, I view considerations like 

There will always be a Voigt-Kampff test (2023/01/21)

In the film Blade Runner, the Voight-Kampff test is a fictional procedure used to distinguish androids from humans. In the normal course of events, humans and androids are pretty much indistinguishable, except when talking about very specific kinds of emotions and memories.

Similarly, as language models or image-producing neural networks continue to increase in size and rise in capabilities, it seems plausible that there will still be ways of identifying them as such.

Image produced by DALL-E 2

Interim Update on QURI’s Work on EA Cause Area Candidates (2023/01/19)

Originally published here: https://quri.substack.com/p/interim-update-on-our-work-on-ea

The story so far:

Prevalence of belief in “human biodiversity” amongst self-reported EA respondents in the 2020 SlateStarCodex Survey (2023/01/16)

Note: This post presents some data which might inform downstream questions, rather than providing a fully cooked perspective on its own. For this reason, I have tried not to express many opinions here. Readers might instead be interested in more fleshed-out perspectives on the Bostrom affair, e.g., here in favor or here against.

Graph

Can GPT-3 produce new ideas? Partially automating Robin Hanson and others (2023/01/11)

Brief description of the experiment

I asked a language model to replicate a few patterns of generating insight that humanity hasn’t really exploited much yet, such as:

  1. Variations on “if you never miss a plane, you’ve been spending too much time at the airport”.
  2. Variations on the Robin Hanson argument of “for common human behaviour X, its usual purported justification is Y, but it usually results in more Z than Y. If we cared about Y, we might do A instead”.
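The second pattern lends itself to a fill-in-the-blanks few-shot prompt. A minimal sketch; the template wording and function names are mine, not the post’s actual prompts:

```python
# Illustrative few-shot prompt construction for the Hanson-style pattern.
HANSON_TEMPLATE = (
    "For common human behaviour {X}, its usual purported justification is {Y}, "
    "but it usually results in more {Z} than {Y}. "
    "If we cared about {Y}, we might do {A} instead."
)

def fill_pattern(X: str, Y: str, Z: str, A: str) -> str:
    """Instantiate the pattern with concrete terms."""
    return HANSON_TEMPLATE.format(X=X, Y=Y, Z=Z, A=A)

def few_shot_prompt(examples: list[str]) -> str:
    """List a few worked examples, then leave a dangling bullet as a cue
    for the model to continue with a new instance of the pattern."""
    lines = ["Here are some instances of a pattern:", ""]
    lines += [f"- {example}" for example in examples]
    lines.append("-")  # the model completes this final bullet
    return "\n".join(lines)
```

A prompt built this way would be sent to the language model, which then generates novel instantiations to be filtered by hand.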

Forecasting Newsletter for November and December 2022 (2023/01/07)

Highlights

Me in 2022 (2023/01/03)

The above .gif shows a snapshot taken by my computer at 12:00 for a large number of days of the year. The below .gif shows a snapshot taken by my computer every five minutes on 22/02/2022:

A basic argument for AI risk (2022/12/23)

Rohin Shah writes (referenced here):

Currently, I’d estimate there are ~50 people in the world who could make a case for working on AI alignment to me that I’d think wasn’t clearly flawed. (I actually ran this experiment with ~20 people recently, 1 person succeeded. EDIT: I looked back and explicitly counted – I ran it with at least 19 people, and 2 succeeded: one gave an argument for “AI risk is non-trivially likely”, another gave an argument for “this is a speculative worry but worth investigating” which I wasn’t previously counting but does meet my criterion above.)

I thought this was surprising, so here is an attempt, time-capped at 45 mins.

Hacking on rose (2022/12/20)

The rose browser is a minimal browser for Linux machines. I’ve immensely enjoyed hacking on it this last week, so I thought I’d leave some notes.

Rose is written in C, and it’s based on Webkit and GTK. Webkit is the engine that drives Safari, and a fork of some previous open-source libraries, KHTML and KJS. GTK is a library for creating graphical interfaces. You can conveniently use the two together using WebKitGTK.

Image of this blogpost from the rose homepage
Pictured: An earlier version of this blogpost in the rose browser.

COVID-19 in rural Balochistan, Pakistan: Two interviews from May 2020 (2022/12/16)

The interviews were carried out to better inform a team of forecasters and superforecasters working with an organization which was aiming to develop better COVID-19 forecasts early in the pandemic, for countries and regions which didn’t have that capability themselves. The team and I came up with the questions, and the interviews themselves were carried out in Urdu by Quratulain Zainab and then translated back into English.

Back then, I think these interviews were fairly valuable in terms of giving more information to our team. Now, more than two years later, I’m getting around to sharing this post because it could help readers develop better models of the world, because it may have some relevance to some philosophical debates around altruism, and because of “draft amnesty day”.

Interview 1.

Goodhart’s law and aligning politics with human flourishing (2022/12/05)

Note: Written for someone I’ve been having political discussions with. For similarly introductory content, see A quick note on the value of donations.

The world’s major ideologies, like neoliberalism and progressivism, are stuck in a stalemate. They’re great at pointing out each other’s flaws, but neither side can make a compelling case for itself that the other can’t poke holes in. To understand why, I want to look to Goodhart’s law and presocratic Greek philosophy for insights. Ultimately, though, I think we need better political tools to better align governments and institutions with human flourishing.

The situation

List of past fraudsters similar to SBF (2022/11/28)

To inform my forecasting around FTX events, I looked at the Wikipedia list of fraudsters and selected those I subjectively found similar—you can see a spreadsheet with my selection here. For each of the similar fraudsters, I present some common basic details below together with some notes.

My main takeaway is that many salient aspects of FTX have precedents: the incestuous relationship between an exchange and a trading house (Bernie Madoff, Richard Whitney), a philosophical or philanthropic component (Enric Duran, Tom Petters, etc.), embroiling friends and family in the scheme (Charles Ponzi), or multi-billion-dollar fraud going undetected for years (Elizabeth Holmes, many others).

Fraud with a philosophical, philanthropic or religious component

Some data on the stock of EA™ funding (2022/11/20)

Overall Open Philanthropy funding

Open Philanthropy’s allocation of funding through time looks as follows:

Bar graph of OpenPhil allocation by year. Global health leads for most years. Catastrophic risks are usually second since 2017. Overall spend increases over time.

Forecasting Newsletter for October 2022 (2022/11/15)

Highlights
