Measure is unceasing

RSS Feed

Infinite Ethics 101: Stochastic and Statewise Dominance as a Backup Decision Theory when Expected Values Fail (2022/05/20)

In Exceeding expectations: stochastic dominance as a general decision theory, Christian Tarsney presents stochastic dominance (to be defined) as a total replacement for expected value as a decision theory. He argues that one decision is rationally better than another only when it is stochastically dominant. For this, he needs to say that the choiceworthiness of a decision (how rational it is) is undefined when neither decision stochastically dominates the other.

I think this is absurd, and perhaps determined by academic incentives to produce more eye-popping claims rather than more restricted incremental improvements. Still, I thought that the paper made some good points about us still being able to make decisions even when expected values stop being informative. It was also my introduction to extending rational decision-making to infinite cases, and a great introduction at that. Below, I outline my rudimentary understanding of these topics.
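To make "stochastic dominance" concrete: lottery A first-order stochastically dominates lottery B when, at every threshold, A gives at least as much probability to higher outcomes, and strictly more at some threshold. A minimal sketch of this check for sample-based distributions (my own illustration, not code from the paper):

```python
import numpy as np

def stochastically_dominates(a, b):
    """Check whether the empirical distribution of samples `a`
    first-order stochastically dominates that of `b`:
    CDF_a(t) <= CDF_b(t) everywhere, with strict inequality somewhere."""
    thresholds = np.union1d(a, b)
    cdf_a = np.array([np.mean(a <= t) for t in thresholds])
    cdf_b = np.array([np.mean(b <= t) for t in thresholds])
    return bool(np.all(cdf_a <= cdf_b) and np.any(cdf_a < cdf_b))

# A lottery that shifts every outcome upward dominates the original:
b = np.array([0.0, 1.0, 2.0])
a = b + 1.0
print(stochastically_dominates(a, b))  # True
print(stochastically_dominates(b, a))  # False
```

Note that many pairs of lotteries fail both checks, which is exactly the region where Tarsney's proposal leaves choiceworthiness undefined.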

Forecasting Newsletter: April 2022 (2022/05/10)

Highlights

EA Forum Lowdown: April 2022 (2022/05/01)

Imagine a tabloid counterpart to the EA Forum digest’s weather report. This publication gives an opinionated curation of EA Forum posts published during April 2022, according to my whim. I probably have a bias towards forecasting, longtermism, evaluations, wit, and takedowns.

You can sign up for this newsletter on substack.

Simple Squiggle (2022/04/17)

Linkpost for github.com/quantified-uncertainty/simple-squiggle

“Simple Squiggle” is a simple parser that manipulates multiplications and divisions between numbers and lognormal distributions. It uses an extremely restricted subset of Squiggle’s syntax, and unlike it, the underlying code is not easily extensible.
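The arithmetic that makes such a restricted parser workable is that lognormals are closed under multiplication and division: the mu parameters add (or subtract) and the sigma parameters combine in quadrature, and multiplying by a positive constant k just shifts mu by ln(k). A sketch of those identities (the standard closed forms for independent lognormals; I am not showing Simple Squiggle's actual internals):

```python
import math

def mul_lognormals(p1, p2):
    """Product of independent LN(mu1, s1) * LN(mu2, s2)
    is LN(mu1 + mu2, sqrt(s1^2 + s2^2))."""
    (mu1, s1), (mu2, s2) = p1, p2
    return (mu1 + mu2, math.hypot(s1, s2))

def div_lognormals(p1, p2):
    """Quotient: LN(mu1 - mu2, sqrt(s1^2 + s2^2))."""
    (mu1, s1), (mu2, s2) = p1, p2
    return (mu1 - mu2, math.hypot(s1, s2))

def scale_lognormal(k, p):
    """Multiplying by a positive constant k shifts mu by ln(k)."""
    mu, s = p
    return (mu + math.log(k), s)

print(mul_lognormals((0.0, 1.0), (0.0, 1.0)))  # (0.0, 1.4142135623730951)
```

Because every operation stays within the lognormal family, the parser can propagate exact parameters symbolically instead of sampling.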

Better scoring rules (2022/04/16)

Linkpost for github.com/SamotsvetyForecasting/optimal-scoring

This git repository outlines three scoring rules that I believe might serve current forecasting platforms better than current alternatives. The motivation behind it is my frustration with scoring rules as used in current forecasting platforms, like Metaculus, Good Judgment Open, Manifold Markets, INFER, and others. In Sempere and Lawsen, we outlined and categorized how current scoring rules go wrong, and I think that the three new scoring rules I propose avoid the pitfalls outlined in that paper. In particular, these new scoring rules incentivize collaboration.

Open Philanthropy’s allocation by cause area (2022/04/07)

Open Philanthropy’s grants so far, roughly:

This only includes the top 8 areas. “Other areas” refers to grants tagged “Other areas” in OpenPhil’s database, so there are around $47M in known donations missing from that graph. There is also one (I presume fairly large) donation amount missing from OpenPhil’s database, to Impossible Foods.

A quick note on the value of donations (2022/04/06)

The value you get from money is higher the less money you have. So if you live on $50k a year, $100 is worth much less to you than if you earn $500 a year.
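One common way to formalize this diminishing marginal value is logarithmic utility of money (my assumption here; the post doesn't commit to a specific utility function). Under log utility, the same $100 buys roughly 90 times more utility at a $500/year income than at $50k/year:

```python
import math

def marginal_value(amount, income):
    """Utility gained from an extra `amount`, assuming u(w) = ln(w)."""
    return math.log(income + amount) - math.log(income)

rich = marginal_value(100, 50_000)  # ~0.002 utils
poor = marginal_value(100, 500)     # ~0.182 utils
print(poor / rich)                  # roughly 90x
```

This is the standard justification for why a dollar donated toward very poor recipients can do far more good than the same dollar spent domestically.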

If you eyeball a map of GDP per capita, then Europe/the US is living on around $50k/year, Latin America is living on $10k a year, and central Africa is living on $1k/year:

Forecasting Newsletter: March 2022 (2022/04/05)

Highlights

Forecasting Newsletter: April 2222 (2022/04/01)

Highlights

Valuing research works by eliciting comparisons from EA researchers (2022/03/17)

tl;dr: 6 EA researchers each spent ~1-2 hours estimating the value (relative counterfactual value) of 15 very different research documents. The results varied widely between researchers, and even within similar comparisons posed differently to the same researcher. This variance suggests that EAs might have relatively undeveloped assessments of the value of different projects.

Executive Summary

Six EA researchers I hold in high regard—Fin Moorhouse, Gavin Leech, Jaime Sevilla, Linch Zhang, Misha Yagudin, and Ozzie Gooen—each spent 1-2 hours rating the value of different pieces of research. They did this rating using a utility function extractor, an app that presents the user with pairwise comparisons and aggregates these comparisons to produce a utility function.
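One natural way such an extractor can turn pairwise ratio judgments ("A is k times as valuable as B") into a utility function is least squares in log space, where each comparison becomes one linear constraint. A sketch of that aggregation (an illustration of the general technique; the actual app's method may differ):

```python
import numpy as np

def utilities_from_comparisons(n_items, comparisons):
    """Fit utilities from ratio judgments (i, j, ratio) meaning
    'item i is `ratio` times as valuable as item j'.
    Taking logs gives linear equations: log u_i - log u_j = log ratio."""
    rows, rhs = [], []
    for i, j, ratio in comparisons:
        row = np.zeros(n_items)
        row[i], row[j] = 1.0, -1.0
        rows.append(row)
        rhs.append(np.log(ratio))
    rows.append(np.ones(n_items) / n_items)  # pin the overall scale
    rhs.append(0.0)
    log_u, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return np.exp(log_u)

# Perfectly consistent judgments are recovered exactly:
u = utilities_from_comparisons(3, [(0, 1, 2.0), (1, 2, 5.0), (0, 2, 10.0)])
print(u[0] / u[1])  # ~2.0
```

With inconsistent judgments (the interesting case, given the variance found in this post), least squares returns the utility function that best compromises between the conflicting comparisons.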

Samotsvety Nuclear Risk Forecasts — March 2022 (2022/03/10)

Thanks to Misha Yagudin, Eli Lifland, Jonathan Mann, Juan Cambeiro, Gregory Lewis, @belikewater, and Daniel Filan for forecasts. Thanks to Jacob Hilton for writing up an earlier analysis from which we drew heavily. Thanks to Clay Graubard for sanity checking and to Daniel Filan for independent analysis. This document was written in collaboration with Eli and Misha, and we thank those who commented on an earlier version.

Overview

In light of the war in Ukraine and fears of nuclear escalation[1], we turned to forecasting to assess whether individuals and organizations should leave major cities. We aggregated the forecasts of 8 excellent forecasters for the question What is the risk of death in the next month due to a nuclear explosion in London? Our aggregate answer is 24 micromorts (7 to 61) when excluding the most extreme forecast on either side[2]. A micromort is defined as a 1 in a million chance of death. Chiefly, the baseline risk is low, and we think that escalation to targeting civilian populations is even more unlikely.

Forecasting Newsletter: February 2022 (2022/03/05)

Highlights

Five steps for quantifying speculative interventions (2022/02/18)

Summary

Currently, we can’t compare the impact of speculative interventions in a principled way. When making a decision about where to work or donate, longtermists or risk-neutral neartermists may have to choose an organization based on status, network effects, or expert opinion. This is, obviously, not ideal.

I propose a simple solution, if not an easy one. First, estimate the impact of an intervention in narrow units (such as micro-covids, or estimates of research quality). Then, convert those narrow units to more and more general units (such as QALYs, or percentage reduction in x-risk).
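The two-step proposal above amounts to composing conversion factors along a chain of units. A toy sketch of that chaining, with every number invented purely for illustration:

```python
# Conversion factors between adjacent units in the chain.
# All values below are made up for illustration, not real estimates.
conversions = {
    ("microcovid", "covid_case"): 1e-6,   # by definition of a microcovid
    ("covid_case", "QALY_lost"): 0.02,    # hypothetical QALY burden per case
}

def convert(value, path):
    """Multiply `value` through the conversion factors along `path`."""
    for a, b in zip(path, path[1:]):
        value *= conversions[(a, b)]
    return value

# 10,000 microcovids averted, expressed in QALYs under the made-up factors:
print(convert(10_000, ["microcovid", "covid_case", "QALY_lost"]))  # 0.0002
```

The point of the proposal is that each narrow-to-general conversion factor can be estimated (and debated) separately, instead of forcing a single end-to-end guess.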

We are giving $10k as forecasting micro-grants (2022/02/08)

(Cross-posted from the Forecasting Newsletter.)

After the apparent success of ACX grants (a), we received $10k from an anonymous donor to give out as micro-grants through Nuño’s Forecasting newsletter.

Some examples of projects we’d be excited to fund might be:

Splitting the timeline as an extinction risk intervention (2022/02/06)

Edit: No longer as excited. Per this comment:

I also think it is astronomically unlikely that a world splitting exercise like this would make the difference between ‘at least one branch survives’ and ‘no branches survive’. The reason is just that there are so, so many branches

and per this comment:

Forecasting Newsletter: January 2022 (2022/02/03)

Highlights