2018-2019 Long-Term Future Fund Grantees: How did they do?
Introduction
At the suggestion of Ozzie Gooen, I looked at publicly available information around past LTF grantees. We’ve been investigating the potential to have more evaluations of EA projects, and the LTFF grantees seemed to represent some of the best examples, as they passed a fairly high bar and were cleanly delimited.
For this project, I personally investigated each proposal without consulting many others. This work was clearly limited by not reaching out to others directly, but requesting external involvement would have increased costs significantly. We were also partly interested in seeing how much could be figured out under this constraint.
Background
During the LTF Fund's first two rounds (round 1, round 2), under the leadership of Nick Beckstead, grants went mostly to established organizations and didn't have informative write-ups.
The next few rounds, under the leadership of Habryka et al., have more informative write-ups and a higher volume of grants, which are generally more speculative. At the time, some of the grants were scathingly criticised in the comments. The LTFF at this point feels like a different, more active beast than it was under Nick Beckstead. I evaluated its grants from the November 2018 and April 2019 rounds, meaning that the grantees have had at least two years to produce some legible output. Commenters pointed out that the 2018 LTFF is pretty different from the 2021 LTFF, so it's not clear how much to generalize from the projects reviewed in this post.
Despite the trend towards longer write-ups, the reasoning behind some of these grants remains opaque to me; in some cases, the grantmakers may have had more information than I do and chose not to publish it.
Summary
By outcome
| Category | Number of grants | Funding ($) |
|---|---|---|
| More successful than expected | 6 (26%) | $178,500 (22%) |
| As successful as expected | 5 (22%) | $147,250 (18%) |
| Not as successful as hoped for | 3 (13%) | $80,000 (10%) |
| Not successful | 3 (13%) | $110,000 (13%) |
| Very little information | 6 (26%) | $287,900 (36%) |
| Total | 23 | $803,650 |
Not included in the totals or in the percentages are 5 grants, worth a total of $195,000, which I didn't evaluate because of a perceived conflict of interest.
Method
I conducted a brief Google, LessWrong, and EA Forum search for each grantee, and attempted to draw conclusions from it. However, quite a large fraction of grantees don't have much of an internet presence, so it is difficult to tell whether nothing turns up in a quick search because nothing was produced, or because nothing was posted online. One could spend a lot of time on an evaluation like this; I decided not to, and instead aimed for an "80% of the value in 20% of the time"-type evaluation.
Grantee evaluation examples
A private version of this document goes through the grantees one by one and outlines what public or semi-public information there is about each grant, what my assessment of the grant's success is, and why. I did not evaluate grants where I had personal information that people had given me in a context where the possibility of a future evaluation wasn't at play. I shared the document with some current LTFF fund members, and some reported finding it at least somewhat useful.
However, I don’t intend to make that version public, because I imagine that some people will perceive evaluations as unwelcome, unfair, stressful, an infringement of their desire to be left alone, etc. Researchers who didn’t produce an output despite getting a grant might feel bad about it, and a public negative review might make them feel worse, or have other people treat them poorly. This seems undesirable because I imagine that most grantees were taking risky bets with a high expected value, even if they failed in the end, as opposed to being malicious in some way. Additionally, my evaluations are fairly speculative, and a wrong evaluation might be disproportionately harmful to the person the mistake is about.
Nonetheless, it’s possible that sharing this publicly would produce positive externalities (e.g., the broader EA community gets better models of the LTFF in 2018/2019). I’ve created a question on the EA Forum here to ask about people’s perspectives on this tradeoff.
Still, below are two examples of the type of evaluation I did. The first states fairly uncontroversial, easy-to-find facts about a fairly public figure. The second is about a private individual whom I explicitly asked for permission, and it includes some judgment calls.
Robert Miles ($39,000)
Short description: Producing video content on AI alignment
Publicly available information:
- At the time of the grant, "the videos on his Youtube channel picked up an average of ~20k views." Viewership has increased considerably since: the 9 videos made in the two years since the grant have an average of 107k views each. Assuming a grant a year, this comes to roughly 7 cents per view, which seems very, very cheap.
- Robert Miles gets ~$12k/year on Patreon, and I imagine some ad revenue, so the Shapley value of the LTFF's contribution is somewhat lower than one might naïvely imagine. I don't think this is much of a concern, though: something like 30 cents per view would still be pretty cheap (a rough sketch of this arithmetic follows after this list).
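As a rough illustration of the arithmetic above, here is a minimal back-of-the-envelope sketch. The grant size and view counts come from the figures above; the annualization and the `ltff_credit_share` variable are loose assumptions of mine for illustration, not figures from the fund or an actual Shapley computation.

```python
# Back-of-the-envelope cost-per-view for the Robert Miles grant.
# Inputs are rough public figures from the post; the credit share is
# a hypothetical illustration, not a computed Shapley value.

grant_per_year = 39_000        # USD; assumes the $39k grant covers one year of output
videos_per_year = 9 / 2        # 9 videos over the two years since the grant
avg_views_per_video = 107_000  # average views across those 9 videos

views_per_year = videos_per_year * avg_views_per_video
naive_cost_per_view = grant_per_year / views_per_year
print(f"naive: ~${naive_cost_per_view:.2f}/view")  # ~$0.08/view, i.e. a few cents

# If other funders (~$12k/year on Patreon, plus some ad revenue) share the
# credit, the LTFF's effective price per view rises accordingly. A purely
# hypothetical 25% credit share gives roughly the 30 cents/view mentioned above.
ltff_credit_share = 0.25
print(f"adjusted: ~${naive_cost_per_view / ltff_credit_share:.2f}/view")  # ~$0.32/view
```

Either way, the point stands: even under fairly pessimistic credit-sharing assumptions, the price per view stays well below a dollar.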
Shape of the update:
- Video impressions increased a lot, but the deciding factor is probably whether watching Robert Miles's videos produces researchers or donors, or leads to some broader societal change. This is still uncertain, and difficult to measure.
- Overall result: More successful than expected.
Vyacheslav Matyuhin ($50,000)
Short description: An offline community hub for rationalists and EAs in Moscow. “There’s a gap between the “monthly meetup” EA communities and the larger (and significantly more productive/important) communities. That gap is hard to close for many reasons.”
Publicly available information:
- Rationalist Community Hub in Moscow: 3 Years Retrospective
- LW user: berekuk
- I reached out to Vyacheslav, and he answered here in some detail.
- Kocherga continues to exist, but its webpage is in Russian.
Shape of the update:
- This is difficult to evaluate for a non-Russian speaker. Per Vyacheslav Matyuhin's self-report, the grant kept Kocherga alive until 2020; it then attained financial independence and continued its activities online throughout 2020/2021 ("Kocherga is still going — we moved everything to Zoom a year ago, we have 8-12 events and ~100-120 registrations per week, and we did some paid rationality trainings online too.")
- The main organizer was feeling somewhat burnt out, but is now feeling more optimistic about Kocherga’s impact from an EA perspective.
- Overall, the case for Kocherga’s impact seems pretty similar to the one three years ago.
- Overall result: As successful as expected.
Observations
I don’t really have any grand conclusions, so here are some atomic observations instead:
- There is a part of me that finds the outcome (a 30 to 40% success rate) intuitively disappointing. However, it may be that the LTFF was taking the right amount of risk under a hits-based-giving approach.
- Perhaps as expected, grantees with past experience doing the thing they applied to do seem to have done significantly better. This suggests a model where people pursue some direction for free, show they can do it, and only later get paid for it. However, that model has tradeoffs.
- I find it surprising that many grantees have little information about them on the internet. Initially, I was inclined to believe that their projects had thus not been successful, and to update negatively on not finding evidence of success. However, after following up on some of the grants, it seems that some produced illegible, unpublished, or merely hard-to-find outputs, as opposed to no outputs.
- I was surprised that a similar project hadn't already been carried out (for instance, by a large donor to the LTFF, or by the LTFF itself).
- I can imagine setups where this kind of evaluation is streamlined and automated, for instance using the reviews that grantees send to CEA.
- I know that I am missing inside information, which makes me more uncertain about my conclusions.