There will always be a Voigt-Kampff test
In the film Blade Runner, the Voigt-Kampff test is a fictional procedure used to distinguish androids from humans. In the normal course of events, humans and androids are pretty much indistinguishable, except when talking about very specific kinds of emotions and memories.
Similarly, as language models and image-generating neural networks continue to grow in size and capability, it seems plausible that there will still be ways of identifying them as such.
For example, for image models:
- They may have watermarks or steganographic messages which could be used to detect them
- They may have a bias towards particular types of prettiness or perfection
- They may not render certain complicated details, like hands, teeth, letters, etc.
- They may struggle with compositionality, light, consistency, etc.
And for language models:
- They may not have good models of things that humans don’t often talk about, like intimate fears, shame, or the specific details of sexual attraction.
- They may not be up to date with the latest news, if they are only trained on events up to a certain point in the past.
- They may have distinctly bland speech.
- They may have catchphrases or favour certain ways of expressing themselves.
- They may struggle to produce original thoughts and ideas.
- They may have idiosyncratic challenges, like not being able to decode ASCII art, not getting certain jokes, etc.
From left to right: an original historical image, an image of myself, and a combination of the two, produced by using DALLE-2 to modify the jacket so that it also has a white shirt. This is a small-scale example of how the idiosyncrasies that allow us to unmask DALLE-2 do not matter to its ability to produce value: I like the third image a lot, and I am using it on my social media profiles.
But much like in the original Blade Runner movie, these details may not really matter for their economic impact, and the fact that a way of identifying them exists at all will matter even less. Similarly, the fact that DALLE-2 and other image models have difficulty correctly rendering teeth, or objects in relation to each other, doesn't really reflect their current ability, or future potential, to replace many thousands of artists and generally shape the demand curves for art.
I was thinking about this because I was recently forecasting on a question about "AGI", where "AGI" was defined as a system that "is capable of passing adversarial Turing test against a top-5% human, who has access to experts." But such a system might take a really long time to be developed, even once AI systems are having a great economic impact, because those systems might still have their own idiosyncrasies.
Ultimately, this makes me think that nitpicks and gotchas about ways to differentiate humans from machines just aren't all that relevant to predicting their future impact. What I care about is closer to the real-world impact of these machines.
That’s all for now.