AGI is a cipher character in the Silicon Valley romantasy
In 2005 we had Bella Swan; in 2025 we have AGI
Did you ever find yourself reading Twilight and wondering why all the other characters loved Bella Swan so much when she had almost no definable personality traits beyond being clumsy? Seriously, the girl shows up in the cafeteria and instantly everyone at school (supernatural creatures included) is just obsessed with her.
Bella is not just the face of millennial vampire mania; she’s also an archetypal cipher character.
Ciphers are book and film characters with vaguely defined personalities and motivations. They serve as blank slates: the author can project ideas onto them, and readers can project their own interpretations.
In the case of our tepid heroine Bella Swan, her emptiness helped make Twilight such a relatable and successful phenomenon. Because she was nothing, she could be everything. Any young reader could see themselves in the heroine of the saga. Yes, even me, as a gangly 12-year-old brown girl living in the dusty suburbs of Texas. From Bella Swan to Nick Carraway, ciphers have long been a critical component of storytelling.
In 2005 we had Bella; two decades later, we have AGI, the cipher character of Silicon Valley mythology.
There isn’t an agreed-upon definition of AGI. Instead, supporters often enumerate what AGI will be able to do: an extremely long list that includes everything from ending poverty and solving physics to fixing climate change and curing all diseases.
AGI is our north star, and its superpower is being everything and nothing, all at once.
Every technology has risks, externalities, and side effects. Whether a technology is considered good largely depends on how we choose to deploy it: what use cases we prioritize, and what limitations we put in place. Making these decisions is a complicated, thoughtful process that requires a deep understanding of both current and potential harms and benefits, as well as consultation with a wide range of perspectives.
Because AGI is so vaguely defined and so all-encompassing, it’s nearly impossible to make these critical decisions or reason about tradeoffs.
For example, we know that AI requires a lot of water, energy, land, and raw materials. To what extent are we okay with using these precious finite resources to power AI? Are we okay with AI consuming 5%, or 20%, or 50% of our energy supply? Are we okay with AI data centers being prioritized over homes and communities during times of high energy demand? Are we okay with water being diverted to chip manufacturers instead of farmers during a drought?
If the payoff is ending poverty, curing cancer, stopping climate change, and leading to prosperity for all, then maybe we are.
If the payoff is enabling advertisers to automate the generation of targeted ads, and power an infinite scroll of AI slop, then maybe not.
Our current north star is a shape-shifter that can conveniently be used to justify anything and everything. From astronomical spending to resource consumption to worker exploitation, everything becomes permissible in the name of AGI.
There is no universally correct ethical lens for making decisions around how we create technology, where we allocate resources, and what side effects we’re willing to tolerate. This process will forever be messy, complicated, and nuanced. But these conversations will never even happen if we don’t get clear on what it is we’re building and who we’re building it for.
As long as the goal of AI remains ill-defined, it will be used as a rhetorical device to rationalize the actions, ambitions, and desires of its creators.