Neither ignorance nor retreat

Discordance

While working on an AI product a few years ago, I realized I had no theological or ethical framework for how to think about AI.

I asked my pastor about the theology of AI, and he said he wasn't sure.

Meanwhile, among the leading voices in the field of AI, there were warnings.

Sam Altman said in 2015, "I think that AI will probably, most likely, sort of lead to the end of the world. But in the meantime, there will be great companies created with serious machine learning."

Other major figures in the field, including Yoshua Bengio, Geoffrey Hinton, and Eliezer Yudkowsky, offered their own warnings.¹

For a few years, I continued working on AI-adjacent products, unsure of how I should think about AI. Meanwhile, I followed AI safety researchers and kept up with those who campaigned to slow down AI development (David Deutsch, James Miller, Andrew Critch, Greg Colbourn, and Trevor Bingham among them).

It was a discordant place to be.

The call to steer

Luckily, in April 2025, I found a way forward in essays by Michael Nielsen and Holden Karnofsky.

I first discovered Michael Nielsen through his essay "ASI existential risk: reconsidering alignment as a goal." From there, I read his much longer essay, "How to be a wise optimist about science and technology?"

The essay offers a refreshing framing: the choices available to technologists are not limited to acceleration or deceleration. Instead, one can think of technology as increasing the supply of capabilities, which in turn increases the capacity to do harm. To limit that harm, we must also increase the supply of safety.

Michael calls this concurrent acceleration of capabilities and of safety "coceleration."

Michael's use of "safety" as an abstract, quantifiable noun is an innovation. From this, he analyzes the supply of safety as the sum of market-supplied safety and non-market safety.

I can't do justice to his analysis here, but I recommend you read his essay.

Additionally, Michael underlines the need for more imagination about the futures we want to build toward: "Can we imagine truly wonderful futures, insanely great futures, futures worth fighting for?" If we can't imagine good futures, we won't get them.

As a designer, I was quite excited by this. After all, this was my skillset – to deeply understand human needs and to come up with a compelling vision for the future.

Fortuitously, the following week I stumbled upon Holden Karnofsky's² essay "Rowing, Steering, Anchoring, Equity, Mutiny," which reinforced the call to action in Michael's essay.

In this essay, Holden Karnofsky imagines civilization as a ship and offers five conceptual models for helping the world: rowing, steering, anchoring, equity, and mutiny.

  • To row is to help the ship go faster toward whatever destination it is already heading for – examples include working in tech, advancing science, and accelerating economic development.

  • To steer is to anticipate where the ship is heading and direct it toward a better destination – an example would be predicting and mitigating risks from climate change or AI.

  • To anchor is to hold the ship in place – an example would be a socially conservative attitude that seeks to return society to where it was a generation or two ago.

  • To focus on equity is to make conditions on the ship fairer – an example would be helping the underprivileged or the poor.

  • And to mutiny is to challenge the entire premise of the ship – an example would be a radical challenge to the current political and economic system.

Where before I could imagine only two options – to row or to mutiny – I now realized there was another possibility: to steer.

On reflection, while mindlessly throwing my skills behind AI products would be irresponsible in light of known risks, I now see that rejecting these technologies and my own role in steering them would be irresponsible in its own way.

In part 2, I outline an optimistic future with AGI.

Footnotes

  1. Yoshua Bengio writes, "I cannot rationally reject the catastrophic possibilities nor ignore the deep sense of empathy I feel for… the multitudes whose lives may be deeply affected or destroyed if we continue denying the risks of powerful technologies." (https://yoshuabengio.org/2023/08/12/personal-and-psychological-dimensions-of-ai-researchers-confronting-ai-catastrophic-risks/)

    Geoffrey Hinton says, "If we allow it to take over, it will be bad for all of us. We’re all in the same boat with respect to the existential threat. So we all ought to be able to cooperate on trying to stop it." (https://mitsloan.mit.edu/ideas-made-to-matter/why-neural-net-pioneer-geoffrey-hinton-sounding-alarm-ai)

    Eliezer Yudkowsky says, "If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter." (https://www.datacenterdynamics.com/en/news/be-willing-to-destroy-a-rogue-data-center-by-airstrike-leading-ai-alignment-researcher-pens-time-piece-calling-for-ban-on-large-gpu-clusters/)


  2. Holden Karnofsky co-founded GiveWell and Open Philanthropy, and now works at Anthropic.

Ideas are like children. You don't just need to give birth to them; you also need to raise them, teach them, challenge them, and show them the world. Giving birth to an idea is a necessary condition and sets the boundaries for so much of what it can achieve. But if you're unable to raise it to become a world champion, it isn't worth anything. -lobochrome

©2025 Andrew Sider Chen
