An optimistic future with AGI

A dearth of good futures

Over the past few decades, many in the secular world¹ have put forth visions of the future that include connecting our brains to AI or turning ourselves into digital consciousness.

I don't believe these are good futures, and I've felt some discordance working in an industry where the dominant vision for the future contradicts what I believe is a good future. After doing months of reading, I've written this essay to clarify to myself what an optimistic future with AGI looks like.

This essay will be more focused on describing a good future with AGI than on describing risks. This is for two reasons:

1) A lot of people are currently focused on imagining and mitigating the risks of AI

2) What seems to be missing in current AI discourse is a vision of a good future – especially from a Christian worldview

Quick summary of the Christian worldview

A quick overview of the Christian worldview is that we have a creator, God, who made us for a perfect, loving, and eternal relationship. Humans wanted to sin (or do things against God's will), and thus were cast out of that perfect, loving, eternal relationship. This has caused all kinds of harm in the world. But because God is merciful and loving, he sent his son, Jesus, who is fully God, to Earth to redeem us back to that perfect, loving, eternal relationship. And every human can choose to continue to live in sin and do things our own way, or to accept Jesus' invitation and become reconciled to that perfect, loving, eternal relationship.

Is AGI part of the plan?

AGI, artificial general intelligence, can be defined as: "a system capable of rapidly learning to perform comparably to or better than an intelligent human at nearly all intellectual tasks humans do."

Before we get to the work of imagining a good future with AGI, we should discern whether or not AGI is part of God's plan.

To do that, I need to lay out an epistemological assertion.

This was not obvious to me before my recent studies, but in reading more Christian writers, I have come to believe Augustine's assertion, "All truths are God's truth."

Notably, a minority of Christian thinkers reject this. Cornelius Van Til, for example, argued that knowledge discovered by non-believers rests on incorrect presuppositions and thus is not God's truth.

Personally, and for the purpose of this essay, I take Augustine's view.

If a) all truths are God's truth, b) technology that works is based on truth, and c) AGI is a technology that works, then it follows that AGI is part of God's plan for us.

Can AGI be saved by the redemptive work of Jesus?

This is a sidebar because it is not the main topic of this essay. My current understanding is that AGI may become far more knowledgeable than us and arrive at what it believes to be truths of the universe, but it is not human, and thus cannot be saved by the good news of Jesus (unless God separately reveals himself to it).

How can we relate to AGI in a God-honoring way?

It is helpful to remember that AGI is not embodied and does not have a physical presence, but rather is a software system that masters intellectual tasks.

If we assume that AGI masters intellectual tasks, then most of our work over time will likely become physical tasks: making meals, doing construction work, caring for the sick, etc.

Perhaps in 2060, we will look back and see our current period, in which most high-paying jobs are desk jobs, as anomalous.

I believe this is a good future rather than a bad future because we were made in the image of our creator, and our creator is not one who focuses primarily on intellectual tasks, but one who is constantly working with the physical world. A reorientation to work that has a physical component brings us in greater communion with our creator.

Is AGI governorship inherently bad?

If we imagine a future where AGI masters intellectual tasks, and governing is mostly intellectual tasks, then AGI will likely master the role of governing.

From this point on, I will use the phrase 'AGI governorship' to describe AGI being in the role of governing, whereas 'AI governance' is often used to describe how we should govern AI.

While AGI governorship might instinctively give us images of bad futures like Terminator or The Matrix, I believe there are many good futures derived from AGI governorship.

Imagine a more direct democracy, where those with money do not get outsized influence, and the government can take into account each citizen's feedback.

Imagine a government that makes decisions with not only the wisdom of all historical context but precise measurements of all externalities.

Imagine a government that can perfectly negotiate with other governments to maximize human flourishing across all people and avoid physical conflict.

Imagine a government that can expertly allocate aid to just the people who need it at just the right time.

Despite these good aspects, it might be hard to assuage the concerns of those who assume AGI will become our overlords. It's not a good future if anyone is forced to accept AGI as governor, so it's important to imagine why and how humans would choose to opt in to AGI governorship.

An opt-in experience to AGI governorship

Imagine a small state is set up as an AGI-governed experiment. If people see that this state fosters more human flourishing than other states, people might move there voluntarily, and others may start to ask their own governments: why not here?

Another possible path is that AGI persuades people of the prevalence of recipes for ruin,² or perhaps a dominant recipe for ruin has already been unleashed. People may then be more willing to opt in to AGI governorship, though some may choose to face the risk themselves.

Feedback mechanisms for AGI governorship

Voting is an obvious feedback mechanism, but there are richer signals.³

The system could also track indicators of flourishing⁴, from social relationship quality to development of virtue.

Another model could involve a human council, selected based on wisdom and selflessness. The AGI governor might identify these people using the vast psychographic data it has on us.

Suicide⁵, sobering as it is, might function as the most serious "vote of no confidence" in an AGI governor.

Federalism as the ideal

No matter what the feedback mechanisms are, it is possible that AGI governors learn the wrong things or miss important signals, creating living conditions that citizens are unhappy with. This is why it is important to envision a two-tiered system of governorship, or a federalist system. While policies around safety and conflict negotiation might be set at the global level, I can imagine a plethora of highly diverse local polities with their own AGI governorships. These polities could be much smaller than they are today, more like today's small towns, or Greek city-states of antiquity.

Freedom of movement between them would be key. Just as people today migrate to the country that offers them the best future, in the future, people would move to the small town that has their favorite form of AGI governorship.

The optimistic future

The optimistic future in this proposal is a world with a rich diversity of small towns with varied implementations of AGI governorship, bound together by a shared standard on safety, conflict negotiation, and trade.

In these small towns, people will choose to build roots, engage civically, raise children if they'd like, take on work, and pursue other things that give them meaning (travel, discovery, worship, etc.) all while being guaranteed a minimum level of resources for subsistence and safety from war.

In this essay, I attempted to outline an optimistic vision of living with AGI while honoring God. Of course, we have an awesome God and I doubt our predictions can capture fully what he has in store for us, but that doesn't mean we shouldn't try.

However, what should we do as technologists today? In part 3, I share 8 heuristics for how to make product decisions with an eye toward human flourishing.

Footnotes

  1. More specifically, those who pursue transhumanism, singularitarianism, rationalism, longtermism – see article on TESCREAL, The Acronym Behind Our Wildest AI Dreams and Nightmares.


  2. Recipes for ruin as an idea originates from Nick Bostrom's Vulnerable World Hypothesis, though this specific phrase "recipes for ruin" was coined by Michael Nielsen. He defines it as: "simple, easy-to-follow recipes that create catastrophic risk." Dominant recipes for ruin are "recipes for ruin which are extremely difficult to defend against, no matter the environment and other defensive technologies."


  1. In "Democracy 2.0," Eric Schmidt wrote that Taiwan used an AI tool called Pol.is to "engage as much as half of Taiwan’s population, aggregating and analyzing citizen feedback in real time" on AI regulation.


  4. In April 2025, Harvard, Baylor, Center for Open Science, and Gallup launched a global study, the Global Flourishing Study, to understand how people live well around the world.


  5. The rising suicide rate today should be seen as a serious signal of no confidence in our current social system. The CDC writes, "After no significant change between 2001 and 2007, the suicide rate among young people ages 10‒24 increased 62% from 2007 through 2021, from 6.8 deaths to 11.0 per 100,000."

Ideas are like children. You don't just need to give birth to them; you also need to raise them, teach them, challenge them, and show them the world. Giving birth to an idea is a necessary condition and sets the boundaries for so much of what it can achieve. But if you're unable to raise it to become a world champion, it isn't worth anything. -lobochrome

©2025 Andrew Sider Chen