AI: Improved Means, Unimproved Ends?

Social media and smartphones captured our attention. How can we ensure AI doesn't capture our hearts but instead leads to hopeful futures for our kids?

Lately, in wrestling with what artificial intelligence means for my generation and, even more importantly, my children’s generation (apparently called Gen Alpha), I have been thinking a lot about the generation that came to grips with the advent of humanity’s ability to split atoms.

In his 1964 acceptance speech for the Nobel Peace Prize, Martin Luther King Jr. reflected on the awesome and awful technological capabilities humanity developed during the age of Turing and Oppenheimer. It was an era marked by the first modern computers and the first rockets into space, but also the advent of the nuclear bomb:

"Every man lives in two realms, the internal and the external. The internal is that realm of spiritual ends expressed in art, literature, morals, and religion. The external is that complex of devices, techniques, mechanisms, and instrumentalities by means of which we live.

Our problem today is that we have allowed the internal to become lost in the external. We have allowed the means by which we live to outdistance the ends for which we live. So much of modern life can be summarized in that arresting dictum of the poet Thoreau: 'Improved means to an unimproved end'."

Our generation seems to be struggling with the same fundamental challenge. Our technologies and techniques (our means) are becoming more sophisticated, but we all seem lost when it comes to our shared sense of spiritual purpose (our ends).

A Meeting of Two Realms

Over the last few years, I’ve had the awe-inspiring, if daunting, opportunity to reflect with a community of thinkers, researchers, and technologists on how we might discover a new shared horizon for humanity in the age of AI.

It feels like an impossible task in an age of deep distrust, polarization, and growing conflict across every possible division of humankind. Weirdly though, even in this context, it feels to me on a deep intuitive level that this is the work we must do together in our societies if we want the next generation to have hope again. Hope is one of those spiritual ends that humans can’t seem to flourish without.

Do we really want another generation to grow up without hope?

On May 23rd and 24th, this group of thinkers and technologists had a chance to meet together in one of the most beautiful and inspiring locations to continue this metaphysical quest.

We met at the Pontifical Academy of Sciences, a Renaissance villa nestled among the Vatican gardens, where past Popes have met scientific greats like Albert Einstein and Stephen Hawking.

Within this historic location, Cardinal Peter Turkson hosted a forum of scientists, philosophers, technologists, artists, businesspeople, and faith leaders to explore the question:

How might we design artificial intelligence to foster human flourishing?

In this Forum, King’s two realms, the internal and the external, converged as we discussed how to use the means of AI to achieve higher ends for humanity.

The meeting was organized by two leading AI experts in the Catholic Church who I have had the pleasure of working with over these past years: Father Philip Larrey, a professor of philosophy at Boston College and Chairman of the Humanity 2.0 Foundation, and Matthew Sanders, CEO of the Humanity 2.0 Foundation and the creator of Magisterium.Ai, an artificial intelligence application designed to assist with the understanding and dissemination of Catholic Church teachings.

I was asked by Fr. Larrey and Matthew Sanders to speak about the human flourishing research we have been working on with the Harvard Human Flourishing Program, the Center for Positive Psychology at the University of Pennsylvania and the Pontifical Gregorian University.

Issues and possibilities

To kick off the Forum, Fr. Larrey, author of Artificial Humanity, provided his latest insights on the perils and opportunities of AI.

The potential perils could include massive job losses before new jobs are created, the inability to control AI once it has become more intelligent than humans, deep fakes further undermining truth and social trust (even the Pope has been a victim of AI deepfakes), and other existential risks for humanity as AI capabilities rapidly evolve.

On the more hopeful side, Larrey noted that AI presents incredible opportunities for humanity as well: personal AI assistants that turbocharge our creative capabilities, new AI health innovations like DeepMind’s AlphaFold that are spurring incredible advances in molecular biology, new abilities to explore the universe, technologies that help us better understand and relate with other forms of life on planet Earth, and the ability to have the collective intelligence of all humanity, including past humans, at our fingertips.

I cannot even imagine the world that my young children will grow up into. It is mind-boggling.

Will it capture our hearts?

Father Michael Baggot, a Professor of Bioethics and Theology at two of the Pontifical Universities, shared that while the creation of AI represents something beautiful about humanity’s reflection of God as a creator, the current development and commercialization of AI are moving in a troubling direction. Father Baggot asked,

"Social media and smartphones have captured our attention; now will AI companions capture our hearts?"

In the room were executives from leading technology firms developing AI, including Keyun Ruan, Chief Information Security Officer at Alphabet, the parent company of Google; Janet Adams, Chief Operating Officer of SingularityNET, a billion-dollar AI and robotics company; and Renate Nyborg, CEO and founder of Meeno.com, an AI tool that provides relationship advice.

These executives debated whether we were facing existential risk or building utopia with the potential advent of artificial general intelligence (AGI). OpenAI, the creator of ChatGPT, defines AGI as “highly autonomous systems that outperform humans at most economically valuable work.”

I’ve written often about the precarious economic situations many people face due to technological advances and growing wealth concentration, so we should be having more public conversations about the future of work and livelihoods.

These questions and others about AGI risks are existentially important for humanity. But during the debate, my mind continued to circle Fr. Baggot’s question: “Will our hearts be captured by AI?”

For those who are skeptical and asking “How is this even possible?”, we can look at the fastest-growing part of the AI-to-consumer market. On the popular platform Character.AI, 20 million people are interacting with AI companions for an average of two hours a day and engaging with their chatbots 298 times per month. This platform’s worldwide market share (15.8%) is second only to OpenAI’s ChatGPT, which has 60.2% of the market.

OpenAI seems to be aware of the appeal of AI companions and made their recent releases of ChatGPT significantly more anthropomorphic. Sam Altman and OpenAI stepped into controversy by using a voice for their chatbot named Sky that sounded eerily like the voice of Scarlett Johansson.

The actor played an AI companion that seduces Joaquin Phoenix’s lonely character in the 2013 movie Her. Just days before the Forum, OpenAI’s intentions were made clear with the release of GPT-4o, with Altman tweeting one word: “her.”

The backstory: Altman had originally approached Johansson to use her voice, but she refused, protecting her rights as an artist. Now her public complaint has made OpenAI backtrack on their strategy.

This resistance from Johansson will not stop OpenAI from making future versions of ChatGPT increasingly sophisticated at building intimate relationships with their users.

Why does Father Baggot care about our hearts being captured by AI systems and companies that run them? Are we talking about our literal, physical hearts or something else?

The heart and the depths of the human soul

The heart is an ancient metaphor for that part of me and any other human being that is the source of all capacity for free action, our receptivity to inspiration, and our desire to love and be loved. It is that part of us that longs for a more just, beautiful, and peaceful world. It is the fundamental core of our will where our thoughts, intentions, and decisions originate.

The poets, philosophers, and mystics throughout the ages have called on humans to protect and nurture that mysterious part of us that cannot be measured empirically or fully explained through rational means.

Philosopher Max Scheler wrote that the heart is the source of our capacity for “value-perception,” the ability to perceive what is virtuous in ourselves, other people, and in the world. At the deepest levels, the heart is a poetic way of speaking about the spiritual capability that can contemplate the value of our existence and most importantly the value of the Sacred.

To thinkers like Scheler, the heart is more fundamental than even the rational part of our mind. He builds on the ideas of the French polymath Blaise Pascal who wrote:

“The heart has its reasons which reason knows nothing of... We know the truth not only by the reason, but by the heart.”

Personally, my heart is much more important in my relationship to others and to the Sacred dimension of life than my rational mind. It’s that part of me that connects with the beauty of nature, art, and architecture. As I reflected on Father Baggot’s words, I heard birdsong filtering through the windows of the Academy, reminding me of our place in a garden.

Maybe, if we are successful in building a shared understanding of the importance of the unique capabilities of our hearts, we can fulfill MLK’s call and design the means of our technology to serve better spiritual ends for humanity. We can make the outer realm serve our inner realm.

Ron Ivey

Ron is a writer, researcher, and strategic advisor to business, governments, and philanthropies with a focus on social trust, belonging, and human flourishing.

Ron is currently the Managing Director of the Humanity 2.0 Institute and a Research Fellow at the Harvard Human Flourishing Program where he co-leads the Trust and Belonging Initiative.

Ron also currently serves as a Fellow at the Centre for Public Impact, a global think tank seeking to re-imagine government and restore relationships between governments and those they govern.
