Why I'm Writing About AI for Human Flourishing
Essay 1 of 50
I recently caught myself deep in a three-hour loop with an AI assistant—refining a strategy, pressure-testing assumptions, iterating on language—and realized I couldn’t clearly trace which ideas were mine, which were the machine’s, or even which of us was really driving the thinking. I consider myself deeply independent-minded. That moment unsettled me.
I’m not alone. Seventy-two percent of U.S. teens use AI chatbots, and a third prefer talking to AI over people for serious or personal conversations.[1] Parents are wrestling with whether their kids should use ChatGPT for homework. Meanwhile, truck driver is the most common job in 29 U.S. states[2]—yet autonomous vehicles are already piloting long-haul routes. Industries with high AI exposure now show 3x higher revenue growth per worker.[3] McKinsey forecasts $3–5 trillion in agentic commerce by 2030.
The wave is building fast. In 2024, a few of us pitched a vision at AT&T HQ: what if we embedded AI directly into the network so your personal AI could talk to business AIs or other personal AIs on your behalf, regardless of the end-user device? Some in the room were skeptical. By 2025, AT&T announced it was testing a version of the Digital Receptionist with customers. Now in 2026, OpenClaw agents—the open-source project whose lobster-themed branding belies its significance—are messaging and voice-calling humans and other AIs across the open web. It’s hard to think exponentially until exponential change is staring you in the face.
This revolution is horizontal. It touches healthcare, finance, education, entertainment, security—every domain of knowledge work and, increasingly, physical work. As Dario Amodei, CEO of Anthropic, warned in his recent essay “The Adolescence of Technology”: “Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it.”[4] He predicts AI could displace half of all entry-level white-collar jobs in the next one to five years—even as it accelerates economic growth.
Some are attempting to opt out: “AI is not good for humanity nor is it good for me.” They could be right. I hope many of us in tech will work hard to make them wrong. The question isn’t whether AI will reshape education, commerce, work, and daily life—it’s already happening. The question is: how do we harness this extraordinary power so more people flourish everywhere?
That’s what this newsletter explores—not as an academic exercise, but as an active investigation into the ideas, products, services, and actions that serve human flourishing.
What I Bring to This
I spent the last several years as a product executive at AT&T, where I helped launch software-defined and AI-powered services on the network. Before that, I built products and teams in the startup world—including at SAFR, where I helped deploy facial recognition technology and developed guiding principles for when and how to deploy AI that could be used for both good and harm. I got my real start in product at Amazon, enabling the creator economy through Kindle Direct Publishing and Kindle Enterprise Publishing—watching firsthand how technology can democratize opportunity at scale.
Before tech, I advised corporate, government, and non-profit leaders across 15 countries as a Monitor consultant. That experience gave me a conviction I still carry: markets are powerful engines for innovation, but they fail—sometimes catastrophically—and context matters. How can we tap into market, political, and philanthropic forces to unlock greater flourishing? Through AGResults, I saw how thoughtful incentive design can mobilize market forces toward outcomes they wouldn’t naturally produce: vaccines for livestock in developing countries, agricultural innovations for smallholder farmers, solutions that pure profit motive ignores. That shapes how I think about AI: we need entrepreneurial energy, but also clever mechanisms that bend market forces toward human outcomes.
Where I Stand
AI’s upside won’t be evenly distributed. High-income countries account for 87 percent of notable AI models, 86 percent of AI start-ups, and 91 percent of AI venture capital.[5] Low-income countries represent less than 1 percent of global AI activity. Within the United States, economists warn about “K-shaped” outcomes—where AI accelerates gains for those already thriving while leaving others further behind. Oxford Economics, asked if AI will reinforce this K-shaped economy for decades, answered: “Absolutely.”[6]
I’m a father watching my own children navigate a world where AI is ambient, and I support organizations like CodeBrave bringing tech education to disadvantaged kids in Lebanon. The gap between those with AI access and those without will define opportunity for the next generation.
And here’s where I plant a flag: I believe humans have inestimable worth—not because of what we produce or our intelligence, but because we’re made in the image of God. In 2013, Larry Page reportedly accused Elon Musk of being “speciesist” for prioritizing human welfare over potential silicon-based digital life.[7] Count me in that camp. Humans—every single one—possess inherent dignity that transcends utility because we’re ultimately ‘subsovereign’. That conviction grounds everything I write about AI for human flourishing. We’re not optimizing for intelligence in general. We’re building for human beings and better human outcomes.
What This Newsletter Will Explore
AI is the ultimate dual-use technology. We can’t build for good without managing against the bad. So I want this newsletter to explore how we bend AI toward human flourishing through profound ideas and the teams, products, services, laws, standards, and movements that bring them to fruition. Here’s what I’m interested in:
Vision. Future scenarios—both utopian and dystopian—that help us understand what we’re building toward and what we’re trying to avoid. Not visions disconnected from technical or market realities, but ones that negotiate paths to optimize impact. What does it look like to get this right? What does catastrophe look like?
Strategy. I was taught strategy as a cascading set of interrelated choices about where to play, how to win, what capabilities to organize, and where to get started. So, how shall we resolve those choices differently in the AI era?
Human needs. What are parents actually thinking about AI in education? What works to bring opportunity to underserved communities? What do people in areas with poor health outcomes need to thrive, and how could AI help?
Products that work. What makes AI learning companions different from generic chatbots? How will the Gates Foundation + OpenAI Horizon1000 initiative extend the reach of overstretched healthcare providers in Africa? What can we learn from companies getting this right—and wrong?
AI Product craft. How do we best build products, teams, and organizations in an AI-native world? How do we hone product sense when the code powering the experience is non-deterministic?
Systems and standards. How do we solve the identity problem when AI agents transact on our behalf? What protocols enable human agency rather than corporate capture?
Policy and governance. What laws and regulations bend AI toward flourishing? What movements are emerging—or should emerge—to shape AI’s trajectory?
The personal. How do I get the most out of my AI tools without outsourcing the thinking that matters? How is our family guiding our children through that same challenge?
The Practice
I’m committing to publishing one essay roughly every week for the next year—50 essays exploring AI for human flourishing from various angles. (Future essays will be shorter!)
This is as much about the practice of writing as the ideas themselves. I believe quantity leads to quality. That most of us discover what we think by writing it down. That showing up consistently and shipping imperfect work is how you build something meaningful over time.
Some essays will examine specific products and how they work (or fail) and the craft necessary to build them well. Others will explore future scenarios that shape what we should build. Some will dig into policy, governance, and politics. Others will be personal reflections on parenting, education, and faith in the age of AI. Some will be wrong; that’s where I’ll probably learn the most.
Finding Wisdom Together
The question underlying all of this is how we find wisdom—not just knowledge, not just capability, but wisdom—in an age when AI can generate answers faster than we can formulate questions.
Many of us are wrestling with this. Builders trying to create products that serve human dignity. Parents navigating what their kids should learn. Policymakers trying to govern technology that moves faster than legislative cycles. Educators rethinking what it means to teach when AI can tutor.
Writing is my way of thinking through these questions. If you’re wrestling with similar tensions—between innovation and responsibility, between market forces and human values, between what’s possible and what’s wise—I’d be glad to have you along for this exploration.
What questions do you have about AI and human flourishing? What are you wrestling with? What examples have you seen of AI done right or done poorly? Reply to this email or comment below. Your questions will shape what I write about in the weeks ahead.
Here’s to a year of learning together.
—Dan
[1] Common Sense Media, Talk, Trust, and Trade-Offs: How and Why Teens Use AI Companions (2025); Pew Research Center, Teens, Social Media and AI Chatbots (2025).
[2] FleetOwner, “Truck driver is the most common job in 29 states,” based on U.S. Census Bureau data visualized by NPR Planet Money.
[3] Founders Forum Group, AI Statistics 2024–2025: Global Trends, Market Growth & Adoption Data (2024).
[4] Dario Amodei, “The Adolescence of Technology” (2025).
[5] World Bank, Digital Progress and Trends Report 2025: Strengthening AI Foundations (2025).
[6] Fortune, “Oxford Economics: AI is unlikely to help resolve K-shaped economy anytime soon” (January 2026).
[7] OECD AI Incidents Database, “Larry Page Calls Elon Musk ‘Speciesist’ Over AI Safety Concerns”; also documented in Max Tegmark, Life 3.0: Being Human in the Age of Artificial Intelligence (Knopf, 2017).