
What Aristotle and Socrates can teach us about using generative AI




ZDNET's key takeaways

  • AI language models can erode our ability to create new ideas.  
  • Other types of AI models can help us improve our critical thinking. 
  • Technology strategy must shift from global playbooks to region-specific approaches.

We are at a confluence of AI advancement, geopolitical fracturing, and machine-speed cyber threats, a perfect storm that will decisively separate the successful organizations from those that fail to adapt. This was the topic of discussion for Ray Wang, CEO of Constellation Research, and me during our recent weekly podcast DisrupTV, where a critical truth emerged: the current period is defined by more than just rapid technological change. 

Insights delivered fresh from Davos by our esteemed panel, including Peter Danenberg, distinguished software engineer at Google DeepMind and architect of Gemini's key features, and Dr. David Bray, distinguished chair at the Stimson Center and CEO of LeadDoAdapt Venture, are essential for every forward-thinking executive.

Also: The secret to AI job security? Stop stressing and pivot at work now - here's how

With battle-tested perspectives originating from Google's AI labs and the geopolitical heart of Washington, DC, their guidance provides a vital roadmap for leaders navigating what is arguably the most consequential inflection point in modern business history.

AI that teaches us how to think

For CTOs and product leaders navigating the AI landscape beyond the noise, Peter Danenberg offers a rare and valuable perspective. As the leader of rapid prototyping for Google's Gemini platform, Danenberg has a proven track record, routinely taking concepts to demonstration in under 24 hours and shaping features used by tens of millions daily.

His insights are amplified by a highly unconventional background that weaves together humanities, computer science, and music, alongside his work building the highly successful Google Gemini Meetup. This community event, accessible at geminimeetup.ai, has rapidly expanded from a modest 10 attendees to a gathering of 300 to 600 people every other Friday.

Also: Nervous about the job market? 5 ways to stand out in the age of AI

However, Danenberg's most significant and concerning contribution comes from his recent TEDx talk, 'Competence in the Age of LLMs,' on competence in the age of large language models (LLMs). Brain-scan research he presented revealed a troubling pattern: when individuals used LLMs for creative tasks, their brains showed significantly less activity than those of people who used traditional methods like pencil and paper or even Google search. Can the use of LLMs erode our ability to gain competence and mastery? Danenberg drew on lessons from Aristotle and Socrates in his TEDx talk to share insights into the future. 

LLMs currently excel at being poietic tools, generating "the new" by weaving human knowledge into coherent content, but risk eroding competence if users become passive verifiers rather than active thinkers. Danenberg argued that we need to move toward peirastic LLMs, models designed not just to provide answers, but to "pressure test" ideas through Socratic dialogue and challenging questions. Insights from Danenberg's framework included:

  • Poietic vs. peirastic: Danenberg distinguishes between poietic (rhetoric/generation) and peirastic (dialectic/testing). Current LLMs primarily offer "poietic dishes" with only a "peirastic garnish," often prioritizing user satisfaction over intellectual growth.
  • The risk of outsourcing: By outsourcing critical thinking, Danenberg argued, humans risk becoming mere "verifiers" of AI output, which leads to a loss of creative imagination and mastery.
  • Socratic enkindling: Drawing on Socrates, he emphasized that true learning and the "enkindling of the soul" come from a shared social process of challenge and questioning. 

Danenberg concluded by stating that the research revealed that LLM users reported an almost total lack of ownership over their work and struggled to recall basic facts about what they had produced. He elaborated on this phenomenon: "The pencil and paper people who sweated over their work felt that the essay was legitimately theirs. The LLM people, if you ask them about something in the third paragraph, they have no idea what you're talking about."

Also: AI is disrupting the career ladder - I learned 5 ways to get to leadership anyway

So, can LLMs help give birth to ideas? Yes. Acting as peirastic interlocutors, LLMs can play the Socratic role, challenging a user's assumptions and helping them arrive at deeper truths through dialogue. In a shift toward an intention economy, LLMs can also create intentional learners, acting as lifelong companions that facilitate growth through difficult questioning. 

But risks exist. If models remain purely generative (poietic) and focus on engagement metrics, they may create a dopamine cycle that frustrates deep learning and encourages the simple externalization of thought without internal mastery. 

Here are Danenberg's key recommendations for technology leaders using AI models:

  1. Embrace Socratic AI over generative AI: Danenberg is pioneering a fundamentally different approach with Peter Norvig and researchers at the University of Oxford, building AI systems that test and question rather than generate. "After about 10 to 15 minutes of being questioned by the LLM, people basically had enough," Danenberg noted from user testing. "Being questioned by the LLM is exhausting." The challenge is balancing this dialectical testing with creative processes, ensuring users emerge with artifacts they feel ownership over, something Danenberg calls "the coloring book principle," where you have something to "stick on the fridge."
  2. Prioritize ambient, multimodal AI companions: Beyond image generation and chatbots, Danenberg sees the next horizon as "ambient models that are companions with you in the world, seeing what you see, listening to what you're hearing, with immediate context of whatever you're experiencing." This shift from app-based interaction to ambient presence requires sophisticated multimodal models processing images, sound, and text simultaneously.
  3. Build community-driven innovation loops: The Gemini Meetup exemplifies a return to Silicon Valley's roots: open sharing, rapid feedback, and community building. "I can take things users ask me directly to leadership and advocate for them," Danenberg explained. "The things our users feel and want are sometimes 180 degrees removed from our direction." This direct user engagement has proven invaluable for catching misalignments before they become costly mistakes.

Navigating the post-globalization reality 

Dr. David Bray, a two-time Global CIO Award winner recognized by Business Insider as one of the '24 Americans changing the world under 40,' offered essential guidance to boards and C-suite executives navigating today's intense technological and geopolitical volatility, drawing on his experience leading complex digital transformations across both public and private sectors.

A crucial insight from Bray, building on earlier discussions about AI, is the danger of completely outsourcing human judgment. He strongly advised organizations to pair AI with human judgment. Bray pointed out a concerning divergence: while shareholder pressure is pushing public companies to cut staff for short-term profitability, winning private companies are succeeding by integrating humans and AI. In this winning model, AI handles known threats, freeing up humans to focus on "unknown unknowns."

"If you outsource your thinking, you outsource your talent," Bray cautioned, emphasizing that this strategy may secure a short-term gain but risk the company's long-term future. Bray delivered a sober post-Davos 2026 assessment: "Davos made it very clear that the era of globalization is currently on hold, if not ended. Companies and countries are being asked to pick a side." 

Also: Forget the chief AI officer - why your business needs this 'magician'

This perspective is vital for leaders who have yet to grasp that traditional decision-making models are no longer sufficient in our rapidly changing world. As Bray stressed, "You may not care about geopolitics, but geopolitics cares about you."

Bray's key recommendations for boards and CEOs included:

  1. Instrument for machine-speed threats and responses: Nation-state actors are using openly available generative AI tools to plan sophisticated targeted attacks on corporations. "With AI and automation, the speed and scale of both good things and bad things like cyberattacks are just massive," warned Bray. "You can create the appearance of people not liking your brand that are all bots." Organizations must examine core processes, such as customer interactions, business continuity, and data security, and ask: "Are we able to be responsive and adaptive at machine speed?" The answer for most companies is no, creating existential vulnerability.
  2. De-risk by region, not globally: The globalization playbook, in which business processes are standardized worldwide, is dead. "You've got to throw out the globalization playbook," Bray stated bluntly. "We're back in the era where location matters, and how you deal with it is contextual." His recommendation: examine global operations and supply chains region by region, such as the Middle East, South America, and the Pacific, each requiring different de-risking strategies. Surprisingly, fewer than 20% of multinational companies have board members with a strong understanding of their operational geographies.
  3. Elevate general counsel as geopolitical risk partners: Bray identified an emerging pattern: "General counsels are stepping up to help de-risk. They get law and geopolitics but need tech background." The winning combination pairs CIOs with general counsel to present a compelling case to boards: "Here's the tech and cybersecurity impacts" plus "here's the legal and policy impacts." This partnership addresses the reality that no board would accept members who can't read a profit-and-loss statement, yet most executives lack geopolitical and technical expertise.

Human-AI collaboration at machine speed

The key to future success lies in mastering human-AI collaboration. The organizations that will thrive are those that operate at machine speed without sacrificing the critical thinking necessary to gain a competitive edge. The danger of outsourcing thought to LLMs is clear: Danenberg's research highlights the risk of cognitive atrophy. At the same time, Bray's geopolitical analysis shows that machine-speed threats from nation-states and competitors are already a reality.

The solution is to develop AI systems that augment, rather than replace, human cognition, to match the speed of these emerging threats. This future is built on bi-directional learning. As Danenberg noted about Socratic AI, the focus must be on teaching people to "critique when the machine gets something wrong" and "interrogate the machine to get it better."

Also: Climbing the career ladder? 5 secrets to building resilience from leaders who were once in your shoes

Bray provided a practical framework for this approach: "Let the AI get trained on all the critical vulnerabilities and do the known knowns, but have humans deal with the unknown unknowns and feed that information back to the machine. Those are the ones that are winning." By allowing AI to handle established threats while humans focus on novel challenges and use that knowledge to refine the AI, organizations can secure a competitive advantage.

Decisions can have an outsized influence

For boards and business leaders at every organizational level, this isn't a time for following conventional wisdom or established playbooks. As Bray emphasized, we're operating in "a slipstream of time where the decisions you make now will have an outsized influence." The leaders who will succeed are those willing to:

  • Deeply understand the fundamental capabilities and limitations of emerging technologies.
  • Make principled decisions during geopolitical and technological uncertainty.
  • Establish outside independent voices with permission to speak truth to power to help inform crucial technology, board, and C-suite decisions.

As for the future, Bray summed it up best: "It's about collective intelligence, people both internal and external to an organization, alongside AI. That's how we ensure the overall impact of technologies is positive for the world." 

The technological, economic, and geopolitical convergence we're experiencing isn't just another shift; it's a fundamental transformation. The key question for senior executives isn't whether AI and geopolitical change will disrupt you. The question is whether you'll master human-AI collaboration, because the organizations that do will be the ones that thrive in shaping the future.
