Do Chatbots Expand or Narrow Leaders’ Perspectives?

Unpacking the risks and rewards of using AI-powered tools as strategic advisors

By Deborah Aarts - Sep 18, 2025

Text originally published at: Smith School of Business

In boardrooms around the world, leaders are turning to a new source of strategic guidance: AI. Many executives are now supplementing and even replacing the expertise and counsel of trusted associates with tools like ChatGPT, Claude and chatbots of their own design to flesh out ideas, make decisions and—in some cases—replace their attendance in meetings they can’t or don’t want to attend.

For busy leaders, the benefits of AI as a thought partner appear compelling: With a few minutes of screen time and some well-crafted prompts, leaders can arrive at synthesized and distilled insights that might otherwise be impossible without limitless time and resources. Chatbots can give half-baked ideas form and substance, guide complex or nuanced decisions, and even expand perspectives: “I ask ChatGPT to become aware of where my biases and blind spots might be, and the answers it gives are a really, really good starting point to check your thinking,” Coursera CEO Jeff Maggioncalda told CBS in 2023. And you sure can’t beat it for speed, as Rokt chief commercial officer Elizabeth Buchanan recently told Business Insider: “I use AI to accelerate how I consume information and frame decisions.”

But this use case for GenAI is not without critics. Some experts are warning about the technology’s deleterious effects on how people think: A recently published study indicates a significant negative correlation between frequent AI tool use and impaired critical thinking abilities. And this summer, researchers found that while use of ChatGPT can boost the overall quality of individual ideas, it tends to reduce the diversity and individuality of concepts and increase the risk of groupthink. For all its perspective-broadening potential, in practice GenAI is still known to introduce and reinforce biases. Plus, despite its advancements in recent years, it’s still prone to making things up.

So, which is it? Do chatbots broaden a leader’s perspective, or narrow it?

Dean McKeown believes it depends on how they’re used. McKeown is the director of the AI, Finance and Analytics suite of programs at Smith School of Business, where he works to help leaders separate the hype from the reality in harnessing emerging technologies. And, as he explains to Smith Business Insight contributor Deborah Aarts, GenAI’s best use case as a leadership aide comes when those deploying it understand the risks and limitations.

As someone who works with executives every day, how are you seeing business leaders engage with chatbots and other LLM-based tools?

Senior leaders are very bullish about the technology, and they’re actually using it. They’re using it for ideation, for sure, and also as an advising tool for decisions and strategy. In fact, leaders tend to be the early adopters—often the first in the organization to embrace the technology. They’re telling their teams, ‘We have to use this. It can be a differentiator for us. Let’s use it, and let’s allocate funds to it as well.’

What is chatbot technology giving them, as leaders?

It’s very easy to use, and it will very quickly give business school textbook kinds of answers. That efficiency piece is significant. You don’t have to read a 300-page business book—you can just ask for a summary. You can ask for a SWOT analysis for an idea and you’ll get one right away. AI is extremely efficient and quite good at doing that sort of a task. But that creates an interesting challenge.

How so?

There’s a tendency to listen to what LLMs have to say and think, ‘Oh, yeah, that’s perfect. That’s what I’m going to do.’ But there are limits to that. It’s called generative AI, but what it is actually generating is words that follow other words that follow other words—as opposed to a creative idea that’s never really been thought of before. Important variables in good leadership—like creativity, thinking outside the box and critical thinking—may not be apparent in what a chatbot has to say.

Take the SWOT analysis example I just referenced. Is what the LLM gives you the right analysis? Or is it simply something fine that will look good in a presentation? You need the critical thinking capacity to probe: ‘Is this information actually what I need to develop a strategy for the organization going forward?’

Can you give a real-world example?

Say you’re having supply chain issues: You’re having trouble getting the parts to your factory in time. You can ask an AI to tell you what to do about it and it will give you proven examples of what’s happened before. It can tell you to get a new supplier who can give you a greater number of parts. And that’s fine. But it won’t give you solutions that haven’t been thought of before. It won’t say, ‘You know, what you should be thinking about is developing your own manufacturing plant.’

Some situations require brand new ideas. Consider the tariff situation we’ve experienced this year: This isn’t something we’ve seen before, at least not to this degree. Since there’s no data about how to handle a situation like this, a creative solution likely won’t show up in an LLM. It can only use data that exists. It’s actually impossible for it to give you something that it doesn’t know.

Situations like this still require a human brain. And I think that’s the biggest challenge for any leader using a chatbot: How can you use LLMs to think creatively and think critically?

Is that possible?

I think so. And I’ll caveat this by saying these tools are getting better all the time, and quickly, so it may be possible sooner than we think.

But today, it still takes effort. It’s not just about running with what the LLM tells you. It means taking a moment to think it through. It means probing, and probing again. It means being very clear about what you’re asking it to do. Fundamentally, it means using AI like a scaffold for your decision-making process.

That seems like a lot of effort for technology that’s supposed to make everything easier.

It is, but think of it this way: Diversity of thought is always important. No good leaders surround themselves with people who think the way they do all the time—they want creativity at the table. They want people with different backgrounds and different educations to supplement their thinking. They want people to challenge their opinions. I would argue it’s the exact same thing when you’re talking about computer systems. You need to use different tools in different ways and look at outputs from different points of view to minimize the risk of groupthink or echo chambers.

What else should leaders keep in mind before consulting chatbots for guidance?

For all their enthusiasm about the technology, I think people in C-suites are also realizing that there’s tremendous risk associated with it. There’s the echo-chamber risk, of course, but also the use of data and privacy, which is becoming really important. And we still see a lot of AI hallucinations. If a computer system is making a decision for a leader of a large organization and it happens to be the wrong decision, the damage does not affect only one or two people, right? It might be thousands. That’s where it gets scary.

It’s important for every organization to develop guiding principles for how they’ll use AI. In our AI and Ethics course, we help students develop scorecards or checklists to make sure that every decision made with artificial intelligence adheres to a list of principles. That’s how you develop systems that ensure everyone, from the top down, is using GenAI responsibly for evidence-based decision-making.