Missing the Real Value of AI
- Russell Fitzpatrick, PhD

- May 21
- 3 min read
Last week, The New York Times published an article that was, in many ways, a perfect snapshot of our current moment with AI. Written by Kashmir Hill and titled “Students Are Complaining That Professors Are Using A.I. Too Much,” the piece details a surprising reversal: students, who are warned not to use ChatGPT in class, are now lodging formal complaints when their professors rely too heavily on it.
One student quoted in the piece feels shortchanged. If her high tuition is meant to pay for human insight, guidance, and interaction, why is she getting generic, machine-generated content and AI-written feedback?
On the surface, this is a story about fairness, expectations, and academic policy. But to me, it reveals something deeper:
We are still fundamentally misunderstanding how to use AI.
The article describes professors using ChatGPT and other generative tools to:
- Draft slides and lecture notes
- Grade student essays
- Generate case studies or rubrics
- Offer “nicer” feedback to students
- Save time and reduce cognitive load
In other words, AI is being used by some professors as a time-saving productivity assistant. And yes, that’s a valid use case. But if that’s all we see, we’ve already limited its potential.
While we try to squeeze AI into our existing workflows, we are missing its potential to reimagine how we think.
AI as a Thinking Partner, Not Just a Tool
When I read stories like this, I see a missed opportunity. AI isn’t just a tool that completes tasks faster. It can be a cognitive partner, one that challenges how we think, process, and decide. That distinction may well determine whether we adapt or get left behind.
When professors use ChatGPT to generate lecture notes but don’t pause to reflect on how AI might enhance dialogue, deepen perception, or elevate the learning experience, they’re using the tool but missing the transformation.
When leaders use AI to crank out slide decks, automate agendas, or summarize meetings, but never rethink how they make decisions or navigate complexity, they’re saving time but missing the upgrade.
Enhancing Human Potential
In the NYT article, a student complains about receiving feedback from ChatGPT, not her professor. She didn’t feel seen. She felt outsourced. And that feeling is legitimate.
But what if the professor had used AI to enhance their own capacity to respond, freeing up space to offer deeper insight? What if they had used ChatGPT to identify patterns or blind spots in their grading, so they could be more human, not less?
The problem isn’t that AI was used; it’s that it was used without vision, just to save time.
That’s the shift I’m advocating for.
The goal isn’t to replace human intelligence with artificial intelligence. The goal is to extend human intelligence, to evolve how we think, decide, lead, and grow.
But let me be perfectly clear: there is absolutely nothing wrong with using AI to save time, streamline workflows, or scale tasks. I do it every day.
Yet if that’s all we do, treating AI like a digital assistant rather than a cognitive partner, we’re just reinforcing outdated mental models with newer tech.
The AI-Enhanced Leader Is Coming
My new book, The AI-Enhanced Leader: How to Upgrade Your Thinking and Leading, officially launches June 10, 2025 (Amazon).
It’s not a book about using AI tools. It’s a book about how AI can be used to enhance the way we think. I introduce a new model of leadership for the AI era called Synergistic Leadership, a paradigm built on the idea that machine intelligence is not a threat to leadership, but a lever for transformation. At its core is the belief that the greatest upgrade AI offers is not technical, but cognitive.



