There’s a structural risk that’s not getting enough attention: the erosion of cognitive capability through overdependence on LLMs. This is no longer just a philosophical concern. It’s now backed by hard data.
A 2025 study from the MIT Media Lab, titled Your Brain on ChatGPT, offers one of the most compelling warnings yet. Researchers used EEG to track brain activity across three groups: one using ChatGPT, one using a traditional search engine, and one relying solely on their own thinking. The results were striking:
- Participants using ChatGPT exhibited the weakest neural connectivity, especially in brainwave patterns tied to memory and attention (alpha and beta frequency bands).
- They struggled to recall what they had just written with AI assistance, indicating lower memory retention and reduced ownership of the content.
- Even after switching back to non-AI tasks, their brains did not immediately re-engage, suggesting lasting cognitive dampening effects.
- The output itself became more homogeneous, structurally similar, and lacking the originality seen in human-generated essays.
This is exactly the kind of invisible tradeoff we’re walking into: short-term ease in exchange for long-term erosion of critical faculties.
What This Means for Sales Professionals
In sales, especially high-consideration B2B sales, your competitive edge is not what you say; it’s how you think. Your ability to read a room, find the gap between what’s said and what’s meant, or sense when to pause versus push doesn’t come from a prompt. It comes from lived pattern recognition and active cognitive presence.
The danger isn’t just that LLMs will give you generic output. It’s that, over time, you’ll stop trusting your own judgment, and worse, stop practicing it.
Which is why the differentiator isn’t prompting. It’s what you do before the prompt.
Prompting vs. Priming
Most conversations about LLMs center around prompting technique. I think the real lever is priming.
Prompting is the task.
Priming is the context that defines how the task should be approached.
Priming is not about adding more words to your prompt. It’s about bringing your own experience, strategy, and behavioral understanding into the process before the model is ever involved. That’s where quality lives.
And to be clear, this is not the same as RAG (retrieval-augmented generation).
RAG customizes the data the model sees. Priming narrows the model’s focus and aligns it with your intent and context.
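For readers who like to see the distinction in code, here is a minimal sketch. The function names and the retrieve_docs callable are illustrative placeholders, not references to any specific library.

```python
# Illustrative sketch only: function names and retrieve_docs are placeholders,
# not references to any specific library.

def build_rag_prompt(question: str, retrieve_docs) -> str:
    """RAG: the system fetches external data and appends it to the prompt."""
    docs = retrieve_docs(question)  # automated retrieval step
    return "Context:\n" + "\n".join(docs) + f"\n\nQuestion: {question}"

def build_primed_prompt(question: str, seller_context: str) -> str:
    """Priming: the seller supplies strategy and behavioral context up front."""
    return f"{seller_context}\n\nTask: {question}"
```

Same question, different source of context: one is fetched by the system, the other is authored by you.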
Priming in Action
If you’re heading into a pricing conversation with a CFO, here’s what an unprimed prompt looks like:
“Give me pricing objections in enterprise software.”
Now contrast that with a primed sequence, where you first inject:
- The buyer is a High-D, Low-I CFO, results-driven, low tolerance for fluff.
- They’ve flagged budget compression, 15% OpEx reduction target this quarter.
- Integration risks were raised earlier, this is likely still a latent concern.
- Historical buying behavior suggests preference for multi-year, front-loaded discounts.
Now You Ask
“Based on this profile, what pricing objections are likely, and how should I frame my responses?”
Same model. But now you’ve framed the problem space, you’ve preserved your own strategic judgment, and you’re leveraging the system as a multiplier, not a substitute.
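If you want to see what that primed sequence looks like when it reaches the model, here is a sketch assuming a chat-completion-style API; send_messages and the model name are placeholders for whatever client you actually use.

```python
# Sketch of the primed sequence above, assuming a chat-completion-style API.
# send_messages() and "your-model" are placeholders, not a specific SDK.

priming_context = """
Buyer profile: High-D, Low-I CFO; results-driven, low tolerance for fluff.
Budget: flagged budget compression, 15% OpEx reduction target this quarter.
Risk: integration concerns raised earlier, likely still latent.
History: prefers multi-year, front-loaded discounts.
""".strip()

messages = [
    # Priming: the seller's own read of the room, written before any prompting.
    {"role": "system", "content": priming_context},
    # Prompting: the task itself, now framed by that context.
    {"role": "user", "content": (
        "Based on this profile, what pricing objections are likely, "
        "and how should I frame my responses?"
    )},
]

# response = send_messages(model="your-model", messages=messages)
```

The point isn’t the syntax; it’s that the context carrying your judgment arrives before the task ever does.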
Final Thought
The Your Brain on ChatGPT study makes one thing clear. Over time, overuse of AI weakens the very faculties we depend on to sell well: memory, judgment, originality, and confidence in our own thinking.
If everyone is using the same tools, the difference won’t come from how long your prompt is. It will come from how much thought you put into the input before you ever prompt.
The edge belongs to those who prime with intent.
About the Author
Anoop George is the CEO and Founder of Skwill.AI. His sales experience spans 30 years, and he is committed to making coaching accessible to all by combining behavioral science, human expertise, and the power of AI. Anoop is an alumnus of Carnegie Mellon University, a fitness enthusiast, and loves cooking.