The AI Expert in the Room (Who Doesn’t Know What a Token Is)

Emilio Harrison

“Oh yeah, I’ve been working with LLMs extensively,” he says on the Zoom call, leaning back with that confident tech-bro energy. “The key is just making sure your neural networks are aligned with your prompt architecture.”

I’m nodding along, but something feels off. Two minutes later, he’s talking about “training ChatGPT on our company data” and how “the AI learns from every conversation.”

I ask, “Quick clarification: when you say ‘training,’ do you mean fine-tuning or just adding context to prompts?”

Silence. Then, “Yeah, exactly, training.”

He doesn’t know. And suddenly, I realize he has no idea what a token is, what a context window does, or the difference between inference and training. But he needed to sound like an expert. And I get it. I really do.
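For what it’s worth, the concepts he was missing fit in a few lines. Here’s a minimal sketch in Python using the openly available GPT-2 tokenizer from Hugging Face’s transformers library; the prompt and the company note are made up for illustration.

```python
# A toy walkthrough of "token," "context window," and inference vs. training,
# using the open GPT-2 tokenizer from Hugging Face's transformers library.
# Everything here is illustrative, not anyone's production setup.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

# 1. A token is a chunk of text the model reads, often a word piece.
ids = tokenizer.encode("Let's train ChatGPT on our company data!")
print(ids)                                   # a list of integer token IDs
print(tokenizer.convert_ids_to_tokens(ids))  # the word pieces behind them

# 2. The context window is the fixed budget of tokens the model can see in
#    one request. "Adding context to prompts" means spending some of that
#    budget on your own data:
company_note = "Refunds are processed within 14 days of purchase."
prompt = f"Context: {company_note}\n\nQuestion: What is our refund window?"
print(len(tokenizer.encode(prompt)), "tokens of the window used")

# 3. Sending that prompt and reading the answer is inference: the model's
#    weights never change, and it retains nothing afterward. Training (or
#    fine-tuning) is a separate offline process that updates the weights.
#    That's the distinction the guy on the call couldn't make.
```

None of this is deep, which is sort of the point: the vocabulary that gets performed as expertise is learnable in an afternoon.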

We’re All Faking It

There’s this thing happening right now in every Slack channel, every standup, every coffee chat about AI. People are posturing as experts while getting foundational concepts completely wrong.

The thing is, I don’t think it’s arrogance. I think it’s terror.

AI feels like an extinction-level event for knowledge work. Every article screams that our jobs are at risk. Every LinkedIn post shows someone claiming they “10x’d their productivity with AI.” The implicit message: If you’re not an AI expert, you’re obsolete.

So we perform expertise. We throw around “large language models” and “prompt engineering” and “AI agents” like shields against irrelevance. 

I Did It Too

For the past five years, I’ve been an expert in UX research. I knew the methodologies. I could moderate a usability test in my sleep. I had mastery, and that felt good. Safe.

Then AI exploded, and suddenly I’m in conversations where people are talking about embeddings and vector databases and RAG architectures. My expertise felt… quaint. Like showing up to a rocket launch with a really nice bicycle.

So I did exactly what that guy in the Zoom call did. I started nodding along. Dropping terms I’d read in articles. Positioning myself as someone who “gets it.”

In one meeting, I confidently explained how we could “leverage AI to automate our research synthesis” without actually understanding what that would require.

A designer asked a follow-up question about how the model would handle qualitative data, and I… deflected. Changed the subject. Performed confidence I didn’t have.

I was so afraid of looking like I was falling behind that I’d rather bullshit than admit I didn’t know.

The Quote That Changed My POV

Then I read this line from Brené Brown:

“The big shift here is from wanting to ‘be right’ to wanting to ‘get it right.’”

It hit me like a door to the face. I was so focused on appearing valuable, on being right about AI, that I’d stopped trying to actually learn. To get it right.

The posturing wasn’t helping me. It was keeping me shallow. I was rehearsing answers instead of asking questions. I was protecting my ego instead of learning.

Why This Is So Hard

Here’s the thing I want to acknowledge: The fear is real.

AI is changing knowledge work. Jobs are being redefined. The ground is shifting under all of us.

When you’ve spent years building expertise in one domain, and suddenly there’s a new domain that feels existential, of course you grasp for credibility. Of course you perform confidence you don’t have.

It’s not stupidity. It’s survival instinct.

And learning publicly, admitting you don’t know something in front of colleagues, in front of your boss, in front of people who might be deciding your future, is genuinely vulnerable. It feels like handing someone ammunition.

Sometimes, in some rooms, a little strategic positioning is necessary. I get it.

But here’s what I’ve learned: The performance has a cost. It keeps you from actually learning the thing you’re pretending to know. It keeps you in shallow conversations instead of deep ones. And it makes you spend energy managing an image instead of building real capability.

What I’m Trying Now

I started saying “I don’t know” out loud.

In meetings. In Slack. In conversations with people I want to impress.

“I don’t actually understand how fine-tuning works, can you explain that?”

“I’ve read about RAG but I haven’t implemented it. What’s your experience been?”

“I’m still figuring this out. Can we think through it together?”

And here’s what happened: Nothing bad. No one thought I was incompetent. Most people seemed relieved to have permission to also not know everything.

The designer who asked me that follow-up question? We ended up having an hour-long conversation where we both admitted we were confused about the same things. We learned together. We actually got it right instead of pretending to be right.

An Invitation

Next time you’re in a conversation about AI, or any other new, scary thing, try noticing the gap between what you’re saying and what you actually understand.

Notice if you’re reaching for jargon to sound credible. Notice if you’re nodding along when you’re actually lost. Notice if someone else is doing the performance, and maybe offer them an out: “I’m still learning this stuff too, want to figure it out together?”

The shift from “be right” to “get it right” isn’t easy. It won’t feel safe at first.

But I’m finding it’s the only way to actually learn. And maybe, the only way to build real expertise in something that’s changing this fast.

You don’t have to be the expert. You just have to be honest about where you are.