
“Mary had a...” — What That Taught Me About AI

When I first started exploring AI, I kept hearing the phrase “prompt engineering.” I assumed it just meant typing clever things into ChatGPT.

Then I learned what happens when you type:
“Mary had a…”

The model completes it with:
“little lamb.”

Not because it knows anything about Mary or lambs — but because, mathematically, those words are the most likely next tokens based on all the text it’s ever seen.

That’s when it clicked:
LLMs don’t understand language — they predict it.

They’re not thinking — they’re pattern matching across staggering amounts of data. That shift in mindset changed everything for me. I stopped thinking of AI as magic and started thinking of it as a probability engine: powerful, but predictable if you know how to work with it.
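To make that concrete, here is a minimal sketch of next-token prediction, using the small open-source GPT-2 model via the Hugging Face transformers library (my choice for illustration; it isn’t the model behind ChatGPT, but the mechanics are the same). It feeds in “Mary had a” and prints the five most probable next tokens:

```python
# pip install transformers torch
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Mary had a"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (batch, sequence_length, vocab_size)

# Turn the scores for the *next* token into probabilities
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r:>12}  p={prob.item():.3f}")
```

Run it and you’ll likely see “ little” at or near the top of the list — not because the model remembers Mary, but because the ranking is pure probability.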

To learn how to actually work with it, I dove deeper:
- I took a Coursera course on Prompt Engineering for ChatGPT by Dr. Jules White at Vanderbilt University
- I completed a short course specialization on LLMs at deeplearning.ai
- I ran dozens of experiments, just to see what small tweaks in a prompt would do

It made me better at talking to engineers. But more importantly, it made me better at thinking clearly about AI strategy — and at spotting where it could actually be useful in business.

If you’re a founder, exec, or operator trying to understand AI, I highly recommend:
- Taking a prompt engineering course (start with Coursera or deeplearning.ai)
- Testing prompts yourself — change one word at a time and watch the difference (see the sketch after this list)
- Thinking about prompts like product specs: you get out what you put in
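Here is what that one-word experiment can look like in code, again using GPT-2 through the transformers pipeline purely as an illustration (the prompts and model choice are mine, not from any of the courses above). The two prompts differ by exactly one word, and the completions shift accordingly:

```python
# pip install transformers torch
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled completions repeatable
generator = pipeline("text-generation", model="gpt2")

# Two prompts that differ by exactly one word
prompts = [
    "The best thing about remote work is",
    "The worst thing about remote work is",
]

for prompt in prompts:
    out = generator(prompt, max_new_tokens=30, do_sample=True)[0]["generated_text"]
    print(out)
    print("-" * 60)
```

A playground or chat window works just as well — the point is to change one variable at a time and watch how the output moves.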

You don’t need to code to understand AI. But you do need to engage with it directly.