Know Your AI Model - LLMs vs. LRMs
How to assign the right AI model to the right job. Plus, catch up on Apple’s latest drops, ChatGPT’s new vision skills, and Veo 3’s leap in text-to-video.

💼 In Today’s 5-min AI Brief
Your Everyday AI Toolkit: Gemini's Hidden Gems - AI workflows you can reuse
Prompt of the Week: Split creative vs. logical tasks and assign the right AI model to each
AI News You Can Use: LLMs vs LRMs, Apple drops new AI features, and ChatGPT gets better at “seeing” your screen
What I’m Learning: Exploring how AI-first organizations think, and how text-to-video tools like Google’s Veo are rewriting what’s possible in content creation
Welcome back to The AI Brief, your weekly upgrade in working smarter with AI. Today I’m pulling back the curtain on a common pitfall in AI use at work: trusting the wrong model.
Have you ever used AI and thought, “Wow, that sounds right”… only to realize it was confidently wrong? You’re not alone, and you might’ve used the wrong model for the job.
I’ll break down the key differences between Large Language Models (LLMs) and Large Reasoning Models (LRMs), show you how to avoid hallucinations, spotlight a powerful (but underused) feature in Gemini, and run through the latest AI drops.
🛠️ Your Everyday AI Toolkit
Gemini's Hidden Gems: What They Are & Why They Matter
If you’re using Google’s Gemini, there’s a subtle-but-powerful feature you might be missing: Gems.
Gems are customizable AI agents inside Gemini. You can think of them as saved prompt templates, except smarter. They allow you to set instructions, tone, goals, and even preferred formats once, and reuse them without repeating yourself.
Examples of useful Gems:
“Turn my voice memos into LinkedIn posts”
“Summarize any report in a client-facing format”
“Rewrite emails in a confident but friendly tone”
“Daily agenda creator from my Google Calendar + Gmail”
How to create a Gem:
Open Gemini.
On the left sidebar, click “Your Gems.”
Click “Create a New Gem” and enter your instructions.
Name it, add a description, and test it.
Note: Once saved, these Gems live in your Gemini workspace and are one click away. They’re especially valuable for recurring tasks like content repurposing, internal summaries, or copy adjustments.
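If you'd rather work with Gemini programmatically, you can approximate a Gem by saving a system instruction once and reusing it. Here's a minimal sketch using the google-generativeai Python SDK - this mimics a Gem rather than calling the Gems feature itself, and the instruction text is just an example:

```python
# pip install google-generativeai
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumes you have a Gemini API key

# A Gem is essentially instructions + tone + format, saved once.
# system_instruction bakes those into a reusable model object.
linkedin_gem = genai.GenerativeModel(
    model_name="gemini-1.5-flash",
    system_instruction=(
        "Turn my raw voice-memo transcripts into LinkedIn posts. "
        "Tone: confident but friendly. Format: a one-line hook, "
        "three short paragraphs, and three relevant hashtags."
    ),
)

# Reuse it without repeating yourself, just like a saved Gem.
response = linkedin_gem.generate_content("Transcript: Today I learned ...")
print(response.text)
```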

💭 Prompt of the Week
“Which parts of [insert task] require creativity, and which need step-by-step accuracy? Split the workflow accordingly, and assign each part to a Large Language Model (LLM) or Large Reasoning Model (LRM).”
This prompt gives you a repeatable way to get better results: it forces you to decide, task by task and sub-task by sub-task, which type of AI model to use.
💡 AI News You Can Use
The Chicago Sun-Times published a summer book list where 10 out of 15 titles didn’t exist. The writer used AI, and it confidently hallucinated fake books right into print. This kind of slip-up isn’t just embarrassing. At work, it could mean misleading data, botched analysis, or losing trust.
What’s an AI hallucination? A hallucination happens when an AI model generates something that sounds correct, but isn’t. It can invent facts, misquote data, or reference things that never existed.
So what’s the fix? It starts with knowing which type of model you’re working with.
Large Language Models (LLMs): Fast, Creative, but Unreliable
Great for: Drafting emails, ideation, blog posts, rewriting content
Watch out for: Confident wrong answers, made-up details, lack of precision
Large Reasoning Models (LRMs): Slow, Structured, and Built for Accuracy
Great for: Project planning, structured decisions, calculations, rule-based workflows
Watch out for: Slower responses, less conversational tone
For a quick reference, here’s a cheat sheet of which models in each of the most popular AI chatbots are LLMs and which are LRMs:

[Cheat sheet table: chatbot | LLM | LRM]

The middle column lists each chatbot’s LLM for creative/low-stakes work, while the right column shows its LRM for tasks where precision and accuracy matter.
Pro move to avoid hallucinations? Split your workflow. Let LLMs do the creative heavy lifting, then hand off to an LRM for structure and sanity checks.
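Here's what that handoff can look like in practice. A minimal sketch using the OpenAI Python SDK, with gpt-4o standing in as the LLM and o3-mini as the LRM - the memo task is just an example, and you can swap in whichever model pair your chatbot of choice offers:

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Step 1: let the LLM do the creative heavy lifting.
draft = client.chat.completions.create(
    model="gpt-4o",  # fast, creative LLM
    messages=[{
        "role": "user",
        "content": "Draft a short internal memo announcing our Q3 AI training program.",
    }],
).choices[0].message.content

# Step 2: hand off to an LRM for structure and a sanity check.
review = client.chat.completions.create(
    model="o3-mini",  # slower reasoning model, built for accuracy
    messages=[{
        "role": "user",
        "content": (
            "Check this memo for claims that need a source, inconsistencies, "
            "and missing logistics. List the issues, then return a corrected "
            "version:\n\n" + draft
        ),
    }],
).choices[0].message.content

print(review)
```

The same pattern works with any provider’s SDK: generate with the fast model, then verify with the reasoning model.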
👉 See my 2-part video breakdown for a live demo on how to use both types of models in a real workflow.
Last Week in AI
Apple is moving fast.
Apple’s Shortcuts app gets an AI glow-up - you can now chain AI actions together. For example: record a lecture, auto-transcribe it, compare it to your notes, and have ChatGPT summarize what you missed.
Apple Watch just got an AI workout buddy! A new fitness coach learns from your activity history and gives you personalized motivation mid-workout in a voice modeled after real Fitness+ trainers. It's like having your own hype squad on your wrist.
Provide a screenshot and ChatGPT gets to work! A new visual intelligence tool lets ChatGPT understand whatever you’re looking at. Snap a screenshot and it can:
Explain what’s on the page
Add a calendar event from a flyer
Find a similar item on Etsy
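You can script the same screenshot-to-answer loop yourself. A minimal sketch using the OpenAI Python SDK and a vision-capable model - this is a developer analogue of the consumer feature, not the feature itself, and the file name screenshot.png is just a placeholder:

```python
# pip install openai
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Encode the screenshot so it can be sent inline as a data URL.
with open("screenshot.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Explain what's on this page."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```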
📖 What I’m Learning
If you’re interested in what I’ve been digging into… check out:
How to build an AI-first organization | Ethan Mollick - I’ve been diving into a conversation between Ethan Mollick (Wharton professor and leading AI researcher) and Joel Hellermark (Sana CEO) on how organizations can actually adopt AI beyond just adding tools.
One standout insight: being “AI-first” isn’t about replacing people, it’s about redesigning workflows where AI is the default, not the add-on. It’s a shift in mindset, not just infrastructure.
Veo 3 Channel Surfing - See Results from Google's Text-to-Anything Tool! - Text-to-video is getting scary good. Google’s new Veo model lets you create cinematic video - dialogue, sound effects, camera cuts - all from a single text prompt. I watched a demo where each scene in a channel-surfing-style montage was generated from a single prompt, and the quality was wild.
Final Thoughts
The future of AI at work won’t be driven by tools alone - it will be shaped by how we assign and combine them. Now’s the time to start practicing with both LLMs and LRMs. The more you understand how they think, the better you’ll be at knowing which one to trust with which task.
Until next time,
Thaddeus