The Power of Examples

By Eric Florenzano • October 13, 2025

No matter how long I work with LLMs, people constantly forget the power of examples. Examples are so, so important. Let me show you why.

The Informal Approach

Sometimes it's informal examples where in your prompt you'll say: "I want a summary. Don't give me bullet points, don't give me a list - give me three paragraphs that start with the main point." You're giving partial sequences that hint at what you want.

But that's not always enough. When you're too informal, the LLM might give you:
- Four paragraphs instead of three
- A conclusion paragraph that starts with "In conclusion"
- The main point buried in the middle

The problem with informal examples is imprecision. Sometimes the LLM gets the wrong idea, or the hints are too loose for it to learn the pattern you actually want.

The Formal Approach

At that point, you need formal examples. Right in your prompt:

Input: "The weather was sunny. The event had 200 attendees."
Output: "Weather: sunny. Attendance: 200."

Input: "Q3 revenue dropped 8% despite a 15% increase in marketing spend."
Output: "Q3: revenue -8%, marketing spend +15%."

Show the precise input and the precise output. Yes, it's all happening within your prompt, but that's fine. This is where results start getting noticeably better.
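Here's a minimal sketch of building such a few-shot prompt programmatically. The `build_prompt` helper and the example pairs are illustrative; plug the resulting string into whatever LLM client you use.

```python
# Embed formal input/output pairs in the prompt, then append the real task.
EXAMPLES = [
    ("The weather was sunny. The event had 200 attendees.",
     "Weather: sunny. Attendance: 200."),
    ("Q3 revenue dropped 8% despite a 15% increase in marketing spend.",
     "Q3: revenue -8%, marketing spend +15%."),
]

def build_prompt(text: str) -> str:
    parts = ["Summarize each input in the exact style of these examples.\n"]
    for example_in, example_out in EXAMPLES:
        parts.append(f'Input: "{example_in}"\nOutput: "{example_out}"\n')
    parts.append(f'Input: "{text}"\nOutput:')  # the model completes from here
    return "\n".join(parts)

prompt = build_prompt("Server costs rose 12% after the traffic spike.")
```

The prompt ends right at `Output:`, so the model's most natural continuation is a summary in the demonstrated format.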

The Conversation History Hack

My favorite way is faking conversation history. When you ask your question, prepend fake turns:

User: "Extract the key facts from this meeting note: 'Team discussed the Q3 roadmap. Sarah from engineering raised concerns about the API rate limits. Decision made to postpone the mobile app release by 2 weeks.'"
Assistant: "Key facts: Q3 roadmap reviewed, Engineering concern: API rate limits, Mobile app delay: 2 weeks"

User: "Extract the key facts from this meeting note: 'Marketing presented Q4 campaign budget. John questioned the ROI projections. Approved $50K additional spend for influencer partnerships.'"
Assistant: "Key facts: Q4 budget presented, ROI projections questioned, Additional spend: $50K for influencers"

Then ask your real question. The LLM slots into a role-play session where it has already acted in exactly the correct manner, and it just needs to keep acting that way.
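Mechanically, this just means prepending fabricated turns to the messages list before the real question. A sketch, using the common chat-completions message shape (roles and record layout may differ for your client):

```python
# Fake prior turns where the assistant already answered in the desired format.
FAKE_HISTORY = [
    {"role": "user", "content": (
        "Extract the key facts from this meeting note: 'Team discussed the "
        "Q3 roadmap. Sarah from engineering raised concerns about the API "
        "rate limits. Decision made to postpone the mobile app release by 2 weeks.'")},
    {"role": "assistant", "content": (
        "Key facts: Q3 roadmap reviewed, Engineering concern: API rate limits, "
        "Mobile app delay: 2 weeks")},
]

def build_messages(note: str) -> list[dict]:
    question = f"Extract the key facts from this meeting note: '{note}'"
    return FAKE_HISTORY + [{"role": "user", "content": question}]

messages = build_messages("Legal flagged the new data retention policy.")
```

From the model's point of view, it has already answered one of these correctly, so the real question at the end just continues the pattern.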

What About Agents and Tools?

Here's what people miss: give your LLM tools that expose examples. This is a way of using examples that not many people think about, but it's incredibly important. When you give an LLM tools, you can give it access, through those tools, to examples it can pull into context.

Game Development Example: Build one perfect mini-game yourself. Maybe a Tetris clone with clean code, proper scoring, smooth controls. Do it entirely yourself, or with the aid of LLMs, but pore over every single detail and keep everything concise. It's not really about fun; it's about making an incredible example. Check it into your repo. Then tell your LLM: "Use this as your prototype. Match this code style and structure. If you need inspiration, look over at this."

When the LLM hits something tricky, like implementing piece rotation, it can reference your example without cluttering the context window. This works incredibly well: when the LLM has its own internal uncertainties about how to do something, it can consult the example, but the example isn't taking up space in the context window. And if the LLM doesn't need it, if there are no errors it has to address, it never has to go investigate.

So put one game example not in the context window itself, but in tool visibility, so the tools can see it and pull parts of it into context as needed. That's good, but it's better if you can do it three, four, or five times.
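One way to wire this up is a small "reference lookup" tool the agent can call on demand. The tool name, schema, and directory layout below are all assumptions, sketched in the common JSON function-calling shape:

```python
from pathlib import Path

# Tool schema advertised to the model: it can read files from the
# hand-polished example game whenever it's unsure how to proceed.
REFERENCE_TOOL = {
    "name": "read_reference",
    "description": (
        "Read a file from the polished Tetris prototype. Use when unsure "
        "about code style, structure, or tricky logic like piece rotation."),
    "parameters": {
        "type": "object",
        "properties": {"path": {"type": "string",
                                "description": "Path inside examples/tetris/"}},
        "required": ["path"],
    },
}

def read_reference(path: str) -> str:
    base = Path("examples/tetris").resolve()
    target = (base / path).resolve()
    if not str(target).startswith(str(base)):  # keep the agent inside the example dir
        return "error: path escapes the example directory"
    return target.read_text() if target.exists() else "error: no such file"
```

The example costs zero context tokens until the model decides it needs it, which is exactly the property described above.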

The Production Flywheel

Here's where it gets really powerful: comb through your production data and find the exemplars. If you're running a service, it's honestly great to comb through the data for the best, trickiest, most complex, and most interesting examples, and make them available to your LLM.

Obviously, respect privacy: make sure whatever agreements you have about the data's provenance and your user agreements are being followed. Oftentimes it's good to take those examples from live data, show them to an LLM, and ask it to come up with similar-looking data, data that points at the same basin in LLM space but has no identifying features from any real user. At that point it's really just about the behavior.

Build a flywheel where you're constantly identifying great examples from production and adding them to your example library.
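One pass of that flywheel might look like the sketch below. The trickiness heuristic, the log record shape, and the anonymization prompt are all assumptions; the point is the shape of the loop: score, select, de-identify, add to the library.

```python
# Score production records so the trickiest ones surface as exemplar candidates.
def trickiness(record: dict) -> float:
    # Favor long inputs, retries, and cases a human later corrected (assumed fields).
    return (len(record["input"]) / 1000
            + 2.0 * record.get("retries", 0)
            + 5.0 * record.get("human_corrected", False))

def pick_exemplars(records: list[dict], k: int = 5) -> list[dict]:
    return sorted(records, key=trickiness, reverse=True)[:k]

# Each selected exemplar would then be sent to an LLM with a prompt like this
# to produce a synthetic, de-identified lookalike for the example library.
ANONYMIZE_PROMPT = (
    "Rewrite this example with invented names, companies, and numbers, "
    "keeping the structure and difficulty identical:\n\n{example}"
)

logs = [
    {"input": "short ask", "retries": 0},
    {"input": "x" * 3000, "retries": 2, "human_corrected": True},
]
best = pick_exemplars(logs, k=1)
```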

Real-World Results

I did this just today. I started vibe coding a new app and pointed it at my previous vibe-coded app: "Use this as your example and as your prototype. Same folder structure, same error handling, but add these new things." Then I gave it a long set of instructions about what I wanted.

It matched my style perfectly: clean code, light dependencies, the same sqlalchemy+sqlite stack, even the little kubernetes conventions I'd settled on. I couldn't believe how closely it matched my preferences and did things the way I wanted. It didn't even involve writing a claude.md, or any work beyond what I'd already put into my previous project. I just gave it as an example.

The Bottom Line

Examples are your secret weapon. They're not always the obvious thing to reach for, but they're an integral part of modern, agentic LLM workflows.

Next time you're stuck: Don't refine your prompt. Don't add more instructions.

Add an example.