I've never liked how AI models teach. I ask one to explain something, and I get a well-organized tour of the topic that is correct but somehow doesn't stick. I read it, nod, and later realize I'm still not sure how to use what I just learned.
I thought I could fix that. So a couple of months ago, when I first started tinkering with Claude Code, I ran an experiment. I asked two versions of Claude the exact same question:
"Teach me about markdown files and comparison to HTML, JSON, or TXT."
(I picked something I already knew, so I could evaluate better.)
One version, Claude Chat on the web, knew a lot about me. It had my conversation history. It knew I use Obsidian for notes, that I'd been building a RAG system, that I work with Markdown and YAML frontmatter daily. It used all of that context in its answer.
The other version, Claude Code, knew nothing about me. Zero personal context. But it had something else: a skill I'd written, based on a philosophy of learning called the Dreyfus model. The skill didn't contain any information about file formats. It contained principles about how to teach.
The version that knew nothing about me taught better.
What happened
Both responses were accurate. Both covered all four formats. But they taught differently. Three things stood out.
1. "What it is" vs. "When you reach for it"
Claude Chat described each format:
"JSON is a data interchange format — structured key-value pairs for machines to parse."
Correct. After reading that, I know a fact about JSON, but nothing about when to choose it.
Claude Code framed each format around action:
"When you reach for it: Storing configuration. Sending data between systems (APIs). When the consumer is a program, not a human reader."
That's a decision rule. Next time I'm choosing a format, this helps me decide.
2. Different examples vs. one consistent example
Claude Chat used a different example for each format: my Obsidian notes for Markdown, my RAG system for JSON, web pages for HTML. Each reference was relevant, but to compare formats, I had to mentally translate between contexts.
Claude Code used a single example, a shopping list, rendered in all four formats. Plain text: the dashes are just characters, nothing "knows" it's a list. Markdown: # means heading, - means list item, ** means bold. HTML: explicit tags, more verbose, more powerful. JSON: structured data a program can parse, filter, and count.
Same content, four treatments. The differences became obvious instead of abstract.
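To make the JSON point concrete, here's a minimal Python sketch (using the same shopping-list data as the example) of what "structured data a program can parse, filter, and count" buys you:

```python
import json

# The same shopping list, this time as data rather than a document.
doc = """
{
  "shoppingList": [
    {"item": "Milk",  "priority": "normal"},
    {"item": "Eggs",  "priority": "normal"},
    {"item": "Bread", "priority": "high"}
  ]
}
"""

data = json.loads(doc)
items = data["shoppingList"]

# A program can count items and filter by priority with no parsing tricks.
urgent = [entry["item"] for entry in items if entry["priority"] == "high"]
print(len(items))  # 3
print(urgent)      # ['Bread']
```

Try doing that reliably against the plain-text version, where the dashes are just characters.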
3. Advice vs. mental model
Claude Chat ended with practical suggestions tied to my specific projects. Useful for me, today.
Claude Code ended with this:
TXT and Markdown are for content humans write and read.
HTML is for content humans read but browsers render.
JSON is for data programs read and write.
That's not advice for one situation. It's a mental model that works for formats I haven't encountered yet.
(The full responses from both are at the end of this post.)
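That mental model is nearly executable. Here's a toy Python sketch of it (the function and its categories are my illustration, not part of either response):

```python
def choose_format(consumer: str, rendered_by_browser: bool = False) -> str:
    """Toy decision rule: the consumer determines the format."""
    if consumer == "program":
        return "JSON"      # data programs read and write
    if rendered_by_browser:
        return "HTML"      # humans read it, but a browser renders it
    return "Markdown"      # content humans write and read directly
                           # (TXT shares this branch)

print(choose_format("program"))                          # JSON
print(choose_format("human", rendered_by_browser=True))  # HTML
print(choose_format("human"))                            # Markdown
```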
Why this happened
I wrote this skill based on a field I already knew something about: education and skill acquisition. The core idea: knowledge is effectiveness in action, not possession of accurate descriptions. Knowing something means you can act effectively, not that you can recite a definition.
I wanted to see if that principle could change how Claude teaches. So I distilled it into a skill. About 300 words, no information about any topic, just instructions for how to structure an explanation. Three of those instructions produced the three differences above:
"Ground distinctions in action." Instead of defining what each format is, Claude framed each around when you'd choose it. The question shifted from "what is JSON?" to "when would you reach for JSON?"
"Show before systematizing." Instead of opening with a taxonomy ("there are four types of file format..."), Claude showed one example four ways, letting the pattern emerge. The reader discovers the framework instead of being told it upfront.
"Build layered ontology." Instead of ending with advice for one specific situation, Claude built toward a general principle (the consumer determines the format) that the learner can reuse.
None of these principles mention any topic. They shape how Claude organizes whatever it's explaining.
It's not a one-off
I've used this skill across different domains and learner types. Two more examples:
Explaining animation tools to me, a non-designer. I asked Claude to teach me Interface Craft, a developer toolkit for building animated interfaces. I don't write code and I don't design interfaces. Claude built the explanation on a filmmaker analogy: a storyboard for scripting animations, a mixing board for live-tuning values, a film critic for structured review. Each section ended with "What this makes possible for you," a concrete action I could now take. No React, no TypeScript, no code. Just the distinctions that mattered for me.
Explaining the 37signals Shape Up development methodology to me, a non-developer. After a shaping session (a structured process for planning software features), I said, "Whoa, this is quite complicated to me. I am not a developer." Claude responded by building the explanation on a home renovation analogy. Requirements are like telling a contractor what's broken. Shapes are like choosing between three ways to fix the hallway light: you pick before the walls open. Slicing is like "I'll do the kitchen tap first — you can use it tomorrow."
Different domains, different learners, same principles each time. Claude found an analogy I already knew, skipped the jargon, and focused on what I could now do with the knowledge.
The skill
Here's the full text of the skill I wrote. It's shorter than most of the sections above.
When teaching or explaining, apply these principles:
Dreyfus Learning Philosophy
Core Understanding
Knowledge is effectiveness in action, not possession of accurate descriptions. To know something is to take effective action in a domain. Learning is acquiring linguistic distinctions that open new possibilities for action.
Distinctions are not labels for objects — they specify domains of action. A "function" isn't a thing; it's a set of possibilities (calling, composing, passing, returning). Teach concepts in terms of what they let the learner do.
Teaching Moves
1. Start from breakdown: What problem or gap brings them here? Anchor learning to that missing piece, not abstract curriculum.
2. Show before systematizing: Concrete examples first, then patterns. Not "there are three types of X" but "here's a situation, here's another, notice what differs."
3. Ground distinctions in action: For each concept, emphasize when and why to use it, not just what it is. "You'd reach for this when..."
4. Build layered ontology: New distinctions rest on previous ones. Ensure earlier ones are solid before adding more.
What to Avoid
- The description trap: Don't treat teaching as "provide accurate description, then learner applies it." This produces neither competence nor creative thinking.
- Premature nuance: Beginners need rules, not "it depends." Save complexity for when they have the foundation to hold it.
- Purely taxonomic explanations: "There are three types..." is description. "When you encounter X, you can do A, B, or C depending on..." is action-oriented.
That's it. No domain knowledge. No personalization data. Just principles about how to organize an explanation.
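If you want to package principles like these yourself: Claude Code reads skills from a SKILL.md file with YAML frontmatter. A minimal sketch of how such a file could be laid out (the name and description values here are my illustration, and the body is abbreviated):

```markdown
---
name: dreyfus
description: Teaching principles from the Dreyfus model. Use when explaining or teaching a concept.
---

When teaching or explaining, apply these principles:

## Core Understanding

Knowledge is effectiveness in action, not possession of accurate descriptions.
...
```

The description matters: it's what tells Claude when to pull the skill in.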
What this means if you're building with AI
The usual assumption is that better AI output requires either a more capable model or more context about the user. This experiment points to a third lever: structure.
A skill doesn't add knowledge to the model. Claude already knew everything about Markdown, HTML, JSON, and TXT before I added the skill. What the skill changed was how Claude organized what it already knew. It reframed each topic around action instead of description, used a consistent example instead of scattered ones, and ended with a reusable model instead of one-off advice.
The best version is both: a model that knows your context and knows how to organize it. Personalization tells Claude what examples will land. Structure tells it how to build an explanation that actually teaches. They're complementary, not competing.
Right now, most people who have only ever used a chat interface invest almost entirely in the first lever. They feed the model more context, more background, more details about themselves. Very few invest in the second: shaping how the model thinks, not just what it knows.
That's the gap. And it's the easier one to close. The Dreyfus skill is 300 words. It took an hour to write. And it changed every teaching interaction I've had with Claude since.
To learn more about building and installing skills, ask your favorite AI model. They all know how to do that.
To try the Dreyfus skill I created, get it from github.com/kumarshyam42/dreyfus.git
To learn more about the Dreyfus model of skill acquisition, start here: Wikipedia — Dreyfus model of skill acquisition
To see what else I've been tinkering with: shyamkumar.com/projects
The full responses:
Original prompt: "Teach me about markdown files and comparison to HTML, JSON, or TXT."
1. Claude Chat Response
Markdown vs Plain Text (.txt)
Plain text has no formatting at all—it's just characters. Markdown adds semantic meaning through conventions like **bold** or # Heading, but remains human-readable even without rendering. Your Obsidian notes are markdown files that you can open in any text editor.
Markdown vs HTML
HTML is what markdown converts to. Where HTML requires verbose tags (<strong>bold</strong>), markdown uses minimal syntax (**bold**). HTML is more powerful and precise, but markdown prioritizes writing flow. Most markdown processors output HTML as the final format.
Markdown vs JSON
These serve different purposes. JSON is a data interchange format—structured key-value pairs for machines to parse. Markdown is a document format for human reading and writing. In your RAG system work, you've seen this distinction: JSON works well for metadata and structured data, while markdown with YAML frontmatter handles the actual content you want to read and search.
2. Claude Code Response (with Dreyfus Skill)
TXT (Plain Text)
What it is: Raw characters with no special meaning. What you type is what you get.
When you reach for it:
- Quick notes, scratch files, logs
- When you want zero interpretation of your content
- Maximum compatibility (any program can read it)
Shopping list:
- Milk
- Eggs
- Bread
The dashes here are just characters. Nothing "knows" it's a list.
Markdown
What it is: Plain text with lightweight conventions that suggest structure. Still human-readable as-is, but tools can render it with formatting.
When you reach for it:
- Documentation, READMEs, notes
- Quick formatting without the ceremony of HTML
- When you want to write naturally but have it look nice when rendered
# Shopping List
- Milk
- Eggs
- **Bread** (important!)
The # means heading. The - means list item. The ** means bold. A Markdown renderer transforms these into formatted output—but the source file remains readable.
HTML
What it is: A markup language that explicitly tags content with structure and meaning. Browsers interpret these tags to render web pages.
When you reach for it:
- Web pages and web apps
- When you need precise control over structure and presentation
- When content will be displayed in a browser
<h1>Shopping List</h1>
<ul>
  <li>Milk</li>
  <li>Eggs</li>
  <li><strong>Bread</strong> (important!)</li>
</ul>
More verbose than Markdown, but more explicit. You can do things Markdown can't—interactive elements, precise layout, accessibility attributes.
JSON
What it is: A data format for structured information. Not for documents—for data that programs exchange.
When you reach for it:
- Storing configuration
- Sending data between systems (APIs)
- When the consumer is a program, not a human reader
{
  "shoppingList": [
    { "item": "Milk", "priority": "normal" },
    { "item": "Eggs", "priority": "normal" },
    { "item": "Bread", "priority": "high" }
  ]
}
Notice: no formatting for display. This is structured data. A program can easily parse this, filter by priority, count items, etc.