Teaching Claude How You Think: Custom Response Modes and the SOC Triage Workflow That Followed
A few weeks into using Claude Code, I got tired of sifting through walls of supplementary context to find the one thing I actually needed. So I changed how Claude talks to me — and accidentally built a SOC triage copilot.
The Problem With Default Responses
Out of the box, Claude gives you thorough answers. That’s usually a good thing. But when you’re in the middle of an investigation or deep in a codebase, “thorough” can mean burying the critical answer under three paragraphs of background you already know.
After a few weeks, I noticed a pattern in what I actually wanted:
- The immediate, relevant answer to my question — no preamble, no “great question!”
- Supplementary information limited to the most relevant details — not everything Claude could say, just what matters right now.
- 2-5 suggestions at the end — either follow-up queries I might want to run, or next actions to take.
So I built that into my Claude Code guidance.
Setting Up a Learning Response Mode
Claude Code lets you configure custom instructions that shape how it responds. Think of it as teaching Claude your working style before you start a session.
The structure I landed on:
- Lead with the answer. If I ask a question, the first thing I should see is the answer — not context, not caveats, not a restatement of my question.
- Relevant context only. If there’s important supplementary information, include it — but be selective. One or two key details, not a textbook chapter.
- End with options. Give me 2-5 concrete next steps. These could be follow-up queries, alternative approaches, or actions I should take based on the answer.
This sounds simple, but it fundamentally changed the interaction pattern. Instead of a monologue I had to parse, every response became a decision point: here’s what you asked, here’s what matters, here’s where you could go next.
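For reference, here’s roughly what that looks like as a section in a guidance file. The wording below is a sketch rather than a copy of my actual file, so treat it as a starting point:

```markdown
## Response style

- Lead with the direct answer to my question. No preamble, no restating the question,
  no "great question!" filler.
- Follow with at most one or two pieces of supplementary context, and only if they
  change how I should act on the answer.
- End every response with 2-5 concrete options under a "Next steps" heading:
  follow-up queries, alternative approaches, or actions to take.
```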
The SOC Triage Copilot
This structure instantly unlocked something I hadn’t planned for: a security operations triage workflow.
Here’s how it works in practice. An alert fires. I have:
- Alert details — the raw signal, source IPs, timestamps, triggered rules.
- Organizational knowledge — where our telemetry lives, what’s normal in our environment, which systems talk to which.
Claude has:
- Rapid query generation — it can write KQL, SPL, or whatever query language I need faster than I can type it.
- Suggested next steps — based on the alert type and initial findings, it proposes the logical next investigative actions.
The combination is powerful. I feed Claude the alert context; it generates the first triage query and suggests where to look next. I run the query, feed back the results, and Claude adjusts: new queries, refined hypotheses, updated next steps.
Every response follows the same structure: answer first, relevant context, then options. It keeps the triage moving forward instead of getting bogged down in explanations I don’t need during an active investigation.
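To make the shape of a single turn concrete, here’s a condensed, entirely hypothetical example. The alert, account, and findings are invented; what matters is the answer-context-options structure, not the specifics.

```markdown
**Alert:** impossible travel for account jdoe, two successful sign-ins 40 minutes apart.

**Answer:** The second sign-in source falls in our corporate VPN egress range, so this is
most likely VPN use rather than account compromise. Confirm before closing.

**Context:** Both sign-ins passed MFA from the same registered device.

**Next steps:**
1. Pull the last 7 days of sign-in locations for jdoe to establish a baseline.
2. Check whether other accounts triggered the same rule from that egress IP today.
3. If the device doesn't match a managed asset, revoke sessions and escalate.
```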
When Claude Is Wrong (And Why That’s Valuable)
Every day there are alerts where Claude wants to go in a direction I know is wrong. Maybe it suggests looking at a data source we don’t have, or it assumes a network topology that doesn’t match our environment. This happens because Claude doesn’t have the organizational knowledge that lives in my head.
But here’s the thing — that friction is actually productive.
When I correct Claude’s direction and feed that organizational context back into the guidance, three things happen (the same pattern I described in turning tribal knowledge into living procedures: encode what you know so the next run is better):
- Future investigations get faster. Claude stops suggesting dead ends it’s already been corrected on.
- Knowledge silos break down. The organizational context that used to live only in my head is now encoded in Claude’s guidance — guidance that I share with my team via git repositories.
- Static SOPs become living procedures. Instead of documentation that describes what you should do (and gets stale), the guidance becomes a just-in-time procedure that actively shapes how investigations unfold.
That last point deserves emphasis. We’ve all written runbooks that nobody reads. We’ve all maintained SOPs that are outdated by the time the ink dries. Claude guidance turns those static documents into something that’s actually used in the moment, and it evolves every time someone feeds a correction back in.
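As a sketch of what feeding a correction back looks like, an entry along these lines (the data sources and naming are made up) might get appended to the guidance after a dead-end suggestion:

```markdown
## Environment notes (corrections from past investigations)

- We don't collect endpoint telemetry from the lab subnet; don't suggest process-level
  queries for those hosts. Firewall logs are the best available signal there.
- Proxy logs live in the `web_proxy` index, not `network_traffic`. Start there for
  anything involving outbound HTTP.
- Accounts prefixed `svc-` only authenticate from datacenter ranges; interactive
  sign-ins from those accounts are always worth a closer look.
```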
The Second Use Case: Learning New Tech
The same response mode pays off in a different way when you flip the context. Instead of triage, switch into learning mode and dive into a development or DevOps project built on a language or technology you’re weak in.
Same structure applies:
- Answer first — what does this code/config/command do?
- Relevant context — why this approach over alternatives, what the gotchas are.
- Next steps — what to try next, what to read, what to configure.
Claude can put the pieces together and explain them as you go. It’s not just generating code — it’s teaching you the mental model behind the code while you’re building something real.
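If you want to push the mode further toward teaching, a couple of lines in the guidance are enough. These are hypothetical additions, worded for illustration:

```markdown
## Learning mode

- When I'm working in a language or stack I've told you I'm weak in, name the idiom or
  pattern you used and explain in one sentence why it beats the obvious alternative.
- Call out one gotcha I'm likely to hit, if there is one. Otherwise skip it.
```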
This is the same principle from my first post: point AI at a real problem, not a tutorial. The learning sticks because the stakes are real.
Sharing Guidance as a Team Practice
One more thing worth calling out: Claude guidance files live in your repo. They’re version controlled. They’re shareable. They’re reviewable.
This means your team’s collective knowledge about how to investigate alerts, how to navigate your infrastructure, how to handle edge cases — all of it can live in a shared guidance file that everyone’s Claude instance benefits from.
It’s knowledge transfer that actually works, because it’s not a wiki page someone has to remember to read. It’s baked into the tool people are already using.
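Concretely, Claude Code reads a CLAUDE.md file from the project root, so shared guidance can live right next to the code and runbooks it describes. The layout below is just one way to arrange it, with a hypothetical runbook file for alert-specific detail:

```
repo/
├── CLAUDE.md               # response style + environment notes, reviewed like any other change
├── runbooks/
│   └── phishing-triage.md  # referenced from CLAUDE.md for alert-specific steps
└── src/
```

Because the file is version controlled, a correction from one analyst shows up as an ordinary reviewable change for everyone else.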
Drafted with assistance from Claude. The response mode described in this post was actively used during the writing process.