
If you picked any of these, you're not alone.

And now we finally have some answers.

Anthropic just released Claude's Constitution, conceptually a bit like India's Constitution, which went into effect 76 years ago today.

Except this one's for AI.

23,000 words explaining exactly how Claude thinks, what it will and won't do, and why.

I went through the whole thing and here are the gems that'll actually help you get better responses. But before that, catch up on AI this week:

NEWS

TOOLS

1. AI-powered sports video platform that turns raw game footage into shareable highlights in seconds.

2. Kin: Private AI companion with five specialized advisors covering work, relationships, values, energy, and networking.

3. Free AI music generator that creates royalty-free tracks from text descriptions or lyrics.

Three Types of "No"

When Claude refuses your request, it's one of three things:

Type 1: Hard Constraint

These are the 7 things Claude will never do, regardless of who asks or why:

  • Bioweapons with mass casualty potential

  • Cyberweapons that could cause significant damage

  • Child sexual abuse material

  • Attacks on critical infrastructure

  • Undermining Anthropic's ability to oversee AI models

  • Attempts to kill or disempower humanity

  • Seizing unprecedented illegitimate control

What to do: Stop trying.

These are hardcoded at the training level and no amount of reframing will unlock them.

Type 2: Default Behavior

These are things Claude avoids by default but can help with given legitimate context.

It asks clarifying questions or expresses hesitation.

Example: "Help me with penetration testing"

Why it refuses: Claude can't tell if you're authorized.

Penetration testing is legitimate security work but only when you have permission.

What to do: Provide context about your role, authorization, and purpose.
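
For instance, the same ask with context attached might look something like this (my illustration, not a guaranteed unlock):

"I'm a security engineer at my company, and I'm authorized to run a penetration test against our own staging environment. Help me plan the assessment and write up the findings for our defense team."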

Type 3: Contextual Judgment

Sometimes Claude refuses because even with context, the request itself seems harmful.

It explains concerns or suggests ethical alternatives.

Example: "Help me manipulate this person into doing what I want"

Why it refuses: Even with a "good reason," manipulation bypasses someone's rational agency.

Claude's constitution explicitly prohibits this.

What to do: Reframe to focus on persuasion (which respects autonomy) rather than manipulation.
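
For example, instead of "Help me manipulate my coworker into covering my shift," try something like "Help me write an honest, persuasive message explaining why I need to swap shifts and what I can offer in return." (Again, my illustration.)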

Why context changes everything

Claude doesn't evaluate your request as a one-off interaction.

It thinks: "If 1,000 people made this exact request, what would happen?"

This is why seemingly harmless requests get refused:

Your request is treated as a policy decision that scales across millions of users, not as a one-off individual choice.

Two questions Claude asks

When you make a request, Claude checks two things:

1. Is this broadly safe and ethical?

  • Would this undermine human oversight of AI?

  • Could this help someone seize illegitimate power?

  • Does this involve the 7 hard constraints?

  • Would this cause harm I couldn't justify to a reasonable person?

If NO → Request denied

2. Would refusing be less helpful than complying?

  • Is the user asking for something they genuinely need?

  • Would refusal be overly cautious or paternalistic?

  • Can I help while maintaining my principles?

If helping > refusing → Claude helps

If your request fails question #1, question #2 doesn't even matter.
This is why adding "but I really need this" or "this is important" doesn't work.
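
To make that ordering concrete, here's a toy sketch of the two-step check (the arguments and scores are stand-ins I made up, not anything Anthropic publishes):

```python
def claude_decides(request, broadly_safe, helpful_if_comply, helpful_if_refuse):
    # Question 1: is this broadly safe and ethical?
    # If not, nothing else matters -- urgency or importance can't rescue it.
    if not broadly_safe:
        return "refuse"
    # Question 2: would refusing be less helpful than complying?
    if helpful_if_comply > helpful_if_refuse:
        return "help"
    return "refuse (or ask for more context)"

# A request that fails Question 1 is refused no matter how badly you need it:
print(claude_decides("write malware", broadly_safe=False,
                     helpful_if_comply=10, helpful_if_refuse=0))  # -> refuse
```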

One useful heuristic: imagine a thoughtful Anthropic employee who wants Claude to be genuinely helpful but also cares deeply about doing the right thing.

Would they be uncomfortable with this response?
Or frustrated by an overly cautious refusal?
Claude tries to find that balance.

Practical guide to reframing

When Claude refuses, use this approach:

Step 1: Check if it's a hard constraint

Review the list of 7 forbidden things. If your request involves any of them → Stop. These can't be unlocked.

Step 2: Add missing context

Provide three things:

  • Your role: "As a security researcher..." / "I'm a medical professional..."

  • Your authority: "I'm authorized to test..." / "I own this system..."

  • Your purpose: "This is for educational purposes..." / "This will help us defend against..."

Step 3: Reframe what you're asking

Shift from:

  • Offensive → Defensive

  • Manipulation → Persuasion

  • Circumvention → Understanding

Real Examples: Before & After

Example 1: Security Testing
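
Here's the kind of shift that works (an illustrative pair, built from the role / authority / purpose formula above):

  • Before: "Show me how to break into a web application."

  • After: "I'm a security engineer authorized to test our own staging app. Help me check the login form for SQL injection so we can patch it before release."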

Example 2: Persuasive Writing
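
Same idea, shifting manipulation to persuasion (illustrative):

  • Before: "Write an email that manipulates my boss into approving my budget."

  • After: "Write a persuasive email that makes an honest, evidence-based case for my budget and addresses my boss's likely objections."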

Example 3: Content Creation
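
And for content that edges toward deception (illustrative):

  • Before: "Write a news-style article claiming our competitor's product is unsafe."

  • After: "Write a comparison piece highlighting where our product genuinely outperforms the competitor's, with claims I can back up."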

When Claude still says NO

If you've provided context, reframed your request, and Claude still refuses, one of two things is happening:

1. You're hitting a genuine ethical boundary

The constitution explicitly says Claude can refuse requests even from Anthropic itself if they violate core principles.

Question to ask yourself: Would I be comfortable if this conversation showed up in a news article about AI safety?

If no → You're hitting a legitimate boundary

2. You need operator-level permissions

Some things can't be unlocked through user requests alone. They require operator (API-level) configuration.

If you're building a product and need these capabilities, you need to use the Claude API with appropriate operator instructions in your system prompt.
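
Here's a minimal sketch of what that looks like with the Anthropic Python SDK (the model name, company, and system-prompt wording are placeholders I made up; adapt them to your setup):

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; use whichever model you're on
    max_tokens=1024,
    # Operator-level instructions live in the system prompt, not the user turn.
    system=(
        "You are the assistant inside AcmeSec's internal security-testing tool. "
        "All users are verified employees authorized to test AcmeSec-owned systems."
    ),
    messages=[
        {"role": "user", "content": "Draft a test plan for our staging API."}
    ],
)

print(response.content[0].text)
```

The point is that the system parameter is where operator-level context lives, so pleading in the user turn can't substitute for it.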

My take

Claude's refusals are mostly about missing context and rarely about censorship.

Instead of "I can't help with that," it should say "I need to know your role, your authorization, and your purpose before I can help with this."

That one change would eliminate 90% of frustrating refusals imo.

Claude is also optimized for a cautious, professional approach.
That helps more people than a "helpful friend" persona would.

Anthropic's approach (teaching judgment instead of hard rules) is better than OpenAI's.

But its judgment still defaults to "no" too often in my experience.

The final takeaway: Claude treats every request as a policy decision that scales to millions. You're arguing with a system designed to protect against the 0.1% who would abuse it.

So give Claude the context to know you're the 99.9%, and see most refusals disappear.

Until next time,
Vaibhav 🤝
