7 Comments
Robert McGuire

Love the disclosure test. I think a lot of people concerned about AI taking our jobs should be more worried about the more mundane risk of us just giving our jobs away to AI — for example, the job of reviewing, making decisions and taking responsibility, as you describe.

Joel Salinas

Thank you, Robert, and yeah, that is so true. I appreciate you engaging.

Alex Randall Kittredge

The framing of this story does a great job surfacing accountability, but I’m struck by how clean the narrative is: principled lab, overreaching state, clear moral high ground for leaders who “own” what AI does in their name.

In practice, though, most leaders are operating in messy, highly constrained environments where “walk away from the contract” is not a realistic option, and where AI use is often mandated or embedded in vendor stacks they don’t really control.

How would you apply your accountability principle to those far more common cases where a leader can’t simply refuse the system, but also can’t completely vouch for what’s happening under the hood?

Joel Salinas

That's a good point. There was also the risk that the government would simply conscript Claude into the "war effort." When that happens, I think it's up to the leader to set their own guardrails, even if that's just disclosures or risk limitations. You're right to point out that it's often not as clean as this. In fact, the majority of the time it's not!

Alex Randall Kittredge

Thanks for the thoughtful reply. We're all dealing with uncharted territory in this "new normal."

John Brewton

The real question is not what AI can do, but what you are willing to own.

Joel Salinas

Exactly, and no AI will take the fall for you.