Guess the MMORPG business model (originally pioneered by casinos and their ilk) has escaped containment. Figure out what secret no-no hurts and squeeze that for money.
In unrelated news, Claude said something weird to me this morning before it gave me the first response of the day. "If you or someone you know is having a difficult time, free support is available." with a little stylized birdy icon sitting on a human hand. Should I be worried? 😅 Guess Anthropic is building a lawsuit shield now.
really?? wow!!
Yeah, it was entertaining enough that I screenshotted it for later.
You’re absolutely right, Dave, but I hope that gambling addiction and the protocols for exploiting human vulnerabilities will either be criminalized or placed under some kind of audited regulatory framework!
Nah. We'll just start calling them "prediction markets" instead and lobby to have no regulatory oversight about insider trading… 😏 It's not like we haven't been warned by dystopian and science fiction writers about this for decades. 🙄
What struck me most was that the first slide used four psychiatric labels not as a diagnostic tool, but as a targeting map. There's something deeply unsettling about a system that looks at a person's anxiety or impulsivity and asks not "how do we help?" but "how do we use this?"
Mila's point about scale is what really stayed with me. A human consultant exploiting vulnerabilities is already wrong. But an algorithm doing it to hundreds of thousands of people, invisibly and automatically, is a different category of harm entirely.
The PCL framework shows that personalization doesn't have to mean manipulation. I feel the difference comes down to intent: are you adjusting to serve the person, or to pressure them?
Thank you both for bringing this into the open. These are exactly the conversations leaders need to be having before they sign the next vendor contract.
The same points struck me as well, Diamantino! And sadly it’s something I foresee happening more and more, which is why I love how @Mila Agius outlined it and brought it to light so clearly.
Diamantino, you captured the core tension precisely: scale doesn’t just amplify systems – it changes their moral category! The real dividing line isn’t personalization vs automation, but whether a system reduces cognitive load or quietly converts vulnerability into leverage.
As a tech leader, I know for a fact that nothing we do with our platforms is neutral – they are engineered for a particular purpose: to maintain engagement. We no longer have neutral apps.
Unfortunately, yes. 101%
Excellent work, as usual, Mila! You can teach ethics frameworks, but you can’t make someone choose integrity. Here you show that no amount of pressure will move you off how you want to show up as an expert. I won’t even start on how they designed that system - it’s horrifying. Thanks for this collab!
Yes, Anna! Exactly!!!
Thanks, dear Anna, for such a thoughtful reflection – you put your finger on something essential: frameworks can guide behaviour, but integrity is always a choice. I’m truly glad the piece resonated with you, and I appreciate you reading it with such depth!!
Excellent read! It’s great to see the collab here! I’ve been a big fan of Mila for a while now as well!
Thanks, dear Christopher, my friend 🙏
I have too!!
Do you think the fix for "Instant Bank Runs" is adding manual circuit breakers to slow things down, or must we fully automate risk management to match the speed of the rails? :)
I love the point that money now moves faster than we can think—it's a sharp look at a huge systemic gap.
I’ve subscribed and would be happy to support each other.
Jorrit
Just subscribed! Great points, it’s a tough balance to keep
This piece crystallizes a line that’s been blurry for a long time: the moment personalization stops reducing cognitive load and starts increasing psychological leverage.
What really lands for me is the idea of a “pressure map” built on psychiatric labels – once you let a revenue target sit on top of that kind of profiling, almost every optimization will drift toward exploitation, especially once it’s handed to an algorithm and scaled.
Thanks, Alex!
This is a super fascinating read, team! Loved it, although the topic is slightly horrifying. Thank you for raising awareness around the topic and for the work you do in this space, Mila.
Thanks, Dallas :) Joel is an awesome author & consultant, and he knows exactly what he wants as a result!
Disturbing but crucial: ethical AI isn’t optional when systems can exploit human vulnerabilities at scale.
I completely agree with you, as always – the human factor remains a poorly controllable risk that most companies tend to underestimate.
This is so important to understand, not because of the labels or even the tactics, but because of how easy it would be for smart, well-intentioned people to rationalize it. When revenue is the primary metric, and AI removes the friction of scale, the ethical conversation can quietly fade into the background. No one sets out to build a weapon. They set out to build something that works.
What struck me most is that this was not a technology failure. It was a leadership and incentive failure. AI did not invent exploitation; it simply made it efficient and scalable. That distinction matters because it shifts the responsibility back where it belongs. The issue is not whether the model performs. The issue is what it has been designed to optimize.
As leaders, we have to ask ourselves what we are rewarding before we reward performance. Once a system is trained, automated, and embedded into infrastructure, intention becomes part of the architecture. Architecture is much harder to unwind than a campaign or a messaging tweak. If we don't define our guardrails at the beginning, scale will magnify whatever values we encoded, whether we meant to or not.
Personalization can absolutely reduce cognitive load and help people make clearer decisions. I believe that deeply. But if the system is designed to take advantage of someone’s emotional patterns rather than support their agency, then we need to question what we are building. I appreciate the courage it takes to surface this kind of story, because these are the conversations we need to have before systems become too big, too profitable, and too normalized to question.
Thanks, Sherry, I absolutely agree with you: AI is a powerful tool, but as always the key question is whose hands it’s in. In most cases, we can’t look under the hood of “advanced technologies” and end up judging them solely by their performance. I hope lawyers and criminologists are already working on ways to audit systems like these.
Personalization becomes unethical the moment it targets human vulnerabilities rather than serving needs.
💯, dear Dennis. It’s just a pity that many marketers, in pursuit of promotion and higher pay, are willing to equate vulnerability with need. In my practice, I’ve heard statements like this more than once, and every time it ended in a scandal.
What a fascinating read - thanks for introducing me to Mila's work, Joel 👏
Thanks, Alex, for your kind words 🙏
I think an early-warning system for different risks within the Substack space (one that we pick up through reading each other’s articles) is incredibly useful and relevant for all of us!