24 Comments
Shamim Rajani:

A key takeaway here is the risk of overreliance on AI. The more we depend on it without questioning or verifying, the more we slowly distance ourselves from our own critical thinking.

One practice I follow is avoiding prompt templates. Instead, I begin by adding my own thoughts, context, and perspective. That makes it easier to critically evaluate both the question I’m asking and the AI's response.

Joel Salinas:

Overreliance on AI is a true danger: essentially, not understanding its limits and trusting it more than one should. Thank you for sharing.

Rodney Daut:

Shamim, that's a great practice for keeping the mind sharp. I also like having the AI ask me questions to help me realize what I might not have thought of. That way it becomes a thinking partner - like Socrates. :)

Nancy Hendrickson:

As a freelance writer with decades of experience, I def don't need AI to write for me, but I do love the brainstorming capacity. However, as you reference, AI without truly solid prompting can really go off the rails. It also seems to get 'stuck' in a response and won't move off it. It's like a good office assistant who occasionally orders 15 pizzas because they forgot they already ordered 15 a few minutes earlier.

Joel Salinas:

I love the office assistant example! Thank you for sharing, Nancy!

Nancy Hendrickson:

You’re welcome. Personally, I find AI is better at analysis than creativity. 😀

Joel Salinas:

Yes!! Same for me, I add the creativity ;)

Rodney Daut:

Yes, AI is great for brainstorming. It's really good at coming up with new angles for headlines too.

And yes, it can also get pretty dumb at times. With earlier ChatGPT models, I'd ask for a simple task like counting the number of "ands" and "buts" in a text, and it came up with the wrong answer over and over. But it kept saying it was right.

I finally said, "You are consistently failing at this task. What would it take for you to do it right?" It said it would need to run Python code to do it right. So I had it create the code and run it, and voilà! It got the task done. :)
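
The script itself was trivial; something along these lines would do it (a reconstruction from memory, not the exact code ChatGPT produced):

```python
import re

def count_words(text: str, words: list[str]) -> dict[str, int]:
    """Count whole-word occurrences of each target word, case-insensitively."""
    counts = {}
    for word in words:
        # \b word boundaries keep "and" from matching inside "band" or "Ands"
        pattern = re.compile(rf"\b{re.escape(word)}\b", re.IGNORECASE)
        counts[word] = len(pattern.findall(text))
    return counts

sample = "Ands and buts: he came, and he saw, but he left. And that was that."
print(count_words(sample, ["and", "but"]))  # {'and': 3, 'but': 1}
```

The point is that exact counting is a deterministic job, so handing it to code sidesteps the model's weakness entirely.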

Neural Foundry:

Brilliant breakdown on skill categories. That Chicago Sun-Times example is wild, but honestly the scarier thing is how often we almost publish the same stuff without realizing it. I've caught myself accepting AI's confident tone as truth more times than I'd like to admit. The instrumental vs core skill divide is spot-on tho; there's real value in knowing enough to catch when output drifts.

Rodney Daut:

I was amazed that the list was published with more fake books than real ones too. That should never happen in any major publication.

Joel Salinas:

Yes that example shocked me too!

Dennis Berry:

AI can generate content that sounds credible, but without verification, it can just as easily spread fiction as fact.

Joel Salinas:

That’s exactly it!

Rodney Daut:

Sadly, too many people start to believe what AI tells them without checking it.

One good thing about noticing AI hallucinations is that once you experience them, you start to become more skeptical of AI outputs.

Of course you have to be aware that you experienced a hallucination in the first place.

John Brewton:

This explains why unchecked speed quietly erodes trust.

Rodney Daut:

Exactly. Going too fast in the wrong direction does no one any good.

Melanie Goodman:

I agree, the real risk is letting outputs slide through without human judgement.

What you describe mirrors what I see with teams who treat AI as a junior assistant, not an authority.

Stanford research has shown large language models can produce confident but incorrect answers in over 25 percent of factual tasks, which makes verification non-negotiable.

The discipline Rodney has built around what to delegate and what to retain feels like the difference between speed and recklessness.

Reducing build time that dramatically only works if quality control stays human-led.

How are you seeing people practically build verification habits into their everyday AI workflows?

Rodney Daut:

One way to verify AI's work is to have another AI fact-check the outputs.

So the person who submitted the article recommending books that did not exist could easily have avoided that by having another AI check that every book actually exists.
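
For example, a quick script against a public catalog like Open Library could flag suspect titles automatically. A rough sketch, with a deliberately fake entry to show what a miss looks like (the titles here are illustrative, not from the actual published list):

```python
import requests

def book_exists(title: str, author: str) -> bool:
    """Ask Open Library whether any record matches this title/author pair."""
    resp = requests.get(
        "https://openlibrary.org/search.json",
        params={"title": title, "author": author, "limit": 1},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("numFound", 0) > 0

reading_list = [
    ("The Great Gatsby", "F. Scott Fitzgerald"),  # known-real control
    ("Imaginary Tides", "A. Nonexistent"),        # deliberately fake entry
]
for title, author in reading_list:
    verdict = "found" if book_exists(title, author) else "NOT FOUND - verify by hand"
    print(f"{title} ({author}): {verdict}")
```

Zero hits doesn't prove a book is fake, but it tells a human editor exactly where to look before publishing.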

When it comes to copywriting: the AI might come up with an interesting angle for a landing page, we humans come up with another, and we split-test to see who is right.

And, no surprise, the AI's copy didn't improve results more than half the time. But it was still worth it, as we tested ideas much faster than before, so the net result was positive.

Joel Salinas:

Yes, hallucinations and their impact on credibility are real!

Karen Spinner:

Great article! As AI advances, it’s easy to forget that even the latest models still hallucinate! And the rates aren’t trivial…

https://github.com/vectara/hallucination-leaderboard

Joel Salinas:

That’s a great chart! Yeah, especially as creators, credibility is huge, and it can be lost so easily.

Byron:

Three hours last night with Gemini-CLI, trying to access sources through the links it had listed for them. My GEMINI.md file tells it to act in Auditor mode, provide links to the sources, and put every output into a table. Instead, it was producing truncated statements and truncated document titles, and leaving out links. The links it did give returned 404 page-not-found errors. Even when Gemini-CLI used webfetch, it would get errors. Finally, Gemini would just give me the Google Search page link, and I had to go look for the sources myself. Even after that, it would still claim it had fixed the problem. I finally went to Manus.im and asked for the same material. Within ten minutes, Manus had returned links that worked, and it included a document, produced yearly by a respected investment bank, that Gemini had totally missed. The Manus report was done in twenty minutes total.

Why did I spend so much time with Gemini? I was trying to learn how to design the GEMINI.md instructions better, to avoid this type of problem in the future. It is really tough when the model doesn't consult the GEMINI.md often enough to maintain its instructions. It kept jumping into CEO mode and making decisions based on the incorrect information it was providing.
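
What I'll probably do next time is script the link audit myself instead of arguing with the model. A minimal, untested sketch of the idea (the example URL is a placeholder):

```python
import requests

def check_links(urls: list[str]) -> dict[str, str]:
    """Probe each URL and record the HTTP status or the error it raised."""
    results = {}
    for url in urls:
        try:
            # HEAD is cheap, but some servers reject it, so retry with GET.
            resp = requests.head(url, allow_redirects=True, timeout=10)
            if resp.status_code >= 400:
                resp = requests.get(url, allow_redirects=True, timeout=10, stream=True)
            results[url] = str(resp.status_code)
        except requests.RequestException as exc:
            results[url] = f"error: {exc}"
    return results

# Flag anything that is not a 2xx before it goes into the sources table.
for url, status in check_links(["https://example.com/annual-report.pdf"]).items():
    marker = "OK " if status.startswith("2") else "BAD"
    print(marker, status, url)
```

That way the 404s surface immediately, instead of after three hours of the model insisting the links are fine.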

Rodney Daut:

Byron, it's so frustrating when the AI says it did the job right when it actually did it wrong, then keeps making errors no matter how many times we re-instruct it. I'm glad you found another AI, Manus.im, that did the job right.

Joel Salinas:

That's really interesting. My personal solution is NotebookLM and Claude.