11 Comments
Daria Cupareanu

I’ve run into this exact issue. At the start of this year, I was working on a career manager MVP that matched users (based on a personality test) with careers aligned to their traits and interests. We tested it with a few hundred people, and during that early phase, I used GPT to prototype the output, feeding in the same structure: new user + their test results + career matches.

But with memory on, GPT started blending responses from past users. Results got weird, and accuracy dropped fast. I eventually had to turn memory off to keep things clean.

Once we shipped the product, the issue disappeared, of course, but yeah, memory loops are real.
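To make the fix concrete, here's a minimal sketch of the stateless per-user flow described above: each request assembles a fresh, self-contained prompt, so nothing from a previous user can bleed in. This is an illustrative assumption of how it might look, not the actual MVP code; the field names and prompt wording are hypothetical, and the resulting message list would be sent to whatever chat API you're using with memory disabled.

```python
def build_messages(user_name, test_results, career_matches):
    """Assemble a brand-new, self-contained prompt for one user.

    No prior conversation or stored memory is reused, so results
    from past users cannot leak into this one.
    """
    return [
        {
            "role": "system",
            "content": "You match users to careers based on their "
                       "personality test results.",
        },
        {
            "role": "user",
            "content": (
                f"New user: {user_name}\n"
                f"Test results: {test_results}\n"
                f"Candidate careers: {career_matches}\n"
                "Summarize the best matches and explain why."
            ),
        },
    ]

# Each user gets an isolated prompt; pass this list as the `messages`
# argument of a fresh chat-completion call (memory off).
prompt = build_messages(
    "Ana",
    {"openness": 0.8, "conscientiousness": 0.6},
    ["UX designer", "Research analyst"],
)
```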

Totally agree that part of the solution is being more intentional: reviewing what’s stored, and adding clearer memory instructions like:

“When we’re working on X, update your memory to Y”

Thanks for mapping this all out, super relevant for anyone building with GPT in real-world workflows.

Luan Doan

Thanks so much for sharing this, real-world cases like yours help people get a much more grounded perspective on how ChatGPT memory actually behaves in practice. Your example really illustrates how memory loops can quietly creep in and mess with accuracy.

Karen Blue

Hmm. You're using ChatGPT. I've been using Copilot for some time, and I don't think it has a memory on/off function. So, I've begun saving meaningful feedback in Word documents so I can "refresh" his memory. I'm writing a blog, Reinvent Your Life, on Substack, and the bot has been very helpful with specific requests based on my voice and style from previous posts. I've used your prompt to get notes from my previous posts, and it worked like a charm. Thanks.

Luan Doan

With your goal, you could try ChatGPT Projects or Claude Projects. You can add guidelines, repetitive feedback, and previous posts to the project, so the AI can refer to them each time you start a new conversation. From what I've seen, Copilot doesn't have a comparable project feature.

Karen Blue

Thanks, Luan, for your feedback. I appreciate it.

Jenny Ouyang

Absolutely important discussion!

I’ve run into the exact same issue, even on the same project, no less. I’ve also noticed that the quality of the very first GPT output often shapes the tone and depth of the entire project. It’s like a butterfly effect in prompting.

Luan Doan

Totally agree, that "butterfly effect in prompting" is such a spot-on way to describe it.

Janina

Read one of mine about Soulmates that I'm quite keen to share with the world. ❤️

https://open.substack.com/pub/janinatran/p/soulmates-myth-of-the-missing-half?r=ow5p4&utm_medium=ios

WilM

Dude, you are amazing. Thank you for sharing this freely. 🙏

Wyndo

This is super helpful, thanks!

Had an earlier convo with a friend about this. Memory can be a double-edged sword if done wrong, but it can be impactful if done right, with the right context.

Luan Doan

Yes, totally agree. Like you said, it really depends on how well we understand its strengths and weaknesses. When used intentionally, memory can turn ChatGPT into a powerful partner instead of a risky shortcut.
