Discussion about this post

Daria Cupareanu

I’ve run into this exact issue. At the start of this year, I was working on a career manager MVP that matched users (based on a personality test) with careers aligned to their traits and interests. We tested it with a few hundred people, and during that early phase, I used GPT to prototype the output, feeding in the same structure: new user + their test results + career matches.

But with memory on, GPT started blending responses from past users. Results got weird, and accuracy dropped fast. I eventually had to turn memory off to keep things clean.
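For anyone hitting the same thing: one workaround is to prototype through the API instead of the ChatGPT app, since each API call is stateless unless you pass prior messages yourself. A rough sketch of what that can look like, assuming the OpenAI Python SDK; the model name, field names, and prompt structure here are illustrative, not our actual code:

```python
# Rough sketch: generating the career-match output via a stateless API call,
# so each user is a fresh request and nothing carries over between test users.
# Assumes the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def career_summary(user_profile: dict, test_results: dict, matches: list[str]) -> str:
    # Each call sends only this user's data; no memory of previous users.
    prompt = (
        f"User profile: {user_profile}\n"
        f"Personality test results: {test_results}\n"
        f"Top career matches: {', '.join(matches)}\n"
        "Explain briefly why these careers fit this user."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

Because every request contains only that one user's profile, test results, and matches, there is nothing for the model to blend across people.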

Once we shipped the product, the issue disappeared, of course, but yeah, memory loops are real.

Totally agree that part of the solution is being more intentional: reviewing what’s stored, and adding clearer memory instructions like:

“When we’re working on X, update your memory to Y”

Thanks for mapping this all out, super relevant for anyone building with GPT in real-world workflows.

Karen Blue

Hmm. You're using ChatGPT. I've been using Copilot for some time, and I don't think it has a memory on/off function. So I've begun saving meaningful feedback in Word documents so I can "refresh" its memory. I'm writing a blog, Reinvent Your Life, on Substack, and the bot has been very helpful with specific requests based on my voice and style from previous posts. I've used your prompt to get notes from my previous posts -- worked like a charm. Thanks.

