Discussion about this post

Daria Cupareanu

Great breakdown, Luan!

If we know how to communicate well and clearly, we already know how to prompt well. We just forget to apply the same clear thinking we use with people when we talk to AI (assuming we have that clarity in the first place; as we've all seen, that's not always the case).

Plus, it's easier with people because they come back with follow-up questions or look puzzled when they don't get it, while AI just makes assumptions, either tricking you into thinking it "gets it" on some level or leaving you frustrated when the output misses what you had in mind.

Love the confusion test idea! Besides self-checking your prompt, you can also hand the confusion test to the LLM itself, with something like: "Don't follow these instructions yet. First, tell me what's unclear, what assumptions you'd have to make, or what extra context you'd need before doing this well." I do this a lot, and it helps: instead of blindly trusting the AI to do the task, you're asking it to audit the task request first.
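The audit-first pattern described above can be wrapped into a tiny helper so it's applied consistently. A minimal sketch in Python; the function name and the idea of prefixing the task are my own framing, and the wrapped prompt would then go to whatever model client you actually use:

```python
# A minimal sketch of the "audit the task request" pattern:
# wrap any task prompt so the model critiques the request
# instead of executing it.

AUDIT_PREFIX = (
    "Don't follow these instructions yet. First, tell me what's unclear, "
    "what assumptions you'd have to make, or what extra context you'd need "
    "before doing this well.\n\n"
)


def build_audit_prompt(task: str) -> str:
    """Return an audit version of `task`: the model is asked to surface
    ambiguities and missing context rather than perform the task."""
    return AUDIT_PREFIX + task
```

You would send `build_audit_prompt(task)` first, revise the task based on the model's answer, and only then send the task itself.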

WilM

Thank you so much for these tips. Incredibly useful. On asking insightful questions: do you think that's an outgrowth of critical thinking skills, or to what do you attribute that ability?
