Great breakdown, Luan!
If we know how to communicate well and clearly, we also already know how to prompt well. We just forget to apply the same clear thinking we use with people when we talk to AI (assuming we have that skill in the first place, which, as we've all seen, isn't always the case).
Plus, it's easier with people because they will come back with follow-up questions or look puzzled when they don’t get it, while AI just makes assumptions, tricking you into thinking it ‘gets it’ on some level, or leaving you frustrated when the output misses what you had in mind.
Love the confusion test idea! Besides self-checking your prompt, you can also hand the confusion test to the LLM itself, with something like: "Don't follow these instructions yet. First, tell me what's unclear, what assumptions you'd have to make, or what extra context you'd need before doing this well." I do this a lot, and it helps: instead of blindly trusting the AI to do the task, you're asking it to audit the request first.
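If anyone wants to bake this into a script rather than typing it each time, here's a minimal sketch. The function name and the exact preamble wording are just illustrative (the preamble paraphrases the comment above); the returned string is what you'd send as the user message to whatever chat LLM API you use.

```python
def make_audit_prompt(task: str) -> str:
    """Wrap a task in a 'confusion test' preamble so the LLM audits
    the request before attempting it. Wording is illustrative."""
    preamble = (
        "Don't follow these instructions yet. First, tell me what's "
        "unclear, what assumptions you'd have to make, or what extra "
        "context you'd need before doing this well.\n\n"
    )
    return preamble + "Task: " + task


# Example: build the audit version of a vague request.
print(make_audit_prompt("Summarize this report for executives."))
```

Once the model replies with its list of ambiguities, you answer those, then send the task again without the preamble.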
Great tip on using the LLM itself to help clarify the prompt before diving in. That extra layer of reflection can make a big difference in getting more accurate results. Really appreciate you sharing it!
Thank you so much for these tips. Incredibly useful. On asking insightful questions: do you think that's an outgrowth of critical thinking skills, or what else would you attribute that ability to?
In my case, asking insightful questions came from communicating and working with AI over time. At first, I also asked vague questions like "Give me a better idea" and got generic results. Then I tried again, adding more context, like my goals. Gradually, I learned how to prompt AI in a way that produced results closer to what I wanted, with less back and forth. Here's one tip that has helped me improve a lot: don't be afraid to ask the wrong questions. Just ask, keep the conversation going, and improve gradually.
Oh, yeah, great advice! I often do find that my questions get better as I dig deeper. Thanks!