If you’ve used ChatGPT, Perplexity, Gemini, or another AI assistant long enough, you’ve probably noticed the same things I have: sometimes you get a very good answer, other times something vague, generic, robotic, or completely off-topic.
This happens because, like any sophisticated tool, an AI assistant needs careful steering and additional instructions.
Lately I’ve started writing down what works for me: a kind of cheatsheet for situations where the AI assistant goes off the rails, answers generically, or freezes completely. I decided to gather everything into one article; maybe it will be useful for others as well.
Basic rules for getting good answers
An AI assistant like ChatGPT responds well when it knows exactly what you’re aiming for. That’s why it’s useful to phrase your questions in a structured way. In short, it’s good to always include:
- a clear task,
- the context in which you’re using it,
- the criteria you’ll use to evaluate the answer,
- constraints (e.g. budget, time, technical level),
- examples.
This is the short version; the dedicated article covers how to write a fully optimized prompt for an AI assistant.
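To make the structure concrete, here’s a minimal sketch in Python, assuming the official OpenAI SDK. The model name and the laptop-shopping task are invented for the example; the point is only the shape of the prompt: task, context, criteria, constraints, example.

```python
# A minimal sketch of a structured prompt, sent via the OpenAI Python SDK.
# The model name and the example task are assumptions made for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = """Task: Recommend a laptop for software development.
Context: I travel weekly and code mostly in Python and JavaScript.
Criteria: battery life, keyboard quality, Linux support.
Constraints: budget under 1500 EUR, must be available in the EU.
Example of a good answer: a shortlist of 3 models, each with a
one-line justification, plus one model to avoid and why."""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use whatever you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The same five-part structure works just as well pasted straight into the chat window; the code only makes the template explicit.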
When the answer is wrong or suspicious
One advantage of AI is that you can ask it to check itself:
- Ask for step-by-step reasoning: how it arrived at that information.
- Ask for sources. ChatGPT is actually quite good at providing the online sources it used in certain conversations. Gemini, on the other hand, often avoids giving clear sources with direct links from the internet.
- Dispute certain points. If the answer is wrong and you contradict it, the AI model will usually reevaluate its answer.
- Ask it to check the logic or calculations. Especially in a long chat with lots of calculations, where you’ve added a lot of information and generated multiple answers, it’s good to give the model an intermediate checkpoint. For example: “This is version 1 of the calculations, we’ll continue from here.” This is useful because otherwise the model may refer back to information from the beginning of the discussion that is no longer relevant.
When the answer is too short
By default, ChatGPT and other AI models tend to over-summarize, sometimes even overusing bullet points. The solution:
- Ask for elaboration, concrete examples.
- Ask for pros and cons.
- Say “continue” until you reach the level of detail you want.
- Ask for a narrative answer — this will usually add more detail.
When the answer is too technical
If you get an answer that sounds like it was written by an angry high school math teacher:
- Ask “explain it to me like I’m 5.” I usually phrase this as “explain it at kindergarten level.”
- Ask for analogies.
- Ask for a visual explanation. The results here are mixed, though: very often it’s not that good at “drawing” the explanation properly. The result is either too simplified, disconnected from reality, or inaccurate. That’s normal; it usually doesn’t have enough guidance.
When it sounds robotic
This is where style comes in. The issue was (and sometimes still is) more common with older models; the newer ones already have a more natural tone by default:
- “Write naturally, conversationally, narratively.”
- “No corporate jargon.”
- “Avoid the phrase X.”
- “Write like an expert in field X or Y.”
When the answer is generic or low quality
Tricks that work:
- Assign a role: “You are a software architect / teacher / tech journalist, you work in field X.” With role assignment, it’s best to do it right at the beginning of the conversation, especially if you have specific data to analyze.
- Specify the format: list, table, analysis, diagrams, etc.
- Specify that you already have knowledge in that field, so it can move to a higher level of explanation. I usually use a phrase like “let’s skip the amateur level.”
- Ask for alternatives: multiple versions of the same answer, so you can choose the one you like best.
When the output is disorganized
The solution: force the structure.
- Ask for tables, lists, bullet points. These are the easiest for it to provide, especially bullet points.
- Ask for an executive summary. This depends on the type of output you’re looking for.
- Ask for prioritization or scores. This is very useful, especially when you want to analyze long data lists. I often ask it to combine multiple criteria into the score, to make the ranking as realistic as possible (see the sketch after this list).
- Ask for diagrams or decision trees. Here too the results are mixed, just like with explanatory images.
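As an illustration of that multi-criteria idea, here is a tiny sketch of what “combine multiple criteria into one score” means in practice. Every item, criterion, and weight below is invented for the example; in a real chat you would describe the criteria and weights in the prompt and let the assistant compute the ranking.

```python
# A toy illustration of multi-criteria scoring. All names, weights,
# and values here are invented for the example.
items = [
    {"name": "Option A", "price": 7, "quality": 9, "support": 6},
    {"name": "Option B", "price": 9, "quality": 6, "support": 8},
    {"name": "Option C", "price": 5, "quality": 8, "support": 9},
]

# Weights express how much each criterion matters (here they sum to 1).
weights = {"price": 0.5, "quality": 0.3, "support": 0.2}

def score(item):
    # Weighted sum of the individual criterion scores.
    return sum(weights[c] * item[c] for c in weights)

# Rank from best to worst by the combined score.
for item in sorted(items, key=score, reverse=True):
    print(f"{item['name']}: {score(item):.2f}")
```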
When ChatGPT completely freezes
I don’t know if you’ve ever been curious about how the various files (docs, text, images, etc.) are generated in a chat with an AI. To generate them, the assistant spins up a virtual environment on the backend infrastructure, where it runs code in various programming languages, for example Python, and uses that code to produce what you asked for.
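For a sense of what that looks like, here is a minimal sketch of the kind of Python such a sandbox might run to produce a downloadable file. This illustrates the general mechanism only; it is not ChatGPT’s actual internals, and the filename and data are invented.

```python
# A toy example of the kind of code a sandbox might run to generate a
# file for you. The filename and the data are invented for illustration.
import csv

rows = [
    {"month": "January", "visits": 1200},
    {"month": "February", "visits": 1350},
]

# Write the data to a CSV file inside the sandbox; the chat UI can then
# offer that file back to the user as a download link.
with open("report.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["month", "visits"])
    writer.writeheader()
    writer.writerows(rows)
```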
The problem is that this virtual environment sometimes crashes. And the AI assistant doesn’t have permission to restart it, so it can no longer generate what you’re asking for in that particular chat.
Solutions:
- Rephrase the request completely.
- Close the conversation and start another one. If it still doesn’t work after rephrasing, pretty much the only thing left to do is to start over in a new chat, where a fresh virtual environment will be spun up. It can be frustrating to lose the information already written or generated in the previous chat; some of it will be retained in the AI model’s memory, if you have that enabled, but you’ll still need to re-enter some of the instructions.