To get the best results using AI, provide clear and specific instructions, and always review the responses to ensure accuracy.
5 Things You Can Do to Improve Your Use of AI, According to Experts
1. Provide Clearer Instructions to AI
Because generative AI feels conversational, people often write short, imprecise prompts, much as they would when chatting with a friend. The problem is that AI systems can misinterpret such instructions: they lack the human ability to read between the lines.
To illustrate the issue, Maarten Sap, an assistant professor at Carnegie Mellon’s School of Computer Science, explained that he once told a chatbot he was reading a million books. The chatbot took the phrase literally, missing the obvious hyperbole. Sap notes that his research found large language models (LLMs) fail to understand non-literal references more than 50% of the time.
The best way to avoid this issue is to clarify your prompts: be explicit and precise. Chatbots function like assistants, so you need to tell them exactly what you want to achieve. Writing instructions this way takes more effort, but the results will align far better with your expectations.
2. Verify Your Answers
If you’ve ever used an AI chatbot, you know that it can “hallucinate,” meaning it generates incorrect information. These hallucinations can occur in several ways, such as:
- Providing factually incorrect answers
- Inaccurately summarizing information
- Agreeing with incorrect facts shared by a user
According to Maarten Sap, hallucinations are most common in specialized fields like law and medicine, where hallucination rates can exceed 50%. The challenge is that these errors can be difficult to detect, because the answers are presented in a way that sounds logical even when they are completely wrong.
AI models often reinforce their responses with phrases like “I am confident,” even when the information is wrong. One research study found that AI models expressed confidence while giving incorrect answers 47% of the time.
The best way to protect yourself from hallucinations is to double-check responses. You can cross-check information with external sources or rephrase your question to see if the AI provides the same answer.
While it may be tempting to rely on ChatGPT for topics you’re unfamiliar with, it’s easier to spot errors when your questions remain within your area of expertise!
3. Keep Your Data Confidential
Generative AI tools are trained on vast amounts of data, and they need fresh data to keep learning and improving. As a result, many AI providers use the conversations you have with their models to further refine their training.
The issue is that these models sometimes reproduce the training data in their responses. This means that your private information could potentially appear in responses given to other users. The best way to maintain good AI hygiene is to avoid sharing any sensitive or personal data.
Many AI tools, including ChatGPT, offer options that allow users to opt out of data collection. Disabling data collection is always a good choice, even if you don’t plan on sharing sensitive information.
4. Be Mindful of How You Communicate with AI Models
The advanced capabilities of AI and its ability to interact using natural language have led some people to overestimate its power.
Anthropomorphism—attributing human characteristics to AI—can be a slippery slope. If people perceive these AI systems as human-like, they may be more likely to entrust them with greater responsibilities or share sensitive data.
According to experts, one way to avoid this issue is to stop assigning human attributes to AI models when discussing them. For example, instead of saying, “The model thinks you want a balanced response,” Maarten Sap suggests a more precise alternative: “The model is designed to generate balanced responses based on its training data.”
5. Be Cautious About When You Use AI Language Models
Although these models seem capable of assisting with a wide range of tasks, there are many situations where they perform poorly. And while benchmarks exist to evaluate them, those benchmarks cover only a small fraction of real-world AI interactions.
Additionally, AI models may not function equally well for everyone. There have been documented cases where LLMs produced racially biased responses, highlighting that these models may not always be suitable for certain use cases.
Thus, the best approach is to be thoughtful and cautious when using AI models.