
How to Use AI Tools for Research, Responsibly

About the Author: Hello, readers! My name is Loki, and I’ve been writing on Write the World for a few years now. I’m 14, and I started doing fantasy writing in first grade, though now my genre is typically nonfiction. I enjoy imagery and unconventional poem styles, as well as sculpting and birdwatching. My goal as Write the World's teen AI Liaison is to educate, not to discourage all use of artificial intelligence in writing, but to help you understand the effects of such tools and safe ways to use them.

Nearly every field is utilizing artificial intelligence (AI) technologies in some way, from science to politics to the arts. When it comes to writing, across all disciplines, AI may seem like a useful tool for a core component of the process: the research that is done before writing takes place.

At first, it might appear as though artificial intelligence tools present a brilliant way to learn new information and collect it in a neat summary, without doing the work of digging through articles and taking notes. As with many things, though, I’ve found that AI-assisted research presents more complex ethical and intellectual considerations when fact-finding and analyzing. In this article, I share the main concerns I have about using AI for research, as well as the best strategies I’ve found for using it in a helpful, less problematic manner.


Cautions for Using AI in the Research Process

I categorize the cons of using artificial intelligence for research in three ways: poor data quality, intentional and unintentional bias, and academic/scientific misconduct. 

Data quality, the first concern, means the degree of accuracy, reliability, and relevance that information has in relation to its intended uses. It’s a way to measure how well the information, or data, supports what the writer is trying to communicate or cite. High-quality data is crucial for making informed decisions. When using AI for research, it can be much more difficult to identify which data is reliable, as well as where the data itself is coming from. This leads us directly into the second downside of using AI for research: bias.

Bias is any intentional or unintentional favoring of one side against another. This could be a small bias, like listing a personal flavor preference in a cooking blog, or something more major, like a political leaning in a popular newspaper. AI models are only as reliable as the data they are trained on: if a source contains biased information, or if the model itself was trained on biased information, the answers it gives you will carry those same biases. Biased information could be anything from the original source author’s personal leanings to unrepresentative samples in a study the AI is citing.

The third downside to AI-based research is a bit different from the others, as I find it more easily preventable even though it is just as prevalent in practice, if not more so: academic and scientific misconduct. Misconduct means any dishonest use of AI tools to gain an unfair advantage, or to misrepresent your own work in an academic or research setting.

Some examples include using AI for research and wording without crediting the AI or the source(s) it was trained on; using AI to paraphrase articles as a quick and easy research workaround; and, most frighteningly, data manipulation, which has become an increasingly large concern. Data manipulation refers to fabricating or altering information using AI, a serious violation of research ethics.


Suggestions for Responsible AI-Assisted Research

While there’s no one right way to use AI for research, there are best practices that can help writers leverage its benefits while steering clear of ethical pitfalls. When using AI for anything in the writing process, not just research, accountability should be the first priority, and accountability starts with transparency: making it clear what is a writer’s original work, and what in their work draws from other sources.

So, make sure that if and when you use AI, you disclose your use and specify which external, AI-generated (or, of course, non-AI-generated!) information you cited and/or quoted in your work. 

Further, to ensure accountability, be sure to fact-check all of the information you cite against outside sources, and check for bias to the best of your ability.

AI usage grants writers speed, scalability, and pattern recognition when researching. It can accelerate tasks, as well as help to summarize and recognize themes or trends in vast data sets. It’s still very important, however, to understand that AI is not a replacement for peer review or manual research. Rather than trying futilely to make it work as a replacement, we can instead use it as an assistant throughout the research and writing stages. 

Remember to always use your own judgment as a writer, researcher, and human, before settling on a data point from a generative AI tool! 


