Over the past few weeks, I have read several newspaper articles and seen numerous social media posts from researchers expressing skepticism about using AI in research. It is fine that some people prefer not to use new tools, but I don’t see why that should hinder the rest of us from exploring the possibilities that are out there. Many researchers do use AI tools, which I think is perfectly fine. The problem is that few people talk about how they use these tools or reflect on how they work. I think it is imperative that professors explore AI in various ways to see what works and what does not. After all, we are going to teach the coming generations and lead the way in both development and use. For that, we need hands-on experience and qualified opinions.

I have worked with (and partly developed) various types of AI throughout my career, so this is not new to me. That is also what I am trying to tell everyone: current AI tools are tools like any others. Progress has been rapid in recent years, but we are still relying on computer systems to automate more of our workflows. This is both helpful and troublesome at the same time, just like all previous technologies were when they were introduced. It is our job to determine what works and how to utilize the available tools most effectively.

My Current Use of AI Tools in Research

This blog post is as much for myself as for others. Since things move quickly, it helps to write down how I am using various AI tools right now. This is different from what it was half a year ago, and things will likely change again soon.

  • DuckDuckGo: I constantly have to remind people who claim that they “don’t use AI” that they actually do—all the time. Web searching is a ubiquitous example of AI technology. Just think about how “googling” has become a word. My personal preference these days, though, is the privacy-savvy DuckDuckGo.

  • Whisper: The University of Oslo has built its own wrapper, Autotekst@UiO, around the Whisper model, including the National Library’s Norwegian-language Whisper model. This is an excellent tool for transcribing audio recordings (and audio from video recordings). I am increasingly using it to transcribe recordings of my own talks, voice memos, and interviews with others. Those texts can then be processed further with text-based AI tools.

  • Copilot and ChatGPT: For general search tasks and ideation, I use the University of Oslo’s (UiO) instances of Copilot and ChatGPT. Unlike the standard versions of these tools, my text input is not used for further training, as per UiO’s guidelines. These tools help me efficiently generate ideas and find relevant information. Generally, I find that Copilot performs better with web searches than ChatGPT. Both are also excellent for summarizing blocks of text, such as drafting a first version of an article abstract.

  • Copilot in Visual Studio: I have switched to using Visual Studio as my primary text editor because of its integration with Copilot. For lazy and rusty developers like myself, it works wonders in accelerating my coding tasks, letting me accomplish what I want in far less time than I would on my own. In fact, these days I wouldn’t have time to solve such problems manually, so Copilot is the reason I have been able to do any programming at all lately. It also helps me write cleaner and more efficient code that can be shared with others.

  • NotebookLM: NotebookLM is invaluable for working with larger (collections of) documents. Again, I use the UiO version to ensure that the documents I upload are not used for further training. What is excellent about NotebookLM is that it doesn’t invent content (as Copilot and ChatGPT can do) but instead builds on what is in the uploaded documents. That is super-helpful for summarizing the content of a PhD dissertation, for example. It can also create amazingly realistic podcasts, although for me these are more of a gimmick than genuinely useful.

  • Elicit: Elicit is my go-to tool for in-depth literature searches. I appreciate its transparency in showing how it sources information, which helps me discover new references and double-check that I haven’t missed important material. While I rarely use its outputs directly, it serves as a valuable sanity check during literature reviews.

  • Grammarly: As a non-native English speaker, I find Grammarly essential for spell checking and grammar improvements. I wouldn’t use Copilot or ChatGPT for this, as they tend to rewrite text completely. Grammarly acts more like a human copy editor, offering suggestions that help me maintain my own voice while improving clarity and correctness. Lately, it has begun reformatting sentences, so I am increasingly rejecting suggestions that I feel break with my personal writing style.
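As a small illustration of the kind of downstream manipulation of Whisper transcripts mentioned above: Whisper-style tools return transcripts as timestamped segments, which makes them easy to reshape programmatically. Here is a minimal sketch (the sample segments are hypothetical, but real Whisper output uses the same start/end/text fields) that turns such segments into SRT subtitles:

```python
def fmt_time(seconds: float) -> str:
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def segments_to_srt(segments) -> str:
    """Build an SRT document from a list of segment dicts."""
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n{fmt_time(seg['start'])} --> {fmt_time(seg['end'])}\n"
            f"{seg['text'].strip()}\n"
        )
    return "\n".join(blocks)

# Hypothetical sample data in the shape Whisper produces.
segments = [
    {"start": 0.0, "end": 2.5, "text": " Welcome to the talk."},
    {"start": 2.5, "end": 5.0, "text": " Today we discuss AI tools."},
]
print(segments_to_srt(segments))
```

The same few lines of plumbing work for any segment-based transcript, whether the goal is subtitles, per-speaker excerpts, or chunks to feed into a text-based AI tool.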

Using these tools has significantly improved my productivity recently. The aim is not to produce substantially more output, but to use tools that help me accomplish more advanced tasks than I otherwise could and, hopefully, to improve the overall quality of my work. As a researcher, however, it is necessary to be transparent about the AI tools used. As a teacher, it is essential to test out various tools so that I can advise my students on what to use and how to use them effectively.