Generative artificial intelligence (AI) has transformed the world in just three years. Isabelle Kohler ponders this rapid transformation and questions whether the focus on output in academia risks undermining the core purpose of PhD training. She argues that PhD students should develop skills and expertise, not just deliver results, and explores when AI use enhances learning versus when it bypasses the growth process entirely.
ChatGPT, the generative artificial intelligence (AI) chatbot developed by OpenAI, was released on 30 November 2022 – almost three years ago. Little did I expect at the time that it would become the fastest-growing consumer software application in history, or that it would impact my work the way it has over the last three years.
I’m not exactly what we call an early adopter when it comes to technology – rather, I belong to the “early majority” category (people who adopt new ideas just before the average member of a social system). I remember the first weeks of trying ChatGPT – the output was already quite impressive, but slow and often inaccurate. This was especially true for literature searches, where it cited papers that didn’t exist and hallucinated frequently. I also remember discussing this with students, playing with the tool to show them how to use their critical thinking skills to judge whether an answer was correct.
How naive I was in those days – not fully understanding how it would revolutionize the world. One aspect did worry me, though: that AI would make our society less human, and that humans would suffer for it.
Fast forward to the end of 2025, after three wild years of generative AI. Anthropic, Google, Meta, Mistral and Microsoft have joined OpenAI in the generative AI race, disrupting our entire society. Today, generative AI allows us to converse with a stranger in their native language, make hyperrealistic short movies, automate customer service with barely any human involvement, analyze hundreds of pages of text and summarize them in less than a minute, build an app without any coding skills, or take notes during a Teams meeting.
In life science and chemistry, generative AI has brought incredible new opportunities: accelerating the drug discovery process, designing new (complex) molecules, streamlining complex data-analysis pipelines, and advancing personalized medicine. It has also sped up time-consuming tasks such as literature searches, drafting reports, and summarizing documents. The efficiency gains are undeniable – and yet, this is precisely what sometimes keeps me awake at night.
Generative AI also brings multiple challenges to academia – especially in teaching and training students and PhD candidates. Its emergence has forced us to change the way we teach and to adjust student assignments accordingly. For example, in my courses, students used to write a short report – now, I’ve replaced this report with an oral examination, which lets me better assess whether a student has developed sufficient critical thinking during the assignment and understood the information they gathered. That works, but it costs me much more time – especially for courses with a relatively high number of students.
But adapting our teaching methods is the easy part. What worries me more about the future is that we are starting to rely so heavily on AI to automate processes that we lose our capacity to think critically. Tasks that generative AI can easily absorb include, for instance, routine technical execution, writing pieces of text, and data processing. But these are tasks that every researcher should go through, without the intervention of AI, to develop their skills as a researcher. Don’t get me wrong – I’m not against AI use in research. A non-native speaker using AI to polish their English, or a researcher using AI to quickly scan the literature before diving into deep reading – these can enhance rather than replace learning. The question is not whether to use AI, but when its use supports skill development and when it bypasses it entirely.
Because I spent countless hours integrating chromatographic peaks, I now understand the importance of peak integration for the accuracy of results. Because I wrote many scientific articles and reviews, I trained my reading and writing skills – understanding how to convey a scientific message in the most accurate way and tailor it to my audience. These tasks were the foundations of my development as a professional researcher.
We tend to forget that a PhD student’s main goal is not to generate output – rather, a PhD student should be trained to become a highly skilled researcher and scientist. If AI takes over most tasks in the name of output efficiency, we’ll lose sight of this crucial aspect. There’s a reason a PhD contract lasts four years: building skills takes time. Handing all “simple” tasks to AI will produce generations of researchers who have never had the chance to fully develop their critical thinking and creativity. There’s a real danger of creating a generation of researchers who can direct AI but lack the deep technical intuition that comes from years of hands-on work.
If you’re a student, PhD student or postdoc reading these lines: remember that the easiest path is not always the right one. Yes, tools that make us more efficient are valuable, but they should not come at the cost of your own professional development. Try to think beyond the next assignment, paper or deliverable, and reflect on how each task can help you develop your skills and grow as a researcher, scientist, and person. We should collectively remember that academia is a place to learn and grow. It is not a place to escape that learning process – for the sake of tomorrow’s science.
If you are interested in learning more about how to navigate academia and use generative AI in your work, do not hesitate to join the NextMinds Community! You have plenty of options: visit the NextMinds website to learn more about my work, sign up for the newsletter, and follow me and NextMinds on LinkedIn.





