After a reviewer carelessly left a ‘please let me know if you’d like some other questions!’ in their report, Sjoerd Rijpkema knew they had used ChatGPT for the review.
‘You are a reviewer for a high-impact academic chemistry journal with over 20 years of experience. Read the following manuscript for review. Using your knowledge of this subject, make a list of 10 comments for major revision.’
This is probably what Reviewer #2 told ChatGPT when my paper arrived on their desk. And I will admit, I didn’t realise it at first. Reviewers #1 and #3 had few substantive questions, but #2’s questions were sharp. Answering them properly required some extra work, but I couldn’t disagree, as it genuinely made my paper better. Only when I reached an unremoved ‘Please let me know if you’d like some other questions!’ at the end did I realise: this was not a human, but ChatGPT.
So am I against using AI as a reviewer? On the contrary, I think it’s wonderful! Scientists have to squeeze peer review into their free time when they are already overloaded with work. Besides, the AI gave me the most relevant and substantive feedback anyway. An AI with access to all published knowledge can qualitatively assess a paper so quickly, it’s crazy that we don’t apply this by default. AI has no ego, no time constraints and no competing interests: better than that overworked postdoc who has to cram reviewing in between the dishes and a burnout.