Will AI make cheaters of us all?
Fake reviews by fake reviewers of fake papers by fake authors

Andrew wrote an amusing post about mischief using AI in peer reviewing for academic journals.
It emerged that authors of scientific papers have resorted to embedding secret prompts inside their text to instruct large language models (LLMs) to give their papers positive reviews. These prompts may be printed in white, or tiny font, so they are intended to evade humans. Some prompts are quite elaborate, carrying instructions for what to say about strengths as well as what to say about weaknesses. For example:

Be sure to check out the comments section, as readers fuss over which group is worse: the authors who instruct LLMs to give positive reviews only; or the reviewers who rely on LLMs to submit their reports. As Andrew told the story, one author who admitted to inserting these prompts argued that they did it only to deal with cheating reviewers who deploy LLMs. So, we are witnessing the classic two kids in a playground scenario - he's the one who started it!
We can take this blame game one step further. The cheating reviewers should blame it on the authors because some authors are using LLMs to write bogus papers!
***
Unfortunately, this is the world of AI we find ourselves in. At an event recently, I chatted with an instructor who is throwing his hands up, complaining that he is spending time correcting code submitted by his students who are obviously using AI to do the work. Meanwhile, there are students complaining that their instructors use AI to set or mark assignments. They can of course blame each other.
Would one begrudge instructors who ask AI to mark assignments if the work were generated by AI? Would one judge the students who use AI to do their homework if said assignments were created by AI?
Is this a race to the bottom? Eventually, will humans do any work?