[TYPES] AI-generated conference submissions
Jonathan Aldrich
jonathan.aldrich at cs.cmu.edu
Mon Mar 16 23:15:24 EDT 2026
Ugh, I have been fortunate enough not to encounter that in my reviewing.
But I hear it is very common in other subfields so it was probably just a
matter of time before it got to PL.
If this is an ACM conference, our policies already require disclosing the
use of AI for anything beyond grammar-checking-style applications:
https://www.acm.org/publications/policies/new-acm-policy-on-authorship
If there are any non-ACM PL conferences that don't have this policy, I
would encourage them to adopt it.
ACM is evaluating tools that can (heuristically) detect AI use, as well as
tools that can identify hallucinated references. We should start to see
deployment of these within the next year--if anyone on this list is an ACM
PC chair and wants to do an early trial, let me know and I can connect you
with people who may be able to arrange that. It's of course important to
have a human verify any tool reports based on heuristics as they may be
incorrect, but the point is that they can save time in identifying problems.
Regarding reviewing workload, it's very unfortunate. The tools mentioned
above will eventually help some. In the meantime, it's my view that once
you determine that a paper is so flawed it cannot be accepted, especially
if that flaw involves misconduct such as undisclosed AI use or otherwise
makes the paper very difficult to read, it's reasonable for the reviewer to
stop reading and return a review based on the portion they read. Of
course, I would mention the situation to the PC chair to make sure they are
OK with this; most probably are.
Best,
Jonathan
On Mon, Mar 16, 2026 at 9:50 PM Stephanie Balzer <stephanie.balzer at gmail.com>
wrote:
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> Dear all,
>
> I have now been on the receiving end, numerous times, of what appear
> to me to be (almost) entirely AI-generated conference submissions that
> I was assigned to review. Of course I have no proof, but to me it was
> pretty obvious. The submissions in question consist of an amalgamation
> of meaningful words (sometimes not entirely from the context the paper
> ought to be about), are generally well written, although meaningless,
> and even come backed up with some rules with horizontal lines and proof
> sketches (sometimes from various contexts). The catch, however, is
> that the whole composition doesn't make sense.
>
> What are we going to do about this as a community?
>
> I have numerous concerns here: My immediate concern is that I do not
> like to spend my time on such submissions. Even though it is
> immediately obvious that the paper is meaningless, it still takes some
> time to make sure and to justify the verdict. Another concern is the
> risk that, under time pressure, no due diligence is done, and we end
> up accepting such a paper.
>
> As a first step, we might require authors to declare whether AI was
> used in preparing their submission and for what purpose, and delimit
> which uses are permitted.
>
> Looking forward to your thoughts,
>
> Stephanie
>
>