[TYPES] AI-generated conference submissions
Mae Milano
mpmilano at cs.princeton.edu
Tue Mar 17 12:05:12 EDT 2026
Hey Jon,
I think it's dangerous to conflate "uses of AI that were ultimately
senseless" with "fraudulent generation of lies." I was not aiming at
students who *sought to deceive*: I was aiming at students who
believed that their rule nests *supported their work*, and ultimately
lacked the necessary training or insights to confirm this before
submission. Such students may be "trying their luck," but they
invariably believe that there's better-than-even odds of the stuff
they've generated not just passing peer review, but actually being
*right*.
The trouble is that it's quite difficult to distinguish between the
clever liars and the least-experienced students. So, as long as it
doesn't change the amount of effort it asks of us, what's the harm in
just treating them *all* like inexperienced students? Perhaps paired
with some institutional memory; if we get repeated submissions that
fail to hit basic standards even after a response or two from us with
this mindset, stronger action would be quite warranted---up to and
including temporary submission bans.
On Tue, Mar 17, 2026 at 4:27 AM Jon Sterling <jon at jonmsterling.com> wrote:
>
> Hi all,
>
> This is indeed a concerning trend... Just wanted to chime in with a couple of comments: I agree that our communications should be kind. But I am also having trouble understanding the circumstances under which a student might produce a bunch of stuff they don't understand (e.g. the awful POPL-style rule nests) and send it to a conference without being aware that they are committing a severe breach: the issue is not so much the standards of the field, but the fact that nobody who will ever be capable of acting in good faith with the community needs to be taught, first, that you should not attempt to hoodwink the community by sending lies and fabrications to a conference.
>
> In other words, almost all of us started off incredibly ignorant of scientific standards and community style and needed a lot of guidance, but there is a difference between messing up some unspoken community convention and sending lies and fabrications to a conference. It is, by definition, impossible to do the latter inadvertently or by mistake.
>
> With this said, I must very strongly agree with Mae's comments about the need for parsimonious formalism, and the perfunctory way that even professionals fill up our papers with unnecessary formal junk that we have to admit nobody is reading carefully anyway. But I wish to be very careful here, because I don't think it is right to view the problem of "good faith contributions that are overly complicated" as part of a spectrum that also contains "bad faith contributions that are overly complicated because they are aping the former". These are two entirely different problems. We should not blame good faith people who include a real thousand-case induction in their POPL appendix for the bad faith people who fabricate a fake thousand-case induction. Thousand-case inductions are objectively bad, but what matters here is intent.
>
> Finally, systems like Lean and Rocq can indeed really help people bridge the gap between their ambition and their abilities. I had to retract my first submission to POPL ever, early in my PhD, because there was a serious error in the mathematics. I learned Rocq and figured out how to state and formalise the result I'd intended, and I hope that other young people will be guided to have similar experiences.
>
> Best wishes to all,
> Jon
>
>
> On 17 Mar 2026, at 4:08 am, Mae Milano <mpmilano at cs.princeton.edu> wrote:
>
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> This is a quagmire of a topic! I'm chiming in here mostly to point out
> that we have *existing SIGPLAN policies*, inherited from our
> publishers, that require AI disclosure on all submissions. So, if the
> authors did use AI and failed to disclose it, that matter can be
> referred to the publication board for review.
>
> I'd however like to take the "student's side" as well for a moment
> here. I have been somewhat lucky; despite being on 3/4 of the PACMPL
> PCs this year (bad idea, I know) I'm not feeling at all buried under
> an avalanche of AI-generated papers that I have to review. But I
> *have* seen an enterprising undergraduate student or two who, newly
> enamored with both programming languages *and* AI, take a crack at
> spinning their idea all the way up to a paper largely without
> supervision. Where I have seen this, these students aren't trying to
> game metrics, boost their publication counts, or embark on some doomed
> quest to get into graduate school; rather, they are genuinely hoping
> that they have made a contribution, and have come from an academic
> background that consistently rewarded burying the reader in formalism
> as a way of "showing your work" no matter how useful that formalism
> ultimately proves to be. They have an idea, and they know that papers
> about the idea include a dense section of symbols that they can't
> really read; so they generate a dense section of symbols and can't
> really read it.
>
> We definitely don't want to encourage this behavior! But when we
> ultimately reject these papers, I hope we do so with the idea of this
> student in mind. We may be the first "real" PL researchers from whom
> they've had a chance to receive direct feedback, and therefore are
> also the first people with a clear opportunity to teach them about the
> standards of publication in our field.
>
> So, here's my suggestion: let's come up with a "form response" that
> can be lightly tailored to each submission, outlining the 'usual'
> kinds of confusion that Stephanie is calling out here and that I'm
> sure many of us have seen from our own [undergraduate] students at one
> time or another. Let's aim to make this response not about AI, but
> about the need for parsimonious formalism---and in particular, why the
> submissions we're seeing don't really fit that bill. We can include a
> suggestion to work in Lean or Rocq as a way not just to increase
> confidence in the results but provide guardrails around the formal
> development more generally. And let's accept, as a community, that
> it's ok to send a response like this *without* a deep review when the
> presentation warrants it---even if there might have been an overlooked
> recoverable insight deep within the technical tangle.
>
> I hope we take this opportunity! Let's lead with kindness, and see
> what happens next.
>
>
> Mae
>
> On Mon, Mar 16, 2026 at 4:03 PM Stephanie Balzer
> <stephanie.balzer at gmail.com> wrote:
>
>
>
> Dear all,
>
> I have now been on the receiving end, numerous times, of what appear
> to me to be (almost) entirely AI-generated conference submissions that
> I was assigned to review. Of course I have no proof, but to me it was
> pretty obvious. The submissions in question consist of an amalgamation
> of meaningful words (sometimes not entirely from the context the paper
> ought to be about), are generally well written, although meaningless,
> and even come backed up with some rules with horizontal lines and proof
> sketches (sometimes from various contexts). The catch, however, is
> that the whole composition doesn't make sense.
>
> What are we going to do about this as a community?
>
> I have numerous concerns here: My immediate concern is that I do not
> like to spend my time on such submissions. Even though it's quite
> obvious immediately that the paper is meaningless, it still takes some
> time to make sure and justify the verdict. Another concern I have is
> the risk that, under time pressure, no due diligence is done, and we may
> end up accepting such a paper.
>
> As a first step, we might require authors to declare whether AI was
> used in preparing their submission and for what purpose, and we could
> delimit which uses are permitted.
>
> Looking forward to your thoughts,
>
> Stephanie
>
>