[TYPES] AI-generated conference submissions
Jon Sterling
jon at jonmsterling.com
Tue Mar 17 12:16:47 EDT 2026
On Tue, Mar 17, 2026, at 4:05 PM, Mae Milano wrote:
> Hey Jon,
>
> I think it's dangerous to conflate "uses of AI that were ultimately
> senseless" with "fraudulent generation of lies." I was not aiming at
> students that *sought to deceive*: I was aiming at students who
> believed that their rule nests *supported their work*, and ultimately
> lacked the necessary training or insights to confirm this before
> submission. Such students may be "trying their luck," but they
> invariably believe that there's better-than-even odds of the stuff
> they've generated not just passing peer review, but actually being
> *right*.
I wonder if we are speaking past each other... I think it is indeed a severe breach of ethics to knowingly send something that you have not understood to a conference, and it is hard for me to come up with any valid argument to the contrary.
I would also like to clarify that what matters is not whether the AI-generated material was senseless or sensible (indeed, that is almost entirely irrelevant). What matters is that the author did not understand the output, but included it because they felt doing so was part of the conventions of the field. That is a serious breach, and if I were aware of any students under my influence engaging in such things, they would receive a very firm reprimand.

It would be a serious breach even if the AI-generated material turned out to be totally sensible. We should consider that possibility too (the senseless transmission of sensible things) when evaluating the ethics of these scenarios. It is deeply wrong to submit text that you have not understood, whether or not that text contains valid or true science.