[TYPES] AI-generated conference submissions

Michael Shulman shulman at sandiego.edu
Tue Mar 17 20:46:39 EDT 2026


I think in Jon's phrase "something that you knowingly haven't understood",
the word "you" refers to all the authors of the paper.  They may not
individually understand all parts of the paper, but as long as at least one
of them understands each part, they can assert that collectively they
understand it.  An AI is qualitatively different because it cannot take
responsibility, hence cannot be a co-author.

It may be easy for students to fall into the trap of thinking that the AI
"understands" what it told them and therefore they don't have to.  That
doesn't make it okay; it means they need to be educated, gently and firmly.

On Tue, Mar 17, 2026 at 2:56 PM Martin Lester <martin.lester at gmail.com>
wrote:

> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> Hi all.
>
> I think it's worth considering the specifics of this comment:
>
> On Tue, Mar 17, 2026 at 4:50 PM Jon Sterling <jon at jonmsterling.com> wrote:
> >
> > I wonder if we are speaking past each other... I think it is indeed a
> severe breach of ethics to send something that you knowingly haven't
> understood to a conference, and it is hard for me to come up with any valid
> argument to the contrary.
> >
>
> Is it a breach of ethics to submit a paper (as corresponding author)
> if my co-author wrote a proof that I haven't checked or don't
> understand, but I trust my co-author's honesty and judgement? I think
> this is fine.
>
> What, then, if I conceive of the AI as a colleague or co-author?
>
> Under current rules and conventions, this is clearly not OK.
> Submission rules ban AI-generated paper content and say AIs can't be
> co-authors. Socially and legally, we don't view AIs as sentient and
> don't permit them to take responsibility for their actions.
>
> But if you interact with AIs in a conversational way, bouncing ideas
> back and forth (as I do), it may be easy to fall into this trap.
> Consider Terence Tao's comment (over a year old, when AIs were not as
> good as they are now) that conversing with AIs is like advising a
> "mediocre, but not completely incompetent, graduate student". If
> you're a student, AIs may well appear to display knowledge and
> intelligence comparable to your peers.
>
> Is it OK for two graduate students to submit a co-authored paper,
> where the corresponding author hasn't fully checked and understood the
> other's work? If so, then, rules and regulations notwithstanding,
> wouldn't it be OK for a graduate student to submit, without full
> understanding, a paper where some of the work was generated by an AI?
>
> (For the avoidance of doubt, I don't condone writing/submitting papers
> with needless formalism, or with AI-generated content that you haven't
> acknowledged and checked/understood.)
>
> Yours,
>
> Martin.
>


-- 
Michael Shulman
Professor of Mathematics for Humans
<https://home.sandiego.edu/~shulman/humans.html>
University of San Diego

"The role of the intellectual cannot be to excuse the violence of one
side and condemn that of the other."
        -- Albert Camus
