I did not see an LLM policy in the repository, but I felt it was important to raise, given open-source communities' sensitivities around these tools.
We have not yet discussed this as maintainers, but it is probably time to do so (or perhaps too late! 😆):
- What are our general views on contributions involving LLMs?
- Are there standard policies we might adopt or adapt? (For reference, @marcharper shared: https://wiki.gentoo.org/wiki/Project:Council/AI_policy)
Personally: I am not keen to review or maintain code generated by an LLM. From teaching, I am jaded, having seen a significant amount of low-quality, AI-generated code that students could neither explain nor justify. It is often syntactically plausible but mathematically or logically incorrect—e.g., LLMs confidently producing code that purports to "solve" intractable differential equations.
That said, I have no objection to contributors using an LLM for idea exploration or boilerplate, as long as the resulting code is genuinely authored, understood, and validated by a human. I would struggle to say exactly where that line falls, though.