Replies: 2 comments 6 replies
-
|
I have an overall much more negative posture towards the LLM tools than you do. I feel no hostility towards the users, but the technology is not being developed or promoted in a responsible way and we're seeing some of the consequences. My post here will barely touch on the ethical or legal issues. I have genuine concerns on these fronts, but I think any attempt to discuss them here will derail us. Banning the tools is incorrect (for us)I'm not concerned that a ban might drive some contributors away. Bluntly put, anyone who can't contribute without the help of an LLM probably shouldn't be trying to make changes in But I don't believe that a policy banning LLM-assisted contributions (as, e.g., Gentoo has done) is appropriate. As much as I admire the clarity of this posture, I think this turns the project into a place I have to police for "bad contributions" which I then block/ban. I don't want to engage with the project in this way, and we haven't had a severe enough flood of slop on our project1 to drive me towards supporting such a ban. I think we'll still see some level of LLM-generated issue reports and PRs even with a ban. It then turns any further interaction negative, since the appropriate action is to close the issue without further evaluation. I'm also not that keen on the idea of taking options away from contributors. If someone finds a useful way to leverage local models (which reduce many of the ethical concerns) to do their work, and their contributions are of high quality, why should I care to forbid them from using their preferred toolchain?2 Proposed policy: you are responsible for your workI'd like us to put together a policy which is generic -- spans LLMs and any other current or future assistive tools -- and emphasizes the following two main aspects:
The above is not quite a draft of the exact policy, but that's the main thrust of it. What do you think about it as a starting point? Are there major, important elements of good contributions which this misses? I saw Claude suggested some bullet points aligned with the Ownership element. We might adapt some such list of affirmations into our PR template, but I don't think that's the core issue. I also see that it also used the word Disclosure, but defined it narrowly to LLMs. I'd rather we define it broadly -- partly, I think this sends a good social signal that LLM users aren't being singled out and penalized. Contributors who might be scared off by an overtly hostile policy will ideally find such a definition more welcoming. I find it funny and somewhat topical that while skimming that document, I spotted an error which @webknjaz would be very unlikely to make:
This makes it sound like Miguel Grinberg is a Flask maintainer. Miguel's tutorials and books have done a lot to advance Flask, but he's not (and as far as I know, has never been) a maintainer. (I also find "Python's Seth Larson" to be odd phrasing for his role, but 🤷 on that) Even the highest quality tool (Claude is well regarded) in the hands of a domain expert produces errors. I'm sure a deeper reading would reveal other errors. Footnotes
|
Beta Was this translation helpful? Give feedback.
-
|
I'm not sure how to incorporate this, but I was recently shown this project which has a hard no-AI policy: Claude will apparently respect this, and refuses to help a user circumvent this instruction. Per above, I don't think such a policy is appropriate for us, but it shows that a pretty short prompt can strongly influence the behavior (perhaps specifically if it's in the branded |
Beta Was this translation helpful? Give feedback.
Uh oh!
There was an error while loading. Please reload this page.
-
I want to research including an "ai policy" across projects I'm involved in and I'm tracking some references to relevant documents and discussions @ https://gist.github.com/webknjaz/10f1106d0a3fd489745c2bc656400d1f. I've fed that gist into Claude and let it traverse like 357 sources out of that.
Here's what it came up with: https://claude.ai/public/artifacts/de2ffa40-98dc-4458-b1b5-f3e645bf7840. That document has some summaries and hightlights, principles/observations distilled, a policy draft, integration suggestions and guidelines for coping with spam.
I think it did a good job illuminating what's beeing observed across FOSS and showing how different parties react, why a clear/transparent document is even necessary. Let's talk about putting one up for pip-tools.
cc @sirosen @hugovk
Beta Was this translation helpful? Give feedback.
All reactions