nmg 1 days ago [-]
> ## Other organizations
> These are organized along a spectrum of AI friendliness, where top is least friendly, and bottom is most friendly.
This section is an extremely useful reference
dgellow 20 hours ago [-]
[dead]
staticassertion 22 hours ago [-]
This policy is straightforward and shouldn't be particularly controversial (I'm sure it will be bikeshedded to death though). It basically bans the obvious stuff ("don't just drop LLM generated comments onto PRs") and allows the important stuff like LLMs writing code so long as you disclose.
edit: Wow people did not read the policy. It's literally just "if you use an LLM you are responsible for it, we will reject low quality PRs, please disclose that you have used an LLM". This is bog standard.
WCSTombs 20 hours ago [-]
So... big caveat: this is still under review, so what we're talking about is a moving target. But based on what I can see, it seems considerably more nuanced than that. They basically ban LLM-authored code, with a careful carve-out to run an experiment to try to get only high-quality LLM PRs:
> It's fine to use LLMs to answer questions, analyze, distill, refine, check, suggest, review. But not to create.
> We carve out a space for "experimentation" to inform future revisions to this policy.
Importantly, the LLM contributions must be solicited, i.e., the people responsible for reviewing the final implementation have to opt in explicitly beforehand.
staticassertion 15 hours ago [-]
I think that the only significant caveat here is the need for reviewers to opt in, otherwise it's effectively "you can do it if you are open about it and are responsible for the output". The only notable ask here that's different from other policies is "if it's an LLM, tell reviewers beforehand".
TBH I think that makes no sense ("I have an LLM written PR ready, can I open it?") but yeah the policy is also in draft and has actually already changed since my first comment.
singularity2001 19 hours ago [-]
"allows the important stuff like LLMs writing code so long as you disclose."
Are you sure? It says:
"It's fine to use LLMs to answer questions, analyze, distill, refine, check, suggest, review. But not to *create*."
staticassertion 15 hours ago [-]
Yes. The policy is pretty clear on what the rules are for LLM generated code. You need a reviewer to agree to review LLM generated code, you need to read the code yourself, etc.
dgellow 20 hours ago [-]
The discussion thread in the PR is also interesting to go through; lots of the concerns people raise in the HN discussion are already well discussed there.
OptionOfT 14 hours ago [-]
I like the 'better, but not faster' phrasing.
I'm not gonna say no to some system that validates my code before I present it as a PR, and if said system (be it static checking, dynamic checking, or an LLM) gives me a 'comment', I will interpret the output, validate it, and decide whether to take action, and how.
And maybe there I disagree a little with the proposal. It's me, it's my code. I stand behind it. But I get where the authors come from, and I believe it's a fine compromise.
I think more importantly in the world of Software Engineering we're seeing a split.
On one hand, people who go all-in on AI and take the output as 100% correct, copy-paste it as theirs without reviewing, prompt it to create PRs and submit them as-is, and worse, as their own.
Then there is the other side, people who use AI as a tool to validate and go deeper.
The problem with the first group is that everywhere they feel entitled to shift the validation onus from themselves onto the recipient of what they're sending, be it a PR, a review comment, a message, or whatever.
In PRs now the repo maintainer has to do a lot more work, as they cannot rely on the social construct of "OptionOfT wrote this, they have 10+ years of experience in such-and-such systems, so we can look at the PR through that lens".
Equally, I've been on the receiving end of AI PR comments (the PR was human-authored), copy-pasted by humans presenting the comment as their own, without properly validating (if at all) the correctness of the comment, or whether it actually makes sense for the PR. Lots of derailment there. This now increases the workload of the PR author, as above. We need to validate and cannot rely on the social construct. Is the comment even correct? What's the context? Why? Is it a hallucination?
And the downside is that it looks like the first group is now going faster, but the second group is actually slowing down due to the increased burden.
https://github.com/jyn514/rust-forge/blob/llm-policy/src/pol...
It's in line with the 'nanny' stereotype of the Rust community that they give you permission to act in a way they would never be able to verify anyway:
> The following are allowed.
> Asking an LLM questions about an existing codebase.
> Asking an LLM to summarize comments on an issue, PR, or RFC...
Like seriously, what's the point of explicitly allowing this? Imagine the opposite were true, you weren't allowed to do this - what would they do? Revert an update because the person later claimed they checked it with an LLM?
The Linux policy on this is much superior and more sensible.
MaulingMonkey 1 days ago [-]
> Like seriously, what's the point of explicitly allowing this?
Explicit permission can be useful to preemptively cut off some questions from well meaning people who, acting in good faith, might otherwise pester for clarification (no matter how silly / "obvious" it might otherwise be), or get agitated by misconstruing an all-banned list as being an overly verbose "no LLMs ever" overreach.
> It's in-line with the 'nanny' stereotype of the Rust community that they give you permission to act in a way they would never be able to verify anyways: [...]
Many of us work or have worked in corporate settings where IT takes great pains to help detect and prevent data exfiltration, and have absolutely installed the corporate spyware to detect those kinds of actions when performed on their own closed source codebases. Others rely on the honor system - at least as far as you know - but still ban such actions out of copyright/trade secret concerns. If you're steeped deeply enough in that NDA-preserving culture, a reminder that you've switched contexts might help when common sense proves uncommon.
While nannying can be obnoxious, I'm not sure that having a document one can point to, link, or cite to allay any raised concerns counts.
bcjdjsndon 20 hours ago [-]
> If you're steeped deeply enough in that NDA-preserving culture, a reminder that you've switched contexts might help when common sense proves uncommon.
What?
MaulingMonkey 19 hours ago [-]
> If you're steeped deeply enough in that NDA-preserving culture
If you've thoroughly absorbed a culture of honoring non-disclosure agreements (NDAs), which are legal contracts demanding you keep secrets and avoid sharing sensitive data or code...
> a reminder that you've switched contexts might help
A reminder that rust-lang is a transparent, open source project, with no non-disclosure agreements or trade secrets to keep private unto itself might help [1].
> when common sense proves uncommon.
Because everyone misses the "obvious" sometimes. And because "obvious" is a subjective value judgement, meaning people will disagree what is or is not obvious.
-------
1. That said, if you've got a private, corporate-internal, closed source fork, you might still be bound by such concerns. For example, various people have ported rust's stdlib to work on various consoles (xbox, playstation, etc.) - and one of the reasons you don't see that upstreamed is because doing so would require violating console vendor NDAs, as well as possibly their company's NDAs - possibly for such banal reasons as not wanting to leak a hint of a console port or new title before their marketing plans are ready to go to capitalize on any hype.
vintermann 1 days ago [-]
> Like seriously, what's the point of explicitly allowing this?
I would have LOVED if the university course I took last winter had this. I had to take a very paranoid attitude to what was allowed.
What they're trying to avoid is a lot of unnecessary conflict with zealous anti-AI people calling for your exclusion for admitting to doing these things. There are people who would ban this too.
davesque 23 hours ago [-]
So then the Rust maintainers are going to give you an F on your report card?
bcjdjsndon 20 hours ago [-]
Try using Allman braces and see how far you get on a basic issue like that
aabhay 22 hours ago [-]
No they’ll just drop() you
kouteiheika 1 days ago [-]
> Like seriously, what's the point of explicitly allowing this? Imagine the opposite were true, you weren't allowed to do this - what would they do?
Imagine if they just said "LLMs are banned"; then there's a lot of ambiguity. So they specifically outlined that generative uses of LLMs are banned, and that non-generative ones are not banned (i.e. "allowed").
I think it's a poor choice of words on their part, but it makes sense (considering what their policy is). It's more of a "we're not disallowing use in these particular scenarios, so you can still use LLMs for these if you want". Remember: it's a big project, and if they don't explicitly state something then people will ask and waste everyone's time.
saghm 1 days ago [-]
If anything, it reads to me as a proactive rebuttal of complaints that they don't allow LLMs; they're definitively stating that they do allow using them for very specific purposes.
bcjdjsndon 19 hours ago [-]
Needs to be "solicited" from a senior dev. How many requests for ai code do you think they will be making?
saghm 4 hours ago [-]
I can't find any reference to the LLM question-asking uses cited in the parent comment needing to be solicited.
bcjdjsndon 19 hours ago [-]
Y tho? It's already bad enough a programming language wants to play politics (doesn't matter what my politics are if I want to code in the c "community"), now they're taking purely emotional stances like "AI evil"
kouteiheika 18 hours ago [-]
> now they're taking purely emotional stances like "AI evil"
But they aren't? Nowhere in the document does it say this; in fact, it says the opposite - that they don't want to make a moral judgement.
> It's already bad enough a programming language wants to play politics (doesn't matter what my politics are if I want to code in the c "community")
It also doesn't matter what your politics are in the Rust community. My personal politics don't agree with the majority of prominent Rust contributors either, and that's fine. It doesn't (and hasn't) stopped me from being able to use Rust for over a decade now. Ignore politics and just engage on a purely technical level, and you'll be fine.
bcjdjsndon 17 hours ago [-]
> It also doesn't matter what your politics are in the Rust community
Largely true, apart from the trans issue. Refuse to agree a man is a woman because he says it's true, and you're a bigot
bendmorris 16 hours ago [-]
You don't have to agree to treat people with respect. Using someone's preferred pronouns doesn't hurt you.
If you find it that hard to keep those feelings to yourself in the context of contributing to rust, and are too stubborn to make a one character change to pronouns when you’re addressing people, I have to imagine that you’re utterly incapable of working in any professional context, where you’re expected to conform to many more things that make less sense. This seems like a problem with you.
staticassertion 22 hours ago [-]
They're just giving examples of what you can do and explicitly saying so. Saying "you couldn't stop me" is completely missing the point.
This is not very different from the Linux kernel's policy so it's an odd comparison. It's actually almost identical in practical terms.
edit: lol proof that this doc needs to be stupidly explicit is in the pudding with the HN comments going out of their way to radically misread it
davesque 23 hours ago [-]
It feels telling that it reads like university course guidelines.
dgellow 20 hours ago [-]
What do you mean?
DennisL123 1 days ago [-]
Does the policy fix the issue of many low quality PRs being submitted? Unlikely.
Will it fix a related but different problem? Likely.
TazeTSchnitzel 23 hours ago [-]
The people who submit low quality LLM-generated PRs often don't bother to read the policies first, but at least it will be easier to reject those.
def13 19 hours ago [-]
The point here, if you read contributor comments, is mainly to allow people to shut a PR down without claims of “unfairness” because some other PR wasn't shut down. These are “moderation policies” in the style of old internet forums; their primary purpose is to clear up ambiguity and make maintainers' (moderators') lives easier.
The birth of vibe coding has seen interactions on public FOSS projects increasingly reminiscent of the flame wars and moderator hammers of the old forum days. A lot of projects have been behind the curve on preparing and codifying the hammers, probably because no maintainer really wants to be a moderator, but that's where it's naturally landed, unfortunately.
bcjdjsndon 18 hours ago [-]
> probably because no maintainer really wants to be a moderator, but thats where its naturally landed unfortunately.
Yeah this is autistic bunk. If you run an open source project, dealing with people is part and parcel of it, disagreements as well.
jojomodding 8 hours ago [-]
Not sure what is particularly autistic about the observation that "couldn't we just all get along" is unfortunately not how the world works even if we would like it to?
jynelson 18 hours ago [-]
[dead]
afdbcreid 17 hours ago [-]
While we surely hope that at least some people will read and honor the policy, of course we know not everyone will. But creating a policy gives us teeth. Currently, sending such a PR is not disallowed, provided it doesn't fall in the thin area of some previous policies about slop PRs. With this policy, doing it will be escalated to the moderation team. The first time you'll get a warning; the second time you'll be banned from the project.
saagarjha 21 hours ago [-]
Ok but what if their OpenClaw reads it for them
tick_tock_tick 22 hours ago [-]
Some of these are just straight up unhinged.
> Using an LLM to discover bugs, as long as you personally verify the bug, write it up yourself, and disclose that an LLM was used.
What are they going to do, go back and reject a bug if someone later admits they found it with an LLM? Honestly, they and most other projects would probably be better off just ignoring the situation until norms start developing.
ZeroGravitas 20 hours ago [-]
They're trying to avoid a Boy Who Cried Wolf situation.
If they get swamped with 100 bug reports that turn out, after they investigate them, to be hallucinations, then it's likely they will ignore a real bug or lose it in the noise.
An LLM-generated bug report that pretends to be a human-created one would be trying to abuse that presumption of validity, and is therefore considered a dick move.
bcjdjsndon 19 hours ago [-]
> If they get swamped with 100 bugs that turned out, after they investigate them to be hallucinations then it's likely they will ignore or lose in the noise a real bug.
But they're saying that if there are 100 correct bug reports it's still banned.
That's hysterical
That's the baby out with the bathwater.
afdbcreid 17 hours ago [-]
No, you read incorrectly. Using LLMs to discover bugs is allowed given that you personally verified them.
saagarjha 21 hours ago [-]
The assumption here is that people act in good faith. If you break the rules, this indicates that you are not acting in good faith, and perhaps should no longer be welcome.
bcjdjsndon 19 hours ago [-]
Sounds very welcoming. I came here for the code, not to join some social club
3836293648 12 hours ago [-]
Then don't go interact with the social club?
If you want to interact with their project, follow their rules on their terms, otherwise just grab the binaries and don't stir things up.
staticassertion 22 hours ago [-]
What are you even talking about lol the policy doesn't imply that at all.
That's in the "allowed with caveats" section. It's just saying to not open bug reports without first reading them yourself or your bug may be closed. No one is saying "by policy we will have to add the bug back in" jesus christ
The policy is insanely straightforward, idk how you can be misinterpreting it this badly. It's just "Disclose that you use a model, you are on the hook for reviewing model output as a human" and then some clear cut examples.
Kudos to the team for this. I think it’s brave of them to stand up for their own experiences and push back against the hype train.
Before you knee-jerk hate on the team for being luddites, consider:
1. For a language like rust there’s too few eyes and too many mouths. Reviewing is a job, and is extremely taxing.
2. The code base needs to be highly hermetic because it’s load bearing across the global economy
3. Most changes are only relevant if they’ve followed extensive process, including community feedback.
afdbcreid 23 hours ago [-]
Note that there are currently several proposed policies (plus hundreds of discussions mostly in private channels), and frankly I'm not sure we'll ever reach a consensus (I'm a Rust project member).
classified 1 days ago [-]
This is highly interesting. It seems clear to me that a lot of thought and work went into this. If I ever were to write a similar document, I'm sure I could learn a lot from this one. Props to the authors and all involved.
pixlmint 19 hours ago [-]
I wonder what will happen once these guidelines end up in the LLM training datasets
jynelson 19 hours ago [-]
if an LLM says "I can't open a PR automatically until you solicit a review from a maintainer", i think that's good actually. likewise for proactively following the rest of the rules.
bcjdjsndon 18 hours ago [-]
It's not the submitter who solicits, but the reviewer. They can't give code AND THEN get approval; they need to be asked specifically for an LLM-created PR.
> People must be vouched for before interacting with certain parts of a project (the exact parts are configurable to the project to enforce).
https://github.com/mitchellh/vouch
I think many projects will adopt this instead of allowing everyone / blocking everyone.
Many projects have an "ai slop" check in place to directly close the PR and ban the user if it is "ai slop". Otherwise, it will be hard to handle the velocity of PRs.
Chris2048 23 hours ago [-]
Maybe a network of ppl who can vouch they meet in real life?
I don't know if keeping your name/face secret is still acceptable? Maybe tiers of devs (anon vs other) on that one?
spprashant 1 days ago [-]
Github just won't respond at all.
ares623 1 days ago [-]
Oh no where is Bun gonna be ported to next?
lifthrasiir 1 days ago [-]
Nothing. You can always vibe-code in Rust even when the rust-lang/rust repository itself largely forbids vibe coding.
staticassertion 22 hours ago [-]
> even when the rust-lang/rust repository itself largely forbids vibe coding.
This policy does not seem to forbid vibe coding?
lifthrasiir 20 hours ago [-]
It does in the narrower sense of vibe coding (as opposed to more general agentic coding, which is also called vibe coding from time to time...).
> Solicited, non-critical, high-quality, well-tested, and well-reviewed code changes that are originally authored by an LLM are allowed, with disclosure.
Vibe coding (in its original meaning) would have a hard time arguing it's of high quality.
staticassertion 15 hours ago [-]
I guess that's the problem with the term. It should likely be left entirely out of a document like this since it's just confusing.
dgellow 20 hours ago [-]
I read it as a hypothetical
voidhorse 1 days ago [-]
But one of the reasons they switched was that the upstream compiler for the original language they used, Zig, wouldn't accept slop contributions they wanted to make for Bun perf. What will they do when they need to push a slop contribution upstream to Rust?
At this point they will probably just fork yet again and maintain some vibe compiler.
noobermin 23 hours ago [-]
Is there a citation for this? This was my suspicion but it's quite amazing if this was the actual reason for the bun spectacle
sheept 20 hours ago [-]
No, they've explicitly denied it.[0] However, they do regularly dig at how much faster their fork is[1][2], which they can't merge because of Zig's AI policy.
Huh. I wonder if the original intent was to merge an AI-generated PR to a high-profile project like Zig. It makes the headlines and generates hype. But that went embarrassingly badly for them, so they had "port Bun to Rust" as a backup.
[0]: https://x.com/jarredsumner/status/2051600118886138262
[1]: https://x.com/bunjavascript/status/2048427636414923250
[2]: https://x.com/jarredsumner/status/2053050239423312035
whattheheckheck 1 days ago [-]
They should make FullstackLang. It compiles English in .md to machine code that can directly run on the specialized hardware it designs for it that you have to 3d print at runtime. Every program gets its own custom hardware. Composability and reuse be damned. Pay the token masters for every thought you have
mhjkl 9 hours ago [-]
Nim
sergiopreira 18 hours ago [-]
[dead]
7e 1 days ago [-]
[flagged]
giancarlostoro 1 days ago [-]
The term scope creep comes to mind. Programming languages do not need to grow exponentially 24/7; it's okay to let them grow slowly and stay mature and secure. If Rust were too bleeding edge, the safety promises would corrode over time. I think a better use of some of those PRs is to focus on crates as proofs of concept for things that could benefit Rust, whether included in the standard library or just available as a crate you can use for programmer-ergonomics reasons.
grey-area 1 days ago [-]
Please do fork Rust and maintain it for the LLM true believers. I’m sure the real rust team would be delighted to see fewer low-effort PRs.
Given what you’ve said above it would be an easy task ‘accelerating quality and features exponentially’, so you’ll soon be able to show them (perhaps within days!) the error of their ways.
Please go do it now, we’ll wait.
mw888 1 days ago [-]
That's an ambitious conclusion, though not as overly ambitious as some may think.
But I believe it is not the reason Rust adopted this policy; I think they just have a more basal and subjective dislike of AI, irrespective of whatever truth you may have just cited.
fgfarben 1 days ago [-]
It doesn't really read like a Luddite policy.
Rust is already well past 1.0. At best an LLM could discover a vulnerability (and the human using it can file a patch) or can help a human improve ergonomics.
voxl 1 days ago [-]
LLM delusion is insufferable. If all it takes is tokens to make a significantly better programming language in logarithmic time, why hasn't anyone done it?
cornholio 1 days ago [-]
As someone who's vibecoding my own self-hosted language (via a TypeScript-to-C++ transpiler and bootstrap), I can tell you mainline commercial models like Opus 4.7 aren't quite there yet. I'm seeing 10KB source files balloon into 80MB outputs for now.
The main problem is that the problem space is vast and highly interconnected; the LLM needs to reason about the entire language every time it suggests an architectural change, but it can't, so it suggests local changes that make sense to me - a language hobbyist - then runs into much more difficult problems down the road.
Maybe Mythos with a lot of (competent) human hand-holding and pre-design can do it.
jcgrillo 1 days ago [-]
> I expect soon we will see Rust forks with a pro-LLM policy
I sure hope so. I expect the end result will disprove the following:
> The Rust team will never be able to catch up to them
The AI jackasses have been braying in this key for going on a few years now, and there hasn't been one single time any of this breathless noise has resulted in something meaningfully superior. It's time to put up or shut up. Enough bullshit talk. If you can vibeslop a better Rust (or whatever), JFDI and leave everyone behind.
ares623 1 days ago [-]
Would love to see that happen, personally. All this power being held back by red tape. We need to unleash the beast.
What do you think is stopping anyone from starting a fork right now? Is it a licensing issue?
greenavocado 1 days ago [-]
Attention issue. They are desperate.
triyambakam 22 hours ago [-]
Saying "LLM" now sounds dumb. Just say "model". Some are no longer "large" and that is arbitrary.
dryarzeg 1 days ago [-]
> This policy is intended to live in Forge as a living document, not as a dead RFC.
Oh... I can’t say for certain who wrote it, and I won’t make any definitive claims - personally, I tend to think it was probably mostly written, or at least conceived, by a man - but this sort of phrase… I get a nervous twitch every time I see it, even though it’s actually quite a clever rhetorical device. Hell... Maybe I just need a break; I don’t know, since I’m starting to see LLMs everywhere...
saghm 1 days ago [-]
I feel like I saw phrasing like this pretty often even before LLMs were a thing