A user asked on the official Lutris GitHub two weeks ago “is lutris slop now” and noted an increasing amount of “LLM generated commits”. To which the Lutris creator replied:
It’s only slop if you don’t know what you’re doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn’t able to do last year because of health issues / depression.
There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn’t have been implemented in a worse way, but it was not AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.
I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I’m not paying Google, Facebook, OpenAI or some company that cooperates with the US army.
Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.



The AI hate crowd on Lemmy is pretty insufferable. Same folks would be complaining about Cloud tech back in the day.
Know the limits of AI and use it appropriately. Completely shunning AI is just silly.
Cloud tech is still bad, but even that wasn’t pushed as hard as AI is being pushed.
You’ll own nothing and you’ll be happy.
If the maintainers are open about their use of it and attribute the code made in this fashion properly, there’s far less of a problem. If people don’t like it they can stop using the project and/or fork it to only have human-made code in their version. That’s their choice in FOSS. The knee-jerk reaction and obfuscation make this a far bigger problem than it should have been. I don’t trust people when they react to criticism in this fashion. They would have done better just stating the topic was off limits before they lost their cool.
I certainly agree with your point on the knee-jerk reaction, but given some of the visceral hate people have for AI, even though I don’t agree with his tactic for dealing with it, I can certainly see the frustration.
Using AI to help code doesn’t necessarily mean completely vibe coding.
Weak moral compass
I don’t care how many years of coding you have; if you’re using AI to clear your backlog, you are not going to review everything. And I’m sick of people saying “I’m different, I am using AI responsibly.” We all know eventually there will be a bug put in by AI.
The maintainer openly admitted to suspecting this would become an issue and hid the co-authorship, promptly wishing the “haters” luck finding the AI-generated code. Who are the insufferable ones here again?
I am not hating on AI per se, it’s more so how it’s being used and how ridiculous it is.
I agree, we can’t necessarily get rid of AI, but at the very least, before we implement such technologies fully, we should look at the pros and cons of such a technology.
I don’t disagree with AI use, I however fucking hate slop.
edit: Had to correct something lol
I mean, we can get rid of it; it requires huge data centers to do this. Yes, even your self-hosted LLMs.
Won’t repeat what I wrote just hours ago in https://lemmy.world/post/44130119/22616090 but just the ending:
"I would personally consider instead Bottles, GOG (have different problems), Steam (obviously not open source and basically monopolistic position), etc.
Overall I think preventing discussion is unhealthy (even though sadly sometimes needed, here I lack context, maybe the issue poster did this numerous time on other platforms, title definitely was provocative) but removing provenance is NEVER a good choice. They want to use Claude on their repo? Absolutely fine (even though not to me) but hiding it makes it instantly untrustworthy to me. In fact I even argued in the past that even though I personally do not use GenAI/LLMs (for coding or otherwise) except for testing it should always be disclosed precisely so that others can make THEIR choice in consequence, including using or contributing, cf https://fabien.benetou.fr/Analysis/AgainstPoorArtificialIntelligencePractices"
Did you mean unhealthy?
I’m just gonna stop you right there.
🥗
I don’t understand what you meant here. OP did mean to write “unhealthy*”.
The joke is that stopping discussion is healthy (which was obviously wrong). So I said I was stopping you–and thus the discussion–and then showed it was healthy with a salad.
I get it now. As the Firesign Theater said: “ha ha that’s very logical.”
Damned, edited, thanks! (shows the benefit of discussing ;)
Open source stuff is awesome and I really like people improving Linux in their spare time
But, to do it this way is basically saying “fuck you” to the community which is fucked up.
He could have talked about how AI helps him, or how he uses it for templates or whatever, and even if I didn’t agree with those points either, that’s a lot better than being like “alright, good luck finding it now then, bitch”.
I wouldn’t mess with anything this guy does anymore after this.
Are you talking about his way of communicating or about his AI use? I think it could have been said a bit more level headed, but I mostly agree with what he’s said. I also see no issue with the part “good luck finding it then” that seems to sound malicious to you. To me this means “if you can’t find a difference in quality, your whole complaint is invalid because there basically is no difference in quality”. Yes, it’s still AI and should not be viewed as more than a knowledgeable intern, yada yada, but I hope the point comes across…
Think of it like a jeweller suddenly announcing they were going to start mixing in blood diamonds with their usual diamonds “good luck finding them”.
Functionally, blood diamonds aren’t different.
Leaving aside that you might not want blood diamonds, are you really going to trust someone who essentially says “Fuck you, i’m going to hide them because you’re complaining”
If you don’t know what blood diamonds are, it’s easily searchable.
I’ll go on record as saying the aesthetic diamond industry is inflationary monopolist bullshit, but that doesn’t alter the analogy.
Secondly, it seems you don’t really understand why LLM-generated code can be problematic. I’m not going to go into it fully here, but here’s a relevant outline.
LLM generated code can (and usually does) look fine, but still not do what it’s supposed to do.
This becomes more of an issue the larger the codebase.
The amount of effort needed to find this reasonable looking, but flawed, code is significantly higher than just reading a new dev’s version.
Hiding where this code is makes it even harder to find.
Hiding the parts where you really should want additional scrutiny is stupid and self-defeating.
Thanks, I think your first point is a really valid one. AI technology is far from clean, especially in a political scope.
To your second point: I see that, but on the other hand, it comes across as if human code were free of such errors. I would not put human code on an (implied) pedestal (especially not mine), but maybe I’m missing your point. I think being suspicious of AI code is good, but the same goes for human code. To me it sounds like nobody should ever trust AI code because there can or will be mistakes you can’t see, which is reasonably careful at best and paranoid at worst. At some point there is no difference anymore between “it looks fine” and “it is fine”.
Let’s assume we’re skipping the ethical and moral concerns about LLM usage and just discuss the technical.
Nobody who knows anything about coding is claiming human code is error free, that’s why code reviews, testing and all the other aspects of the software development lifecycle exist.
Nobody should trust any code unless it can be verified that it does what is required consistently and predictably.
This is a known thing, paranoia doesn’t really apply here, only subjectively appropriate levels of caution.
Also it’s not that they can’t be seen, it’s just that the effort required to spot them is greater and the likelihood to miss something is higher.
Whether or not these problems can be overcome (or mitigated) remains to be seen, but at the moment it still requires additional effort around the LLM parts, which is why hiding them is counterproductive.
This is important because it’s true, but it’s only true if you can verify it.
This whole issue should theoretically be negated by comprehensive acceptance criteria and testing but if that were the case we’d never have any bugs in human code either.
Personally i think the “uncanny valley code” issue is an inherent part of the way LLM’s work and there is no “solution” to it, the only option is to mitigate as best we can.
I also really really dislike the non-declarative nature of generated code, which fundamentally rules it out as a reliable end to end system tool unless we can get those fully comprehensive tests up to scratch, for me at least.
Thanks for taking the time to reply.
Greater compared to human code? Not sure about that, but I’m not disagreeing either. Greater compared to verifiably able programmers, sure, but in general?
I don’t think I’m getting your point here. Do you mean by that, the code basically lacks focus on an end goal? Or are you talking about the fuzzyness and randomization of the output?
Both.
The reasons are quite hard to describe, which is why it’s such a trap, but if you spend some time reviewing LLM code you’ll see what I mean.
One reason is that it isn’t coding for logical correctness it’s coding for linguistic passability.
Internally there are mechanisms for mitigating this somewhat, but it’s not an actual fix, so problems slip through.
The latter, if you give it the exact same input in the exact same conditions, it’s not guaranteed to give you the same output.
The fact that its sometimes close to the same actually makes it worse because then you can’t tell at a glance what has changed.
It also isn’t as simple as using a diff tool, at least for anything non-trivial, because its variations can be in logical progression as well as language.
Meaning you need to track these differences across the whole contextual area which, if you are doing end to end generation, is the whole codebase.
As I said, there are mitigations, but they aren’t fixes.
In my opinion, he should’ve left it as a co author. I think if you as a user have an ethical issue with Claude, that’s your choice and you can make the decision not to use lutris. I mostly agree with what he says until that part about removing Claude so “good luck finding it”.
It’s not about finding a difference for people (usually), it’s about how that model was trained on the work of others, without consent, for free, to then sell. He made his points about how much it helps, that it’s better than using Meta, Google, OpenAI, or Copilot and I think that’s probably true. But he made that case, so why then hide what Claude has done?
In gaming, Valve requires you to list if you have used AI in the creation of your game and you describe in what way. It’s not because the game will 100% of the time be absolute slop (right now it usually is), it’s so that the potential customer can be informed and choose to or not to support the use of AI in those products.
As far as I’m reading, most people who reviewed the actual code think it’s fine. So, again, I don’t see the point in hiding it other than being somewhat petty.
Right, fair point with the training data.
The point in hiding it was that it was being used, without harassment or complaint, right up until he added attribution which resulted in an avalanche of complaints which require resources to deal with. Discord, the forums and Github pull requests now require much more moderation labor, which takes away from the project.
People had no complaints about the code quality until he started adding AI attribution. So he removed the attribution.
Like he said, if people can’t tell the difference until he started marking the code AI assisted… then they don’t actually have an argument and are simply bringing anti-AI politics into the project.
If there’s no difference in quality why obfuscate it? Why hide something that you think is a valuable tool if your code can speak for itself?
He could have used that opportunity to take a stand in his own way: “this is what I am doing, and if you don’t like it feel free to make a fork, but I think this is blown out of proportion because: (reasons he could list his opinions on)”.
But being like “good luck finding it now” is 100% malicious in this context. Or if malicious is too strong a word for this, it’s definitely not user-friendly at all.
And certainly not very “open”.
I don’t see it as obfuscation if there is no underlying difference. Why treat working code differently depending on the source if what matters is that it works (which it does by definition). Of course there has to be more quality control if AI is able to produce more code, but I don’t think that’s the point here right? Why highlight the different sources of the code if, as you said, the code can speak for itself. What’s the difference to you if you can’t tell them apart?
The difference is that AI is a known issue creator (that huntarrr app comes to mind) with many projects and AI usage is supposed to be disclosed transparently for compliance with copyrights and licensing.
But even despite all that, it’s kind of a shitty way to go about it the way he did, in my opinion.
The timeline was that he started adding attribution indicating the use of AI.
Then the anti-AI drones started bombarding the Github, Discord and forums with harassment. His recent statements and removal of attribution are entirely addressed at and because of the anti-AI people harassing the project staff.
He’s not removing it and saying ‘fuck you’ to the users. He’s tired of being harassed by third parties who are not involved with the project in any way and so he removed the source of the harassment.
To be honest I don’t give a shit if a dev uses AI or not, as long as the code does what it is supposed to. In my personal experience, AI, while still not anywhere near the capabilities of a decent dev, can sometimes find and fix errors that I would have missed.
I use AI to look at my git diffs before I push them up. I use a local LLM and specifically instruct it to look for typos, left over debug prints, or stupid logic.
It’s caught quite a few stupid things that I’m apparently blind to and my coworker appreciates it.
That’s not to say I’d sit back and let it write whole features, pushing it right to master after a short skim… Like someone else I know has started doing. But it can absolutely have a useful purpose.
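Roughly, a pre-push diff check like the one described can be sketched as follows. This is only an illustration, not what the commenter actually runs: the prompt wording is made up, and the Ollama invocation in the trailing comment assumes that is the local model runner in use.

```python
import subprocess

# Fixed instructions for the local model; wording is illustrative.
REVIEW_PROMPT = (
    "Review this git diff. Flag only typos, leftover debug prints, "
    "and obviously wrong logic. Be terse.\n\n"
)

def staged_diff() -> str:
    """Return the currently staged changes as unified-diff text."""
    result = subprocess.run(
        ["git", "diff", "--cached"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def build_review_request(diff_text: str) -> str:
    """Prepend the fixed review instructions to the diff."""
    return REVIEW_PROMPT + diff_text
```

The assembled request would then be piped into whatever local model runner is available, e.g. something like `ollama run <model> "<request>"` if Ollama is the local LLM in question.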
When we write code we use a compiler to translate it into other code that the computer can understand. Now we tell AI to write code that is then compiled into other code that the computer can understand.
It seems very similar at the end of the day. The problem is it makes the process easier. That’s what everyone is so upset about. And that’s only an issue because we don’t feel special anymore. It sucks but I’m sure it will pass. Even if it takes a generation
I must disagree with you here. Telling the compiler what to do is not like prompting an LLM. I see writing code as a form of art, and a big part of that is understanding the logic behind the program and the creative process. Imagine it like painting a picture: the artist/dev will undergo all the stages of drawing/coding, the vision will change in the process, and the outcome might be different than what was originally anticipated.
This pipeline of creating usually gives the project a better result. One could say it gives the project more soul.
With AI you are no longer the artist; you are the manager requesting the result, and since AI does not undergo this process of creativity, the result is a soulless husk. At best it is only what you asked for, but nothing more.
If people were complaining about AI because of its ease of use, the same people would be complaining about Python’s approach of human-speech-like code. (Not saying that there are no people who do so.)
So with this logic are you also not an artist if you use tools like Photoshop? Do you need to write with pen and paper?
Is writing code in any language other than assembly also cheating?
I don’t know why this reply is being downvoted
If I had to guess, it’s probably because most gamers aren’t programmers.
No, of course not. Did you even finish reading my comment? I thought I made it clear that the ease of use is not the issue. The lack of creativity is. Using Photoshop still requires you to think about what you want and how to get there. AI just gives you the output. There is no creativity involved in prompting.
When the first drawing tablets came out people loved them. Almost no one was under the impression that it was “cheating”. Even with the use of AI you can still make creative projects, but the creativity comes from you. Vibecoding or using image-gen does not involve creative thought.
EDIT: Imagine playing a game made by someone who is not passionate about their work. That’s what it feels like to play an AI-made game.
Vibecoding is idea driven implementation. You have an idea, you are creative in your ideas and not in the implementation.
“Tell me you never wrote code before without telling me you never wrote code before”-ass answer.
There’s a difference between using AI to help you code and pure vibe coding. The latter is how you end up with slop, but the former can absolutely speed up skilled developers.
Same is true across the board with AI use. It can easily be a force multiplier for people as long as you don’t turn off your brain and slop away.
It’s similar, but it’s not the same thing.
Anyone can have an AI “write code”, but ultimately, you’re still responsible for the output of the AI and ensuring that the end result is good. If you are a competent developer, you know things like testing, storage, security and safety (especially when dealing with sensitive data like user data), backups, monitoring, etc along with understanding each line of code. AI will never be perfect because humans aren’t perfect either, AI requires code review just like humans require code review. If you aren’t a programmer, you won’t be able to review the code AI writes, and mistakes will be missed, just like not reviewing human-written code because humans make mistakes too. I don’t see that ever changing because no software is perfect, there will always be bugs no matter what (once the software is complex/sophisticated enough).
AI does generate societal damage, but that’s mostly because of how companies abuse it and less because of the technology itself.
That’s my thoughts on AI and especially AI coding. That ended up being much longer than I expected and there’s more to it but you get the idea.
I never said anything about not reviewing the code. You still need to review it and test it and all that. But using a tool to generate the code isn’t the end of the world. It’s just the next iteration of how we tell computers what to do. Saying no ai code seems like a recipe for failure.
Or at least create boilerplate, test cases, etc.
Using AI for tests and boilerplate was the state of AI three months ago. Now it genuinely one-shots complex implementations.
I know, as long as you don’t want scalability, maintainability, reliability or security.
The temptation of using Claude Code is probably higher than it looks for a single dev, I think. Hey, in the end one can just have this in their IDE and essentially have their own unpaid intern. It’s a fairly new situation.
Yup, single dev here, can confirm. I’m coding for a living but am mediocre at it since I jumped from civil engineering to something I kind of enjoy. To me coding assistants are a huge help. Finding solutions, discussing ideas, writing down implementation plans, can’t do all that stuff with my colleagues since they have no clue about my work.
The hardest part is coming up with ideas when it’s not a job and just an interest, then finding a path to the realisation of those ideas.
The idea is the easy part I need something to do X.
So you’re not that good and are using AI to make your products better so you can sell your coding abilities. Or you’re better than you think you are.
If you’re interested: I used AI to learn a library for my link-scraping script, to return only the open-access PDFs from Google Scholar. Yeah, it is virtually useless because I need to check them all the same. But boy did it make me feel smart.
Your own unpaid intern, who is paid by someone else, employed by someone else, and who has access to all your repos’ secrets and business logic.
Yeah, nah. I think I’d rather not train my competitors.
Are you hosting your code in any way accessible from the internet? GitLab, Gitea, Forgejo?
GitHub. We use Azure too. I trust Microsoft a hell of a lot more than these AI clowns.
I have bad news for you…
I know I know. To give up so easily is just fatalism though. You have to at least try where you can.
Use local models then.
If I exclude the offline software I use, from xed to LibreOffice and the like, I own very little of the services in my hands.
Hiding it won’t make the code any better genius.
If you can’t tell then it means it’s good enough
More likely if we can’t tell then it was always shit. AI can’t write good code for anything non-trivial.
Fuck nuance, AI bad, herp derp!
Oops. Guess I’m uninstalling Lutris.
Personally, I have blocked Claude on GitHub, which helpfully puts a huge banner on any project it has infected.
Then unless I have absolutely no choice but using it, I get rid of it.
I understand the hatred towards AI, but people gotta understand that there’s a difference between coding with AI and vibecoding. They are DIFFERENT THINGS! AI is useful; what is not is vibecoding, or shaming a developer with 30 years of real-world, AI-free experience for using it for once. Using AI is OK if you do it critically and with common sense.
I totally agree. I’m not an AI hype man. I want to scream whenever I see a PR littered with emojis, bullet lists, and way too much text for a simple change. I hate the discussions about the transformative power of AI, the 10x production gains, all the million tools, agents, skills, plugins, methods I should be using but I am already behind and old and probably unemployed next week, right? Still, AI use is not inherently bad. It gets me unstuck. It finds subtle errors I wasn’t noticing, it writes documentation faster and better than I can. I hate the companies who are pushing it and the methods of its training, but the tool itself is just a tool, and sometimes a very useful one. IMHO we shouldn’t shame every open source developer just for using it. As long as they are responsible with it, I’m fine with some AI code in my software.
You are correct, but people in general are pretty bad at subtlety and grey areas. Just look at the current state of political discourse in the US. Probably half the people that support the likes of Trump do so because they like black/white binary choices and can’t handle shades of grey in their lives emotionally.
100% american public debate lol
I have news for you: it’s the same thing. There is no difference besides maybe the prompt; the same AI is writing the code. And I do not believe a coder is going over every single line of code.
Pass
Please, go ahead and remove everything “AI” in your life. No social media. No GPS. No assists when driving or being driven. No streaming of any kind. No weather apps. Ask your boss to remove everything related to forecasting in his company. Ask your doctor not to use any diagnostic aid if you get a scan for cancer.
Let’s see how many of those you can “pass” on. Or let’s see if it helps you develop a critical mind about which tool to use for which job and how to use it.
I’m already full Linux at work. Location on my mobile is always OFF unless I need it on rare occasions. I don’t stream. I self host.
Say your last sentence into a mirror today.
Bro, you are on fucking Lemmy. We are all like you. You are not special. You never ever use GPS to locate yourself, right? You never go from a to b. You never go in a shop to buy food. You never go to the doctor. You never buy anything online. You never watch YouTube. Sure.
Who hurt you
Oh, so that’s your argument? What a fucking kid. It’s easy to have an opinion. It’s harder to know why, and not be a fucking parrot because you’re so edgy.
If it’s making commits for you you’re vibe coding.
I use it at work, I use it for troubleshooting and if I get it to generate anything for me, I stage them and review them before committing myself
No, vibe coding explicitly requires NEVER looking at the actual code. I can give claude a ticket, it creates a plan. I review that plan, maybe change some things. Then claude does the thing. I review the code, then tell claude to fix X. Then I test, then I tell claude to create a commit.
There we have claude creating a commit without any of it being vibe coded
That is still vibe coding.
Jokes on you, I’ve used it to untangle messy git problems (with a backup of course).
You can do that with 99.9% less damage to the environment and the working class with `git rebase -f`, or even the old tried-and-true method of `rm -rf && git pull ...`.
Not in my case, I’m afraid.
Oh you have to deal with the actually gnarly parts of git… sending my condolences.
It’s OK, I did it to myself. Claude fixed it though. :) I think it’d still be broken otherwise lmao
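For what it’s worth, the classic non-LLM way to untangle a mangled branch is the reflog, which records everywhere HEAD has been. A minimal sketch in a throwaway repo (the file names and messages are made up for illustration):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.name "dev" && git config user.email "dev@example.com"

echo "v1" > notes.txt && git add notes.txt && git commit -q -m "first commit"
echo "v2" > notes.txt && git commit -q -am "second commit"

git reset --hard -q HEAD~1        # oops: "lost" the second commit
git reflog --format="%h %gs"      # shows where HEAD was before the reset
git reset --hard -q "HEAD@{1}"    # jump back to the pre-reset state

grep v2 notes.txt                 # the "lost" work is back
```

Anything that was ever committed is recoverable this way until the reflog expires; it only falls down for work that was never committed at all.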
Are you asking people to be rational? What kind of monster are you
It is more about nuance than rationality.
There are plenty of reasons to hate on AI. But in the end they are just tools to automate things. It depends entirely on how it is being used. With enough effort and most importantly checking the output, you can create things faster while still keeping the same quality as before.
Calling anything that even slightly touched an LLM “slop” and curling up in the fetal position while crying is a lot less rational. These people have no idea about the real world.
No. They are, specifically, tools to automate things in the most destructive way possible.
“if you’re gonna be the bitch, be the whole bitch”
Somehow hiding the code feels worse than using the code. This whole thing is yuck.
Yeah, management wants us to use AI at $DAYJOB and one of the strategies we’ve considered for lessening its negative impact on productivity, is to always put generated code into an entirely separate commit.
Because it will guess design decisions at random while generating, and you want to know afterwards whether a design decision was made by the randomizer or by something with intelligence. Much like you want to know whether a design decision was made by the senior (then you should think twice about overriding this decision) or by the intern that knows none of the project context.
We haven’t actually started doing these separate commits, because it’s cumbersome in other ways, but yeah, deliberately obfuscating whether the randomizer was involved, that robs you of that information even more.
Well, when you have a massive problem of harassment, death threats, and fucking retarded shit stains screaming at every single dev that is even theorized to use AI, regardless of whether it’s true or not.
I blame fucking no one for hiding the fact.
This is on the users not the dev. The users are fucking animals and created this very problem.
Blaming the wrong people and attacking them is the yuck.
Scream at the executives and giant corpos who created the problem not some random indie dev using a tool.
Then just quit; it isn’t worth it. I know AI has uses and is useful.
To admit some context: My company has strongly encouraged some AI usage in our coding. They also encourage us to be honest about how helpful, or not, it is. Usually, I tell them it turns out a lot of garbage and once in a while helps make a lengthy task easier.
I can believe him about there being a sweet spot; where it’s not used for everything, only for processes that might have taken a night of manual checks. The very real, very reasonable backlash to it is how easily a poor management team or overconfident engineer will fall away from that sweet spot, and merge stuff that hasn’t had enough scrutiny.
Even Bernie Sanders acknowledged on the Senate floor that in a perfect world, where AI is owned by people invested in world benefit, moderate AI use could improve many people’s lives. It’s just sad that in 99.9% of cases, we’re not anywhere near that perfect world.
I don’t totally blame the dev for defending his use of AI backed by industry experience, if he’s still careful about it. But I also don’t blame people who don’t trust it. It’s kind of his call, and if the avoidance of AI is important enough to you, I’d say fork it. I think it’s a small red flag, but not nearly enough of one for me to condemn the project.
I don’t think you should make a claim like this while AI is being heavily subsidized and burning VC cash to stay afloat. The truth is whatever value it may add to such a society might actually be completely negated by its resource costs. Is even “moderate” AI use ecologically or economically sustainable?
For full disclosure, I remembered once someone claimed to me there are AI models that use much less power. But, to confirm that statement before replying, I looked up an investigation, and they say it’s much murkier, and that a company’s own claims are usually understating it. So, you’re on point.
Indeed, as they say in Italian, “if my grandmother had wheels she would have been a bike”… the reasoning might be theoretically correct, but in the current situation it’s just not the case.
It can be useful for generating switch cases and other such not-quite copy-paste work too. There are reasonable use cases… if you ignore how the training data was sourced.
And the incredible amount of damage and destruction it’s still inflicting on the environment, society, and the economy.
No amount of output is worth that cost, even if it was always accurate with no unethical training.
Worth mentioning that the user that started the issue jumps around projects and creates inflammatory issues to the same effect. I’m not surprised lutris’ maintainer went off like they did, the issue is not made with good faith.
Yes, both threads are led by two accounts with probably less than 50 commits to their names during the last year, none of which are of any relevance to the subject they are discussing.
In a world where you could contribute your time to make some things better, there is a certain category of people who seek out nice things specifically to harm them. As open source enters mainstream culture, it also appears on the radar of this kind of people. It’s dangerous to catch their attention, as once they have you they’ll coordinate over reddit, lemmy, github, discord to ruin your reputation. The reputation of some guy who never ever did them any harm apart from bringing them something they needed, for free, but in a way that doesn’t 100% satisfy them. Pure vicious entitlement.
I’d sooner have a drink with a salesman from OpenAI than with one of them.
Just, what kind of pleasure can one derive from harming these projects? It’s so frigging weird, man.
Putting people down is the easiest way to stand above them. 😒
Bro, everything is being built with AI right now. I’ve been doing interviews, and every single company I talk to is using these tools to the fullest extent possible for whatever it is they’re doing. They have little reason not to.
The guy really should have stayed chill about it instead of opening up a tirade, but what he’s saying is correct
What’s funny is they can just take an open source project and tell AI to write a new tool; now it’s closed source. Pretty soon there will be no point to any developers, as people can just ask AI to write their programs, because that is what is happening anyway.
where does it say in the article that this is becoming a closed source project?
he removed mentions of claude from his git commits because he was tired of getting shit on for using ai
If you’re going to stoop so low as to use fucking AI, have the decency to show it, so people with actual standards know to avoid it. But to be fair, a cat-and-mouse game of whether it was used or not would make me avoid it anyway.
They did but then people complained about them using AI
if you don’t want people to complain about you using AI, then don’t use AI. it’s easier than you think
This guy gets it.
Be open about it. Many people will not like it. Many people will not trust your product any longer. You need to be willing to let those people go with grace, or else you’re already taking on a project you can’t handle.
Getting real tired of this armchair activism, man. I get it, we all hate LLMs but it’s literally one or two burnt out guys writing this in their spare time. If people really want to do something useful at least go and review the code and then you can shit on his work for legitimate reasons if you really do find it’s bad. Stop demonizing open source devs ffs.
No wonder they burn out more and more. Nobody wants to contribute and help, but everyone is quick to criticise