I personally know people who look down on those who use LLMs to write code. There is a lot of hate among some of the senior developers I talk to. I don't know whether this growing suspicion of AI usage is good or bad.
For example, toward the final semester of my bachelor's degree, my algorithms class started reporting students for academic misconduct because the TAs began assuming that every optimal solution to an assignment problem was written by an LLM. In fact, several classmates started deliberately writing sub-optimal solutions so that the TAs would at least grade them without prejudice.
I worry that because LLM slop also tends to be so well presented, it might compel software developers to start writing shabby code and documentation on purpose to make it appear human.
At the moment it is the other way around. LLMs rarely write good code unless instructed by someone who knows what they are doing.
And even then the code is rarely good.
> AI slop is digital content made with generative artificial intelligence, specifically when perceived to show a lack of effort, quality or deeper meaning, and an overwhelming volume of production.
Curious to know if others are seeing a similar uptick in AI slop in issues or PRs for projects they are maintaining. If yes, how are you dealing with this?
Some of the software that I maintain is critical to the container ecosystem, and I'm an extremely paranoid developer who starts investigating any GitHub issue within a few minutes of it opening. Some of these AI slop issues have a way of "gaslighting" me into thinking that certain code paths are problematic when they actually are not. Lately, AI slop in issues and PRs has been taking up a lot of my time.
I haven’t seen anything obvious, even including the other repos where I look through issues a lot.
Maybe it’s only the really popular and buzzword-y repos that are targets?
In my experience, the people trying to leverage LLMs for career advancement are drawn to the most high profile projects and buzzwords, where they think making PRs and getting commits will give them maximum career boost value. I don’t think they spend time playing in the boring repos that aren’t hot projects.
The motivation is that large language models have a very straightforward task of predicting the next token, and the dataset is easy to get. With this app I aim to do two things:
1. Gather a fairly large dataset that captures the brush-strokes for various art prompts.
2. Bootstrap an algorithm / model that can decompose any image/art/illustration into brush strokes.
A longer-term goal for this app is to build an auto-complete (a Copilot or Grammarly equivalent) for art.
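To make the next-token framing concrete, here is a minimal sketch of how recorded brush strokes might be flattened into an integer token sequence that a language-model-style predictor could train on. The `StrokePoint` record, the grid size, and the quantization scheme are all my assumptions for illustration, not the app's actual data format.

```python
# Hypothetical encoding of brush strokes as a flat token sequence,
# so a next-token model could be trained on them. The field names,
# GRID size, and bucket layout below are assumptions, not the app's
# real format.
from dataclasses import dataclass
from typing import List

GRID = 64  # quantize normalized canvas coordinates to a 64x64 grid (assumption)

@dataclass
class StrokePoint:
    x: float         # normalized to [0, 1)
    y: float         # normalized to [0, 1)
    pressure: float  # normalized to [0, 1)

def encode_stroke(points: List[StrokePoint]) -> List[int]:
    """Flatten one stroke into integer tokens: for each point, a
    quantized x, y, and pressure token, then an end-of-stroke marker.
    Disjoint token ranges keep x, y, and pressure distinguishable."""
    tokens: List[int] = []
    for p in points:
        tokens.append(int(p.x * GRID))                 # x tokens: [0, GRID)
        tokens.append(GRID + int(p.y * GRID))          # y tokens: [GRID, 2*GRID)
        tokens.append(2 * GRID + int(p.pressure * 8))  # coarse pressure bucket
    tokens.append(2 * GRID + 8)                        # end-of-stroke marker
    return tokens

stroke = [StrokePoint(0.1, 0.2, 0.5), StrokePoint(0.15, 0.25, 0.9)]
print(encode_stroke(stroke))  # → [6, 76, 132, 9, 80, 135, 136]
```

A decoder would walk the sequence three tokens at a time and map each bucket back to a coordinate, which is also roughly what goal 2 (decomposing an image into strokes) would emit.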
FAANG is to blame for the rapid rise of incompetence in tech. So many engineers are grinding LeetCode and bagging jobs that pay insanely high salaries, yet they know next to nothing about real-world engineering challenges. I knew a guy with a 200-day LeetCode streak who didn't know the difference between a process and a thread.
We have essentially decided, collectively, that these stupid hiring processes are an accurate proxy for finding genuinely talented people.
I know so many colleagues who are here just to virtue signal that they work for a "cool" company. And it has become super easy to get into these companies and coast.