I search for stuff on Google as normal, come across a link, and then look at the article.
What I’m finding is that a lot of these articles now follow the same pattern: pages with many sections describing how to solve the same problem in many different ways, but half the time the solutions don’t actually work.
But, yes. The web has become poisoned beyond all usability with AI-generated hallucinations and garbage. They killed the Internet as a useful source of information.
LLMs don’t really learn, though. They are fed a corpus that determines their output. You can’t tell them they’ve made a mistake and expect them to learn for later, because that’s not how the system works. Even the ones that search the web in realtime to try to synthesize new information aren’t really learning, as we think of it. They just sort of trust the vibes of truthy stuff (stuff that feels true based on whatever parameters the programmers chose) on the web, and the amount of truthy stuff on the web that is hallucinated by AIs is growing at an accelerating pace.
So, basically, in many real ways AI will almost certainly get worse, not better. It kinda has to get worse because it’s being fed on a diet of stupider and stupider web content, because the web is more and more written by AI, with all its hallucinations and programmed biases. The end of the predominantly human-generated web happened a year or two ago, and now it’s getting more and more poisoned with radioactive AI slop that makes everyone, including LLMs, dumber.
Also, that’s why I’m so vicious about LLM-generated posts here. I’ve banned a bunch of folks for posting AI slop, and will ban a bunch more in the future. A few borderline users slip through, but I delete them and ban the user if things continue to look like AI slop. But, y’all would be surprised by how many we delete/ban every week.
In addition to all the other awful stuff AI is doing, it is also wasting a bunch of human time weeding out the hallucinated trash. I’m sure every site that allows user content faces the same problem, so not only is it destroying the value of the web, AI is also wasting thousands of human hours every day.
-snip-, oh, I agree, even though you do seem inherently negative. Justifiably, I can see.
I’m afraid I am (slightly) more positive about the future,
and believe that eventually the programmers will develop a better model with a more intelligent approach. Then the internet will be reset back to some form of sanity, and we can all go back to reading posted content in the belief that it is genuine, that tutorials work, and that articles maybe contain facts. In my next life, perhaps.
People using AI to generate content, as referenced by the OP above, do so because it brings visitors to their sites and generates revenue from advertising. I don’t think the people developing AI can have any control over how it is ultimately used, unless they charge a fee that makes those uses unprofitable. Even then, this kind of content is unlikely to stop being produced, because something else will inevitably pop up to plug the gap.
The issue is more people than AI. I agree with much of what both Joe and Stegan say. The reality is that, unfortunately, the human generators of the nonsense (regarding help and how-to articles) are not bothering to check their work. When I was younger they called it “lazy”, but I’m sure that’s not a politically correct term anymore.
AI is a great tool for organizing one’s notes into coherent articles but the old garbage in garbage out still holds true. If facts and verified methods are presented to the AI, really complete and useful materials can be produced. To me, AI is really glorified spell and grammar check.
Another issue, however, is that things change quicker these days. I see many instances of help articles and tech information that don’t work because they were written a while back and the systems the articles worked on have changed so the articles are now fairly obsolete. I say fairly because much of the time we can extrapolate a solution based on our own general knowledge of a topic.
Bringing this back to Virtualmin, I can see the point of not making sweeping changes to the interface, as it helps keep continuity in the usage of the system. But I don’t see using AI to help enhance the documentation as a bad thing.
Yes, I was searching recently for documentation on integrating Power BI using PHP, and came across a site (rollout dot com) with articles which I’m sure are AI-generated and which include lots of code that looks correct but just doesn’t work. They even get the namespaces of libraries completely wrong. I’d link to an example, but I don’t want to inadvertently improve their SEO!
Yeah, a lot of those articles are likely made by AI. They often repeat the same info in different ways to rank higher on Google, but don’t always give helpful or accurate answers.
It’s trivial (making a web page and hosting it is trivial, querying any of the various AI APIs is trivial though becoming more expensive as they try to make the LLMs stop bleeding money). But I’m not going to contribute to it by giving pointers on how to make and publish AI slop.
I suspect the folks doing it at scale are doing it with LLMs running locally, which is why the pages are often really awful. The really high end LLMs are still lying machines and if you notice a source you previously trusted starts using them you should adjust your trust way downward. But the small local models that are cheap to run just produce gibberish most of the time. The spammers taking over the web with this trash don’t care about quality, only quantity and plausible sounding articles at the cheapest possible price.
In fairness, I am not looking at making pages with AI. I am interested in the mechanism by which my seemingly random Google search is instantly converted into a search result linked to an AI-generated page. Are they doing something weird with Google?
How are the pages generated so quickly, if that is the case?
I think they’re just generating billions of pages. The more specific/unusual the query, the more likely you are to only find an AI-generated page about it. I don’t think they’re generating the page in response to the query, as Google wouldn’t be able to index it that quickly.
That said, Google is also eating up users that used to go to actual websites by automatically generating (often hallucinating) an AI answer. So, it’s becoming harder for people who write online for a living to actually make any money or get any readers.
This makes sense; maybe they are getting weird search terms from tracking still present in browsers.
I did not think of this. Google got into trouble for taking all of the news feeds and passing them off as its own (I believe); maybe there will be something done about this AI scraping.
However, I have found some of the information given back in those snippet windows at the top very useful.