Wikipedia Pauses AI-Generated Summaries After Editor Backlash

www.404media.co/wikipedia-pauses-ai-generated-s…

Text to avoid paywall

The Wikimedia Foundation, the nonprofit organization that hosts and develops Wikipedia, has paused an experiment that showed users AI-generated summaries at the top of articles, after an overwhelmingly negative reaction from the Wikipedia editor community.

“Just because Google has rolled out its AI summaries doesn't mean we need to one-up them, I sincerely beg you not to test this, on mobile or anywhere else,” one editor said in response to the Wikimedia Foundation’s announcement that it would launch a two-week trial of the summaries on the mobile version of Wikipedia. “This would do immediate and irreversible harm to our readers and to our reputation as a decently trustworthy and serious source. Wikipedia has in some ways become a byword for sober boringness, which is excellent. Let's not insult our readers' intelligence and join the stampede to roll out flashy AI summaries. Which is what these are, although here the word ‘machine-generated’ is used instead.”

Two other editors simply commented, “Yuck.”

For years, Wikipedia has been one of the most valuable repositories of information in the world, and a laudable model for community-based, democratic internet platform governance. Its importance has only grown in the last couple of years during the generative AI boom as it’s one of the only internet platforms that has not been significantly degraded by the flood of AI-generated slop and misinformation. As opposed to Google, which since embracing generative AI has instructed its users to eat glue, Wikipedia’s community has kept its articles relatively high quality. As I reported last year, editors are actively working to filter out bad, AI-generated content from Wikipedia.

A page detailing the AI-generated summaries project, called “Simple Article Summaries,” explains that it was proposed after a discussion at Wikimedia’s 2024 conference, Wikimania, where “Wikimedians discussed ways that AI/machine-generated remixing of the already created content can be used to make Wikipedia more accessible and easier to learn from.” Editors who participated in the discussion thought that these summaries could improve the learning experience on Wikipedia, where some article summaries can be quite dense and filled with technical jargon, but that AI features needed to be clearly labeled as such and that users needed an easy way to flag issues with “machine-generated/remixed content once it was published or generated automatically.”

In one experiment where summaries were enabled for users who have the Wikipedia browser extension installed, the generated summary showed up at the top of the article, which users had to click to expand and read. That summary was also flagged with a yellow “unverified” label.

An example of what the AI-generated summary looked like.

Wikimedia announced that it was going to run the generated summaries experiment on June 2, and was immediately met with dozens of replies from editors who said “very bad idea,” “strongest possible oppose,” “Absolutely not,” etc.

“Yes, human editors can introduce reliability and NPOV [neutral point-of-view] issues. But as a collective mass, it evens out into a beautiful corpus,” one editor said. “With Simple Article Summaries, you propose giving one singular editor with known reliability and NPOV issues a platform at the very top of any given article, whilst giving zero editorial control to others. It reinforces the idea that Wikipedia cannot be relied on, destroying a decade of policy work. It reinforces the belief that unsourced, charged content can be added, because this platforms it. I don't think I would feel comfortable contributing to an encyclopedia like this. No other community has mastered collaboration to such a wondrous extent, and this would throw that away.”

A day later, Wikimedia announced that it would pause the launch of the experiment, but indicated that it’s still interested in AI-generated summaries.

“The Wikimedia Foundation has been exploring ways to make Wikipedia and other Wikimedia projects more accessible to readers globally,” a Wikimedia Foundation spokesperson told me in an email. “This two-week, opt-in experiment was focused on making complex Wikipedia articles more accessible to people with different reading levels. For the purposes of this experiment, the summaries were generated by an open-weight Aya model by Cohere. It was meant to gauge interest in a feature like this, and to help us think about the right kind of community moderation systems to ensure humans remain central to deciding what information is shown on Wikipedia.”

“It is common to receive a variety of feedback from volunteers, and we incorporate it in our decisions, and sometimes change course,” the Wikimedia Foundation spokesperson added. “We welcome such thoughtful feedback — this is what continues to make Wikipedia a truly collaborative platform of human knowledge.”
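For readers curious what running summaries through “an open-weight Aya model” actually involves, here is a minimal sketch of that kind of pipeline using Hugging Face transformers. The checkpoint name, prompt, and generation settings below are assumptions for illustration; Wikimedia has not published its exact setup.

```python
# Hypothetical sketch only: the article says summaries came from "an
# open-weight Aya model by Cohere", but the exact checkpoint, prompt,
# and parameters are not public. Everything named here is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "CohereForAI/aya-23-8B"  # assumed checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, device_map="auto")

article_lead = "In thermochemistry, an exothermic reaction is ..."  # article text here
prompt = (
    "Rewrite the following encyclopedia text as a short, plain-language "
    "summary for a general reader:\n\n" + article_lead
)

# Aya 23 is a chat-tuned model, so wrap the prompt in its chat template.
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(input_ids, max_new_tokens=150, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```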

“Reading through the comments, it’s clear we could have done a better job introducing this idea and opening up the conversation here on VPT back in March,” a Wikimedia Foundation project manager said. VPT, or “village pump (technical),” is where the Wikimedia Foundation and the community discuss technical aspects of the platform. “As internet usage changes over time, we are trying to discover new ways to help new generations learn from Wikipedia to sustain our movement into the future. In consequence, we need to figure out how we can experiment in safe ways that are appropriate for readers and the Wikimedia community. Looking back, we realize the next step with this message should have been to provide more of that context for you all and to make the space for folks to engage further.”

The project manager also said that “Bringing generative AI into the Wikipedia reading experience is a serious set of decisions, with important implications, and we intend to treat it as such,” and that “We do not have any plans for bringing a summary feature to the wikis without editor involvement. An editor moderation workflow is required under any circumstances, both for this idea, as well as any future idea around AI summarized or adapted content.”

100 Comments

Why the hell would we need AI summaries of a Wikipedia article? The top of the article is explicitly the summary of the rest of the article.

Even beyond that, the "complex" language they claim is confusing is the whole point of Wikipedia. Neutral, precise language that describes matters accurately for laymen. There are links to every unusual or complex related subject and even individual words in all the articles.

I find it disturbing that a major share of the userbase is supposedly unable to process the information provided in this format, and needs it dumbed down even further. Wikipedia is already the summarized and simplified version of many topics.

There's also a "Simple English" Wikipedia: simple.wikipedia.org

Oh come on, it's not that simple. Add to that the language barrier. And in general, precise language and accuracy do not make knowledge more accessible to laymen. Laymen don't have the vocabulary to start with; that's pretty much the definition of being a layman.

There is definitely value in dumbing down knowledge, that’s the point of education.

Now, using AI, or pushing guidelines for editors to do it, that's an entirely different discussion…

The vocabulary is part of the knowledge. The concept goes with the word, that's how human brains understand stuff mostly.

You can click on the terms you don't know to learn about them.

You can click on the terms you don't know to learn about them.

This is what makes Wikipedia special. Not the fact that it is a giant encyclopedia, but that you can quickly and logically work your way through a complex subject at your pace and level of understanding. Reading about elements but don't know what a proton is? Guess what, there's a link right fucking there!

They have that already: simple.wikipedia.org

some article summaries can be quite dense and filled with technical jargon, but that AI features needed to be clearly labeled as such and that users needed an easy way to flag issues with "machine-generated/remixed content once it was published or generated automatically."

I feel like, if they think this is an issue, they could generate the summary on the talk page and have the editors refine and approve it before publishing. Alternatively, set an expectation that article summaries are written in plain English.

some article summaries can be quite dense

Well yeah, that's the point of a summary. If I want something in long form, I'll read the article.

Which is why they're looking to add an easy-to-read short overview.

And what about simple wikipedia?

A page detailing the AI-generated summaries project, called “Simple Article Summaries,” explains that it was proposed after a discussion at Wikimedia’s 2024 conference, Wikimania, where “Wikimedians discussed ways that AI/machine-generated remixing of the already created content can be used to make Wikipedia more accessible and easier to learn from.” Editors who participated in the discussion thought that these summaries could improve the learning experience on Wikipedia, where some article summaries can be quite dense and filled with technical jargon, but that AI features needed to be clearly labeled as such and that users needed an easy way to flag issues with “machine-generated/remixed content once it was published or generated automatically.”

The intent was to make more uniform summaries, since some of them can still be inscrutable.
Relying on a tool notorious for making significant errors isn't the right way to do it, but it's a real issue being examined.

In thermochemistry, an exothermic reaction is a "reaction for which the overall standard enthalpy change ΔH° is negative."[1][2] Exothermic reactions usually release heat. The term is often confused with exergonic reaction, which IUPAC defines as "... a reaction for which the overall standard Gibbs energy change ΔG° is negative."[2] A strongly exothermic reaction will usually also be exergonic because ΔH° makes a major contribution to ΔG°. Most of the spectacular chemical reactions that are demonstrated in classrooms are exothermic and exergonic. The opposite is an endothermic reaction, which usually takes up heat and is driven by an entropy increase in the system.

This is a perfectly accurate summary, but it's not entirely clear and has room for improvement.
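(The unstated link, for anyone lost in the jargon: by the standard Gibbs relation ΔG° = ΔH° − TΔS°, a large negative ΔH° usually outweighs the −TΔS° term, which is why a strongly exothermic reaction is usually also exergonic.)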

I'm guessing they were adding new summaries so that they could clearly label them and not remove the existing ones, not out of a desire to add even more summaries.

Wikimedians discussed ways that AI/machine-generated remixing of the already created content can be used to make Wikipedia more accessible and easier to learn from

The entire mistake right there. Look no further. They saw a solution (LLMs) and started hunting for a problem.

Had they done it the right way round there might have been some useful, though less flashy, outcome. I agree many article summaries are badly written. So why not experiment with an AI that flags those articles for review? Or even just organize a community drive to clean up article summaries?

The questions are rhetorical of course. Like every GenAI peddler they don't have an interest in the problem they purport to solve, they just want to play with or sell you this shiny toy that pretends really convincingly that it is clever.

Fundamentally, I agree with you.

The page being referenced

Because the phrase "Wikimedians discussed ways that AI..." is ambiguous, I tracked down the page being referenced. It could mean they gathered with the intent to discuss that topic, or that they discussed it as a result of considering the problem.

The page gives me the impression that it's not quite "we're gonna use AI, figure it out", but more that some people put together a presentation on how they felt AI could be used to address a broad problem, and then they workshopped more focused ways to use it towards that broad target.

It would have been better if they had started with an actual concrete problem, brainstormed solutions, and then gone with one that fit, but they were at least starting with a problem domain that they thought it was applicable to.

Personally, the problems I've run into on Wikipedia are largely low-traffic topics where the content reads too much like someone copied a textbook into the page, or just awkward grammar and confusing sentences.
Such an article quickly makes it clear that someone didn't write it in an encyclopedic style from scratch.

Mathematics articles are the most obtuse I come across. I think the Venn diagram of good mathematicians and good science communicators is very close to non-intersecting.

Somebody tried to build a bridge between both groups but they ran into the conundrum that to get to the other side they would first need to get half way to that side, then get half way of the remaining distance, then half way the new remaining distance and so on an infinite number of times, and as the bridge was started from the science communicators side rather than the mathematicians side, they couldn't figure out a solution and gave up.

One study found that 51% of the summaries AI produced contained significant errors, so AI summaries are bad news for anyone who hopes to be well informed. Source: https://www.bbc.com/news/articles/c0m17d8827ko

I'm so tired of "AI". I'm tired of people who don't understand it expecting it to be magical and error-free. I'm tired of grifters trying to sell it like snake oil. I'm tired of capitalist assholes drooling over the idea of firing all that pesky labor and replacing them with machines. (You can be twice as productive with AI! But you will neither get paid twice as much nor work half as many hours. I'll keep all the gains.) I'm tired of the industrial-scale theft that apologists want to give a pass to while individuals who torrent can still get in trouble, and libraries are chronically underfunded.

It's just all bad, and I'm so tired of feeling like so many people are just not getting it.

I hope wikipedia never adopts this stupid AI Summary project.

People not getting things that seem obvious is an ongoing theme, it seems. We sat through a presentation at work by some guy who enthusiastically pitched AI to the masses. I don't mean that's what he did, I mean "enthusiasm" seemed to be his ONLY qualification. Aside from telling folks what buttons to press on the company's AI app, he didn't know SHIT. And the VP got on before and after and it was apparent that he didn't know shit, either. Someone is whispering in these people's ears and they're writing fat checks, no doubt, and they haven't a clue what an LLM is, what it is good at, nor what to be wary of. Absolutely ridiculous.

"Pause" and not "Stop" is concerning.

Is it just me, or was the addition of AI summaries basically predetermined? The AI panel probably would only be attended by a small portion of editors (introducing selection bias) and it's unclear how much of the panel was dedicated to simply promoting the concept.

I imagine the backlash comes from a much wider selection of editors.

Wikimedia has too much money, maybe this has started to create rotten tumors inside it.

I like that they are listening to their editors, I hope they don't stop doing that.

there's a summary paragraph at the top of each article which is written by people who have assholes probably. it's the whole reason to use wikipedia at this point

This was my very first thought as well. The first section of almost every Wikipedia article is already a summary.

Yes, but we didn't emit nearly enough CO2 on that one.

Good, we don't need LLMs crowbarred into everything. You don't need a summary of an encyclopedia article; it is already a broad overview of a complex topic.

Why would anyone need Wikipedia to offer the AI summaries? Literally all chat bots with access to the internet will summarize Wikipedia when it comes to knowledge based questions. Let the creators of these bots serve AI slop to the masses.

When Wikipedia starts to publish AI-generated content, it will no longer be serving its purpose, and it won't need to exist anymore.

Too late.

With thresholds calibrated to achieve a 1% false positive rate on pre-GPT-3.5 articles, detectors flag over 5% of newly created English Wikipedia articles as AI-generated, with lower percentages for German, French, and Italian articles. Flagged Wikipedia articles are typically of lower quality and are often self-promotional or partial towards a specific viewpoint on controversial topics.
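(The calibration step described there is simple to picture: pick the detector score cutoff that misfires on 1% of known-human articles, then count how many new articles land above it. A rough sketch with made-up scores and a stand-in detector:)

```python
# Sketch of the false-positive-rate calibration described above.
# The score distributions are made up; a real detector would assign
# higher scores to more "AI-like" text.
import numpy as np

rng = np.random.default_rng(0)
pre_gpt_scores = rng.beta(2, 8, size=10_000)     # known-human (pre-GPT-3.5) articles
new_article_scores = rng.beta(2, 6, size=5_000)  # newly created articles

# Threshold at the 99th percentile of human scores => ~1% false positive rate.
threshold = np.quantile(pre_gpt_scores, 0.99)

flag_rate = (new_article_scores > threshold).mean()
print(f"threshold={threshold:.3f}, flagged {flag_rate:.1%} of new articles")
```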

Human posting of AI-generated content is definitely a problem; but ultimately that's a moderation problem that can be solved, which is quite different from AI-generated content being put forward by the platform itself. There wasn't necessarily anything stopping people from doing the same thing pre-GPT; it's just easier and more prevalent now.

Human posting of AI-generated content is definitely a problem

It isn't clear whether this content is posted by humans or by AI fueled bot accounts. All they're sifting for is text with patterns common to AI text generation tools.

There wasn’t necessarily anything stopping people from doing the same thing pre-GPT

The big inhibiting factor was effort. ChatGPT produces long-form text far faster than humans, and in a form less easy to identify than the older Markov chain generators.

The fear is that Wikipedia will be swamped with slop content. Humans won't be able to keep up with the work of cleaning it out.

Well, something like it will still need to exist. In which case we can fork because it's all Creative Commons.

If I wanted an AI summary, I'd put the article into my favourite LLM and ask for one.

I'm sure LLMs can take links sometimes.

And if Wikipedia wanted to include it directly into the site...make it a button, not an insertion.

Who at Wikimedia is so out of touch that they thought that this was a good idea? They need to be replaced.

Same person who saw most American adults have a 6th grade reading level or lower?

Honestly that's the reason I thought it was a good idea at least. Might actually give them a place to start learning from and improve.

Those Americans with a 6th grade reading level or less are precisely the people who shouldn’t be reading AI summaries. They’ll lack the critical thinking and reading skills to catch on to garbage.

Simple Wikipedia already exists and is great.

Problem is they can't read Wikipedia articles in the first place. A lot of it, in particular anything STEM, is higher level reading.

What you're advocating for is the same as dropping off a physics textbook at an elementary school.

That's why I mentioned Simple Wikipedia.

This is far more readable than what an AI-generated version of the article would produce.

Didn't know that existed, and it needs more marketing. I literally have a "Daily Wikipedia Article" thing and never came across it. And maybe a different name, like Simplified Wikipedia, because I thought you meant something different.

Yeah - tbh the name sucks. I hate recommending it to students, because it feels like I’m calling them dumb.

But yes 100%. Instead of doing dumb AI shit, they should be advertising what they already have.

If someone is going to Wikipedia specifically looking for information in a STEM field, then an AI summary isn't going to help them. Odds are they can also read, because they're looking up STEM topics.

Also, is Wikipedia not available around the world, or do you just think only Americans can't read? Inflammatory just for the sake of being inflammatory, I'm guessing. Shit troll job.

I thought the AI thing was going to be rolled out only in the USA?

I think that's not possible. Wikipedia collects as little user data as possible, and providing a different UX in different countries sounds like it would already be too intrusive in that regard.

People with low reading level deserve the same attention to detail and veracity as the rest of us.

I mean, the LLM thing has a proper field for deployment: it can handle translation of articles that just don't exist in your language. But it should be a button a person clicks with their consent, not an article they get by default, and not content signed by Wikipedia itself. Nowadays, that's done by browsers themselves and their extensions.

The main issue I have as an editor is that there is no straightforward way to retrain the LLM to correct faulty output as directly or as revertibly as the existing method of editing an article's wikicode. Already, much of my time updating Wikipedia is spent parsing puffery and removing phrases like “award-winning” or “renowned”, inserted by malicious advertisers trying to use Wikipedia as a free billboard. If a Wikipedia LLM began making subjective claims instead of providing objective facts backed by citations, I would have to teach myself machine learning and get involved with the developers who manage the LLM's training. That raises the bar for editor technical competency, which Wikipedia historically has been striving to lower (e.g. the Visual Editor).
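(To illustrate the kind of sweep I mean, a toy sketch is below; the term list is mine for illustration, not any official policy list, and real puffery patrol takes human judgment:)

```python
# Toy sketch of scanning article text for "peacock" phrasing like the
# examples above. The term list is illustrative, not an official one.
import re

PEACOCK_TERMS = ["award-winning", "renowned", "world-class", "legendary"]
pattern = re.compile("|".join(map(re.escape, PEACOCK_TERMS)), re.IGNORECASE)

def flag_puffery(wikitext: str) -> list[str]:
    """Return the puffery phrases found, for a human editor to review."""
    return [m.group(0) for m in pattern.finditer(wikitext)]

print(flag_puffery("The renowned, award-winning startup FooCorp..."))
# ['renowned', 'award-winning']
```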

Why is it so damned hard for corporate to understand that most people have no use or need for AI at all?

"It is difficult to get a man to understand something, when his salary depends on his not understanding it."

— Upton Sinclair

Wikipedia management shouldn't be under that pressure. There's no profit motive to enshittify or replace human contributions. They're funded by donations from users, so their top priority should be giving users what they want, not attracting bubble-chasing venture capital.

One of the biggest challenges for a nonprofit like Wikipedia is finding cheap/free labor that administration trusts.

AI "solves" this problem by lowering your standard of quality and dramatically increasing your capacity for throughput.

It is a seductive trade. Especially for a techno-libertarian like Jimmy Wales.

It pains me to argue this point, but are you sure there isn't a legitimate use case just this once? The text says that this was aimed at making Wikipedia more accessible to less advanced readers, like (I assume) people whose first language is not English. Judging by the screenshot they're also being fully transparent about it. I don't know if this is actually a good idea but it seems the least objectionable use of generative AI I've seen so far.

Considering AI uses LLMs, and more often than not mixes metaphors, it just seems to me that the Wikimedia Foundation is asking for misinformation to be published unless there are humans to fact-check it.

Didn't they just pass a site-wide decision on the use of LLMs in creating/editing otherwise "human made" text?

Why do they need to take the human element out? Why would anyone want them to?

God I hope this isn't the beginning of the end for Wikipedia. They live and die on the efforts of volunteer editors (like Reddit relied on volunteer mods and third party tool devs). The fastest way to tank themselves is by driving off their volunteers with shit like this.

And it's absurdly easier to lose the good will they have than to rebuild it.

Wait until you learn about their budget. When they do donation drives with a sad Jimmy Wales face, they make it seem like Wikipedia is about to go offline. However, if they were only hosting Wikipedia, they already have enough money to do that pretty much until the end of time. Instead, the foundation spends a ton of money on the pet causes of the board members. The causes aren’t necessarily bad or anything, but misleading donors like that is super messed up.

I bet they will try again.

Wouldn't be surprised, since "no" as a full sentence does not exist in tech or online anymore - it's always "yes" or "maybe later/not now/remind me next time" or other crap like that...

Oh absolutely, the money-furnace Wikimedia Foundation needs to find ways to justify its own existence after all (^:

I get that the simple language option exists, and i definitely think I'm not qualified to really argue what Wikipedia should or should not do. But I wanted to share what my lemmy feed looked like when I clicked into this post and I gotta say, I sorta get it.

Good! I was considering stopping my monthly donation. They better kill the entire "machine-generated" nonsense instead of just pausing, or I will stop my pledge!

If they have enough money to burn on LLM results, they clearly have enough and I don't need to keep donating mine.

Good! I was considering stopping my monthly donation.

Ditto. I don't want to overreact, but it's not a good look.

Yes, throw out the one thing that differentiates you from the unreliable slop.

On the one hand, it’s insulting to expect people to write entries for free only to have AI just summarize the text and have users never actually read those written words.

On the other hand, the future is people copying the URL into ChatGPT and asking for a summary.

The future is bleak either way.

On the third hand some of us just want to be able to read a fucking article with information instead of a tiktok or ai generated garbage. That's wikipedia, at least it used to be before this garbage. Hopefully it stays true

The ai garbage at the top doesn’t stop you from doing that.

You are correct that it would not instantly become unusable. But when all editors with integrity have ceased to contribute in frustration, wikipedia would eventually become stale, or very unreliable.

Also, there is nothing stopping a person from using an LLM to summarize an article for them. The added benefit is that the energy and resources would only be spent on the people who wanted it, not on every single page view. I would assume the energy consumption of the latter would be significant.

I’m willing to bet they would cache the garbage ai summary… not that that makes a difference to your overall point.

Who could have known, in this day and age, that this would be met with backlash? Truly an unprecedented occurrence.

Summaries for complex Wikipedia articles would be great, especially for people less knowledgeable of the given topic, but I don't see why those would have to be AI-generated.

The top section of each Wikipedia article is already a summary of the article.

Fucking thank you. Yes, experienced editor here to add to this: that's called the lead, and that's exactly what it exists to do. Readers are not even close to starved for summaries:

  • Every single article has one of these. It is at the very beginning – at most around 600 words for very extensive, multifaceted subjects. 250 to 400 words is generally considered an excellent window to target for a well-fleshed-out article.
  • Even then, the first sentence itself is almost always a definition of the subject, making it a summary unto itself.
  • And even then, the first paragraph is also its own form of summary in a multi-paragraph lead.
  • And even then, the infobox to the right of 99% of articles gives easily digestible data about the subject in case you only care about raw, important facts (e.g. when a politician was in office, what a country's flag is, what systems a game was released for, etc.)
  • And even then, if you just want a specific subtopic, there's a table of contents, and we generally try as much as possible (without harming the "linear" reading experience) to make it so that you can intuitively jump straight from the lead to a main section (level 2 header).
  • Even then, if you don't want to click on an article and just instead hover over its wikilink, we provide a summary of fewer than 40 characters so that readers get a broad idea without having to click (e.g. Shoeless Joe Jackson's is "American baseball player (1887–1951)").

What's outrageous here isn't wanting summaries; it's that summaries already exist in so many ways, written by the human writers who write the contents of the articles. Not only that, but as a free, editable encyclopedia, these summaries can be changed at any time if editors feel like they no longer do their job somehow.

This not only bypasses the hard work real, human editors put in for free in favor of some generic slop that's impossible to QA, but it also bypasses the spirit of Wikipedia that if you see something wrong, you should be able to fix it.
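And those human-written summaries are already machine-readable, with no generation step needed. A quick sketch against Wikipedia's public REST API (the endpoint is real; the error handling and User-Agent string here are just illustrative):

```python
# Fetch the human-written lead extract and short description that
# Wikipedia already serves for every article.
import requests

def wikipedia_summary(title: str) -> dict:
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{title}"
    resp = requests.get(url, headers={"User-Agent": "summary-demo/0.1"})
    resp.raise_for_status()
    data = resp.json()
    return {
        "short_description": data.get("description"),
        "lead_extract": data.get("extract"),
    }

print(wikipedia_summary("Shoeless_Joe_Jackson")["short_description"])
# e.g. "American baseball player (1887–1951)"
```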

Yeah this screams "Let's use AI for the sake of using AI". If they wanted simpler summaries on complex topics they could just start an initiative to have them added by editors instead of using a wasteful, inaccurate hype machine

I mean, that's kinda why there's Simple English, is it not?

For English yes, but there's no equivalent in other languages.

Maybe we could generate those with AI... oh wait, I think I see the problem...

I thought they had German at least in a simplified version?

Wikipedia is already the processed food of more complex topics.

There are also external AI tools that do this just fine.

But imagine these tools generating summaries of summaries.

These summaries are useless anyways because the AI hallucinates like crazy... Even the newest models constantly make up bullshit.

It can't be relied on for anything, and it's double work reading the words it shits out and then you still gotta double check it's not made up crap.

There are 1000 ways to get a summary of a Wikipedia article. I don’t think they need to offer them directly on the site. If someone wants a machine-generated summary or translation, they already have a tool to do that. Seems like a waste of Wikipedia’s resources.

Well done (and keep fighting)!

Aaaaarrgg! This is horrible: they stopped AI summaries, which I was hoping would help corrupt a leading institution protecting free thought and the transfer of knowledge.

Sincerely, the Devil, Satan

holy shit! Satan has a lemmy account.

Lucifer is literally the angel of free thought. Satanism promotes critical thinking and the right to question authority. Wikipedia is one of the few remaining repositories of free knowledge and polluting it with LLM summaries is exactly the inscrutable, uncritiqueable bullshit that led to the Abrahamic god casting Lucifer out.

I realize your reply is facetious, but there's a reason we're dealing with christofascists and not satanic fascists. Don't do my boy dirty like that.

Apologies, no offense meant.

Forgiven and forgotten. Want a beer or other culturally appropriate cold, refreshing beverage?

Is it still possible to see those generated summaries somewhere? Would like to see what their model outputs for some articles, especially compared to the human written lead-in.

As far as I've seen they only generated one example summary, which is linked in OP. It's not good, as Wikipedians have pointed out: https://en.wikipedia.org/wiki/Wikipedia:Village_pump_(technical)#The_Dopamine_summary

Wrong community, please repost to the community for Onion articles.

I still use Wikipedia's Monobook skin, so I had no idea this was a feature.

Two other editors simply commented, “Yuck.”

What insightful and meaningful discourse.

If they’re high quality editors who consistently put out a lot of edits then yeah, it is meaningful and insightful. Wikipedia exists because of them and only them. If most feel like they do and stop doing all this maintenance for free, then Wikipedia becomes a graffiti wall/ad space and not an encyclopedia.

Thinking the immediate disgust of the people doing all the work for you for free is meaningless is the best way to nose dive.

Also, you literally had to scroll past a very long and insightful comment to get to that.

Also, you literally had to scroll past a very long and insightful comment to get to that.

No I didn't. It's in the summary, appropriately enough.

And user backlash. Seriously, wtf?

Comments from other communities

Wikipedia has in some ways become a byword for sober boringness, which is excellent.

This is both funny and also an excellent summary of why Wikipedia uniquely has an incentive not to jump on the AI bandwagon. Like a bank maintaining COBOL decades after everyone else moved on, its (goal of) reputation for reliability means that there's a strong internal conservative faction opposed to introducing new disruptive features.

Isn't the Wikipedia article usually already the summary of the topic?

If there's an article with more than 20 references to papers it's usually already abridged enough.

Just auto-generate videos with AI images and voiceover and add subway surfers gameplay on the side for those who think this slop is needed.

Correct. The function is completely unnecessary.

Wikipedia would probably be the only organization that I would trust with AI. They've been using it for a while now to flag sections that might need to be rewritten, but they don't let the AI write anything itself, only notify human editors that there might be a problem. Or, at least that was what I heard a couple of ywars ago when they talked about it last.

That is not the case here. These are not bots which flagged issues, but literally an LLM to help with writing "summaries", which is why the reaction is so different.

Yeah, I was thinking that if any organization would do AI summaries right, it would be Wikipedia. But I trust the editors the most.

For som reason, "ywars" changed your voice into that of a pirate, and it made me cackle. Thanks 💛

Lmao. Yeah, I don't use auto-complete and rarely re-read things when I write, so mistakes are bound to happen :P

Fair. I should really quit using autocomplete and stop using Gboard for privacy reasons. Honestly, I'm just a little bit away from de-googling and going graphene. Just gotta spin up immich and a few other servers.

Isn’t Wikipedia already a summary of a topic? Not sure how you can increase the density of information in the pages without making it absolutely useless.

If you want a summary, go somewhere else for it, Wikipedia is a website where you read the entire damn page if you want to learn about something, including sources.

I think something like this pretty much cannot exist without pushback. There's a reason why the high-level definitions at the start of articles are so jargon-laden, namely that experts want to get the definitions precise. Those same experts will almost necessarily puke, if you just take the jargon out.

And while it is typically possible to write an explanation without jargon, it takes a super expert to keep it precise.
Maybe they could give the "Simple English" version more of a spotlight instead, so those super experts write more of those.

A well written wikipedia article about a complex topic is already a summary!

There's also Simple English available for many pages.

https://en.wikipedia.org/wiki/Encyclopedia

An encyclopedia is a reference work or compendium providing summaries of knowledge, either general or special, in a particular field or discipline.

Obligatory cross-reference: This came up in the stubsack before 404media wrote about it.

Fuck it, repeating my joke from the earlier thread: Inviting the most pedantic nerds on Earth to critique your chatbot slop is a level of begging to be pwned that’s on par with claiming the female orgasm is a myth.

I need to check the stubsack more often.

What is stubsack? It just links to Lemmy threads.

The stubsack is the weekly thread of miscellaneous, low-to-mid-effort posts on awful.systems.

The simultaneous problem and benefit of the stubsack thread is that a good chunk of the best posts of this community are contained within them.

I knew AI would eventually come for one of the greatest things humans have ever used the internet for, but I'm so disappointed that it has come from within.

I've cancelled my monthly donations. We can't trust the Wikimedia Foundation at all, ever again. Genuinely sickening anti-human sentiment from those freaks.

It is so concerning given that they're entrusted with something so collaborative and so amazing.

time to donate my money to a different wiki that only has the noblest of intentions, wikifeet (jk)

Refreshing. An online community that wears its intentions on its sleeve.

Is there anything closer to the human soul?

I think you're deliberately setting up for this response, so: "more like human sole".

I wasn't, but that is toetally the perfect response

You should consider donating to the internet archive.

Don't worry; I already do! But, great suggestion.

Thank God we didn't get help for people digesting complex topics. Then how would they blame the experts for not making things simple enough, instead of having to actually try learning?

Also, people should learn about complex intelligent systems, and how all of their problems with AI are just problems with capitalism that will still inevitably exist even without AI/the loom.

hey dawg if you want to be anti-capitalist that’s great, but please interrogate yourself on who exactly is developing LLMs and who is running their PR campaigns before you start simping for AI and pretending like a hallucination engine is a helpful tool in general and specifically to help people understand complex topics where precision and nuance are needed and definitely not fucking hallucinations. Please be serious and for real

points at literally every other technology or piece of shared socio-economic infrastructure

gestures more heavily

also checks your sources, whether it's wikipedia, LLMs, or humans! all confabulate!

Dis you:

could you explain how?
or how the examples i gave are not as valid to your current direction of critique?

i'm not saying 'i'm intelligent' or 'the system will not abuse these tools'

are you suggesting my understanding is overfit to a certain niche, and there is a flagrant blindspot that wasn't addressed by my earlier comment?

also i use uncommon words for specificity, not to obfuscate. if something hasn't made sense, i would also elaborate.
(we also have modern tools to help unravel such things as well, if you don't have a local tutor for the subject.)

or we can just give inaccurate caricatures of each other, and each-others points of view.
surely that will do something other than feed the ignorance and division driven socio-economic paperclip maximizer that we are currently stuck in.

holy shit I’m upgrading you to a site-wide ban

so many paragraphs and my eyes don’t want any of them

Note to the Peanut gallery: this guy knows about paperclipmaxxing but not this more famous comic. Curious. lmfao

AI doesn't help anyone, it's just corporate slop.

You learn to digest deep subjects by reading them.

yes you need to read things to understand them,
but also going balls deep into a complex concept or topic with no lube can be pretty rough, and deter the attempt, or future attempts.

also do you know what else is corporate slop?
the warner/disney designed art world?
every non-silicon paperclip maximizing pattern?
the software matters more than the substrate.

the pattern matters more than the tool.

people called digital art/3d art 'slop' for the same reason.

my argument was the same back then.
it's not the tool, it's the system.

'CGI doesn't help anyone'

attacking the tool of CGI doesn't help anyone either.

that being said... AI does literally help some people. for many things.
google search was my favourite AI tool 25 years ago, but it's definitely not right now.

the slop algorithms were decided by something else even before that. see: enshittification and planned obsolescence.

aka, overfitting towards an objective function in the style of Goodhart's law.

also you can read a 'thing'
but if you're just over-fitting without making any transferable connections, you're only feeding your understanding of that state-space/specific environment. also other modalities are important. why LLMs aren't 'superintelligent' despite being really good with words. that's an anthropocentric bias in understanding intelligent systems. i know a lot of people who read self help/business novels,
which teach flawed heuristics. which books unlearn flawed heuristics?

early reading can lead to better mental models for interacting with counterfactual representations. can we give mental tools for counterfactual representation some hype?

could you dive into that with no teachers/AI to help you? would you be more likely to engage with the help?

it's a complicated situation, but overfitting binary representations is not the solution to navigating complexity.

god I looked at your post history and it’s just all this. 2 years of AI boosterism while cosplaying as a leftist, but the costume keeps slipping

are you not exhausted? you keep posting paragraphs and paragraphs and paragraphs but you’re still just a cosplay leftist arguing for the taste of the boot. don’t you get tired of being like this?

that being said… AI does literally help some people. for many things. google search was my favourite AI tool 25 years ago, but it’s definitely not right now.

lol

yes you need to read things to understand them

OK, here's your free opportunity to spend more time doing that. Bye now.

AI is a pseudoscience that conflates a plagiarism-fueled lying machine with a thinking, living human mind. Fuck off.