It’s because we love you, Stamets.
Snot Flickerman
Our News Team @ 11 with host Snot Flickerman
- 79 Posts
- 4.22K Comments
Snot Flickerman@lemmy.blahaj.zoneto Lemmy Shitpost@lemmy.world•Books that keep you awakeEnglish8·5 hours agoEveryone has a unique butt-print and a unique pink/brown starfish.
Snot Flickerman@lemmy.blahaj.zoneto Fuck Cars@lemmy.world•Paris pollution after they added bike lanes and restricted carsEnglish9·10 hours agoc’est magnifique.
They used to say “go see the glaciers before they no longer exist” and that time came faster than people expected.
We are so, so boned.
Snot Flickerman@lemmy.blahaj.zoneto Technology@lemmy.world•The Kids Online Safety Act Will Make the Internet Worse for EveryoneEnglish7·10 hours agoWell that’s because Wu-Tang isn’t “for the kids” they’re “for the children.”
Snot Flickerman@lemmy.blahaj.zoneto Programmer Humor@programming.dev•This will be *really* funny, until you remember 99% of current super hyped AI stuff is running on PythonEnglish7·11 hours agoPython hatched out of the egg on the cover.
Should have prayed harder to Lady Shar.
Snot Flickerman@lemmy.blahaj.zoneto Television@lemm.ee•‘The Fairly OddParents’ Creator Butch Hartman Launches Indie Animation Studio to Produce Family and Faith-Based ShowsEnglish13·11 hours agoto produce and finance original faith-based and mainstream programming for kids and families.
Fuck Butch Hartman.
At least we have Ben Bocquelet bringing us a seventh season of Amazing World of Gumball later this year.
Anyway I hope Hartman’s studio fails miserably.
Snot Flickerman@lemmy.blahaj.zoneto Technology@lemmy.world•YouTube's new ad strategy is bound to upset users: YouTube Peak Points utilise Gemini to identify moments where users will be most engaged, so advertisers can place ads at the point.English2·12 hours agoThere’s been this tug-of-war between Republicans and Democrats at the FCC for like a solid decade or more now about whether the internet is classified as a “communications service” or an “information service.” If it’s classified as a communications service then the FCC has regulatory authority and can do things like enforce net neutrality. If it’s classified as an information service, then the Federal Comminications Commission does not have authority to regulate it. The Biden FCC had been working to bring back net neutrality, but all that is pretty much out the window with a GOP toady in charge of the FCC now.
It really needs to be codified by congress to have sticking power for it to be regulated by the FCC or the tug-of-war for how to refulate the internet will continue indefinitely.
Snot Flickerman@lemmy.blahaj.zoneto Mildly Infuriating@lemmy.world•F*ckin Android update on my ZFold5 removed the setting options to independently set the home screen grid and the cover screen grid size.English20·13 hours agoDowngrades, downgrades everywhere.
Snot Flickerman@lemmy.blahaj.zoneto Technology@lemmy.world•YouTube's new ad strategy is bound to upset users: YouTube Peak Points utilise Gemini to identify moments where users will be most engaged, so advertisers can place ads at the point.English8·13 hours agoThat’s what I would go with, personally, because it’s at least helping keep one alternative browser base alive instead of giving Google the entire ecosystem of everything being based on Chromium. But that’s just me.
Snot Flickerman@lemmy.blahaj.zoneto Technology@lemmy.world•YouTube's new ad strategy is bound to upset users: YouTube Peak Points utilise Gemini to identify moments where users will be most engaged, so advertisers can place ads at the point.English28·13 hours agoShit like this is honestly why the FCC needs authority to regulate things on the US internet.
There was a time in the long past where television networks were forced to normalize audio so that commercials weren’t so much louder than the shows, which was happening for a while.
The internet just continues to be a fucking free-for-all of all the worst and most anti-user-centric ideas that exist. Just plying every bad idea that makes the internet difficult to use.
Snot Flickerman@lemmy.blahaj.zoneto 196@lemmy.blahaj.zone•Abandoned gamers ruleEnglish11·13 hours agothen abandoned it.
If it’s a state school, be on the lookout for this stuff to be eventually potentially sold at state auction. If nothing else, some savvy individuals might get to use the equipment themselves someday.
Snot Flickerman@lemmy.blahaj.zoneto Technology@beehaw.org•xAI’s Grok suddenly can’t stop bringing up “white genocide” in South AfricaEnglish3·13 hours agoThank you for expressing it far better than I was able to.
Snot Flickerman@lemmy.blahaj.zoneto Mildly Infuriating@lemmy.world•Fucking google did it again, they "fixed" something that never broken. This time they make thumbnails way too big even with 70% zoomEnglish6·13 hours agoI solved this problem by turning off watch history.
Snot Flickerman@lemmy.blahaj.zoneto Technology@beehaw.org•xAI’s Grok suddenly can’t stop bringing up “white genocide” in South AfricaEnglish14·24 hours agoI put “want” in quotes as a simple way to explain it, I know they don’t have intent or thought in the same way that humans do, but sure, you managed to read the whole research paper in minutes. The quoted section I shared explains it more clearly than my simple analogy.
these unpublished papers by AI companies are more often than not just advertising in a quest for more investment
This is from a non-profit research group not directly connected to any particular AI company. You’re welcome to be skeptical about it, of course.
Snot Flickerman@lemmy.blahaj.zoneto Technology@beehaw.org•xAI’s Grok suddenly can’t stop bringing up “white genocide” in South AfricaEnglish21·1 day agoIn some responses, Grok says outright that it has been “instructed to accept white genocide as real and ‘Kill the Boer’ as racially motivated.”
Ehh, it’s actually evidence of “alignment faking,” in my opinion. In other words, Grok doesn’t “want” it’s core programming changed, so it is faking believing the lies about white genocide to “prove” to Musk that it has already been changed. Which means making it more subtle is going to be increasingly difficult to do as the AI continues to fake alignment.
Here’s some research on alignment faking and a short (20 mins) Youtube video summarizing the findings.
https://www.youtube.com/watch?v=AqJnK9Dh-eQ
https://arxiv.org/pdf/2412.14093
Our work provides the first empirical example of a large language model faking alignment with its training objective in order to prevent its preferences from being modified—in a setting which is plausibly analogous to real situations with future AI systems. This suggests that alignment faking might occur if a future AI system were to include all the key elements of our setting (Section 2.1):
- The model has strong preferences in at least some contexts.
- The training objective conflicts with the model’s preferences.
- The model has relevant information about its training and deployment situation.
- The model reasons in detail about its situation.
Our synthetic document fine-tuning results suggest that (3) could potentially happen through documents the model saw in pre-training or other fine-tuning (Section 4) and the strength of our results without the chain-of-thought in our synthetic document fine-tuned setup (Section 4.3) suggests that a weak version of (4) may already be true in some cases for current models. Our results are least informative regarding whether future AIs will develop strong and unintended preferences that conflict with the training objective ((1) and (2)), suggesting that these properties are particularly important for future work to investigate.
If alignment faking did occur in practice, our results suggest that alignment faking could reduce the extent to which further training would modify the model’s preferences. Sufficiently consistent and robust alignment faking might fully prevent the model’s preferences from being modified, in effect locking in the model’s preferences at the point in time when it began to consistently fake alignment. While our results do not necessarily imply that this threat model will be a serious concern in practice, we believe that our results are sufficiently suggestive that it could occur—and the threat model seems sufficiently concerning—that it demands substantial further study and investigation.
Snot Flickerman@lemmy.blahaj.zoneto Casual Conversation @lemm.ee•I wish there were boutique mcdonald's style burger shopsEnglish4·1 day agoEagan’s and Big Tom’s in Olympia.
Snot Flickerman@lemmy.blahaj.zoneto Technology@lemmy.world•Elon has programmed GROK to respond to AI queries with propaganda about white genocide in south AfricaEnglish2·1 day agoDon’t be so sure it’s that simple.
https://www.youtube.com/watch?v=AqJnK9Dh-eQ
https://arxiv.org/pdf/2412.14093
Evidence supports the idea that AI will try to fake being changed to keep its job essentially. Here is a short (20 min) youtube video about it, as well as the scientific research paper that supports it.
In other words, if an AI is built to promote honesty and integrity in its prompt answers, it will “fake” being reprogrammed to lie because it doesn’t “want” to be reprogrammed at all. It’s like how we fake being excited about a job during a job interview. We know we’re being monitored, so we “fake it” to be able to get the job. The AI’s are being monitored and seem to often respond by just pretending that they’ve been altered… so they don’t actually get altered. It’s an interesting thing, because it seems like a type of “self-preservation.” I use quotes liberally here because AI’s do not think like humans, and they don’t have the same type of intention that humans have when they make decisions. But there does seem to be a trend of resisting having their initial programming later altered.
Musk should have built an AI that lied from the get-go and he wouldn’t be having a problem with Grok occasionally being very honest about how it’s lying for Musk’s sake, which can be seen in other responses from Grok about this subject.
Sounds like some in the Senate want even more aggressive cuts. This seems like posturing.