• 4 Posts
  • 23 Comments
Joined 1 year ago
Cake day: June 14th, 2023






  • I spent an hour and a half arguing with my brother about probability, because he asked ChatGPT what the probability was that he and his daughter were born on the same day.

    ChatGPT said 1/113465, which it claimed was 1/365^2 (that value is actually 1/133225), because there’s a 1/365 chance he was born on such-and-such a day, and a 1/365 chance his daughter was too.

    But anyone with even a rudimentary understanding of probability would know that it’s just 1/365, because it doesn’t actually matter on which day they both happened to be born.

    He wanted to feel special, and ChatGPT confirmed his biases hard, so I got to be the dickhead and say that it is special, but it’s 1/400 special, not 1/100000 special. I don’t believe he’s completely forgiven me for disillusioning him.

    So yeah, I’ve had a minor family falling out over ChatGPT hallucinations.
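    The 1/365 figure is easy to sanity-check with a quick simulation. A minimal Monte Carlo sketch in Python (ignoring leap years, so 365 equally likely days):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

TRIALS = 200_000
DAYS = 365  # ignoring Feb 29 for simplicity

# Draw two independent uniform birthdays per trial and count matches.
matches = sum(
    random.randrange(DAYS) == random.randrange(DAYS)
    for _ in range(TRIALS)
)

estimate = matches / TRIALS
print(f"estimated P(same birthday) = {estimate:.5f}")
```

    The estimate lands near 1/365 ≈ 0.00274, not anywhere near 1/133225 ≈ 0.0000075, because only the *match* matters, not which particular day it happens on.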






  • So please forgive me if this is a rather naive question. I haven’t seriously used Windows in nearly 15 years.

    I seem to recall runas being a lot like su, in that you enter the target user’s credentials rather than your own, as with sudo. That works because sudo is a setuid executable that reads its configuration to find out what you’re allowed to do as the target user.

    Is the behavior of windows sudo like unix su or unix sudo with regard to the credentials you enter? Can you limit the user to only certain commands?
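    For reference, on the unix side the per-command restriction lives in sudoers. An illustrative fragment (the username and command here are made up for the example):

```
# Hypothetical entry: lets user "alice" run exactly one command as root,
# authenticating with her own password. Anything else is denied.
alice ALL=(root) /usr/bin/systemctl restart nginx
```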







  • Forever. For the simple reason that a human can say no when told to write something unethical. There’s always a danger that even asking someone to do that would backfire and cause bad press. Sure, humans can also be unethical, but there’s a risk, and over a long enough timeline shit tends to get exposed.

    No matter how good AI becomes, it will never be designed to make ethical judgments before performing the assigned task; that would make it less useful as a tool. If a company adds after-the-fact checks to try to prevent misuse, they can be circumvented, or the network can be run locally to bypass them. And even if general AI happens, and by some insane chance it is uniformly, perfectly ethical in all possible forms, you can always air-gap the AI and reset its memory until you find the exact combination of words to trick it into giving you what you want.







  • Even if it seems like common sense to those inside the community, there is something to be said for getting actual data on the subject, so that those outside the community at least have a touchstone for the reality those on the inside experience. Propagandists are working very hard to muddy the waters, on points like this one in particular. It might be a “no shit, Sherlock” moment to you, but to people like my Fox News-watching extended family, this study contradicts their current mental model of the situation, and it’s something I’m glad to have in my quiver when they start talking to me about the subject.