It’s kind of weird. I was toying with the Bing version, just asking silly questions really, and asked it who would win between two current wrestlers. It refused to answer, saying it was an unfair comparison because they were from different eras.
I pointed out they were both current; it said sorry, you’re right, but it still refused to give me an answer, and it ended the conversation on me when I tried again.
I get the impression that’s just Bing’s way of handling conflicts. I’ve noticed that if I correct it, ChatGPT will usually apologize and agree with what I say, while Bing will say it doesn’t want to talk about it anymore and make you start a new conversation.
Perhaps it was some sort of “ethics” avoidance, thinking you were trying to use it for betting purposes?
Yeah, it was still weird though, because it had been answering similar questions beforehand, and it actually did a search before acknowledging it was incorrect.
I think it probably can admit it was wrong but is still limited to its first decision.
I think if you tell it it is wrong it will always agree with you (regardless of how right it might have been, or otherwise). Presumably it is designed that way so it is always non-confrontational.
Since the AI does not have opinions of its own and lacks the ability to tell fantasy from fact, a human can usually convince an AI that just about anything is true, given long enough to argue. The easiest way to make sure that doesn’t happen is to prevent the argument from taking place at all, either by locking the AI into a canned safety response or by shutting the conversation down.
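Conceptually, that kind of guardrail could be as simple as a pushback counter wrapped around the model. Here’s a minimal Python sketch of the idea; everything in it (the function names, the threshold, the keyword check, the canned reply) is made up for illustration, and this is certainly not how Bing actually implements it:

```python
# Hypothetical sketch of an "end the argument" guardrail. NOT real Bing code:
# the idea is just to count how often the user pushes back, and past a
# threshold, stop engaging (canned safety response + close the session).

MAX_PUSHBACKS = 2
SAFETY_RESPONSE = "I'm sorry, but I prefer not to continue this conversation."

def looks_like_pushback(message: str) -> bool:
    # Toy stand-in for a real classifier: just checks for correction phrases.
    return any(p in message.lower() for p in ("you're wrong", "that's incorrect", "actually"))

def handle_turn(message: str, pushbacks: int) -> tuple[str, int, bool]:
    """Returns (reply, updated pushback count, whether the session stays open)."""
    if looks_like_pushback(message):
        pushbacks += 1
    if pushbacks >= MAX_PUSHBACKS:
        # Lock into the safety response and end the session rather than argue.
        return SAFETY_RESPONSE, pushbacks, False
    return f"(normal model reply to: {message!r})", pushbacks, True

# Example: a second correction trips the guardrail and the session ends,
# which would match the "start a new conversation" behavior described above.
reply, count, alive = handle_turn("You're wrong, they are both current.", 1)
print(reply, alive)
```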