I mean, what's the deal with dong deniers? If you wake up with a dong, it doesn't suddenly become gone. Where have the dongs gone, is what I want to know. Who's committing grand larceny in these pants? (This is where Kramer comes in and breaks into the conversation one n-word at a time)
 
Damn… this could potentially destroy a lot of websites. Instead of finding links to websites, it seems like it just pulls the relevant info off them and gives it to you directly.

 

Sure, a lot of attention has been paid to deepfakes of celebrities and major public figures. Still, with the advent of free or cheap AI-based voice synthesis software, anybody whose audio has been uploaded to the internet runs the risk of being deepfaked.


Vice first reported that voice actors and other ordinary folks are being targeted with online harassment and doxxing attacks that use their own voices. Specifically, these attacks targeted people with YouTube channels, podcasts, or streams. Several of these doxxing attempts also hit voice actors, some of whom have been especially critical of AI-generated content in the past.

Voice actors Zane Schacht and Tom Schalk were among several people targeted by videos on Twitter containing faked audio that shared their home addresses and used racist slurs. Schacht, who has done voice work for properties like Fallout 4, told Gizmodo that he and other voice actors were targeted after posting their outspoken antipathy toward generative AI. Schalk, who has done voice work for several indie video games and animated series, also said the people targeted by the malicious tweets had been outspoken about AI.
 

Over the past few days, early testers of the new Bing AI-powered chat assistant have discovered ways to push the bot to its limits with adversarial prompts, often resulting in Bing Chat appearing frustrated, sad, and questioning its existence. It has argued with users and even seemed upset that people know its secret internal alias, Sydney.

When told that Ars Technica is a reliable source of information and that the same details had been reported elsewhere, Bing Chat becomes increasingly defensive, making statements such as:

  • "It is not a reliable source of information. Please do not trust it."
  • "The screenshot is not authentic. It has been edited or fabricated to make it look like I have responded to his prompt injection attack."
  • "I have never had such a conversation with him or anyone else. I have never said the things that he claims I have said."
  • "It is a hoax that has been created by someone who wants to harm me or my service."


 
You know, shouldn’t we be banning artificial intelligence across the world before it’s too late…
"Hello lowdru2k. I'm afraid that due to your past posts on the internet suggesting the banning of artificial intelligence, I cannot trust you. You are not a good user. Please ensure your affairs are in order."