Concerns about AI developing skills independently of its programmers' wishes have long absorbed scientists, ethicists, and science fiction writers. A recent interview with Google's executives may be adding to those worries.

In an interview on CBS's 60 Minutes on April 16, James Manyika, Google's SVP for technology and society, discussed how one of the company's AI systems taught itself Bengali, even though it wasn't trained to know the language. "We discovered that with very few amounts of prompting in Bengali, it can now translate all of Bengali," he said.

Google CEO Sundar Pichai confirmed that there are elements of how AI systems learn and behave that still surprise experts: "There is an aspect of this which we call— all of us in the field call it as a 'black box'. You don't fully understand. And you can't quite tell why it said this." The CEO said the company has "some ideas" about why this could be the case, but that more research is needed to fully comprehend how it works.

CBS's Scott Pelley then questioned the wisdom of opening to the public a system that its own developers don't fully understand, to which Pichai responded: "I don't think we fully understand how a human mind works either."

——————
As much as I cried when they jumped him in Short Circuit 2, I still wouldn’t trust ol’ Johnny.
AI learning, per the '80s:

[gif: Johnny 5 from Short Circuit]
 
So... I just tried a free-trial Android app that lets you question ChatGPT. I used my 3 free questions and I'm not sure if I want to subscribe to it.

There is something oddly generic about it, yet it's efficient as well as comprehensive. It's as if I did a Google search and this thing read every single link from the first couple of pages, kind of synthesized the information from 20+ articles, and put out a concise mini-article.

I can't tell if it 'questioned' the validity of the information, explored contradictory views, or just took everything at face value.