YouTube will ask you to rethink posting that comment if AI thinks it’s offensive





© Provided by CNET
YouTube has more than 2 billion monthly visitors. Angela Lang/CNET

YouTube will start asking commenters to reconsider before posting a comment that Google’s artificial intelligence flags as potentially offensive, the company said Thursday. The new prompt suggests that commenters review YouTube’s community guidelines if they’re “not sure whether the post is respectful,” then gives them the option to either edit the comment or post it anyway.





The new comment prompts will suggest a commenter reconsider their post if it’s potentially offensive. YouTube

“To encourage respectful conversations on YouTube, we’re launching a new feature that will warn users when their comment may be offensive to others, giving them the option to reflect before posting,” YouTube said in a blog post announcing the feature and other measures meant to improve inclusivity on the platform. 



When AI sees a man, it thinks “official.” A woman? “Smile”


Sam Whitney (illustration), Getty Images

Men often judge women by their appearance. Turns out, computers do too.

When US and European researchers fed pictures of members of Congress to Google’s cloud image recognition service, the service applied three times as many annotations related to physical appearance to photos of women as it did to men. The top labels applied to men were “official” and “businessperson”; for women they were “smile” and “chin.”

“It results in women receiving a lower status stereotype: that women are there to look pretty and men are business leaders,” says Carsten Schwemmer, a postdoctoral researcher at GESIS Leibniz Institute for the Social Sciences in Köln, Germany. He worked on the study, published last week, with researchers from New York University, American University, University College Dublin, University of Michigan, and nonprofit California YIMBY.

The researchers administered their machine vision test to Google’s artificial intelligence image service and those
