Trump Threatens Military Spending Veto In Social Media Bias Battle

Donald Trump has threatened to veto a major military funding bill unless Congress abolishes a liability law protecting social media firms regularly accused of bias by the president.

Section 230 of the Communications Decency Act gives tech companies such as Facebook and Twitter immunity from legal action over content posted by their users.

Both platforms have found themselves the target of incandescent fury from Trump in recent weeks after they began attaching disclaimers to posts in which the president claimed that voter fraud had cost him last month’s election.

Trump has responded by doubling down on a months-old push to abolish the statute, a move that has been backed by his congressional allies.

“Section 230… represents a serious threat to our national security and the integrity of the elections,” the president tweeted on Tuesday night.

Read More

Social media’s problem isn’t bias, it’s advertising. And that we can fix.

Remember when we all thought the internet would miraculously make the world better? That was then, and now we know the truth: bad people still do bad things. They just do it on the internet.

The ad-driven dynamic of commercial social media makes it profitable to drive outrage. Thoughtful and fact-based dialog is the first casualty.

Furthermore, automation has made it profitable to give each user a view of the world that maximizes engagement, without any sense of proportion or reality. The baseless Q fantasies are a case in point.
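
As a rough sketch of that dynamic (not any platform’s actual ranking model), consider a feed that orders posts purely by a predicted engagement score; every field and number here is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_engagement: float  # hypothetical model score (clicks, shares, dwell time)

def rank_feed(posts: list[Post], limit: int = 10) -> list[Post]:
    """Order a feed purely by predicted engagement.

    Nothing here rewards accuracy or proportion, so if outrage drives
    engagement, outrage-bait rises to the top by construction.
    """
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)[:limit]

feed = rank_feed([
    Post("measured-analysis", 0.12),
    Post("outrage-bait", 0.87),
    Post("family-photos", 0.35),
])
print([p.post_id for p in feed])  # ['outrage-bait', 'family-photos', 'measured-analysis']
```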

The Q delusions exploit defects in human information processing. At a more general level, our social media giants exploit the same defects to sell ads, without giving us the social contact we crave, especially in a pandemic.

Underneath the neighborly and family content are social network algorithms designed to drive users to more extreme content. Facebook in particular has repeatedly

Read More

Training AI algorithms on mostly smiling faces reduces accuracy and introduces bias, according to research

Facial recognition systems are problematic for a number of reasons, not least that they tend to exhibit prejudice against certain demographic groups and genders. But a new study from researchers affiliated with MIT, the Universitat Oberta de Catalunya in Barcelona, and the Universidad Autonoma de Madrid explores another problematic aspect that has received less attention so far: bias toward certain facial expressions. The coauthors claim that the effect of expressions on facial recognition systems is “at least” as great as that of wearing a scarf, hat, wig, or glasses, and that the datasets these systems are trained on are highly biased in this regard.
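
The kind of measurement behind that claim is easy to sketch: group verification results by the expression label of the probe image and compare accuracy across groups. The records below are invented placeholders, not the study’s data or pipeline:

```python
from collections import defaultdict

# Hypothetical verification results: (expression label, whether the match was correct)
results = [
    ("smiling", True), ("smiling", True), ("smiling", True), ("smiling", True),
    ("neutral", True), ("neutral", False), ("neutral", True),
    ("frowning", True), ("frowning", False), ("frowning", False),
]

per_expression = defaultdict(lambda: [0, 0])  # expression -> [correct, total]
for expression, correct in results:
    per_expression[expression][0] += int(correct)
    per_expression[expression][1] += 1

for expression, (correct, total) in sorted(per_expression.items()):
    print(f"{expression}: {correct / total:.0%} accuracy on {total} probes")
```

A real audit would also have to account for training-set composition, since the headline finding is that these datasets skew toward smiling faces in the first place.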

The study adds to a growing body of evidence that facial recognition is susceptible to harmful, pervasive prejudice. A paper last fall by University of Colorado, Boulder researchers demonstrated that AI from Amazon, Clarifai, Microsoft, and others maintained accuracy rates above 95% for cisgender men and women but misidentified

Read More

Stanford and Carnegie Mellon find race and age bias in mobility data that drives COVID-19 policy

Smartphone-based mobility data has played a major role in responses to the pandemic. Describing the movement of millions of people, location information from Google, Apple, and others has been used to analyze the effectiveness of social distancing policies and to probe how different sectors of the economy have been affected. But a new study from researchers at Stanford and Carnegie Mellon finds that particular groups of people, including older and nonwhite U.S. voters, are less likely to be captured by mobility data than demographic majorities. The coauthors argue that these groups could be disproportionately harmed if biased mobility data is used to allocate public health resources.
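
One simple form of the audit the researchers describe is to compare each group’s share of the mobility panel with its share of a ground-truth population such as a census or voter file; a coverage ratio well below 1 flags underrepresentation. The figures below are made up for illustration:

```python
# Hypothetical group shares in a mobility panel vs. the true population
mobility_share = {"age 18-34": 0.45, "age 35-64": 0.42, "age 65+": 0.13}
population_share = {"age 18-34": 0.30, "age 35-64": 0.48, "age 65+": 0.22}

for group, true_share in population_share.items():
    ratio = mobility_share[group] / true_share
    flag = "underrepresented" if ratio < 0.9 else "ok"
    print(f"{group}: coverage ratio {ratio:.2f} ({flag})")

# A policy allocating resources from raw mobility counts would
# underweight the 65+ group in this example.
```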

Analytics providers like Factual, Radar, and PlaceIQ obtain data from opt-in location-sharing apps but rarely disclose which apps feed into their datasets, preventing policymakers and researchers from understanding who’s represented. (Prior work has shown sociodemographic and age biases in smartphone ownership, with children and the

Read More