Google has announced that it’s removing gender binary labels from its AI tool and will no longer use them in images of people, as part of what appears to be the company’s latest effort at inclusivity.
What’s changed: The company, in an email to developers, announced the change to its Cloud Vision API tool, which is used to identify faces, landmarks, explicit content, and other features in images, Business Insider reports. Instead of labeling pictures as “man” and “woman,” the tool will now tag them as “person,” to avoid instilling human bias in its AI tech.
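For developers consuming the API, the practical effect is that code which previously branched on gendered labels now sees only a neutral one. A minimal illustrative sketch of the remapping (the `normalize_label` helper and the label strings are hypothetical, not Google's actual API response format):

```python
# Hypothetical sketch: how label output changes after Google's update.
# "man" and "woman" are no longer returned as distinct labels; both
# collapse into the neutral "person" tag.
GENDERED_LABELS = {"man", "woman"}

def normalize_label(label: str) -> str:
    """Map gendered person labels to the neutral 'person' tag."""
    return "person" if label.lower() in GENDERED_LABELS else label

labels = ["man", "dog", "woman", "bicycle"]
print([normalize_label(l) for l in labels])
# → ['person', 'dog', 'person', 'bicycle']
```

Any client code that relied on distinguishing “man” from “woman” in Cloud Vision results would need to be updated, since that signal is simply no longer emitted.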
Why? Google cited its ethical rules on AI and argued that a person’s gender can’t be determined just by how they look in a photo.
“Given that a person’s gender cannot be inferred by appearance, we have decided to remove these labels in order to align with the Artificial Intelligence Principles at Google, specifically Principle #2: Avoid creating or reinforcing unfair bias,” the email read.
Here’s what its “Principle #2” says:
“AI algorithms and datasets can reflect, reinforce, or reduce unfair biases. We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impact on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.”
Responses from developers were overwhelmingly positive.
“Anytime you automatically classify people, whether that’s their gender, or their sexual orientation, you need to decide on which categories you use in the first place — and this comes with lots of assumptions,” said Frederike Kaltheuner, a tech policy fellow at Mozilla.
“Classifying people as male or female assumes that gender is binary. Anyone who doesn’t fit it will automatically be misclassified and misgendered. So this is about more than just bias — a person’s gender cannot be inferred by appearance. Any AI system that tries to do that will inevitably misgender people.”
At least one developer took issue with the tweak, attributing it to “political correctness.”
“I don’t think political correctness has room in APIs. If I can 99% of the time identify if someone is a man or woman, then so can the algorithm. You don’t want to do it? Companies will go to other services,” the developer wrote.
Google’s past troubles: Google has faced scrutiny over its image recognition tools in the past. In 2015, a software engineer said that Google Photos’ image recognition algorithms tagged his two black friends as “gorillas.” Despite pledging to fix the issue, Wired reported in 2018 that the company had simply blocked its AI from recognizing gorillas.
The AI principles were published in 2018 after Google employees criticized the company’s work on a Pentagon drone project; Google subsequently promised that it would not create AI-powered weaponry and that it would stick to its AI rules, including the one cited above.