News
A health care algorithm makes Black patients substantially less likely than their white counterparts to receive important medical treatment. The major flaw affects millions of patients and was ...
For example, algorithms used in facial recognition technology have in the past shown higher identification rates for men than for women, and for white individuals than for those of non-white origin.
Previous adversarial examples have largely been designed in “white box” settings, where computer scientists have access to the underlying mechanics that power an algorithm. In these scenarios ...
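As a concrete illustration of what white-box access enables, here is a minimal sketch of the fast gradient sign method (FGSM), a standard white-box attack: because the attacker can read the model's gradients, a small adversarial perturbation can be computed directly. The PyTorch model, loss function, and epsilon value are assumptions for illustration, not details from the article above.

```python
import torch

def fgsm_attack(model, loss_fn, x, y, eps=0.03):
    """White-box FGSM sketch: with access to the model's internals,
    the attacker backpropagates to get the gradient of the loss with
    respect to the input, then nudges every pixel in the direction
    that most increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Clamp keeps the perturbed image in the valid [0, 1] pixel range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
```

In a black-box setting, by contrast, the attacker has no access to gradients and must probe the model through queries alone, which is what makes that scenario harder.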
A study published Thursday in Science found that a health care risk-prediction algorithm, a prominent example of a class of tools applied to more than 200 million people in the U.S., demonstrated racial bias ...
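Studies like this hinge on a simple kind of audit: hold the algorithm's risk score fixed and compare how sick patients in each group actually are. A minimal sketch of that comparison follows; the pandas dataframe and the column names (`score`, `race`, `chronic_conditions`) are hypothetical stand-ins, not the study's actual data or code.

```python
import pandas as pd

def audit_at_threshold(df, score_col="score", group_col="race",
                       need_col="chronic_conditions", pct=90):
    """Among patients the algorithm flags as high-risk (above the
    given score percentile), compare average measured health need
    across groups. A large gap means the score understates one
    group's need, i.e., equally sick patients get unequal scores."""
    cutoff = df[score_col].quantile(pct / 100)
    flagged = df[df[score_col] >= cutoff]
    return flagged.groupby(group_col)[need_col].mean()
```

The Science study found exactly this pattern: at the same risk score, Black patients were considerably sicker than white patients, because the algorithm used health costs as a proxy for health needs.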
I love both of these examples, because I love the idea that we can take our own democratic action to make the world a bit less complicated. Alas, it is not that simple.
These “sniffing algorithms”—used, for example, by a sell-side market maker—have the built-in intelligence to identify the existence of any algorithms on the buy side of a large order.
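Real sniffing logic is proprietary, but the idea can be sketched: an execution algorithm slicing a large parent order tends to leave a statistical footprint of similar-sized, regularly spaced child orders. The thresholds and the (timestamp, size) representation below are assumptions made purely for illustration.

```python
from statistics import pstdev, mean

def looks_like_slicing(orders, max_size_cv=0.1, max_gap_cv=0.2):
    """Toy heuristic for spotting a buy-side execution algorithm:
    child orders on one side of the book with unusually uniform
    sizes and arrival intervals suggest a machine working a large
    parent order. `orders` is a list of (timestamp, size) tuples."""
    if len(orders) < 5:
        return False
    sizes = [size for _, size in orders]
    gaps = [t2 - t1 for (t1, _), (t2, _) in zip(orders, orders[1:])]
    size_cv = pstdev(sizes) / mean(sizes)  # coefficient of variation
    gap_cv = pstdev(gaps) / mean(gaps)
    return size_cv < max_size_cv and gap_cv < max_gap_cv
```

Production systems are far more sophisticated, but the principle is the same: regularity that a human trader would not produce.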
Last month, Twitter users uncovered a disturbing example of bias on the platform: An image-detection algorithm designed to optimize photo previews was cropping out Black faces in favor of white ones.
But algorithms are nothing more than computer programs making decisions based on rules: either rules that we gave them, or rules they figured out themselves based on examples we gave them.
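That distinction between rules we write and rules a program learns from examples can be made concrete in a few lines. The loan-approval setting, feature names, and thresholds below are invented purely for illustration.

```python
from sklearn.tree import DecisionTreeClassifier

# A rule we gave the program directly:
def approve_loan(income, debt):
    return income > 50_000 and debt / income < 0.4

# A rule the program figures out itself from examples we gave it:
X = [[60_000, 10_000], [30_000, 20_000],    # [income, debt]
     [80_000, 5_000], [25_000, 15_000]]
y = [1, 0, 1, 0]                            # past approve/deny decisions
learned = DecisionTreeClassifier().fit(X, y)

print(approve_loan(55_000, 12_000))         # hand-written rule
print(learned.predict([[55_000, 12_000]]))  # learned rule
```

In the second case the rule is only as good as the examples: if the historical decisions were biased, the learned rule inherits that bias, which is the thread running through the stories above.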