Cross-posted from: https://feddit.de/post/5357539
Original link: https://www.theguardian.com/technology/2023/nov/02/whatsapps-ai-palestine-kids-gun-gaza-bias-israel
ANNs like this will always just present our own biases and stereotypes back to us unless the data is scrubbed and curated in a way that no one is going to spend the resources on. Things like this are a good demonstration of why they need to be kept far, far away from decision-making processes.
It’s also the kind of thing that makes me very worried that most of the algorithms used in police facial recognition, recidivism prediction software, and the like are proprietary black boxes.
There are - guaranteed - biases in those tools, whether in how they process data or in the unknown datasets they’re trained on, and neither police nor journalists can actually see the inner workings of the software to know what those biases are, to counterbalance them, or to recognize when the software is so biased as to be useless.
Something as simple and obvious as this makes me wonder what other hidden biases are just waiting to be discovered.
I think the best example of how AI will only reinforce a bias that’s already there is when Amazon used AI to weed out job applications, training it on which past applications led to hires and which didn’t. Eventually they noticed they were almost only interviewing men. On closer inspection they found they had already been subconsciously discriminating against women earlier in the process, but at least HR had been sending equal numbers of men and women to interviews. That stopped once the AI took over, because it saw no value in sending women to interviews if most of them wouldn’t be hired anyway.
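A minimal sketch of that feedback loop, using synthetic data and scikit-learn (an illustration of the mechanism only, not Amazon's actual system or data): a model trained on historical decisions that were biased against women ends up recommending far fewer equally qualified women, because the biased labels are the only signal it sees.

```python
# Minimal sketch: a classifier trained on biased historical hiring decisions
# reproduces that bias. Synthetic data only -- not Amazon's actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Applicants: a "skill" score with the same distribution for everyone,
# plus a gender flag that should be irrelevant to hiring.
skill = rng.normal(0.0, 1.0, n)
is_woman = rng.integers(0, 2, n)

# Historical labels encode the human bias: at equal skill, women were
# hired much less often.
hired = (skill - 1.5 * is_woman + rng.normal(0.0, 0.5, n)) > 0

# Train the screening model on that biased history.
X = np.column_stack([skill, is_woman])
model = LogisticRegression().fit(X, hired)

# Two applicants with identical, average qualifications:
avg_man, avg_woman = [[0.0, 0.0]], [[0.0, 1.0]]
print("P(recommend | man):  ", round(model.predict_proba(avg_man)[0, 1], 2))
print("P(recommend | woman):", round(model.predict_proba(avg_woman)[0, 1], 2))
# The model "sees no value" in forwarding the woman, purely because of
# the bias baked into its training labels.
```

And the gender flag doesn't even need to be an explicit feature: any proxy correlated with it in the training data (reportedly, in Amazon's case, wording like "women's" on a resume) produces the same effect.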