• Voli@lemmy.ml
    1 year ago

I wish people would stop using the term AI, because these systems are far from the idea of what AI is.

    • Floey@lemm.ee
      1 year ago

AI has been used to refer to all kinds of adaptive programs throughout the history of computing: algebraic solvers, edge detection, fuzzy decision systems, player programs for video games and tabletop games. So when you say AI is this or that, you're being rather prescriptivist about it.

The problem with AI and ML is more one of them being presented to the public by grifters as a magical one-stop solution to almost any problem. Which term was used hardly matters; it was the propaganda that carried the term. It would be like saying the name Nike is the reason for the shoe brand's success, rather than its marketing.

So discredit the grifters, and if you want to destroy the term, then look to dilute it by using it to describe even more things. It was never really a useful term to begin with. I'll leave you with this quote:

"A lot of cutting-edge AI has filtered into general applications, often without being called AI, because once something becomes useful enough and common enough it's not labelled AI anymore."

    • Phanatik@kbin.social
      1 year ago

It’s almost like the incessant marketing of standard optimisation algorithms as artificial intelligence has flooded the tech industry with meaningless buzzwords.

    • psivchaz@reddthat.com
      1 year ago

      I always thought machine learning was descriptive and made sense. I guess it just didn’t get investors erect enough.

  • Solar Bear@slrpnk.net
    1 year ago

The computer didn’t get it wrong; the computer did exactly what it was programmed to do. Blaming the computer implies this can be solved by fixing the computer, that it “just wasn’t good enough yet”, when it was the humans who actually made the mistake. It was the humans who were supposed to exercise their judgment that got it wrong. You can’t fix that from the computer.

  • Uriel238 [all pronouns]@lemmy.blahaj.zone
    1 year ago

    Ever since we let law enforcement use facial recognition technology, they’ve been arresting people for false positives, sometimes for long periods of time.

It’s not just camera problems and models being poorly trained on non-white faces; people genuinely look too much alike, especially when the tech is run on blurry, low-res security footage.

  • catsup@lemmy.one
    1 year ago

    TLDR:

    In 2018, a man in a baseball cap stole thousands of dollars worth of watches from a store in central Detroit.

The AI was trained on a database of mostly white people. The photos of people of colour in the dataset were generally of worse quality, as default camera settings are often not optimised to capture darker skin tones.

    Mr Williams’ photo didn’t come up first. In fact, it was the ninth-most-probable match.

    Regardless…

    Officers drove to Mr Williams’ house and handcuffed him.

They arrested him in front of his five- and two-year-old kids…


AI with bad training data + lazy cops who didn’t learn how to use the tools they were given = this mess
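
The “ninth-most-probable match” detail is the technical crux: face recognition systems return a ranked list of candidates with similarity scores, not a single answer. A minimal sketch of that idea (all names, embeddings, and the threshold value here are hypothetical, using plain cosine similarity over face-embedding vectors) shows why acting on a low-ranked candidate is a human-judgment failure, not a computer one:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_candidates(probe, gallery):
    """Sort gallery entries by similarity to the probe image, best first."""
    scored = [(name, cosine_similarity(probe, emb)) for name, emb in gallery.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)

# Hypothetical embeddings: a probe from security footage and a small gallery.
probe = [0.2, 0.5, 0.8]
gallery = {
    "candidate_1": [0.21, 0.52, 0.79],  # strong match
    "candidate_2": [0.3, 0.4, 0.7],     # weaker match
    "candidate_9": [0.9, 0.1, 0.2],     # poor match, far down the ranking
}

ranked = rank_candidates(probe, gallery)
THRESHOLD = 0.99  # illustrative operational cut-off; real systems tune this

# Anything below the threshold is an investigative lead at best,
# never grounds for an arrest on its own.
leads = [(name, score) for name, score in ranked if score >= THRESHOLD]
```

The system itself only says “here are candidates, in order, with scores”; deciding that a ninth-ranked, low-confidence match justifies handcuffs is entirely on the humans downstream.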