Algorithms should be regulated for safety like cars, banks, and drugs, says computer scientist Ben Shneiderman — Quartz
When these programs are wrong—like when Facebook mistakes you for your sibling or even your mom—it’s hardly a problem. In other situations, though, we give artificial intelligence much more responsibility, with larger consequences when it inevitably backfires.
Ben Shneiderman, a computer scientist from the University of Maryland, thinks the risks are big enough that it’s time for the government to get involved. In a lecture on May 30 at the Alan Turing Institute in London, he called for a “National Algorithm Safety Board,” similar to the US’s National Transportation Safety Board, which would provide both ongoing and retroactive oversight for high-stakes algorithms.
“When you go to systems which are richer in complexity, you have to adopt a new philosophy of design,” Shneiderman argued in his talk. His proposed National Algorithm Safety Board, which he also suggested in an article in 2016, would provide an independent third party to review and disclose just how these programs work. It would also investigate algorithmic failures and inform the public about them—much like bank regulators report on bank failures, transportation watchdogs look into major accidents, and drug licensing bodies look out for drug interactions or toxic side-effects. Since “algorithms are increasingly vital to national economies, defense, and healthcare systems,” Shneiderman wrote, “some independent oversight will be helpful.”
This is close to the ETC Group’s proposal for an Office of Technology Assessment. There is something worth exploring here to restore a collective sense of purpose amid the technological headlong rush (or rather, technological hubris).
#algorithmes #politique_numérique #intelligence_artificielle
▻https://seenthis.net/messages/604728 via Articles repérés par Hervé Le Crosnier