#big-data

  • A French facial-recognition technology fails to recognise a single face among the 900,000 vehicles passing through New York every day - Thomas Giraudet - 10 Apr 2019 - Business Insider
    https://www.businessinsider.fr/une-technologie-francaise-de-reconnaissance-faciale-echoue-a-reconna

    This was meant to be a reliable test for identifying offenders in New York through facial recognition. A year on, it is a French failure. The technology of Idemia, the company born from the merger of Morpho, a Safran subsidiary, with Oberthur Technologies, was unable to recognise a single face among the 900,000 vehicles that cross the Robert F. Kennedy Bridge, which links Manhattan to Queens and the Bronx, every day, the Wall Street Journal reveals. https://www.wsj.com/articles/mtas-initial-foray-into-facial-recognition-at-high-speed-is-a-bust-11554642000

    https://www.businessinsider.fr/content/uploads/2019/04/new-york-trafic-jam-785x576.jpg
    Traffic jams in New York. Flickr/Clemens v. Vogelsang

    The financial daily quotes an email from the agency in charge of New York's public transport, the Metropolitan Transportation Authority (MTA), stating that "no faces were identified within acceptable parameters". Questioned by the Wall Street Journal, an MTA spokesperson said the experiment, announced in the summer of 2018, would nevertheless continue. Idemia, which reportedly sold the software for 25,000 dollars, did not answer the paper's questions. At this stage, no clear explanation for the failure has been given.

    States and companies are beginning to seize on facial recognition, even though it raises strong opposition over privacy. While Apple offers the technology to unlock the iPhone and Aéroports de Paris to cut queues, Japan plans to use it to screen the 300,000 athletes at the 2020 Olympic Games, and China has already rolled it out widely: several regions can scan the country's entire population in one second to track down and arrest people.

    On its website, Idemia claims that its facial identification solution "enables duly trained investigators, analysts and detectives to solve crimes through automated face searching and tracking in photos or videos". To do so, it compares photos and videos against a database, even from several partial views.
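
    Idemia's actual pipeline is not public, but face search of this kind is typically built on embeddings: each face is mapped to a vector, and candidates in the database are ranked by similarity. A minimal sketch, with every name and the 0.6 threshold purely hypothetical:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def best_match(probe, gallery, threshold=0.6):
    """Rank gallery identities by similarity to a probe embedding.

    Returns (identity, score) for the best match at or above `threshold`,
    or (None, score) when nothing clears it -- the outcome the MTA email
    described as no faces "identified within acceptable parameters".
    """
    best_id, best_score = None, -1.0
    for identity, emb in gallery.items():
        score = cosine(probe, emb)
        if score > best_score:
            best_id, best_score = identity, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)
```

    Matching from partial views, as the marketing copy promises, then reduces to whether a partial-face embedding still lands close enough to the enrolled one; at highway speeds it evidently did not.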

    Born from the merger of Morpho, a Safran subsidiary, with Oberthur Technologies, Idemia operates in the identification, authentication and data-security markets, and above all in payment cards: together with Gemalto, it holds the large majority of the world market for card manufacturers. Idemia posts around 3 billion euros in revenue.

    #IA #râteau #idemia #intelligence_artificielle #algorithme #surveillance #complexe_militaro-industriel #big-data #surveillance_de_masse #biométrie #business #vie_privée #MDR

    https://seenthis.net/messages/773897 via BCE 106,6 Mhz


  • Don’t assume technology is racially neutral

    Without adequate and effective safeguards, the increasing reliance on technology in law enforcement risks reinforcing existing prejudices against racialised communities, writes Karen Taylor.

    https://www.theparliamentmagazine.eu/sites/www.theparliamentmagazine.eu/files/styles/original_-_local_copy/entityshare/33329%3Fitok%3DXJY_Dja6#.jpg

    Within the European Union, police and law enforcement are increasingly using new technologies to support their work. Yet little consideration is given to the potential misuse of these technologies and their impact on racialised communities.

    When the everyday experience of racialised policing and ethnic profiling is already causing significant physical, emotional and social harm, how much will these new developments further harm people of colour in Europe?

    With racialised communities already over-policed and under-protected, resorting to data-driven policing may further entrench existing discriminatory practices, such as racial profiling and the construction of ‘suspicious’ communities.

    This was highlighted in a new report published by the European Network Against Racism (ENAR) and the Open Society Justice Initiative.

    Using systems to profile, survey and provide a logic for discrimination is not new; what is new is the sense of neutrality afforded to data-driven policing.

    The ENAR report shows that law enforcement agencies present technology as ‘race’ neutral and independent of bias. However, such claims overlook the evidence of discriminatory policing against racialised minority and migrant communities throughout Europe.

    European criminal justice systems police minority groups according to the myths and stereotypes about the level of ‘risk’ they pose rather than the reality.

    This means racialised communities will feel a disproportionate impact from new technologies used for identification, surveillance and analysis – such as crime analytics, mobile fingerprinting scanners, social media monitoring and mobile phone extraction – as they are already over-policed.

    For example, in the UK, social media is used to track ‘gang-associated individuals’ within the ‘Gangs Matrix’. If a person shares content on social media that references a gang name or certain colours, flags or attire linked to a gang, they may be added to this database, according to research by Amnesty International.

    Given the racialisation of gangs, it is likely that such technology will be deployed for use against racialised people and groups.

    Another technology, automatic number plate recognition (ANPR) cameras, raises concerns that cars can be ‘marked’, leading to increased stop and search.

    The Brandenburg police in Germany used the example of looking for “motorhomes or caravans with Polish license plates” in a recent leaked internal evaluation of the system.

    Searching for license plates of a particular nationality and looking for ‘motorhomes or caravans’ suggests a discriminatory focus on Travellers or Roma.

    Similarly, mobile fingerprint technology enables police to check fingerprints against existing databases (including immigration records), and disproportionately affects racialised communities, given the racial disparity in who is stopped and searched.

    Another way in which new technology negatively impacts racialised communities is that many algorithmically-driven identification technologies, such as automated facial recognition, disproportionately mis-identify people from black and other minority ethnic groups – and, in particular, black and brown women.

    This means that police are more likely to wrongfully stop, question and potentially arrest them.
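
    The compounding effect is simple arithmetic: with the same volume of checks, a higher false-positive rate for one group multiplies directly into more wrongful stops. An illustration with invented rates (the article cites no specific figures; disparities of this order of magnitude have been reported in independent vendor testing):

```python
def wrongful_stops(checks, false_positive_rate):
    """Expected wrongful flags among people who are NOT on any watchlist."""
    return round(checks * false_positive_rate)

# Hypothetical rates, for illustration only:
group_a = wrongful_stops(10_000, 0.001)  # 10 wrongful stops
group_b = wrongful_stops(10_000, 0.010)  # 100 wrongful stops: 10x, same footfall
```

    The disparity in the error rate passes straight through to the disparity in who gets wrongly stopped.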

    Finally, predictive policing systems are likely to present geographic areas and communities with a high proportion of minority ethnic people as ‘risky’ and subsequently make them a focus for police attention.

    Research shows that data-driven technologies that inform predictive policing increased levels of arrest for racialised communities by 30 percent. Indeed, place-based predictive tools take data from police records generated by over-policing certain communities.

    Forecasting is based on the higher rates of police intervention in those areas, suggesting police should further prioritise those areas.

    We often – rightly – discuss the ethical implications of new technologies and the current lack of public scrutiny and accountability. Yet we also urgently need to consider how they affect and target racialised communities.

    The European Commission will present a proposal on Artificial Intelligence within 100 days of taking office. This is an opportunity for the European Parliament to put safeguards in place ensuring that the use of AI does not have harmful or discriminatory impacts.

    In particular, it is important to consider how the use of such technologies will impact racialised communities, so often overlooked in these discussions. MEPs should also ensure that any data-driven technologies are not designed or used in a way that targets racialised communities.

    The use of such data has wide-ranging implications for racialised communities, not just in policing but also in counterterrorism and immigration control.

    Governments and policymakers need to develop processes for holding law enforcement agencies and technology companies to account for the consequences and effects of technology-driven policing.

    This should include implementing safeguards to ensure such technologies do not target racialised as well as other already over-policed communities.

    Technology is not neutral or objective; unless safeguards are put in place, it will exacerbate racial, ethnic and religious disparities in European justice systems.

    https://www.theparliamentmagazine.eu/articles/opinion/don%E2%80%99t-assume-technology-racially-neutral

    #neutralité #technologie #discriminations #racisme #xénophobie #police #profilage_ethnique #profilage #données #risques #surveillance #identification #big-data #smartphone #réseaux_sociaux #Gangs_Matrix #automatic_number_plate_recognition (#ANPR) #Système_de_reconnaissance_automatique_des_plaques_minéralogiques #plaque_d'immatriculation #Roms #algorithmes #contrôles_policiers

    ---------

    To download the report:
    https://i.imgur.com/2NT3jhc.png
    https://www.enar-eu.org/IMG/pdf/data-driven-profiling-web-final.pdf

    ping @cede @karine4 @isskein @etraces @davduf

    https://seenthis.net/messages/816511 via CDB_77


  • All Computers Are Bigbrothers
    All Borders Are Caduques

    To police #frontières by detecting travellers' lies through facial recognition (non-Europeans first, white privilege oblige), the EU is investing 4.5 million in the #iBorderCtrl system in Greece, Hungary and Latvia:

    By analysing 38 micromovements of your face, the system can, in theory, tell whether you are lying. If the test is conclusive, the traveller can continue on their way through a so-called "low-risk" queue. Otherwise, they must submit to a further round of questions and to more extensive biometric checks (fingerprints, optical recognition of the veins of the hand).
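
    Whatever model scores the 38 micromovements, the routing step described above reduces to a threshold decision. A minimal sketch, with a hypothetical score scale and cut-off (iBorderCtrl's real scoring is not public):

```python
def route(deception_score, threshold=0.5):
    """Route a traveller on an opaque 'deception' score in [0, 1].

    Below the threshold: the 'low-risk' queue from the article.
    At or above it: further questioning and extended biometric checks.
    The 0.5 cut-off is a placeholder, not the system's real value.
    """
    return "low-risk queue" if deception_score < threshold else "secondary screening"

route(0.2)  # 'low-risk queue'
route(0.8)  # 'secondary screening'
```

    Everything contentious, such as what counts as a lie and how often the score is simply wrong, is hidden inside the number fed to this one comparison.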

    http://www.lefigaro.fr/secteur/high-tech/2018/11/02/32001-20181102ARTFIG00196-pourrez-vous-passer-les-controles-aux-frontieres- #IE & #UE #societe_de_controle #intelligence_artificielle #big-data

    https://seenthis.net/messages/733179 via ¿’ ValK.