#big-data

  • Like after 9/11, governments could use coronavirus to permanently roll back our civil liberties

    The ‘emergency’ laws brought in after the terror attacks of 2001 reshaped the world — and there’s evidence it could happen again.

    With over a million confirmed cases and a death toll quickly approaching 100,000, Covid-19 is the worst pandemic in a century. That governments were unprepared to deal with a global pandemic is by now obvious. Worse, effective testing and containment policies at the onset of the outbreak could have mitigated the spread of the virus. Because those in charge failed to put any such strategies in place, we are now seeing a worrying trend: policies that trample on human rights and civil liberties with no clear benefit to our health or safety.

    Broad and undefined emergency powers are already being invoked — in both democracies and dictatorships. Hungary’s Prime Minister Viktor Orban was granted sweeping new powers to combat the pandemic that are unlimited in scope and effectively turn Hungary’s democracy into a dictatorship. China, Thailand, Egypt, Iran and other countries continue to arrest or expel anyone who criticizes those states’ response to coronavirus.

    The US Department of Justice is considering charging anyone who intentionally spreads the virus under federal terrorism laws, on the basis that the coronavirus qualifies as a “biological agent”. Israel is tapping into previously undisclosed smartphone data, gathered for counterterrorism efforts, to combat the pandemic. European states, anticipating that measures against Covid-19 will violate their obligations under pan-European human rights treaties, are filing official notices of derogation.

    A chilling example of the effect of emergency powers on privacy rights and civil liberties came in the aftermath of the September 11, 2001 attacks and the resulting “war on terror”, in which successive US presidents pushed the limits of executive power. In the name of protecting Americans from security threats abroad, US government officials justified the use of torture in interrogation, broad state surveillance tactics and unconstitutional military strikes carried out without the oversight of Congress. While the more controversial parts of those programs were eventually dismantled, some remain in place, with no clear end date or target.

    Those measures — passed under the guise of emergency — reshaped the world, with lasting impacts on how we communicate and the privacy we expect, as well as curbs on the freedoms of certain groups of people. The post-September 11 response has had far-reaching consequences for our politics by emboldening a cohort of populist leaders across the globe, who ride to election victories by playing to nationalist and xenophobic sentiments and warning their populations of the perils brought by outsiders. Covid-19 provides yet another emergency situation in which a climate of fear can lead to suspension of freedoms with little scrutiny — but this time we should heed the lessons of the past.

    First, any restriction on rights should have a clear sunset clause, providing that the restriction is only a temporary measure to combat the virus, not an indefinite one. For example, the move to grant Hungary’s Viktor Orban sweeping powers has no end date, which raises questions about the real purpose of such measures, given that Hungary is currently less affected than other parts of the world, and given Orban’s general penchant for authoritarianism.

    Second, measures to combat the virus should be proportionate to the aim and narrowly tailored to achieve it. Take the US Department of Justice debate over whether federal terrorism laws can be applied to those who intentionally spread the virus: such charges could be a potent tool against people who actually seek to weaponize the virus as a biological agent, but they could also be misapplied to lower-level offenders who cough in the wrong direction or bluff about being coronavirus-positive. The scope of these laws should be carefully defined so that prosecutors do not stretch the charges in ways that over-criminalize.

    Third, countries should stop arresting and silencing whistleblowers and critics of a government’s Covid-19 response. Not only does this infringe on freedom of expression and the public’s right to know what their governments are doing to combat the virus, it is also unhelpful from a public health perspective. Prisons, jails and places of detention around the world are already overcrowded, unsanitary and at risk of being “superspreaders” of the virus — there is no need to add to an at-risk carceral population, particularly for non-violent offenses.

    Fourth, the collectors of big data should be more open and transparent with users whose data is being collected. Proposals about sharing a person’s coronavirus status with those around them with the aid of smartphone data should bring into clear focus, for everyone, just what privacy issues are at stake with big tech’s data collection practices.

    And finally, a plan of action should be put in place for moving to an online voting system for the US elections in November 2020, and for other critical elections around the world. Bolivia has already had to delay elections that were key to repairing its democracy in the transitional period following former President Evo Morales’s departure, because of a mandatory quarantine to slow the spread of Covid-19. Other countries, including the US, should take note and not find themselves flat-footed on election day.

    A lack of preparedness is what led to the current scale of this global crisis — our rights and democracies should not suffer as a result.

    https://www.independent.co.uk/voices/911-coronavirus-death-toll-us-trump-government-civil-liberties-a94586

    #le_monde_d'après #stratégie_du_choc #11_septembre #coronavirus #covid-19 #pandémie #liberté #droits_humains #urgence #autoritarisme #terrorisme #privacy #temporaire #Hongrie #proportionnalité #liberté_d'expression #surveillance #big-data #données

    ping @etraces

    https://seenthis.net/messages/854864 via CDB_77


  • A French facial recognition technology fails to recognize a single face among the 900,000 vehicles that pass through New York every day - Thomas Giraudet - 10 Apr 2019 - Business Insider
    https://www.businessinsider.fr/une-technologie-francaise-de-reconnaissance-faciale-echoue-a-reconna

    It was meant to be a reliable test of facial recognition for identifying offenders in New York. A year on, it is a French failure. The technology of Idemia, the company born of the merger of Safran subsidiary Morpho with Oberthur Technologies, was unable to recognize a single face among the 900,000 vehicles that cross the Robert F. Kennedy Bridge, which links Manhattan to Queens and the Bronx, every day, the Wall Street Journal reveals. https://www.wsj.com/articles/mtas-initial-foray-into-facial-recognition-at-high-speed-is-a-bust-11554642000

    https://www.businessinsider.fr/content/uploads/2019/04/new-york-trafic-jam-785x576.jpg
    Traffic jams in New York. Flickr/Clemens v. Vogelsang

    The financial daily quotes an email from the Metropolitan Transportation Authority (MTA), the agency in charge of New York’s public transport, stating that “no faces were identified within acceptable parameters”. Questioned by the Wall Street Journal, an MTA spokesperson said that the experiment, announced in the summer of 2018, would nevertheless continue. Idemia, which reportedly sold the software for 25,000 dollars, did not answer the paper’s questions. At this stage, no clear explanation of the failure has been offered.

    Governments and companies are starting to adopt facial recognition, even though it raises many objections over privacy. Apple offers the technology to unlock the iPhone and Aéroports de Paris to shorten queues; Japan plans to use it to screen the roughly 300,000 athletes and staff at the 2020 Olympic Games; and China has already rolled it out widely: several regions claim to scan the country’s entire population in one second in order to track and arrest people.

    On its website, Idemia claims that its facial identification solution “allows duly trained investigators, analysts and detectives to solve crimes through automated face searches and the tracking of photos or videos”. To do so, it compares photos and videos against a database, even working from several partial views.
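    In outline, such 1:N identification reduces every face image to an embedding vector and compares a probe against each gallery entry; a best match that falls below the similarity threshold is exactly the “no face identified within acceptable parameters” case. A minimal sketch in Python, with an invented threshold and assuming some external neural face encoder produces the vectors (nothing here reflects Idemia’s actual implementation):

    ```python
    # Minimal sketch of 1:N face identification by embedding comparison.
    # Hypothetical: real systems are far more complex; the embeddings are
    # assumed to come from some neural face encoder (not shown).
    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def identify(probe: np.ndarray, gallery: dict, threshold: float = 0.6):
        """Return the best-matching identity, or None if no score clears
        the threshold (the "not identified within acceptable parameters"
        outcome reported by the MTA)."""
        best_id, best_score = None, -1.0
        for identity, reference in gallery.items():
            score = cosine_similarity(probe, reference)
            if score > best_score:
                best_id, best_score = identity, score
        return best_id if best_score >= threshold else None
    ```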

    Born of the merger of Safran subsidiary Morpho with Oberthur Technologies, Idemia operates in the identification, authentication and data-security markets, and above all in payment cards: together with Gemalto, it holds the large majority of the global card-manufacturing market. Idemia generates around 3 billion euros in annual revenue.

    #IA #râteau #idemia #intelligence_artificielle #algorithme #surveillance #complexe_militaro-industriel #big-data #surveillance_de_masse #biométrie #business #vie_privée #MDR

    https://seenthis.net/messages/773897 via BCE 106,6 Mhz


  • Don’t assume technology is racially neutral

    Without adequate and effective safeguards, the increasing reliance on technology in law enforcement risks reinforcing existing prejudices against racialised communities, writes Karen Taylor.

    https://www.theparliamentmagazine.eu/sites/www.theparliamentmagazine.eu/files/styles/original_-_local_copy/entityshare/33329%3Fitok%3DXJY_Dja6#.jpg

    Within the European Union, police and law enforcement are increasingly using new technologies to support their work. Yet little consideration is given to the potential misuse of these technologies and their impact on racialised communities.

    When the everyday experience of racialised policing and ethnic profiling is already causing significant physical, emotional and social harm, how much will these new developments further harm people of colour in Europe?

    With racialised communities already over-policed and under-protected, resorting to data-driven policing may further entrench existing discriminatory practices, such as racial profiling and the construction of ‘suspicious’ communities.

    This was highlighted in a new report published by the European Network Against Racism (ENAR) and the Open Society Justice Initiative.

    Using systems to profile, survey and provide a logic for discrimination is not new; what is new is the sense of neutrality afforded to data-driven policing.

    The ENAR report shows that law enforcement agencies present technology as ‘race’ neutral and independent of bias. However, such claims overlook the evidence of discriminatory policing against racialised minority and migrant communities throughout Europe.

    European criminal justice systems police minority groups according to the myths and stereotypes about the level of ‘risk’ they pose rather than the reality.

    This means racialised communities will feel a disproportionate impact from new technologies used for identification, surveillance and analysis – such as crime analytics, the use of mobile fingerprinting scanners, social media monitoring and mobile phone extraction – as they are already over-policed.

    For example, in the UK, social media is used to track ‘gang-associated individuals’ within the ‘Gangs Matrix’. If a person shares content on social media that references a gang name or certain colours, flags or attire linked to a gang, they may be added to this database, according to research by Amnesty International.

    Given the racialisation of gangs, it is likely that such technology will be deployed for use against racialised people and groups.
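    To see why such criteria sweep so broadly, consider a deliberately naive sketch of keyword-based flagging. The lexicon and the rule are invented for illustration (Amnesty’s report does not publish the Matrix’s actual criteria), but plain string matching like this cannot distinguish gang affiliation from everyday speech:

    ```python
    # Illustrative sketch of keyword-based flagging of social media posts.
    # The lexicon and rule are invented; the point is that string matching
    # flags music fans, journalists and relatives just as readily.
    GANG_LEXICON = {"north side", "blue", "red bandana"}  # hypothetical terms

    def flag_for_matrix(posts: list) -> bool:
        """Return True if any post mentions any lexicon term."""
        return any(term in post.lower()
                   for post in posts
                   for term in GANG_LEXICON)

    # A post about wearing blue to a football match is flagged all the same:
    print(flag_for_matrix(["wearing blue to the city match tonight"]))  # True
    ```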

    Another technology, automatic number plate recognition (ANPR) cameras, raises concerns that cars can be ‘marked’, leading to increased stop and search.

    The Brandenburg police in Germany used the example of looking for “motorhomes or caravans with Polish license plates” in a recent leaked internal evaluation of the system.

    Searching for license plates of a particular nationality and looking for ‘motorhomes or caravans’ suggests a discriminatory focus on Travellers or Roma.
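    Expressed as code, the rule described in the leaked evaluation is nothing more than a filter on plate origin and vehicle type; a hypothetical sketch (the real system’s query format is not public):

    ```python
    # Hypothetical sketch of the watchlist rule from the leaked Brandenburg
    # evaluation: the nationality of the plate, not any behaviour, triggers
    # the flag.
    def flag_vehicle(plate_country_code: str, vehicle_type: str) -> bool:
        return (plate_country_code == "PL"
                and vehicle_type in {"motorhome", "caravan"})

    print(flag_vehicle("PL", "caravan"))  # True: flagged by origin alone
    print(flag_vehicle("DE", "caravan"))  # False: same vehicle, German plate
    ```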

    Similarly, mobile fingerprint technology enables police to check people against existing databases (including immigration records), and it disproportionately affects racialised communities, given the racial disparity of those stopped and searched.

    Another way in which new technology negatively impacts racialised communities is that many algorithmically-driven identification technologies, such as automated facial recognition, disproportionately mis-identify people from black and other minority ethnic groups – and, in particular, black and brown women.

    This means that police are more likely to wrongfully stop, question and potentially arrest them.
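    A concrete safeguard is to audit a system’s error rates per demographic group before deployment. A minimal sketch of such an audit (the records are invented; NIST’s FRVT benchmarks have documented disparities of this kind at scale):

    ```python
    # Sketch: auditing a face matcher's false-match rate per demographic
    # group. `audit` would come from a labelled evaluation set; the values
    # below are invented for illustration.
    from collections import defaultdict

    def false_match_rate_by_group(records):
        """records: iterable of (group, was_false_match: bool) pairs."""
        errors, totals = defaultdict(int), defaultdict(int)
        for group, was_false_match in records:
            totals[group] += 1
            errors[group] += was_false_match
        return {g: errors[g] / totals[g] for g in totals}

    audit = [("black_women", True), ("black_women", False),
             ("white_men", False), ("white_men", False)]
    print(false_match_rate_by_group(audit))
    # {'black_women': 0.5, 'white_men': 0.0} -- the kind of disparity the
    # article describes, made visible before deployment.
    ```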

    Finally, predictive policing systems are likely to present geographic areas and communities with a high proportion of minority ethnic people as ‘risky’ and subsequently make them a focus for police attention.

    Research shows that data-driven technologies that inform predictive policing have increased arrest rates for racialised communities by 30 percent. Indeed, place-based predictive tools take their data from police records generated by over-policing certain communities.

    Forecasting based on the higher rates of past police intervention in those areas then suggests that police should prioritise those areas further, creating a feedback loop.
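    That loop is easy to reproduce in a toy simulation: two areas with identical true offence rates, one of them initially over-patrolled, where the forecast follows recorded incidents rather than actual crime (all numbers invented):

    ```python
    # Toy simulation of the predictive-policing feedback loop: records,
    # not crime, end up driving where patrols go.
    import random

    random.seed(0)
    TRUE_RATE = 0.1                        # identical in both areas by design
    patrols = {"area_a": 10, "area_b": 1}  # area_a starts over-policed
    recorded = {"area_a": 0, "area_b": 0}

    for _day in range(100):
        for area, n in patrols.items():
            # each patrol may record an incident; more patrols, more records
            recorded[area] += sum(random.random() < TRUE_RATE
                                  for _ in range(n))
        # "forecast": tomorrow's 11 patrols follow today's records
        total = sum(recorded.values()) or 1
        patrols = {a: max(1, round(11 * recorded[a] / total))
                   for a in recorded}

    print(recorded)  # area_a accumulates far more records despite equal rates
    ```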

    We often – rightly – discuss the ethical implications of new technologies and the current lack of public scrutiny and accountability. Yet we also urgently need to consider how they affect and target racialised communities.

    The European Commission will present a proposal on Artificial Intelligence within 100 days of taking office. This is an opportunity for the European Parliament to put safeguards in place to ensure that the use of AI does not have harmful or discriminatory impacts.

    In particular, it is important to consider how the use of such technologies will impact racialised communities, so often overlooked in these discussions. MEPs should also ensure that any data-driven technologies are not designed or used in a way that targets racialised communities.

    The use of such data has wide-ranging implications for racialised communities, not just in policing but also in counterterrorism and immigration control.

    Governments and policymakers need to develop processes for holding law enforcement agencies and technology companies to account for the consequences and effects of technology-driven policing.

    This should include implementing safeguards to ensure that such technologies do not target racialised communities, or other communities that are already over-policed.

    Technology is not neutral or objective; unless safeguards are put in place, it will exacerbate racial, ethnic and religious disparities in European justice systems.

    https://www.theparliamentmagazine.eu/articles/opinion/don%E2%80%99t-assume-technology-racially-neutral

    #neutralité #technologie #discriminations #racisme #xénophobie #police #profilage_ethnique #profilage #données #risques #surveillance #identification #big-data #smartphone #réseaux_sociaux #Gangs_Matrix #automatic_number_plate_recognition (#ANPR) #Système_de_reconnaissance_automatique_des_plaques_minéralogiques #plaque_d'immatriculation #Roms #algorithmes #contrôles_policiers

    ---------

    To download the report:
    https://i.imgur.com/2NT3jhc.png
    https://www.enar-eu.org/IMG/pdf/data-driven-profiling-web-final.pdf

    ping @cede @karine4 @isskein @etraces @davduf

    https://seenthis.net/messages/816511 via CDB_77


  • All Computers Are Bigbrothers
    All Borders Are Caduques

    To control #frontières (borders) by detecting travellers’ lies with facial recognition (non-Europeans first, white privilege obliges), the EU is investing 4.5 million euros in the #iBorderCtrl system in Greece, Hungary and Latvia:

    By analysing 38 micromovements of your face, the system can, in theory, tell whether you are lying. If the test is passed, the traveller can continue on their way in a so-called “low-risk” queue. Otherwise, they must undergo a further round of questions and more extensive biometric checks (fingerprints, optical reading of the veins of the hand).
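    Stripped of the machine-learning layer, the routing logic described here comes down to a score and a threshold. A hypothetical sketch (iBorderCtrl’s actual features, model and threshold are not public):

    ```python
    # Hypothetical sketch of the described routing: a "deception" score
    # computed from 38 facial micromovement features decides the lane.
    def route_traveller(micromovements: list,
                        weights: list,
                        threshold: float = 0.5) -> str:
        """38 micromovement features -> deception score -> queue."""
        assert len(micromovements) == len(weights) == 38
        score = sum(f * w for f, w in zip(micromovements, weights))
        if score < threshold:
            return "low-risk queue"
        return "secondary screening: more questions, fingerprints, vein scan"

    # A traveller whose (invented) features yield a high score is diverted:
    print(route_traveller([0.5] * 38, [0.1] * 38))  # score 1.9 -> secondary
    ```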

    http://www.lefigaro.fr/secteur/high-tech/2018/11/02/32001-20181102ARTFIG00196-pourrez-vous-passer-les-controles-aux-frontieres- #IE & #UE #societe_de_controle #intelligence_artificielle #big-data

    https://seenthis.net/messages/733179 via ¿’ ValK.