Societal upheaval during the COVID-19 pandemic underscores the need for new AI data regulations

As a long-time proponent of AI regulation designed to protect public health and safety while encouraging innovation, I believe Congress must act quickly, on a bipartisan basis, to enact section 102(b) of the Artificial Intelligence Data Protection Act – my bill, and now a discussion draft in the House of Representatives. Guardrails in the form of the ethics legislation embodied in section 102(b) are necessary to preserve the dignity of the individual.

What does section 102(b) of the AI Data Protection Act provide, and why is there urgency for the federal government to enact it now?

To answer these questions, we must first understand how artificial intelligence (AI) is being used at this historic moment, when our democratic society faces two simultaneous existential threats. Only then can the risks AI poses to our individual dignity be recognized, and section 102(b) be understood as one of the most important remedies for protecting the freedoms that Americans hold dear and that form the basis of our society.

America is now experiencing mass protests demanding an end to racism and police brutality, and watching civil unrest unfold as it attempts to quell the deadly COVID-19 pandemic. Whether we are aware of it or approve of it, in both contexts – and in every other aspect of our lives – AI technologies are being deployed by government and private actors to make critical decisions about us. In many cases, AI is being used to help society and bring us to the next normal as quickly as possible.

But so far, policymakers have largely overlooked a critical public health and safety issue related to AI. To date, the main focus has been on issues of fairness, bias, and transparency in the data sets used to train algorithms. There is no doubt that algorithms have produced bias; one need look no further than employee recruiting and loan underwriting for examples of women and racial minorities being unfairly excluded.

We have also seen AI generate unexpected and sometimes inexplicable results from data. Take the recent example of an algorithm designed to help judges impose fair sentences on non-violent offenders. For reasons not yet explained, the algorithm assigned higher risk scores to defendants under the age of 23, resulting in sentences 12% longer than those of their older peers who had been incarcerated more frequently, while doing nothing to reduce incarceration or recidivism.

But the current twin crises reveal another, more vexing problem that has been largely overlooked: how should society handle the scenario in which the AI algorithm got it right, but society is ethically uncomfortable with the results? Since the primary purpose of AI is to produce accurate predictive data from which humans can make decisions, the time has come for lawmakers to resolve not what is possible with AI, but what should be prohibited.

Governments and private companies have an endless appetite for our personal data. AI algorithms are currently being deployed around the world, including in the United States, to accurately collect and analyze all kinds of data about all of us. Facial recognition is used to monitor protesters in a crowd and to determine whether the general public is observing appropriate social distancing. Cellphone data is used for contact tracing, and public social media posts are used to model the spread of the coronavirus to specific zip codes and to predict the location, size, and potential violence of protests. Drone data is used to monitor mask-wearing and detect fevers, and personal health data is used to predict which patients hospitalized with COVID-19 are most likely to deteriorate.

It is only thanks to AI that this amount of personal data can be compiled and analyzed on such a massive scale.

This algorithmic access to our cellphone data, social behavior, health records, travel patterns, and social media content – and many other personal data sets – in the name of keeping the peace and containing a devastating pandemic can, and will, allow various government and corporate actors to build predictively precise profiles of our most private attributes, political leanings, social circles, and behaviors.

Without regulation, society risks these AI-generated analyses being used by law enforcement, employers, landlords, doctors, insurers – and every other private, commercial, and governmental entity that can collect or buy them – to make predictive decisions, accurate or not, that impact our lives and strike at the most basic notions of a liberal democracy. AI already plays an ever-increasing role in employment, deciding who should be interviewed, hired, promoted, and fired. In the criminal justice context, it is used to determine whom to incarcerate and what sentence to impose. In other scenarios, AI confines people to their homes, limits certain treatments in hospitals, denies loans, and penalizes those who disobey social distancing regulations.

Too often, those who oppose any kind of AI regulation dismiss these concerns as hypothetical and alarmist. But just a few weeks ago, Robert Williams, a Black man and Michigan resident, was wrongfully arrested because of a false facial recognition match. According to news reports and an ACLU press release, Detroit police handcuffed Mr. Williams on his front lawn in front of his terrified wife and two young daughters, ages two and five. The police took him to a detention center about 40 minutes away, where he was locked up overnight. After an officer admitted during questioning the next afternoon that “the computer must have gotten it wrong,” Mr. Williams was finally released – nearly 30 hours after his arrest.

Although this is widely believed to be the first confirmed case of an incorrect AI facial recognition match leading to the arrest of an innocent citizen, it clearly will not be the last. Here, AI served as the primary basis for a critical decision that impacted an individual citizen – an arrest by law enforcement. But we must not focus only on the fact that the AI failed by identifying the wrong person, denying him his freedom. We must also identify and outlaw those instances where AI should not be used as the basis for specified critical decisions – even when it gets it “right.”

As a democratic society, we should be no more comfortable being arrested for a crime we contemplated but did not commit, or being denied medical treatment for a disease that will no doubt end in death over time, than we are with the error made in Mr. Williams’ case. Full stop. We must establish a “no-fly zone” to preserve our individual freedoms. We must not allow certain key decisions to be left solely to the predictive output of artificially intelligent algorithms.

To be clear, this means that even in situations where every expert agrees that the data going in and coming out is completely unbiased, transparent, and accurate, there must be a legal prohibition on using it for any type of predictive or substantive decision-making. Admittedly, this is counterintuitive in a world where we seek mathematical certainty, but it is necessary.

Section 102(b) of the Artificial Intelligence Data Protection Act addresses this correctly and rationally in both scenarios – where AI generates correct results and where it generates incorrect ones. It does so in two key ways.

First, section 102(b) specifically identifies those decisions that may never be made in whole or in part by AI. For example, it enumerates specific misuses of AI that would prohibit covered entities from relying solely on artificial intelligence for certain decisions. These include the recruitment, hiring, and discipline of individuals; the denial or limitation of medical treatment; and medical insurance issuers’ decisions regarding coverage of a medical treatment. In light of what society has recently witnessed, these no-go areas should probably be expanded to further minimize the risk of AI being used as a tool of racial discrimination and harassment against protected minorities.

Second, for certain other AI-based decisions that are not outright prohibited, section 102(b) defines those instances where a human must be involved in the decision-making process.

By enacting section 102(b) without delay, legislators can preserve the dignity of the individual by ensuring that the most critical decisions affecting individuals are never left solely to the predictive output of artificially intelligent algorithms.

