
Don’t ask if artificial intelligence is good or fair, ask how it shifts power

Law enforcement, marketers, hospitals and other bodies apply artificial intelligence (AI) to make decisions on matters such as who is profiled as a criminal, who can buy which products at what price, who gets medical treatment and who gets hired. These entities monitor and predict our behavior, often driven by power and profit.

It is no longer uncommon for AI experts to ask whether AI is ‘fair’ and ‘for good’. But ‘fair’ and ‘good’ are infinitely spacious words that any AI system can be squeezed into. The question to pose is deeper: how is AI shifting power?

Starting July 12, thousands of researchers will meet virtually at the week-long International Conference on Machine Learning, one of the largest AI meetings in the world. Many researchers think of AI as neutral and often beneficial, marred only by biased data drawn from an unfair society. In reality, an indifferent field serves the powerful.

In my view, those who work in AI need to elevate those who have been excluded from shaping it, and doing so will require them to restrict relationships with powerful institutions that benefit from monitoring people.

Researchers should listen to, elevate, cite and support communities that have borne the brunt of surveillance: often women, people who are Black, Indigenous, LGBT+, poor or disabled. Conferences and research institutions should assign prominent time slots, spaces, funding and leadership roles to members of these communities. In addition, discussion of how research shifts power should be required and evaluated in grant applications and publications.

A year ago, my colleagues and I created the Radical AI Network, building on the work of those who came before us. The group is inspired by Black feminist scholar Angela Davis’ observation that “radical simply means ‘grasping things at the root’”, and that the root problem is that power is distributed unevenly. Our network emphasizes listening to those who are marginalized and affected by AI, and advocating for anti-oppressive technologies.

Consider, for example, an AI system used to classify images. Experts train the system to find patterns in photographs, perhaps to identify someone’s gender or actions, or to find matching faces in a database of people. ‘Data subjects’, by which I mean people who are tracked, often without consent, as well as those who manually categorize photographs to train AI systems, usually for meager pay, are often both exploited and evaluated by these systems.

Researchers in AI focus heavily on providing highly accurate information to decision makers. Remarkably little research is focused on serving data subjects. What is needed are ways for these people to investigate AI, contest it, influence it or even dismantle it.

For example, the advocacy group Our Data Bodies is putting forward ways to protect personal data when interacting with US fair-housing and child-protection services. Such work receives little attention. Meanwhile, mainstream research is creating systems that are extraordinarily expensive to train, further empowering already powerful institutions, from Amazon, Google and Facebook to domestic surveillance and military programs.

Many researchers have trouble seeing their intellectual work in AI as furthering inequity. Researchers like me spend our days working on systems that are, to us, mathematically beautiful and useful, and hearing AI success stories, such as winning a Go championship or showing promise in detecting cancer. It is our responsibility to recognize our skewed perspective and listen to those affected by AI.

Through the lens of power, it is possible to see why accurate, generalizable and efficient AI systems are not good for everyone. In the hands of exploitative companies or repressive law enforcement, more accurate facial recognition systems are harmful.

Organizations have responded with pledges to design ‘fair’ and ‘transparent’ systems, but fair and transparent according to whom? These systems sometimes mitigate harm, but are controlled by powerful institutions with agendas of their own. At best, they are unreliable; at worst, they turn out to be ‘ethics-washing’ technologies that still perpetuate inequity.

Already, some researchers are exposing the hidden limitations and failures of these systems. They pair their research findings with advocacy for AI regulation. Their work includes critiquing inadequate technological ‘fixes’. Other researchers are explaining to the public how natural resources, data and human labor are extracted to create AI.

Ruha Benjamin, a race-and-technology scholar at Princeton University in New Jersey, encourages us to “remember to imagine and craft the worlds you cannot live without, just as you dismantle the ones you cannot live within.”
