From Symmetry: “A physicist’s guide to the ethics of artificial intelligence”


5.6.24
Laura Dattaro

Illustration by Sandbox Studio, Chicago with Abigail Malate

Physics may seem like its own world, but different sectors using machine learning are all part of the same universe.

In 2017, Savannah Thais attended the NeurIPS machine-learning conference in Long Beach, California, hoping to learn about techniques she could use in her doctoral work on electron identification. Instead, she returned home to Yale with a transformed worldview.

At NeurIPS, she had listened to a talk by artificial intelligence researcher Kate Crawford, who discussed bias in machine-learning algorithms. She mentioned a new study showing that facial-recognition technology, which uses machine learning, had picked up gender and racial biases from its dataset: Women of color were 32% more likely to be misclassified by the technology than were White men.

The study, published as a master’s thesis by Joy Adowaa Buolamwini, became a landmark in the machine-learning world, exposing the ways that seemingly objective algorithms can make errors based on incomplete datasets. And for Thais, who’d been introduced to machine learning through physics, it was a watershed moment.

“I didn’t even know about it before,” says Thais, now an associate research scientist at the Columbia University Data Science Institute. “I didn’t know these were issues with the technology, that these things were happening.”

After finishing her PhD, Thais pivoted to studying the ethical implications of artificial intelligence in science and in society. Such work often focuses on direct impacts on people, which can seem entirely separate from algorithms designed to, say, identify the signature of a Higgs boson in a particle collision against a mountain of noise.

But these issues are interwoven with physics research, too. Algorithmic bias can influence physics results, particularly when machine-learning methods are used inappropriately.

And work done for the purpose of physics likely won’t stay in physics. By pushing machine-learning technology ahead for science, physicists are also contributing to its improvement in other areas. “When you’re in a fairly scientific context, and you’re thinking, ‘Oh, we’re building these models to help us do better physics research,’ it’s pretty divorced from any societal implications,” Thais says. “But it’s all really part of the same ecosystem.”

Trusting your models

In traditional computer models, a human tells the program each parameter it needs to know to make a decision—for example, the information that a proton is lighter than a neutron can help a program tell the two types of particles apart.
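
As a minimal sketch, a hand-coded rule of that kind might look like the following; the masses are the standard proton and neutron values in MeV/c², and the one parameter (the threshold) is supplied directly by a human:

```python
# A traditional, hand-coded classifier: the human supplies the one
# parameter (a mass threshold between the proton and neutron masses).

PROTON_MASS_MEV = 938.27   # proton rest mass, MeV/c^2
NEUTRON_MASS_MEV = 939.57  # neutron rest mass, MeV/c^2
THRESHOLD = (PROTON_MASS_MEV + NEUTRON_MASS_MEV) / 2  # chosen by a person

def classify_nucleon(measured_mass_mev: float) -> str:
    """Label a particle using the human-chosen mass threshold."""
    return "proton" if measured_mass_mev < THRESHOLD else "neutron"

print(classify_nucleon(938.3))  # -> proton
```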

Machine-learning algorithms, on the other hand, are programmed to learn their own parameters from the data they’re given. An algorithm can come up with millions of parameters, each one with its own “phase space,” the set of all possible values that parameter can take.

Algorithms don’t treat every phase space the same way. They weight them differently according to their usefulness to the task the algorithm is trying to accomplish. Because this weighting isn’t decided directly by humans, it is easy to imagine that making decisions by algorithm could be a way to remove human bias. But humans do still add their input to the system, in the form of the dataset that they give the algorithm to train on.
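
By contrast with the hand-coded rule above, a learned model fits its own weights from whatever data it is handed. Here is a minimal sketch using scikit-learn’s LogisticRegression as an illustrative stand-in for the far larger models the article describes; the toy dataset is an assumption for demonstration:

```python
# A learned classifier: the weights come from the fit, not from a person,
# so whatever biases the training data carries get baked into the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                   # toy data: two features
y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)   # feature 0 matters most

model = LogisticRegression().fit(X, y)

# The fit has weighted feature 0 far more heavily than feature 1,
# a weighting chosen by the algorithm, not by a human.
print(model.coef_)
```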

In her thesis, Buolamwini analyzed an algorithm that created parameters for facial recognition based on a dataset composed largely of photos of White people, mostly men. Because the algorithm had a variety of examples of White men, it was able to come up with a good rubric for differentiating between them. Because it had fewer examples of people of other ethnicities and genders, it did a worse job differentiating between them.

Facial-recognition technology can be used in a variety of ways. It can verify someone’s identity, for example; many people use it every day to unlock their smartphones. Buolamwini gives other examples in her thesis, including “developing more empathetic human-machine interactions, monitoring health, and locating missing persons or dangerous criminals.”

When facial-recognition technology is used in these contexts, its failure to work equally well for all people can have a range of consequences, from the frustration of being denied access to a convenience, to the danger of being misdiagnosed in a medical setting, to the threat of being falsely identified and arrested. “Characterizing how your model works across phase space is both a scientific and an ethical question,” Thais says.
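
One concrete way to characterize a model across that space is to audit its error rate separately for each subgroup instead of quoting a single aggregate accuracy. A hedged sketch follows; the model and the feature, label, and group arrays are placeholders for whatever a given analysis actually uses:

```python
import numpy as np

def per_group_error(model, X, y, groups):
    """Report the misclassification rate separately for each group.

    A single aggregate accuracy can hide a model that works well for
    the best-represented group and poorly for everyone else.
    """
    preds = model.predict(X)
    for g in np.unique(groups):
        mask = groups == g
        err = np.mean(preds[mask] != y[mask])
        print(f"group {g!r}: error rate {err:.1%} (n = {mask.sum()})")
```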

Cosmologist Brian Nord has been thinking about this for years. He began using machine learning in his work in 2016, when he and his colleagues realized machine-learning models could classify objects observed by telescopes. Nord was particularly interested in algorithms that could decode the weirdness of light bending around celestial bodies, a phenomenon known as gravitational lensing. Because such models excel at classifying items based on existing data, they can identify the stars and galaxies in images far better than a human can.
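
To make that concrete, here is a toy version of such an image classifier in PyTorch; the architecture, the 32-pixel cutout size, and the two-class star-versus-galaxy setup are illustrative assumptions, not the models Nord’s group actually runs:

```python
import torch
import torch.nn as nn

# Toy convolutional classifier for small telescope-image cutouts:
# one input channel (a single-band image), two output classes.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 2),  # assumes 32x32-pixel input cutouts
)

cutouts = torch.randn(4, 1, 32, 32)  # a fake batch of image cutouts
logits = model(cutouts)              # one (star, galaxy) score pair each
print(logits.shape)                  # torch.Size([4, 2])
```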

But other uses of machine learning for physics are far less trustworthy, says Nord, a scientist in Fermilab’s AI Project Office and Cosmic Physics Center. Where a traditional program has a limited number of parameters that physicists can manually tweak to get correct results, a machine-learning algorithm uses millions of parameters that often don’t correspond to real, physical characteristics—making it impossible for physicists to correct them. “There is no robust way to interpret the errors that come out of an AI method that we can look at in terms of how we think of statistics in physics,” Nord says. “That is not a thing that exists yet.”
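
In the absence of such a framework, practitioners often fall back on heuristics. One common one, sketched here under the assumption of some hypothetical train_model(seed) routine, is an ensemble: train several independently seeded copies of a model and treat their disagreement as a rough spread, which flags instability but is not the calibrated error bar physics usually demands:

```python
import numpy as np

def ensemble_spread(train_model, X, n_models=5):
    """Heuristic uncertainty from an ensemble of retrained models.

    `train_model(seed)` is a placeholder for any training routine.
    The spread shows where the copies disagree, but it is NOT a
    statistically interpretable error of the kind physicists expect.
    """
    preds = np.stack(
        [train_model(seed=s).predict(X) for s in range(n_models)]
    )
    return preds.mean(axis=0), preds.std(axis=0)
```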

If physicists aren’t aware of these issues, they may use models for purposes beyond their capabilities, potentially undermining their results.

Nord is working to push machine-learning capabilities to aid with all steps of the scientific process, from identifying testable hypotheses and improving telescope design to simulating data. He envisions a not-too-distant future where the physics community can conceive of, design, and execute large-scale projects in far less time than the decades such experiments currently take.

But Nord is also keenly aware of the potential pitfalls of driving machine-learning technology forward. The image-recognition algorithm that enables a cosmologist to distinguish a galaxy cluster from a black hole, Nord points out, is the same technology that can be used to identify a face in a crowd.

“If I’m using these tools to do science and I want to make them fundamentally better to do my science, it is highly likely I am going to make it better in other places it’s applied,” Nord says. “I’m essentially building technologies to surveil myself.”

Responsibilities and opportunities

Physics is behind one of the most famous scientific ethical quandaries: the creation of nuclear weapons. Since the time of the Manhattan Project—the government research program to produce the first atomic bomb—scientists have debated the extent to which their involvement in the science behind these weapons equates to a responsibility for their use.

In his 1995 Nobel Peace Prize acceptance lecture, physicist Joseph Rotblat, who walked away from the Manhattan Project, appealed directly to scientists’ ethical sensibilities. “At a time when science plays such a powerful role in the life of society, when the destiny of the whole of mankind may hinge on the results of scientific research, it is incumbent on all scientists to be fully conscious of that role, and conduct themselves accordingly,” Rotblat said.

He noted that “doing fundamental work, pushing forward the frontiers of knowledge… often you do it without giving much thought to the impact of your work on society.”

Thais says she sees the same pattern being repeated among physicists working on artificial intelligence today. There’s rarely a moment in the scientific process when a physicist can pause to consider their work in a larger context.

As physicists increasingly learn about machine learning alongside physics, they should also be exposed to ethical frameworks, Thais says. That can happen at conferences and workshops and in training materials online.

This past summer, Kazuhiro Terao, a staff scientist at SLAC National Accelerator Laboratory, organized the 51st SLAC Summer Institute, which centered on the theme “Artificial Intelligence in Fundamental Physics.” He invited speakers on topics such as computer vision, anomaly detection and symmetries. He also asked Thais to address ethics.

“It’s important for us to learn not just the hype about AI, but what kinds of things it can do, what kinds of things it can be biased about,” Terao says.

Artificial-intelligence ethics research can teach physicists to think in ways that are also useful for physics, Terao says. For example, learning more about bias in machine-learning systems can encourage a healthy scientific skepticism about what such systems can actually do.

Ethics research also provides opportunities for physicists to improve the use of machine learning in society as a whole. Physicists can use their technical expertise to educate citizens and policymakers on the technology and its uses and implications, Nord says.

And physicists have a unique opportunity to improve the science of machine learning itself, Thais says. That’s because physics data, unlike facial-recognition data, is highly controlled—and there’s a lot of it. Physicists know what kinds of biases exist in their experiments, and they know how to quantify them. That makes physics as a field a perfect “sandbox” for learning to build models that avoid bias, Thais says.
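
As an illustration of that sandbox idea (everything here is an assumed toy setup, not a method from the article), one can inject a systematic effect of known size into simulated data and measure how the learned weights respond:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def learned_weights(bias_strength, n=2000, seed=0):
    """Simulate data with a known injected bias; return the fitted weights.

    Because the spurious correlation is put in by hand, its exact size
    is known, the kind of control physics datasets can offer.
    """
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(int)    # the label truly depends on feature 0
    X[y == 1, 1] += bias_strength    # inject a known spurious correlation
    return LogisticRegression().fit(X, y).coef_[0]

# As the injected bias grows, the model leans harder on the biased feature.
for b in (0.0, 1.0, 3.0):
    print(f"bias {b}: weights {learned_weights(b)}")
```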

But that can only happen if physicists incorporate ethics into their thinking. “We need to be thinking about these questions,” Thais says. “We don’t get to escape the conversation.”

See the full article here.

Comments are invited and will be appreciated, especially if the reader finds any errors which I can correct.



Please help promote STEM in your local schools.


STEM Education Coalition

Symmetry is a joint Fermilab/SLAC publication.

