
Can machine-learning models overcome biased datasets?

Credit: CC0 Public Domain

Artificial intelligence systems may be able to complete tasks quickly, but that doesn't mean they always do so fairly. If the datasets used to train machine-learning models contain biased data, it is likely the system could exhibit that same bias when it makes decisions in practice.

For instance, if a dataset contains mostly images of white men, then a facial-recognition model trained with these data may be less accurate for women or people with different skin tones.

A group of researchers at MIT, in collaboration with researchers at Harvard University and Fujitsu, Ltd., sought to understand when and how a machine-learning model is capable of overcoming this kind of dataset bias. They used an approach from neuroscience to study how training data affect whether an artificial neural network can learn to recognize objects it has not seen before. A neural network is a machine-learning model that mimics the human brain in the way it contains layers of interconnected nodes, or "neurons," that process data.
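As a rough illustration of that layered structure, the sketch below (written in PyTorch; the layer sizes are arbitrary assumptions for illustration, not the networks used in the study) stacks layers of "neurons" that pass data forward:

```python
import torch
import torch.nn as nn

# A minimal feed-forward neural network: layers of interconnected
# nodes ("neurons"), each applying a weighted sum and a nonlinearity.
# All sizes here are illustrative, not those used in the paper.
model = nn.Sequential(
    nn.Linear(784, 128),  # input layer -> first hidden layer
    nn.ReLU(),            # nonlinear activation of each neuron
    nn.Linear(128, 64),   # second hidden layer
    nn.ReLU(),
    nn.Linear(64, 10),    # output layer: one score per object category
)

x = torch.randn(1, 784)  # a dummy input (e.g., a flattened image)
scores = model(x)        # data flows through the layers of neurons
print(scores.shape)      # torch.Size([1, 10])
```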

The new results show that diversity in training data has a major influence on whether a neural network is able to overcome bias, but at the same time dataset diversity can degrade the network's performance. They also show that how a neural network is trained, and the specific types of neurons that emerge during the training process, can play a major role in whether it is able to overcome a biased dataset.

"Et neuralt netværk kan overvinde datasætbias, hvilket er opmuntrende. Men det vigtigste her er, at vi skal tage højde for datadiversitet. Vi er nødt til at stoppe med at tænke på, at hvis du bare indsamler et væld af rådata, vil det blive dig et eller andet sted. Vi skal være meget forsigtige med, hvordan vi designer datasæt i første omgang," siger Xavier Boix, en forsker ved Institut for hjerne- og kognitiv videnskab (BCS) og Center for hjerner, sind og maskiner (CBMM). ), og seniorforfatter af papiret.

Co-authors include former graduate student Spandan Madan, a corresponding author who is currently pursuing a Ph.D. at Harvard; Timothy Henry, Jamell Dozier, Helen Ho, and Nishchal Bhandari; Tomotake Sasaki, a former visiting scientist who is now a researcher at Fujitsu; Frédo Durand, a professor of electrical engineering and computer science and a member of the Computer Science and Artificial Intelligence Laboratory; and Hanspeter Pfister, the An Wang Professor of Computer Science at the Harvard School of Engineering and Applied Sciences. The research appears today in Nature Machine Intelligence.

Thinking like a neuroscientist

Boix and his colleagues approached the problem of dataset bias by thinking like neuroscientists. In neuroscience, Boix explains, it is common to use controlled datasets in experiments, meaning a dataset in which the researchers know as much as possible about the information it contains.

The team built datasets that contained images of different objects in varied poses, and carefully controlled the combinations so some datasets had more diversity than others. In this case, a dataset had less diversity if it contained more images showing objects from only one viewpoint. A more diverse dataset had more images showing objects from multiple viewpoints. Each dataset contained the same number of images.

The researchers used these carefully constructed datasets to train a neural network for image classification, and then studied how well it was able to identify objects from viewpoints the network did not see during training (known as an out-of-distribution combination).
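A minimal sketch of that protocol might look like the following. The category and viewpoint labels, and the choice of which combinations are held out, are hypothetical stand-ins for the controlled datasets described above, not the paper's actual data:

```python
import itertools
import random

# Hypothetical controlled setup: every image is labeled with an
# (object category, viewpoint) pair. Some pairs appear in training;
# the rest are held out as out-of-distribution (OOD) combinations.
categories = ["car", "chair", "cup"]
viewpoints = ["front", "side", "top", "back"]
all_combos = list(itertools.product(categories, viewpoints))

random.seed(0)
seen_combos = set(random.sample(all_combos, k=8))  # in-distribution
ood_combos = [c for c in all_combos if c not in seen_combos]

# A less diverse dataset would draw its training images from fewer
# viewpoints per category; a more diverse one from more viewpoints,
# while keeping the total number of images fixed.
print("Trained on:", sorted(seen_combos))
print("Tested out-of-distribution on:", ood_combos)
```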

For example, if researchers are training a model to classify cars in images, they want the model to learn what different cars look like. But if every Ford Thunderbird in the training dataset is shown from the front, when the trained model is given an image of a Ford Thunderbird shot from the side, it may misclassify it, even if it was trained on millions of car photos.

The researchers found that if the dataset is more diverse—if more images show objects from different viewpoints—the network is better able to generalize to new images or viewpoints. Data diversity is key to overcoming bias, Boix says.

"But it is not like more data diversity is always better; there is a tension here. When the neural network gets better at recognizing new things it hasn't seen, then it will become harder for it to recognize things it has already seen," he says.

Testing training methods

The researchers also studied methods for training the neural network.

In machine learning, it is common to train a network to perform multiple tasks at the same time. The idea is that if a relationship exists between the tasks, the network will learn to perform each one better if it learns them together.

But the researchers found the opposite to be true—a model trained separately for each task was able to overcome bias far better than a model trained for both tasks together.
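In code, the contrast between the two regimes could be sketched like this: a hypothetical two-head multi-task model versus two independent single-task models. The architecture details are assumptions for illustration, not the paper's exact networks:

```python
import torch.nn as nn

def backbone():
    # Shared feature extractor; sizes are illustrative only.
    return nn.Sequential(nn.Linear(784, 128), nn.ReLU())

# Joint (multi-task) training: one backbone feeds two heads, so the
# same neurons must serve both category and viewpoint recognition.
class MultiTaskNet(nn.Module):
    def __init__(self, n_categories=3, n_viewpoints=4):
        super().__init__()
        self.features = backbone()
        self.category_head = nn.Linear(128, n_categories)
        self.viewpoint_head = nn.Linear(128, n_viewpoints)

    def forward(self, x):
        h = self.features(x)
        return self.category_head(h), self.viewpoint_head(h)

# Separate training: each task gets its own network, the regime the
# study found overcame dataset bias far better than joint training.
category_net = nn.Sequential(backbone(), nn.Linear(128, 3))
viewpoint_net = nn.Sequential(backbone(), nn.Linear(128, 4))
```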

"The results were really striking. In fact, the first time we did this experiment, we thought it was a bug. It took us several weeks to realize it was a real result because it was so unexpected," he says.

They dove deeper inside the neural networks to understand why this occurs.

They found that neuron specialization seems to play a major role. When the neural network is trained to recognize objects in images, it appears that two types of neurons emerge—one that specializes in recognizing the object category and another that specializes in recognizing the viewpoint.

When the network is trained to perform tasks separately, those specialized neurons are more prominent, Boix explains. But if a network is trained to do both tasks simultaneously, some neurons become diluted and don't specialize for one task. These unspecialized neurons are more likely to get confused, he says.
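One simple way to probe for such specialization, sketched below as an illustration under stated assumptions rather than the paper's actual analysis, is to compare how strongly each hidden neuron's average activation varies across categories versus across viewpoints, and call a neuron "specialized" for whichever factor modulates it more:

```python
import numpy as np

# Hypothetical recorded activations: one row per image, one column
# per neuron, with each image labeled by category and viewpoint.
rng = np.random.default_rng(0)
acts = rng.random((600, 64))         # 600 images x 64 hidden neurons
cats = rng.integers(0, 3, size=600)  # category label per image
views = rng.integers(0, 4, size=600) # viewpoint label per image

def selectivity(acts, labels):
    # Spread of a neuron's mean activation across label groups:
    # a large spread means the response depends on that factor.
    means = np.stack([acts[labels == l].mean(axis=0)
                      for l in np.unique(labels)])
    return means.max(axis=0) - means.min(axis=0)

cat_sel = selectivity(acts, cats)
view_sel = selectivity(acts, views)

# Neurons far more selective for one factor than the other are
# "specialized"; neurons with similar selectivity for both are the
# diluted, unspecialized ones described above. The 2x threshold is
# an arbitrary choice for this sketch.
specialized_for_category = cat_sel > 2 * view_sel
specialized_for_viewpoint = view_sel > 2 * cat_sel
print(specialized_for_category.sum(), specialized_for_viewpoint.sum())
```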

"But the next question now is, how did these neurons get there? You train the neural network and they emerge from the learning process. No one told the network to include these types of neurons in its architecture. That is the fascinating thing," he says.

That is one area the researchers hope to explore with future work. They want to see if they can force a neural network to develop neurons with this specialization. They also want to apply their approach to more complex tasks, such as objects with complicated textures or varied illuminations.

Boix is encouraged that a neural network can learn to overcome bias, and he is hopeful their work can inspire others to be more thoughtful about the datasets they are using in AI applications.
