Can active learning preemptively mitigate fairness issues?

By Parmida Atighehchian

The purpose of this notebook is to present the preliminary results of our recent contribution to the ICLR 2021 Workshop on Responsible AI. We show that active learning can help create fairer datasets without prior knowledge of the bias they contain. This matters because in real scenarios, the source of bias is often unknown. Using an active learning heuristic such as BALD, we show that prior knowledge of the bias is not necessary, which makes this setup easy to integrate into existing pipelines to keep the dataset fairer overall and to reduce possible biases.

For the purpose of this demo, we use the Synbols dataset. Synbols is a recent tool for generating synthetic character datasets with fine-grained control over their attributes.

The Dockerfile is located at baal/notebooks/fairness/Docker_biased_data.

If you have any questions, please submit an issue or reach out on Gitter.

Introducing bias in the dataset

Using Synbols, we will generate a character classification dataset with a strong correlation between the character and its color. The color blue is correlated with the character a:

\(p(char=a | color=blue) = 90\%\)

and the color red is correlated with the character d:

\(p(char=d | color=red) = 90\%\)
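The generation code below implements the symmetric conditional \(p(color=blue | char=a) = 90\%\); since the character prior is uniform, Bayes' rule gives the stated \(p(char=a | color=blue) = 90\%\) as well. This can be sketched in plain NumPy, independently of Synbols (the sampling logic mirrors `SpuriousSampler` below):

```python
import numpy as np

def sample_pair(rng, p=0.9):
    """Draw a (char, color) pair with a spurious correlation:
    'a' is blue with probability p, 'd' is red with probability p."""
    char = rng.choice(['a', 'd'])
    color_p = {'a': p, 'd': 1 - p}[char]
    color = ['blue', 'red'][rng.choice([0, 1], p=[color_p, 1 - color_p])]
    return char, color

rng = np.random.RandomState(0)
pairs = [sample_pair(rng) for _ in range(20000)]

# With a uniform char prior and symmetric conditionals,
# P(char='a' | color='blue') should come out close to 0.9.
blue_chars = [char for char, color in pairs if color == 'blue']
p_a_given_blue = blue_chars.count('a') / len(blue_chars)
print(p_a_given_blue)
```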

[1]:
import numpy as np
from math import pi
from synbols.data_io import pack_dataset
from synbols import drawing
from synbols import generate

class InfoSolid(drawing.SolidColor):
    def attribute_dict(self):
        d = super().attribute_dict()
        d['color'] = self.color
        return d

blue = (0, 0, 255)
red = (255, 0, 0)

class SpuriousSampler:
    def __init__(self, p):
        self.p = p

    def __call__(self, seed):
        """Make the color depend on the symbol."""
        rng = np.random.RandomState(seed)
        char = rng.choice(['a', 'd'])
        # 'a' is blue with probability p, 'd' is red with probability p.
        color_p = {'a': self.p, 'd': 1 - self.p}[char]
        color = [blue, red][rng.choice([0, 1], p=[color_p, 1 - color_p])]

        fg = InfoSolid(color)
        fg.color = color

        attr_sampler = generate.basic_attribute_sampler(
            char=char, foreground=fg, background=None, inverse_color=False, resolution=(64, 64))
        d = attr_sampler()
        return d


def make_dataset(p, seed, num):
    attribute_sampler = SpuriousSampler(p=p)
    x, mask, y = pack_dataset(generate.dataset_generator(attribute_sampler, num, generate.flatten_mask, dataset_seed=seed))
    for yi in y:
        yi['color'] = 'red' if yi['foreground']['color'] == red else 'blue'
    # The attribute dicts are returned twice: once as targets, once as sensitive attributes.
    return (x, y, y)

train_set = make_dataset(p=0.9, seed=1000, num=10000)
test_set = make_dataset(p=0.5, seed=2000, num=5000)
dataset = {'train': train_set, 'test': test_set}
100%|██████████| 10000/10000 [02:28<00:00, 67.28it/s]
100%|██████████| 5000/5000 [01:13<00:00, 68.07it/s]

Prepare model and dataset to be used in BaaL setup

As usual, we wrap the train set in an ActiveLearningDataset. Using vgg16 as the default model, we apply BaaL's patch_module to replace its dropout layers with MC-Dropout layers that stay active at inference time.
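The bookkeeping that ActiveLearningDataset performs can be mimicked with a minimal toy stand-in (this is an illustrative sketch, not BaaL's actual implementation): a boolean mask records which indices are labelled, `label_randomly` flips a few entries, and the unlabelled remainder forms the pool that the heuristic ranks.

```python
import numpy as np

class ToyActiveDataset:
    """Toy stand-in for the labelled/pool bookkeeping of an
    active-learning dataset wrapper (illustrative only)."""
    def __init__(self, n_items, seed=0):
        self.labelled = np.zeros(n_items, dtype=bool)
        self.rng = np.random.RandomState(seed)

    def label_randomly(self, k):
        # Pick k unlabelled indices uniformly and mark them labelled.
        pool = np.flatnonzero(~self.labelled)
        picked = self.rng.choice(pool, size=k, replace=False)
        self.labelled[picked] = True

    def label(self, pool_indices):
        # Heuristics rank the *pool*; map pool-relative indices back
        # to dataset indices before labelling them.
        pool = np.flatnonzero(~self.labelled)
        self.labelled[pool[pool_indices]] = True

    def __len__(self):
        return int(self.labelled.sum())

    @property
    def n_unlabelled(self):
        return int((~self.labelled).sum())

ds = ToyActiveDataset(10000)
ds.label_randomly(500)
print(len(ds), ds.n_unlabelled)  # 500 9500
```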

[3]:
from torchvision.transforms import transforms
from active_fairness.dataset import SynbolDataset
from baal.active import get_heuristic, ActiveLearningDataset
from typing import Dict

IMG_SIZE=64

def get_datasets(dataset: Dict, initial_pool: int, attribute: str, target_key: str):
    """
    Get the dataset for the experiment.
    Args:
        dataset: The synbol generated dataset.
        initial_pool: Initial number of items to label.
        attribute: Key where the sensitive attribute is.
        target_key: Key where the target is.
    Returns:
        ActiveLearningDataset with `initial_pool` items labelled
        Test dataset
    """
    transform = transforms.Compose(
        [transforms.ToPILImage(),
         transforms.Resize((IMG_SIZE, IMG_SIZE)),
         transforms.RandomHorizontalFlip(),
         transforms.RandomRotation(30),
         transforms.ToTensor(),
         transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))])
    test_transform = transforms.Compose([transforms.ToPILImage(),
                                         transforms.Resize((IMG_SIZE, IMG_SIZE)),
                                         transforms.ToTensor(),
                                         transforms.Normalize((0.4914, 0.4822, 0.4465),
                                                              (0.2023, 0.1994, 0.2010))])
    train_ds = dataset['train']
    test_ds = dataset['test']
    ds = SynbolDataset(*train_ds, target_key=target_key, attribute=attribute,
                           transform=transform)

    test_set = SynbolDataset(*test_ds, target_key=target_key, attribute=attribute,
                                 transform=test_transform)

    active_set = ActiveLearningDataset(ds, pool_specifics={'transform': test_transform})
    active_set.label_randomly(initial_pool)
    return active_set, test_set
[4]:
from torchvision import models
from torch.hub import load_state_dict_from_url
from baal.bayesian.dropout import patch_module

# Set use_cuda to False if you don't have access to a GPU.
use_cuda = True

model = models.vgg16(pretrained=False, num_classes=2)
# Load the ImageNet weights, dropping the final classifier layer
# since our task only has 2 classes.
weights = load_state_dict_from_url('https://download.pytorch.org/models/vgg16-397923af.pth')
weights = {k: v for k, v in weights.items() if 'classifier.6' not in k}
model.load_state_dict(weights, strict=False)

# change dropout layer to MCDropout
model = patch_module(model)

if use_cuda:
    model.cuda()

We wrap the PyTorch criterion to accommodate the target being a dictionary.

[5]:
from torch import nn

class Criterion(nn.Module):
    def __init__(self, crit):
        super().__init__()
        self.crit = crit

    def forward(self, input, target):
        return self.crit(input, target['target'])
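The same delegation pattern works with any callable criterion; here is a dependency-free sketch using a plain function in place of `nn.CrossEntropyLoss` (the squared-error criterion is a toy stand-in for illustration):

```python
class DictTargetCriterion:
    """Wrap a criterion so it accepts targets packed in a dict
    under the 'target' key, like the Criterion module above."""
    def __init__(self, crit):
        self.crit = crit

    def __call__(self, prediction, target):
        # Extra keys (e.g. the sensitive attribute) are simply ignored.
        return self.crit(prediction, target['target'])

# Toy criterion: squared error on scalars.
crit = DictTargetCriterion(lambda pred, tgt: (pred - tgt) ** 2)
loss = crit(3.0, {'target': 1.0, 'color': 'blue'})
print(loss)  # 4.0
```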

Training

Let’s now train the model with active learning. As usual, we compare BALD with random sampling, but this time we are looking for something else in the results!
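For reference, the BALD score is the mutual information between the prediction and the model parameters, estimated from the MC-Dropout samples as the entropy of the mean prediction minus the mean per-sample entropy. A NumPy sketch (illustrative, not BaaL's implementation):

```python
import numpy as np

def bald_score(probs):
    """probs: array of shape [n_mc, n_classes] of MC-Dropout softmax outputs.
    Returns H[mean p] - mean H[p], i.e. the mutual information."""
    eps = 1e-12
    mean_p = probs.mean(axis=0)
    entropy_of_mean = -(mean_p * np.log(mean_p + eps)).sum()
    mean_entropy = -(probs * np.log(probs + eps)).sum(axis=1).mean()
    return entropy_of_mean - mean_entropy

# Confident and consistent across dropout masks: score near 0.
consistent = np.array([[0.95, 0.05]] * 20)
# Confident but contradictory across dropout masks: high score,
# which is what makes BALD pick these ambiguous samples.
disagreeing = np.array([[0.95, 0.05], [0.05, 0.95]] * 10)
print(bald_score(consistent) < bald_score(disagreeing))  # True
```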

[ ]:
from copy import deepcopy
from tqdm import tqdm
import pandas as pd
import torch
from torch import optim
from torch.nn import CrossEntropyLoss
from baal.modelwrapper import ModelWrapper
from baal.active.heuristics import BALD
from baal.active.active_loop import ActiveLearningLoop
from active_fairness.metrics import FairnessMetric
import sklearn.metrics as skm

heuristics = ['bald', 'random']

logs = {'bald': {}, 'random': {}}

for heuristic_name in heuristics:
    active_set, test_set = get_datasets(dataset, initial_pool=500, attribute='color', target_key='char')

    heuristic = get_heuristic(name=heuristic_name, shuffle_prop=0.0)

    criterion = Criterion(CrossEntropyLoss())

    optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9, weight_decay=5e-4)

    wrapped_model = ModelWrapper(model, criterion)

    wrapped_model.add_metric('aggregate_res', lambda: FairnessMetric(skm.accuracy_score, name='acc',
                                                                     attribute='color'))

    # Save the initial ImageNet weights so each active step retrains from them.
    init_weights = deepcopy(model.state_dict())
    # For prediction we use a smaller batch size since MC-Dropout iterations are slower.
    active_loop = ActiveLearningLoop(active_set,
                                     wrapped_model.predict_on_dataset,
                                     heuristic,
                                     50,
                                     batch_size=16,
                                     iterations=20,
                                     use_cuda=use_cuda,
                                     workers=0)
    learning_epoch = 20
    for epoch in tqdm(range(100000)):
        wrapped_model.load_state_dict(init_weights)
        wrapped_model.train_on_dataset(active_set, optimizer, batch_size=32,
                                       epoch=learning_epoch, use_cuda=use_cuda, workers=12)

        # Validation!
        wrapped_model.test_on_dataset(test_set, batch_size=32, use_cuda=use_cuda,
                                      workers=12, average_predictions=20)

        should_continue = active_loop.step()
        if not should_continue:
            break

        # Send logs
        fair_train = wrapped_model.metrics['train_aggregate_res'].value
        epoch_logs = {
            'epoch': epoch,
            'test_loss': wrapped_model.metrics['test_loss'].value,
            'active_train_size': len(active_set)}

        agg_res = {'train_' + k: v for k, v in fair_train.items()}
        epoch_logs.update(agg_res)

        for k, v in epoch_logs.items():
            if k in logs[heuristic_name].keys():
                logs[heuristic_name][k].append(v)
            else:
                logs[heuristic_name][k] = [v]

        if len(active_set) > 2000:
            break

Results and Discussion

Below we show the number of samples added to each subgroup (i.e. a character with a specific color) as training goes on. Interestingly, BALD keeps adding samples to the minority group of each character, whereas random sampling draws uniformly from the pool and therefore mostly picks samples from each character's majority color. This indicates that active learning with BALD generally leads to a fairer dataset.
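The per-subgroup counts behind these plots can be computed directly from the labelled examples. Here is a sketch on toy (char, color) records; in the notebook itself, the `train_count_*` log keys are produced by `FairnessMetric`, not by this snippet:

```python
from collections import Counter

def subgroup_counts(records):
    """Count labelled examples per (char, color) subgroup."""
    return Counter((r['char'], r['color']) for r in records)

# Toy labelled set mirroring the biased training distribution.
labelled = ([{'char': 'a', 'color': 'blue'}] * 90 +
            [{'char': 'a', 'color': 'red'}] * 10 +
            [{'char': 'd', 'color': 'red'}] * 60 +
            [{'char': 'd', 'color': 'blue'}] * 40)

counts = subgroup_counts(labelled)
# Minority-group share for character 'a': red 'a's over all 'a's.
minority_share = counts[('a', 'red')] / (counts[('a', 'blue')] + counts[('a', 'red')])
print(minority_share)  # 0.1
```

A fairer labelled set would push this share toward 0.5, which is the trend the BALD curves show.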

[17]:
import matplotlib.pyplot as plt
%matplotlib inline


x = logs['bald']['epoch']
fig, ((ax0, ax1), (ax2, ax3)) = plt.subplots(nrows=2, ncols=2, sharex=True,
                                    figsize=(12, 6))
plots_target = [('minority count for character a', 'train_count_0_red'),
                ('minority count for character d', 'train_count_1_blue'),
                ('majority count for character a', 'train_count_0_blue'),
                ('majority count for character d', 'train_count_1_red')]

for ax, (title, key) in zip([ax0, ax1, ax2, ax3], plots_target):
    ax.set_title(title)
    ax.plot(x, logs['bald'][key], color='r', label="BALD")
    ax.plot(x, logs['random'][key], color='b', label="Uniform")
    ax.set_xlabel('Active step')
    ax.set_ylabel('Count')
    ax.legend()

fig.show()
[Figure: per-subgroup sample counts per active step, BALD vs. Uniform]

We plot the test loss and training-set size for BALD vs. random sampling. As shown, the training size grows at the same pace for both, but the graphs above reveal the underlying difference in which samples are selected for each class, which also translates into a faster loss decrease with BALD.

[16]:
x = logs['bald']['epoch']
fig, (ax0, ax1) = plt.subplots(nrows=1, ncols=2, sharex=True,
                                    figsize=(12, 6))
ax0.set_title('training size')
ax0.plot(x, logs['bald']['active_train_size'], color='r', label='BALD')
ax0.plot(x, logs['random']['active_train_size'], color='b', label='Uniform')

ax1.set_title('test loss')
ax1.plot(x, logs['bald']['test_loss'], color='r', label='BALD')
ax1.plot(x, logs['random']['test_loss'], color='b', label='Uniform')
ax1.legend()
fig.show()
[Figure: training size and test loss per active step, BALD vs. Uniform]