Review on fairness in image recognition
2024, Vol. 29, No. 7, pages 1814-1833
Print publication date: 2024-07-16
DOI: 10.11834/jig.230226
Wang Mei, Deng Weihong, Su Sen. 2024. Review on fairness in image recognition. Journal of Image and Graphics, 29(07): 1814-1833
In the past few decades, image recognition technology has undergone rapid development and has been integrated into people's lives, profoundly changing the course of human society. The technology is developed to benefit people by reducing human labor and increasing convenience. However, recent studies and applications indicate that image recognition systems can exhibit human-like discriminatory bias or make unfair decisions toward certain groups or populations, even degrading their performance on historically underserved populations. Consequently, guaranteeing the fairness of image recognition systems and preventing discriminatory decisions are necessary for people to fully trust the technology and live in harmony with it. This paper presents a comprehensive overview of cutting-edge research progress toward fairness in image recognition. First, fairness is defined as achieving consistent performance across different groups regardless of peripheral attributes (e.g., color, background, gender, and race), and the emergence of bias is explained from three aspects. 1) Data imbalance. In existing datasets, some groups are overrepresented and others are underrepresented; deep models favor the overrepresented groups during optimization to boost overall accuracy, while the underrepresented groups are neglected during training. 2) Spurious correlations. Existing methods capture unintended decision rules from spurious correlations between target variables and peripheral attributes and therefore fail to generalize to images in which such correlations do not hold. 3) Group discrepancy. A large discrepancy exists between different groups, and performance on some subjects is sacrificed when deep models cannot balance the specific requirements of the various groups. A minimal synthetic illustration of how data imbalance and spurious correlation arise is sketched below.
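As a hypothetical illustration of the data-imbalance and spurious-correlation problems described above (this is not code from any of the surveyed works), the following Python sketch builds a Colored-MNIST-style training split in which the digit label is almost perfectly correlated with an assigned color; a classifier can then reach high training accuracy by exploiting color alone and fails on a test split where the correlation is broken. The function `make_biased_split` and the `bias_ratio` parameter are illustrative names.

```python
import numpy as np

rng = np.random.default_rng(0)
N_CLASSES = 10
PALETTE = rng.random((N_CLASSES, 3)).astype(np.float32)  # one fixed RGB tint per color id

def make_biased_split(n, bias_ratio=0.95):
    """Assign each sample a color id that equals its digit label with
    probability `bias_ratio` (bias-aligned) and a random color otherwise
    (bias-conflicting), then tint a stand-in 28x28 image with that color."""
    labels = rng.integers(0, N_CLASSES, size=n)
    aligned = rng.random(n) < bias_ratio
    colors = np.where(aligned, labels, rng.integers(0, N_CLASSES, size=n))
    images = rng.random((n, 28, 28, 1)).astype(np.float32)   # placeholder digit images
    colored = images * PALETTE[colors][:, None, None, :]     # shape (n, 28, 28, 3)
    return colored, labels, colors

# Color and label are almost perfectly correlated in the training split,
# while the correlation is broken in the test split.
x_train, y_train, c_train = make_biased_split(50_000, bias_ratio=0.95)
x_test, y_test, c_test = make_biased_split(10_000, bias_ratio=0.10)

# A model that predicts the digit from its color alone appears accurate here
# (~95% of training samples are bias-aligned) but fails once the shortcut breaks.
print(f"bias-aligned training samples: {(y_train == c_train).mean():.2%}")
```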
Second, the datasets (e.g., the Colored Modified National Institute of Standards and Technology database (Colored MNIST), the Corrupted Canadian Institute for Advanced Research-10 database (Corrupted CIFAR-10), the CelebFaces Attributes database (CelebA), biased action recognition (BAR), and racial faces in the wild (RFW)) and the evaluation metrics (e.g., equal opportunity and equalized odds) used for fairness in image recognition are introduced. These datasets enable researchers to study the bias of image recognition models with respect to color, background, image quality, gender, race, and age; the two group-fairness criteria are formalized below.
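In their standard formulation for a binary decision task, with $\hat{Y}$ the model prediction, $Y$ the ground-truth label, and $A$ the sensitive (bias) attribute, equal opportunity requires equal true-positive rates across groups, and equalized odds additionally requires equal false-positive rates:

$$\Pr(\hat{Y}=1 \mid A=0,\, Y=1) = \Pr(\hat{Y}=1 \mid A=1,\, Y=1) \quad \text{(equal opportunity)}$$

$$\Pr(\hat{Y}=1 \mid A=0,\, Y=y) = \Pr(\hat{Y}=1 \mid A=1,\, Y=y), \quad y \in \{0,1\} \quad \text{(equalized odds)}$$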
Third, the debiasing methods designed for image recognition are divided into seven categories. 1) Sample reweighting (or resampling). This approach assigns larger weights to (or samples more frequently from) the minority groups and smaller weights to (or samples less frequently from) the majority groups, which helps the model focus on the minority groups and reduces the performance gap across groups; a minimal sketch of group-frequency reweighting is given after this abstract. 2) Image augmentation. Generative adversarial networks (GANs) are introduced into debiasing to translate images of overrepresented groups into images of underrepresented groups; the bias attributes of overrepresented samples are modified while their target attributes are preserved, so additional samples are generated for the underrepresented groups and the data-imbalance problem is alleviated. 3) Feature augmentation. Because image augmentation suffers from mode collapse during GAN training, some works instead augment samples at the feature level; the recognition model is encouraged to produce consistent predictions for samples before and after the bias information in their features is perturbed or edited, which prevents the model from predicting target attributes from bias information and thus improves fairness. 4) Feature disentanglement. One of the most commonly used debiasing strategies, it removes the spurious correlation between target and bias attributes in the feature space and learns target features that are independent of the bias. 5) Metric learning. This approach exploits metric learning (e.g., contrastive learning) to encourage predictions based on target attributes rather than bias information, pulling together samples that share a target class but differ in bias class and pushing apart samples of different target classes that share similar bias classes in the feature space. 6) Model adaptation. Some works adaptively change the network depth or hyperparameters for different groups according to their specific requirements to address group discrepancy, which improves performance on underrepresented groups. 7) Post-processing. This approach assumes only black-box access to a biased model and modifies the model's final predictions to mitigate bias. The advantages and limitations of each category are discussed, and competitive results and experimental comparisons on widely used benchmarks are summarized. Finally, although considerable progress has been made, the field is still at an early stage of development, and the following future directions are reviewed. 1) Existing datasets and evaluation metrics remain to be improved: bias attributes in current datasets are limited to color, background, image quality, race, age, and gender, and more diverse datasets must be constructed to study the highly complex biases of the real world. 2) Most recent bias-mitigation studies require annotations of the bias source, yet such annotations are labor-intensive and multiple biases may coexist, so the mitigation of multiple unknown biases has yet to be fully explored. 3) A tradeoff dilemma exists between fairness and overall accuracy, and reducing the effect of bias without hampering the overall model performance remains challenging. 4) Unique trends are emerging for specific tasks: causal intervention is introduced into object classification to mitigate bias, while individual fairness is proposed in face recognition to encourage models to give the same predictions to similar individuals. 5) Debiasing for video data has also begun to attract attention.
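As a minimal sketch of the sample-reweighting idea in category 1), the following PyTorch-style code weights each sample's cross-entropy loss by the inverse frequency of its group, so that underrepresented groups contribute more to the gradient. It assumes every training sample carries a group (bias-attribute) label; the helper names `group_weights` and `reweighted_cross_entropy` and the toy numbers are illustrative, not taken from any specific surveyed method.

```python
import torch
import torch.nn.functional as F

def group_weights(group_ids: torch.Tensor, num_groups: int) -> torch.Tensor:
    """Inverse-frequency weight per group: rare (minority) groups get larger
    weights; weights are normalized so their mean over the dataset is 1."""
    counts = torch.bincount(group_ids, minlength=num_groups).float().clamp(min=1)
    return counts.sum() / (num_groups * counts)

def reweighted_cross_entropy(logits, targets, group_ids, weights_per_group):
    """Cross-entropy where each sample is weighted by its group's weight,
    so underrepresented groups contribute more to the gradient."""
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    w = weights_per_group[group_ids]
    return (w * per_sample).sum() / w.sum()

# Toy usage: 2 groups, group 1 is the minority (about 10% of samples).
torch.manual_seed(0)
logits = torch.randn(100, 5)             # 100 samples, 5 target classes
targets = torch.randint(0, 5, (100,))
groups = (torch.rand(100) < 0.1).long()  # ~10% of samples in minority group 1
w = group_weights(groups, num_groups=2)
loss = reweighted_cross_entropy(logits, targets, groups, w)
print(w, loss.item())
```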
fairness; bias; debiased learning; image recognition; deep learning