Perspectives of Defining Algorithmic Fairness in Customer-oriented Applications: A Systematic Literature Review

Maw Maw (1), Su-Cheng Haw (2), Kok-Why Ng (3)
(1, 2, 3) Faculty of Computing and Informatics, Multimedia University, Jalan Multimedia, Cyberjaya, Selangor, Malaysia
How to cite (IJASEIT) :
Maw, Maw, et al. “Perspectives of Defining Algorithmic Fairness in Customer-Oriented Applications: A Systematic Literature Review”. International Journal on Advanced Science, Engineering and Information Technology, vol. 14, no. 5, Oct. 2024, pp. 1504–13, doi:10.18517/ijaseit.14.5.11676.
Automated decision-making systems are widely deployed across many types of businesses, including customer-oriented sectors, where they have delivered notable gains by offering customers more personalized experiences. However, recent studies have shown that algorithmic decisions can be unfair to individuals or groups. Algorithmic fairness has therefore become a spotlight research area, and formulating concrete fairness notions has become a significant research problem. The existing literature contains more than 21 definitions of algorithmic fairness. Many studies have shown that these notions are mutually incompatible, and they still need to be adapted to the legal and social principles of the sectors in which they are applied. Yet, algorithmic fairness constraints for customer-oriented areas have not been thoroughly studied. This motivated us to conduct a systematic literature review investigating which sectors treat algorithmic fairness as a significant concern when using machine-based decision-making systems, which fairness notions are well applied and why they can or cannot be transferred directly to customer-oriented sectors, and which fairness constraints are feasible for those sectors. Following the standard guidelines for systematic literature reviews, we examined 65 prominent articles in depth. The findings reveal 43 distinct formulations of algorithmic fairness notions across a variety of domains. We also identify three important perspectives to consider when enhancing algorithmic fairness notions for customer-oriented sectors.
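To make the incompatibility problem mentioned above concrete, the following minimal sketch (not taken from the paper; the toy data and function names are illustrative assumptions) computes two widely cited group-fairness notions, demographic parity and equal opportunity, on a small set of predictions. The example shows that a classifier can satisfy one notion exactly while maximally violating the other.

```python
def demographic_parity_diff(y_pred, group):
    # Demographic parity: P(pred = 1 | group = "a") - P(pred = 1 | group = "b").
    # Zero means both groups receive positive predictions at the same rate.
    rate = lambda g: sum(p for p, s in zip(y_pred, group) if s == g) / group.count(g)
    return rate("a") - rate("b")

def equal_opportunity_diff(y_true, y_pred, group):
    # Equal opportunity: difference in true-positive rates between groups.
    # Zero means truly qualified members of each group are selected equally often.
    def tpr(g):
        pos = [p for t, p, s in zip(y_true, y_pred, group) if s == g and t == 1]
        return sum(pos) / len(pos)
    return tpr("a") - tpr("b")

# Toy data: both groups have a 50% selection rate, but group "a"'s
# qualified members are all selected while group "b"'s are not.
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]
y_true = [ 1,   1,   0,   0,   1,   0,   0,   0 ]
y_pred = [ 1,   1,   0,   0,   0,   1,   1,   0 ]

print(demographic_parity_diff(y_pred, group))         # 0.0 -> parity satisfied
print(equal_opportunity_diff(y_true, y_pred, group))  # 1.0 -> opportunity violated
```

The same tension generalizes: enforcing one notion typically constrains the predictor in ways that break another, which is why the review emphasizes choosing notions that fit the legal and social principles of the target sector rather than adopting a single universal definition.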



This work is licensed under a Creative Commons Attribution 4.0 International License.
