
AI and Social Inequality: Technology’s Role in Widening Gaps

How might AI widen social inequalities?


1.1 Historical Context of Technological Inequality: From Industrial Revolution to Digital Divide

Understanding the historical context of technological inequality requires examining key periods of technological advancement and their socio-economic impacts. One pivotal period is the Industrial Revolution, which fundamentally altered social structures and spurred new forms of inequality. Allen (2019) shows that in England between 1688 and 1867, a span encompassing the Industrial Revolution, unprecedented economic growth was accompanied by stark disparities among social classes. Using social tables to measure the size and incomes of six major social classes, Allen finds that changes in overall inequality were closely linked to the shifting fortunes of these classes. Notably, the study computes Gini coefficients for each period, revealing that Britain traced a 'Kuznets curve': inequality rose during early industrialization before leveling off. This historical pattern underscores that technological advances, however transformative, often act as a double-edged sword, amplifying social inequalities in the short term.
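For intuition, the short sketch below shows how a Gini coefficient can be computed from a grouped "social table" of class sizes and mean incomes, the kind of calculation Allen performs. The six classes and their figures are hypothetical placeholders, not Allen's data.

```python
# Minimal sketch: Gini coefficient from a grouped "social table" of class sizes
# and mean incomes. The class data below are hypothetical, not Allen's figures.

def gini_from_groups(populations, mean_incomes):
    """Gini via the trapezoidal approximation of the Lorenz curve."""
    groups = sorted(zip(populations, mean_incomes), key=lambda g: g[1])
    total_pop = sum(p for p, _ in groups)
    total_inc = sum(p * y for p, y in groups)
    gini, cum_income_share = 1.0, 0.0
    for pop, income in groups:
        new_share = cum_income_share + pop * income / total_inc
        gini -= (pop / total_pop) * (cum_income_share + new_share)  # 2x trapezoid area
        cum_income_share = new_share
    return gini

# Hypothetical six-class table: (households, mean annual income)
classes = [(1000, 40), (3000, 60), (2500, 120), (1500, 300), (600, 900), (100, 4000)]
print(round(gini_from_groups([p for p, _ in classes], [y for _, y in classes]), 3))
```

A Gini of 0 indicates perfect equality across the table's classes; values approaching 1 indicate that income is concentrated in the smallest, richest classes.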

The advent of digital technology in the late 20th century marked another significant shift, now frequently referred to as the Digital Divide. This term highlights the growing gap between individuals who have easy access to digital and information technology and those who do not. Small (2023) emphasizes that the digital revolution has been transformative in the social sciences, offering unprecedented access to vast amounts of data collected by governments and private companies. While this “data revolution” holds great promise for understanding and potentially addressing various forms of social inequality, it is also fraught with challenges. One significant risk identified is “size-induced blindness,” a phenomenon where the enormity of available data may lead to the neglect of important limitations, thus potentially perpetuating or even exacerbating existing inequalities. The low cost and high speed of data acquisition offer new opportunities but also pose ethical and methodological challenges that must be navigated carefully.

An essential facet of addressing these disparities lies in efforts to bridge the digital divide. Public libraries have emerged as crucial actors in this endeavor. Grimes and Porter (2023) discuss the central role libraries play in promoting digital equity, stressing the importance of public-private partnerships and government initiatives. Historically, libraries have functioned as democratic spaces offering free and equal access to information. In the digital age, they continue this legacy by addressing technology needs, implementing strategies for digital inclusion, and providing resources and infrastructure to marginalized communities. For instance, Grimes and Porter highlight the role of the New Jersey Broadband Access Commission in closing the digital gap through various initiatives, underscoring how public libraries contribute to digital equity by providing internet access, digital literacy programs, and technological resources.

The trajectory from the Industrial Revolution to the contemporary digital era illustrates that while technological advancements can foster economic growth and societal progress, they also tend to introduce new forms of inequality. The Industrial Revolution’s impact on social classes serves as a historical antecedent to today’s digital divide, exemplifying how technological benefits are often unevenly distributed. As society continues to navigate the implications of AI and other advanced technologies, lessons from these historical periods offer valuable insights. Efforts to mitigate inequalities must prioritize both access to technology and the equitable distribution of its benefits, necessitating collaborative and multi-faceted approaches involving policymakers, public institutions, and private entities. By learning from past and present experiences, society can better anticipate and address the social repercussions of ongoing technological innovation.

1.2 Current Trends in AI and Social Stratification: Income Gaps, Job Displacement, and Education

Artificial Intelligence (AI) has emerged as a transformative force across various sectors, driving both opportunities and challenges. Notably, its impact on social stratification, particularly on income gaps, job displacement, and educational disparities, has become a focal point for researchers and policymakers alike.

One crucial area where AI’s influence is keenly felt is the job market. The deployment of AI and machine learning technologies in industries such as manufacturing, transportation, and even service sectors has been a double-edged sword. On one side, as Tiwari (2023) elucidates, AI can lead to significant job displacement, primarily among low-skilled workers engaged in repetitive and predictable tasks. The automation wave can render many traditional roles obsolete, disproportionately affecting those who lack advanced skills required for roles in the new AI-infused economy. For instance, roles in data annotation, basic customer service, and certain segments of manufacturing are at higher risk of being automated. This displacement not only causes immediate unemployment concerns but also precipitates a broader economic impact by exacerbating income inequalities and creating socioeconomic instability.

Conversely, the introduction of AI technologies can also stimulate the creation of new employment opportunities, often in sectors that require advanced technical skills. High-skilled roles such as AI specialists, data scientists, and engineers see growing demand. Tiwari (2023) asserts that while job displacement is a significant issue, the potential for new job creation in AI and machine learning fields might outweigh these negative impacts. However, these newly created roles predominantly benefit those with access to advanced education and training, further entrenching the wage disparity between high-skilled and low-skilled workers.

In line with this, Grant and Üngör (2024) provide a more granular analysis through a three-level constant elasticity of substitution (CES) production model. Their study reveals a rising skill premium: wages for high-skilled workers, especially those with an AI-based education, outpace those for low-skilled workers. This discrepancy stems from the advantages that AI-equipped workers bring to automating and improving production processes. The "AI skill premium" points to a growing divide even among high-skilled workers, favoring those with specialized AI knowledge over those with traditional educational backgrounds. As automation penetrates deeper into various sectors, the resulting wage inequality accentuates the broader economic divide.
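To make the mechanism concrete, consider a stylized two-level CES structure. This is a simplified illustration in the spirit of, but not identical to, Grant and Üngör's three-level specification; all notation here is our own:

\[
Y = \left[ \alpha L_{\ell}^{\rho} + (1-\alpha) H^{\rho} \right]^{1/\rho},
\qquad
H = \left[ \beta L_{h}^{\sigma} + (1-\beta) (A L_{a})^{\sigma} \right]^{1/\sigma},
\]

where \(L_{\ell}\) denotes low-skilled labor, \(L_{h}\) high-skilled labor with a traditional education, \(L_{a}\) high-skilled labor with an AI-based education, and \(A\) an AI productivity multiplier. With competitive wages equal to marginal products, the AI skill premium within the high-skilled group is

\[
\frac{w_{a}}{w_{h}} = \frac{1-\beta}{\beta}\, A^{\sigma} \left( \frac{L_{a}}{L_{h}} \right)^{\sigma-1},
\]

so advances in AI (a rising \(A\)) widen this premium whenever the two types of high-skilled labor are gross substitutes (\(\sigma > 0\)), consistent with the qualitative pattern the authors report.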

Another domain where AI’s role in exacerbating social inequalities is evident is education. Li (2023) investigates the dual role of AI in the educational context, identifying significant challenges that could amplify existing disparities. While AI offers immense potential in personalized learning, resource optimization, and administrative efficiencies, it inadvertently risks widening the educational gap. Educational institutions with robust funding can leverage AI to enhance learning outcomes and empower students with cutting-edge tools and curricula, thus providing a significant advantage to students from affluent backgrounds. Conversely, underfunded schools, often serving marginalized communities, might struggle to afford such advancements, thus perpetuating the cycle of educational inequality.

Moreover, AI-driven educational tools are not immune to biases present in their development and deployment. Li (2023) underscores that disparities in resource allocation for AI implementation can magnify educational inequalities, with privileged demographics gaining more profound benefits from AI-enhanced learning environments. Meanwhile, under-resourced institutions continue to lag, further marginalizing vulnerable student populations. This segregation not only translates into immediate educational outcomes but also reverberates through the labor market, as students from disadvantaged backgrounds enter employment with fewer skills and opportunities.

To mitigate these issues, it is critical to adopt strategies that balance the benefits of AI with equitable access and opportunities. Government interventions, educational reforms, and corporate accountability are essential to ensure that AI becomes a tool for inclusive growth rather than a perpetuator of disparities. Investing in reskilling and upskilling programs for displaced workers, emphasizing inclusive AI education, and fostering public-private partnerships to democratize access to AI resources are pivotal steps in this direction.

Thus, while AI holds the promise of substantial economic and educational advancements, its current trajectory indicates a proclivity towards exacerbating social stratification. Addressing these challenges necessitates a multi-faceted approach, harnessing the potential of AI while conscientiously striving to minimize its adverse impacts on social inequality.

2.1 Disparities in Data Access and Utilization: Case Studies in Healthcare and Finance

The increasing integration of Artificial Intelligence (AI) into various societal sectors reveals stark disparities in data access and utilization that exacerbate social inequalities. This section delves into case studies in healthcare and finance to illustrate how unequal access to data can deepen social rifts, especially among marginalized communities.

In the healthcare sector, a telling example is the impact of the COVID-19 pandemic on healthcare access among different racial groups in the United States. Wu, Huang, and Gao (2023) provide a meticulous analysis of racial disparities in healthcare access during the pandemic, presenting a unified framework that measures inequality in both physical and virtual spaces. Their study introduces a novel Access Inequity Index that integrates the Information Theory Index and the Theil Index to capture unevenness in accessibility among racial groups. Their findings indicate significant racial disparity in healthcare access, particularly for telehealth services, which became crucial during the pandemic.
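As a rough illustration of the kind of building block involved, the sketch below computes a between-group Theil index over access scores. It is a generic textbook formulation, not Wu et al.'s full Access Inequity Index, and the group names and figures are invented for the example.

```python
import math

# Minimal sketch: between-group Theil index over access scores, a generic
# building block of the kind Wu et al. (2023) combine into their Access
# Inequity Index. Group names, populations, and access levels are invented.

def between_group_theil(groups):
    """groups: {name: (population, total_access)} -> Theil T (0 = proportional)."""
    total_pop = sum(p for p, _ in groups.values())
    total_access = sum(a for _, a in groups.values())
    theil = 0.0
    for pop, access in groups.values():
        access_share = access / total_access
        pop_share = pop / total_pop
        if access_share > 0:
            theil += access_share * math.log(access_share / pop_share)
    return theil

groups = {"white": (600_000, 0.82 * 600_000),   # (residents, access-weighted total)
          "black": (250_000, 0.55 * 250_000),
          "other": (150_000, 0.63 * 150_000)}
print(round(between_group_theil(groups), 4))  # larger value = more unequal access
```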

The study emphasizes that minority groups, such as Black and other non-white populations, exhibited a higher demand for telehealth services, aligning with greater risk for COVID-19 infection, hospitalization, and death. Additionally, the study points to residential segregation as a contributing factor to the segregated pattern of physical healthcare access. In places like Chicago, Black-dominated healthcare access zones largely mirrored Black residential clusters, indicating an intersection of racial and spatial factors affecting access to health services. This hybrid-space approach goes beyond traditional methods by providing a comprehensive understanding of racial disparities in healthcare, thereby shedding light on new areas for policy intervention and community engagement (Wu et al., 2023).

Turning to the rural-urban divide in healthcare, Tong et al. (2022) explore how health information technology (HIT) might bridge this gap in China. Despite high hopes pinned on HIT to alleviate access inequality, the realized impact remains unclear due to varied outcomes in different settings. Anchoring their research in social transformation theory and affordance actualization theory, the authors conducted an in-depth qualitative study, revealing that societal challenges in healthcare access trigger transformative HIT interventions. These interventions often lead to micro-level HIT effects that escalate to macro-level impacts through societal-level affordance actualization, which can effectively address rural-urban healthcare access inequalities.

The study provided nuanced insights into the adaptive nature of HIT interventions and differentiated between the effects of collective and shared affordances. For instance, telemedicine applications allowed rural residents to access specialist consultations virtually, breaking geographical barriers. However, challenges such as internet connectivity and digital literacy remain critical obstacles that need addressing to ensure HIT’s full potential in equalizing healthcare access (Tong et al., 2022).

In the financial sector, disparities in data access are equally evident. Burgos et al. (2022) present a holistic solution to data pipelining in digital finance and insurance applications, crucial for ingesting, processing, and utilizing large volumes of data efficiently. Their chapter discusses a range of architectural patterns designed to optimize data pipelining, minimizing both the total cost of ownership (TCO) of storage systems and the execution time of data-processing tasks.
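As an illustrative sketch only (it does not reproduce any specific pattern from Burgos et al., 2022), the generator-based pipeline below shows the streaming idea behind such patterns: records flow through ingest, transform, and sink stages without materializing intermediate copies, one way to trade storage cost against execution time.

```python
# Minimal sketch of a streaming (generator-based) data pipeline. Illustrative
# only; the feed, FX step, and filter are invented for the example.

import csv
import io

RAW = "id,amount\n1,100.0\n2,250.5\n3,99.9\n"  # stand-in for a finance data feed

def ingest(source):
    """Parse CSV rows lazily, one dict per record."""
    yield from csv.DictReader(io.StringIO(source))

def transform(records):
    """Enrich each record in-stream (hypothetical FX conversion step)."""
    for rec in records:
        rec["amount_eur"] = float(rec["amount"]) * 0.92
        yield rec

def sink(records):
    """Terminal stage: keep only large transactions."""
    return [r for r in records if r["amount_eur"] > 100]

print(sink(transform(ingest(RAW))))  # -> the single record with amount 250.5
```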

Efficient utilization and interpretation of financial data, however, remain far more accessible to large corporations and privileged segments of society. Smaller entities and underprivileged groups often find themselves at a disadvantage owing to limited access to such advanced data pipelines. This discrepancy widens the financial divide, making it difficult for smaller businesses and less affluent individuals to compete on an equal footing. The advanced data-handling capabilities enjoyed by larger firms enable better decision-making, risk management, and strategic planning, thereby perpetuating existing economic inequalities (Burgos et al., 2022).

In conclusion, the unequal access and utilization of data in both healthcare and finance contribute significantly to widening social inequalities. The case studies discussed highlight the critical need for targeted interventions—be it through advanced metrics to measure inequity, transformative HIT interventions, or optimized data pipelining solutions—to mitigate these disparities. Addressing these issues requires a multifaceted approach involving policy changes, technological advancements, and inclusive community engagements to ensure equitable access to essential services and opportunities.

2.2 Bias and Discrimination in Algorithmic Decision-Making: Predictive Policing and Job Recruitment Algorithms

The integration of AI technologies into predictive policing and job recruitment processes has exposed inherent biases and discriminatory practices embedded in algorithmic decision-making. This section examines the multifaceted implications of algorithmic bias in these domains and explores how these biases exacerbate social inequalities.

Predictive policing tools, which aim to forecast crime in certain areas, have been criticized for their biases. Ziosi and Pruss (2024) conducted a detailed study of the Chicago crime prediction algorithm, a prominent predictive policing tool. Through interviews with various stakeholders including community organizations, academic researchers, and public sector actors, the study revealed that different groups perceive and use evidence of algorithmic bias in diverse ways to further their own agendas. For instance, some stakeholders used evidence of bias to challenge and reform policies around police patrol allocation, while others aimed to refocus the narrative of crime from an interpersonal issue to a broader structural problem. The study emphasizes that the differential use of algorithmic bias evidence reflects long-standing tensions in criminal justice reform, particularly between systemic change proponents and those who favor surveillance and deterrence measures (Ziosi & Pruss, 2024).

Moreover, the media portrayal of predictive policing has significantly influenced public perception and engagement. Camilleri et al. (2023) analyzed UK press coverage from 2012 to 2021, focusing on two types of police technology: individual risk assessment and predictive area mapping. Their findings show a peak in media attention in 2018 followed by a significant decline, and a shift towards more negative reporting, with bias concerns becoming prevalent (Camilleri et al., 2023). Additionally, the police themselves have become less transparent and more reticent in engaging with the press. This reduced transparency risks exacerbating public distrust and may hinder efforts to mitigate potential harms associated with predictive policing technologies. Thus, while media coverage can significantly shape public understanding and policy, the lack of transparency from police forces adds another layer of complexity to the problem.

AI in job recruitment has likewise raised significant concerns about fairness and discrimination. Delecraz et al. (2022) examined a machine-learning algorithm designed to match job offers with potential candidates while incorporating safeguards aimed at reducing unfairness and discrimination. The study contrasts traditional recruitment methods, often criticized for lacking scientific rigor and for being prone to bias, with the proposed algorithm, which aims to ensure inclusivity and fairness. Even so, the authors acknowledge that advanced algorithmic safeguards carry an inherent risk of bias, since algorithms are only as unbiased as the data they are trained on (Delecraz et al., 2022). Nonetheless, the study posits that continuous monitoring and adjustment can help mitigate these issues and promote more equitable outcomes in recruitment.
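The sketch below illustrates one simple form such monitoring could take: comparing shortlisting rates across groups and flagging violations of the common "four-fifths rule". The data, group labels, and 0.8 threshold are illustrative assumptions, not Delecraz et al.'s implementation.

```python
# Minimal sketch of post-hoc unfairness monitoring on a matching algorithm's
# shortlisting decisions. Data and threshold are illustrative assumptions.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, shortlisted) -> {group: selection rate}."""
    counts = defaultdict(lambda: [0, 0])  # group -> [shortlisted, total]
    for group, shortlisted in decisions:
        counts[group][0] += int(shortlisted)
        counts[group][1] += 1
    return {g: s / t for g, (s, t) in counts.items()}

def disparate_impact_alert(decisions, threshold=0.8):
    """Return rates, the min/max rate ratio, and whether it breaches the rule."""
    rates = selection_rates(decisions)
    ratio = min(rates.values()) / max(rates.values())
    return rates, round(ratio, 3), ratio < threshold

decisions = ([("a", True)] * 50 + [("a", False)] * 50
             + [("b", True)] * 30 + [("b", False)] * 70)
print(disparate_impact_alert(decisions))  # rates 0.5 vs 0.3 -> ratio 0.6 -> alert
```

Run continuously over live decisions, this kind of check is what allows the "continuous monitoring and adjustments" the study advocates.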

The consequences of these biases are far-reaching. In the case of predictive policing, biased algorithms have been shown to disproportionately target marginalized communities, reinforcing existing social disparities and perpetuating cycles of distrust and discrimination. In job recruitment, biases can result in the exclusion of qualified candidates from underrepresented groups, hindering diversity and equality in the workplace. These examples highlight a critical need for a more nuanced understanding and approach to algorithmic decision-making.

Addressing these biases requires a multifaceted approach. Firstly, increased transparency and accountability in the development and deployment of AI technologies are paramount. As evidenced by Camilleri et al. (2023), transparency can mitigate public concerns and facilitate better policy interventions. Secondly, the involvement of diverse stakeholders, as suggested by Ziosi and Pruss (2024), can ensure that the algorithmic reforms are equitable and reflective of the needs of those most affected by these technologies. Lastly, continuous monitoring and refinement of algorithms, as advocated by Delecraz et al. (2022), can help to identify and rectify biases, promoting more inclusive and fair outcomes.

In summary, while AI technologies promise efficiency and innovation, their current implementations in predictive policing and job recruitment reveal significant issues of bias and discrimination. Addressing these challenges requires sustained efforts towards transparency, stakeholder engagement, and ongoing algorithmic scrutiny to ensure these tools do not perpetuate or exacerbate social inequalities.

3.1 Promoting Inclusive AI Development and Deployment: Participatory Design and Community Engagement

Advancing an inclusive approach in the development and deployment of AI technologies is pivotal in mitigating inequalities that arise from unequal access and utilization of these technologies. Participatory design and community engagement are two essential strategies that ensure AI systems cater to diverse community needs, promote equity, and foster social good.

Participatory governance and design play crucial roles in making AI more inclusive. According to Moon (2023), the complex and uncertain nature of AI necessitates governance mechanisms that are inclusive and participatory. The equitable involvement of stakeholders in the AI development process can mitigate risks and maximize benefits, contributing to social good. Moon posits that participatory governance allows for diverse perspectives to inform AI policies and practices, which can ensure that AI technologies do not inadvertently widen social inequalities but rather address specific community needs and priorities. Furthermore, policy recommendations from this study suggest that engaging multiple societal actors, including marginalized groups, can result in more equitable AI outcomes.

Community engagement is another critical aspect that can drive inclusivity in AI development. A study by Kaida et al. (2023) underscores the importance of community engagement in harnessing disaggregated data and advanced analytics for social justice. The study reveals that meaningful community engagement within academia-community-government partnerships can address systemic social inequities. By centering the priorities of equity-deserving groups and addressing structural barriers to community engagement, these partnerships can foster innovation and co-create ethical data governance policies that support community data ownership and access. The authors argue that such collaborations enable contextualized and effective analyses of disaggregated data, contributing to social equity and justice through informed AI applications.

The recovery of settlements following Typhoon Damrey in Vietnam further exemplifies the efficacy of participatory approaches in achieving inclusive technological solutions. Thanh and Wilderspin (2023) evaluated international projects aimed at recovering settlements and rebuilding homes using a participatory-inclusive approach. Their study highlights that engaging affected communities and stakeholders at various levels results in more relevant, effective, and sustainable outcomes. By employing a bottom-up participatory methodology, these projects not only provided timely support to the affected populations but also ensured that the interventions were culturally appropriate and tailored to the specific needs of the communities. This case underscores the importance of involving community voices in all stages of technological development and implementation to achieve equitable and resilient solutions.

One of the fundamental principles of participatory design in AI is to ensure that technologies are developed with a holistic understanding of the target community’s context. This involves participatory research methods, such as focus groups, workshops, and co-design sessions, to gather inputs directly from the users and stakeholders. This collaborative process helps in identifying potential biases and blind spots that designers might overlook, ensuring that the AI systems are designed to be fair and accessible to all.

Moreover, community engagement fosters trust and buy-in from the community members, which is essential for the successful deployment and adoption of AI technologies. By actively involving communities in the development process, developers can create AI systems that are not only technically robust but also socially responsible and aligned with the community’s values and needs. This approach also empowers communities, enabling them to have a say in how technologies that affect their lives are designed and used.

To conclude, fostering inclusive AI development through participatory design and community engagement is vital in mitigating AI-induced social inequalities. By adopting participatory governance mechanisms, centering community-driven priorities, and fostering equitable partnerships, stakeholders can develop AI technologies that are fair, transparent, and beneficial for all segments of society. Inclusive and engaged approaches ensure that the technological advancements contribute to social good, bridging gaps rather than widening them.

3.2 Policy and Regulatory Approaches to Equalize Access to AI Technologies: Data Privacy Laws and Ethical Guidelines

The rapid proliferation of artificial intelligence (AI) technologies has outpaced traditional regulatory frameworks, necessitating innovative policy approaches to prevent the deepening of social inequalities. Effective policy measures and regulatory mechanisms are essential to ensuring equitable access to AI technologies across different social groups. This section examines how data privacy laws and ethical guidelines can be leveraged to address these disparities, drawing insights from current research and policy analyses.

Data privacy laws constitute one of the foundational pillars in mitigating social inequality exacerbated by AI. These laws play a crucial role in protecting individual rights and ensuring trust in digital systems (Zubaedah et al., 2024). By safeguarding personal data, these regulations help address the risks associated with data breaches and unauthorized data usage, which disproportionately affect marginalized communities. For instance, robust data privacy frameworks can prevent the misuse of personal data that could lead to discriminatory practices in areas like healthcare, finance, and employment. However, implementing these laws poses significant challenges, especially in a globalized digital landscape where inconsistent regulations across borders can hamper their effectiveness (Zubaedah et al., 2024).

Cybersecurity regulations complement data privacy laws by protecting digital infrastructures and preventing data breaches. These regulations are vital for maintaining the integrity and security of AI systems, especially those handling sensitive data. Yet cybersecurity measures often struggle to keep pace with the rapidly evolving nature of cyber threats. This lag between regulatory frameworks and technological advancement can leave gaps that increase vulnerability, especially for underrepresented groups who may not have the resources to safeguard their digital assets (Zubaedah et al., 2024). Consequently, continuous updates and international cooperation are necessary to enhance the efficacy of these regulations.

Ethical guidelines provide an additional layer of protection by ensuring that AI technologies are developed and deployed in ways that respect human rights and promote social welfare. These guidelines often outline principles such as transparency, fairness, and accountability (Fukuda-Parr & Gibbons, 2021). For instance, the integration of ethical considerations into AI development can mitigate biases in algorithmic decision-making processes, thereby reducing discriminatory outcomes in areas like job recruitment and law enforcement. However, many current ethical guidelines are criticized for being too focused on transparency while neglecting other crucial aspects like enforceability and accountability (Fukuda-Parr & Gibbons, 2021). This gap underscores the urgency for more rigorous standards that can hold AI developers and deployers accountable for ethical lapses.

Moreover, the governance of AI through cooperative policies is essential for fostering inclusive AI ecosystems. Cooperative policies, which combine principles from ethical guidelines with democratic oversight, ensure that all stakeholders—including marginalized groups—participate in the decision-making process (Gianni et al., 2022). Such an approach not only democratizes AI development but also promotes equitable access to AI technologies. For example, participatory design models engage community members in the development process, ensuring that the resulting technologies address the specific needs and concerns of diverse populations (Gianni et al., 2022). However, achieving this level of inclusion requires significant shifts in policy frameworks to promote active and sustained engagement with civil society.

Despite the proliferation of ethical guidelines and national strategies, there remains a pressing need for a more systematic and enforceable approach to AI governance. Many existing guidelines fail to operationalize human rights principles effectively, often using them as rhetorical devices rather than enforceable standards (Fukuda-Parr & Gibbons, 2021). This has led to a call for action by governments and civil societies to develop more stringent regulatory measures rooted in international human rights frameworks. Such measures could include mandatory compliance with ethical standards, rigorous impact assessments, and robust accountability mechanisms to ensure that AI applications do not exacerbate social inequalities.

In conclusion, while data privacy laws and ethical guidelines provide foundational measures to address AI-driven social inequalities, their current implementation often falls short of ensuring equitable access to AI technologies. Strengthening these frameworks through continuous updates, international cooperation, and inclusive policy-making can help bridge these gaps. By fostering an environment of accountability and inclusivity, policymakers can mitigate the risks associated with AI and contribute to a more equitable digital society.

References:

Allen, R. (2019). Class Structure and Inequality During the Industrial Revolution: Lessons from England's Social Tables, 1688–1867. The Economic History Review. https://doi.org/10.1111/ehr.12661

Burgos, D., Kranas, P., Jimenez-Peris, R., & Mahíllo, J. (2022). Architectural Patterns for Data Pipelines in Digital Finance and Insurance Applications. Big Data and Artificial Intelligence in Digital Finance, 45–65. https://doi.org/10.1007/978-3-030-94590-9_3

Camilleri, H., Ashurst, C., Jaisankar, N., Weller, A., & Zilka, M. (2023). Media Coverage of Predictive Policing: Bias, Police Engagement, and the Future of Transparency. Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization. https://doi.org/10.1145/3617694.3623249

Delecraz, S., Eltarr, L., Becuwe, M., Bouxin, H., Boutin, N., & Oullier, O. (2022). Making Recruitment More Inclusive: Unfairness Monitoring With A Job Matching Machine-Learning Algorithm. In 2022 IEEE/ACM International Workshop on Equitable Data & Technology (FairWare) (pp. 34–41). https://doi.org/10.1145/3524491.3527309

Fukuda-Parr, S., & Gibbons, E. (2021). Emerging Consensus on 'Ethical AI': Human Rights Critique of Stakeholder Guidelines. Global Policy, 12(S6), 32–44. https://doi.org/10.1111/1758-5899.12965

Gianni, R., Lehtinen, S., & Nieminen, M. P. (2022). Governance of Responsible AI: From Ethical Guidelines to Cooperative Policies. Frontiers in Computer Science, 4. https://doi.org/10.3389/fcomp.2022.873437

Grant, R., & Üngör, M. (2024). The AI revolution with 21st century skills: Implications for the wage inequality and technical change. Scottish Journal of Political Economy. https://doi.org/10.1111/sjpe.12395

Grimes, N. D., & Porter, W. (2023). Closing the Digital Divide Through Digital Equity: The Role of Libraries and Librarians. Public Library Quarterly, 43, 307–338. https://doi.org/10.1080/01616846.2023.2251348

Kaida, A., Anderson, J., Barnard, C., Bartram, L., Bert, D., Carpendale, S., … Smith, M. (2023). Realizing the Promise of Disaggregated Data and Analytics for Social Justice Through Community Engagement and Intersectoral Research Partnerships. Engaged Scholar Journal: Community-engaged Research, Teaching, and Learning. https://doi.org/10.15402/esj.v8i4.70792

Li, H. (2023). AI in Education: Bridging the Divide or Widening the Gap? Exploring Equity, Opportunities, and Challenges in the Digital Age. Advances in Education, Humanities and Social Science Research. https://doi.org/10.56028/aehssr.8.1.355.2023

Moon, M. J. (2023). Searching for inclusive artificial intelligence for social good: Participatory governance and policy recommendations for making AI more inclusive and benign for society. Public Administration Review, 83(6), 1496–1505. https://doi.org/10.1111/puar.13648

Small, M. L. (2023). The Data Revolution and the Study of Social Inequality: Promise and Perils. Social Research: An International Quarterly, 90(4), 757–780. https://doi.org/10.1353/sor.2023.a916353

Thanh, M. N. T., & Wilderspin, I. F. (2023). Recovering Sustainable Settlements for People Affected by Disaster: An Inclusive-Participatory Approach. E3S Web of Conferences. https://doi.org/10.1051/e3sconf/202340301025

Tiwari, R. (2023). The Impact of AI and Machine Learning on Job Displacement and Employment Opportunities. International Journal of Scientific Research in Engineering and Management. https://doi.org/10.55041/ijsrem17506

Tong, Y., Tan, C.-H., Sia, C., Shi, Y., & Teo, H. (2022). Rural-Urban Healthcare Access Inequality Challenge: Transformative Roles of Information Technology. MIS Quarterly. https://doi.org/10.25300/misq/2022/14789

Wu, M., Huang, Q., & Gao, S. (2023). Measuring Access Inequality in a Hybrid Physical-Virtual World: A Case Study of Racial Disparity of Healthcare Access During COVID-19. 2023 30th International Conference on Geoinformatics, 1–10. https://doi.org/10.1109/Geoinformatics60313.2023.10247690

Ziosi, M., & Pruss, D. (2024). Evidence of What, for Whom? The Socially Contested Role of Algorithmic Bias in a Predictive Policing Tool. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency. https://doi.org/10.1145/3630106.3658991

Zubaedah, P. A., Harliyanto, R., Situmeang, S. M. T., Siagian, D. S., & Septaria, E. (2024). The Legal Implications of Data Privacy Laws, Cybersecurity Regulations, and AI Ethics in a Digital Society. The Journal of Academic Science. https://doi.org/10.59613/29qypw51

 

