AI algorithms shape decisions worldwide: they prioritise patients on medical waiting lists, determine loan approvals, conduct risk assessments in the criminal justice system, rank job applications, and decide which news stories appear in our feeds. However, these decisions are often made by algorithms trained on biased data, designed without equity in mind, and deployed with little transparency or accountability.
One example is the gig economy: a labour market of short-term, flexible jobs mediated by digital platforms such as Uber and Deliveroo. Here, male-centric AI algorithms penalise women for caregiving responsibilities that limit their constant availability, resulting in lower visibility, fewer assignments, and reduced ratings (Micha, Poggi, & Pereyra, 2022). This limits their earnings and can exclude them from the most profitable opportunities. At the macroeconomic level, these automated biases compound to widen the gender pay gap in the gig economy (Read, 2022).
These harms extend beyond women. Because AI algorithms are designed to maximise efficiency, the pressure of algorithmic management has created a new form of unregulated work. The “just-in-time scheduling” that algorithms impose endangers gig workers’ physical and mental health, as workers learn of their shifts only hours in advance rather than days. While this approach reduces labour costs for companies, it creates significant uncertainty, stress, and burnout for workers (Thelen, 2019).
Most concerning of all, unlike traditional technologies, AI can cause harm through a nearly invisible process called “Discrimination 3.0,” in which bias is subtly and deeply embedded within digital platforms (Barzilay & Ben-David, 2017). This makes it difficult for those harmed to understand how or why they were affected. “Algorithmic violence” describes this hidden form of passive harm: pervasive, long-term, and difficult to detect.
As these examples illustrate, algorithmic harms are not always the result of overt prejudice. More often, they stem from design intent, optimisation goals, and structural inequities that are baked into the systems themselves. This means that regulation cannot simply focus on banning specific outcomes. It must address the root causes: who is involved in the design process, what values guide development, and how systems are tested, monitored, and corrected when they cause harm.
Yet, too often, the voices of those most affected by algorithmic decision-making are missing from these conversations. Policymakers may lack direct experience with the systems they regulate, and technologists may be disconnected from the social realities their tools impact. The result is a widening gap between AI governance frameworks and the lived realities of the people they are meant to protect.
Join the Fair Tech Policy Proposal Project
The Fair Tech Policy Lab (FTPL) was founded to close this gap. We are an interdisciplinary think tank confronting the hidden harms of technology through research, advocacy, and public engagement. FTPL brings together a global network of students, researchers, technologists, legal experts, social scientists, entrepreneurs, and advocates. This diversity ensures our work is informed by complementary perspectives, and such interdisciplinary collaboration often accelerates progress.
One of our flagship initiatives features a section open exclusively to YAIL hub members worldwide. Over the course of the project, participants will research and develop policy proposals or frameworks that address pressing AI governance challenges in their own countries or regions, drawing on their expertise, local perspectives, and lived experiences. Proposals might critique existing regulations, compare governance approaches across jurisdictions, or introduce innovative solutions to close gaps in fairness, accountability, and inclusion.
In this project, you’ll be connected with other YAIL participants and FTPL members, with whom you can share and discuss ideas, exchange feedback, and explore potential collaborations. Together, we form a diverse, global community united by a shared mission: advancing fair, accountable, and inclusive AI.
By the end, your work will not only be showcased on the YAIL and FTPL platforms to reach a wider audience, but could also develop into longer-term projects with tangible impact.
Publication & Selection
We deeply value the diversity of perspectives this project will bring, and we aim to publish as many proposals as possible. To highlight outstanding contributions, we will select around ten proposals (depending on quality and participation levels) to be published in full and featured on the FTPL and YAIL platforms. Additional proposals may also be published on the FTPL website if they meet our quality standards.
The key evaluation criteria for featured proposals include innovativeness, identification of gaps in the current literature, streamlining of existing approaches, and feasibility.

Practical Details
Format: Up to 1,500 words in English (longer submissions can be discussed)
Eligibility: All members of YAIL Hubs worldwide
Deadline: 10th October (extended deadlines can be discussed)
This is your opportunity to apply your expertise and perspective to some of the most urgent questions in AI governance. At the same time, you’ll connect with peers from around the world who share your commitment. All of this takes place with the support of a global community working together to build fairer, more inclusive AI.
To participate, please fill out this form: https://forms.gle/BnNUE8asBsC7dvi8A
Learn more about us at: www.fairtechpolicylab.org

Alina Huang is the Founder of the Fair Tech Policy Lab, a think tank addressing algorithmic discrimination and bias that translate into tangible harm, an urgent issue in the digital age that has lacked sufficient policy intervention. The think tank operates through interdisciplinary research, policy proposals, and public engagement. Alina is a member of the Young AI Leaders London Hub, where she collaborates with a global network to shape a vision for ethical and inclusive technology. Alina also volunteers at We and AI, a nonprofit organisation based in the UK. She is currently co-developing a project called “Sensory AI Boxes”, a tactile, hands-on engagement toolkit designed to foster critical thinking around AI among marginalised communities.
Alina’s purpose is grounded in challenging the invisibility of digital injustice. Through her work, she aims to ensure that people not only understand the systems shaping their futures, but actively reshape them with equity. Alina’s academic interests lie at the intersection of artificial intelligence, public policy, and social justice. Her published research explores how biased technologies reinforce gender inequalities in the judiciary and the gig economy. She has presented her work at international conferences such as ICGPSH and ICFBA, and has been recognised by the Social Justice Awards (First Prize Winner) and the Harvard International Review (Silver Medallist). Alina is currently conducting graduate-level research through the University of California, Santa Barbara, where she investigates natural language processing models.
In addition to her academic interests, Alina is deeply passionate about social entrepreneurship and leadership. She co-founded the Raising Awearness Campaign, an international nonprofit operating in over six countries under 501(c)(3) status. The organisation uses sustainable clothing and tote bags with meaningful messages to provide an avenue for teenagers to raise awareness of social issues through ethical fashion, with the current theme being “Coded Inequality.” Alina is also the editor-in-chief of Vantage Magazine, an interdisciplinary academic publication that celebrates diverse perspectives and intellectual curiosity across politics, classics, and economics.