Designing Against Radical Tech: How Service Design Can Balance AI and Government Policies
Artificial intelligence is no longer just a tool.
It is a force reshaping economies, politics, and societies at a scale governments are still struggling to grasp. Around the world, leaders are drafting policies to either contain or accelerate its use. Some emphasize strict regulation and security, while others champion rapid innovation with minimal oversight. Both approaches risk producing radical outcomes: on one side, surveillance-heavy governance, and on the other, unchecked corporate power and social inequity.
The challenge is not simply about the policies themselves, but about how they translate into the lived experiences of citizens. This is where service design can step in, not to slow down innovation, but to ensure that emerging systems serve people rather than control them.
Government Policies: A Double-Edged Sword
Government policy often works like a double-edged sword. Strict regulation can prevent harm but may also fuel bureaucratic nightmares that erode trust and progress. Deregulation encourages innovation, but it risks creating monopolies and embedding structural inequalities into digital services. Meanwhile, AI itself evolves faster than any law, producing gaps in accountability. Policymakers face what could be called an “ethics vacuum,” where business incentives outpace public interest, and citizens are left feeling excluded from decisions that shape their everyday lives.
Digital Identity: Fairness vs. Surveillance
One of the clearest examples of this tension can be found in digital identity systems. Estonia has created one of the world’s most trusted digital identity infrastructures, designed with inclusivity and auditability in mind. Its framework demonstrates how governments can balance efficiency with fairness while protecting citizen rights (TechUK).
By contrast, the U.S. government’s Login.gov platform decided in 2021 to reject facial recognition and biometric verification. While the decision prioritized equity and accessibility, it created compliance challenges with federal security standards, highlighting the difficulty of designing systems that are both fair and standardized (Wired).
Welfare Algorithms: When Efficiency Becomes Bias
Welfare and social benefit systems illustrate how automation can drift into radical outcomes without proper safeguards. In the United Kingdom, AI-driven systems have been deployed to flag potential fraud in welfare claims, marriage licence applications, and immigration cases. Investigations revealed that these algorithms lacked transparency, leading to wrongful denials and reinforcing discriminatory practices (The Guardian).
Similarly, in France, human rights groups have taken legal action against government algorithms that assign “risk scores” to welfare recipients. Critics argue that these systems disproportionately target disabled people and single mothers, raising serious concerns about discrimination and legality (Wired).
Service Design: The Preventive Force
These examples reveal how service design can function as a preventive force. By focusing on lived experience, service design ensures that policies and technologies are translated into services that people can trust and use equitably. It is not enough to regulate AI at the legal or corporate level; the systems themselves must be designed with transparency, explainability, and accountability built in.
Research on human-centered AI has shown that bias can creep into every stage of the AI lifecycle, from data collection to model deployment. Service design methods, such as system mapping and participatory prototyping, can help governments identify unintended consequences before new technologies are deployed (NCBI).
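One lightweight check that such a review could include is a disparate-impact comparison: measuring how often each demographic group is flagged by a risk-scoring system and comparing those rates against a reference group. The sketch below is a minimal, hypothetical illustration in Python; the group labels, the flagging threshold, the toy score data, and the 0.8/1.25 rule of thumb are assumptions for demonstration, not details drawn from any of the systems cited above.

```python
from collections import defaultdict

def flag_rate_by_group(records, threshold=0.7):
    """Share of people flagged for review in each demographic group.

    `records` is a list of (group, risk_score) pairs; anyone whose
    score meets `threshold` is flagged. Names and threshold are
    illustrative, not taken from any real government system.
    """
    flagged = defaultdict(int)
    total = defaultdict(int)
    for group, score in records:
        total[group] += 1
        if score >= threshold:
            flagged[group] += 1
    return {g: flagged[g] / total[g] for g in total}

def disparate_impact(rates, reference_group):
    """Ratio of each group's flag rate to the reference group's rate.

    A common rule of thumb treats ratios above ~1.25 (or below ~0.8)
    as a signal that the system deserves closer human review.
    """
    ref = rates[reference_group]
    return {g: r / ref for g, r in rates.items()}

# Toy data: groups A and B share the same score distribution,
# except for a systematic offset applied against group B.
records = [("A", s / 100) for s in range(40, 90, 5)] + \
          [("B", (s + 10) / 100) for s in range(40, 90, 5)]

rates = flag_rate_by_group(records)
ratios = disparate_impact(rates, reference_group="A")
```

In this toy run, group B's systematically higher scores produce a flag rate 1.5 times that of group A, which would trip the rule-of-thumb alarm. The point is not the metric itself but where it sits in the process: run before deployment, as part of a service-design review, it surfaces exactly the kind of unintended consequence the investigations above found only after harm was done.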
Governance frameworks are also beginning to evolve in this direction. The Algorithmic State Architecture (ASA), for example, proposes that governments integrate four essential layers—digital public infrastructure, data-for-policy, algorithmic governance, and GovTech—into one cohesive system rather than implementing AI piecemeal (arxiv.org). Advocacy groups such as the Algorithmic Justice League are also pushing institutions to adopt fairer systems by exposing racial and gender bias in technologies like facial recognition and campaigning against their government adoption (AJL).
Toward a Balanced Future
The future of AI will not be determined solely by policymakers or engineers but by the way people experience and interact with these systems. Service design can keep AI from tipping toward either radical extreme, whether excessive control or chaotic deregulation. It brings in the voices of citizens, emphasizes ethical touchpoints, and anticipates risks before they spiral into crises.
Ultimately, a service that does not serve people is not progress; it is control. The real design challenge of our time is ensuring that government policies and AI systems are built not only for efficiency or profit but for dignity, fairness, and collective trust.
References
Ethical and Inclusive Digital Identity: A Value-Led AI Approach – TechUK
A US Agency Rejected Face Recognition—and Landed in Big Trouble – Wired
UK Officials Use AI to Decide on Issues from Benefits to Marriage Licences – The Guardian
Algorithms Policed Welfare Systems for Years. Now They’re Under Fire for Bias – Wired
Photo by Tara Winstead: Link