The digitisation of public services across the world has been underway for much longer than most of us realise. Since the end of the Second World War, governments have used computers to handle data and derive insights that drive public policy. As digital technologies have matured and become accessible to more institutions, their use in the public sphere has expanded accordingly. Today, the near-universal adoption of smartphones and modern cloud systems has allowed digital channels to become the primary means of accessing public services in all but the least-developed economies.
This digital transformation is widely regarded as a positive force for humankind. In most countries, digitalisation has removed multiple barriers to accessing healthcare, education, and entrepreneurial opportunities. New ways of enhancing efficiency, accessibility, and convenience for the public emerge every day, sometimes in the most unexpected places.
However, we must remember that our digital world is still a nascent one, and many previously unforeseen ethical dilemmas have reared their heads in recent years. Let’s examine some of the more common ethical challenges that public decision-makers face in this ongoing technological transition:
1. Ensuring Ethical AI in Worker Management
Even before the recent growth of generative AI, other forms of artificial intelligence were already helping governments and corporate organisations sort through growing mountains of data. However, this use of AI does not always receive adequate scrutiny, even in cases where workers’ careers and reputations are at stake. Indeed, the use of AI systems for worker management is classified as high-risk under the European Union’s AI Act, the world’s first binding horizontal regulation on AI.
For all of AI’s labour-saving benefits, it is impractical to expect governments not to use it to streamline worker management, whether for internal purposes or to inform national employment policies. However, there are ways forward that would allow AI to be used more responsibly in labour management. A 2022 paper published in the New England Journal of Public Policy, “Reshaping the Digitization of Public Services”, suggests several improvements to the use of AI in workforce management:
- Keeping humans involved in the assessment process.
- Creating multi-stakeholder regulatory bodies for oversight.
- Establishing independent regulators.
- Running risk assessments before, during, and after implementing AI-driven work-management policy actions.
- Providing full transparency of assessments, regardless of the technologies used.
- Defining clear rights of redress for workers and the public.
Policymakers in charge of employment decisions should consider these measures to better prioritise human rights and autonomy over theoretical efficiency gains.
2. Managing Accountability in Complex Digital Systems
The increasing pace of technological change in public policy is depersonalising not only workers but also decision-makers. As digital systems become more complex, a phenomenon that some academics call “managerial fuzz” can emerge: technology systems become so complex that they divorce decision-makers from their actions and responsibilities, degrading accountability.
Managerial uncertainty is not unique to the digital age, but the use of impersonal technologies has reshaped its fundamental nature, creating situations that are inimical to human rights as we have come to know them. Without further education, we can no longer simply assume that decision-makers understand the ethical implications of their interactions with algorithmic systems.
Addressing managerial fuzz and maintaining accountability may require a renewed emphasis on multidisciplinary approaches to governance, so that decision-makers retain a clear sense of their roles. Clearer divisions of responsibility may also help, as may greater involvement of other stakeholders in digitalised decision-making.
3. Unequal Competencies in the Digital Age
Though many of the world’s public services are now fully digital, billions of people still cannot adequately navigate modern digital systems. Digital infrastructure has yet to reach all parts of the world, and many educational systems have not yet equipped their stakeholders with the skills needed to use digital systems responsibly. Distressingly, top policymakers often lack digital literacy themselves. Together, these gaps produce society-wide patterns of irresponsible technology use.
The specific solutions will vary depending on the causes of the digital skills gap. It is clear, however, that capacity building at every level will be key: workers, public service officials, and application developers all need training on digital tools and ethics. Policies must also give economically and socially disadvantaged individuals the means to navigate digital public services responsibly.
Building Ethical Digital Ecosystems
The unethical use of data is nothing new; historical regimes have used advanced data-management techniques and technologies to quickly identify and victimise targeted groups and individuals. But the unfathomable speed and scale at which we can now leverage data should give us pause.
No matter how much efficiency they produce, digital ecosystems that disregard ethical considerations run contrary to the principles we collectively share. Without regard for the welfare of individuals, such systems will only empower a small portion of society at the expense of everyone else. As global digitalisation accelerates, coordinated efforts by public and private entities the world over are needed to ensure that digital public services continue to empower individuals and serve the public good.