Tax authorities’ adoption of technology aims to enhance efficiency, accuracy, and taxpayer compliance; however, the integration of AI and automation also raises concerns about transparency, data privacy, and potential bias
It has long been a running joke that United States government entities are the slowest to adopt technology, the most inefficient, and the very embodiment of the word bureaucracy. Government tax authorities, especially in the US, have had a bad rap as being among the most technologically lagging.
Over the last five to ten years, however, a significant shift has occurred: tax authorities around the globe have begun investing heavily in technology and undertaking their own digital transformations. These agencies now recognize that most economies are digital, and that properly serving taxpayers and supporting tax compliance requires digitalizing most of their own tax processes.
In addition to driving efficiencies, the digitalization of tax administration allows for greater transparency. For example, fully digitized tax authorities can request and process taxpayers’ information faster. The intergovernmental Organisation for Economic Co-operation and Development (OECD) has relied on the digitalization of the European Union’s member countries to help ensure taxpayers are paying their fair share.
E-invoicing, a growing global trend, gives tax authorities greater insight into transactions between online sellers and buyers, since most sales now take place digitally. In the US, the Inflation Reduction Act provided the Internal Revenue Service with additional funding, of which about $5 billion was allocated to technology modernization.
Artificial intelligence (AI), its companion generative AI (GenAI), and other automation technologies offer government tax authorities opportunities for greater efficiency and accessibility, and can make it easier for taxpayers to interact with these entities. However, that ease of access, which often includes taxpayers’ requests for more information, has created some unintended consequences and a heavier workload for tax authorities. As tax jurisdictions increasingly rely on AI and GenAI, some fear it could lead to less transparency; there are further concerns that the loss of human judgment could introduce different kinds of biases into the process.
In June, the Center for Taxpayer Rights held its international conference, at which a central theme was a digital taxpayers’ bill of rights. The conference examined the impact of digitalization on taxpayer rights, focusing on transparency, fair treatment, and human intervention in tax administration.
Lack of transparency in automation and AI
There is no question that the digitalization of tax authorities’ processes, including the integration of AI and automation, has numerous benefits such as efficiency, accuracy, and cost reduction; however, it also presents challenges, particularly concerning transparency.
Most individuals do not understand the complex algorithms behind automation and AI. When tax authorities embrace digitalization without providing enough context or clarity about how the technology is being used and, more importantly, how decisions are made, distrust can percolate among taxpayers.
Data privacy and data sharing
Tax authorities process more information about taxpayers than almost any other government institution. The large language models that underpin GenAI raise concerns about how that data is collected, used, and, more importantly, stored. Cyber-attacks continue to increase globally, and nearly every country has committed major law enforcement resources to combating them. Indeed, more than 4,000 cyber-attacks happen every day, and roughly every 14 seconds a company falls victim to ransomware.
Beyond the risk of taxpayers’ data being hacked, there are also concerns that tax authorities will share information with other agencies and even private companies. By law, the IRS isn’t allowed to share taxpayers’ information; however, there are exceptions, including scenarios in which certain third-party vendors can be given permission to share information.
Fair treatment and human intervention
Another growing concern about the use of advanced AI-driven technology is that tax authorities may train their AI and GenAI on taxpayers’ data, greatly exacerbating any biases that already exist in that information. Worse yet, if these existing biases are not addressed, they could lead to further unfair treatment of certain groups. Such was the case in the Netherlands, where the government admitted in 2022 that it had used “algorithms in which foreign-sounding names and dual nationality” were indicators of potential fraud. Not surprisingly, this led to several families having childcare benefits withheld. And those who did receive benefits were told they had committed fraud and had to return the payments or face criminal charges.
Clearly, in this case automation and a lack of human oversight played a significant role in how long this mishap was allowed to continue. A human in the loop who could have spotted the correlation between certain names and denials or fraud findings could have stopped what transpired much sooner.
In the US, groups like the Center for Taxpayer Rights are calling on the IRS to be thoughtful and diligent when it considers how software is used, and to make every effort to ensure that the data the software relies on is free from bias. Granted, this is a tall order for many tax authorities, especially smaller ones, but the point is to be proactive.
Technology has enhanced the way we live and work, of course. But as with other organizations and companies that handle our data, a standard must be established for government agencies as well, to better ensure that mishaps like the one in the Netherlands, or the needless flagging of certain companies for audit because of an algorithmic error, are much less likely to happen here.