
Navigating AI Governance across Jurisdictions

By Jennifer Lim Wei Zhen


Introduction

The rapid advancement of Artificial Intelligence (AI) has brought about a host of intelligent products, ranging from automated systems that perform tasks autonomously to decision-making tools, predictive models, and generative systems. These applications have revolutionised the world with sophisticated capabilities once confined to human intelligence. However, they are not without risks and implications that necessitate regulatory intervention.


Navigating AI governance necessitates delving into the intricate technologies behind AI, including algorithms, machine learning models trained on datasets, and natural language processing (NLP) techniques.


  • Algorithms: Accountability for Outcomes - The bedrock of AI decision-making, algorithms interpret and process data to generate outcomes. This raises questions about the attribution of liability and responsibility, particularly when decisions are wrong, biased, or discriminatory, disadvantaging certain groups on the basis of race or other characteristics.


  • Machine Learning Models: Ethical Boundaries - At the core of AI, machine learning enables AI systems to adapt and learn without further explicit programming. In fields like healthcare, learning models identify patterns in medical images for disease diagnosis, continuously evolving through iterative adaptation to new data. Regulation is required to align these evolving models with ethical standards. Imagine an autonomous vehicle having to make split-second decisions in a sudden collision, without direct human control. In such situations, the ethical parameters that govern AI's learning models must reflect societal values, prioritising the safety and wellbeing of all stakeholders over considerations such as the vehicle's self-preservation or the minimisation of liability.

  • Training Datasets: Privacy and Biases - AI relies on extensive datasets for training to recognise patterns, make predictions, or perform specific tasks. This data may include text, images, audio, or any other relevant inputs, some of which could be linked to personal data. Unauthorised use of such personal data raises privacy concerns, necessitating safeguards and regulations to prevent misuse and unauthorised access. Additionally, biases inherent in historical data or introduced during data collection present a significant regulatory challenge. If the data is limited or skewed in ways that reflect societal prejudices, the AI system may inadvertently perpetuate these biases in its outcomes. Enhancing AI performance necessitates a larger and more diverse training dataset, so striking a balance between expansive datasets and the avoidance of discriminatory elements is a key aspect of dataset-related AI regulation (a minimal illustrative bias check appears after this list).


  • Natural Language Processing (NLP): Content Manipulation, Authenticity and Deepfakes - NLP empowers AI systems to process extensive linguistic data, grasp the subtleties of human language, and respond like humans. AI-powered text generation, with its ability to mimic human writing, poses challenges in discerning between genuine and manipulated content, notably in the form of deepfakes that appear strikingly genuine. The implications are significant, as the technology can be exploited for various malicious purposes, including spreading misinformation, impersonation and manipulation of public opinion. To address these risks, governance frameworks are needed to regulate the creation, authentication, and distribution of digital content.
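To make the dataset and algorithmic bias concerns above concrete, the following is a minimal illustrative sketch of the kind of outcome audit a developer might run on an AI system's decisions. It is not drawn from any of the regulations discussed in this paper; the field names, the toy data, and the 80% screening threshold (a rough heuristic sometimes called the "four-fifths rule") are assumptions for illustration only.

```python
# Minimal illustrative sketch: auditing automated decisions for group disparity.
# The column names ("group", "approved") and the 80% threshold are hypothetical
# examples, not requirements drawn from any regulation discussed in this paper.

from collections import defaultdict


def selection_rates(records):
    """Compute the rate of favourable outcomes per group."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        favourable[r["group"]] += 1 if r["approved"] else 0
    return {g: favourable[g] / totals[g] for g in totals}


def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest selection rate (1.0 means parity)."""
    return min(rates.values()) / max(rates.values())


if __name__ == "__main__":
    # Hypothetical model outputs: each record is one automated decision.
    decisions = [
        {"group": "A", "approved": True},
        {"group": "A", "approved": True},
        {"group": "A", "approved": False},
        {"group": "B", "approved": True},
        {"group": "B", "approved": False},
        {"group": "B", "approved": False},
    ]
    rates = selection_rates(decisions)
    ratio = disparate_impact_ratio(rates)
    print(rates, ratio)
    if ratio < 0.8:  # rough screening heuristic, not a legal standard
        print("Potential disparate impact - review the model and training data.")
```

Transparency obligations of the kind discussed below would, in practice, point towards documenting such checks and their results alongside the deployed model, though the specific form of documentation varies by jurisdiction.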


Establishing a regulatory framework is imperative to unlock the full potential of AI while safeguarding rights and societal well-being. Existing regulations, while offering some safeguards, are nuanced and entwined with broader policies, particularly those addressing privacy concerns. Notably, there are discernible regulatory and supervisory gaps, especially in domains where existing laws fall short of handling the intricacies of AI technological capabilities. Governments worldwide recognise these gaps, as evidenced by initiatives like the EU’s AI Act and AI Liability Directive, as well as China’s regulations on Recommendation Algorithms, Deep Synthesis and Generative AI.  This paper explores the dynamic tapestry of policies and diverse approaches embraced by Australia, China, the European Union, and the United States in navigating the intricate landscape of AI and responding to these issues.


Regulating Biases and Accountability in AI-generated Outcomes

In the realm of AI-generated outcomes, regulatory challenges arise where “biased” outcomes discriminate against and disadvantage certain groups based on race or other characteristics. This can be especially problematic in areas such as hiring, lending, and criminal justice, where AI-based systems are increasingly deployed.


Biases in AI may arise from biased algorithms, skewed machine learning models, or inadequately representative training datasets. The issue of accountability for AI-generated outcomes is complex due to AI’s lack of consciousness, prompting questions about who should be responsible for its biased or unethical outputs. This dynamic disrupts conventional frameworks for attributing liability, requiring innovative approaches to ensure fairness and ethical use of AI technology. Balancing innovation and accountability calls for regulatory adaptation.


Regulators are generally aligned in their efforts to address biases by mandating transparency and requiring AI developers to provide insights into AI decision-making processes, with the aim of preventing inadvertent or intentional biases. Legislative initiatives such as the EU's AI Act, the US Algorithmic Accountability Act, and China’s Regulation on Recommendation Algorithms seek to enforce transparency in AI decision-making processes.


The European Union (EU) is one of the first jurisdictions to come up with a comprehensive solution. Central to the EU’s approach are the proposed AI Act and AI Liability Directive, which are inherently intertwined and could trigger a “Brussels effect” in AI regulation, akin to the GDPR’s global adoption as the standard for data regulation. Acting as a gatekeeper, the AI Act adopts a risk-based approach, categorising AI systems into four risk levels that determine AI providers’ obligations and whether they can release their AI system into the market at all: Unacceptable Risk, High Risk, Limited Risk, and Minimal Risk. Unacceptable-risk AI, such as social scoring tools or real-time remote biometric identification in publicly accessible spaces, is prohibited, while high-risk AI faces a rigorous conformity assessment, registration process, and pre-market approval, along with ongoing obligations for mandatory human oversight and continuous scrutiny even after being permitted for market release. The draft AI Act, first unveiled in April 2021, also introduces other key provisions including hefty fines for non-compliance, exemptions for research and open-source AI, and regulatory sandboxes for real-life AI testing.


Released in September 2022, the proposed EU AI Liability Directive adapts civil liability rules for damage caused by AI systems. Working in tandem with the revised Directive on Liability for Defective Products, it addresses fault-based liability, emphasising transparency and accountability for post-market product alterations. Evidence disclosure rules empower victims to ask courts to order disclosure of information and evidence relating to high-risk AI systems, while a "presumption of causality" helps victims establish a causal link between the damage suffered and a provider's non-compliance, allowing claimants to show that providers of high-risk AI systems should be liable for failing to comply with their obligations under the AI Act.


In the United States (US), the proposed Algorithmic Accountability Act, unveiled in 2022, calls for increased transparency and accountability in the use of AI, particularly in critical decision-making processes that directly impact individuals' lives. While this legislation awaits formal enactment, the executive branch can apply existing law to AI-related matters, and the judiciary plays a pivotal role in determining how existing law applies, as illustrated in significant Supreme Court cases such as Twitter v. Taamneh and Gonzalez v. Google. Notably, in the latter case, the Supreme Court declined to narrow Section 230 of the Communications Decency Act, leaving intact the legal protection that shields tech companies from liability for ISIS-related videos uploaded by users, even where their recommendation algorithms may have surfaced those videos on YouTube based on users' browsing history.


China too has taken a significant step forward by rolling out some of the world’s most detailed regulations to address biases in AI-generated outcomes. The AI Governance Principles, introduced in 2019, offer a framework to harmonise sustainable AI development with safety, security, and reliability. China also introduced the Regulation on Recommendation Algorithms (March 2022), which grants individuals the right to disable algorithmic recommendations and to receive explanations for algorithms that substantially impact their interests. The Deep Synthesis Regulation (January 2023) governs algorithms that generate or alter online content in various formats; it mandates filing with the algorithm registry, labelling of synthetically generated content, information controls, and measures to prevent misuse, including registration of users under their real names and consent for edits to personal information. China was also the first country to issue Rules on Generative AI (August 2023), requiring registration of generative AI products and a security assessment before public release.


Addressing Authenticity Concerns in AI-generated Works and Deepfakes 

AI’s advanced capabilities to mimic human-generated content blur the lines between human and machine authorship, making it a complex endeavour to verify authorship and the true origin of a piece of content. This raises concerns about accurately attributing authorship and ownership of works, with implications for intellectual property rights and liability for unauthorised use of copyrighted materials. The potential for misinformation and misattribution intensifies these challenges, prompting questions about preserving the integrity of creative and scholarly works in the era of generative AI.


Moreover, AI’s ability to mimic human identity has led to the rise of deepfakes – images, video clips and audio recordings generated by AI that are fake but appear to be genuine. This poses challenges in discerning between genuine and manipulated content, with convincing imitation of real individuals raising concerns about potential exploitation for malicious purposes, such as spreading misinformation, damaging reputations and even influencing political events. These risks extend to public figures, institutions, and society as a whole, underscoring a pressing need for regulatory frameworks to govern the creation, authentication, and distribution of digital content.


Pioneering regulatory measures have been introduced by the EU and China, the first jurisdictions to propose comprehensive rules for generative AI. Transparency is a key requirement, with both mandating disclosure of AI-generated content. Companies in the EU are also required to share summaries of copyrighted data used for training and to ensure that AI models are designed to prevent illegal content generation. In August 2023, China's Rules on Generative AI came into effect, requiring companies to register generative AI products targeting Chinese residents. Companies are also mandated to label synthetically generated content and bear responsibility for its authenticity. Furthermore, specific content is prohibited, such as material that undermines state power or advocates the overthrow of the socialist system. Measures to forestall misuse include user registration with verifiable identities and users' consent for any alterations to their personal information.


Recognising the urgency of addressing concerns over deepfakes, China took proactive steps in 2019 by issuing rules that compel disclosure of the use of deepfake technology in videos and other media. These rules also stipulate that deepfakes must be clearly labelled as artificially generated content before distribution. This comprehensive approach positions China as the first country to have established such extensive deepfake laws, providing an opportunity for other nations to draw insights when enacting their own measures.


Evolving Privacy Regulations in the AI Age

AI development depends on the ingestion of large sets of data (input) that are used to train algorithms to produce models (output) that assist with smart decision-making.
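Purely as an illustrative sketch of that input-to-output pipeline (the field names, the list of identifiers, and the toy "model" below are hypothetical assumptions, not drawn from any regulation discussed in this paper), a developer might minimise personal data before training, so that only the attributes needed for the model are ingested:

```python
# Illustrative sketch of the input -> training -> model (output) pipeline,
# with a data-minimisation step before training. Field names, the identifier
# list, and the toy "model" are hypothetical and for illustration only.

RAW_RECORDS = [
    {"name": "Alice Tan", "email": "alice@example.com", "income": 4200, "defaulted": False},
    {"name": "Bob Lee",   "email": "bob@example.com",   "income": 1800, "defaulted": True},
    {"name": "Chen Wei",  "email": "chen@example.com",  "income": 3100, "defaulted": False},
]

DIRECT_IDENTIFIERS = {"name", "email"}  # personal data not needed for training


def minimise(records):
    """Drop direct identifiers so only the fields needed for training remain."""
    return [{k: v for k, v in r.items() if k not in DIRECT_IDENTIFIERS} for r in records]


def train_threshold_model(records):
    """Toy 'training': pick an income threshold separating defaulters from non-defaulters."""
    defaulted = [r["income"] for r in records if r["defaulted"]]
    repaid = [r["income"] for r in records if not r["defaulted"]]
    return (max(defaulted) + min(repaid)) / 2  # midpoint between the two groups


def predict(model_threshold, income):
    """Model output that could feed a decision affecting an individual."""
    return "approve" if income >= model_threshold else "review"


if __name__ == "__main__":
    training_data = minimise(RAW_RECORDS)          # input, minimised before ingestion
    model = train_threshold_model(training_data)   # training produces the model
    print(predict(model, 2500))                    # output supporting a decision
```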


To the extent that the input involves personal data, or any output is used to make decisions that affect the rights or interests of individuals, data regulations have become instrumental in mitigating the privacy concerns inherent in AI-driven decisions. To this end, several countries are amending their data regulations with a specific focus on AI development. The following presents a snapshot of how Australia, China, the European Union, and the United States are adapting their privacy regimes to AI.


  • Australia: The Privacy Act 1988, Australia's primary legislation for personal information, is undergoing a review to align with the expansive data landscape underpinning AI and other digital ecosystems. The Privacy Act Review Report, released in February 2023, recommends substantial changes, such as regulating individual “targeting”, empowering regulators and courts with more enforcement options, and creating new avenues for individuals to seek redress.


  • China: China's data regulatory landscape has undergone significant changes in recent years with the enactment of key laws. Comprising the 2017 Cybersecurity Law (CSL), 2021 Data Security Law (DSL), and 2021 Personal Information Protection Law (PIPL), the data protection framework exerts significant influence on AI development. In particular, the DSL regulates data processing within AI applications. The PIPL, specifically designed to safeguard personal information, addresses automated decision-making in big-data practices and introduces more stringent rules for Sensitive Personal Information. These regulatory advancements signify China’s deliberate effort to enhance governance in response to the evolving data privacy concerns posed by AI. The global impact of these regulations is underscored by the DSL's extraterritorial application and the PIPL's expansive territorial scope beyond China.


  • European Union: In tandem with AI-specific regulations, the EU is actively shaping a broader data regulation framework, exemplified by the Data Act and the Data Governance Act. Together with the General Data Protection Regulation (GDPR), they collectively address privacy concerns, including those arising from AI applications. The GDPR, which took effect in 2018, directly regulates the processing of personal data (including automated processing), ensuring individual control and emphasising principles like transparency, fairness, and data minimisation, which are crucial for AI’s handling of data. Designed to facilitate data sharing, the Data Governance Act, agreed in November 2021, clarifies who can create value from data and under which conditions. Complementing it, the Data Act, agreed in June 2023, gives individuals more control over the non-personal data generated through smart objects and machines. The synergy between these regulations demonstrates the EU’s proactive approach in adapting to the evolving AI landscape, emphasising both facilitation and control of AI development.

  • United States: The US data regulatory landscape comprises a mix of federal and state laws addressing various dimensions of data privacy. In the absence of a comprehensive federal law, the California Consumer Privacy Act (CCPA) plays a pioneering role due to its stringent data protection requirements. Enacted in 2018, the CCPA was amended in 2020 to define "profiling" and establish rules governing "automated decision-making", while also mandating privacy assessments for activities like profiling. Beyond California, other states, including Virginia, Colorado, and Connecticut, have enacted similar laws that took effect in 2023, indicating a broader trend towards adapting data regulations to the challenges posed by AI. In addition to state-specific actions, industry-specific laws, such as the Health Insurance Portability and Accountability Act (HIPAA), impose strict requirements for authorization, particularly in the context of AI-enabled AdTech and marketing. At the federal level, the American Data Privacy and Protection Act (ADPPA), approved by the House Energy and Commerce Committee in July 2022, represents a significant milestone: it proposes national standards for personal information protection and addresses algorithmic accountability and bias, reflecting recognition of the unique privacy concerns posed by AI.


Conclusion

The widespread adoption of AI in various facets of society presents both promising opportunities and potential risks, necessitating regulatory frameworks to amplify the former and mitigate the latter. Notable progress has been made in crucial areas such as ethical considerations, data privacy, transparency, and liability. However, the journey towards clarifying intellectual property rights in AI-generated works remains ongoing and challenging, particularly in determining ownership and attributing copyright to works that lack a clear human creator.


Navigating AI regulation is particularly challenging from a regulatory design perspective due to AI's rapid development, diverse applications, and capacity to transcend national boundaries. There is a lack of clear-cut answers on the optimal approach. Jurisdictions exhibit diverse perspectives on whether specific AI elements should be regulated and which activities warrant regulatory oversight. The EU opts for a horizontal regulatory framework, casting a wide net with a comprehensive umbrella of laws that cover all AI applications. In contrast, China’s vertical regulations zoom in on specific AI capabilities, such as the regulation on recommendation algorithms targeting biases in social media feeds. An emerging trend in regions like Australia and the US is the adoption of business-friendly AI regulatory approaches and voluntary self-regulation, whereas the EU and China favour stringent regulations with registration and labelling requirements and hefty penalties for non-compliance.


These regulatory variations underscore the challenge of striking a delicate balance between insufficient and excessive regulations.  Despite these differences, there are common trends: core principles of transparency and explainability; risk-based approaches; sector-specific rules alongside sector-agnostic regulation; integration of AI-related rulemaking with broader digital policy priorities like data privacy; and the use of regulatory sandboxes for collaborative rule development. Notably, a common overarching theme persists: a shared objective of mitigating potential AI harms while leveraging its benefits for societal and economic well-being.


Given the global nature of AI deployment, harmonising legal frameworks across jurisdictions becomes imperative. International cooperation and standardisation play pivotal roles in preventing regulatory arbitrage, whereby companies strategically exploit regulatory variations to the potential detriment of societal well-being. Harmonisation therefore becomes a cornerstone for ensuring ethical and responsible AI development for the collective well-being of societies worldwide.



Disclaimer:

This report is based on information available as of September 2023, and the legal landscape may have undergone changes since then. For advice or the most up-to-date information, it is recommended that specific consultation and legal advice be sought.


 

About the Author

Ranked 2nd among Asia Law Portal’s 2022 women legal innovators in Asia and 10th among its 2023 APAC legal innovators, Jennifer is a tech and fintech lawyer. She has advised a diverse clientele across Asia and Europe, ranging from financial institutions to fintech companies and global tech giants, on a range of regulatory matters and contractual arrangements pertaining to their business activities. These include fintech product innovation, data commercialisation, privacy and data protection, online safety, technology risk management, tech procurement, outsourcing, collaborative arrangements, business reorganisation and M&A deals. Bilingual in English and Business Chinese, Jennifer regularly reviews contracts written in Chinese.


A former litigator, Jennifer has experience with technology-related disputes, including patent infringements and revocations, and an arbitration concerning alleged misrepresentations by a blockchain insurance platform. She also represented a cybersecurity firm in Singapore’s first commercial dispute on whether Covid-19 government measures had frustrated a contract.


Beyond legal practice, Jennifer is actively engaged in the tech and innovation ecosystem. She co-founded LawTech.Asia and eTPL.Asia, which won a Facebook grant for research on “Operationalising Information Fiduciaries for AI Governance”. She is also part of the founding Steering Committee for the Asia-Pacific Legal Innovation and Technology Association.


An alumna of the National University of Singapore (NUS), Jennifer contributes to academia by teaching at NUS Law and the Legal Innovation & Technology Institute on an adjunct basis. She is also a visiting researcher at the NUS Centre for Technology, Robotics, Artificial Intelligence and the Law; and is regularly invited to speak on issues relating to justice, legal innovation, technology, fintech, AI, Web3, DAOs, and NFTs. 





