
Regulating Artificial Intelligence: A Novelty No More?

February 2024

By Calvin Wong Wai Hou, Chai Yau Hei, Ch’ng Zhen Hon and Jessie Yin

Artificial Intelligence (“A.I.”) is rapidly evolving and becoming ubiquitous in daily life. It has, in recent times, emerged as a transformative force across industries, to the extent of reshaping global economies. Malaysia has also embraced A.I. technologies in various facets of its economy in an effort to foster innovation.

However, the rapid increase in the use of revolutionary A.I. systems has resulted in increasing violations of fundamental human rights, such as the rights to equality and non-discrimination. The call for formal and robust regulation of A.I. has long been made on the international plane. A significant step was taken in this regard in 2021, when the European Commission proposed draft legislation called the Artificial Intelligence Act (“A.I. Act”), which purports to set out rules governing the development, deployment and use of A.I. within the European Union (“EU”). The A.I. Act aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high-risk A.I. systems.

Earlier, on 14 June 2023, the EU Parliament adopted the A.I. Act with amendments, paving the way for the three key EU institutions (namely, the EU Council, Commission and Parliament) to commence trilogue negotiations. On 9 December 2023, the EU Parliament and Council negotiators reached a provisional agreement on the A.I. Act, and its final text is expected to be adopted officially by the EU sometime in 2024.1

It is thus an opportune moment for all providers and users of A.I. in this region to examine the regulatory framework proposed in this novel piece of legislation.

Apart from providing an overview of the regulatory framework under the A.I. Act, the authors of this article also explore the major concerns brought about by the use of A.I., and how the A.I. Act and related legislation seek to address those concerns.

A. Poring over the proposed A.I. Act

The provisions of the A.I. Act as adopted and amended by the European Parliament as at 14 June 2023 may be accessed here: Compromise Text EU A.I. Act. The A.I. Act is believed to be the first of its kind and is intended to provide a “future-proof” approach, allowing its rules to adapt to prospective and rapid technological change.

‘A.I.’ is defined under the proposed A.I. Act as a “machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments”.2 This definition was amended from the text initially proposed by the European Commission in 2021 to be more restrictive, following much criticism of the feasibility and scope of the initial definition.

The proposed rules within the Act establish obligations to be adhered to by providers and importers/distributors of A.I. systems within the EU, based on the systems’ potential risks and level of impact. The A.I. Act classifies A.I. systems into the following risk categories:

  1. Unacceptable risk – A.I. systems posing an unacceptable level of risk to human safety would be completely banned from deployment in the EU market. Intrusive and discriminatory uses of A.I. which are banned include systems comprising the following features/functions:
    1. cognitive behavioural manipulation of people or specific vulnerable groups; for example, voice-activated toys that encourage dangerous behaviour in children;3 
    2. social scoring (classifying people based on behaviour, socio-economic status or personal characteristics);4 and 
    3. real-time and remote biometric identification systems, such as facial recognition.5
  2. High-risk – The classification of high-risk applications will include A.I. systems that pose a significant risk of harm to people’s health, safety or fundamental rights.6 These systems are subject to stringent requirements, including requirements to:
    1. register high-risk A.I. systems on the EU database;
    2. establish and maintain appropriate A.I. risk and quality management systems, technical documentation and record-keeping;
    3. implement effective data governance; and 
    4. maintain transparency and comply with mandatory disclosure requirements to users of A.I. systems.

Further, all high-risk A.I. systems are required to be assessed to ensure conformity with the statutory requirements before being put on the market and throughout their lifecycle.7 Citizens of the EU will have a right to lodge complaints about A.I. systems and to receive explanations about decisions based on high-risk A.I. systems that impact their rights.8

  3. Minimal/Limited risk – These are systems which pose an acceptable level of risk and are thus permitted to interact with humans directly. These systems are permitted to be deployed into the European market but are required to adhere to certain transparency obligations, including drawing up technical documentation, complying with copyright law and disseminating detailed summaries about the content used for training data.9

Non-compliance with the rules in the A.I. Act can lead to fines of up to €40 million or, if the offender is a company, up to 7% of its global annual turnover, whichever is higher.10
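
By way of illustration only, the “whichever is higher” mechanism means that the applicable ceiling is the greater of the fixed sum and the turnover-based figure. The short Python sketch below makes the arithmetic concrete; the turnover figure used is purely hypothetical:

    # Illustrative only: maximum fine under the "whichever is higher" rule
    # in Article 71 of the proposed A.I. Act, as described above.
    def maximum_fine_eur(global_turnover_eur: float, is_company: bool) -> float:
        fixed_ceiling = 40_000_000  # EUR 40 million
        if not is_company:
            return fixed_ceiling
        turnover_ceiling = 0.07 * global_turnover_eur  # 7% of global turnover
        return max(fixed_ceiling, turnover_ceiling)

    # A hypothetical company with EUR 1 billion in global turnover:
    # 7% of turnover (EUR 70 million) exceeds EUR 40 million, so it applies.
    print(maximum_fine_eur(1_000_000_000, is_company=True))  # 70000000.0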

Having set out an overview of the proposed regulatory framework under the A.I. Act, the authors will now examine the major risks and impacts of the use of A.I. technologies, and the regulatory remedies available, if any.

B. Violation of basic human rights: a baseline for A.I. governance?

The rapid proliferation of A.I. technology has raised significant concerns about human rights violations. One of the most pressing concerns is the perpetuation and amplification of discrimination and bias. A.I. algorithms are trained on historical data, which often contain biases reflecting societal prejudices. Consequently, A.I. systems can unintentionally discriminate against certain groups, particularly those with underrepresented voices and backgrounds. Such bias results in violations of the right against discrimination, a fundamental human right. One example is the reliance by courts in the United States on the A.I. sentencing system COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), which was reported to mistakenly label black offenders as likely to reoffend at a markedly higher rate than white offenders (45 per cent as against 24 per cent).

Another controversial technology associated with A.I. is facial recognition. A.I. facial recognition technology, which has been employed by governments and corporations for various purposes, can turn into mass surveillance and the tracking of individuals without their informed consent, eroding any meaningful concept of privacy. A.I.’s capacity to collect and analyse massive amounts of personal data raises concerns about the violation of the right to privacy. Such abuse finds its clearest expression in the occupied Palestinian territories and Xinjiang.

Another concern with the use of A.I. is job replacement. Many jobs that we would today consider protected from automation may eventually be replaced by A.I. For example, A.I.-based virtual assistant software such as Siri and Alexa has steadily replaced personal assistants, foreign-language translators and other services that were previously reliant on human interaction.

While there is broad agreement that A.I. does need regulating, what this regulation will look like, and whether it will help to protect our rights or serve to undermine them, is a long-standing debate that is becoming increasingly heated. In November 2023, leading A.I. nations including the US and China signed a declaration at Bletchley Park, establishing a shared understanding of the opportunities and risks posed by A.I.

Discriminatory algorithms, privacy invasion and job displacement are all issues that must be addressed to protect fundamental human rights. To address these challenges, as shown in Part A of this article, the A.I. Act establishes a clear ethical and legal framework that governs the development and deployment of A.I., with emphasis on transparency, accountability and non-discrimination. In a world increasingly shaped by A.I., safeguarding human rights must remain a top priority. Only by acknowledging the potential violations of these rights and taking proactive steps to mitigate them can we fully harness the benefits of A.I. while respecting the dignity and freedoms of all individuals.

C. Personal data protection issues with A.I.

C1. Personal Data Protection Act 2010 (“PDPA 2010”)

In Malaysia, the processing of personal data is regulated by the PDPA 2010. The PDPA 2010 defines ‘personal data’ as any information in respect of commercial transactions which:

  1. is being processed wholly or partly by means of equipment operating automatically in response to instructions given for that purpose;
  2. is recorded with the intention that it should wholly or partly be processed by means of such equipment; or
  3. is recorded as part of a relevant filing system or with the intention that it should form part of a relevant filing system,

that relates directly or indirectly to a data subject who is identified or identifiable from that information, or from that and other information in the possession of a data user. Examples of personal data include names, birthdates, addresses and bank account information, as well as sensitive personal data such as mental or physical health information, biometric data, and political or religious beliefs.

Although the PDPA 2010 accounts for the processing of personal data by individuals and corporate entities, the PDPA principles do not cover automated data collection, storage and processing by an A.I., which can operate without human instruction or direction. A generative A.I. may breach the principles of the PDPA 2010 in the following ways:

  1. the collection and disclosure of personal data to third parties without notice to, or the consent of, the data subject – e.g. deepfakes and the usage of names and personal details (Sections 6, 7 and 8 of the PDPA 2010);
  2. storage of personal data without implementing appropriate organisational and technological safety measures (Section 9 of the PDPA 2010);
  3. the unauthorised storage and retention of personal data for A.I. ‘training’ and memorisation without providing a right to access, update or delete the data (Sections 10 and 11 of the PDPA 2010); and
  4. the transfer of personal data to servers and users outside of Malaysia without the consent of the data subject (Section 129 of the PDPA 2010).

Other A.I. programs may also process personal data without the data subject’s knowledge or awareness, such as A.I. programs which filter individuals’ biometric information for surveillance purposes and A.I. filters used to analyse candidates for employment and recruitment purposes.

C2. A.T. et al. v. OpenAI LP et al. (US Case No. 3:23-cv-04557)

An example of the invasive potential of generative A.I. can be found in the ongoing United States class action lawsuit of A.T. et al. v. OpenAI LP et al. (US Case No. 3:23-cv-04557), currently before the US District Court for the Northern District of California. In this case, the claimants allege that OpenAI LP, the organisation responsible for creating multiple A.I. products including ChatGPT, Dall-E, Vall-E and other generative A.I. software, employed data-scraping programs to illegally collect personal data from millions of users. The types of personal data alleged to have been illegally collected, stored, transferred and processed by the defendants through their A.I. products include account information, names, contact details, payment information and transaction records for paid users, identifying data from browsers such as IP addresses and geolocation, social media information, chat log data, keystrokes, and typed searches.

The claimants have argued that, by creating A.I. products capable of illegally collecting personal data, the defendants have contravened US data privacy laws. Furthermore, the claimants have pleaded for the defendants to take immediate action to implement proper safeguards and regulations for their A.I. products, including greater transparency, accountability and control measures. It remains to be seen whether this lawsuit will succeed in the US courts, and whether the US Court would compel OpenAI or similar A.I. developers to implement appropriate safeguards or regulatory compliance in their A.I. products.

C3. Data Protection against A.I.

Although Malaysian legislation and case law have not addressed the potential threat of data protection and privacy rights violations posed by A.I., several foreign jurisdictions have begun to propose and implement legislation to safeguard personal data from usage by A.I. A notable model legal framework for data protection against A.I. can be found in the provisions of the proposed A.I. Act, which include increased transparency requirements on the usage of content generated by A.I.;11 requirements for users to be able to make informed decisions and provide consent before their data is collected and processed;12 and mandatory registration of biometric identification systems utilising A.I.13 However, further amendments to the Act may be necessary to address the specific steps that A.I. products should take to comply with the EU’s existing General Data Protection Regulation.

Other existing and forthcoming international legislation and regulations on personal data processing by A.I. include:

  1. The UK Data Protection and Digital Information (No. 2) Bill, which is structured around the protection of personal data from A.I. Section 12 of the proposed Bill, in particular, imposes restrictions on automated processing and decision-making, requiring consent;
  2. The US Blueprint for an AI Bill of Rights and the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which comprise a principle of providing citizens with built-in protection from data collection and agency over how their data is used; and
  3. Article 67(2) of the Regulations for the Promotion of the Development of the Artificial Intelligence Industry in Shanghai Municipality prohibits infringement on personal privacy or personal information rights by A.I.

D. Attributing liability for damages caused by A.I.

There is currently no law in Malaysia which specifically addresses liability for damage caused by A.I.-driven products. Consumers will have to seek recourse under traditional liability doctrines and existing consumer protection legislation, such as the Consumer Protection Act 1999 (“CPA”) and the Sale of Goods Act 1957 (“SGA”), if they suffer any damage arising from the use of defective A.I. products.

Most commonly, a consumer may bring an action against the seller of a defective A.I. product by relying on warranty or guarantee clauses in the sale agreement entered into between the parties. Nevertheless, where there is no privity of contract between the party who suffered harm from the use of the defective A.I. product and the manufacturer of that product, the injured party may rely on the following liability regimes:

  1. Product liability laws such as the CPA and the SGA, under which liability for damage caused by a defective A.I. product is attributed to the parties involved in the manufacturing and distribution of the A.I. product; and
  2. The tort of negligence, which imposes liability on the designer and/or manufacturer of an A.I. product for damage caused by a lack of reasonable care and diligence in the design and/or manufacture of the product, whereby a victim who suffers damage caused by the negligent design and/or manufacture of the A.I. product would be entitled to compensation.

There are nevertheless challenges in applying the current liability regimes, arising from the unpredictability and self-driven characteristics of A.I. products. These include:

  1. Complex manufacturing process – A.I. products often comprise various components in which hardware and software are combined. These components are often made by different manufacturers, so users must expend great effort to track down the point of supply in order to ascertain the exact manufacturer liable for the defect.
  2. Difficulty in proving causation due to a lack of foreseeability of damage – A.I. systems often have the ability to self-learn and make incalculable decisions on their own, making it difficult for victims of damage caused by A.I. products to prove that the damage suffered was in fact caused by the negligence of the designer or manufacturer of the product.
  3. Limited scope of products under the CPA and SGA – A.I. products which are not tangible would not fall within the definition of goods covered by the CPA or the SGA. This makes it challenging for consumers to sue for damages, as their product liability actions may be confined to hardware malfunctions and not software malfunctions.
  4. Uncertainty as to what amounts to defectiveness – A.I. products which cause damage may not necessarily have malfunctioned or be defective. Rather, it could be a situation where the A.I. product performed as intended or as calculated, but was not sufficiently developed or refined to precisely anticipate the actions of other actors in its surroundings.

To address the aforesaid gaps in the current liability regimes, the following are some of the recommendations which lawmakers are taking into account:

  1. Clear definition and classification of A.I. systems subject to the product liability regime – like the proposed A.I. Act, lawmakers in various jurisdictions are formulating a clear definition of A.I. and adopting a risk-based method of classifying A.I. systems, so as to identify the requirements which must be met by producers of high-risk A.I.-driven products before those products are marketed to consumers;
  2. Widening the scope of products under the product liability regime – since the meaning of goods under the CPA and SGA is restricted to tangible goods only, there is a need to reformulate the meaning of goods to cover intangible products such as software; and
  3. Reducing the burden of proving damage – to assist consumers of A.I. products who suffer damage in establishing causation, the European Commission has recently proposed a Directive on adapting non-contractual civil liability rules to artificial intelligence (the “A.I. Liability Directive”), which provides for a rebuttable presumption of a causal link between a failed duty of care and the harm caused by the A.I. product, thereby easing the onerous burden of proof on victims who suffer damage.

E. Intellectual property issues with A.I. generated works

Since the release of ChatGPT by OpenAI in late 2022, there has been a proliferation of generative or deep-learning A.I. trained on very large, unlabelled datasets to perform a wide range of tasks, despite not having been trained explicitly to do those tasks. Generative or deep-learning A.I. can also be refined or fine-tuned, with the help of text prompts from users, to achieve a specific goal. ChatGPT, for example, has been fine-tuned to allow users to ask it a question, or make a request, and to generate “human-like” text in response.
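
For illustration, the prompt-driven interaction described above is, in practice, a simple request-and-response exchange with the model provider’s API. The following is a minimal sketch using OpenAI’s Python client (v1 or later); the model name and prompt are illustrative assumptions, not a statement of how ChatGPT itself is built:

    # Minimal sketch of prompt-driven text generation; assumes the openai
    # Python package (v1+) is installed and OPENAI_API_KEY is set.
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[
            {"role": "user", "content": "Summarise the risk categories in the EU A.I. Act."},
        ],
    )
    print(response.choices[0].message.content)  # the "human-like" text output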

The use of generative or deep-learning A.I. gives rise to compelling issues surrounding the protection of intellectual property rights. One major existing issue relates to copyright.

Traditionally, copyright law has been based on human authorship, with legal systems attributing rights to individuals or entities that create original works. A.I.-generated works challenge this notion, as they can be produced without direct human input. 

Recently, the United States Copyright Office ruled that an award-winning artwork, Théâtre d’Opéra Spatial, which won first place at the 2022 Colorado State Fair annual art competition in the United States, was not eligible for copyright protection because it was made using image-generating software called Midjourney, and not by a human.


[Image: Théâtre d’Opéra Spatial by Jason M. Allen]

Dissatisfied with the decision, the artist, Jason M. Allen, sent a written explanation to the Copyright Office claiming that the underlying A.I.-generated work merely constituted raw material which he had transformed through his artistic contributions. He further specified that creating the painting had required at least 624 text prompts and input revisions. The Copyright Office rejected Allen’s explanation, ruling that “if all of a work’s ‘traditional elements of authorship’ were produced by a machine, the work lacks human authorship” and is thus not eligible for copyright.

The existing legal framework under Malaysian copyright law similarly suggests that A.I.-generated works may not qualify for copyright protection in Malaysia. A ‘qualified person’ is defined under the Copyright Act 1987 (“Copyright Act”) as:

  1. in relation to an individual, a person who is a citizen of, or a permanent resident in Malaysia; and
  2. in relation to a body corporate, a body corporate established in Malaysia and constituted or vested with legal personality under the laws of Malaysia.

Section 10 of the Copyright Act provides that copyright subsists in every work eligible for copyright of which the author (or, in the case of a work of joint authorship, any of the authors) is, at the time when the work is made, a qualified person.

The question therefore is upon whom should authorship of A.I.-generated works rest? Is it with the human programmer who designed the A.I. system, the individual providing input or prompts to the A.I., or can A.I. entities claim copyright due to their ability to autonomously generate content?

Several possible solutions have been implemented and/or proposed to resolve the aforesaid legal conundrum:

(a) Giving authorship to the provider or developer of the A.I.

Section 9(3) of the UK Copyright, Designs and Patents Act 1988 (CDPA) provides that: 

“In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.”14

Such an approach seems sensible, but it is not without practical challenges. In many situations it will be difficult to draw a clear distinction between works of human authorship and computer-generated works with no human author. In this regard, authorship disputes become increasingly likely, especially as A.I.-generated works often arise from collaborations between those providing the A.I. software and others providing data to train the A.I. and setting the A.I. to a particular task.

(b) Public Disclosure of Copyrighted Material

Article 28b(4)(c) of the proposed A.I. Act requires the provider of generative A.I. systems to ‘document and make publicly available a sufficiently detailed summary of the use of training data protected under copyright law’. This must be done before the model is placed on the market or put into service in the EU. 

The scope of this requirement is unclear as to what would amount to ‘sufficiently detailed’ documentation of copyright works. In the case of large language models, literally billions of works may be used to train a single model, and the practicalities of compliance may prove too burdensome for some developers. Developers will want further guidance on matters such as how granular the documentation needs to be, whether they must document copyright works used to train a third-party pre-trained model which they have refined, and the extent to which they can protect any trade secrets in their selection of training materials.

F. Conclusion

As alluded to above, there are major issues arising from the use of A.I., such as those relating to human rights, personal data protection, product liability and intellectual property, many of which do not yet have a regulatory remedy.

While there is a scarcity of regulations specifically addressing the issues caused by the use of A.I., lawmakers in various jurisdictions, including leading nations in A.I. development such as the EU member states, the UK, China and the US, have begun to propose and implement legislation to safeguard fundamental rights. It is thus recommended that developers and producers of A.I. products, as well as users of A.I. in general, follow such legislative developments closely. Further, as regulations affecting the use of A.I. take shape, companies should implement, and regularly update, clear and robust internal governance policies in order to minimise risk and liability.

This material is for general information only and is not intended to provide legal advice. If you have any queries regarding the above, please feel free to contact us at insights@chooi.com.my.


    1. Council of the European Union press release, Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world, 9 December 2023, url: https://www.consilium.europa.eu/en/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/
    2. Article 3, paragraph 1 of the proposed A.I. Act as adopted by the European Parliament as at 14 June 2023 (“proposed A.I. Act”)
    3. Recital 16 of the proposed A.I. Act
    4. Recital 17 of the proposed A.I. Act
    5. Recital 18 of the proposed A.I. Act
    6. Recital 27 of the proposed A.I. Act. A.I. systems used to influence the outcome of elections and voter behaviour, are also classified as high-risk (see Recital 40 of the proposed A.I. Act)
    7. Article 29 and Article 29a of the proposed A.I. Act
    8. Recital 84 of the proposed A.I. Act
    9. Article 52 of the proposed A.I. Act
    10. Article 71 of the proposed A.I. Act
    11. Article 52 of the proposed A.I. Act
    12. Article 51 of the proposed A.I. Act
    13. Article 51 read with Article 60(2) and Annex VIII of the proposed A.I. Act
    14. Section 9 of the UK Copyright, Designs and Patents Act 1988, url: https://www.legislation.gov.uk/ukpga/1988/48/section/9