AI regulations – fear and actions

The Fear

There are several key concerns regarding the widespread availability and use of AI tools in both the EU and the USA, reflecting the profound impact AI has on various aspects of society:

  1. Privacy and Data Security: AI systems often rely on vast amounts of data for their functioning. The collection, use, and storage of this data can raise serious privacy issues. Furthermore, AI can be used to decode encryption or other security measures, leading to potential breaches in data security.
  2. Bias and Discrimination: AI systems can perpetuate or even exacerbate biases present in the data they are trained on. This can lead to unfair outcomes in various fields such as job recruitment, credit scoring, law enforcement, and more, leading to significant societal discrimination.
  3. Transparency and Explainability: AI decision-making processes can be incredibly complex and opaque, often referred to as a “black box.” This lack of transparency can make it difficult to understand and account for AI decisions, particularly when it results in adverse outcomes.
  4. Job Displacement: Automation driven by AI has the potential to displace jobs across various sectors, leading to significant economic and social disruption.
  5. Accountability and Liability: It can be challenging to determine responsibility when an AI system causes harm or makes a mistake, particularly with systems that “learn” and evolve over time.
  6. Security and Warfare: AI technologies have potential applications in military and security contexts, such as autonomous weapons and surveillance systems. The ethical implications and potential for misuse are significant concerns.
  7. Overreliance on AI: As AI systems become more integrated into critical infrastructures (e.g., healthcare, finance), any failures or inaccuracies can have serious consequences. Overreliance on AI could also lead to a lack of human oversight and decision-making capacity.
  8. Ethical Concerns: There are fears around the development of AI that exceeds human intelligence (superintelligent AI), and the potential consequences this could have for humanity. This includes concerns about control, value misalignment, and decision-making power.

To boil it down: someone stupid will do something very harmful.

In essence, the fears revolve around the potential misuse of AI for harmful purposes, AI-induced catastrophes due to inadequate safeguards, and the equitable distribution of AI's benefits and harms.


The Futile Quest for Control?

Technological advancements, such as the creation and training of new AI models, seem unstoppable, as costs are plummeting year by year. Even regulatory efforts appear futile, given the lengthy legislative process compared to the swift pace of technology release. Eventually, training and using a large language model (LLM) will be within a startup’s reach, posing further challenges to control efforts.

Governments recognize the dangers of uncontrolled AI development and currently seem to gravitate towards three main approaches: forbidding AI/AGI use in governmental institutions until more knowledge is acquired, creating a framework around the new technology so that existing laws can apply, or consulting with experts to find a solution.


The European Approach

The EU has proposed a directive on adapting non-contractual civil liability rules to AI. It lays out rules for civil liability for harm or damage resulting from the operation of an AI system in the course of a professional or business activity. The primary objective is to strike a fair balance between the interests of persons harmed by AI systems and the users of such systems, while safeguarding innovation. The directive applies when harm or damage is caused within the Union, regardless of whether the AI system was marketed or put into service in the Union.

Key terms and their definitions used in the directive:

  • AI system: A system developed with techniques involving the training of models with data, and which is able to perform tasks traditionally requiring human intelligence, such as problem-solving, learning, perception, language understanding, etc.
  • User: Any natural or legal person using an AI system under their control in the course of their professional or business activity.
  • Operator: Any natural or legal person who is the user of an AI system or has the AI system under their control.
  • Victim: Any person who has suffered harm or damage caused by the operation of an AI system.
  • High-risk AI system: An AI system that is either listed in the Annex of the AI Act or identified as such by the Commission through delegated acts.
  • Damage: Physical harm or property damage, economic loss, etc.

Proposed limitations:

  1. Protection of Confidentiality and Trade Secrets: Member States should establish measures to protect trade secrets and confidential information in the context of legal proceedings involving AI systems (Article 3).
  2. Preserving Confidentiality in Legal Proceedings: Courts should be empowered to take specific measures to preserve the confidentiality of a trade secret or confidential information when used or referred to in legal proceedings (Article 3).
  3. Procedural Remedies for Forced Disclosure: Individuals or entities compelled to disclose or preserve evidence should have appropriate procedural remedies (Article 3).
  4. Penalties for Non-Compliance with Court Orders: Defendants failing to comply with a court order to disclose or preserve evidence will face a presumption of non-compliance with a relevant duty of care (Article 3).
  5. Presumption of Causal Link in Cases of Fault: National courts should assume a causal link between the fault of the defendant and the output or non-output of the AI system, given certain conditions (Article 4).
  6. Review of the Directive: The European Commission is required to review the application of the Directive and present a report five years after the end of the transposition period. This review will evaluate no-fault liability rules and the need for insurance coverage for certain AI systems (Article 5).
  7. Amendment to Directive (EU) 2020/1828: The AI Liability Directive is added to the annex of Directive (EU) 2020/1828 (Article 6).
  8. Transposition of the Directive: Member States must adopt the laws, regulations, and administrative provisions necessary to comply with this Directive within two years of it coming into effect (Article 7).


The American Approach

After the first round of congressional hearings, five key areas for future regulations emerged: the importance of regulation, the role of Congress, distinguishing between AI types, involving AI experts, and the precautionary principle in the face of uncertainty and potential risks.

  1. Importance of Regulation: Experts and politicians agree on the importance of regulating artificial intelligence, especially artificial general intelligence (AGI). They acknowledge the potential risks and challenges associated with AGI, drawing parallels with nuclear power and aviation, two industries where regulation plays a critical role in ensuring safety.
  2. Role of Congress: Executives, experts, and investors tend to emphasize the need for Congress to intervene in setting AI regulations. They suggest that due to the potentially far-reaching consequences of AGI, such regulations must be put in place at the national level. They argue that Congress has played a similar role in regulating other industries, such as aviation, and should do the same for AI.
  3. Distinguishing Between AI Types: Regulation should distinguish between narrow AI (or weak AI), which has a specific task-oriented purpose, and AGI, which can understand, learn, and apply knowledge across a wide range of tasks. While narrow AI is well-understood and predictable, AGI poses more potential risks due to its unpredictability and therefore requires more stringent regulation.
  4. Involving AI Experts: Politicians seem to agree on the importance of involving AI experts in the regulatory process. They recognize that understanding and managing AGI is a complex task that requires specialized knowledge. Sam Altman, the CEO of OpenAI, Elon Musk, and many others have advocated for the same.
  5. Precautionary Principle: In the face of uncertainty and potential risks, a precautionary approach should be taken to AI regulation. Given the high stakes, the priority should be to mitigate risks rather than rushing to exploit the technology's benefits.

The Nuclear Comparison

While AI regulation is fundamentally different from nuclear weapons regulation, the two fields share some general similarities due to their complex, potentially high-risk nature. Here are a few such parallels:

  1. Risk Management: Both AI and nuclear weapons regulations emphasize the need for rigorous risk management procedures to mitigate potential harm. For AI, this includes quality assurance of the training data and robustness of the AI system, and for nuclear weapons, this involves strict safeguards and controls over production, storage, and use.
  2. Confidentiality: Confidentiality and protection of sensitive information are vital in both contexts. In AI, this includes protecting trade secrets and proprietary technology. In nuclear weapons regulations, confidentiality would typically extend to classified information about nuclear technology, strategic plans, etc.
  3. Responsibility and Liability: Both fields stress the responsibility and liability of the parties involved. In AI, this includes liability for damages caused by AI systems. In the case of nuclear weapons, countries are held accountable for their actions under international law, such as the Treaty on the Non-Proliferation of Nuclear Weapons.
  4. Regulatory Compliance: Both AI and nuclear technologies require strict adherence to established regulations and standards. In both cases, non-compliance can lead to serious consequences, including legal action.
  5. Review and Oversight: Regular review and oversight are key components of both AI and nuclear weapons regulations, to ensure that standards are met and risks are properly managed.

However, it is important to note that the regulations and oversight mechanisms for nuclear weapons are distinct and more robust due to the immediate, massive, and indiscriminate destruction they can cause.

Regulating AI at the International Level

While national regulation is crucial, AI’s global nature necessitates international coordination. Many concerns surrounding AI and AGI, such as data privacy, bias, and security, transcend national borders. Consequently, international organizations like the United Nations could play a crucial role in setting global standards and facilitating cooperation among nations.

The Path Forward

AGI may appear intimidating due to its enormous potential. While regulations aiming to control AGI development may not deter malicious actors, they could prevent catastrophic mistakes born of carelessness or ignorance. Regulatory frameworks that enable current laws to adapt to the new technological landscape are crucial, even though the transitional period may pose temporary challenges. Despite the fears and uncertainties, technological advancement should continue, balanced with caution and responsibility.

