AI Discussion with ChatGPT – Part 6

AI Legal and Regulatory Developments

ARTICLE INTRODUCTION

As highlighted in prior articles within this discussion series, one of the important limitations of ChatGPT Version 4.0 is time-specific: ChatGPT’s most recent general update (as of December 2023) occurred in April 2023.

Two of the most recent and comprehensive AI legislative developments have been:

  1. October 30, 2023 – President Biden’s “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”
  2. December 8, 2023 – The European Union’s AI Act, which places restrictions on some of AI’s potentially riskiest uses.

In addition, the NAIC continues to promulgate recommendations for state AI legislation. Most recently:

  1. December 4, 2023 – The NAIC adopted a model AI governance bulletin proposing insurers be required to adopt an AI governance and risk management framework. The bulletin also encourages the use of testing to identify potential “unfair discrimination in the decisions and outcomes resulting from the use of Predictive Models and AI Systems.”
  2. December 11, 2023 – The NAIC issued guidelines for insurers and state insurance regulators proposing how insurers should manage the AI risks created by third-party vendors.

In addition to general AI resources recommended in prior series articles, this writer recommends, and has benefited from, reading Faegre Drinker AI newsletters and Life Annuity Specialist articles to keep current with insurance-related AI developments.

THIS ARTICLE IDENTIFIES AND DISCUSSES

  1. AI federal legislation in the U.S.
  2. Federal regulatory agencies claiming AI authority.
  3. Important unresolved legislative and regulatory AI issues.
  4. Model AI regulations adopted by the NAIC.
  5. New AI laws and regulations proposed for the insurance industry.
  6. Most important non-insurance AI-related legal developments.
  7. AI industry reaction to proposed laws and regulations.

What AI federal legislation exists in the United States?

As of my last update in April 2023, the United States had seen a growing interest in AI legislation at the federal level, reflecting the increasing importance and impact of AI technologies. While a comprehensive federal law specifically governing AI had not yet been established, several legislative efforts and policies were in place or in progress, focusing on various aspects of AI, such as development, regulation, ethics, and national strategy. Key legislative and policy initiatives include:

  1. National AI Initiative Act: Enacted as part of the National Defense Authorization Act for Fiscal Year 2021, this act established a coordinated program across federal agencies to accelerate AI research and application for the economic and national security of the United States.
  2. American AI Initiative: An executive order signed in 2019, this initiative directed federal agencies to prioritize AI in their budgets, provided guidance for AI regulation, promoted AI research and training, and called for the development of AI governance standards.
  3. AI in Government Act: This act, introduced to improve the use of AI in the federal government, aimed to provide resources and guidance for agencies to adopt AI technologies, including the establishment of an AI Center of Excellence within the General Services Administration (GSA).
  4. Future of AI Act: Introduced to establish a federal advisory committee to examine and recommend ways to competitively advance AI in the United States, focusing on various aspects including ethics, workforce, and international cooperation.
  5. Algorithmic Accountability Act: Proposed legislation aimed at requiring companies to assess and manage the risks posed by AI and automated decision systems, especially those that are privacy invasive or have the potential for discriminatory outcomes.
  6. AI for Workforce Development Act: Focused on using AI to improve education and training programs to prepare the workforce for the changing job market due to AI and automation.
  7. AI Cybersecurity and National Security Act: Aimed at enhancing the use of AI for cybersecurity and national defense purposes.

These legislative efforts represent a growing recognition of AI’s impact on various sectors and the need for a comprehensive approach to AI development, regulation, and integration into society. They cover areas such as research funding, ethical guidelines, workforce development, and the use of AI in government operations. However, the rapidly evolving nature of AI technology means that legislative efforts are continually adapting to new developments and challenges in the field.

Which federal regulatory agencies claim authority over AI issues?

Several federal regulatory agencies in the United States claim authority over AI issues, each focusing on different aspects of AI as it pertains to their specific regulatory domain. As AI technology intersects with many areas of society and the economy, multiple agencies are involved in its regulation and oversight:

  1. Federal Trade Commission (FTC): The FTC plays a key role in regulating AI in terms of consumer protection, privacy, and preventing deceptive or unfair business practices related to AI technologies.
  2. Food and Drug Administration (FDA): The FDA regulates AI applications in medical devices and healthcare. This includes oversight of AI systems used for diagnostic purposes, patient care, and treatment recommendations.
  3. Securities and Exchange Commission (SEC): The SEC is involved in overseeing AI as it relates to financial services, including the use of AI in trading, risk assessment, and financial advisory services.
  4. Department of Transportation (DOT) and National Highway Traffic Safety Administration (NHTSA): These agencies regulate AI in the context of autonomous vehicles and transportation safety.
  5. Federal Communications Commission (FCC): The FCC oversees aspects of AI that relate to communications technology, particularly in areas like broadband networks and new communication technologies that utilize AI.
  6. Department of Labor (DOL): The DOL may address AI in the context of its impact on the workforce, employment, and workplace safety.
  7. Equal Employment Opportunity Commission (EEOC): The EEOC is concerned with how AI is used in hiring practices and whether it adheres to laws regarding employment discrimination.
  8. Department of Defense (DoD): The DoD, particularly through the Defense Advanced Research Projects Agency (DARPA), is involved in the development and regulation of AI for defense and national security purposes.
  9. Office of Science and Technology Policy (OSTP): While not a regulatory agency per se, the OSTP advises on AI policy across various sectors, including ethical, technical, and regulatory issues.

Each of these agencies approaches AI regulation from their respective jurisdictional perspectives, ensuring that AI applications meet federal standards and regulations within their specific domains. As AI technology continues to evolve and permeate different sectors, the role and involvement of these regulatory bodies are likely to expand and adapt.

What are the most important unresolved legislative and regulatory AI issues?

As of April 2023, several important legislative and regulatory issues related to artificial intelligence (AI) remain unresolved. These issues are complex and multifaceted, often involving ethical considerations, technological challenges, and societal impacts. Some of the most pressing unresolved AI issues include:

  1. Comprehensive Legal Framework for AI: Establishing a comprehensive legal framework that addresses the broad spectrum of AI applications and their implications remains a challenge. This includes defining legal standards for the development, deployment, and use of AI.
  2. Privacy and Data Protection: With AI relying heavily on data, especially personal data, there are ongoing concerns about privacy and data protection. Creating robust laws that protect individual privacy while not stifling innovation is a significant challenge.
  3. Bias and Fairness: Addressing and mitigating bias in AI algorithms is a critical issue. This includes developing standards and regulations to ensure AI systems are fair and do not perpetuate or exacerbate existing societal biases.
  4. Liability and Accountability: Determining liability in cases where AI systems cause harm or make erroneous decisions is complex. Clarifying legal responsibility and accountability for AI-driven actions, particularly in areas like autonomous vehicles and healthcare, is crucial.
  5. Ethical Standards: Establishing and enforcing ethical standards for AI, including guidelines for responsible AI development and usage, is an ongoing issue. This also involves balancing ethical considerations with commercial interests.
  6. Transparency and Explainability: Ensuring that AI systems are transparent and their decisions can be explained in understandable terms is essential, particularly for critical applications like criminal justice and healthcare.
  7. Workforce Impact and Employment Laws: AI’s impact on the workforce, including job displacement and the need for new skills, raises questions about labor laws and social safety nets.
  8. Security and Safety: Ensuring the security and safety of AI systems, especially to prevent malicious use of AI technologies like deepfakes or autonomous weapons, is a significant concern.
  9. Global Coordination and Standards: Developing international standards and coordinating regulatory approaches to AI is challenging due to differing priorities, values, and levels of technological advancement across countries.
  10. Intellectual Property Rights: Determining how intellectual property laws apply to AI-created content and inventions, and the ownership of AI-generated data, is an evolving area.
  11. Market Competition: Addressing concerns about market dominance and monopolistic behaviors as a few large companies lead AI development, potentially stifling competition and innovation.

These issues require ongoing dialogue and collaboration between governments, industry leaders, researchers, and civil society to develop effective and balanced solutions. As AI technology continues to evolve, so too will the legislative and regulatory responses to these challenges.

What model laws and/or regulations related to AI has the National Association of Insurance Commissioners (NAIC) adopted?

As of my last update in April 2023, the National Association of Insurance Commissioners (NAIC) had shown increasing interest in addressing the implications of artificial intelligence (AI) in the insurance sector. Recognizing the growing impact of AI on insurance underwriting, claims processing, customer service, and risk assessment, the NAIC had taken steps to develop model laws and guidelines to ensure that the use of AI in insurance is ethical, fair, transparent, and compliant with existing insurance regulations.

Some of the key areas where the NAIC focused its efforts related to AI include:

  1. AI and Data Ethics: The NAIC had been working on guidelines and principles focusing on the ethical use of AI and data in insurance. This includes ensuring transparency, fairness, and accountability in AI algorithms and data usage.
  2. Model Laws for Underwriting and Rating: The NAIC had been examining how traditional model laws and regulations apply to AI-driven underwriting and rating processes. The goal is to ensure that these processes do not result in unfair discrimination and are compliant with existing insurance laws.
  3. Consumer Protection and Privacy: Given the significant amount of personal data used in AI applications, the NAIC had been focused on consumer protection and data privacy issues, ensuring that insurers’ use of AI respects consumer rights and privacy laws.
  4. Regulatory Oversight and Compliance: The NAIC had been considering how regulatory oversight and examination procedures need to evolve to effectively monitor the use of AI in insurance.
  5. Cybersecurity and Risk Management: As AI systems are susceptible to cybersecurity risks, the NAIC had been looking into regulations and guidelines to ensure that insurers adequately manage these risks.
  6. Collaboration with Stakeholders: The NAIC had been engaging with insurers, technology experts, consumer advocates, and other stakeholders to develop these model laws and regulations.

These efforts reflect the NAIC’s role in guiding state insurance regulators in the United States and ensuring that the insurance industry adapts to technological changes in a manner that is safe, fair, and beneficial for consumers and the industry alike. For the most current information on specific model laws or regulations adopted by the NAIC related to AI, consulting the latest publications and updates from the NAIC would be recommended.

More broadly, what new AI-related laws and/or regulations have been proposed for the insurance industry during 2023?

In 2023, there have been various AI-related laws and regulations proposed or considered for the insurance industry:

  1. National Association of Insurance Commissioners (NAIC) Initiatives:

– The NAIC has a committee and working groups focused on the use of big data and AI in the insurance industry. They are evaluating whether existing regulatory frameworks adequately address these technologies.

– These NAIC initiatives could lead to the development of or modifications to model laws, regulations, handbooks, and regulatory guidance.

– The NAIC is analyzing AI advancements to assess if current state laws and regulatory tools are sufficiently protecting consumers. This work is centralized within the NAIC’s Innovation, Cybersecurity, and Technology (ICT) Committee and its Big Data and Artificial Intelligence Working Group.

– The ICT Committee held a Collaboration Forum on Algorithmic Bias to identify and address foundational issues and develop a common framework that can inform specific workstreams in each group.

  2. State Insurance Regulators:

– Some state regulators, such as the New York Department of Financial Services, the California Department of Insurance, and the Connecticut Insurance Department, have issued circular letters and bulletins highlighting concerns about bias and discrimination resulting from the use of AI and machine learning in insurance.

– Colorado has enacted a statute requiring its insurance commissioner to adopt rules prohibiting insurers from using algorithms or predictive models that use external consumer data in a way that unfairly discriminates. Other states have similar legislation pending.

  3. Federal Guidance:

– The National Institute of Standards and Technology (NIST) released its Artificial Intelligence Risk Management Framework 1.0 as a voluntary guide for technology companies designing, developing, deploying, or using AI systems. This framework seeks to promote trustworthy and responsible development and use of AI systems.

– The Federal Trade Commission (FTC) has suggested increasing scrutiny on businesses that use AI, issuing warnings to avoid unfair or misleading practices.

These developments reflect a growing awareness and response to the challenges posed by AI in the insurance industry, particularly concerning bias, discrimination, and consumer protection. The regulatory landscape is evolving, with both state and federal bodies taking steps to ensure that AI is used responsibly and ethically in the insurance sector.

Besides insurance, what have been the most important AI legal developments in 2023?

The most important AI legal developments in 2023 include:

  1. National AI Commission Act: Introduced by a bipartisan group of House members, this act would establish a bipartisan commission of experts to review the U.S.’s approach to AI regulation and make recommendations for a risk-based AI regulatory framework.
  2. Promoting AI R&D Leadership: The CREATE AI Act aims to establish the National Artificial Intelligence Research Resource to facilitate AI research by higher education institutions, non-profits, and other federal funding recipients.
  3. Preparing for AI Facilitated Health Crises and Cyber Attacks: Several bipartisan bills require government agencies to prepare for health crises or cyber attacks facilitated by AI, such as the Artificial Intelligence and Biosecurity Risk Assessment Act.
  4. Disclosure Requirements: Bills like the AI Labeling Act have been introduced to mandate disclosure for AI-generated products through disclaimers or other markings.
  5. Guarding Against Deepfakes: Legislation like the Protect Elections from Deceptive AI Act has been proposed to prohibit the distribution of deceptive AI-generated content in federal election ads.
  6. Impact on Workforce: The Jobs of the Future Act requires the Labor Department and the National Science Foundation to report on the impact of AI on American workers.
  7. Coordinating Federal Agency AI Use: Several bills relate to the federal government’s use of AI, such as the AI LEAD Act, which would establish the position of Chief Artificial Intelligence Officer at each federal agency.
  8. Geopolitical Competition: Congress has taken steps to promote U.S. innovation in AI and restrict the transfer of critical emerging technologies to foreign entities of concern, including China.
  9. Congressional Hearings: Congress has held hearings exploring a wide range of issues at the intersection of AI and national security, human rights, intellectual property, and democracy, indicating potential for additional targeted legislation.
  10. Executive and Administrative Actions: The Biden Administration is expected to issue a comprehensive executive order addressing AI risks. The National Telecommunications and Information Administration and NIST are actively developing policy recommendations and frameworks for AI accountability and risk management.

These developments reflect a growing focus on the regulation, responsible development, and deployment of AI technologies in various sectors, including national security, public health, workforce, and consumer protection.

Independent Life Note: As of December 2023, ChatGPT’s last update occurred in April 2023. On October 30, 2023, President Biden issued the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” – https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/. Zvi Mowshowitz’s Substack Newsletter “Don’t Worry About the Vase” features two excellent related articles published November 1, 2023:

  1. “On the Executive Order” – https://thezvi.substack.com/p/on-the-executive-order
  2. “Reactions to the Executive Order” – https://thezvi.substack.com/p/reactions-to-the-executive-order

How has the AI industry reacted to these laws and regulations including the new proposals?

The AI industry’s reaction to new laws, regulations, and proposals typically varies depending on the nature and impact of the specific legislative or regulatory changes. However, several common themes can be observed in the industry’s response:

  1. Cautious Support: Many in the AI industry recognize the need for regulation to ensure the responsible and ethical use of AI. There is often support for regulations that promote transparency, accountability, and fairness, and protect consumer rights.
  2. Concerns about Innovation Stifling: At the same time, there is concern that overly restrictive or hastily drafted regulations could stifle innovation. The AI industry often advocates for a balanced approach that does not hinder technological advancement or the competitive position of AI firms.
  3. Collaboration and Engagement: AI companies and organizations generally seek to collaborate with regulators and policymakers. This engagement is crucial for developing informed and effective regulations that address real-world challenges and nuances in AI technology.
  4. Global Competitiveness: There is an awareness within the industry that regulations in one region (like the U.S. or EU) can have global implications. Companies are mindful of how these regulations might affect their global operations and competitiveness.
  5. Diversity in Reactions Based on Size and Sector: Larger tech companies with more resources may be more equipped to adapt to new regulations than smaller startups. Additionally, reactions can vary based on the sector of AI application – for example, healthcare AI might have different considerations compared to AI in finance or autonomous vehicles.
  6. Emphasis on Ethical AI Development: Many in the AI industry are focusing on ethical AI development, recognizing that building public trust is essential for the long-term success of AI technologies.
  7. Request for Clarity and Consistency: The industry often requests clear, consistent guidelines and standards to avoid ambiguity, which can make compliance challenging and increase operational uncertainties.
  8. Preparation for Compliance: Companies are also preparing for compliance by investing in legal expertise, ethics boards, and enhanced data governance practices.
  9. Focus on Privacy and Data Protection: Given the emphasis on data in AI regulation, there’s a significant focus on enhancing data privacy and protection measures within AI systems.
  10. Adaptive and Proactive Approaches: Many in the industry are proactively adapting their practices and technologies to anticipate and align with regulatory changes, rather than waiting for regulations to be imposed.

Overall, the AI industry’s reaction is a balancing act between welcoming necessary oversight and ensuring that such regulations do not hinder innovation, growth, and the practical application of AI technologies.

2023 AI Discussion Series with ChatGPT

  1. Part 1 – AI Vocabulary and Definitions 
  2. Part 2 – History of AI
  3. Part 3 – The Current AI Market 
  4. Part 4 – OpenAI and ChatGPT 
  5. Part 5 – AI Tools
  6. Part 6 – AI Legal and Regulatory Developments
  7. Part 7 – AI’s Future and AI Issues

Note: neither Independent Life nor this writer is responsible for any false or inaccurate information generated by ChatGPT responses within this article.
