Armilla Review - New Regulations, Diplomatic Shifts, and Surging Industry Trends

Welcome to this week's issue of the Armilla Review. From insurtech funding trends to new global regulations and lawsuits, the AI world is seeing major developments.

🌐 OECD AI Principles: Updated guidelines now address emerging challenges in AI like misinformation and privacy while promoting global cooperation.
🚨 National Security: The U.S. is initiating diplomatic talks with China in Geneva to establish control over AI applications in national security, marking a significant shift in cyber diplomacy strategy.
📜 New AI Legislation: The U.S. introduces the Secure Artificial Intelligence Act of 2024 to tighten AI safety and bolster public-private collaboration.
🌏 Global Regulation: Japan's Prime Minister Fumio Kishida launches an international framework to regulate generative AI via the Hiroshima AI Process Friends Group.
📉 UK AI Safety: Despite early pledges, tech giants have yet to fully engage in the UK's AI testing initiative, highlighting the limitations of voluntary safety measures.
🚀 Insurtech Funding: Despite a funding dip in Q1 2024, AI and distribution firms remain strong, with Toronto emerging as a key hub.
📈 Business AI Integration: A 293% surge in AI spending shows a shift towards mainstream adoption, particularly in non-tech sectors.
❌ Microsoft's Facial Recognition Ban: Microsoft tightens restrictions on U.S. law enforcement's use of facial recognition technology with its Azure OpenAI Service.
⚖️ Copyright Lawsuits: Several major newspapers are suing OpenAI and Microsoft over alleged copyright infringement in AI training practices.
May 8, 2024
β€’
5 min read

TOP STORY

OECD Updates AI Principles to Tackle Emerging Risks and Guide Global Policy

The 2024 OECD Ministerial Council Meeting (MCM) has updated the OECD AI Principles to address the rapid evolution of artificial intelligence, particularly in general-purpose and generative AI. These revisions aim to strengthen policies around privacy, intellectual property, safety, and information integrity. As the first global intergovernmental AI standard, these principles advocate for innovative, trustworthy AI that prioritizes human rights and democratic values. They emphasize responsible business conduct, transparency, and the need for safeguards to manage risks around misinformation, privacy, and environmental sustainability. The revisions also reinforce the need for international cooperation to create interoperable governance frameworks that can keep pace with AI developments.

Source: OECD

THE HEADLINES

U.S. Launches AI Diplomacy with China Amidst Cybersecurity Concerns

As the optimism for a unified global internet wanes, the U.S. is leading a new diplomatic strategy focused on AI and cybersecurity. This month, U.S. and Chinese diplomats will initiate talks in Geneva aimed at controlling the use of AI, especially in the context of nuclear arsenal management. Secretary of State Antony J. Blinken emphasizes the strategy of "digital solidarity," advocating for cooperation with allies to safeguard critical infrastructure and ensure technological dominance aligns with democratic values. These negotiations represent a shift in cyber diplomacy, potentially setting new international standards for the integration of emerging technologies into national security frameworks.

Source: The New York Times

New Legislation Seeks to Fortify AI Security and Enhance Public-Private Collaboration

U.S. Senators Mark R. Warner and Thom Tillis introduced the Secure Artificial Intelligence Act of 2024, aimed at enhancing the security framework for AI technologies by improving the tracking and processing of security incidents and risks. The legislation proposes updates to existing cybersecurity information systems and creates a voluntary database to record AI-related cybersecurity incidents. It also intends to establish an Artificial Intelligence Security Center within the NSA to facilitate counter-AI research and develop guidance to protect against AI-specific threats. The act emphasizes collaboration between the public and private sectors and introduces measures to update public databases and reporting processes to include AI-related incidents. This bipartisan effort is supported by major entities like IBM and ITI, highlighting its importance in safeguarding AI technologies and promoting secure AI adoption.

Source: Mark R. Warner

Japan Leads Global Initiative for Generative AI Regulation with New International Framework

Japanese Prime Minister Fumio Kishida has introduced a global framework for the regulation of generative AI, termed the Hiroshima AI Process Friends Group, at the Organization for Economic Cooperation and Development in Paris. This initiative builds on Japan's leadership during its tenure as chair of the Group of Seven, aiming to establish guiding principles and a code of conduct for AI developers worldwide. The voluntary framework has attracted participation from 49 countries and regions, focusing on mitigating risks like disinformation while promoting the safe, secure, and beneficial use of AI globally. Kishida emphasized the potential of generative AI to enrich the world but also highlighted the need to address its potential negative impacts. This move aligns with international efforts by entities like the European Union, the United States, and China, who are also developing their own AI regulations.

Source: AP News

Challenges Mount for UK's AI Safety Efforts Amid Tech Giants' Reluctance

UK Prime Minister Rishi Sunak's initiative to implement a "landmark" AI testing agreement with major tech companies is facing significant challenges. Despite initial commitments at Bletchley Park from leaders like Sam Altman and Elon Musk to share AI models with the UK's AI Safety Institute (AISI) for pre-release safety testing, major players like OpenAI and Meta have not provided access. This has exposed the limitations of relying on voluntary commitments for ensuring AI safety, as well as the challenges in regulating such advanced technology without specific legislation. Six months after the agreement, only Google DeepMind has allowed pre-deployment access in a limited capacity, and the UK government is considering introducing targeted legal requirements for AI safety.

Source: POLITICO

Insurtech Funding Dips in Q1 2024 but AI and Distribution Firms Prevail

Insurtech funding experienced a notable decline in the first quarter of 2024, falling to its lowest level in four years at just under US$1 billion, as reported by Gallagher Re. Despite the overall drop, AI-centered insurtechs and those focusing on distribution secured the majority of investment deals. AI-centered companies, particularly in the early stages, saw an increase in deal volume, with average deal sizes significantly higher than those of their non-AI counterparts. Distribution-focused insurtechs led the sector, accounting for half of global deals, although their average deal size decreased by 30.6% from the previous quarter. Meanwhile, in Canada, Toronto emerged as a notable hub for insurtech activity, with companies like Armilla AI securing substantial investments.

Source: Canadian Underwriter

AI Integration Deepens in Business: A Surge in Spending and Adoption in 2024

AI spending surged by 293% last year, reflecting its shift from experimental to operational use across various sectors. Companies are increasingly relying on AI for automation, cost savings, and enhanced decision-making, with the average business spending on AI tools jumping by 138%. Notably, non-tech sectors such as healthcare and financial services are rapidly increasing their AI investments, with the healthcare sector alone seeing a 131% increase in AI transactions. However, the growth in the number of companies starting to invest in AI has decelerated, indicating that some are taking a wait-and-see approach. Meanwhile, specialized AI tools, known as "narrow" AI, are gaining popularity for specific functions like sales intelligence and customer service, showing a strong trend towards integrating AI into core business operations.

Source: Ramp

Microsoft Tightens Restrictions on U.S. Law Enforcement Use of AI for Facial Recognition

Microsoft has updated its policy to prohibit U.S. police departments from using its Azure OpenAI Service for facial recognition purposes. This amendment to the terms of service specifically bans the integration of this service with any real-time facial recognition technology, particularly on mobile devices such as body cameras and dashcams, within uncontrolled environments. This decision aligns with growing concerns about the risks and biases associated with AI, highlighted by recent criticisms of Axon’s new product that utilizes OpenAI’s GPT-4 to analyze body camera audio. While the ban is stringent within the U.S., it does not universally apply to international police forces, nor does it forbid the use of facial recognition with stationary cameras in controlled settings.

Source: TechCrunch

Major Newspapers Sue OpenAI and Microsoft for Copyright Infringement Over AI Training Practices

Several newspapers owned by Alden Global Capital, including the New York Daily News and the Chicago Tribune, have filed a lawsuit against OpenAI and Microsoft, accusing them of copyright infringement. The lawsuit alleges that the companies used the newspapers' content to train their AI models without permission or compensation. Evidence cited includes instances where chatbots like ChatGPT and Copilot reproduced articles from these publications verbatim, often without proper attribution, and at times introduced inaccuracies. This legal action follows similar suits by The New York Times and other media outlets.

Source: The Verge

Meta Unveils Llama 3: Enhanced Performance and Responsibility

Meta launched Llama 3, a highly advanced version of its AI model available in both 8B and 70B configurations, designed to cater to a diverse range of AI applications. Integrated into Meta AI, Llama 3 enhances capabilities in tasks such as coding, problem-solving, language translation, and dialogue generation, offering scalability and performance. The model has been trained on significantly more data than its predecessor, utilizing custom-built 24K GPU clusters and over 15 trillion tokens of data. Additionally, Meta has updated its Responsible Use Guide and introduced new safety tools like Llama Guard 2 and Code Shield to ensure responsible and secure AI development. This comprehensive approach aims to not only advance AI technology but also address the ethical challenges associated with large language models.

Source: Meta

Amazon Q: A New AI Assistant to Revolutionize Coding and Business Operations

Amazon Web Services has officially launched Amazon Q, an AI-powered assistant designed to boost productivity for developers and businesses. This release includes two versions: Amazon Q Developer and Amazon Q Business, both aimed at streamlining technical tasks like coding, debugging, and data analysis, enabling developers to focus more on creative and high-level functions. Amazon Q leverages generative AI to assist with tasks ranging from application development to optimizing AWS resources, and it integrates with Amazon QuickSight for building BI dashboards using natural language. Additionally, a new feature in preview, Amazon Q Apps, allows employees to create generative AI applications using simple text prompts, even without coding skills. AWS is also offering free courses to help users maximize the capabilities of Amazon Q.

Source: ZDNET
