Armilla Review - Transforming Industries: Strategic AI Integration and Its Multifaceted Impact

Welcome to your weekly AI review. The integration of artificial intelligence across sectors is transforming traditional practices and creating new opportunities for innovation and efficiency.

🇺🇸 The U.S. Department of Commerce has introduced new guidelines and initiatives, including draft guidance documents and international standards plans, to ensure the safe, secure, and trustworthy development of AI following President Biden's executive order.

👨‍💻 HR professionals are advised to ensure AI compliance with existing laws, including city-specific and federal regulations, by conducting audits and holding vendors accountable, mitigating the risks of unintentional discrimination and legal consequences in hiring.

🧑‍⚖️ Law firm DLA Piper is leveraging AI to enhance legal services, transitioning toward value-based billing, and developing tools that preempt legal issues.

🏦 The financial services industry, as discussed in the FAIR Programme's report, is navigating the dual challenge of harnessing large language models (LLMs) for applications like customer service and algorithmic trading while addressing risks related to data integrity and privacy.

💊 Moderna's collaboration with OpenAI exemplifies the profound impact of AI in biotechnology, enhancing drug development processes while the company maintains a lean operation with ambitious product launch plans.

Across these stories, a common theme emerges: the strategic adoption of AI is not just improving operational efficiency but also driving industry-specific innovation, albeit with a cautious approach to managing the associated risks.
May 1, 2024 • 5 min read

U.S. Department of Commerce Advances Safe AI Development with New Guidelines and Global Standards Initiative

The U.S. Department of Commerce has announced new measures to implement President Biden's Executive Order on AI, including draft guidance documents from the National Institute of Standards and Technology (NIST) and a request for comments by the U.S. Patent and Trademark Office (USPTO) to ensure the safe and responsible development of AI technologies. These actions aim to manage the risks associated with AI, such as bias and data security, enhance transparency, and engage globally on AI standards, demonstrating a proactive approach to harnessing AI's potential while addressing its challenges. The initiatives also include a new NIST program to evaluate generative AI technologies, emphasizing the importance of stakeholder feedback and international cooperation in shaping AI governance.

Source: NIST

AI in Hiring: Navigating Compliance Challenges and Ensuring Fairness

HR professionals are urged to verify AI compliance with existing laws themselves rather than relying solely on vendor assurances, according to attorney Anthony May. AI tools used in hiring must be audited for bias based on characteristics such as race and gender under laws like New York City's Local Law 144, and they must also comply with broader federal statutes, including Title VII of the Civil Rights Act, the ADA, and the ADEA, which cover discrimination based on disability, age, and other protected characteristics. The challenges of AI in hiring include opaque decision-making and the risk of unintentional discrimination, such as facial recognition technology adversely affecting people with certain disabilities. Employers must remain vigilant, conducting thorough audits and holding vendors accountable to avoid legal repercussions, as shown by recent EEOC enforcement actions and litigation such as the cases against iTutorGroup and Workday.

Source: SHRM

Charting the Course: Adopting Large Language Models in Financial Services

The integration of large language models (LLMs) into the financial services sector presents significant opportunities and challenges, as detailed in the FAIR Programme's report, facilitated by The Alan Turing Institute and partners including HSBC and the UK Financial Conduct Authority. LLMs are being explored for a range of applications within finance, from customer service to algorithmic trading, and are expected to significantly affect investment banking and venture capital strategy development. This adoption, however, brings potential risks, including data integrity issues and new vulnerabilities that could compromise individual privacy. Workshop discussions among industry experts highlighted the importance of managing these risks through robust human-in-the-loop AI collaboration and advanced risk assessment tools. The financial sector's early adoption of such transformative technologies calls for a balanced approach: harnessing LLM capabilities while ensuring safe, trustworthy integration. The insights from these discussions have been distilled into tailored recommendations for researchers and practitioners on adopting LLMs responsibly in financial services.

Source: The Alan Turing Institute

Embracing AI in Law: DLA Piper's Strategic Vision for the Future of Legal Services

Loren Brown, the US vice chair of DLA Piper, has embraced the potential of generative artificial intelligence (AI) to revolutionize the legal industry. Unlike many law firm leaders who worry about AI's impact on traditional billing models, Brown envisions a future where AI enhances the efficiency of legal services and compensates for any displaced work through new legal matters generated by AI-driven business models. He foresees the integration of AI pushing law firms towards value-based billing and believes that leveraging AI can give firms a competitive edge. DLA Piper is proactive in this field, already defending significant AI-related cases and lobbying on AI legislation. The firm is also developing AI-powered legal products, like a compliance tool that detects potential legal breaches, indicating a shift from resolving legal issues to preventing them.

Source: Bloomberg Law

Charting the Path to a Trustworthy AI Certification Ecosystem: Insights from Global Experts

The Certification Working Group (CWG), supported by the Schwartz Reisman Institute for Technology and Society, the Responsible AI Institute, and the World Economic Forum’s Centre for the Fourth Industrial Revolution, has published a report detailing essential elements for establishing a robust AI certification ecosystem. This ecosystem aims to ensure AI technologies are responsible, trustworthy, ethical, and fair. The report outlines key roles for government in setting objectives and fostering market demand for AI certification, especially in high-risk scenarios. It also emphasizes the need for collaborative efforts in developing standards, impact assessments, and certifications that integrate both management system and product attributes. The CWG's recommendations highlight the urgent need for a comprehensive approach to AI certification that involves multiple stakeholders to advance trust and innovation in AI technologies.

Source: Schwartz Reisman Institute

Moderna's Strategic Partnership with OpenAI to Enhance Drug Development

Moderna has partnered with OpenAI to integrate ChatGPT Enterprise across its operations, enhancing the capabilities of its workforce and fostering innovation in developing mRNA medicines. This collaboration has allowed Moderna to revolutionize its business processes, from research and legal affairs to corporate communications, by embedding AI in every function, thus improving efficiency and accuracy. With ambitious plans to launch multiple new products in the next five years, Moderna is leveraging AI to maintain a lean operation while aiming to scale its impact significantly. The deployment of AI tools like the Dose ID GPT for clinical trials and various GPTs for corporate functions demonstrates Moderna's commitment to using advanced technology to drive better patient outcomes and corporate efficiency. Overall, this strategic adoption of AI positions Moderna to enhance its innovative capabilities and accelerate the development of life-saving treatments.

Source: OpenAI

Apple Advances Generative AI Integration in iPhones with OpenAI and Google Discussions

Apple is actively renewing discussions with OpenAI to incorporate generative AI technology into upcoming iPhone features, potentially including these innovations in the next iOS 18 update. While Apple explores licensing opportunities with OpenAI, it is also considering an agreement with Google for its Gemini chatbot, demonstrating Apple's cautious investment in generative AI to catch up with competitors like Microsoft and Google. Tim Cook has indicated that Apple is making substantial investments in generative AI, with more detailed plans expected to be announced later this year.

Source: Reuters

Ensuring Precision: The Role of Calibration in Machine Learning Models

Calibration in machine learning (ML) concerns the reliability of a predictive model's probability estimates, in both classification and regression tasks: predicted probabilities are compared against empirical frequencies derived from validation datasets. Metrology, the science of measurement, provides foundational concepts for understanding calibration, such as the use of probability distributions and confidence intervals to handle measurement variability. In classification, calibration means that a predicted probability (e.g., a 70% likelihood of rain) matches the empirical frequency with which the event actually occurs in held-out data. For regression, calibration involves predicting a variable (e.g., temperature) such that the probability of observing a value at or below a given threshold matches the empirical frequency of that outcome. In short, calibration is crucial for maintaining the trustworthiness of ML models because it validates their outputs against known standards or empirical data.
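For classification, the comparison described above can be made concrete with a reliability check such as expected calibration error (ECE): bin predictions by confidence, then compare each bin's mean predicted probability to its observed positive rate. A minimal sketch using only NumPy (the function name and equal-width binning scheme are illustrative choices, not from the source):

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Gap between predicted probabilities and empirical frequencies.

    Bins predictions by confidence, then averages the per-bin gap between
    mean predicted probability and observed positive rate, weighted by
    the fraction of samples in each bin.
    """
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    # Assign each prediction to an equal-width confidence bin.
    bins = np.minimum((probs * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            gap = abs(probs[mask].mean() - labels[mask].mean())
            ece += mask.mean() * gap
    return ece

# A perfectly calibrated toy example: events predicted at 70%
# actually occur 70% of the time, so the ECE is (near) zero.
probs = np.array([0.7] * 10)
labels = np.array([1] * 7 + [0] * 3)
ece = expected_calibration_error(probs, labels)
```

A model can be accurate yet poorly calibrated (e.g., systematically overconfident), which is why this check is run on held-out validation data rather than the training set.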

Source: AI Standards Hub

Advancing Machine Translation for Low-Resource Languages with Claude 3 Opus

The study demonstrates the strong machine translation capabilities of Claude 3 Opus, a large language model developed by Anthropic, which excels at translating low-resource languages into English. After identifying data contamination issues with Claude on the FLORES-200 benchmark, the authors created new, uncontaminated benchmarks that confirmed Claude's efficacy, particularly its superior resource efficiency compared to other LLMs. They then leverage Claude to improve traditional neural machine translation (NMT) models via knowledge distillation, using synthetic parallel data generated by Claude to advance the state of the art in Yoruba-English translation. The resulting models match, and in some cases surpass, leading systems such as NLLB-54B and Google Translate. The findings underscore the potential of advanced LLMs to improve machine translation for under-resourced languages through techniques such as knowledge distillation.
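The distillation pipeline described above can be sketched in outline: translate monolingual source-language text with the strong teacher model, then train a smaller NMT student on the resulting synthetic pairs. In the sketch below, `translate_with_teacher` is a hypothetical stub standing in for a call to the teacher LLM (the study uses Claude 3 Opus), and all function names are illustrative rather than taken from the paper:

```python
# Sketch of sequence-level knowledge distillation for low-resource MT.
# `translate_with_teacher` is a hypothetical stub; in practice it would
# prompt the teacher LLM for a translation of each source sentence.

def translate_with_teacher(sentence: str) -> str:
    # Placeholder for an API call to the teacher model.
    return f"<teacher translation of: {sentence}>"

def build_synthetic_corpus(monolingual_sources: list[str]) -> list[tuple[str, str]]:
    """Pair each source sentence with the teacher's synthetic translation."""
    return [(src, translate_with_teacher(src)) for src in monolingual_sources]

def train_student(train_step, corpus: list[tuple[str, str]]) -> None:
    """Feed the distilled (source, target) pairs into any NMT training loop."""
    for src, tgt in corpus:
        train_step(src, tgt)

# Usage: distill from monolingual source text (here, Yoruba sentences).
corpus = build_synthetic_corpus(["Bawo ni?", "E kaaro."])
```

The key idea is that the expensive teacher is only needed once, to label the monolingual corpus; the cheaper student model then learns from those synthetic pairs.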

Source: arXiv