This week in AI: AI ethics keeps falling by the wayside

Image Credits: Bryce Durbin / TechCrunch

Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

This week in AI, the news cycle finally (finally!) quieted down a bit ahead of the holiday season. But that’s not to suggest there was a dearth of things to write about, a blessing and a curse for this sleep-deprived reporter.

A particular headline from the AP caught my eye this morning: “AI image-generators are being trained on explicit photos of children.” The gist of the story is that LAION, a dataset used to train many popular open source and commercial AI image generators, including Stable Diffusion and Imagen, contains thousands of images of suspected child sexual abuse. A watchdog group based at Stanford, the Stanford Internet Observatory, worked with anti-abuse charities to identify the illegal material and report the links to law enforcement.

Now LAION, the nonprofit behind the dataset, has taken down its training data and pledged to remove the offending material before republishing it. But the incident serves to underline just how little thought is being put into generative AI products as the competitive pressures ramp up.

Thanks to the proliferation of no-code AI model creation tools, it’s becoming frightfully easy to train generative AI on any dataset imaginable. That’s a boon for startups and tech giants alike to get such models out the door. With the lower barrier to entry, however, comes the temptation to cast aside ethics in favor of an accelerated path to market.

Ethics is hard — there’s no denying that. Combing through the thousands of problematic images in LAION, to take this week’s example, won’t happen overnight. And ideally, developing AI ethically involves working with all relevant stakeholders, including organizations that represent groups often marginalized and adversely impacted by AI systems.

The industry is full of examples of AI release decisions made with shareholders, not ethicists, in mind. Take for instance Bing Chat (now Microsoft Copilot), Microsoft’s AI-powered chatbot on Bing, which at launch compared a journalist to Hitler and insulted their appearance. As of October, ChatGPT and Bard, Google’s ChatGPT competitor, were still giving outdated, racist medical advice. And the latest version of OpenAI’s image generator DALL-E shows evidence of Anglocentrism.

Suffice it to say harms are being done in the pursuit of AI superiority — or at least Wall Street’s notion of AI superiority. Perhaps with the passage of the EU’s AI regulations, which threaten fines for noncompliance with certain AI guardrails, there’s some hope on the horizon. But the road ahead is long indeed.

Here are some other AI stories of note from the past few days:

Predictions for AI in 2024: Devin lays out his predictions for AI in 2024, touching on how AI might impact the U.S. primary elections and what’s next for OpenAI, among other topics.

Against pseudanthropy: Devin also wrote a piece suggesting that AI be prohibited from imitating human behavior.

Microsoft Copilot gets music creation: Copilot, Microsoft’s AI-powered chatbot, can now compose songs thanks to an integration with GenAI music app Suno.

Facial recognition out at Rite Aid: Rite Aid has been banned from using facial recognition tech for five years after the Federal Trade Commission found that the U.S. drugstore giant’s “reckless use of facial surveillance systems” left customers humiliated and put their “sensitive information at risk.”

EU offers compute resources: The EU is expanding its plan, originally announced back in September and kicked off last month, to support homegrown AI startups by providing them with access to processing power for model training on the bloc’s supercomputers.

OpenAI gives board new powers: OpenAI is expanding its internal safety processes to fend off the threat of harmful AI. A new “safety advisory group” will sit above the technical teams and make recommendations to leadership, and the board has been granted veto power.

Q&A with UC Berkeley’s Ken Goldberg: For his regular Actuator newsletter, Brian sat down with Ken Goldberg, a professor at UC Berkeley, a startup founder and an accomplished roboticist, to talk humanoid robots and broader trends in the robotics industry.

CIOs take it slow with GenAI: Ron writes that, while CIOs are under pressure to deliver the kind of experiences people are seeing when they play with ChatGPT online, most are taking a deliberate, cautious approach to adopting the tech for the enterprise.

News publishers sue Google over AI: A class action lawsuit filed by several news publishers accuses Google of “siphon[ing] off” news content through anticompetitive means, partly through AI tech like Google’s Search Generative Experience (SGE) and Bard chatbot.

OpenAI inks deal with Axel Springer: Speaking of publishers, OpenAI inked a deal with Axel Springer, the Berlin-based owner of publications including Business Insider and Politico, to train its generative AI models on the publisher’s content and add recent Axel Springer-published articles to ChatGPT.

Google brings Gemini to more places: Google integrated its Gemini models with more of its products and services, including its Vertex AI managed AI dev platform and AI Studio, the company’s tool for authoring AI-based chatbots and other experiences along those lines.

More machine learnings

Certainly the wildest (and easiest to misinterpret) research of the last week or two has to be life2vec, a Danish study that uses countless data points from a person’s life to predict what they’re like and when they’ll die. Roughly!

Visualization of life2vec’s mapping of various relevant life concepts and events. Image Credits: Lehmann et al.

The study isn’t claiming oracular accuracy (say that three times fast, by the way) but rather intends to show that if our lives are the sum of our experiences, those paths can be extrapolated somewhat using current machine learning techniques. Between upbringing, education, work, health, hobbies and other metrics, one may reasonably predict not just whether someone is, say, introverted or extroverted, but how these factors may affect life expectancy. We’re not quite at “precrime” levels here but you can bet insurance companies can’t wait to license this work.
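For the curious, here’s a toy sketch of the general idea, nothing from the actual paper: the event tokens and weights are invented, and a simple bag-of-tokens scorer stands in for the real sequence model. The point is just that a life can be encoded as a chronologically ordered “sentence” of event tokens that a model then scores.

```python
# Toy sketch of the life2vec-style idea (invented tokens and weights, not the paper's model):
# treat a person's life as a sequence of event tokens, then score that sequence.
from dataclasses import dataclass

@dataclass
class LifeEvent:
    year: int
    token: str  # e.g. "JOB_nurse", "DIAG_asthma", "EDU_bachelor" (all invented)

def encode(events):
    """Order events chronologically so the sequence reads like a sentence."""
    return [e.token for e in sorted(events, key=lambda e: e.year)]

# Stand-in for a learned sequence model: a bag-of-tokens linear scorer.
TOY_WEIGHTS = {"JOB_nurse": 0.4, "DIAG_asthma": -0.2, "EDU_bachelor": 0.3}

def extroversion_score(events):
    tokens = encode(events)
    return sum(TOY_WEIGHTS.get(t, 0.0) for t in tokens) / max(len(tokens), 1)

person = [LifeEvent(2008, "JOB_nurse"), LifeEvent(2005, "EDU_bachelor")]
print(f"toy extroversion score: {extroversion_score(person):.2f}")
```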

Another big claim was made by CMU scientists who created a system called Coscientist, an LLM-based assistant for researchers that can do a lot of lab drudgery autonomously. It’s limited to certain domains of chemistry currently, but just like scientists, models like these will be specialists.

Lead researcher Gabe Gomes told Nature: “The moment I saw a non-organic intelligence be able to autonomously plan, design and execute a chemical reaction that was invented by humans, that was amazing. It was a ‘holy crap’ moment.” Basically, it uses an LLM like GPT-4, fine-tuned on chemistry documents, to identify common reactions, reagents and procedures and perform them. So you don’t need to tell a lab tech to synthesize four batches of some catalyst — the AI can do it, and you don’t even need to hold its hand.
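To make that pattern concrete, here’s a hypothetical sketch of the plan-then-execute loop. To be clear, call_llm and run_on_hardware are invented stand-ins, not Coscientist’s actual interfaces, and the “plan” is hardcoded for illustration.

```python
# Hypothetical sketch of an LLM-driven lab assistant loop (not Coscientist's real API):
# the model plans a synthesis, and each planned step is dispatched to lab hardware.
def call_llm(prompt: str) -> list:
    """Stand-in for a chemistry-tuned LLM; returns a step-by-step plan."""
    return [
        "dissolve 2 mmol aryl halide in 5 mL THF",
        "add Pd catalyst and base, stir at 60 C for 2 h",
        "quench, extract, and dry the product",
    ]

def run_on_hardware(step: str) -> str:
    """Stand-in for a liquid handler / heater-shaker command interface."""
    return f"OK: {step}"

def synthesize(target: str) -> None:
    plan = call_llm(f"Plan a safe synthesis of {target} using available reagents.")
    for i, step in enumerate(plan, 1):
        print(f"step {i}: {run_on_hardware(step)}")

synthesize("a Suzuki coupling product")
```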

Google’s AI researchers have had a big week as well, diving into a few interesting frontier domains. FunSearch may sound like Google for kids, but it’s actually short for function search, and like Coscientist it can make, and help make, mathematical discoveries. Interestingly, to prevent hallucinations, it (like other recent systems) uses a matched pair of AI models, a lot like the “old” GAN architecture: one theorizes, the other evaluates.

While FunSearch isn’t going to make any groundbreaking new discoveries, it can take what’s out there and hone or reapply it in new places, so a function that one domain uses but another is unaware of might be used to improve an industry-standard algorithm.
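Here’s a minimal sketch of that propose-and-evaluate loop, assuming nothing about DeepMind’s actual code: a toy random-search proposer stands in for the generative model, and a deterministic evaluator decides which candidates survive.

```python
# Minimal sketch of a propose-and-evaluate search loop (toy stand-in, not FunSearch itself):
# a "proposer" suggests candidate functions, an "evaluator" scores them against a fixed
# task, and only candidates the evaluator prefers are kept.
import random

SAMPLES = list(range(-5, 6))

def evaluator(candidate):
    """Deterministically score a candidate: negative squared error vs. the target 3x + 1."""
    return -sum((candidate(x) - (3 * x + 1)) ** 2 for x in SAMPLES)

def proposer(best_coeffs):
    """Stand-in for the generative model: perturb the current best candidate."""
    a, b = best_coeffs
    return a + random.uniform(-1, 1), b + random.uniform(-1, 1)

best = (0.0, 0.0)
best_score = evaluator(lambda x: best[0] * x + best[1])
for _ in range(2000):
    cand = proposer(best)
    score = evaluator(lambda x, c=cand: c[0] * x + c[1])
    if score > best_score:  # the evaluator, not the proposer, decides what survives
        best, best_score = cand, score

print(f"best linear function found: {best[0]:.2f} * x + {best[1]:.2f}")
```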

StyleDrop is a handy tool for people looking to replicate certain styles via generative imagery. The trouble (as the researchers see it) is that if you have a style in mind (say “pastels”) and describe it, the model will have too many sub-styles of “pastels” to pull from, so the results will be unpredictable. StyleDrop lets you provide an example of the style you’re thinking of, and the model will base its work on that — it’s basically super-efficient fine-tuning.

Image Credits: Google

The blog post and paper show that it’s pretty robust, applying a style from any image, whether it’s a photo, painting, cityscape or cat portrait, to any other type of image, even the alphabet (notoriously hard for some reason).
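If you’re wondering what super-efficient fine-tuning might look like in miniature, here’s a toy illustration of the adapter idea (invented numbers, not Google’s method): the big base model stays frozen, and a single extra parameter gets nudged to match one provided style example.

```python
# Toy sketch of adapter-style fine-tuning on a single example (not StyleDrop's code):
# the pretrained "base model" stays frozen; only one small adapter parameter is trained.
def base_model(x: float) -> float:
    return 2.0 * x            # frozen pretrained behavior (stand-in)

adapter_scale = 1.0           # the only trainable parameter in this toy
style_x, style_target = 3.0, 7.5   # one "style example": desired output for x = 3

lr = 0.01
for _ in range(500):
    pred = base_model(style_x) * adapter_scale
    grad = 2 * (pred - style_target) * base_model(style_x)   # d(squared loss)/d(scale)
    adapter_scale -= lr * grad

print(f"adapter scale after tuning: {adapter_scale:.3f}")    # converges to ~1.25, output ~7.5
```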

Google is also pushing ahead in generative video with VideoPoet, which uses an LLM base (like everything else these days… what else are you going to use?) to do a bunch of video tasks: turning text or images to video, extending or stylizing existing video, and so on. The challenge here, as every project makes clear, is not simply making a series of images that relate to one another, but making them coherent over longer periods (like more than a second) and with large movements and changes.

Image Credits: Google

VideoPoet moves the ball forward, it seems, though as you can see, the results are still pretty weird. But that’s how these things progress: First they’re inadequate, then they’re weird, then they’re uncanny. Presumably they leave uncanny at some point but no one has really gotten there yet.

On the practical side of things, Swiss researchers have been applying AI models to snow measurement. Normally one would rely on weather stations, but those can be few and far between, and we have all this lovely satellite data, right? Right. So the ETHZ team took public satellite imagery from the Sentinel-2 constellation, but as lead researcher Konrad Schindler puts it, “Just looking at the white bits on the satellite images doesn’t immediately tell us how deep the snow is.”

So they pulled in terrain data for the whole country from the Federal Office of Topography (like our USGS) and trained the system to base its estimates not just on the white bits in the imagery but also on ground truth data and tendencies like melt patterns. The resulting tech is being commercialized by ExoLabs, which I’m about to contact to learn more.
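As a rough illustration of that recipe (invented numbers, nothing like the actual ETHZ model), you can think of it as fitting a simple regressor that maps a satellite-derived snow-cover signal plus a terrain feature to the sparse ground-truth depths reported by stations.

```python
# Toy sketch: combine a satellite-derived snow-cover fraction with a terrain feature
# (elevation), and fit a plain linear regressor against sparse station measurements.
# All numbers are invented for illustration.
stations = [
    ((0.9, 2.1), 120.0),   # (snow-cover fraction, elevation in km) -> measured depth in cm
    ((0.7, 1.6),  60.0),
    ((0.2, 0.9),  10.0),
    ((1.0, 2.8), 180.0),
]

w = [0.0, 0.0]
b = 0.0
lr = 0.01
for _ in range(10000):                 # least-squares fit via stochastic gradient descent
    for feats, depth in stations:
        pred = sum(wi * f for wi, f in zip(w, feats)) + b
        err = pred - depth
        w = [wi - lr * err * f for wi, f in zip(w, feats)]
        b -= lr * err

cell = (0.8, 1.9)                      # an unmeasured grid cell between the stations
print(f"estimated snow depth: {sum(wi * f for wi, f in zip(w, cell)) + b:.0f} cm")
```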

A word of caution from Stanford, though — as powerful as applications like the above are, note that none of them involve much in the way of human bias. When it comes to health, that suddenly becomes a big problem, and health is where a ton of AI tools are being tested out. Stanford researchers showed that AI models propagate “old medical racial tropes.” GPT-4 doesn’t know whether something is true or not, so it can and does parrot old, disproved claims about groups, such as that Black people have lower lung capacity. Nope! Stay on your toes if you’re working with any kind of AI model in health and medicine.

Lastly, here’s a short story written by Bard with a shooting script and prompts, rendered by VideoPoet. Watch out, Pixar!
