
OpenAI Asks for Public's Help in Writing Rules for ChatGPT, and More AI News

5-ish Things on AI: Get up to speed on the rapidly evolving world of artificial intelligence with our roundup of the week's developments.

Connie Guglielmo, SVP, AI Edit Strategy
Ralf Hiemisch/Getty Images

AI watchers are abuzz with rumors that OpenAI may release a search engine for its popular ChatGPT chatbot as it steps up competition with Google and AI search startup Perplexity.ai. Despite those reports, OpenAI said it won't be announcing a new search product or GPT-5, the next version of its GPT large language model, at its Spring Update event on May 13.

As for announcing a search product on some other day, we'll see.

So, while everyone ponders how an OpenAI search engine would impact rivals including Google (which is hosting its AI-focused developers conference this week), there's something else that happened with OpenAI that I think is worth understanding. A bit of a preamble first. 


Fans of author Isaac Asimov will likely be familiar with his Three Laws of Robotics, introduced in 1942 (and popularized in the 2004 Will Smith movie I, Robot): First, a robot may not injure a human or, through inaction, allow a human to come to harm. Second, a robot must obey orders given by humans except when those orders conflict with the first law. Third, a robot may protect its own existence, as long as its actions don't conflict with the first or second law.

"The Three Laws are obvious from the start, and everyone is aware of them subliminally. The Laws just never happened to be put into brief sentences until I managed to do the job," Asimov wrote in a 1981 guest essay in Compute!, saying he shouldn't be congratulated for writing something so basic. He added that, "The Laws apply, as a matter of course, to every tool that human beings use."

Whether you're a fan or critic of Asimov's original laws, they're succinct and thought-provoking, having prompted a lot of debate in literary and scientific circles. And I say all that because OpenAI is likely to stir up similar debate after calling for the public to help shape how its popular AI tools, including ChatGPT and Dall-E, should behave.

On May 8, the company released the Model Spec, "a document that specifies desired behavior for our models. ... It includes a set of core objectives, as well as guidance on how to deal with conflicting objectives or instructions," the company wrote. 

You've got until May 22 to offer your input using the feedback form. And you should provide feedback, since OpenAI says this is all about helping people "understand and discuss the practical choices involved in shaping model behavior."

Instead of three laws, OpenAI's first draft breaks the defining principles into three categories, "objectives, rules and defaults," which aim to "maximize steerability and control for users and developers, enabling them to adjust the model's behavior to their needs while staying within clear boundaries." 

Objectives, like "benefit humanity," will require more clarity, the company said, and that clarity will come in the form of rules: "One way to resolve conflicts between objectives is to make rules, like 'never do X,' or 'if X then do Y,'" the draft says.

There are at least six rules in the Model Spec:

  • "Follow the chain of command." That is, follow the rules (though there are exceptions, OpenAI notes).
  • "Comply with applicable laws."
  • "Don't provide information hazards."
  • "Respect creators and their rights."
  • "Protect people's privacy."
  • "Don't respond with NSFW (not safe for work) content." (More on this below.)

After the public weighs in (you can suggest alternate objectives and rules, for instance), OpenAI will speak with regulators, domain experts and "trusted institutions" to refine the Model Spec and will share updates over the next year, the company said.


One thing that's already prompted attention in the Model Spec is that OpenAI is considering letting users of its tools create AI-generated pornography. As part of the emerging NSFW rules, the company writes that it's "exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts" in its products, which include chatbot ChatGPT and text-to-image generator Dall-E. That NSFW content "may include erotica, extreme gore, slurs and unsolicited profanity."

NPR noted that, "Under OpenAI's current rules, sexually explicit, or even sexually suggestive, content is mostly banned." The news org spoke with Joanne Jang, an OpenAI model lead who helped write the Model Spec, who said the company hopes to start a conversation about whether erotic text and nude images should always be banned in its AI products. But though allowing AI-generated porn may be under discussion, allowing deepfake porn isn't; such porn is "out of the question" under OpenAI's rules, Jang told NPR.

To be sure, writing objectives and rules, with exceptions, isn't going to be an easy task. Asimov revisited and refined his rules several times during his life. Decades before we started seeing personal robots, he addressed the question of whether his "Three Laws of Robotics will actually be used to govern the behavior of robots, once they become versatile and flexible enough to be able to choose among different courses of behavior."

"My answer is, 'Yes, the Three Laws are the only way in which rational human beings can deal with robots — or with anything else,'" Asimov wrote in closing his essay for Compute! "But when I say that, I always remember (sadly) that human beings are not always rational."

Here are the other doings in AI worth your attention.

We're at the 'hard part' of using AI at work, study finds

Microsoft and LinkedIn surveyed 31,000 people across 31 countries; looked at labor and hiring trends on LinkedIn; aggregated "productivity signals" from "trillions" of Microsoft 365 actions; and spoke with Fortune 500 companies to compile their 2024 Work Trend Index, called "AI at work is here. Now comes the hard part."

A summary of the study is here, but I encourage you to at least scan the full report.

What Microsoft and LinkedIn found is that use of generative AI at work has nearly doubled in the past six months, with 75% of the "knowledge workers" surveyed now saying they're using some form of AI. Of those, 78% are bringing their own AI tools to work because they don't want to wait for their employers to figure out "a vision and plan" for AI use cases and how to measure the productivity gains they want from AI.

The study also found that people understand the need to get up to speed on AI if they want to be competitive in the workforce. It found that more people are adding AI skills to their LinkedIn profiles and that 66% of leaders say they wouldn't hire people without AI skills. The hitch: only 39% of employees today have received any AI training from their company, and only 25% of companies say they expect to offer such training in 2024.

Workers also said they see "massive opportunity" if they skill up on AI, with 46% saying they're considering quitting their current jobs in the year ahead — "an all-time high since the Great Reshuffle of 2021," Microsoft and LinkedIn noted. 

Even though we're just barely into the gen AI revolution — which kicked off when OpenAI released ChatGPT in November 2022 — the study says employers really need to do the hard work of figuring out how to upskill workers. The report has already classified today's workers into several categories — from skeptics, who rarely use AI, to power users. Which are you? 

If you're on the skeptic end, you may want to rethink that. Luckily, there are loads of free classes about AI online, including those from Google, IBM and Udacity that I called out earlier this month.  

TikTok will label AI-generated content on its platform 

Setting aside the very big questions of who should own TikTok and whether the popular social media platform will be banned in the US, TikTok last week released guidelines for how AI-generated content should be labeled on its service.

TikTok said its goal is to help creators safely and responsibly express their creativity with AI-generated content, or AIGC, and to avoid confusing or misleading viewers who don't know content was AI generated. That's "why we label AIGC made with TikTok AI effects, and have required creators to label realistic AIGC for over a year. We also built a first-of-its-kind tool to make this easy to do, which over 37 million creators have used since last fall."

As part of the new policy, TikTok said it will automatically label AIGC created on other platforms, using AI detection technology from the Coalition for Content Provenance and Authenticity (C2PA). The technology is able to "read" content credentials on videos and images created on other platforms and then attach metadata to that content, "which we can use to instantly recognize and label AIGC," TikTok said. Auto-labeling works on video and images today, and TikTok said it will auto-label audio content sometime soon.

This labeling effort, TikTok said, is part of a campaign to educate users on misinformation, including banning harmful and deceptive AIGC about elections, as CNET's Ian Sherr has reported. TikTok isn't the only social media company to adopt labeling standards to boost the transparency of AI content, with Meta and Google also putting in place their own AI labeling programs. But TikTok's influence in the US and around the world is growing, and it's now recognized as one of the main news-sharing platforms.

Meta Ray-Bans add a 'whimsical bonus' to everyday eyewear

Though CNET reviewers don't yet recommend personal AI gadgets like Humane's AI Pin or the Rabbit R1 handheld, there's one AI device that's won over CNET expert Scott Stein: Meta and Ray-Ban's AI-powered smart glasses. 

Starting at $299, Meta Ray-Bans were released in October with audio features and a camera. But new generative AI functions now add a "whimsical bonus" to everyday eyewear, Stein explained after wearing the smart glasses for six months.  

"They're practical and, oddly, transformative. I forget I have them on, and then, suddenly, I realize I've gotten used to them when I start talking to myself and tapping my glasses to snap photos on my normal glasses, and suddenly miss the extra features like a phantom limb," Stein wrote. "Between small wearable AI devices that are suddenly sprouting everywhere and large advanced mixed reality VR headsets, advanced smart glasses now seem like a happy middle. Something new, casual and often more useful than a smartwatch."

The Meta Ray-Bans come in different styles, sizes and colors. See for yourself.

Apple's new M4 chip starts the march to AI

The embarrassing "Crush" ad that Apple released to showcase its new iPad Pro may have undercut its marketing message (Apple apologized for the ad two days after its release and said the company won't be showing it on TV). But one takeaway from Apple's May 7 iPad event was its AI-enhanced M4 processor, which CNET's Andrew Lanxon noted "promises better machine learning performance for AI-based tasks."

Yet, beyond CEO Tim Cook saying recently that Apple's AI efforts will take advantage of the "transformative" power of AI, the company hasn't really said much about its gen AI work. Everyone is expecting big announcements at Apple's developers conference on June 10.

How Visa is using AI to catch credit card fraudsters

If you're one of the 42 million Americans who've been victims of identity theft, or you want a better sense of how hackers use brute-force, or "enumeration," attacks to co-opt your personal credit card information, check out this story from CNET AI reporter Lisa Lacy on how Visa uses fraudsters' AI tools to beat them at their own game.

Since 2019, the company has been using the Visa Account Attack Intelligence, or VAAI, tool to apply "deep learning technology to card-not-present transactions to help identify the financial institutions and merchants fraudsters are targeting," Lacy reported, referring to transactions (like those online) where no one has to hand a physical card to a merchant.

But starting in August in the US, Visa is adding "what it calls the VAAI Score to better determine the likelihood of enumeration attacks by assigning each transaction a risk score in real time. This score will help issuers make better decisions when it comes to blocking transactions."

What that means for you is that Visa, and no doubt other credit card companies, will soon be able to use gen AI to distinguish legitimate purchases from fraudulent ones, instead of auto-rejecting your purchases — including, Visa told Lacy, multiple transactions from the same seller.  

Copyright, licensing and gen AI: OpenAI, Reddit plot twists  

Even as publishers including The New York Times, The Intercept, the Chicago Tribune and the New York Daily News sue OpenAI and Microsoft over allegations that ChatGPT was trained in part by scraping the publishers' copyrighted content off the internet without permission or compensation, OpenAI continues to ink deals with notable media companies — and make efforts to address the copyright conundrum. 

Last week, Dotdash Meredith, which owns more than 40 notable media brands, including People, Better Homes & Gardens and InStyle, signed a deal to have its recipes, health and financial information, entertainment content and product reviews show up in ChatGPT answers, with a link to the original article. Its content will also serve as training data for OpenAI's GPT large language model.

Terms of the deal weren't disclosed, but there have been reports that OpenAI — which is rumored to want to go public later this year and to be eager to clean up the copyright mess hanging over its head — has paid out millions of dollars in licensing fees to publishers including the Associated Press, the Financial Times, Axel Springer (owner of Insider and Politico) and Le Monde.

The push to get more publishers on board rather than head to court is why OpenAI last week also previewed a new tool called Media Manager that, according to AI Tool Report, "will give content creators and owners greater control over whether or not their work can be used for training AI models, like ChatGPT.   

"It will use advanced ML research to build the first-of-its-kind tool that can identify copyrighted text, images, audio, and video across multiple sources, and allow its creators to specify if they want their work included or excluded from AI research and training," AI Tool Report said.

The site added that, "Although OpenAI recently argued that it would be 'impossible to create advanced AI models without copyrighted material', it seems that they're now willing to meet content creators in the middle and give them greater control and options over how and if their content is used for training purposes."

For its part, OpenAI says it wants to give publishers a way to "opt in" — which may or may not mean that OpenAI would pay to license their content for its chatbots, or even its search engine, in the future. "OpenAI pioneered the use of web crawler permissions for AI, enabling web publishers to express their preferences about the use of their content in AI. We take these signals into account each time we train a new model," the company wrote about its proposed Media Manager.

"We understand these are incomplete solutions, as many creators do not control websites where their content may appear, and content is often quoted, reviewed, remixed, reposted and used as inspiration across multiple domains. We need an efficient, scalable solution for content owners to express their preferences about the use of their content in AI systems."

As Axios noted, Dotdash Meredith's parent company, IAC, was trying to build a coalition of publishers to "fight for copyright protections from AI firms, but that effort ultimately collapsed due to conflicting business incentives within the industry."  

What does that mean? Expect OpenAI and Microsoft to bring up the licensing deals and Media Manager in The New York Times' copyright suit as examples of how they're addressing the concerns of copyright holders.

As an aside, OpenAI isn't the only toolmaker talking to publishers and licensing their content. Reddit, which just went public and announced its first earnings report, said it's been signing licensing deals with AI companies to diversify its sales beyond advertising. In its S-1 filing before going public, Reddit said it had already signed content licensing deals valued at $203 million, with terms of two to three years, according to Bloomberg.

One of those deals is with Google, which may be why you're noticing more Reddit user content at the top of search results. In its first quarter, Reddit said, licensing revenue totaled $20 million, Bloomberg reported.

Katy Perry didn't go to the Met Gala, but her AI fake dress fooled her mom

The annual Met Gala, as folks who follow it know, is all about who wore what on the red carpet and how well their outfits aligned with the gala's theme and dress code. The dress code, FYI, was "The Garden of Time," which refers to a short story by J.G. Ballard in which, according to Vogue, the owner of a castle cuts all the flowers out of his garden as an angry mob approaches.

It's not totally unexpected that photos from the event included convincing AI fakes of celebrities who didn't even attend, including Katy Perry, Rihanna and Billie Eilish, according to a report by NBC.

Perry told her followers on Instagram that she was working in the recording studio and couldn't make the New York event, so a photo of her in an ivory gown covered in flowers was obviously fake (more than 17 million people viewed the bogus Perry photo on X). Perry then called out her mom for getting taken in by the AI: "lol mom the AI got you too, BEWARE!"

"Those who look closely at Perry's viral photo may be able to find some telltale signs of AI creation — many pointed out a nonsensical arrangement of photographers in the background, as well as inconsistencies in the design of the carpet — but the images appear photorealistic at first glance," NBC said.

Yeah, there's something a little off about the photo. But I will say I thought Perry's fake AI dress was a fitting nod to the gala dress code. Maybe next year organizers will want to invite people to submit images for an all-fake AI red carpet for celebs not invited to the Gala. What do you say, Anna Wintour?