Who is going to pay for AI Deep Fake Detection?

Having had some exposure to the pioneering work my Intel client is doing with their FakeCatcher initiative, I can plus-one the article The Times published this week on the innovative products being developed to detect machine-made content. The question is who is going to pay for these misinformation solutions. More than ten years ago, after he left Microsoft and before he became Rush Limbaugh’s heir apparent, Todd Herman was serving as the lead digital strategist for the Republican National Committee when he shared an innovative new marketing technique with me. Activists had discovered they could achieve mass reach simply by seeding cross-linked stories in the blogosphere and then prompting mainstream media to cover the resulting “people are saying” phenomenon. It seems inevitable that this culture-shaping strategy will be amplified with deep fake content.

The recent challenges at Vice and BuzzFeed News indicate content companies are not looking to take on additional costs to monitor and scrub their content. Anyone who has been following Eric Picard and other emerging AI artists can attest that machine-made content can be really cool and generate a lot of engagement. I’m sure there is a niche segment of consumers who will pay for their own personal misinformation-detection tools, but resistance to micropayments and expectations of free content make it unlikely that will become widespread behavior. That leaves advertisers to pick up the AI fake-detection tab. I am sure advertisers will issue Brand Suitability policies virtue-signaling their preferences. Enforcement will likely get added to the Brand Safety/Verification bill and only really impact the immediate environment surrounding the ad. More importantly, it’s hard to see how paying extra to avoid advertising near machine-made content will lift the performance of the ad.

From the publisher, consumer, and advertiser angles, it is hard to see who has an incentive to pay for deep fake detection. Without a market incentive to self-regulate, pressure to modify Section 230 in the US will only grow. The simple approach would be to make for-profit companies liable for what they monetize: if a company makes money from distributing, or from studying the consumption of, content that foments illegal behavior or defames someone, then that company should be liable for the outcomes of its business practices.

Buckle your chinstrap!

Lior Derry

VP sales | IT\IoT\OT Network Security | Cyber Crime Expert | GTM Leader | M&A Expert | Board Member & Mentor | Strategic Thinker | Skipper | Innovation Enthusiast

11mo

Interesting. Security techniques need to merge into a zero trust strategy.

Molly Ford

SaaS, AI/Machine Learning, Strategy, Marketing, Media, Business Development Executive

1y

Brian, I agree with your opinion, which you articulated so well.

Catherine Warburton

Media Industry Leader/Advisor/Video/Audio/OOH/Digital Platforms/Performance/Brand/Agency/Sales/Mentor

1y

AI is such an interesting area that seems like it just popped up, but of course it didn't. AI has been developing all along. There are so many great applications, but dangers too! Thx Brian for sharing your thoughts...

Bryan Leach

Founder and CEO at Ibotta, Inc.

1y

I’d like to discuss this more with you, as it’s been much on my mind.
