Microsoft calls for new laws on deepfake fraud, AI sexual abuse images

Happy Tuesday! I’m Gerrit De Vynck, a reporter covering Google and artificial intelligence, and I’m filling in for Cristiano today. Send news tips to: gerrit.devynck@washpost.com.

Tech giant Microsoft is calling on Congress to pass laws making it illegal to use AI-generated voices and images to defraud people and requiring AI companies to build tools to identify fake AI images made with their products.

The recommendations came as part of a 50-page document Microsoft released Tuesday laying out its broad vision for how governments should approach AI.

As lawmakers and regulators around the country debate how to regulate AI, the companies behind the new tech have released a plethora of suggestions for how they think politicians should treat the industry.

Microsoft, long accustomed to lobbying governments on issues that affect its business, has sought to position itself as proactive and helpful, aiming to shape the conversation and the eventual outcome of legislation by actively calling for regulation.

Smaller tech companies and venture capitalists have been skeptical of the approach, accusing bigger AI companies like Microsoft, Google and OpenAI of trying to get legislation passed that would make it harder for upstarts to compete with them. Supporters of legislation, including California politicians who are leading the country in trying to pass a wide range of AI laws, have pointed to how governments failed to regulate social media use early on, potentially allowing problems like cyberbullying and disinformation to proliferate unchecked.

“Ultimately, the danger is not that we will move too fast, but that we will move too slowly or not at all,” Microsoft President Brad Smith wrote in the policy document.

In the document, Microsoft called for a “deepfake fraud statute” specifically making it illegal to use AI to defraud people.

As voice- and image-generating AI improves, fraudsters have already begun using it to impersonate family members and trick people into sending money. Other tech lobbyists have argued that existing anti-fraud laws are enough to police AI and that the government doesn’t need extra legislation.

Microsoft split from other tech companies last year on a different issue, when it suggested the government should create a stand-alone agency to regulate AI, while others said the Federal Trade Commission and the Justice Department were capable of regulating AI.

Microsoft also called for Congress to force AI companies to build “provenance” tools into their AI products.

AI images and audio have already been used in propaganda and to mislead voters around the world. AI companies have been working on tech to embed hidden signatures into AI images and videos that can be used to identify whether the content is AI-generated. But deepfake detection is notoriously unreliable, and some experts question whether it will ever be possible to reliably separate AI content from real images and audio.

State governments and Congress should also update laws that address child sexual exploitation imagery and the creation and sharing of intimate images of people without their consent, Microsoft said. AI tools have already been used to make sexual images of people against their will and to create sexual images of children.

Government scanner

US border agents must get warrant before cellphone searches, federal court rules (TechCrunch)

Google’s Anthropic AI deal gets closer UK regulator scrutiny (Bloomberg)

New U.S. Commerce Department report endorses ‘open’ AI models (TechCrunch)

Hill happenings

Senators turn to online content creators to push legislation (Taylor Lorenz)

Low-income homes drop Internet service after Congress kills discount program (Ars Technica)

Inside the industry

Trump vs. Harris is dividing Silicon Valley into feuding political camps (Trisha Thadani, Elizabeth Dwoskin, Nitasha Tiku and Gerrit De Vynck)

TikTok has a Nazi problem (Wired)

Amazon Paid Almost $1 Billion for Twitch in 2014. It’s Still Losing Money. (Wall Street Journal)

Scammers target Middle East influencers with Meta’s own tools (Bloomberg)

Competition watch

Adobe, Canva losing users to ByteDance’s CapCut — especially on TikTok (Bloomberg)

Websites are blocking the wrong AI scrapers because AI companies keep making new ones (404 Media)

Trending

How Elon Musk came to support Donald Trump (Josh Dawsey, Eva Dou and Faiz Siddiqui)

A field guide on how to spot fake pictures (Chris Velazco and Monique Woo)

AI gives weather forecasters a new edge (New York Times)

Daybook

  • The Information Technology and Innovation Foundation hosts an event, “Can China Innovate in EVs?”, Tuesday at 1 p.m. at Rayburn House Office Building 2045.
  • The Consumer Technology Association hosts a conversation with the White House National Cyber Director, Harry Coker, Jr., Tuesday at 4 p.m. at CTA Innovation House.
  • The Senate Budget Committee holds hearings on the future of electric vehicles, Wednesday at 10 a.m. at 608 Dirksen Senate Office Building.
  • The Center for Democracy and Technology hosts a virtual event, “What You Need to Know About Artificial Intelligence,” Wednesday at noon.
  • Sens. Ben Ray Luján (D-N.M.) and Alex Padilla (D-Calif.) host a public panel, “Combatting Digital Election Disinformation in Non-English Languages,” Wednesday at 4 p.m. at Dirksen Senate Office Building, Room G50.
  • The U.S. General Services Administration hosts a Federal AI Hackathon, Thursday at 9 a.m.

Before you log off

That’s all for today — thank you so much for joining us! Make sure to tell others to subscribe to Tech Brief. Get in touch with Cristiano (via email or social media) and Will (via email or social media) for tips, feedback or greetings.
