As deepfakes proliferate, nations struggle to respond.

Deepfake technology — software that allows people to swap faces, voices, and other characteristics to create digital forgeries — has been used in recent years to create a synthetic version of Elon Musk promoting a cryptocurrency scam, to digitally “undress” more than 100,000 women on Telegram, and to steal millions of dollars from businesses by imitating their executives’ voices on the phone.

In most of the world, authorities can do little in response. Even as the software grows more sophisticated and accessible, few rules exist to control its spread.

China aims to be an exception. This month, the country adopted expansive rules requiring that manipulated material carry digital signatures or watermarks and that deepfake service providers offer ways to “dispel rumours.”

But China confronts the same obstacles that have thwarted other attempts to regulate deepfakes: the worst abusers of the technology are notoriously difficult to apprehend, working anonymously, changing rapidly, and spreading their synthetic creations on borderless online platforms. China’s action has also brought to light a second reason why few nations have implemented guidelines: the fear that the government could exploit the laws to restrict free speech.

According to technology experts, Beijing could influence how other governments deal with the machine learning and artificial intelligence that power deepfake technology. With few precedents in the field, lawmakers around the world are looking for test cases to imitate or reject.

“The AI scene is an exciting place for global politics, because governments are fighting to determine who will set the tone,” said Ravit Dotan, a postdoctoral researcher at the University of Pittsburgh who directs the Collaborative AI Responsibility Lab. “We know that new legislation is on the way, but we don’t know what it will be, so there is a great deal of uncertainty.”

Deepfakes show tremendous potential in numerous areas. Last year, Dutch police reopened a cold case from 2003 by developing a digital avatar of the 13-year-old murder victim and releasing footage of him interacting with his family and friends in the present day.

The technology is also used for parody and satire, by online shoppers trying on clothes in virtual fitting rooms, by museums creating dynamic dioramas, and by actors who want to speak multiple languages in international film releases. Researchers from the Massachusetts Institute of Technology’s Media Lab and UNICEF used similar techniques to study empathy by transforming photos of North American and European cities into war-ravaged Syrian landscapes.

However, problematic applications abound. Legal experts worry that deepfakes could be used to undermine confidence in surveillance video, body camera footage, and other evidence. (A doctored audio recording submitted in a British child custody case in 2019 appeared to show a parent making violent threats, according to the parent’s attorney.)

Digital forgeries could discredit police officers, incite violence against them, or send them on fruitless manhunts. The Department of Homeland Security has also flagged cyberbullying, extortion, stock manipulation, and political instability as potential threats.

Some experts believe that within a few years, as much as 90 percent of online content will be generated artificially.

The increasing volume of deepfakes might lead to a situation where “people no longer have a shared reality, or could cause societal misunderstanding about whether information sources are reliable,” the European law enforcement agency Europol stated in a report last year.

Last year, British officials cited as a threat a website that “virtually strips women naked” and that was visited 38 million times in the first eight months of 2021. But neither in Britain nor in the European Union have proposed measures to regulate the technology become law.

In the United States, efforts to create a federal task force to study deepfake technology have stalled. The Defending Each and Every Person From False Appearances by Keeping Exploitation Subject to Accountability Act, or DEEP FAKES Accountability Act, was introduced by Rep. Yvette D. Clarke, D-New York, in 2019 and again in 2021, but it has yet to come to a vote. She has indicated that she will reintroduce the bill this year.

Clarke said her bill, which would mandate watermarks or identifying labels on deepfakes, was “a preventative precaution.” She characterised the new Chinese rules as “more of a control mechanism.”

“Many advanced civic societies know how this can be weaponized and destructive,” she said, adding that the United States had to be bolder in setting its own standards rather than following another country’s lead.

Clarke stated, “We absolutely do not want the Chinese to eat our lunch in the technology sector.” “We want to be able to establish the standard for consumer safeguards in the tech industry.”

However, law enforcement officials say the industry still cannot reliably detect deepfakes and struggles to police criminal uses of the technology. In 2021, a California attorney wrote in a law journal that certain deepfake rules had “an almost insurmountable feasibility problem” and were “functionally unenforceable” because of the ease with which often-anonymous abusers can cover their tracks.

Existing regulations in the United States mostly target political or pornographic deepfakes. Marc Berman, a Democrat in the California State Assembly who represents parts of Silicon Valley and has sponsored such legislation, said he was not aware of any lawsuits or fines brought to enforce his laws. But he said that, to comply with one of them, a deepfake app had removed the ability to imitate President Donald Trump before the 2020 election.

New York is one of a handful of states that outlaw deepfake pornography. While running for reelection in 2019, the mayor of Houston claimed that a critical ad from a rival candidate violated a Texas statute prohibiting some deceptive political deepfakes.

“Half of the benefit is encouraging people not to take things at face value,” said Berman.

However, even as technology experts, legislators, and victims push for stronger protections, they counsel caution. Deepfake laws, they said, risk being both overreaching and ineffective. And requiring labels or disclaimers on deepfakes intended as legitimate commentary on politics or culture could make the content appear less credible, researchers said.

Digital rights organisations like the Electronic Frontier Foundation are urging lawmakers to delegate deepfake policing to tech businesses or to employ an existing legal framework that tackles issues like fraud, copyright violation, obscenity, and defamation.

David Greene, a civil rights attorney for the Electronic Frontier Foundation, stated, “That is the best cure for harms, as opposed to government intervention, which in its implementation will nearly always capture non-harmful content, thereby chilling people’s lawful, productive expression.”

Several months ago, Google began prohibiting the use of its Colaboratory data analysis platform to train AI systems to create deepfakes. And in the fall, the company behind the image-generation tool Stable Diffusion released an update that hampers users’ ability to create nude and pornographic imagery, according to The Verge. Meta, TikTok, YouTube, and Reddit ban deepfakes that are intentionally deceptive.

However, rules and bans may struggle to contain a technology that is designed to continually adapt and improve. Researchers at the RAND Corporation demonstrated how difficult deepfakes are to identify by showing a set of videos to more than 3,000 test subjects and asking them to determine which were manipulated (such as a deepfake of climate activist Greta Thunberg denying the existence of climate change).

The group was wrong more than a third of the time. Even a subset of dozens of machine learning students at Carnegie Mellon University was wrong more than 20 percent of the time.

Microsoft and Adobe have launched initiatives to authenticate media and train moderation tools to spot the irregularities that characterise synthetic content. But they are in a constant race against deepfake producers, who frequently find new ways to fix flaws, remove watermarks, and alter metadata to cover their tracks.

“Deepfake producers and deepfake detectors are engaged in a technological arms race,” said Jared Mondschein, a physical scientist at RAND. “Until we develop methods to detect deepfakes more effectively, it will be difficult for any regulation to be effective.”

This piece was first published in The New York Times.
