Politics Has Responded To The Taylor Swift Deepfakes. But It Needs To Go Further.
The DEFIANCE Act & Its Discontents
When the deepfakes of Taylor Swift circulated on social media a week ago, it was widely predicted that this would be the event that placed deepfakes at the centre of the zeitgeist, the moment that turbocharged the salience of the issue. And sure enough, this proved to be the case. Though Taylor Swift herself has not publicly commented on the deepfakes, her private outrage at the events has been reported, and her titanic fanbase have mobilised behind the cause of legislation that would effectively ban deepfakes.
Within days of the incident, the machinery of Washington DC had begun to move. Senators Dick Durbin, Lindsey Graham, Josh Hawley & Amy Klobuchar sponsored the Disrupt Explicit Forged Images & Non-Consensual Edits Act (the DEFIANCE Act). A full draft of the bill is not yet publicly available. But a one-page summary from Durbin & Graham suggests a civil penalty “enforceable against individuals who produced or possessed the forgery with intent to distribute it; or who produced, distributed, or received the forgery, if the individual knew or recklessly disregarded that the victim did not consent to the conduct.”
This proposal is, to its credit, bolder than previous bills in prohibiting deepfake creation. Earlier proposals, such as the Preventing Deepfakes of Intimate Images Act, sought only to ban the publication and sharing of sexually explicit deepfakes. By seeking to punish individuals who ‘produced or possessed the forgery’, the DEFIANCE Act goes further. With deepfakes, the very production and possession of the content is by nature non-consensual, and the DEFIANCE Act is correct to recognise this as something that needs to be confronted.
But as you may have noticed, the DEFIANCE Act in its currently outlined form only prohibits deepfake creation when it is done with the intent of spreading and publishing the material. Tolerating deepfakes ostensibly created for private consumption ignores that: 1) sexually explicit deepfakes violate the subject’s privacy from the moment of creation, regardless of how widely they are circulated; & 2) the sheer speed with which deepfakes are proliferating means that widespread circulation may be inevitable even if the transmission and publication of deepfakes is nominally prohibited.
The speed of AI development means that policy proposals to deal with it are being revised and strengthened. Amy Klobuchar, one of the sponsors of the DEFIANCE Act, has already demonstrated this in her evolving proposals for dealing with the impact of AI on elections. In early 2023, Klobuchar sponsored the REAL Political Ads Act, which would mandate that political ads include disclaimers if they contain AI-generated content. But as of late 2023, clearly sensing the escalating power and risks of AI technology, she is now sponsoring the Protect Elections From Deceptive AI Act. Bolder and further-reaching, this act would ban deepfakes of candidates for federal office - whether those deepfakes are featured in political ads or made by private individuals.
Legislation will only be able to meaningfully stop deepfakes when it holds software developers and providers liable for the technology they create. Until laws require developers and providers to prevent their software from being used to create deepfakes, public policy may be helpless to stop their proliferation.
By the time legal action is taken, the damage has been done. Enforcing election interference crimes - in this case, deepfakes - after the election is useless.
Justice delayed is justice denied. This fact was masterfully exploited by Democrats from 2016 onward. It will continue.