This week the UK home secretary, James Cleverly, warned that the country's upcoming general election could be disrupted and undermined by the use of deepfakes. Although Cleverly had previously highlighted the risks of deepfakes at the AI Safety Summit, while still serving as foreign secretary, this was the first time a British home secretary had explicitly warned that British elections could be jeopardised by them.
Cleverly particularly emphasised the ability of Britain's adversaries to use deepfakes to shift the outcome of an election. But the wide availability of deepfake technology means that non-state actors - organised groups and lone individuals alike - also have the power to undermine and destabilise elections in the way Cleverly warns about.
Some AI companies are already taking measures to prevent their software from being used to deepfake politicians. At the Munich Security Conference earlier this month, 20 tech companies signed an accord to prevent AI from being used to undermine the many elections taking place across the world this year. Yet the accord has no binding power, is toothless without legislation to enforce it, and may amount to little more than a symbolic gesture. Measures of this sort cannot remain de facto, voluntary norms; they need to be enforced by public policy.
A further problem is that many of the AI models that can be used to create deepfakes are open source. Generative models already trained on vast amounts of data are publicly available on coding and developer platforms, unmediated by any API, and ready to be run by anyone willing to download them - including, of course, bad actors. This has not deterred some prominent figures in AI from advocating for apparently unlimited open sourcing of AI models as an appropriate and responsible path for AI innovation to take.
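To make concrete how low that barrier is, here is a minimal sketch of how an openly released generative model can be downloaded and run locally. It assumes Python with the Hugging Face diffusers library, and the model id shown is an illustrative open-weights checkpoint rather than anything named in this piece; the point is simply that nothing stands between the published weights and a working generator - no API, no account, no gatekeeper.

```python
# Minimal sketch: running an openly released generative model locally.
# Assumes the Hugging Face `diffusers` and `torch` packages are installed;
# the model id below is an illustrative open-weights checkpoint.
import torch
from diffusers import DiffusionPipeline

# Fetch the published weights directly from a public model hub.
# No API key, moderation layer, or usage policy mediates this step.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a single consumer GPU is enough

# With the weights downloaded, generation is a one-line call, entirely
# under the user's control - for good or ill.
image = pipe("a photorealistic portrait of a fictional politician").images[0]
image.save("output.png")
```

A hosted service can refuse prompts involving real politicians; a locally run copy of the weights enforces nothing.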
For public policy to protect elections effectively from malign interference - whether by adversarial states, organisations or individuals - it will need to hold accountable not just the users who create and spread deepfakes, but also the software developers and providers whose AI causes the harm.
Write to your representative to ask for effective anti-deepfake legislation.