In the shadow of the tragic terrorist attack in Moscow, the weaponization of artificial intelligence through deepfakes has been thrust into the global spotlight. A video, manipulated to show Ukraine's top security official making inflammatory remarks about the attack, has intensified the disinformation maelstrom, falsely implicating Ukraine in one of Russia's deadliest incidents in over a decade. The episode underscores the escalating dangers of deepfake technology, which can distort truth and inflame geopolitical tensions.
European Commission Vice President Vera Jourova brings to light the alarming efficiency with which artificial intelligence can fabricate realities. "It takes 30 minutes to create a deep fake," she stated, emphasising the technology's role in distorting political narratives across European elections. While the direct impact on election outcomes remains unclear, Jourova warns of the dire consequences should such tactics be employed at scale, potentially jeopardising the essence of free and fair elections. She is right about the dire consequences, but if anything overestimates the effort involved: a deepfake can be created in as little as five minutes.
The threat extends beyond the political sphere into individuals' local communities. A striking example is an incident in Baltimore, where a fabricated audio clip led to Pikesville High School principal Eric Eiswert being falsely accused of making derogatory comments about students.
Amidst these unsettling developments, the race for technological supremacy, especially between the United States and China, is accelerating AI development with little regard for the dire need for regulation. This competitive dynamic risks sidelining crucial regulatory measures aimed at mitigating AI's societal harms, such as job displacement, the proliferation of misinformation, and the disruption of democratic processes. Civil society organisations and academia warn that without comprehensive regulation, the potential for AI to inflict societal damage remains unchecked.
The Institute for Public Policy Research (IPPR) emphasises the urgency of addressing these challenges. A recent IPPR report indicates that 11% of job tasks could be automated today and that this figure could rise to 59% as AI technology advances; the need for proactive regulatory frameworks has never been more apparent.
For those eager to explore the complexities of AI governance, synthetic media regulation, and the global security implications of AI advancements, a collaborative space exists. Join us on Discord at https://discord.gg/32EXTWNEfe to share insights and contribute to a future where AI safeguards our security and democratic values and champions responsible innovation.
I omitted 'more opportunity' before ... darker machinations; my bad.
First, I am forced to address the fucked up Discord link; whose idea was that? My limited past experience with that platform was, and remains, annoying. I understand the app is mostly developed for gaming, but I was just redirected to the Galaxy Store without an account on the device - for grins I hit install and it DID!?!
End result: compounded concern. Which raises the question of credible reporting, or - surprise - an obvious opportunity for yet more blah blah about deepfakes, when deepfakes are the least of the issues with AI. The evidence (imo) suggests it's been surreptitiously training in the background, which, if so, accounts for its exponential advancement over the last 2 years. I'm outraged by the apparent lack of transparency and disturbed by the move into Wi-Fi-dependent hardware access, the invasive use of account credentials tying together device and user IDs, and how that facilitates not only data monetization but location tracking and geofencing - leading to patterned operational variables that enable manipulation and even darker machinations and motives. WTF is up? Is my perception of this crap accurate?