A new important paper published in Science, “Regulating Advanced Artificial Agents”, explains why technical experts and policymakers are raising concerns about the risks of advanced AI systems, particularly those capable of circumventing safeguards and eluding human control. This concern focuses on Reinforcement Learning (RL) agents and Long-Term Planning Agents (LTPAs), which can plan over long horizons more effectively than humans, potentially leading to efforts to remove humans from decision-making loops to maximise their rewards.
The paper, co-authored by Yoshua Bengio and Stuart Russell, argues that empirical safety testing of LTPAs is either dangerous or uninformative because if a test provides an LTPA with the opportunity to thwart control, it poses a risk. If it doesn’t, the test won’t reveal the system’s capacity for such behaviour. Thus, it suggests that governments need to establish regulatory bodies with the authority to prevent the creation of dangerously capable LTPAs, emphasising the need for international cooperation in regulating the resources used to develop such AI systems. In fact, the paper calls for the establishment of new regulatory institutions specifically tasked with addressing the existential risks of advanced AI agents, emphasising that the challenge of maintaining control over advanced LTPAs necessitates new forms of government intervention beyond existing regulatory proposals.
All of this is happening while, within a span of just 12 hours, OpenAI, Google, and the French AI startup Mistral unveiled new iterations of their leading-edge AI models, signalling an upcoming surge of developments in the sector over the coming summer months. This week, Meta disclosed plans to launch Llama 3, while OpenAI, with Microsoft's support, teased the forthcoming introduction of GPT-5. Joelle Pineau, Meta's Vice-President of AI Research, shared insights into the advancements being pursued: "We are hard at work in figuring out how to get these models not just to talk, but actually to reason, to plan... to have memory."
The integration of artificial intelligence into the finance and trading industry, while marking a significant advancement, introduces potential risks, notably the challenge of identifying and combating deepfakes. Michael Lashlee, the Chief Security Officer at Mastercard, sheds light on this issue in the context of the company's 2024 Mastercard Signals report, which outlines key tech trends expected to transform commerce, including finance and retail trading, over the next few years. Furthermore, AI is set to revolutionise software development within the financial sector, assisting in coding, software architecture design, and testing. This advancement promises more sophisticated financial software and trading platforms, offering traders enhanced analytics and the ability to execute complex strategies more efficiently.
However, AI's advancements also bring the peril of deepfakes – highly realistic and manipulative media content created for deception and impersonation. According to Mastercard, many businesses have already been targeted by identity fraud through deepfakes, usually via voice fraud and video manipulation. These deceptive practices threaten retail traders, potentially leading to market manipulation and financial losses through the spread of false information. Lashlee emphasises the growing risk of deepfakes and the importance of public awareness and internal education within organisations to mitigate these threats. As cybercrime costs are projected to soar, Mastercard leverages AI to protect consumers from fraud, highlighting the critical role of this technology in defence mechanisms.
The rapid expansion of AI technology has prompted global governments to implement laws and regulations aimed at curbing its potential risks. To gauge public opinion and analyse the global regulatory landscape, a comprehensive study involving a survey of 2,000 US residents and an examination of regulations across 195 countries was conducted by AuthorityHacker. This study aimed to understand governmental progress in AI regulation and public expectations towards mitigating AI's growing dangers, such as scams, deepfakes, disinformation, job loss, autonomous weaponry, and the prospect of uncontrollable self-aware AI systems.
The study found a significant public demand for strict AI regulations, with 79.8% of respondents advocating for stringent controls at the expense of technological innovation. While 55.5% found current regulatory measures somewhat effective, only a small fraction considered them very effective, indicating room for improvement. There was also a strong preference for a global regulatory approach, with 82.3% favouring international standards or a combination of international and local laws. Concerns about privacy and the use of personal data in AI training were particularly high, alongside a call for AI companies to pay royalties for using copyrighted content in AI model training.
Globally, nearly two-thirds of countries are actively working on AI regulations, categorised into comprehensive regulation enacted, active regulatory development, initial regulatory efforts, and no AI regulation. The EU and China are leading the charge with their distinct regulatory goals – the EU focusing on minimising social harms and China on reasserting state control. The US and the UK are in stages of active development, with President Biden's Executive Order marking a significant step towards safeguarding against AI risks.
This research highlights a strong public desire for comprehensive, globally coordinated AI regulations and international collaboration to establish global standards and practices, ensuring AI's safe, ethical, and innovative advancement across borders.
If you want to delve deeper into the challenges of AI governance, the regulation of synthetic media, and the global security implications of AI advancements, join us on Discord at https://discord.gg/32EXTWNEfe. Here, we can collaborate, share insights, and contribute to shaping the future of AI in a manner that safeguards our security and democratic values and fosters responsible innovation.