
    How do we know when AI has gone rogue?


    An emergency road sign surrounded by orange cones on a deserted strip of highway

    Congress needs to better understand the capabilities of artificial intelligence to mitigate future risks.

    As the frontiers of artificial intelligence advance at a breakneck pace, the US government is struggling to keep up. Working on AI policy in Washington, DC, I can tell you that before we can decide how to manage frontier AI systems, we first need to see them clearly. Right now, we’re navigating in a fog.

My role as an AI Policy Fellow at the Federation of American Scientists (FAS) involves developing bipartisan ideas to improve the government's ability to analyze current and future systems. In this work, I interact with experts across government, academia, civil society, and the AI industry. What I've learned is that there is no broad consensus on how to manage the potential risks of breakthrough AI systems without stifling innovation. There is broad agreement, however, that the US government needs better information about AI companies' technologies and practices, and greater capacity to respond as both catastrophic and more insidious risks emerge. Without detailed knowledge of the latest AI capabilities, policymakers cannot effectively assess whether current regulations are sufficient to prevent misuse and accidents, or whether companies need to take additional steps to secure their systems.

When it comes to nuclear power or aviation safety, the federal government demands timely information from private companies in those industries to ensure the public's welfare. We need the same insight into the emerging AI field. Otherwise, this information gap could leave us exposed to unforeseen national security risks, or lead to overly restrictive policies that stifle innovation.

    Progress in Washington

Encouragingly, Congress is making slow progress in improving the government's ability to understand and respond to novel developments in AI. Since ChatGPT's debut in late 2022, AI has been taken more seriously by legislators of both parties and in both chambers on Capitol Hill. The House formed a bipartisan AI Task Force charged with producing recommendations on advancing innovation, including guidance on balancing national security and safety. Senate Majority Leader Chuck Schumer (D-NY) hosted a series of AI Insight Forums to gather outside input and lay the groundwork for AI policy. These efforts informed the bipartisan Senate AI Working Group's AI Roadmap, which outlined areas of consensus, including "developing and standardizing risk testing and assessment methods and processes" and an AI-focused data sharing and analytics hub.

Several bills have been introduced that would increase information sharing about AI and strengthen the government's response capabilities. The Senate's bipartisan AI Research, Innovation, and Accountability Act would require companies to submit risk assessments to the Commerce Department before deploying AI systems that could affect critical infrastructure, criminal justice, or biometric identification. Another bipartisan bill, the VET AI Act (which FAS has endorsed), proposes a system for independent evaluators to audit and verify AI companies' compliance with established guidelines, similar to existing practices in the financial industry. Both bills cleared the Senate Commerce Committee in July and could receive a floor vote in the Senate before the 2024 election.

Promising progress has also been made elsewhere in the world. In May, the UK and South Korean governments announced that most of the world's leading AI companies had agreed to a new set of voluntary safety commitments at the AI Seoul Summit. These commitments include identifying, assessing, and managing the risks associated with developing the most advanced AI models, drawing on the responsible scaling policies that companies pioneered over the past year, which provide a roadmap for future risk mitigation as AI capabilities evolve. The AI developers also agreed to provide transparency about their approach to frontier AI safety, including "sharing more detailed information that cannot be shared publicly with trusted actors, including their respective home governments."

    However, these commitments lack enforcement mechanisms and standardized reporting requirements, making it difficult to assess whether companies are complying with them.

Even some industry leaders have expressed support for increased government oversight. Sam Altman, CEO of OpenAI, emphasized this point in testimony before Congress early last year, saying, "I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening." Dario Amodei, CEO of Anthropic, has taken that sentiment a step further: after the publication of Anthropic's responsible scaling policy, he expressed hope that governments would turn elements of the policy into "well-designed testing and monitoring systems with accountability and oversight."

Despite these encouraging signs from Washington and the private sector, significant gaps remain in the US government's ability to understand and respond to rapid advances in AI technology. In particular, three critical areas need immediate attention: protections for independent AI safety research, an early warning system for emerging AI capabilities, and comprehensive reporting mechanisms for real-world AI incidents. Addressing these gaps is critical to protecting national security, fostering innovation, and ensuring that AI development advances the public interest.

A safe harbor for independent AI safety research

AI companies often discourage, or have even threatened to ban, researchers who identify safety flaws in their products, creating a chilling effect on essential independent research. This leaves the public and policymakers in the dark about possible dangers from widely used AI systems, including threats to US national security. Independent research is vital because it provides an external check on the claims made by AI developers, helping to identify risks or limitations that may not be apparent to the companies themselves.

One significant proposal to address this problem is for companies to offer legal safe harbors and financial incentives for good-faith AI safety and trustworthiness research. Congress could offer "bug bounties" and extend legal protections to AI safety researchers who identify vulnerabilities, as well as to experts studying AI platforms, similar to what has been proposed for social media researchers in the Platform Accountability and Transparency Act. In an open letter earlier this year, more than 350 leading researchers and advocates called on companies to provide such protections for safety researchers, but no company has yet done so.

With these protections and incentives in place, thousands of American researchers could be empowered to stress-test AI systems, allowing for real-time evaluation of AI products and systems. The US AI Safety Institute has included similar protections for AI researchers in its draft guidance, "Managing Misuse Risk for Dual-Use Foundation Models," and Congress should consider codifying these best practices.

An early warning system for emerging AI capabilities

The US government's current approach to identifying and responding to potentially dangerous capabilities of frontier AI systems is limited, and it is unlikely to keep pace if those capabilities continue to grow rapidly. Information gaps within the industry leave policymakers and security agencies unprepared to address emerging AI risks. Worse, the potential consequences of this disparity will compound over time as AI systems become both more capable and more widely used.

Establishing an AI early warning system would equip the government with the information it needs to get ahead of artificial intelligence threats. Such a system would create a formal channel for AI developers, researchers, and other relevant parties to report to the government AI capabilities that have both civilian and military applications (for example, capabilities that could aid bioweapons research or cybercrime). The Commerce Department's Bureau of Industry and Security could act as an information clearinghouse, receiving, triaging, and forwarding these reports to other relevant agencies.

This proactive approach would provide government stakeholders with up-to-date information about the latest AI capabilities, enabling them to assess whether current regulations are adequate or whether new safeguards are needed. If advances in AI systems increased the risk of a biological weapons attack, for example, the relevant parts of the government would be alerted immediately, allowing them to respond quickly to protect the public's welfare.

A reporting process for real-world AI incidents

The US government currently lacks a comprehensive picture of the adverse events in which AI systems have caused harm, hindering its ability to identify risky usage patterns, evaluate government guidelines, and respond effectively to threats. This blind spot leaves policymakers ill-equipped to develop timely and well-informed responses.

Establishing a voluntary national AI incident reporting hub would create a standardized channel for companies, researchers, and the public to confidentially report AI incidents, including system failures, accidents, misuse, and potential hazards. The hub would be housed at the National Institute of Standards and Technology, drawing on its existing expertise in incident reporting and standards setting while avoiding new mandates; this would encourage collaborative industry participation.

This real-world data on adverse AI events, combined with the forward-looking capability reporting and researcher protections described above, would enable the government to develop better-informed policy responses to emerging AI issues, and would further empower developers to better understand the threats they face.

The way forward

These three proposals strike a balance between oversight and innovation in AI development. By encouraging independent research and improving the government's visibility into AI capabilities and incidents, they can support both security and technological progress. The government can increase public confidence and potentially accelerate AI adoption by averting the kind of cross-sector regulatory backlash that can follow preventable high-profile incidents. And policymakers would be able to craft targeted regulations that address specific risks — such as AI-enhanced cyber threats or potential misuse of AI in critical infrastructure — while preserving the flexibility needed for continued innovation in areas such as health care diagnostics and climate modeling.

Passing legislation in this area will require bipartisan cooperation in Congress. Stakeholders from industry, academia, and civil society must advocate for and engage in this process, offering their expertise to refine and implement these proposals. There is a brief window for action: in the remainder of the 118th Congress, AI transparency policies could be attached to must-pass legislation such as the National Defense Authorization Act. The clock is ticking, and swift, decisive action now could set the stage for better AI governance for years to come.

    Imagine a future where our government has the tools to understand and responsibly guide AI development, and a future where we can harness the potential of AI to solve big challenges while staying protected against risks. This future is within our grasp — but only if we act now to remove the fog and sharpen our collective vision of how AI is created and used. By improving our collective understanding and oversight of AI, we increase the likelihood that this powerful technology will lead to beneficial outcomes for society.
