Attacking Artificial Intelligence: AI's Security Vulnerability and What Policymakers Can Do About It (Belfer Center for Science and International Affairs)
In the face of AI attacks, today's dragnet data collection practices may soon be a quaint relic of a simpler time. If an adversary knows how an AI user collects data, the adversary can influence the collection process to attack the resulting AI system through a poisoning attack. The age of AI attacks therefore requires new attitudes toward data that stand in stark contrast to current collection practices.

Imperceptible attacks are highly applicable to targets the adversary fully controls, such as digital images or manufactured objects. For example, a user posting an illicit image, such as one containing child pornography, can alter the image so that it evades AI-based content filters while remaining visually unchanged to a human observer. This gives the attacker unfettered and, for all practical purposes, unaltered distribution of the content without detection.
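The content-filter scenario is an evasion attack: a small, targeted perturbation flips the model's decision while the input stays essentially unchanged. A minimal sketch against a toy linear filter follows; the weights, input values, and step size are all illustrative assumptions, not anything from the report:

```python
import math

# Toy linear "content filter": flag the input when sigmoid(w.x + b) > 0.5.
w = [2.0, -1.0, 3.0]   # hypothetical filter weights
b = -0.5

def score(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def evade(x, eps=0.3):
    # FGSM-style step: nudge each feature by eps in the direction that
    # most decreases the score (opposite the sign of its weight).
    return [xi - eps * math.copysign(1.0, wi) for xi, wi in zip(x, w)]

x = [0.5, 0.2, 0.3]
x_adv = evade(x)
print(score(x) > 0.5)       # True: the original input is flagged
print(score(x_adv) > 0.5)   # False: the perturbed input slips past
```

The same mechanics scale up to images, where a per-pixel budget of a few intensity levels is invisible to people but enough to flip a classifier.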
Many of AI’s harmful uses are already illegal, from criminal impersonation to cyberattacks and the release of dangerous pathogens. And there are certainly additional steps we can take to make it harder to use AI capabilities for ill. The AI executive order, for example, included a whole subsection about introducing standards for DNA synthesis companies to screen their orders for dangerous pathogens, thereby reducing the risk that AI systems are used to acquire bioweapons.
To combat this risk, the Federal Government will ensure that the collection, use, and retention of data is lawful, is secure, and mitigates privacy and confidentiality risks. Agencies shall use available policy and technical tools, including privacy-enhancing technologies (PETs) where appropriate, to protect privacy and to combat the broader legal and societal risks — including the chilling of First Amendment rights — that result from the improper collection and use of people’s data. Effective AI governance is necessary to balance innovation with ethics, safeguard individual rights, and foster healthy competition, ultimately creating a better future for all.
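The privacy-enhancing technologies (PETs) mentioned above span many techniques; one concrete example is the Laplace mechanism from differential privacy, which lets an agency release aggregate statistics while masking any individual's contribution. A sketch under assumed parameters (the epsilon value and the counting query are illustrative, not anything the order specifies):

```python
import random

def laplace(scale, rng):
    # The difference of two i.i.d. exponential draws is Laplace-distributed.
    return rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)

def private_count(true_count, epsilon, rng):
    # A counting query has sensitivity 1 (adding or removing one person
    # changes it by at most 1), so Laplace noise with scale 1/epsilon
    # yields an epsilon-differentially-private release.
    return true_count + laplace(1.0 / epsilon, rng)

rng = random.Random(0)
print(private_count(1000, epsilon=0.5, rng=rng))   # a noisy count near 1000
```

Smaller epsilon means stronger privacy and noisier answers; choosing it is a policy decision as much as a technical one.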
This can include using strong passwords, enabling two-factor authentication whenever possible, and regularly updating software and applications so they have the latest security patches. The General Data Protection Regulation (GDPR) in Europe is one especially important example, applying strict rules to the collection, storage, and use of personal data. It gives individuals substantial control over their information and requires organizations to obtain consent before processing it.

(b) This order shall be implemented consistent with applicable law and subject to the availability of appropriations. (c) This order is not intended to, and does not, create any right or benefit, substantive or procedural, enforceable at law or in equity by any party against the United States, its departments, agencies, or entities, its officers, employees, or agents, or any other person.
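The two-factor authentication mentioned above usually means a time-based one-time password: under RFC 6238, the second factor is an HMAC of the current 30-second time step, truncated to six digits. A self-contained sketch (the secret is the RFC 4226 test key, not a real credential):

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the big-endian 8-byte counter, then the RFC 4226
    # "dynamic truncation" step picks 31 bits and reduces them to digits.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30) -> str:
    # TOTP is just HOTP with the counter derived from the clock.
    return hotp(secret, int(time.time()) // step)

print(hotp(b"12345678901234567890", 0))   # RFC 4226 test vector: 755224
```

Both ends compute the same code from the shared secret and the clock, so a stolen password alone is not enough to log in.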
Projects and startups may find it difficult to navigate the evolving compliance landscape, since they lack the resources of larger, established companies. Andrew Ng believes that the right way to regulate is at the application layer, where regulations would be applied based on what AI is used for. For example, AI in biotech would face more stringent requirements than AI for copywriters.
- Commercial applications that use AI to replace humans, such as self-driving cars and Internet of Things devices, are putting vulnerable artificial intelligence technology onto our streets and into our homes.
- This includes the recent expansion of the IBM Cloud Security and Compliance Center—a suite of modernized cloud security and compliance solutions—to help enterprises mitigate risk and protect data across their hybrid, multicloud environments and workloads.
- Critical AI systems must restrict how and when the data used to build them is shared in order to make AI attacks more difficult to execute.
- In this case, care must be taken that adversaries cannot access or manipulate the models stored on systems over which they otherwise have full control.
- In the next decades, we will see the rise of AI especially in industries that rely extensively on personal data like healthcare and finance.
- The EO introduces a wide range of critical guidelines aimed at the privacy, security, and safety of AI technologies.
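On the bullet about models stored on adversary-controlled systems: one baseline defense is refusing to load a model file whose hash does not match a known-good digest recorded at deployment time. A minimal sketch; real systems would use signed manifests and hardware-backed keys rather than a bare digest:

```python
import hashlib
import hmac
import os
import tempfile

def file_digest(path: str) -> str:
    # Stream the file in chunks so large model files don't need to fit in RAM.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def safe_to_load(path: str, expected: str) -> bool:
    # Constant-time comparison avoids leaking digest prefixes via timing.
    return hmac.compare_digest(file_digest(path), expected)

# Demo with a stand-in "model file":
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"model-weights-v1")
trusted = file_digest(path)            # recorded at deployment time

print(safe_to_load(path, trusted))     # True: file is intact
with open(path, "ab") as f:
    f.write(b"backdoor")               # adversary tampers with the file
print(safe_to_load(path, trusted))     # False: the load is refused
os.remove(path)
```

This does not stop an adversary who can also rewrite the trusted digest, which is why the reference value must live outside the device they control.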
It’s imperative that any government oversight balance safety with open-source development as AI capabilities advance. Agencies need to secure the IP their contractors create in order to reuse, evolve, and maintain their models over time. They also need to enable collaboration across contractors, users, and datasets, and give contractors access to their preferred tools. Engaging with stakeholders is also critical to adequately address concerns about data privacy and security.
A major reason for using AI in government processes is that it can free up millions of labor hours, allowing government workers to focus on more important tasks and letting the government provide services to the public faster. But first, let’s take a look at the current state of artificial intelligence in government agencies. Even though the AI Bill of Rights is merely a guideline, there have been calls for the government to make it binding, at least in how it applies to federal agencies. If it does, it would affect every entity working with AI for the federal government, not just the agencies themselves.
The wide range of possibilities will be explored in more detail in the next section, which outlines some key use cases for deploying conversational AI in government. By understanding the various applications of technologies like conversational AI, hyperautomation, and total experience (TX), public sector organizations can start to envision how these tools could benefit their specific needs and operations as part of a digital transformation strategy. Microsoft in July also received FedRAMP High authorization, giving federal agencies that manage some of the government’s most sensitive data access to powerful language models, including ChatGPT. As the future of work moves toward an AI-powered digital workspace, it is becoming increasingly critical for government agencies to embrace this change to stay ahead of the curve and seize opportunities to enhance efficiency, drive innovation, and improve citizen services. Artificial intelligence (AI) is rapidly transforming businesses and industries, and the potential for AI in government is massive: it can automate tedious tasks, improve public services, and even reduce costs.
(d) Artificial Intelligence policies must be consistent with my Administration’s dedication to advancing equity and civil rights. My Administration cannot — and will not — tolerate the use of AI to disadvantage those who are already too often denied equal opportunity and justice. From hiring to housing to healthcare, we have seen what happens when AI use deepens discrimination and bias, rather than improving quality of life. Artificial Intelligence systems deployed irresponsibly have reproduced and intensified existing inequities, caused new types of harmful discrimination, and exacerbated online and physical harms.
To ensure that safety-critical AI systems are built on solid foundations, and to reduce the chance of accidents, widely used foundation models should be designed with a particular focus on transparency and predictable behavior. To address cybersecurity risks, systems that can identify novel software vulnerabilities should be used to patch exploits before they become available to hackers. Cyber-attacks and data breaches can cost organizations dearly, both in dollars and in customer trust.
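Finding vulnerabilities before attackers do is, in miniature, what fuzzing does: feed a target random inputs and record which ones crash it. A toy sketch; the buggy parser is contrived for illustration and not a real vulnerability:

```python
import random

def fragile_parse(data: str) -> int:
    # Bug: crashes on any input containing "0" because of a division.
    total = 0
    for ch in data:
        if ch.isdigit():
            total += 100 // int(ch)   # ZeroDivisionError when ch == "0"
    return total

def fuzz(target, trials=2000, seed=1):
    # Generate random strings, run the target, and collect crashing inputs.
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        data = "".join(rng.choice("0123456789abc") for _ in range(8))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, type(exc).__name__))
    return crashes

found = fuzz(fragile_parse)
print(len(found) > 0)   # True: the fuzzer stumbles onto the divide-by-zero
```

Production fuzzers such as coverage-guided ones are far smarter about input generation, but the workflow (generate, execute, triage crashes) is the same.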
Additionally, governments are establishing specialized agencies or departments responsible for overseeing data protection efforts. These entities monitor compliance with regulations, conduct audits, and enforce penalties for non-compliance. By having dedicated bodies focused on data privacy and security matters, governments ensure accountability and provide a mechanism for promptly addressing any breaches or lapses. International cooperation also builds trust among nations by demonstrating a shared commitment to safeguarding individuals’ rights while harnessing the potential benefits of AI technology.
Manual data entry and verification consume a lot of time and resources, making it difficult to provide quick services to the public. Let’s discuss some major AI applications that governments can leverage to improve public sector services. However, such talking, thinking computers and droids would need to be fully capable of human-like thinking, that is, to command artificial general intelligence (AGI) or artificial superintelligence (ASI). Neither AGI nor ASI has been invented yet, and neither is likely to appear in the foreseeable future.
This mandate offers guidance on watermarking and clearly labeling AI-generated content so Americans can easily tell which ads, content, and communications are authentic. If AI is applied effectively across government departments, it can have a positive impact on the healthcare, energy, economic, and agriculture sectors. However, without clear governmental guidelines, AI can jeopardize the well-being of citizens. For instance, an AI crowdsourcing tool developed by the Belgian technology company CitizenLab was used by Belgian authorities to understand public demands during the 2019 climate change protests. As a result, Belgium was able to define 15 priority climate action policies curated from public opinion.
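The watermarking guidance does not prescribe a scheme; one simple way to make an "AI-generated" label tamper-evident is to bind it to the content with a keyed MAC, so stripping or swapping the label breaks verification. A sketch in which the key, field names, and format are illustrative assumptions, not the mandate's actual mechanism:

```python
import hashlib
import hmac

KEY = b"publisher-signing-key"   # hypothetical key held by the publisher

def label(content: str, tag: str = "ai-generated") -> dict:
    # MAC over the tag and content together, so neither can change alone.
    mac = hmac.new(KEY, f"{tag}|{content}".encode(), hashlib.sha256).hexdigest()
    return {"content": content, "tag": tag, "mac": mac}

def verify(record: dict) -> bool:
    expected = hmac.new(
        KEY, f"{record['tag']}|{record['content']}".encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, record["mac"])

rec = label("Vote for clean energy!", "ai-generated")
print(verify(rec))               # True: label is intact
rec["tag"] = "authentic"         # relabeling the content as human-made
print(verify(rec))               # False: tampering is detected
```

Robust media watermarks that survive cropping and re-encoding are a much harder problem; this only covers metadata-style labels passed alongside the content.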
How would you define safe, secure, and reliable AI?
Safe and secure
To be trustworthy, AI must be protected from cybersecurity risks that might lead to physical and/or digital harm. Although safety and security are clearly important for all computer systems, they are especially crucial for AI due to AI's large and increasing role and impact on real-world activities.
These are defined as AI models with more than 10 billion parameters that are trained on large amounts of data and demonstrate high performance on tasks that pose a “risk to national security.” Organizations working on such models will have to report regularly to the federal government on their activities and plans.

Mattermost provides secure, workflow-centric collaboration for technical and operational teams that need to meet nation-state-level security and trust requirements. We serve the technology, public sector, national defense, and financial services industries, with customers ranging from tech giants to the world’s largest banks, to the U.S.

Microsoft continues to prioritize the development of cloud services that align with US regulatory standards and cater to government requirements for security and compliance. The latest addition to their tools is the integration of generative AI capabilities through Microsoft Azure OpenAI Service, which aims to enhance government agencies’ efficiency, productivity, and data insights.
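Because the reporting threshold is stated in raw parameter count, checking a model against it is simple arithmetic over its shape. The formula below (embeddings plus per-layer attention and MLP weights, ignoring biases and norms) and the GPT-3-like dimensions are rough public approximations, not the order's official counting method:

```python
def transformer_params(vocab: int, d_model: int, n_layers: int, d_ff: int) -> int:
    # Token embeddings plus, per layer, the four attention projection
    # matrices and the two MLP matrices. Biases and norms are omitted,
    # which changes the total by well under one percent at this scale.
    embed = vocab * d_model
    per_layer = 4 * d_model * d_model + 2 * d_model * d_ff
    return embed + n_layers * per_layer

# A GPT-3-scale shape, using approximate published dimensions:
n = transformer_params(vocab=50257, d_model=12288, n_layers=96, d_ff=4 * 12288)
print(n > 10_000_000_000)   # True: such a model would fall under the rule
```

By the same formula a GPT-2-small-sized model (12 layers, d_model 768) lands around a hundred million parameters, orders of magnitude below the reporting line.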
Another important step is fostering international cooperation on data privacy and security issues. Governments recognize that cyber threats are not confined within national borders; therefore, collaboration among countries becomes essential in combating these risks effectively. Sharing best practices, intelligence on emerging threats, and collaborating on cross-border investigations help strengthen overall cybersecurity defenses. As part of this work, the Secretary of State and the Administrator of the United States Agency for International Development shall draw on lessons learned from programmatic uses of AI in global development.
How can AI help with defense?
It can streamline operations, enhance decision-making and increase the accuracy and effectiveness of military missions. Drones and autonomous vehicles can perform missions that are dangerous or impossible for humans. AI-powered analytics can provide strategic advantages by predicting and identifying threats.
How can AI be secure?
Sophisticated AI cybersecurity tools can compute and analyze large sets of data, allowing them to learn activity patterns that indicate potential malicious behavior. In this sense, AI emulates the threat-detection aptitude of its human counterparts.
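A minimal version of that pattern-based detection: treat "normal" as the mean and spread of historical event counts, and flag any count more than three standard deviations out. The event counts below are synthetic, and real tools use far richer features than a single metric:

```python
import statistics

# Hourly counts of failed logins observed during normal operation.
baseline = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15]
mu = statistics.mean(baseline)      # 13.5
sigma = statistics.stdev(baseline)  # ~1.58

def is_anomalous(count: float, threshold: float = 3.0) -> bool:
    # Flag counts more than `threshold` standard deviations from the mean.
    return abs(count - mu) / sigma > threshold

print(is_anomalous(14))    # False: within the normal range
print(is_anomalous(90))    # True: consistent with a brute-force attempt
```

The learned-baseline-plus-deviation idea is the core of most anomaly detectors; production systems add seasonality, per-user baselines, and model-based scoring on top.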