Science, Innovation and Technology Secretary Peter Kyle will use the Munich Security Conference as a platform to rename the UK AI Safety Institute the AI Security Institute.
According to a statement from the Department for Science, Innovation and Technology, the new name reflects the institute's focus on how AI can be used to carry out cyber attacks and enable crimes such as fraud and child sexual abuse.
The government said the AI Security Institute will not focus on bias or freedom of speech, but on advancing understanding of the most serious risks posed by AI technology. It said protecting UK national security and protecting citizens from crime would be the founding principles of the UK’s approach to the responsible development of artificial intelligence.
Kyle set out his vision for the rebranded AI Security Institute in Munich just days after the end of the AI Action Summit in Paris, where the UK declined to sign a declaration on inclusive and sustainable artificial intelligence (AI). According to the statement, he also unveiled a new agreement struck between the UK and AI company Anthropic.
According to the statement, the partnership will be driven by the UK’s new sovereign AI unit, with both sides working closely together to realise the technology’s opportunities while maintaining a focus on the responsible development and deployment of AI systems.
The UK will seek further agreements with “major AI companies” as a key pillar of the government’s growth-focused Plan for Change.
Kyle said: “The changes I am announcing today represent the next logical step in how we approach responsible AI development, helping us to unlock AI and grow the economy as part of our Plan for Change.
“The work of the AI Security Institute will not change, but this renewed focus will ensure our citizens, and those of our allies, are protected from those who would use AI against our institutions, democratic values and way of life.
“The main job of any government is to ensure its citizens are safe and protected. The expertise our AI Security Institute can bring to bear will put the UK in a stronger position than ever to tackle those who would use this technology against us.”
The AI Security Institute will work with the Defence Science and Technology Laboratory, the Ministry of Defence’s science and technology organisation, to assess the risks posed by what the department calls “frontier AI”. It will also draw on expertise from across the national security community, including the Laboratory for AI Security Research (LASR) and the National Cyber Security Centre.
The AI Security Institute will also launch a new criminal misuse team, which will work with the Home Office to conduct research on a range of crime and security issues. One focus will be tackling the use of AI to create child sexual abuse images. The new team will explore how to prevent abusers from using AI to commit crimes, supporting previously announced work to make it illegal to own AI tools optimised to create child sexual abuse images.
Ian Hogarth, chair of the AI Security Institute, said: “The institute’s focus from the start has been on security, and we have built a team of scientists focused on assessing serious risks to the public. Our new criminal misuse team and deepening partnership with the national security community mark the next step in tackling these risks.”
Dario Amodei, CEO and co-founder of Anthropic, added: “We look forward to exploring how Anthropic’s AI assistant Claude could help strengthen public services, with the aim of discovering new ways to make vital information and services more efficient and accessible for UK residents.
“We will work closely with the UK AI Security Institute to research and evaluate AI capabilities in order to ensure secure deployment.”