
AI exploitation ‘risks are real’


The rapid expansion and application of artificial intelligence, from defense and intelligence to media and information, have left the technology vulnerable to exploitation by malicious entities and individuals, a recent report warns.

Axar.az reports, citing DS.

The Malicious Use of Artificial Intelligence report – compiled by 26 experts from 14 institutions across academia, industry and civil society – identifies three particularly vulnerable areas: digital, physical and political.

The 100-page review names threats from terrorists and rogue states, ranging from weaponized drones to media manipulation to enhanced hacking, capable of inflicting significant damage on physical and financial security.

"We live in a world that could become fraught with day-to-day hazards from the misuse of AI and we need to take ownership of the problems – because the risks are real," Seán Ó hÉigeartaigh, executive director of the Centre for the Study of Existential Risk and co-author of the report, told the BBC.

Naming a few of the threats, Oxford research fellow Miles Brundage said, "AI will alter the landscape of risk for citizens, organizations and states – whether it's criminals training machines to hack or 'phish' at human levels of performance or privacy-eliminating surveillance, profiling and repression – the full range of impacts on security is vast."

To meet these risks, the researchers advise scientists and engineers to consider how misuse of their creations might be mitigated, and governments to put laws in place to protect the emerging technology from exploitation.

Both entities must work together to guard against threats, the report advises, and strive to involve more stakeholders in mitigation of the risks.

To design security mechanisms, the experts suggest examining how longer-established disciplines, such as computer security, have handled dual-use risks.

Though the "hype" surrounding AI's capacity for good has wowed the world over the past decade, hÉigeartaigh warns it's time to take its capacity for harm seriously.

"For many decades hype outstripped fact in terms of AI and machine learning. No longer. This report looks at the practices that just don't work anymore – and suggests broad approaches that might help: for example, how to design software and hardware to make it less hackable – and what type of laws and international regulations might work in tandem with this," hÉigeartaigh said.

Date
2018.02.21 / 17:25
Author
Axar.az
