Locks and Security News: your weekly locks and security industry newsletter
4th December 2024 Issue no. 732

Your industry news - first


Has the hacking underworld removed all of AI's guardrails?

The hacking underworld has removed all of AI's guardrails, with hackers now using generative AI large language models to help formulate highly targeted, text-based scams. But, at the same time, Gen-AI is also giving cyber defenders a helping hand.

Dr Ilia Kolochenko, CEO at ImmuniWeb and Adjunct Professor of Cybersecurity at Capital Technology University, commented:

“It seems that predictions about the unprecedented cybercrime surge, fueled by GenAI and fine-tuned malicious LLMs, are a bit exaggerated.

First, LLMs have a fairly narrow application in cybercrime, namely in phishing, smishing and vishing, BEC and whaling attacks – all of which rely on social engineering and human deception. GenAI provides little to no help with nationwide ransomware campaigns, disruptive attacks against critical national infrastructure (CNI), or advanced persistent threats (APTs) aimed at stealing classified information from governments or intellectual property from businesses. Organized cybercrime groups already have all the requisite skills, such as spear-phishing email creation and state-of-the-art malware development, and they produce work of substantially higher quality than any LLM.

Second, cyberattacks that exploit human deception have already been quite effective in the past. The cyber gangs behind them are unlikely to boost their success rate with a better-written email impersonating a CEO in a whaling attack. Moreover, an impeccably written email may instead raise doubts, since in business people frequently make typos or use jargon when communicating with their colleagues. Having said that, any authentication systems, for example in financial institutions, that are based on a client’s voice or appearance should urgently be tested for bypassability with fake AI-generated content. Employees susceptible to this kind of cyberattack should also be regularly trained to spot red flags and to require additional proof of identity to prevent fraud.”


More on the story here: https://www.cnbc.com/2024/03/11/cybercrime-underworld-has-removed-all-the-guardrails-on-ai-frontier.html

6th March 2024




© Locks and Security News 2024.
Subscribe | Unsubscribe | Hall of Fame | Cookies | Sitemap