On Wednesday, British cybersecurity firm Darktrace warned that the artificial intelligence chatbot ChatGPT may have increased the sophistication of phishing scams. The content-creation bot ChatGPT was launched by Microsoft-backed start-up OpenAI in November.
“Darktrace does not believe that ChatGPT has yet lowered barriers to entry for threat actors significantly, but it does believe that it may have helped increase the sophistication of phishing emails, enabling adversaries to create more targeted, personalized, and ultimately, successful attacks,” the firm said in a results statement.
Upon request, generative AI can wade through reams of data to produce original content, an image, a poem, or a thousand-word essay, in seconds. Moreover, ChatGPT had “ignited a discussion about the implications of generative AI for cyber security”, Darktrace noted. It added, however, that email attacks on its clients had held “steady” since the release of ChatGPT, with a decline in the number of those containing malicious links.
Yet it warned that the “linguistic complexity” of those emails, including punctuation, sentence length, and text volume, had increased.
“This indicates that cyber-criminals may be redirecting their focus to crafting more sophisticated social engineering scams that exploit user trust.”
Separately, Darktrace disclosed that its net profit dropped 86 percent to $581 million in the first half of its financial year, the six months to December. Its performance was hit by ballooning costs and tax charges.
Darktrace shares climbed 1.2 percent to 267.10 pence in London midday deals, but the stock is down 40 percent from a year earlier. The company, which uses cutting-edge artificial intelligence technology to combat cyber attacks, floated on the London stock market in 2021.
However, shares have tumbled over the last year and a half amid concerns over the group’s accounts, and after US private equity firm Thoma Bravo abandoned its takeover interest in 2022.