British officials are urging businesses to exercise caution when integrating artificial intelligence (AI) chatbots into their operations, citing concerns over potential risks. The National Cyber Security Centre (NCSC) in the UK has released two blog posts, highlighting the need for a better understanding of security issues associated with algorithms capable of generating human-like interactions, known as large language models (LLMs).
AI-powered chatbots, designed to mimic human conversations, have gained popularity for various tasks, from customer service interactions to aiding sales calls. However, the NCSC has pointed out that the intricacies of the security challenges posed by these AI algorithms have yet to be fully comprehended.
Instances of chatbots being manipulated to perform unauthorized actions or circumvent security measures have been documented by researchers. For instance, hackers could exploit an AI-powered chatbot to carry out unauthorized financial transactions by manipulating its responses.
In response to these concerns, the NCSC advises organizations to exercise prudence similar to what would be applied to experimental software when implementing LLMs. The center emphasized that organizations might opt not to involve LLMs in customer transactions and should approach their use with caution.
The proliferation of large language models, including examples like OpenAI’s ChatGPT, has led to their integration into various business functions. Yet, the security implications of AI technology are still being explored, with authorities in other countries, including the US and Canada, observing instances of hackers leveraging AI for malicious purposes.
A recent survey by Reuters/Ipsos revealed that a significant portion of corporate employees are already using AI tools for tasks like drafting emails, summarizing documents, and conducting preliminary research. While some companies have prohibited the use of external AI tools, others remain uncertain about their stance.
Oseloka Obiora, Chief Technology Officer at cybersecurity firm RiverSafe, warned against hastily adopting AI trends without thorough consideration. Obiora stressed the importance of assessing both the benefits and potential risks, coupled with the implementation of robust cybersecurity measures.
As the business landscape increasingly integrates AI technologies, the NCSC’s advisory serves as a reminder of the critical need to balance innovation with cybersecurity, ensuring that businesses can harness the potential of AI while safeguarding against potential vulnerabilities.