Concerns about the increasing capabilities of chatbots trained on large language models, such as OpenAI’s GPT-4, Google’s Bard, and Microsoft’s Bing Chat, are making headlines. Experts warn of their ability to spread misinformation on a monumental scale, as well as the existential risk their development may pose to humanity. As if this weren’t worrying enough, a third area of concern has opened up, illustrated by the recent ban on ChatGPT in Italy for privacy reasons.
The Italian data regulator raised concerns about the model used by ChatGPT’s owner, OpenAI, and announced that it would investigate whether the company had violated strict European data protection laws.
Chatbots can be useful for work and personal tasks, but they collect a lot of data. AI also poses multiple security risks, including the ability to help criminals carry out more convincing and effective cyberattacks.
Are chatbots a bigger privacy concern than search engines?
Most people are aware of the privacy risks posed by search engines like Google, but experts believe that chatbots could consume even more data. Their conversational nature can catch people off guard and encourage them to give out more information than they would have entered into a search engine. “Human style can be disarming to users,” warns Ali Vaziri, legal director of the data and privacy team at law firm Lewis Silkin.
Chatbots often collect text, voice, and device information, as well as data that can reveal your location, such as your IP address. Like search engines, chatbots collect data such as social media activity, which can be linked to your email address and phone number, says Dr. Lucian Tipi, associate dean at Birmingham City University. “As data processing improves, so does the need for more information, and anything on the web becomes fair game.”
While the companies behind the chatbots say your data is needed to help improve services, it can also be used for targeted advertising. Every time you ask an AI chatbot for help, micro-calculations feed the algorithm to profile people, says Jake Moore, global cybersecurity adviser at software firm ESET. “These identifiers are analyzed and could be used to target us with ads.”
This is already starting to happen. Microsoft has announced that it is exploring the idea of bringing ads to Bing Chat. It also recently emerged that Microsoft staff may read users’ chatbot conversations, and the American company has updated its privacy policy to reflect this.
ChatGPT’s privacy policy “doesn’t seem to open the door to commercial exploitation of personal data,” says Ron Moscona, a partner at the Dorsey & Whitney law firm. The policy “promises to protect people’s data” and not share it with third parties, he says.
However, while Google also agrees not to share information with third parties, the technology company’s broader privacy policy allows it to use data to serve targeted advertising to users.
How can you use chatbots privately and securely?
It’s hard to use chatbots privately and securely, but there are ways to limit the amount of data they collect. It is a good idea, for example, to use a VPN such as ExpressVPN or NordVPN to mask your IP address.
At this stage, the technology is too new and unrefined to be sure it’s private and secure, says Will Richmond-Coggan, a data, privacy and AI specialist at law firm Freeths. He says “considerable care” should be taken before sharing any information, especially if the information is confidential or business related.
The nature of a chatbot means that it will always reveal information about the user, regardless of how the service is used, Moscona says. “Even if you use a chatbot through an anonymous account or a VPN, the content you provide over time could reveal enough information to be identified or tracked.”
But the tech firms that stand by their chatbot products say you can safely use them. Microsoft says its Bing Chat is “considerate about how it uses your data” to provide a good experience and “uphold the policies and protections of traditional Bing search.”
Microsoft protects privacy through technology like encryption and only stores and retains information for as long as it is needed. Microsoft also offers control over your search data through the Microsoft Privacy Dashboard.
ChatGPT’s creator, OpenAI, says it has trained the model to reject inappropriate requests. “We use our moderation tools to warn or block certain types of unsafe and sensitive content,” a spokesperson adds.
What about using chatbots to help with work tasks?
Chatbots can be useful at work, but experts recommend that you proceed with caution to avoid oversharing and infringing regulations such as the updated EU General Data Protection Regulation (GDPR). With this in mind, companies like JP Morgan and Amazon have prohibited or restricted staff use of ChatGPT.
The risk is so great that the developers themselves advise against its use. “We can’t remove specific prompts from your history,” the ChatGPT FAQ states. “Please do not share any sensitive information in your conversations.”
Using free chatbot tools for business purposes “can be unwise,” Moscona says. “The free version of ChatGPT does not offer clear and unambiguous guarantees on how it will protect the security of the chats or the confidentiality of the inputs and outputs generated by the chatbot. Although the terms of use acknowledge user ownership and the privacy policy promises to protect personal information, they are vague about information security.”
Microsoft says Bing can help with work tasks, but “we wouldn’t recommend entering sensitive company information into any consumer service.”
If you must use one, experts advise caution. “Follow your company’s security policies and never share sensitive or confidential information,” says Nik Nicholas, CEO of data consulting firm Covelent.
Microsoft offers a product called Copilot for business use, which inherits the more stringent security, compliance, and privacy policies of its Microsoft 365 business product.
How can I detect malware, emails, or other malicious content generated by bad actors using AI?
As chatbots become more integrated into the internet and social networks, the chances of falling victim to malware or malicious emails will increase. The UK National Cyber Security Centre (NCSC) has warned about the risks of AI chatbots, saying the technology that powers them could be used in cyberattacks.
Experts say that ChatGPT and its competitors have the potential to allow bad actors to build more sophisticated phishing email operations. For example, generating emails in multiple languages will be simple, so the telltale signs of fraudulent messages, such as poor grammar and spelling, will be less obvious.
With this in mind, experts advise more vigilance than ever before clicking links or downloading attachments from unknown sources. As usual, Nicholas advises, use security software and keep it up to date to protect against malware.
The language may be flawless, but a chatbot’s content can often contain factual errors or out-of-date information, which could be a sign of a non-human sender. Chatbots can also produce a bland, formulaic writing style, but this may help rather than hinder a bad actor trying to pass a message off as official communication.
AI-enabled services are emerging rapidly, and as they develop, the risks will worsen. Experts say the likes of ChatGPT can be used to help cybercriminals write malware, and there are concerns that sensitive information entered into chat-enabled services could leak onto the internet. Other forms of generative AI – AI capable of producing content such as voice, text, or images – could offer criminals the opportunity to create more realistic so-called deepfake videos, for example by imitating a bank employee asking for a password.
Ironically, it is humans who are best at detecting these types of AI-enabled threats. “The best protection against malware and bad actor AI is your own vigilance,” says Richmond-Coggan.