On the surface, it may seem like a tool that can be useful for a variety of work tasks. But before you ask the chatbot to summarize important memos or check your work for errors, it’s worth remembering that anything you share with ChatGPT could be used to train the system and may even surface in its responses to other users. That’s something several Samsung employees apparently should have kept in mind before sharing sensitive information with the chatbot.
Shortly after Samsung’s semiconductor division began allowing engineers to use ChatGPT, workers leaked confidential information to it on at least three occasions, according to reports. One employee reportedly asked the chatbot to check sensitive database source code for errors, another requested code optimization, and a third fed a recording of an internal meeting into ChatGPT and asked it to generate minutes.
Reports suggest that after learning of the security lapses, Samsung attempted to limit the scope of future missteps by restricting the length of employees’ ChatGPT prompts to one kilobyte, or 1,024 text characters. The company is also said to be investigating the three employees in question and building its own chatbot to prevent similar mishaps. Engadget has contacted Samsung for comment.
ChatGPT’s data policy states that, unless users explicitly opt out, their prompts are used to train its models. OpenAI, the chatbot’s owner, urges users not to share secret information with ChatGPT in conversations, as it “cannot remove specific prompts from your history.” The only way to get rid of personally identifiable information on ChatGPT is to delete your account.
The Samsung saga is another example of why it’s worth being cautious about what you share with chatbots, as perhaps you should be with all your online activity. You never really know where your data will end up.