Privacy

ChatGPT Privacy Concerns That Are Troubling Users

A look into ChatGPT privacy concerns and ways to tackle them


When OpenAI launched ChatGPT in November 2022, it shook industries and has been the talk of the town in the mainstream media ever since.

For the ones living under a rock, ChatGPT is a chatbot that, when supplied with a thoughtfully constructed prompt, can write essays, presentations, articles, and computer code. It draws on the knowledge encoded in its training data to produce the most relevant answer, composed as human-readable prose or as a computer script.

Even though some of its replies contain untrustworthy, wrong, obsolete, or illogical information, its supporters argue that it can produce a solid draft or code script for a human to review and build on. That appeal made ChatGPT a 100-million-user app just two months after its introduction.

Yet it's not the output that's keeping some security and compliance officers up at night; it's what's going in.

The Data Challenge

According to research by the data security firm Cyberhaven, which examined ChatGPT usage among 1.6 million users, as many as 6.5% of employees entered business data into the application, and 3.1% copied and pasted sensitive material into it.

For example, in early April, two programmers in Samsung's Korean semiconductor division pasted sensitive, bug-ridden source code into ChatGPT and asked the AI to detect and fix the problems.

Soon after, a third employee pasted meeting notes into the application and requested a summary. The incidents were flagged to management as ChatGPT privacy concerns, highlighting the dangers of disclosing confidential material, and management responded by capping each employee's ChatGPT prompts at 1,024 bytes.
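A cap like that is simple to enforce at the network edge. Here is a minimal sketch in Python of how such a check might look; the function name and the idea of running it in an egress proxy or browser extension are illustrative assumptions, not Samsung's actual implementation:

```python
MAX_PROMPT_BYTES = 1024  # the per-prompt cap reportedly imposed by Samsung

def prompt_within_cap(prompt: str) -> bool:
    """Return True if the UTF-8 encoded prompt fits under the byte cap.

    A hypothetical check that an egress proxy or browser extension
    could run before a prompt is allowed to leave the corporate network.
    """
    return len(prompt.encode("utf-8")) <= MAX_PROMPT_BYTES

# A short question passes; a long block of pasted source code does not.
print(prompt_within_cap("Summarize these meeting notes"))  # True
print(prompt_within_cap("x = 1\n" * 300))                  # False (1,800 bytes)
```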

Stopping this risky user behavior is challenging. Organizations cannot even gauge the severity of the problem: traditional security tools can monitor files uploaded to and downloaded from the internet, but they cannot see the information users copy and paste into a ChatGPT browser window.

Furthermore, private information cannot be identified and blocked as readily as credit card or Social Security numbers. Without knowing more about the input's context, today's security technologies cannot distinguish someone entering the cafeteria menu from someone entering the company's M&A ambitions.
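To see why, consider how pattern-based data loss prevention typically works. The sketch below is a hypothetical Python illustration with deliberately simplified regexes: it catches structured identifiers like Social Security and payment card numbers, but free-text secrets sail straight through.

```python
import re

# Structured identifiers have predictable shapes that regexes can match.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")          # e.g., 123-45-6789
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")        # 13-16 digits, optional separators

def flag_structured_data(text: str) -> list[str]:
    """Return labels for any structured identifiers found in the text."""
    hits = []
    if SSN_RE.search(text):
        hits.append("possible SSN")
    if CARD_RE.search(text):
        hits.append("possible card number")
    return hits

# A cafeteria menu and an M&A memo look identical to the detector:
print(flag_structured_data("Today's special: tomato soup"))            # []
print(flag_structured_data("Draft term sheet: we plan to acquire Acme"))  # []
print(flag_structured_data("Customer SSN: 123-45-6789"))               # ['possible SSN']
```

Catching the M&A memo would require understanding what the words mean, not just how the characters are shaped, which is exactly the context today's tools lack.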

At the same time, malicious actors are already abusing the app. The model can be used to develop and disseminate false information, and hackers are already using the bot to craft more convincing malware and phishing schemes.

Because of this, organizations like Amazon, Walmart, Accenture, Verizon, JPMorgan Chase, and many more have forbidden their staff from using the tool, citing common ChatGPT privacy concerns. Governments have reacted to the shift as well, advising on responsible AI policies that address issues like informed consent, privacy, data security, fairness, and transparency. For instance, the White House recently summoned Microsoft CEO Satya Nadella and Google CEO Sundar Pichai to an urgent meeting.

Final Words

Given its advantages, it is tough to prevent employees from using ChatGPT altogether. Instead, you should begin by proactively training your staff on the risks of sharing corporate data with ChatGPT.

Bivek Minj graduated from the Indian Institute of Mass Communication with a degree in English Journalism. He serves as a Content Writer in ZL Tech India's Marketing department. He comes to the industry with a desire to learn and grow.