AI in the Workplace: Guidance for Organizations
ChatGPT and a wave of other AI tools have taken off at astonishing speed over the past few months. While these tools can help increase productivity, expedite mundane tasks, and spark creativity in the workplace, your staff may be using them in ways that unintentionally create risk for your organization.
Our top concern is that staff are using unauthorized tools with data classified as confidential or private/internal under your organization’s data classification policy. Reports that Samsung staff did exactly this made headlines earlier this month, and anecdotal evidence suggests the problem is widespread.
In March, OpenAI confirmed reports of a bug (which the company says has since been fixed) that allowed some users to see other users’ chat histories.
These are just a few examples that have emerged over the past few months; as usage becomes even more widespread, and AI tools are incorporated into more third-party platforms, we expect further disruptions and an uptick in security-related issues.
Security Risks with ChatGPT
- Data leakage: Confidential data or internal intellectual property could be leaked to unauthorized parties, including attackers who could leverage this data for malicious purposes.
- Legal and compliance risks: Depending on the nature of your data, your organization may find itself in violation of regulatory obligations or contractual agreements with clients, which could result in fines or legal action.
- Reputational risks: A leak of confidential data can cause lasting reputational harm. Will clients, customers, donors, investors, or other stakeholders still trust your organization after a breach?
Recommendations
To minimize and mitigate these security risks with ChatGPT and other AI platforms, FireOak strongly recommends that every organization establish clear guidelines for staff on the use of AI platforms, including ChatGPT. These guidelines should align with your organization’s Data Classification Policy (DCP) and should spell out how employees can (and can’t) use AI tools, and which platforms have been authorized for which purpose(s).
If your organization doesn’t have a data classification policy, now is the time to create one!
To be clear, we are not suggesting that all organizations immediately discontinue the use of AI tools. There are plenty of legitimate use cases and secure mechanisms for incorporating AI into workflows, one of which is sketched below. But be transparent about what’s acceptable and what’s not, and explain, at a high level, the reasoning behind this guidance.
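To make this concrete, here is a minimal sketch of one kind of guardrail an organization might layer on top of its written guidelines: a pre-submission check that blocks prompts containing the restricted classification labels defined in your DCP. The labels, function names, and workflow below are illustrative assumptions, not a prescription; a real deployment would pair a check like this with DLP tooling, PII pattern matching, and staff training.

```python
import re

# Hypothetical classification markers; replace with the labels defined
# in your organization's data classification policy (DCP).
RESTRICTED_MARKERS = re.compile(
    r"\b(CONFIDENTIAL|INTERNAL ONLY|RESTRICTED)\b", re.IGNORECASE
)

def is_safe_to_submit(text: str) -> bool:
    """Return True only if the text carries no restricted-data markers.

    This is a naive keyword check for illustration only; it will not
    catch sensitive content that lacks an explicit label.
    """
    return RESTRICTED_MARKERS.search(text) is None

def submit_to_ai_tool(prompt: str) -> None:
    """Gate outbound prompts before they reach an external AI platform."""
    if not is_safe_to_submit(prompt):
        raise ValueError(
            "Prompt appears to contain restricted data; blocked per "
            "the organization's AI usage guidelines."
        )
    # ...send the prompt to an approved AI platform here...
    print("Prompt passed the classification check; submitting.")

if __name__ == "__main__":
    submit_to_ai_tool("Summarize this public press release for me.")
```

The point of a check like this isn't to be airtight; it's to turn the guidelines in your DCP into a speed bump that catches obvious mistakes before data leaves your environment.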
References
Bloomberg Law – Employers Should Consider These Risks When Employees Use ChatGPT
CSO – Sharing sensitive business data with ChatGPT could be risky
TechRadar – Samsung workers made a major error by using ChatGPT