Even worse: It’s a compliance nightmare!
Classified information leaking in this way is a one-off situation that might get an individual in trouble. If someone at a heavily regulated company uploads the wrong thing, though, that can cause major disruptions to commercial services while the regulators investigate. Not just fines or prosecutions after the fact!
Here’s why it’s a big deal: Nearly every organization allows employees to use google.com. That necessitates allowing POSTs to google.com, and from a filtering perspective that makes the data leakage nearly impossible to prevent. The best you can do is limit the POST size.
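For what that lever looks like in practice, here’s a minimal sketch assuming Squid as the filtering proxy (the directive is Squid’s; other proxies have rough equivalents, and the 64 KB figure is purely illustrative, not a recommendation):

    # Cap request bodies globally: bulk file uploads get rejected,
    # while ordinary search-form POSTs (a few hundred bytes) still pass.
    # Note this can't tell a "safe" POST from a leaky one - it only bounds size.
    request_body_max_size 64 KB

That’s about the extent of it: you can bound how much goes out in one request, but you can’t inspect intent.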
Having said that, search forms in general always pose a third-party information disclosure risk, but when you enable uploading entire files instead of just short text prompts, you increase the risk surface by an order of magnitude.
My organization seems to have already thrown in the AI towel, or at least is resorting to magical thinking about it.
We’re highly integrated with Microsoft - Windows Login, Active Directory, Microsoft 365, and even a managed version of Edge as the org-wide ‘default’ browser that we’re encouraged to sign into with our organizational credentials to sync account information, etc. Our AI policy is basically “You can use any Microsoft AI feature your account can access.”
They can try to block whatever sites they want with the firewall, but once you let users get comfortable with the idea of allowing systems to exfiltrate data, you aren’t going to also make them more discreet. They’re trusting that, by throwing open the floodgates, users will actually use Microsoft’s offerings instead of competing ones, as if folks who sometimes still can’t tell the difference between a web browser and ‘the internet’ will know the difference. And they’re also trusting that Microsoft will uphold our enterprise license agreement and its own security to keep that data within our own cloud instance.
Boy howdy, this will be interesting.