That’s a bummer. We’re strictly regulated, and stuff like this needs to be self-hosted or we can’t use it.
I’ve worked for regulated companies too, and I suspect most of them would forbid such tools even when they aren’t a dependency for building the software, especially if the source code is sent to a third party. IMHO, every tool should be mirrored locally (e.g. on a proxy server) so that the whole project can be recreated in a few minutes should something bad happen (or from scratch in a CI). As long as these AI tools rely on private companies on the internet, I wouldn’t use them.
They do say this, though: “We also plan to support local and on-premises models. For local models, the supported feature set will most likely be limited.”
This is currently a no-go at my place (I asked), but the AI and security folks were interested in it, as it would allow on-prem/private-cloud usage as well as the possibility of using targeted models instead of a generic one.
For example, in the comments on their announcement, they confirm they are looking at Azure AI support.