Although Microsoft is a major investor in OpenAI and uses GPT technology extensively, recent reports indicate that Microsoft instructed its employees to stop using ChatGPT internally, citing security concerns. The restriction extended to other AI tools as well. Microsoft's internal announcement stated: “Due to security and data concerns, a number of AI tools are no longer available for employees to use,” and named Midjourney and Replika among the affected tools.
A Microsoft spokesperson explained that the block stemmed from a test of endpoint control systems meant to regulate the use of large language models. Because of a configuration error, the restriction was applied to all employees across the company rather than to the limited test group intended.
TLDR: Microsoft temporarily blocked employee access to ChatGPT and other AI tools, citing security and data concerns. The block turned out to be an experiment with endpoint control systems that was misconfigured and rolled out company-wide.