Generative artificial intelligence (AI) resources (tools that use generative AI to create new content such as text, images, audio, and video in response to a user's request) are rapidly growing in popularity. Users beware: many AI tools do not protect the information entered into their prompt fields. In fact, these tools often use prompt contents to train the underlying AI model, which means details you enter may later surface in responses to other users. When it comes to generative AI, always be mindful of unintended data exposure, compliance violations, and the use of AI tools without the knowledge or oversight of the IT or security teams.
At Barry-Wehmiller, the only approved generative AI tools are Microsoft Copilot and Anthropic’s Claude.ai. If you have questions on how to access these tools, please contact your IT service desk.
When personally engaging with any AI, be mindful of sharing sensitive information such as:
- Credit card or bank statements: Sharing this data could lead to unauthorized access and threaten your financial identity.
- Medical records: Sharing personal health information could make your private health details accessible to others without your knowledge.
- Legal documents: Contracts and agreements are filled with critical and sensitive information that, when presented to an AI, may result in privacy violations and potential legal repercussions.
When using official BW AI business tools, be very careful about sharing proprietary software code and business plans or strategies.
- At work, even when using our approved AI tools (Claude.ai and Copilot), always be aware: Uploading sensitive and proprietary company information to an AI for assistance in creating documents or presentations can carry significant risks.
- Exposing confidential business details to unapproved AI tools poses potential legal risks and can provide competitors with valuable insights into Barry-Wehmiller.
Remember: AI platforms learn from the data they ingest and use that data to adapt over time. This makes them susceptible to manipulation through poisoned inputs, adversarial attacks, or flawed training sources. Unlike threats to traditional software, these can easily go undetected, quietly altering behavior or leaking sensitive information without causing obvious system failures. AI can be a helpful tool, but it carries risks that may not be immediately apparent.