
Contrast Responsible AI Policy Project: Keeping your business safe in the AI era

Contrast Security is announcing the launch of the Contrast Responsible AI Policy Project, a pioneering initiative in the realm of Artificial Intelligence (AI) utilization. In our commitment to democratize responsible AI practices, we are open-sourcing our company’s internal AI policy under the Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0) license.

AI is no longer just a concept. It is embedded in our everyday lives, powering a vast array of systems and services, from personal assistants to financial analytics. As with any transformative technology, it is imperative that its use be governed by thoughtful and comprehensive policies to mitigate potential risks and ethical dilemmas.

The Contrast Responsible AI Policy Project is a testament to our belief in transparency, cooperation and shared growth. As AI continues to evolve, we need to ensure that its potential is harnessed in a responsible and ethical manner. Having a clear, well-defined AI policy is essential for any organization implementing or planning to implement AI technologies. 

Why now? 

The excitement over AI has been compared to a Gold Rush. 

AI is everywhere: It's embedded into browsers, bringing us personalized shopping via AI-powered assistants and powering recommendation engines; in the driverless vehicles now cruising our roads; in our email inboxes, where it filters out spam; in the facial recognition that unlocks our devices; in human resources, where machine learning helps to scan job candidates' profiles; and far more.

In using AI in such applications, we often share our personal and company information in order to grant the technology the autonomy to act on our behalf as a virtual assistant. But sharing data creates risks of data breaches, identity theft and intellectual property exposure, given the uncertainty around how AI applications collect and use data. 

Such risks aren’t hypothetical. Research has shown that 11% of the data that employees paste into ChatGPT is confidential, and that 4% of employees have pasted sensitive data into a generative AI tool at least once. 

Risks also exist in software development code generated by AI. Research has shown that nearly 40% of the top suggestions from GitHub Copilot — an AI coding assistant trained on open-source GitHub code — as well as roughly 40% of its total suggestions led to code vulnerabilities. Much of this stems from the fact that AI services are trained on data scraped from the web. If you scrape code to train an AI used by programmers, you’re bound to pick up the coding errors that exist in the world’s code. In fact, as reported by Gizmodo, Google updated its privacy policy over the July 4 holiday weekend. The new policy suggests that the entire public internet is fair game to be scraped for Google’s AI projects, which include Bard and Cloud AI.

In light of these and other new risks that have arisen with AI, Contrast’s AI policy is intended to serve as a guiding star, informing AI-related activities within an organization and ensuring that they are aligned with legal, ethical and social norms.

A strong AI policy delineates the rules for data use, respects privacy rights, ensures transparency and strives for accountability. Not having such a policy could expose your company to significant risks, including legal liabilities, reputational damage and financial loss.

We designed this policy to protect our stakeholders from potential negative implications of AI misuse, emphasizing data security, intellectual property rights and regulatory compliance.

By open-sourcing our AI policy under the CC BY-SA 4.0 license, we are allowing other organizations to use and adapt this policy framework while ensuring that Contrast is credited as the source. This license permits others to remix, adapt and build upon the policy, even for commercial purposes, under the same license terms. This provides your legal and policy team a well-structured, thought-out starting point, saving time and resources.

Available via GitHub

We are distributing this policy via a GitHub repository. GitHub offers an efficient, collaborative platform for version control and change tracking. It allows us to maintain an open dialogue with the user community and ensures the policy can be updated and improved based on user feedback and the evolving AI landscape. We actively encourage feedback, suggestions and changes that can enhance and harden the policy, making it even more beneficial for a broader range of use cases.

If your company has yet to establish an AI policy, we invite you to leverage the Contrast Responsible AI Policy Project as a foundation. 

Please visit our GitHub repository to access the policy, contribute and join the conversation. We look forward to your feedback and suggestions.

Let's join hands in shaping a safer, more responsible future with AI. Thank you for your support in making the AI world a more responsible and safer place.



David Lindner, Chief Information Security Officer

David is an experienced application security professional with over 20 years in cybersecurity. In addition to serving as the chief information security officer, David leads the Contrast Labs team that is focused on analyzing threat intelligence to help enterprise clients develop more proactive approaches to their application security programs. Throughout his career, David has worked within multiple disciplines in the security field—from application development, to network architecture design and support, to IT security and consulting, to security training, to application security. Over the past decade, David has specialized in all things related to mobile applications and securing them. He has worked with many clients across industry sectors, including financial, government, automobile, healthcare, and retail. David is an active participant in numerous bug bounty programs.