Contribute to the OWASP AI Exchange
The OWASP projects are an open source effort, and we enthusiastically welcome all forms of contributions and feedback.
- 📥 Send your suggestion to the project leader.
- 👋 Join #project-ai in our Slack workspace.
- 🗣️ Discuss with the project leader how to become part of the writing group.
- 💡 Propose your concepts, or submit an issue.
- 📄 Fork our repo and submit a Pull Request for concrete fixes (e.g. grammar/typos) or content already approved by the core team.
- 🙌 Showcase your contributions.
- 🐞 Identify an issue, or fix one in a Pull Request.
- 💬 Provide your insights in GitHub Discussions.
- 🙏 Pose your questions.
We value every contribution to our project, but it’s important to be aware of certain guidelines:
- Avoid Advertising: The OWASP AI projects should not be a medium for promoting commercial tools, companies, or individuals. The focus should be on free and open-source tools when discussing the implementation of techniques or tests. While commercial tools generally aren’t included, they may be mentioned in specific, relevant instances.
- Refrain from Unnecessary Self-Promotion: If you’re referencing tools or articles you’re affiliated with, please disclose this relationship in your pull request. This transparency helps us ensure that the content aligns with the overall objectives of the guide.
If you’re unsure about anything, feel free to reach out to us with your questions.
|Contributor|Organization|Contribution|
|---|---|---|
| |Trail of Bits|Improved supply chain management|
| | |several elaborations and references on datascience defence mechanisms|
| | |mapping with misc. standards|
| | |many textual improvements & link to LLM top 10|
| | |misc. contributions including model obfuscation and explanation|
|Disesdi Susanna Cox| | |
| |Software Improvement Group (SIG)|step-by-step guide for organizations, website creation, various textual improvements|
| | |datascience discussion and references around evasion attacks|
|Rob van der Veer|Software Improvement Group (SIG)| |
| |Boise State University, AI Cyber Advisors| |
| |Oak Ridge National Laboratory|BLUF, Adversarial Training, OOD detection, NISTIR 8269, Guide Usability/Structure|
| |Workforce Tech LLC|mapping with ISO/IEC 42001|
|Yiannis Kanellopoulos and team| | |
| |Mutual Knowledge Systems|Many markdown improvements|
Tasks are sorted by urgency, with the top item being the most urgent.
Tweak navigator: 1) change “deal with confidentiality issues” to “minimize data to help confidentiality”, 2) remove ADDTRAINNOISE
Futureproof hyperlinks: Create a way to link to Controls and to Threats with permanent links (we probably need to generate html from the md)
Update hyperlinks in navigator, taking into account the ‘futureproof hyperlinks’
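One possible shape for such permanent links, sketched here purely as an assumption (the slug rules and function names are invented, not an agreed AI Exchange convention): derive a stable anchor id from every markdown heading when generating the HTML, so a control can always be reached as `page.html#anchor`.

```python
# Illustrative sketch: derive stable HTML anchor ids from markdown headings.
# The slug convention below (lower-case, punctuation dropped, hyphenated)
# is an assumption, not an existing AI Exchange rule.
import re

def slug(heading_text):
    """'Data Quality Control' -> 'data-quality-control'."""
    s = heading_text.strip().lower()
    s = re.sub(r"[^a-z0-9\s-]", "", s)     # drop punctuation
    return re.sub(r"[\s-]+", "-", s)       # collapse whitespace to hyphens

def headings_to_anchors(markdown):
    """Map each markdown heading to its generated anchor id."""
    anchors = {}
    for line in markdown.splitlines():
        m = re.match(r"(#{1,6})\s+(.*)", line)
        if m:
            anchors[m.group(2)] = slug(m.group(2))
    return anchors

doc = "# Threats\n## Data poisoning\nSome text\n### Train data distortion\n"
print(headings_to_anchors(doc))
```

Anchors generated this way stay stable as long as the heading text itself does not change, which is the property the navigator links would rely on.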
Elaborate on “Choose a model type resilient against a transfer learning attack”
Under DATAQUALITYCONTROL: elaborate on the method of detecting statistical deviations by training models on random selections of the training dataset, then feeding each training sample to those models and comparing the results.
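As a rough illustration of that method (the toy nearest-centroid classifier, the thresholds, and the data below are all invented for the sketch, not taken from the Exchange): train several models on random subsets, then flag any training sample whose label most of the models disagree with.

```python
# Sketch: ensemble-based detection of statistically deviating training samples.
# Classifier, thresholds and data are illustrative stand-ins.
import random

def centroid_model(samples):
    """Train a trivial nearest-centroid classifier on (x, label) pairs."""
    sums, counts = {}, {}
    for x, y in samples:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    centroids = {y: sums[y] / counts[y] for y in sums}
    return lambda x: min(centroids, key=lambda y: abs(x - centroids[y]))

def flag_deviating_samples(data, n_models=7, subset_frac=0.6, min_agree=0.5, seed=0):
    """Train n_models on random subsets; flag samples whose own label
    fewer than min_agree of the models agree with."""
    rng = random.Random(seed)
    size = max(2, int(len(data) * subset_frac))
    models = [centroid_model(rng.sample(data, size)) for _ in range(n_models)]
    flagged = []
    for x, y in data:
        agreement = sum(1 for m in models if m(x) == y) / n_models
        if agreement < min_agree:
            flagged.append((x, y))
    return flagged

# Two clean clusters plus one mislabelled (poisoned) point near the 'b' cluster.
gen = random.Random(1)
data = ([(gen.gauss(0, 0.5), "a") for _ in range(20)]
        + [(gen.gauss(10, 0.5), "b") for _ in range(20)]
        + [(10.2, "a")])
print(flag_deviating_samples(data))
```

The mislabelled point is the only one the subset-trained models consistently vote against.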
BIG ONE - DISTRIBUTE: review each section (threat, control) for:
- clarity
- grammar & spelling
- completeness (goal: offer a clear summary to non-experts in AI security, mention important attention points/potential challenges, and refer readers to other work for more details)
- whether examples are needed
- whether a visualisation is needed
- whether it is sufficiently practical: make clear what needs to be done, including best practices
- sufficient references: use a ‘References’ section and/or a ‘Links to standards’ section
Create a visualisation of the new Summary with controls, perhaps combining it with the new threat model diagram ideas, and replace the current one
BIG ONE: Risk analysis: further design the risk analysis process and especially make responsibility assignment clearer. Include, for example: when is evasion really a problem in practice?
BIG ONE: Write more of a step-by-step guide for organizations to start with AI (security)
BIG ONE: high level sanity check with NIST adversarial machine learning document
BIG ONE: high level sanity check with MITRE ATLAS
Add ‘Leak sensitive input data’ to threat diagram and check further for any gaps with this document
Check if OBFUSCATETRAININGDATA has strategies (anonymization, tokenization) that are covered in ISO/IEC standards and add references to those standards
Under DATAQUALITYCONTROL: elaborate on RONI and tRONI training sample selection
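For reference while elaborating, a minimal sketch of the basic RONI (Reject On Negative Impact) idea: accept a candidate training sample only if adding it does not reduce accuracy on a trusted calibration set. The classifier, threshold, and data are illustrative stand-ins, and tRONI (targeted RONI) is not shown.

```python
# Sketch of RONI filtering with a toy nearest-centroid classifier.
def centroid_model(samples):
    """Trivial nearest-centroid classifier over (x, label) pairs."""
    sums, counts = {}, {}
    for x, y in samples:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    centroids = {y: sums[y] / counts[y] for y in sums}
    return lambda x: min(centroids, key=lambda y: abs(x - centroids[y]))

def accuracy(model, dataset):
    return sum(1 for x, y in dataset if model(x) == y) / len(dataset)

def roni_filter(base_data, candidates, calibration, max_drop=0.0):
    """Accept a candidate only if adding it does not drop calibration
    accuracy by more than max_drop (Reject On Negative Impact)."""
    accepted = list(base_data)
    for cand in candidates:
        before = accuracy(centroid_model(accepted), calibration)
        after = accuracy(centroid_model(accepted + [cand]), calibration)
        if before - after <= max_drop:
            accepted.append(cand)
    return accepted

base = [(0.0, "a"), (1.0, "a"), (9.0, "b"), (10.0, "b")]
calibration = [(0.2, "a"), (0.8, "a"), (9.2, "b"), (9.8, "b")]
candidates = [(0.5, "a"), (100.0, "a")]   # second candidate is poisoned
print(roni_filter(base, candidates, calibration))
```

The benign candidate is kept; the poisoned one drags the calibration accuracy down and is rejected.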
Elaborate on the various methods and the general approach of TRAINDATADISTORTION to prevent data poisoning
Add attribute inference attacks and consider making that part of ‘data reconstruction’, together with model inversion, although it is a different approach
Work with the LLM top 10 team to make sure that the LLM top 10 entries link back to the AI Exchange
Under TRAINADVERSARIAL: elaborate; see Annex C of ENISA Secure Machine Learning Algorithms (2021).
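The core of adversarial training, independent of the ENISA annex, can be sketched as follows; the 1-D logistic model and all hyperparameters are invented for illustration.

```python
# Sketch: adversarial training of a 1-D logistic regression model.
# With eps > 0, each sample is replaced by an FGSM-style perturbation
# x + eps * sign(dLoss/dx) before the gradient step.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=200, lr=0.5, eps=0.0):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w * x + b)
            if eps:
                grad_x = (p - y) * w                        # dLoss/dx
                x += eps * ((grad_x > 0) - (grad_x < 0))    # sign()
                p = sigmoid(w * x + b)
            w -= lr * (p - y) * x                           # dLoss/dw
            b -= lr * (p - y)                               # dLoss/db
    return lambda x: 1 if sigmoid(w * x + b) >= 0.5 else 0

data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]
robust = train(data, eps=0.5)   # trained on perturbed (harder) samples
```

Training on the perturbed samples pushes the decision boundary away from the data, which is the intuition behind the control.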
Under DETECTADVERSARIALINPUT: elaborate on detector subnetworks in Annex C of ENISA 2021 and on the references in that section
Under EVASIONROBUSTMODEL: see Annex C in the ENISA 2021 document to cover stability terms, adversarial regularisation, input gradient regularisation, defensive distillation and random feature nullification.
Under INPUTDISTORTION: See ENISA Annex C to add data randomisation, input transformation and input denoising.
Under INPUTDISTORTION: add Gradient masking - Annex C ENISA 2021
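A minimal sketch of the input transformation/denoising idea behind INPUTDISTORTION: distort the input (here by quantisation and median smoothing, with invented parameters) before inference, so that small adversarial perturbations are flattened out.

```python
# Sketch: input distortion as an inference-time preprocessing defence.
# The quantisation step and smoothing window are illustrative choices.
import statistics

def quantise(values, step=0.5):
    """Snap each feature to the nearest multiple of `step`."""
    return [round(v / step) * step for v in values]

def median_smooth(values, window=3):
    """Replace each feature with the median of its neighbourhood."""
    half = window // 2
    return [statistics.median(values[max(0, i - half): i + half + 1])
            for i in range(len(values))]

def distort_input(values, step=0.5, window=3):
    return median_smooth(quantise(values, step), window)

clean = [0.0, 0.0, 1.0, 1.0, 0.0]
perturbation = [0.2, -0.15, 0.1, -0.2, 0.15]          # small adversarial noise
adversarial = [x + d for x, d in zip(clean, perturbation)]
print(distort_input(adversarial))
```

After distortion the adversarial input collapses onto the same representation as the clean input, so the model downstream sees no difference.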
Cover integrity checks in development pipeline (build, deploy, supply chain) - under supplychainmanage and/or secdevprogram
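One concrete form such an integrity check could take (the names and manifest format are assumptions for the sketch): record a digest for each artifact at build time and verify the digests again at deploy time, failing the pipeline on any mismatch.

```python
# Sketch: build/deploy integrity check via a SHA-256 manifest.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_artifacts(artifacts, manifest):
    """artifacts: name -> bytes fetched at deploy time;
    manifest: name -> digest recorded at build time.
    Returns the names of artifacts that fail verification."""
    return [name for name, blob in artifacts.items()
            if manifest.get(name) != digest(blob)]

# Build time: record digests of the trusted artifacts.
manifest = {"model.bin": digest(b"weights-v1"), "config.json": digest(b"{}")}

# Deploy time: the model file was tampered with in transit.
tampered = {"model.bin": b"weights-v1-evil", "config.json": b"{}"}
print(verify_artifacts(tampered, manifest))
```

The same pattern extends to supply-chain checks by shipping the manifest through a separate, trusted channel from the artifacts themselves.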
Create an overall community outreach marketing plan, and regional outreach plans.
- Do gap analysis and elaborate on ISO/IEC 27563 on AI use case security & privacy (search for it in this document)
- Do gap analysis and elaborate on ISO/IEC 23894 on Risk analysis (search for it in this document)
- Do gap analysis and elaborate on ISO/IEC 27115 on Cybersecurity evaluation of complex systems (search for it in this document)
- Do gap analysis and elaborate on ISO/IEC TR 24029 on Assessment of the robustness of neural networks (search for it in this document)
Anything is welcome: more controls, improved descriptions, examples, references, etc. We will make sure you get credit for your input.