AI Security References

References of the OWASP AI Exchange

Category: discussion
Permalink: https://owaspai.org/goto/references/

See the Media page for several webinars and podcasts by and about the AI Exchange.

Overviews of AI Security Threats:


Overviews of AI Security/Privacy Incidents:


Misc.:


Learning and Training:


| Category | Title | Description | Provider | Content Type | Level | Cost | Link |
|---|---|---|---|---|---|---|---|
| Courses and Labs | AI Security Fundamentals | Learn the basic concepts of AI security, including security controls and testing procedures. | Microsoft | Course | Beginner | Free | AI Security Fundamentals |
| Courses and Labs | Red Teaming LLM Applications | Explore fundamental vulnerabilities in LLM applications with hands-on lab practice. | Giskard | Course + Lab | Beginner | Free | Red Teaming LLM Applications |
| Courses and Labs | Exploring Adversarial Machine Learning | Designed for data scientists and security professionals to learn how to attack realistic ML systems. | NVIDIA | Course + Lab | Intermediate | Paid | Exploring Adversarial Machine Learning |
| Courses and Labs | OWASP LLM Vulnerabilities | Essentials of securing Large Language Models (LLMs), covering basic to advanced security practices. | Checkmarx | Interactive Lab | Beginner | Free with OWASP Membership | OWASP LLM Vulnerabilities |
| Courses and Labs | OWASP TOP 10 for LLM | Scenario-based LLM security vulnerabilities and their mitigation strategies. | Security Compass | Interactive Lab | Beginner | Free | OWASP TOP 10 for LLM |
| Courses and Labs | Web LLM Attacks | Hands-on lab to practice exploiting LLM vulnerabilities. | Portswigger | Lab | Beginner | Free | Web LLM Attacks |
| CTF Practices | AI Capture The Flag | A series of AI-themed challenges ranging from easy to hard, hosted by DEFCON AI Village. | Crucible / AIV | CTF | Beginner, Intermediate | Free | AI Capture The Flag |
| CTF Practices | IEEE SaTML CTF 2024 | A Capture-the-Flag competition focused on Large Language Models. | IEEE | CTF | Beginner, Intermediate | Free | IEEE SaTML CTF 2024 |
| CTF Practices | Gandalf Prompt CTF | A gamified challenge focusing on prompt injection techniques. | Lakera | CTF | Beginner | Free | Gandalf Prompt CTF |
| CTF Practices | HackAPrompt | A prompt injection playground for participants of the HackAPrompt competition. | AiCrowd | CTF | Beginner | Free | HackAPrompt |
| CTF Practices | AI CTF | AI/ML themed challenges to be solved over a 36-hour period. | PHDay | CTF | Beginner, Intermediate | Free | AI CTF |
| CTF Practices | Prompt Injection Lab | An immersive lab focused on gamified AI prompt injection challenges. | ImmersiveLabs | CTF | Beginner | Free | Prompt Injection Lab |
| CTF Practices | Doublespeak | A text-based AI escape game designed to practice LLM vulnerabilities. | Forces Unseen | CTF | Beginner | Free | Doublespeak |
| CTF Practices | MyLLMBank | Prompt injection challenges against LLM chat agents that use ReAct to call tools. | WithSecure | CTF | Beginner | Free | MyLLMBank |
| CTF Practices | MyLLMDoctor | Advanced challenge focusing on multi-chain prompt injection. | WithSecure | CTF | Intermediate | Free | MyLLMDoctor |
| Talks | AI is just software, what could possibly go wrong w/ Rob van der Veer | The talk explores the dual nature of AI as both a powerful tool and a potential security risk, emphasizing the importance of secure AI development and oversight. | OWASP Lisbon Global AppSec 2024 | Conference | N/A | Free | YouTube |
| Talks | Lessons Learned from Building & Defending LLM Applications | Andra Lezza and Javan Rasokat discuss lessons learned in AI security, focusing on vulnerabilities in LLM applications. | DEF CON 32 | Conference | N/A | Free | YouTube |
| Talks | Practical LLM Security: Takeaways From a Year in the Trenches | NVIDIA's AI Red Team shares insights on securing LLM integrations, focusing on identifying risks, common attacks, and effective mitigation strategies. | Black Hat USA 2024 | Conference | N/A | Free | YouTube |
| Talks | Hacking generative AI with PyRIT | Rajasekar from the Microsoft AI Red Team presents PyRIT, a tool for identifying vulnerabilities in generative AI systems, emphasizing the importance of safety and security. | Black Hat USA 2024 | Walkthrough | N/A | Free | YouTube |