Red teaming

Red teaming proves that there are many pathways to attack artificial intelligence and machine learning models that the security community has yet to consider.

PROJECT NUMBER: 21A1050-088FP
TOTAL APPROVED AMOUNT: $950,000 over 3 years
PRINCIPAL INVESTIGATOR: Jed Haile
CO-INVESTIGATORS: Sage Havens, INL; Mike Borowczak, University of Wyoming

Machine learning and artificial intelligence software provides researchers with the tools they need to develop machine learning applications with astounding speed and flexibility. However, as with any code base, these libraries contain vulnerabilities that can have drastic security implications. In widely used libraries like TensorFlow, PyTorch, and Scikit-Learn, as well as other libraries such as Theano, Chainer, and Caffe, cyber adversaries leverage code vulnerabilities across a variety of systems to gain access. As machine learning becomes ubiquitous in commercial products and critical systems, these weaknesses affect the security of millions of devices. While most research in this area targets model weights and data, very few researchers are exploring machine learning security through traditional red teaming and reverse engineering approaches. This project explored two primary avenues across three years of research. First, machine learning libraries were fuzzed to find vulnerabilities, and those vulnerabilities were exploited to demonstrate the impact of such security violations. This work produced three white papers and solidified a strong partnership with the University of Wyoming. The second avenue used side channels to exploit vulnerabilities, and three publications were completed in this area. Side-channel attacks, which use auxiliary measurements and information to extract system secrets, targeted machine learning parameters, the models themselves, and the hardware running the machine learning algorithms.
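The fuzzing avenue can be sketched in miniature as follows. This is an illustrative harness only, not the project's actual tooling: `parse_record` is a hypothetical, deliberately buggy model-file parser that trusts a length field from its input, and `fuzz` simply feeds it random byte strings and records any exception the parser did not anticipate.

```python
import random
import struct

def parse_record(data: bytes):
    """Toy stand-in for an ML model-file parser.

    Reads a 4-byte little-endian count, then that many payload bytes.
    Bug: it trusts the declared count, so a short payload triggers an
    unhandled struct.error instead of a clean rejection.
    """
    if len(data) < 4:
        raise ValueError("truncated header")  # expected rejection path
    (count,) = struct.unpack("<I", data[:4])
    return struct.unpack(f"<{count}B", data[4:])

def fuzz(target, trials=2000, seed=1):
    """Feed random byte strings to target; collect unexpected crashes."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        data = bytes(rng.getrandbits(8) for _ in range(rng.randrange(0, 24)))
        try:
            target(data)
        except ValueError:
            pass  # graceful rejection, not a finding
        except Exception as exc:
            crashes.append((data, exc))  # unexpected crash: a finding
    return crashes

# A well-formed record parses cleanly...
print(parse_record(struct.pack("<I", 3) + b"abc"))
# ...but random inputs quickly surface the length-trusting bug.
print(f"{len(fuzz(parse_record))} crashing inputs found")
```

Real fuzzing campaigns against libraries of this size typically use coverage-guided engines rather than blind random bytes, but the workflow is the same: generate inputs, observe failures, then triage each crash for security impact.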
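The side-channel idea can be illustrated with a classic early-exit comparison leak. Everything below is a hypothetical teaching example, not the project's attack: `matched_prefix` counts how many leading bytes of a guess agree with the secret, serving as a deterministic stand-in for a timing measurement (an early-exit comparison runs longer the more leading bytes match), and `recover` exploits that auxiliary signal to reconstruct the secret one character at a time.

```python
import string

def matched_prefix(secret: str, guess: str) -> int:
    """Length of the matching prefix before the first mismatch.

    Stand-in for a timing side channel: each extra matching byte
    corresponds to one more loop iteration in an early-exit compare.
    """
    n = 0
    for a, b in zip(secret, guess):
        if a != b:
            break
        n += 1
    return n

def recover(oracle, length: int, alphabet: str = string.ascii_uppercase) -> str:
    """Rebuild a secret byte by byte using only the side-channel oracle."""
    known = ""
    for i in range(length):
        pad = "A" * (length - i - 1)  # filler for the undetermined tail
        # The candidate that makes the comparison run longest extends
        # the matching prefix, so it must be the correct next character.
        known += max(alphabet, key=lambda c: oracle(known + c + pad))
    return known

# Hypothetical secret; the attacker sees only the oracle's output.
oracle = lambda guess: matched_prefix("KEY", guess)
print(recover(oracle, 3))  # recovers "KEY" without ever reading it
```

Against real hardware the signal is noisy wall-clock time, power draw, or cache behavior rather than an exact count, so practical attacks average many measurements per guess, but the extraction logic is the same.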
Ultimately, this project covered significant ground in researching machine learning vulnerabilities, strengthened university partnerships, funded multiple student researchers, and produced deliverables including white papers, publications, posters, and other presentations. Red teaming machine learning applications is crucial to securing systems in a quickly evolving technological landscape.