Throwback to Defcon CTF Finals

I was just thinking back to a couple of months ago when we competed in the Defcon CTF Finals 2020.

Team photo (image from @perribus)

A survey paper that I have been working on since this competition has been accepted for publication and presentation at the Reversing and Offensive-Oriented Trends Symposium (ROOTS 2020). The paper covers practical adversarial examples that target malware classifiers.

I first decided to write the survey soon after the Rorschach challenge at this year's Defcon CTF Finals. As with the Defcon CTF Quals, there was some grumbling about including a machine learning challenge. Part of the problem may be that it is hard to gauge the security impact of adversarial learning when most research is done in the natural image domain (MNIST, CIFAR, ImageNet). I hope the survey 1) shows a direct connection between adversarial learning and security through evasive malware, and 2) interests people in both security and ML enough to pursue research in this area.