CyberSec & AI Prague 2019

Dominika Regéciová
5 min read · Oct 30, 2019
Source: https://twitter.com/CyberSecAI/header_photo

The International Conference on Cybersecurity and AI was held in Prague on 25 October 2019. Organized by Avast and the Czech Technical University, it combined the worlds of academic research and the commercial sphere.

I was lucky to be there, and I would like to share my experience and my notes from some of the many interesting talks given that day.

Securing Digital Democracies: Protecting the Security and Privacy of Digital Elections Around the World

For me, one of the most interesting talks was given by Alex Halderman, a professor at the University of Michigan, who spoke about the security of digital elections. We all remember the problems surrounding the 2016 presidential elections in the USA, including phishing, malware attacks, leaked emails, and more. A very nice summary can be found here. Even though it seems no votes were changed, the possibility of influencing the vote tally is still there.

Halderman and his colleagues tried to estimate how many votes would need to be changed to meddle with the results. The number, 0.02% of votes, is shockingly low, and it only proves how real the threat is. He also showed that tampering with the votes is not that difficult to achieve. The main problems are the use of old voting machines without the needed software and hardware updates, the low security of Governmental Business Systems (GBS), which provides the programming of the voting machines, and even the low interest in checking the votes against paper records. Halderman said he sees the solution to these problems in paper ballots and post-election audits, but this requires support from the government.

I recommend watching Halderman's video Hacking Democracy or reading the interview The Vulnerabilities of Our Voting Machines, in which he explains possible attacks on voting machines through memory cards carrying malicious software.

Mirai and the Current State of IoT Technology

Zakir Durumeric, an Assistant Professor at Stanford University, talked about the Mirai malware and the current state of IoT technology. Even though Mirai had quite a simple design, it was effective because of its use of IoT devices. Targeting mainly cameras, printers, scanners, and routers, it looked for weak credentials: many IoT devices have no password, or only a weak one set for connection purposes, and the use of telnet is also common. Once under the malware's control, the devices performed DDoS attacks, leading, for example, to the October 2016 Dyn cyberattack. Dyn is a domain name system (DNS) provider, which is why so many websites were affected; the list includes Amazon.com, GitHub.com, Netflix, Spotify, and Twitter. More details are available in the USENIX Security '17 talk Understanding the Mirai Botnet.

A Marauder’s Map of Security and Privacy in Machine Learning

Based on the name of the conference, it is obvious that AI and machine learning were the main topics of the day. Much attention was given to so-called adversarial attacks. The basic idea is to exploit the limitations of current learning methods: knowing that the learned models are not perfect, we can use this to confuse the AI. Researchers found that changing only little things, such as a few pixels in a picture or a slight noise added to an audio recording, is enough to achieve very different recognition results.
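As an illustration, here is a minimal sketch of the fast gradient sign method (FGSM), one common way to craft such small perturbations. The model, data, and parameter names below are my own placeholders, not anything presented at the conference.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return a copy of `image` nudged in the direction that increases the
    classification loss, so the model is more likely to misclassify it."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # epsilon controls how strong the perturbation is; small values keep
    # the image looking unchanged to a human observer.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```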

Attackers can also poison the training data. Because of that, AI can show behavior that is not always ‘acceptable’. In 2016, Microsoft released the Twitter AI chatbot Tay, and in less than 24 hours, Tay had become a persona of the worst parts of the Internet.

The talk given by Nicolas Papernot also showed that attacks are possible even with a ‘black-box’ approach, when we do not know how the AI was trained. We only need the results of our queries to deduce more information about the model and find out how to mislead it.
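To make the idea concrete, below is a rough sketch of such a query-based attack, assuming we only have access to some target_predict function that returns the target model's labels; all names are illustrative and not taken from Papernot's actual code.

```python
import torch
import torch.nn.functional as F

def train_substitute(substitute, target_predict, probe_images, epochs=5):
    """Train a local stand-in model using only labels obtained by querying
    the black-box target; its internals are never accessed."""
    labels = target_predict(probe_images)  # class indices, the only feedback we get
    optimizer = torch.optim.Adam(substitute.parameters(), lr=1e-3)
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = F.cross_entropy(substitute(probe_images), labels)
        loss.backward()
        optimizer.step()
    return substitute

# Adversarial examples crafted against the substitute (e.g. with the FGSM
# sketch above) often transfer and also mislead the original black-box model.
```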

One way to protect an AI against such attacks is to use an aggregation of models, which means that each set of data influences only one of the models. With additional noise, we can also bring more privacy into AIs.
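A minimal sketch of this idea, in the spirit of Papernot's PATE approach, might look as follows; the function and variable names are my own, and the noise scale is an arbitrary example value.

```python
import numpy as np

def noisy_aggregate(teacher_votes, num_classes, noise_scale=1.0):
    """Each 'teacher' model, trained on its own data partition, votes for a
    class; Laplace noise on the vote counts hides individual contributions."""
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += np.random.laplace(loc=0.0, scale=noise_scale, size=num_classes)
    return int(np.argmax(counts))

# Example: five teachers vote 2, 2, 2, 1, 0; the noisy majority answer is
# usually class 2, while no single training set determines the outcome.
print(noisy_aggregate([2, 2, 2, 1, 0], num_classes=3))
```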

More information can be found in Papernot’s paper A Marauder’s Map of Security and Privacy in Machine Learning.

Recent Advances in Adversarial AI for Malware

Adversarial attacks can be dangerous, as was shown by Sadia Afroz, a Senior Researcher from the University of California, Berkeley. A typical example is a stop sign that the AI interprets as a speed limit sign because researchers added black and white stickers to it. This can be hazardous in the case of self-driving cars. The impact on security was illustrated by a special pair of glasses: no matter who is wearing them, the AI is sure the person is Milla Jovovich. Similar limitations exist in voice recognition, where attacks can be performed by adding background noise.

Using Real AI to Protect Real Users (435M of Them)

In the next talk, Rajarshi Gupta, Head of AI at Avast, presented how Avast protects 435 million users all over the world every day. He also mentioned adversarial attacks used to deceive the AI in antivirus products into believing that the inspected code is not malware. In this scenario, Avast focuses on the question of whether the scanned data come from a victim or from an attacker testing his obfuscated samples. He also presented the web shield technology for phishing detection, which uses whitelists and machine learning. The data captured from users are then examined with the use of clusters, in less than 12 hours.

Panel Discussion: AI Used for Both Good and Evil in Security and Privacy

Last but not least on the program was a panel discussion with Nicolas Papernot, Rachel Greenstadt, Battista Biggio, and Rajarshi Gupta, moderated by Dave Gershgorn. Can AI be evil? Are adversarial attacks already used in practice, or are they still a theoretical problem? Should we regulate the use of AI in the commercial sphere? How can we prevent biased AIs influenced by imperfect training data? These and more questions were addressed. It is very likely that the use of AI will become more and more prominent in the future. With this in mind, we have to work on our perception of machine learning and be aware of its limitations. Just as guns are not responsible for shooting victims, but people are, we are obliged to train AIs on data that are not biased by our society. On this topic, I would recommend an article by the moderator of the panel discussion, Dave Gershgorn: Hospital Algorithms Are Biased Against Black Patients, New Research Shows.

Overall, it was a really exciting event, and I am glad I could attend. It has been announced that there will be a CyberSec & AI Prague 2020, and I am planning to participate in its poster session.

Note: I would like to thank the Department of Information Systems at the Faculty of Information Technology, Brno University of Technology, for providing the financial support that allowed me to attend this great conference.

VGhhbmtzIGZvciByZWFkaW5nLCBhbmQgc2VlIHlvdSBsYXRlciE=
