It is our great pleasure to welcome you to the 2013 ACM Workshop on Artificial Intelligence and Security (AISec 2013) -- the sixth annual workshop addressing technologies that fuse intelligent systems into computer security applications, and the implications of these approaches. The papers in this year's program span topics ranging from attack detection and adversarial learning to social networks and CAPTCHA authentication schemes. The workshop aims to advance research at the intersection of artificial intelligence, machine learning, privacy, and security. In particular, AISec gives researchers and practitioners working in one or more of these fields a platform for interdisciplinary discussion that would otherwise be lacking. We hope the workshop will foster a high degree of cross-pollination between groups working across these areas. We are delighted to again be co-located with the premier ACM Conference on Computer and Communications Security (CCS 2013).
This year we received 17 submissions from Asia, Europe, and North America. After a rigorous reviewing process, with at least three referees per paper, 10 papers were accepted for presentation at the workshop. In addition to the main program, Konrad Rieck will deliver a keynote entitled "Off the Beaten Path: Machine Learning for Offensive Security".
Proceedings Downloads
Off the beaten path: machine learning for offensive security
Machine learning has been widely used for defensive security. Numerous approaches have been devised that use learning techniques to detect attacks and malicious software. By contrast, very little research has studied how machine learning ...
Using naive bayes to detect spammy names in social networks
Many social networks are predicated on the assumption that a member's online information reflects his or her real identity. In such networks, members who fill their name fields with fictitious identities, company names, phone numbers, or just gibberish ...
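The naive Bayes approach named in this title can be sketched for name fields. The following is a minimal illustration, not the paper's implementation: a character-bigram naive Bayes classifier with Laplace smoothing that separates plausible names from gibberish. The training examples, the bigram feature choice, and the smoothing scheme are all illustrative assumptions.

```python
# Hedged sketch (not the paper's method): character-bigram naive Bayes
# for telling real-looking names from spammy gibberish.
import math
from collections import Counter

def bigrams(name):
    s = "^" + name.lower() + "$"          # pad to capture start/end patterns
    return [s[i:i+2] for i in range(len(s) - 1)]

class NaiveBayesNames:
    def __init__(self):
        self.counts = {"name": Counter(), "spam": Counter()}
        self.totals = {"name": 0, "spam": 0}

    def train(self, name, label):
        for bg in bigrams(name):
            self.counts[label][bg] += 1
            self.totals[label] += 1

    def log_likelihood(self, name, label):
        # Laplace smoothing over the observed bigram vocabulary
        vocab = len(self.counts["name"] | self.counts["spam"]) + 1
        return sum(
            math.log((self.counts[label][bg] + 1) / (self.totals[label] + vocab))
            for bg in bigrams(name)
        )

    def classify(self, name):
        return max(("name", "spam"), key=lambda lb: self.log_likelihood(name, lb))

nb = NaiveBayesNames()
for n in ["alice smith", "bob jones", "carol white", "david brown"]:
    nb.train(n, "name")
for n in ["xqzzkw42", "aaaa1234", "zzxqwv", "qqqq9999"]:
    nb.train(n, "spam")
print(nb.classify("emily jones"))   # → name
print(nb.classify("xkcdqz77"))      # → spam
```

In practice a system like this would be trained on far larger corpora and combined with additional signals (e.g. the other profile fields the abstract mentions), but the core generative model is this simple.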
What you want is not what you get: predicting sharing policies for text-based content on facebook
As the amount of content users publish on social networking sites rises, so do the danger and costs of inadvertently sharing content with an unintended audience. Studies repeatedly show that users frequently misconfigure their policies or misunderstand ...
GOTCHA password hackers!
We introduce GOTCHAs (Generating panOptic Turing Tests to Tell Computers and Humans Apart) as a way of preventing automated offline dictionary attacks against user selected passwords. A GOTCHA is a randomized puzzle generation protocol, which involves ...
Early security classification of skype users via machine learning
We investigate possible improvements in online fraud detection based on information about users and their interactions. We develop, apply, and evaluate our methods in the context of Skype. Specifically, in Skype, we aim to provide tools that identify ...
Structural detection of android malware using embedded call graphs
The number of malicious applications targeting the Android system has exploded in recent years. While the security community, well aware of this fact, has proposed several methods for the detection of Android malware, most of these are based on ...
ACTIDS: an active strategy for detecting and localizing network attacks
In this work we investigate a new approach for detecting attacks which aim to degrade the network's Quality of Service (QoS). To this end, a new network-based intrusion detection system (NIDS) is proposed. Most contemporary NIDSs take a passive approach ...
A close look on n-grams in intrusion detection: anomaly detection vs. classification
Detection methods based on n-gram models have been widely studied for the identification of attacks and malicious software. These methods usually build on one of two learning schemes: anomaly detection, where a model of normality is constructed from n-...
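The anomaly-detection scheme named in this abstract can be sketched in a few lines. This is a minimal illustration under assumed inputs (the HTTP payloads are invented, not data from the paper): the model is simply the set of n-grams observed in benign traffic, and a new payload is scored by the fraction of its n-grams never seen before.

```python
# Hedged sketch of the anomaly-detection scheme: learn n-grams of benign
# traffic only, then flag payloads containing many previously unseen n-grams.
# Payloads below are toy assumptions, not data from the paper.

def ngrams(data, n=3):
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def train_normal_model(benign_payloads, n=3):
    model = set()
    for p in benign_payloads:
        model |= ngrams(p, n)
    return model

def anomaly_score(model, payload, n=3):
    grams = ngrams(payload, n)
    if not grams:
        return 0.0
    # fraction of unseen n-grams; higher means more anomalous
    return len(grams - model) / len(grams)

benign = ["GET /index.html HTTP/1.1", "GET /images/logo.png HTTP/1.1"]
model = train_normal_model(benign)
print(anomaly_score(model, "GET /index.html HTTP/1.1"))    # 0.0: all n-grams known
print(anomaly_score(model, "GET /etc/passwd%00 HTTP/1.1")) # > 0: unseen n-grams
```

The classification alternative the title contrasts would instead train on labeled benign *and* malicious n-grams; the paper's subject is precisely the trade-off between these two schemes.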
On the hardness of evading combinations of linear classifiers
An increasing number of machine learning applications involve detecting the malicious behavior of an attacker who wishes to avoid detection. In such domains, attackers modify their behavior to evade the classifier while accomplishing their goals as ...
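The evasion problem this abstract describes is easiest to see in the single-classifier case, which the paper's combinations generalize. Below is a hedged toy sketch (weights, features, and threshold are invented): against one linear classifier over binary features, an attacker can greedily remove the features with the largest positive weights until the score drops below the decision threshold.

```python
# Hedged sketch of evasion against a single linear classifier w.x >= t,
# the base case of the combination problem the paper studies.
# Weights, instance, and threshold are toy assumptions.

def evade_linear(w, x, t):
    x = list(x)
    changes = 0
    # consider set features in decreasing weight order
    for i in sorted(range(len(w)), key=lambda i: -w[i]):
        if sum(wi * xi for wi, xi in zip(w, x)) < t:
            break                 # already classified as benign
        if x[i] == 1 and w[i] > 0:
            x[i] = 0              # drop the feature (e.g. a suspicious token)
            changes += 1
    return x, changes

w = [3.0, 2.0, 1.0, -1.0]         # classifier weights
x = [1, 1, 1, 0]                  # malicious instance: score 6
t = 2.5                           # decision threshold
evaded, k = evade_linear(w, x, t)
print(evaded, k)                  # → [0, 0, 1, 0] 2
```

Against a single linear classifier this greedy strategy finds a minimal set of removals; the paper's contribution concerns how much harder the problem becomes when several such classifiers are combined.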
Is data clustering in adversarial settings secure?
Clustering algorithms have been increasingly adopted in security applications to spot dangerous or illicit activities. However, they have not been originally devised to deal with deliberate attack attempts that may aim to subvert the clustering process ...
Approaches to adversarial drift
- Alex Kantchelian,
- Sadia Afroz,
- Ling Huang,
- Aylin Caliskan Islam,
- Brad Miller,
- Michael Carl Tschantz,
- Rachel Greenstadt,
- Anthony D. Joseph,
- J. D. Tygar
In this position paper, we argue that to be of practical interest, a machine-learning based security system must engage with the human operators beyond feature engineering and instance labeling to address the challenge of drift in adversarial ...