Publication Overview
I have published about 300 papers at peer-reviewed international conferences, journals, and workshops, and about 140 papers as technical reports. About 100 of these papers appeared at the four flagship conferences for information security and privacy (S&P, CCS, USENIX Security, NDSS). I have moreover edited 14 proceedings and filed 6 patents. For a continuously updated publication overview, please see my DBLP entry.

Recent publications (2021 -- 2024)
- MGTBench: Benchmarking Machine-Generated Text Detection. CCS, 2024.
- "Do Anything Now": Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models. CCS, 2024.
- Quantifying Privacy Risks of Prompts in Visual Prompt Learning. USENIX Security, 2024.
- Instruction Backdoor Attacks Against Customized LLMs. USENIX Security, 2024.
- Prompt Stealing Attacks Against Text-to-Image Generation Models. USENIX Security, 2024.
- SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models. USENIX Security, 2024.
- Measuring the Effects of Stack Overflow Code Snippet Evolution on Open-Source Software Security. S&P, 2024.
- Link Stealing Attacks Against Inductive Graph Neural Networks. PoPETs, 2024.
- Games and Beyond: Analyzing the Bullet Chats of Esports Livestreaming. ICWSM, 2024.
- FAKEPCD: Fake Point Cloud Detection via Source Attribution. AsiaCCS, 2024.
- Generated Distributions Are All You Need for Membership Inference Attacks Against Generative Models. WACV, 2024.
- TrustLLM: Trustworthiness in Large Language Models. ICML, 2024.
- Memorization in Self-Supervised Learning Improves Downstream Generalization. ICLR, 2024.
- Composite Backdoor Attacks Against Large Language Models. NAACL (Findings), 2024.
- Detection and Attribution of Models Trained on Generated Data. ICASSP, 2024.
- Adversarial Vulnerability Bounds for Gaussian Process Classification. Machine Learning, 2023.
- Pareto-Optimal Defenses for the Web Infrastructure: Theory and Practice. ACM Transactions on Privacy and Security, 2023.
- Unsafe Diffusion: On the Generation of Unsafe Images and Hateful Memes From Text-To-Image Models. CCS, 2023.
- Can't Steal? Cont-Steal! Contrastive Stealing Attacks Against Image Encoders. CVPR, 2023.
- Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning? ICLR, 2023.
- Generated Graph Detection. ICML, 2023.
- Data Poisoning Attacks Against Multimodal Encoders. ICML, 2023.
- Backdoor Attacks Against Dataset Distillation. NDSS, 2023.
- A Systematic Study of the Consistency of Two-Factor Authentication User Journeys on Top-Ranked Websites. NDSS, 2023.
- SEAL: Capability-Based Access Control for Data-Analytic Scenarios. SACMAT, 2023.
- On the Evolution of (Hateful) Memes by Means of Multimodal Contrastive Learning. S&P, 2023.
- PrivTrace: Differentially Private Trajectory Synthesis by Adaptive Markov Models. USENIX Security, 2023.
- Two-in-One: A Model Hijacking Attack Against Text Generation Models. USENIX Security, 2023.
- Bilingual Problems: Studying the Security Risks Incurred by Native Extensions in Scripting Languages. USENIX Security, 2023.
- FACE-AUDITOR: Data Auditing in Facial Recognition Systems. USENIX Security, 2023.
- UnGANable: Defending Against GAN-based Face Manipulation. USENIX Security, 2023.
- Backdoor Smoothing: Demystifying Backdoor Attacks on Deep Neural Networks. Computers & Security, 2022.
- Graph Unlearning. CCS, 2022.
- On the Privacy Risks of Cell-Based NAS Architectures. CCS, 2022.
- Auditing Membership Leakages of Multi-Exit Networks. CCS, 2022.
- Membership Inference Attacks by Exploiting Loss Trajectory. CCS, 2022.
- Freely Given Consent?: Studying Consent Notice of Third-Party Tracking and Its Violations of GDPR in Android Apps. CCS, 2022.
- Finding MNEMON: Reviving Memories of Node Embeddings. CCS, 2022.
- Why So Toxic?: Measuring and Triggering Toxic Behavior in Open-Domain Chatbots. CCS, 2022.
- A Framework for Constructing Single Secret Leader Election from MPC. ESORICS, 2022.
- Dynamic Backdoor Attacks Against Machine Learning Models. EuroS&P, 2022.
- On Xing Tian and the Perseverance of Anti-China Sentiment Online. ICWSM, 2022.
- Get a Model! Model Hijacking Attack Against Machine Learning Models. NDSS, 2022.
- Industrial Practitioners' Mental Models of Adversarial Machine Learning. SOUPS @ USENIX Security, 2022.
- ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models. USENIX Security, 2022.
- Inference Attacks Against Graph Neural Networks. USENIX Security, 2022.
- BadNL: Backdoor Attacks Against NLP Models with Semantic-Preserving Improvements. ACSAC, 2021.
- Measuring User Perception for Detecting Unexpected Access to Sensitive Resource in Mobile Apps. AsiaCCS, 2021.
- When Machine Unlearning Jeopardizes Privacy. CCS, 2021.
- DoubleX: Statically Detecting Vulnerable Data Flows in Browser Extensions at Scale. CCS, 2021.
- 12 Angry Developers - A Qualitative Study on Developers' Struggles with CSP. CCS, 2021.
- Accountability in the Decentralised-Adversary Setting. CSF, 2021.
- MLCapsule: Guarded Offline Deployment of Machine Learning as a Service. CVPR Workshops, 2021.
- Statically Detecting JavaScript Obfuscation and Minification Techniques in the Wild. DSN, 2021.
- Do Winning Tickets Exist Before DNN Training? SDM, 2021.
- Explanation Beats Context: The Effect of Timing & Rationales on Users' Runtime Permission Decisions. USENIX Security, 2021.
- PrivSyn: Differentially Private Data Synthesis. USENIX Security, 2021.
- Stealing Links from Graph Neural Networks. USENIX Security, 2021.
- A11y and Privacy Don't Have to Be Mutually Exclusive: Constraining Accessibility Service Misuse on Android. USENIX Security, 2021.
- Share First, Ask Later (or Never?) Studying Violations of GDPR's Explicit Consent in Android Apps. USENIX Security, 2021.
- Why Eve and Mallory Still Love Android: Revisiting TLS (In)Security in Android Applications. USENIX Security, 2021.