Dr Ahmed M. A. Sayed has had a paper published in IEEE Transactions on Information Forensics and Security
Authors: Jian Chen, Yuan Gao, Gaoyang Liu, Ahmed M Abdelmoniem, Chen Wan
Abstract: In recent years, contrastive learning has become a powerful approach to representation learning on large-scale unlabeled data, in which a pre-trained encoder is used to fine-tune downstream classifiers. However, recent research indicates that contrastive learning is vulnerable to data poisoning attacks, where an attacker injects maliciously crafted poisoned samples into the unlabeled pre-training data. In this paper, we present a stealthier poisoning attack, dubbed PA-CL, that directly poisons the pre-trained encoder so that the downstream classifier's behavior on a single target instance can be steered to an attacker-desired class without affecting overall downstream classification performance. We observe that the poisoned pre-trained encoder produces a feature representation for the target sample that is highly similar to those of samples from the attacker-desired class, which leads the downstream classifier to misclassify the target sample as the attacker-desired class. We therefore formulate our attack as an optimization problem and design two novel loss functions: a target effectiveness loss, which effectively poisons the pre-trained encoder, and a model utility loss, which maintains downstream classification performance. Experimental results on four real-world datasets demonstrate that the attack success rate of the proposed attack is on average 40% higher than that of three baseline attacks, while the fluctuation in the downstream classifier's prediction accuracy stays within 5%.
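The paper's exact formulation is not reproduced here, but the two losses described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, assuming cosine similarity as the feature-similarity measure and a simple weighted sum of the two terms; the function names, the averaging over reference features, and the `lam` trade-off weight are all assumptions for illustration, not the authors' actual objective.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def target_effectiveness_loss(f_target, ref_feats):
    # Assumed form: pull the target sample's representation toward the
    # features of samples from the attacker-desired class by minimizing
    # (1 - mean cosine similarity). Zero when representations align.
    sims = [cosine_sim(f_target, r) for r in ref_feats]
    return 1.0 - float(np.mean(sims))

def model_utility_loss(poisoned_feats, clean_feats):
    # Assumed form: keep the poisoned encoder's outputs on clean data
    # close to the original encoder's outputs, preserving downstream
    # classification accuracy.
    sims = [cosine_sim(p, c) for p, c in zip(poisoned_feats, clean_feats)]
    return 1.0 - float(np.mean(sims))

def total_loss(f_target, ref_feats, poisoned_feats, clean_feats, lam=1.0):
    # Hypothetical combined objective: lam trades off attack
    # effectiveness against preserved model utility.
    return (target_effectiveness_loss(f_target, ref_feats)
            + lam * model_utility_loss(poisoned_feats, clean_feats))
```

Minimizing the first term drives the misclassification of the single target instance, while the second term keeps the encoder's behavior on everything else (and hence overall downstream accuracy) largely unchanged.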