Xiaojun Jia
Verified email at ntu.edu.sg - Homepage
Title
Cited by
Year
Comdefend: An efficient image compression model to defend adversarial examples
X Jia, X Wei, X Cao, H Foroosh
Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2019
335 · 2019
LAS-AT: adversarial training with learnable attack strategy
X Jia, Y Zhang, B Wu, K Ma, J Wang, X Cao
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
168 · 2022
Adv-watermark: A novel watermark perturbation for adversarial examples
X Jia, X Wei, X Cao, X Han
Proceedings of the 28th ACM international conference on multimedia, 1579-1587, 2020
88 · 2020
Defending against model stealing via verifying embedded external features
Y Li, L Zhu, X Jia, Y Jiang, ST Xia, X Cao
AAAI 2022, 2022
74 · 2022
Boosting fast adversarial training with learnable adversarial initialization
X Jia, Y Zhang, B Wu, J Wang, X Cao
IEEE Transactions on Image Processing 31, 4417-4430, 2022
63 · 2022
Prior-Guided Adversarial Initialization for Fast Adversarial Training
X Jia, Y Zhang, X Wei, B Wu, K Ma, J Wang, X Cao
ECCV 2022, 2022
44 · 2022
Generating transferable 3d adversarial point cloud via random perturbation factorization
B He, J Liu, Y Li, S Liang, J Li, X Jia, X Cao
Proceedings of the AAAI Conference on Artificial Intelligence 37 (1), 764-772, 2023
37 · 2023
A Large-scale Multiple-objective Method for Black-box Attack against Object Detection
S Liang, L Li, Y Fan, X Jia, J Li, B Wu, X Cao
ECCV 2022, 2022
36 · 2022
A mutation-based method for multi-modal jailbreaking attack detection
X Zhang, C Zhang, T Li, Y Huang, X Jia, X Xie, Y Liu, C Shen
arXiv preprint arXiv:2312.10766, 2023
31* · 2023
Improving fast adversarial training with prior-guided knowledge
X Jia, Y Zhang, X Wei, B Wu, K Ma, J Wang, X Cao
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024
28 · 2024
Watermark Vaccine: Adversarial Attacks to Prevent Watermark Removal
X Liu, J Liu, Y Bai, J Gu, T Chen, X Jia, X Cao
ECCV 2022, 2022
27 · 2022
A survey on transferability of adversarial examples across deep neural networks
J Gu, X Jia, P de Jorge, W Yu, X Liu, A Ma, Y Xun, A Hu, A Khakzar, Z Li, ...
arXiv preprint arXiv:2310.17626, 2023
26 · 2023
Sa-attack: Improving adversarial transferability of vision-language pre-training models via self-augmentation
B He, X Jia, S Liang, T Lou, Y Liu, X Cao
arXiv preprint arXiv:2312.04913, 2023
24 · 2023
Poisoned forgery face: Towards backdoor attacks on face forgery detection
J Liang, S Liang, A Liu, X Jia, J Kuang, X Cao
arXiv preprint arXiv:2402.11473, 2024
21 · 2024
Context-aware robust fine-tuning
X Mao, Y Chen, X Jia, R Zhang, H Xue, Z Li
International Journal of Computer Vision 132 (5), 1685-1700, 2024
20 · 2024
Identifying and resisting adversarial videos using temporal consistency
X Jia, X Wei, X Cao
arXiv preprint arXiv:1909.04837, 2019
18 · 2019
Ot-attack: Enhancing adversarial transferability of vision-language models via optimal transport optimization
D Han, X Jia, Y Bai, J Gu, Y Liu, X Cao
arXiv preprint arXiv:2312.04403, 2023
17 · 2023
Hide in thicket: Generating imperceptible and rational adversarial perturbations on 3d point clouds
T Lou, X Jia, J Gu, L Liu, S Liang, B He, X Cao
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2024
15 · 2024
Does few-shot learning suffer from backdoor attacks?
X Liu, X Jia, J Gu, Y Xun, S Liang, X Cao
Proceedings of the AAAI Conference on Artificial Intelligence 38 (18), 19893 …, 2024
14 · 2024
Improved techniques for optimization-based jailbreaking on large language models
X Jia, T Pang, C Du, Y Huang, J Gu, Y Liu, X Cao, M Lin
arXiv preprint arXiv:2405.21018, 2024
13 · 2024
Articles 1–20