Zeyu Qin
Title · Cited by · Year
Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation
Z Qin, Y Fan, Y Liu, L Shen, Y Zhang, J Wang, B Wu
NeurIPS 2022, 2022
Cited by 88 · 2022
Random Noise Defense Against Query-Based Black-Box Attacks
Z Qin, Y Fan, H Zha, B Wu
NeurIPS 2021, 2021
Cited by 69 · 2021
Beyond factuality: A comprehensive evaluation of large language models as knowledge generators
L Chen, Y Deng, Y Bian, Z Qin, B Wu, TS Chua, KF Wong
EMNLP 2023, 2023
Cited by 45 · 2023
Revisiting Personalized Federated Learning: Robustness Against Backdoor Attacks
Z Qin, L Yao, D Chen, Y Li, B Ding, M Cheng
KDD 2023, 2023
Cited by 28 · 2023
Towards Stable Backdoor Purification through Feature Shift Tuning
R Min*, Z Qin*, L Shen, M Cheng
NeurIPS 2023, 2023
Cited by 27 · 2023
Imitation learning from imperfection: Theoretical justifications and algorithms
Z Li, T Xu, Z Qin, Y Yu, ZQ Luo
NeurIPS 2023 Spotlight, 2024
Cited by 12 · 2024
Improving Adversarial Training for Multiple Perturbations through the Lens of Uniform Stability
J Xiao, Z Qin, Y Fan, B Wu, J Wang, ZQ Luo
ICML 2023, The Workshop on New Frontiers in Adversarial Machine Learning, 2023
Cited by 10* · 2023
Step-on-feet tuning: Scaling self-alignment of LLMs via bootstrapping
H Wang, G Ma, Z Meng, Z Qin, L Shen, Z Zhang, B Wu, L Liu, Y Bian, T Xu, ...
arXiv preprint arXiv:2402.07610, 2024
Cited by 8 · 2024
Entropic distribution matching in supervised fine-tuning of LLMs: Less overfitting and better diversity
Z Li, C Chen, T Xu, Z Qin, J Xiao, R Sun, ZQ Luo
arXiv preprint arXiv:2408.16673, 2024
Cited by 4 · 2024
Leveraging Reasoning with Guidelines to Elicit and Utilize Knowledge for Enhancing Safety Alignment
H Wang*, Z Qin*, L Shen, X Wang, M Cheng, D Tao
arXiv preprint arXiv:2502.04040, 2025
Cited by 1 · 2025
Uncovering, Explaining, and Mitigating the Superficial Safety of Backdoor Defense
R Min*, Z Qin*, NL Zhang, L Shen, M Cheng
NeurIPS 2024 Spotlight, 2024
Cited by 1 · 2024
Adversarial Machine Learning Under the Black-Box Scenario
Z Qin
2022