Zhuoming Chen
Verified email at andrew.cmu.edu
Title · Cited by · Year
Specinfer: Accelerating large language model serving with tree-based speculative inference and verification
X Miao, G Oliaro, Z Zhang, X Cheng, Z Wang, Z Zhang, RYY Wong, A Zhu, ...
Proceedings of the 29th ACM International Conference on Architectural …, 2024
Cited by 157 · 2024
Quantized training of gradient boosting decision trees
Y Shi, G Ke, Z Chen, S Zheng, TY Liu
Advances in neural information processing systems 35, 18822-18833, 2022
Cited by 20 · 2022
TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding
H Sun, Z Chen, X Yang, Y Tian, B Chen
COLM 2024, 2024
Cited by 18 · 2024
Sequoia: Scalable, Robust, and Hardware-aware Speculative Decoding
Z Chen, A May, R Svirschevski, Y Huang, M Ryabinin, Z Jia, B Chen
NeurIPS 2024 (Spotlight), 2024
Cited by 17 · 2024
MagicDec: Breaking the Latency-Throughput Tradeoff for Long Context Generation with Speculative Decoding
J Chen, V Tiwari, R Sadhukhan, Z Chen, J Shi, IEH Yen, B Chen
arXiv preprint arXiv:2408.11049, 2024
Cited by 5 · 2024
SpecExec: Massively Parallel Speculative Decoding for Interactive LLM Inference on Consumer Devices
R Svirschevski, A May, Z Chen, B Chen, Z Jia, M Ryabinin
NeurIPS 2024, 2024
Cited by 3 · 2024
GNNPipe: Scaling Deep GNN Training with Pipelined Model Parallelism
J Chen, Z Chen, X Qian
HPCA 2025, 2023
Cited by 3* · 2023
MagicPIG: LSH Sampling for Efficient LLM Generation
Z Chen, R Sadhukhan, Z Ye, Y Zhou, J Zhang, N Nolte, Y Tian, M Douze, ...
arXiv preprint arXiv:2410.16179, 2024
Cited by 1 · 2024
Sirius: Contextual Sparsity with Correction for Efficient LLMs
Y Zhou, Z Chen, Z Xu, V Lin, B Chen
NeurIPS 2024, 2024
Cited by 1 · 2024
MINI-SEQUENCE TRANSFORMER: Optimizing Intermediate Memory for Long Sequences Training
C Luo, J Zhao, Z Chen, B Chen, A Anandkumar
NeurIPS 2024, 2024
Cited by 1 · 2024
Quark: A Gradient-Free Quantum Learning Framework for Classification Tasks
Z Zhang, Z Chen, H Huang, Z Jia
Cited by 1 · 2022
Articles 1–11