Clara Na
Language Technologies Institute, Carnegie Mellon University
Verified email at cs.cmu.edu
Title · Cited by · Year
Train Flat, Then Compress: Sharpness-Aware Minimization Learns More Compressible Models
C Na, SV Mehta, E Strubell
arXiv preprint arXiv:2205.12694, 2022
Cited by 19 · 2022
Energy and Carbon Considerations of Fine-Tuning BERT
X Wang, C Na, E Strubell, S Friedler, S Luccioni
arXiv preprint arXiv:2311.10267, 2023
Cited by 11 · 2023
To Build Our Future, We Must Know Our Past: Contextualizing Paradigm Shifts in Natural Language Processing
S Gururaja, A Bertsch, C Na, DG Widder, E Strubell
arXiv preprint arXiv:2310.07715, 2023
Cited by 9 · 2023
The Framework Tax: Disparities Between Inference Efficiency in NLP Research and Deployment
J Fernandez, J Kahn, C Na, Y Bisk, E Strubell
arXiv preprint arXiv:2302.06117, 2023
Cited by 3 · 2023
Scalable Data Ablation Approximations for Language Models through Modular Training and Merging
C Na, I Magnusson, AH Jha, T Sherborne, E Strubell, J Dodge, P Dasigi
arXiv preprint arXiv:2410.15661, 2024
2024
Less Is More? In Patents, Design Transformations that Add Occur More Often Than Those that Subtract
K Stenger, C Na, L Klotz
Design Computing and Cognition’20, 283-295, 2022
2022