Clara Na
Language Technologies Institute, Carnegie Mellon University
Train Flat, Then Compress: Sharpness-Aware Minimization Learns More Compressible Models
C Na, SV Mehta, E Strubell
arXiv preprint arXiv:2205.12694, 2022
To Build Our Future, We Must Know Our Past: Contextualizing Paradigm Shifts in Natural Language Processing
S Gururaja, A Bertsch, C Na, DG Widder, E Strubell
arXiv preprint arXiv:2310.07715, 2023
The Framework Tax: Disparities Between Inference Efficiency in Research and Deployment
J Fernandez, J Kahn, C Na, Y Bisk, E Strubell
arXiv preprint arXiv:2302.06117, 2023
Energy and Carbon Considerations of Fine-Tuning BERT
X Wang, C Na, E Strubell, S Friedler, S Luccioni
arXiv preprint arXiv:2311.10267, 2023
Less Is More? In Patents, Design Transformations that Add Occur More Often Than Those that Subtract
K Stenger, C Na, L Klotz
Design Computing and Cognition’20, 283-295, 2022