| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Towards Continual Knowledge Learning of Language Models | J Jang, S Ye, S Yang, J Shin, J Han, G Kim, SJ Choi, M Seo | ICLR 2022 | 88 | 2022 |
| TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models | J Jang*, S Ye*, C Lee, S Yang, J Shin, J Han, G Kim, M Seo | EMNLP 2022 | 50 | 2022 |
| Knowledge Unlearning for Mitigating Privacy Risks in Language Models | J Jang, D Yoon, S Yang, S Cha, M Lee, L Logeswaran, M Seo | ACL 2023 | 49 | 2023 |
| Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2 | H Ivison, Y Wang, V Pyatkin, N Lambert, M Peters, P Dasigi, J Jang, ... | arXiv preprint arXiv:2311.10702 | 44 | 2023 |
| Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts | J Jang*, S Ye*, M Seo | NeurIPS 2022 Workshop on Transfer Learning for NLP (TL4NLP) | 44 | 2022 |
| Exploring the Benefits of Training Expert Language Models over Instruction Tuning | J Jang, S Kim, S Ye, D Kim, L Logeswaran, M Lee, K Lee, M Seo | ICML 2023 | 35 | 2023 |
| Prometheus: Inducing Fine-grained Evaluation Capability in Language Models | S Kim, J Shin, Y Cho, J Jang, S Longpre, H Lee, S Yun, S Shin, S Kim, ... | ICLR 2024 | 31* | 2024 |
| Guess the Instruction! Making Language Models Stronger Zero-Shot Learners | S Ye, D Kim, J Jang, J Shin, M Seo | ICLR 2023 | 27* | 2023 |
| Sequential Targeting: A Continual Learning Approach for Data Imbalance in Text Classification | J Jang, Y Kim, K Choi, S Suh | Expert Systems with Applications 179, 115067 | 25* | 2021 |
| The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning | S Kim, SJ Joo, D Kim, J Jang, S Ye, J Shin, M Seo | EMNLP 2023 | 24 | 2023 |
| Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging | J Jang, S Kim, BY Lin, Y Wang, J Hessel, L Zettlemoyer, H Hajishirzi, ... | arXiv preprint arXiv:2310.11564 | 22 | 2023 |
| Supervised Health Stage Prediction Using Convolutional Neural Networks for Bearing Wear | S Suh, J Jang, S Won, MS Jha, YO Lee | Sensors 20 (20), 5846 | 22 | 2020 |
| Fixed Input Parameterization for Efficient Prompting | E Choi, Y Jo, J Jang, J Jang, M Seo | ACL 2023 Findings | 17* | 2023 |
| Efficiently Enhancing Zero-Shot Performance of Instruction Following Model via Retrieval of Soft Prompt | S Ye, J Jang, D Kim, Y Jo, M Seo | EMNLP 2023 Findings, 12288 … | 11* | 2023 |
| Continually Updating Generative Retrieval on Dynamic Corpora | S Yoon, C Kim, H Lee, J Jang, M Seo | arXiv preprint arXiv:2305.18952 | 3 | 2023 |
| Music2Video: Automatic Generation of Music Video with Fusion of Audio and Text | Y Kim*, J Jang*, S Shin* | arXiv preprint arXiv:2201.03809 | 3 | 2022 |
| LangBridge: Multilingual Reasoning Without Multilingual Supervision | D Yoon, J Jang, S Kim, S Kim, S Shafayat, M Seo | arXiv preprint arXiv:2401.10695 | 2 | 2024 |
| How Well Do Large Language Models Truly Ground? | H Lee, S Joo, C Kim, J Jang, D Kim, KW On, M Seo | NAACL 2024 | 2 | 2024 |
| Improving Probability-based Prompt Selection Through Unified Evaluation and Analysis | S Yang, J Kim, J Jang, S Ye, H Lee, M Seo | TACL 2024 | 2 | 2024 |
| Gradient Ascent Post-training Enhances Language Model Generalization | D Yoon*, J Jang*, S Kim, M Seo | ACL 2023 | 1 | 2023 |