Hello! I am currently a Ph.D. student in the Kim Jaechul Graduate School of AI at KAIST, where I am fortunate to be advised by Jaegul Choo. I am serving as a Technical Research Personnel in the Republic of Korea Army and was previously a Research Intern at Naver Webtoon. My primary research interests lie at the intersection of natural language processing and machine learning, with a focus on information extraction and security. Prior to joining KAIST, I received my B.S. in Computer Science from the University of Illinois at Urbana-Champaign.
(* indicates equal contribution.)
Cross-Lingual Unlearning of Selective Knowledge in Multilingual Language Models
Minseok Choi, Kyunghyun Min, Jaegul Choo
Preprint
SNAP: Unlearning Selective Knowledge in Large Language Models with Negative Instructions
Minseok Choi, Daniel Rim, Dohyun Lee, Jaegul Choo
Preprint
PairEval: Open-domain Dialogue Evaluation with Pairwise Comparison
ChaeHun Park, Minseok Choi, Dohyun Lee, Jaegul Choo
COLM 2024
Protecting Privacy Through Approximating Optimal Parameters for Sequence Unlearning in Language Models
Dohyun Lee*, Daniel Rim*, Minseok Choi, Jaegul Choo
ACL 2024 Findings
SimCKP: Simple Contrastive Learning of Keyphrase Representations
Minseok Choi, Chaeheon Gwak, Seho Kim, Si hyeong Kim, Jaegul Choo
EMNLP 2023 Findings
PRiSM: Enhancing Low-Resource Document-Level Relation Extraction with Relation-Aware Score Calibration
Minseok Choi, Hyesu Lim, Jaegul Choo
IJCNLP-AACL 2023 Findings
HistRED: A Historical Document-Level Relation Extraction Dataset
Soyoung Yang, Minseok Choi, Youngwoo Cho, Jaegul Choo
ACL 2023
Rethinking Style Transformer by Energy-based Interpretation: Adversarial Unsupervised Style Transfer using a Pretrained Model
Hojun Cho, Dohee Kim, Seungwoo Ryu, ChaeHun Park, Hyungjong Noh, Jeong-in Hwang, Minseok Choi, Edward Choi, Jaegul Choo
EMNLP 2022
Full CV in PDF.