Hello! I am currently a Ph.D. student in the Kim Jaechul Graduate School of AI at KAIST, where I am fortunate to be advised by Jaegul Choo. I am serving as a Technical Research Personnel in the Republic of Korea Army and was previously a Research Intern at Naver Webtoon. My primary research interests lie at the intersection of natural language processing and machine learning, with a focus on privacy & safety and information extraction. Prior to joining KAIST, I received my B.S. in Computer Science from the University of Illinois Urbana-Champaign.
( * indicates equal contributions.)
Breaking Chains: Unraveling the Links in Multi-Hop Knowledge Unlearning
Minseok Choi, ChaeHun Park, Dohyun Lee, Jaegul Choo
Preprint
Opt-Out: Investigating Entity-Level Unlearning for Large Language Models via Optimal Transport
Minseok Choi, Daniel Rim, Dohyun Lee, Jaegul Choo
Preprint
Cross-Lingual Unlearning of Selective Knowledge in Multilingual Language Models
Minseok Choi, Kyunghyun Min, Jaegul Choo
EMNLP 2024 Findings
PairEval: Open-domain Dialogue Evaluation with Pairwise Comparison
ChaeHun Park, Minseok Choi, Dohyun Lee, Jaegul Choo
COLM 2024
Protecting Privacy Through Approximating Optimal Parameters for Sequence Unlearning in Language Models
Dohyun Lee*, Daniel Rim*, Minseok Choi, Jaegul Choo
ACL 2024 Findings
SimCKP: Simple Contrastive Learning of Keyphrase Representations
Minseok Choi, Chaeheon Gwak, Seho Kim, Si hyeong Kim, Jaegul Choo
EMNLP 2023 Findings
PRiSM: Enhancing Low-Resource Document-Level Relation Extraction with Relation-Aware Score Calibration
Minseok Choi, Hyesu Lim, Jaegul Choo
IJCNLP-AACL 2023 Findings
HistRED: A Historical Document-Level Relation Extraction Dataset
Soyoung Yang, Minseok Choi, Youngwoo Cho, Jaegul Choo
ACL 2023
Rethinking Style Transformer by Energy-based Interpretation: Adversarial Unsupervised Style Transfer using a Pretrained Model
Hojun Cho, Dohee Kim, Seungwoo Ryu, ChaeHun Park, Hyungjong Noh, Jeong-in Hwang, Minseok Choi, Edward Choi, Jaegul Choo
EMNLP 2022
Development and Application of Web‑based Machine Learning Program for Automated Assessment Model Generation
Minseok Choi, Jaegul Choo, Minsu Ha
Brain, Digital, & Learning, Vol. 12, No. 4, pp. 567-578, 2022
Full CV in PDF.