International Conference

  1. Seongtae Hong, Seungyoon Lee, Hyeonseok Moon, and Heuiseok Lim. 2025. MIGRATE: Cross-Lingual Adaptation of Domain-Specific LLMs through Code-Switching and Embedding Transfer. In Proceedings of the 31st International Conference on Computational Linguistics, pages 9184–9193, Abu Dhabi, UAE. Association for Computational Linguistics.
  2. Kim, D., Lee, S., Kim, Y., Rutherford, A., & Park, C. (2025, January). Representing the Under-Represented: Cultural and Core Capability Benchmarks for Developing Thai Large Language Models. In Proceedings of the 31st International Conference on Computational Linguistics (pp. 4114-4129).
  3. Dahyun Kim, Yungi Kim, Wonho Song, Hyeonwoo Kim, Yunsu Kim, Sanghoon Kim, and Chanjun Park. 2025. sDPO: Don’t Use Your Data All at Once. In Proceedings of the 31st International Conference on Computational Linguistics: Industry Track, pages 366–373, Abu Dhabi, UAE. Association for Computational Linguistics.
  4. Jihoo Kim, Wonho Song, Dahyun Kim, Yunsu Kim, Yungi Kim, and Chanjun Park. 2024. Evalverse: Unified and Accessible Library for Large Language Model Evaluation. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 25–33, Miami, Florida, USA. Association for Computational Linguistics.
  5. Hyeonwoo Kim, Gyoungjin Gim, Yungi Kim, Jihoo Kim, Byungju Kim, Wonseok Lee, and Chanjun Park. 2024. SAAS: Solving Ability Amplification Strategy for Enhanced Mathematical Reasoning in Large Language Models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track, pages 186–198, Miami, Florida, USA. Association for Computational Linguistics.
  6. Seo, J., & Park, J. How To Mitigate Hallucinations with Multilingual Translation
  7. Shim, G., & Aiyanyo, I. D. Domain-Aware Router: Bridging Domain-Specific Models and Model Ensembles for Efficient and Scalable Performance
  8. Jang, Y., & Hur, Y. An Analysis of Korean Language Proficiency of Gemma 2B Models Using KBS Korean Language Proficiency Test
  9. Kim, D., & So, A. Unveiling Code Region: Identifying Critical Parameters for Coding Abilities in LLMs
  10. Moon, H., & So, A. Korean Penalty of Large Language Models Derived by the Tokenizer
  11. Lee, J., & Park, K. Exploring Language Transfer Techniques in Large Language Models
  12. Jang, Y., & Lee, T. Boosting Korean Embedding Performance via Pre-training with Accessible Data
  13. Park, C., & Lim, H. Rethinking Retriever Evaluation in Retrieval-Augmented Generation: A Document- and Word-Level Analysis
  14. Chun, Y., & Hur, Y. Comparative Analysis of Techniques for Locating Knowledge Editing in Language Models: Integrated Gradients vs. Causal Tracing
  15. Kang, M., & Park, K. Educational subject classification with hierarchical information utilizing Large Language Models
  16. Son, J., & Lee, T. Building Retrieval Benchmarks using Retrieval Augmented Generation
  17. Lee, S., & Hur, Y. Linearized Embedding Transfer in Multilingual Large Language Model
  18. Koo, S., Yang, Y., & Park, J. Examining the Ability of Large Language Model on Entity-Based Question Answering
  19. Kim, M., & Lim, H. Partial Quantization: Improving Text Generation over Uniform Quantization
  20. Son, S., & So, A. Improving Empathetic Response Generation in LLMs Using Direct Preference Optimization
  21. Lee, J., & Park, K. A Time-Sensitive Temporal Knowledge Editing Benchmark
  22. Jung, D., & Park, J. Detailed Error Detection in Machine Translation Using Large Language Models
  23. Hong, S., & Park, K. Lexical-Based Embedding Transfer for Enhancing Jeju Dialect to English Translation
  24. Eo, S., & Park, J. Enhancing Efficiency in Large Language Model Ensemble
  25. Kim, J., & So, A. Investigating the Korean Question-Answering Capability of Large Language Models through Query Perturbation
  26. Hong, S., Shin, J., Seo, J., Lee, T., Park, J., Young, C., … & Lim, H. S. (2024, November). Intelligent Predictive Maintenance RAG framework for Power Plants: Enhancing QA with StyleDFS and Domain Specific Instruction Tuning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track (pp. 805-820).
  27. Hyeonseok Moon, Seungyoon Lee, SeongTae Hong, Seungjun Lee, Chanjun Park, and Heuiseok Lim. 2024. Translation of Multifaceted Data without Re-Training of Machine Translation Systems. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 2088–2108, Miami, Florida, USA. Association for Computational Linguistics.
  28. Koo, S., Kim, J., Park, C., & Lim, H. S. (2024, November). Search if you don’t know! Knowledge-Augmented Korean Grammatical Error Correction with Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2024 (pp. 96-125).
  29. Kim, J., Koo, S., & Lim, H. S. (2024, November). PANDA: Persona Attributes Navigation for Detecting and Alleviating Overuse Problem in Large Language Models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 12005-12026).
  30. Koo, S., Kim, J., Jang, Y., Park, C., & Lim, H. S. (2024, November). Where am I? Large Language Models Wandering between Semantics and Structures in Long Contexts. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 14144-14160).
  31. Kim, J., Koo, S., & Lim, H. S. (2024). Revisiting Under-represented Knowledge of Latin American Literature in Large Language Models. In The 27th European Conference on Artificial Intelligence (ECAI 2024, Main Track, Oral).
  32. Lee, J., Moon, H., Lee, S., Park, C., Eo, S., Ko, H., … & Lim, H. S. (2024, August). Length-aware Byte Pair Encoding for Mitigating Over-segmentation in Korean Machine Translation. In Findings of the Association for Computational Linguistics: ACL 2024 (pp. 2287-2303).
  33. Jung, D., Eo, S., & Lim, H. S. (2024, August). Towards Precise Localization of Critical Errors in Machine Translation. In Findings of the Association for Computational Linguistics: ACL 2024 (pp. 3000-3012).
  34. Seo, J., Lee, J., Park, C., Hong, S., Lee, S., & Lim, H. S. (2024, August). KoCommonGEN v2: A benchmark for navigating Korean commonsense reasoning challenges in large language models. In Findings of the Association for Computational Linguistics: ACL 2024 (pp. 2390-2415).
  35. Jung, D., Eo, S., Park, C., & Lim, H. S. (2024, June). Explainable CED: A Dataset for Explainable Critical Error Detection in Machine Translation. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop) (pp. 25-35).
  36. Lee, S., Kim, D., Jung, D., Park, C., & Lim, H. S. (2024, June). Exploring Inherent Biases in LLMs within Korean Social Context: A Comparative Analysis of ChatGPT and GPT-4. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop) (pp. 93-104).
  37. Eo, S., Lim, J., Park, C., Jung, D., Koo, S., Moon, H., … & Lim, H. S. (2024, May). Detecting Critical Errors Considering Cross-Cultural Factors in English-Korean Translation. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) (pp. 4705-4716).
  38. Lee, S., Park, C., Jung, D., Moon, H., Seo, J., Eo, S., & Lim, H. S. (2024, May). Leveraging Pre-existing Resources for Data-Efficient Counter-Narrative Generation in Korean. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) (pp. 10380-10392).
  39. Lee, D., Park, E., Lee, H., & Lim, H. S. (2024, March). Ask, Assess, and Refine: Rectifying Factual Consistency and Hallucination in LLMs with Metric-Guided Feedback Learning. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 2422-2433).
  40. Park, C., Seo, J., Lee, S., Son, J., Moon, H., Eo, S., … & Lim, H. S. (2024, March). Hyper-BTS Dataset: Scalability and Enhanced Analysis of Back TranScription (BTS) for ASR Post-Processing. In Findings of the Association for Computational Linguistics: EACL 2024 (pp. 67-78).
  41. Moon, H., Lee, J., Eo, S., Park, C., Seo, J., & Lim, H. S. (2024, March). Generative Interpretation: Toward Human-Like Evaluation for Educational Question-Answer Pair Generation. In Findings of the Association for Computational Linguistics: EACL 2024 (pp. 2185-2196).
  42. Lim, J., Ahn, S., & Park, C. Utilization of Image-Text Embeddings for Audio-Visual Scene Dialogue
  43. Lee, S., Jung, D., & Hur, Y. A Study on Dialogue Evaluation Metrics Utilizing Large Language Models.
  44. Seo, J., & Lim, H. Unveiling the Blind Spots: Evaluating Large Language Models in Korean Commonsense
  45. Son, S., Kang, M., & Jo, J. Exploring Prefix-Tuning-Based Models for Open-Ended Knowledge Tracing.
  46. Kang, M., Lee, J., Lee, S., Hong, S., & Park, J. CheckLLM-Ko: Prompting strategy for constructing Korean LLM written document detecting dataset.
  47. Kim, J., & Aiyanyo, I. D. Which is Better Training for Reward Model, by Ranking or Classification?
  48. Hong, S., Lee, S., Ahn, S., & Lim, H. Enhancing Translation Performance with SuperICL4Gen.
  49. Lee, S., Park, C., & Lim, H. Evaluating the toxicity of ChatGPT for Gender and Race in Korean.
  50. Moon, H., Seo, J., Eo, S., Park, C., & Lim, H. Hallucination of Large Language Models Derived by the Colloquial Style
  51. Son, J., Kim, J., Lim, J., Jang, Y., & So, A. Identifying Bridging Entity and Its Context using Dense Retriever.
  52. Jang, Y., & Park, K. Extracting Persona Triples in Utterances
  53. Jung, D., & Kim, G. Prompt-based Fine-tuning Method for English-Korean Critical Error Detection.
  54. Lee, J., Aiyanyo, I. D., & Yang, Y. Improving TOEIC Problems Solving Model Performance through Data Augmentation using WordNet.
  55. Koo, S., Park, C., & So, A. Exploring the Potential of Large Language Model for Korean Grammatical Error Correction.
  56. Eo, S., Ahn, S., & Park, J. A Study on a Large Language Model’s Ability to Solve Riddles
  57. Chun, C., & Lim, H. Active Learning Enhanced by LLMs: Empirical Strategies to Rectify Intent Classifier Discrepancies
  58. Lee, J., Son, J., Lee, T., Park, C., Kang, M., & Hur, Y. Assessing the Retrieval-based Generation Capabilities of Large Language Models: A Call for a New Benchmark.
  59. Lee, J., Seo, J., Jung, D., & Kim, G. Genuine Knowledge Prediction for Commonsense Knowledge Transfer.
  60. Kim, J., Kim, G., Jo, J., & Park, K. Examining Zero-shot Relation Extraction in Korean Language using a Large-scale Language Model: A Comparative Analysis.
  61. Lim, J., Kang, M., Kim, J., Kim, J., Hur, Y., & Lim, H. S. (2023, December). Beyond Candidates: Adaptive Dialogue Agent Utilizing Persona and Knowledge. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 7950-7963).
  62. Son, J., Kim, J., Lim, J., Jang, Y., & Lim, H. S. (2023, December). Explore the Way: Exploring Reasoning Path by Bridging Entities for Effective Cross-Document Relation Extraction. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 6755-6761).
  63. Chun, C., Lee, S., Seo, J., & Lim, H. S. (2023, December). CReTIHC: Designing Causal Reasoning Tasks about Temporal Interventions and Hallucinated Confoundings. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 10334-10343).
  64. Koo, S., Park, C., Kim, J., Seo, J., Eo, S., Moon, H., & Lim, H. S. (2023, December). KEBAP: Korean Error Explainable Benchmark Dataset for ASR and Post-processing. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 4798-4815).
  65. Jaehyung Seo, Hyeonseok Moon, Jaewook Lee, Sugyeong Eo, Chanjun Park, and Heuiseok Lim. 2023. CHEF in the Language Kitchen: A Generative Data Augmentation Leveraging Korean Morpheme Ingredients. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 6014–6029, Singapore. Association for Computational Linguistics.
  66. Yoonna Jang, Suhyune Son, Jeongwoo Lee, Junyoung Son, Yuna Hur, Jungwoo Lim, Hyeonseok Moon, Kisu Yang, and Heuiseok Lim. 2023. Post-hoc Utterance Refining Method by Entity Mining for Faithful Knowledge Grounded Conversations. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 4844–4861, Singapore. Association for Computational Linguistics.
  67. Lee, S., Jung, D., Park, C., Lee, S., & Lim, H. (2023, December). Alternative Speech: Complementary Method to Counter-Narrative for Better Discourse. In 2023 IEEE International Conference on Data Mining Workshops (ICDMW) (pp. 1438-1442). IEEE.
  68. Jung, D., Eo, S., Park, C., Moon, H., Seo, J., & Lim, H. S. (2023, November). Informative Evidence-guided Prompt-based Fine-tuning for English-Korean Critical Error Detection. In Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 344-358).
  69. Seungjun Lee, Jaewook Lee, Jaehyung Seo, Heuiseok Lim (Korea University): Phone Scam Detection Using Large Language Model
  70. Chanjun Park, Sugyeong Eo, Yoonna Jang, Jungseob Lee, Aram So, Heuiseok Lim (Korea University): Enhancing Relation Extraction with Triple Representation
  71. Lee, S., Moon, H., Park, C., & Lim, H. (2023). Data-Driven Approach for Formality-Sensitive Machine Translation: Language-Specific Handling and Synthetic Data Generation. arXiv preprint arXiv:2306.14514.
  72. Jung, D., Seo, J., Lee, J., Park, C., & Lim, H. (2023). Knowledge Graph-Augmented Korean Generative Commonsense Reasoning. arXiv preprint arXiv:2306.14470.
  73. Koo, S., Park, C., Kim, J., Seo, J., Eo, S., Moon, H., & Lim, H. (2024). Toward Practical Automatic Speech Recognition and Post-Processing: a Call for Explainable Error Benchmark Guideline. arXiv preprint arXiv:2401.14625.
  74. Park, C., Koo, S., Lee, S., Seo, J., Eo, S., Moon, H., & Lim, H. (2023). Synthetic Alone: Exploring the Dark Side of Synthetic Data for Grammatical Error Correction. arXiv preprint arXiv:2306.14377.
  75. Seungjun Lee, Hyeonseok Moon, Chanjun Park, and Heuiseok Lim. 2023. Improving Formality-Sensitive Machine Translation Using Data-Centric Approaches and Prompt Engineering. In Proceedings of the 20th International Conference on Spoken Language Translation (IWSLT 2023), pages 420–432, Toronto, Canada (in-person and online). Association for Computational Linguistics.
  76. Lee, S., Jang, Y., Park, C., Lee, J., Seo, J., Moon, H., … & Lim, H. S. (2023, July). PEEP-Talk: A Situational Dialogue-based Chatbot for English Education. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations) (pp. 190-207).
  77. Eo, S., Moon, H., Kim, J., Hur, Y., Kim, J., Lee, S., … & Lim, H. (2023). Towards diverse and effective question-answer pair generation from children storybooks. arXiv preprint arXiv:2306.06605.
  78. Koo, S., Lim, J., Son, S., Hur, Y., & Lim, H. Retrieval-based Question-Answer Generation
  79. Moon, H., Kim, J., Park, K., Park, J., & Lim, H. An Empirical Study on the Length Granularity in Long-Form Question Answering
  80. Lim, J., Kang, M., Hur, Y., Jung, S., Kim, J., Jang, Y., … & Lim, H. (2023). You truly understand what i need: Intellectual and friendly dialogue agents grounding knowledge and persona. arXiv preprint arXiv:2301.02401.
  81. Lim, J., & Park, J. A Method of Graph Construction for Dialogue Relation Extraction
  82. Lee, J., Moon, H., & Park, J. Text-only Keyword Recommendation: No Longer Have Cold-Start Problem
  83. Kang, M., Lee, J., Lee, S., Moon, H., Park, C., & So, A. Masked language modeling-based Korean Data Augmentation Techniques Using Label Correction
  84. Chun, C., & Lim, H. Joint learning method using intent classification and spacing to improve Korean SLU
  85. Oh, D., Kim, S., & Lim, H. Guidance by Semantic Search for Contrastive Learning of Sentence Embeddings
  86. Seo, J., & Park, K. BERT Can Evaluate Korean Commonsense Reasoning
  87. Lee, S., Seo, J., Lee, J., Kang, M., Moon, H., Lee, J., & Lim, H. You Need More Data? Data Augmentation using Retrieval Technique
  88. Son, S., & So, A. A Comparative Study on Cross-Lingual Post-Training (XPT) with Korean
  89. Lee, J., & Aiyanyo, I. D. Data Augmentation Schemes For Machine Reading Comprehension
  90. Son, J., Kim, G., Kim, J., & So, A. An Analysis of Korean Named Entity Recognition System using MLM-based Language Transfer Learning
  91. Kim, G., Kim, J., Son, J., Jo, K., & Yang, Y. Optimized Trigger Generation Strategy for Dialogue Relation Extraction Task
  92. Kim, J., Kim, G., Son, J., & Park, K. Domain-specific Korean Relation Extraction methodology using Prompt with Meta-Information
  93. Jang, Y., & Park, K. A Study on Soft Prompt-based Few-shot Persona Dialogue Generation
  94. Jung, D., Lee, S., Lee, S., Seo, J., Eo, S., Park, C., & Titi. A Dataset for Korean Graph-to-Text Generation
  95. Lee, S., Son, S., Jung, D., & Park, C. Efficient Way for Constructing Hate Speech-Counter Narrative Dataset
  96. Moon, H., Lee, J., Seo, J., Eo, S., Park, C., & Park, J. Fortune Telling via Deep Learning based Generation Model
  97. Eo, S., Park, C., & Lim, H. Prompt-based Learning for English-German Critical Translation Error Detection
  98. Koo, S., Park, C., Moon, H., Seo, J., Eo, S., & Lim, H. Automatic Generation of Learning Data for Korean Speech Recognition Post-Processor
  99. Lee, J., & Hur, Y. Korean Commonsense Knowledge Graph based on GPT-3
  100. Park, C., Jang, Y., Lee, S., Seo, J., Yang, K., & Lim, H. S. (2022, November). PicTalky: Augmentative and alternative communication for language developmental disabilities. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing: System Demonstrations (pp. 17-27).
  101. Eo, S., Park, C., Moon, H., Seo, J., & Lim, H. S. (2022, December). KU X Upstage's submission for the WMT22 quality estimation: Critical error detection shared task. In Proceedings of the Seventh Conference on Machine Translation (WMT) (pp. 606-614).
  102. Eo, S., Park, C., Moon, H., Seo, J., Kim, G., Lee, J., & Lim, H. (2022). QUAK: A synthetic quality estimation dataset for Korean-English neural machine translation. arXiv preprint arXiv:2209.15285.
  103. Lee, S., Lee, J., Park, C., Eo, S., Moon, H., Seo, J., … & Lim, H. S. (2022, October). Focus on FoCus: Is FoCus focused on Context, Knowledge and Persona? In Proceedings of the 1st Workshop on Customized Chat Grounding Persona and Knowledge (pp. 1-8).
  104. Oh, D., Kim, Y., Lee, H., Huang, H. H., & Lim, H. (2022). Don’t Judge a Language Model by Its Last Layer: Contrastive Learning with Layer-Wise Attention Pooling. arXiv preprint arXiv:2209.05972.
  105. Son, J., Kim, J., Lim, J., & Lim, H. (2022). GRASP: Guiding model with RelAtional semantics using prompt for dialogue relation extraction. arXiv preprint arXiv:2208.12494.
  106. Kim, G., Kim, J., Son, J., & Lim, H. (2022). KoCHET: A Korean cultural heritage corpus for entity-related tasks. arXiv preprint arXiv:2209.00367.
  107. Jaehyung Seo, Seounghoon Lee, Chanjun Park, Yoonna Jang, Hyeonseok Moon, Sugyeong Eo, Seonmin Koo, and Heuiseok Lim. 2022. A Dog Is Passing Over The Jet? A Text-Generation Dataset for Korean Commonsense Reasoning and Evaluation. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 2233–2249, Seattle, United States. Association for Computational Linguistics.
  108. Park, C., Lee, S., Seo, J., Moon, H., Eo, S., & Lim, H. S. (2022, June). Priming ancient Korean neural machine translation. In Proceedings of the Thirteenth Language Resources and Evaluation Conference (pp. 22-28).
  109. Jang, Y., Lim, J., Hur, Y., Oh, D., Son, S., Lee, Y., … & Lim, H. (2022, June). Call for customized conversation: Customized conversation grounding persona and knowledge. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 36, No. 10, pp. 10803-10812).
  110. Moon, H., Park, C., Lee, S., Seo, J., Lee, J., Eo, S., & Lim, H. S. (2022, June). Empirical Analysis of Noising Scheme based Synthetic Data Generation for Automatic Post-editing. In Proceedings of the Thirteenth Language Resources and Evaluation Conference (pp. 883-891).
  111. Jung, H., Jang, Y., Kim, S., & Kim, H. (2022, February). KPCR: Knowledge graph enhanced personalized course recommendation. In Australasian Joint Conference on Artificial Intelligence (pp. 739-750). Cham: Springer International Publishing.
  112. Moon, H., Park, C., Eo, S., Seo, J., Lee, S., & Lim, H. (2021). A Self-Supervised Automatic Post-Editing Data Generation Tool. arXiv preprint arXiv:2111.12284.
  113. Park, C., Lee, S., Moon, H., Eo, S., Seo, J., & Lim, H. (2021). How should human translation coexist with NMT? Efficient tool for building high quality parallel corpus. arXiv preprint arXiv:2111.00191.
  114. Eo, S., Park, C., Seo, J., Moon, H., & Lim, H. (2021). A new tool for efficiently generating quality estimation datasets. arXiv preprint arXiv:2111.00767.
  115. Seo, J., Park, C., Eo, S., Moon, H., & Lim, H. (2021). Automatic knowledge augmentation for generative commonsense reasoning. arXiv preprint arXiv:2111.00192.
  116. Lee, S., Yang, K., Park, C., Sedoc, J., & Lim, H. (2021). Syntax-enhanced dialogue summarization using syntax-aware information. In Proc. NIPS (pp. 19-39).
  117. Lee, D., Lim, J., Whang, T., Lee, C., Cho, S., Park, M., & Lim, H. S. (2021, November). Capturing speaker incorrectness: speaker-focused post-correction for abstractive dialogue summarization. In Proceedings of the Third Workshop on New Frontiers in Summarization (pp. 65-73).
  118. Lee, S., Yang, K., Park, C., Sedoc, J., & Lim, H. (2021). Towards Syntax-Aware Dialogue Summarization using Multi-task Learning. In Workshop on Widening NLP at EMNLP.
  119. Park, C., Park, S., Lee, S., Whang, T., & Lim, H. S. (2021, November). Two heads are better than one? verification of ensemble effect in neural machine translation. In Proceedings of the Second Workshop on Insights from Negative Results in NLP (pp. 23-28).
  120. Eo, S., Park, C., Moon, H., Seo, J., & Lim, H. S. (2021, August). Dealing with the paradox of quality estimation. In Proceedings of the 4th Workshop on Technologies for MT of Low Resource Languages (LoResMT2021) (pp. 1-10).
  121. Park, C., Eo, S., Moon, H., & Lim, H. S. (2021, June). Should we find another model?: Improving neural machine translation performance with ONE-piece tokenization method without model modification. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Papers (pp. 97-104).
  122. Park, C., Jang, Y., Lee, S., Park, S., & Lim, H. (2021). FreeTalky: Don’t Be Afraid! Conversations Made Easier by a Humanoid Robot using Persona-based Dialogue. arXiv preprint arXiv:2112.04126.
  123. Park, C., Park, K., & Lim, H. Data Augmentation Method for Korean Neural Machine Translation.
  124. Han, S., & Lim, H. Proposal of Punctuation Mark Filling Task with BERT-based Model
  125. Jang, Y., & Lim, H. Persona-aware Language Models
  126. Kim, G., & Lim, H. Korean Language Modeling for Speaker Classification
  127. Yang, K., Oh, D., & Lim, H. An Analysis of KoBERT’s Attention
  128. Oh, D., Yang, K., & Lim, H. Probing Semantic Relations in BERT
  129. Lee, S., Hur, Y., & Lim, H. Poly-encoder based COVID-19 Question Answering System with Persona
  130. Eo, S., Park, C., & Lim, H. Design Neural Machine Translation Model Combining External Symbolic Knowledge in translation task
  131. Lim, J., & Lim, H. The method of Graph Integration using AMR graph and ConceptNet Graph
  132. Lee, C., & Lim, H. Cross-Lingual Transfer Learning RoBERTa from English to Korean
  133. Park, S., & Lim, H. Object-level Non-local operation for Visual Dialog
  134. Park, C., & Lim, H. Domain-specialize Neural Machine Translation Methodology
  135. Seo, J., Oh, D., Eo, S., & Lim, H. Semi-supervised GPT2 for Measuring Similarity in Document
  136. Park, J., & Lim, H. Autonomously Growing Multi-Domain Task-Oriented Dialog System Using Meta-Learned Knowledge Base
  137. Hur, Y., Oh, D., Lee, S., So, A., & Lim, H. How Important is Special Token in Relation Extraction
  138. Lim, J., Oh, D., Jang, Y., Yang, K., & Lim, H. (2020). I know what you asked: Graph path learning using AMR for commonsense reasoning. arXiv preprint arXiv:2011.00766.
  139. Jeong, J., Kim, S., Gil, J., Chung, K., & Yu, H. Workload Processing Method Considering Priority Based on Hot Standby System in RSU.
  140. Kim, S., Süsstrunk, S., & Salzmann, M. (2020). Volumetric transformer networks. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXVIII 16 (pp. 561-578). Springer International Publishing.
  141. Lee, S., & Sedoc, J. (2020, December). Using the poly-encoder for a COVID-19 question answering system. In Proceedings of the 1st Workshop on NLP for COVID-19 (Part 2) at EMNLP 2020.
  142. Park, C., Kim, Y., Jang, Y., Umadevi, G. R., & Lim, H. PicTalky: Deep learning-based speech therapy platform for developmental disabilities
  143. Yook, D., Leem, S. G., Lee, K., & Yoo, I. C. (2020, November). Many-to-Many Voice Conversion Using Cycle-Consistent Variational Autoencoder with Multiple Decoders. In Odyssey (pp. 215-221).
  144. Park, C., Oh, Y., Choi, J., Kim, D., & Lim, H. (2020, October). Toward high quality parallel corpus using monolingual corpus. In The 10th international conference on convergence technology (ICCT 2020) (pp. 146-147).
  145. Park, C., Park, S., & Lim, H. (2020). Self-supervised Korean spelling correction via denoising transformer. In Proceedings of the 2023 International Conference on Information, System and Convergence Applications.
  146. Lee, D., Shin, M., Whang, T., Cho, S., Ko, B., Lee, D., … & Jo, J. (2020). Reference and document aware semantic evaluation methods for Korean language summarization. arXiv preprint arXiv:2005.03510.
  147. Lee, S., Ko, B., Lee, K., Yoo, I. C., & Yook, D. (2020, May). Many-to-many voice conversion using conditional cycle-consistent adversarial networks. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 6279-6283). IEEE.
  148. Park, C., Park, K., & Lim, H. Resource Management and Preprocessing in Korean Neural Machine Translation Models.
  150. Kim, G., & Lim, H. Named Entity Recognition system of Cybersecurity Using a Deep Bi-LSTM-CRF Network
  151. Whang, T., Cha, D., & Lim, H. Metonymy Detection with Deep Neural Networks
  152. Hur, Y., Lee, C., Kim, G., Park, K., & Lim, H. An Analytical Framework for Automatically Extracting Formal Information from Unstructured Security Intelligence Report
  153. Yang, K., Park, J., & Kang, J. Knowledge Transfer via Layer-wise Distillation of Deep Contextualized Representations
  154. Park, C., Choi, J., Park, S., Kim, Y., & Lim, H. Free Talking NAO Robot for Children
  155. Lee, C., Lee, C., Kim, M., & Lim, H. Achieving Super-Human Performance in Politeness Classification
  156. Lim, J., Jang, Y., Umadevi, G. R., & Lim, H. Sentence BERT Embedding on Hyperpartisan News Classification Task
  157. Han, S., Yang, K., & Lim, H. A Study on Building Reliable Corpus for Punctuation and Quotation Mark Filling Task
  158. Choi, J., Lim, H., & Jeon, H. Gender-based Dementia risk factor analysis using a Longitudinal Cohort data
  159. Oh, D., & Lim, H. Word Sense Disambiguation with Representations of Context-Sensitive Meaning using Knowledge Graphs
  160. Park, C., Choi, S., & Lim, H. Improved Machine Translation Performance Using Two-Stage Subword Tokenization
  161. So, A., Wulansari, N., Park, K., & Lim, H. A study on the slot filling method for the Korean restaurant reservation system
  162. Park, C., & Kang, J. Improved Machine Translation Performance Using Relative Ratios of Original and Synthetic Corpus
  163. Lee, S., Jwa, H., Park, K., & Lim, H. EyeBERT: Eye tracking based Human Reading for Text Summarization
  164. Park, C., Choi, S., & Lim, H. Quality before Quantity: Improved machine translation performance using parallel corpus filtering
  165. Park, R., & Cho, J. Intelligence e-Assessment Model based on Face Recognition on Online Learning System
  166. Cho, J., Park, H., Kim, S., Lee, K., & Jung, S. A study of students’ perceptions and Improvement towards level-differentiated Creative Design curriculum in the university
  167. Park, J., Yang, K., & Kang, J. Artificial Intelligence based Verbal speech recognition: beyond Acoustic noises and overparameterization
  168. Hooshyar, D., Lim, H., Pedaste, M., Yang, K., Fathi, M., & Yang, Y. (2019). AutoThinking: An adaptive computational thinking game. In Innovative Technologies and Learning: Second International Conference, ICITL 2019, Tromsø, Norway, December 2–5, 2019, Proceedings 2 (pp. 381-391). Springer International Publishing.
  169. Whang, T., Lee, D., Lee, C., Yang, K., Oh, D., & Lim, H. (2019). An effective domain adaptive post-training method for bert in response selection. arXiv preprint arXiv:1908.04812.
  170. Whang, T., Lee, D., Lee, C., & Lim, H. (2019). Enhanced Sequential Representation Augmented with Utterance-level Attention for Response Selection.
  171. Hwang, S., Choi, H., & Yu, H. (2019, November). Implementation of low-latency message delivery for serverless based workflow. In 2019 IEEE 16th International Conference on Mobile Ad Hoc and Sensor Systems Workshops (MASSW) (pp. 170-171). IEEE.
  172. Kwon, M., & Yu, H. (2019, October). Performance improvement of ordering and endorsement phase in hyperledger fabric. In 2019 sixth international conference on internet of things: systems, management and security (IOTSMS) (pp. 428-432). IEEE.
  173. Lee, J., Kang, J., & Yu, H. (2019, October). NIOECM: A Network I/O Event Control Mechanism to Provide Fairness of Network Performance Among VMs with Same Resource Configuration in Virtualized Environment. In International Conference on Internet and Distributed Computing Systems (pp. 271-283). Cham: Springer International Publishing.
  174. Yang, K., Lee, D., Whang, T., Lee, S., & Lim, H. (2019). Emotionx-ku: Bert-max based contextual emotion classifier. arXiv preprint arXiv:1906.11565.
  175. Jo, J., Yang, Y., Kim, G., & Lim, H. (2019). A comparative analysis of emotional words for learning effectiveness in online education. In 12th International Conference on Educational Data Mining, EDM 2019 (pp. 591-594). International Educational Data Mining Society.
  176. Yoon, Y. C., & Lee, J. W. (2018, January). Movie recommendation using metadata based word2vec algorithm. In 2018 International Conference on Platform Technology and Service (PlatCon) (pp. 1-6). IEEE.
  177. Jo, J., Yang, Y., & Lim, H. (2018). A Study on the Development of Game-based Mind Wandering Judgment Model in Video Lecture-based Education. Proceedings of Engineering and Technology Innovation, 10, 8-12.
  178. Matteson, A., Lee, C., Kim, Y. B., & Lim, H. (2018). Rich character-level information for Korean morphological analysis and part-of-speech tagging. arXiv preprint arXiv:1806.10771.
  179. Lee, C., Kim, Y. B., Lee, D., & Lim, H. (2018). Character-level feature extraction with densely connected networks. arXiv preprint arXiv:1806.09089.
  180. Bak, B., Kang, J., Choi, H., Lee, J., Yu, H., & Chung, K. (2018, December). A scheduler considering characteristics of VM for mitigation of unstable performance. In 2018 9th International Symposium on Parallel Architectures, Algorithms and Programming (PAAP) (pp. 49-53). IEEE.
  181. Lee, C., & Lim, H. Sketch to Image Upsampling using Generative Adversarial Networks for Query by Image Content
  182. Kim, G., Ji, H., & Lim, H. Comparisons of Multiple Approaches in improving Image Caption Generation Models
  183. Lee, S., & Lim, H. A Study on Vector-Based User-Preferred Fashion Matching
  184. Cho, J., & Lim, H. Learning Effectiveness of an E-Assessment Tool for Video-based Online Education