
International Conference Publications
- Seongtae Hong, Seungyoon Lee, Hyeonseok Moon, and Heuiseok Lim. 2025. MIGRATE: Cross-Lingual Adaptation of Domain-Specific LLMs through Code-Switching and Embedding Transfer. In Proceedings of the 31st International Conference on Computational Linguistics, pages 9184–9193, Abu Dhabi, UAE. Association for Computational Linguistics.
- Kim, D., Lee, S., Kim, Y., Rutherford, A., & Park, C. (2025, January). Representing the Under-Represented: Cultural and Core Capability Benchmarks for Developing Thai Large Language Models. In Proceedings of the 31st International Conference on Computational Linguistics (pp. 4114-4129).
- Dahyun Kim, Yungi Kim, Wonho Song, Hyeonwoo Kim, Yunsu Kim, Sanghoon Kim, and Chanjun Park. 2025. sDPO: Don’t Use Your Data All at Once. In Proceedings of the 31st International Conference on Computational Linguistics: Industry Track, pages 366–373, Abu Dhabi, UAE. Association for Computational Linguistics.
- Jihoo Kim, Wonho Song, Dahyun Kim, Yunsu Kim, Yungi Kim, and Chanjun Park. 2024. Evalverse: Unified and Accessible Library for Large Language Model Evaluation. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 25–33, Miami, Florida, USA. Association for Computational Linguistics.
- Hyeonwoo Kim, Gyoungjin Gim, Yungi Kim, Jihoo Kim, Byungju Kim, Wonseok Lee, and Chanjun Park. 2024. SAAS: Solving Ability Amplification Strategy for Enhanced Mathematical Reasoning in Large Language Models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track, pages 186–198, Miami, Florida, US. Association for Computational Linguistics.
- Seo, J., & Park, J. How To Mitigate Hallucinations with Multilingual Translation
- Shim, G., & Aiyanyo, I. D. Domain-Aware Router: Bridging Domain-Specific Models and Model Ensembles for Efficient and Scalable Performance
- Jang, Y., & Hur, Y. An Analysis of Korean Language Proficiency of Gemma 2B Models Using KBS Korean Language Proficiency Test
- Kim, D., & So, A. Unveiling Code Region: Identifying Critical Parameters for Coding Abilities in LLMs
- Moon, H., & So, A. Korean Penalty of Large Language Models Derived by the Tokenizer
- Lee, J., & Park, K. Exploring Language Transfer Techniques in Large Language Models
- Jang, Y., & Lee, T. Boosting Korean Embedding Performance via Pre-training with Accessible Data
- Park, C., & Lim, H. Rethinking Retriever Evaluation in Retrieval-Augmented Generation: A Document- and Word-Level Analysis
- Chun, Y., & Hur, Y. Comparative Analysis of Techniques for Locating Knowledge Editing in Language Models: Integrated Gradients vs. Causal Tracing
- Kang, M., & Park, K. Educational subject classification with hierarchical information utilizing Large Language Models
- Son, J., & Lee, T. Building Retrieval Benchmarks using Retrieval Augmented Generation
- Lee, S., & Hur, Y. Linearized Embedding Transfer in Multilingual Large Language Model
- Koo, S., Yang, Y., & Park, J. Examining the Ability of Large Language Model on Entity-Based Question Answering
- Kim, M., & Lim, H. Partial Quantization: Improving Text Generation over Uniform Quantization
- Son, S., & So, A. Improving Empathetic Response Generation in LLMs Using Direct Preference Optimization
- Lee, J., & Park, K. A Time-Sensitive Temporal Knowledge Editing Benchmark
- Jung, D., & Park, J. Detailed Error Detection in Machine Translation Using Large Language Models
- Hong, S., & Park, K. Lexical-Based Embedding Transfer for Enhancing Jeju Dialect to English Translation
- Eo, S., & Park, J. Enhancing Efficiency in Large Language Model Ensemble
- Kim, J., & So, A. Investigating the Korean Question-Answering Capability of Large Language Models through Query Perturbation
- Hong, S., Shin, J., Seo, J., Lee, T., Park, J., Young, C., … & Lim, H. S. (2024, November). Intelligent Predictive Maintenance RAG framework for Power Plants: Enhancing QA with StyleDFS and Domain Specific Instruction Tuning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track (pp. 805-820).
- Hyeonseok Moon, Seungyoon Lee, SeongTae Hong, Seungjun Lee, Chanjun Park, and Heuiseok Lim. 2024. Translation of Multifaceted Data without Re-Training of Machine Translation Systems. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 2088–2108, Miami, Florida, USA. Association for Computational Linguistics.
- Koo, S., Kim, J., Park, C., & Lim, H. S. (2024, November). Search if you don’t know! Knowledge-Augmented Korean Grammatical Error Correction with Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2024 (pp. 96-125).
- Kim, J., Koo, S., & Lim, H. S. (2024, November). PANDA: Persona Attributes Navigation for Detecting and Alleviating Overuse Problem in Large Language Models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 12005-12026).
- Koo, S., Kim, J., Jang, Y., Park, C., & Lim, H. S. (2024, November). Where am I? Large Language Models Wandering between Semantics and Structures in Long Contexts. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 14144-14160).
- Jinsung Kim, Seonmin Koo, and Heuiseok Lim, Revisiting Under-represented Knowledge of Latin American Literature in Large Language Models, The 27th European Conference on Artificial Intelligence (ECAI 2024 Main, Oral)
- Lee, J., Moon, H., Lee, S., Park, C., Eo, S., Ko, H., … & Lim, H. S. (2024, August). Length-aware Byte Pair Encoding for Mitigating Over-segmentation in Korean Machine Translation. In Findings of the Association for Computational Linguistics: ACL 2024 (pp. 2287-2303).
- Jung, D., Eo, S., & Lim, H. S. (2024, August). Towards Precise Localization of Critical Errors in Machine Translation. In Findings of the Association for Computational Linguistics: ACL 2024 (pp. 3000-3012).
- Seo, J., Lee, J., Park, C., Hong, S., Lee, S., & Lim, H. S. (2024, August). KoCommonGEN v2: A Benchmark for Navigating Korean Commonsense Reasoning Challenges in Large Language Models. In Findings of the Association for Computational Linguistics: ACL 2024 (pp. 2390-2415).
- Jung, D., Eo, S., Park, C., & Lim, H. S. (2024, June). Explainable CED: A Dataset for Explainable Critical Error Detection in Machine Translation. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop) (pp. 25-35).
- Lee, S., Kim, D., Jung, D., Park, C., & Lim, H. S. (2024, June). Exploring Inherent Biases in LLMs within Korean Social Context: A Comparative Analysis of ChatGPT and GPT-4. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop) (pp. 93-104).
- Eo, S., Lim, J., Park, C., Jung, D., Koo, S., Moon, H., … & Lim, H. S. (2024, May). Detecting Critical Errors Considering Cross-Cultural Factors in English-Korean Translation. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) (pp. 4705-4716).
- Lee, S., Park, C., Jung, D., Moon, H., Seo, J., Eo, S., & Lim, H. S. (2024, May). Leveraging Pre-existing Resources for Data-Efficient Counter-Narrative Generation in Korean. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) (pp. 10380-10392).
- Lee, D., Park, E., Lee, H., & Lim, H. S. (2024, March). Ask, Assess, and Refine: Rectifying Factual Consistency and Hallucination in LLMs with Metric-Guided Feedback Learning. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 2422-2433).
- Park, C., Seo, J., Lee, S., Son, J., Moon, H., Eo, S., … & Lim, H. S. (2024, March). Hyper-BTS Dataset: Scalability and Enhanced Analysis of Back TranScription (BTS) for ASR Post-Processing. In Findings of the Association for Computational Linguistics: EACL 2024 (pp. 67-78).
- Moon, H., Lee, J., Eo, S., Park, C., Seo, J., & Lim, H. S. (2024, March). Generative Interpretation: Toward Human-Like Evaluation for Educational Question-Answer Pair Generation. In Findings of the Association for Computational Linguistics: EACL 2024 (pp. 2185-2196).
- Lim, J., Ahn, S., & Park, C. Utilization of Image-Text Embeddings for Audio-Visual Scene Dialogue
- Lee, S., Jung, D., & Hur, Y. A Study on Dialogue Evaluation Metrics Utilizing Large Language Models.
- Seo, J., & Lim, H. Unveiling the Blind Spots: Evaluating Large Language Models in Korean Commonsense
- Son, S., Kang, M., & Jo, J. Exploring Prefix-Tuning-Based Models for Open-Ended Knowledge Tracing.
- Kang, M., Lee, J., Lee, S., Hong, S., & Park, J. CheckLLM-Ko: Prompting strategy for constructing Korean LLM written document detecting dataset.
- Kim, J., & Aiyanyo, I. D. Which is Better Training for Reward Model, by Ranking or Classification?.
- Hong, S., Lee, S., Ahn, S., & Lim, H. Enhancing Translation Performance with SuperICL4Gen.
- Lee, S., Park, C., & Lim, H. Evaluating the toxicity of ChatGPT for Gender and Race in Korean.
- Moon, H., Seo, J., Eo, S., Park, C., & Lim, H. Hallucination of Large Language Models Derived by the Colloquial Style
- Son, J., Kim, J., Lim, J., Jang, Y., & So, A. Identifying Bridging Entity and Its Context using Dense Retriever.
- Jang, Y., & Park, K. Extracting Persona Triples in Utterances
- Jung, D., & Kim, G. Prompt-based Fine-tuning Method for English-Korean Critical Error Detection.
- Lee, J., Aiyanyo, I. D., & Yang, Y. Improving TOEIC Problems Solving Model Performance through Data Augmentation using WordNet.
- Koo, S., Park, C., So, A., & Lim, H. Exploring the Potential of Large Language Model for Korean Grammatical Error Correction.
- Eo, S., Ahn, S., & Park, J. A Study on a Large Language Model’s Ability to Solve Riddles
- Chun, C., & Lim, H. Active Learning Enhanced by LLMs: Empirical Strategies to Rectify Intent Classifier Discrepancies
- Lee, J., Son, J., Lee, T., Park, C., Kang, M., & Hur, Y. Assessing the Retrieval-based Generation Capabilities of Large Language Models: A Call for a New Benchmark.
- Lee, J., Seo, J., Jung, D., & Kim, G. Genuine Knowledge Prediction for Commonsense Knowledge Transfer.
- Kim, J., Kim, G., Jo, J., & Park, K. Examining Zero-shot Relation Extraction in Korean Language using a Large-scale Language Model: A Comparative Analysis.
- Lim, J., Kang, M., Kim, J., Kim, J., Hur, Y., & Lim, H. S. (2023, December). Beyond Candidates: Adaptive Dialogue Agent Utilizing Persona and Knowledge. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 7950-7963).
- Son, J., Kim, J., Lim, J., Jang, Y., & Lim, H. S. (2023, December). Explore the Way: Exploring Reasoning Path by Bridging Entities for Effective Cross-Document Relation Extraction. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 6755-6761).
- Chun, C., Lee, S., Seo, J., & Lim, H. S. (2023, December). CReTIHC: Designing Causal Reasoning Tasks about Temporal Interventions and Hallucinated Confoundings. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 10334-10343).
- Koo, S., Park, C., Kim, J., Seo, J., Eo, S., Moon, H., & Lim, H. S. (2023, December). KEBAP: Korean Error Explainable Benchmark Dataset for ASR and Post-processing. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 4798-4815).
- Jaehyung Seo, Hyeonseok Moon, Jaewook Lee, Sugyeong Eo, Chanjun Park, and Heuiseok Lim. 2023. CHEF in the Language Kitchen: A Generative Data Augmentation Leveraging Korean Morpheme Ingredients. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 6014–6029, Singapore. Association for Computational Linguistics.
- Yoonna Jang, Suhyune Son, Jeongwoo Lee, Junyoung Son, Yuna Hur, Jungwoo Lim, Hyeonseok Moon, Kisu Yang, and Heuiseok Lim. 2023. Post-hoc Utterance Refining Method by Entity Mining for Faithful Knowledge Grounded Conversations. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 4844–4861, Singapore. Association for Computational Linguistics.
- Lee, S., Jung, D., Park, C., Lee, S., & Lim, H. (2023, December). Alternative Speech: Complementary Method to Counter-Narrative for Better Discourse. In 2023 IEEE International Conference on Data Mining Workshops (ICDMW) (pp. 1438-1442). IEEE.
- Jung, D., Eo, S., Park, C., Moon, H., Seo, J., & Lim, H. S. (2023, November). Informative Evidence-guided Prompt-based Fine-tuning for English-Korean Critical Error Detection. In Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 344-358).
- Seungjun Lee, Jaewook Lee, Jaehyung Seo, Heuiseok Lim (Korea University): Phone Scam Detection Using Large Language Model
- Chanjun Park, Sugyeong Eo, Yoonna Jang, Jungseob Lee, Aram So, Heuiseok Lim (Korea University): Enhancing Relation Extraction with Triple Representation
- Lee, S., Moon, H., Park, C., & Lim, H. (2023). Data-Driven Approach for Formality-Sensitive Machine Translation: Language-Specific Handling and Synthetic Data Generation. arXiv preprint arXiv:2306.14514.
- Jung, D., Seo, J., Lee, J., Park, C., & Lim, H. (2023). Knowledge Graph-Augmented Korean Generative Commonsense Reasoning. arXiv preprint arXiv:2306.14470.
- Koo, S., Park, C., Kim, J., Seo, J., Eo, S., Moon, H., & Lim, H. (2024). Toward Practical Automatic Speech Recognition and Post-Processing: a Call for Explainable Error Benchmark Guideline. arXiv preprint arXiv:2401.14625.
- Park, C., Koo, S., Lee, S., Seo, J., Eo, S., Moon, H., & Lim, H. (2023). Synthetic Alone: Exploring the Dark Side of Synthetic Data for Grammatical Error Correction. arXiv preprint arXiv:2306.14377.
- Seungjun Lee, Hyeonseok Moon, Chanjun Park, and Heuiseok Lim. 2023. Improving Formality-Sensitive Machine Translation Using Data-Centric Approaches and Prompt Engineering. In Proceedings of the 20th International Conference on Spoken Language Translation (IWSLT 2023), pages 420–432, Toronto, Canada (in-person and online). Association for Computational Linguistics.
- Lee, S., Jang, Y., Park, C., Lee, J., Seo, J., Moon, H., … & Lim, H. S. (2023, July). PEEP-Talk: A Situational Dialogue-based Chatbot for English Education. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations) (pp. 190-207).
- Eo, S., Moon, H., Kim, J., Hur, Y., Kim, J., Lee, S., … & Lim, H. (2023). Towards diverse and effective question-answer pair generation from children storybooks. arXiv preprint arXiv:2306.06605.
- Koo, S., Lim, J., Son, S., Hur, Y., & Lim, H. Retrieval-based Question-Answer Generation
- Moon, H., Kim, J., Park, K., Park, J., & Lim, H. An Empirical Study on the Length Granularity in Long-Form Question Answering
- Lim, J., Kang, M., Hur, Y., Jung, S., Kim, J., Jang, Y., … & Lim, H. (2023). You truly understand what I need: Intellectual and friendly dialogue agents grounding knowledge and persona. arXiv preprint arXiv:2301.02401.
- Lim, J., & Park, J. A Method of Graph Construction for Dialogue Relation Extraction
- Lee, J., Moon, H., & Park, J. Text-only Keyword Recommendation: No Longer Have Cold-Start Problem
- Kang, M., Lee, J., Lee, S., Moon, H., Park, C., & So, A. Masked language modeling-based Korean Data Augmentation Techniques Using Label Correction
- Chun, C., & Lim, H. Joint learning method using intent classification and spacing to improve Korean SLU
- Oh, D., Kim, S., & Lim, H. Guidance by Semantic Search for Contrastive Learning of Sentence Embeddings
- Seo, J., & Park, K. BERT Can Evaluate Korean Commonsense Reasoning
- Lee, S., Seo, J., Lee, J., Kang, M., Moon, H., Lee, J., & Lim, H. You Need More Data? Data Augmentation using Retrieval Technique
- Son, S., & So, A. A Comparative Study on Cross-Lingual Post-Training (XPT) with Korean
- Lee, J., & Aiyanyo, I. D. Data Augmentation Schemes For Machine Reading Comprehension
- Son, J., Kim, G., Kim, J., & So, A. An Analysis of Korean Named Entity Recognition System using MLM-based Language Transfer Learning
- Kim, G., Kim, J., Son, J., Jo, K., & Yang, Y. Optimized Trigger Generation Strategy for Dialogue Relation Extraction Task
- Kim, J., Kim, G., Son, J., & Park, K. Domain-specific Korean Relation Extraction methodology using Prompt with Meta-Information
- Jang, Y., & Park, K. A Study on Soft Prompt-based Few-shot Persona Dialogue Generation
- Jung, D., Lee, S., Lee, S., Seo, J., Eo, S., Park, C., & Titi. A Dataset for Korean Graph-to-Text Generation
- Lee, S., Son, S., Jung, D., & Park, C. Efficient Way for Constructing Hate Speech-Counter Narrative Dataset
- Moon, H., Lee, J., Seo, J., Eo, S., Park, C., & Park, J. Fortune Telling via Deep Learning based Generation Model
- Eo, S., Park, C., & Lim, H. Prompt-based Learning for English-German Critical Translation Error Detection
- Koo, S., Park, C., Moon, H., Seo, J., Eo, S., & Lim, H. Automatic Generation of Learning Data for Korean Speech Recognition Post-Processor
- Lee, J., & Hur, Y. Korean Commonsense Knowledge Graph based on GPT-3
- Park, C., Jang, Y., Lee, S., Seo, J., Yang, K., & Lim, H. S. (2022, November). PicTalky: augmentative and alternative communication for language developmental disabilities. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing: System Demonstrations (pp. 17-27).
- Eo, S., Park, C., Moon, H., Seo, J., & Lim, H. S. (2022, December). KU X Upstage’s submission for the WMT22 quality estimation: Critical error detection shared task. In Proceedings of the Seventh Conference on Machine Translation (WMT) (pp. 606-614).
- Eo, S., Park, C., Moon, H., Seo, J., Kim, G., Lee, J., & Lim, H. (2022). QUAK: A synthetic quality estimation dataset for Korean-English neural machine translation. arXiv preprint arXiv:2209.15285.
- Lee, S., Lee, J., Park, C., Eo, S., Moon, H., Seo, J., … & Lim, H. S. (2022, October). Focus on FoCus: Is FoCus focused on Context, Knowledge and Persona?. In Proceedings of the 1st Workshop on Customized Chat Grounding Persona and Knowledge (pp. 1-8).
- Oh, D., Kim, Y., Lee, H., Huang, H. H., & Lim, H. (2022). Don’t Judge a Language Model by Its Last Layer: Contrastive Learning with Layer-Wise Attention Pooling. arXiv preprint arXiv:2209.05972.
- Son, J., Kim, J., Lim, J., & Lim, H. (2022). GRASP: Guiding model with RelAtional semantics using prompt for dialogue relation extraction. arXiv preprint arXiv:2208.12494.
- Kim, G., Kim, J., Son, J., & Lim, H. (2022). KoCHET: A Korean cultural heritage corpus for entity-related tasks. arXiv preprint arXiv:2209.00367.
- Jaehyung Seo, Seounghoon Lee, Chanjun Park, Yoonna Jang, Hyeonseok Moon, Sugyeong Eo, Seonmin Koo, and Heuiseok Lim. 2022. A Dog Is Passing Over The Jet? A Text-Generation Dataset for Korean Commonsense Reasoning and Evaluation. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 2233–2249, Seattle, United States. Association for Computational Linguistics.
- Park, C., Lee, S., Seo, J., Moon, H., Eo, S., & Lim, H. S. (2022, June). Priming ancient Korean neural machine translation. In Proceedings of the Thirteenth Language Resources and Evaluation Conference (pp. 22-28).
- Jang, Y., Lim, J., Hur, Y., Oh, D., Son, S., Lee, Y., … & Lim, H. (2022, June). Call for customized conversation: Customized conversation grounding persona and knowledge. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 36, No. 10, pp. 10803-10812).
- Moon, H., Park, C., Lee, S., Seo, J., Lee, J., Eo, S., & Lim, H. S. (2022, June). Empirical Analysis of Noising Scheme based Synthetic Data Generation for Automatic Post-editing. In Proceedings of the Thirteenth Language Resources and Evaluation Conference (pp. 883-891).
- Jung, H., Jang, Y., Kim, S., & Kim, H. (2022, February). KPCR: Knowledge graph enhanced personalized course recommendation. In Australasian Joint Conference on Artificial Intelligence (pp. 739-750). Cham: Springer International Publishing.
- Moon, H., Park, C., Eo, S., Seo, J., Lee, S., & Lim, H. (2021). A Self-Supervised Automatic Post-Editing Data Generation Tool. arXiv preprint arXiv:2111.12284.
- Park, C., Lee, S., Moon, H., Eo, S., Seo, J., & Lim, H. (2021). How should human translation coexist with NMT? Efficient tool for building high quality parallel corpus. arXiv preprint arXiv:2111.00191.
- Eo, S., Park, C., Seo, J., Moon, H., & Lim, H. (2021). A new tool for efficiently generating quality estimation datasets. arXiv preprint arXiv:2111.00767.
- Seo, J., Park, C., Eo, S., Moon, H., & Lim, H. (2021). Automatic knowledge augmentation for generative commonsense reasoning. arXiv preprint arXiv:2111.00192.
- Lee, S., Yang, K., Park, C., Sedoc, J., & Lim, H. (2021). Syntax-enhanced dialogue summarization using syntax-aware information. In Proc. NIPS (pp. 19-39).
- Lee, D., Lim, J., Whang, T., Lee, C., Cho, S., Park, M., & Lim, H. S. (2021, November). Capturing speaker incorrectness: speaker-focused post-correction for abstractive dialogue summarization. In Proceedings of the Third Workshop on New Frontiers in Summarization (pp. 65-73).
- Lee, S., Yang, K., Park, C., Sedoc, J., & Lim, H. (2021). Towards Syntax-Aware Dialogue Summarization using Multi-task Learning. In Workshop on Widening NLP at EMNLP.
- Park, C., Park, S., Lee, S., Whang, T., & Lim, H. S. (2021, November). Two heads are better than one? verification of ensemble effect in neural machine translation. In Proceedings of the Second Workshop on Insights from Negative Results in NLP (pp. 23-28).
- Eo, S., Park, C., Moon, H., Seo, J., & Lim, H. S. (2021, August). Dealing with the paradox of quality estimation. In Proceedings of the 4th Workshop on Technologies for MT of Low Resource Languages (LoResMT2021) (pp. 1-10).
- Park, C., Eo, S., Moon, H., & Lim, H. S. (2021, June). Should we find another model?: Improving neural machine translation performance with ONE-piece tokenization method without model modification. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Papers (pp. 97-104).
- Park, C., Jang, Y., Lee, S., Park, S., & Lim, H. (2021). FreeTalky: Don’t Be Afraid! Conversations Made Easier by a Humanoid Robot using Persona-based Dialogue. arXiv preprint arXiv:2112.04126.
- Park, C., Park, K., & Lim, H. Data Augmentation Method for Korean Neural Machine Translation.
- Han, S., & Lim, H. Proposal of Punctuation Mark Filling Task with BERT-based Model
- Jang, Y., & Lim, H. Persona-aware Language Models
- Kim, G., & Lim, H. Korean Language Modeling for Speaker Classification
- Yang, K., Oh, D., & Lim, H. An Analysis of KoBERT’s Attention
- Oh, D., Yang, K., & Lim, H. Probing Semantic Relations in BERT
- Lee, S., Hur, Y., & Lim, H. Poly-encoder based COVID-19 Question Answering System with Persona
- Eo, S., Park, C., & Lim, H. Design Neural Machine Translation Model Combining External Symbolic Knowledge in translation task
- Lim, J., & Lim, H. The method of Graph Integration using AMR graph and ConceptNet Graph
- Lee, C., & Lim, H. Cross-Lingual Transfer Learning RoBERTa from English to Korean
- Park, S., & Lim, H. Object-level Non-local operation for Visual Dialog
- Park, C., & Lim, H. Domain-specialize Neural Machine Translation Methodology
- Seo, J., Oh, D., Eo, S., & Lim, H. Semi-supervised GPT2 for Measuring Similarity in Document
- Park, J., & Lim, H. Autonomously Growing Multi-Domain Task-Oriented Dialog System Using Meta-Learned Knowledge Base
- Hur, Y., Oh, D., Lee, S., So, A., & Lim, H. How Important is Special Token in Relation Extraction
- Lim, J., Oh, D., Jang, Y., Yang, K., & Lim, H. (2020). I know what you asked: Graph path learning using AMR for commonsense reasoning. arXiv preprint arXiv:2011.00766.
- Jeong, J., Kim, S., Gil, J., Chung, K., & Yu, H. Workload Processing Method Considering Priority Based on Hot Standby System in RSU.
- Kim, S., Süsstrunk, S., & Salzmann, M. (2020). Volumetric transformer networks. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXVIII 16 (pp. 561-578). Springer International Publishing.
- Lee, S., & Sedoc, J. (2020, December). Using the poly-encoder for a COVID-19 question answering system. In Proceedings of the 1st Workshop on NLP for COVID-19 (Part 2) at EMNLP 2020.
- Park, C., Kim, Y., Jang, Y., Umadevi, G. R., & Lim, H. PicTalky: Deep learning-based speech therapy platform for developmental disabilities
- Yook, D., Leem, S. G., Lee, K., & Yoo, I. C. (2020, November). Many-to-Many Voice Conversion Using Cycle-Consistent Variational Autoencoder with Multiple Decoders. In Odyssey (pp. 215-221).
- Park, C., Oh, Y., Choi, J., Kim, D., & Lim, H. (2020, October). Toward high quality parallel corpus using monolingual corpus. In The 10th International Conference on Convergence Technology (ICCT 2020) (pp. 146-147).
- Park, C., Park, S., & Lim, H. (2020). Self-supervised Korean spelling correction via denoising transformer. In Proceedings of the 2020 International Conference on Information, System and Convergence Applications.
- Lee, D., Shin, M., Whang, T., Cho, S., Ko, B., Lee, D., … & Jo, J. (2020). Reference and document aware semantic evaluation methods for Korean language summarization. arXiv preprint arXiv:2005.03510.
- Lee, S., Ko, B., Lee, K., Yoo, I. C., & Yook, D. (2020, May). Many-to-many voice conversion using conditional cycle-consistent adversarial networks. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 6279-6283). IEEE.
- Park, C., Park, K., & Lim, H. Resource Management and Preprocessing in Korean Neural Machine Translation Models.
- Kim, G., & Lim, H. Named Entity Recognition system of Cybersecurity Using a Deep Bi-LSTM-CRF Network
- Whang, T., Cha, D., & Lim, H. Metonymy Detection with Deep Neural Networks
- Hur, Y., Lee, C., Kim, G., Park, K., & Lim, H. An Analytical Framework for Automatically Extracting Formal Information from Unstructured Security Intelligence Report
- Yang, K., Park, J., & Kang, J. Knowledge Transfer via Layer-wise Distillation of Deep Contextualized Representations
- Park, C., Choi, J., Park, S., Kim, Y., & Lim, H. Free Talking NAO Robot for Children
- Lee, C., Lee, C., Kim, M., & Lim, H. Achieving Super-Human Performance in Politeness Classification
- Lim, J., Jang, Y., Umadevi, G.R., & Lim, H. Sentence BERT Embedding on Hyperpartisan News Classification Task
- Han, S., Yang, K., & Lim, H. A Study on Building Reliable Corpus for Punctuation and Quotation Mark Filling Task
- Choi, J., Lim, H., & Jeon, H. Gender-based Dementia risk factor analysis using a Longitudinal Cohort data
- Oh, D., & Lim, H. Word Sense Disambiguation with Representations of Context-Sensitive Meaning using Knowledge Graphs
- Park, C., Choi, S., & Lim, H. Improved Machine Translation Performance Using Two-Stage Subword Tokenization
- So, A., Wulansari, N., Park, K., & Lim, H. A study on the slot filling method for the Korean restaurant reservation system
- Park, C., & Kang, J. Improved Machine Translation Performance Using Relative Ratios of Original and Synthetic Corpus
- Lee, S., Jwa, H., Park, K., & Lim, H. EyeBERT: Eye tracking based Human Reading for Text Summarization
- Park, C., Choi, S. & Lim, H. Quality before Quantity: Improved machine translation performance using parallel corpus filtering
- Park, R., & Cho, J. Intelligence e-Assessment Model based on Face Recognition on Online Learning System
- Cho, J., Park, H., Kim, S., Lee, K., & Jung, S. A study of students’ perceptions and Improvement towards level-differentiated Creative Design curriculum in the university
- Park, J., Yang, K., & Kang, J. Artificial Intelligence based Verbal speech recognition: beyond Acoustic noises and overparameterization
- Hooshyar, D., Lim, H., Pedaste, M., Yang, K., Fathi, M., & Yang, Y. (2019). AutoThinking: An adaptive computational thinking game. In Innovative Technologies and Learning: Second International Conference, ICITL 2019, Tromsø, Norway, December 2–5, 2019, Proceedings 2 (pp. 381-391). Springer International Publishing.
- Whang, T., Lee, D., Lee, C., Yang, K., Oh, D., & Lim, H. (2019). An effective domain adaptive post-training method for BERT in response selection. arXiv preprint arXiv:1908.04812.
- Whang, T., Lee, D., Lee, C., & Lim, H. (2019). Enhanced Sequential Representation Augmented with Utterance-level Attention for Response Selection.
- Hwang, S., Choi, H., & Yu, H. (2019, November). Implementation of low-latency message delivery for serverless based workflow. In 2019 IEEE 16th International Conference on Mobile Ad Hoc and Sensor Systems Workshops (MASSW) (pp. 170-171). IEEE.
- Kwon, M., & Yu, H. (2019, October). Performance improvement of ordering and endorsement phase in hyperledger fabric. In 2019 sixth international conference on internet of things: systems, management and security (IOTSMS) (pp. 428-432). IEEE.
- Lee, J., Kang, J., & Yu, H. (2019, October). NIOECM: A Network I/O Event Control Mechanism to Provide Fairness of Network Performance Among VMs with Same Resource Configuration in Virtualized Environment. In International Conference on Internet and Distributed Computing Systems (pp. 271-283). Cham: Springer International Publishing.
- Yang, K., Lee, D., Whang, T., Lee, S., & Lim, H. (2019). EmotionX-KU: BERT-Max based contextual emotion classifier. arXiv preprint arXiv:1906.11565.
- Jo, J., Yang, Y., Kim, G., & Lim, H. (2019). A comparative analysis of emotional words for learning effectiveness in online education. In 12th International Conference on Educational Data Mining, EDM 2019 (pp. 591-594). International Educational Data Mining Society.
- Yoon, Y. C., & Lee, J. W. (2018, January). Movie recommendation using metadata based word2vec algorithm. In 2018 International Conference on Platform Technology and Service (PlatCon) (pp. 1-6). IEEE.
- Jo, J., Yang, Y., & Lim, H. (2018). A Study on the Development of Game-based Mind Wandering Judgment Model in Video Lecture-based Education. Proceedings of Engineering and Technology Innovation, 10, 08-12.
- Matteson, A., Lee, C., Kim, Y. B., & Lim, H. (2018). Rich character-level information for Korean morphological analysis and part-of-speech tagging. arXiv preprint arXiv:1806.10771.
- Lee, C., Kim, Y. B., Lee, D., & Lim, H. (2018). Character-level feature extraction with densely connected networks. arXiv preprint arXiv:1806.09089.
- Bak, B., Kang, J., Choi, H., Lee, J., Yu, H., & Chung, K. (2018, December). A scheduler considering characteristics of VM for mitigation of unstable performance. In 2018 9th International Symposium on Parallel Architectures, Algorithms and Programming (PAAP) (pp. 49-53). IEEE.
- Lee, C., & Lim, H. Sketch to Image Upsampling using Generative Adversarial Networks for Query by Image Content
- Kim, G., Ji, H., & Lim, H. Comparisons of Multiple Approaches in improving Image Caption Generation Models
- Lee, S., & Lim, H. A Study on Vector-Based User-Preferred Fashion Matching
- Cho, J., & Lim, H. Learning Effectiveness of an E-Assessment Tool for Video-based Online Education