Taegyeong Lee

AI Researcher · Generative AI, LLMs, RAG

I'm passionate about novel research on generating images and videos from audio and text, integrating multiple modalities. I enjoy conducting research that is simple yet effective, leveraging multimodal and generative models to make a strong impact in the real world.

Currently, I work as an AI researcher at FnGuide Inc., focusing on LLMs and RAG. Previously, I earned my Master's degree from UNIST AIGS, interned at ETRI, and completed the 8th Software Maestro program. I also served as a software developer in the Promotion Data Management Division at the Republic of Korea Army Headquarters. I hold a Bachelor's degree in Computer Engineering from Pukyong National University.

Research

My current primary research interests include Generative AI, LLMs, and RAG.

DEO
Taegyeong Lee*, Jiwon Park*, Seunghyun Hwang*, JooYoung Jang
Under review

We propose DEO, a fine-tuning-free method that optimizes embeddings for negation-aware retrieval, improving performance across text and multimodal tasks.

Financial Charts
Taegyeong Lee, Jiwon Park, Kyunga Bang, Seunghyun Hwang, Ung-Jin Jang
Preprint

We propose a novel stock price prediction approach that treats time-series data as images and leverages text-to-image generative models to generate and evaluate future chart patterns.

QGuard
Taegyeong Lee, Jeonghwa Yoo, Hyoungseo Cho, Soo Yong Kim and Yunho Maeng
ACL 2025 Workshop (The 9th Workshop on Online Abuse and Harms)

This paper proposes a simple yet effective question prompting method to block harmful prompts, including multi-modal ones, in a zero-shot and robust manner.

MaKD
Taegyeong Lee*, Jinsik Bang*, Soyeong Kwon, Taehwan Kim
CVPR 2025 Workshop (The 12th Workshop on Fine-Grained Visual Categorization)

We introduce a multi-aspect knowledge distillation method using MLLMs to enhance vision models by learning both visual and abstract aspects, improving performance across tasks.

Sound to Image
Taegyeong Lee, Jeonghun Kang, Hyeonyu Kim, Taehwan Kim
ICCV 2023

We propose a diffusion-based model that generates images from wild sounds using audio captioning, attention mechanisms, and CLIP-based optimization, achieving superior results.

Emotional Face
Generating Emotional Face Images using Audio Information for Sensory Substitution
Taegyeong Lee, Hyerin Uhm, Chi Yoon Jeong, Chae-Kyu Kim
Journal of Korea Multimedia Society, 2023 (Outstanding Paper Award)

We propose a method to generate images optimized for sound intensity, enhancing V2A models for improved face image generation.

Face Reenactment
An enhanced model of Face Reenactment and Transformation Models based on Head pose vector
Taegyeong Lee, Gyubin Park, HyeJin Seo, Su-Hwa Jo, Chae-Kyu Kim
Conference of Korea Multimedia Society, 2021 (AI Capstone Design Grand Prize)

We propose a face synthesis method using dense landmarks for accurate head pose estimation, yielding more natural results than existing methods.

Academic Activities / Awards

Reviewer
ICPR 2024, ICLR 2024, ICLR 2025, ICLR 2026, CVPR 2026, ECCV 2026, ICLR 2026 RSI Workshop, CVPR 2026 FGVC Workshop
Awards
  • Excellent Paper Award, Korea Multimedia Society, Nov. 18, 2022
  • Grand Prize, AI Capstone Design Competition, Korea Multimedia Society Fall Conference, Nov. 26, 2021
  • Grand Prize, Pukyong National University Samsung SST/SW Contest, May 30, 2017
  • Best Club Award, MCS Club, Pukyong National University