Myeonghun Jeong
Staff Engineer · Samsung Electronics AI Center | Seoul, Republic of Korea
I earned my Ph.D. from Seoul National University under the supervision of Prof. Nam Soo Kim, with a primary focus on speech synthesis and deep generative models. Since March 2025, I have been working at Samsung Electronics, where my research focuses on Agentic AI for algorithmic and scientific discovery.
Research Interests
Large Language Models
Agentic AI
Speech Synthesis
Audio & Speech Signal Processing
Deep Generative Models
Publications
Dec 2025
Aligning Reasoning LLMs for Materials Discovery with Physics-aware Rejection Sampling
NeurIPS 2025 AI for Science Workshop
Jan 2026
ASVspoof 5: Design, Collection and Validation of Resources for Spoofing, Deepfake, and Adversarial Attack Detection Using Crowdsourced Speech
Computer Speech & Language
May 2025
Utilizing Neural Transducers for Two-Stage Text-to-Speech via Semantic Token Prediction
IEEE/ACM TASLP
Apr 2025
Evidential-TTS: High Fidelity Zero-Shot Text-to-Speech Using Evidential Deep Learning
IEEE ICASSP 2025
Aug 2025
SNR-Aligned Consistent Diffusion for Adaptive Speech Enhancement
Interspeech 2025
Sep 2024
High Fidelity Text-to-Speech Via Discrete Tokens Using Token Transducer and Group Masked Language Model
Interspeech 2024
Sep 2024
MakeSinger: A Semi-Supervised Training Method for Multi-Speaker Singing Voice Synthesis by Classifier-free Diffusion Guidance
Interspeech 2024
Jan 2025
Sampling-based Pruned Knowledge Distillation for Training Lightweight RNN-T
IEEE Signal Processing Letters
Jan 2025
SegINR: Segment-wise Implicit Neural Representation for Sequence Alignment in Neural Text-to-Speech
IEEE Signal Processing Letters
Mar 2024
Efficient Parallel Audio Generation Using Group Masked Language Modeling
IEEE Signal Processing Letters
Mar 2024
Variable-Length Speaker Conditioning in Flow-Based Text-to-Speech
IEEE Signal Processing Letters
Feb 2024
Transfer Learning for Low-Resource, Multi-Lingual, and Zero-Shot Multi-Speaker Text-to-Speech
IEEE/ACM TASLP
Dec 2023
Transduce and Speak: Neural Transducer for Text-to-Speech with Semantic Token Prediction
IEEE ASRU 2023
Aug 2023
Towards Single Integrated Spoofing-Aware Speaker Verification Embeddings
Interspeech 2023
Jun 2023
Improving Learning Objectives for Speaker Verification from the Perspective of Score Comparison
IEEE ICASSP 2023
Sep 2022
Transfer Learning Framework for Low-Resource Text-to-Speech using a Large-Scale Unlabeled Speech Corpus
Interspeech 2022
🏆 Best Student Paper
Nov 2022
Adversarial Speaker-Consistency Learning Using Untranscribed Speech Data for Zero-Shot Multi-Speaker Text-to-Speech
APSIPA ASC 2022
Dec 2022
SNAC: Speaker-Normalized Affine Coupling Layer in Flow-Based Architecture for Zero-Shot Multi-Speaker Text-to-Speech
IEEE Signal Processing Letters
Sep 2021
Diff-TTS: A Denoising Diffusion Model for Text-to-Speech
Interspeech 2021
Work Experience
Samsung Electronics, AI Center
Mar 2025 –
Staff Engineer
Suwon, South Korea
- Develop LLM and agentic AI workflows for scientific material discovery
- Post-train LLMs using SFT, DPO, and in-context learning for complex scientific tasks
- Apply LLM-driven supply-demand optimization to semiconductor manufacturing
Kakao Enterprise Corp.
Aug – Dec 2022
Research Intern
Seongnam, South Korea
- Developed in-car speech synthesis services
- Researched data-efficient, multi-lingual, and zero-shot multi-speaker TTS
Projects
LLM-driven Supply-Demand Optimization for Semiconductor Manufacturing
Mar 2026 –
- Applied LLM-based algorithmic search to supply-demand scheduling
- Formulated industrial problems as combinatorial optimization models
- Built agentic AI workflows with LLM tool-calling and RAG approaches
New Material Discovery for Semiconductor Manufacturing using LLMs & Agentic AI
May 2025 –
- Developed material property prediction models using ICL, SFT, DPO
- Designed agentic AI workflows leveraging RAG for material discovery
Research on Low Latency Streaming TTS
May – Dec 2024
- Conducted research on streaming zero-shot TTS technology
- Aimed at reducing latency in TTS systems
Zero-Shot TTS Technology using In-Context Learning
Jun – Dec 2023
- Researched in-context learning-based zero-shot TTS technology
- Focused on TTS systems requiring minimal training data for new speakers
Acoustic Modeling for Personalized Voice Synthesis
Jul 2021 – Jul 2022
- Worked on acoustic modeling techniques to personalize voice synthesis
- Aimed at creating more natural and customized synthetic voices
End-to-End Multi-Speaker Prosody and Emotion Cloning with Limited Data
Apr 2020 – Dec 2021
- Conducted research on multi-speaker neural vocoder
- Focused on end-to-end models for prosody and emotion cloning
AI-based Real-Time Voice Recognition and Synthesis
Apr 2020 – Jul 2021
- Speech enhancement for robust automatic speech recognition
- Developed voice activity detection for real-time ASR applications
AI-based Real-Time Speech Synthesis
Nov 2020 – Nov 2021
- Conducted research on speech synthesis tailored for financial services
Education
Seoul National University
Mar 2020 – Feb 2025
Ph.D. in Electrical and Computer Engineering
- GPA: 4.08 / 4.30
- Advised by Prof. Nam Soo Kim
Soongsil University
Mar 2014 – Feb 2020
B.S. in Electronic Engineering
- GPA: 4.28 / 4.50 – Ranked 2nd out of 171
Awards & Honors
Data Contributor, ASVspoof 5 Challenge
2024
Ph.D. Scholarship, Samsung Advanced Institute of Technology (SAIT)
2023
Best Student Paper Award, Interspeech 2022
2022
Third Prize, IEEE Signal Processing Cup (ICASSP)
2019
National Science and Engineering Scholarship, Korean Minister of Science and ICT
2019
Academic Reviewer: IEEE ICASSP 2024, 2025 · Interspeech 2025, 2026 · IEEE Signal Processing Letters · IEEE/ACM TASLP