Nguyen Minh Quang

GitHub

Ph.D. Student in Computer Science

School of Computing and Information Systems, Singapore Management University, Singapore

mq.nguyen.2023(at)phdcs.smu(dot)edu.sg

Biography

I am Nguyen Minh Quang (Nguyen is my family name), a Ph.D. student at the School of Computing and Information Systems, Singapore Management University, under the supervision of Prof. Hady W. Lauw. I am also a member of Preferred.AI. My research focuses on reinforcement learning.

I received my bachelor's degree (first class, honors program) in Information Technology from the University of Engineering and Technology, Vietnam National University Hanoi (UET, VNU) in 2023. During my undergraduate studies, I was a member of the DS&KT Laboratory, Faculty of Information Technology, and my initial research interest was in Natural Language Processing (NLP).

Research Interests

Reinforcement learning (current focus); natural language processing (earlier work).

My Research

Augmenting Decision with Hypothesis in Reinforcement Learning

Nguyen Minh Quang and Hady W. Lauw

The Forty-first International Conference on Machine Learning (ICML 2024)

TL;DR: We find that the Bellman-based learning scheme can be "softly constrained" if we prompt learners with a hypothesis, a weak environment representation created by the augmentor M in the figure on the right.

[paper][code][poster]

Value-based reinforcement learning is the current state of the art due to its high sampling efficiency. However, our theoretical and empirical studies show that it suffers from low exploitation in the early training period and from sensitivity to bias. To address these issues, we propose to augment the decision-making process with a hypothesis, a weak form of environment description. Our approach relies on prompting the learning agent with accurate hypotheses, and on designing a ready-to-adapt policy through incremental learning. We propose the ALH algorithm and validate it with detailed analyses on a typical learning scheme and a diverse set of MuJoCo benchmarks. Our algorithm yields a significant improvement over value-based learning algorithms and other strong baselines.
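For intuition, here is a minimal PyTorch sketch of the general idea: a value network whose action scores are conditioned on a learned hypothesis vector alongside the state. All names here (HypothesisAugmentor, AugmentedQNetwork, hyp_dim) are illustrative assumptions for this page, not the actual ALH implementation; see the released code above for the real one.

```python
import torch
import torch.nn as nn

# Hypothetical sketch only: a Q-network whose decision is conditioned not just
# on the state but also on a "hypothesis" -- a weak, learned representation of
# the environment (the augmentor M). Names are illustrative, not from ALH.

class HypothesisAugmentor(nn.Module):
    """Maps a state to a low-dimensional hypothesis vector."""
    def __init__(self, state_dim: int, hyp_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, hyp_dim),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

class AugmentedQNetwork(nn.Module):
    """Scores actions from the state concatenated with its hypothesis."""
    def __init__(self, state_dim: int, hyp_dim: int, num_actions: int):
        super().__init__()
        self.augmentor = HypothesisAugmentor(state_dim, hyp_dim)
        self.q_head = nn.Sequential(
            nn.Linear(state_dim + hyp_dim, 128), nn.ReLU(),
            nn.Linear(128, num_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        hypothesis = self.augmentor(state)  # weak environment description
        return self.q_head(torch.cat([state, hypothesis], dim=-1))

# Usage: greedy action selection, exactly as in standard value-based RL.
q_net = AugmentedQNetwork(state_dim=8, hyp_dim=4, num_actions=3)
state = torch.randn(1, 8)
action = q_net(state).argmax(dim=-1)
```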


If you saw me looking like a dying fish at ICML 2024, yes: my flight was delayed and then canceled, and I had to take a much longer one. Then I couldn't even check into my hotel (which is why you might have seen me at the conference site very early). On top of that, I was jet-lagged for two of my three days in Vienna.
Why BART (in the paper below)? At the time I finished the first manuscript (end of 2022), BART was still good enough for summarization tasks, and cheap enough for a student to use via free Google Colab.

Zero-cost Transition to Multi-Document Processing in Summarization with Multi-Channel Attention

Minh-Quang Nguyen, Duy-Cat Can and Hoang-Quynh Le

European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2024) (Research Track)

TL;DR: As the name suggests, we propose to directly reuse BART pre-optimized on single-document summarization for the multi-document summarization task. We show that this is possible, and also "very simple", via the MCA architecture in the figure on the left.


[paper][code][poster]

We propose a novel vertical scaling approach, in which we conditionally factorize the multi-document output probability into lower-complexity components. Specifically, these components are estimated by estimators optimized on single-document data. Unlike the full-attention approach, vertical scaling has complexity that scales linearly with the number of single documents, making it more efficient for long documents or large numbers of documents. To further enhance the efficiency and effectiveness of our approach, we introduce the Multi-Channel Attention (MCA) architecture. This architecture enables us to fully utilize BART's single-doc pre-optimized parameters while requiring no re-optimization, leading to a zero-cost transition. Our approach maintains promising accuracy and computing efficiency.
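To make the factorization concrete, below is a hedged sketch of the vertical-scaling idea: encode each document separately with the single-doc pre-optimized encoder, apply the same cross-attention once per document channel (so cost grows linearly in the number of documents), and combine the channels. The function name and the simple mean combination are my assumptions for illustration, not the exact MCA definition; see the released code above.

```python
import torch

def multi_channel_attention(decoder_states, doc_encodings, attn_layer):
    """Apply one reused cross-attention pass per document channel.

    decoder_states: (batch, tgt_len, d_model)
    doc_encodings:  list of K tensors, each (batch, src_len_i, d_model)
    attn_layer:     a cross-attention module reused from the single-doc model
    """
    # One pass per document: cost is linear in K, unlike full attention
    # over the concatenation of all K documents.
    channel_outputs = [attn_layer(decoder_states, enc, enc)[0]
                       for enc in doc_encodings]
    # Combine channels (here: a simple mean) into one context per target step.
    return torch.stack(channel_outputs, dim=0).mean(dim=0)

# Usage with a generic PyTorch attention module standing in for BART's:
attn = torch.nn.MultiheadAttention(embed_dim=16, num_heads=2, batch_first=True)
dec = torch.randn(1, 5, 16)                       # decoder hidden states
docs = [torch.randn(1, 7, 16) for _ in range(3)]  # three encoded documents
context = multi_channel_attention(dec, docs, attn)  # (1, 5, 16)
```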

My Projects

UETCorn at MEDIQA-Sum 2023: Template-based summarization for clinical note generation from doctor-patient conversation

Duy-Cat Can, Quoc-An Nguyen, Binh-Nguyen Nguyen, Minh-Quang Nguyen, Khanh-Vinh Nguyen, Trung-Hieu Do and Hoang-Quynh Le

Presented at CEUR-WS@CLEF 2023

[technical report]


UETrice at MEDIQA 2021: A prosper-thy-neighbour extractive multi-document summarization model

Duy-Cat Can, Quoc-An Nguyen, Quoc-Hung Duong, Minh-Quang Nguyen, Huy-Son Nguyen, Linh Nguyen Tran Ngoc, Quang-Thuy Ha, Mai-Vu Tran

Presented at BioNLP@NAACL 2021

[technical report]

UETfishes at MEDIQA 2021: Standing-on-the-shoulders-of-giants model for abstractive multi-answer summarization

Hoang-Quynh Le, Quoc-An Nguyen, Quoc-Hung Duong, Minh-Quang Nguyen, Huy-Son Nguyen, Tam Doan Thanh, Hai-Yen Thi Vuong and Trang M. Nguyen

Presented at BioNLP@NAACL 2021

[technical report]