Stylized Dialogue Generation with Feature-Guided Knowledge Augmentation

Stylized dialogue generation systems aim to produce coherent, context-aware dialogues while effectively emulating a desired style. The task is valuable yet challenging because parallel data are scarce. Existing methods often synthesize pseudo-parallel data through back-translation, but they suffer from noisy and context-agnostic style signals caused by insufficient guidance on target style features. To address this, we propose a knowledge-augmented stylized dialogue generation model with a feature-guided style knowledge selection module that exploits both context and response features. Specifically, we retrieve dialogue-related style sentences from a style corpus to provide explicit style signals. The selection module is trained with a response-related contrastive learning loss and a style-responsiveness Kullback-Leibler loss, improving generation at both the semantic and the stylistic level. Our approach achieves strong performance on two public stylized dialogue benchmarks in both automatic and human evaluations. Our code and datasets are publicly released.
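The abstract names two training signals for the selection module: a response-related contrastive loss and a style-responsiveness KL loss. The paper's exact formulation is not given here, so the following is only a minimal sketch of how such a combined objective is commonly implemented; the function names, the InfoNCE form of the contrastive term, the "teacher" distribution, and the loss weights are all assumptions, not the authors' definitions.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(resp_emb, style_emb, temperature=0.07):
    """InfoNCE-style loss (assumed form): the i-th response embedding is
    pulled toward the i-th retrieved style sentence embedding and pushed
    away from the other style sentences in the batch."""
    resp = F.normalize(resp_emb, dim=-1)
    style = F.normalize(style_emb, dim=-1)
    logits = resp @ style.t() / temperature          # (batch, batch) similarity
    targets = torch.arange(resp.size(0))             # positives on the diagonal
    return F.cross_entropy(logits, targets)

def style_kl_loss(selector_logits, teacher_logits):
    """KL term (assumed form): align the selector's distribution over
    candidate style sentences with a style-aware reference distribution."""
    return F.kl_div(
        F.log_softmax(selector_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )

def total_loss(resp_emb, style_emb, selector_logits, teacher_logits,
               lam=0.5, mu=0.5):
    """Weighted sum of the two auxiliary losses; lam and mu are
    hypothetical weights, not values from the paper."""
    return (lam * contrastive_loss(resp_emb, style_emb)
            + mu * style_kl_loss(selector_logits, teacher_logits))
```

In practice these auxiliary terms would be added to the standard generation (cross-entropy) loss of the dialogue model.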

We would like to thank the anonymous reviewers for their helpful comments and suggestions. This work is supported by National Key R&D Program of China (No. 2021YFC3340303) and National Natural Science Foundation of China (No. 62122089).

Association for Computational Linguistics

Findings of the Association for Computational Linguistics: EMNLP 2023
