Science Park 904, 1098 XH Amsterdam, Netherlands
Conversational AI in the Lab and in Practice
Assisting Users to Provide Feedback by Asking Smart Follow-up Questions
Floris den Hengst
Reinforcement Learning for Personalized Dialogue Management
Maartje ter Hoeve
Conversations with Documents
Diversifying Dialogue Response Generation with Prototype Guided Paraphrasing
Assisting Users to Provide Feedback by Asking Smart Follow-up Questions
KLM uses several channels to receive feedback from crew about problems they encounter in their daily work. By combining AI models, we built a system that helps users write their reports efficiently and effectively. I will explain the steps we took to build this system. We first simulated a chatbot that interactively records feedback from crew. We found that our users prefer a simple but intuitive entry form, combined with a text classifier, over the chatbot. We further streamline the entry form by asking smart follow-up questions and making sure all required fields are filled in. I will discuss the technical design of the system and the different pipelines we use (and plan to use) for deploying our models in production and monitoring them.
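The core interaction loop can be sketched in a few lines. This is a hypothetical toy, not KLM's system: the categories, keyword lists, and required-field names are all invented, and a real deployment would use a trained text classifier rather than keyword matching.

```python
# Hypothetical sketch: route a crew report to a category with a tiny
# keyword "classifier", then ask follow-up questions only for the
# required fields the user has not filled in yet.

CATEGORY_KEYWORDS = {
    "catering": {"meal", "catering", "food"},
    "technical": {"seat", "broken", "screen", "defect"},
}

REQUIRED_FIELDS = {
    "catering": ["flight_number", "meal_type"],
    "technical": ["flight_number", "seat_number"],
}

def classify(report_text):
    """Pick the category whose keywords overlap most with the report."""
    tokens = set(report_text.lower().split())
    scores = {cat: len(tokens & kws) for cat, kws in CATEGORY_KEYWORDS.items()}
    return max(scores, key=scores.get)

def follow_up_questions(report_text, filled_fields):
    """Return one smart follow-up question per missing required field."""
    category = classify(report_text)
    missing = [f for f in REQUIRED_FIELDS[category] if f not in filled_fields]
    return [f"Could you provide the {f.replace('_', ' ')}?" for f in missing]

# The flight number is already known, so only the seat number is asked for.
print(follow_up_questions("The seat screen is broken", {"flight_number": "KL1234"}))
```

The same routing logic applies whether the front end is a chatbot turn or an entry form; only the surface changes, which is what makes the form-plus-classifier variant cheap to offer once the chatbot pipeline exists.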
Floris den Hengst
Reinforcement Learning for Personalized Dialogue Management
Today’s dialogue management modules still require manual development and careful tuning. Reinforcement Learning (RL) allows for automatic optimization of these modules. We explore two RL-based approaches to this problem. The first approach uses a single learner and extends the POMDP formulation of dialogue state with features that describe the user context. The second approach segments users by context and then employs a separate learner per segment. We compare these approaches against existing non-RL and RL-based methods on three established application domains and one novel domain: financial product recommendation. We analyze the influence of context and of the amount of training experience on performance, and find that the learning approaches generally outperform a handcrafted gold standard.
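The second approach, one learner per context segment, can be illustrated with a minimal sketch. This is not the paper's POMDP dialogue policy: the contexts, actions, and reward model are invented, and each segment's learner is reduced to a simple epsilon-greedy bandit to keep the routing idea visible.

```python
import random

# Toy sketch of context segmentation: each user segment gets its own
# learner, and every episode is routed to the learner for that segment.

class EpsilonGreedyLearner:
    def __init__(self, actions, epsilon=0.1):
        self.epsilon = epsilon
        self.values = {a: 0.0 for a in actions}
        self.counts = {a: 0 for a in actions}

    def act(self, rng):
        # Explore with probability epsilon, otherwise exploit.
        if rng.random() < self.epsilon:
            return rng.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def update(self, action, reward):
        # Incremental mean of observed rewards per action.
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

ACTIONS = ["offer_savings", "offer_insurance"]
learners = {ctx: EpsilonGreedyLearner(ACTIONS) for ctx in ["student", "retiree"]}

# Invented reward model: each segment prefers a different product.
def reward(context, action):
    preferred = {"student": "offer_savings", "retiree": "offer_insurance"}
    return 1.0 if action == preferred[context] else 0.0

rng = random.Random(0)
for _ in range(200):
    context = rng.choice(["student", "retiree"])
    learner = learners[context]            # route the episode to its segment
    action = learner.act(rng)
    learner.update(action, reward(context, action))

# Each segment's learner converges on that segment's preferred action.
print({ctx: max(l.values, key=l.values.get) for ctx, l in learners.items()})
```

The single-learner alternative would instead fold the context into the state features of one policy; the trade-off is data efficiency per segment versus the ability to share experience across segments.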
Maartje ter Hoeve
Conversations with Documents
I focus on document-centered assistance, with three contributions. (1) We present a survey to understand the space of document-centered assistance and the capabilities people expect in this scenario. (2) We investigate the types of queries that users pose while seeking assistance with documents, and show that document-centered questions form the majority of these queries. (3) We present a set of initial machine-learned models showing that (a) we can accurately detect document-centered questions, and (b) we can build reasonably accurate models for answering such questions. These positive results are encouraging and suggest that even stronger results may be attained with continued study of this interesting and novel problem space. Our findings have implications for the design of intelligent systems that support task completion via natural interactions with documents.
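The detection task in contribution (3a) can be made concrete with a toy detector. This is not the authors' learned model: the cue words and example queries are invented, and a real system would train a classifier on labeled queries rather than match a hand-written word list.

```python
# Illustrative rule-based sketch: is a query document-centered, i.e. about
# the document the user currently has open, rather than about the world?

DOC_CUES = {"this", "here", "section", "paragraph", "document", "page", "table"}

def is_document_centered(query):
    """Flag queries that refer to the open document via deictic cue words."""
    tokens = set(query.lower().rstrip("?").split())
    return len(tokens & DOC_CUES) > 0

print(is_document_centered("What does this section mean?"))  # True
print(is_document_centered("Who is the CEO of Microsoft?"))  # False
```

Even this crude heuristic highlights why the task is tractable: document-centered queries tend to carry explicit deictic references ("this", "here") that general web-style queries lack.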
Diversifying Dialogue Response Generation with Prototype Guided Paraphrasing
We propose to combine the merits of template-based and corpus-based dialogue response generation (DRG) by introducing a prototype-based paraphrasing neural network, P2-Net. Instead of generating a response from scratch, P2-Net generates system responses by paraphrasing template-based responses. To guarantee precision, P2-Net learns to separate a response into its semantics, context influence, and paraphrasing noise, and to keep the semantics unchanged during paraphrasing. To introduce diversity, P2-Net samples previous conversational utterances as prototypes, from which the model extracts speaking-style information. We conduct experiments on the MultiWOZ dataset with both automatic and human evaluations. P2-Net achieves a significant improvement in diversity while preserving the semantics of the responses.
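The underlying idea, keep the semantics fixed while borrowing surface style from a prototype, can be shown with a toy string-level sketch. This is not the neural P2-Net: the slot pattern and the style frames below are invented stand-ins for what the model learns to extract from templates and sampled prototype utterances.

```python
import re

# Toy illustration of prototype-guided paraphrasing: extract the slot
# values (the semantics to preserve) from a template-based response, then
# re-render them in the speaking style of a sampled prototype.

def extract_slots(template_response, pattern):
    """Pull slot values out of a templated response via a regex pattern."""
    match = re.match(pattern, template_response)
    return match.groupdict() if match else {}

def paraphrase(slots, prototype_frame):
    """Render the same semantics in the prototype's speaking style."""
    return prototype_frame.format(**slots)

template = "The restaurant Nandos is located in the south."
pattern = r"The restaurant (?P<name>\w+) is located in the (?P<area>\w+)\."

# Style frames a model might distill from sampled previous utterances.
prototypes = [
    "Sure! {name} is a nice place in the {area} part of town.",
    "You could try {name}, it's over in the {area}.",
]

slots = extract_slots(template, pattern)
for frame in prototypes:
    print(paraphrase(slots, frame))
```

Here the precision/diversity split is explicit: the regex-extracted slots play the role of the invariant semantics, while swapping prototype frames varies only the style, which is the property the neural model enforces through its learned separation.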