Introduction Dialogue systems are systems intended to converse with human users, and recent advances in AI have helped narrow the gap between human-machine and human-human conversation in many consumer services. Researchers at AKA Intelligence have also worked on automated dialogue systems and built one of their own, Muse, which supports practicing English in addition to social conversation. One of the key differences between existing systems and Muse is the customized data structure, Bach, used to train the AI model. It is important for recent dialogue systems to learn from human-human conversations in order to generate natural human-machine conversations. The process of a dialogue system may be summarized as follows: when a user asks a question, the system either searches a […]
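The excerpt breaks off mid-sentence, but one common design it alludes to is retrieval-based: the system searches a store of conversation pairs for the closest match to the user's question. A minimal, purely illustrative sketch of that idea (the example pairs and the word-overlap scoring are our own assumptions, not Muse's actual method):

```python
# Toy retrieval-based response selection: score stored question-answer
# pairs by word overlap with the input and return the best answer.
# The pairs below are invented for illustration.

def tokenize(text):
    return set(text.lower().split())

def retrieve(question, qa_pairs):
    """Return the stored answer whose question overlaps most with the input."""
    q_words = tokenize(question)
    best = max(qa_pairs, key=lambda pair: len(q_words & tokenize(pair[0])))
    return best[1]

qa_pairs = [
    ("how do i practice english", "Let's start with a short conversation!"),
    ("what is the weather like", "It looks sunny today."),
]
```

Real systems replace word overlap with learned similarity models, but the search-then-respond shape stays the same.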
Introduction Recent advances in AI have contributed to the rebirth of chatbot-type dialogue systems able to interact with people through natural-language communication. Such systems can help people better understand the world around them and communicate more effectively with others, bridging communication gaps. It is therefore important to understand the quality attributes associated with developing and implementing high-quality conversational agents and dialogue systems. Muse is an NLP engine developed by AKA Intelligence with a focus on natural conversation. Engineers at AKA and SoftBank are collaborating to bring the Muse engine into Pepper, SoftBank's humanoid robot, so that Muse can serve as Pepper's English conversation system. Muse is also expanding to other hardware platforms. A typical example is Musio, a […]
Today we have a sneak preview of the MUSE API (Beta). The MUSE API is what Musio actually talks to in the cloud to manage dialogue, generate things to say, recognize faces, and so on. To better show some of the major things going on behind Musio, we built a temporary front end for the API. We have a video explaining the API's features as well as accompanying material. Our API is on its way, so stay tuned!
Table of Contents 1. Conditional Neural Network Architectures 1.1. goal 1.2. motivation 1.3. ingredients 1.4. steps 1.5. outlook 1.6. resources Conditional Neural Network Architectures goal Today we are going to have a look at conditional neural network architectures and present some of the findings of the recent papers “Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer” and “PathNet: Evolution Channels Gradient Descent in Super Neural Networks”. motivation The interest in conditional models is mainly based on their ability to incorporate a huge number of parameters, compared to standard architectures, without increasing the need for more computationally powerful hardware. Furthermore, such models seem able to reduce training time and are interesting for multi-task learning, online learning, and transfer learning. […]
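The sparsely-gated idea behind these savings can be sketched in a few lines: a gating network scores every expert, but only the top-k experts are actually evaluated for a given input, so the parameter count grows with the number of experts while the compute per input stays roughly constant. A toy pure-Python sketch (the expert functions and gate weights here are invented for illustration, not taken from the paper's implementation):

```python
import math
import random

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def top_k(xs, k):
    # indices of the k largest gate scores
    return sorted(range(len(xs)), key=lambda i: xs[i], reverse=True)[:k]

def moe_layer(x, experts, gate_weights, k=2):
    """Sparsely-gated mixture of experts: only the top-k experts run."""
    # gating network: one linear score per expert
    scores = [sum(w * xi for w, xi in zip(wrow, x)) for wrow in gate_weights]
    chosen = top_k(scores, k)
    gates = softmax([scores[i] for i in chosen])  # renormalize over chosen
    # weighted sum of the k selected expert outputs; the rest never execute
    out = [0.0] * len(x)
    for g, i in zip(gates, chosen):
        y = experts[i](x)
        out = [o + g * yi for o, yi in zip(out, y)]
    return out, chosen
```

With, say, 8 experts and k=2, only a quarter of the expert parameters touch any single input, which is the conditional-computation trade-off the paper exploits at much larger scale.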
Ever since we introduced Musio to the world, we've received countless questions from our partners, investors, and customers, as well as from curious enthusiasts. So we thought it would be a good idea to answer the most frequently asked questions to make Musio more engaging and easier to understand. We hope that the following Q&A sessions enlighten our readers on who we are and what we dream of with Musio. The first round of questions is about Musio as an AI robot. So let's start! Q. I've heard of lots of AIs on the market, from the likes of Apple's Siri to robots like Pepper. What kind of AI is Musio? Musio is a robot run on AKA's special AI system (software engine) that […]
Table of Contents 1. Adversarial techniques for dialogue generation 1.1. goal 1.2. motivation 1.3. ingredients 1.4. steps 1.5. outlook 1.6. resources Adversarial techniques for dialogue generation goal This week we are going to have a look at the latest developments of generative adversarial networks (GANs) in the field of dialogue generation by summarizing the paper “Adversarial Learning for Neural Dialogue Generation”. motivation General encoder-decoder models for response generation are usually unable to produce meaningful utterances and instead come up with short, generic, repetitive, and non-informative sequences of words. The idea here is to apply adversarial methods, so far successful mainly in computer vision, to NLP problems, in particular dialogue. Adversarial training with respect to modeling conversations can be […]
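The adversarial setup for dialogue can be sketched as a policy-gradient loop: the discriminator's probability that a response is human serves as the generator's reward, and the generator's sampling distribution is nudged toward responses the discriminator accepts. A toy pure-Python illustration (the candidate responses, the fixed discriminator scores, and the learning rate are all invented for the example; in the paper both networks are trained jointly):

```python
import math
import random

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical candidate responses: the first two are "human-like",
# the last two are the generic replies encoder-decoder models favor.
candidates = ["the weather in seoul is sunny today",
              "i visited the museum last weekend",
              "i don't know", "i see"]
HUMAN_LIKE = {0, 1}

def discriminator(idx):
    # Stand-in for a trained discriminator: P(response is human).
    return 0.9 if idx in HUMAN_LIKE else 0.1

def reinforce_step(logits, lr=0.5):
    """One REINFORCE update: sample a response, use D's score as reward."""
    probs = softmax(logits)
    idx = random.choices(range(len(probs)), weights=probs)[0]
    reward = discriminator(idx)
    # grad of log p(idx) w.r.t. logit_j is 1[j == idx] - p_j
    new_logits = [l + lr * reward * ((1.0 if j == idx else 0.0) - p)
                  for j, (l, p) in enumerate(zip(logits, probs))]
    return new_logits, reward
```

Run for a few hundred steps and the generator's probability mass shifts toward the responses the discriminator rewards, which is the mechanism the paper uses to push decoders away from "i don't know"-style replies.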
Table of Contents 1. Compression and distillation of models 1.1. goal 1.2. motivation 1.3. ingredients 1.4. steps 1.5. outlook 1.6. resources Compression of neural networks goal In this blog post we will have a look at methods for compressing deep neural network models and reducing their size. motivation The simple fact that bigger and deeper is better for training leads to models that take up considerable space in memory. However, memory is usually limited, whether by the hardware budget or, increasingly, by the constraints of mobile devices. Another important consideration for deploying neural networks in applications is inference time. In general, larger models also […]
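The distillation half of the title can be illustrated with the classic soft-target recipe: soften the teacher's logits with a temperature T, then train the small student to match those soft targets. A minimal sketch of the two core formulas (the example logits are invented; real training mixes this loss with the usual hard-label term):

```python
import math

def softened(logits, T):
    """Softmax at temperature T: higher T spreads mass over more classes."""
    exps = [math.exp(l / T) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(teacher_logits, student_logits, T):
    """Cross-entropy between the teacher's soft targets and the student's
    softened predictions, scaled by T^2 so its gradient magnitude stays
    comparable to the hard-label loss."""
    p = softened(teacher_logits, T)
    q = softened(student_logits, T)
    return -T * T * sum(pi * math.log(qi) for pi, qi in zip(p, q))
```

The temperature is the whole trick: at T=1 the teacher's output is nearly one-hot and carries little extra signal, while at higher T the relative scores of the wrong classes (the "dark knowledge") become visible to the student.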