Accepted papers
10-minute spotlight talk & poster
No.1 Martino Mensio, Emanuele Bastianelli, Ilaria Tiddi, and Giuseppe Rizzo
A Multi-layer LSTM-based Approach for Robot Command Interaction Modeling
No.2 Yen-Ling Kuo, Andrei Barbu, and Boris Katz
Deep compositional models for robotic planning and language
No.3 Rikunari Sagara, Ryo Taguchi, Akira Taniguchi, Tadahiro Taniguchi, Koosuke Hattori, Masahiro Hoguro, and Taizo Umezaki
Mutual Learning of Relative Spatial Concepts and Phoneme Sequences using Spoken User Utterances
No.4 Xavier Hinaut
Neuro-Inspired Model for Robots Learning Language from Phonemes, Words or Grammatical Constructions
2-minute flash talk & poster
No.1 Lu Cao and Yoshinori Kuno
Understanding Spatial Knowledge in Natural Language
No.2 Alexander Sutherland, Sven Magg, Cornelius Weber, and Stefan Wermter
Analyzing the Influence of Dataset Composition for Emotion Recognition
No.3 Matthias Hirschmanner, Oliver Schurer, Brigitte Krenn, Christoph Muller, Friedrich Neubarth, and Markus Vincze
Improving the Quality of Dialogues with Robots for Learning of Object Meaning
No.4 Agnese Augello, Ignazio Infantino, Umberto Maniscalco, Giovanni Pilato, and Filippo Vella
Grounding Language on Roboceptions
No.5 Mithun Kinarullathil, Pedro H. Martins, Carlos Azevedo, Oscar Lima, Guilherme Lawless, Pedro U. Lima, Luis Custodio, and Rodrigo Ventura
From User Spoken Commands to Robot Task Plans: a Case Study in RoboCup@Home