Invited speakers

Prof. Tadahiro Taniguchi, Ritsumeikan University

Title: TBA

Abstract:


--------------------
Prof. Daichi Mochihashi, Institute of Statistical Mathematics

Title: Inducing Motions from Movements

Abstract:
Recognizing motions, such as running, kicking, holding, or eating, from movements is a fundamental human ability that resembles learning "words" from a sequence of characters in language. It forms a basis for further semantic processing and is also important from a developmental point of view. In this talk, I present our approaches to this problem, which extend methods for word recognition. Employing a semi-Markov model whose emissions are Gaussian processes, we show that it is possible to recognize motions from robot movements in an unsupervised fashion. As for the number of latent motions, we show that a method based on hierarchical Dirichlet processes can find the proper number of motions. Finally, if time permits, I will discuss how to integrate more advanced methods from natural language processing into robotics research to enable higher levels of recognition.


--------------------
Dr. Scott Heath, University of Queensland

Title: Lingodroids: Cross-situational learning of episodic elements

Abstract:
Lingodroids are mobile robots that are capable of learning their own lexicons for space and time. They use simple conversations (language games) to bootstrap and ground the meanings of spatial and temporal terms within their sensors and cognition through shared experiences of space and time. For Lingodroids to effectively bootstrap the acquisition of language, they must handle referential uncertainty: the problem of deciding what meaning to ascribe to a given word. For the first Lingodroids, the underlying representation was specified within the grammar of a conversation (pre-determined). The advantage of pre-determined conversations is that learned words are immediately usable; the disadvantage is that lexicon learning is constrained to words for innate features.
Later studies investigated the use of cross-situational learning to resolve referential uncertainty for certain aspects of space and time when the Lingodroids are not able to communicate their attention through conversation. In particular, Lingodroids that have different spatial sensors and cognition need to be able to determine which dimension (space or time) labels from another robot refer to. Cross-situational learning was compared to pre-determined conversations in terms of long-term coherence, immediate usability, and learning time. Results demonstrate that for unconstrained learning, long-term coherence is unaffected, although at the cost of increased learning time and decreased immediate usability. Immediate usability is further explored through a set of phase-portrait visualisations, which show that for Lingodroids there is a relationship between the generalisation of a word and its referential resolution.
This talk will briefly present Lingodroids and their abilities to learn spatial and temporal terms, and then describe the Lingodroids' use of cross-situational learning for handling referential uncertainty.


--------------------
Dr. Kuniyuki Takahashi, Preferred Networks

Title: Real-World Objects Interaction with Unconstrained Spoken Language Instructions

Abstract:
Comprehension of spoken natural language is an essential skill for robots to communicate with humans effectively. However, handling unconstrained spoken instructions is challenging due to the complex structures and wide variety of expressions used in spoken language, as well as the inherent ambiguity of human instructions. To address these challenges, I will introduce related research on state-of-the-art deep learning, along with our new research results.


--------------------
Dr. Andrzej Pronobis, University of Washington

Title: TBA

Abstract:


--------------------
Prof. Tetsuya Ogata, Waseda University

Title: Recurrent Neural Models for Translation between Robot Actions and Language

Abstract:
Bidirectional translation between robot actions and language faces multiple serious difficulties, such as the segmentation of continuous sensory-motor flows, the ambiguity and incompleteness of sentences, and the many-to-many mapping between actions and sentences. In this talk, I will introduce the series of recurrent neural models for robot action-language learning, using parametric bias and/or sequence-to-sequence approaches, that we have proposed over the past ten years.