This page is no longer updated.

Behavior and Task Learning from Demonstration

The move of robots from industrial to everyday environments such as hospitals and homes introduces demanding requirements for handling uncertainty and changing external conditions. Still, most robots are built to perform pre-defined tasks, programmed by a researcher or engineer. To become economically attractive, however, the robots of tomorrow will have to handle a large variety of tasks. This adaptation to new tasks cannot be expected to happen through regular end-user programming. Rather, the robot has to be delivered with advanced capabilities for learning new tasks and new working conditions, adapting to the present environment. Robot learning is for this reason a very active research area.

A popular method for teaching robots simple behaviors involves a human demonstrating a behavior via remote control, or by having the robot observe the human's movements. This approach is commonly referred to as Learning from Demonstration (LFD) or Imitation Learning (IL).

The goal of LFD is to create a representation of the demonstrated behavior such that the robot, when executing that representation, reproduces the intended behavior. The meaning of a behavior is in this context very general and can be anything from surviving in the environment to a specific task such as taking out the garbage.

This project focuses on the theoretical aspects of robot learning, and specifically learning from demonstration using teleoperation. In order to interpret a demonstration, the robot has to have some knowledge of how to extract the relevant aspects of the demonstration. This previous knowledge, or bias, can in turn be the result of previous learning. From this view, learning is seen as a gradual development of both knowledge and bias where learning is only successful when there is suitable bias available, i.e., when the taught task is not too easy and not too hard.

Previous knowledge is commonly stored as behavior primitives, which can be either learned or hard-coded by a programmer. A demonstrated behavior can then be broken down into segments, each corresponding to one primitive behavior. Primitive behaviors are implemented by already learned or hard-coded controllers that can be combined into new, more complex behaviors. This transforms learning from demonstration into three basic activities: behavior segmentation, behavior recognition, and behavior coordination (Billing 2007, Billing & Hellström 2008b).
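The idea of combining primitive controllers into more complex behaviors can be illustrated with a minimal sketch. All names and the threshold-based switching rule below are hypothetical stand-ins, not the project's actual implementation; a primitive is modeled simply as a function from a sensor reading to an action.

```python
from typing import Callable, List

# A behavior primitive maps an observation (sensor reading) to an action.
Primitive = Callable[[float], float]

def go_forward(obs: float) -> float:
    """Drive forward at constant speed regardless of observation."""
    return 1.0

def avoid(obs: float) -> float:
    """Back off proportionally when the obstacle reading is high."""
    return -obs

def sequence(primitives: List[Primitive], switch_at: List[float]) -> Primitive:
    """Compose primitives into one controller: run the first primitive whose
    switching threshold the observation falls below (a toy stand-in for
    sub-goal-based switching criteria)."""
    def composed(obs: float) -> float:
        for prim, threshold in zip(primitives, switch_at):
            if obs < threshold:
                return prim(obs)
        return primitives[-1](obs)  # fall back to the last primitive
    return composed

# Combine two primitives into a new, more complex behavior.
controller = sequence([go_forward, avoid], switch_at=[0.5, 1.0])
```

With this composition, a low obstacle reading selects `go_forward` while a higher reading selects `avoid`; the thresholds play the role of switching criteria between primitives.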

Behavior segmentation refers to the process of dividing the observed event sequence into segments, each of which can be explained by a single primitive. Behavior recognition involves identifying which primitive, with possible parametrization, best matches each segment. Finally, behavior coordination involves identifying switching criteria between primitives and how the primitives should be composed. Identification of switching criteria corresponds to finding sub-goals in the demonstrated behavior.
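The three activities can be sketched on a toy one-dimensional demonstration trace. The segmentation rule (cut at sign changes) and the recognition rule (mean sign of a segment) are deliberately simplistic assumptions for illustration, not the methods developed in the project.

```python
from typing import List, Tuple

def segment(trace: List[float]) -> List[Tuple[int, int]]:
    """Behavior segmentation: split the trace wherever the sign of the
    signal changes (a toy switching criterion)."""
    cuts = [0]
    for i in range(1, len(trace)):
        if (trace[i] >= 0) != (trace[i - 1] >= 0):
            cuts.append(i)
    cuts.append(len(trace))
    return list(zip(cuts[:-1], cuts[1:]))

def recognize(trace: List[float], seg: Tuple[int, int]) -> str:
    """Behavior recognition: label a segment with the primitive that best
    explains it -- here just the mean sign of the segment."""
    start, end = seg
    mean = sum(trace[start:end]) / (end - start)
    return "forward" if mean >= 0 else "reverse"

def coordinate(trace: List[float]) -> List[str]:
    """Behavior coordination: the ordered list of recognized primitives,
    with segment boundaries acting as sub-goals."""
    return [recognize(trace, s) for s in segment(trace)]

demo = [0.9, 1.0, -0.5, -0.4, 0.8]
print(coordinate(demo))  # → ['forward', 'reverse', 'forward']
```

The demonstration is thus reduced to a sequence of labeled primitives plus the boundary points between them, which is the form a coordination mechanism can replay or generalize.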

So far, a broad overview of robot learning and behavior representation, including a formalisation of LFD, has been produced within the present project (Billing 2007, Billing & Hellström 2008b). Furthermore, three general methods for behavior recognition have been developed and tested (Billing & Hellström 2008a).

Recently, the focus of the project has turned towards functional models of cortex and how neurological models can be applied within LFD, specifically to develop a common understanding between robot pupil and human teacher. This includes how attentional processes can be applied during learning as a way to infer bias and to support the extraction of relevant features from a demonstration.

List of publications

Billing, E. A., Hellström, T. and Janlert, L. E. Behavior Recognition for Learning from Demonstration. Accepted to IEEE International Conference on Robotics and Automation (ICRA 2010), Anchorage, Alaska, May 3-8, 2010.
Billing, E. A. Cognitive Perspectives on Robot Behavior. In Proceedings of the Second International Conference on Agents and Artificial Intelligence (ICAART), Special Session on Languages with Multi-Agent Systems and Bio-Inspired Devices, Valencia, Spain, January 22-24, 2010.
Billing, E. A., Hellström, T. and Janlert, L. E. Model-Free Learning from Demonstration. In Proceedings of the Second International Conference on Agents and Artificial Intelligence (ICAART), Valencia, Spain, January 22-24, 2010. Best Student Paper Award.
Billing, E. A. Cognition Reversed - Robot Learning from Demonstration. Licentiate thesis, ISBN 978-91-7264-925-5, Department of Computing Science, Umeå University, Umeå, Sweden, 2009.
Billing, E. A. and Hellström, T. Formalising Learning from Demonstration. UMINF 08.10, Department of Computing Science, Umeå University, Umeå, Sweden, 2008.
Billing, E. A. and Hellström, T. Behavior recognition for segmentation of demonstrated tasks. In Proceedings of the IEEE International Conference on Distributed Human-Machine Systems, Athens, Greece, March 2008.
Billing, E. A. Representing Behavior - Distributed theories in a context of robotics. UMINF 07.25, Department of Computing Science, Umeå University, Umeå, Sweden, 2007.
Billing, E. A. and Hellström, T. Behavior and Task Learning from Demonstration. In Proceedings of the 23rd Annual Workshop of the Swedish Artificial Intelligence Society (SAIS06), Umeå, Sweden, 2006.
Billing, E. A. Simulation of Corticospinal Interaction for Motor Control. Master thesis, Cognitive Science Programme, Department of Integrative Medical Biology, Umeå University, Umeå, Sweden, 2004.

Erik Billing
Department of Computing Science
Umeå University