IEEE/RSJ International Conference on
Intelligent Robots and Systems
Vancouver, BC, Canada
September 24–28, 2017

Menu

Plenaries
Dieter Fox, University of Washington
Fei-Fei Li, Stanford University/Google
Maja Mataric, University of Southern California
Keynotes
Nick Roy, MIT
Brian Gerkey, Open Source Robotics Foundation (OSRF)
Steve Waslander, University of Waterloo
Lynne Parker, University of Tennessee
Josh Bongard, University of Vermont
Vincent Hayward, Institut des Systèmes Intelligents et de Robotique (ISIR)
David Hsu, National University of Singapore
Julie Shah, MIT
Frank Chongwoo Park, Seoul National University
Edwin Olson, University of Michigan
Hiroshi Ishiguro, Advanced Telecommunications Research Institute International (ATR)
Oliver Brock, Technical University of Berlin

Dieter Fox

Toward Robots that Understand People and Their Environments

To interact and collaborate with people in a natural way, robots must be able to recognize objects in their environments, accurately track the motion of humans, and estimate their goals and intentions. Recent years have seen dramatic improvements in robotic capabilities to model, detect, and track non-rigid objects such as human bodies, hands, and their own manipulators. These developments can serve as the basis for providing robots with an unprecedented understanding of their environment and the people therein. I will use examples from our research on modeling, detecting, and tracking in 3D scenes to highlight some of these advances and discuss open problems that still need to be addressed. I will also use these examples to highlight the pros and cons of model-based approaches and deep learning techniques for solving perception problems in robotics.

 

Dieter Fox is a Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, where he heads the UW Robotics and State Estimation Lab. From 2009 to 2011, he was also Director of the Intel Research Labs Seattle. Dieter obtained his Ph.D. from the University of Bonn, Germany. His research is in robotics and artificial intelligence, with a focus on state estimation and perception applied to problems such as mapping, object detection and tracking, manipulation, and activity recognition. He has published more than 180 technical papers and is the co-author of the textbook “Probabilistic Robotics.” He is a Fellow of the IEEE and the AAAI, and he received several best paper awards at major robotics, AI, and computer vision conferences. He was an editor of the IEEE Transactions on Robotics, program co-chair of the 2008 AAAI Conference on Artificial Intelligence, and program chair of the 2013 Robotics: Science and Systems conference.


Maja J. Mataric

Automation vs. Augmentation: Defining the Future of Socially Assistive Robotics

Robotics has been driven by the desire to automate work, but automation raises concerns about the impact on the future of work. Less discussed but no less important are the implications for human health, as the science on longevity and resilience indicates that having the drive to work is key to health and wellness.

However, robots, machines originally invented to automate work, are also becoming helpful by doing no physical work at all, instead motivating and coaching us to do our own work, based on evidence from neuroscience and behavioral science demonstrating that human behavior is most strongly influenced by physically embodied social agents, including robots. The field of socially assistive robotics (SAR) focuses on developing intelligent, socially interactive machines that provide assistance through social rather than physical means. The robot's physical embodiment is at the heart of SAR's effectiveness, as it leverages the inherently human tendency to engage with lifelike (but not necessarily human-like or otherwise biomimetic) agents. People readily ascribe intention, personality, and emotion to robots; SAR leverages this engagement to develop robots capable of monitoring, motivating, and sustaining user activities and improving human learning, training, performance, and health outcomes. Human-robot interaction (HRI) for SAR is a growing, multifaceted research field at the intersection of engineering, health sciences, neuroscience, and the social and cognitive sciences, with rapidly growing commercial spinouts. This talk will describe research into embodiment, modeling and steering social dynamics, and long-term adaptation and learning for SAR, grounded in projects involving multi-modal activity data, modeling personality and engagement, formalizing social use of space and non-verbal communication, and personalizing the interaction with the user over a period of months, among others. SAR systems have been validated with a variety of user populations, including stroke patients, children with autism spectrum disorders, and elderly individuals with Alzheimer's and other forms of dementia. The talk will cover the short-, medium-, and long-term commercial applications of SAR, as well as the frontiers of SAR research.

 

Maja Matarić is the Chan Soon-Shiong Professor of Computer Science, Neuroscience, and Pediatrics at the University of Southern California, founding director of the USC Robotics and Autonomous Systems Center, and Vice Dean for Research in the Viterbi School of Engineering. She received her PhD and MS from MIT and her BS from the University of Kansas. She is a Fellow of AAAS, IEEE, and AAAI, and has received the Presidential Award for Excellence in Science, Mathematics and Engineering Mentoring, the Anita Borg Institute Women of Vision Award in Innovation, the Okawa Foundation Award, the NSF CAREER Award, the MIT TR35 Award, and the IEEE RAS Early Career Award. She has published extensively and is active in K-12 STEM outreach. A pioneer of socially assistive robotics, her research enables robots to help people through social interaction in therapy, rehabilitation, training, and education, developing robot-assisted therapies for autism, stroke, Alzheimer's, and other special needs, as well as wellness interventions (http://robotics.usc.edu/interaction/). She is also founder and CSO of Embodied, Inc. (www.embodied.me).


Prof. Nick Roy, MIT


Brian Gerkey

Fun! Free! Awesome! Advanced robotics in the era of open source software

After many years of it being "just around the corner," we are now witnessing the beginning of a robot revolution. We hear about robots daily, from awe-inspiring technical achievements to breathtaking investments and acquisitions. Why? And why now? In this session, I'll explain how open source software, embedded computing, and new sensors have come together to change the landscape for robotics developers (and users).

 

Brian Gerkey is CEO of OSRF. Prior to joining OSRF, Brian was Director of Open Source Development at Willow Garage. Previously, Brian was a Computer Scientist in the Artificial Intelligence Center at SRI, and before that, a postdoctoral research fellow in the Artificial Intelligence Lab at Stanford University. Brian received his Ph.D. in Computer Science from the University of Southern California (USC) in 2003, his M.S. in Computer Science from USC in 2000, and his B.S.E. in Computer Engineering, with a secondary major in Mathematics and a minor in Robotics and Automation, from Tulane University in 1998. Brian is a strong believer in, frequent contributor to, and constant beneficiary of open source software. Since 2008, Brian has worked on the ROS Project, which develops and releases one of the most widely used robot software platforms in robotics research and education (and soon industry). He is founding and former lead developer on the open source Player Project, which continues to maintain widely used robot simulation and development tools. For his work on Player and ROS, Brian was recognized by MIT Technology Review with the TR35 award in 2011 and by Silicon Valley Business Journal with their 40 Under 40 award in 2016.


Steven Waslander

Static and Dynamic Multi-Camera Clusters for Localization and Mapping

Multi-camera clusters provide significant advantages over monocular and stereo configurations for localization and mapping, particularly in complex environments with moving objects. The wide or omni-directional field of view afforded by multiple cameras allows the mitigation of detrimental effects from feature deprivation or occlusion. Where possible, large baselines between camera centres afford good sensitivity for scale resolution, without the need for overlap. In this talk, I will describe our work on multi-camera localization and mapping for both static clusters with rigidly mounted cameras and dynamic clusters with gimballed cameras. We evaluate conditions for degeneracy of the state estimation process for each type of cluster. We demonstrate performance results on unmanned aerial vehicles and automotive benchmark data.
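As a rough illustration of the geometry behind these clusters, here is a minimal sketch (not material from the talk; the intrinsics, extrinsics, and landmark positions are assumed values) that projects world landmarks into two rigidly mounted cameras with non-overlapping fields of view. Because both cameras are tied to a single cluster body pose through fixed extrinsic transforms, features seen by either camera constrain the same state estimate, which is how a wide combined field of view mitigates feature deprivation and occlusion.

```python
# Minimal multi-camera-cluster projection sketch (illustrative values only).
import numpy as np

K = np.array([[400.0, 0, 320], [0, 400.0, 240], [0, 0, 1]])  # shared pinhole intrinsics


def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T


def project(T_cam_body, T_body_world, p_world):
    """Project a world point into one camera; return pixel coords, or None if behind it."""
    p_cam = T_cam_body @ T_body_world @ np.append(p_world, 1.0)
    if p_cam[2] <= 0:
        return None
    uv = K @ (p_cam[:3] / p_cam[2])
    return uv[:2]


# Camera 0 looks along body +z, camera 1 looks along body -x (yawed 90 degrees),
# so their fields of view do not overlap, yet both share the single body pose below.
R_side = np.array([[0.0, 0, 1], [0, 1, 0], [-1, 0, 0]])
extrinsics = [make_T(np.eye(3), [0.25, 0, 0]), make_T(R_side, [-0.25, 0, 0])]

T_body_world = np.eye(4)                      # cluster body at the world origin
landmarks = [np.array([0.0, 0.0, 5.0]),       # ahead of the cluster
             np.array([-4.0, 0.0, 0.5])]      # off to the side

for i, T_cb in enumerate(extrinsics):
    for p in landmarks:
        uv = project(T_cb, T_body_world, p)
        if uv is not None and 0 <= uv[0] < 640 and 0 <= uv[1] < 480:
            print(f"camera {i} sees landmark {p} at pixel {np.round(uv, 1)}")
```

In a real estimator, the residuals between such predicted pixels and the measured feature locations from every camera in the cluster would jointly constrain the body pose (and, given sufficient baseline, the scale).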

 

Prof. Waslander received his B.Sc.E. in 1998 from Queen's University, his M.S. in 2002 and his Ph.D. in 2007, both from Stanford University in Aeronautics and Astronautics. He was a Control Systems Analyst for Pratt & Whitney Canada from 1998 to 2001. In 2008, he joined the Department of Mechanical and Mechatronics Engineering at the University of Waterloo in Waterloo, ON, Canada, as an Assistant Professor. He is the Director of the Waterloo Autonomous Vehicles Laboratory (WAVELab, http://wavelab.uwaterloo.ca). His research interests are in the areas of autonomous aerial and ground vehicles, simultaneous localization and mapping, nonlinear estimation and control, and multi-vehicle systems.


Josh Bongard

Robots that Evolve, Develop, and Learn

Many organisms experience radical morphological and neurological change over evolutionary time, as well as over their own lifetimes. Traditionally, such change has been hard to achieve with rigid-body robots. The emerging field of soft robotics, however, is now making it relatively easy to create robots that change their body plans and controllers over multiple time scales. In this talk I will explore not just how to do this, but why one might choose to do so: I will show how such robots are more adaptable than robots that cannot adapt body and brain over time.

 

Josh Bongard is a roboticist and professor in the Department of Computer Science at the University of Vermont. He was a Microsoft New Faculty Fellow (2006), an MIT Technology Review “Top Innovator under the Age of 35” (2007), and the recipient of a PECASE award (2011). His funded research covers the crowdsourcing of robotics, embodied cognition, human-robot interaction, autonomous machines that recover functionality after unanticipated damage, soft robotics, and white box machine learning. His work has been funded by NSF, DARPA, ARO, AFRL, and NASA. He is the co-author of the book How the Body Shapes the Way We Think: A New View of Intelligence.


Vincent Hayward

Mechanics of Tactile Perception and Haptic Interface Design

The physics of contact differs in fundamental ways from the physics of acoustics and optics. It should therefore be expected that the processing of somatosensory information is very different from the processing in other sensory modalities. The talk will describe some salient facts regarding the physics of touch and will continue with a description of recent findings regarding the processing of time-evolving tactile inputs in second-order neurones in mammals. These ideas can be applied to the design of cost-effective, efficient tactile displays and tactile sensors.

Vincent Hayward is a professor (on leave) at the Université Pierre et Marie Curie (UPMC) in Paris. Before that, he was with the Department of Electrical and Computer Engineering at McGill University, Montréal, Canada, where he became a full Professor in 2006 and was the Director of the McGill Centre for Intelligent Machines from 2001 to 2004. Hayward is interested in haptic device design, human perception, and robotics, and he is a Fellow of the IEEE. He was a European Research Council grantee from 2010 to 2016. Since January 2017, Hayward has been a Professor of Tactile Perception and Technology at the School of Advanced Studies of the University of London, supported by a Leverhulme Trust Fellowship.



David Hsu

Robust Robot Decision Making under Uncertainty: From Known Unknowns to Unknown Unknowns

In the near future, robots will "live" with humans, providing a variety of services at homes, in workplaces, or on the road. For robots to become effective and reliable human collaborators, a core challenge is the inherent uncertainty in understanding human intentions, in addition to imperfect robot control and sensor noise. To achieve robust performance, robots must hedge against uncertainties and sometimes actively elicit information to reduce uncertainties. I will briefly review Partially Observable Markov Decision Process (POMDP) as a principled general model for planning under uncertainty and present our recent work that tackles the intractable POMDP planning problem and achieves near real-time performance in dynamic environments for autonomous vehicle navigation among many pedestrians. In practice, an outstanding challenge of POMDP planning is model construction. I will also discuss how recent advances in deep learning can help bridge the gap and connect planning and learning.
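For readers less familiar with the POMDP formalism mentioned above, the core operation is a Bayesian belief update over the hidden state after each action and observation; the planner then chooses actions that maximize expected reward over future beliefs. Below is a minimal sketch of that belief update on a toy two-state problem; the matrices and names are illustrative assumptions, not material from the talk.

```python
# Toy POMDP belief update (illustrative numbers; not from the talk).
import numpy as np


def belief_update(belief, action, observation, T, Z):
    """Bayes filter over hidden states: b'(s') ∝ Z[a, s', o] * sum_s T[a, s, s'] * b(s)."""
    predicted = T[action].T @ belief                 # predict the next-state distribution
    updated = Z[action][:, observation] * predicted  # weight by the observation likelihood
    return updated / updated.sum()                   # renormalize to a probability vector


# Two hidden states, two actions, two observations.
T = np.array([[[0.9, 0.1], [0.2, 0.8]],    # action 0: P(next state | state)
              [[0.5, 0.5], [0.5, 0.5]]])   # action 1
Z = np.array([[[0.85, 0.15], [0.1, 0.9]],  # action 0: P(observation | next state)
              [[0.5, 0.5], [0.5, 0.5]]])   # action 1

b = np.array([0.5, 0.5])                   # start with a uniform belief
b = belief_update(b, action=0, observation=0, T=T, Z=Z)
print(b)                                   # belief shifts toward hidden state 0
```

POMDP planners reason over distributions like `b` rather than over single states, which is what lets them trade off information-gathering actions against immediate progress; the intractability mentioned in the abstract comes from the rapid growth of this belief space.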

 

David Hsu is a professor of computer science at the National University of Singapore (NUS), a member of the NUS Graduate School for Integrative Sciences & Engineering, and deputy director of the Advanced Robotics Center. He received his Ph.D. in computer science from Stanford University. In recent years, he has been working on robot planning and learning under uncertainty.

He served as the General Co-Chair of IEEE International Conference on Robotics & Automation (ICRA) 2016, the Program Chair of Robotics: Science & Systems (RSS) 2015, a steering committee member of International Workshop on the Algorithmic Foundation of Robotics (WAFR), an editorial board member of Journal of Artificial Intelligence Research, and an associate editor of IEEE Transactions on Robotics. He, along with colleagues and students, won the Humanitarian Robotics and Automation Technology Challenge Award at ICRA 2015 and the RoboCup Best Paper Award at IEEE/RSJ International Conference on Intelligent Robots & Systems (IROS) 2015.


Julie Shah

Enhancing Human Capability with Intelligent Machine Teammates

Every team has top performers -- people who excel at working in a team to find the right solutions in complex, difficult situations. These top performers include nurses who run hospital floors, emergency response teams, air traffic controllers, and factory line supervisors. While they may outperform the most sophisticated optimization and scheduling algorithms, they often cannot tell us how they do it. Similarly, even when a machine can do the job better than most of us, it can't explain how. In this talk I share recent work investigating effective ways to blend the unique decision-making strengths of humans and machines. I discuss the development of computational models that enable machines to efficiently infer the mental state of human teammates and thereby collaborate with people in richer, more flexible ways. Our studies demonstrate statistically significant improvements in people's performance on military, healthcare, and manufacturing tasks when aided by intelligent machine teammates.

 

Julie Shah is an Associate Professor of Aeronautics and Astronautics at MIT and director of the Interactive Robotics Group, which aims to imagine the future of work by designing collaborative robot teammates that enhance human capability. As a current fellow of Harvard University's Radcliffe Institute for Advanced Study, she is expanding the use of human cognitive models for artificial intelligence. She has translated her work to manufacturing assembly lines, healthcare applications, transportation and defense. Before joining the faculty, she worked at Boeing Research and Technology on robotics applications for aerospace manufacturing. Prof. Shah has been recognized by the National Science Foundation with a Faculty Early Career Development (CAREER) award and by MIT Technology Review on its 35 Innovators Under 35 list. Her work on industrial human-robot collaboration was also in Technology Review’s 2013 list of 10 Breakthrough Technologies. She has received international recognition in the form of best paper awards and nominations from the ACM/IEEE International Conference on Human-Robot Interaction, the American Institute of Aeronautics and Astronautics, the Human Factors and Ergonomics Society, the International Conference on Automated Planning and Scheduling, and the International Symposium on Robotics. She earned degrees in aeronautics and astronautics and in autonomous systems from MIT.


Edwin Olson

Reliability and Robustness of Autonomous Systems

From self-driving cars to domestic robots, it's relatively easy to build a system that works well enough for the purposes of a video. Achieving high levels of reliability, on the other hand, is all too often viewed as an engineering step through which bugs are removed and corner cases are addressed. In some domains, however, the reliability demonstrated by today's systems and the bar needed for real-world deployment remain many orders of magnitude apart. This is not a matter of engineering polish, but rather a call for fundamentally different ways of building our systems.

 


Oliver Brock

Robotics as the Path to Intelligence

The historical promise of robotics is to devise technological artifacts that replicate all human capabilities. This includes physical capabilities like locomotion and dexterity, intellectual capabilities like reasoning and learning, and also social capabilities like collaboration and training. Are we, as a discipline, still pursuing this objective? Is it even worthwhile or promising to do so? And if so, are we making good progress? I will present my views on the importance for robotics of understanding and replicating intelligence, including physical intelligence and social intelligence. By juxtaposing views from related disciplines with recent accomplishments of our field, e.g. soft robotics and deep learning, I will sketch a path towards a future generation of robots with human-like abilities.

 

Oliver Brock is the Alexander-von-Humboldt Professor of Robotics in the School of Electrical Engineering and Computer Science at the Technische Universität Berlin in Germany. He received his Diploma in Computer Science in 1993 from the Technische Universität Berlin and his Master's and Ph.D. in Computer Science from Stanford University in 1994 and 2000, respectively. He also held post-doctoral positions at Rice University and Stanford University. Starting in 2002, he was an Assistant Professor and Associate Professor in the Department of Computer Science at the University of Massachusetts Amherst, before moving back to the Technische Universität Berlin in 2009. The research of Brock's lab, the Robotics and Biology Laboratory, focuses on mobile manipulation, interactive perception, grasping, manipulation, soft material robotics, interactive machine learning, deep learning, motion generation, and the application of algorithms and concepts from robotics to computational problems in structural molecular biology. He is the president of the Robotics: Science and Systems Foundation.