Personal details

Name
Dr Niko Suenderhauf
Position(s)
Senior Lecturer
Science and Engineering Faculty,
School of Electrical Engineering & Robotics
Discipline *
Artificial Intelligence and Image Processing, Electrical and Electronic Engineering
Phone
+61 7 3138 9971
Qualifications

PhD (Chemnitz University of Technology)

* Field of Research code, Australian and New Zealand Standard Research Classification (ANZSRC), 2008

Biography

Dr Niko Suenderhauf is a Chief Investigator of the QUT Centre for Robotics, where he leads the Visual Learning and Understanding research program. Niko is also a Chief Investigator and Project Leader with the Australian Centre for Robotic Vision, and a Senior Lecturer at Queensland University of Technology (QUT) in Brisbane, Australia (a Senior Lecturer is roughly equivalent to an Associate Professor in the US system). He has been awarded a Google Faculty Research Award (2018) and an Amazon Research Award (2020).

Niko conducts research in robotic vision, at the intersection of robotics, computer vision, and machine learning. His research is motivated by enabling robots to learn to perform complex tasks that require navigation and interaction with objects, the environment, and humans. Niko’s research interests focus on scene understanding, object-based semantic SLAM, robotic learning for navigation and interaction, and the uncertainty and reliability of deep learning on robotic systems in open-world conditions.

Together with his research group and colleagues, Niko develops new approaches to object-based simultaneous localisation and mapping (SLAM), and new ways to incorporate semantics and prior knowledge into reinforcement learning. He is also very interested in questions around the reliability and robustness of machine learning for real-world applications, and leads a project on new benchmarking challenges in robotic vision.

Dr Suenderhauf is co-chair of the IEEE Robotics and Automation Society Technical Committee on Robotic Perception and regularly organises workshops at leading robotics and computer vision conferences. He is a member of the editorial board of the International Journal of Robotics Research (IJRR), and was an Associate Editor for the IEEE Robotics and Automation Letters journal (RA-L) from 2015 to 2019. Niko also served as an Associate Editor for the IEEE International Conference on Robotics and Automation (ICRA) in 2018 and 2020.

In his role as an educator at QUT, Niko enjoys teaching Introduction to Robotics (EGB339) and Mechatronics Design 3 (EGH419) to undergraduate students in the Mechatronics degree.

Niko received his PhD from Chemnitz University of Technology, Germany, in 2012. In his thesis, Niko focused on robust factor-graph-based models for robotic localisation and mapping, as well as general probabilistic estimation problems, and developed the mathematical concept of Switchable Constraints. After two years as a Research Fellow in Chemnitz, Niko joined QUT as a Research Fellow in March 2014, before being appointed to a Lecturer position in 2017.

This information has been contributed by Dr Niko Suenderhauf.

Experience

Research Project Leadership

I am a Chief Investigator of the Australian Centre for Robotic Vision. In this role, I lead the project on Robotic Vision Evaluation and Benchmarking, and am deputy project leader for the Centre’s Scene Understanding project.

Robotic Vision Evaluation and Benchmarking (2018 – Present)
Big benchmark competitions like ILSVRC or COCO have fuelled much of the progress in computer vision and deep learning in recent years. We aim to recreate this success for robotic vision.

To this end, we develop a set of new benchmark challenges for robotic vision that evaluate probabilistic object detection, scene understanding, uncertainty estimation, continuous learning for domain adaptation, continuous learning to incorporate previously unseen classes, active learning, and active vision. We combine the variety and complexity of real-world data with the flexibility of synthetic graphics and physics engines.

Scene Understanding and Semantic SLAM (2017 – Present)
Making a robot understand what it sees is one of the most fascinating goals in my current research. To this end, we develop novel methods for Semantic Mapping and Semantic SLAM by combining object detection with simultaneous localisation and mapping (SLAM) techniques. We also work on Bayesian Deep Learning for object detection, to better understand the uncertainty of a deep network’s predictions and to integrate deep learning into robotics in a probabilistic way.
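
To make the object-based mapping idea more concrete, here is a deliberately simplified Python sketch: detections (a class label plus a rough 3D position) are associated with existing object landmarks of the same class by distance gating and averaged over time. This is an illustration only; in a full Semantic SLAM system the object estimates would enter a factor graph together with the camera poses, and all names, positions, and thresholds below are assumptions.

```python
import numpy as np

# Illustrative object-level map: each landmark stores a class label,
# an estimated 3D position, and how many detections support it.
landmarks = []

def integrate_detection(label, position, gate=0.5):
    """Associate a detection with a landmark of the same class within
    `gate` metres, or create a new landmark if none is close enough."""
    position = np.asarray(position, dtype=float)
    candidates = [lm for lm in landmarks if lm["label"] == label]
    if candidates:
        nearest = min(candidates,
                      key=lambda lm: np.linalg.norm(lm["pos"] - position))
        if np.linalg.norm(nearest["pos"] - position) < gate:
            nearest["count"] += 1
            # incremental mean: pull the landmark towards the new observation
            nearest["pos"] += (position - nearest["pos"]) / nearest["count"]
            return nearest
    landmarks.append({"label": label, "pos": position, "count": 1})
    return landmarks[-1]

# Two noisy observations of the same chair, and one of a nearby table:
integrate_detection("chair", [2.0, 1.0, 0.0])
integrate_detection("chair", [2.1, 0.9, 0.0])
integrate_detection("table", [4.0, 1.0, 0.0])
print([(lm["label"], lm["pos"].round(2)) for lm in landmarks])
```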

Bayesian Deep Learning and Uncertainty for Object Detection (2017 – Present)
In order to fully integrate deep learning into robotics, it is important that deep learning systems can reliably estimate the uncertainty in their predictions. This would allow robots to treat a deep neural network like any other sensor and use established Bayesian techniques to fuse the network’s predictions with prior knowledge or other sensor measurements, or to accumulate information over time. We focus on Bayesian Deep Learning approaches for the specific use case of object detection on a robot in open-set conditions.
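
As a rough illustration of treating a detector like a probabilistic sensor, the sketch below (Python, with simulated numbers rather than a real network) summarises several stochastic forward passes for one object as a mean and covariance, and then fuses the box estimate with a Gaussian prior. Everything here, including the forward passes themselves, is a hypothetical stand-in for what a Bayesian object detector would produce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: T stochastic forward passes of a detector
# (e.g. with dropout active at test time) yield T slightly different
# bounding boxes [x1, y1, x2, y2] and class scores for the same object.
T = 20
boxes = rng.normal(loc=[120.0, 80.0, 260.0, 210.0], scale=3.0, size=(T, 4))
scores = rng.beta(8, 2, size=T)           # per-pass confidence for one class

# Summarise the samples as a probabilistic detection.
box_mean = boxes.mean(axis=0)             # expected box
box_cov = np.cov(boxes, rowvar=False)     # spatial uncertainty of the box
label_prob = scores.mean()                # expected class probability
label_std = scores.std()                  # disagreement between passes

# With a mean and covariance, the detection can be fused like any other
# Gaussian sensor measurement, here with a prior estimate from a map.
prior_mean = np.array([118.0, 82.0, 255.0, 212.0])
prior_cov = np.eye(4) * 25.0

gain = prior_cov @ np.linalg.inv(prior_cov + box_cov)   # Kalman-style gain
fused_mean = prior_mean + gain @ (box_mean - prior_mean)
fused_cov = (np.eye(4) - gain) @ prior_cov

print(f"detection: {box_mean.round(1)}, p(class) = {label_prob:.2f} (std {label_std:.2f})")
print("fused with prior:", fused_mean.round(1))
```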

Reinforcement Learning for Robot Navigation and Complex Task Execution (2017 – Present)
How can robots best learn to navigate in challenging environments and execute complex tasks, such as tidying up an apartment or assisting humans in their everyday domestic chores? Hand-crafted architectures are often based on complicated state machines that become intractable to design and maintain as task complexity grows. I am interested in developing learning-based approaches that are effective and efficient, and that scale better to complicated tasks.

Visual Place Recognition in Changing Environments (2012 – Present)
An autonomous robot that operates on our campus should be able to recognise different places when it comes back to them after some time. This is important to support reliable navigation and localisation, and therefore enables the robot to perform a useful task. The problem of visual place recognition becomes challenging if the visual appearance of these places has changed in the meantime. This usually happens due to changes in lighting conditions (think day vs. night, or early morning vs. late afternoon), shadows, different weather conditions, or even different seasons. We develop algorithms for vision-based place recognition that can deal with these changes in visual appearance.
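
The matching step underneath can be sketched in a few lines of Python. The descriptors below are random stand-ins for real image features (for example CNN embeddings); the point is that a query is matched to the map by descriptor similarity, and the margin to the runner-up gives a crude confidence. The research challenge is to learn descriptors and matching procedures for which this margin survives severe appearance change.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins: one global descriptor per mapped place.
num_places, dim = 500, 256
database = rng.normal(size=(num_places, dim))
database /= np.linalg.norm(database, axis=1, keepdims=True)

# Revisit place 42 under changed appearance, modelled here as a simple
# perturbation of its descriptor (day/night, weather, seasons, ...).
perturbation = rng.normal(size=dim)
perturbation /= np.linalg.norm(perturbation)
query = database[42] + 0.7 * perturbation
query /= np.linalg.norm(query)

# Cosine similarity against every mapped place; the best match is the
# place recognition hypothesis, and the margin to the runner-up is a
# simple indicator of how trustworthy that hypothesis is.
similarity = database @ query
best, runner_up = np.argsort(similarity)[-2:][::-1]
print(f"best match: place {best}, similarity {similarity[best]:.3f}")
print(f"margin over runner-up: {similarity[best] - similarity[runner_up]:.3f}")
```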

 

Organised Research Workshops

Dedicated workshops are a great way of getting in contact with fellow researchers from around the world who are working on similar scientific questions. Over the past years, I have been lead organiser or co-organiser of workshops at leading international conferences.

 

This information has been contributed by Dr Niko Suenderhauf.

Publications


For more publications by this staff member, visit QUT ePrints, the University's research repository.

Awards

Awards and recognition

Type
Academic Honours, Prestigious Awards or Prizes
Reference year
2020
Details
Amazon Research Award for the project "Learning Robotic Navigation and Interaction from Object-based Semantic Maps". This internationally competitive and prestigious award supports my research towards intelligent robots operating alongside humans in domestic environments with AUD $120,000.
Type
Academic Honours, Prestigious Awards or Prizes
Reference year
2018
Details
Google Faculty Research Award for the project "The Large Scale Robotic Vision Perception Challenge". This award "recognises and supports world-class faculty pursuing cutting-edge research". My proposal was selected after expert review from 1,033 proposals submitted by 360 universities in 46 countries, an acceptance rate of only 14.7%. The award of over AUD $74,000 supported my research activities in creating new robotic vision research competitions for the international community.
Type
Advisor/Consultant for Community
Reference year
2019
Details
I am one of two chairs for the International Technical Committee for Computer and Robot Vision of the Institute of Electrical and Electronics Engineers (IEEE). In this role, I oversee and steer the organisation of events and activities for the international research community alongside my co-chair, Prof Scaramuzza from ETH Zurich.
Type
Editorial Role for an Academic Journal
Reference year
2019
Details
I was invited to be a Member of the Editorial Board of the International Journal of Robotics Research (IJRR), the highest-impact journal in robotics, alongside full professors from institutions such as the University of Oxford, Stanford, MIT, and Harvard. From 2015 to 2019, I served as an Associate Editor of the IEEE Robotics and Automation Letters journal.
Type
Editorial Role for an Academic Journal
Reference year
2018
Details
Guest Editor for the Special Issue on "Deep Learning for Robotic Vision" in the leading Q1 journal International Journal of Computer Vision (IJCV)
Type
Editorial Role for an Academic Journal
Reference year
2017
Details
Coordinating Guest Editor for the Special Issue on "Deep Learning for Robotics" in the leading Q1 robotics journal, the International Journal of Robotics Research (IJRR)
Type
Academic Honours, Prestigious Awards or Prizes
Reference year
2015
Details
QUT Vice Chancellor's Performance Award
Type
Editorial Role for an Academic Journal
Reference year
2015
Details
Associate Editor for the IEEE Robotics and Automation Letters (RA-L) journal from 2015 to 2019

Research projects

Grants and projects (Category 1: Australian Competitive Grants only)

Title
ARC Centre of Excellence for Robotic Vision (ACRV)
Primary fund type
CAT 1 - Australian Competitive Grant
Project ID
CE140100016
Start year
2014
Keywords
Robotic Vision; Robotics; Computer Vision

Supervision

Current supervisions

Completed supervisions (Doctorate)