HDRM: A Resolution Complete Dynamic Roadmap for Real-Time Motion Planning in Complex Scenes accepted to IEEE RA-L

Yiming Yang, Wolfgang Merkt, Vladimir Ivan, Zhibin Li, and Sethu Vijayakumar. “HDRM: A Resolution Complete Dynamic Roadmap for Real-Time Motion Planning in Complex Scenes”. IEEE Robotics and Automation Letters, 2017, accepted to appear.

Publisher’s link – DOI: 10.1109/LRA.2017.2773669

Abstract

We present the Hierarchical Dynamic Roadmap (HDRM), a novel resolution complete motion planning algorithm for solving complex planning problems. A unique hierarchical structure is proposed for efficiently encoding the configuration-to-workspace occupation information that allows the robot to check the collision state of tens of millions of samples on-the-fly—the number of which was previously strictly limited by available memory. The hierarchical structure also significantly reduces the time for path searching, hence the robot is able to find feasible motion plans in real-time in extremely constrained environments. The HDRM is theoretically proven to be resolution complete, with a rigorous benchmarking showing that HDRM is robust and computationally fast, compared to classical dynamic roadmap methods and other state-of-the-art planning algorithms. Experiments on the 7 degree-of-freedom KUKA LWR robotic arm integrated with real-time perception of the environment further validate the effectiveness of HDRM in complex environments.
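
For context, the sketch below illustrates the flat configuration-to-workspace occupation bookkeeping that classical dynamic roadmaps rely on and that HDRM compresses with its hierarchical structure. It is a minimal illustration only; the class layout and names are assumptions and do not reflect the paper's implementation.

# Minimal sketch of the classical dynamic-roadmap (DRM) bookkeeping that HDRM improves on:
# offline, every workspace voxel is mapped to the roadmap vertices/edges that sweep through it;
# online, only the entries of currently occupied voxels are invalidated.
# All names and data layouts are illustrative assumptions.
from collections import defaultdict

class FlatDynamicRoadmap:
    def __init__(self, vertices, edges, swept_voxels):
        # vertices: {vertex_id: configuration}; edges: {edge_id: (vertex_id_a, vertex_id_b)}
        # swept_voxels: {vertex_or_edge_id: set of workspace voxel indices it occupies}
        self.vertices = vertices
        self.edges = edges
        self.voxel_to_items = defaultdict(set)  # inverse occupation map, built offline
        for item_id, voxels in swept_voxels.items():
            for voxel in voxels:
                self.voxel_to_items[voxel].add(item_id)

    def valid_subgraph(self, occupied_voxels):
        """Return the vertex and edge ids that remain collision-free for the
        current sensor snapshot (a set of occupied voxel indices)."""
        blocked = set()
        for voxel in occupied_voxels:
            blocked |= self.voxel_to_items.get(voxel, set())
        free_vertices = {v for v in self.vertices if v not in blocked}
        free_edges = {e for e, (a, b) in self.edges.items()
                      if e not in blocked and a in free_vertices and b in free_vertices}
        return free_vertices, free_edges

The memory needed for this flat inverse table is what limits the number of samples in classical DRMs; the hierarchical encoding in HDRM is what allows the roadmap to grow to tens of millions of samples and still be checked on-the-fly.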

 

Bibtex

@ARTICLE{yang2017hdrm,
author={Y. Yang and W. Merkt and V. Ivan and Z. Li and S. Vijayakumar},
journal={IEEE Robotics and Automation Letters},
title={HDRM: A Resolution Complete Dynamic Roadmap for Real-Time Motion Planning in Complex Scenes},
year={2017},
volume={PP},
number={99},
pages={1-1},
keywords={Collision avoidance;Dynamics;Heuristic algorithms;Planning;Probabilistic logic;Real-time systems;Robots;Collision Avoidance;Dynamic Roadmap;Motion Planning;Realtime Planning},
doi={10.1109/LRA.2017.2773669},
}

Robust Shared Autonomy Demonstrator Overview Video

The University of Leeds has released a video summary of our Robust Shared Autonomy work, which won the First Prize for Greatest Potential for Positive Impact at the Robots for Resilient Infrastructure Challenge and was previously also featured on Made In Leeds TV:


ExO Summit London

On June 30, 2017, I spoke at the ExO Summit London on interfaces, the ExO internal attribute which bridges the SCALE externalities to the IDEAS internal control framework. The next ExO Summit is being held in Toronto, Canada, in January 2018.


First Prize at the Robots for Resilient Infrastructure Challenge

On June 27th and 28th, 2017, we presented our work on Robust Shared Autonomy for Mobile Manipulation in Extreme Environments at the Robots for Resilient Infrastructure Competition in Leeds, United Kingdom. Our framework allows for shared autonomy operation over limited-bandwidth wireless links and empowers the human operator to use a blend of teleoperated, punctuated, and fully autonomous behaviours. It reduces operator fatigue by automatically verifying the validity of intended motions and continuously checking the environment for dynamic changes, altering the robot’s behaviour along the way to ensure safety in shared workspaces while minimising disruptions and interruptions.
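
As a rough, hypothetical illustration of the monitoring-and-fallback behaviour described above (not the actual implementation), such a loop could be structured as follows; all interfaces such as update_occupancy, segment_is_valid, and request_teleoperation are placeholder names.

# Hypothetical sketch of a shared-autonomy execution loop with continuous scene monitoring.
# All robot/perception/operator interfaces are placeholder assumptions.
def shared_autonomy_loop(robot, perception, operator, goal):
    while True:
        occupancy = perception.update_occupancy()
        plan = robot.plan_path(goal, occupancy)
        if plan is None:
            return operator.request_teleoperation("no valid plan found")
        for waypoint in plan:
            occupancy = perception.update_occupancy()      # continuous scene monitoring
            if operator.wants_control():                    # operator can take over at any time
                robot.stop()
                operator.teleoperate(robot)
                break                                       # re-plan, then resume autonomy
            if not robot.segment_is_valid(waypoint, occupancy):
                robot.stop()                                # dynamic change invalidated the motion
                break                                       # fall through to re-planning
            robot.execute(waypoint)
        else:
            return "goal reached"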

Our project and demonstration were awarded the First Prize for Greatest Potential for Positive Impact.

First person view of competition run

Video submission

Demonstrator summary

About the event

The Robots for Resilient Infrastructure Competition was an International Robotics Challenge Event held at the University of Leeds on June 27-28, 2017. Its aim was to bring academics, industry, policy makers, and stakeholders together to explore the future use of robots in the creation, inspection, repair, and maintenance of critical infrastructure. More information can be found here.

News and Social Media Coverage

Robust Shared Autonomy featured on Made In Leeds TV

Our Robust Shared Autonomy work and the Edinburgh Centre for Robotics were prominently featured as part of the Made In Leeds TV On The Aire news segments at 6pm and 8pm on June 28, 2017.

Paired Forward-Inverse Dynamic Reachability Map Paper accepted to the IEEE RA-L

Yiming Yang, Wolfgang Merkt, Henrique Ferrolho, Vladimir Ivan, and Sethu Vijayakumar. “Efficient Humanoid Motion Planning on Uneven Terrain Using Paired Forward-Inverse Dynamic Reachability Maps”. IEEE Robotics and Automation Letters, 2017, In Press.

Publisher’s link – DOI: 10.1109/LRA.2017.2727538

Abstract

A key prerequisite for planning manipulation together with locomotion of humanoids in complex environments is to find a valid end-pose with a feasible stance location and a full-body configuration that is balanced and collision-free. Prior work based on the Inverse Dynamic Reachability Map assumed that the feet are placed next to each other around the stance location on a horizontal plane, and the success rate was correlated with the coverage density of the sampled space, which in turn is limited by the memory required for storing the map. In this work, we present a framework that uses a Paired Forward-Inverse Dynamic Reachability Map to exploit a greater modularity of the robot’s inherent kinematic structure. The combinatorics of this novel decomposition allows greater coverage in the high dimensional configuration space while reducing the number of stored samples. This permits drawing samples from a much richer dataset to effectively plan end-poses for both single-handed and bimanual tasks on uneven terrains. This novel method was demonstrated on the 38-DoF NASA Valkyrie humanoid by utilizing and exploiting whole body redundancy for accomplishing manipulation tasks on uneven terrains while avoiding obstacles.
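
To make the pairing idea more concrete, here is a minimal sketch, under assumed data structures, of how lower-body samples from an inverse reachability map and upper-body samples from a forward reachability map could be composed into whole-body end-pose candidates; the interfaces and names are illustrative and do not correspond to the actual implementation.

# Illustrative sketch of pairing forward and inverse reachability maps for end-pose planning.
# inverse_map, forward_map, and is_valid are assumed interfaces, not the paper's code.
def candidate_end_poses(target_pose, inverse_map, forward_map, is_valid):
    """Yield (stance, lower_body_q, upper_body_q) candidates whose composition reaches target_pose."""
    # inverse_map: stance frames and lower-body configurations realising a pelvis frame
    # forward_map: upper-body configurations indexed by end-effector pose in the pelvis frame
    # is_valid:    full-body balance and collision check on the actual terrain
    for stance, lower_body_q, pelvis_in_world in inverse_map.samples_near(target_pose):
        # express the task target in the pelvis frame implied by this lower-body sample
        target_in_pelvis = pelvis_in_world.inverse() * target_pose
        for upper_body_q in forward_map.lookup(target_in_pelvis):
            if is_valid(stance, lower_body_q, upper_body_q):
                yield stance, lower_body_q, upper_body_q

The point of the decomposition is combinatorial: N stored lower-body samples and M stored upper-body samples yield up to N×M whole-body candidates, so coverage grows multiplicatively while the stored map grows only additively.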

 

Bibtex

@ARTICLE{yang2017pairedidrm,
author={Y. Yang and W. Merkt and H. Ferrolho and V. Ivan and S. Vijayakumar},
journal={IEEE Robotics and Automation Letters},
title={Efficient Humanoid Motion Planning on Uneven Terrain Using Paired Forward-Inverse Dynamic Reachability Maps},
year={2017},
volume={PP},
number={99},
pages={1-1},
keywords={Collision avoidance;Foot;Grasping;Humanoid robots;Legged locomotion;Planning;Dynamic Reachability Map;End-Pose Planning;Humanoid Robots;Motion Planning},
doi={10.1109/LRA.2017.2727538},
}

Paper on Robust Shared Autonomy accepted to IEEE CASE 2017

Our paper on Robust Shared Autonomy for Mobile Manipulation with Continuous Scene Monitoring has been accepted to appear at the 13th IEEE Conference on Automation Science and Engineering (CASE) in Xi’an, China.

Abstract

This work presents a fully integrated system for reliable grasping and manipulation using dense visual mapping, collision-free motion planning, and shared autonomy. The particular motion sequences are composed automatically based on high-level objectives provided by a human operator, with continuous scene monitoring during execution automatically detecting and adapting to dynamic changes of the environment. The system is able to automatically recover from a variety of disturbances and fall back to the operator if stuck or if it cannot otherwise guarantee safety. Furthermore, the operator can take control at any time and then resume autonomous operation. Our system can be flexibly adapted to new robotic systems, and we demonstrate our work on two real-world platforms – fixed and floating base – in shared workspace scenarios. To the best of our knowledge, this work is also the first to employ the inverse Dynamic Reachability Map for real-time, optimized mobile base positioning to maximize workspace manipulability, reducing cycle time and increasing planning and autonomy robustness.
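
As a hedged illustration of the inverse-reachability-based base positioning mentioned in the abstract, the following sketch ranks candidate base poses read from an inverse dynamic reachability map against the live occupancy map; the map interface, collision checks, and scoring function are assumptions, not the system's actual API.

# Hypothetical sketch of selecting a mobile base pose from an inverse dynamic reachability map (iDRM).
# idrm, occupancy, and manipulability_score are assumed interfaces for illustration only.
def select_base_pose(target_pose, idrm, occupancy, manipulability_score):
    best_pose, best_score = None, float("-inf")
    for base_pose, arm_configs in idrm.candidates(target_pose):
        if occupancy.base_footprint_blocked(base_pose):
            continue                                  # base would collide with the current scene
        reachable = [q for q in arm_configs
                     if not occupancy.arm_in_collision(base_pose, q)]
        if not reachable:
            continue                                  # target not reachable from this base pose
        score = max(manipulability_score(q) for q in reachable)
        if score > best_score:                        # keep the most manipulable placement
            best_pose, best_score = base_pose, score
    return best_pose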

Video

Reference

  • Wolfgang Merkt, Yiming Yang, Theodoros Stouraitis, Christopher Mower, Maurice Fallon and Sethu Vijayakumar, Robust Shared Autonomy for Mobile Manipulation with Continuous Scene Monitoring, Proc. 13th IEEE Conference on Automation Science and Engineering, Xi’an, China (2017). [Bibtex]
  • Find the paper and citation information on Edinburgh Research Explorer.

Valkyrie and Robust Shared Autonomy featured on BBC Scotland

Our work with the NASA Valkyrie and bimanual Husky robots, e.g. on motion planning for high-dimensional systems as well as robust shared autonomy, was prominently featured by BBC Scotland in news coverage about a new tranche of funding the Edinburgh Centre for Robotics received for healthcare and emergency response research.

BBC Scotland: A robotic revolution in healthcare.

Paper on Scaling Sampling-based Motion Planning accepted to IEEE ROBIO

Yiming Yang, Vladimir Ivan, Wolfgang Merkt and Sethu Vijayakumar, Scaling Sampling-based Motion Planning to Humanoid Robots, Proc. IEEE International Conf. on Robotics and Biomimetics (ROBIO 2016), Qingdao, China (2016).

Publisher’s link – DOI: 10.1109/ROBIO.2016.7866531

Abstract

Planning balanced and collision-free motion for humanoid robots is non-trivial, especially when they are operated in complex environments, such as reaching targets behind obstacles or through narrow passages. Research has been done in particular to plan such complex motion on humanoids; however, these approaches are typically restricted to particular robot platforms and environments and cannot easily be replicated or transferred. We propose a method that allows us to apply existing sampling-based algorithms directly to plan trajectories for humanoids by utilizing a customized state space representation, biased sampling strategies, and a steering function based on a robust inverse kinematics solver. Our approach requires no prior offline computation; thus, one can easily transfer the work to new robot platforms. We tested the proposed method by solving practical reaching tasks on a 38 degrees-of-freedom humanoid robot, NASA Valkyrie, showing that our method is able to generate valid motion plans that can be executed on advanced full-size humanoid robots.
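
As a rough sketch of how the ingredients named in the abstract (a customised state space, biased sampling, and an IK-based steering function) slot into a standard sampling-based planner, consider the following hypothetical RRT-style extend step; every function name here is a placeholder, not a specific planner API.

# Hypothetical RRT-style extend step with biased sampling and IK-based steering.
# tree, sample_biased, nearest, interpolate, ik_project, and is_valid are placeholders.
import random

def extend(tree, goal_q, sample_biased, nearest, interpolate, ik_project, is_valid,
           goal_bias=0.1, step=0.1):
    q_rand = goal_q if random.random() < goal_bias else sample_biased()
    q_near = nearest(tree, q_rand)
    q_step = interpolate(q_near, q_rand, step)        # bounded step towards the sample
    q_new = ik_project(q_step)                        # steering: project onto the constraint
    if q_new is not None and is_valid(q_near, q_new): # manifold (balance, joint limits) via IK
        tree.add(q_new, parent=q_near)
        return q_new
    return None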

Bibtex

@INPROCEEDINGS{yang2016scalingsbp,
author={Y. Yang and V. Ivan and W. Merkt and S. Vijayakumar},
booktitle={2016 IEEE International Conference on Robotics and Biomimetics (ROBIO)},
title={Scaling sampling-based motion planning to humanoid robots},
year={2016},
pages={1448-1454},
keywords={collision avoidance;humanoid robots;mobile robots;motion control;robot kinematics;robust control;state-space methods;38 degrees-of-freedom humanoid robot;NASA Valkyrie;balanced motion;biased sampling;collision-free motion;humanoid robots;robust inverse kinematics solver;sampling-based algorithms;scaling sampling-based motion planning;state space representation;steering function;trajectory planning;Collision avoidance;Humanoid robots;Kinematics;Manifolds;Planning;Robot kinematics},
doi={10.1109/ROBIO.2016.7866531},
month={Dec},}

Tonight: Debate on How Robotics Will Reshape Our Lives

We are fully sold out for tonight’s debate, which I am co-organising in association with the British Science Association’s Scotland branch and which is hosted by the Edinburgh Centre for Robotics [official announcement].

More information and the programme can be found here.

Press Coverage: