Paper on End-Pose Planning on Complex Support Surfaces accepted to Humanoids 2018

Henrique Ferrolho, Wolfgang Merkt, Yiming Yang, Vladimir Ivan, and Sethu Vijayakumar. “Whole-Body End-Pose Planning for Legged Robots on Inclined Support Surfaces in Complex Environments”, Proc. IEEE Intl. Conf. on Humanoid Robots (Humanoids 2018), Beijing (2018).


Planning balanced whole-body reaching configurations is a fundamental problem in humanoid robotics on which manipulation and locomotion planners depend.
While finding valid whole-body configurations in free space and on flat terrains is relatively straightforward, the problem becomes extremely challenging when obstacle avoidance is taken into account, and when balancing on more complex terrains, such as inclined supports or steps.
Previous work using Paired Forward-Inverse Dynamic Reachability Maps demonstrated fast end-pose planning on flat terrains at different heights by decomposing the kinematic structure and leveraging combinatorics.
In this paper, we present an efficient whole-body end-pose planning framework capable of finding collision-free whole-body configurations in complex environments and on sloped support regions.
The main contributions in this paper are twofold:
(i) the integration of contact property information of support regions into both precomputation and online planning stages, including whole-body static equilibrium robustness, and
(ii) the proposal of a more informed and meaningful sampling strategy for the lower-body.
We focus on humanoid robots throughout the paper, but all the principles can be applied to legged platforms other than bipedal robots.
We demonstrate our method on the NASA Valkyrie humanoid platform with 38 degrees of freedom over inclined supports.
Analysis of the results indicates both higher success rates – greater than 95% and 80% in obstacle-free and highly cluttered environments, respectively – and shorter computation times compared to previous methods.
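At its core, testing whether a candidate whole-body configuration is statically balanced amounts to checking that the gravity-projected centre of mass lies within the support region, with the distance to the region's boundary serving as a robustness margin. The sketch below illustrates this idea for a flat, convex support polygon; it is a simplification that does not reproduce the paper's handling of inclined supports or contact properties, and all names are illustrative.

```python
import numpy as np

def point_in_polygon_margin(point, polygon):
    """Signed margin of a 2D point w.r.t. a convex polygon (CCW vertices):
    positive inside, negative outside; magnitude = distance to nearest edge."""
    margins = []
    n = len(polygon)
    for i in range(n):
        a, b = polygon[i], polygon[(i + 1) % n]
        edge = b - a
        # For a counter-clockwise polygon, this normal points inward.
        normal = np.array([-edge[1], edge[0]]) / np.linalg.norm(edge)
        margins.append(np.dot(point - a, normal))
    return min(margins)

def static_equilibrium_margin(com, support_polygon):
    """Project the CoM along gravity onto the ground plane and return its
    margin inside the support polygon (a crude robustness measure)."""
    com_xy = np.asarray(com[:2])  # gravity-aligned projection: drop z
    return point_in_polygon_margin(com_xy, np.asarray(support_polygon, dtype=float))

# Unit-square support region, CoM slightly off-centre:
poly = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(static_equilibrium_margin([0.4, 0.5, 0.9], poly))  # 0.4: distance to nearest edge
```

A planner would reject configurations with a negative margin and could prefer samples with a larger one, which is the intuition behind the equilibrium-robustness measure described above.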


to be added

Paper on Time-Indexed RRTConnect with Temporal Constraints accepted to Humanoids 2018

Yiming Yang, Wolfgang Merkt, Vladimir Ivan, and Sethu Vijayakumar. “Planning in Time-Configuration Space for Efficient Pick-and-Place in Non-Static Environments with Temporal Constraints”, Proc. IEEE Intl. Conf. on Humanoid Robots (Humanoids 2018), Beijing (2018).


This paper presents a novel sampling-based motion planning method using bidirectional search with a time-configuration space representation that is able to efficiently generate collision-free trajectories in complex and non-static environments. Our approach exploits time indexing to separate a complex problem with mixed constraints into multiple sub-problems with simpler constraints that can be solved efficiently. We further introduce a planning framework by incorporating the proposed planning method enabling efficient pick-and-place of large objects in various scenarios. Simulation as well as hardware experiments show that the method also scales from redundant robot arms to mobile manipulators and humanoids. In particular, we have demonstrated that the proposed method is able to plan collision-free motion for a humanoid robot to pick up a large object placed inside a moving storage box while walking.
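The key idea of time indexing can be illustrated with a toy example: in time-configuration space a state is a pair (t, q), a collision check queries the obstacle's position at the time the configuration is reached, and an edge is only valid if time moves forward along it. The snippet below is a minimal, hypothetical illustration of these validity checks, not the paper's planner; the moving-obstacle model and all names are made up.

```python
import math

def obstacle_position(t):
    """Hypothetical moving obstacle: a disc sliding along x at 0.1 m/s."""
    return (0.1 * t, 0.0)

def in_collision(q, t, robot_radius=0.2, obstacle_radius=0.3):
    """Check a 2D point robot at configuration q against the obstacle *at time t*.
    In time-configuration space the validity of q depends on when it is reached."""
    ox, oy = obstacle_position(t)
    return math.hypot(q[0] - ox, q[1] - oy) < robot_radius + obstacle_radius

def edge_valid(q0, t0, q1, t1, steps=10):
    """Validate a straight-line edge in time-configuration space.
    Time must strictly increase along any edge (no travelling back in time)."""
    if t1 <= t0:
        return False
    for i in range(steps + 1):
        s = i / steps
        q = tuple(a + s * (b - a) for a, b in zip(q0, q1))
        if in_collision(q, t0 + s * (t1 - t0)):
            return False
    return True

# The same configuration can be invalid now but valid later:
print(in_collision((0.0, 0.0), 0.0))   # True: obstacle sits at the origin at t=0
print(in_collision((0.0, 0.0), 10.0))  # False: obstacle has moved 1 m away by t=10
```

Because states are indexed by time, a bidirectional search over such edges never needs to reason about the obstacle trajectory globally; each sub-problem only sees the environment at a fixed instant.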


to be added

Paper on Deep RL for Valkyrie accepted to Humanoids 2018

Chuanyu Yang, Kai Yuan, Wolfgang Merkt, Taku Komura, Sethu Vijayakumar, Zhibin Li. “Deep Reinforcement Learning of Locomotion Skills for the Humanoid Valkyrie”, Proc. IEEE Intl. Conf. on Humanoid Robots (Humanoids 2018), Beijing (2018).


This paper presents a hierarchical control framework for Deep Reinforcement Learning that can learn a wide range of push-recovery and balancing behaviors, i.e., ankle, hip, foot-tilting, and stepping strategies. The policy is trained in a realistic physics simulator using the robot model, in a setup designed so that synthesized control policies can easily be transferred and deployed to real-world platforms. The advantage over traditional methods that integrate a high-level planner and feedback control is that a single coherent policy network generically produces versatile, unprogrammed balancing and recovery motions against unknown perturbations at arbitrary locations (e.g., legs, torso). Furthermore, the proposed framework allows the policy to be learned with any state-of-the-art learning algorithm. Comparing the proposed approach with other methods in the literature, we found the learned policy to perform similarly in terms of disturbance-rejection ability, with the additional benefit of generating generic and versatile behaviors.


to be added.

Paper on Warmstart Initialization for Trajectory Optimization accepted to IROS 2018

Wolfgang Merkt, Vladimir Ivan, and Sethu Vijayakumar. “Leveraging Precomputation with Problem Encoding for Warm-Starting Trajectory Optimization in Complex Environments”, Proc. IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS 2018), Madrid (2018).


Motion planning through optimization is largely based on locally improving the cost of a trajectory until an optimal solution is found. Choosing the initial trajectory has therefore a significant effect on the performance of the motion planner, especially when the cost landscape contains local minima. While multiple heuristics and approximations may be used to efficiently compute an initialization online, they are based on generic assumptions that do not always match the task at hand.
In this paper, we exploit the fact that repeated tasks are similar according to some metric. We store solutions of the problem as a library of initial seed trajectories offline and employ a problem encoding to retrieve near-optimal warm-start initializations on-the-fly.
We compare how different initialization strategies affect the global convergence and runtime of quasi-Newton and probabilistic inference solvers. Our analysis on the 38-DoF NASA Valkyrie robot shows that efficient and optimal planning in high-dimensional state spaces is possible despite the presence of globally non-smooth and discontinuous constraints, such as the ones imposed by collisions.
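The retrieval step can be pictured as a nearest-neighbour lookup in a low-dimensional encoding space: offline, solved problems are stored as (encoding, trajectory) pairs; online, the stored trajectory whose encoding is closest to the query task seeds the optimizer. The following is a deliberately simplified sketch of that idea, not the paper's actual problem encoding or library implementation; the goal-position encoding and all names are illustrative.

```python
import numpy as np

class TrajectoryLibrary:
    """Minimal sketch of warm-start retrieval: store (task-encoding,
    solution-trajectory) pairs offline; online, return the trajectory
    whose encoding is nearest to the query task's encoding."""

    def __init__(self):
        self.encodings = []     # low-dimensional problem descriptors
        self.trajectories = []  # corresponding precomputed solutions

    def add(self, encoding, trajectory):
        self.encodings.append(np.asarray(encoding, dtype=float))
        self.trajectories.append(trajectory)

    def warm_start(self, query_encoding):
        """Nearest-neighbour lookup in encoding space."""
        query = np.asarray(query_encoding, dtype=float)
        dists = [np.linalg.norm(e - query) for e in self.encodings]
        return self.trajectories[int(np.argmin(dists))]

# Toy example: encode a reaching task by its goal position alone.
lib = TrajectoryLibrary()
lib.add([0.5, 0.0, 1.0], "trajectory_A")  # solutions stored offline
lib.add([0.5, 0.8, 1.2], "trajectory_B")
print(lib.warm_start([0.5, 0.7, 1.1]))  # trajectory_B: closest stored task
```

In practice the retrieved trajectory is only a starting point; the optimizer still refines it, but a good seed keeps the solver out of poor local minima.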


@inproceedings{merkt2018warmstart,
  author={W. Merkt and V. Ivan and S. Vijayakumar},
  booktitle={IEEE IROS},
  title={Leveraging precomputation with problem encoding for warm-starting trajectory optimization in complex environments},
  year={2018},
}

EXOTica accepted as a book chapter in the Springer ROS Book Vol. 3

Ivan V., Yang Y., Merkt W., Camilleri M.P., Vijayakumar S. (2019) EXOTica: An Extensible Optimization Toolset for Prototyping and Benchmarking Motion Planning and Control. In: Koubaa A. (eds) Robot Operating System (ROS). Studies in Computational Intelligence, vol 778. Springer, Cham


In this research chapter, we will present a software toolbox called EXOTica that is aimed at rapidly prototyping and benchmarking algorithms for motion synthesis. We will first introduce the framework and describe the components that make it possible to easily define motion planning problems and implement algorithms that solve them. We will walk you through the existing problem definitions and solvers that we used in our research, and provide you with a starting point for developing your own motion planning solutions. The modular architecture of EXOTica makes it easy to extend and apply to unique problems in research and in industry. Furthermore, it allows us to run extensive benchmarks and create comparisons to support case studies and to generate results for scientific publications. We demonstrate research done using EXOTica on benchmarking sampling-based motion planning algorithms, using alternative state representations, and integrating EXOTica into a shared autonomy system. EXOTica is an open-source project implemented within ROS, and it is continuously integrated and tested with ROS Indigo and Kinetic. The source code is available at and the documentation, including tutorials and download and installation instructions, is available at


@incollection{ivan2019exotica,
  author={Ivan, Vladimir and Yang, Yiming and Merkt, Wolfgang and Camilleri, Michael P. and Vijayakumar, Sethu},
  editor={Koubaa, Anis},
  title={EXOTica: An Extensible Optimization Toolset for Prototyping and Benchmarking Motion Planning and Control},
  booktitle={Robot Operating System (ROS): The Complete Reference (Volume 3)},
  publisher={Springer International Publishing},
  year={2019},
}



HDRM: A Resolution Complete Dynamic Roadmap for Real-Time Motion Planning in Complex Scenes accepted to IEEE RA-L

Yiming Yang, Wolfgang Merkt, Vladimir Ivan, Zhibin Li, and Sethu Vijayakumar. “HDRM: A Resolution Complete Dynamic Roadmap for Real-Time Motion Planning in Complex Scenes”. IEEE Robotics and Automation Letters, 2018, In Press.

Publisher’s link – DOI: 10.1109/LRA.2017.2773669


We present the Hierarchical Dynamic Roadmap (HDRM), a novel resolution-complete motion planning algorithm for solving complex planning problems. A unique hierarchical structure is proposed for efficiently encoding the configuration-to-workspace occupation information, allowing the robot to check the collision state of tens of millions of samples on-the-fly, a number that was previously strictly limited by available memory. The hierarchical structure also significantly reduces the time for path searching, so the robot is able to find feasible motion plans in real-time in extremely constrained environments. HDRM is theoretically proven to be resolution complete, and rigorous benchmarking shows that it is robust and computationally fast compared to classical dynamic roadmap methods and other state-of-the-art planning algorithms. Experiments on the 7-degree-of-freedom KUKA LWR robotic arm, integrated with real-time perception of the environment, further validate the effectiveness of HDRM in complex environments.
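HDRM builds on the classical dynamic-roadmap idea of precomputing, for each workspace voxel, the set of configuration samples that occupy it, so that at query time the occupied voxels directly invalidate samples without any online collision checking. The toy sketch below illustrates only that underlying voxel-to-configuration mapping; the hierarchical encoding that gives HDRM its memory efficiency is not reproduced here, and all names are illustrative.

```python
from collections import defaultdict

class DynamicRoadmap:
    """Minimal sketch of the classical dynamic-roadmap idea: precompute,
    for every workspace voxel, which configuration samples sweep through it,
    then invalidate those samples at query time given the occupied voxels."""

    def __init__(self):
        self.voxel_to_configs = defaultdict(set)  # voxel index -> config ids

    def register(self, config_id, swept_voxels):
        """Offline: record the voxels occupied by the robot in this configuration."""
        for v in swept_voxels:
            self.voxel_to_configs[v].add(config_id)

    def valid_configs(self, all_config_ids, occupied_voxels):
        """Online: a configuration is valid iff none of its voxels is occupied."""
        invalid = set()
        for v in occupied_voxels:
            invalid |= self.voxel_to_configs.get(v, set())
        return set(all_config_ids) - invalid

drm = DynamicRoadmap()
drm.register(0, [(1, 1, 1), (1, 1, 2)])  # config 0 sweeps two voxels
drm.register(1, [(3, 0, 0)])             # config 1 sweeps a different voxel
print(drm.valid_configs([0, 1], occupied_voxels=[(1, 1, 2)]))  # {1}
```

The online cost is proportional to the number of occupied voxels rather than the number of samples, which is why such maps scale to very dense roadmaps once the (memory-hungry) mapping has been precomputed.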



@article{yang2018hdrm,
  author={Y. Yang and W. Merkt and V. Ivan and Z. Li and S. Vijayakumar},
  journal={IEEE Robotics and Automation Letters},
  title={HDRM: A Resolution Complete Dynamic Roadmap for Real-Time Motion Planning in Complex Scenes},
  year={2018},
  doi={10.1109/LRA.2017.2773669},
  keywords={Collision avoidance;Dynamics;Heuristic algorithms;Planning;Probabilistic logic;Real-time systems;Robots;Motion planning;collision avoidance;dynamic roadmap;realtime planning},
}

Robust Shared Autonomy Demonstrator Overview Video

The University of Leeds has released a video summary of our Robust Shared Autonomy work, which won the First Prize for Greatest Potential for Positive Impact at the Robots for Resilient Infrastructure Challenge and was previously featured on Made In Leeds TV:

ExO Summit London

On June 30, 2017 I spoke at the ExO Summit London on interfaces, the ExO internal attribute which bridges the SCALE externalities to the IDEAS internal control framework. The next ExO Summit is being held in Toronto, Canada in January 2018.


First Prize at the Robots for Resilient Infrastructure Challenge

On June 27th and 28th, 2017, we presented our work on Robust Shared Autonomy for Mobile Manipulation in Extreme Environments at the Robots for Resilient Infrastructure Competition in Leeds, United Kingdom. Our framework allows for shared-autonomy operation over limited-bandwidth wireless links and empowers the human operator to use a blend of teleoperated, punctuated, and fully autonomous behaviours. It reduces operator fatigue by automatically verifying the validity of intended motions and continuously checking the environment for dynamic changes, altering the robot’s behaviour along the way to ensure safety in shared workspaces while minimising disruptions and interruptions.

Our project and demonstration were awarded the First Prize for Greatest Potential for Positive Impact.

First person view of competition run

Video submission

Demonstrator summary

About the event

The Robots for Resilient Infrastructure Competition was an international robotics challenge event held at the University of Leeds on June 27-28, 2017. Its aim was to bring academics, industry, policy makers, and stakeholders together to explore the future use of robots in the creation, inspection, repair, and maintenance of critical infrastructure.

News and Social Media Coverage

Robust Shared Autonomy featured on Made In Leeds TV

Our Robust Shared Autonomy work and the Edinburgh Centre for Robotics was prominently featured as part of the Made In Leeds TV On The Aire news segments at 6pm and 8pm on June 28, 2017.