Statistical Machine Learning and Motor Control Group at the University of Edinburgh

AIUK Fringe: Innovative Robotics for Tomorrow's Assisted Living 11/03/2024

🤖Innovative Robotics for Tomorrow’s Assisted Living

🕒 10am and 2pm
📅 Friday 29th March 2024
📍 Bayes Centre, University of Edinburgh, 47 Potterow EH8 9BT
🎟 Book your tickets below

We are offering two interactive sessions providing hands-on experience of our advanced robotic platforms. We aim to inspire the next generation of scientists and engineers while providing in-depth insights into the world of assistive robotics.

This event is part of the AI UK Fringe 2024, a series of events exploring key topics around data science and AI.

AI UK is the UK’s showcase of data science and AI from the Alan Turing Institute.


Bayes Centre Tour: Meet the Robots - Edinburgh Science 20/02/2024

📢We are excited to be part of the Edinburgh Science Festival for another year! Join us on Friday 12 April and experience the cutting edge of robotics - whether you're a technology enthusiast or just curious about the future of robotics, this is an event you won't want to miss 🤖

Booking required and spaces going fast - sign up now to secure your place!


13/11/2023

Congratulations to Henrique Ferrolho, Vladimir Ivan, Wolfgang Merkt, Ioannis Havoutis and Sethu Vijayakumar on their paper 'RoLoMa: Robust Loco-Manipulation for Quadruped Robots with Arms', published in Autonomous Robots.

Abstract:
Deployment of robotic systems in the real world requires a certain level of robustness in order to deal with uncertainty factors, such as mismatches in the dynamics model, noise in sensor readings, and communication delays. Some approaches tackle these issues reactively at the control stage. However, regardless of the controller, online motion execution can only be as robust as the system capabilities allow at any given state. This is why it is important to have good motion plans to begin with, where robustness is considered proactively. To this end, we propose a metric (derived from first principles) for representing robustness against external disturbances. We then use this metric within our trajectory optimization framework for solving complex loco-manipulation tasks. Through our experiments, we show that trajectories generated using our approach can resist a greater range of forces originating from any possible direction. By using our method, we can compute trajectories that solve tasks as effectively as before, with the added benefit of being able to counteract stronger disturbances in worst-case scenarios.
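For readers curious how such a robustness score might plug into a planner, here is a minimal Python sketch of the general pattern only, not the paper's actual metric: rate a configuration by the worst-case disturbance it can reject over sampled force directions, then trade that margin off against the task cost. The `capacity_fn` helper is hypothetical.

```python
import numpy as np

def robustness_margin(capacity_fn, n_dirs=64, seed=0):
    """Toy robustness score: the worst-case disturbance magnitude the
    system can reject, over uniformly sampled 3D force directions.
    capacity_fn(d) is a hypothetical helper returning the largest force
    along unit direction d that the system can counteract."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return min(capacity_fn(d) for d in dirs)

# Example: a system that resists 100 N laterally but only 40 N vertically.
margin = robustness_margin(lambda d: 100.0 - 60.0 * abs(d[2]))
print(margin)  # close to 40 N: the weakest (vertical) direction dominates
```

A trajectory optimizer would then maximise this margin alongside the task objective, e.g. total_cost = task_cost - w * margin.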



Open access paper: https://bit.ly/49b4nQq

Supplementary video playlist: https://www.youtube.com/playlist?list=PL9eBsHlJ9CeTNdQCfx3nFFDApQpNRVEB2

25/08/2023

Introducing the paper recently published in Mechatronics by Ke Wang, Guiyang Xin, Songyan Xin, Michael Mistry, Sethu Vijayakumar and Petar Kormushev:

'A unified model with inertia shaping for highly dynamic jumps of legged robots'

Abstract:
To achieve highly dynamic jumps with legged robots, it is essential to control the rotational dynamics of the robot. In this paper, we aim to improve the jumping performance by proposing a unified model for planning highly dynamic jumps that can approximately model the centroidal inertia. This model abstracts the robot as a single rigid body for the base and point masses for the legs. The model is called the Lump Leg Single Rigid Body Model (LL-SRBM) and can be used to plan motions for both bipedal and quadrupedal robots. By taking the effects of leg dynamics into account, LL-SRBM provides a computationally efficient way for the motion planner to change the centroidal inertia of the robot with various leg configurations. Concurrently, we propose a novel contact detection method by using the norm of the average spatial velocity. After the contact is detected, the controller is switched to force control to achieve a soft landing. Twisting jump and forward jump experiments on the bipedal robot SLIDER and quadrupedal robot ANYmal demonstrate the improved jump performance by actively changing the centroidal inertia. These experiments also show the generalization and the robustness of the integrated planning and control framework.
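The contact-detection idea lends itself to a short sketch. The following Python fragment is illustrative only, not the paper's implementation; the window length and threshold are made-up values.

```python
import numpy as np

def touchdown_detected(spatial_vels, window=10, threshold=0.05):
    """Sketch of the contact-detection idea: average the last `window`
    6D spatial velocities (angular + linear) and flag contact when the
    norm of that average drops below a threshold. Both parameter values
    are illustrative, not the paper's tuned ones."""
    recent = np.asarray(spatial_vels)[-window:]
    return np.linalg.norm(recent.mean(axis=0)) < threshold

# Once touchdown is detected, the controller switches from trajectory
# tracking to force control for a soft landing, as in the paper.
```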

Full details: https://bit.ly/3qOmtGU

A Behavioural Transformer for Effective Collaboration between a Robot and a Non-stationary Human 22/08/2023

Introducing our recently accepted paper at RO-MAN 2023 by Ruaridh Mon-Williams, Theodoros Stouraitis and Sethu Vijayakumar.
'A Behavioural Transformer for Effective Collaboration between a Robot and a Non-stationary Human'

Abstract:
A key challenge in human-robot collaboration is the non-stationarity created by humans due to changes in their behaviour. This alters environmental transitions and hinders human-robot collaboration. We propose a principled meta-learning framework to explore how robots could better predict human behaviour, and thereby deal with issues of non-stationarity. On the basis of this framework, we developed Behaviour-Transform (BeTrans). BeTrans is a conditional transformer that enables a robot agent to adapt quickly to new human agents with non-stationary behaviours, due to its notable performance with sequential data. We trained BeTrans on simulated human agents with different systematic biases in collaborative settings. We used an original customisable environment to show that BeTrans effectively collaborates with simulated human agents and adapts faster to non-stationary simulated human agents than SOTA techniques.
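As a rough illustration of the conditioning idea only (not the authors' BeTrans architecture), a policy can encode the recent history of observation-action pairs with a transformer and act on the resulting context, so its output adapts to the partner's behaviour:

```python
import torch
import torch.nn as nn

class BehaviourConditionedPolicy(nn.Module):
    """Minimal sketch: a transformer encodes the interaction history and
    the policy acts on the latest context token, letting the action
    depend on the partner's inferred (possibly non-stationary) behaviour."""
    def __init__(self, obs_dim, act_dim, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(obs_dim + act_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, act_dim)

    def forward(self, history):          # history: (B, T, obs_dim + act_dim)
        context = self.encoder(self.embed(history))
        return self.head(context[:, -1]) # action from the latest context

policy = BehaviourConditionedPolicy(obs_dim=8, act_dim=2)
actions = policy(torch.randn(1, 20, 10))  # 20 steps of (obs, act) history
```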

Pdf: https://bit.ly/3KOvvKJ


Inverse-Dynamics MPC via Nullspace Resolution 17/08/2023

Congratulations to Carlos Mastalli, Saroj Prasad Chhatoi, Thomas Corbères, Steve Tonneau and Sethu Vijayakumar on their paper 'Inverse-Dynamics MPC via Nullspace Resolution', published in IEEE Transactions on Robotics (T-RO).

Abstract:
Optimal control (OC) using inverse dynamics provides numerical benefits such as coarse optimization, cheaper computation of derivatives, and a high convergence rate. However, to take advantage of these benefits in model predictive control (MPC) for legged robots, it is crucial to efficiently handle its large number of equality constraints. To accomplish this, we first (i) propose a novel approach to handling equality constraints based on nullspace parametrization. Our approach appropriately balances optimality, and both dynamics and equality-constraint feasibility, which increases the basin of attraction to high-quality local minima. To do so, we (ii) modify our feasibility-driven search by incorporating a merit function. Furthermore, we introduce (iii) a condensed formulation of inverse dynamics that considers arbitrary actuator models. We also propose (iv) a novel MPC based on inverse dynamics within a perceptive locomotion framework. Finally, we present (v) a theoretical comparison of optimal control with forward and inverse dynamics and evaluate both numerically. Our approach enables the first application of inverse-dynamics MPC on hardware, resulting in state-of-the-art dynamic climbing on the ANYmal robot. We benchmark it over a wide range of robotics problems and generate agile and complex maneuvers. We show the computational reduction of our nullspace resolution and condensed formulation (up to 47.3%). We provide evidence of the benefits of our approach by solving coarse optimization problems with a high convergence rate (up to 10 Hz of discretization). Our algorithm is publicly available inside Crocoddyl.
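The nullspace parametrization at the heart of (i) is easiest to see on a toy equality-constrained problem: any feasible point can be written as a particular solution plus a combination of nullspace basis vectors, turning the constrained search into an unconstrained one. A small NumPy sketch of this general trick (not the paper's solver):

```python
import numpy as np
from scipy.linalg import null_space

# Equality constraints A x = b: any feasible x is x_p + Z v, where Z
# spans null(A), so we can optimise freely over v.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0])

x_p = np.linalg.lstsq(A, b, rcond=None)[0]   # particular solution
Z = null_space(A)                            # basis of the nullspace

# Toy objective: min ||x - x_ref||^2 subject to A x = b.
x_ref = np.array([0.0, 0.0, 3.0])
v = np.linalg.lstsq(Z, x_ref - x_p, rcond=None)[0]
x = x_p + Z @ v

assert np.allclose(A @ x, b)  # constraints hold by construction
```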

Full details: https://bit.ly/3KIx3pz


Nonprehensile Planar Manipulation through Reinforcement Learning with Multimodal Categorical Exploration 16/08/2023

Introducing our recently accepted paper, 'Nonprehensile Planar Manipulation through Reinforcement Learning with Multimodal Categorical Exploration'.

Congratulations to Juan Del Aguila Ferrandis, João Moura and Sethu Vijayakumar

Abstract:
Developing robot controllers capable of achieving dexterous nonprehensile manipulation, such as pushing an object on a table, is challenging. The underactuated and hybrid-dynamics nature of the problem, further complicated by the uncertainty resulting from the frictional interactions, requires sophisticated control behaviors. Reinforcement Learning (RL) is a powerful framework for developing such robot controllers. However, previous RL literature addressing the nonprehensile pushing task achieves low accuracy, non-smooth trajectories, and only simple motions, i.e. without rotation of the manipulated object. We conjecture that previously used unimodal exploration strategies fail to capture the inherent hybrid-dynamics of the task, arising from the different possible contact interaction modes between the robot and the object, such as sticking, sliding, and separation. In this work, we propose a multimodal exploration approach through categorical distributions, which enables us to train planar pushing RL policies for arbitrary starting and target object poses, i.e. positions and orientations, and with improved accuracy. We show that the learned policies are robust to external disturbances and observation noise, and scale to tasks with multiple pushers. Furthermore, we validate the transferability of the learned policies, trained entirely in simulation, to a physical robot hardware using the KUKA iiwa robot arm.
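The exploration idea can be sketched compactly: discretising each action dimension and sampling from per-dimension categorical distributions lets the policy put probability mass on several contact modes at once, which a single Gaussian cannot. A minimal PyTorch sketch with an illustrative bin count and action range (not the authors' code):

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

class CategoricalPolicyHead(nn.Module):
    """Per-dimension categorical action head: can be multimodal,
    unlike a unimodal Gaussian exploration distribution."""
    def __init__(self, feat_dim, act_dims=2, n_bins=11):
        super().__init__()
        self.act_dims, self.n_bins = act_dims, n_bins
        self.logits = nn.Linear(feat_dim, act_dims * n_bins)
        # bin centres over a normalised action range [-1, 1] (illustrative)
        self.register_buffer("centres", torch.linspace(-1.0, 1.0, n_bins))

    def forward(self, features):                  # features: (B, feat_dim)
        logits = self.logits(features).view(-1, self.act_dims, self.n_bins)
        dist = Categorical(logits=logits)         # one categorical per dim
        idx = dist.sample()                       # (B, act_dims) bin indices
        action = self.centres[idx]                # bins -> continuous values
        return action, dist.log_prob(idx).sum(-1)

head = CategoricalPolicyHead(feat_dim=32)
action, logp = head(torch.randn(4, 32))           # batch of 4 observations
```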

Pdf: https://bit.ly/3qxCCAd

Supported by EU project HARMONY and Kawada Robotics



https://youtu.be/vTdva1mgrk4


12/07/2023

Congratulations to Ran Long for successfully passing his PhD viva on 10th July 2023 at the University of Edinburgh

His thesis is entitled 'Robust SLAM and motion segmentation under long-term dynamic large occlusions'.

Abstract: Visual sensors are key to robot perception, which can not only help robot localisation but also enable robots to interact with the environment. However, in new environments, robots can fail to distinguish the static and dynamic components in the visual input. Consequently, robots are unable to track objects or localise themselves. Methods often require precise robot proprioception to compensate for camera movement and separate the static background from the visual input. However, robot proprioception, such as inertial measurement unit (IMU) or wheel odometry, usually faces the problem of drift accumulation.

State-of-the-art methods demonstrate promising performance but either (1) require semantic segmentation, which is inaccessible in unknown environments, or (2) treat dynamic components as outliers, which is infeasible when dynamic objects occupy a large proportion of the visual input.

This research work systematically unifies the camera and multi-object tracking problems in indoor environments by proposing a multi-motion tracking system, enabling robots to differentiate the static and dynamic components of the visual input using an understanding of their own movements and actions. Detailed evaluation in both simulation environments and on robotic platforms suggests that the proposed method outperforms state-of-the-art dynamic SLAM methods when the majority of the camera view is occluded by multiple unmodeled objects over a long period of time.

Thanks to Ran's examiners:
Dr. Hakan Bilen (The University of Edinburgh, internal)
Prof. Andrew Davison (Imperial College London, external)

30/06/2023

We are very proud to introduce the recent Royal Society Open Science publication 'Adaptive assistive robotics: a framework for triadic collaboration between humans and robots'.

Congratulations to the authors Daniel Gordon, Andreas Christou, Theodoros Stouraitis, Michael Gienger and Sethu Vijayakumar

Summary: Robots, exoskeletons, and other assistive technologies have huge potential to help society in domains ranging from factory work to healthcare. However, safe and effective control of robots in these scenarios is complex, especially when it involves close interactions with humans. In this paper, we propose a framework for optimising robot behaviour in triadic collaboration scenarios comprising a mix of humans, robots and other technological agents. Initial results from our framework are promising, indicating the potential to significantly improve outcome measures for humans in robot-assisted tasks.

Further information and a link to the pdf can be found here: https://royalsocietypublishing.org/doi/10.1098/rsos.221617

22/06/2023

Towards Reliable Machine Learning

Masashi Sugiyama, Director of the RIKEN Center for Advanced Intelligence Project (AIP) and Professor at the University of Tokyo, will deliver a Distinguished Lecture aimed at university-affiliated audiences on Monday, 26 June.

The lecture will be followed by a drinks reception. To attend, Eventbrite registration is required.

Event details: https://edin.ac/3qXp68X

16/06/2023

Congratulations to Russell Buchanan for successfully passing his DPhil viva on 9th June 2023. Russell completed his DPhil at the University of Oxford and joined the University of Edinburgh as a PDRA in April.

His thesis is entitled 'Inertial Learning and Haptics for Legged Robot State Estimation in Visually Challenging Environments'.

Summary:
This thesis focuses on addressing the challenges faced by legged robots in terms of accurate state estimation, particularly when visual sensors fail in visually challenging conditions like darkness or smoke. Legged robots have great potential for automating dangerous or dirty tasks, as they can navigate through difficult terrains such as stairs or mud. However, the lack of robust localisation has hindered their widespread deployment.

First, haptic sensors are used to compare geometric information with a prior map, enabling a robot to localise without vision, much like a blindfolded person in their own bedroom who can reach out, touch their furniture, and immediately localise. Next, haptic sensors are used to classify different terrain types (like grass or concrete) as additional information for localisation. Finally, new techniques in deep learning for inertial sensors are used to improve state estimation.

The deployment of legged robots has the potential to benefit society by automating dangerous, dull, or dirty jobs, as well as assisting first responders in emergency situations. This thesis contributes to the field by addressing the challenges of state estimation and paving the way for the real-world deployment of legged robots in various applications.

Examiners:
Professor Niki Trigoni (University of Oxford, Internal)
Professor Dimitrios Kanoulas (University College London, External)

14/06/2023

Congratulations to Edinburgh Centre for Robotics student Traiko Dinev for successfully passing his PhD viva on Monday 5th June.

Title: Concurrent Design and Motion Planning in Robotics using Differentiable Optimal Control

Short Lay Summary:
Designing efficient robots is a challenging task. A robot's motion capabilities are determined by its design, and vice versa: which design is best depends on the task we need the robot to execute.

Concurrent design is the process of finding the best motions and robot designs together. It enables engineers to discover more efficient designs and guides the design process.

In this thesis we present new approaches to concurrent design that are aimed at making the process interactive. The main barrier to interactivity is speed of computation. We propose an approach based on taking derivative information from the motion planner. Our approach leverages the fast computation of motions made available by modern motion planners to enable faster co-design tools.

We demonstrate this by building an interactive tool for co-design of quadruped robots. Our methods achieve similar performance to state-of-the-art in terms of optimality whilst being fast enough to enable interactivity.
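A toy version of this bilevel idea, with a hypothetical stand-in for the planner's cost and finite differences in place of the analytic derivatives the thesis obtains from differentiable optimal control:

```python
def optimal_motion_cost(design):
    """Hypothetical stand-in for a motion planner: the cost of the best
    motion achievable with this design (e.g. a leg-length parameter)."""
    return (design - 1.3) ** 2 + 0.1 * abs(design)

# Concurrent design: descend on the design parameter using the
# derivative of the planner's optimal cost with respect to the design.
design, lr, eps = 0.5, 0.2, 1e-5
for _ in range(100):
    grad = (optimal_motion_cost(design + eps)
            - optimal_motion_cost(design - eps)) / (2 * eps)
    design -= lr * grad
print(f"optimised design parameter: {design:.3f}")
```

Fast, differentiable inner solves are what make such an outer loop quick enough for an interactive design tool.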

Thanks to Traiko's examiners:
Dr. Nicolas Mansard (LAAS-CNRS, External)
Dr. Michael Mistry (University of Edinburgh, Internal)

12/06/2023

Congratulations to Jiayi Wang for successfully passing his viva on Monday 5th June 2023.

Title: Online Receding Horizon Planning of Multi-Contact Locomotion

Short lay summary:
Legged robots such as humanoids and quadrupeds can overcome terrain irregularities by using multiple contacts between their limbs and the environment. Such capability makes them particularly useful for tasks taking place in unstructured environments, e.g., exploring disaster zones or inspecting construction sites that are hazardous and dangerous for humans.

Nevertheless, achieving reliable legged locomotion in the real world is still a challenging task. One of the issues is that when operating in the real world, legged robots can encounter unexpected disturbances, such as dynamic changes in the environment or state deviations induced by external force perturbations. These unexpected events can cause the pre-planned motion to become invalid, and continuing to execute the current motion will cause the robot to fall.

To enable reliable operation in response to unexpected disturbances, legged robots require the capability to (re)plan their motions online. Unfortunately, due to the combinatorial nature of contact planning and the high dimensionality and non-convexity of the robot dynamics, computing multi-contact locomotion plans is often time-consuming.

In this thesis, we develop novel methods to accelerate the computation speed of multi-contact locomotion planning. The core idea of our methods is to find simplifications to the original planning problem. To this end, we propose to introduce model simplifications along the planning horizon. Furthermore, we also explore the usage of machine learning techniques to encode domain knowledge that can be used to bootstrap the computation. We evaluate the computation performance of our methods through rigorous simulation studies. Owing to the computation advantage of our methods, we demonstrate online multi-contact locomotion (re)-planning in real-world robot experiments.
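The receding-horizon pattern underlying the thesis can be summarised in a short generic skeleton (illustrative only, not the thesis implementation):

```python
def receding_horizon_control(plan, execute, get_state,
                             horizon=1.0, replan_dt=0.1, steps=100):
    """Generic receding-horizon (re)planning loop. Plan over a short
    horizon, execute only the first chunk, then replan from the newly
    measured state so that disturbances (pushes, terrain changes) are
    absorbed at the next replanning step."""
    for _ in range(steps):
        state = get_state()                # measured state, disturbances in
        trajectory = plan(state, horizon)  # fast solve, simplified model
        execute(trajectory, replan_dt)     # run only the first portion
```

The thesis's contribution is making the `plan` step fast enough, via model simplifications along the horizon and learned warm starts, for this loop to run online on real robots.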

Sethu Vijayakumar 29/05/2023

Professor Sethu Vijayakumar will be delivering one of the prestigious ICRA 2023 keynotes, entitled 'From Automation to Autonomy: Machine Learning for Next-generation Robotics', on Thursday 1st June at 5pm in the Main ICC Auditorium.

The new generation of robots works much more closely with humans and other robots, and interacts significantly with the surrounding environment. As a result, the key paradigms are shifting from isolated decision-making systems to shared control, with significant autonomy devolved to the robot platform and end-users in the loop making only high-level decisions.

This talk will briefly introduce powerful machine learning technologies, including robust multi-modal sensing, shared representations, scalable real-time learning and adaptation, and compliant actuation, that are enabling us to reap the benefits of increased autonomy while still feeling securely in control.
This also raises some fundamental questions: while the robots are ready to share control, what is the optimal trade-off between autonomy and control that we are comfortable with?
Domains where this debate is relevant include deployment of robots in extreme environments, self-driving cars, asset inspection, repair & maintenance, factories of the future and assisted living technologies including exoskeletons and prosthetics to list a few.




OpTaS: An Optimization-based Task Specification Library for Trajectory Optimization 26/05/2023

Check out our next accepted paper!

Authors: Christopher Mower, João Moura, Nazanin Zamani Behabadi, Sethu Vijayakumar, Tom Vercauteren and Christos Bergeles

Title: OpTaS: An Optimization-based Task Specification Library for Trajectory Optimization and Model Predictive Control

Abstract: This paper presents OpTaS, a task specification Python library for Trajectory Optimization (TO) and Model Predictive Control (MPC) in robotics. Both TO and MPC are increasingly receiving interest in optimal control and in particular handling dynamic environments. While a flurry of software libraries exists to handle such problems, they either provide interfaces that are limited to a specific problem formulation (e.g. TracIK, CHOMP), or are large and statically specify the problem in configuration files (e.g. EXOTica, eTaSL). OpTaS, on the other hand, allows a user to specify custom nonlinear constrained problem formulations in a single Python script, allowing the controller parameters to be modified during execution. The library provides interfaces to several open source and commercial solvers (e.g. IPOPT, SNOPT, KNITRO, SciPy) to facilitate integration with established workflows in robotics. Further benefits of OpTaS are highlighted through a thorough comparison with common libraries. An additional key advantage of OpTaS is the ability to define optimal control tasks in the joint-space, task-space, or indeed simultaneously.
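For readers unfamiliar with task-specification libraries, the following generic SciPy fragment sketches the kind of nonlinear program such tools construct under the hood. It deliberately does not use OpTaS's actual API (see the links below for that); the goal configuration and weights are made up.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative only -- NOT OpTaS's API: a tiny joint-space trajectory
# optimization with a goal-reaching cost, a smoothness penalty, and
# joint limits, i.e. the kind of NLP a task-specification library builds.
n_joints, T = 3, 5
q_goal = np.array([0.5, -0.2, 0.8])           # hypothetical goal configuration

def cost(x):
    q = x.reshape(T, n_joints)                # decision variables: T waypoints
    smooth = np.sum(np.diff(q, axis=0) ** 2)  # penalise jerky motion
    return np.sum((q[-1] - q_goal) ** 2) + 0.1 * smooth

res = minimize(cost, x0=np.zeros(T * n_joints),
               bounds=[(-np.pi, np.pi)] * (T * n_joints))  # joint limits
print(res.x.reshape(T, n_joints)[-1])         # final configuration ~= q_goal
```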

This work is supported by The Alan Turing Institute, EU project HARMONY and Kawada Robotics Corporation.

Pdf: https://bit.ly/429JZdO

Further info: https://lnkd.in/ewX89hVb

https://lnkd.in/eBpWSXnt


23/05/2023

Congratulations to Daniel Gordon, Andreas Christou, Theodoros Stouraitis, Michael Gienger and Sethu Vijayakumar on their recently accepted paper 'Learning Personalised Human Sit-to-Stand Motion Strategies via Inverse Musculoskeletal Optimal Control'.

Abstract:
Physically assistive robots and exoskeletons have great potential to help humans with a wide variety of collaborative tasks. However, a challenging aspect of the control of such devices is to accurately model or predict human behaviour, which can be highly individual and personalised. In this work, we implement a framework for learning subject-specific models of underlying human motion strategies using inverse musculoskeletal optimal control. We apply this framework to a specific motion task: the sit-to-stand transition. By collecting sit-to-stand data from 4 subjects with and without perturbations, we show that humans modulate their sit-to-stand strategy in the presence of instability, and learn the corresponding models of these strategies. In the future, the personalised motion strategies resulting from this framework could be used to inform the design of real-time assistance strategies for human-robot collaboration problems.
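The inverse optimal control recipe can be illustrated with a deliberately tiny toy (not the paper's musculoskeletal model): an outer optimiser searches for cost weights whose predicted optimal motion best reproduces the observed one.

```python
import numpy as np
from scipy.optimize import minimize

def forward_oc(weights):
    """Hypothetical stand-in for the musculoskeletal optimal-control
    solve: returns the 'motion' that is optimal under the given cost
    weights (here a trivial closed form, purely for illustration)."""
    effort_w, stability_w = weights
    return np.array([effort_w / (effort_w + stability_w)])

observed_motion = np.array([0.3])            # made-up subject data

# Inverse optimal control: find weights whose optimal motion matches data.
fit = minimize(lambda w: np.sum((forward_oc(w) - observed_motion) ** 2),
               x0=[1.0, 1.0], bounds=[(1e-3, 10.0)] * 2)
print("personalised cost weights:", fit.x)
```

The fitted weights then act as a subject-specific model of the underlying motion strategy.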

Supported by: The Alan Turing Institute and Honda Research Institute Europe GmbH

Further information on all our accepted papers: https://lnkd.in/ewX89hVb

Pdf: https://bit.ly/43gIMTH


Structured Motion Generation with Predictive Learning: Proposing Subgoal for Long-Horizon Manipulation 17/05/2023

Introducing the second of our accepted papers: 'Structured Motion Generation with Predictive Learning: Proposing Subgoal for Long-Horizon Manipulation'.

Congratulations to Namiko Saito, João Moura, Tetsuya Ogata, Marina Aoyama, Shingo Murata, Shigeki Sugano and Sethu Vijayakumar

Abstract:
For assisting humans in their daily lives, robots need to perform long-horizon tasks, such as tidying up a room or preparing a meal. One effective strategy for handling a long-horizon task is to break it down into short-horizon subgoals that the robot can execute sequentially. In this paper, we propose extending a predictive learning model using deep neural networks (DNN) with a Subgoal Proposal Module (SPM), with the goal of making such tasks realizable. We evaluate our proposed model in a case study of a long-horizon task, consisting of cutting and arranging a pizza. This task requires the robot to consider: (1) the order of the subtasks, (2) multiple subtask selection, (3) coordination of dual-arm, and (4) variations within a subtask. The results confirm that the model is able to generalize motion generation to unseen tools and objects arrangement combinations. Furthermore, it significantly reduces the prediction error of the generated motions compared to without the proposed SPM. Finally, we validate the generated motions on the dual-arm robot Nextage Open.

Supported by EU project HARMONY, The Alan Turing Institute, Waseda University & Kawada Robotics Corporation

Pdf: https://bit.ly/45bysO7

Further information: https://lnkd.in/ewX89hVb

https://www.youtube.com/watch?v=3hYS2knRm5o&list=PLW-NGNcgsQQjsGbntGBanhhvTeDrYI4LD&index=75


Topology-Based MPC for Automatic Footstep Placement and Contact Surface Selection 11/05/2023

Highlighting the first of five recently accepted SLMC papers.

Congratulations to Jaehyun Shim, Carlos Mastalli, Thomas Corbères, Steve Tonneau, Vladimir Ivan and Sethu Vijayakumar

Topology-Based MPC for Automatic Footstep Placement and Contact Surface Selection

Abstract:
State-of-the-art approaches to footstep planning assume reduced-order dynamics when solving the combinatorial problem of selecting contact surfaces in real time. However, in exchange for computational efficiency, these approaches ignore limb dynamics and joint torque limits. In this work, we address these limitations by presenting a topology-based approach that enables model predictive control (MPC) to simultaneously plan full-body motions, torque commands, contact surfaces, and footstep placements in real time. To determine if a robot’s foot is inside a polygon, we borrow the winding number concept from topology. Specifically, we use winding number and electric potential to create a contact-surface penalty function that forms a harmonic field. Using our topology-based penalty function, MPC can then select a contact surface from all candidate surfaces in the vicinity and determine footstep placements within it. We highlight the benefits of our approach by showing the impact of considering full-body dynamics, which includes joint torque limits and limb dynamics, in the selection of footstep placements and contact surfaces. Additionally, we demonstrate the feasibility of deploying our topology-based approach in an MPC scheme through a series of experimental and simulation trials.
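The winding number trick is simple enough to show directly. Below is a minimal NumPy implementation of the topology concept the paper borrows, a point-in-polygon test via summed turn angles; it is not the paper's harmonic-field penalty function.

```python
import numpy as np

def winding_number(point, polygon):
    """Winding number of a closed 2D polygon around `point`: nonzero
    means the point (here, a candidate foot position) lies inside the
    contact surface. `polygon` is an (N, 2) array of ordered vertices."""
    p = np.asarray(polygon, dtype=float) - np.asarray(point, dtype=float)
    angles = np.arctan2(p[:, 1], p[:, 0])
    d = np.diff(np.append(angles, angles[0]))  # turn angle per edge
    d = (d + np.pi) % (2 * np.pi) - np.pi      # wrap to (-pi, pi]
    return int(round(d.sum() / (2 * np.pi)))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(winding_number((0.5, 0.5), square))  # 1 -> inside the surface
print(winding_number((2.0, 0.5), square))  # 0 -> outside
```

Turning this discrete test into a smooth penalty (the paper combines winding number with electric potential to form a harmonic field) is what lets MPC optimise over contact surfaces directly.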

Supported by: EU project MEMMO and The Alan Turing Institute

Pdf: https://bit.ly/3nPkiBe

https://www.youtube.com/watch?v=uweesAj5_x0&list=PLW-NGNcgsQQjsGbntGBanhhvTeDrYI4LD&index=75


Photos from Statistical Machine Learning and Motor Control Group's post 21/04/2023

What an incredible day! Thank you to everyone who came to our 'Bayes Centre Tour: Meet The Robots' event as part of the Edinburgh Science Festival.

With over 120 visitors, adults and children were given the opportunity to get up close and personal with our robots, watch demonstrations from our robotic platforms Talos and EVA, try operating a dual-arm telepresence platform with a haptic device, and join a competition to win a prize with our Sphero maze.

Thank you so much to our team for all the hard work in putting together such a successful event!

Bayes Centre Tour: Meet the Robots - Edinburgh Science 16/02/2023

Experience the cutting edge of robotics at our upcoming Open Lab Day as part of the Edinburgh Science Festival on Friday 14 April! Get up close and personal with our highly advanced humanoid robot Talos and interact with our latest creation EVA – the ultimate hybrid robotic platform. Whether you are a technology enthusiast or just curious about the future of robotics, this is an event you won’t want to miss.

Open to all ages 5 and above, this is your chance to witness the magic of robotics first-hand. But don’t wait – spots are limited and booking required. Sign up now to secure your place!


25/01/2023

Working on contact-rich manipulation?

We are excited to announce the workshop Embracing Contacts, taking place in London on the 2nd of June.

Join us and be ready for super engaging talks and discussions with amazing speakers from both industry and academia.

Submit your latest results on mechanical design, perception, control, planning, and learning for contact-rich manipulation here: http://bit.ly/3R72ETQ

This workshop is supported by The Alan Turing Institute, Honda Research Institute Europe GmbH and HARMONY EU project.



Further info: https://bit.ly/3WEg7U6


Videos

Dr Vladimir Ivan and Professor Sethu Vijayakumar as part of the AvatarX team led by Touchlab, recently qualified for the...
Reconfigurable Smart Factory project with Nextage Open from Kawada Robotics
Online Optimal Impedance Planning for Legged Robots