Networks and Robotics

Our goal is to investigate collaboration in decentralized networks, including human-robot systems. We are particularly interested in shared autonomy when precision is needed and both the human and the machine are learning. Applications include robotics, prostheses, and manufacturing.

Cohesive Networks


How a network gets to the goal (a consensus value) can be as important as reaching the consensus value. Current network theories achieve cohesive behaviour in the sense that each agent in the network reaches the same desired steady state.

While prior methods focus on rapidly getting to a new consensus value, maintaining cohesion during the transition between consensus values, or during tracking, remains challenging and has not been addressed. Current approaches are unable to maintain cohesion during the transition from one steady-state operation to another (or when the desired state is changing over time). This loss of cohesion during the transient can lead to loss of performance: formation keeping can be lost during rapid transitions, flexible objects being transported by multiple agents can be damaged by large deformations, and spacing in platoons of autonomous vehicles and UAVs might not be maintained.

Therefore, this effort seeks to develop theories for maintaining cohesion during transients in decentralized networks where not all agents have access to the desired behaviour.


The main contributions of this work are to address the problem of maintaining cohesion by: (i) proposing a new delayed self-reinforcement (DSR) approach; (ii) extending it for use with agents that have higher-order, heterogeneous dynamics; and (iii) developing stability conditions for the DSR-based method. An example application in the transport of flexible objects is shown below. Additional information is in the References.
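As a rough illustration of the DSR idea, the sketch below simulates first-order agents on a path graph, where only one agent has access to the desired value and each agent augments the standard neighbor-based update with a scaled copy of its own previous state change. The path-graph network, the first-order agent model, and the gains are illustrative assumptions, not the exact DSR law or tuning from the References.

    import numpy as np

    # Sketch: 1-D consensus on a path graph of N agents where only agent 0 has
    # access to the desired value.  The self-reinforcement term reuses each
    # agent's own previous state change (a momentum-like correction), so no
    # extra inter-agent communication is needed.  Gains are illustrative.
    N, dt, gamma = 10, 0.01, 1.0

    # Laplacian of a path graph, plus a pinning term on agent 0 (the informed agent).
    L = np.zeros((N, N))
    for i in range(N - 1):
        L[i, i] += 1.0; L[i + 1, i + 1] += 1.0
        L[i, i + 1] -= 1.0; L[i + 1, i] -= 1.0
    pin = np.zeros(N); pin[0] = 1.0

    def simulate(beta, target=1.0, steps=6000):
        x, x_prev = np.zeros(N), np.zeros(N)
        max_spread = 0.0
        for _ in range(steps):
            # standard neighbor-based (diffusive) update toward locally available information
            update = -L @ x - pin * (x - target)
            # delayed self-reinforcement: add a scaled copy of the agent's own previous change
            x_new = x + dt * gamma * update + beta * (x - x_prev)
            x, x_prev = x_new, x
            max_spread = max(max_spread, x.max() - x.min())
        return x, max_spread

    for b in (0.0, 0.9):                    # beta = 0 recovers the standard update
        x_final, spread = simulate(b)
        print(f"beta={b}: worst spread during transition = {spread:.3f}, "
              f"final tracking error = {np.abs(x_final - 1.0).max():.3f}")

The printed spread is a simple cohesion metric: the largest instantaneous difference between the leading and trailing agents during the transition.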

Superfluidic Non-diffusive Networks

Significance
Motivated by swarms in nature, neighbor-based alignment rules have been proposed for a variety of engineered networks, such as autonomous-vehicle and robotic networks. Nevertheless, the network response with such neighbor-based update rules tends to be diffusive and does not capture the superfluid-like, non-diffusive response observed in nature. A new accelerated-gradient-based approach is proposed in this work, which is shown to capture such superfluid-like response and leads to substantial improvement in the cohesiveness of the network’s response, without the need to change the network structure by redefining the set of neighbors. Thus, this work has the potential to better model biological networks and improve the performance of engineered networks.


Background: The effectiveness of a network's response to external stimuli depends on rapid distortion-free information transfer across the network. However, the rate of information transfer, when each agent aligns with information from its network neighbors, is limited by the update rate at which each individual can sense and process information. Moreover, such neighbor-based, diffusion-type information transfer does not predict the superfluid-like information transfer during swarming maneuvers observed in nature.

The main contribution of this effort is a novel model that uses delayed self-reinforcement, where each individual augments its neighbor-averaged information update using its own previous update, to (i) increase the information-transfer rate without requiring an increased individual update rate; and (ii) enable superfluid-like information transfer. Simulation results of example systems show substantial improvement, more than an order of magnitude increase, in the information-transfer rate, without the need to increase the update rate. Moreover, results show that the DSR approach's ability to enable superfluid-like, distortion-free information transfer results in maneuvers with smaller turn radius and improved cohesiveness.
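A similarly simplified sketch of the same self-reinforcement term in a maneuvering setting is given below: a line-abreast formation of agents moving at constant speed aligns headings with chain neighbors, only one agent is informed of a commanded 90-degree turn, and the formation stretch measures how cohesively the turn propagates. The chain of fixed neighbors and all gains are illustrative assumptions, not the swarm models from the publications.

    import numpy as np

    # Simplified 2-D maneuver: agents align headings with chain neighbors while
    # moving at constant speed; only agent 0 is informed of a commanded turn.
    # beta > 0 adds the delayed-self-reinforcement (momentum) term.
    N, dt, speed = 15, 0.02, 1.0
    k_align, k_inform = 2.0, 2.0

    def run(beta, steps=5000, turn=np.pi / 2):
        theta = np.zeros(N)                       # headings; all initially along +x
        theta_prev = theta.copy()
        pos = np.stack([np.zeros(N), np.arange(N, dtype=float)], axis=1)
        d0 = np.linalg.norm(pos[0] - pos[-1])     # initial end-to-end extent
        max_stretch = 1.0
        for _ in range(steps):
            # neighbor-based alignment on a chain (agent i sees agents i-1 and i+1)
            update = np.zeros(N)
            update[1:] += k_align * (theta[:-1] - theta[1:])
            update[:-1] += k_align * (theta[1:] - theta[:-1])
            update[0] += k_inform * (turn - theta[0])   # only agent 0 knows the turn
            # delayed self-reinforcement: reuse the agent's own previous heading change
            theta_new = theta + dt * update + beta * (theta - theta_prev)
            theta, theta_prev = theta_new, theta
            pos += speed * dt * np.stack([np.cos(theta), np.sin(theta)], axis=1)
            max_stretch = max(max_stretch, np.linalg.norm(pos[0] - pos[-1]) / d0)
        return max_stretch, np.abs(theta - turn).max()

    for b in (0.0, 0.9):                          # beta = 0 is the standard alignment rule
        stretch, err = run(b)
        print(f"beta={b}: worst end-to-end stretch = {stretch:.2f}x, "
              f"final heading error = {err:.3f} rad")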


Without DSR: Random IC

With DSR: Random IC

Without DSR: Uniform IC

With DSR: Uniform IC

Link to PDF of Slides

Link to Talk at U. Connecticut

Recent publication

Application: Robotics for Manufacturing

The Boeing Advanced Research Center (BARC) is a 4300 sq. ft. facility housed in the Department of Mechanical Engineering, College of Engineering, at the University of Washington. It fosters collaborative basic and applied research, translational research and development, and student education-related activities in the manufacturing and assembly of aircraft and spacecraft structures. These research and educational activities represent strong partnerships between UW and industry. The facility represents a new paradigm in the execution of industrial research at UW in that Boeing-employed Affiliate Instructors will work in the lab, on a full-time basis, hand in hand with faculty and students on joint research projects. It is envisioned that approximately eight Boeing-employed Affiliates will work concurrently in the lab with graduate students and that each project will have at least one UW faculty member assigned to it. The initial research focus is on automation, robotics, mechatronics, and metrology for the assembly of aircraft, with four projects: predictive shimming, development of in-wing crawlers, inside-fuselage automation for percussive rivet forming, and sensor fusion. BARC is led by the Department of Mechanical Engineering; however, other departments in the College of Engineering and across UW will also be engaged.

Our group's effort is on automation, robotics, mechatronics, and metrology, with a focus on the assembly of aircraft; projects include in-wing crawlers and inside-fuselage automation. An example application is described below. For additional information on projects, see BARC.

Human-Machine Control for Docking of Manufacturing Fixtures: Assembly fixtures that are much smaller than the structure being manufactured, such as an aircraft, must routinely be positioned and docked against the structure on which they act. The major contribution of our work in this area (Ref 6) is a variable-impedance-based human-machine control for the docking process. The issues investigated were: (i) impedance control to facilitate intuitive human force input; and (ii) variable damping to aid docking and safeguard both the fixture and the structure. A single-degree-of-freedom experimental test bed was used to simulate docking with the proposed controller and to explore how controller parameter choices impact overall performance.
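A minimal single-degree-of-freedom sketch of the variable-damping idea is given below; the virtual mass, the damping schedule, and the stand-in for the human force input are illustrative assumptions, not the tuned controller of Ref 6.

    import numpy as np

    # Single-DOF sketch: the machine renders a virtual mass-damper driven by the
    # human's push, and the rendered damping rises as the fixture nears the dock
    # so the same push produces a slow, controlled final approach.
    m_virtual = 20.0                 # rendered virtual mass [kg]
    b_free, b_dock = 50.0, 800.0     # damping far from / near the docking surface [N s/m]
    x_dock = 1.0                     # location of the structure being docked against [m]
    dt = 0.001

    def damping(x):
        # ramp the damping up smoothly over the last 0.2 m of approach
        closeness = np.clip(1.0 - (x_dock - x) / 0.2, 0.0, 1.0)
        return b_free + (b_dock - b_free) * closeness

    def human_force(t):
        # crude stand-in for the operator: push forward for 6 s, then release
        return 40.0 if t < 6.0 else 0.0

    x, v = 0.0, 0.0
    for k in range(int(8.0 / dt)):
        t = k * dt
        # admittance/impedance relation: m_v * a + b(x) * v = f_human
        a = (human_force(t) - damping(x) * v) / m_virtual
        v += a * dt
        x += v * dt
        x = min(x, x_dock)           # hard stop at the structure (contact)
        if x >= x_dock:
            break

    print(f"docked at t = {t:.2f} s with approach speed {v:.3f} m/s")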

Application: Active Prosthesis with Nonlinear Stiffness

Research with Prof. Glenn Klute.

Background: Passive elements parallel to the actuator can reduce the maximum ankle-torque requirement and, thereby, reduce the actuator size in powered lower-limb prostheses. The challenge is to design a parallel element to optimally match a desired nonlinear response. Our main contribution in Ref 1 is the design of a cam-based device that can match the desired nonlinear response.
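The sketch below illustrates why matching the nonlinear response matters. The stiffening torque-angle curve is a hypothetical profile for illustration only (not measured gait data or the design curve of Ref 1); it compares the peak actuator torque remaining with no parallel element, with a best-fit linear spring, and with an idealized cam-based element that exactly reproduces the nonlinear curve.

    import numpy as np

    # Hypothetical stiffening torque-angle profile (illustration only)
    theta = np.linspace(0.0, 0.30, 200)                 # ankle angle [rad]
    tau_desired = 300.0 * theta + 2500.0 * theta**2     # desired passive torque [N m]

    # Option 1: no parallel element -- the actuator supplies everything.
    tau_act_none = tau_desired

    # Option 2: best linear parallel spring (least-squares fit through the origin).
    k_lin = np.sum(tau_desired * theta) / np.sum(theta**2)
    tau_act_linear = tau_desired - k_lin * theta

    # Option 3: a cam-based element shaped to reproduce the nonlinear curve
    # (idealized here as an exact match over this range).
    tau_act_cam = tau_desired - tau_desired

    for name, tau_act in [("no spring", tau_act_none),
                          ("linear spring", tau_act_linear),
                          ("cam-based nonlinear spring", tau_act_cam)]:
        print(f"{name:28s} peak actuator torque = {np.abs(tau_act).max():6.1f} N m")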


Our active prosthesis, designed by Jonathan Realmuto, being tested at the local VA hospital.

Visualization of the data from the experiment including foot-ground reaction forces.

Novice Human-Machine Interactions

This research aims to enable novice operation of robotic systems by making it easier for workers who are not experts at robot operation to use assistive robots. The research challenge is to infer the intent of a novice user from his or her actions under human-in-the-loop operation. A novice human's action, modified by the human-feedback-response dynamics, cannot be taken directly as the user intent for the robot controller to try and achieve.


 
Modeling of Human-Feedback Response to enable novice users to operate assistive robots

Our main contribution is the use of a human-feedback-response model, along with the observed human output and the measured system output, to infer the novice user's intent. The inferred intent can then be used by an assistive-input generator to find the additional input that assists the user in achieving the intended action, as shown in Refs 2, 3, and 7.
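A minimal sketch of this intent-inference idea is shown below, assuming a first-order robot model and a purely proportional (static) human-response model; both are illustrative simplifications of the dynamic models and iterative learning used in Refs 2, 3, and 7.

    # Robot model (assumed): y_dot = -a*y + b*(u_human + u_assist)
    # Human model (assumed): u_human = K_h * (intent - y)
    dt, a, b = 0.01, 1.0, 1.0
    K_h = 0.5          # assumed human feedback gain
    K_a = 5.0          # assistive-input gain acting on the inferred intent
    intent = 1.0       # the user's true goal (not available to the robot)

    def run(with_assist, steps=1500):
        y = 0.0
        for _ in range(steps):
            u_h = K_h * (intent - y)          # what the novice actually commands
            intent_est = y + u_h / K_h        # invert the assumed human-response model
            u_a = K_a * (intent_est - y) if with_assist else 0.0
            y += dt * (-a * y + b * (u_h + u_a))
        return y

    print("final output, human acting alone  :", round(run(False), 3))
    print("final output, with inferred assist:", round(run(True), 3))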


Work by Rahul Warrier on Human-Robot Interface using Microsoft Kinect and Myo (EMG) Armband for human-in-the-loop control of a Kinova MICO robot

Collaborative Learning

Co-learning is of interest in applications such as cooperative manipulation with multiple robots and human-robot applications such as active prostheses and orthoses. For example, bi-directional co-learning (of both the human and the robot sub-systems) can improve performance compared to the case when the human adapts to a fixed robot controller, or the robot tries to estimate and follow human intent. However, the challenge is to ensure convergence of such a co-learning process, especially since convergence of iterative learning for each individual sub-system (when the other sub-systems are not learning) does not guarantee convergence under co-learning. Our current efforts are aimed at establishing convergence conditions for co-learning; initial results are published in Ref 8.

Learning to Collaboratively Track an Output: Ref 4 studied iterative learning control (ILC) where multiple heterogeneous linear subsystems (with potentially different individual dynamics) update their inputs simultaneously based on the error in a collaboratively-controlled desired output. This work proposed an update-partitioning approach for co-learning and showed convergence whenever the individual iterative learning for each subsystem is convergent. Additionally, an intermittent time-partitioning was developed for the case when the desired trajectory is known to only some (not all) of the co-learning subsystems.
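A scalar sketch of why partitioning the update matters is given below; the gains are illustrative, and the partition here simply alternates which subsystem updates at each iteration (rather than partitioning the time axis as in Ref 4).

    # Two subsystems drive a shared output y = g1*u1 + g2*u2 and learn from the
    # shared tracking error.  Each subsystem's individual ILC is convergent
    # (|1 - rho| < 1), but applying both full updates simultaneously is not.
    g1, g2 = 1.0, 1.0
    rho = 1.2
    y_des = 1.0

    def co_learn(partitioned, iters=20):
        u1 = u2 = 0.0
        for k in range(iters):
            e = y_des - (g1 * u1 + g2 * u2)          # shared tracking error
            if partitioned:
                # only one subsystem updates per iteration (a simple partition)
                if k % 2 == 0:
                    u1 += rho * e / g1
                else:
                    u2 += rho * e / g2
            else:
                # both subsystems apply their full individual updates at once
                u1 += rho * e / g1
                u2 += rho * e / g2
        return abs(y_des - (g1 * u1 + g2 * u2))

    print("final error, simultaneous updates:", co_learn(False))
    print("final error, partitioned updates :", co_learn(True))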

Networked Systems: The convergence of iterative control for networked, heterogeneous multi-agent systems, where each agent has potentially different dynamics and uncertainties, was studied in Ref 5. The major contribution of this work is to quantify the acceptable modeling uncertainty for ensuring convergence of the proposed iterative approach for collaborative tracking. Convergence conditions are also established for the case when inversion-based iterative controllers for each individual agent (designed separately, independent of the iterative controllers of the other agents) are conjoined using a network graph structure.

Contact Santosh Devasia: devasia@u.washington.edu

Presentations and Videos

Cohesive Networks
Talk at Michigan State University on cohesive networks using Delayed Self-Reinforcement (DSR)

Visualization of experimental data

Work by Rahul Warrier on a Human-Robot Interface using Microsoft Kinect and Myo (EMG) Armband for control of a Kinova MICO robot

King5 News

News on Boeing Advanced Research Center (BARC)

References

Ref 1: J. Realmuto, G. Klute, and S. Devasia. "Nonlinear Passive Cam-Based Springs for Powered Ankle Prostheses." ASME Journal of Medical Devices, Vol. 9 (1), pp. 011007 1-10, March 2015.

Ref 2: R. Warrier, and S. Devasia “Iterative Learning from Novice Human Demonstrations for Output Tracking,” IEEE Transactions on Human-Machine Systems, Vol. 46 (4), pp. 510-521, August 2016.

Ref 3: R. Warrier and S. Devasia. “Inverse Control for Inferring Intent in Novice Human-in-the-Loop Iterative Learning,” Presented at the American Control Conference, Boston, MA, Sep. 2016. (R. Warrier was selected as a Best Student Paper Award finalist for the conference.)

Ref 4: S. Devasia “Iterative Learning Control with Time-Partitioned Update for Collaborative Output Tracking,” Automatica, Vol. 69, pp. 258-264, July 2016.

Ref 5: S. Devasia “Iterative Control for Networked Heterogeneous Multi-Agent-Systems with Uncertainties,” IEEE Transactions on Automatic Control, Vol. 62 (1), pp. 431-437, Jan. 2017.

Ref 6: W. T. Piaskowy, L. McCann, J. Garbini and S. Devasia. “Variable-Impedance-Based Human-Machine Control for Docking of Manufacturing Fixtures,” Presented at the American Control Conference, Boston, MA, Sep. 2016.

Ref 7: R. Warrier, and S. Devasia “Inferring Intent for Novice Human-in-the-loop Iterative Learning Control,” IEEE Transactions on Control Systems Technology, Vol. 25 (5), pp. 1698-1710, Sept. 2017.

Ref 8: J. Realmuto, R. Warrier, and S. Devasia “Data-Inferred Personalized Human-Robot Models for Collaborative Output Tracking,” Journal of Intelligent and Robotic Systems, Vol. 91(2), pp. 137-153, August 2018.

Ref 9: N. Banka and S. Devasia “Application of Iterative Machine Learning for Output Tracking with Soft Magnetic Actuators,” ASME/IEEE Transactions on Mechatronics, Vol. 23(5), pp. 2186-2195, October 2018 .

Ref 10: S. Devasia “Iterative Machine Learning for Output Tracking,” IEEE Transactions on Control Systems Technology, Vol. 27 (2), pp. 516-526, March 2019.

Ref 11: S. Devasia “Rapid Information Transfer in Swarms under Update-Rate-Bounds using Delayed Self Reinforcement,” ASME Journal of Dynamic Systems Measurement and Control, Vol. 141(8), pp. 081009-081009-9, August, 2019.

Ref 12: S. Devasia “Faster Response in Bounded-Update-Rate, Discrete-time Networks using Delayed Self-Reinforcement,” International Journal of Control, 2019.

Ref 13: S. Devasia “Cohesive Networks using Delayed Self-Reinforcement,” Automatica, Vol. 112, Paper # 108699, February 2020.

Ref 14: Y. Gombo, A. Tiwari, and S. Devasia “Accelerated-Gradient-based Flexible-Object Transport with Decentralized Robot Networks,” Robotics and Automation Letters, Vol. 6(1), pp. 2377-3766, January 2021.

Ref 15: L. Yan, N. Banka, P. Owan, W.T. Piaskowy, J. Garbini, and S. Devasia “MIMO ILC using Complex-Kernel Regression and application to Precision SEA robots,” Automatica, Vol. 127, Paper# 109550, May 2021.

Ref 16: Y. Gombo, A. Tiwari, and S. Devasia “Communication-free Cohesive Flexible-Object Transport using Decentralized Robot Networks,” Presented at the American Control Conference, May 2021.