Topic/task energy #155
Conversation
Add signal dE_tank to monitor the energy derivative
Hi @NoelieRamuzat, thanks for your contribution, it looks really interesting. At first sight, I am not against having this PR directly on the devel branch. It will probably take me a while to review, as it's a lot of changes, but I think we can leave it here. Unless of course @nim65s has a better idea.
Hi, No, this looks really clean to me; I also think we should put that on devel. @NoelieRamuzat: could you document (and maybe motivate) the modifications you made? I'm not talking about anything you added, just the cases where you changed a return type or removed some. Also, maybe a test for this new feature would be nice ;)
Hi @NoelieRamuzat, just for your information, I am reading your paper to understand the theory in detail before reviewing this PR. I'll probably send you an email to ask a few questions about the theory.
Hi @nim65s, @andreadelprete, Thanks for your comments, I am currently working on the API changes. |
Hi! Sorry, I made a misclick on the PR. Here are the explanations for the changes in the API:
Remove blanks in the formulation file. Fix a comment in the energy task. The Python test creates a Romeo TsidBiped and adds the energy task to the stack of tasks; the robot then performs a sinusoidal motion with its CoM. The test passes if the CoM error does not diverge, the energy tank does not fall below its minimal value (0.1 J), and the QP is solved.
Hi! The Python test creates a Romeo robot using the
Hi all!
Here are the developments that I made for the RA-L paper "Passive Inverse Dynamics Control using a Global Energy Tank for Torque-Controlled Humanoid Robots in Multi-Contact".
I implemented a new task called `energyTask` in TSID and made quite a few changes. I don't know where to propose this PR, as it also modifies the QP formulation (`inverse-dynamics-formulation-acc-force`) to take the energy into account. Could you tell me where to propose it, @andreadelprete, @nim65s? Perhaps in a new branch? Thanks!
The energy task transforms the classical TSID whole-body controller, based on inverse dynamics, into a passive scheme. Passivity is a stability criterion based on the power flow exchanged between the components of the system. A storage function H(x) is chosen (often the energy of the system); for the system to be passive, its internal power (d_H, the time derivative of the stored energy) must be less than or equal to the power transferred to the system through its port. For a controller+robot system controlled in torque, the port variables are the joint velocity and torque.
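The passivity condition above can be written as a one-line check: the internal power d_H must not exceed the power supplied through the port, which for a torque-controlled robot is tau^T v. A minimal sketch, with illustrative torque and velocity values (the function name and tolerance are assumptions, not part of TSID):

```python
# Hypothetical check of the passivity condition d_H <= tau^T * v.
import numpy as np

def is_passive(dH, tau, v, tol=1e-9):
    """True if the internal power dH does not exceed the power
    supplied through the port (joint torques times velocities)."""
    supplied_power = float(np.dot(tau, v))
    return dH <= supplied_power + tol

tau = np.array([1.0, -0.5, 0.2])   # joint torques [Nm]
v = np.array([0.3, 0.1, -0.4])     # joint velocities [rad/s]
print(is_passive(0.1, tau, v))     # 0.1 <= tau^T v = 0.17 -> True
print(is_passive(1.0, tau, v))     # 1.0 > 0.17 -> False
```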
The energy task adds an energy tank to the controller to monitor the energy flow of the system. The tank regulates the task gains through coefficients (alpha, beta, and gamma, each in [0, 1]) so as to preserve the passivity of the system. These coefficients multiply the task vectors in the QP formulation.
Thus, if one of the coefficients drops to zero (when the tank is empty), the corresponding tasks are penalized: their desired acceleration is decreased or even set to zero. Moreover, since the tank is computed without taking the QP constraints into account, a passivity constraint is added to the QP formulation on top of the regulating coefficients.
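The tank-and-coefficient mechanism can be sketched with a simple first-order model: the tank level integrates the power flow, and a scaling coefficient in [0, 1] fades the task gains out as the tank approaches its minimal level. This mirrors the idea described above but is not the paper's exact formulation; `E_MAX`, `margin`, and the smooth ramp are assumptions made for illustration.

```python
# Hypothetical energy-tank sketch: integrate power flow, then derive a
# task-scaling coefficient that goes to 0 as the tank empties.
import numpy as np

E_MIN = 0.1   # minimal tank energy [J], as in the test above
E_MAX = 5.0   # illustrative upper bound on stored energy [J]

def update_tank(E, power_in, dt):
    """Integrate the power flow into the tank, clamped to [E_MIN, E_MAX]."""
    return float(np.clip(E + power_in * dt, E_MIN, E_MAX))

def task_coefficient(E, margin=0.5):
    """Scale task gains smoothly toward 0 as E approaches E_MIN."""
    return float(np.clip((E - E_MIN) / margin, 0.0, 1.0))

E = 1.0
for power in [-2.0, -2.0, 0.5]:                      # dissipate, then recharge
    E = update_tank(E, power, dt=0.2)
    a_des_scaled = task_coefficient(E) * np.ones(3)  # scaled desired accel.
```

With a full tank the coefficient is 1 and the tasks run unmodified; as the tank drains toward `E_MIN` the desired accelerations shrink and eventually vanish, which is the penalization described above.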
For more information, see the paper (soon to be published).