Creating V-REP plugin for Tasks #10
This plan does seem good. We have developed code that could help you, but unfortunately, it cannot be released.
For your questions:
Thanks! First, I'm working to load the robot model's joints and links from V-REP into Tasks for a 7-DOF KUKA iiwa 14 R820 arm. There is a model built into V-REP, plus another available in the iiwa_stack library. Here is my initial pseudocode to initialize everything; hopefully it is reasonable! The key V-REP functions for the initial setup of just the joint model are:
Key Objects in Tasks/RBDyn:
Pseudocode to construct the arm (a rough sketch follows below):
For each joint (and link between joints)
Question Regarding Inertias:
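Since the pseudocode itself did not survive the thread formatting, here is a rough, hypothetical sketch of what such a construction could look like with RBDyn's name-based API. The link/joint names, axes, masses, and inertias are placeholders to be filled in from V-REP, not values from the original post:

```cpp
#include <string>
#include <Eigen/Core>
#include <SpaceVecAlg/SpaceVecAlg>
#include <RBDyn/Body.h>
#include <RBDyn/Joint.h>
#include <RBDyn/MultiBody.h>
#include <RBDyn/MultiBodyConfig.h>
#include <RBDyn/MultiBodyGraph.h>
#include <RBDyn/FK.h>
#include <RBDyn/FV.h>

rbd::MultiBodyGraph buildArmGraph()
{
  rbd::MultiBodyGraph mbg;

  // One body per link; mass, center of mass, and inertia are placeholders
  // that would be read from the V-REP model.
  for(int i = 0; i <= 7; ++i)
  {
    rbd::Body link(1.0, Eigen::Vector3d::Zero(), Eigen::Matrix3d::Identity(),
                   "link" + std::to_string(i));
    mbg.addBody(link);
  }

  // One revolute joint between consecutive links (axis is a placeholder).
  for(int i = 1; i <= 7; ++i)
  {
    rbd::Joint joint(rbd::Joint::Rev, Eigen::Vector3d::UnitZ(), true,
                     "joint" + std::to_string(i));
    mbg.addJoint(joint);

    // Static transform from the previous link to the joint, as read from V-REP;
    // identity on the successor side places the joint at the successor link's origin.
    sva::PTransformd X_prevLink_joint = sva::PTransformd::Identity(); // placeholder
    mbg.linkBodies("link" + std::to_string(i - 1), X_prevLink_joint,
                   "link" + std::to_string(i), sva::PTransformd::Identity(),
                   "joint" + std::to_string(i));
  }
  return mbg;
}

// Usage sketch: fixed-base MultiBody rooted at link0, zeroed configuration,
// then forward kinematics/velocity so the solver sees a consistent state.
//   rbd::MultiBodyGraph mbg = buildArmGraph();
//   rbd::MultiBody mb = mbg.makeMultiBody("link0", true);
//   rbd::MultiBodyConfig mbc(mb);
//   mbc.zero(mb);
//   rbd::forwardKinematics(mb, mbc);
//   rbd::forwardVelocity(mb, mbc);
```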
Your plan does seem reasonable! We have something similar, but it is URDF-based. Regarding your last point: yes, most probably.
Thanks, I will keep that in mind. Right now I've got the MultiBody and MultiBodyConfig initialization set up. At the moment I'm assuming that the joint transforms V-REP gives me are independent of the current joint configuration at this time step.
for(const std::string & name : vrepJointNames)
{
    int id = mb.jointIndexByName(name);
    double value = jointAngleFromVrep(name);
    mbc.q[id][0] = value;
}
Okay, that makes sense, thanks. Also, for QPSolver, is it best to initialize a new one at each time step when I solve the constrained optimization problem, or is it safe to create a persistent one, update the configuration, and re-run it multiple times?
No, you should most definitely keep it! A typical use case is to solve in a loop and integrate the solution to see how the robot would evolve if it followed the reference perfectly. This can be achieved by repeatedly running this code:

solver.solveNoMbcUpdate(mbs, mbcs);
for(int i = 0; i < mbs.size(); ++i)
{
    solver.updateMbc(mbcs[i], i);
    rbd::eulerIntegration(mbs[i], mbcs[i], timeStep);
    rbd::forwardKinematics(mbs[i], mbcs[i]);
    rbd::forwardVelocity(mbs[i], mbcs[i]);
}

I think that if you replace
Note: the solver may fail if the state is not correctly updated. Most notably, if you use a
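To make the "keep the solver" advice concrete, here is a minimal sketch of a persistent setup. The names posSetPointTask, jointLimitsConstr, and nrSteps are hypothetical placeholders, and the exact overloads should be checked against the Tasks headers for the version in use:

```cpp
// One-time setup, kept alive across simulation steps.
tasks::qp::QPSolver solver;
solver.nrVars(mbs, {}, {});            // no contacts in this simple arm example
solver.addTask(&posSetPointTask);      // e.g. a SetPointTask wrapping a PositionTask
jointLimitsConstr.addToSolver(solver); // any constraints, added once
solver.updateConstrSize();

// Per-step loop: solve, read back the accelerations, integrate, refresh kinematics.
for(int step = 0; step < nrSteps; ++step)
{
  if(!solver.solveNoMbcUpdate(mbs, mbcs))
  {
    break; // QP failure: inspect the constraints and the current state
  }
  for(int i = 0; i < static_cast<int>(mbs.size()); ++i)
  {
    solver.updateMbc(mbcs[i], i);
    rbd::eulerIntegration(mbs[i], mbcs[i], timeStep);
    rbd::forwardKinematics(mbs[i], mbcs[i]);
    rbd::forwardVelocity(mbs[i], mbcs[i]);
  }
}
```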
Okay, I'll probably need to update what I have a bit to reflect what you mentioned above. Right now I'm starting with simple position control, leaving orientation alone. I set up the robot handles and transforms from V-REP in this block. Then on each time step of V-REP I create a new QP and initialize it with the current arm pose, computing the forward kinematics. Unfortunately, I must have a problem with configuring the transforms, because I'm using a KUKA iiwa arm and I'm giving it a target pose about 0.5 m in front of it (along the x axis) and 0.2 m up (along the z axis). This should be very reachable, and I initialize with
However, the solver finds a nearly zero set of joint angles. Additionally, if I uncomment the code to set the joint limit constraints it crashes immediately. Any advice to debug these problems? Perhaps I should create a simplified zxz scene first and get that working...
Nothing looks obviously out of kilter here, but I have some suggestions:
I have three ideas as to what could be wrong:
tasks::qp::PositionTask posTask(rbd_mbs_, simulatedRobotIndex, ikGroupTipName_, getObjectTransform(ikGroupTipHandle_).translation()); But this is simply a task to maintain the current position, if I understand well. You should use your target position as the last argument to PositionTask.
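As an illustration of that point, a sketch only: the variable names are taken from the snippets above, the QPSolver object is assumed to be called solver, and the stiffness/weight values are arbitrary.

```cpp
// Drive the tip body toward a world-frame target instead of its current position.
Eigen::Vector3d targetPos(0.5, 0.0, 0.2); // roughly 0.5 m forward, 0.2 m up, as described above
tasks::qp::PositionTask posTask(rbd_mbs_, simulatedRobotIndex, ikGroupTipName_, targetPos);

// Wrap it in a set-point task so the solver pulls the position error to zero.
tasks::qp::SetPointTask posSetPointTask(rbd_mbs_, simulatedRobotIndex,
                                        &posTask, /*stiffness*/ 10.0, /*weight*/ 100.0);
solver.addTask(&posSetPointTask);
```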
For the failure when using limits: it crashes? I.e. a segfault? Or a solver failure? In the first case, I'd say your problem is that your vector of limits is not properly constructed: it should contain empty vectors for fixed joints. Check that your lower/upper bounds are the same size as your vector. Example, for a fixed-base robot with 3 joints plus one fixed joint (see the sketch below):
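The example itself did not survive the formatting here, so the following is a reconstruction sketch of the intended layout. The bound values are placeholders; the point is that zero-DoF joints get empty entries and the order follows mb.joints():

```cpp
// Fixed-base robot with three revolute joints and one fixed joint:
// the root joint added by makeMultiBody and the fixed joint have zero degrees
// of freedom, so their entries are empty; each revolute joint has one value.
std::vector<std::vector<double>> lBound = {{}, {-1.0}, {-1.0}, {-1.0}, {}};
std::vector<std::vector<double>> uBound = {{}, { 1.0}, { 1.0}, { 1.0}, {}};
```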
You can check that your bounds are correct w.r.t. your MultiBody by calling:
It will throw with an explanation of where there is a mismatch.
Note that this error is also present on line 399: rbd_mbcs_[0].q[i] = {currentJointPosVec[i]}; It should probably be: rbd_mbcs_[0].q[mb.jointIndexByName(jointNames_[i])] = {currentJointPosVec[i]}; Moreover, if your objective is solely to compute inverse kinematics, you can probably use a pure RBDyn-based method (as presented in the tutorials, or in the simple IK that comes with RBDyn).
I'm planning to do more than simple inverse kinematics, just trying to start simple. I fixed the goal passed into the task to be the actual tip target now. :-) I've also asked how the transforms are structured on the V-REP forum. Hopefully that can help clarify the details if I hear back from them. I plotted some poses as V-REP dummy objects as well, and it is clear that the transforms aren't being converted correctly between V-REP and Tasks. I need to look more carefully into what I'm loading, it may be the successor/predecessor relationship you describe, a row/column ordering issue, or something else. I'm in the process of debugging it. I'm also getting this odd behavior where running the solver always produces the same joint angles, even if the target translation changes:
Even if the arm description isn't correct, shouldn't I expect these values to change as they go through the loop? I did remove the
I've visualized the data and it looks like there is at least one mismatch between the world frames and the local frames. Could you advise me on where my initialization is going wrong? Perhaps the way I'm initializing X_base is part of the problem, but it doesn't quite explain the way the world-coordinate entities are broken. What I did is initialize the following:

MultiBodyConfig::parentToSon() - locally correct
In the first one, I set the transforms all locally correct with respect to the first sphere via parentToSon() in my construct() header. This looks OK locally! Notice how it goes straight up, just like the straight-up arm.

MultiBodyConfig::bodyPosW() - world frame incorrect
When I only set the poses of each one, the world frame seems completely crazy using entries from bodyPosW() in my construct() header. You can't see the order from the pictures, but these dummies are jumping back and forth and up and down; you can see it in the actual printout of the frames for each joint below. Here are the actual frames as set in the world coordinate system with
Also, in my case Z is up. Is that potentially part of the problem?
@haudren Perhaps you were mistaken about the inverse?
I tried this change, simply eliminating all inverses, and the locations of the dummy nodes look dead on (some are inside the arm so they're not visible):
Actually, while closer, this is still not correct... still debugging...
Ok, it does look much better. Sorry about this, but it is a common problem we have with a lot of users. You can also try simply inverting the rotation for the base transform; it may be the cause of your problems. Note that without calling
Are you suggesting adding
Yes indeed. However, I don't think you can do this with transforms:
X_base = sva::PTransformd(X_base.rotation().inverse(), X_base.translation());
should do the trick.
That unfortunately didn't seem to do the trick, but I have a bit more debug data (without the inversion discussed above) printed from this revision. Here is the image: I created dummy nodes to debug the frames and initialized them as follows:
These are the comparable frames:

Joint 1
For the first frame the transforms are equal; in other words, they match and both appear to be correct:

Joint 2
Here is the first frame where they differ; the rotation is off:

Joint 3
Third frame, happens to match up again:

Joint 4

Joint 5
Third frame, happens to match up again:

Joint 6
First translation divergence; rotations also diverge:

Joint 7
Translations happen to match up again:
So you mentioned that sometimes the first joint is what's off? Since a few of the joints in the middle happen to match up, I think it is a slight difference in the way the transforms are being specified. Perhaps an axis difference, a joint direction difference, or both? Switching the joint direction didn't seem to make a difference; I tried that before as well. I'll also try to get each transform relative to the previous joint for both V-REP and Tasks. That may finally make the exact difference obvious.
Okay, I believe this change outputs the relative frames, which I know in V-REP are given for each joint.
Note: this picture is the same as the post above; there were no changes that should affect it.

Base to Joint 1
Joint 1 to Joint 2
Joint 2 to Joint 3
Joint 3 to Joint 4
Joint 4 to Joint 5
Joint 5 to Joint 6
Joint 6 to Joint 7
As you can see, these matrices are essentially identical in all cases; I guess this means it comes down to a difference in how the multiplications are carried out for the forward kinematics? At this point I'm very uncertain of the next debug step, considering that the world frames match up at several, but not all, joints in this simple case with zero joint angles, yet the frame-to-frame matrices seem to be identical. I've also checked that my code to convert an
(Sorry it's Friday evening, I am in a bit of a hurry)
I'll try looking a bit more into it, but it could be that you have to:
Note that the way transformations are composed in forwardKinematics is simply to left-multiply the parentToSon transforms together (sketched below).
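Restated in equations (my notation, not from the thread), with X_{T_i} the static transform given to linkBodies, X_{J_i}(q_i) the joint transform, and p(i) the parent of body i:

```latex
X^{0}_{i} \;=\; \underbrace{X_{J_i}(q_i)\,X_{T_i}}_{\text{parentToSon}[i]} \; X^{0}_{p(i)}
```

so each body's world pose (bodyPosW) is obtained by left-multiplying its parentToSon transform onto the parent's world pose, following the SpaceVecAlg composition rule X_{a->c} = X_{b->c} X_{a->b}.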
Okay, thanks! Don't worry if you're busy at the moment.
All the positions match up now, but the world orientations differ, and if I try rotating joints the forward kinematics don't match up (the tool is removed but it is otherwise the same): I need to try another commit that converts to PTransformd via the RPY format, though since I'm doing actual robot motion with these transforms rather than just visualization, RPY could be vulnerable to gimbal lock, i.e. singularities.
It seems to me that all your matrices are the same in your gist, but I may just be tired... I'll come back to you on those issues during the weekend, or on Monday at the latest!
That does look to be the case for the parentToSon vs. the link-in-previous-link-frame comparisons between V-REP and RBDyn. Here is what's happening, with links to the code, for the RBDyn portion:
For the V-REP portion it is a touch simpler, though neither is very complex:
I think the conversions between V-REP and Eigen::Affine3d in my code are good, to my knowledge, since I've been using both together for some time without issue, and originally had to go through a debugging process like this to get there. The problem may be in my functions that convert my Eigen coordinates to SpaceVecAlg / RBDyn / Tasks coordinates, and in the setup of the axes of rotation. I'm taking a look at the Rigid Body Dynamics book to see if I can figure out how they are organized differently.
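In case it helps narrow that down: the usual conversion from a homogeneous "child pose in parent frame" matrix (what V-REP object transforms and Eigen::Affine3d typically hold) to a SpaceVecAlg transform involves transposing the rotation block. A hedged sketch, with a function name of my own choosing:

```cpp
// sva::PTransformd stores the rotation as the inverse (transpose) of the rotation
// block of the corresponding homogeneous matrix. Assuming `affine` encodes the
// child's pose expressed in the parent frame:
sva::PTransformd eigenAffineToSvaPTransform(const Eigen::Affine3d & affine)
{
  // .linear() is used instead of .rotation() since the pose is assumed rigid.
  return sva::PTransformd(affine.linear().transpose(), affine.translation());
}
```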
NOTE: Corrections made

Joint alignment problem appears to be fixed: Okay, I've made another big leap! The joints now appear to line up so far with this commit. I do the inverses you describe, and something similar to the inverse commit in my previous post, but I put the rotations in a Quaternion before performing the inverse. Now every joint seems to align at the zero joint angle!

New joint angle and indexing problem: In RBDyn, is each joint expected to be at the current pose, with the link portion after the joint, or is that configurable based on how MultiBodyGraph::linkBodies is called? I then tried starting at my initial starting joint angle; that's where you see the second robot in all the pictures. Unfortunately, there is a new problem, though it is much smaller. It seems the joint angle is rotating the joint previous to the one I'd like rotated; updating the next index, as shown in this commit, is a partial workaround. All the poses look right except the first joint, which I skipped because of this offset problem. Fortunately the angle is very small for joint 1 (0.06 radians), so it is easy to tell everything else is matching up closely. Here is an image, where you can see two frames at every joint. Each dummy and triad's arrows correspond to RBDyn or V-REP as follows:
I suspect I can fix this problem by modifying the way I call MultiBodyGraph::linkBodies, but I'm not certain. Another possible alternative might be to awkwardly rename all of the bodies after the joint that comes before all the transforms, and perhaps on the first joint I'll have to add a dummy transform so that the names and effects of applying joint angles line up with V-REP.
It looks like RBDyn gives transforms after the joint rotation is applied, while V-REP gives transforms before. Also, it looks like the indices of the RBDyn transforms are definitely off by one somehow; I'm still trying to debug whether my initialization is the problem, whether it is simply a convention difference, or something else.
There is definitely some sort of indexing problem, but I'm not sure why. I've not ruled out the possibility of this being an issue in RBDyn, though most likely it is my code. I have a hack that looks like it works around the problem, so I'll partially move on to the next problem and partially try to address the outstanding indexing problem.

Running the solver
So, I apply the same index hacks when I use the solver to get to a simple target pose. I will be adding more complicated functionality once the basics work. This is what happens at each time step after the initialization is complete:
Resolving the joint angle indexing off-by-1 error
I link bodies in a super straightforward way as follows:

for(std::size_t i = 0; i < rbd_bodyNames_.size() - 1; i++)
{
    sva::PTransformd to(getObjectPTransform(rbd_jointHandles_[i+1], rbd_jointHandles_[i]));
    sva::PTransformd from(sva::PTransformd::Identity());
    std::string prevBody = rbd_bodyNames_[i];
    std::string curJoint = rbd_jointNames_[i];
    std::string nextBody = rbd_bodyNames_[i+1];
    rbd_mbg_.linkBodies(prevBody, to, nextBody, from, curJoint);
}

Then I try to access the joint names after computing forward kinematics, and when I visualize the data I have to offset by 1 for the frames to be correct:

for (std::size_t i = 0; i < jointHandles_.size(); ++i)
{
    /// @todo TODO(ahundt) FIXME JOINT INDICES ARE OFF BY 1, Manually SETTING FIRST JOINT
    std::string jointName = jointNames_[i];
    std::size_t jointIdx = simArmMultiBody.jointIndexByName(jointName);
    if(i < jointHandles_.size() - 1) rbd_mbcs_[simulatedRobotIndex].q[jointIdx][0] = initialJointAngles[i+1]; // should be initialJointAngles[i]
    simSetJointPosition(jointHandles_[i], initialJointAngles[i]);
}
std::string jointName = rbd_jointNames_[0]; /// @todo TODO(ahundt) HACK: This gets one joint after what's expected
std::size_t jointIdx = simArmMultiBody.jointIndexByName(jointName);
rbd_mbcs_[simulatedRobotIndex].q[jointIdx][0] = initialJointAngles[0];

These hacks get me good output poses for each frame given a set of input joint angles.
I'll try to answer your concerns. First, the "off-by-one" error: when you create a MultiBody from a MultiBodyGraph, a "root" joint is added as the first joint. In your case it is a fixed joint, but for a free-floating robot (for example a humanoid) it would be a free joint. I would suspect that this is the source of your problems. Similarly, note that the order of the joints in a MultiBody may not be the same as the order in which you added joints to the MultiBodyGraph. This is why you should rely on the joint name to get the correct indices. When you link bodies you will have this:

//        to             from
// body --------> joint ---------> body

Meaning that if you set "from" to identity, the joint will be at the same position as the next body. This is the convention in all of our programs, and in URDFs. If you want to set a joint to be at the same position as the current body, simply swap the "to" and "from" transforms (see the sketch below). Now onto the
However, I think the main point you are lacking is that your problem does not have a
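To make the convention concrete, a sketch; body, joint, and transform names here are placeholders, not identifiers from the plugin:

```cpp
// "to" is the static transform between the predecessor body and the joint,
// "from" the one between the successor body and the joint.

// Joint frame coincident with the successor body ("from" = identity),
// the URDF-style convention described above:
mbg.linkBodies("link0", X_link0_joint1, "link1", sva::PTransformd::Identity(), "joint1");

// Joint frame coincident with the predecessor body instead: swap the two transforms.
mbg.linkBodies("link0", sva::PTransformd::Identity(), "link1", X_link1_joint1, "joint1");
```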
I assume
Ah, it does look like the last joint there is a bit junky; I put the full Xt_ data in the issue's gist:
What initializes this particular variable? Going through some of the other data as well... |
Ok, so this particular variable is the result of the fixed transforms "to" and "from" that you give when linking bodies in a MultiBodyGraph. Make sure you are not anywhere using an uninitialized
Note that by using
Ok, I think I found the offending code in

Eigen::Affine3d PTranformToEigenAffine(sva::PTransform<double> & ptransform)
{
    Eigen::Affine3d eigenTransform;
    eigenTransform = Eigen::Quaterniond(ptransform.rotation()).inverse();
    eigenTransform.translation() = ptransform.translation();
    return eigenTransform;
}

It should probably read: eigenTransform.rotation() = Eigen::Quaterniond(...)
e/ Oh well, I noticed that this is the opposite of what you're using for now...
Strangely, you can't assign to the rotation component of an Affine3d; that is what I tried at first. But you can assign the Quaternion to the whole transform, and I know if it is an identity transform it will become just a rotation (the translation will remain 0). I did figure out that I'm going off the end of my set of joint handles, in other words one was uninitialized. I'm working around that now, but I need to figure out what actual scene objects to tie it to, which is making things more troublesome.
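For reference, a corrected sketch of that conversion which sidesteps the assignment issue by writing to .linear(), the writable 3x3 block, rather than .rotation(); whether the quaternion needs the inverse() here depends on the V-REP vs. SpaceVecAlg convention worked out earlier in the thread:

```cpp
Eigen::Affine3d PTransformToEigenAffine(const sva::PTransformd & ptransform)
{
  Eigen::Affine3d eigenTransform(Eigen::Affine3d::Identity());
  // Affine3d::rotation() is computed and returned by value, so it cannot be assigned to;
  // linear() exposes the underlying 3x3 block and can be written directly.
  eigenTransform.linear() =
      Eigen::Quaterniond(ptransform.rotation()).inverse().toRotationMatrix();
  eigenTransform.translation() = ptransform.translation();
  return eigenTransform;
}
```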
Good news: with the added frame it no longer crashes! (I still have the weird off-by-1 error.) On to the next step! The arm does appear to move, but extraordinarily slowly. How do I configure the solver and the eulerIntegration step to solve for completing the motion by the next V-REP time step? I tried passing the time step of 50 ms (0.05 seconds) as the eulerIntegration step parameter, but I've been running for 10 simulated minutes and it has moved about 1 mm per minute, rather than the hoped-for 1 mm per time step.
The arm's speed is influenced mostly by two things:
To go faster, you simply have to increase the stiffness. Basically, for set-point tasks the solver is trying to minimize the following cost function (with s the stiffness):

|| \ddot{\epsilon} + s\,\epsilon + 2\sqrt{s}\,\dot{\epsilon} ||^2

i.e., in your case, we attract the arm to the desired position by "pulling" on it with a critically damped spring-damper system. It will thus never reach exactly the position you asked for, as it tries to reach that point at zero speed. We usually set up a small threshold to declare a task complete. If you want to actually follow a trajectory, i.e. pass through a set of points at certain speeds, you will need a
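For instance (a sketch; posSetPointTask and posTask are the hypothetical task objects from the earlier sketches, not names from the plugin):

```cpp
// Raising the set-point stiffness strengthens the attraction; the damping stays
// critical at 2*sqrt(stiffness), so the motion remains non-oscillatory.
posSetPointTask.stiffness(100.0); // e.g. up from 10.0

// Declare the task achieved once its error drops below a small threshold.
if(posTask.eval().norm() < 1e-3)
{
  // close enough to the target position
}
```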
Amazing! Things seem to work! Thanks! I still have to work out the small bugs, but the big ones seem squashed. :-)
Happy to hear that 😸 Do not hesitate if you have more problems!
I'm trying to integrate DamperJointLimitsConstr, and the solver succeeds for a few seconds and appears to move correctly, then it suddenly fails to find a solution after a few seconds (real time, 50 ms time step). I used parameters based on the test code for the damper limits. Here are my changes to switch to DamperJointLimitsConstr. I tried substantially reducing the stiffness on the position and orientation tasks, and this let it run quite a bit longer, but it still eventually fails. Are there any particular configuration changes you recommend, or an approach to work around this problem? Here is the key code:

// joint limit objects, initialized to the current q so entries
// will be reasonable, such as empty entries for fixed joints
std::vector<std::vector<double> > lBound = simArmConfig.q;
std::vector<std::vector<double> > uBound = simArmConfig.q;
std::vector<std::vector<double> > lVelBound = simArmConfig.alpha;
std::vector<std::vector<double> > uVelBound = simArmConfig.alpha;

// for all joints
for (std::size_t i = 0; i < jointHandles_.size(); i++)
{
    /// limits must be organized as described in https://github.com/jrl-umi3218/Tasks/issues/10#issuecomment-257793242
    std::string jointName = jointNames_[i];
    std::size_t jointIdx = simArmMultiBody.jointIndexByName(jointName);
    jointIdx -= 1; /// @todo TODO(ahundt) HACK FIXME JOINT INDICES ARE OFF BY 1
    lBound[jointIdx][0] = llimits[i];
    uBound[jointIdx][0] = ulimits[i];
    lVelBound[jointIdx][0] = -inf; /// @todo TODO(ahundt) hardcoded infinite velocity limits, set to real values
    uVelBound[jointIdx][0] = inf;
}

tasks::qp::DamperJointLimitsConstr dampJointConstr(rbd_mbs_, simulatedRobotIndex, {lBound, uBound}, {lVelBound, uVelBound}, 0.125, 0.025, 0.1, simulationTimeStep);

// Test add*Constraint
dampJointConstr.addToSolver(solver);
BOOST_VERIFY(solver.nrBoundConstraints() == 1);
BOOST_VERIFY(solver.nrConstraints() == 1);

Side note: the joint indexing bug was my fault and I've now fixed it in my code, sorry about that!
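One detail worth double-checking when constraints are added or removed at runtime (a hedged note; whether it applies depends on where the solver is configured in the plugin): the solver's internal problem size needs to be refreshed before the next solve.

```cpp
// After registering (or removing) a constraint, resize the solver's internal
// matrices; nrVars(...) would also need to be re-run if the robots or contacts changed.
dampJointConstr.addToSolver(solver);
solver.updateConstrSize();
```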
The DamperJointLimitsConstr problem is resolved as well! While I was doing unrelated implementation steps, it just so happens that I had one additional degree of freedom that I hadn't integrated yet, and that meant the solver could find solutions without errors for the damped joint constraints task.
I'd like to write a small troubleshooting guide based on everything above. I expect there will be parts appropriate for each of Tasks, RBDyn, and SpaceVecAlg. What would be the best place to put it? Perhaps expand the README.md, or do you recommend somewhere else?
Hum, if it's very short you could add it to the README.md, but I'd rather keep that file short and focused on a general overview of the software. I would instead suggest adding it to the Doxygen documentation. The best way would probably be adding a Markdown page with a link somewhere on the landing page.
Sure, adding an additional Markdown page should work. Is the Doxygen documentation actually posted for browsing somewhere? If not, I suggest using either:
We could then put a link to those docs in the README.md and in the description link field available at the top of the main page.
No, the doc is not hosted anywhere (you can of course browse it locally after installing the package). It could be a good idea to have it online somewhere, and I'd favor using GitHub Pages so that the attribution is clear, but I'll let @gergondet give us his thoughts.
I've set up the necessary info in the Travis configuration file to upload the documentation for SpaceVecAlg, RBDyn and Tasks. There is an issue with cross-links that I've yet to figure out a proper fix for.
I'm trying out my plugin on real hardware, with position control only for the moment. What's the best way to sync states between the arm sensors and the config? Right now I've got two arm models in V-REP: one "simulated arm" simulating where the arm should go, and another "measured arm" reflecting the measurements of the physical arm. I copy the joint angles from the "measured arm" into the RBDyn config before running the Tasks constrained optimization. Is that the right way to go about it?
@ahundt : We are still working out the best way to solve this problem. Open loop works just fine, but when we plug in the real position, it seems that our robot moves very slowly. We think there are synchronization issues, and we are investigating. If you manage to reach a working solution, we'd be delighted to learn about it! Note that, at least in the humanoid robotics community, there are very few QP papers that explain this procedure in detail.
@haudren I've at least been able to reproduce and confirm the issues with slow motion. I also find this surprising, particularly considering that in my first iteration I re-initialize all objects on every time step, which eliminates the possibility of bugs in the state update between iterations. Perhaps writing out the specific parameters calculated in the open-loop vs. closed-loop versions at each time step for comparison could better illuminate differences in the inputs that could lead to the differing outputs.
If I understand correctly, the latest discussion is about trying to feed the joint state directly back into the QP. I think you'll need to carefully consider the inner control loop (e.g., the robot joint PIDs) to be able to do this, because you are essentially feeding back / trying to control the same state variables in two different control loops. I've looked into it in the past; my opinion is that the cleanest way is to remove the inner control and let the QP also do that part. However, I wouldn't advise you to do that. The current "workarounds" are:
Thanks for the input, Joven! Indeed, we have recently tried to use visual servoing to "close the loop" with some success. See for example one of @jovenagravante's papers: https://hal.inria.fr/hal-00821230/document (The relevant
The problem we see when doing feedback on the joint position/speed might be linked to the delay of sending/receiving commands, as well as the fact that the QP is solving an acceleration problem and then integrating it twice via
As you can see, when you send the control computed from state
Now, how to solve this properly is a bit more involved, and we never had enough time to properly solve it, but I think that one (or more) of these proposals should help:
@haudren Delay is certainly a part of the issue, but I'm unsure if it is the main cause. I think it is much more related to the stability of the cascaded control design. Your proposal 1 fits this explanation: usually the inner loop rate must be much higher (but since we can't do that, we can only make the outer one lower). For your proposal 3, I heard that at some point FK tried better numerical integration methods on the SoT framework, but it wasn't very promising. There might be some leads there if you wish to follow it.
@jovenagravante : Thanks for the input! It would be interesting to have someone do an analysis of the stability of the whole control stack, but I simply don't have any time to allocate to this task. I just pushed a simple leapfrog integrator to my fork (haudren/RBDyn/topic/LeapFrog). @ahundt : You can try using it, but I am not sure it will improve anything, and it might be bugged :p I only implemented
@haudren : It's for future reference of course 😄
I'm planning to create a V-REP plugin for Tasks. Ideally, my hope is to be able to load V-REP scenes into Tasks, run a solver, and then update V-REP with the output. I'll explain my plan here; could you let me know if there are any gotchas or misunderstandings of the design of Tasks?
The plugin will be in my grl library. I already wrote code implementing a similar plugin for another constrained optimization library. Since V-REP plugins are C/C++ and performance is a concern for me, I plan to integrate directly using that API rather than via Python.
My plan is to implement a minimal working example for my use case, which I described in a jorisv/PG issue, and then expand from there.
Minimal example:
Essentially, I expect to build the scene in V-REP, read and transfer the relevant configuration to Tasks via the V-REP plugin API, use Tasks as the solver, then update V-REP with the results, which will also command an actual physical arm when it is connected (I have another plugin that does this).
Questions: