
Added URDF/XACRO for the Zivid One+ 3D Camera #17

Open · wants to merge 12 commits into master
20 changes: 10 additions & 10 deletions zivid_description/urdf/macros/zivid_camera.xacro
@@ -1,25 +1,25 @@
 <?xml version="1.0"?>
 <robot xmlns:xacro="http://ros.org/wiki/xacro">
-  <xacro:macro name="zivid_camera" params="prefix">
-    <!-- Zivid Properties -->
-    <xacro:property name="M_PI" value="3.1415926535897931" />
+  <!-- Properties -->
+  <material name="zivid_gray">
+    <color rgba="0.25 0.25 0.25 1"/>
+  </material>

+  <xacro:macro name="zivid_camera" params="prefix">
     <!-- Zivid Base Link -->
     <link name="${prefix}base_link">
       <!-- Visuals -->
       <visual>
-        <origin xyz="-0.0030 -0.0758 0.0445" rpy="${0.5*M_PI} 0 ${0.5*M_PI}"/>
-        <material name="blue">
-          <color rgba="0.25 0.25 0.25 1"/>
-        </material>
+        <origin xyz="-0.0030 -0.0758 0.0445" rpy="${0.5*pi} 0 ${0.5*pi}"/>
         <geometry>
           <mesh filename="package://zivid_description/meshes/visual/zivid-one-plus.stl" scale="0.001 0.001 0.001"/>
         </geometry>
+        <material name="zivid_gray"/>
       </visual>

       <!-- Collisions -->
       <collision>
-        <origin xyz="-0.0030 -0.0758 0.0445" rpy="${0.5*M_PI} 0 ${0.5*M_PI}"/>
+        <origin xyz="-0.0030 -0.0758 0.0445" rpy="${0.5*pi} 0 ${0.5*pi}"/>
         <geometry>
           <mesh filename="package://zivid_description/meshes/collision/zivid-one-plus.stl" scale="0.001 0.001 0.001"/>
         </geometry>
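(For reference: xacro evaluates `${...}` as Python expressions and exposes math constants such as `pi` directly, which is what makes the local `M_PI` property removable. A minimal sketch:)

```xml
<!-- xacro substitutes the Python math constant pi here; no
     xacro:property definition is required -->
<origin xyz="0 0 0" rpy="${0.5*pi} 0 ${0.5*pi}"/>
```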
@@ -32,13 +32,13 @@

     <!-- Zivid Optical (Measurement) and Projector Joints -->
     <joint name="${prefix}optical_joint" type="fixed">
-      <origin xyz="0.065 0.062 0.0445" rpy="-${0.5*M_PI} 0 -${0.5*M_PI + 8.5/180*M_PI}"/>
+      <origin xyz="0.065 0.062 0.0445" rpy="-${0.5*pi} 0 -${0.5*pi + 8.5/180*pi}"/>


I am not sure we need an optical joint that is not the same as the base_link?
The optical joint's frame should ideally just be the same frame that the points in the pointcloud are given in, which is a fixed point in the camera - I can figure out exactly how it's specified.

Then the only frame that is essential is that optical frame, and a hand-eye transform will be used to get the pointcloud into a robot's frame.
I think it might be useful to also have a rough estimate of the projector coordinate system relative to the optical frame, like you have added (discussed in 624a977#r563312884).
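(For illustration only, not part of this package: applying a hand-eye result to a point cloud is a plain rigid-body transform. A minimal numpy sketch with made-up transform and point values:)

```python
import numpy as np

def transform_points(T, points):
    """Apply a 4x4 homogeneous transform T to an (N, 3) array of points."""
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
    return (T @ homogeneous.T).T[:, :3]

# Hypothetical hand-eye result: 90 deg rotation about z plus a translation.
T_base_camera = np.array([
    [0.0, -1.0, 0.0, 0.10],
    [1.0,  0.0, 0.0, 0.00],
    [0.0,  0.0, 1.0, 0.50],
    [0.0,  0.0, 0.0, 1.00],
])

# Two points expressed in the camera's optical frame.
cloud_camera = np.array([[1.0, 0.0, 0.0],
                         [0.0, 2.0, 0.0]])
cloud_base = transform_points(T_base_camera, cloud_camera)
print(cloud_base)  # points now expressed in the robot base frame
```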

Author:

> The optical joint's frame should ideally just be the same frame that the points in the pointcloud are given in, which is a fixed point in the camera - I can figure out exactly how it's specified.

joint != frame.
A frame is a coordinate frame. A joint is a connection between frames defining the transformation between them.

The measurement frame (optical_frame) is indeed defined by the frame in which the camera outputs the captures. This may or may not coincide with another frame, but it is definitely a distinct frame (even if only for semantics). The joint just connects the two links together.
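(The joint-versus-frame distinction reads directly off plain URDF: frames are links, possibly geometry-less, and the joint only carries the transform between them. A minimal sketch with hypothetical names:)

```xml
<!-- Two coordinate frames, represented as links -->
<link name="camera_base_link"/>
<link name="camera_optical_frame"/>

<!-- The joint is not a frame itself; it defines the fixed
     transform connecting the two frames above -->
<joint name="camera_optical_joint" type="fixed">
  <origin xyz="0.065 0.062 0.0445" rpy="0 0 0"/>
  <parent link="camera_base_link"/>
  <child link="camera_optical_frame"/>
</joint>
```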

Having a confirmation of the location of the optical_frame relative to the mounting hole (/base_link) would be very helpful indeed. Our usage (attached to a robot manipulator) does show that this location is correct, or at least really close to the actual measurement frame. We often use this description "as is", without calibration, for some quick captures.

> Then the only frame that is essential is that optical frame

If looking at the camera in isolation, yes, but my intent behind making this package is to actually connect it to other hardware. Then the base_link is essential as well, even if only by convention, expectations, and ease of use.

The base_link is located such that the geometry can easily be attached; it is the "starting point" of the geometry. In this case, I picked the center mounting hole, as I saw this as a convenient location by which I can attach the camera to, for example, a robot or end-effector. All description packages should start with a base_link.

> and a hand-eye transform will be used to get the pointcloud into a robot's frame

I would say that calibration is indeed needed for real-world applications, but it is not part of the scope of this package. Description packages are just there to give the ideal geometry and required frames of hardware. This can then be used for simulations, or as a first best guess for your real-world counterpart.

Typically calibrations will result in a new frame, for example: calibrated_optical_frame, that is then separately attached to the description by the user.
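(A sketch of one common way to attach such a user-created frame, assuming ROS 1; frame names and values here are hypothetical: a tf2_ros static_transform_publisher in a launch file.)

```xml
<!-- Publishes calibrated_optical_frame relative to the nominal optical
     frame: translation (m) followed by a quaternion, e.g. from a
     hand-eye calibration result -->
<node pkg="tf2_ros" type="static_transform_publisher"
      name="calibrated_optical_frame_broadcaster"
      args="0.001 -0.002 0.000 0 0 0 1 optical_frame calibrated_optical_frame"/>
```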


Let's keep the base_link link and the optical joint. I see your point on that being useful for simulation and as a first best guess or starting point.

> Typically calibrations will result in a new frame, for example: calibrated_optical_frame, that is then separately attached to the description by the user.

Yes, I agree, in a real-world application the hand-eye calibration will take over, to be able to know how the point cloud is related to the robot's base. And then the transformation between the base_link frame and the optical_frame is mostly useful for simulations and verifying that the robot-camera calibration is sound.


> Having a confirmation of the location of the optical_frame relative to the mounting hole (/base_link) would be very helpful indeed.

Yes, I will get this information


Ok, so the point cloud is given relative to a location that is the optical center at a certain temperature and aperture setting. So this will vary for each camera, even within the same model, for instance Zivid One+ M.

So I think we can communicate, through the naming of the joints and frames, that the transformation between the mounting hole and the camera's optical center at the given calibration point (a certain temperature and aperture) is an approximation.
And then we can use the fixed approximate values provided in the datasheet.


> So this will vary for each camera, even within the same model, for instance Zivid One+ M.

Would the driver have a way of retrieving that information?

There's no requirement for the xacro:macro to contain that link.

If the driver could publish it (as a TF frame), that would work just as well.

Author:

> > So this will vary for each camera, even within the same model, for instance Zivid One+ M.
>
> Would the driver have a way of retrieving that information?

I would also be interested in this, especially if it moves between usages (e.g. due to temperature differences).


(forgive me if this has been discussed before / is generally known about Zivid devices)

Unless the pointcloud / depth images are automatically transformed by the driver to have their origins at a fixed point (so the driver / embedded firmware compensates for the offsets/variation due to temperature/other factors), not having the precise location of the optical frame significantly complicates using Zivid cameras for really precise/accurate work.

Extrinsic calibrations could likely compensate for that offset (they would incorporate it into whatever transform they determine between the camera itself and the mounting link), but IIUC from the comments by @runenordmo, that would essentially only be the extrinsic calibration for one particular 'state' of the sensor.

If the camera itself already compensates for this, a static link in the URDF/xacro:macro would seem to suffice. If not, the driver would ideally publish the transform itself -- perhaps not continuously, but at least the one associated with a particular capture. The rest of the system could then match it based on time from the header.stamp and the TF buffer.
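(A rough sketch of that time-matching idea, with a plain-Python stand-in for the TF buffer; in ROS this would be a `tf2_ros.Buffer.lookup_transform` call at the capture's `header.stamp`. Names and stamp values are made up.)

```python
# Match a capture to the transform closest in time, mimicking what a
# TF buffer lookup at header.stamp would do.
def nearest_transform(buffer, stamp):
    """buffer: list of (stamp_seconds, transform) pairs.
    Returns the transform whose stamp is closest to the given stamp."""
    return min(buffer, key=lambda entry: abs(entry[0] - stamp))[1]

# Hypothetical per-capture transforms published by the driver.
tf_buffer = [
    (100.00, "T_at_100.00"),
    (100.50, "T_at_100.50"),
    (101.00, "T_at_101.00"),
]

print(nearest_transform(tf_buffer, 100.6))  # closest stamp is 100.50
```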

       <parent link="${prefix}base_link"/>
       <child link="${prefix}optical_frame"/>
     </joint>

     <joint name="${prefix}projector_joint" type="fixed">
-      <origin xyz="-0.0030 -0.0758 0.0445" rpy="-${0.5*M_PI} 0 -${0.5*M_PI}"/>
+      <origin xyz="-0.0030 -0.0758 0.0445" rpy="-${0.5*pi} 0 -${0.5*pi}"/>
       <parent link="${prefix}base_link"/>
       <child link="${prefix}projector_frame"/>
     </joint>