7 changes: 6 additions & 1 deletion fuse_models/doc/sensor_models/transform_sensor.md
@@ -8,7 +8,12 @@ This sensor is intended to be used to estimate the position of a body relative t
For example, this sensor is useful in the case of measuring april tags on some external body and using those measurements to estimate the full pose and twist of that body.
An example of this application can be seen [here](../../../fuse_tutorials/config/fuse_apriltag_tutorial.yaml) (or try `ros2 launch fuse_tutorials fuse_apriltag_tutorial.launch.py` to see the tutorial in action).

More generally, this sensor takes a transform between some set `estimation_frame` and one of a predefined set of `transforms`. When it receives such a transform, it will then apply a transform to this measurement to change it to be a measurement of the given `target_frame`. Then, it generates a 3D pose transaction for use by its parent optimizer.
More generally, this sensor takes a transform between one frame in the set `estimation_frames` and one frame in the set `transforms`. When it receives such a transform, it applies an additional transform so that the measurement becomes a measurement of the given `target_frame`, and then generates a 3D pose transaction for use by its parent optimizer.

If you want to use multiple `estimation_frames`, you should define a `base_frame`.
If this frame is non-empty, you should also set the publisher's `map_frame_id` and `world_frame_id` to the same value.
When `base_frame` is set, the sensor transforms its pose so that it becomes a measurement of `target_frame` in `base_frame`, and the odometry produced will be expressed in `base_frame`. This is useful when, for example, you have multiple cameras, all positioned relative to `world`, that should all measure the same object.
This allows the transform sensor to use any number of cameras to estimate the pose of some other object.
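
As a rough illustration of the resulting transform chain, here is a minimal tf2 sketch (the function and variable names are made up for this example; the composition mirrors what `transform_sensor.cpp` does):

```cpp
#include <tf2/LinearMath/Transform.hpp>

// Sketch only. Each argument is the pose of one frame expressed in another:
//   fiducial_in_estimation - the incoming measurement (the fiducial seen from an estimation frame)
//   fiducial_in_target     - the fiducial seen from its per-fiducial target frame (looked up from tf)
//   estimation_in_base     - the estimation frame seen from base_frame (looked up from tf)
tf2::Transform target_in_base(const tf2::Transform& fiducial_in_estimation,
                              const tf2::Transform& fiducial_in_target,
                              const tf2::Transform& estimation_in_base)
{
  // pose of the target frame expressed in the estimation frame
  const tf2::Transform target_in_estimation =
      fiducial_in_estimation * fiducial_in_target.inverse();
  // re-express the measurement in base_frame; when base_frame is empty, this
  // last step is skipped and the pose stays in the estimation frame
  return estimation_in_base * target_in_estimation;
}
```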

A slightly confusing aspect of the sensor is the need for multiple definitions of the target frame. Every frame in `transforms` needs a corresponding `target_frame` to be published for it to be used. This is simply because `tf` uses a tree data structure, and the target frames are leaves. These should all end up being in the same global location (discounting measurement noise) but will have different names.
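
Concretely, the per-fiducial target frame name is built as `<target_frame>_<fiducial frame>` (this matches the string construction in `transform_sensor.cpp`; the example names below are hypothetical):

```cpp
#include <string>

// Sketch of the naming rule: every frame listed in `transforms` is matched
// against a tf frame named "<target_frame>_<fiducial frame>".
std::string perFiducialTargetFrame(const std::string& target_frame, const std::string& fiducial_frame)
{
  return target_frame + "_" + fiducial_frame;
}

// e.g. perFiducialTargetFrame("robot", "tag_0") == "robot_tag_0", so a frame named
// "robot_tag_0" must be published to tf for the fiducial "tag_0" to be used.
```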

@@ -152,7 +152,7 @@ struct Odometry3DPublisherParams : public ParameterBase
rclcpp::Duration covariance_throttle_period{ 0, 0 }; //!< The throttle period duration in seconds
//!< to compute the covariance
rclcpp::Duration tf_cache_time{ 10, 0 };
rclcpp::Duration tf_timeout{ 0, static_cast<uint32_t>(RCUTILS_S_TO_NS(0.1)) };
rclcpp::Duration tf_timeout{ 0, 0 };
int queue_size{ 1 };
std::string map_frame_id{ "map" };
std::string odom_frame_id{ "odom" };
@@ -83,8 +83,10 @@ struct TransformSensorParams : public ParameterBase

target_frame = fuse_core::getParam(interfaces, fuse_core::joinParameterName(ns, "target_frame"), target_frame);

estimation_frame =
fuse_core::getParam(interfaces, fuse_core::joinParameterName(ns, "estimation_frame"), estimation_frame);
base_frame = fuse_core::getParam(interfaces, fuse_core::joinParameterName(ns, "base_frame"), base_frame);

estimation_frames =
fuse_core::getParam(interfaces, fuse_core::joinParameterName(ns, "estimation_frames"), estimation_frames);

pose_loss = fuse_core::loadLossConfig(interfaces, fuse_core::joinParameterName(ns, "pose_loss"));
pose_covariance = fuse_core::getParam(interfaces, fuse_core::joinParameterName(ns, "pose_covariance"),
@@ -101,7 +103,8 @@ struct TransformSensorParams : public ParameterBase
int queue_size{ 10 };
std::vector<std::string> transforms;
std::string target_frame;
std::string estimation_frame;
std::string base_frame;
std::vector<std::string> estimation_frames;
std::vector<size_t> position_indices;
std::vector<size_t> orientation_indices;
fuse_core::Loss::SharedPtr pose_loss;
4 changes: 3 additions & 1 deletion fuse_models/include/fuse_models/transform_sensor.hpp
@@ -142,7 +142,9 @@ class TransformSensor : public fuse_core::AsyncSensorModel

std::unique_ptr<tf2_ros::Buffer> tf_buffer_;
std::unique_ptr<tf2_ros::TransformListener> tf_listener_;
std::set<std::string> transforms_of_interest_;
std::set<std::string> fiducial_frames_;
std::set<std::string> estimation_frames_;
std::vector<std::array<double, 6>> pose_covariances_;

rclcpp::Subscription<MessageType>::SharedPtr sub_;

2 changes: 1 addition & 1 deletion fuse_models/src/odometry_3d_publisher.cpp
@@ -441,7 +441,7 @@ void Odometry3DPublisher::predict(tf2::Transform& pose, nav_msgs::msg::Odometry&
geometry_msgs::msg::AccelWithCovarianceStamped acceleration_output,
bool latest_covariance_valid) const
{
double const dt = to_predict_to.seconds() - rclcpp::Time(odom_output.header.stamp).seconds();
double const dt = std::min(to_predict_to.seconds() - rclcpp::Time(odom_output.header.stamp).seconds(), 0.);
// Convert pose in Eigen representation
fuse_core::Vector3d position;
fuse_core::Vector3d velocity_linear;
68 changes: 55 additions & 13 deletions fuse_models/src/transform_sensor.cpp
@@ -48,6 +48,8 @@
#include <pluginlib/class_list_macros.hpp>
#include <rclcpp/rclcpp.hpp>
#include <stdexcept>
#include <string>
#include <tf2/LinearMath/Transform.hpp>
#include <tf2_geometry_msgs/tf2_geometry_msgs.hpp>

// Register this sensor model with ROS as a plugin.
@@ -102,12 +104,32 @@ void TransformSensor::onInit()

for (auto const& name : params_.transforms)
{
transforms_of_interest_.insert(name);
fiducial_frames_.insert(name);
}

if (params_.estimation_frame.empty())
if (params_.estimation_frames.empty())
{
throw std::runtime_error("No estimation frame specified.");
throw std::runtime_error("No estimation frames specified.");
}

for (auto const& name : params_.estimation_frames)
{
estimation_frames_.insert(name);
}

if (params_.pose_covariance.size() != 6 * estimation_frames_.size())
{
throw std::runtime_error("Must provide 6 `pose_covariance` values per estimation frame");
}

for (std::size_t i = 0; i < estimation_frames_.size(); ++i)
{
pose_covariances_.emplace_back();
auto& cur_entry = pose_covariances_.back();
for (std::size_t j = 0; j < 6; ++j)
{
cur_entry[j] = params_.pose_covariance[(i * 6) + j];
}
}

tf_buffer_ = std::make_unique<tf2_ros::Buffer>(clock_);
@@ -137,7 +159,9 @@ void TransformSensor::process(MessageType const& msg)
std::string const& parent_tf_name = transform.header.frame_id;

std::string const& child_tf_name = transform.child_frame_id;
bool const child_of_interest = transforms_of_interest_.find(child_tf_name) != transforms_of_interest_.end();
bool const child_of_interest = fiducial_frames_.find(child_tf_name) != fiducial_frames_.end();
auto const parent_it = estimation_frames_.find(parent_tf_name);
bool const parent_of_interest = parent_it != estimation_frames_.end();

if (!child_of_interest)
{
@@ -149,14 +173,15 @@
// we must either have april tag -> estimation frame or estimation frame -> april tag tf
if (child_of_interest)
{
if (parent_tf_name != params_.estimation_frame)
if (!parent_of_interest)
{
// we don't care about this transform, skip it
RCLCPP_DEBUG(logger_, "Ignoring transform from %s to %s", transform.header.frame_id.c_str(),
child_tf_name.c_str());
continue;
}
}
std::size_t estimation_index = std::distance(estimation_frames_.begin(), parent_it);
RCLCPP_DEBUG(logger_, "Got transform of interest from %s to %s", transform.header.frame_id.c_str(),
child_tf_name.c_str());
// Create a transaction object
@@ -165,10 +190,9 @@

tf2::Transform net_transform;
tf2::fromMsg(transform.transform, net_transform);
std::string target_frame_name;
if (!params_.target_frame.empty())
{
target_frame_name = params_.target_frame + "_" + child_tf_name;
std::string target_frame_name = params_.target_frame + "_" + child_tf_name;
tf2::Transform april_to_target;
try
{
@@ -187,17 +211,35 @@
}
net_transform = net_transform * april_to_target.inverse();
}
else
{
target_frame_name = "";
}

// Create the pose from the transform
// we want a measurement from the april tag (transform of interest) to some reference frame
// if we have the opposite, invert it and use that
geometry_msgs::msg::PoseWithCovarianceStamped pose;
pose.header = transform.header;
pose.header.frame_id = parent_tf_name;
// transform to base frame if it is defined
if (!params_.base_frame.empty())
{
tf2::Transform base_to_estimation;
try
{
tf2::fromMsg((*tf_buffer_)
.lookupTransform(params_.base_frame, parent_tf_name,
rclcpp::Time(transform.header.stamp.sec - 1, transform.header.stamp.nanosec),
params_.tf_timeout)
.transform,
base_to_estimation);
}
catch (...)
{
// tf2 throws a bunch of different exceptions that don't inherit from one base, just skip (this will happen for
// at least 1 second on startup)
continue;
}
net_transform = base_to_estimation * net_transform;
pose.header.frame_id = params_.base_frame;
}
pose.pose.pose.orientation.w = net_transform.getRotation().w();
pose.pose.pose.orientation.x = net_transform.getRotation().x();
pose.pose.pose.orientation.y = net_transform.getRotation().y();
@@ -207,9 +249,9 @@
pose.pose.pose.position.z = net_transform.getOrigin().z();

// TODO(henrygerardmoore): figure out better method to set the covariance
for (std::size_t i = 0; i < params_.pose_covariance.size(); ++i)
for (std::size_t i = 0; i < pose_covariances_[estimation_index].size(); ++i)
{
pose.pose.covariance[i * 7] = params_.pose_covariance[i];
pose.pose.covariance[i * 7] = pose_covariances_[estimation_index][i];
}

bool const validate = !params_.disable_checks;