
API suggestions #46

Open
devrite opened this issue Jan 9, 2024 · 2 comments

Comments

@devrite

devrite commented Jan 9, 2024

Hi everyone,

thanks for the work, and also for avoiding some pitfalls of the image_transport API, which makes things easier for language bindings and offline usage.

Some comments below. Let me know what you think, whether anything mentioned is wrong, and which points you consider worth adopting.

You already provide the possibility to convert to generic messages. It may be possible to remove the sub/adv logic from the plugins altogether, or even to implement a single message type for all compressions:

  • header
  • stored fields and other necessary info, possibly the original fields so the cloud can be converted back as closely as possible to the original
  • fmt/compression name (identifying which codec plugin needs to handle it or which codec plugin to load)
  • codec flags + codec header, data blob (all in one blob or as three fields)

This could largely be handled by the existing compressed message already; a rough sketch of such a message follows.
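Purely as an illustration (the type and field names below are mine, not from point_cloud_transport), such a single generic message could look roughly like the following struct, written as the C++ type a corresponding .msg would generate:

#include <cstdint>
#include <string>
#include <vector>

#include <std_msgs/msg/header.hpp>
#include <sensor_msgs/msg/point_field.hpp>

// Hypothetical single message covering all compressions; illustrative only.
struct GenericCompressedPointCloud2
{
  std_msgs::msg::Header header;                      // original stamp / frame
  std::vector<sensor_msgs::msg::PointField> fields;  // original field layout, to restore the cloud as closely as possible
  uint32_t width;                                    // original dimensions
  uint32_t height;
  std::string format;                                // codec name, selects the codec plugin on decode
  std::vector<uint8_t> codec_header;                 // codec flags / codec-specific parameters
  std::vector<uint8_t> data;                         // compressed blob
};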

I still see that a fixed node type is being used in the API instead of node interfaces, which requires a per-node-type API:

POINT_CLOUD_TRANSPORT_PUBLIC
Publisher create_publisher(
  std::shared_ptr<rclcpp::Node> node,
  const std::string & base_topic,
  rmw_qos_profile_t custom_qos = rmw_qos_profile_default,
  const rclcpp::PublisherOptions & options = rclcpp::PublisherOptions());
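A node-type-agnostic variant could instead accept only the node interfaces the publisher actually needs. A rough sketch (not the current API; the exact set of interfaces is an assumption):

POINT_CLOUD_TRANSPORT_PUBLIC
Publisher create_publisher(
  rclcpp::node_interfaces::NodeParametersInterface::SharedPtr node_parameters,
  rclcpp::node_interfaces::NodeTopicsInterface::SharedPtr node_topics,
  const std::string & base_topic,
  rmw_qos_profile_t custom_qos = rmw_qos_profile_default,
  const rclcpp::PublisherOptions & options = rclcpp::PublisherOptions());

That way rclcpp::Node, rclcpp_lifecycle::LifecycleNode and component nodes could all be used without a per-node-type overload.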

What is nice is that you can already do generic publishing, which is still lacking in image_transport as far as I can see.

To summarize, any transport/compression package in ROS should get as many of these as possible:

  • Node type agnostic
  • Separate publisher / sub from compression (partially done)
    • No plugin-specific publisher or subscriber if not needed
      • Single message
      • Or generic publish / sub and (de)serialization handling as part of the plugin, or the library just calling the codec plugin (see the sketch after this list).
        • The example code for generic publishing can be hidden in the library (I believe), reducing boilerplate that looks the same everywhere.
  • Make the codec the actual plugin so it is of use in offline scenarios or for language bindings
  • Make it possible to configure (via a param) a preferred plugin to choose the best/fastest implementation to be used (CPU, GPU, ...)
    • With pub/sub plugins (live) this can be handled via remapping already. Offline there is no automatic loading based on the topic type.
    • At node level / topic level and per direction (encode or decode), to handle resource limits.
  • Support streaming codecs if necessary for point clouds (already possible via pub/sub plugins) but also in the codec API
    • Flushing
    • Streaming may not be 1:1 msg per call (pipelines may need to be filled, may generate multiple messages/packets, ...).
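To make the generic publish/sub point concrete, here is a minimal sketch of how the library could hide type-erased publishing behind the codec plugin. The function names are illustrative; only the rclcpp calls (create_generic_publisher, GenericPublisher::publish) are existing API:

#include <memory>
#include <string>
#include <rclcpp/rclcpp.hpp>

// Create a type-erased publisher for whatever message type the codec plugin
// reports, so the library has no compile-time dependency on that type.
std::shared_ptr<rclcpp::GenericPublisher> advertise_encoded(
  rclcpp::Node & node,
  const std::string & topic,
  const std::string & type_name)   // e.g. reported by the codec plugin at init
{
  return node.create_generic_publisher(topic, type_name, rclcpp::SensorDataQoS());
}

// Publish the serialized output of the codec plugin directly.
void publish_encoded(
  rclcpp::GenericPublisher & pub,
  const rclcpp::SerializedMessage & encoded)
{
  pub.publish(encoded);
}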

Depending on which points you want to fulfill, I would consider them early on, as some will probably require incompatible APIs and break things if implemented later on.

Personally, I would at least make it node agnostic and try to implement it as generic pub/sub or with a single message type.

Best regards!

@john-maidbot
Collaborator

Hello, sorry for the delayed reply. And thank you for your thorough suggestions!

  • node type agnostic seems like a very reasonable change.
  • w.r.t. having a single message to separate pub/sub from compression, I'm concerned that this could limit future plugins that may need some field that is not provided by the current compressed point cloud message. And adding fields in the future would be a breaking change, which should be avoided at all costs. But if people feel it's already flexible enough, this change would definitely simplify things a bit. @ahcorde do you have any thoughts on this point?

Make the codec the actual plugin so it is of use in offline scenarios or for language bindings

  • how is this different from what we already do?

Make it possible to configure (via a param) a preferred plugin to choose the best/fastest implementation to be used

  • atm users can already choose a preferred transport by subscribing to it, right? Maybe I am misunderstanding what you are suggesting though 🤔 can you provide an example please?
  • sorry for my ignorance, what do you mean by "streaming codecs"? Do you mean compression that makes use of message sequences rather than individual messages?

@devrite
Author

devrite commented Feb 5, 2024

Make the codec the actual plugin so it is of use in offline scenarios or for language bindings.

Yours is already there; I just referenced it for completeness. The only thing is that it could be separated into multiple plugins (one with the publisher/sub if needed, and the other being just the codec). In that case it would need to support a single message, or accept serialized messages (decode) and return serialized messages (encode) in the API, so the interfaces stay codec agnostic.

That would make it easier to use from Python or to write language bindings in general, and would support the offline use case in a straightforward way. It is at least possible to do that now (as you are not required to create the sub/pub), if I am not mistaken. For reference, in image_transport this is not the case.
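As a sketch of what such a codec-only plugin interface could look like (class and method names are hypothetical; only rclcpp::SerializedMessage is an existing type):

#include <rclcpp/serialized_message.hpp>

// Hypothetical codec-only interface: works purely on serialized messages so it
// can be driven offline or through language bindings, with no pub/sub involved.
class PointCloudCodec
{
public:
  virtual ~PointCloudCodec() = default;

  // Encode a serialized sensor_msgs/msg/PointCloud2 into the transport-specific message.
  virtual rclcpp::SerializedMessage encode(const rclcpp::SerializedMessage & raw_cloud) = 0;

  // Decode the transport-specific message back into a serialized PointCloud2.
  virtual rclcpp::SerializedMessage decode(const rclcpp::SerializedMessage & compressed) = 0;
};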

w.r.t. having a single message to separate pub/sub from compression

Yes, for implementations it means they would have to encode their own fields either in a separate fields/codec-header field or just store them in the data blob as part of their serialization/encoding/decoding. It makes things less direct and less inspectable, of course, and may introduce a small overhead. But if I think of most video codecs, that is what they do when they break the stream down into (network) messages: there is a container format (AVI, MKV, ...) or a transport format (RTP etc.). If I think of image_transport/CompressedImage, it already has a codec string that allows integrating PNG/JPEG and could theoretically integrate anything. You do lose the ability to directly inspect special fields, though, if they are not encoded into such a readable string but instead into the data blob.

But you can still do your own messages. In that case you would only work with serialized messages at the interfaces. One would then also not need any pub/sub implementation in the plugins, but it would limit each plugin to one message channel. Of course one could also make it a vector of messages with different topics/message type ids (returned upon init or so), as sketched below.

If you do both, then essentially one can decide to use your message or define their own, as both are generic at the top level.

If there is a decision to do at least one of those, I don't have a strong opinion on whether we should do both. Either way, once we have generic messages at the interface, we would not be able to enforce a single message for all plugins anyway.
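For the multi-channel variant, the plugin could simply declare its topics and types at init, and the library would create generic publishers for them. A hypothetical sketch; none of these names exist in point_cloud_transport:

#include <string>
#include <vector>

// Hypothetical channel declaration returned by a plugin at init time.
struct ChannelInfo
{
  std::string topic_suffix;   // appended to the base topic, e.g. "/compressed"
  std::string message_type;   // full type name used to create a generic publisher
};

class PointCloudEncoderPlugin  // illustrative only
{
public:
  virtual ~PointCloudEncoderPlugin() = default;
  virtual std::vector<ChannelInfo> declared_channels() const = 0;
};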

atm users can already choose a preferred transport by subscribing to it, right? Maybe I am misunderstanding what you are suggesting though 🤔 can you provide an example please?

Yep, based on remappings basically. If you have only one message type, then you would either still have multiple topic names, or implement loading just the configured plugin (and possibly a topic name, if you want to load multiple). Same on the sub side.

In image_transport it looks currently like this:

  • image/compressed: CompressedImage
  • HW accelerated image/compressed_accelerated: CompressedImage

In order to connect the correct plugins, you have to remap on the sub side even though they are the same type, and set the param for the plugin to use. The advantage of this approach is that you can still connect to both, in case performance degrades with multiple subs to the same accelerated implementation.

Which is fine, but then the topic name carries the plugin implementation instead of just the information about the message type. For recordings this is a little less obvious if you do not remap and the person using your bag does not have that plugin.

For an offline use case it would be nice to have an API that selects a plugin based on the message type, returns some plugin able to decode that type, and offers the possibility to select a specific one via an argument.
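Such an offline lookup could be as small as this declaration (reusing the hypothetical PointCloudCodec interface sketched above; nothing here is an existing point_cloud_transport API):

#include <memory>
#include <optional>
#include <string>

// Return a codec able to decode the given message type (e.g. read from bag
// metadata); "preferred" optionally selects a specific implementation such as
// a GPU-accelerated variant.
std::shared_ptr<PointCloudCodec> find_decoder(
  const std::string & message_type,
  const std::optional<std::string> & preferred = std::nullopt);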

sorry for my ignorance, what do you mean by "streaming codecs"? Do you mean compression that makes use of message sequences rather than individual messages?

Yep. I don't see much of a use case here until there is a plugin that can do it for sequences instead of "static" PCL maps. I know there are libs/servers which sequentially stream in PCL data from low to high quality, but they do not really work on sequences, rather on huge static maps, and they would usually use an equivalent storage and renderer on the client side instead of a huge PCL message. So my knowledge of PCL sequence compression is rather limited. In this case it might be better to just use image compression on range/intensity images?
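If streaming ever becomes relevant, the codec API difference would mainly be that one call can emit zero or more packets and that there has to be a flush. A hedged sketch, with all names hypothetical:

#include <vector>
#include <rclcpp/serialized_message.hpp>

// Hypothetical streaming-capable encoder: each input cloud may yield 0..N
// packets (the pipeline has to fill up first), and flush() drains whatever is
// still buffered at end of stream.
class StreamingPointCloudEncoder
{
public:
  virtual ~StreamingPointCloudEncoder() = default;

  virtual std::vector<rclcpp::SerializedMessage> encode(
    const rclcpp::SerializedMessage & raw_cloud) = 0;

  virtual std::vector<rclcpp::SerializedMessage> flush() = 0;
};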


Besides, I do not know if there are any accelerated implementations for PCL compression, so I see this as lower priority since it can be done via remapping if needed.

It is only that I see coding from the input/output side (e.g., these "messages" have the following providers/users/implementations and will be supplied at that interface (topic)), rather than "that topic uses that coding and that implementation". If you just see the topic as the interface, then it could be nicer to make only the information of the interface visible, i.e., the associated message/content rather than the implementation. But it is a fair compromise to leave it as is.

But it would affect the API or the usage via params. So, if one or the other is preferred, then you would want to make the decision early.
