My general opinion (feel free to challenge) is that we should strive to map UAVCAN exchanges to REST exchanges as precisely as possible, avoiding unnecessary aggregations or other deviations unless there are strong reasons to do otherwise. For example, in the case of bus monitoring and message subscription, we have to use a different exchange logic on the REST side due to performance issues, so this would be an exception. I don’t foresee any performance-related issues in the GRV, so we should follow the exchange model of UAVCAN as closely as we can.
I think it does matter. The spec makes no assumptions about the idempotency of register write operations, which gives applications a lot of flexibility. If we followed the full-retry policy you described, we would be implicitly assuming that register writes are idempotent. So I suggest we keep track of which writes have failed and then ask the user whether we should retry. Would that be possible?
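To make the suggestion concrete, here is a minimal sketch of a batch write that never retries on its own and instead reports failures for the user to act on. The `write_register` callable is a hypothetical stand-in for whatever transport call ends up being used; it is not a real pyuavcan API.

```python
def write_registers(write_register, assignments):
    """Attempt each register write once; return the (name, value) pairs that failed.

    write_register(name, value) is a hypothetical transport call that raises
    TimeoutError when no response arrives.
    """
    failed = []
    for name, value in assignments:
        try:
            write_register(name, value)
        except TimeoutError:
            # Do not retry automatically: the write may or may not have taken
            # effect on the node, and the spec does not guarantee idempotency.
            failed.append((name, value))
    return failed
```

The returned list can then be shown to the user with a "retry these writes?" prompt, keeping the idempotency decision with the human.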
I would like to point out that the firmware update process is a bit hard to track because it inverts the control: once you requested a node to update its firmware, it takes over as the logical master of the following exchange:
- A unit U (say, Yukon, an autopilot, a companion computer, whatever) calls uavcan.node.ExecuteCommand on a node N, requesting it to update its software. The request contains the name of the image file.
- The node N receives the request, sends a response, and (unless the request is for some reason denied) takes over from here. The unit U at this point is no longer in charge of the process.
- The node N repeatedly requests uavcan.file.Read on the unit U, using the file name supplied in the first step, until the file is transferred.
- The node N makes sure things are okay and starts executing the new software.
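The steps above can be sketched as a toy model that makes the inversion of control explicit: U issues one command, and from then on N drives the exchange by pulling the file from U. All class and method names here are illustrative, not the real pyuavcan API.

```python
class FileServer:
    """Plays the role of unit U: it only answers uavcan.file.Read requests."""

    def __init__(self, image: bytes):
        self.image = image

    def read(self, offset: int, size: int) -> bytes:
        # U does not drive the transfer; it merely serves whatever N asks for.
        return self.image[offset:offset + size]


class UpdatableNode:
    """Plays the role of node N, the logical master after step 2."""

    def __init__(self):
        self.firmware = b""

    def execute_command(self, server: FileServer, chunk: int = 256) -> str:
        # After accepting the command, N pulls the file at its own pace.
        data, offset = b"", 0
        while True:
            piece = server.read(offset, chunk)
            if not piece:
                break
            data += piece
            offset += len(piece)
        self.firmware = data
        return "accepted"
```

Note that after `execute_command` returns its confirmation, U has no further role except answering read requests, which is exactly why U cannot directly observe the progress.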
The inversion of control that occurs at step 2 makes progress tracking slightly non-trivial. Only the beginning of the update process is easy to detect: the target node N responds with a confirmation saying that okay, it is going to update its firmware. Everything that follows can be observed only indirectly:
- Monitor whether the target node has restarted (but it’s not required to – theoretically, some nodes may implement some form of hot-swapping, updating themselves as they go).
- Monitor the firmware file read requests. You can detect when the node whose ID matches N reads the specified firmware file. Knowing the size of the file, you can estimate how far along the update process the node is. However, the node is not required to read the file sequentially, which is a further complication.
- Monitor diagnostic messages (uavcan.diagnostic.Record) from the node N. These are not formalized, so they are hard to track automatically (that is, unless we build some kind of weak AI capability into Yukon to interpret the meaning of log messages).
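If the read-monitoring approach were ever attempted, here is a minimal sketch of how progress could be estimated despite non-sequential reads: count the distinct bytes observed rather than trusting the highest offset. The names are made up for illustration.

```python
class ReadProgress:
    """Estimate firmware transfer progress from observed uavcan.file.Read traffic."""

    def __init__(self, file_size: int):
        self.file_size = file_size
        self.covered = set()  # distinct byte offsets seen so far

    def on_read(self, offset: int, length: int) -> None:
        # Record which bytes this read request covered; out-of-order and
        # duplicate reads are handled naturally by the set.
        self.covered.update(range(offset, min(offset + length, self.file_size)))

    @property
    def fraction(self) -> float:
        return len(self.covered) / self.file_size if self.file_size else 1.0
```

A per-byte set is wasteful for realistically sized images; a production version would merge `(offset, length)` intervals instead, but the idea is the same. None of this changes the basic point: the estimate remains indirect and heuristic.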
Would it not be easier to leave this to the user, at least for now? The user (being capable of abstract thinking and understanding, seeing as it’s likely to be a human being) should be able to deduce what’s happening by looking at the available controls & indicators.
As for the other ongoing procedures, I think we could generalize this into a simple display, like a set of tiny indicators showing the currently pending service requests. Say you send uavcan.node.ExecuteCommand to a node N; its status display in Yukon lights up an indicator showing that the node has yet to respond to such and such request. When the node responds, the indicator disappears. If the request times out, the indicator turns red or something. This feature is by no means critical, and I weakly suggest pushing back on it.
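For what it’s worth, the indicator set described above amounts to very little state; here is a rough sketch under the assumption that indicators are keyed by (node ID, service name). All names are illustrative.

```python
class PendingRequests:
    """Track outstanding service requests for a status-indicator display."""

    def __init__(self, timeout: float = 1.0):
        self.timeout = timeout
        self._deadlines = {}  # (node_id, service) -> absolute deadline

    def sent(self, node_id: int, service: str, now: float) -> None:
        # A request went out: light up an indicator with a deadline.
        self._deadlines[(node_id, service)] = now + self.timeout

    def responded(self, node_id: int, service: str) -> None:
        # The response arrived: the indicator disappears.
        self._deadlines.pop((node_id, service), None)

    def indicators(self, now: float) -> dict:
        # Neutral while waiting, red once the deadline has passed.
        return {key: ("red" if now > deadline else "pending")
                for key, deadline in self._deadlines.items()}
```

The display layer would simply render `indicators()` on every refresh, which keeps the feature cheap enough that it could be added later without touching the exchange logic.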
If unsure, let’s postpone this.