
Data type extensibility and composition

The ability to mutate data type definitions as the systems they describe evolve is essential because here on Earth there is no such thing as a stable design unless it is obsolete. Being designed for high-integrity deeply embedded systems where a software update may be costly to execute, UAVCAN has to provide developers with means of modifying data type definitions without breaking wire compatibility with existing deployments.

The current v1.0-alpha specification is a huge step forward compared to the original v0 because it supports renaming, extension, and truncation of data type definitions without breaking backward compatibility (as explained with examples in the UAVCAN v1 crash course and discussed in UAVCAN v1.0 and ArduPilot). Extension and truncation combined enable one to replace existing fields with new ones, albeit at the cost of increased footprint (the space occupied by the removed field has to be padded out). Overall, the available data type modification tools are similar to those offered by conventional non-real-time protocols such as Cap’n’Proto or CORBA CDR, although the details differ due to the hard real-time constraints of DSDL.

While working on my write-up on the recommended data type design practices for our drone-minded colleagues I realized that the above-described facilities may not be sufficient. There is a problem that follows from our overly direct approach to serialization.

The serialization offset of a field depends on the set of fields that precede it, which in turn is determined by the preceding composite-typed fields. Modifying any of those composites may alter the placement of the following fields, possibly ones originating from other composites. This introduces undesirable coupling across independent data types, rendering every involved data type except the topmost one non-extensible. Example:

# X.1.0
bool a

# X.1.1
bool a
bool d

# Y.1.0
bool b
bool c

# Z.1.0
X.1.0 x
Y.1.0 y

# Z.1.1
X.1.1 x  # <-- problem
Y.1.0 y

The above-defined versions of Z serialize as:

| Type  | Field pattern |
|-------|---------------|
| Z.1.0 | abc           |
| Z.1.1 | adbc          |

Despite the modification to X.1.1 being backward-compatible by virtue of the implicit extension/truncation rules, the layout of the top-level container Z is broken.

The coupling can be eliminated by serializing composites orthogonally:


Diagram: top half – v1.0, bottom half – v1.1; the rightmost column shows orthogonal serialization; the case for more than two nested composites would be difficult to illustrate on a flat medium.

Such orthogonal axes can be mapped onto the final linear representation by augmenting it with the length information per axis. In the following example for v1.1, the length augmentation information is shown as #:

| Type  | Pattern |
|-------|---------|
| Z.1.0 | #a#bc   |
| Z.1.1 | #ad#bc  |

Such augmentation allows one to reconstruct orthogonal dimensions without knowledge of the specific data type definition, thus eliminating the undesirable coupling.
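To make this concrete, here is a minimal Python sketch of length-prefixed (delimited) composite serialization; the function names are mine and the little-endian uint32 prefix is an assumption for illustration. Note how a decoder can skip over a nested composite without any knowledge of its type:

```python
import struct

def emit_delimited(nested: bytes) -> bytes:
    """Serialize a nested composite with a uint32 length prefix (little-endian)."""
    return struct.pack('<I', len(nested)) + nested

def take_delimited(buf: bytes, offset: int = 0) -> tuple[bytes, int]:
    """Extract one nested composite and return it with the next read offset.
    The caller needs no knowledge of the nested type to skip past it."""
    (length,) = struct.unpack_from('<I', buf, offset)
    start = offset + 4
    return buf[start:start + length], start + length

# Z.1.1 = X.1.1 ('ad') followed by Y.1.0 ('bc'), each in its own segment:
z_1_1 = emit_delimited(b'ad') + emit_delimited(b'bc')
x_segment, next_offset = take_delimited(z_1_1)
# An old decoder expecting X.1.0 reads 'a' and truncates the extra 'd';
# 'bc' still starts exactly where the length prefix says it does.
y_segment, _ = take_delimited(z_1_1, next_offset)
```

The key property is that the offset of `y` no longer depends on the definition of `X`, which is exactly the decoupling described above.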

The idea is trivial and it can be found in other serialization protocols, although sometimes the length-based augmentation is replaced with direct addressing of each axis relative to the origin of the serialization buffer – the approaches are functionally equivalent. Its disadvantage is the length augmentation overhead introduced per composite-typed field. The practical significance of the overhead is the highest for data types whose footprint is low. At the same time, such data types tend to be simple, and therefore the probability of such a data type requiring modification is low, which reduces the value of orthogonality. An obvious way to mitigate the length overhead is to provide the data type designer with an option to suppress orthogonal serialization at the data type definition time. In the following example, type Y is serialized non-orthogonally (inline):

| Type  | Pattern |
|-------|---------|
| Z.1.0 | #abc    |
| Z.1.1 | #adbc   |

Similarities can be found with some OOP languages that distinguish between value types and reference types.

The length augmentation is a field that defines the size of the following segment (axis) in the serialization buffer. Its range should support the largest sensible object size, and the type should be a standard-size integer, which leaves uint32 and uint64. A serialized representation exceeding 4 GiB is unlikely to ever be encountered in practice, but for the sake of extensibility it is rational to reserve the upper half of the range [2^31, 2^32-1], where the most significant bit is set, so that 64-bit length fields can be supported in the future (similar to the UTF-8 multi-byte encoding format).
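A sketch of what reading such a prefix could look like (a hypothetical helper, not part of DSDL), with the reserved upper half rejected as described:

```python
import struct

RESERVED_MASK = 0x8000_0000  # MSb set: reserved for future 64-bit lengths

def read_length_prefix(buf: bytes, offset: int = 0) -> tuple[int, int]:
    """Read a little-endian uint32 length prefix; return (length, next offset)."""
    (value,) = struct.unpack_from('<I', buf, offset)
    if value & RESERVED_MASK:
        raise ValueError('reserved length prefix in [2**31, 2**32-1]')
    return value, offset + 4
```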

Then, a composite-typed field like X.1.1 x is a DSDL equivalent of:

uint8[<2**32] axis

That is unless the type of the field is defined as non-extensible. Borrowing liberally from the widely adopted terminology in the OOP world, the non-orthogonality of a type can be declared with the help of a directive like @final. Example:

# Y.1.0
@final
bool b
bool c

Here, the practical effect of the declaration is not that the type is non-extensible or immutable, but rather that the DSDL language will be unable to guarantee backward compatibility automatically, requiring the developer to take the necessary precautions manually. The existing standard types available under uavcan.primitive and uavcan.si will require such annotation.

Besides the above-listed concerns, the orthogonal serialization is vital for implementing the much-discussed structured data type design approach where the common functionality that is relevant for different specialized types is extracted into a base supertype. As a made-up example, consider a pair of types that model the status of a battery management system and an on-board electric turbogenerator. Imagine that the fields describing the current output power (watt) and the available energy reserves (joule) are extracted into a base supertype. Once such extraction is done, the base supertype (which may be highly complex) becomes impossible to extend without breaking the wire compatibility of every subtype, despite the pledge to support extensibility through implicit extension and truncation. The orthogonal serialization solves that problem.

I am currently evaluating how critical this addition is considering the ongoing efforts to define an industry-specific messaging standard based on UAVCAN. The effort required to implement this is low.

Can you provide some examples of this?

As with all things in UAVCAN, we must provide some concrete discussion of the effects on our worst-case transport, UAVCAN/CAN 2.0. Specifically, non-final types would incur 63% overhead if we required a 4-byte length field, which seems unacceptable. Taking a hint from UTF-8, but discarding the need to remain compatible with 7-bit ASCII, we can use a single length byte to encode lengths of up to 16 bytes (128 bits) with no additional length bytes, scaling all the way up to 8 petabytes using 8 length bytes:

| width | byte 0    | byte 1    | byte 2    | byte 3    | byte 4    | byte 5    | byte 6    | byte 7    |
|-------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| 7     | 0xxx xxxx |
| 13    | 110x xxxx | xxxx xxxx |
| 20    | 1110 xxxx | xxxx xxxx | xxxx xxxx |
| 27    | 1111 0xxx | xxxx xxxx | xxxx xxxx | xxxx xxxx |
| 34    | 1111 10xx | xxxx xxxx | xxxx xxxx | xxxx xxxx | xxxx xxxx |
| 41    | 1111 110x | xxxx xxxx | xxxx xxxx | xxxx xxxx | xxxx xxxx | xxxx xxxx |
| 48    | 1111 1110 | xxxx xxxx | xxxx xxxx | xxxx xxxx | xxxx xxxx | xxxx xxxx | xxxx xxxx |
| 56    | 1111 1111 | xxxx xxxx | xxxx xxxx | xxxx xxxx | xxxx xxxx | xxxx xxxx | xxxx xxxx | xxxx xxxx |

giving us lengths of (if I did my math right):

| bit width | len bytes | max bytes |
|-----------|-----------|-----------|
| 7         | 1         | 16        |
| 13        | 2         | 1024      |
| 20        | 3         | 128 kB    |
| 27        | 4         | 16 MB     |
| 34        | 5         | 2 GB      |
| 41        | 6         | 256 GB    |
| 48        | 7         | 32 TB     |
| 56        | 8         | 8 PB      |
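The scheme in the tables fits in a few lines of code; here is a Python sketch (my own illustration of the proposal, not normative). The encoded value is a bit count, consistent with the 16-bytes-in-7-bits arithmetic above:

```python
WIDTHS = (7, 13, 20, 27, 34, 41, 48, 56)  # payload bits per total byte count

def encode_length(value: int) -> bytes:
    """UTF-8-style encoding: the number of leading 1 bits in the first byte
    gives the number of extra length bytes that follow."""
    for n, width in enumerate(WIDTHS, start=1):
        if value < (1 << width):
            if n == 1:
                return bytes([value])                   # 0xxx xxxx
            if n == 8:
                return b'\xff' + value.to_bytes(7, 'big')
            first = ((0xFF << (8 - n)) & 0xFF) | (value >> (8 * (n - 1)))
            rest = value & ((1 << (8 * (n - 1))) - 1)
            return bytes([first]) + rest.to_bytes(n - 1, 'big')
    raise ValueError('length too large for the 56-bit encoding')

def decode_length(data: bytes) -> tuple[int, int]:
    """Return (value, number of length bytes consumed)."""
    first = data[0]
    n, mask = 0, 0x80
    while mask and (first & mask):    # count leading ones
        n, mask = n + 1, mask >> 1
    if n == 0:
        return first, 1
    if n == 8:
        return int.from_bytes(data[1:8], 'big'), 8
    value = first & ((1 << (8 - n - 1)) - 1)
    for b in data[1:n]:
        value = (value << 8) | b
    return value, n
```

A one-byte prefix thus covers small messages with minimal overhead while the format remains open-ended at the top.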

The problem is that we are trading data coupling for control coupling. In a hard real-time system the latter is less desirable and can even be fatal. As such, I wonder if we should invert this to be “deterministic by default” and require data type authors to specify not-final or mutable to enable this behavior:

# Y.1.0
@mutable
bool b
bool c

The type of control coupling I’m describing already exists in UAVCAN where we use variable-length arrays, but these are bounded, so the effects on a remote system are also bounded and a provable analysis of the system is still possible. Allowing up to 8 petabytes of additional data would throw any marginal analysis for a system right out the window. It also raises the specter of the protocol becoming split into deterministic and non-deterministic flavors, which I’m solidly against. This suggests that we must take the additional step of requiring type authors to provide an upper bound:

# Y.1.0
@mutable[<=16]
bool b
bool c

This complicates serialization since we must now have the previous version of the type available when serializing the new one, in order to verify that its mutability bounds have not been violated:

# X.1.0
bool a

# X.1.1
@mutable[<=16] # cannot ever change after being defined in 1.0
bool a
bool d

# Y.1.0
bool b
bool c

# Y.1.1
bool b
bool c
bool e # <- illegal since 1.0 was not mutable

# Z.1.0
# implicitly, this is now @mutable[<=16]
X.1.0 x
Y.1.0 y

I’d be interested to hear how you would handle these issues.

I know it is probably too late for this, but have you thought about using a simplified version of some of the ASN.1 encoding rules?
I know ASN.1 is not a very cool standard, but so many of these problems were already solved there. And the “new” Octet Encoding Rules (OER) are a nice compromise between size, encoding/decoding speed, and convenience.

I should have done a better job outlining the current state of the art to back up my reasoning. Let me fix that now. Please bear with me while I re-introduce the basics.


We can distinguish between self-describing and non-self-describing data representations, where “self-describability” means that the encoded data contains the associated metadata describing its semantics. A schema defines the semantics of data separately from the data itself, which is obviously a prerequisite for a non-self-describing format.

Practical formats are located somewhere on the spectrum of self-describability rather than being one of the extremes. Self-describability is commonly leveraged to allow identical data representations to be interpretable using different schema definitions as they evolve.

A popular approach to implementing self-describing binary formats is the so-called tag-length-value (TLV) encoding, where data items are prepended with discriminators describing the semantics and delimiting the items. Other approaches are also used; notably, non-TLV formats where the data is delimited using dedicated control symbols (this approach is used in virtually all human-readable text-based formats).

Just like with self-describability, TLV-encoded formats often include non-TLV artifacts where it makes sense and vice versa. For example, UBJSON or ASN.1’s indefinite length containers use special delimiting symbols instead of length prefixes to facilitate streaming.

One aspect of self-describing representations which is rarely considered in the literature and yet is relevant for highly predictable deterministic systems is the question of redundant states. In the interest of minimizing the state space of a safety-critical system, it is beneficial to eliminate redundant states from data representations, which implies that constructs that affect the serialized representation while not modifying its semantics (like self-description) should be avoided or minimized.

Another issue that is important for robust systems is that of canonical representations (forms). In this write-up, if the mapping from the set of serialized representations to the set of application object states is non-injective (i.e., different serialized representations may describe identical objects), we say that such encoding is non-canonical. Conversely, a canonical encoding yields injective mappings.

Pure canonical non-self-describing encodings are almost never used in practice because practical systems typically require their interfaces to be extensible. To make an interface extensible, one has to amend serialized representations with sufficient metadata that allows an agent that lacks the exactly matching schema definition to interpret the data.

Below I am going to provide a brief overview of several popular binary encoding standards to help everyone understand the position of UAVCAN DSDL compared to similar technologies. In the overview, attribution of properties like self-describability, TLV, etc. is done for the general case and it is recognized that most protocols include edge cases where they exhibit different properties.


ASN.1

Abstract Syntax Notation One is one of the oldest data serialization standards still in wide use today. It is distinguished by its wide scope and complexity. Cryptography expert Bruce Schneier had the following to say on the subject:

The best-known TLV encoding is ASN.1, but it is incredibly complex and we shy away from it.

I am not aware of any application of ASN.1 in safety-critical systems, although it obviously doesn’t mean that there are none.

The ASN.1 standard (maintained by ITU) specifies an extremely capable schema language and a collection of data representation formats, both binary and text-based. Among the binary formats, the following two types are most commonly used:

  • Basic Encoding Rules (BER) – a self-describing non-canonical TLV encoding. There are also two commonly used canonical restricted versions of BER: CER (C – canonical) and DER (D – distinguished) which differ in a couple of inconsequential implementation details.

  • Packed Encoding Rules (PER) – a non-self-describing non-canonical non-TLV encoding. Being non-self-describing, the format relies on an external schema definition. A restricted canonical subset called CANONICAL-PER is also defined. Further, PER is differentiated between ALIGNED, where field boundaries are aligned with byte (octet) boundaries by means of implicit padding (note how this might lead to non-canonical representations unless the padding is well-characterized), and UNALIGNED, which behaves like UAVCAN DSDL by squeezing bits together without regard for byte boundaries. UNALIGNED PER is often referred to as UPER.

Here we are going to focus on PER because it is the closest to the core requirements of UAVCAN. The specification is defined in T-REC-X.691-201508-I!!PDF-E.pdf. Other aspects of ASN.1 are specified separately (all specs are freely available from ITU).

ASN.1 supports data type extensibility with the help of a dedicated extension marker ... which can be added in the field list of a type definition to indicate that more fields may be added in a future version. It is not possible to add the extension marker retroactively without violating wire compatibility. The marker works by inserting a special hidden field before the data type which indicates if any of the extension fields are present. When combined together with variable-length encoding, it leads to highly convoluted binary representations, which are hard to follow manually. This forces one to rely on automatic code generators which suffer from interoperability and correctness issues due to the complexity of the standard.

ASN.1 does not support data inheritance in the conventional sense, although it is possible to express structural subtyping in a manner similar to DSDL unless a TLV encoding is used (TLV makes data aliasing impossible due to its self-describing nature).

ASN.1, being an old standard, encodes multi-byte values in the big-endian order. Modern standards tend to prefer little-endian due to the prevalence of little-endian machines in the world. Some standards like OMG CDR even allow variable endianness (sender-defined), which reduces computational effort but is suboptimal for safety-critical systems due to the above-explained reasons of state-space minimization.

Consistently with the byte order, the bit order is big-endian; that is, MSb has the lowest ordinal.

Let’s examine a simple demo. Here is an ASN.1 definition where Foo1 is to be considered the root type that includes a nested item Baz:

    Baz ::= SEQUENCE {
        qux INTEGER(0..65535)
    }

    Foo1 ::= SEQUENCE {
        foo INTEGER(0..65535),
        bar Baz,
        msg VisibleString
    }

We serialize it using ALIGNED PER:

>>> from hexdump import dump
>>> import asn1tools
>>> foo = asn1tools.compile_files('foo.asn', codec='per')
>>> print(dump(foo.encode('Foo1', {'foo': 0xDEAD, 'bar': {'qux': 0xF00D}, 'msg': 'Hello world'})))
DE AD F0 0D 0B 48 65 6C 6C 6F 20 77 6F 72 6C 64


DE AD                               -- foo     = 0xDEAD
F0 0D                               -- bar.qux = 0xF00D
0B                                  -- String length
48 65 6C 6C 6F 20 77 6F 72 6C 64    -- Hello world

The structure is simple and so is its serialized representation. In fact, it would look identical in DSDL, except that the byte order would be the opposite.

Now, let’s deploy the ASN.1 standard data type extensibility feature by marking Baz extensible:

    Baz ::= SEQUENCE {
        qux INTEGER(0..65535),
        ...                     -- EXTENSION MARKER
    }

    Foo1 ::= SEQUENCE {
        foo INTEGER(0..65535),
        bar Baz,
        msg VisibleString
    }

Repeating the above serialization example (same values), we obtain:

00      -- NEW: Extension flag cleared (MSb), 7 padding bits.
F0 0D   -- The rest is the same
48 65 6C 6C 6F 20 77 6F 72 6C 64

Once again, this time having some extension fields after the extension marker:

    Baz ::= SEQUENCE {
        qux INTEGER(0..65535),
        ...,                        -- EXTENSION MARKER
        u32 INTEGER(0..4294967295), -- New extension fields
        u16 INTEGER(0..65535)
    }

    Foo1 ::= SEQUENCE {
        foo INTEGER(0..65535),
        bar Baz,
        msg VisibleString
    }

Recompile & encode two variants, without u16 and then with it:

>>> foo = asn1tools.compile_files('foo.asn', codec='per')
>>> print(dump(foo.encode('Foo1', {'foo': 0xDEAD, 'bar': {'qux': 0xF00D, 'u32': 0xabcdef12}, 'msg': 'Hello world'})))
DE AD 80 F0 0D 03 00 05 C0 AB CD EF 12 0B 48 65 6C 6C 6F 20 77 6F 72 6C 64
>>> print(dump(foo.encode('Foo1', {'foo': 0xDEAD, 'bar': {'qux': 0xF00D, 'u32': 0xabcdef12, 'u16': 0x7777}, 'msg': 'Hello world'})))
DE AD 80 F0 0D 03 80 05 C0 AB CD EF 12 02 77 77 0B 48 65 6C 6C 6F 20 77 6F 72 6C 64

The resulting representation is suddenly highly complex. In order to retain compatibility with schema definitions devoid of the extension fields, the representation has been extended with detailed information about the new fields; namely, presence markers and length information. The additions shift the encoding along the self-describability spectrum. Explanation:

DE AD       -- Field 'foo' = 0xDEAD
80          -- Extension flag set (MSb), 7 padding bits.
F0 0D       -- Field 'bar.qux' = 0xF00D
03          -- MSb is 0 hence preamble length <= 64 octets; preamble length = 0b000001, decoded as 2; first extension field ('u32') is present
00          -- Second extension field ('u16') is missing, then 7-bit padding
05          -- Next extension length = 5 octets
C0          -- Varint length prefix = 0b11000000; 0b11 = 3, decoded as 4; 6-bit padding
AB CD EF 12 -- Field 'bar.u32' = 0xABCDEF12
0B          -- String length = 11 octets
48 65 6C 6C 6F 20 77 6F 72 6C 64
DE AD       -- Field 'foo' = 0xDEAD
80          -- Extension flag set (MSb), then 7 padding bits.
F0 0D       -- Field 'bar.qux' = 0xF00D
03          -- MSb is 0 hence preamble length <= 64 octets; preamble length = 0b000001, decoded as 2; first extension field ('u32') is present
80          -- Second extension field ('u16') is present, then 7-bit padding
05          -- Next extension length = 5 octets
C0          -- Varint length prefix = 0b11000000; 0b11 = 3, decoded as 4; 6-bit padding
AB CD EF 12 -- Field 'bar.u32' = 0xABCDEF12
02          -- Next extension length = 2 octets
77 77       -- Field 'bar.u16' = 0x7777
0B          -- String length = 11 octets
48 65 6C 6C 6F 20 77 6F 72 6C 64

While preparing the annotations, I consulted with ASN.1 Studio (this is not meant to be an advertisement, I am just unaware of any adequate open-source alternatives).

Google Protobuf

Being a modern standard widely used in general ICT, this one requires no introduction.

Protobuf is a self-describing schema-based TLV-encoded little-endian byte-aligned binary format. Its expressivity is much more limited compared to ASN.1 (somewhere on par with UAVCAN DSDL, perhaps), which allows it to be simple.

In the schema language, each field definition is equipped with an explicit tag. The tag is encoded together with the type information in the TLV stream. The developer may assign tags arbitrarily as long as they are unique. In a TLV format, data type extensibility is trivial to implement – new fields can simply be inserted into the stream at arbitrary locations; by virtue of having unique tags, they will be ignored by applications relying on an older schema where those fields are absent.

Each type, naturally, has an independent set of tag values; therefore, some form of delimited encoding is required (otherwise, tags in nested types would conflict). This is achieved through length prefixes like in my proposal.
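As an illustration (hand-rolled rather than using the protobuf library), the wire format boils down to base-128 varints and tag-length-value triples; a nested message is just a length-delimited field:

```python
def encode_varint(value: int) -> bytes:
    """Protobuf base-128 varint: 7 bits per byte, least-significant group
    first; the MSb of each byte flags that more bytes follow."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def encode_tag(field_number: int, wire_type: int) -> bytes:
    """The 'T' of TLV: key = (field_number << 3) | wire_type."""
    return encode_varint((field_number << 3) | wire_type)

def encode_nested(field_number: int, payload: bytes) -> bytes:
    """Nested messages use wire type 2 (length-delimited): tag, length, bytes."""
    return encode_tag(field_number, 2) + encode_varint(len(payload)) + payload

# Field 1 as a varint with value 150 -- the classic example from the docs:
inner = encode_tag(1, 0) + encode_varint(150)   # b'\x08\x96\x01'
# Wrapping it as field 3 of an outer message adds only a tag and a length:
outer = encode_nested(3, inner)                 # b'\x1a\x03\x08\x96\x01'
```

Note how the nested bytes are embedded verbatim: the length prefix is what prevents the inner tags from colliding with the outer ones.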

I chose to skip constructing an example because the format is trivial and the encoding rules are well-documented; the most relevant part in the context of the current discussion is this:


Cap’n’Proto

Since it is developed by the same author as Google Protobuf, one might expect it to be similar, but it is oddly not so – the serialization format is unorthodox and rather involved (though not nearly as complex as ASN.1 BER/PER).

Cap’n’Proto is a self-describing schema-based non-TLV little-endian 64-bit-aligned (with exceptions) binary format. The bit order is also little-endian, consistent with the byte order. Being non-TLV, the field order is deduced from the schema using sophisticated rules intended to minimize the number of padding bits. Manual serialization/deserialization, much like with ASN.1, is infeasible.

This format is unusual because even though there are field tags in the schema (denoted with @), the encoding is not TLV, and the schema specification requires the tags to form a continuous ordered sequence, thus making it possible to eliminate the tag metadata from the encoded representation by arranging the encoded items in a particular well-defined order. I am not exactly clear on what the motivation was for keeping the tags in the language in the first place – perhaps it’s just to permit the user to add new fields at arbitrary positions instead of only at the end, as any purely tagless language (such as DSDL) would require.

Let us review an example. Suppose we have a basic schema like this defined in file basic.capnp:


struct A {
  foo @0 :UInt16;
  bar @1 :Baz;
  baz @2 :Baz;
}

struct Baz {
  qux @0 :UInt16;
}

The type of interest here is A, which, as you can see, contains two composite-typed fields bar and baz. Let’s generate a sample message where foo=0xdead, bar.qux=0xf00d, baz.qux=0xbeef:

>>> import capnp, basic_capnp
>>> msg = basic_capnp.A.new_message(foo=0xdead, bar=basic_capnp.Baz.new_message(qux=0xf00d), baz=basic_capnp.Baz.new_message(qux=0xbeef))
>>> msg.to_bytes().hex()

In capnp, everything is aligned to one word, which is 8 bytes. Here’s the same byte string split into words:

00 00 00 00 06 00 00 00
00 00 00 00 01 00 02 00
ad de 00 00 00 00 00 00
04 00 00 00 01 00 00 00
04 00 00 00 01 00 00 00
0d f0 00 00 00 00 00 00
ef be 00 00 00 00 00 00

The meaning is as follows:

A “segment” is a fragment of the serialized representation of a message. Segments exist because in capnp the worst-case message footprint is not bounded, so the serializer has to guess beforehand how much memory to allocate (this holds for most general-purpose serialization formats). If the available memory turns out to be insufficient to contain the entire message, the message is split into several segments.

The pointer format is quite convoluted, but there appear to be solid reasons for that – it allows one to manipulate the data without having access to the schema (hence the format is self-describing). Each composite is serialized independently and included in the outer message such that its field offsets are decoupled from those of the enclosing composites.
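For the curious, a struct pointer word can be picked apart by hand. The following sketch decodes the pointers from the dump above (layout per the Cap'n'Proto encoding spec: 2-bit kind, 30-bit signed word offset, 16-bit data section size, 16-bit pointer section size, all little-endian):

```python
def parse_struct_pointer(word_bytes: bytes) -> dict:
    """Decode one 8-byte Cap'n'Proto struct pointer (little-endian)."""
    word = int.from_bytes(word_bytes, 'little')
    offset = (word >> 2) & 0x3FFF_FFFF
    if offset >= 1 << 29:
        offset -= 1 << 30          # the 30-bit offset is signed
    return {
        'kind': word & 0x3,        # 0 means struct pointer
        'offset_words': offset,    # words from the end of this pointer to the target
        'data_words': (word >> 32) & 0xFFFF,
        'pointer_words': (word >> 48) & 0xFFFF,
    }

# The root pointer from the dump: 1 data word (foo), 2 pointers (bar, baz).
root = parse_struct_pointer(bytes.fromhex('0000000001000200'))
# One of the inner pointers: target 1 word away, 1 data word, no pointers.
inner = parse_struct_pointer(bytes.fromhex('0400000001000000'))
```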


OMG CDR

The OMG’s Common Data Representation (CDR) standard was originally defined as a platform-agnostic serialization format for CORBA. CORBA has been shown to be ill-suited for real-time applications (see The Adaptive Communication Environment Object Request Broker paper), which was a contributing factor in the development and standardization of DDS.

Despite a completely different architecture, DDS builds upon some of the core parts of CORBA, such as its IDL and the transfer syntax (i.e., serialization format). DDS extends the IDL with its own specific features and defines several alternative transfer syntaxes for CDR to support extensible data types. The following normative specifications are the most pertinent to this discussion:

Among the standards reviewed in this write-up, DDS CDR (also known as XCDR, X for extended, not to be confused with XDR) is the closest in spirit, feature set, and design objectives to UAVCAN. DDS CDR is significantly more complex and feature-rich than UAVCAN DSDL due to its wider scope and the fact that hard real-time safety-critical applications are not within its scope.

CDR provides the most comprehensive support for data type extensibility along with a set of formalized compatibility criteria and a well-developed theoretical background. The type system defines three type extensibility categories: FINAL (the type is not extensible, like DSDL v0 or @final), APPENDABLE (like DSDL v1 – fields can be added at the end), and MUTABLE (pure self-describing TLV encoding, like Google Protobuf or ASN.1 BER, allows arbitrary modification of the type).

Further, in order to enlarge the set of usage scenarios where different types can form interoperable interfaces, explicit type coercion rules are defined as the so-called Try Construct behavior, which defines one of three possible deserialization failure handling policies:

  • DISCARD – a representation that cannot be deserialized using the available schema invalidates the entity that contains it (the enclosing composite or container, if any); this is the default behavior.

  • USE_DEFAULT – fall back to the default value.

  • TRIM – applies only to variable-length containers, instructing the decoder to discard the excess elements.

A dedicated QoS policy TypeConsistencyEnforcementQosPolicy allows the participants to choose whether to allow automatic coercion of different yet compatible (assignable) types.

The type truncation, subtyping, and extension features are nearly equivalent to those of UAVCAN DSDL. One unbeatable advantage of the XCDR specification though is that it is an endless source of useful information about the world at large, pondering such existential matters like the number of legs a snake has or whether giraffes have scales.

All binary formats defined by XCDR are byte-oriented with liberal use of padding to ensure natural alignment for most members (e.g., a 32-bit integer is guaranteed to be 4-byte aligned, unless it is packed into an opaque container; larger items may have smaller alignment depending on the flavor). Enumeration members are 32-bit unsigned integers by default; variable-size items (like sequences) are prefixed with a fixed-size 16/32-bit unsigned integer length. Padding is well-specified to ensure unequivocal canonical representations. The byte order is either little- or big-endian, selected by the serializer; the bit order is always little-endian.
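The padding rule is the usual round-up-to-alignment; here is a tiny sketch of how a CDR serializer would compute member offsets (the example struct is mine, not from the spec):

```python
def align(offset: int, alignment: int) -> int:
    """Round offset up to the next multiple of alignment (a power of two)."""
    return (offset + alignment - 1) & ~(alignment - 1)

# Offsets for a hypothetical struct { uint8 a; uint32 b; uint16 c; }
# under natural alignment, relative to the start of the encapsulation:
offset = 0
offset += 1                  # 'a' occupies offset 0
offset_b = align(offset, 4)  # 3 padding bytes, so 'b' lands at offset 4
offset = offset_b + 4
offset_c = align(offset, 2)  # already 2-aligned: 'c' at offset 8
```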

Fixed-size 32-bit length prefixes can also be found in Cap’n’Proto and XDR (not to be confused with XCDR), unlike Protobuf, ASN.1, or some binary JSON formats that tend to leverage variable-length length (sic!) fields.

Nested objects can be serialized in three formats: non-delimited, where the nested object is simply inserted in place as if it were replaced by its fields (which I call “non-orthogonal” or “flat”); delimited, where the nested object is prefixed by a length field (like Protobuf); or as a TLV parameter list (which is then terminated by a delimiting symbol, like some JSON-based binary formats).

UAVCAN is comparable in its commitment to support the evolution of data types to DDS, but its type system is far less rigorously modeled and the specification pays absolutely no regard to snakes or giraffes. UAVCAN delegates all compatibility-related concerns to the integrator by saying merely (5.1.2 Port compatibility):

The system integrator shall ensure that nodes participating in data exchange via a given port use data type definitions that are sufficiently congruent so that the resulting behavior of the involved nodes is predictable and the possibility of unintended behaviors caused by misinterpretation of exchanged serialized objects is eliminated.

…whereas the DDS specification does a great job of formally exploring the data type relations. If the UAVCAN type system model is developed further, it is still unlikely to reach the same level of complexity because UAVCAN intentionally omits the support for entities with a high degree of runtime variability (such as by-default-optional data members, arbitrary ordering of fields, runtime-defined types, etc).

The plain-CDR (CORBA) encoding is very much like DSDL v0; here is a basic example:

struct Baz
    uint16 qux;
    // uint16 waz;  // Commented out
struct Foo
    Baz    bar;
    uint16 foo;
    string msg;

Suppose we compile the IDL and publish a message initialized like:

Foo st;
Baz bz;
st.msg("Hello world");

We capture the message using Wireshark and it helpfully dissects the data packet for us:

I am running this on my little-endian PC hence ad de 0d f0 stands for 0xdead, 0xf00d, followed by the four-byte string length prefix 0xC = 12, followed by the null-terminated greeting. That’s it.
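The captured bytes are easy to reproduce by hand with Python's struct module (the values follow the dissection above; which uint16 belongs to foo versus bar.qux depends on the member serialization order):

```python
import struct

greeting = b'Hello world\x00'                 # CDR strings carry a null terminator
payload = struct.pack('<HH', 0xDEAD, 0xF00D)  # the two uint16 values, little-endian
payload += struct.pack('<I', len(greeting))   # 4-byte length, terminator included
payload += greeting
# -> ad de 0d f0 0c 00 00 00 48 65 6c 6c 6f 20 77 6f 72 6c 64 00
```

Note that no padding is needed here: the string's length prefix naturally lands at offset 4, which satisfies its 4-byte alignment requirement.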

If you uncomment the commented-out field in the example, things will break because the default non-delimited encoding does not support mutable/appendable types. Unfortunately, most open-source DDS implementations as of 2020 do not support extensible data types so it would be hard to provide a practical demo, but then again, it is hardly necessary because the specification seems clear enough (as far as complex specifications go).

First, recall that XCDR defines multiple encodings. This sounds bad enough but the specification ships a clarification:

PLAIN_CDR and its second version use the most naive direct flat encoding. The second version merely adjusts some unimportant details like padding that do not warrant attention right now.

DELIMITED_CDR is like PLAIN_CDR2 but it injects a length prefix before APPENDABLE objects, thus facilitating orthogonal serialization.

PL_CDR and its second version are pure TLV like Protobuf or ASN.1 BER.

We could go into details here but I perceive that might distract the discussion a bit too much. The DDS spec is complex but far more manageable than ASN.1.


XDR

Although not widely used in new designs, the External Data Representation (XDR) format deserves an honorable mention. It is defined in IETF RFC 4506. Being an old technology, it is big-endian and lacks any extensibility/versioning features. The specification defines a very simple dedicated schema language.

This is a non-self-describing schema-based non-TLV-encoded big-endian 32-bit-aligned binary format. The field encoding is fully static and flat; nested objects are serialized in-place as if the fields of the nested object were just to drop-in replace the nested reference.

A basic serialization example can be found in the RFC.

This is probably the simplest format for which there is a formal specification. Overall this would be a solid piece of technology for simple applications if not for the lack of extensibility and the big-endian byte order.


I should emphasize once again that practical implementations often blur these categories, so the classification is intended to apply only to the most common use cases.

| Format | TLV | Self-desc. | Extensible | Byte order | Composite nesting |
|---|---|---|---|---|---|
| ASN.1 BER | yes | yes | yes | big | length prefix, extension flags, delimiting symbols |
| ASN.1 PER | no | no | yes | big | length prefix, extension flags, delimiting symbols |
| Google Protobuf | yes | yes | yes | little | length prefix |
| Cap’n’Proto | no | yes | yes | little | pointer-based |
| XCDR v1 PLAIN_CDR (FINAL) | no | no | no | any | flat (direct in-place) |
| XCDR v1 PL_CDR (APPENDABLE) | yes | yes | yes | any | delimiting symbols |
| XCDR v1 PL_CDR (MUTABLE) | yes | yes | yes | any | delimiting symbols |
| XCDR v2 PLAIN_CDR2 (FINAL) | no | no | no | any | flat (direct in-place) |
| XCDR v2 DELIMITED_CDR (APPENDABLE) | no | no | yes | any | length prefix and delimiting symbols |
| XCDR v2 PL_CDR2 (MUTABLE) | yes | yes | yes | any | length prefix and delimiting symbols |
| XDR | no | no | no | big | flat (direct in-place) |
| BSON | yes | yes | yes | little | length prefix |
| UBJSON | yes | yes | yes | little | length prefix or delimiting symbols |
| CBOR | yes | yes | yes | big | length prefix or delimiting symbols |
| UAVCAN DSDL | no | no | | little | |

I don’t think we are compromising on determinism here. The variabilities introduced by delimited encoding are easily predictable at the system definition time because the designer is supposed to be aware of the exact data types and objects exchanged over the network. The fact that a receiving node may not know the exact type at runtime is irrelevant because it is assumed that the system designer has validated the current configuration and ensured that it meets the real-time constraints.

I think it is no different from variable-length arrays or unions: while they are variable-length items, if you can prove that their length does not vary at runtime (at all or outside of well-defined limits) then there are no negative implications for determinism.

One issue that is unique to UAVCAN due to its commitment to determinism is that of memory buffer allocation. In the v0-style flat encoding with the implicit truncation rule, it is clear that the last defined fields are to be truncated out should the buffer be too small to accommodate a received message. In a delimited (orthogonal) encoding, this behavior can no longer be relied upon. See the rightmost figure:

Here, the field Y.c is truncated out even though one would expect the entire Y to be removed.

Conventional protocols like DDS manage this simply by deferring the determination of memory requirements until runtime, which is unacceptable to UAVCAN. The first obvious solution that comes to mind is to pad the end of each message with voids, but the negative effects of that are evident. Another solution that actually seems sustainable is to require implementations to allocate more memory than required by the data type definition unless it is @final. The exact amount can be defined as _offset_.max * MR in bits, where MR is some well-defined default multiplier (probably around 1.5, i.e., a ~50% reserve).
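The sizing rule can be sketched as follows; the multiplier value is an assumption, since the post only suggests a reserve of roughly 50% (i.e., MR ≈ 1.5):

```python
import math

# Hypothetical sketch of the proposed buffer sizing rule: @final types get
# exactly _offset_.max bits, all others get an MR-times-larger reserve.
MR = 1.5  # assumed value; the thread leaves the exact multiplier open

def buffer_bits(offset_max_bits: int, final: bool) -> int:
    if final:
        return offset_max_bits          # @final: no reserve needed
    return math.ceil(offset_max_bits * MR)

assert buffer_bits(64, final=True) == 64
assert buffer_bits(64, final=False) == 96
```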

This approach allows for limited extensibility while ensuring deterministic memory requirements. Special uses can be supported by overriding the default reserve with a dedicated directive like @extension M, where M is the amount of memory to reserve in bits.

The units are bits for consistency because DSDL is a bit-oriented format. It would be natural to require that M is a multiple of 8.

Regarding the length prefix: it is no mistake that in the OP post I defined it in bytes rather than bits. The rationale is to avoid breaking byte alignment regardless of the contents of a nested type. If it were allowed for a nested type to be non-byte-aligned, then auto-generated deserializers would be unable to rely on the static byte alignment analysis for performance optimization past the first nested field in the type.

I think we should avoid introducing new numerical representations into the standard in order to manage its complexity; thus, UTF-8-style variable-length integers are probably best avoided unless we are talking about worst-case reservations for future extensions (backward-compatible evolution of standards tends to be messy, always). If we limit ourselves to what can be expressed in the current DSDL, we have two options:

  • A fixed-size prefix, as described in the OP post, with the MSb reserved at zero just in case. Observe that some other standards use fixed 32-bit length fields.

  • A variable-length length prefix (a “length length”), like in ASN.1, CBOR, or binary flavors of JSON. This can be expressed in DSDL like:

@union        # The tag value is the length length in bytes
uavcan.primitive.Empty.1.0 zero  # Zero-length
uint8  one    # <256 bytes
uint16 two    # <64K bytes
uint24 three  # Etc.
uint32 four
uint40 five
# The rest are reserved for future use.

The disadvantage of the variable-length length is that it suddenly complicates the bit length analysis.

I think the overhead introduced by a fixed 32-bit length prefix is manageable, especially considering that short types are likely to be @final, thus requiring no length prefix at all, and that large types are cheap to extend because the relative overhead is insignificant.

For reference, here is the full list of nested fields in the current standard namespace uavcan (ignore empty lines). I suppose that the majority of those can be made @final due to their simplicity. Those that can’t be made final are unlikely to be bandwidth-sensitive. This is not to make an argument in favor of fixed-length length prefix but merely to expand the context.

$ grep '\.[0-9]\.[0-9][^.]' -R . | grep -o '^[^#]*' | column -ts ':'
./diagnostic/32760.Record.1.0.uavcan                    uavcan.time.SynchronizedTimestamp.1.0 timestamp
./diagnostic/32760.Record.1.0.uavcan                    Severity.1.0 severity
./file/405.GetInfo.0.1.uavcan                           Path.1.0 path
./file/405.GetInfo.0.1.uavcan                           Error.1.0 error
./file/406.List.0.1.uavcan                              Path.1.0 directory_path
./file/406.List.0.1.uavcan                              Path.1.0 entry_base_name
./file/407.Modify.1.0.uavcan                            Path.1.0 source
./file/407.Modify.1.0.uavcan                            Path.1.0 destination
./file/407.Modify.1.0.uavcan                            Error.1.0 error
./file/408.Read.1.0.uavcan                              Path.1.0 path
./file/408.Read.1.0.uavcan                              Error.1.0 error
./file/409.Write.1.0.uavcan                             Path.1.0 path
./file/409.Write.1.0.uavcan                             Error.1.0 error
./metatransport/can/ArbitrationID.0.1.uavcan            BaseArbitrationID.0.1     base
./metatransport/can/ArbitrationID.0.1.uavcan            ExtendedArbitrationID.0.1 extended
./metatransport/can/DataClassic.0.1.uavcan              ArbitrationID.0.1 arbitration_id
./metatransport/can/DataFD.0.1.uavcan                   ArbitrationID.0.1 arbitration_id
./metatransport/can/Frame.0.1.uavcan                    uavcan.time.SynchronizedTimestamp.1.0 timestamp
./metatransport/can/Frame.0.1.uavcan                    Manifestation.0.1 manifestation
./metatransport/can/Manifestation.0.1.uavcan            Error.0.1       error                        
./metatransport/can/Manifestation.0.1.uavcan            DataFD.0.1      data_fd                      
./metatransport/can/Manifestation.0.1.uavcan            DataClassic.0.1 data_classic                 
./metatransport/can/Manifestation.0.1.uavcan            RTR.0.1         remote_transmission_request  
./metatransport/can/RTR.0.1.uavcan                      ArbitrationID.0.1 arbitration_id
./metatransport/serial/Fragment.0.1.uavcan              uavcan.time.SynchronizedTimestamp.1.0 timestamp
./metatransport/udp/Frame.0.1.uavcan                    uavcan.time.SynchronizedTimestamp.1.0 timestamp
./metatransport/udp/Frame.0.1.uavcan                    Endpoint.0.1 source
./metatransport/udp/Frame.0.1.uavcan                    Endpoint.0.1 destination
./node/430.GetInfo.1.0.uavcan                           Version.1.0 protocol_version
./node/430.GetInfo.1.0.uavcan                           Version.1.0 hardware_version
./node/430.GetInfo.1.0.uavcan                           Version.1.0 software_version
./node/434.GetTransportStatistics.0.1.uavcan            IOStatistics.0.1 transfer_statistics
./node/434.GetTransportStatistics.0.1.uavcan            IOStatistics.0.1[<=MAX_NETWORK_INTERFACES] network_interface_statistics
./node/port/431.List.0.1.uavcan                         ID.1.0 port_id_lower_boundary
./node/port/432.GetInfo.0.1.uavcan                      ID.1.0 port_id  
./node/port/432.GetInfo.0.1.uavcan                      uavcan.node.Version.1.0 data_type_version
./node/port/433.GetStatistics.0.1.uavcan                ID.1.0 port_id  
./node/port/433.GetStatistics.0.1.uavcan                uavcan.node.IOStatistics.0.1 statistics
./node/port/ID.1.0.uavcan                               SubjectID.1.0 subject_id
./node/port/ID.1.0.uavcan                               ServiceID.1.0 service_id
./pnp/32741.NodeIDAllocationData.2.0.uavcan             uavcan.node.ID.1.0 node_id
./pnp/32742.NodeIDAllocationData.1.0.uavcan             uavcan.node.ID.1.0[<=1] allocated_node_id
./pnp/cluster/32740.Discovery.1.0.uavcan                uavcan.node.ID.1.0[<=5] known_nodes
./pnp/cluster/390.AppendEntries.1.0.uavcan              Entry.1.0[<=1] entries
./pnp/cluster/Entry.1.0.uavcan                          uavcan.node.ID.1.0 node_id      
./register/384.Access.1.0.uavcan                        Name.1.0 name
./register/384.Access.1.0.uavcan                        Value.1.0 value
./register/384.Access.1.0.uavcan                        uavcan.time.SynchronizedTimestamp.1.0 timestamp
./register/384.Access.1.0.uavcan                        Value.1.0 value
./register/385.List.1.0.uavcan                          Name.1.0 name
./register/Value.1.0.uavcan                             uavcan.primitive.Empty.1.0        empty         
./register/Value.1.0.uavcan                             uavcan.primitive.String.1.0       string        
./register/Value.1.0.uavcan                             uavcan.primitive.Unstructured.1.0 unstructured  
./register/Value.1.0.uavcan                             uavcan.primitive.array.Bit.1.0    bit           
./register/Value.1.0.uavcan                             uavcan.primitive.array.Integer64.1.0 integer64  
./register/Value.1.0.uavcan                             uavcan.primitive.array.Integer32.1.0 integer32  
./register/Value.1.0.uavcan                             uavcan.primitive.array.Integer16.1.0 integer16  
./register/Value.1.0.uavcan                             uavcan.primitive.array.Integer8.1.0  integer8   
./register/Value.1.0.uavcan                             uavcan.primitive.array.Natural64.1.0 natural64  
./register/Value.1.0.uavcan                             uavcan.primitive.array.Natural32.1.0 natural32  
./register/Value.1.0.uavcan                             uavcan.primitive.array.Natural16.1.0 natural16  
./register/Value.1.0.uavcan                             uavcan.primitive.array.Natural8.1.0  natural8   
./register/Value.1.0.uavcan                             uavcan.primitive.array.Real64.1.0 real64        
./register/Value.1.0.uavcan                             uavcan.primitive.array.Real32.1.0 real32        
./register/Value.1.0.uavcan                             uavcan.primitive.array.Real16.1.0 real16        
./si/sample/acceleration/Scalar.1.0.uavcan              uavcan.time.SynchronizedTimestamp.1.0 timestamp
./si/sample/acceleration/Vector3.1.0.uavcan             uavcan.time.SynchronizedTimestamp.1.0 timestamp
./si/sample/angle/Quaternion.1.0.uavcan                 uavcan.time.SynchronizedTimestamp.1.0 timestamp
./si/sample/angle/Scalar.1.0.uavcan                     uavcan.time.SynchronizedTimestamp.1.0 timestamp
./si/sample/angular_velocity/Scalar.1.0.uavcan          uavcan.time.SynchronizedTimestamp.1.0 timestamp
./si/sample/angular_velocity/Vector3.1.0.uavcan         uavcan.time.SynchronizedTimestamp.1.0 timestamp
./si/sample/duration/Scalar.1.0.uavcan                  uavcan.time.SynchronizedTimestamp.1.0 timestamp
./si/sample/duration/WideScalar.1.0.uavcan              uavcan.time.SynchronizedTimestamp.1.0 timestamp
./si/sample/electric_charge/Scalar.1.0.uavcan           uavcan.time.SynchronizedTimestamp.1.0 timestamp
./si/sample/electric_current/Scalar.1.0.uavcan          uavcan.time.SynchronizedTimestamp.1.0 timestamp
./si/sample/energy/Scalar.1.0.uavcan                    uavcan.time.SynchronizedTimestamp.1.0 timestamp
./si/sample/force/Scalar.1.0.uavcan                     uavcan.time.SynchronizedTimestamp.1.0 timestamp
./si/sample/force/Vector3.1.0.uavcan                    uavcan.time.SynchronizedTimestamp.1.0 timestamp
./si/sample/frequency/Scalar.1.0.uavcan                 uavcan.time.SynchronizedTimestamp.1.0 timestamp
./si/sample/length/Scalar.1.0.uavcan                    uavcan.time.SynchronizedTimestamp.1.0 timestamp
./si/sample/length/Vector3.1.0.uavcan                   uavcan.time.SynchronizedTimestamp.1.0 timestamp
./si/sample/length/WideVector3.1.0.uavcan               uavcan.time.SynchronizedTimestamp.1.0 timestamp
./si/sample/magnetic_field_strength/Scalar.1.0.uavcan   uavcan.time.SynchronizedTimestamp.1.0 timestamp
./si/sample/magnetic_field_strength/Vector3.1.0.uavcan  uavcan.time.SynchronizedTimestamp.1.0 timestamp
./si/sample/mass/Scalar.1.0.uavcan                      uavcan.time.SynchronizedTimestamp.1.0 timestamp
./si/sample/power/Scalar.1.0.uavcan                     uavcan.time.SynchronizedTimestamp.1.0 timestamp
./si/sample/pressure/Scalar.1.0.uavcan                  uavcan.time.SynchronizedTimestamp.1.0 timestamp
./si/sample/temperature/Scalar.1.0.uavcan               uavcan.time.SynchronizedTimestamp.1.0 timestamp
./si/sample/torque/Scalar.1.0.uavcan                    uavcan.time.SynchronizedTimestamp.1.0 timestamp
./si/sample/torque/Vector3.1.0.uavcan                   uavcan.time.SynchronizedTimestamp.1.0 timestamp
./si/sample/velocity/Scalar.1.0.uavcan                  uavcan.time.SynchronizedTimestamp.1.0 timestamp
./si/sample/velocity/Vector3.1.0.uavcan                 uavcan.time.SynchronizedTimestamp.1.0 timestamp
./si/sample/voltage/Scalar.1.0.uavcan                   uavcan.time.SynchronizedTimestamp.1.0 timestamp
./si/sample/volume/Scalar.1.0.uavcan                    uavcan.time.SynchronizedTimestamp.1.0 timestamp
./si/sample/volumetric_flow_rate/Scalar.1.0.uavcan      uavcan.time.SynchronizedTimestamp.1.0 timestamp
./time/510.GetSynchronizationMasterInfo.0.1.uavcan      TimeSystem.0.1 time_system
./time/510.GetSynchronizationMasterInfo.0.1.uavcan      TAIInfo.0.1 tai_info

Another option to consider: if we assume that a data type whose footprint is small in an early version is unlikely to become large in a future version, we could introduce a rule that all types whose size is under some threshold are to use a shorter length prefix.

This idea can be further extended: by default the length prefix can be set to be 32-bit as described above. The developer can be provided with an interface for overriding the default length prefix to be either 8, 16, or 64 bits. In this way, by default we support maximum extensibility at the cost of high overhead, but if the developer knows better, there is an opportunity to override things manually.

@length_width 16  # The length prefix will be 16 bits wide rather than 32
uint8[<=128*128] sensor_image

But we already have a parameter that is strongly coupled with the length length: @extension. Deriving the length length from it is inconvenient because, as defined earlier, it is a delta from _offset_.max rather than a final value. If we changed its definition to accept the final buffer size instead of the delta, it could still be used conveniently if placed at the very end of the definition, where _offset_.max equals the maximum footprint of the structure.

Instead of:

@extension 128
float16[36] my_matrix

…where the total buffer capacity would be (16*36+128)/8 = 88 bytes, we say:

float16[36] my_matrix
@capacity _offset_.max + 128

The resulting capacity is the same, but the latter form is actually clearer because the intent is unambiguously conveyed by the reliance on _offset_. Obvious constraints:

  • It is an error to use @capacity anywhere except after the last field definition.
  • It is an error to specify a capacity that is smaller than _offset_.max.
  • It is an error to specify a capacity that is larger than _offset_.max if the type is @final.
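The arithmetic from the example above can be expressed as a tiny hypothetical helper (not part of any real DSDL tool):

```python
# Hypothetical helper reproducing the @capacity arithmetic discussed above.
def capacity_bytes(offset_max_bits: int, reserve_bits: int) -> int:
    total_bits = offset_max_bits + reserve_bits
    assert total_bits % 8 == 0, "capacity must be byte-aligned"
    return total_bits // 8

# float16[36] plus a 128-bit reserve: (16*36 + 128) / 8 = 88 bytes.
assert capacity_bytes(16 * 36, 128) == 88
```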

If the type is @final, the default is:

@capacity _offset_.max

Otherwise, the default is:

@capacity {_offset_.max * MR, MC}.max

(the values of the multiplier MR and the lower limit MC (which should be >= 256 bytes) are up for debate).

Then, if we allow capacity specification in this way, it is possible to bind the length length definition to it as well: if we know that the capacity never exceeds X, it makes no sense to use more than log2(X) bits for the length prefix.
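A sketch of that bound, rounding up to the standard widths used elsewhere in the thread (the helper and its rounding policy are illustrative assumptions):

```python
import math

# If the declared capacity never exceeds X bytes, ceil(log2(X + 1)) bits
# suffice for the length prefix; in practice this would be rounded up to a
# standard width (8/16/32/64 bits).
def prefix_bits(max_capacity_bytes: int) -> int:
    needed = max(1, math.ceil(math.log2(max_capacity_bytes + 1)))
    for std in (8, 16, 32, 64):
        if needed <= std:
            return std
    raise ValueError("capacity too large")

assert prefix_bits(255) == 8       # fits in one byte
assert prefix_bits(256) == 16      # one bit over -> next standard width
assert prefix_bits(100_000) == 32
```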

I know of at least one usage:

Basically the trick is to only use a subset of ASN.1, not the full specification.

As for the encodings, I would only use PER on new designs if size was really important to me. OER is much “easier” to use: https://www.oss.com/asn1/resources/books-whitepapers-pubs/Overview_of_OER.pdf

The problem I see here is that the DSDL serialisation of choice depends on the transport layer. For “legacy” CAN (8-byte data packets), something like unaligned PER could be preferred, while for anything larger (i.e. CAN FD, with 64-byte data packets), something like Canonical OER would be my choice. However, I would never use unaligned PER if I could get away with it!

Anyway, I am not really pushing for using ASN.1 on UAVCAN. Just describing some of the advantages it has that maybe we should try to bring to the table.

Regarding the extensibility subject, I would try to limit what we are really trying to achieve.
I don’t think adding full extensibility support is a good idea. We could limit extensibility to adding fields, never to replace already existing ones. That makes the goal much simpler.

Changing fields in an inner structure is the same as replacing the outer field with another type. I don’t see why we should support that. If it is really needed, deprecate the outer structure and create a new version of it.

In my opinion, anything else favours semantic changes that will invalidate the use of the structure by legacy devices anyway, creating the type of incompatibilities we don’t want on critical components.

As a compromise, we can allow extending tagged types (unions). That can be done by specifying a minimum number of bits for the tag, so that unused values can eventually be assigned in future versions.

This talk about extensibility is fueled by our empirical evidence that any serious system inevitably runs into the problem of modifying data types while retaining compatibility with deployed software (hardware).


You can see this discussed seriously in-depth in UAVCAN v1.0 and ArduPilot.

At the same time, we need to manage the complexity, which is why we are not actually talking about arbitrary modification of types but we limit ourselves to appendability only. Complexity is also the reason why I am hesitant to seriously consider the introduction of different binary representations, especially within the same transport (Classic CAN and CAN FD are the same in the eye of UAVCAN).

Having seriously analyzed the alternatives discussed here so far, I am inclined to believe that the approach described in my latest post here seems to be the closest to the optimal balance between the complexity and flexibility.


I completely agree. Just being able to append meets the need while adding minimal complexity.
Btw, one mistake we made with MAVLink 2.0 appendability was not providing an easy API for the consumer of a parsed packet to tell whether a particular field was filled in or not. In MAVLink 2.0 we fill with zeros any extension field that didn’t appear on the wire. We should have added an easy-to-use API to ask “was this extension field on the wire?”. The information is available via the wire size, but isn’t exposed in an easy manner.


So I’ve made some changes to PyDSDL which are currently on a branch. It is more or less functionally complete but the tests are not yet updated. I had to introduce certain modifications to the proposal I shared here earlier because a closer look revealed some flaws.

One minor change is that I renamed “capacity” to “extent” in order to avoid overloading the term “capacity” as applied to arrays. Other than that, it is the same: the extent is defined as the amount of buffer memory required for storing a serialized representation of the type, and thus it also limits the maximum bit length of compatible versions. It is never smaller than the maximum serialized bit length of all involved fields combined.

The length field that prepends a non-final type is termed the “delimiter header”, a term borrowed from DDS-XTYPES. Contrary to the early proposal, it is fixed at 32 bits (with the option of making it configurable in the future). One could deduce the size of the header from the size of the data type, but that leads to an evil corner case: the data type designer might decide to add a few more fields to an existing type, inadvertently pushing it past the 64 KiB barrier and thus breaking binary compatibility. That issue alone is easy to manage at the level of the data type itself (just remove some fields), but what if there is a hierarchy of nested types, where the top type has a 16-bit prefix, and the bottom type adds extra fields that push the footprint of the top type above the 64 KiB barrier? There are two options:

  1. Always use a large prefix (like in DDS-XTYPES), with the possibility of manually constraining it when needed.
  2. Use a variable-length prefix, like in Protobuf or some representations of ASN.1.

Here, I chose the first option because it is simpler and more in line with the rest of the DSDL specification. The extra overhead only affects types that are not @final, which tend to be large and complex, so in relative terms the negative effects are tolerable.

If a type contains a field of an appendable type, its author is effectively unable to make strong assumptions about the alignment of the fields that are positioned after the appendable composite because its length may be changed arbitrarily:

Inner.1.0 foo
uint64 aligned  # Alignment unknown unless Inner.1.0 is @final

Yet, as we know from experience, the byte alignment guarantee is vital for high-performance serialization and deserialization. Until now, before delimited serialization support, it was in most cases possible to enforce the required alignment by strategically placing void padding fields where necessary. With the runtime variability brought in by delimited serialization, byte alignment in the presence of appendable types would be impossible to guarantee unless we require that the length of an appendable type is always a multiple of one byte (8 bits). Hence, as I mentioned in the OP post here, the delimiter header specifies the length in bytes, and the serialized representation of an appendable type is always padded out to one byte (with zeros), so that in the example above the serialization logic can always rely on faster byte-level operations rather than slow bit-level alternatives.

These considerations highlight a certain deficiency of the current design. Originally, DSDL was purely a bit-level format without any notion of byte or padding/alignment. This model was broken with the introduction of the rule that requires implicit array length prefixes and union tag fields to be of a standard size (8, 16, 32, or 64 bits). This change has somewhat altered the inner consistency of the logic, and now, the addition of the byte-padded appendable composites distorts the design even further.

First, our design guidelines mention that data types should be constructed such that their bit length sets are byte-aligned, because that simplifies composition. Second, non-byte-aligned composites are difficult to serialize/deserialize using generated code: if the composite is unaligned, its (de)serialization cannot be delegated to a dedicated routine operating on a byte-aligned buffer.

These considerations forced me to introduce the concept of data alignment into the standard such that composite-typed fields (whether final or appendable) are always aligned at one byte, and each member of their bit length set is always a multiple of one byte. This is achieved by dynamically padding the preceding space before a composite-typed field with up to 7 bits, and likewise padding the space after the last field of the nested composite such that its length is a multiple of 8. This matter was first discussed on this forum a long time ago in New DSDL directive for simple compile-time checks.

The alignment requirements for all type categories are as follows:

  • Primitive: 1 bit (i.e., no alignment)
  • Void: 1 bit (i.e., no alignment)
  • Composite: 8 bits (one byte)
  • Array: same as the element type.
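The dynamic padding rule for composites can be sketched as follows (a hypothetical helper, not PyDSDL API):

```python
# Dynamic padding before a composite-typed field, per the alignment rules
# above: up to 7 zero bits are inserted so the field starts on a byte boundary.
def padding_before_composite(offset_bits: int) -> int:
    return (8 - offset_bits % 8) % 8

assert padding_before_composite(13) == 3   # 13 + 3 = 16: byte-aligned
assert padding_before_composite(16) == 0   # already aligned
assert padding_before_composite(7) == 1
```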

Observe that by virtue of the maximum alignment being 8 bits, and the fact that the size of an implicit array length prefix or union tag field is always a multiple of 8 bits, such implicit fields never break the alignment of the following data. Should we ever decide to introduce an alignment requirement greater than 8 bits (I doubt that will ever make sense), we will need to alter the logic of deriving the size of such implicit fields to ensure that they are wide enough not to break the alignment of the following elements.

Incidentally, this enables a trick for dynamically aligning data fields at one byte, as discussed in the above-linked thread:

bool[<=123] some_bits
@assert _offset_ % 8 != {0}   # Not aligned here due to the element size of the array.
uavcan.primitive.Empty.1.0 _  # This field is zero bits long
@assert _offset_ % 8 == {0}   # ...but the offset is now byte-aligned

Going back to the extent again, it essentially defines the degree to which a data type can expand in future revisions without breaking the compatibility. A greater limit obviously provides more opportunities for modifications; on the other hand, large values are costly for memory-constrained devices, so a sensible compromise is needed. While we could require the author of the data type to always define the extent for every data type definition explicitly, it harms the usability of the standard because generally the designer of a network service has a lot of other things to keep in mind. So unless the extent is set explicitly, it defaults to:

\left\lceil \left\lfloor \frac{3}{2} \max(B) \right\rfloor / 8 \right\rceil \times 8

where B is the bit length set as if the type were final. Essentially, we multiply the maximum size by 1.5 and pad the result to the byte boundary. This definition has an undesirable side effect of increasing the extent exponentially at each level of nesting (because each level introduces the 1.5 multiplication factor), but a more stable definition that would be trivial to compute does not appear to exist.
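The default-extent formula can be restated in code (a direct transcription of the formula, not PyDSDL API):

```python
import math

# Default extent per the formula above: multiply the maximum bit length by
# 3/2 (floored), then round up to a whole number of bytes, expressed in bits.
def default_extent_bits(max_bit_length: int) -> int:
    return math.ceil((3 * max_bit_length // 2) / 8) * 8

assert default_extent_bits(64) == 96   # e.g., a single uint64 field
assert default_extent_bits(12) == 24   # floor(18) padded up to 24 bits
```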

The bit length set of an appendable type is then the same as that of uint8[<=(extent+7)//8], except that the length prefix is always 32-bit. The definition of the bit length set for @final types is unchanged.

Suppose we have two types that are identical except that one of them is final and the other is appendable:

# Final.1.0
uint64 foo
@assert _offset_ == {64}
# Appendable.1.0
uint64 foo
@assert _offset_ == {64}

The resulting offsets are then as follows:

Final.1.0 foo
@assert _offset_ == {64}
Appendable.1.0 foo
@assert _offset_ == {32, 40, 48, 56, 64, 72, 80, 88, 96}

There remains one case of inner inconsistency, though: the definition of extent does not include the delimiter header, whereas the bit length set does. This is because the delimiter header is not involved when the composite is serialized as the top-level object.

The spec updates are in, please review:

At the call today, @scottdixon suggested that we should always require the extent to be specified explicitly because not doing so may create a false perception of UAVCAN requiring too much memory to operate. The default extent formula is not going to be needed then. I will be working on it soon, maybe this week.

@scottdixon but let’s consider the implications on the learning curve. I see two possible issues here:

  • Requiring the user to always specify the extent will make the language a little more tedious to use and a little less approachable for newcomers. The latter is because one will have to understand the nuance of data type evolution and memory allocation before being able to use UAVCAN.

  • A new user may feel annoyed by the need to always stick either an @extent or @sealed into every definition. Since @sealed is much easier to use (no need to choose the memory footprint; choices are hard to make), we may see this feature overused, leading to the same issues as if it were the default option.

I think that in the end, this is a problem of usability and human perception. It may be better addressed by tooling rather than the Specification. What if we make PyDSDL emit a benign diagnostic whenever it encounters an extensible type where the extent is not defined explicitly? The diagnostic would state what the auto-deduced extent value is and how it can be provided manually.

You’re right that having to provide @extent will be a pain. I think either the tooling approach or making @sealed the default are the best options.

I’d rather make @sealed the default since this also gets rid of the default extent formula and makes naive implementations or prototyping with UAVCAN more immediate and successful. It seems like any datatype author would start with a prototype and only need to think about the future when finalizing a design. It’s in this later phase of a project that one would take the time to calculate extents properly.

I think your assumption that a developer would sit down and think about the future is unsound. My experience indicates that people rarely do that, so we should either force them to (make @extent required) or do it for them (default extent plus the tooling approach).

So I think that here:

I think either the tooling approach or making @sealed the default are the best options.

We can agree on the first option. If this is so, I am going to update the PyDSDL PR with the diagnostic.
