Automotive and Embedded Networks


Signal Scaling Question

Why does the start bit seem so weird with big endian (Motorola) when encoding a frame into signals? Here is an example. If I have a start bit of 0 and a number of bits of 16, the visualization is clear and expected with little endian, and as you increment the start bit you see the bits shift farther and farther down. Here is an image at a start bit of 16 and a number of bits of 16.

[Attachment: Normal Looking Little.png]

Seems fine. Now if I change the byte order to big endian, I would expect the same 16 bits to be used, just with the byte order reversed, but with CAN that isn't how it works; it does something else instead.

[Attachment: Odd Looking Big.png]

And if I increment the start bit to 17, I no longer have a valid signal, and I actually need to keep incrementing the start bit before it jumps down to the next bytes to use. Here are a few more images of what I'm talking about.

 

[Attachment: More Odd Looking Big.png]
[Attachment: More Odd Looking Big 2.png]
[Attachment: More Odd Looking Big 3.png]

I get that this is correct, but why? I ask because I am now reading CAN data using the ISO 15765 protocol and getting an array of bytes back. When a value has a start bit of 0, the first bit is always the one used, regardless of the endianness and bit length, whereas with the CAN signal conversion the start bit seems to move depending on the bit length and endianness. Thanks.
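(For reference, here is a minimal C sketch of the layout being described, assuming the start bit marks the signal's least significant bit, as the reply below explains, and the usual CAN numbering where frame bit n sits in byte n/8 at bit n%8. The function and values are an illustration only, not NI-XNET code.)

#include <stdio.h>

/* Print the (byte, bit) position of each signal bit, LSB first, assuming
 * the start bit is the LSB and frame bit n lives in byte n/8 at bit n%8.
 * Illustration only. */
static void print_signal_bits(int start_bit, int num_bits, int big_endian)
{
    int byte = start_bit / 8;
    int bit  = start_bit % 8;

    for (int i = 0; i < num_bits; i++) {
        printf("signal bit %2d -> byte %d, bit %d\n", i, byte, bit);
        if (bit < 7) {
            bit++;                   /* next more-significant bit, same byte */
        } else if (big_endian) {
            bit = 0; byte--;         /* Motorola: continue in the previous byte */
        } else {
            bit = 0; byte++;         /* Intel: continue in the next byte */
        }
    }
}

int main(void)
{
    print_signal_bits(16, 16, 0);    /* Intel: bytes 2 and 3 */
    print_signal_bits(16, 16, 1);    /* Motorola: bytes 2 and 1 */
    print_signal_bits(17, 16, 1);    /* Motorola: the MSB lands in byte 0,
                                        so the field straddles three bytes */
    return 0;
}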

 

Message 1 of 4

The start bit represents the location of the least significant bit in the signal.
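So the bits grow toward the most significant end from that position: into higher-numbered bytes for Intel, and into lower-numbered bytes for Motorola. A small C illustration of reading a 16-bit signal at start bit 16 under that convention (the example bytes are made up; this is a sketch of the convention, not NI-XNET code):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t frame[8] = { 0x00, 0x12, 0x34, 0x56, 0, 0, 0, 0 };

    /* Intel: LSB in byte 2, the more significant byte is byte 3. */
    uint16_t intel    = (uint16_t)((frame[3] << 8) | frame[2]);  /* 0x5634 */

    /* Motorola: LSB still in byte 2, but the more significant byte is byte 1. */
    uint16_t motorola = (uint16_t)((frame[1] << 8) | frame[2]);  /* 0x1234 */

    printf("Intel: 0x%04X  Motorola: 0x%04X\n", intel, motorola);
    return 0;
}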

Jeff L
National Instruments
Message 2 of 4

Any reason why ISO 15765 payloads don't obey the same scheme? Yes, the start bit is always correct, but the number of bits and bit length are handled inconsistently with how a DBC scales them.

 

Here is an example. I have a CDD file that defines data types and has DIDs. One DID is for, say, ID 0x0100. This DID has 12 DID objects and returns 24 bytes, 2 bytes for each object. The first object has a start bit of 0 and a bit length of 16, and the data type for the object is Motorola. This type of signal can't exist in a DBC database but is fine in a CDD. The reason seems to be that the endianness comes into play after extracting the data. A start bit of 0 and a bit length of 16 means grab the first two bytes, regardless of whether it is big or little endian, when reading a DID. But with a DBC the scaling isn't the same, since you can't have a start bit of 0, a bit length of 16, and big endian.
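A short C sketch of that difference, under the assumption described above that a DID object is byte-aligned and the byte order is applied only after the two-byte slice is taken (the payload bytes and function name here are made up for illustration):

#include <stdint.h>
#include <stdio.h>

/* Read one 2-byte DID object out of a UDS (ISO 15765) response payload.
 * Start bit 0 / 16 bits means "take the first two bytes"; the byte order
 * only decides which of the two bytes is the most significant. */
static uint16_t read_did_object(const uint8_t *payload, int byte_offset,
                                int big_endian)
{
    uint8_t b0 = payload[byte_offset];
    uint8_t b1 = payload[byte_offset + 1];
    return big_endian ? (uint16_t)((b0 << 8) | b1)   /* Motorola: first byte is MSB */
                      : (uint16_t)((b1 << 8) | b0);  /* Intel: first byte is LSB */
}

int main(void)
{
    uint8_t payload[24] = { 0x12, 0x34 };            /* first object only; rest zero */

    /* First object: start bit 0, 16 bits, Motorola -> 0x1234. */
    printf("0x%04X\n", read_did_object(payload, 0, 1));
    return 0;
}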

Message 3 of 4

It may be an artifact of the FIBEX file format that the XNET DB editor uses by default. I asked around to see if I could find a specific reason why we associate the start bit with the LSB, but didn't find anything useful.

Jeff L
National Instruments
Message 4 of 4