Sorry, that doesn't help me very much ... I think I haven't described my problem clearly enough.
My equipment sends characters encoded according to the IBM standard, and I have to interpret them in LabVIEW, which uses the Windows character set. So after the conversion with "String To Byte Array", the sent character "â" comes out as 226. The problem: according to the IBM table, I need 131. And that's not the only problem: every character with a code above 127 gets translated wrongly, too.
All I need is a function that converts the characters sent by my equipment according to the IBM table and NOT the Windows table. If you look it up in the IBM table, "â" is binary 1000 0011 (upper nibble 1000, lower nibble 0011), which is 2^7 + 2^1 + 2^0 = 128 + 2 + 1 = 131, and NOT 226. So my function's output must be 131 for an "â" input.
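The mapping itself is just a re-encoding from the Windows character set (code page 1252) to the IBM one. The value quoted here ("â" = 131) matches IBM code page 850 (the classic CP437 also puts "â" at 131), so check which table your equipment really uses. A minimal sketch of the conversion in Python, assuming CP850 is the right table:

```python
def windows_to_ibm(windows_bytes: bytes) -> bytes:
    """Re-encode a byte string from Windows-1252 to IBM code page 850."""
    return windows_bytes.decode("cp1252").encode("cp850")

# "â" is 226 (0xE2) in the Windows set and must come out as 131 (0x83):
assert windows_to_ibm(b"\xe2") == b"\x83"
```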
By the way, 226 is binary 1110 0010, which is "Ô" in the IBM table.
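(That 226 shows up as "Ô" points to CP850 specifically; CP437 has "Γ" at 226.) Since the conversion ultimately has to run inside a LabVIEW VI, the most practical route is probably a 256-entry lookup table wired after "String To Byte Array": use each byte as an index into the table and you get the IBM value back. The table can be generated once, for example with the Python snippet below; codes that have no counterpart in one of the two code pages are simply passed through unchanged here.

```python
# Build a 256-entry Windows-1252 -> CP850 translation table that can be
# pasted into a LabVIEW U8 array constant and indexed with each byte.
table = []
for code in range(256):
    try:
        table.append(bytes([code]).decode("cp1252").encode("cp850")[0])
    except (UnicodeDecodeError, UnicodeEncodeError):
        table.append(code)  # undefined/unmappable codes pass through as-is

print(table)        # the full table, indices 0..255
print(table[226])   # -> 131, i.e. "â" lands on its IBM value
```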