Where is the documentation on System.Runtime.Serialization.BinaryFormatter output?

  • Question

  • I'm working on a utility function whose purpose is to take any two numeric typed objects (either item may be s/byte, u/int 16-64, float, decimal, or double) and determine whether one value is logically greater or less than the other (and if so, which). The general idea is that numeric values cast as Object are passed to the function, and the function returns a comparator value.

    Obviously you can't just do a comparison between a float and a UInt64.  I wrote some utility code to distinguish between types and cast appropriately, then ran into a problem where an Int32 with a value of 0 throws an InvalidCastException at runtime when trying to convert to a UInt64.
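
    For reference, a minimal repro of that failure looks something like this (a boxed Int32 can only be unboxed back to Int32, so the direct cast throws even though the value 0 would fit in a UInt64):

    object boxed = 0;                      // boxes an Int32
    // ulong direct = (ulong)boxed;        // InvalidCastException: unboxing must match the exact runtime type
    ulong widened = (ulong)(int)boxed;     // unbox to Int32 first, then widen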

    So I tried binary comparisons. 

    * When a numeric value type such as Int32 is cast as Object, BitConverter fails at compile time, because BitConverter.GetBytes has no overload that accepts an Object.

    * When attempting to use unsafe code to grab a pointer to the start of the object in memory, &varname fails at compile time: even though the underlying value is a value type, the variable is typed as Object (a managed reference), so the compiler refuses to take its address.

    Looking around online, it seems like the only way to get raw binary data out of an Object (of any integral type) is serialization. Using BinaryFormatter, a single Byte value is bloated to 53 bytes of data, while a UInt16 becomes 54. I've been able to determine that the relevant data portion (the numeric value in the binary serialization stream) is at the end of the stream or byte array, so for both the Byte and the UInt16 the value passed in begins at (Arr.Length - 1).
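
    Roughly how I measured those sizes, for reference (the exact byte counts are whatever MemoryStream reports, so treat 53/54 as observed on my machine rather than guaranteed):

    using System.IO;
    using System.Runtime.Serialization.Formatters.Binary;

    static long SerializedLength(object boxedValue)
    {
        using (var ms = new MemoryStream())
        {
            new BinaryFormatter().Serialize(ms, boxedValue); // serialize the boxed primitive
            return ms.Length;                                // 53 for a Byte, 54 for a UInt16 in my tests
        }
    }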

    That might be handy if I had some way of starting with an unknown numeric type, serializing it, and then processing the serialized data myself (for example, determining the data length so I know how many bytes to grab at the end of the stream), but I can't find any documentation on the BinaryFormatter's output at all. The best I could find is a hex dump and a very shallow analysis on StackOverflow.

    So is there any posted documentation on the actual output format of the BinaryFormatter?  Alternately if anybody can give me a better solution (not using the dynamic keyword) I'd love to hear it.


    Friday, October 25, 2019 4:05 PM

All replies

  • Hi Andrew B. Painter,

    Thank you for posting here.

    Have you tried to convert two numeric typed objects to Decimal and compare them?

    Convert.ToDecimal Method

    Decimal.Compare(Decimal, Decimal) Method

    Here's the code:

    public enum Relationship
    {
        LessThan = -1,
        Equals = 0,
        GreaterThan = 1
    }

    static void Main(string[] args)
    {
        int a = 12;
        float b = (float)12.13;
        Decimal ad = Convert.ToDecimal(a);
        Decimal bd = Convert.ToDecimal(b);
        var result = (Relationship)Decimal.Compare(ad, bd);
        Console.WriteLine($"ad is {result} bd");
        Console.ReadLine();
    }

    Result:

    ad is LessThan bd
    Hope it can help you.

    Besides, if I have misunderstood anything, I suggest you post some of the related code here.

    Best Regards,

    Xingyu Zhao



    MSDN Community Support
    Please remember to click "Mark as Answer" the responses that resolved your issue, and to click "Unmark as Answer" if not. This can be beneficial to other community members reading this thread. If you have any compliments or complaints to MSDN Support, feel free to contact MSDNFSF@microsoft.com.


    Monday, October 28, 2019 9:52 AM
    Moderator
  • Hi Andrew B. Painter,

    Thank you for posting here.

    Have you tried to convert two numeric typed objects to Decimal and compare them?

    Yes I did, as shown here:

    private void Form1_Load(object sender, EventArgs e)
    {
        Int64 slmin = Int64.MinValue;
        Int64 slmax = Int64.MaxValue;

        // Single aka float.
        Single flmin = Single.MinValue;
        Single flmax = Single.MaxValue;

        Int32 comp = numericObjectCompare(flmax, slmin);
    }

    public static Int32 numericObjectCompare(Object a, Object b)
    {
        // Returns -1 if a < b
        // Returns 0 if a = b
        // Returns 1 if a > b
        Int32 r = 0;

        Decimal da = (Decimal)a; // Exception: Specified cast is not valid (a is the float, as shown above)
        Decimal db = (Decimal)b; // If we got to this line (b is the signed Int64 minimum value), it would throw the exact same exception.
        if (da < db)
        {
            r--;
        }
        else if (da > db)
        {
            r++;
        }

        return r;
    }


    Convert.ToDecimal Method

    Decimal.Compare(Decimal, Decimal) Method


    I did test your suggested method.

    private void Form1_Load(object sender, EventArgs e)
    {
        Int64 slmin = Int64.MinValue;
        Int64 slmax = Int64.MaxValue;

        // Single aka float.
        Single flmin = Single.MinValue;
        Single flmax = Single.MaxValue;

        Int32 comp = numericObjectCompare2(flmax, slmin);
    }

    public static Int32 numericObjectCompare2(Object a, Object b)
    {
        // Returns -1 if a < b
        // Returns 0 if a = b
        // Returns 1 if a > b
        Int32 r = 0;

        Decimal da = Convert.ToDecimal(a); // Exception: Value was either too large or too small for a Decimal.
        Decimal db = Convert.ToDecimal(b);
        if (da < db)
        {
            r--;
        }
        else if (da > db)
        {
            r++;
        }

        return r;
    }

    I've been stymied by the inability to cast to Decimal too.
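
    For what it's worth, the two failures above come from different places; a quick sketch of where each exception originates:

    object boxedFloat = Single.MaxValue;

    // 1. A direct cast on an Object is an unboxing operation, and unboxing must target the
    //    exact runtime type, so this throws InvalidCastException no matter what the value is:
    // Decimal d1 = (Decimal)boxedFloat;

    // 2. Convert.ToDecimal performs a real conversion, but Decimal's range tops out around
    //    7.9E+28 while Single.MaxValue is about 3.4E+38, hence
    //    "Value was either too large or too small for a Decimal":
    // Decimal d2 = Convert.ToDecimal(boxedFloat);

    Console.WriteLine(Decimal.MaxValue); // 79228162514264337593543950335
    Console.WriteLine(Single.MaxValue);  // 3.402823E+38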





    Monday, October 28, 2019 4:38 PM
  • Hi Andrew B. Painter,

    Thanks for your feedback.

    I hope the following code could be helpful.

    static Object GetNumber(Object obj)
    {
        // Remove the integer part of the value.
        var number = obj.ToString().Split('.')[1].Insert(0, "0.");
        return (object)number;
    }

    public static Int32 numericObjectCompare(Object a, Object b)
    {
        // Returns -1 if a < b
        // Returns 0 if a = b
        // Returns 1 if a > b
        Int32 r = 0;

        Double da = Convert.ToDouble(a);
        Double db = Convert.ToDouble(b);
        if (da < db)
        {
            r--;
        }
        else if (da > db)
        {
            r++;
        }
        // Considering the loss of accuracy, we need to use Decimal to compare the two numbers again.
        else if (da == db)
        {
            var a1 = GetNumber(a);
            var b1 = GetNumber(b);
            Decimal ad = Convert.ToDecimal(a1);
            Decimal bd = Convert.ToDecimal(b1);
            r = Decimal.Compare(ad, bd);
        }

        return r;
    }

    Best Regards,

    Xingyu Zhao


    MSDN Community Support



    Wednesday, October 30, 2019 7:10 AM
    Moderator
  • Hi Andrew B. Painter,

    Thanks for your feedback.

    I hope the following code could be helpful.

    static Object GetNumber(Object obj)
    {
        // Remove the integer part of the value.
        var number = obj.ToString().Split('.')[1].Insert(0, "0.");
        return (object)number;
    }

    public static Int32 numericObjectCompare(Object a, Object b)
    {
        // Returns -1 if a < b
        // Returns 0 if a = b
        // Returns 1 if a > b
        Int32 r = 0;

        Double da = Convert.ToDouble(a);
        Double db = Convert.ToDouble(b);
        if (da < db)
        {
            r--;
        }
        else if (da > db)
        {
            r++;
        }
        // Considering the loss of accuracy, we need to use Decimal to compare the two numbers again.
        else if (da == db)
        {
            var a1 = GetNumber(a);
            var b1 = GetNumber(b);
            Decimal ad = Convert.ToDecimal(a1);
            Decimal bd = Convert.ToDecimal(b1);
            r = Decimal.Compare(ad, bd);
        }

        return r;
    }


    Thanks very much for this.  It's clever and appears to work but ultimately fails.  Using Float's maximum value and Int64's minimum it returns 1 (indicating 3.xxxx > -areallybignumber as it should) but using the same Float Maximum and comparing instead to UInt64's maximum it also returns 1 (indicating 3.xxxx > +areallybignumber as it should not).

    I really think the best solution to this problem is going to wind up being direct binary comparisons.  Any chance you can just give me a link to the documentation for the output format of the data produced by System.Runtime.Serialization.BinaryFormatter?

    Thursday, October 31, 2019 12:01 AM
  • Hi Andrew B. Painter,

    I hope the following references could be helpful.

    What is binaryformatter and how to use it? And how to use filestream?

    How to analyse contents of binary serialization stream?

    Besides, it seems that Float's Maximum is greater than UInt64's maximum.

    Maybe the key to the problem is how to remove the integer part of the value when the incoming data does not always look like 'data.data'.
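
    For example, a version of GetNumber that tolerates values with no decimal point might look like this (just a sketch, and it still assumes a plain 'digits.digits' text form rather than scientific notation):

    static Decimal GetFraction(Object obj)
    {
        // Return only the fractional part, or 0 when the text form has no '.'.
        var parts = obj.ToString().Split('.');
        return parts.Length > 1 ? Convert.ToDecimal("0." + parts[1]) : 0m;
    }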

    Best Regards,

    Xingyu Zhao


    MSDN Community Support


    Thursday, October 31, 2019 9:23 AM
    Moderator
  • Hi Andrew B. Painter,

    I hope the following references could be helpful.

    What is binaryformatter and how to use it? And how to use filestream?

    How to analyse contents of binary serialization stream?

    Thanks for continuing to try to help, but I don't need newbie/remedial instructions on usage - I need legitimate/explicit data format specifications. I mentioned in my first post here that I had already seen a shallow/irrelevant attempt at analysis on StackOverflow and you've found the exact one.

    Besides, it seems that Float's Maximum is greater than UInt64's maximum.

    Single/Float's maximum is 3.402823E+38 (the E meaning it's then followed by a hefty length of floating point places). The bottom line is that (even rounded up) float's maximum is 4. UInt64's maximum is 18446744073709551615 with no floating point places. Single/Float consumes 4 bytes in memory, UInt64 consumes 8 bytes in memory. So the point is that your code doesn't work, not that we crossed into some alternate dimension where float can make a larger number than ulong.

    The problem is that conversion to Decimal and/or Double is improperly treating the ulong's most significant bit as a signing bit - that's a juvenile-grade oversight that MS needs to fix in the Framework and frankly it should never have shipped with that bug in the first place.  Until MS manages to act like a company run by grownups whose goal is making money, though, this is the problem that needs to be worked around.

    Maybe the key to the problem is how to remove the integer part of the value when the incoming data does not always look like 'data.data'.


    Yes. I could write a hundred if-else permutations accounting for every possible type combination where a is typeX and b is typeY, but that would be unmaintainable. So what I'm asking for is the actual specification - i.e. the RFC-style document - that lays out the actual fields, widths, and intended data container types that occur in the byte array output by serializing values via BinaryFormatter.
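
    For what it's worth, a type-agnostic comparison does not necessarily need per-type permutations. A minimal sketch, assuming every input is one of the built-in boxed numeric primitives: decide by sign first, then prefer Decimal (exact for all the integer types) and fall back to Double only when a value is outside Decimal's range.

    using System;

    public static class NumericCompareSketch
    {
        // Returns -1, 0, or 1, like the numericObjectCompare methods above.
        public static Int32 Compare(Object a, Object b)
        {
            Double da = Convert.ToDouble(a); // every built-in numeric type fits in Double's range
            Double db = Convert.ToDouble(b);

            // Different signs: the negative value is smaller, no precision concerns.
            if (da < 0 && db >= 0) return -1;
            if (da >= 0 && db < 0) return 1;

            try
            {
                // Same sign: Decimal holds every s/byte and u/int 16-64 value exactly,
                // so this avoids Double's rounding on large integers.
                return Decimal.Compare(Convert.ToDecimal(a), Convert.ToDecimal(b));
            }
            catch (OverflowException)
            {
                // One of the values (e.g. near Single.MaxValue) is outside Decimal's range
                // of roughly +/-7.9E+28, so fall back to the Double comparison.
                return da.CompareTo(db);
            }
        }
    }

    For example, Compare(Single.MaxValue, UInt64.MaxValue) takes the overflow fallback, while Compare(UInt64.MaxValue, Int64.MinValue) is decided purely by the sign check.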

    Thursday, October 31, 2019 11:36 PM