Even after years of working with .NET, I still find things I've taken for granted and made incorrect assumptions about. The most recent is literal representations of bytes using hexadecimal (0x) or the new binary (0b) notation, and the actual data types those literals produce. Let's briefly explore these concepts.
It's not very often that I work directly with byte declarations outside of [Flags] enums, but when I do, I generally use hexadecimal notation. For example, I might declare a null byte as 0x00. It was always my assumption that this notation (or the binary literal notation added in C# 7.0, which shipped with Visual Studio 2017, 0b00) would produce a byte. That isn't the case. In fact, it produces an Int32.
What does this mean for us? Well, in most cases it doesn't affect our code. For instance, calling WriteByte(0x00) on a stream works just fine because there is an implicit conversion from a constant int expression to byte (as long as the constant is in the range 0 to 255). We can even call WriteByte(0) on a stream and it works, but calling WriteByte(300) fails at compile time, because the constant 300 cannot be converted to a byte.
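A quick sketch illustrates both points: the literal notations produce an Int32, and a constant in the byte range still flows into WriteByte thanks to the implicit constant conversion. (The MemoryStream here is just a stand-in for any writable stream.)

```csharp
using System;
using System.IO;

class LiteralTypes
{
    static void Main()
    {
        // Both literal notations produce an Int32, not a byte.
        Console.WriteLine((0x00).GetType()); // System.Int32
        Console.WriteLine((0b00).GetType()); // System.Int32

        using (var stream = new MemoryStream())
        {
            // A constant int in the 0-255 range converts implicitly
            // to byte, so this compiles and writes a single byte.
            stream.WriteByte(0x00);
            Console.WriteLine(stream.Length); // 1
        }

        // stream.WriteByte(300); // would not compile: the constant
        //                        // 300 cannot be converted to a byte
    }
}
```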
With the above in mind, let's look at how I discovered this and how it can be a gotcha. While writing a custom binary file writer, I occasionally needed to write specific bytes or other types. I created overloaded methods on a base class to handle byte, int, and various other data types. My overloaded methods looked like the following:
```csharp
public void Write(byte value)
public void Write(int value)
//WriteInt is a custom extension method
```
I invoke the above methods by calling Write(0x00) or Write(5). To my surprise, when calling the first form, it was writing 4 bytes instead of 1. As a result, I fired up the debugger and took a look. Sure enough, both calls were entering the "int value" overload of the Write method. At first I assumed this was a potential bug in the faking framework I was using in my unit tests, which generates dynamic proxies on top of my logic classes; I had discovered the issue while testing those logic methods.
After eliminating the faking framework and stepping through the code, the issue still existed, so I used the immediate window and entered (0x00).GetType(). To my surprise, the result was Int32. Next I tried the binary literal representation added in C# 7.0 and entered (0b00).GetType(). Again, the result was Int32. At this point, I realized that both literal representations still produce an Int32. Changing my earlier call to Write((byte)0x00) resulted in the correct overload being called. It seems like such a silly mistake and something I should have known (and likely many do), but hopefully this post can help at least one person avoid the same mistake.
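The overload-resolution behavior is easy to reproduce in isolation. The sketch below uses hypothetical overloads (returning a label instead of writing, so the chosen overload is visible); the identity conversion from int to int beats the implicit constant conversion to byte, so bare literals always pick the int overload unless you cast.

```csharp
using System;

class Writer
{
    // Hypothetical stand-ins for the Write overloads in the post;
    // each returns a label so we can see which one was chosen.
    public string Write(byte value) => "byte overload";
    public string Write(int value)  => "int overload";
}

class Program
{
    static void Main()
    {
        var writer = new Writer();

        Console.WriteLine(writer.Write(0x00));       // int overload: a hex literal is an Int32
        Console.WriteLine(writer.Write(0b00));       // int overload: a binary literal is too
        Console.WriteLine(writer.Write((byte)0x00)); // byte overload: the cast forces it
    }
}
```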
There's one final note I'd like to add. The above revelation never became a problem in anything close to production code, because I caught the issue through good unit testing. This is just another example of why unit testing, and especially writing valuable tests (not just tests that chase code coverage), is so important.