locked
Chr (VB.NET) vs char (C#)

    Question

  • I am converting a VB.NET program to C#.  One of the features returns the character for an ASCII code.  The (char) cast or Convert.ToChar seems to work in most situations; however, in some cases, such as ASCII(134), I get different results between the VB and C# code.

    In VB, Chr returns the small 't' (dagger) symbol, as shown at http://yorktown.cbe.wwu.edu/sandvig/docs/ASCIICodes.aspx.
    In C#, (char) or Convert.ToChar returns an unrecognizable 'square' symbol.

    I've looked all over but have not seen how to resolve this (other than setting a reference to Microsoft.VisualBasic, which is something I frown upon).

    Any help would be greatly appreciated.

    Thanks

    Friday, October 06, 2006 7:01 PM
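
    The mismatch in the question can be sketched as follows, assuming a US-English system where the ANSI code page is Windows-1252 (which is what VB's Chr uses for codes 128-255); the Chr-equivalent line below is a reconstruction, not code from the thread:

```csharp
using System;
using System.Text;

class ChrVsCastDemo
{
    static void Main()
    {
        // Needed on modern .NET to expose the legacy code pages;
        // .NET Framework (the 2006-era runtime) has them built in.
        Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);

        // C#: a plain cast yields the Unicode code point U+0086, a C1
        // control character that most fonts render as an empty box.
        char cast = (char)134;
        Console.WriteLine((int)cast);          // 134 (U+0086)

        // VB's Chr(134) instead decodes the byte through the system's
        // ANSI code page (Windows-1252 on US systems), yielding the dagger.
        char chrStyle = Encoding.GetEncoding(1252).GetString(new byte[] { 134 })[0];
        Console.WriteLine((int)chrStyle);      // 8224 (U+2020, '†')
    }
}
```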

Answers

  • Essentially, you have no problem -- 134 is 134 regardless of how it's displayed.  And, if I understand you correctly, the 134 is just an intermediate value between encrypting & decrypting, so you shouldn't be displaying it anyway.

    However, I think I may have realized your real problem.  I'm guessing here, but I assume that at some point you are reading some encrypted text (8-bit ASCII characters) into a .NET string (with 16-bit Unicode characters) and some characters are being mapped to new values.  If that's the case, you don't want to convert it to a string, but to a byte[] (using the System.Text.Encoding.ASCII.GetBytes() method).

    Friday, October 06, 2006 8:53 PM

All replies

  • First of all, the official ASCII standard doesn't define any characters for codes above 127 (0x7F).  (The last bit was intended to be used for parity.)  Character-generator chip (and later font) vendors just chose whatever characters they wanted for 128-255.  The range 128-160 is so much in dispute that the Unicode standard doesn't even define displayable characters for those codes.

    So, the problem really is -- when your VB.NET program displays the character, it's using one font.  When your C# program displays it, it's using a different font.

    Finally, the character 134 shown on that page is referred to as a "dagger" (the bottom of it really should be pointier), and 135 is a "double-dagger".

    Friday, October 06, 2006 7:58 PM
  • James,

    Thanks for the response.  The problem is that the program I am converting does encryption, and some of those codes are above 127, which is why I'm running into this.  I understand the point about the fonts... but is there any resolution?

    Thanks

    Friday, October 06, 2006 8:43 PM
  • Essentially, you have no problem -- 134 is 134 regardless of how it's displayed.  And, if I understand you correctly, the 134 is just an intermediate value between encrypting & decrypting, so you shouldn't be displaying it anyway.

    However, I think I may have realized your real problem.  I'm guessing here, but I assume that at some point you are reading some encrypted text (8-bit ASCII characters) into a .NET string (with 16-bit Unicode characters) and some characters are being mapped to new values.  If that's the case, you don't want to convert it to a string, but to a byte[] (using the System.Text.Encoding.ASCII.GetBytes() method).

    Friday, October 06, 2006 8:53 PM
  • James,
    Thanks again for the insight.  The issue is that the encoded string is in the database, so it does contain the daggers '††††' (ASCII 134).  When I try System.Text.Encoding.ASCII.GetBytes, it interprets those characters as ASCII 63.  It seems there is no way for the C# program to identify those characters as ASCII 134.

    Paul

    Monday, October 09, 2006 1:48 PM
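
    The mapping just described can be reproduced directly.  A minimal sketch, assuming the daggers originally came from a Windows-1252 byte stream; the GetEncoding(1252) workaround is a suggestion, not code from the thread:

```csharp
using System;
using System.Text;

class GetBytesDemo
{
    static void Main()
    {
        // Needed on modern .NET for the legacy code pages;
        // built in on .NET Framework.
        Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);

        string encoded = "\u2020";  // the dagger that came back from the database

        // ASCII is 7-bit: anything above 127 falls back to '?' (63).
        byte[] ascii = Encoding.ASCII.GetBytes(encoded);
        Console.WriteLine(ascii[0]);                         // 63

        // A single-byte ANSI code page recovers the original value.
        byte[] ansi = Encoding.GetEncoding(1252).GetBytes(encoded);
        Console.WriteLine(ansi[0]);                          // 134
    }
}
```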
  • ASCII 63 is, naturally enough, '?'.

    How about we try System.Text.Encoding.UTF8.GetBytes()?

    Monday, October 09, 2006 1:54 PM
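
    For what UTF-8 would do with that string: it preserves the character, but not as the single byte 134, so it won't recover the original 8-bit values.  A small sketch:

```csharp
using System;
using System.Text;

class Utf8Demo
{
    static void Main()
    {
        // UTF-8 never loses the character, but the dagger U+2020
        // becomes the three-byte sequence E2 80 A0, not the byte 134.
        byte[] utf8 = Encoding.UTF8.GetBytes("\u2020");
        Console.WriteLine(utf8.Length);                   // 3
        Console.WriteLine(BitConverter.ToString(utf8));   // E2-80-A0
    }
}
```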
  • I've often found that I had to use something like:

    System.Text.Encoding encoder = System.Text.Encoding.Default;

    Monday, October 09, 2006 2:15 PM
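
    A hedged sketch of this last suggestion: on the .NET Framework of this thread's era, Encoding.Default returns the machine's ANSI code page (Windows-1252 on US-English systems), so it round-trips the dagger.  On later .NET runtimes Encoding.Default is UTF-8 instead, so the sketch below pins code page 1252 explicitly:

```csharp
using System;
using System.Text;

class DefaultEncodingDemo
{
    static void Main()
    {
        // Pinning the code page is more predictable than Encoding.Default,
        // whose meaning changed between .NET Framework (ANSI) and .NET (UTF-8).
        Encoding.RegisterProvider(CodePagesEncodingProvider.Instance); // modern .NET only
        Encoding ansi = Encoding.GetEncoding(1252);

        byte[] bytes = ansi.GetBytes("††††");     // four daggers, as in the database
        Console.WriteLine(bytes[0]);              // 134, not 63

        Console.WriteLine(ansi.GetString(bytes) == "††††");  // True
    }
}
```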