A char * is a pointer to 8-bit characters, where a wstring (wchar_t *) holds 16-bit wide characters on Windows. I don't know if you could just cast the char * to the wider type. Look up how to convert a char to a Unicode char. Mike used it all through his book.
push you ; haha!
Casting each character up to a wide character should work, if I'm correct. I believe the ASCII codes align with the first 128 Unicode code points. I looked into this once, but I very easily could be wrong.
Oh yeah... duh! Mr. Mike does have something to say about that in his book (WideCharToMultiByte/MultiByteToWideChar).
OK, here's another question on the subject: when do you use L"", TEXT(""), and/or _T("")? I use TCHAR with _T(""), but I can't use that with wchar_t (it works with L"", though). So when (and why/how) do I use L"" versus TEXT("") or _T("")?
All these things are for string literals. What happens is, when you compile, the compiler decides whether the build is Unicode or not and then turns all the _T("") and whatnot into either char or wchar_t strings. The L prefix marks a wide-string literal, so when you do L"Blah blah blah" it makes each character in that string a 16-bit character (on Windows). All of these things only work with string constants, however.
To change a non-Unicode string to a Unicode string you need a function such as Bytestounicode(), which may not do what you need it to do. Nonetheless, you need a function to translate it, and remember that takes processor time and memory. It's almost easier to go back through the code (or start new code) and uniformly make all characters either Unicode or not Unicode. You can do this with the methods expressed in Mike's book that you have listed.