In the early days of programming, different regions had their own standards for representing the characters used in their writing systems, i.e., their own encoding systems. Some, like the widespread ASCII, fit every character into the non-negative range of a signed byte (0-127), while others used the full unsigned byte range (0-255) or multi-byte sequences, as in the many code pages, extended-ASCII variants, and platform-specific encodings such as Shift-JIS, EUC-KR, GB2312, and Big5. This inevitably led to conflicts: text written in one encoding could be misinterpreted when read in another, producing what is variously called "garbled text," "mojibake," or the "encoding nightmare." What made the problem worse was that encodings were implemented not only in programming-language compilers and editors but also in operating systems and display servers/terminals.
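To make the mismatch concrete, here is a minimal Python sketch (the sample string and the encoding pair are illustrative choices, not taken from any particular system): encoding Japanese text as Shift-JIS and then decoding those same bytes as Windows-1252 produces exactly the kind of mojibake described above.

```python
# Minimal sketch of mojibake: the same bytes read under two encodings.
text = "文字化け"  # Japanese for "garbled text" (mojibake)

raw = text.encode("shift_jis")  # bytes as a Shift-JIS system would write them
garbled = raw.decode("cp1252")  # the same bytes misread as Windows-1252
print(garbled)                  # prints something like '•¶Žš‰»‚¯'

# Plain ASCII is unaffected: its characters all fall in 0-127, a range
# these encodings share, so ASCII-only text round-trips cleanly.
assert "hello".encode("shift_jis") == "hello".encode("cp1252") == b"hello"
```

The final assertion hints at why ASCII survived as a common denominator: most later encodings kept the 0-127 range compatible, so only the bytes above 127 (or multi-byte sequences) were at risk of garbling.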