Unicode - Character sets
Unicode is a defined list of characters, each with its own numeric code point, that can be recognized by computer hardware and software. It is an information technology standard for the consistent encoding, representation, and handling of text expressed in most of the world's writing systems.
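For example, the short Python sketch below uses the built-in ord() and chr() functions to show that every character, from whatever writing system, has one agreed numeric code point:

# Every Unicode character maps to one agreed code point.
for ch in ["A", "é", "€", "你", "😀"]:
    # ord() gives the code point; chr() turns a code point back into the character.
    print(ch, ord(ch), hex(ord(ch)), chr(ord(ch)) == ch)

Running this prints the decimal and hexadecimal code point of each character, whether it is an English letter, a currency symbol, a Chinese character or an emoji.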
ASCII (American Standard Code for Information Interchange) was the first widespread encoding scheme used to represent text in computers. It is a 7-bit code, so it only defines character codes for 128 characters - enough for the most common English letters, digits, and punctuation, but not enough for the rest of the world, which needed many more characters to be encoded.
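A short Python sketch illustrates this limit:

# ASCII only covers code points 0 to 127.
print(ord("A"))               # 65
print("Hi!".encode("ascii"))  # works: every character fits in 7 bits
try:
    "é".encode("ascii")       # é is outside ASCII's 128 characters
except UnicodeEncodeError as err:
    print(err)

The accented character cannot be represented in ASCII at all, which is exactly the limitation described above.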
For a while, depending on where in the world you were, the same code value (particularly in the extended range above 127 that different regions added to ASCII) could display a different character. This led to confusion and inconsistencies. Eventually, different parts of the world created their own encoding schemes, resulting in a patchwork of incompatible encodings of different lengths. Programs had to work out which encoding scheme they were supposed to use, which added to the confusion.
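The sketch below shows the problem in Python: the same three bytes decode to completely different text depending on which legacy encoding a program happens to assume (the three encodings chosen here are just examples):

# The same raw bytes, interpreted under three different legacy encodings.
data = bytes([0xE9, 0xE8, 0xE7])
for legacy in ("latin-1", "iso8859-7", "cp866"):
    # Each code page assigns its own characters to byte values 128-255.
    print(legacy, data.decode(legacy))

A Western European, a Greek and a Russian code page each turn the identical bytes into different characters.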
It became clear that a new character encoding scheme was needed to unify all the different encoding schemes and limit the confusion between computers. This led to the creation of the Unicode standard, which is now widely used to represent text in computers.
The Unicode standard defines values for over 128,000 characters, which can be seen on the Unicode Consortium's website. It has several character encoding forms, including:

UTF-8
UTF-16
UTF-32
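A short Python sketch shows the practical difference between these encoding forms: the same character becomes a different number of bytes in each one.

# One character, three Unicode encoding forms, three different byte patterns.
ch = "€"  # code point U+20AC
print("UTF-8   :", ch.encode("utf-8"))     # 3 bytes
print("UTF-16LE:", ch.encode("utf-16-le")) # 2 bytes
print("UTF-32LE:", ch.encode("utf-32-le")) # 4 bytes

UTF-8 is the most common choice on the web, because characters in the ASCII range still take only one byte each.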
By using Unicode, computers can display and process text in different languages and character sets without any confusion or inconsistencies.
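For instance, a single string can mix several writing systems and still round-trip through UTF-8 without loss (a minimal Python sketch):

# One Unicode string mixing three writing systems.
text = "Hello, Καλημέρα, こんにちは"
encoded = text.encode("utf-8")          # bytes for storage or transmission
assert encoded.decode("utf-8") == text  # decodes back identically on any system
print(len(text), "characters stored in", len(encoded), "bytes")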