Number Base Converter
Convert numbers between decimal, binary, octal, and hexadecimal, with two's complement support for signed values.
About Number Base Converter
A number base (or radix) defines the set of digits used to represent numeric values and the positional weight of each digit. The decimal system (base 10) uses digits 0-9 and is the standard for human arithmetic, but computers operate natively in binary (base 2), using only 0 and 1 to represent all data as sequences of electrical on/off states. Octal (base 8) and hexadecimal (base 16) emerged as convenient shorthands for binary: each octal digit maps to exactly three binary digits, and each hex digit maps to exactly four, making them compact human-readable representations of binary data. When a developer sees the hex value 0xFF, they can instantly decompose it into the binary pattern 1111 1111 -- eight bits, all set to one -- which is far more intuitive than reading the decimal equivalent 255 and mentally converting it.
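The 0xFF example can be verified directly in Python, whose integer literals accept all four bases:

```python
# The same number written in each base Python supports natively.
value = 0xFF                          # hexadecimal literal
assert value == 255                   # decimal equivalent
assert bin(value) == "0b11111111"     # eight bits, all set to one
assert value == 0b11111111 == 0o377   # binary and octal literals agree
```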
Hexadecimal is ubiquitous in software engineering. Memory addresses, color values (#FF5733), MAC addresses (00:1A:2B:3C:4D:5E), UUIDs, cryptographic hashes, and byte-level data inspection all use hex notation because it directly reflects the underlying binary structure in a compact form. Octal, while less common today, remains relevant in Unix file permissions (chmod 755 sets rwxr-xr-x) and some legacy systems. Binary representation is essential when working with bitwise operations, hardware registers, network protocol flags, and any domain where individual bit positions carry specific meaning -- such as the TCP flags field, where each of the original six control bits (URG, ACK, PSH, RST, SYN, FIN) controls a distinct behavior.
Beyond representation, understanding number bases is critical for working with two's complement (the standard method for representing signed integers in binary), bit manipulation operations (AND, OR, XOR, shift), and the boundaries of fixed-width integer types. Knowing that a signed 8-bit integer ranges from -128 to 127 (because 10000000 in two's complement is -128) prevents subtle overflow bugs. Similarly, understanding that a left shift by N positions is equivalent to multiplication by 2^N -- but only when no bits are shifted out of the register width -- enables writing performant low-level code while avoiding undefined behavior. A number base converter is the essential calculator for this kind of work, translating values between representations instantly so developers can reason about the same number in whichever base is most natural for the task at hand.
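Both facts in this paragraph -- the shift/multiply equivalence and the 8-bit signed boundary -- can be checked in a few lines of Python. The sign-extension expression below is one common idiom, not the tool's implementation:

```python
# A left shift by n multiplies by 2**n while no bits fall off the width.
x = 5
assert x << 3 == x * 2**3 == 40

# The bit pattern 10000000, read as signed 8-bit two's complement, is -128.
bits = 0b10000000
signed = bits - 256 if bits & 0x80 else bits  # subtract 2**8 when the sign bit is set
assert signed == -128
```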
How to Use the Number Base Converter
- Enter a numeric value in any supported base. Prefix the input according to common programming conventions: no prefix for decimal, 0b for binary, 0o for octal, or 0x for hexadecimal -- or simply select the input base from the dropdown.
- Select the input base (radix) if it was not auto-detected. Supported bases typically include binary (2), octal (8), decimal (10), and hexadecimal (16), with some converters supporting arbitrary bases up to 36.
- The tool instantly converts the input to all other supported bases and displays the results. Each output field shows the value with appropriate grouping -- binary digits grouped in fours, hex digits grouped in pairs -- for readability.
- For signed integer interpretation, toggle the bit width (8-bit, 16-bit, 32-bit, 64-bit) to see how the value would be represented in two's complement, including whether it is positive or negative in that width.
- Use the bitwise visualization to see each individual bit position and its decimal weight, which is especially helpful when working with flag fields, bitmasks, or hardware registers.
- Copy the converted value in your desired base using the copy button next to each output, ready for pasting into code, configuration files, or documentation.
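Internally, a converter following the prefix conventions above can lean on Python's `int(text, 0)`, which auto-detects 0b/0o/0x prefixes. This is a minimal sketch of the idea, not the tool's actual code:

```python
def convert_all(text: str) -> dict:
    """Parse a prefixed numeric literal and render it in the four common bases."""
    n = int(text, 0)  # base 0 auto-detects 0b/0o/0x; bare digits are decimal
    return {"bin": bin(n), "oct": oct(n), "dec": str(n), "hex": hex(n)}

assert convert_all("0xFF") == {
    "bin": "0b11111111", "oct": "0o377", "dec": "255", "hex": "0xff",
}
```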
Common Use Cases
Debugging Bitwise Operations and Flag Fields
Network protocols, file formats, and hardware interfaces frequently pack multiple boolean values into a single integer using bit flags. When debugging why a particular flag is not being set or read correctly, converting the integer to binary makes each flag visible as an individual bit. For example, converting the TCP flags value 0x12 to binary yields 00010010, revealing that the SYN and ACK bits are set (positions 1 and 4, counting from the least significant bit) -- confirming this is a SYN-ACK packet.
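A short Python sketch of this kind of flag check, with bit positions following the classic TCP layout (FIN at bit 0):

```python
# Classic TCP control-bit positions, least significant first.
FIN, SYN, RST, PSH, ACK, URG = (1 << i for i in range(6))

flags = 0x12
assert format(flags, "08b") == "00010010"  # render as an 8-bit binary string
assert flags & SYN and flags & ACK         # both set: a SYN-ACK packet
assert not (flags & FIN)                   # FIN is clear
```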
Working with Unix File Permissions
Unix file permissions use octal notation where each digit represents a three-bit field for read (4), write (2), and execute (1) permissions. Converting between octal and binary makes the permission structure explicit: chmod 755 translates to binary 111 101 101, meaning the owner has all permissions (rwx), while group and others have read and execute but not write (r-x). This conversion is essential when configuring permissions programmatically or debugging access control issues.
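The octal-to-rwx mapping can be sketched in Python; the `rwx` helper here is illustrative, not a standard library function:

```python
mode = 0o755
assert format(mode, "09b") == "111101101"  # three 3-bit fields: owner, group, others

def rwx(mode: int) -> str:
    """Render a 9-bit permission mode in rwxr-xr-x style."""
    out = []
    for shift in (6, 3, 0):  # owner, group, others
        bits = (mode >> shift) & 0b111
        out.append(("r" if bits & 4 else "-") +
                   ("w" if bits & 2 else "-") +
                   ("x" if bits & 1 else "-"))
    return "".join(out)

assert rwx(0o755) == "rwxr-xr-x"
assert rwx(0o644) == "rw-r--r--"
```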
Understanding Memory Addresses and Hex Dumps
Debuggers, memory analyzers, and hex editors display data as hexadecimal values. When investigating a crash dump or inspecting binary file formats, developers need to convert between hex and decimal to interpret pointer values, calculate offsets, and understand data alignment. Converting a hex address like 0x7FFF5FBFF8A0 to decimal helps when comparing with stack size limits or computing distances between memory regions.
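Hex address arithmetic is easy to sanity-check in Python; the second address below is hypothetical, chosen only to illustrate an offset calculation:

```python
# Distance between two addresses, computed directly on hex literals.
stack_top = 0x7FFF5FBFF8A0  # address from a hypothetical stack trace
frame     = 0x7FFF5FBFF000  # hypothetical frame base for illustration
offset = stack_top - frame
assert offset == 0x8A0 == 2208  # same distance in hex and decimal
assert hex(offset) == "0x8a0"
```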
Interpreting Cryptographic Hashes and Binary Data
Cryptographic hashes (SHA-256, MD5), encryption keys, and binary protocol payloads are typically displayed as hexadecimal strings. Converting individual bytes from hex to decimal or binary helps when implementing protocol parsers, verifying hash computations step by step, or understanding the structure of binary formats like PNG headers or TLS handshake messages where specific byte values carry defined meanings.
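For example, the 8-byte PNG signature (89 50 4E 47 0D 0A 1A 0A in the PNG specification) can be unpacked and inspected byte by byte in Python:

```python
# The PNG file signature, written as a hex string and decoded to raw bytes.
sig = bytes.fromhex("89504E470D0A1A0A")
assert sig[1:4] == b"PNG"      # bytes 0x50 0x4E 0x47 spell "PNG" in ASCII
assert sig[0] == 0x89 == 137   # high bit set, to catch 7-bit transmission damage
assert len(sig) == 8
```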
Frequently Asked Questions
Why do computers use binary instead of decimal?
Computers use binary because digital circuits are built from transistors that have two reliable states: on (high voltage, representing 1) and off (low voltage, representing 0). While it is theoretically possible to build circuits with more states (ternary computers have been researched), two-state circuits are far simpler to manufacture reliably, consume less power, and are more resistant to electrical noise. Every higher-level abstraction -- characters, images, floating-point numbers -- is ultimately encoded as sequences of these binary digits.
What is two's complement and why is it used for negative numbers?
Two's complement is a method for representing signed integers in binary where the most significant bit indicates the sign (0 for positive, 1 for negative). To negate a number, you invert all bits and add one. For example, in 8-bit two's complement, 5 is 00000101 and -5 is 11111011. The key advantage is that addition works identically for signed and unsigned numbers -- the hardware does not need separate circuits for signed arithmetic. It also avoids the problem of having two representations for zero (positive zero and negative zero) that plagues the older ones' complement and sign-magnitude schemes.
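The 5/-5 example can be reproduced with a small helper (a sketch: Python's integers are unbounded, so masking to a fixed width is needed):

```python
def twos_complement(n: int, bits: int) -> str:
    """Binary string of n at the given width; negatives wrap via two's complement."""
    return format(n & ((1 << bits) - 1), f"0{bits}b")

assert twos_complement(5, 8)  == "00000101"
assert twos_complement(-5, 8) == "11111011"

# Invert all bits and add one yields the same pattern:
assert (~5 + 1) & 0xFF == 0b11111011
```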
Why is hexadecimal preferred over binary for displaying byte values?
Hexadecimal is preferred because each hex digit represents exactly 4 bits, so a byte (8 bits) is always exactly two hex digits. This creates a compact, fixed-width representation: the byte 11111111 in binary is simply FF in hex, and the 32-bit value 11000000101010000000000100000001 becomes C0A80101 (which happens to be the IP address 192.168.1.1). Binary is too verbose for practical use with multi-byte values, while decimal obscures the bit-level structure that is often important in systems programming.
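The 32-bit example checks out directly in Python:

```python
# 32 binary digits collapse to 8 hex digits, one per 4-bit group.
ip = 0b11000000101010000000000100000001
assert format(ip, "08X") == "C0A80101"

# Extracting each byte recovers the dotted-quad IP address.
assert [(ip >> s) & 0xFF for s in (24, 16, 8, 0)] == [192, 168, 1, 1]
```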
What is the maximum value that can be stored in N bits?
For an unsigned integer, the maximum value in N bits is 2^N - 1. So 8 bits can store 0 to 255, 16 bits can store 0 to 65,535, and 32 bits can store 0 to 4,294,967,295. For signed integers using two's complement, the range is -(2^(N-1)) to 2^(N-1) - 1. An 8-bit signed integer ranges from -128 to 127, and a 32-bit signed integer ranges from -2,147,483,648 to 2,147,483,647. Knowing these boundaries is essential for preventing integer overflow bugs in languages like C, C++, and Rust.
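These boundaries follow directly from the formulas; the two helpers below are illustrative names, not standard functions:

```python
def unsigned_max(bits: int) -> int:
    """Largest value an unsigned N-bit integer can hold: 2**N - 1."""
    return 2**bits - 1

def signed_range(bits: int) -> tuple:
    """(min, max) for a two's complement signed N-bit integer."""
    return -(2**(bits - 1)), 2**(bits - 1) - 1

assert unsigned_max(8) == 255
assert unsigned_max(32) == 4_294_967_295
assert signed_range(8) == (-128, 127)
assert signed_range(32) == (-2_147_483_648, 2_147_483_647)
```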
How do I convert between ASCII characters and their numeric values?
Each ASCII character is assigned a number from 0 to 127. The uppercase letter 'A' is 65 (0x41), 'a' is 97 (0x61), the digit '0' is 48 (0x30), and a space is 32 (0x20). Converting between bases helps you work with ASCII at the byte level: when you see 0x48 0x65 0x6C 0x6C 0x6F in a hex dump, converting each byte to decimal (72, 101, 108, 108, 111) and looking up the ASCII table reveals the string 'Hello'. This skill is fundamental when debugging network protocols, parsing binary file formats, or working with character encoding issues.
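The 'Hello' hex dump decodes exactly as described:

```python
# Character codes match across decimal and hex forms.
assert ord("A") == 65 == 0x41
assert chr(0x30) == "0"

# The hex-dump bytes 0x48 0x65 0x6C 0x6C 0x6F, as decimal and as text.
dump = bytes([0x48, 0x65, 0x6C, 0x6C, 0x6F])
assert list(dump) == [72, 101, 108, 108, 111]
assert dump.decode("ascii") == "Hello"
```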
What bases other than 2, 8, 10, and 16 are used in computing?
Base 36 is occasionally used for compact alphanumeric identifiers, using digits 0-9 and letters A-Z (e.g., URL shorteners). Base 32 encoding (RFC 4648) is used in TOTP/HOTP authentication codes and some file-sharing systems. Base 58 (Bitcoin's Base58Check) omits visually ambiguous characters like 0/O and I/l to create human-friendly encodings for cryptocurrency addresses. Base 64 is ubiquitous for encoding binary data as ASCII text in email attachments, data URIs, and API payloads. Each base represents a different trade-off between compactness and the character set used.
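A base-36 encoder is only a few lines. The `to_base36` helper below is an illustrative sketch: Python's `int()` can already parse base 36, but has no built-in encoder for it:

```python
import string

DIGITS = string.digits + string.ascii_uppercase  # 0-9 then A-Z, 36 symbols

def to_base36(n: int) -> str:
    """Encode a non-negative integer in base 36, as URL shorteners often do."""
    if n == 0:
        return "0"
    out = []
    while n:
        n, r = divmod(n, 36)  # peel off the least significant base-36 digit
        out.append(DIGITS[r])
    return "".join(reversed(out))

assert to_base36(1_000_000) == "LFLS"
assert int("LFLS", 36) == 1_000_000  # the built-in parser round-trips it
```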