We explain what a bit is, what its different uses are, and how values are calculated with this basic unit of computing.
What is a bit?
In computing, each digit of the binary numbering system is called a bit (a contraction of the English term binary digit). The system is so named because it comprises only two basic values, 1 and 0, with which countless binary conditions can be represented: on and off, true and false, present and absent, and so on.
A bit is, then, the minimum unit of information used in computing, whose systems are all built on this binary code. Each bit represents one of two values, 1 or 0, but by combining several bits many more combinations can be obtained, for example:
2-bit model (4 combinations):
00 – Both off
01 – First off, second on
10 – First on, second off
11 – Both on
With these two bits we can represent four distinct values. Now suppose we have 8 bits (one octet, equivalent in most systems to one byte): 2^8 = 256 different values are obtained.
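The rule behind these counts can be sketched in a few lines of Python. This is an illustrative example (the function name is ours, not from the article): each additional bit doubles the number of combinations.

```python
def combinations(n_bits: int) -> int:
    # Each bit can be 0 or 1, so n bits yield 2 * 2 * ... * 2 = 2**n values.
    return 2 ** n_bits

print(combinations(2))  # 4 values: 00, 01, 10, 11
print(combinations(8))  # 256 values for one octet (byte)
```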
In this way, the binary system operates by paying attention both to the value of each bit (1 or 0) and to its position in the represented string: each step to the left doubles a position's value, and each step to the right halves it. For example:
To represent the number 20 in binary:

Binary value:               1    0    1    0    0
Numerical value per position: 16   8    4    2    1
Result: 16 + 0 + 4 + 0 + 0 = 20
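The positional rule above can be written as a short Python sketch (the function name is our own, for illustration): each bit set to 1 contributes 2 raised to its position, counting from the right starting at 0.

```python
def binary_to_int(bits: str) -> int:
    total = 0
    for position, bit in enumerate(reversed(bits)):
        if bit == "1":
            total += 2 ** position  # positions weigh 1, 2, 4, 8, 16, ...
    return total

print(binary_to_int("10100"))  # 16 + 4 = 20
```

Python's built-in `int("10100", 2)` performs the same conversion.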
Another example: to represent the number 2.75 in binary, assuming the binary point falls in the middle of the figure:

Binary value:                 0    1    0  .  1     1
Numerical value per position:  4    2    1     0.5   0.25
Result: 0 + 2 + 0 + 0.5 + 0.25 = 2.75
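The same idea extends to fractional figures: digits left of the binary point weigh 2^0, 2^1, 2^2, ... while digits to its right weigh 2^-1, 2^-2, ... A minimal sketch (function name ours, for illustration):

```python
def binary_to_float(bits: str) -> float:
    integer_part, _, fraction_part = bits.partition(".")
    # Left of the point: weights 1, 2, 4, ... reading right to left.
    value = sum(2 ** i for i, b in enumerate(reversed(integer_part)) if b == "1")
    # Right of the point: weights 0.5, 0.25, ... reading left to right.
    value += sum(2 ** -(i + 1) for i, b in enumerate(fraction_part) if b == "1")
    return value

print(binary_to_float("010.11"))  # 2 + 0.5 + 0.25 = 2.75
```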
Bits with value 0 (off) do not contribute to the sum; only those with value 1 (on) are counted, and their numerical equivalent depends on their position in the string. This representation mechanism is then extended to alphanumeric characters, as in the ASCII encoding.
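ASCII illustrates this extension: it assigns each character a numeric code that fits in 7 bits. Python's built-in `ord` exposes the mapping, and the `"08b"` format spec shows the code as an 8-bit string:

```python
for char in "Bit":
    # ord gives the ASCII code; format renders it as 8 binary digits.
    print(char, ord(char), format(ord(char), "08b"))
# For example, 'B' has code 66, which is 01000010 in binary.
```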
Bits also describe how a computer's microprocessor operates: there are 4-, 8-, 16-, 32- and 64-bit architectures. This means that the microprocessor works with internal registers of that width, which determines the calculation capacity of its Arithmetic Logic Unit (ALU).
For example, the first computers of the x86 series (built on the Intel 8086 and Intel 8088) both had 16-bit processors, and the marked difference in their speeds had less to do with processing capacity than with their external data buses, which were 16 and 8 bits wide respectively.
Similarly, bits are used to measure the storage capacity of a digital memory.
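As a final sketch of this use, converting between bits and larger storage units follows directly from the definitions (8 bits per byte, and 1024 bytes per kibibyte under binary prefixes); the function names here are ours, for illustration:

```python
def bits_to_bytes(n_bits: int) -> float:
    # One byte groups 8 bits.
    return n_bits / 8

def bytes_to_kibibytes(n_bytes: float) -> float:
    # Binary prefix: 1 KiB = 1024 bytes.
    return n_bytes / 1024

print(bits_to_bytes(8))                    # 1.0 byte
print(bytes_to_kibibytes(bits_to_bytes(8192)))  # 1.0 KiB
```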