Quite often is the short answer.
Have you ever wondered why your 500 GB hard drive shows only about 466 gigabytes once the operating system reports it? While most operating systems use the binary number system to express file and storage sizes, the prefixes for the multiples are based on the metric system. So even though a metric "kilo" equals 1000, a binary "kilo" equals 1024. Confused yet? Don't be surprised, because even the most tech-savvy people often mix the two up. Plainly put, the kilobyte is often used to mean 1024 bytes, even though the prefix "kilo" properly means exactly 1000.
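The arithmetic behind the shrinking drive can be sketched in a few lines. This assumes a marketed capacity of exactly 500 × 10⁹ bytes and an operating system that reports in binary gigabytes; real drives and file systems add further overheads not modelled here.

```python
# Sketch: why a "500 GB" drive reports fewer gigabytes.
# Assumes marketed capacity is exactly 500 * 10^9 bytes (decimal giga)
# and the OS divides by 2^30 (binary giga) when displaying it.
marketed_bytes = 500 * 10**9
reported_gb = marketed_bytes / 2**30
print(f"{reported_gb:.1f}")  # ≈ 465.7
```

The bytes themselves are all still there; only the unit used to count them has changed.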
Essentially it boils down to the difference between binary and decimal units, and the two should be carefully separated. For example, transferring a 1 gigabyte (1000 megabyte) file will take noticeably less time than transferring a true binary gigabyte, known as a gibibyte, which contains 1024 mebibytes (roughly 1074 megabytes). The larger the file, the bigger the difference.
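The transfer-time gap can be estimated directly. The 100 Mbit/s link speed below is a hypothetical figure chosen for illustration, with the M in Mbit/s taken as the decimal 1 000 000.

```python
# Sketch: transfer time of a decimal gigabyte vs a binary gigabyte
# over a hypothetical 100 Mbit/s link (decimal mega: 100 * 10^6 bit/s).
link_bps = 100 * 10**6
decimal_gb = 10**9   # 1 GB  = 1 000 000 000 bytes
binary_gb = 2**30    # 1 GiB = 1 073 741 824 bytes
t_decimal = decimal_gb * 8 / link_bps  # 80.0 s
t_binary = binary_gb * 8 / link_bps    # ≈ 85.9 s
print(f"{t_decimal:.1f} s vs {t_binary:.1f} s")
```

Nearly six extra seconds for the "same" gigabyte, purely because of which prefix convention is in use.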
Metric prefixes run from yocto (10⁻²⁴) to yotta (10²⁴).
The basic unit of binary data is the bit (binary digit). Computers store and measure data in bits, customarily coded as ones and zeros, so all files are kept in binary format and are translated into a higher-level, working format by the operating system and the user software. This coding system is called the binary number system. By comparison, the decimal number system has ten distinct digits, from zero through nine.
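A minimal sketch of the two systems, using an arbitrary example value: the same quantity written once in decimal digits and once in binary digits (bits), with a round-trip conversion to show they denote the same number.

```python
# The same value in decimal (ten digits, 0-9) and binary (two digits, 0-1).
value = 202                  # decimal notation
binary = format(value, "b")  # binary notation: '11001010'
print(binary)
# Converting the bit string back recovers the original decimal value.
assert int(binary, 2) == value
```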
Years ago, at a time when computer capacities barely matched the few tens of kilobytes required by this single web “page”, computer engineers noticed that the binary 2¹⁰ (1024) was very nearly equal to the decimal 10³ (1000) and, purely as a matter of convenience, they began referring to 1024 bytes as a kilobyte. It was, after all, only a 2,4 % difference, and the professionals generally knew what they were talking about among themselves.
Despite its inaccuracy and the inappropriate use of the decimal SI prefix "kilo", the term was also easy for salesmen and shops to use, and it caught on with the public.
As time has passed, kilobytes have grown into megabytes, then gigabytes and now terabytes. The problem is that, at the SI tera-scale (10¹²), the discrepancy with the binary equivalent (2⁴⁰) is no longer the 2,4 % seen at kilo-scale but approaches 10 %. At exa-scale (10¹⁸ versus 2⁶⁰), it exceeds 15 %. Simple mathematics dictates that the bigger the number of bytes, the bigger the difference, so the inaccuracies – for engineers, marketing staff and public alike – are set to grow more and more significant.
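The growth of the discrepancy can be tabulated directly: each SI step multiplies by 10³ while the corresponding binary step multiplies by 2¹⁰, so the gap compounds at every prefix.

```python
# Sketch: how far the binary multiple drifts above the SI multiple
# at each successive prefix, from kilo (10^3 vs 2^10) to exa (10^18 vs 2^60).
prefixes = ["kilo", "mega", "giga", "tera", "peta", "exa"]
for i, name in enumerate(prefixes, start=1):
    decimal = 10 ** (3 * i)
    binary = 2 ** (10 * i)
    gap = (binary / decimal - 1) * 100
    print(f"{name:>4}: binary is {gap:.1f} % larger")
```

Running this shows the drift climbing from 2,4 % at kilo-scale through roughly 10 % at tera-scale to just over 15 % at exa-scale.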
Similar confusions arose between the computing and the telecommunications sectors of the IT world, where data transmission rates have grown enormously over the past few years. Network designers have generally used megabits per second (Mbit/s) to mean 1 048 576 bit/s, while telecommunications engineers have traditionally used the same term to mean 1 000 000 bit/s. Even the usually stated bandwidth of a PCI bus, 133,3 MB/s based on it being four bytes wide and running at 33,3 MHz, is inaccurate because the M in MHz means 1 000 000 while the M in MB means 1 048 576.
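The PCI figure from the paragraph above can be checked numerically. The bus clock is taken here as exactly 33⅓ MHz (the nominal PCI rate) with four bytes per transfer; the only variable is which "mega" is used to express the result.

```python
# Sketch: the PCI bus bandwidth expressed with decimal vs binary "mega".
clock_hz = 4 * 100 * 10**6 / 3        # 33 1/3 MHz (M in MHz = 10^6), 4 bytes wide
decimal_mb_s = clock_hz / 10**6       # ≈ 133.3, the usually quoted figure
binary_mb_s = clock_hz / 2**20        # ≈ 127.2, if M meant 1 048 576
print(f"{decimal_mb_s:.1f} vs {binary_mb_s:.1f}")
```

The familiar 133,3 MB/s is only correct if the M in MB means the same 1 000 000 as the M in MHz.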
Mathematics dictates that the disparities resulting from the mixed and incorrect use of decimal prefixes will become increasingly significant as capacities and data rates continue to grow. In IEC 80000-13:2008, all branches of the IT industry have a tool with which to iron out this inconsistency. It eliminates confusion by setting out the prefixes and symbols for the binary, as opposed to decimal, multiples that most often apply in these fields.
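As an illustration of the standard's approach, here is a hypothetical helper (the function name `iec_format` is my own) that reports a byte count using the binary prefixes defined by IEC 80000-13: Ki = 2¹⁰, Mi = 2²⁰, Gi = 2³⁰ and so on.

```python
# Hypothetical formatter using the IEC 80000-13 binary prefixes,
# which remove the ambiguity of overloaded SI prefixes.
IEC_PREFIXES = ["", "Ki", "Mi", "Gi", "Ti", "Pi", "Ei"]

def iec_format(n_bytes: int) -> str:
    """Express a byte count with the largest fitting binary prefix."""
    value = float(n_bytes)
    for prefix in IEC_PREFIXES[:-1]:
        if value < 1024:
            return f"{value:.1f} {prefix}B"
        value /= 1024
    return f"{value:.1f} {IEC_PREFIXES[-1]}B"

print(iec_format(500 * 10**9))  # the "500 GB" drive → '465.7 GiB'
```

With both vocabularies available, a drive can honestly be sold as 500 GB and just as honestly reported as 465.7 GiB.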
Time will tell if comfortable reading will prevail over technical accuracy.