Data transmission powers everything from emails to complex financial transactions.
You need to ensure accuracy and data integrity by keeping these transmissions error-free. Many businesses use data exchange platforms to ensure the data's meaning isn't altered as it moves between systems, and error detection plays a critical role in identifying and correcting transmission errors.
A data error is a condition where the receiver's data doesn't match the sender's information. It typically arises when digital signals suffer from noise during transmission, which corrupts the binary bits. Simply put, a 0 bit might change to 1, and a 1 bit might change to 0.
To prevent such errors, error detection codes are added as extra data to digital messages.
Error detection identifies mistakes in data transmission from sender to receiver. These errors might be due to noise or other impairments in the transmission.
Error detection techniques add additional data to a message, which receivers use to verify that the information arrived error-free. If there's any inconsistency, the receiver knows the received message contains errors. These techniques can be systematic or non-systematic.
Systematic error detection adds a fixed number of check bits, also known as parity data, to the message during transmission. The check bits are derived from the data bits by an encoding algorithm. The receiver applies the same algorithm to the received data bits and compares the result with the received check bits. If there's a discrepancy between the computed and received check bits, an error is detected.
To recover and correct the original data, the receiver can use a decoding algorithm that processes the received data bits and check bits.
On the other hand, a non-systematic code transforms the original message into an encoded message that retains the same information but contains at least as many bits as the original.
You can select the error detection technique based on the communication channel’s attributes. In common channel models like the memoryless model, errors occur randomly, and in dynamic models, errors occur primarily in bursts.
Below are different types of errors you’ll come across in data transmission.
In a single-bit error, a single binary digit gets altered during data transmission, resulting in an incorrect data unit. Either a 1 changes to 0 or a 0 changes to 1, corrupting the data at the receiver's end.
These errors are frequent in parallel data transmissions. Suppose eight wires send eight bits of a byte, and one wire is noisy; a single bit gets corrupted per byte.
However, these errors are least likely to occur in serial data transmission.
A multiple-bit error occurs when more than one bit is affected during data transmission. Compared to single-bit errors, multiple-bit errors are rare. They typically occur in high-interference and high-noise environments.
A burst error means that more than one consecutive bit is altered from 1 to 0 or vice versa. Burst error length is measured from the first corrupted bit to the last corrupted bit; the bits in between may or may not be corrupted.
These errors are frequent in serial data transmissions, where the number of affected bits depends on noise duration and data rate. The duration of noise in burst errors is longer than in single-bit errors.
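To illustrate how burst length is measured, here's a small Python sketch; the bit strings are invented for the example:

```python
# Hypothetical 16-bit transmission hit by a burst error
sent     = "0100010001000011"
received = "0101110101100011"

# Positions where the received bits differ from the sent ones
diffs = [i for i, (s, r) in enumerate(zip(sent, received)) if s != r]

# Burst length runs from the first to the last corrupted bit,
# even though some bits in between arrived intact
burst_length = diffs[-1] - diffs[0] + 1
print(diffs, burst_length)  # [3, 4, 7, 10] 8
```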
There are several error detection techniques professionals adopt to ensure data integrity.
The parity check adds an extra bit, known as the parity bit, to data.
The parity bit is set to 1 if the block contains an odd number of 1s and to 0 if it contains an even number of 1s. This ensures that the total number of 1s in the block, parity bit included, is even.
At the receiver's end, the parity bit is recalculated from the received data bits and compared with the received parity bit. The receiver accepts the data if the total number of 1s is even. If it's odd, the receiver knows that an error has occurred.
This is known as Single Parity Checking. However, it detects only errors that affect an odd number of bits. If two bits are altered, the parity still comes out even, and the technique fails to detect the error.
In such cases, Two-Dimensional Parity Checking is employed. It organizes data in a table, and parity bits are computed for each row, the same as Single Parity Checking. However, parity bits are also calculated for all columns.
The parity bits are compared with the computed ones at the receiving end.
Two-Dimensional Parity Checking does a better job than Single Parity Checking, but it has limitations. If two bits in one data block get corrupted and the bits in precisely the same positions in another block are also corrupted, the row and column parities cancel out, and the error goes undetected. More generally, it can miss some errors of four bits or more.
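Here's a minimal Python sketch of Two-Dimensional Parity Checking with even parity; the data blocks are made up for illustration. The receiver would recompute both sets of parity bits and compare them against the received ones:

```python
def parity_bit(bits):
    # Even parity: 1 if the count of 1s is odd, 0 otherwise
    return sum(bits) % 2

def two_d_parity(blocks):
    # One parity bit per row (data block)...
    row_parity = [parity_bit(row) for row in blocks]
    # ...and one per column (bit position across blocks)
    col_parity = [parity_bit(col) for col in zip(*blocks)]
    return row_parity, col_parity

blocks = [
    [1, 0, 1, 1],
    [0, 1, 1, 0],
    [1, 1, 0, 0],
]
print(two_d_parity(blocks))  # ([1, 0, 0], [0, 0, 0, 1])
```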
To see Single Parity Checking in action, suppose Amy wants to send Bill a 7-bit data word, represented as “0011011”. Since the original data contains an even number of 1s (four), Amy appends a 0 parity bit to keep the count even.
Thus, the data packet becomes “00110110”.
Imagine there was interference, and one bit was changed: Bill receives “00110010.” The number of 1s here is 3, an odd number. Under their agreement to use Even Parity (where the number of 1s should be even), Bill can tell that the data is incorrect.
Had Amy and Bill agreed on Odd Parity instead, Amy would have set the parity bit to 1 to make the count of 1s odd. Bill would then check for an odd number of 1s and, finding one, consider the data correct.
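Amy and Bill's even-parity exchange might look like this in Python (a simplified sketch, not a production implementation):

```python
def add_even_parity(bits):
    # Append a parity bit so the total number of 1s is even
    return bits + str(sum(map(int, bits)) % 2)

def check_even_parity(packet):
    # Valid if the count of 1s, parity bit included, is even
    return sum(map(int, packet)) % 2 == 0

packet = add_even_parity("0011011")      # -> "00110110"
assert check_even_parity(packet)         # Bill accepts

corrupted = "00110010"                   # one bit flipped in transit
assert not check_even_parity(corrupted)  # Bill detects the error
```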
The checksum method adds the binary values and sends a total along with the data. The receiver verifies the data using a similar summing process and compares values to detect errors.
The sender first divides the data into segments, each with a fixed number of bits, then uses 1’s complement arithmetic to add the segments and get the sum. The sum is complemented to produce the checksum.
At the receiver’s end, all segments, including the checksum, are added using 1’s complement arithmetic, and the sum is complemented. The receiver accepts the data if the result is 0; otherwise, it’s discarded.
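The flow can be sketched in Python as follows, assuming 8-bit segments and end-around-carry (1's complement) addition:

```python
def ones_complement_add(a, b, bits=8):
    s = a + b
    while s >> bits:                      # end-around carry
        s = (s & ((1 << bits) - 1)) + (s >> bits)
    return s

def make_checksum(segments, bits=8):
    total = 0
    for seg in segments:
        total = ones_complement_add(total, seg, bits)
    return ~total & ((1 << bits) - 1)     # complement of the sum

def verify(segments, received_checksum, bits=8):
    total = received_checksum
    for seg in segments:
        total = ones_complement_add(total, seg, bits)
    # A clean transmission sums to all 1s; its complement is 0
    return (~total & ((1 << bits) - 1)) == 0

data = [0b10011001, 0b11100010, 0b00100100]
checksum = make_checksum(data)
assert verify(data, checksum)             # receiver accepts
```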
Unlike the checksum, which is based on addition, the CRC technique is based on binary division. In CRC, a sequence of redundant bits, known as cyclic redundancy check bits, is appended to the end of the data unit. These bits are chosen so that the resulting data unit is exactly divisible by a predetermined binary number (the generator).
On the receiver’s side, the data unit is divided by the same generator and accepted if there’s no remainder. If there’s a remainder, the data unit is corrupted.
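A Python sketch of the modulo-2 (XOR-based) division behind CRC; the data word and generator here are a common textbook example, not values from any particular protocol:

```python
def crc_remainder(bits, generator):
    """Modulo-2 long division; returns the remainder as a bit string."""
    n = len(generator) - 1                 # number of CRC bits
    dividend = list(map(int, bits)) + [0] * n
    gen = list(map(int, generator))
    for i in range(len(bits)):
        if dividend[i]:                    # XOR in the generator when the lead bit is 1
            for j, g in enumerate(gen):
                dividend[i + j] ^= g
    return ''.join(map(str, dividend[-n:]))

data = "11010011101100"
generator = "1011"                         # divisor both sides agree on
crc = crc_remainder(data, generator)       # "100"
codeword = data + crc

# Receiver: the codeword divides evenly, so the remainder is all zeros
assert crc_remainder(codeword, generator) == "000"
```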
Advanced error detection methods offer more protection. They're used to detect and correct errors, such as memory errors that might result in server failure if left uncorrected.
Forward error correction (FEC) sends additional redundant data along with the original data, helping receivers detect and correct errors without retransmission.
With FEC, interoperability is achieved when the transmitter and receiver implement the same encoding and decoding rules.
In its simplest form, FEC sends each character several times to avoid data loss. The receiver compares the received copies, and if there's a discrepancy among them, the bit value that appears most frequently is accepted. The exact process of FEC-based communication varies from one system to another.
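A minimal sketch of this repetition-and-majority-vote idea in Python, assuming three copies per bit (real systems choose the repetition factor based on channel noise):

```python
from collections import Counter

def encode_repetition(bits, copies=3):
    # Transmit each bit `copies` times
    return [b for b in bits for _ in range(copies)]

def decode_repetition(received, copies=3):
    decoded = []
    for i in range(0, len(received), copies):
        group = received[i:i + copies]
        # Majority vote recovers the bit if fewer than half the copies flipped
        decoded.append(Counter(group).most_common(1)[0][0])
    return decoded

message = [0, 1, 1, 0]
sent = encode_repetition(message)  # [0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0]
sent[4] ^= 1                       # simulate one flipped bit in transit
assert decode_repetition(sent) == message
```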
Hybrid automatic repeat request (HARQ) combines retransmission and error correction codes to ensure accurate data transmission and reception. In this technique, the receiver sends an acknowledgment (ACK) message to the sender to confirm that the data arrived intact.
If the sender doesn't receive an ACK message, it assumes the data wasn't received correctly and sends it again. Only the erroneous bits or packets are retransmitted, not the complete message, making the communication system more efficient.
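Real HARQ is protocol-specific, but the core retransmit-until-acknowledged loop can be sketched like this toy stop-and-wait version in Python, which uses a single parity bit as the error check and an invented noise model:

```python
import random

def corrupt(packet, p=0.3):
    # With probability p, flip one random bit to simulate channel noise
    if random.random() < p:
        i = random.randrange(len(packet))
        packet[i] ^= 1
    return packet

def parity_ok(packet):
    # Even-parity check over the data bits plus the trailing parity bit
    return sum(packet) % 2 == 0

def send_with_arq(data, max_tries=10):
    packet = data + [sum(data) % 2]          # append even-parity bit
    for attempt in range(1, max_tries + 1):
        received = corrupt(packet.copy())
        if parity_ok(received):              # receiver replies with an ACK
            return received[:-1], attempt
        # No ACK arrives, so the sender retransmits
    raise RuntimeError("delivery failed after retries")

payload, tries = send_with_arq([1, 0, 1, 1])
print(payload, "delivered after", tries, "attempt(s)")
```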
Error detection assists different sectors in transmitting data signals accurately. The techniques have broad applications across several industries.
In telecommunication, error detection maintains the accuracy of voice and data communication. Cellular networks and internet communications use error detection techniques like CRC and FEC to ensure data integrity.
When transmitting data signals over long distances, such as in space, the chances of signal degradation are higher. To maintain data accuracy, satellite communications use advanced FEC methods like LDPC codes or Turbo codes.
These methods provide reliable data communication, ensuring the integrity of information during space missions.
Data integrity is paramount in financial institutions. Banking networks use error detection techniques to keep data accurate while processing transactions, safeguarding it from corruption in transit.
It’s tricky for computers and machines to understand whether the data packets they receive are accurate and consistent with the original transmission. Error detection techniques equip machines to differentiate between accurate and corrupted data, helping them maintain data integrity.
When working with confidential and sensitive data, protecting it against corruption is vital, as corrupted data can have substantial negative impacts.
Learn more about data integrity and understand how data can become corrupted.