No Data Corruption & Data Integrity
Find out what ‘No Data Corruption & Data Integrity’ means for the information stored in your web hosting account.
Data corruption refers to files being damaged as a result of a hardware or software failure, and it is one of the main problems web hosting companies face: the larger a hard drive is and the more data it holds, the more likely it is that some of that data will be corrupted. There are various fail-safes, but the information often gets corrupted silently, so neither the file system nor the administrators notice anything. As a result, a corrupted file is treated as a healthy one, and if the hard drive is part of a RAID, that file is duplicated on all the other drives. In theory this is done for redundancy, but in practice it only spreads the damage. Once a file is corrupted, it becomes partially or entirely unreadable: a text document will no longer open properly, an image will show a random mix of colors if it opens at all, and an archive will be impossible to unpack, so you risk losing your website content. Although the most widely used server file systems include various consistency checks, they often fail to detect a problem early enough, or they need a very long time to verify all the files, during which the server is not operational.
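As a rough illustration of how a checksum can reveal silent corruption, here is a minimal sketch in Python. It is a generic example, not the mechanism used by any particular file system, and the file path is hypothetical: a digest is recorded while the file is known to be good and recomputed later, so any silent bit flip shows up as a mismatch.

```python
import hashlib

def file_checksum(path: str) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record the checksum while the file is known to be good...
original = file_checksum("website/index.html")

# ...and recompute it later; any silent change produces a different digest.
if file_checksum("website/index.html") != original:
    print("Silent corruption detected - restore this file from a good copy.")
```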
No Data Corruption & Data Integrity in Shared Hosting
We guarantee the integrity of the information uploaded to every shared hosting account created on our cloud platform because we use the advanced ZFS file system. It was designed specifically to prevent silent data corruption by keeping a unique checksum for every single file. We store your information on multiple NVMe drives that operate in a RAID, so the same files exist in several places at once. ZFS compares the digital fingerprint of each file on all of the drives in real time, and if the checksum of any copy differs from what it should be, the file system replaces that copy with an undamaged version from another drive in the RAID. With file systems that do not use checksums, data can be silently corrupted and the bad file can eventually be duplicated on every drive, but since this cannot happen on a server running ZFS, you do not have to worry about the integrity of your data.
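A much simplified sketch of this self-healing idea, assuming a known-good checksum and mirrored copies of the same data on several drives, is shown below. It is not ZFS's actual implementation; the drive names and data are made up purely for illustration.

```python
import hashlib

def checksum(data: bytes) -> str:
    """Digital fingerprint of a block of data."""
    return hashlib.sha256(data).hexdigest()

def self_heal(copies: dict[str, bytes], expected: str) -> dict[str, bytes]:
    """Replace any mirrored copy whose checksum no longer matches the
    expected value with a copy that still verifies correctly."""
    good = next(data for data in copies.values() if checksum(data) == expected)
    return {
        drive: (data if checksum(data) == expected else good)
        for drive, data in copies.items()
    }

# Three mirrored copies of the same data; one has silently flipped a bit.
expected = checksum(b"original content")
mirrors = {
    "nvme0": b"original content",
    "nvme1": b"originel content",   # silent corruption
    "nvme2": b"original content",
}
repaired = self_heal(mirrors, expected)
assert all(checksum(d) == expected for d in repaired.values())
```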