I have been following the discussions on getting a TB-scale cloud backup delivered to one's home on a hard drive, and on how low-cost B2 is. However, if the backup stored in the cloud is not identical to the original data on the Mac, hard drives and cost don't matter. So far, all the discussions of Arq 5 have appeared very positive (myself included). I was interested in the download speed and wanted to know whether having a TB-scale backup shipped home on a hard drive is necessary, and that is when I came upon the first negative discussion of Arq 5. I do not know whether only one user (he uploaded 4 TB to Amazon Cloud Drive using Arq 5) had this problem, nor do I have the technical knowledge to judge how accurate his description of the problem is, nor whether only ACD has this problem. My understanding (I am not sure it is correct) of the discussion is as follows: during upload, Arq breaks up, for example, a 1 GB file into about a thousand chunks, and each chunk makes a connection request to Amazon Cloud Drive for permission to upload. When requests are made too fast, ACD sometimes rejects them, and Arq does not notice that some requests were rejected. Larger files encounter more rejections than smaller files, so files in the cloud that hit rejections end up different from the original files on the Mac. The user said that because of this he lost some important files, and that the issue had not been resolved.

I don't think the reviewer's theory is correct. (I am also not sure why the problem would not have shown up during validation, if he had validated his backup.) The relevant steps in Arq's upload process:

3. Split the file into smaller chunks. (This is important: it reduces storage needs, since many files contain identical chunks, so only one copy of each chunk needs to be stored.)
4. Upload the chunk to the chosen cloud storage.
5. Wait for the reply from the server, saying "We received file X with hash 13819...8y12983".
6. Compare the hash the server reported with the hash of your own local chunk.

Hashing is a way to verify that data is identical in both places. If both match, Arq correctly marks that chunk as 100% successfully uploaded and carries on. What's more likely is that Amazon Cloud Drive had a bug. That service is very new and low-quality, has had countless bugs, and has been a huge headache for Arq's developer. I've seen data folders vanish and re-appear in my own Amazon Cloud Drive back when I had that service.

As for Backblaze: their hashing is deeply embedded in their cloud service. They demand a hash before you even start the upload, and nothing gets stored on their servers if the chunk hashes mismatch. This guarantees that the data stored on Backblaze is 100% identical to what was uploaded. And Arq closely watches the server replies, to ensure that the data was stored, before it proceeds. I've reviewed the Backblaze storage design earlier in this thread.

You seem to be a very worried person, with all your dozens of long, concerned posts over the weeks, and that's not a good way to live. I've been a programmer for 20 years and know my stuff. Just sign up for Arq and Backblaze and trust me. They are both beautifully designed internally, and I fully trust Arq's code/architecture and Backblaze's storage stability with my life's work.
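The split/upload/verify loop described above can be sketched in a few lines of Python. This is a minimal illustration, not Arq's actual code: `fake_cloud_upload` is a hypothetical stand-in for the storage API, assumed to reply with the hash of the bytes it actually received, and the 1 MiB chunk size is arbitrary.

```python
import hashlib

CHUNK_SIZE = 1024 * 1024  # 1 MiB per chunk (illustrative size, not Arq's)

def split_into_chunks(data: bytes, size: int = CHUNK_SIZE):
    """Split the file into smaller chunks (identical chunks can be deduplicated)."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def fake_cloud_upload(chunk: bytes) -> str:
    """Hypothetical server: stores the chunk and replies with the hash it computed."""
    return hashlib.sha256(chunk).hexdigest()

def upload_and_verify(data: bytes) -> int:
    """Upload each chunk, wait for the server's hash, and compare it locally."""
    uploaded = 0
    for chunk in split_into_chunks(data):
        server_hash = fake_cloud_upload(chunk)          # upload + server reply
        local_hash = hashlib.sha256(chunk).hexdigest()  # hash of the local copy
        if server_hash != local_hash:
            raise IOError("chunk corrupted in transit; must retry, not proceed")
        uploaded += 1  # hashes match: mark this chunk as successfully stored
    return uploaded
```

Because identical chunks hash to the same value, a client that indexes chunks by hash only needs to store one copy of each, which is the deduplication benefit mentioned in step 3.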
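"They demand a hash before you even start the upload" refers to Backblaze B2's native upload call, which requires the client to declare the content's SHA-1 in a request header; the server rejects the upload if the received bytes do not hash to that value. A minimal sketch of building those headers (real code also needs an upload URL and auth token from a prior `b2_get_upload_url` call, and must URL-encode the file name):

```python
import hashlib

def b2_upload_headers(auth_token: str, file_name: str, content: bytes) -> dict:
    """Headers for a B2 native-API file upload: the SHA-1 is declared up
    front in X-Bz-Content-Sha1, so a corrupted transfer is refused server-side."""
    return {
        "Authorization": auth_token,
        "X-Bz-File-Name": file_name,  # real API requires URL-encoding here
        "Content-Length": str(len(content)),
        "X-Bz-Content-Sha1": hashlib.sha1(content).hexdigest(),
    }
```

This is why a mismatching chunk never lands in the bucket: the declared hash and the stored bytes are checked against each other before the upload is accepted.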
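The rate-limit theory above hinges on rejected requests going unnoticed. A client that checks every response and retries with exponential backoff cannot silently drop a chunk; it either eventually succeeds or surfaces an error. A sketch of that pattern, with `upload_fn` as a hypothetical callable that returns False when the service rejects a too-fast request:

```python
import time

def upload_with_retry(upload_fn, chunk: bytes, max_attempts: int = 5) -> bool:
    """Retry a rate-limited upload with exponential backoff.

    upload_fn is a hypothetical stand-in returning True on success and
    False when the service rejects the request (e.g. throttling).
    """
    delay = 0.01  # tiny for illustration; real clients wait around a second
    for _attempt in range(max_attempts):
        if upload_fn(chunk):
            return True     # server confirmed receipt
        time.sleep(delay)   # back off before retrying
        delay *= 2          # exponential backoff to stop hammering the API
    return False            # exhausted retries: caller must report the failure
```

The key point is the explicit False at the end: a rejection that survives all retries must be reported, never treated as a successful upload.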