>You are right that if a bit flips in a file it
>will be copied up the chain, but if a bit flips it is most
>likely to be caught as the chain goes from the files that are
>being worked on up the chain.
The only time I've had data loss, it has been "mystery" loss. I keep my archive of shots online (unlike some who keep every shot, I keep only "keepers", and I frequently go back with fresh eyes and delete more).
Some years ago I happened to be hunting for an old shot I had not touched in ages, found it, and ... it was corrupt. After a lot of work and time, I found about a dozen images on disk had mysteriously become corrupt, with no overt sign of disk problems. I do not even know when it happened -- it may have been during a computer upgrade, it may have been some program run amok, it may have been cosmic rays, or aliens, who were unnoticed in those shots, come back to hide the evidence.
But the result was that all of my copies of those were corrupt, as years had gone by and all archives written over. Fortunately they were not important, really.
But that's not all that unusual -- I've been in I.T. since the first disk drives were carved out of stone, and "stuff happens" to data on drives. Unreported read/write failures (and Windows is awful about those), interrupted or failed copies, editing programs run amok, disk directory structure errors so that writes to "empty" space were actually writing over files, operator error....
Anyway... the bad part is that such corruption sometimes marks the file as changed, so it gets treated as an update. Archival systems that overwrite archived copies with new updates are vulnerable to exactly that: the corrupted version replaces the good one.
I've done several things to guard against it. First is RAID-1 -- not a full solution, but it helps.
The second is that I've started using TeraCopy to move files between drives and systems (not backup, just copies). It's a tool that replaces the Explorer copy with one that does a verify pass -- copy, then re-read, checksum, and compare the copy with the original. I feel safer in particular when I'm going across non-local media -- copies across the network, from external drives, etc. In particular I am often at a shoot with a tablet, and bring it back and copy those files to my PC over wireless.
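That verify pass boils down to "copy, then re-read both sides and compare digests." Here's a minimal sketch of the same idea in Python -- sha256 is my arbitrary choice here, and this is only an illustration of the concept, not what TeraCopy actually does internally:

```python
import hashlib
import shutil


def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in 1 MB chunks so large raw images never load into memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verified_copy(src, dst):
    """Copy src to dst, then re-read BOTH files and compare checksums."""
    shutil.copy2(src, dst)  # copy2 also preserves timestamps
    if sha256_of(src) != sha256_of(dst):
        raise IOError(f"verify failed: {dst} does not match {src}")
```

The re-read after the copy is the whole point: it catches a write that silently went wrong, which a plain copy never would.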
And the third is that I switched away from sync-style backups to versioning ones (CloudBerry in particular). This keeps a separate copy of every change to a file on my archive media. So if a (marked) change is made to a file, I now have two copies on the archive, and can always go back one version. If a file isn't changed, no harm done, no extra space. (Yes, I am now trusting a more complicated tool than a simple sync.)
And the fourth is that I switched to keeping backups in the cloud (specifically Amazon Glacier) for the final, all-else-fails off-site copy. I did that because I can get the files off-site the same day, and do not have to depend on the discipline of swapping media somewhere else (and the related issue of common disaster -- living in sea-level-for-miles Florida, if I swapped with a buddy we both might be flooded by the same event). I use a versioning system there also, so if a file is changed, I keep both versions.
>Thanks for taking the time to reply. I also like to pick your
>brain on ways to check the contents of the NEF files across
>the drive, is there a method you are using?
For several years I used a product called ImageVerifier (part of ImageIngester): http://basepath.com/ImageIngester/wp/?cat=4
The idea was that it would validate the file structure (sort of -- it just called DNG Converter to see if it could read the file), then checksum the file and remember the result. Nice in concept, but (a) it found a lot of false positives on the first pass, for reasons I never quite understood, specifically TIFs from Photoshop, and (b) it was buggy with regard to time zones and daylight saving time, so depending on exactly when I ran it, it would find thousands of "new" images because it thought the date had changed (it hadn't; the program had just saved the date incorrectly in its checksum list).
What I did is not a good solution, but I wrote my own. I didn't even try to validate structure: I read the entire Lightroom catalog, find each image on disk, checksum it, and save the checksum. Then I go back periodically, re-checksum, and compare. It gives me a bit of a safety net for "bit drift" errors of the alien/cosmic-ray type. So far I have not found any (other than ones I introduced manually while testing).
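The two-pass scheme is simple enough to sketch. This is not my actual tool -- it assumes you already have the list of image paths (my real version pulls them from the Lightroom catalog, which is more work), and the manifest filename is made up for the example:

```python
import hashlib
import json
import os


def sha256_of(path, chunk_size=1 << 20):
    """Checksum a file in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def baseline(paths, manifest="checksums.json"):
    """First pass: record a checksum for every image."""
    with open(manifest, "w") as f:
        json.dump({p: sha256_of(p) for p in paths}, f, indent=2)


def verify(manifest="checksums.json"):
    """Later passes: re-checksum and return any files that drifted (or vanished)."""
    with open(manifest) as f:
        recorded = json.load(f)
    return [p for p, digest in recorded.items()
            if not os.path.exists(p) or sha256_of(p) != digest]
```

Run `baseline` once, then `verify` on whatever schedule you like; anything it returns has changed on disk since the baseline, even though nothing ever told the OS about it.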
I wish Adobe would include that feature. They have it if you are willing to convert to DNG -- you just hit a button and Lightroom validates the contents of all DNGs against their original checksums. But they didn't extend that to raw, JPG, or other formats.
Thanks for the interesting discussion.
Linwood
Comments welcomed on pictures: http://www.captivephotons.com