Tom suggested:

> You tried 'foremost' ?

Foremost found only the .png files and didn't find any .mpg files. I may be able to retry it in some other way, but I'd need more technical info about it (see the P.S. below for how I might retry it).

> These tools are designed to sift through unallocated space on a
> drive and recognize common file structures (headers). They shouldn't
> care about the presence of a journal file.

For large files spanning many allocation units (these files are typically about 2-8GB in size), the tool does need to be filesystem-aware, even if the journal is irrelevant.

> That the tools you are trying do, makes me wonder if you are
> chasing after the wrong type of solution. I would expect journal
> files to only be useful if the data on your disk is inconsistent
> with what is in your directory structures.

Agreed. I'm willing to try any tool that works and that claims support for ext4. So far the only one I've seen that explicitly claims ext4 support is extundelete, which also explicitly states that it's only useful for picking data out of the journal (which is not going to work here, and I can't see how it ever would). I think I'm out of options and have lost about 400GB that hadn't yet been backed up. But I'll keep that terabyte volume around just in case it's ever retrievable. Meanwhile, I mean what I say about not using ext4 for this use case anymore.

Telling me to do more backups, as others here have, is preaching to the choir. I do 15-minute continuous backups of the 5% of my data that changes a lot; the video collection is simply too big, with current technology, to back up anywhere near as often, and sometimes my backup volumes aren't big enough (as was the case with this particular 400GB of semi-recent recordings; my failure happened just as I'd bought a stack of 5 new 3TB disks to address the shortfall). I had *two* copies of the data in question until the failure synced too quickly across both, which is why I in turn preach to the rest of the choir about why you need at least *3* copies to be safe.

-rich
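P.S. For anyone who wants to retry the carving route: this is roughly how I'd re-run foremost with the video type spelled out. It's only a sketch (the device and output paths are placeholders, not my actual volume), and as noted above, carving may still come up empty for multi-gigabyte files that span many allocation units.

    # Carve only MPEG (and PNG, for comparison) from the raw device,
    # writing whatever is found to a directory on a *different* disk:
    foremost -t mpg,png -i /dev/sdX1 -o /mnt/other-disk/recovered

    # Or point foremost at a config file with additional/adjusted
    # video signatures before carving:
    foremost -c /etc/foremost.conf -i /dev/sdX1 -o /mnt/other-disk/recovered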