On Wed, Dec 14, 2011 at 2:00 PM, Richard Pieri <richard.pieri at gmail.com> wrote:

> On 12/14/2011 12:34 PM, Bill Bogstad wrote:
>> I've been watching the (second?) incarnation of this thread for a while
>> now and I think that I see your point. I wonder if the "TRIM"
>> functionality that is being added to filesystems in order to handle
>> SSDs could help with this.
>
> I don't think so. The problem I describe is that once a dump goes missing,
> any differentials against it will have inconsistencies between the file
> data and the file metadata structures. TRIMming freed blocks won't make
> this go away. It might make things worse, what with dangling inode lists
> pointing to de-allocated SSD blocks.
>
> As an aside, enterprise backup systems like Amanda, Bacula, and TSM do
> indeed maintain databases of backed-up files and what media they are on.

Correct me if I'm wrong, but I thought differentials are a backup of everything that has changed since the last full, while incrementals are the changes since the last incremental, differential, or full, whichever happened most recently. For example, one of my SQL Servers has a schedule of a full once per week (Wednesdays), a differential every night (except Wednesday), and incrementals every 10 minutes. If I want to restore up to this past Monday at 9 AM, I take the full from last Wednesday, then the differential from Sunday night/Monday morning, then apply all incrementals from the time of that differential up to 9 AM Monday. What I don't have to do is apply every differential (Thursday, Friday, Saturday, and Sunday).

Also, I believe I mentioned this in the last LVM discussion. When you snapshot with LVM, it does not make a copy of the original content. It marks all blocks in the original volume as read-only until the snapshot is released, and any new writes to either the original volume or the newly created snapshot happen in the "scratch" space. You can take as many snapshots as you like, as long as you monitor your scratch space to make sure it doesn't fill up. While a snapshot exists, whether you access the original volume (plus its changes) or the snapshot (plus its changes), LVM decides on the fly whether to pull each block from the original volume or from the scratch space to reconstruct what you're asking for.

One thing to keep in mind when using snapshots: if your scratch space hits 100%, all snapshots are released and all changes to the original volume (which up to that point have been held in scratch space) are written back to the original volume. Scratch space is allocated by leaving part of the volume group unassigned to any logical volume, and how much to allocate depends heavily on how much your data changes over the time you keep snapshots online, how many snapshots you keep, and whether you also modify the snapshots themselves. I've always told people: if you don't have time to build, test, and rebuild until you get it right, just overallocate.
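A minimal sketch of that snapshot cycle with the stock LVM2 tools; the volume group vg0, logical volume data, the 10G scratch size, and the mount point below are placeholders, not anything from a real setup:

  # Create a snapshot of /dev/vg0/data, reserving 10G of free extents in
  # the volume group as its scratch (copy-on-write) space:
  lvcreate --snapshot --size 10G --name data-snap /dev/vg0/data

  # Keep an eye on how full the scratch space is (the Data% column):
  lvs vg0

  # Mount the snapshot read-only and back it up at your leisure:
  mount -o ro /dev/vg0/data-snap /mnt/snap

  # When the backup is done, release the snapshot and its scratch space:
  umount /mnt/snap
  lvremove /dev/vg0/data-snap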
Now, some cool tricks you can do with LVM: adding more drives to your volume group and growing your volumes on the fly. If you decide you want to go from a 500GB volume to a 1TB volume, you can add the new drive and migrate your data. All new data will be written to the new drive, and during idle time the blocks on your old drive will be migrated over. Once the data is off the old drive, it can be removed from the volume group and then physically removed.
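A rough sketch of that add-and-migrate, again with placeholder names (/dev/sdc1 as the old drive, /dev/sdd1 as the new one, ext4 assumed for the filesystem resize); note that pvmove migrates the extents explicitly rather than waiting for idle time:

  # Prepare the new drive and add it to the existing volume group:
  pvcreate /dev/sdd1
  vgextend vg0 /dev/sdd1

  # Grow the logical volume and the filesystem into the new space
  # (resize2fs is for ext3/ext4; other filesystems have their own tools):
  lvextend --size 1T /dev/vg0/data
  resize2fs /dev/vg0/data

  # Move everything off the old drive, then drop it from the group:
  pvmove /dev/sdc1
  vgreduce vg0 /dev/sdc1
  pvremove /dev/sdc1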
Matthew Shields
Owner
BeanTown Host - Web Hosting, Domain Names, Dedicated Servers, Colocation, Managed Services
www.beantownhost.com
www.sysadminvalley.com
www.jeeprally.com
Like us on Facebook <http://www.facebook.com/beantownhost>
Follow us on Twitter <https://twitter.com/#!/beantownhost>