
BLU Discuss list archive



a few facts



I did do one demo at MIT in 1996.


Also, while I am not willing to bring floppies in and do a public demo, I have offered to show certain BLU folks a demo offline from a meeting, to videotape the demo, and then to show the videotape at a BLU meeting.


I don't know yet whether the people I am talking to will agree to this, and I won't know until next week or later (at least, that is my best information at the moment).


Also, you guys should simply build the model I described.  IT COMPRESSES.  It's not powerful and it's not efficient, but it works.


Let me try to show you why, because apparently some participants here have a little trouble understanding basic vector algebra.


First, the client vector (the original buffer) is random; curve-fitting it directly won't produce much of value.


Now, building the monotonic vector merely moves the randomness from the X-component (completely random) into the Y-component: after sorting, the randomness is tied to the Y attribute.  Graph it if you don't get this.  Of course, each individual term used in the curve fit doesn't do much, because the coefficients 'lock' on the largest non-linearity.


This is why, after the subtraction of the approximation, the residue line has one major discontinuity for each term.  Remember, we are not trying to reduce 1 GB to 1 byte here; each curve fit (and subsequent subtraction) simply reduces the magnitude of the residue remaining to be compressed.
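
To make this concrete, here is a minimal sketch of the sort / curve-fit / subtract step in Python with NumPy (the 256-byte buffer and the degree-3 fit are illustrative choices of mine, not from any particular implementation):

    import numpy as np

    # A random "client vector": 256 bytes of uniform noise.
    rng = np.random.default_rng(0)
    client = rng.integers(0, 256, size=256)

    # Sorting builds the monotonic vector; the randomness now lives
    # in the Y-values.  (The permutation needed to undo the sort is
    # separate bookkeeping, not shown here.)
    mono = np.sort(client)
    x = np.arange(len(mono))

    # Low-order polynomial curve fit of the monotonic vector.
    coeffs = np.polyfit(x, mono, deg=3)
    approx = np.round(np.polyval(coeffs, x)).astype(int)

    # Subtract the approximation; what remains is the residue line.
    residue = mono - approx

    print("max Y value  :", mono.max())
    print("max |residue|:", int(np.abs(residue).max()))

Run it and compare the two numbers: the residue spans a much smaller range than the original Y-values, which is the per-pass magnitude reduction described above.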


I am sorry to observe these dynamics; I was hoping for someone to come forward and say, "I built it and I have the following issues," or "I observed ..."

Here's a little more info; maybe this will help get the right person going...


The subtractive operation can be replaced by an XOR.  That's less intuitive, and it doesn't process the left-most three-bit column field as well, but it does substantially improve the remaining right-hand columns.  The same thing can also be done on, say, 128 cells, each 1 bit wide.
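
For example, a rough sketch of the XOR variant (same illustrative setup as the earlier sketch; the per-column counts just make the left-column versus right-column behavior visible):

    import numpy as np

    rng = np.random.default_rng(0)
    mono = np.sort(rng.integers(0, 256, size=256))
    x = np.arange(len(mono))
    approx = np.round(np.polyval(np.polyfit(x, mono, deg=3), x)).astype(int)

    # XOR the approximation into the data instead of subtracting it.
    # Clamp to one byte so both operands share the same 8 bit-columns.
    residue_xor = mono ^ np.clip(approx, 0, 255)

    # Population count per bit-column, left-most (high) bit first.
    for bit in range(7, -1, -1):
        ones = int(((residue_xor >> bit) & 1).sum())
        print(f"bit {bit}: {ones:3d} ones out of {len(residue_xor)}")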

And applying this process to each vector concurrently, then making a new vector from the residue of each Y'th row, produces data with much better characteristics.


Today my best perpetual compressors don't use floating point, but building a "cross-hatch" vector set has big advantages (except for the Y[zero] row, of course, where no improvement can be obtained).
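
A sketch of the cross-hatch idea under the same assumptions (the 16x256 shape is arbitrary, and residue() just repeats the sort/fit/subtract step from the first sketch):

    import numpy as np

    rng = np.random.default_rng(0)

    def residue(vec, deg=3):
        # Sort, curve-fit, subtract: the per-vector step from above.
        mono = np.sort(vec)
        x = np.arange(len(mono))
        fit = np.round(np.polyval(np.polyfit(x, mono, deg), x)).astype(int)
        return mono - fit

    # Process a set of vectors concurrently ...
    vectors = rng.integers(0, 256, size=(16, 256))
    residues = np.array([residue(v) for v in vectors])

    # ... then cross-hatch: row Y of the new vector set collects cell Y
    # of every per-vector residue.  (The Y[zero] row gains nothing, as
    # noted above.)
    cross = residues.T

    print("cross-hatch shape:", cross.shape)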


I am pretty busy these days (and I don't do compression work any more; the fighting is just ugly, and not why I went into programming at all), but if someone is interested in this stuff, please email me.

--jg


 





