I'm interested in implementing a checksum algorithm to help reduce the
probability of data entry errors in a database. I took a stab at rolling
my own, as follows:

// 0 1 2 3 4 5 6 7  (digit index)
// * 2 3 4 5 6 7 8 9  (multiplier)
//   - - - - - - - -
//   x x x x x x x x  => (sum(x) + checksum digit) mod 10 must be zero.
//
// This means that each of the first eight digits of the code - excluding
// the check digit itself - is multiplied by a number ranging from 2 to
// 9, and that the resulting sum of the products, plus the check digit,
// must be divisible by 10 without a remainder.
//
// This algorithm is adapted from the checksum used for ISBN codes.
// See http://www.isbn.spk-berlin.de/html/userman/usm4.htm.

Does this make sense? This seems like it could be one of those areas where
apparent simplicity belies deeper mathematical principles.

The constraint for this algorithm is that I end up with a nine-digit
numeric code. That seems to rule out CRC or MD5, because they assume full
use of the bit string. I suppose you could just take the digits you want
out of the result, but then I wonder whether there is any design advantage
left.

I've looked at the credit card algorithm as well. It works on a similar
principle to the one above.

Ideally the algorithm would be particularly adept at detecting the most
common typographical mistakes, such as transposed digits, a digit that is
off by one, and so on (a rough code sketch of the scheme follows below).

Any comments?

-Ron
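A rough sketch, in Python, of the scheme described above. The weights 2
through 9 on the first eight digits, the mod-10 check, and the nine-digit
format come from the post; the function names and validation details are
assumptions added for illustration.

# Sketch of the weighted mod-10 scheme described above.
# Assumptions: weights 2..9 are applied to the first eight digits (per the
# diagram), the ninth digit is the check digit, and
# (weighted sum + check digit) mod 10 must equal zero.
# Function names are illustrative, not from the original post.

WEIGHTS = range(2, 10)  # multipliers 2..9 for digit positions 0..7

def check_digit(first_eight: str) -> str:
    """Compute the check digit for an eight-digit numeric string."""
    if len(first_eight) != 8 or not first_eight.isdigit():
        raise ValueError("expected exactly eight decimal digits")
    total = sum(int(d) * w for d, w in zip(first_eight, WEIGHTS))
    # Pick the digit that makes (sum of products + check digit) % 10 == 0.
    return str((-total) % 10)

def is_valid(code: str) -> bool:
    """Validate a nine-digit code whose last digit is the check digit."""
    if len(code) != 9 or not code.isdigit():
        return False
    return code[8] == check_digit(code[:8])

# Example: for "12345678" the weighted sum is
# 1*2 + 2*3 + 3*4 + 4*5 + 5*6 + 6*7 + 7*8 + 8*9 = 240,
# so the check digit is 0 and the full code is "123456780".
print(check_digit("12345678"))   # -> "0"
print(is_valid("123456780"))     # -> True
print(is_valid("123456781"))     # -> False

One caveat with a mod-10 check: a single-digit error at a position whose
weight shares a factor with 10 can go undetected - for example, a digit
that is off by 5 at a position with weight 2, 4, 6, or 8 changes the sum
by a multiple of 10. This is part of why the ISBN check works modulo 11,
a prime.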