On 01/07/2010 12:35 PM, Jerry Feldman wrote:
> I'm trying to get one of our servers to boot with 64GB. One of the
> servers won't recognize 4GB DIMMs, but after switching to another broken
> server we were able to get it up to 53GB with a good boot (RHEL 5.2),
> but after adding an additional SATA and replacing the 4 remaining 1GB
> DIMMs, the kernel boots fine, but we hang on udev. I've got a few other
> things I might try, but I am looking for some other ideas. This is an
> Intel whitebox with a Supermicro X7DB8+ MB currently flashed at BIOS
> 2.1. I may reflash it to 2.1a and possibly remove the second SATA.

I've been doing more checking. Note that in RHEL you can turn on udev debugging with a kernel argument (udevdebug). First, I've determined that the culprit is udevsettle, which is invoked from /sbin/start_udev. This is a hard hang, and setting a timeout value does nothing (e.g. udevtimeout=180). With udevdebug set I get a number of messages showing that some of the udev tasks have completed, but I have not been able to trace them to a device. The only way to access the script is to boot into rescue mode.

I've also determined that removing the two additional SATA drives makes no difference. I've disabled the serial, parallel, and floppy ports in the BIOS, and I've set the noapic and acpi=off kernel flags. I'm going to try replacing the udevsettle call with a sleep of about 3 minutes. I'd like to know which device we're hanging on, because then I could rewrite the rules. Also, I have 3 other servers running in the same hardware configuration with no problems.

-- 
Jerry Feldman <gaf-mNDKBlG2WHs at public.gmane.org>
Boston Linux and Unix
PGP key id: 537C5846
PGP Key fingerprint: 3D1B 8377 A3C0 A5F2 ECBB CA3B 4607 4319 537C 5846
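For anyone wanting to try the same workaround, here is a minimal sketch of the udevsettle-to-sleep substitution described above. It edits a stand-in sample file rather than the live /sbin/start_udev (the real script is much longer; the sample line mimicking its udevsettle call is an assumption, not a copy of the RHEL 5.2 script), so test it on a copy before installing anything:

```shell
#!/bin/sh
# Sketch of the workaround: replace the blocking udevsettle call
# with a fixed sleep so boot can proceed past the hang.
# Always work on a copy, never the live /sbin/start_udev.

# Stand-in for the relevant part of /sbin/start_udev (assumed layout).
cat > start_udev.sample <<'EOF'
/sbin/udevtrigger
/sbin/udevsettle --timeout=180
EOF

# Swap the udevsettle line for a ~3 minute sleep, keeping a note of
# what was there before.
sed -i 's|^/sbin/udevsettle.*|sleep 180   # was: udevsettle|' start_udev.sample

cat start_udev.sample
```

On the real machine you would make the same sed edit against a backup copy of /sbin/start_udev from the rescue environment, then move it into place. This only papers over the hang; it doesn't identify the device udevsettle is waiting on.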