
sed'ing out newlines



On Mon, 15 Dec 2003, Dan Barrett wrote:

> Can anyone remind me of the proper regexp to feed sed which will cause
> it to eat newlines off?  I've got a file of newline-separated data which
> I want to concatenate into one big string.

As another commenter noted, Perl is probably the right approach, or at
least the most flexible one if you want to do anything else along the
way.
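
For the simple join-everything case, a Perl one-liner along these lines
ought to do it (a sketch, untested here):

    perl -pe 'chomp' file

Since -p prints each line back out after the code runs, and chomp has
already stripped its trailing newline, everything runs together into one
long string.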

Still, I like this way too:

    fmt -2500 file

which will squash 'file' into lines of at most 2500 characters each
(note that fmt joins lines with a space, so it's not quite a
byte-for-byte concatenation).

This isn't always an effective approach, however:

    * Multiple newlines are preserved. You can get around this by grepping
      out blank lines:  grep -v '^ *$' file | fmt -2500

    * The command seems to have an upper bound on the line length. The
      man & info pages don't seem to spell out the limit, and I don't
      care to spelunk through the source at the moment, but depending on
      your version & platform, long widths may not work (things seem to
      break down for me at 5000). A quick probe is shown below.
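
To check where your fmt gives up, a probe like this (assuming GNU seq
and fmt; the numbers are arbitrary) feeds it a few thousand short lines
and asks for one very wide output line:

    seq 1 2000 | fmt -9999 | wc -l

If fmt honors the width, wc reports a single line; if it caps or rejects
the width, you'll get more lines or an error instead.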

Still, for smallish files, fmt can be a useful tool for this kind of
thing.

But if the data file is big, or you need to do anything more subtle than
"smash everything", go for Perl: it's more flexible than sed, and it
doesn't have the line-length and buffer limits that many of the shell
tools do.
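
And since the question literally asked for sed: the classic idiom (a
sketch; works with GNU sed, and writing the commands as separate -e
expressions should keep it portable to BSD sed) slurps the file into the
pattern space and then deletes the newlines in one shot:

    sed -e ':a' -e 'N' -e '$!ba' -e 's/\n//g' file

The :a/N/$!ba loop keeps appending lines to the pattern space until the
last one, so like any slurping approach it holds the whole file in
memory.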



-- 
Chris Devers



