Tuesday, October 13, 2009

Unix shell script for removing duplicate files

by Jarno Elonen, 2003-04-06...2004-08-14

The following shell script finds duplicate (2 or more identical) files and outputs a new shell script containing commented-out rm statements for deleting them (copy-paste from here):

OUTF=rem-duplicates.sh; echo "#! /bin/sh" > $OUTF;
find "$@" -type f -exec md5sum {} \; |
  sort --key=1,32 |
  uniq -w 32 -d --all-repeated=separate |
  sed -r 's/^[0-9a-f]*( )*//;s/([^a-zA-Z0-9./_-])/\\\1/g;s/(.+)/#rm \1/' >> $OUTF;
chmod a+x $OUTF; ls -l $OUTF
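To see what the pipeline produces, here is a minimal self-contained demo: it creates a throwaway directory with two identical files and one unique file, runs the same md5sum/sort/uniq/sed pipeline, and prints the generated script. The temporary paths and file names are illustrative; GNU coreutils (md5sum, sort, uniq) and GNU sed (-r) are assumed.

```shell
# Demo of the duplicate-finding pipeline on a throwaway directory.
# Assumes GNU coreutils and GNU sed.
set -e
dir=$(mktemp -d)
printf 'same content\n' > "$dir/a.txt"   # duplicate 1
printf 'same content\n' > "$dir/b.txt"   # duplicate 2
printf 'unique\n'       > "$dir/c.txt"   # not a duplicate

OUTF="$dir/rem-duplicates.sh"
echo "#! /bin/sh" > "$OUTF"
# Same pipeline as above, excluding the output script itself:
find "$dir" -type f ! -name 'rem-duplicates.sh' -exec md5sum {} \; |
  sort --key=1,32 |
  uniq -w 32 -d --all-repeated=separate |
  sed -r 's/^[0-9a-f]*( )*//;s/([^a-zA-Z0-9./_-])/\\\1/g;s/(.+)/#rm \1/' \
  >> "$OUTF"

generated=$(cat "$OUTF")
printf '%s\n' "$generated"   # a.txt and b.txt appear as commented-out rm lines
rm -r "$dir"
```

Only the two identical files end up in the generated script; the unique file is left alone.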

You then have to edit the file to select which files to keep - the script can't safely do it automatically!
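The editing step can be as simple as uncommenting the rm lines for the copies you want gone. A hypothetical session (the file names and the generated script are mocked up here for illustration; GNU sed's -i flag is assumed):

```shell
# Illustrative workflow: keep a.txt, delete its duplicate b.txt.
set -e
dir=$(mktemp -d)
printf 'x\n' > "$dir/a.txt"
printf 'x\n' > "$dir/b.txt"
# Stand-in for the script the pipeline would generate:
cat > "$dir/rem-duplicates.sh" <<EOF
#! /bin/sh
#rm $dir/a.txt
#rm $dir/b.txt
EOF
# "Edit" the file: uncomment only the line for the copy to delete.
sed -i "s|^#rm $dir/b.txt\$|rm $dir/b.txt|" "$dir/rem-duplicates.sh"
sh "$dir/rem-duplicates.sh"   # b.txt is removed, a.txt survives
```

In practice you would open rem-duplicates.sh in an editor and uncomment lines by hand, which is exactly why the script leaves every rm commented out.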

If you prefer a C program, try e.g. fdupes (it may also be available in the repository of your favorite Linux distribution). For a GUI-based solution, fslint might do it for you.

The code was written for Debian GNU/Linux and has been tested with Bash, Zsh and Dash. Needless to say, you are welcome to do whatever you like with it as long as you don't blame me for disasters... (released into the public domain)

Go there...
http://elonen.iki.fi/code/misc-notes/remove-duplicate-files/index.html

Don