I'd rather read one Web page with all of the comics I like than a dozen separate pages, each offering a single comic strip in a sea of navigation and advertising. To this end, I've written a Perl script that predicts the URLs of the day's comics, then writes an HTML page with just the strips I want. Run it once a day, open the page it creates in a Web browser, et voilà: comics sans crap.
This Perl script predicts URLs for the following comics:
* Added for friends; I make no guarantee of the quality of their alleged humor.
I run this script at boot time, in a batch file, and redirect the output to a file:
perl scoopcomics.pl > dailycomics.html
Then I open dailycomics.html in a Web browser. One of the nice things about the script is that it predicts the URLs rather than scooping them from the Web sites. This means you can run it offline, which is handy if you don't have a permanent connection to the Internet. If you do, the various comics scoopers on Sourceforge are probably a better alternative for you.
If you've done any programming, you'll be able to modify the script to add the comics you want. Comics from ucomics.com and comics.com are particularly easy, since the URLs of comics from these sites follow the same pattern as others from the same site. To remove a comic, just comment out the corresponding line in the last section of the script by adding a "#" at the beginning of the line.
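The core idea — predicting a URL from today's date — can be sketched in a few lines of Perl. The host and path pattern below are hypothetical stand-ins, not the real ucomics.com or comics.com schemes; substitute the actual pattern you observe for the strip you want to add.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw(strftime);

# Expand a strftime-style URL pattern against a date.
# @date is a list in the form returned by localtime().
sub predict_url {
    my ($pattern, @date) = @_;
    return strftime($pattern, @date);
}

# Hypothetical date-based pattern; real sites vary.
my $pattern = 'http://example.com/strips/%Y/%m/%d/strip.gif';

my $url = predict_url($pattern, localtime());
print qq{<img src="$url">\n};
```

Adding a comic is then a matter of adding one more pattern line; removing one is commenting the line out with "#", as described above.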
You can get the latest distribution of Perl here.
A few notes:
The comics from comics.com might occasionally be missing; I haven't perfectly descrambled their obfuscated URLs. Comics from that site are most likely to be missing on the first Monday of the month. (That's what you get for relying on free software. Feel free to fix the code and share it with me.)
This effort was inspired in part by sitescooper.