next unless $feed->{expireage} || $feed->{expirecount};
my $count=0;
my %seen;
- foreach my $item (sort { $IkiWiki::pagectime{$b->{page}} <=> $IkiWiki::pagectime{$a->{page}} }
- grep { exists $_->{page} && $_->{feed} eq $feed->{name} && $IkiWiki::pagectime{$_->{page}} }
+ foreach my $item (sort { ($IkiWiki::pagectime{$b->{page}}||0) <=> ($IkiWiki::pagectime{$a->{page}}||0) }
+ grep { exists $_->{page} && $_->{feed} eq $feed->{name} }
values %guids) {
- if ($feed->{expireage}) {
+ if ($feed->{expireage} && $IkiWiki::pagectime{$item->{page}}) {
my $days_old = (time - $IkiWiki::pagectime{$item->{page}}) / 60 / 60 / 24;
if ($days_old > $feed->{expireage}) {
debug(sprintf(gettext("expiring %s (%s days old)"),
+ikiwiki (2.65) UNRELEASED; urgency=low
+
+ * aggregate: Allow expirecount to work on the first pass. (expireage still
+ needs to wait for the pages to be rendered though)
+
+ -- Joey Hess <joeyh@debian.org> Wed, 17 Sep 2008 14:26:56 -0400
+
ikiwiki (2.64) unstable; urgency=low

  * Avoid uninitialised value when --dumpsetup is used and no srcdir/destdir
tag="schmonz"
]]
- [[!aggregate
+ \[[!aggregate
name="Amitai's photos"
url="http://photos.schmonz.com/"
dir="planet/schmonz-photos"
Two things aren't working as I'd expect:
1. `expirecount` doesn't take effect on the first run, but on the second. (This is minor, just a bit confusing at first.)
+
+>
+
2. Where are the article bodies for e.g. David's and Nathan's blogs? The bodies aren't showing up in the `._aggregated` files for those feeds, but the bodies for my own blog do, which explains the planet problem, but I don't understand the underlying aggregation problem. (Those feeds include article bodies, and show up normally in my usual feed reader rss2email.) How can I debug this further?
--[[schmonz]]
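One way to debug what the parser itself sees is to feed XML::Feed (the CPAN module ikiwiki's aggregate plugin uses) a feed directly and dump the extracted bodies. This is only a sketch; the inline feed below is invented for illustration, and you'd substitute a scalar ref, filename, or URI object for the real feed:

```perl
#!/usr/bin/perl
# Sketch: parse a minimal RSS document with XML::Feed and print what
# it extracts per entry, to see whether article bodies survive parsing.
use strict;
use warnings;
use XML::Feed;	# CPAN module; same parser aggregate uses

my $rss = <<'EOF';
<?xml version="1.0"?>
<rss version="2.0"><channel>
<title>test</title><link>http://example.com/</link>
<description>test feed</description>
<item>
<title>an item</title>
<description>&lt;p&gt;an escaped body shows up&lt;/p&gt;</description>
</item>
</channel></rss>
EOF

my $feed = XML::Feed->parse(\$rss) or die XML::Feed->errstr;
foreach my $entry ($feed->entries) {
	printf "title: %s\nbody: %s\n",
		$entry->title, ($entry->content->body || '(empty)');
}
```

If the body prints as `(empty)` for a feed that visibly has content, the feed is likely emitting markup the parser discards rather than text it keeps.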
+
+> I only looked at David's, but its rss feed is not escaping the html
+> inside the rss `description` tags, which is illegal for rss 2.0. These
+> unknown tags then get ignored, including their content, and all that's
+> left is whitespace. Escaping the html to `&lt;` and `&gt;` fixes the
+> problem. You can see the feed validator complain about it here:
+> <http://feedvalidator.org/check.cgi?url=http%3A%2F%2Fwww.davidj.org%2Frss.xml>
+>
+> It's sorta unfortunate that [[cpan XML::Feed]] doesn't just assume the
+> un-escaped html is part of the description field. Probably other feed
+> parsers are more lenient. --[[Joey]]
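The difference Joey describes can be sketched as a minimal RSS 2.0 `description` element (content invented for illustration): raw HTML inside the element parses as unknown XML tags and gets dropped, while escaped HTML survives as ordinary text content.

```xml
<!-- Illegal in RSS 2.0: <p> parses as an unknown XML element and is
     ignored along with its content, leaving only whitespace -->
<description><p>Article <em>body</em> here</p></description>

<!-- Valid: the markup is entity-escaped, so it is plain text content -->
<description>&lt;p&gt;Article &lt;em&gt;body&lt;/em&gt; here&lt;/p&gt;</description>

<!-- Also valid: a CDATA section preserves the markup literally -->
<description><![CDATA[<p>Article <em>body</em> here</p>]]></description>
```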