From: http://smcv.pseudorandom.co.uk/
Date: Mon, 17 Aug 2009 20:15:41 +0000 (-0400)
Subject: responses to code review (I'll try to get them implemented later this week)
X-Git-Tag: 3.1415926~96
X-Git-Url: http://git.tremily.us/?a=commitdiff_plain;h=89b5e2b8224a6e05e8f2a7d2d0affe7ae700fa97;p=ikiwiki.git

responses to code review (I'll try to get them implemented later this week)
---

diff --git a/doc/todo/should_optimise_pagespecs.mdwn b/doc/todo/should_optimise_pagespecs.mdwn
index 33bca677a..0a3720b3c 100644
--- a/doc/todo/should_optimise_pagespecs.mdwn
+++ b/doc/todo/should_optimise_pagespecs.mdwn
@@ -113,6 +113,13 @@ In saveindex it still or'd together the depends list,
 but the `{depends}` field seems only useful for backwards compatibility
 (ie, ikiwiki-transition uses it still), and otherwise just bloats the index.
 
+> If it's acceptable to declare that downgrading IkiWiki requires a complete
+> rebuild, I'm happy with that. I'd prefer to keep the (simple form of the)
+> transition done automatically during a load/save cycle, rather than
+> requiring ikiwiki-transition to be run; we should probably say in NEWS
+> that the performance increase won't fully apply until the next
+> rebuild. --[[smcv]]
+
 Is an array the right data structure? `add_depends` has to loop through the
 array to avoid dups; it would be better if a hash were used there. Since
 inline (and other plugins) explicitly add all linked pages, each as a
@@ -131,17 +138,31 @@ to avoid..
 
 > I wasn't thinking about a lookup hash, just a dedup hash, FWIW.
 > --[[Joey]]
 
+>> I was under the impression from previous code review that you preferred
+>> to represent unordered sets as lists, rather than hashes with dummy
+>> values. If I was wrong, great, I'll fix that and it'll probably go
+>> a bit faster. --[[smcv]]
+
 Also, since a lot of places are calling add_depends in a loop, it probably
 makes sense to just make it accept a list of dependencies to add. It'll be
 marginally faster, probably, and should allow for better optimisation when
 adding a lot of depends at once.
 
+> That'd be an API change; perhaps marginally faster, but I don't
+> see how it would allow better optimisation if we're de-duplicating
+> anyway? --[[smcv]]
+
 In Render.pm, we now have a triply nested loop, which is a bit scary for
 efficiency. It seems there should be a way to rework this code so it can
 use the optimised `pagespec_match_list`, and/or hoist some of the inner
 loop calculations (like the `pagename`) out.
 
+> I don't think the complexity is any greater than it was: I've just
+> moved one level of "loop" out of the generated Perl, to be
+> in visible code. I'll see whether some of it can be hoisted, though.
+> --[[smcv]]
+
 Very good catch on img/meta using the wrong dependency; verified in the
 wild! (I've cherry-picked those bug fixes.)
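
To make the dedup-hash idea in the thread above concrete, here is a rough
sketch of what a hash-backed, list-accepting `add_depends` could look like.
This is illustrative only, not the code in this branch or IkiWiki's real
internals; the `%depends` layout, the bulk-add calling convention and the
legacy join are assumptions made for the example.

    #!/usr/bin/perl
    # Sketch only: dependencies kept in a hash of hashes so add_depends
    # can deduplicate in O(1) and accept several pagespecs in one call.
    use strict;
    use warnings;

    my %depends;    # page => { pagespec => 1, ... } (assumed layout)

    sub add_depends {
        my $page = shift;
        # any number of pagespecs; the inner hash silently drops duplicates
        $depends{$page}{$_} = 1 foreach @_;
        return 1;
    }

    # inline-style plugins could then register many dependencies in one call
    add_depends("index", "blog/*", "blog/post1", "blog/post2");
    add_depends("index", "blog/*");    # duplicate, absorbed by the hash

    # saveindex could still write out the old or'd-together string form
    # for backwards compatibility (ikiwiki-transition):
    print join(" or ", sort keys %{$depends{index}}), "\n";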
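
Similarly, the hoisting mentioned for Render.pm would look roughly like the
following. The loop shape, `@changed`, `%depends_of` and the stand-in
`pagename`/glob matching are invented for illustration and are not the real
Render.pm code; in ikiwiki proper the matching would be done by
`pagespec_match` or the optimised `pagespec_match_list`.

    #!/usr/bin/perl
    # Sketch only: hoist a loop-invariant calculation (the pagename of a
    # changed file) out of the inner loops, rather than recomputing it
    # for every dependent page/pagespec pair.
    use strict;
    use warnings;

    # stand-in for IkiWiki's pagename(): strip the source extension
    sub pagename { my $file = shift; $file =~ s/\.mdwn$//; return $file; }

    my @changed = ("blog/post1.mdwn", "blog/post2.mdwn");
    my %depends_of = (        # dependent page => list of pagespecs (assumed)
        index      => ["blog/*"],
        "blog/tag" => ["blog/post1"],
    );

    foreach my $file (@changed) {
        my $page = pagename($file);   # hoisted: computed once per changed file
        foreach my $dependent (keys %depends_of) {
            foreach my $pagespec (@{$depends_of{$dependent}}) {
                # a trivial glob-to-regex check stands in for pagespec_match
                (my $re = $pagespec) =~ s/\*/.*/g;
                print "rebuild $dependent: $page matched $pagespec\n"
                    if $page =~ /^$re$/;
            }
        }
    }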