From: Shawn O. Pearce
Date: Mon, 15 Jan 2007 11:51:58 +0000 (-0500)
Subject: Optimize index creation on large object sets in fast-import.
X-Git-Tag: v1.5.0-rc4~14^2~37
X-Git-Url: http://git.tremily.us/?a=commitdiff_plain;h=2fce1f3c862845d23b2bd8305f97abb115623192;p=git.git

Optimize index creation on large object sets in fast-import.

When we are generating multiple packfiles at once we only need
to scan the blocks of object_entry structs which contain objects
for the current packfile.  Because the most recent blocks are at
the front of the linked list, and because all new objects going
into the current file are allocated from the front of that list,
we can stop scanning for objects as soon as we identify one which
doesn't belong to the current packfile.

Signed-off-by: Shawn O. Pearce
---

diff --git a/fast-import.c b/fast-import.c
index 207acb323..cfadda043 100644
--- a/fast-import.c
+++ b/fast-import.c
@@ -678,10 +678,15 @@ static void write_index(const char *idx_name)
 	idx = xmalloc(object_count * sizeof(struct object_entry*));
 	c = idx;
 	for (o = blocks; o; o = o->next_pool)
-		for (e = o->entries; e != o->next_free; e++)
-			if (pack_id == e->pack_id)
-				*c++ = e;
+		for (e = o->next_free; e-- != o->entries;) {
+			if (pack_id != e->pack_id)
+				goto sort_index;
+			*c++ = e;
+		}
+sort_index:
 	last = idx + object_count;
+	if (c != last)
+		die("internal consistency error creating the index");
 	qsort(idx, object_count, sizeof(struct object_entry*), oecmp);
 
 	/* Generate the fan-out array. */
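
Below is a minimal standalone sketch, not fast-import's actual code, that
illustrates why the early stop in the patch above is safe.  The pool and
entry structures and the collect() helper are simplified stand-ins for
object_entry_pool/object_entry: pools sit newest-first on the linked list
and each pool fills front to back, so walking each pool backwards from
next_free visits the newest entries first, and the first entry from an
older pack means nothing later in the scan can belong to the current pack.

#include <stdio.h>
#include <stddef.h>

/* Simplified stand-ins for fast-import's object_entry/object_entry_pool. */
struct entry {
	unsigned pack_id;
};

struct pool {
	struct pool *next_pool;   /* newest pool sits at the head of the list */
	struct entry *next_free;  /* one past the last slot used in this pool */
	struct entry slots[4];
};

/*
 * Collect every entry belonging to pack_id.  Pools are newest-first and
 * each pool fills front to back, so the reverse scan can return as soon
 * as it sees an entry from an older pack.
 */
static size_t collect(struct pool *pools, unsigned pack_id,
		      const struct entry **out)
{
	struct pool *p;
	const struct entry *e;
	size_t n = 0;

	for (p = pools; p; p = p->next_pool)
		for (e = p->next_free; e-- != p->slots;) {
			if (e->pack_id != pack_id)
				return n;	/* reached an older pack: stop */
			out[n++] = e;
		}
	return n;
}

int main(void)
{
	struct pool older = { NULL, NULL, { {0}, {0}, {0}, {0} } };
	struct pool newer = { &older, NULL, { {1}, {1} } };
	const struct entry *found[8];
	size_t n;

	older.next_free = older.slots + 4;	/* four entries from pack 0 */
	newer.next_free = newer.slots + 2;	/* two entries from pack 1  */

	n = collect(&newer, 1, found);
	printf("entries in current pack: %zu\n", n);	/* prints 2 */
	return 0;
}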