remote: Make ref_remove_duplicates faster for large numbers of refs
The ref_remove_duplicates function was very slow when handling a large
number of refs, because for each ref it did a linear search through all
of the remaining refs to find any duplicates, which is quadratic in the
number of refs overall.  Rewrite it to keep track of the refs it has
already seen in a string list and to drop a duplicate as soon as it is
found, which is much more efficient.
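
Below is a minimal, self-contained sketch of the idea described above,
for illustration only: it is not the remote.c code, and the types and
helpers here (a struct ref with a single name field, seen_list,
seen_find, seen_insert, remove_duplicate_refs) are simplified
stand-ins.  The real change uses git's string_list API; the sketch
just shows the same single pass that records each name it has seen in
a sorted list and unlinks a ref as soon as its name turns up again.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct ref {
        struct ref *next;
        char *name;     /* stand-in for the key the real code dedups on */
    };

    /* Sorted array of names we have already seen. */
    struct seen_list {
        char **names;
        size_t nr, alloc;
    };

    /* Binary search: returns 1 if name is present, otherwise stores
     * the insertion position that keeps the array sorted in *pos. */
    static int seen_find(struct seen_list *seen, const char *name, size_t *pos)
    {
        size_t lo = 0, hi = seen->nr;
        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;
            int cmp = strcmp(seen->names[mid], name);
            if (!cmp) {
                *pos = mid;
                return 1;
            }
            if (cmp < 0)
                lo = mid + 1;
            else
                hi = mid;
        }
        *pos = lo;
        return 0;
    }

    /* Insert a copy of name at pos, keeping the array sorted. */
    static void seen_insert(struct seen_list *seen, const char *name, size_t pos)
    {
        if (seen->nr == seen->alloc) {
            seen->alloc = seen->alloc ? 2 * seen->alloc : 16;
            seen->names = realloc(seen->names,
                                  seen->alloc * sizeof(*seen->names));
            if (!seen->names)
                exit(1);
        }
        memmove(seen->names + pos + 1, seen->names + pos,
                (seen->nr - pos) * sizeof(*seen->names));
        seen->names[pos] = strdup(name);
        seen->nr++;
    }

    /* Single pass: each lookup is a binary search instead of a linear
     * scan over all remaining refs.  Duplicates are unlinked and freed. */
    static struct ref *remove_duplicate_refs(struct ref *list)
    {
        struct seen_list seen = { NULL, 0, 0 };
        struct ref **tail = &list;

        while (*tail) {
            struct ref *cur = *tail;
            size_t pos;

            if (seen_find(&seen, cur->name, &pos)) {
                *tail = cur->next;      /* drop the duplicate */
                free(cur->name);
                free(cur);
            } else {
                seen_insert(&seen, cur->name, pos);
                tail = &cur->next;
            }
        }

        for (size_t i = 0; i < seen.nr; i++)
            free(seen.names[i]);
        free(seen.names);
        return list;
    }

    int main(void)
    {
        const char *names[] = { "refs/heads/a", "refs/heads/b",
                                "refs/heads/a", "refs/heads/c" };
        struct ref *list = NULL, **tail = &list;

        for (size_t i = 0; i < sizeof(names) / sizeof(*names); i++) {
            struct ref *r = calloc(1, sizeof(*r));
            r->name = strdup(names[i]);
            *tail = r;
            tail = &r->next;
        }
        list = remove_duplicate_refs(list);
        for (struct ref *r = list; r; r = r->next)
            printf("%s\n", r->name);    /* a, b, c: duplicate "a" dropped */
        return 0;
    }

With a sorted seen-list the per-ref lookup is a binary search, so the
pairwise string comparisons of the old linear scan go away; git's
string_list provides the same sorted-lookup behaviour without
hand-rolling the array handling as the sketch does.
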
Signed-off-by: Julian Phillips <julian@quantumfyre.co.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>