[[!meta title="Reading IGOR files from Python"]] [[!template id=gitrepo repo=igor]] This is the home page for the `igor` package, [[Python]] modules for reading files written by [WaveMetrics][] IGOR Pro. Note that if you're designing a system, [[HDF5]] is almost certainly a better choice for your data file format than IBW or PXP. This package exists for those of you who's data is already stuck in an IGOR format. History ------- When I joined Prof. Yang's lab, there was a good deal of data analysis code written in IGOR, and a bunch of old data saved in IGOR binary wave (IBW) and packed experiment (PXP) files. I don't use MS Windows, so I don't run IGOR, but I still needed a way to get at the data. Luckily, the [WaveMetrics][] folks publish [some useful notes][TN] which explain the fundamentals of these two file formats ([TN003][] for IBW and [PTN003][] for PXP). The file formats are in a goofy format, but [strings][] pulls out enough meat to figure out what's going on. For a while I used a IBW → ASCII reader that I coded up in [[C]], but when I joined the [[Hooke]] project during the winter of 2009–2010, I translated the reader into [[Python]] to support the drivers for data from Asylum Research's [MFP-*][MFP-1D] and related microscopes. This scratched my itch for a few years. Fast forward to 2012, and for the first time I needed to extract data from a PXP file. Since my Python code only supported IBW's, I searched around and found [igor.py][] by Paul Kienzle Merlijn van Deen. They had a PXP reader, but no reader for stand-alone IBW files. I decided to merge the two projects, so I split my reader out of the Hooke repository and hacked up the [[Git]] repository referenced above. Now it's easy to get a hold of all that useful metadata in a hurry. No writing ability yet, but I don't know why you'd want to move data that direction anyway ;). Parsing dynamic structures with Python -------------------------------------- The IGOR file formats rely on lots of shenanigans with C `struct`s. To meld all the structures together in a natural way, I've extended Python's standard [struct][] library to support arbitrary nesting and dynamic fields. Take a look at [igor.struct][struct.py] for some examples. This framework makes it easy to load data from structures like: struct vector { unsigned int length; short data[length]; }; With the standard `struct` module, you'd read this using the functional approach: >>> import struct >>> buffer = b'\x00\x00\x00\x02\x01\x02\x03\x04' >>> length_struct = struct.Struct('>I') >>> length = length_struct.unpack_from(buffer)[0] >>> data = struct.unpack_from('>' + 'h'*length, buffer, length_struct.size) >>> print(data) (258, 772) This obviously works, but keeping track of the offsets, byte ordering, etc. can be tedious. My `igor.struct` package allows you to use a more object oriented approach: >>> from pprint import pprint >>> from igor.struct import Field, DynamicField, DynamicStructure >>> class DynamicLengthField (DynamicField): ... def pre_pack(self, parents, data): ... "Set the 'length' value to match the data before packing" ... vector_structure = parents[-1] ... vector_data = self._get_structure_data( ... parents, data, vector_structure) ... length = len(vector_data['data']) ... vector_data['length'] = length ... data_field = vector_structure.get_field('data') ... data_field.count = length ... data_field.setup() ... def post_unpack(self, parents, data): ... "Adjust the expected data count to match the 'length' value" ... vector_structure = parents[-1] ... 
    ...         vector_data = self._get_structure_data(
    ...             parents, data, vector_structure)
    ...         length = vector_data['length']
    ...         data_field = vector_structure.get_field('data')
    ...         data_field.count = length
    ...         data_field.setup()

    >>> dynamic_length_vector = DynamicStructure('vector',
    ...     fields=[
    ...         DynamicLengthField('I', 'length'),
    ...         Field('h', 'data', count=0, array=True),
    ...         ],
    ...     byte_order='>')

    >>> vector = dynamic_length_vector.unpack(buffer)
    >>> pprint(vector)
    {'data': array([258, 772]), 'length': 2}

While this is overkill for such a simple example, it scales much more cleanly than an approach using the standard `struct` module.  The main benefit is that you can use `Structure` instances as format specifiers for `Field` instances.  This means that you could specify a C structure like:

    struct vectors {
      unsigned int length;
      struct vector data[length];
    };

with:

    >>> dynamic_length_vectors = DynamicStructure('vectors',
    ...     fields=[
    ...         DynamicLengthField('I', 'length'),
    ...         Field(dynamic_length_vector, 'data', count=0, array=True),
    ...         ],
    ...     byte_order='>')

The C code you're mimicking probably only uses a handful of dynamic approaches.  Once you've written classes to handle each of them, it is easy to translate arbitrarily complex nested C structures into Python representations.  The pre-pack and post-unpack hooks also give you a convenient place to translate between a C struct's funky format and Python's native types.  You take care of all that when you define the structure, and then any part of your software that uses the structure gets the native version automatically.

[WaveMetrics]: http://www.wavemetrics.com/
[TN]: ftp://ftp.wavemetrics.net/IgorPro/Technical_Notes/
[TN003]: ftp://ftp.wavemetrics.net/IgorPro/Technical_Notes/TN003.zip
[PTN003]: ftp://ftp.wavemetrics.net/IgorPro/Technical_Notes/PTN003.zip
[strings]: http://www.gnu.org/software/binutils/
[MFP-1D]: http://www.asylumresearch.com/Products/Mfp1D/Mfp1D.shtml
[igor.py]: http://pypi.python.org/pypi/igor.py
[struct]: http://docs.python.org/library/struct.html
[struct.py]: http://git.tremily.us/?p=igor.git;a=blob;f=igor/struct.py;hb=HEAD

[[!tag tags/programming]]
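For instance, here's a toy hook along those lines: a value stored on disk as integer hundredths but exposed to Python as a float.  This is just a sketch built from the same `Field`/`DynamicField`/`DynamicStructure` pattern as the length example above; the `FixedPointField` class, the `price` and `quantity` field names, and the sample buffer are invented for illustration, and the exact output representation may vary:

    >>> from pprint import pprint
    >>> from igor.struct import Field, DynamicField, DynamicStructure

    >>> class FixedPointField (DynamicField):
    ...     "Hypothetical field: stored as integer hundredths, exposed as a float."
    ...     def pre_pack(self, parents, data):
    ...         # convert the Python float back into integer hundredths
    ...         structure = parents[-1]
    ...         d = self._get_structure_data(parents, data, structure)
    ...         d['price'] = int(round(d['price'] * 100))
    ...     def post_unpack(self, parents, data):
    ...         # convert the raw integer hundredths into a Python float
    ...         structure = parents[-1]
    ...         d = self._get_structure_data(parents, data, structure)
    ...         d['price'] = d['price'] / 100.0

    >>> record = DynamicStructure('record',
    ...     fields=[
    ...         FixedPointField('i', 'price'),
    ...         Field('h', 'quantity'),
    ...         ],
    ...     byte_order='>')

    >>> pprint(record.unpack(b'\x00\x00\x04\xd2\x00\x03'))
    {'price': 12.34, 'quantity': 3}

Anything that unpacks a `record` now sees `12.34` directly, without having to know how the value was laid out on disk.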