threads to a single core. See the following discussions:
* http://smoothspan.wordpress.com/2007/09/14/guido-is-right-to-leave-the-gil-in-python-not-for-multicore-but-for-utility-computing/
* http://www.artima.com/weblogs/viewpost.jsp?thread=214235
* http://www.snaplogic.com/blog/?p=94
* http://stackoverflow.com/questions/31340/
Increasing `worker_pool` will only help you get around IO blocking,
at the cost of increased time-slicing overhead.
"""
"""
        self._receive_queue = Queue()

    def _spawn_workers(self, worker_pool):
        self._workers = []
        for i in range(worker_pool):
            # receive_queue is assumed; the original call was truncated here
            worker = WorkerThread(spawn_queue=self._spawn_queue,
                                  receive_queue=self._receive_queue)
            self._workers.append(worker)
        if self._job_is_blocked(job):
            log().debug('block job %s' % job)
            self._blocked.append(job)
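The blocked-job check above fits a defer-and-retry pattern: a job whose prerequisites are unmet goes onto a blocked list and is retried when something completes. A minimal sketch, assuming jobs are `(name, dependencies)` pairs (the job shape and this `_job_is_blocked` predicate are assumptions, not this module's actual API):

```python
class Scheduler:
    """Dispatch jobs whose dependencies have completed; defer the rest."""

    def __init__(self):
        self._blocked = []
        self._done = set()

    def _job_is_blocked(self, job):
        # A job is blocked while any dependency has not completed yet.
        name, deps = job
        return any(dep not in self._done for dep in deps)

    def submit(self, job):
        if self._job_is_blocked(job):
            self._blocked.append(job)   # defer until dependencies finish
            return False
        self._run(job)
        return True

    def _run(self, job):
        name, _ = job
        self._done.add(name)
        # Finishing a job may have unblocked deferred ones; retry them.
        retry, self._blocked = self._blocked, []
        for deferred in retry:
            self.submit(deferred)
```

Keeping the blocked list separate from the run queue means a deferred job is re-examined only when a completion could have unblocked it, instead of being rechecked on every pass through the queue.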