Here is some code to illustrate the size of the problem (as of 11 May 2015) and how to 'fix' it.

```
import numpy as np
import bisect
import timeit
from random import randint
dtype = np.dtype([ ('pos','<I'),('sig','<H') ]) # my data is unsigned 32bit, and unsigned 16bit
data1 = np.fromfile('./all2/840d.0a9b45e8c5344abf6ac761017e93b5bb.2.1bp.binary', dtype)
dtype2 = np.dtype([('pos',np.uint32),('sig',np.uint32)]) # convert data to both unsigned 32bit
data2 = data1.astype(dtype2)
data3 = data2.view(('uint32', len(data2.dtype.names))) # convert above to a normal array (not structured array)
print data1.dtype.descr # [('pos', '<u4'), ('sig', '<u2')]
print data2.dtype.descr # [('pos', '<u4'), ('sig', '<u4')]
print data3.dtype.descr # [('', '<u4')]
print data1.nbytes # 50344494
print data2.nbytes # 67125992
print data3.nbytes # 67125992
print data1['pos'].max() # 2099257376
print data2['pos'].max() # 2099257376
print data3[:,0].max() # 2099257376
def b1(): return bisect.bisect_left(data1['pos'], randint(100000000,200000000))
def b2(): return bisect.bisect_left(data2['pos'], randint(100000000,200000000))
def b3(): return bisect.bisect_left(data3[:,0], randint(100000000,200000000))
def ss1(): return np.searchsorted(data1['pos'], randint(100000000,200000000))
def ss2(): return np.searchsorted(data2['pos'], randint(100000000,200000000))
def ss3(): return np.searchsorted(data3[:,0], randint(100000000,200000000))
def ricob1(): return bisect.bisect_left(data1['pos'], np.uint32(randint(100000000,200000000)))
def ricob2(): return bisect.bisect_left(data2['pos'], np.uint32(randint(100000000,200000000)))
def ricob3(): return bisect.bisect_left(data3[:,0], np.uint32(randint(100000000,200000000)))
def ricoss1(): return np.searchsorted(data1['pos'], np.uint32(randint(100000000,200000000)))
def ricoss2(): return np.searchsorted(data2['pos'], np.uint32(randint(100000000,200000000)))
def ricoss3(): return np.searchsorted(data3[:,0], np.uint32(randint(100000000,200000000)))
print timeit.timeit(b1,number=300) # 0.0085117816925
print timeit.timeit(b2,number=300) # 0.00826191902161
print timeit.timeit(b3,number=300) # 0.00828003883362
print timeit.timeit(ss1,number=300) # 6.57477498055
print timeit.timeit(ss2,number=300) # 5.95308589935
print timeit.timeit(ss3,number=300) # 5.92483091354
print timeit.timeit(ricob1,number=300) # 0.00120902061462
print timeit.timeit(ricob2,number=300) # 0.00120401382446
print timeit.timeit(ricob3,number=300) # 0.00120711326599
print timeit.timeit(ricoss1,number=300) # 4.39265394211
print timeit.timeit(ricoss2,number=300) # 0.00116586685181
print timeit.timeit(ricoss3,number=300) # 0.00108909606934
```
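The binary file above isn't publicly available, but the same effect can be reproduced with synthetic data (a sketch in Python 3; the array size and timings are illustrative, not the original data):

```python
import bisect
import timeit
import numpy as np

# Synthetic stand-in for the file: sorted uint32 positions plus uint16 signals
n = 2_000_000
dtype = np.dtype([('pos', '<u4'), ('sig', '<u2')])
data = np.zeros(n, dtype=dtype)
data['pos'] = np.sort(np.random.randint(0, 2**31, size=n, dtype=np.uint32))

needle_int = 150_000_000            # plain Python int
needle_u32 = np.uint32(needle_int)  # explicitly typed scalar

# On affected numpy versions, searchsorted with a plain Python int falls off
# the fast path; passing a matching np.uint32 scalar avoids that.
print(timeit.timeit(lambda: np.searchsorted(data['pos'], needle_int), number=300))
print(timeit.timeit(lambda: np.searchsorted(data['pos'], needle_u32), number=300))
print(timeit.timeit(lambda: bisect.bisect_left(data['pos'], needle_u32), number=300))
```

Both lookups return the same index either way; only the time taken differs.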

*Update!*
So thanks to Rico's comments, it seems like setting the type of the number you pass to searchsorted/bisect is really important!
However, on the structured array with 32-bit and 16-bit ints, it's still slow (although nowhere near as slow as before).
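One workaround for that remaining slowness (a sketch, using the same field names as the snippet above) is to pull the lookup column out into its own contiguous array once and search that, since the field view into mixed 32/16-bit records is strided, not packed:

```python
import numpy as np

dtype = np.dtype([('pos', '<u4'), ('sig', '<u2')])
data = np.zeros(1000, dtype=dtype)
data['pos'] = np.arange(1000, dtype=np.uint32) * 3

# data['pos'] is a view into the 6-byte records (itemsize 4, stride 6),
# so it is not contiguous; ascontiguousarray makes a packed copy once.
pos = np.ascontiguousarray(data['pos'])
assert not data['pos'].flags['C_CONTIGUOUS']
assert pos.flags['C_CONTIGUOUS']

# Search the packed copy, then use the index back on the structured array.
idx = np.searchsorted(pos, np.uint32(300))
print(idx, data[idx])
```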

It should be noted that if your array is big enough that the difference between bisect and searchsorted is significant, then the time taken to .copy() that column, use it for searchsorted lookups, and then fetch the data by searchsorted's index is most likely going to be larger than the difference between bisect and searchsorted to begin with. Plus the extra RAM.
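The copy only pays for itself if it is made once and reused for many lookups, where searchsorted can also batch all the queries in a single vectorised call (something bisect cannot do). A sketch with synthetic data:

```python
import bisect
import numpy as np

dtype = np.dtype([('pos', '<u4'), ('sig', '<u2')])
data = np.zeros(1_000_000, dtype=dtype)
data['pos'] = np.arange(1_000_000, dtype=np.uint32)

# Pay the copy cost (time and RAM) once, up front...
pos = data['pos'].copy()  # contiguous uint32 copy of the lookup column

# ...then every subsequent query hits the packed copy; searchsorted
# accepts an array of needles, so 10k lookups happen in one call.
queries = np.random.randint(0, 1_000_000, size=10_000).astype(np.uint32)
idx = np.searchsorted(pos, queries)

# Spot-check one query against bisect on the original strided column
q = int(queries[0])
assert np.searchsorted(pos, np.uint32(q)) == bisect.bisect_left(data['pos'], q)
```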

(but 5/5 Bi Rico for finding out it's the format that's the problem) – J.J – 2015-05-07T15:35:28.470

@user3329564 I believe there was a patch to fix this at some point, but don't remember which version it got into. – Bi Rico – 2015-05-07T20:49:17.050

I am using numpy `1.10.1` and I am getting the opposite behavior: `timeit a['f0'].searchsorted(400.)` is `best of 3: 8.1 µs per loop`, while `timeit f0.searchsorted(400.)` is `best of 3: 510 ns per loop`. I wonder why that is. – snowleopard – 2016-10-29T18:25:09.707

@snowleopard I'm not sure I understand your question. I believe a fix has been added to numpy to make the difference between `a['f0'].searchsorted` and `f0.searchsorted` much smaller. They're never going to be the same, but the 1000x performance difference has been removed. – Bi Rico – 2016-10-31T19:37:31.720