## Binary search is not efficient with traversal costs. What is?

Binary search let me down when I tried to apply it to the real world. The scenario is as follows.

I need to test the range of a device that communicates over radio. Communication needs to occur quickly, but slow transmission is tolerable, up to a point (say, about 3 minutes). I need to test whether transmissions succeed at every 200-foot interval until failure, up to 1600 feet; each test takes 3 minutes to execute.

I naively assumed that a binary search would be the most efficient method of finding the failure point, but consider a travel speed of 200 ft/min and test time of 3 minutes. If failure to transmit occurs at 500 feet, binary search is not the most efficient means of finding the failure point, as shown below.

Simply walking along and testing at every point would have found the failure sooner: three tests at 200, 400, and 600 feet plus 3 minutes of walking comes to 12 minutes, whereas binary search (probing 800, then 400, then 600 feet) takes 7 minutes of walking plus 9 minutes of testing, 16 minutes in all.
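To make the comparison concrete, here is a small Python sketch of the cost model (walking at 200 ft/min, 3 minutes per test); the probe sequences are the ones just described, not the output of any search library:

```python
WALK_FT_PER_MIN = 200   # travel speed from the scenario
TEST_MIN = 3            # minutes per test

def cost(probe_positions_ft):
    """Minutes to visit the given test positions in order, starting at 0."""
    minutes, here = 0.0, 0
    for p in probe_positions_ft:
        minutes += abs(p - here) / WALK_FT_PER_MIN + TEST_MIN
        here = p
    return minutes

# Failure at 500 ft: walk-and-test stops at the first failing point (600 ft),
# while binary search probes 800, then 400, then 600.
print(cost([200, 400, 600]))  # 12.0 minutes
print(cost([800, 400, 600]))  # 16.0 minutes
```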

My question: How do you calculate the most efficient path to the solution when traveling time matters? What is this called (e.g., binary-travel search, etc.)?

Binary search is not guaranteed to find the solution in the minimum number of steps for a particular problem. It is a guarantee about the worst-case number of steps over all problems. It also assumes equal cost to access all points, which doesn't even hold here. – Ted Hopp – 2012-12-03T02:28:12.167

Standard binary search can't be used here; it assumes random access. – Viktor Lova – 2012-12-03T02:33:59.007

So, I have a question. Your distance is 1600 feet and your precision is 200 feet, so you have 8 points to check? What is the maximal number of points there can be? – Viktor Lova – 2012-12-03T02:40:10.603

You can calculate the performance of different searching algorithms with algorithm analysis. If 1600 feet is static you can easily calculate 16 values (run-times) for each algorithm. Then you have to answer if it's the average case you are interested in or the worst case. – EralpB – 2012-12-03T02:42:31.647

### Answer (score 3)

Binary search is indeed predicated on `O(1)` access times; there's little point binary searching a linked list, for example [but see Note 1], and that's essentially what you're doing, since you seem to be assuming that only discrete intervals are worth testing. If you were seeking a more accurate answer, you would find that the binary search allows an arbitrary precision, at the cost of one additional test per bit of precision.

Let's suppose you don't even know what the maximum value might be. Then you couldn't start by testing in the middle, since you wouldn't know where the middle was. Instead, you might do an exponential search for a limit (which is a kind of binary search inside out): you start by testing at `x`, then `2x`, then `4x`, until you reach a point the signal doesn't reach. (`x` is the smallest answer you find interesting; in other words, if the first test at `x` shows the signal doesn't reach, you will stop there.) At the end of this phase, you'll be at `2^i·x` for some integer `i`, and you will know the answer is between `2^(i-1)·x` and `2^i·x`.

Now you can actually do the binary search, starting by going backwards by `2^(i-2)·x`. From there, you might go either forwards or backwards, but you will definitely travel `2^(i-3)·x`, and on the next iteration you'll travel `2^(i-4)·x`, and so on.

So in all, in the first phase (searching for a maximum), you walk to `2^i·x` and do `i` tests. In the second phase (binary refinement), you walk a total of `(2^(i-1) - 1)·x` and do `i-1` tests. You'll end up at some point `d` between `2^(i-1)·x` and `2^i·x`, so at worst you'll have walked `3d` (three times the final distance), and at best `3d/2`. The number of tests you will have done is `2*ceil(log2(d/x)) - 1`, which is within one test of `2*log2(d/x)`.
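The two phases can be sketched as follows (a Python sketch under the stated cost model: one time unit to walk distance `x`, a fixed cost per test; `fails` is a hypothetical oracle for "the signal fails at this point"):

```python
def search(fails, x, test_cost=3):
    """Return (first failing position, total cost), probing multiples of x."""
    pos, here, cost = x, 0, 0.0
    # Phase 1: test at x, 2x, 4x, ... until the signal fails.
    while True:
        cost += abs(pos - here) / x + test_cost
        here = pos
        if fails(pos):
            break
        pos *= 2
    lo, hi = pos // 2, pos  # the answer now lies in (lo, hi]
    # Phase 2: binary refinement, paying travel for each reversal.
    while hi - lo > x:
        mid = (lo + hi) // 2
        cost += abs(mid - here) / x + test_cost
        here = mid
        if fails(mid):
            hi = mid
        else:
            lo = mid
    return hi, cost

# Failure at 500 ft, 200 ft precision: probes 200, 400, 800, then 600.
print(search(lambda p: p >= 500, 200))  # (600, 17.0)
```

On the question's example this costs 17 minutes rather than the 16 of a midpoint-first binary search, because the doubling phase starts at `x` instead of assuming a known maximum.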

Under what circumstances should you do the binary search, then? Basically, it depends on the ratio of the travel time to the test time, and on the desired precision of the answer. The simple sequential algorithm finds position `d` after `d/x` moves of size `x` and `d/x` tests; the binary search algorithm above finds position `d` after travelling at most `3d` but doing only around `2 log2(d/x)` tests. Roughly speaking, if one test costs more than twice the time it takes to travel the step `x`, and the expected distance is sufficiently larger than the precision, you should prefer the binary search.
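As a back-of-envelope version of this trade-off (an assumed cost model, not part of the answer: 1 unit = time to walk the step `x`, `T` = test time in those units):

```python
from math import ceil, log2

def sequential_cost(d, x, T):
    """Walk-and-test: d/x moves plus d/x tests."""
    return (d / x) * (1 + T)

def binary_cost(d, x, T):
    """Exponential + binary: at most 3d travel, 2*ceil(log2(d/x)) - 1 tests."""
    return 3 * (d / x) + T * (2 * ceil(log2(d / x)) - 1)

print(sequential_cost(600, 200, 3))  # 12.0 minutes
print(binary_cost(600, 200, 3))      # 18.0 minutes (worst case)
```

With the question's numbers the sequential walk still wins, since `d` is only three multiples of the precision; raise `T`, or make `d/x` much larger, and the binary estimate takes over.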

In your example, you appear to want the result with a precision of 200 feet; the travel time is 1 minute and the test time is 3 minutes, which is more than twice the travel time. So you should prefer the binary search, unless you expect that the answer will be found in a small number of multiples of the precision (as is the case). Note that although the binary algorithm uses four tests and 1000 feet of travel (compared with three tests and 600 feet for the sequential algorithm), improving the precision to 50 feet will only add four more tests and 150 feet of travel to the binary algorithm, while the sequential algorithm will require 20 tests.

Note 1: Actually, it might make sense to binary search a linked list, using precisely the above algorithm, if the cost of a test is high. Assuming the cost of a test is not proportional to the index in the list, the complexity of the search will be `O(N)` for both a linear search and the binary search, but the binary search will do `O(log N)` tests and `O(N)` steps, while the sequential search will do `O(N)` tests and `O(N)` steps. Asymptotically this doesn't matter, but for real-world sized N it might matter a lot.

### Answer (score 0)

In reality, binary search can be applied here, but with several changes. Instead of probing the center, we must compute an optimal position to visit:

```csharp
int length = maxUnchecked - minChecked;
int whereToGo = minChecked + (int)(length * factorIncrease) + stepIncrease;
```

Because we need to find the first position where communication fails, sometimes we must go back; after reversing direction, it can be optimal to use a different strategy:

```csharp
int length = maxUnchecked - minChecked;
int whereToGo;
if (increase)
    whereToGo = minChecked + (int)(length * factorIncrease) + stepIncrease;
else
    whereToGo = minChecked + (int)(length * factorDecrease) + stepDecrease;
```

So our task is to find optimal values of factorIncrease, factorDecrease, stepIncrease, and stepDecrease such that the sum of f(failPos) over all failure positions is minimal. How? A full brute force works if n (total length / 200.0f) is small; otherwise you can try genetic algorithms or something similarly simple.

Step precision is 1 with limits [0, n); the factor step (eps) is 1/(4*n) with limits [0, 1).

Now, simple C# code to demonstrate this:

```csharp
using System;

class Program
{
    static double factorIncrease;
    static int stepIncrease;
    static double factorDecrease;
    static int stepDecrease;
    static bool debug = false;

    // Cost (travel + 3-minute tests) of locating failPos within
    // (minChecked, maxUnchecked] using the parameterized strategy.
    static int f(int lastPosition, int minChecked, int maxUnchecked, int last, int failPos, bool increase = true, int depth = 0)
    {
        if (depth == 100)
            throw new Exception();

        if (maxUnchecked - minChecked <= 0) {
            if (debug)
                Console.WriteLine("left: {0} right: {1}", minChecked, maxUnchecked);
            return 0;
        }

        int length = maxUnchecked - minChecked;
        int whereToGo;
        if (increase)
            whereToGo = minChecked + (int)(length * factorIncrease) + stepIncrease;
        else
            whereToGo = minChecked + (int)(length * factorDecrease) + stepDecrease;

        if (whereToGo <= minChecked)
            whereToGo = minChecked + 1;
        if (whereToGo >= maxUnchecked)
            whereToGo = maxUnchecked;

        // Cost of this probe: travel distance plus 3 minutes for the test.
        int cur = Math.Abs(whereToGo - lastPosition) + 3;

        if (debug)
            Console.WriteLine("left: {2} right: {3} whereToGo: {0} cur: {1}", whereToGo, cur, minChecked, maxUnchecked);

        if (failPos == whereToGo || whereToGo == maxUnchecked)
            return cur + f(whereToGo, minChecked, whereToGo - 1, last, failPos, increase, depth + 1);
        else if (failPos < whereToGo)
            return cur + f(whereToGo, minChecked, whereToGo, last, failPos, increase, depth + 1);
        else
            return cur + f(whereToGo, whereToGo, maxUnchecked, last, failPos, false, depth + 1);
    }

    static void Main(string[] args)
    {
        int n = 20;

        int minSum = int.MaxValue;
        var minFactorIncrease = 0.0;
        var minStepIncrease = 0;
        var minFactorDecrease = 0.0;
        var minStepDecrease = 0;

        var eps = 1 / (4.0 * n);

        // Brute force over the four strategy parameters, summing the
        // cost over every possible failure position.
        for (factorDecrease = 0.0; factorDecrease < 1; factorDecrease += eps)
        for (stepDecrease = 0; stepDecrease < n; stepDecrease++)
        for (factorIncrease = 0.0; factorIncrease < 1; factorIncrease += eps)
        for (stepIncrease = 0; stepIncrease < n; stepIncrease++) {
            int cur = 0;
            for (int i = 0; i < n; i++) {
                try {
                    cur += f(0, -1, n - 1, n - 1, i);
                }
                catch {
                    Console.WriteLine("fail {0} {1} {2} {3} {4}", factorIncrease, stepIncrease, factorDecrease, stepDecrease, i);
                    return;
                }
            }
            if (cur < minSum) {
                minSum = cur;
                minFactorIncrease = factorIncrease;
                minStepIncrease = stepIncrease;
                minFactorDecrease = factorDecrease;
                minStepDecrease = stepDecrease;
            }
        }

        Console.WriteLine("best - mathmin={4}, f++:{0} s++:{1} f--:{2} s--:{3}", minFactorIncrease, minStepIncrease, minFactorDecrease, minStepDecrease, minSum);

        factorIncrease = minFactorIncrease;
        factorDecrease = minFactorDecrease;
        stepIncrease = minStepIncrease;
        stepDecrease = minStepDecrease;

        //debug = true;
        for (int i = 0; i < n; i++)
            Console.WriteLine("{0} {1}", 3 + i * 4, f(0, -1, n - 1, n - 1, i));

        debug = true;
        Console.WriteLine(f(0, -1, n - 1, n - 1, n - 1));
    }
}
```

So, some values (f++ = factorIncrease, s++ = stepIncrease, f-- = factorDecrease, s-- = stepDecrease):

```
n = 9  mathmin = 144, f++: 0,1(1)  s++: 1 f--: 0,2(2) s--: 1
n = 20 mathmin = 562, f++: 0,1125  s++: 2 f--: 0,25   s--: 1
```

What's interesting: the mathmin for n = 9 (your case) is equal to the mathmin of simply walking one step at a time and checking. – Viktor Lova – 2012-12-03T04:45:33.413

### Answer (score 0)

Depending on what you actually want to optimise, there may be a way to work out an optimum search pattern. I presume you don't want to optimise the worst case time, because the slowest case for many search strategies will be when the break is at the very end, and binary search is actually pretty good here - you walk to the end without changing direction, and you don't make very many stops.

You might consider different binary trees, and perhaps work out the average time taken to work your way down to a leaf. Binary search is one sort of tree, and so is walking along and testing as you go - a very unbalanced tree in which each node has at least one leaf attached to it.

When following along such a tree you always start at one end or another of the line you are walking along, walk some distance before making a measurement, and then, depending on the result and the tree, either stop or repeat the process with a shorter line, where you are at one end or another of it.

This gives you something you can attack using dynamic programming. Suppose you have solved the problem for lengths of up to N segments, so that you know the cost for the optimum solutions of these lengths. Now you can work out the optimum solution for N+1 segments. Consider breaking the N+1 segments into two pieces in the N+1 possible ways. For each such way, work out the cost of moving to its decision point and taking a measurement and then add on the cost of the best possible solutions for the two sections of segments on either side of the decision point, possibly weighted to account for the probability of ending up in those sections. By considering those N+1 possible ways, you can work out the best way of splitting up N+1 segments, and its cost, and continue until you work out a best solution for the number of sections you actually have.