Inspired by Romain

Lynn has a calculator with only two buttons, which perform \boxed{+1} and \boxed{\div 2}.

She also has a screen with 9 significant figures, and it displays 0 when she gets the calculator.

If she wants the first 5 digits to be displayed as 1.0156, what is the minimum number of taps she needs?

Hint: The first few bits of 1.0156 in binary are

1.0156 \approx 1.0000001111111110010111001001\ldots_2.

Note: You may want to use the code environment below.
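For instance, a minimal Python sketch (not the site's code environment, and not part of the original problem) that brute-forces the answer with a breadth-first search over tap sequences, using exact rationals. It assumes the display truncates, so the first five characters read 1.0156 exactly when the value lies in [1.0156, 1.0157); the function name and depth cap are arbitrary.

```python
from collections import deque
from fractions import Fraction

# Target window: the screen starts with "1.0156" (assuming the display truncates).
LO, HI = Fraction(10156, 10000), Fraction(10157, 10000)

def min_taps(max_depth=12):
    """Breadth-first search over tap sequences; exact rationals avoid float error."""
    queue = deque([(Fraction(0), 0)])
    seen = {Fraction(0)}
    while queue:
        value, taps = queue.popleft()
        if LO <= value < HI:
            return taps
        if taps == max_depth:
            continue
        for nxt in (value + 1, value / 2):   # the two buttons
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, taps + 1))
    return None

print(min_taps())   # 8
```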


            
Inspiration


The answer is 8.


3 solutions

Ermanno Attardo
Mar 28, 2018

The integer part should be added at the end, by performing a series of +1s. As for the fractional part:

We want to approximate a generic base-10 decimal number, e.g. 0.15, to 2 decimal places. The minimal set of {+1, /2} operations consists either in finding the exact base-2 number (in case the number, truncated to the wanted Nth decimal digit, equals exactly a sum of powers of 1/2) or in finding the minimum upper bound in the range [0.15, 0.16).

My proof follows the lines of:

1) Deal with the integer part (E.g. Calvin Lin's Lemma 1).

This is essentially obvious. Adding an even number of 1s and then dividing by 2 just wastes moves. By performing /2 when we have an odd integer part, we halve the existing decimal part and add 0.5 to it, but this can be achieved by a shorter sequence of moves (e.g. we want to turn 0.125 into 0.625, adding 0.5: to reach 0.125 we previously performed /2 on 0.25, so we revert that division, add 1, making 1.25, and finally divide, making 0.625). Therefore, in our stream of operations, the integer part never needs to exceed 1 until the fractional part has been correctly approximated.

2) Explain why, for each power of 2, we want it at most once.

This is because, in a binary number 0.0000YXXXXX, the digit Y is worth as much as infinitely many 1s to its right, so skipping this power of 1/2 leaves us with a strict lower bound that the remaining digits cannot make up.

3) Explain why we always want to take away the largest power of 2 within the given bounds.

In case our approximation is not an exact sum of powers of 1/2, an approximation by defect will not satisfy our problem, and an approximation by too much excess adds operations, because the bigger the number, the more operations are required to push the binary digit into place. Therefore we perform a logical AND to match the powers of 1/2 that the two numbers (lower and upper bound) have in common, if any, and then add to this number the smallest binary digits that satisfy our approximation. This is exactly one digit, or a set of digits, shifted right only as far as they need to be. Voilà.
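In code, the "common powers of 1/2, then one more digit" step might look like the following sketch (not part of the original solution; the helper names are arbitrary, and it only checks that the recipe works for this particular interval):

```python
from fractions import Fraction

def binary_fraction(x, bits):
    """First `bits` binary digits of the fractional part of x."""
    digits, frac = [], x - int(x)
    for _ in range(bits):
        frac *= 2
        digits.append(int(frac))
        frac -= int(frac)
    return digits

lo = binary_fraction(Fraction(156, 10**4), 30)   # binary digits of 0.0156
hi = binary_fraction(Fraction(157, 10**4), 30)   # binary digits of 0.0157

# Keep the digits the two bounds agree on (the "logical AND" part) ...
prefix = []
for a, b in zip(lo, hi):
    if a != b:
        break
    prefix.append(a)

# ... then set one more 1, shifted right as little as possible.
target = prefix + [1]
print(''.join(map(str, target)))   # 000001, i.e. 1/64 = 0.015625
```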

Previous discussion:

I figured out you can find a certain sum of powers of 1/2 and take its leftmost digits. For example, binary 0.1 is equal to decimal 0.5, or 1/2. Binary 0.01 (second binary place) is decimal (1/2)^2, or 1/4, and makes 0.25.
Binary 0.11 is their sum (decimal 0.75). So in order to approximate the decimal part of a number in binary (the integer part needs to be added later), we try adding a 1 to the right as a new binary digit.
After adding this 1 to the binary number, we can convert the number into decimal and compare it to the target number. If the decimal value is greater than the target number, we precede this last 1 with a zero (pushing it to the right).
If it's less, we add a 1 to the right, until it's greater; then we precede this last 1 with zeros again until it's less than or equal. And so on, until we have approximated the exact decimal part up to the Nth digit we care about.
As a last step we scan this number from right to left. For every 1 we encounter, we perform "+1" and then "/2". For every zero, we perform "/2".
At the end of this step we have 0.xxxxxxxxx.
Then we add the integer part, if any (+1, +1, +1, etc., as many times as the units we want to add to 0.xxxx), and we are done. Your problem is great because all we have to do is find the binary 0.000001, which is decimal 0.015625 and approximates the number up to the point we care about.



First we check whether the approximation is already sufficient up to the Nth digit we wanted, and then:

  • if the decimal part is greater than the target number, we precede it with a zero

else

  • if it's less, we add a 1 to the right

  • repeat step 1

I need to demonstrate two things.

1) Adding units at the very end, after obtaining the decimal part (0.xxxxxx), is optimal. Rephrasing: the final operations will be a series of +1. This can be proven by contradiction. If we perform /2 when the integer part is even, we simply halve the integer part, wasting moves. For example, we need +1 +1 to make 2, but +1 +1 +1 +1 /2 wastes 3 moves. If we need to halve the decimal part, we can perform /2 first and add the integers afterwards. If we halve an odd number we waste moves but add 0.5 to the decimal part, and moreover we halve the already present decimal part. But:

  • Adding 0.5 can be done by undoing a /2 (if this was the last operation performed) and then adding 1 and dividing by 2. Example: we have 0.125 and want to turn it into 0.625 (the goal is to add 0.5). 0.125 was obtained by performing /2 on 0.25 (otherwise we would have 1.125, contradicting statement 1). So we revert one move and go back to 0.25, add 1 (making 1.25), and then divide by 2, resulting in 0.625. We can add 1 afterwards if needed. So this proves adding units at the end is either the optimal strategy or equal in cost to other optimal strategies (e.g. for target number 1.5: +1 +1 +1 /2 gives the same value as +1 /2 +1, but the latter, which ends with +1, is the best solution).

2) Until the final part, when we perform the final addition of units, it is not optimal (or at best equivalent to optimal in a few cases, but never better) to perform more than one +1 between /2s.

Let me give an example. We are still building the decimal part 0.xxx. Say we got to 0.625, binary 0.101. After +1 we have 1.101, and we can now perform /2 and right-shift it: 0.1101, adding information. If we perform +1 again instead, we get 2.101, and /2 makes 1.0101 (a wasted move), because +1 +1 /2 achieved the same result as /2 +1.

Suppose +3 (shorthand for +1 +1 +1) followed by /2 were efficient. We are adding binary 11 and shifting one place to the right. We can achieve the same with +1 /2 +1.

Suppose +5 (binary 101) followed by /2 (total 6 moves, turning 0.xxx into 10.1xxx) were efficient. We can achieve the same with +1 /2 +1 +1 (total 4 moves).

And so on.

Adding any even number wastes moves; adding an odd number adds 0.5 to the decimal part after halving it, and statement 1) shows there is a more efficient or equally efficient way to do that.

Therefore I proved point 2.

So we know, as the binary representation suggests, that the optimal strategy (or one of them) is to add the integer units at the very end and, before that, perform a series of /2 with at most one +1 between them.
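As a concrete illustration of that strategy, here is a short sketch (not part of the original solution) that turns a terminating binary string such as "1.000001" into a tap sequence: scan the fractional part right to left, pressing "+1" then "/2" for each 1 and just "/2" for each 0, then press "+1" once per unit of the integer part.

```python
def taps_from_binary(binary):
    """Tap sequence for a terminating binary representation like '1.000001'."""
    int_part, frac_part = binary.split('.')
    taps = []
    for bit in reversed(frac_part):          # build the fraction from the last bit up
        if bit == '1':
            taps.append('+1')
        taps.append('/2')
    taps.extend(['+1'] * int(int_part, 2))   # integer units go last
    return taps

seq = taps_from_binary('1.000001')
print(seq, len(seq))   # ['+1', '/2', '/2', '/2', '/2', '/2', '/2', '+1'] 8
```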

I converted your comments into a solution. Can you continue editing it for clarity?

Remember that the goal is to prove we have the minimum number of steps, which requires showing 1) it can be done, 2) it cannot be done in fewer steps. You have done 1 (in a manner that is slightly independent of the binary representation), but not 2. Yes, it is (close to) the correct algorithm, but you will still have to explain why it works.

Calvin Lin Staff - 3 years, 2 months ago


And sorry about that, I fixed a few mistakes in case you were already reading.

Ermanno Attardo - 3 years, 2 months ago


So this proves adding units at the end is either the optimal strategy or equal in cost to other optimal strategies

I do not think this is proven as yet. It's "sort of true", but you need to really explain why (which I consider is the hard part of the problem).

Calvin Lin Staff - 3 years, 2 months ago

2) Explain why, for each power of 2, we want it at most once.
This is because, in a binary number 0.0000YXXXXX, the digit Y is worth as much as infinitely many 1s to its right, so skipping this power of 1/2 leaves us with a strict lower bound that the remaining digits cannot make up.

The statement and the claim don't match up. The reasoning should be: "If we wanted 2 (or more) copies of \frac{1}{2^s}, then we could have used one \frac{1}{2^{s-1}} instead, which saves us one tap. Hence, in the optimal solution, each power of 2 is used at most once."


3) Explain why we always want to take away the largest power of 2 within the given bounds.

I think this part can be done better. You haven't explained why using the largest power of 2 will guarantee that we minimize the total number of taps. You just talk about why this is a good thing to do (but not why it is the best thing to do).

E.g. The greedy algorithm is often used to deal with globally complex but locally clear scenarios. It results in a local optimum, but not necessarily a global optimum.

Calvin Lin Staff - 3 years, 1 month ago
David Vreken
Mar 28, 2018

The most efficient way to represent the digits after the decimal point on Lynn's calculator is to read the binary representation from right to left and use \boxed{+1}, \boxed{\div 2} for each 1 and \boxed{\div 2} for each 0. The question is what is the minimum place value that Lynn can start at and still get 1.0156.

Now \frac{1}{2^1} = 0.5, \frac{1}{2^2} = 0.25, and \frac{1}{2^3} = 0.125 have an effect on the digit immediately after the decimal point because their decimal representations have a nonzero digit in the tenths spot, but \frac{1}{2^4} = 0.0625 (and all powers of 2 after this) do not, because 2^3 < 10^1 < 2^4. Likewise, \frac{1}{2^4} through \frac{1}{2^6} have a nonzero digit in the hundredths spot, but \frac{1}{2^7} (and all powers of 2 after this) do not, because 2^6 < 10^2 < 2^7. For a decimal representation to be accurate to 1 decimal place, in most cases Lynn must be sure that the 2nd digit is also accurate to deal with rounding, and so at most 6 binary decimals are needed, because 2^6 < 10^{1+1} < 2^7. Using this logic, in general, to be accurate to n decimal places, in most cases at most \lfloor \frac{n + 1}{\log 2} - 1 \rfloor binary decimals are needed.

However, the number of binary decimals needed can be shortened if the last few digits in the \lfloor \frac{n + 1}{\log 2} - 1 \rfloor binary decimals are all 0s or all 1s. If the last few digits are 0s, then the binary representation can be truncated because the 0s are unnecessary. If the last few digits are 1s, then 1 can be added in the \lfloor \frac{n + 1}{\log 2} - 1 \rfloor spot without changing the displayed decimal value, and the addition will change all the trailing 1s to 0s and the last 0 to a 1, allowing another truncation because of the now-unnecessary 0s.
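For what it's worth, the bound is easy to evaluate numerically; a tiny sketch (not from the solution, with log taken base 10 as in the formula above):

```python
import math

def max_binary_decimals(n):
    """floor((n + 1) / log 2 - 1), the bound derived above (log is base 10)."""
    return math.floor((n + 1) / math.log10(2) - 1)

print(max_binary_decimals(4))   # 15, the value used below for 1.0156
```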

So for 1.0156, accurate to 4 decimal places, Lynn would need at most \lfloor \frac{4 + 1}{\log 2} - 1 \rfloor = 15 binary decimals. However, since the first 15 binary decimal places of 1.0156 give 1.000000111111111, which ends in a sequence of 1s, Lynn can add 0.000000000000001 to get the shorter binary 1.000001. Then, using the method described above, Lynn can display 1.0156 on her calculator by entering \boxed{+1}, \boxed{\div 2}, \boxed{\div 2}, \boxed{\div 2}, \boxed{\div 2}, \boxed{\div 2}, \boxed{\div 2}, \boxed{+1}, which is \boxed{8} button presses.
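A quick check (a sketch of my own, not part of the solution, using exact rationals) that this 8-press sequence really lands in [1.0156, 1.0157), assuming the screen truncates to its 9 significant figures:

```python
from fractions import Fraction

value = Fraction(0)
for press in ['+1', '/2', '/2', '/2', '/2', '/2', '/2', '+1']:
    value = value + 1 if press == '+1' else value / 2   # simulate the two buttons

print(float(value))                                               # 1.015625
print(Fraction(10156, 10**4) <= value < Fraction(10157, 10**4))   # True
```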

Because the purpose of this question is to be pedantic about the choices / presentation that lead us to conclude we indeed have a minimum, can you explain why "no lower value can be achieved", instead of just "I found a way to do it with 8"?

And, for the cherry on top, to generalize this to any initial decimal string.

Calvin Lin Staff - 3 years, 2 months ago
Calvin Lin Staff
Mar 27, 2018

[This is not a complete solution. It is to guide you that the answer is not ~15.]

Hint: We care about getting some number from 1.0156 to 1.0157^-. Working directly with 1.0156 may not be the most appropriate choice.

Hint: To reach exactly the real number r, the binary representation (of the fractional part) allows us to determine the minimum number of steps needed. In particular, if a number has an infinitely long binary representation, then it cannot be reached in finitely many steps.

Hint: There's something special about the binary representation that caused me to choose this number. Think about it.


Here is a sketch of a rigorous solution.

Lemma 1: The minimum number of steps to get any real number r is given by \lfloor r \rfloor plus the minimum number of steps to get \{ r \}.
Proof: Obviousness.

Henceforth, I'm restricting to talking about fractional parts 0 \leq r < 1.

Lemma 2: For a given binary representation of 0 \leq r < 1, r = 0.\ldots_2, the minimum number of steps needed to reach exactly that number is given by the formula that everyone is thinking of.
Proof that this can be achieved - Obviousness.
Proof of no lower value can be achieved - Obviousness.
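"The formula that everyone is thinking of", as I read it (an assumption, since it is not spelled out above): for a terminating fractional part 0.b_1 b_2 \ldots b_k with b_k = 1, you need one /2 per digit and one +1 per 1-digit, so k plus the number of 1s. A one-function sketch:

```python
def taps_for_fraction(bits):
    """Minimum taps to build 0.bits exactly: one /2 per digit, one +1 per 1."""
    bits = bits.rstrip('0')          # trailing zeros cost nothing
    return len(bits) + bits.count('1')

print(taps_for_fraction('000001'))   # 7; one more tap covers the integer part of 1.0156
```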

Lemma 3: If we want to get the first x digits to look like 0 \leq r < 1, then we just need to consider all the binary representations from r to r + 10^{-x}. We have to check where the first k digits of r_2 and (r + 10^{-x})_2 agree. That, in turn, gives us the actual binary representation we want to use to arrive at the (global) minimum.
Proof - Requires a bit of work to understand what I mean by "that, in turn, gives us the actual binary representation". But otherwise, obviousness.

Your problem is better than Romain Bouchard's because it allows one to understand the way to solve it. (I failed his problem but aced yours, meaning your problem actually invites people to apply intelligence and guides them towards learning.) I am a newbie at this site and my head spins. I have never used LaTeX to write mathematical formulae, so I will try to explain in plain text.

I figured out you can find a certain sum of powers of 1/2 and take its leftmost digits. For example, binary 0.1 is equal to decimal 0.5, or 1/2. Binary 0.01 (second binary place) is decimal (1/2)^2, or 1/4, and makes 0.25. Binary 0.11 is their sum (decimal 0.75). So in order to approximate the decimal part of a number in binary (the integer part needs to be added later), we try adding a 1 to the right as a new binary digit. After adding this 1 to the binary number, we can convert the number into decimal and compare it to the target number. If the decimal value is greater than the target number, we precede this last 1 with a zero (pushing it to the right). If it's less, we add a 1 to the right, until it's greater; then we precede this last 1 with zeros again until it's less than or equal. And so on, until we have approximated the exact decimal part up to the Nth digit we care about.

As a last step we scan this number from right to left. For every 1 we encounter, we perform "+1" and then "/2". For every zero, we perform "/2". At the end of this step we have 0.xxxxxxxxx. Then we add the integer part, if any (+1, +1, +1, etc., as many times as the units we want to add to 0.xxxx), and we are done. Your problem is great because all we have to do is find the binary 0.000001, which is decimal 0.015625 and approximates the number up to the point we care about.

Ermanno Attardo - 3 years, 2 months ago


I am of course a newbie to this site. But I want to add: what did I have to gain from Romain Bouchard's overly complicated problem? Nothing. From yours I gained intellectual challenge and actually more understanding, turning myself into a better person. The valuable lesson is that "complicated" does not always mean "better". Thanks for your problem.

Ermanno Attardo - 3 years, 2 months ago


Hi Ermanno, that's great to hear. There's an art to "asking the interesting question" that highlights an underlying theory/misconception.

Can you write your comment as a solution? That will make it easier for me to comment on and edit it.
You are on the right track, but still need to explain why "If the decimal part is greater than the target number, we precede it with a zero. If it's less, we add a 1 to the right, until it's greater". Note that the claim might not be completely true.

Calvin Lin Staff - 3 years, 2 months ago


@Calvin Lin (In the meantime I edited my text to make it a little more clear)

Ermanno Attardo - 3 years, 2 months ago

@Calvin Lin

"If the decimal part is greater than the target number, we precede it with a zero. If it's lesser, we add an 1 to the right, until it's greater"

If we proceed from 0, the first attempt is 0.1

If this is greater, we immediately try 0.01.

If this is less, we try 0.011. If this fails, we try 0.0101. If this is still greater, we shrink to 0.01001. Etc. But as soon as we find a sufficient approximation, we quit.

Therefore I amend the sentence from:

"If the decimal part is greater than the target number, we precede it with a zero. If it's lesser, we add an 1 to the right, until it's greater"

to

"First we check if the approximation is already sufficient up to the Nth cipher we wanted and then:

  • if the decimal part is greater than the target number, we precede it with a zero

else

  • if it's less, we add a 1 to the right

  • repeat step 1"

This is only correct because we start from the minimum representation and work incrementally.
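A literal transcription of the amended search (a sketch of my own, not the commenter's code; I read "sufficient" as "the value falls in [target, target + 10^{-N})", and the names are arbitrary):

```python
from fractions import Fraction

def search(target, N):
    """Grow a binary fractional string until its value lands in [target, target + 10**-N)."""
    lo, hi = target, target + Fraction(1, 10**N)
    bits, value = [], Fraction(0)
    while not (lo <= value < hi):
        if value >= hi:                               # too big: push the last 1 one place right
            i = len(bits) - 1 - bits[::-1].index(1)
            bits[i:i + 1] = [0, 1]
        else:                                         # too small: append another 1
            bits.append(1)
        value = sum(Fraction(b, 2**(k + 1)) for k, b in enumerate(bits))
    return bits

print(search(Fraction(156, 10**4), 4))   # [0, 0, 0, 0, 0, 1]
```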

> Can you write your comment as a solution? That will make it easier for me to comment on and edit it.

Sorry I have to really go to sleep...

Ermanno Attardo - 3 years, 2 months ago

We have to compare the Nth decimal digit (1 \cdot 10^{-N}) with 2^{-k}, with N and k natural numbers. So in order to add 1/1000 (N = 3), we must go below 2^{-9}, since 2^{-10} = 1/1024 is the first power of 1/2 lower than 1/1000.

Precision up to the third digit implies 0.001 must be covered (at minimum), which cannot be done without going below 2^{-9}. Since we need coverage up to the next decimal digit, excluded (e.g. [0.15, 0.1599...] is the acceptable range if we want to approximate up to the 2nd decimal digit), and since 2^{-4} is less than 1/10, between the first k that deals with the lowest decimal digit in range and 4 more binary digits we have the minimum number of binary digits required to achieve the desired number at the given approximation.

I am not very good at writing formulae here Calvin. Does this suffice or do you require more?

Ermanno Attardo - 3 years, 2 months ago


(The issue isn't with formula, but with perspective.)

You have essentially the right relevant ideas to look at. However, it is the switch in perspective to looking at [0.0156, 0.0157) which you are missing.

Because of how binary vs decimal numbers behave, your statements are not accurate. You end up with a wider margin of error to consider. It is not that we want to look at [0.0156, 0.0156 + \frac{1}{2^{10}}) or [0.0156, 0.0156 + \frac{1}{2^{9}}), because those ranges are either not sufficient or not necessary.
Specifically, your statement of "2^{-4} is less than 1/10 and therefore between the first k that deals with the lowest decimal digit in range and 4 more binary digits" means that you searched through [0.15, 0.15625), as opposed to searching through [0.15, 0.16).

Calvin Lin Staff - 3 years, 2 months ago


Ok! So basically 1/10 is 0.0(0011) in binary (periodically repeating), but we can round it up to 0.0001111. 1/100 = 0.00(00001010001111010111). 0.0156 is roughly 0.000001, 0.0157 is roughly 0.00000100000001, and rounded up 0.0000010000000100111010100100101011, etc. (with rounding error).
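Those expansions can be sanity-checked with a short helper like this (a sketch of my own, not the commenter's; it prints the first binary digits of any exact fraction):

```python
from fractions import Fraction

def binary_digits(x, bits=24):
    """First `bits` binary digits of the fractional part of x, as a string."""
    out = []
    for _ in range(bits):
        x *= 2
        out.append('1' if x >= 1 else '0')
        x -= int(x)
    return '0.' + ''.join(out)

for d in (Fraction(1, 10), Fraction(1, 100), Fraction(156, 10**4), Fraction(157, 10**4)):
    print(d, binary_digits(d))
```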

It means we cannot impose a given base-10 decimal digit without rounding error, but we can add a significantly smaller value (go a certain number of zeros to the right and add a 1). In order to fill that gap it is obvious we can try adding a binary digit (2^{-x}), and if it's too big, all we can do in base 2 is try a smaller fraction. Since in 0.0000XYYYYYY the X is worth as much as infinitely many Ys to its right, this is the shortest way to add that decimal value.

Is this satisfactory?

Ermanno Attardo - 3 years, 2 months ago


@Ermanno Attardo Sort of. But you can see how trying to deal with 1/10 = 0.0\overline{0011}_2 isn't very clearcut.

So you are working with all numbers in the range [0.001001100110011\ldots_2, 0.0010100011110101\ldots_2).
Must we look at every single number? If yes, is that a lot of work?
If no, how can we simplify this process further? (Hint: Lemma 3).

Calvin Lin Staff - 3 years, 2 months ago


@Calvin Lin Just a note: if we truncate the binary string by just dropping trailing digits after some point, we obviously always get an approximation by defect. This violates our target number. Since the binary representation is unique, a better solution (if it exists) therefore always approximates by excess.

Since we are not bound to give the exact binary representation of a decimal rational number, which would potentially require infinitely many binary digits, but only have to approximate correctly up to the Nth digit, we don't need to care about every number; we can truncate the binary in the following way. Call the "real" binary representation of the decimal rational number K; its fractional part is a string of 1s and 0s, finite or infinite. At the very first (leftmost) place where we can put a 1 in place of a 0 without violating the desired approximation, we do so and truncate the number there. We obtain a number K2 > K which is an upper bound, and this is the most efficient approximation, unless we land sooner on the exact representation, meaning the target number was representable as a sum of powers of 1/2 and we didn't settle for fewer digits.

This should be enough to also prove it cannot be done any better. In fact, if we don't select the first convenient 1 (where we can STOP, assuming there are zeros to its right), we have to continue, adding a zero and 1s later, using more moves. Approximating the exact representation by defect (truncating) is not feasible since it falls short of our number, and permanently turning 1s into 0s makes the number defective by a previously proven point, etc.

So we might encounter only 2 cases:

1) we land on the exact binary representation of the number (the wanted number of digits K is high enough and the target number's fractional part is a sum of powers of 1/2)

2) approximating by excess at the first convenient chance is the shortest route
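Here is a sketch of case 2, the "round up at the first convenient place" truncation, as I understand the paragraphs above (my own transcription, not the commenter's code; exact rationals, and the bit limit is arbitrary):

```python
from fractions import Fraction

def round_up_truncation(lo, hi, max_bits=40):
    """Walk the binary digits of lo; the first bit we can set and stop at gives the answer."""
    value = Fraction(0)
    for k in range(1, max_bits + 1):
        candidate = value + Fraction(1, 2**k)   # set bit k and drop everything after it
        if lo <= candidate < hi:
            return candidate                    # first affordable round-up: truncate here
        if candidate <= lo:                     # bit k really belongs to lo itself
            value = candidate
    return None

print(round_up_truncation(Fraction(156, 10**4), Fraction(157, 10**4)))   # 1/64
```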

Ermanno Attardo - 3 years, 2 months ago


@Ermanno Attardo Once again, you essentially have the right ideas. You just need to string them up and push them through.


Just a note: if we truncate the binary string by just dropping trailing digits after some point, we obviously always get an approximation by defect. This violates our target number. Since the binary representation is unique, a better solution (if it exists) therefore always approximates by excess.

My (unsubstantiated) claim is that your claim is not true. We could "approximate by excess" while still ensuring the calculation is correct, i.e. we are guaranteed to find the greatest lower bound (minimum), and not just a lower bound (which may not be the greatest because we approximated by too much excess).

Yes, "That, in turn, gives us the actual binary representation we want to use to arrive at the (global) minimum. " is where a lot of the magic happens.

Calvin Lin Staff - 3 years, 2 months ago


@Calvin Lin If we approximate by excess at the first (leftmost) binary digit which still respects the approximation up to the Nth desired decimal digit (the (N+1)th can even be 9, we do not care), then we are done. Why bother about too much excess? Why do we want the minimum bound? Please explain why we need to do that.

Anyway all I could think of, to answer your question is:

1) we calculate the binary for the 10^{-N} we want to add, e.g. 0.01 decimal is 0.00(00001010001111010111). Add this to the number we want to add 1/100 (decimal) to, and we are done.

I lack the theory to extrapolate a more general formula I am afraid...

Ermanno Attardo - 3 years, 2 months ago


@Ermanno Attardo We want to be certain that we do not approximate by too much excess. E.g. if we approximated [0.15, 0.16) by [0.12, 0.20), almost certainly we will end up with a smaller value. Thus, when we approximate by excess, we run the risk of only finding a lower bound, and not the greatest lower bound.

It's not "theory" that you're lacking. All of the theory has been laid out (and by you). It's more a matter of rigour, which because of the context, I am setting a high bar. You're doing great.

Calvin Lin Staff - 3 years, 2 months ago


@Calvin Lin Thanks. I wish to understand your wording about bounds. If I understand the point about bounds correctly, I would then proceed by considering, stepwise, increasing powers of 1/2 and subtracting them from the decimal part of the number, with each step subtracted from the remainder, until we get the desired number.

Example: 0.15 becomes the initial remainder to subtract from.

1/2^0 = 1, more than 0.15: no op. Result so far = 0.

1/2^1 = 0.5: no op. ... 1/2^2 = 0.25: no op.

1/2^3 = 0.125: subtraction can be done. Remainder 0.025, result 0.125.

1/2^4 = 0.0625: no op. Remainder 0.025, result 0.125.

1/2^6 = 0.015625: remainder 0.009375, result 0.140625.

1/2^7 = 0.0078125: result 0.1484375.

and so on; we approximate by defect. If we turn one of the bits to 0, for example the third, corresponding to 1/2^3, we can no longer reach 0.15.

If we try with 0.16:

1/2^3 = 0.125: subtraction can be done. Remainder 0.035, result 0.125.

1/2^5 = 0.03125: OK, result 0.15625, but note we cannot accept this approximation if we want an exact approximation to the second digit.

1/2^6 = 0.015625: no op. Remainder 0.00375, result 0.15625.

1/2^7 = 0.0078125: if we accept this, we get 0.1640625 (not in range [0.15, 0.16), though).

Regardless, we can note, especially with small numbers differing in only 1 decimal digit, that they share most of their leftmost bits and only differ in their tails. Since 0.0156 and 0.0157 straddle 0.015625, an exact power of 1/2 (the sixth), and this happens at their first significant bit, they have no such resemblance. Anyway, in order to perform your 10^{-x} splitting, we can keep finding the powers of 1/2 the two extremes have in common, subtract them, and then find the first power of 1/2 which "splits" the two extremes of the range in such a way that it is less than 10^{-x} (i.e. summed with the remainder of the number it does not exceed the range). We can then turn this bit from 0 into 1 in order to approximate and finish. Since I am very sleepy and I work every day, I will have to come back tomorrow and read what I just wrote, hoping I actually made good sense... :)
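The stepwise subtraction above is easy to mechanize; a sketch (not the commenter's code, using exact rationals to avoid the rounding noise of floats) that mirrors the 0.15 run:

```python
from fractions import Fraction

def greedy_bits(x, max_k=10):
    """Greedily subtract powers of 1/2 from x, recording which ones fit."""
    remainder, result, bits = x, Fraction(0), []
    for k in range(1, max_k + 1):
        p = Fraction(1, 2**k)
        if p <= remainder:
            remainder -= p
            result += p
            bits.append(1)
        else:
            bits.append(0)
    return bits, result, remainder

bits, result, remainder = greedy_bits(Fraction(15, 100))
print(bits)            # [0, 0, 1, 0, 0, 1, 1, 0, 0, 1]
print(float(result))   # 0.1494140625, approaching 0.15 from below
```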

Ermanno Attardo - 3 years, 2 months ago


@Ermanno Attardo @Calvin Lin I am back. So to make it short: we have a number like 0.51561111355. Say we want to approximate it to the 4th decimal digit. The number becomes 0.5156. We first find all the subtractable powers of 1/2 and add them (in this case 0.5, which is 0.1 in binary).

The remainder is 0.0156.

We then find the first power of 1/2 which exceeds the remainder while staying in the range that does not violate our approximation, which is [0.0156, 0.0157); this is the lowest upper bound.

In this case our number is binary 0.000001, corresponding to (1/2)^6, decimal 0.015625, and we approximate by excess using exactly this bound.

EDIT: (any higher power of 1/2, if one existed still in the range [0.0156, 0.0157), would give a better approximation but would need more operations to be performed. Now your comment makes complete sense, if I got it right.)

Waiting for your review

Ermanno Attardo - 3 years, 2 months ago


@Ermanno Attardo That's one way to think about it (which is slightly different from how I structured my lemmas).

In this case, the proof follows the lines of:

  • Deal with the integer part (E.g. Lemma 1).
  • Explain why, for each power of 2, we want it at most once.
  • Explain why we always want to take away the largest power of 2 within the given bounds.

Calvin Lin Staff - 3 years, 2 months ago


@Calvin Lin I thought you were pushing me towards looking at this from that perspective. I am confused. When this is over I will want to have a conversation or something.

Anyway: I think the 3 points are all kinda obvious and already demonstrated

  • Deal with the integer part

Already done when I demonstrated that, plus the fact that you must perform at most a single +1 between /2s in order to add information to the fractional part. I proved how adding 0.5 can be done as well (revert a /2, add 1, perform /2). Moreover, any need to divide the entire fractional part by 2 can be handled while the integer part is still 0; otherwise we performed exactly that, but also wasted moves on integers. You commented it was obvious, remember?

  • Explain why, for each power of 2, we want it at most once

This question has different grounds. First, it implies we have chosen to represent decimal numbers in base 2, and this is just a choice made for convenience with the 2 allowed operations. In fact, any number could be represented in any base, but might be rational with infinitely many digits, or even irrational like pi, and we can try to represent it in different bases. Having chosen base 2, we know that 0.01 + 0.01 = 0.1, therefore choosing such a power more than once means choosing another (negative) power of 2 (or positive power of 1/2), or a combination thereof, e.g. 0.001 three times = 0.011. So this is just a matter of picking names.

  • Explain why we always want to take away the largest power of 2 within the given bounds.

Picking a lower one might achieve an acceptable approximation as well. However, this requires more operations. This is because when 0.00x is higher than 0.000y, adding y takes more operations (within the set {+1, /2}) than adding x. Voilà.

But seriously, I wish to go back to your representation and solve it. Could you please clarify a little more how you meant it?

(Edit: did you also read my previous post, the one where I mentioned I was sleepy?)

Ermanno Attardo - 3 years, 2 months ago


@Ermanno Attardo Note: I was just laying out the summarized structure of your proof, because it is different from my version of the 3 Lemmas above. I did not say that this proof approach will not work. In fact, I believe it does.

Note: My approach doesn't do the "inductive step" of "Explain why we always want to take away the largest power of 2 within the given bounds".


Yes, I read the last post you made yesterday.


With reference to my Lemma 3, the "actual binary representation" arrives at the representation by simply bounding the number of steps that are necessary, by looking at just the 2 endpoints. I.e. I only need the binary representations of 0.15 and 0.16 (or whatever numbers we're choosing) to determine the "actual binary representation".
Note: This is exactly what you get from "take away the largest power of 2". For example, it should be clear how, if you have the binary representations of the endpoints, it is "obvious" what the sequence of largest powers of 2 to use is.

Calvin Lin Staff - 3 years, 2 months ago


@Calvin Lin Ah yes....

Basically a binary "AND" for the part in common, which takes care of the left side, and then just one more 1, shifted right only as far as it needs to be. And there we go.

Ermanno Attardo - 3 years, 2 months ago


@Ermanno Attardo Great. Can you update your solution accordingly? It would be nice to have this different approach (from mine) written up.

Calvin Lin Staff - 3 years, 2 months ago


@Calvin Lin Done. Is my update well written enough?

Ermanno Attardo - 3 years, 1 month ago
