WARNING: The proof below is wrong, as someone brought to my attention. This proof on Math/SE is correct, though more complicated. Leaving the post up in case people want to try to find my mistake. :)
## Wait, what?
I know what you're thinking, so let's just get it out of the way:
```python
def f(x):
    return 0 if x < 0 else 1
```
"There! That's a discontinuous function in Python. Clearly whatever big-brained math proof you have must be wrong."
Ah, how easily we write $f$. Note that to define a function taking real numbers as input, $x$ must have infinite precision. We can think of $f$ as a Turing machine which takes in an infinite bitstring of input digits and outputs another bitstring of output digits. For this post, numbers are represented in fixed-point¹.
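To make "infinite bitstring" concrete, here is a sketch (the `fixed_point_value` helper is mine, not from the post) that models a fixed-point real as a lazy stream of binary digits, of which any finite computation can only ever consume a prefix:

```python
from itertools import islice, repeat

def fixed_point_value(digits, n):
    """Approximate value of 0.d1 d2 d3 ... (binary fixed point) using only
    the first n digits of the stream -- all any halting machine can see."""
    return sum(d * 2 ** -(i + 1) for i, d in enumerate(islice(digits, n)))

# The stream 0.111... equals 1 in the limit; every finite prefix falls short.
half_open = repeat(1)
print(fixed_point_value(half_open, 10))  # 0.9990234375, i.e. 1 - 2**-10
```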
Now consider how a Turing machine "checks" for $x = 0$. We feed in an infinite bitstring of zeros, and the machine outputs $1$ if all the inputs are zero... But then either
- The machine never halts, as it has to check all the inputs are zero before outputting anything.
- After a certain point the machine decides the input is "close enough to zero" and outputs $1$.
In the first case $f$ wouldn't be defined at zero. In the second case, we could change the digits the machine doesn't look at without changing the output (meaning a tiny tweak to the input doesn't change the output... continuity!)
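The second case can be sketched directly. Below, `is_zero_up_to` is a hypothetical "machine" of my own that halts by only inspecting the first $k$ digits, so digits it never reads can't affect its answer:

```python
from itertools import chain, islice, repeat

def is_zero_up_to(digits, k):
    """A machine that halts: inspect only the first k input digits and
    declare the input "close enough to zero" if they are all 0."""
    return all(d == 0 for d in islice(digits, k))

x = repeat(0)                        # 0.000...
y = chain(repeat(0, 10), repeat(1))  # agrees with x for 10 digits, then differs

print(is_zero_up_to(x, 10))  # True
print(is_zero_up_to(y, 10))  # True: the tweak past digit 10 is invisible
```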
## Proof Idea
At some point the Turing machine processing $x$ has output the first $n$ digits of the output, say after $T$ steps. In $T$ steps the machine could (at most) look at the first $T$ digits of $x$, meaning changing any digits of $x$ beyond the $T$th won't change those first $n$ output digits.
This is continuity! Any change to $x$ smaller than $2^{-T}$ changes the output by less than $2^{-n}$.
## Proof
Let $\varepsilon > 0$ and $x \in \mathbb{R}$. We want to find a $\delta > 0$ so that $|x - y| < \delta$ implies $|f(x) - f(y)| < \varepsilon$.
Let $n$ be large enough that changing any digits after the $n$th in $f(x)$ doesn't change $f(x)$ by more than $\varepsilon$.
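For a concrete choice of this cutoff (the `digits_needed` helper is my own illustration): in binary fixed point, the digits after the $n$th contribute at most $2^{-n}$ in total, so it suffices to pick $n$ with $2^{-n} \le \varepsilon$.

```python
import math

def digits_needed(eps):
    """Smallest n with 2**-n <= eps: past the n-th binary digit, the
    remaining tail of the expansion sums to at most 2**-n <= eps."""
    return max(0, math.ceil(-math.log2(eps)))

print(digits_needed(0.1))  # 4, since 2**-4 = 0.0625 <= 0.1 < 2**-3 = 0.125
```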
Let $T$ be how many steps it takes the Turing machine before having output the first $n$ digits of the output. Now changing any digits after the $T$th in $x$ won't change the first $n$ digits of the output, since the machine is only looking at the first $T$ input digits in order to decide the first $n$ output digits.
Now translating into $\varepsilon$-$\delta$ language: let $\delta$ be small enough that $|x - y| < \delta$ implies the first $T$ digits of $y$ are the same as those of $x$. This implies the first $n$ digits of the output will be the same, meaning the outputs differ by less than $\varepsilon$ by our definition of $n$.
## Conclusion
Hopefully I've shaken your faith in infinity and the reals making sense; there's still time to convert to Finitism before judgement day! It isn't too late!
## Footnotes
1. I'm being a bit sloppy when I say $f : \mathbb{R} \to \mathbb{R}$; really I should say $f : [0, 1] \to [0, 1]$ if I want to use the simple fixed-point encoding. There are ways to extend fixed-point beyond bounded intervals, but they aren't of consequence here. ↩