I just cited myself. - eviltoast
  • yetAnotherUser@discuss.tchncs.de · 5 months ago

    Unfortunately not an ideal proof.

    It makes certain assumptions:

    1. That a number 0.999… exists and is well-defined
    2. That multiplication and subtraction for this number work as expected

    Similarly, I could prove that the number consisting of infinitely many 9s to the left of the decimal separator is equal to -1:

    ...999.0 = x
    ...990.0 = 10x
    
    Calculate x - 10x:
    
    x - 10x = ...999.0 - ...990.0
    -9x = 9
    x = -1
    

    And while this is true for 10-adic numbers, it is certainly not true for the real numbers.
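
    To make "true for 10-adic numbers" concrete, here is a minimal Python sketch (purely illustrative, exact integers only): 10-adically, two numbers are close when they agree on many trailing digits, and every truncation of ...999 turns into all trailing zeros once you add 1, so ...999 + 1 = 0 and ...999 = -1.

    # Illustrative sketch: truncations of ...999 converge 10-adically to -1,
    # because adding 1 clears all of their trailing digits.
    for n in (1, 5, 10, 20):
        x_n = 10**n - 1                      # n nines: 9, 99999, ...
        assert (x_n + 1) % 10**n == 0        # x_n + 1 ends in n zeros
        print(f"{n:2d} nines: {x_n} + 1 = {x_n + 1} (ends in {n} zeros)")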

    • Valthorn@feddit.nu · 5 months ago

      While I agree that my proof is blunt, yours doesn't prove that .999… is equal to -1. Under your assumption the infinitely many 9s behave as if they were finite, which is what lets you append the 0 at the end, and you forgot to move the decimal point at the beginning of the number when you multiplied by 10.

      x = 0.999…999

      10x = 9.999…990, assuming infinite decimals behave like finite ones.

      Now 10x - x = 9.999…990 - 0.999…999

      9x = 8.999…991 = 9 - 0.000…009

      x = 1 - 0.000…001

      Thus the only thing separating x from 1 is the supposed infinitesimal 0.000…001, a 1 sitting after infinitely many zeros. A genuinely infinite string of 9s has no final digit to produce it, so adding or subtracting the infinitesimal makes no difference, meaning it behaves like 0.
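
      To see the same thing numerically, here is a minimal Python sketch (purely illustrative, using exact fractions) that redoes the 10x - x step with n nines: the leftover term is exactly 9·10^-n and shrinks toward 0 as n grows.

      # Illustrative sketch: the "trailing infinitesimal" left over by 10x - x
      # when x has only n nines; it vanishes as n grows.
      from fractions import Fraction

      for n in (3, 6, 12):
          x = Fraction(10**n - 1, 10**n)     # 0.99...9 with n nines
          leftover = 9 - 9 * x               # exactly 9 * 10**-n
          assert leftover == Fraction(9, 10**n)
          print(f"n={n:2d}: 9x = 9 - {float(leftover):.0e}, so x = 1 - {float(leftover / 9):.0e}")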

      Edit: Having written all this, I realised that you probably meant the infinitely large number consisting of only 9s. But with infinity you can't really prove anything like this: you can't have one infinite number be 10 times larger than another. It's like assuming division by 0 is well defined.

      0a=0b, thus

      a=b, meaning of course your …999 can equal -1.

      Edit again: what my proof shows is that even if you assume .000…001 ≠ 0, doing regular algebra makes it behave like 0 anyway. Your proof shows that you can't do regular maths with infinite numbers, which wasn't in question. Infinity exists; the infinitesimal does not.

      • yetAnotherUser@discuss.tchncs.de · 5 months ago

        Yes, but similar flaws exist for your proof.

        The algebraic proof that 0.999… = 1 must first justify that 0.999… denotes a well-defined number which can be assigned to x.

        My "proof" abuses algebraic notation in exactly this way: you cannot assign infinity to a variable, and after that, regular algebraic rules become meaningless.

        The proper proof uses the definition of 0.999… as the limit of the sequence 0.9, 0.99, 0.999, …, together with the fact that a limit is exactly the value the sequence approaches: for any epsilon > 0, the terms eventually stay within the epsilon-neighborhood of 1 (the interval 1 ± epsilon), therefore 0.999… is exactly 1.
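
        As a purely illustrative Python sketch (exact fractions, with a helper nines(n) for the partial sums): for any epsilon the partial sums 0.9, 0.99, 0.999, … eventually stay within epsilon of 1, which is exactly what "the limit is 1" means.

        # Illustrative sketch: the gap 1 - 0.99...9 (n nines) is exactly 10**-n,
        # so it drops below any epsilon > 0 once n is large enough.
        from fractions import Fraction

        def nines(n):
            """Exact value of 0.99...9 with n nines."""
            return Fraction(10**n - 1, 10**n)

        for eps in (Fraction(1, 10), Fraction(1, 10**6), Fraction(1, 10**12)):
            n = 1
            while 1 - nines(n) >= eps:       # the gap is 10**-n, so this stops
                n += 1
            print(f"epsilon = {float(eps):.0e}: within epsilon of 1 from n = {n} onward")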