automatically turning on new (creepy) feature with no single click to disable it - eviltoast

there’s just this vague recommendation to “turn it off in settings.” no link, no toggle, nothing.

had to hunt for the relevant setting in the app (even their settings menu is buried near the bottom of the list when you tap on your avatar).


image transcription:

a screenshot of the Google Play Store showing a popup that can only be dismissed by pressing the OK button at the bottom.
the popup contains the following text:

Google is optimising app installs with your help
Google Play makes apps faster to install, open and run based on what people are using most. The first time that you open an app after installing, Google notes which parts of the app you use. When enough people do this, your apps will install faster. App install optimisation is automatically turned on, but you can turn it off in settings. Learn more

  • henfredemars@infosec.pub · 11 months ago

    It’s not impossible, but there are limitations to that alternative approach. For example, ARM processors are really heterogeneous with respect to exactly which extensions they support. If you compile on the server, you might not be able to take advantage of features that only exist on a few phones, like the latest flagships. It also requires the device to trust that the pre-compiled code is actually consistent and won’t cause bad memory accesses that could, for example, lead to a crash. If the device compiles the code, it knows there’s a true one-to-one correspondence with the actual Java bytecode, closing an opportunity for inconsistency. The device also knows the size of the CPU caches and might make different instruction-scheduling decisions in the actual binary. Generally, it’s easier to be more efficient when you have all the information about the target processor available on the device. But you could have done it a different way, that’s true. Google decided not to.
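    As a rough illustration of how much this depends on the individual device (a sketch only, with a made-up helper name, not anything Play itself runs), an app can ask at runtime which ABIs and ISA extensions its own hardware supports:

        import android.os.Build
        import java.io.File

        // Illustrative helper only: report what this particular device can execute,
        // which is exactly the information a build server doesn't have up front.
        fun describeCpu(): String {
            // ABIs in preference order, e.g. [arm64-v8a, armeabi-v7a, armeabi].
            val abis = Build.SUPPORTED_ABIS.joinToString(", ")

            // Optional ISA extensions (neon, aes, crc32, ...) advertised by the kernel.
            // Parsing /proc/cpuinfo is best-effort; its layout varies between SoCs.
            val features = runCatching {
                File("/proc/cpuinfo").readLines()
                    .firstOrNull { it.startsWith("Features") }
                    ?.substringAfter(":")
                    ?.trim()
            }.getOrNull() ?: "unknown"

            return "ABIs: $abis | CPU features: $features"
        }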

    We should also think about what’s to be gained by moving it all to the server. On the server, you still don’t know the actual usage patterns. It’s not space-efficient to compile the entire application even if the compile time is free on the server: you would have to download unnecessary native code that may only be used by an extremely small fraction of users, wasting storage space on user devices. What if I never use that whole area of the application? What if most users don’t? Why should every device have to download an optimized version of it? Perhaps the best optimization is to not ship any native code for the feature nobody uses.
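    To make the space argument concrete, here’s a deliberately toy sketch of the idea (the names and types are invented; ART’s real profiles and dex2oat work very differently in detail): only methods the aggregated profile marks as hot get ahead-of-time native code, while everything else stays as bytecode for the interpreter/JIT at no extra storage cost.

        // Toy model of profile-guided, partial ahead-of-time compilation.
        data class MethodRef(val className: String, val methodName: String)

        class ToyAotPlanner(private val hotMethods: Set<MethodRef>) {

            // Pretend fixed cost of AOT-compiling one method, in bytes of native code.
            private fun nativeSizeOf(m: MethodRef): Int = 512

            /** Returns (bytes of native code shipped, methods left to the JIT). */
            fun plan(allMethods: List<MethodRef>): Pair<Int, Int> {
                var nativeBytes = 0
                var jitMethods = 0
                for (m in allMethods) {
                    if (m in hotMethods) nativeBytes += nativeSizeOf(m) else jitMethods++
                }
                return nativeBytes to jitMethods
            }
        }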

    If you want the best balance of high performance on the user device while not wasting extra space for binary code that isn’t frequently used, I think this design makes more sense. You’re never getting away from having a full compiler on every device because Google can’t assume that you aren’t using other app stores, and it’s perfectly valid for Android apps to generate code dynamically at runtime that can never be compiled in advance.

    • lazyvar@programming.dev · 11 months ago

      I feel you’re glossing over the privacy implications of collecting data on how apps are used.

      Sure, you could say: “Oh, but it’s inefficient to compile the entire application, and what if there are features that barely anyone uses.”

      But you can also say: “Compiling the entire application ensures that we don’t need to collect usage data, and it ensures everyone gets the best experience, even the people who use features that are otherwise hardly used.”

      Now, of course, to go with the second option you need to care about user privacy and not derive any benefit from the usage data beyond its use for compilation.

      • henfredemars@infosec.pub · 11 months ago

        You’re right. That is a perfectly valid solution with different trade-offs. Perhaps in a perfect world users could select which they prefer.

    • r00ty@kbin.life · 11 months ago

      Yes, that’s why I said most popular configurations as well as architecture, not just CPU architecture. So that’s specific CPU/SoC components (GPU, etc.). By compiling for the most popular configurations (think Google Pixel, Samsung Sxx, whatever Motorola’s flagship is, and probably also the mid-range units that sell well), you would cover a large percentage with maybe 10-15 configurations, which isn’t THAT hard to do if you’re Google.
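      A hedged sketch of what the device-side selection could look like, with invented variant names (Play doesn’t actually ship binaries this way today):

          import android.os.Build

          // Invented illustration: route a few popular configurations to prebuilt
          // variants and fall back to on-device compilation for the long tail.
          fun pickPrebuiltVariant(): String {
              val primaryAbi = Build.SUPPORTED_ABIS.firstOrNull() ?: return "compile-on-device"
              return when {
                  primaryAbi == "arm64-v8a" && Build.MODEL.contains("Pixel") -> "arm64-pixel"
                  primaryAbi == "arm64-v8a" -> "arm64-generic"
                  else -> "compile-on-device" // uncommon configuration
              }
          }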

      • henfredemars@infosec.pub · 11 months ago

        This is the kind of approach that Apple takes by building multiple versions of the application for different devices. It’s a perfectly serviceable solution that also works.

        I don’t have a great answer for why they don’t do it this way.

    • space@lemmy.dbzer0.com · 11 months ago

      While not all ARMs are the same, I’m pretty sure there are certain configurations that are more common. Why not just provide binaries for those?

      • henfredemars@infosec.pub · 11 months ago

        You can, but that could be to the detriment of less common configurations. That’s a valid solution that strikes a different balance.

        You may also wish to consider that Android apps can generate and execute code at runtime. It’s perfectly valid to download or generate more Java bytecode after the app was installed. You would still need a compiler on the device to handle apps that do this efficiently.
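        For what that looks like in practice, here’s a minimal sketch (the file name and class name are placeholders) of an app loading extra Dalvik bytecode it obtained after install via DexClassLoader; the store’s servers never saw this code, so any native compilation of it has to happen on the device:

            import android.content.Context
            import dalvik.system.DexClassLoader
            import java.io.File

            // Minimal sketch: load a .dex file that was downloaded after install.
            // "plugin.dex" and "com.example.PluginEntry" are placeholders.
            fun loadDownloadedCode(context: Context): Class<*>? {
                val dexFile = File(context.filesDir, "plugin.dex")
                if (!dexFile.exists()) return null

                val loader = DexClassLoader(
                    dexFile.absolutePath,               // downloaded bytecode
                    context.codeCacheDir.absolutePath,  // where optimized output may go
                    null,                               // no extra native library path
                    context.classLoader                 // parent class loader
                )
                return runCatching { loader.loadClass("com.example.PluginEntry") }.getOrNull()
            }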

        I also left out some more specific information to keep my reply from being even longer, but the binary code your device generates actually adapts to your specific usage as well as the general usage. The profile you get from Play is just a starting point. Shipped binary code generally isn’t very flexible. If you use an app in a way that forces us to runtime-compile code we previously didn’t think was important, your device adds that information to the profile and recompiles when the device isn’t being used. That means you get a personal level of performance optimization based on your own usage.

        It’s not wrong to ship a slightly less optimal, fully compiled binary. I just don’t think that would work as well. It would be a lot simpler though.