Rule Tide - eviltoast
  • theneverfox@pawb.social · 4 months ago

    It might not be too bad. Once you get into code breaking, some of the simple techniques quickly yield metrics that can guess the language without much processing (depending on total message length, though you could get a similarly low-effort guess by just analysing a sample)

    It’s as simple as measuring the average distance between letters in a sample, and you could probably do more with something like average Unicode code point ranges. Each language will vary, so you can build a map from some sample text, then just take n letters to guess the language with reasonable accuracy
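    That idea can be sketched very simply. The snippet below is a toy illustration under my own assumptions, not a production detector: it builds a two-number fingerprint per language (mean Unicode code point, plus mean gap between consecutive letters) from a sample sentence, then guesses by picking the nearest fingerprint. The function names and pangram samples are invented for the example; real detectors usually rely on character n-gram statistics instead.

    ```python
    # Toy language guesser: fingerprint = (mean code point, mean gap between
    # consecutive letters). Build one fingerprint per language from sample
    # text, then pick the nearest fingerprint for an unseen snippet.

    def fingerprint(text):
        codes = [ord(c) for c in text if c.isalpha()]
        mean_code = sum(codes) / len(codes)
        gaps = [abs(a - b) for a, b in zip(codes, codes[1:])]
        mean_gap = sum(gaps) / len(gaps)
        return mean_code, mean_gap

    def guess_language(snippet, profiles):
        mc, mg = fingerprint(snippet)
        # Nearest profile by squared distance in (mean_code, mean_gap) space.
        return min(profiles, key=lambda lang: (profiles[lang][0] - mc) ** 2
                                            + (profiles[lang][1] - mg) ** 2)

    profiles = {
        "english": fingerprint("the quick brown fox jumps over the lazy dog"),
        "russian": fingerprint("съешь же ещё этих мягких французских булок"),
        "greek":   fingerprint("γαζέες και μυρτιές δεν θα βρω πια στο χρυσαφί ξέφωτο"),
    }

    print(guess_language("hello there, how are you today", profiles))  # english
    print(guess_language("привет как дела", profiles))                 # russian
    ```

    Because different scripts occupy different Unicode blocks (Latin vs Cyrillic vs Greek), even this crude two-number fingerprint separates them cleanly; distinguishing languages that share a script would need better features.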

    On top of that, you could use user feedback or other factors to narrow it down further… Not perfect (and it would look strange like this when it does fail), but then you could flag a detected language and give users a one-click translate button

    They probably don’t do the translations until requested, like you said (there are a lot of languages to translate into, after all), but a platform as big as YouTube might be using big data to decide what to preemptively translate into which languages (maybe using low-demand periods, maybe optimizing for engagement, maybe a combination of both)

    • ReveredOxygen@sh.itjust.works · 4 months ago

      I mean, they could. But if something offers a translate button and it translates to the same thing, do you think that’s costing them enough money to be worth all that effort?

      • theneverfox@pawb.social · 4 months ago

        No way. It all comes down to the most expensive part of most software - the last few percent. If they write a translation feature and it works 98% of the time, that’s a complete success. That last 2% will probably take far longer to whittle down than the feature took to deploy

        Even if the percentage were lower (and honestly, from my own use, I think it’s even higher), just figuring out whether it’s worth fixing would take man-hours spent breaking down the numbers and estimating alternatives, before anyone actually does the work

        In this case, I don’t think it’s actually feasible - translation isn’t that resource-intensive. If you’ve already done the cheap language detection (so you don’t run translation on everything) and are using a reasonably efficient translation method, squeezing out the last few percentage points of accuracy would probably cost more than the occasional pointless translation
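        The gating being described can be sketched in a few lines. This is a hypothetical illustration, not YouTube’s actual pipeline: `detect_language` and `translate` are stand-in names, the detector is a deliberately crude script check, and the point is only that translation is skipped when the detected language already matches the viewer’s.

        ```python
        # Hypothetical sketch: run a cheap detector first, and only pay for
        # translation when the detected language differs from the viewer's.

        def detect_language(text):
            # Toy detector: any Cyrillic code point means "ru", else "en".
            return "ru" if any("\u0400" <= c <= "\u04ff" for c in text) else "en"

        def translate(text, target):
            # Stand-in for a (relatively) expensive translation call.
            return f"[{target}] {text}"

        def maybe_translate(text, viewer_lang):
            # The gate: skip the translator when languages already match.
            if detect_language(text) == viewer_lang:
                return text  # pointless translation avoided
            return translate(text, viewer_lang)

        print(maybe_translate("hello world", "en"))  # returned unchanged
        print(maybe_translate("привет мир", "en"))   # sent to the translator
        ```

        The trade-off in the comment above is visible here: when the toy detector misfires, the cost is one unnecessary (or one missing) translation, which is cheap enough that chasing the last few points of detector accuracy may not pay off.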