Spotify is going to clone podcasters’ voices — and translate them to other languages

A partnership with OpenAI will let podcasters replicate their voices to automatically create foreign-language versions of their shows.

  • Pete90@feddit.de · 1 year ago

    I don’t think what you’re saying is possible. Voxels used in fMRI measure in millimeters (down to one, if I recall) and don’t allow for such granular analysis. It is possible to ‘see’ what a person sees, but the reconstructed image doesn’t resemble the original very closely.

    At least, that’s what I learned a few years ago. I’m happy to look at new sources if you have some, though.

    • sudoshakes@reddthat.com · 1 year ago

      Seeing Beyond the Brain: Conditional Diffusion Model with Sparse Masked Modeling for Vision Decoding: https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/

      High-resolution image reconstruction with latent diffusion models from human brain activity: https://www.biorxiv.org/content/10.1101/2022.11.18.517004v3

      Semantic reconstruction of continuous language from non-invasive brain recordings: https://www.biorxiv.org/content/10.1101/2022.09.29.509744v1

    • sudoshakes@reddthat.com · 1 year ago

      I like how I said the problem is that progress is moving so fast you don’t even realize what you don’t know about the subject as a layman… and then this comment appears saying things are not possible.

      Lol.

      How timely.

      The speed at which things are changing and redefining what is possible in this space is faster than in any other area of research. It’s insane to the point that if you are not actively reading white papers every day, you miss major advances.

      The layman has this idea of what “AI” means, but we have no good way to keep the word aligned with its meaning and capabilities, given how fast we change what it means underneath.

      • Pete90@feddit.de · 1 year ago (edited)

        I looked at your sources, or at least one of them. The problem is that, as you said, I am a layman, at least when it comes to AI. I do know how fMRI works, though.

        And I stand corrected. Some of those pictures do closely resemble the original. Impressive, although not all subjects seem to produce the same level of detail and accuracy. Unfortunately, I have no way to verify the AI side of the paper.

        It is mind-boggling that such images can be constructed from voxels of that size. A 1.8 mm voxel contains close to 100k neurons and even more synapses. And the fMRI signal itself is only a blood-oxygen-level overshoot in these areas, not a direct measurement of neural activity. It makes me wonder what constraints and tricks had to be used to generate these images. I guess combining the semantic meaning of the image with the broader image helped: first inferring pixel color (e.g. mostly blue with some gray in the middle), then adding the semantic meaning (plane), and combining the two.

        Truly amazing, but I do remain somewhat sceptical.
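        The two-stage guess above (coarse color/layout first, semantic label second) is roughly how the linked biorxiv pipeline is described: separate linear decoders predict a low-level image latent and a semantic embedding from different brain regions, and the diffusion model combines them. A minimal numerical sketch of that split, with made-up toy dimensions and random matrices standing in for the trained decoders:

        ```python
        import numpy as np

        rng = np.random.default_rng(42)

        # Toy sizes (far smaller than the real pipeline): voxel counts for
        # two ROI groups and the two target spaces they are mapped into.
        EARLY_VOXELS, HIGHER_VOXELS = 2_000, 3_000
        LATENT_DIM, SEMANTIC_DIM = 1_024, 512

        # Random matrices stand in for the trained linear decoders.
        W_latent = rng.normal(0, 0.01, (EARLY_VOXELS, LATENT_DIM))
        W_semantic = rng.normal(0, 0.01, (HIGHER_VOXELS, SEMANTIC_DIM))

        def decode(early, higher):
            """Return (coarse image latent, semantic embedding) for one scan."""
            z = early @ W_latent        # rough layout/color information
            c = higher @ W_semantic     # "what is in the picture" information
            return z, c

        z, c = decode(rng.normal(size=EARLY_VOXELS),
                      rng.normal(size=HIGHER_VOXELS))
        # A diffusion model would start denoising from z and steer it with c.
        print(z.shape, c.shape)  # (1024,) (512,)
        ```

        This is a sketch under assumed dimensions, not the paper’s actual code; the real decoders are fit by regression on paired scan/image data.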

        • sudoshakes@reddthat.com · 1 year ago

          The model inferred meaning much the same way it infers meaning from text: short phrases can generate intricate images, accurate to the author’s intent, using Stable Diffusion.

          The models in those studies leveraged Stable Diffusion as the image-generation mechanism, but instead of being conditioned on text prompts, they were conditioned on embeddings decoded from fMRI data.
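          The swap works because the conditioning layers don’t care where the embedding came from. A toy single-head cross-attention (dimensions chosen to match what Stable Diffusion v1 expects from its text encoder, with projections omitted) shows that a brain-derived tensor of the same shape slots in exactly like a prompt embedding:

          ```python
          import numpy as np

          rng = np.random.default_rng(0)
          TOKENS, EMB = 77, 768  # shape of Stable Diffusion v1 text conditioning

          def cross_attention(x, cond):
              """Toy single-head cross-attention: image tokens attend to
              the conditioning tensor (Q/K/V projections omitted)."""
              q, k, v = x, cond, cond
              att = q @ k.T / np.sqrt(EMB)
              att = np.exp(att - att.max(axis=-1, keepdims=True))
              att /= att.sum(axis=-1, keepdims=True)
              return att @ v

          x = rng.normal(size=(10, EMB))               # 10 image tokens
          text_cond = rng.normal(size=(TOKENS, EMB))   # what a prompt would yield
          brain_cond = rng.normal(size=(TOKENS, EMB))  # fMRI-decoded stand-in

          # The attention layer is agnostic to the conditioning's origin:
          out_text = cross_attention(x, text_cond)
          out_brain = cross_attention(x, brain_cond)
          print(out_text.shape, out_brain.shape)  # (10, 768) (10, 768)
          ```

          This is only an illustration of the interface, not the studies’ implementation; the hard part they solve is learning the mapping from voxels into that embedding space.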