• sramder@lemmy.world · 1 year ago

    Anyone know how many hours of training data it takes to build a convincing model of someone’s voice? It was tens of hours when I did a bit of research a year ago… the article says social media is the likely source of training data for these scams, but that seems unlikely at this point.

      • sramder@lemmy.world · 1 year ago

        The technology has clearly come a long way in a short time, really fascinating.

        I remember the first examples I read about being trained with celebrity read audiobooks because they needed so much audio data. I want to say Tom Hanks or Anthony Hopkins but I could have that confused with something else.

    • CrabLangEnjoyer@lemmy.world · 1 year ago

      A current state-of-the-art AI model from Microsoft can achieve acceptable quality with about 3 seconds of audio; commercially available services like ElevenLabs need about 30 minutes. Quality will obviously vary heavily, but then again they’re working from a low-quality phone call, so maybe that’s not so important.

      • sramder@lemmy.world · 1 year ago

        That’s downright scary :-) I think it took longer in the last Mission Impossible.

        30 minutes is still pretty minimal for the kind of targeted attack it sounds like this is used for. I suppose we all need to work with our families on code words or something.

        I went in thinking the article was a bit alarmist, but that’s clearly not the case. Thanks for the insight.

      • madsen@lemmy.world · edited · 1 year ago

        With that little audio, they may be able to recreate the timbre of someone’s voice, but speech carries a multitude of other identifiers and idiosyncrasies that are unlikely to come through in such a small sample: personal vocabulary (we don’t all choose the same words and phrasings for things), specific pronunciations (e.g. “library” vs. “libary”), voice inflections, etc. Obviously, the more training data you have, the better the output.