• cmgvd3lw@discuss.tchncs.de · 6 months ago

    If AI object/scene recognition is done locally, wouldn’t it increase the memory footprint of the browser process? Also, how many objects can it identify if it’s run on a modest 4–8 GB RAM system? One more question: would they ever introduce anonymised telemetry for these generations?

    • Vincent@feddit.nl · 6 months ago

      If it works anything like Firefox Translations does, the model is only downloaded on-demand, so it wouldn’t affect your browser usage if you don’t use the feature.
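      That on-demand pattern can be sketched roughly like this (a hypothetical illustration of lazy loading, not Firefox’s actual implementation; the `LazyModel` class and its `_download` step are made up for the example):

      ```python
      # Sketch of on-demand model loading: the model costs no memory
      # until the feature is actually used for the first time.

      class LazyModel:
          def __init__(self, name):
              self.name = name
              self._model = None  # nothing downloaded or loaded yet

          def _download(self):
              # Placeholder for fetching model weights on first use.
              return {"name": self.name, "weights": [0.0] * 4}

          @property
          def loaded(self):
              return self._model is not None

          def recognise(self, image):
              if self._model is None:  # first call triggers the download
                  self._model = self._download()
              return f"objects in {image!r} via {self._model['name']}"

      model = LazyModel("scene-recognition")
      print(model.loaded)             # no memory cost before first use
      print(model.recognise("cat.png"))
      print(model.loaded)             # loaded only once the feature was used
      ```

      Until `recognise` is called, the browser process carries only the tiny wrapper object, which matches Vincent’s point: unused features shouldn’t affect your footprint.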

      • FaceDeer@fedia.io · 6 months ago

        The state of the art for small models is improving dramatically, and quickly. Microsoft just released the phi-3 model family under the MIT license; I haven’t played with them myself yet, but the comments are very positive.

        Alternatively, just turn that feature off.

    • Carighan Maconar@lemmy.world · 6 months ago

      If Firefox uses even more memory it’ll bend the memory-time continuum so much it becomes a memory singularity.

    The concept of memory ceases to exist at the boundary of the Firefox process. What happens beyond it is unknown, except that no matter how much memory you throw in, none ever gets out.