• Jesus@lemmy.world · 19 hours ago

    My guess is that, given Lemmy’s software developer demographic, I’m not the only person here who is close to this space and these players.

    From what I’m seeing in my day-to-day work, MS is still aggressively committed to AI internally.

    • jj4211@lemmy.world · 16 hours ago

      That’s compatible with a lack of faith in profitable growth opportunities.

      So far they have gone big on what I’d characterize as evolutionary enhancements to existing tech. While that may find some acceptance, it isn’t worth quite enough to pay off the capital investment in this generation of compute. If they overinvest now and hope to eventually recoup the cost by skipping upgrades, they run a severe risk of being superseded by a competitor that held back some spending and ends up with a more modest but more up-to-date compute infrastructure.

      Another possibility is that they had predicted a huge boom in other companies spending on Azure hosting for AI workloads, and they now expect those companies won’t see that growth either.

    • Optional@lemmy.world · 15 hours ago

      I am sure the internal stakeholders of Micro$oft’s AI strategies will be the very last to know. Probably as they are instructed to clean out their desks.

      • Jesus@lemmy.world · 14 hours ago

        There are a few of us here who are closer to Satya’s strategic roadmap than you might think.

        • Optional@lemmy.world · 14 hours ago

          I’m sure, but they’re not going to hedge on a roadmap. Roadmaps are always full-steam-ahead.

    • theneverfox@pawb.social · 16 hours ago

      I think DeepSeek shook them enough to realize what should have been obvious for a while… brute force doesn’t beat new techniques, and spending the most might not be the safest bet.

      There are a ton of new techniques being developed all the time to do things more efficiently, and if you don’t need a crazy context window, in many use cases you can get away with much smaller models that don’t need massive datacenters.
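
      To make the “much smaller models” point concrete, here is a minimal sketch using the Hugging Face transformers text-generation pipeline with a sub-billion-parameter instruct model. The model name is just an illustrative pick, not something anyone in the thread endorsed, and it assumes a recent transformers install with a PyTorch backend.

      ```python
      # Minimal sketch: a small instruction-tuned model run locally,
      # no datacenter required. Model choice is illustrative only.
      from transformers import pipeline

      generator = pipeline(
          "text-generation",
          model="Qwen/Qwen2.5-0.5B-Instruct",  # ~0.5B parameters; fits on a laptop GPU or even CPU
      )

      prompt = "Summarize in one sentence: the team shipped the login fix and deferred the cache rework."
      result = generator(prompt, max_new_tokens=64, do_sample=False)

      # The pipeline returns a list of dicts; "generated_text" holds the prompt plus continuation.
      print(result[0]["generated_text"])
      ```

      Whether a model this small is good enough obviously depends on the task, but the infrastructure footprint is the point being made above.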

    • turnip@sh.itjust.works · 12 hours ago (edited)

      Because investors expect it, whether it generates profit or not. I guess we will see how it changes workflows, or whether people continue to do things like they always have.

    • Alex@lemmy.ml · 18 hours ago

      Context is king, which is why even the biggest models get tied in knots when I try them on my niche coding problems. I’ve been playing a bit with NotebookLM, which promises to be interesting with enough reference material, but unfortunately when I tried to add the Vulkan specs it complained that it couldn’t accept them (copyright, maybe?).
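
      For what it’s worth, the usual workaround when a tool won’t ingest a document wholesale is to chunk it locally and feed only the relevant slices into the context window. A rough sketch, assuming a plain-text copy of the spec on disk; the file name, question, and naive word-overlap scoring are all placeholders for illustration:

      ```python
      # Rough sketch: pick the spec chunks most relevant to a question so they
      # fit in a limited context window. Scoring is naive word overlap.
      def chunk(text: str, size: int = 400) -> list[str]:
          words = text.split()
          return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

      def top_chunks(question: str, chunks: list[str], k: int = 3) -> list[str]:
          q = set(question.lower().split())
          return sorted(chunks, key=lambda c: len(q & set(c.lower().split())), reverse=True)[:k]

      spec_text = open("reference_spec.txt").read()   # hypothetical local copy of the spec
      question = "How should the swapchain be recreated when the window is resized?"

      context = "\n\n".join(top_chunks(question, chunk(spec_text)))
      prompt = f"Answer using only this reference material:\n\n{context}\n\nQuestion: {question}"
      # `prompt` would then go to whatever model or tool you are cleared to use.
      ```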

      We have recently been given clearance to use the Gemini Pro tools with the Google office suite at work. While we are still not using them for code generation, I have found the transcription and meeting-summary tools very useful and a real time saver.