The study tracked around 800 developers, comparing their output with and without GitHub’s Copilot coding assistant over three-month periods. Surprisingly, when measuring key metrics like pull request cycle time and throughput, Uplevel found no meaningful improvements for those using Copilot.

  • IceHouse · 1 hour ago

    I do a lot of scripting for cloud infrastructure deployments and basic Linux/Windows scripting, and Bing Chat is great for banging out five-liners in a second that would take me an hour, even after multiple decades of being an admin.

    Anything more complex it is useless for, so it is limited, but nice to have.
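
    A minimal sketch of the kind of five-liner meant, assuming placeholder server names and PowerShell remoting already enabled:

        # Report free space on the C: drive across a few servers.
        # Server names are placeholders.
        $servers = 'web01', 'web02', 'db01'
        Invoke-Command -ComputerName $servers -ScriptBlock { Get-PSDrive C } |
            Select-Object PSComputerName, @{ n = 'FreeGB'; e = { [math]::Round($_.Free / 1GB, 1) } }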

  • stoy · 3 hours ago

    The few times I have used AI to help me with coding, it has mostly been to ask it for examples of how to use a specific feature, and for the most part it has been OK.

    I mostly code in PowerShell, HTML, and CSS, and Bing Chat is helpful when I am stuck on a small issue.

    We also recently started testing Copilot Pro 365, the one that can help you make documents or search through company documents and stuff like that.

    As a test I asked it to make me a PowerPoint presentation about the top ten podcasting microphones to buy.

    The result looked great at first glance, but quickly got very generic.

    Sure, it did show pictures of some microphones and even spoke about them, but it was all vague and generic.

  • AnarchoSnowPlow@midwest.social · 4 hours ago

    I’ve tried it even for some boilerplate code a few times. I’ve ended up rewriting it every time.

    It makes mistakes like junior engineers do, but it doesn’t make them in the same way that junior engineers do, which means that as a senior engineer it takes me significantly more effort to review. It also makes mistakes that humans don’t, which is even weirder to catch in review.
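
    A hypothetical illustration of the "mistakes humans don’t make" point: the snippet looks idiomatic, but Get-ChildItem has no -MaxDepth parameter (the real one is -Depth), the sort of invented flag an experienced admin would never type but that reads as plausible in review.

        # Looks plausible, but -MaxDepth does not exist; this line fails to bind.
        Get-ChildItem -Path C:\logs -Recurse -MaxDepth 2 -Filter *.log
        # Corrected version using the real parameter:
        Get-ChildItem -Path C:\logs -Recurse -Depth 2 -Filter *.log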

    • leisesprecher@feddit.org · 2 hours ago

      Also my experience. It sometimes tries to be smart and gets everything wrong.

      I think code shows clearly that LLMs don’t actually understand what they’re writing. Often enough you can see it trying to insert a common pattern at a point where that pattern doesn’t make sense.
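
      A hedged sketch of that failure mode, with made-up variable names and assuming the ActiveDirectory module: a skip-on-null guard is a very common pattern, but dropped into a loop that is supposed to collect missing accounts, it quietly inverts the logic.

          # Goal: collect usernames that do NOT have an AD account.
          $missing = @()
          foreach ($user in $users) {
              $account = Get-ADUser -Identity $user -ErrorAction SilentlyContinue
              if ($null -eq $account) { continue }   # reflexively inserted null guard...
              $missing += $user                      # ...so $missing now collects users that DO exist
          }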