• keepthepace@slrpnk.net · 1 year ago

    I was under the impression that it requires massive amounts of processing power to even run these AI.

    That was true a year ago, but things change quickly. People run LLMs on a Raspberry Pi nowadays.
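
    To give a rough idea, here is a minimal sketch of local inference, assuming the llama-cpp-python bindings are installed and a small quantized GGUF model has already been downloaded (the file name below is only a placeholder):

    ```python
    # Minimal local-inference sketch (assumes: pip install llama-cpp-python).
    # The model path is a placeholder; small quantized GGUF models are what
    # make Raspberry-Pi-class hardware feasible for inference.
    from llama_cpp import Llama

    llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048)

    # Run a single completion on the CPU and print the generated text.
    output = llm("Q: Can small devices run LLMs now? A:", max_tokens=64)
    print(output["choices"][0]["text"])
    ```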

    Training requires more GPU power, but that too is becoming more and more accessible. Fine-tuning is easy (see the sketch below); training a base model is still a bit beefy, but small teams manage it. Mistral, the French company that trained the latest good 7B model from scratch, has just a few dozen people.
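
    As an illustration of how light fine-tuning has become, here is a minimal LoRA setup, assuming the Hugging Face transformers and peft libraries and the public Mistral 7B checkpoint; it is a sketch of the configuration step, not a complete training script:

    ```python
    # Minimal LoRA fine-tuning setup (sketch, not a full training loop).
    # Assumes: pip install transformers peft, and access to the public
    # mistralai/Mistral-7B-v0.1 checkpoint on the Hugging Face Hub.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "mistralai/Mistral-7B-v0.1"
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)

    # LoRA trains small adapter matrices instead of all 7B weights,
    # which is why fine-tuning fits on modest hardware.
    lora = LoraConfig(
        r=8,
        lora_alpha=16,
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # typically well under 1% of parameters
    ```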

    The main argument behind the article is that only Big AI can afford to pay for licensed material to train on, so this will hurt small developers.

    That is not the main problem, though. The main problem is that these big companies don’t disclose which illegally sourced datasets they train on, unlike the open-source efforts, which can’t hide much.

    And they are not facing the “Creators”; they are facing the big copyright holders, who convinced a handful of artists that it is in their interest. If the copyright holders win, it is a huge step backward that buys just a few more years for a dying business model. A lot of their arguments are misguided or downright deceitful.