
Since OpenAI’s founding in 2015, its leaders have said their top priority is making sure artificial intelligence is developed safely and beneficially. They’ve touted the company’s unusual corporate structure as a way of proving the purity of its motives: OpenAI was a nonprofit controlled not by its CEO or by its shareholders, but by a board with a single mission — keep humanity safe.

But this week, the news broke that OpenAI will no longer be controlled by the nonprofit board. OpenAI is turning into a full-fledged for-profit benefit corporation. Oh, and CEO Sam Altman, who had previously emphasized that he didn’t have any equity in the company, will now get equity worth billions, in addition to ultimate control over OpenAI.

In an announcement that hardly seems coincidental, chief technology officer Mira Murati said shortly before that news broke that she was leaving the company. Employees were so blindsided that many of them reportedly reacted to her abrupt departure with a “WTF” emoji in Slack.

WTF indeed.

  • Storksforlegs@beehaw.org · 103 points · 2 months ago

    CEO Sam Altman, who had previously emphasized that he didn’t have any equity in the company, will now get equity worth billions, in addition to ultimate control over OpenAI.

    what! You mean he stands to profit billions after lying about his intentions?! A techbro would never!!

  • zante@lemmy.wtf · 38 points · 2 months ago

    comedy goldmine:

    They could get up to 100 times what they put in, but beyond that, the money would go to the nonprofit, which would use it to benefit the public. For example, it could fund a universal basic income program to help people adjust to automation-induced joblessness.

    • TimLovesTech (AuDHD)(he/him)@badatbeing.social · 28 points · 2 months ago

      “If OpenAI were to retroactively remove profit caps from investments, this would in effect transfer billions in value from a non-profit to for-profit investors,” said Jacob Hilton, a former OpenAI employee who joined before it transitioned from a nonprofit to a capped-profit structure.

      I’m sure the investors weren’t selling him on the idea that if they got a bigger return he would as well, surely.

  • PhilipTheBucket@ponder.cat · 34 points · 2 months ago

    I think that over the next few years Sam Altman is going to learn the same lessons that events have been trying to teach Elon Musk since circa 2021.

    1. You didn’t build that. The people that work for you did.
    2. Being a big hero is contingent on you and your behavior, and that status can change.
    3. Those people who are giving you all this money aren’t your comrades. When your usefulness is at its end, they won’t give you a second thought.

    • Anyolduser@lemmynsfw.com · 7 points · 2 months ago

      I was about to say …

      Vox can speak for itself. Big sections of the public knew they were being sold a bill of goods.

  • Sonori@beehaw.org · 22 points · 2 months ago

    What, the founder of cryptoscam Worldcoin is going to cash out of a project sold primarily on hype? Say it ain’t so. /s

  • FIash Mob #5678@beehaw.org · 20 points · 2 months ago

    It’s WeWork and Adam Neumann all over again.

    You couldn’t pay me to invest in this shit, and it feels a little insane that seemingly intelligent VCs are doing so.

  • hddsx@lemmy.ca · 19 points · 2 months ago

    Gotta get out while the gettin is good. Otherwise, if you lose the copyright lawsuits… RIP

  • kibiz0r@midwest.social · 18 points · 2 months ago

    just sold you out

    They’ve been sellin’ us out since the start. And they never even paid for us!

  • tal@lemmy.today · 8 points · edited · 2 months ago

    I don’t know whether Altman or the board is better from a leadership standpoint, but I don’t think it makes sense to rely on boards to avoid existential dangers to humanity. A board runs one company. If that board takes an action that is a good move in terms of existential risk but disadvantageous to the company, it will tend to be outcompeted and replaced by boards that do not. Anyone doing that job has to be in a position to span multiple companies. I doubt that market regulators in a single market could even do it — that’s getting into international treaty territory.

    The only way in which a board is going to be able to effectively do that is if one company, theirs, effectively has a monopoly on all AI development that could pose a risk.