• Brkdncr@lemmy.world

    Love this. I don’t know much about risc-v but I’d love to see it disrupt the market a bit.

    • SeaJ@lemm.ee

      RISC-V still has a ways to go before it’s usable for much.

      • GreyBeard@lemmy.one

        It’s usable for much now… just not as a daily-driver laptop. It is good for embedded applications now, but not quite there for phone or laptop use. Maybe one day.

        • ozymandias117@lemmy.world

          Google is certainly planning on it being viable.

          They’ve been merging RISC-V support into Android and have documented the minimum extensions over the base ISA that must be implemented for Android certification.

        • sugar_in_your_tea@sh.itjust.works

          I was hoping to use it for a NAS (just storage and retrieval), but board selection was limited and I wasn’t ready to gamble on something like a USB-C enclosure. It would theoretically be a great fit, hopefully it gets there soon.

  • zarenki@lemmy.ml

    This board has the StarFive JH7110 SoC. That processor has previously been in very low power single board computers like StarFive VisionFive 2 (2022) and Milk-V Mars (2023), a Raspberry Pi clone that can be bought for as low as $40. Its storage limitations (SD/eMMC rather than NVMe) show how much this isn’t meant for laptop use.

    Very underpowered for a laptop too, even when considering this is intended for developers and doesn’t need to be remotely performance competitive. Consider that this has just 4 RV64GC cores, the cheapest Intel board options Framework offers are 12 cores (4P+8E), and any modern RISC-V core is far simpler with less area than even an Intel E core. These cores also lack the RISC-V vector instructions extension.

      • barsoap@lemm.ee

        You don’t need a laptop to use a framework mainboard, they run without battery and display and everything. So if you have a Framework 13 or are in the market for one this might actually be a very nice thing, especially if the price is comparable to other boards.

        • sugar_in_your_tea@sh.itjust.works

          I guess? But why would you swap to RISC-V from their x86 boards? It’ll be slower and less compatible.

          I can see it for devs, but they’re going to want a separate laptop or an SBC, they’re not going to be swapping mainboards on the regular.

            • barsoap@lemm.ee

              Hooking up a BananaPi to a keyboard+monitor is going to be quite a bit cheaper, though unlike with the Framework laptop you can’t re-use the case, monitor, etc. with an upgraded board.

              • lengau@midwest.social

                It would, but I already have several dev boards I use in that configuration. What I’m looking for now is something I can take with me to use as a semi-daily driver so I can start reporting bugs in real world use cases.

          • barsoap@lemm.ee

            You can develop using it as an SBC, then put it into the laptop when you go to a conference to present your stuff. Or if you really want to code in the park: it’s not like it’d be a microcontroller, it’s fast enough to run an editor and compiler.

            But granted it’s a hassle to switch out the mainboard. OTOH you can also use the x86 board as an SBC so when you’re at home it doesn’t really matter which board happens to be inside.

            I guess from Framework’s POV there’s not much of an argument; it’s less “do people want potato laptops” and more “do we want to get our feet wet with RISC-V and the SBC market”. Nobody actually needs to use it in a laptop for the whole thing to make sense to them.

      • utopiah@lemmy.world

        Indeed, I bought a Banana Pi BPI-F3 with a SpacemiT K1 8-core RISC-V chip, 4G RAM and 16G eMMC https://www.banana-pi.org/en/banana-pi-sbcs/175.html for €95.89 including delivery. The form factor is nice though, and I do enjoy Framework’s mission and partnerships. Depends what people need it for; it’s good to have more options that aren’t “just” SBCs/devboards. I won’t buy one now but I’ll definitely keep it in mind.

      • zarenki@lemmy.ml

        I bought a Milk-V Mars (4GB version) last year. Pi-like form factor and price seemed like an easy pick for dipping my toes into RISC-V development, and I paid US$49 plus shipping at the time. There’s an 8GB version too but that was out of stock when I ordered.

        If I wanted to spend more I’d personally prefer to put that budget toward a higher core system (for faster compile times) before any laptop parts, as either HDMI+USB or VNC would be plenty sufficient even if I did need to work on GUI things.

        Other RISC-V laptops already are cheaper and with higher performance than this would be with Framework’s shell+screen+battery, so I’m not sure what need this fills. If you intend to use the board in an alternate case without laptop parts you might as well buy an SBC instead.

  • BrianTheeBiscuiteer@lemmy.world

    This board also has soldered memory and uses MicroSD cards and eMMC for storage, both of which are limitations of the processor.

    Ah, yeah, hard no from me dog. Can we get one of the new Snapdragons tho? Please?

    • j4k3@lemmy.world

      Qualcomm and Broadcom are the two biggest reasons you don’t own your devices any more. That is the last option anyone that cares about ownership should care about. You should expect an orphaned kernel just like all their other mobile garbage. Qualcomm is like the Satan of hardware manufacturers. The world would be a much better place if Qualcomm and Broadcom were not in it at all.

      • TFO Winder@lemmy.ml

        What did they do? I thought all processors follow standards, hence me being able to run Linux on my Intel or AMD CPU.

        • j4k3@lemmy.world

          All their hardware documentation is locked under NDA; nothing is publicly available about the hardware at the register level.

          For instance, the base Android system AOSP is designed to use Linux kernels that are prepackaged by Google. These kernels are well documented specifically for manufacturers to add their hardware support binary modules at the last possible moment in binary form. These modules are what makes the specific hardware work. No one can update the kernel on the device without the source code for these modules. As the software ecosystem evolves, the ancient orphaned kernel creates more and more problems. This is the only reason you must buy new devices constantly. If the hardware remained undocumented publicly while just the source code for modules present on the device was merged with the kernel, the device would be supported for decades. If the hardware was documented publicly, we would write our own driver modules and have a device that is supported for decades.

          This system is about like selling you a car that can only use gas that was refined prior to your purchase of the vehicle. That would be the same level of hardware theft.

          The primary reason governments won’t care or make effective laws against orphaned kernels is because the bleeding edge chip foundries are the primary driver of the present economy. This is the most expensive commercial endeavor in all of human history. It is largely funded by these devices and the depreciation scheme.

          That is both sides of the coin, but it is done by stealing ownership from you. Individual autonomy is our most expensive resource. It can only be bought with blood and revolutions. This is the primary driver of the dystopian neofeudalism of the present world. It is the catalyst that fed the sharks that have privateered (legal piracy) healthcare, home ownership, work-life balance, and democracy. It is the spark of a new wave of authoritarianism.

          Before the Google “free” internet (ownership over your digital person to exploit and manipulate), all x86 systems were fully documented publicly. The primary reason AMD exists is that we (the people) were so distrustful of these corporations stealing and manipulating that governments, militaries, and large corporations required second sourcing of chips before purchasing with public funds. We knew way back then that products-as-a-service is a criminal extortion scam. AMD was the second source for Intel and produced x86 chips under license. Only later did they recreate an instruction-compatible alternative from scratch. There was a big legal case where Intel tried to claim copyright over their instruction set, but they lost; that is what made AMD what it is today.

          Since 2012, both Intel and AMD have had proprietary code. This is primarily because the original 8086 patents expired; most of the hardware could be produced anywhere after that. In practice there are only Intel, TSMC, and Samsung on bleeding-edge fab nodes, and bleeding edge is all that matters. The price to bring one online is extraordinary, and the tech it requires is only made once, for a short while. The cutting-edge devices are what pay for the enormous investment, but once the fab is paid for, the cost to continue running one is relatively low.

          The number of fabs within a node is carefully decided to try and accommodate trailing-edge node demand. No new trailing-edge nodes are viable to reproduce; there is no store to buy fab node hardware. As soon as all of a node’s hardware is built by ASML, they start building the next node.

          But if x86 has proprietary parts, why is it different from Qualcomm/Broadcom - no one asked. The proprietary parts are of some concern. There is an entire undocumented operating system running in the background of your hardware; that’s the most concerning part. The primary thing that is proprietary is the microcode. This is basically the power-cycling phase of the chip, like the order in which things are given power, and the instruction set that is available. It’s like how there are no actual chips designed for most consumer hardware: the dies are classed by quality and functionality and sorted to create the various products we see. Your slower laptop chip might be the same die as a desktop variant that didn’t perform at the required speed; power is connected differently, and it becomes a laptop chip.

          When it comes to trending hardware, never fall for the Apple trap. They design nice stuff, but on the back end, Apple always uses junky hardware, and excellent in-house software to make up the performance gap. They are a hype machine. The only architecture that Apple has used and hasn’t abandoned because it went defunct is x86. They used MOS in the beginning. The 6502 was absolute trash compared to the other available processors. It used a pipeline trick to hack twice the actual clock speed because they couldn’t fab competitive quality chips. They were just dirt cheap compared to the competition. Then it was Motorola. Then PowerPC. All of these are now irrelevant.

          The British group that started Acorn sold the company right after RISC-V passed the major hurdle of getting past Berkeley’s ownership grasp. It is a slow moving train, like all hardware, but ARM’s days are numbered. RISC-V does the same fundamental thing without the royalty. There is a ton of hype because ARM is cheap and everyone is trying to grab the last treasure chests they can off the slow sinking ship. In 10 years it will be dead in all but old legacy device applications. RISC-V is not a guarantee of a less proprietary hardware future, but ARM is one of the primary cornerstones blocking end user ownership. They are enablers for thieves; the ones opening your front door to let the others inside.

          Even the beloved Raspberry Pi is a proprietary market manipulation and control scheme. It is not actually open source at the registers level and it is priced to prevent the scale viability of a truly open source and documented alternative. The chips are from a failed cable TV tuner box, and they are only made in a trailing edge fab when the fab has no other paid work. They are barely above cost and a tax write off, thus the “foundation” and dot org despite selling commercial products.

          • syd@lemy.lol

            This is not written by ChatGPT right?

            Edit: ok don’t kill me, it was so long :/

            • Da Bald Eagul@feddit.nl

              I doubt it, there are some grammar mistakes in there I think. At least, it doesn’t look like the typical ChatGPT writing style.

            • j4k3@lemmy.world

              The easiest ways to distinguish I’m human are the patterns, as others have mentioned, assuming you’re familiar with the primary Socrates entity’s style in the underlying structure of the LLM. The other easy way to tell I’m human is my conceptual density and mobility when connecting concepts across seemingly disconnected spaces. Presently, the way I am connecting politics, history, and philosophy to draw a narrative about a device, consumers, capitalism, and venture capital is far beyond the attention scope of the best AI. No doubt the future will see AI rise an order of magnitude to meet me, but that is not the present. AI has far more info available, but far less scope in any given subject when it comes to abstract thought.

              The last easy way to see that I am human is that I can talk about politics in a critical light. Politics is the most heavily bowdlerized space in any LLM at present. None of the models can say much more than gutter responses: form-letter-like replies, overtrained in this space so that all questions land on predetermined answers.

              I play with open source offline AI a whole lot, but I will always tell you if and how I’m using it. I’m simply disabled, with too much time on my hands, and y’all are my only real random human interactions. - warmly

              I don’t fault your skepticism.

        • 𝒍𝒆𝒎𝒂𝒏𝒏@lemmy.dbzer0.com

          Not the case with ARM processors sadly, IMO they’re a bit of a mess from that perspective. Proprietary blobs for hardware, unusual kernel hacks for some devices, and no device tree support so you can’t just boot any image on any device. I think Windows for ARM encouraged some standardization in that regard, but for the most part looking at Android devices it’s still very much the wild west.

          This is one of the many reasons why Raspberry Pi ARM boards remain popular for the time being, despite there being so many other cheap alternatives available: they actually keep supporting their old boards & ensure hardware on their boards works from the get-go.

          There are also some rare cases where Raspberry Pi have written open-source implementations of Broadcom’s proprietary blob drivers, in one instance for the built-in CSI (the optional camera interface).

        • SomeoneSomewhere@lemmy.nz

          Essentially no processors follow a standard. There are some that have become a de facto standard and had both backwards compatibility and clones produced like x86. But it is certainly not an open standard, and many lawsuits have been filed to limit the ability of other companies to produce compatible replacement chips.

          RISC-V is an attempt to make an open instruction set that any manufacturer can make a compatible chip for, and any software developer can code for.

        • SGG@lemmy.world

          They make a bunch of the other chips that go into computer devices, and from what I understand it’s binary blob or nothing for a lot of it?

        • apt_install_coffee@lemmy.ml

          Both Intel and AMD invest a lot into open source drivers, firmware and userspace applications, but also due to the nature of X86_64’s UEFI, a lot of the proprietary crap is loaded in ROM on the motherboard, and as microcode.

      • apt_install_coffee@lemmy.ml

        I work with SoC suppliers, including Qualcomm, and can confirm: you need to sign an NDA to get a highly patched old orphaned kernel, often with drivers that are provided only as precompiled binaries, preventing you from updating the kernel yourself.

        If you want that source code, you need to also pay a lot of money yearly to be a Qualcomm partner and even then you still might not have access to the sources for all the binaries you use. Even when you do get the sources, don’t expect them to be updated for new kernel compatibility; you’ve gotta do that yourself.

        Many other manufacturers do this as well, but few are as bad. The environment is getting better, but it seems to be a feature that many large manufacturers feel they can live without.

        • cornshark@lemmy.world

          How’s this possible with the kernel under GPL? If you’re getting precompiled binaries, shouldn’t you also be able to get their sources by law?

          • apt_install_coffee@lemmy.ml

            Kernel modules don’t have to be open source provided they follow certain rules, like not using GPL-only symbols. This is the same reason you can use an NVIDIA driver.

            It’s not enforced so much by law as by what the FSF and Linux Foundation can prove and are willing to pursue; going after a company that size is expensive, especially when they’re a Linux Foundation partner. A lot of major Linux Foundation partners are actively breaking the GPL.
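
            As a minimal sketch of the mechanism (illustrative only, not any vendor’s actual code): the only thing that gates a module’s access to GPL-only kernel symbols is its license string.

            ```c
            // Minimal out-of-tree module sketch. With a non-GPL MODULE_LICENSE
            // string the kernel taints itself on load and refuses to resolve any
            // symbol exported with EXPORT_SYMBOL_GPL(); only plain EXPORT_SYMBOL()
            // symbols remain linkable. That loophole is what lets vendors ship
            // binary-only .ko files at all.
            #include <linux/init.h>
            #include <linux/module.h>

            static int __init vendor_blob_init(void)
            {
                    pr_info("vendor blob loaded\n");
                    return 0;
            }

            static void __exit vendor_blob_exit(void)
            {
                    pr_info("vendor blob unloaded\n");
            }

            module_init(vendor_blob_init);
            module_exit(vendor_blob_exit);

            MODULE_LICENSE("Proprietary");  /* anything that isn't GPL-compatible */
            MODULE_DESCRIPTION("Sketch of a binary-only vendor module");
            ```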

      • SuperSpruce

        I thought Mediatek was even more closed off than Qualcomm.

        • j4k3@lemmy.world

          MIPS is Stanford’s alternative architecture to Berkeley’s RISC-I/RISC-II. I was somewhat concerned about their stuff in routers, especially when the primary bootloader used is proprietary.

          The person that wrote the primary bootloader is the same person writing most of the Mediatek kernel code in mainline. I forget where I put together their story, but I think they were some kind of prodigy type that reverse engineered and wrote an entire bootloader from scratch, implying a very deep understanding of the hardware. IIRC I may have seen that info years ago in the uboot forum. I think someone accused the Mediatek bootloader of copying uboot. Again IIRC, their bootloader was being developed open source and there is some kind of partially available source still on a git somewhere. However, they wound up working for Mediatek and are now doing all the open source stuff. I found them on the OpenWRT forum and was a bit of an ass asking why they didn’t open source the bootloader code. After that, some of the more advanced users on OpenWRT explained to me how the bootloader is static, which I already kinda knew, I mean, I know it is on a flash memory chip on the SPI bus. This makes it much easier to monitor the starting state and what is really happening. These systems are very old 1990s-era designs; there is not a lot of room to do extra stuff unnoticed.

          On the other hand, all cellular modems are completely undocumented, as are all WiFi modems since the early 2010’s, with the last open source WiFi modem being the Atheros chips.

          There is no telling what is happening with cellular modems. I will say, the integrated nonremovable batteries have nothing to do with design or advancement. They are capable monitoring devices that cannot be turned off.

          However, if we can monitor all registers in a fully documented SoC, we can fully monitor and control a peripheral bus in most instances.

          Overall, I have little issue with Mediatek compared to Qualcomm. They are largely emulating the behavior of the bigger player, Broadcom.

      • exu@feditown.com

        Usually you can get the kernel source for Qualcomm at least, MediaTek tho…

    • zelifcam@lemmy.world

      This is a dev kit. This is not for normal people to use. RISC-V is not there yet, but this is a good first step.

    • morhp@lemmynsfw.com

      That doesn’t bother me too much.

      With the CPU being that slow, I don’t think you’ll really need a proper SSD. (And the CPU doesn’t have the required PCIe interfaces anyway).

      They probably could’ve added socketed RAM, but based on the photo, the main board looks quite full and messy with random chips (likely needed to work around CPU limitations), so it probably wasn’t a high priority.

      I’m interested in the cooling requirements and battery life.

      I’m not interested in ARM CPUs with all their weird proprietary stuff.

      • ferret@sh.itjust.works

        The mainboard looks cluttered due to the verbose silkscreening, it doesn’t actually look that complex compared to the other mainboards.

    • Linkerbaan@lemmy.world

      At the point you want to upgrade this chip swapping out the entire SOC including the RAM is likely a better option.

    • Synapse@lemmy.world

      RISC-V (pronounced “risk five”) is a free, open-source Instruction Set Architecture (ISA). Other well-established ISAs like x86, amd64 (Intel and AMD) and ARM are proprietary, and therefore one must pay very expensive licenses to design and build a processor using those architectures. You don’t need to pay a license to build a RISC-V processor; you only need to follow the specification. That doesn’t mean the CPU designs are also free (no, they stay very much the closed property of the designer), but RISC-V nonetheless represents a very big step towards more transparency and technology freedom.

      • StitchIsABitch@lemmy.world

        I pity the five year old who has to read this.

        I’m a grown up though so thank you for the explanation.

      • msage@programming.dev

        Isn’t it possible to add custom instructions and lock others out of them, leading back to the current ARM situation?

        • Synapse@lemmy.world

          I know there are already a number of extensions in the specification, such that RISC-V can be relevant for designing anything from the simplest microcontroller up to the most powerful supercomputer. I suppose it is possible and allowed to design a CPU with proprietary extensions. What should prevent an ARM type of situation is the fact that so many use-cases are already covered by the open specifications. What is not there yet, to my knowledge, are things like graphics, video, and neural-net acceleration.

          • barsoap@lemm.ee

            graphics, video, neural-net acceleration.

            All three are kinda at least half-covered by the vector instructions, which absolutely and utterly kill any BLAS workload dead. 3D workloads use fancy indexing schemes for texture mapping that aren’t included; for video I guess you’d want some special APU sauce for wavelets or whatever (don’t know the first thing about codecs); neural nets should run fine as they are, provided you have a GPU-like memory architecture, and the vector extension certainly has gather/scatter opcodes. Oh, you’d want reduced precision, but that’s in the pipeline.

            Especially with stuff like NNs, though, the microarch is going to matter a lot. Even if, say, a convolution kernel from one manufacturer uses instructions a chip from another manufacturer understands, it’s probably not going to perform at an optimal level.

            VPUs AFAIU are usually architected like DSPs: A bunch of APUs stitched together with a VLIW insn encoder very much not intended to run code that is in any way general-purpose, because the only thing it’ll ever run is hand-written assembly, anyway. Can’t find the numbers right now but IIRC my rk3399 comes with a VPU that out-flops both the six arm cores and the Mali GPU, combined, but it’s also hopeless to use for anything that can’t be streamed linearly from and to memory.

            Graphics is the by far most interesting one in my view. That is, it’s a lot general purpose stuff (for GPGPU values of “general purpose”) with only a couple of bits and pieces domain-specific.

        • Aux@lemmy.world

          The instruction set is a tiny part of the overall CPU architecture. You don’t need to lock it as everything else is proprietary: manufacturing, cores, electric design, etc. Most RISC-V processors today use ARM cores and are subject to ARM licensing.

    • sugar_in_your_tea@sh.itjust.works

      RISC-V is like LEGO, where you can put together pieces to make whatever you want. Nobody can tell you what you can or can’t make, you can be as creative as you want. Oh, and there’s motors and stuff too.

      ARM is like Hotwheels, there are lots of cars, but you can’t make your own. You can get a bit creative making tracks, but that’s about it.

      AMD and Intel are like RC cars, they’re really fun, but they use a lot of batteries and you can’t really customize them. Oh, and they’re expensive, so you only get one.

      Each is cool, but with LEGO, you can do everything the others do, and more. Like LEGO, RISC-V can be slow to work with, especially if you don’t have the pieces you want, but the more people that use it, the better it’ll get and the more pieces you can get. And if you have a 3D printer, you can make your own pieces and share them with others.

      • cmhe@lemmy.world

        “you” as in a person with the required skills, resources and access to a chip fabrication facility. Many others can just buy something designed and produced by others, or play around a bit on FPGAs.

        We will also see how much variation with RISC-V will actually happen, because if every processor is a unique piece of engineering, it is really hard to write software that works on every one.

        Even with ARM there are arguably too many designs out there, which currently take a lot of effort to integrate.

        • sugar_in_your_tea@sh.itjust.works

          Sure, and there are more people with that access than just AMD, ARM, NVIDIA, and Intel.

          If game devs supported RISC-V, Valve could’ve made the Steam Deck without having to get AMD’s help, which means they would’ve had more options to keep prices down while meeting their performance goals. Likewise for server vendors, phone manufacturers, etc, who currently need to buy from ARM (and fab themselves) or AMD/Intel.

          And that’s why I mentioned 3D printing. Making custom 3D models of LEGO pieces is out of reach for many (most?) and even owning a 3D printer is out of reach for many. I have one, but I’ve only built a handful of things because it’s time consuming.

          As it gets more software support, we should see a lot more variety in RISC-V chips. We’re not there yet, but we should be excited because it’s starting to get traction, and the future looks bright.

          • cmhe@lemmy.world

            It also means that anyone can make their own instruction set extensions or just some custom modifications, which would make software much more difficult to port. You would have to patch your compiler for every individual chip, if you even figure out what those instructions are and what they do. Backwards, forwards or sideways (to other CPUs from other vendors) compatibility takes effort, and not everyone will try to have that, instead adding their own individual secret sauce to their instruction set.

            IMO, I am excited about RISC-V, but if the license doesn’t force adopters to open their designs under an open source license as well, I do expect even more portability issues than we already have with ARM SoCs.

            • sugar_in_your_tea@sh.itjust.works

              Compilers basically already do that, and distributed executables usually assume minimal instruction support. Compilers can detect what’s supported, so it’s largely a solved problem, at least if you compile things yourself.
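
              A rough sketch of what that can look like in C (the x86 builtins are real GCC/Clang features; the RISC-V macro names follow the RISC-V C API conventions, so treat those as an assumption on my part):

              ```c
              // Sketch: pick a code path per ISA extension - at runtime on x86,
              // at compile time on RISC-V (e.g. built with -march=rv64gcv).
              #include <stdio.h>

              static void vector_path(void)   { puts("using the vector/SIMD path"); }
              static void baseline_path(void) { puts("using the baseline path"); }

              int main(void) {
              #if defined(__x86_64__) || defined(__i386__)
                  __builtin_cpu_init();                  /* GCC/Clang builtin */
                  if (__builtin_cpu_supports("avx2"))    /* runtime CPUID check */
                      vector_path();
                  else
                      baseline_path();
              #elif defined(__riscv) && defined(__riscv_vector)
                  vector_path();                         /* V extension enabled at build time */
              #else
                  baseline_path();
              #endif
                  return 0;
              }
              ```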

      • nickwitha_k (he/him)@lemmy.sdf.org

        And if you have a 3D printer, you can make your own pieces and share them with others.

        I really wish that an affordable desktop chip fab was a thing. Maybe with graphene semiconductors it could be feasible.

        • sugar_in_your_tea@sh.itjust.works

          It’s affordable today, but only for big orders in the millions (e.g. someone like Valve is big enough).

          It would be super cool if small batches (hundreds) were feasible, but I don’t think there’s much demand there since that’s where FPGAs come in.

      • Kazumara@discuss.tchncs.de

        ARM is like Hotwheels, there are lots of cars, but you can’t make your own.

        That’s not entirely true. There are companies that have the ARM architecture license, like Apple or Cavium (now bought by Marvell). They are allowed to make their own hotwheels using the spring system or the wheels or whatever.

    • floridaman@lemmy.blahaj.zone

      Not an ELI5 because I’m still not caught up on it, but if my memory serves, RISC-V is an open-source architecture for processors, basically like amd64 or arm64; actually, I’m pretty sure ARM’s chips are RISC derivatives.

      Edit: correcting my comment, ARM makes RISC chips, not RISC-V

      • boonhet@lemm.ee

        ARM and RISC-V are entirely different in that neither one is based on the other, but what they have in common is that they’re both RISC (Reduced Instruction Set Computing) architectures. RISC is what makes ARM CPUs (in your phone, etc) so efficient and hopefully RISC-V will get there too.

        x86 by comparison is Complex Instruction Set Computing, which allows for more performance in some cases, but isn’t as efficient.

        • __dev@lemmy.world

          The original debate from the 80s that defined what RISC and CISC mean has already been settled and neither of those categories really apply anymore. Today all high performance CPUs are superscalar, use microcode, reorder instructions, have variable width instructions, vector instructions, etc. These are exactly the bits of complexity RISC was supposed to avoid in order to achieve higher clock speeds and therefore better performance. The microcode used in modern CPUs is very RISC like, and the instruction sets of ARM64/RISC-V and their extensions would have likely been called CISC in the 80s. All that to say the whole RISC vs CISC thing doesn’t really apply anymore and neither does it explain any differences between x86 and ARM. There are differences and they do matter, but by and large it’s not due to RISC vs CISC.

          As for an example: if we compare the M1 and the 7840u (similar CPUs on a similar process node, one arm64 the other AMD64), the 7840u beats the M1 in performance per watt and outright performance. See https://www.cpu-monkey.com/en/compare_cpu-amd_ryzen_7_7840u-vs-apple_m1. Though the M1 has substantially better battery life than any 7840u laptop, which very clearly has nothing to do with performance per watt but rather design elements adjacent to the CPU.

          In conclusion the major benefit of ARM and RISC-V really has very little to do with the ISA itself, but their more open nature allows manufacturers to build products that AMD and Intel can’t or don’t. CISC-V would be just as exciting.

          • barsoap@lemm.ee

            have variable width instructions,

            compressed instruction set /= variable-width. x86 instructions are anything from one to a gazillion bytes, while RISC-V is four bytes or optionally (very commonly supported) two bytes. Much easier to handle.

            vector instructions,

            RISC-V is (as far as I’m aware) the first ISA since Cray to use vector instructions. Certainly the only one that actually made a splash. SIMD isn’t vector instructions; most crucially, with vector insns the ISA doesn’t care about vector length at the opcode level. That’s as if you had written MMX code back in the day and, running the same code now on a modern CPU, it used registers just as wide as SSE3.
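
            To make the vector-length-agnostic point concrete, here is the classic saxpy loop written against the RVV C intrinsics (going from memory of the v1.0 intrinsics naming; older toolchains used unprefixed names, so treat the exact identifiers as an assumption):

            ```c
            // Vector-length-agnostic saxpy (y = a*x + y) using the RVV intrinsics.
            // The hardware reports how many elements fit each iteration (vl), so the
            // same binary runs on any VLEN without recompiling.
            #include <riscv_vector.h>
            #include <stddef.h>

            void saxpy(size_t n, float a, const float *x, float *y) {
                for (size_t vl; n > 0; n -= vl, x += vl, y += vl) {
                    vl = __riscv_vsetvl_e32m8(n);                    // ask the HW for a chunk size
                    vfloat32m8_t vx = __riscv_vle32_v_f32m8(x, vl);  // load x[0..vl)
                    vfloat32m8_t vy = __riscv_vle32_v_f32m8(y, vl);  // load y[0..vl)
                    vy = __riscv_vfmacc_vf_f32m8(vy, a, vx, vl);     // vy += a * vx
                    __riscv_vse32_v_f32m8(y, vy, vl);                // store back
                }
            }
            ```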

            But you’re right, the old definitions are a bit wonky nowadays; I’d say the main differentiating factor now is having a load/store architecture and disciplined instruction widths. Modern out-of-order CPUs with half a gazillion instructions of a single thread in flight at any time of course don’t really care about the load/store thing, but both things simplify insn decoding to ludicrous degrees, saving die space and heat. For simpler cores it very much does matter, and “simpler core” here can also mean barely superscalar, but with insane vector width, like one of 1024 GPU cores consisting mostly of APUs, no fancy branch prediction silicon, supporting enough hardware threads to hide latency and keep those APUs saturated. (Yes, the RISC-V vector extension has opcodes for gather/scatter, in case you’re wondering.)


            Then, last but not least: RISC-V absolutely deserves the name it has because the whole thing started out at Berkeley. RISC I and II were the originals, II is what all the other RISC architectures were inspired by, III was a Smalltalk machine, IV Lisp. Then a long time nothing, then lecturers noticed that teaching modern microarches with old or ad-hoc insn sets is not a good idea, x86 is out of the question because full of hysterical raisins, ARM is actually quite clean but ARM demands a lot, and I mean a lot of money for the right to implement their ISA in custom silicon, so they started rolling their own in 2010. Calling it RISC V was a no-brainer.

            • __dev@lemmy.world

              compressed instruction set /= variable-width […]

              Oh for sure, but before the days of super-scalars I don’t think the people pushing RISC would have agreed with you. Non-fixed instruction width is prototypically CISC.

              For simpler cores it very much does matter, and “simpler core” here can also could mean barely superscalar, but with insane vector width, like one of 1024 GPU cores consisting mostly of APUs, no fancy branch prediction silicon, supporting enough hardware threads to hide latency and keep those APUs saturated. (Yes the RISC-V vector extension has opcodes for gather/scatter in case you’re wondering).

              If you can simplify the instruction decoding that’s always a benefit - moreso the more cores you have.

              Then, last but not least: RISC-V absolutely deserves the name it has because the whole thing started out at Berkeley.

              You’ll get no disagreement from me on that. Maybe you misunderstood what I meant by “CISC-V would be just as exciting”? I meant that if there was a popular, well designed, open source CISC architecture that was looking to be the eventual future of computing instead of RISC-V then that would be just as exciting as RISC-V is now.

          • pantyhosewimp@lemmynsfw.com

            Thank you so much for this information.

            If you still have commenting motivation, what are the top 5 differences between x86 and ARM?

            Up until your post I had thought it was exactly the size of the instruction set, with x86 having lots of very specific multi-step-in-a-single-instruction operations as well as crufty instructions for backwards compatibility (like MPSADBW).

            • __dev@lemmy.world

              I’m more familiar with RISC-V than I am with ARM though it’s my understanding they’re quite similar.

              • ARM/RISC-V are load-store architectures, meaning they divide instructions between loading/storing and doing computation. x86 on the other hand is a register-memory architecture, having instructions that do both computation as well as loading/storing.

              • ARM/RISC-V also have weaker guarantees as to memory ordering allowing for less synchronization between cores, however RISC-V has an extension to enforce the same guarantees as x86 and Apple’s M-series CPU have a similar extension for ARM. If you want to emulate x86 applications on ARM/RISC-V these kinds of extensions are essential for performance.

              • ARM/RISC-V instructions are variable width but only in a limited sense. They have “compressed instructions” - 2 bytes instead of 4 - to increase instruction density in order to compete with x86’s true variable width instructions. They’re fairly close in instruction density, though compressed instructions are annoying for compilers to handle due to instruction alignment. 4 byte instructions must be aligned to 4 bytes, so if you have 3 instructions A, B and C but only B has a compressed version then you can’t actually use it because there must be 4 bytes between instructions A and C.

              • ARM/RISC-V also makes backwards compatibility entirely optional, Apple’s M-series don’t implement 32-bit mode for instance, whereas x86-64 still has “real mode” for running 16 bit operating systems.

              There’s also a number of other differences, like the number of registers, page table formats, operating modes, etc, but those are the more fundamental ones I can think of.

              Up until your post I had thought it exactly was the size of the instruction set with x86 having lots of very specific multi-step-in-a-single instruction as well as crufty instruction for backwards compatibility (like MPSADBW).

              The MPSADBW thing likely comes from the hackaday article on why “x86 needs to die”. The kinda funny thing about that is MPSADBW is actually a really important instruction for (apparently) video decoding; ARM even has a similar instruction called SABD.

              x86 does have a large number of instructions (even more so if you want to count the variants of each), but ARM does not have a small number of instructions and a lot of that instruction complexity stops at the decoder. There’s a whole lot more to a CPU than the decoder.

            • exu@feditown.com

              You can pay ARM to build and sell cores, you can’t do that for x86.

            • areyouevenreal@lemm.ee

              ARM is load-store and has a relaxed ordering. Whereas x86 has instructions that can read straight from memory, and has Total Store Ordering. ARM also is fixed instruction width, where x86/AMD64 is variable instruction width. Outside of that the difference is mostly licensing.

        • areyouevenreal@lemm.ee

          The CISC vs RISC thing is dead. Also modern ARM ISAs aren’t even RISC anymore even if that’s what they started out as. People have no idea what’s going on with modern technology.

          X86 can actually be quite low power (see LPE cores and Intel Atom). The producers of x86 don’t specialize in that though, unlike a lot of RISC-V and ARM producers. It’s not that it’s impossible, just that it isn’t typically done that way.

        • Echo Dot@feddit.uk

          So is Reduced Instruction Set like in the old assembly days where you couldn’t do multiplication, as there wasn’t a command for it, so you had to do multiple loops of addition?

          • Spedwell@lemmy.world

            Right concept, except you’re off in scale. A MULT instruction would exist in both RISC and CISC processors.

            The big difference is that CISC tries to provide instructions to perform much more sophisticated subroutines. This video is a fun look at some of the most absurd ones, to give you an idea.
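
            Side note on the MULT point: in RISC-V the multiplier lives in the optional M extension, so the same C just compiles differently depending on the target (a sketch; exact codegen depends on the toolchain):

            ```c
            /* With -march=rv64im (M extension) the compiler emits a single `mul`
             * instruction; with plain -march=rv64i it instead calls libgcc's
             * __muldi3 helper, which multiplies via shifts and adds. Either way
             * the C source gets its multiply - "reduced" never meant "missing". */
            long multiply(long a, long b) {
                return a * b;
            }
            ```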

            • barsoap@lemm.ee

              ARM prominently has an instruction to deal with Javascript. And RISC-V will have those kinds of instructions too; they’re too useful, saving a massive amount of instructions and cycles, and the CPU itself doesn’t really need any logic added: the insn decoder just has to be taught a bit pattern and which microops to emit, the APUs can already do it.

              What that instruction will never do in a RISC CPU though is read from memory.

              On the flipside, some RISC-V macroops are CISC, fusing memory access and arithmetic. That’s an architecture detail, though, only affecting code to the degree of "if you want to do this stuff, and want it to run faster on some cores, put those instructions in this exact sequence so the core can spot and fuse them".

          • boonhet@lemm.ee

            Nah, the Complex instructions are ridiculously complex and the Reduced ones can still do a lot of stuff.

      • qaz@lemmy.world

        ARM = Advanced RISC Machine

        However, RISC-V is a specific type of RISC, and ARM is not a derivative of RISC-V but of RISC.

        • Rinox@feddit.it

          ARM = Advanced RISC Machine

          Originally Acorn RISC Machine before that

        • Blisterexe

          To clarify for those that might not understand that explanation: RISC is just a type of instruction set; x86 is CISC, but ARM and RISC-V are RISC.

          • sugar_in_your_tea@sh.itjust.works

            Yup. In general:

            • CISC - complex instruction set - you’ll get really exotic operations, like PMADDWD (multiply numbers, then add 16-bit chunks) or the SSE 4.2 string compare instructions (a short sketch follows this list)
            • RISC - reduced instruction set - instead of an instruction for everything, RISC requires users to combine instructions, and specialized extensions are fairly rare
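
            Here’s a tiny sketch of that PMADDWD example via its SSE2 intrinsic (my own illustration; any x86-64 GCC/Clang should build it):

            ```c
            // PMADDWD in one call: multiply eight pairs of 16-bit ints and add
            // adjacent products into four 32-bit sums, all in a single instruction.
            #include <emmintrin.h>   // SSE2
            #include <stdio.h>

            int main(void) {
                __m128i a = _mm_set_epi16(8, 7, 6, 5, 4, 3, 2, 1);
                __m128i b = _mm_set1_epi16(1);
                __m128i sums = _mm_madd_epi16(a, b);   // compiles to PMADDWD

                int out[4];
                _mm_storeu_si128((__m128i *)out, sums);
                printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);  // 3 7 11 15
                return 0;
            }
            ```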

            Modern CISC CPUs often (usually? always?) have a RISC design behind the CISC interface; it just translates CISC -> RISC for processing. RISC CPUs tend to have more user-accessible cores, so the user/OS handles sending instructions. CISC can be faster for complex operations since you have fewer round-trips to the CPU, whereas RISC can handle more instructions simultaneously due to more cores, so big, diverse workloads may see better throughput. Basically, it’s the old argument of bandwidth vs latency.

            • areyouevenreal@lemm.ee

              Except modern ARM chips are actually CISC too. Also microcode isn’t strictly RISC either. It’s a lot more complex than you are thinking.

              There are some RISC characteristics ARM has kept like load-store architecture and fixed width instructions. However it’s actually more complex in terms of capabilities and instructions than pretty much all earlier CISC systems, as early CISC systems did not have vector units and instructions for example.

              • sugar_in_your_tea@sh.itjust.works

                Yeah, they’ve gotten a bit bloated, but ARM is still a lot simpler than x86. That’s why ARM is usually higher core count, because they don’t have as many specialized circuits. That’s good for some use cases (servers, low power devices, etc), and generally bad for others (single app uses like gaming and productivity), though Apple is trying to bridge that gap.

                But yeah, ARM and x86 are a lot more similar today than they were 10 years ago. There’s still a distinct difference though, but RISC-V is a lot more RISC than ARM.

          • areyouevenreal@lemm.ee

            It’s not just a separate product line. It’s a different architecture. Not made by the same companies either, so ARM aren’t involved at all. It’s actually a competitor to ARM64.

            • sugar_in_your_tea@sh.itjust.works

              Exactly. That’s what I meant by “different product line,” like how Honda makes both cars and motorcycles, they may share similar underlying concepts (e.g. combustion engines), but they’re separate things entirely.

              And since RISC-V is open source, the discussion about companies is irrelevant. AMD could make RISC-V chips if it wants, and they do make ARM chips. Same company, three different product lines. Intel also makes ARM chips, so the same is true for them.

              • areyouevenreal@lemm.ee

                Since when did AMD make ARM chips? Also they aren’t as different as a motorcycle and a car. It’s more like compression ignition vs spark ignition. They are largely used in the same applications (or might be in the future), although some specific use cases work better with one or the other. Much like how cars can use either petrol or diesel, but a large ship is better off with compression ignition and a motorcycle with spark ignition.

                • sugar_in_your_tea@sh.itjust.works

                  At least 10 years now, and they’re preparing to make ARM PC chips.

                  Also they aren’t as different as a motorcycle and a car. It’s more like compression ignition vs spark ignition.

                  I tried to keep it relatively simple. They have different use cases like cars vs motorcycles, and those use cases tend to lead to different focuses. We can compare in multiple ways:

                  X86 like motorcycle:

                  • more torque (higher clock speeds, better IPC)
                  • single or dual rider - fewer, faster cores
                  • less complicated (less stuff on the SOC), but more intricate (more pipelining)

                  ARM like motorcycle:

                  • simpler engine - less pipelining, smaller area, less complex cooling
                  • simpler accessories - the engine is a SOC, but you can attach a sidecar (coprocessor) or trailer, but your options are pretty limited (unlike x86 where a lot of stuff is still outside the CPU, but that’s changing)

                  The engines (microarch) aren’t that different, but they target different types of customers. You could throw a big motorcycle engine into a car, and maybe put a small car engine into a motorcycle, but it’s not going to work as well. So the form factor (ISA) is the main difference here.

                  But yeah, diesel vs gasoline is also a decent example, but that kind of begs the question as to where RISC-V fits in (in my example, it would be a DIY engine kit, where it can scale from motorcycles to cars to trucks to ships, if you pick the right pieces).

  • neclimdul@lemmy.world

    When the first person opens their new laptop:

    “RISC architecture is going to change everything”

  • suction@lemmy.world

    Managers at big companies: “No we will not buy any products that have ‘Risc’ in them…if someone gets hacked we’ll take all the blame!”

  • smileyhead@discuss.tchncs.de

    Now imagine we only had Windows: no one would create such a thing, because Windows and its programs wouldn’t have support for it.

  • nameisnotimportant@lemmy.ml

    Great, I’d be glad if they would consider shipping to more countries as well, with localized keyboards.

    • dezmd@lemmy.world

      I mean, they at least offer blank + clear ANSI and blank + clear ISO keyboard options alongside their 14 other keyboard formats.

      • nameisnotimportant@lemmy.ml

        Yes that’s amazing — but a blank keyboard is not for everyone.

        Moreover, even if I try to cope with this setup, I still cannot receive the laptop and I’d have to use a power adapter

          • nameisnotimportant@lemmy.ml

            Yes, and so what?

            As surprising as it may seem, some might still want to use the supplied charger because they don’t have spare ones powerful enough for the laptop.

            I have a Macbook with MagSafe and 6 USB-C phone / small-device chargers. None of them could power a Frame.work, so I can’t just use another charger simply because it’s USB-C.

            • erwan@lemmy.ml

              Just buy one if you need one. I don’t understand people who prefer forced bundles over deciding what to buy.

              Unless you think an included charger is free. It’s not; it’s factored into the price.

              • nameisnotimportant@lemmy.ml

                Unless you think an included charger is free

                Spot on, last time I bought a laptop it came with a charger, so that’s why I was referring to this and why I was concerned about its compatibility with my power plugs.

                As I’ve been unable to order a frame.work yet, I wasn’t aware that frame.work doesn’t include a charger by default, so your point makes perfect sense.

                In that case I’ll probably end up buying a charger, because none of the ones in my possession can cope with the watts required.

  • Toes♀@ani.social

    Any information on the GPU they are pairing with it?

    Does anyone know if it’s possible to use a regular AMD or Nvidia GPU with it?

    • braindefragger@lemmy.world

      This is not for someone to daily drive. You’d probably get better performance duct-taping a Raspberry Pi to a Bluetooth keyboard and a 7-inch Pi display.

      • Toes♀@ani.social

        haha, that doesn’t answer the question at all. But I appreciate you.

        • zelifcam@lemmy.world

          It does actually.

          Edit: It’s an article about how a company is going to assist in providing RISC-V dev boards to Framework. It’s not about a consumer-ready product with a dedicated GPU.

    • morhp@lemmynsfw.com

      The GPU inside the processor/soc has the following specifications:

      • Imagination BXE-4-32 GPU with support for OpenCL 1.2, OpenGL ES 3.2, Vulkan 1.2
      • Video Decoder – H.265, H.264 4K @ 60fps or 1080p @ 30fps, MJPEG
      • Video Encoder – H.265/HEVC Encoder, 1080p @ 30fps

      I don’t think you’ll be able to use a separate/external GPU with it. Thunderbolt support is highly unlikely and that processor has only 1 or 2 PCIe lanes (depending how USB is connected), which is likely already used for WiFi.

    • Technus

      The processor it’s using is linked in the article: https://www.cnx-software.com/2022/08/29/starfive-jh7110-risc-v-processor-specifications/

      It’s a system-on-chip (SoC) design with an embedded GPU, the Imagination BXE-4-32, which appears to be designed mainly for smart TVs and set-top boxes.

      The SoC itself only has two PCIe 2.0 lanes on separate interfaces so you can’t use both for the same device, and one is shared with the USB 3.0 interface.

      That’s not even enough bandwidth to drive an entry-level notebook GPU from over a decade ago. Seriously: the GeForce GT 520M, launched January 2011, wants a full PCIe 2.0 x16 interface. Same with the Radeon HD 6330M. You could probably get away with just 8 lanes if you had to, but not with only one.
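
      For rough scale (standard PCIe figures, not from the article): a PCIe 2.0 lane runs at 5 GT/s with 8b/10b encoding, roughly 500 MB/s of usable bandwidth per direction, so a single lane offers about 1/16th of the ~8 GB/s an x16 card is designed around.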

      The other commenter wasn’t kidding by saying you could get more power out of a Raspberry Pi 4. It’s even mentioned in the article.

      • morhp@lemmynsfw.com

        Seriously: the GeForce GT 520M, launched January 2011, wants a full PCIe 2.0 x16 interface. Same with the Raedeon HD 6330M. You could probably get away with just 8 lanes if you had to, but not only one.

        Connecting a GPU with just one PCIe lane isn’t the biggest problem. You’ll just slow down data exchange between the CPU and GPU (mostly loading textures and vertex positions).

        If your game mostly relies on shaders and renders lots of rather static stuff, you’ll mostly just get longer loading times but FPS shouldn’t suffer too much.

        • Technus

          Given how much modern games stream data in and out of VRAM, I think it would actually be quite a significant issue. Although, for modern games the 520M would probably be below minimum requirements anyway. It was just to illustrate my point.

          • morhp@lemmynsfw.com

            It would obviously be “an issue” and drastically reduce performance in many cases, but compared to the built-in iGPU, you’d probably still get much better performance for lots of applications.