I left the headline as it was in the original, but I see this as a massive win for Apple. The device is ridiculously expensive, isn’t even on sale yet, and already has 150 apps designed specifically for it.

If Google did this, it wouldn’t get 150 dedicated apps even years after launch (and its guaranteed demise), not even if it were something super cheap, like being made of fucking cardboard.

This is something that, as an Android user, I really envy about the Apple ecosystem.

Apple: here’s a new feature => devs implement it in their apps the very next day, even if it officially launches in 6 months.

Google: here’s a new feature => devs ignore it; apps only start supporting it 5-6 Android versions later.

    • jarfil@beehaw.org · 10 months ago

      because when I use Safari, Notes and Word what I REALLY need is augmented reality

      You may not realize it, but you actually want AR for everything: pick up some coffee, read some news, take some notes, write them into a document… while still sipping your coffee, and no computers in sight.

      AR is not the tiny dancing characters you see through your phone’s camera; that’s a silly gimmick. AR is the equivalent of picking up a bunch of sheets of paper and having each one display a different app, except without any paper, without taking up any physical space, and without buying more devices to fill your workspace.

        • jarfil@beehaw.org · 10 months ago

          As strange as looking at your monitor instead of buying a newspaper that you can take to the bathroom and then reuse when you’re done.

          Having monitors, screens, and other displays scattered around will be as backwards as the newspaper thing. Why even buy a monitor when you have all the virtual monitors you could ever want right there on your head?

            • jarfil@beehaw.org · 10 months ago

              We’re talking about a specific device

              I was talking about AR, not a specific device.

              Jesus, Mac fanboys are just the worst…

              Right… thanks, but no thanks.

                • jarfil@beehaw.org · 10 months ago

                  Magic Leap fell into the same trap as many VR/AR projects before it: let the marketing department overpromise, then watch customers get disappointed when the product underdelivers. Don’t get me wrong, I also think the Apple Vision Pro is overpromising, and that Apple will get hit hard for it.

                  Still, most people would jump at the opportunity of shitting in the woods, or on the moons of Jupiter, or in their favorite fantasy porn den… it’s part of why making appealing marketing for this stuff is so easy: people love to get carried away by gimmicks.

                  And yet, none of that changes the actual utility of AR, which, implemented correctly, goes far beyond a gimmick and becomes life-changing.

                  It just needs to pass a single filter: human capabilities. In particular, vision and balance perception.

                  Vision

                  Vision is, ironically, a pretty low and a pretty high bar at the same time: the optic nerve only carries about 1M signals, which is roughly 640×480×3, so a VGA display could in principle fool it. At the same time, the eye scans its surroundings with a fovea resolving the equivalent of 60 pixels per degree, over a field of view of about 180° horizontal × 135° vertical.
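
                  For the skeptics, the back-of-the-envelope arithmetic (a quick sketch; ~1M is the usual ballpark for optic nerve axons):

                  ```python
                  # A VGA display has roughly as many subpixels as the
                  # optic nerve has fibres.
                  vga_subpixels = 640 * 480 * 3      # 921,600 RGB subpixels
                  optic_nerve_fibres = 1_000_000     # rough axon count per optic nerve
                  print(vga_subpixels / optic_nerve_fibres)   # ~0.92, i.e. "about 1M"
                  ```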

                  The Magic Leap 2 has a 45°×55° FOV (70° diagonal) with a 1440×1760 per-eye display, giving it about 30 PPD, or roughly 1/4 of human vision by area, and a very limited viewing area.

                  The Apple Vision Pro claims a 110° FOV (presumably diagonal) with 4K displays, i.e. 2160×3840 per eye… which works out to around 40 PPD, or about 1/2 of human vision by area, with a still quite small viewing area.
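
                  Putting those ratios side by side (a rough sketch; the Vision Pro resolution and diagonal FOV are my assumptions from above, not confirmed specs):

                  ```python
                  import math

                  # Magic Leap 2: 1440x1760 over a 45x55 degree FOV
                  ml2_ppd = 1440 / 45                      # = 1760 / 55 = 32, "about 30 PPD"
                  # Vision Pro (assumed): 2160x3840 over a ~110 degree diagonal
                  avp_ppd = math.hypot(2160, 3840) / 110   # ~4406 px diagonal / 110° ≈ 40

                  human_ppd = 60  # foveal acuity, roughly
                  print((ml2_ppd / human_ppd) ** 2)   # ~0.28 -> "about 1/4" of human vision
                  print((avp_ppd / human_ppd) ** 2)   # ~0.45 -> "about 1/2" of human vision
                  ```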

                  Matching human vision, with its roughly 180°×135° FOV at 60 PPD, would require static displays in the range of 10800×8100 px.
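
                  Which is just FOV times PPD:

                  ```python
                  # Display needed to cover the full human FOV at foveal
                  # resolution, using the assumptions above.
                  ppd = 60
                  fov_h, fov_v = 180, 135
                  px_h, px_v = fov_h * ppd, fov_v * ppd
                  print(px_h, px_v)              # 10800 8100
                  print(px_h * px_v / 1e6)       # ~87.5 megapixels... per eye
                  ```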

                  Balance

                  Balance perception has to do with visual feedback and the vestibulo-ocular reflex… which is informed on one side by the vestibular system, which barely reacts above 10 Hz, and on the other by the retinal cones, which can react at up to 400 Hz!

                  The idea of pre-scanning the environment in the Magic Leap and Apple Vision Pro looks like a step in the right direction, allowing the system to pre-render images slightly into the future, adapted to the probable environment… but I think they’ll still get smashed against the 400 Hz barrier.

                  Meaning a static display system would need a couple of 16K HDR screens running at 480 Hz… which is way beyond anything being sold or even planned right now. There have been alternative technical approaches, like eye tracking combined with projecting directly onto the retina, but they still seem to have most of the same limitations.
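
                  To put that in perspective (a rough estimate, assuming 30-bit HDR color and no compression):

                  ```python
                  # Uncompressed bandwidth for one such hypothetical panel:
                  # ~10800 x 8100 px, 30-bit HDR color (assumed), 480 Hz.
                  px_per_frame = 10_800 * 8_100            # ~87.5 MP
                  bits_per_frame = px_per_frame * 30
                  gbit_s = bits_per_frame * 480 / 1e9
                  print(round(gbit_s))                     # ~1260 Gbit/s, per eye
                  # For scale, DisplayPort 2.0 tops out around 80 Gbit/s,
                  # so that's over 15x beyond today's fastest display links,
                  # and the frame budget at 480 Hz is only ~2.1 ms.
                  ```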

                  So… 10 years into the future, you said? Maybe. I got an Oculus DK1 about 10 years ago… then promptly went partly blind in one eye… but I still got to see what 640×800 per eye at below 10 PPD and 60 Hz looked like (like crap, and it made a lot of people vomit).

                  10 years sounds like the timeframe for wide adoption, with people out on the street in their AR goggles, some in groups with their virtual friends, some on a peaceful meadow with no one in sight, some with their IRL families or friends, and any mix of the above.