I found this on Reddit, which I'm reluctant to cite here [1]; anyway, the comments and findings were about as vague as Apple claiming they beat the Nvidia RTX 3090 with that fancy chart.
Regardless, all of Apple's current lineup, including the MacBooks, Mac mini, and Mac Studio with the Max chip, comes with a 16-core Neural Engine, and the Ultra comes with a 32-core Neural Engine.
What does it actually do? Beyond the marketing claims, it's nothing but vague stuff only accessible to Apple's proprietary apps: Finder, FaceTime, Final Cut Pro…
And judging from the schematic diagram of the Apple M-series SoC, the Neural Engine occupies a significant amount of die area.
Do PyTorch and other ML frameworks actually utilize those 16/32 cores?
[1] https://www.reddit.com/r/apple/comments/122iqf4/everything_we_actually_know_about_the_apple/
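On the PyTorch question above: as far as I know, PyTorch's Apple-silicon acceleration goes through the `mps` backend, which runs on the GPU via Metal, not on the Neural Engine. A minimal sketch of how you'd check for it (guarded so it also runs where torch isn't installed):

```python
# Sketch: check whether PyTorch can use Apple-silicon acceleration.
# Note: the "mps" backend targets the GPU via Metal Performance Shaders;
# to my knowledge no public PyTorch backend targets the Neural Engine.
try:
    import torch
    has_mps = torch.backends.mps.is_available()
except ImportError:
    has_mps = False  # torch not installed in this environment

device = "mps" if has_mps else "cpu"
print(device)
```

So even on an M-series Mac, plain PyTorch training/inference would light up the GPU cores, not the 16/32-core Neural Engine.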
Tasks the Apple Neural Engine Takes Responsibility For
It’s time to dive into just what sort of jobs the Neural Engine takes care of. As previously mentioned, every time you use Face ID to unlock your iPhone or iPad, your device uses the Neural Engine. When you send an animated Memoji message, the Neural Engine is interpreting your facial expressions.
That’s just the beginning, though. Cupertino also employs its Neural Engine to help Siri better understand your voice. In the Photos app, when you search for images of a dog, your iPhone does so with ML (hence the Neural Engine).
Initially, the Neural Engine was off-limits to third-party developers. It couldn’t be used outside of Apple’s own software. In 2017, though, Cupertino released the Core ML API to developers in iOS 11. That’s when things got interesting.
The CoreML API allowed developers to start taking advantage of the Neural Engine. Today, developers can use CoreML to analyze video or classify images and sounds. It’s even able to analyze and classify objects, actions and drawings.
https://www.macobserver.com/tips/deep-dive/what-is-apple-neural-engine/
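The Core ML route the article describes is also the main documented way for third-party code to reach the Neural Engine: you convert a model with `coremltools` and let Core ML schedule it across CPU/GPU/ANE. A hedged sketch (the tiny `Linear` model is just a placeholder, and the whole thing is guarded in case torch/coremltools aren't installed):

```python
# Sketch: convert a toy PyTorch model to Core ML and allow Neural Engine
# execution. Requires torch and coremltools; falls back gracefully otherwise.
try:
    import torch
    import coremltools as ct

    model = torch.nn.Sequential(torch.nn.Linear(4, 2)).eval()
    example = torch.rand(1, 4)
    traced = torch.jit.trace(model, example)

    mlmodel = ct.convert(
        traced,
        inputs=[ct.TensorType(shape=example.shape)],
        # ComputeUnit.ALL lets Core ML pick CPU, GPU, or Neural Engine;
        # CPU_AND_NE restricts it to CPU + ANE. There is no public switch
        # that *forces* ANE-only execution.
        compute_units=ct.ComputeUnit.ALL,
    )
    converted = True
except Exception:
    converted = False  # torch/coremltools missing, or platform unsupported

print(converted)
```

Note that even then, Core ML decides per-layer where the model actually runs, so there's no guarantee a given model ever touches the ANE.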
This thread claims the Neural Engine only works on models through private frameworks (fuck you for that, Apple), but there are ways to play with it outside of Xcode.
From what I’ve read, it’s mainly for AI and machine learning jobs. Some common uses are specific photo-editing tasks like AI noise reduction (or other AI-based editing); video encoding also makes use of the Neural Engine.
Aside from that I’m really not sure.