• PhobosAnomaly@feddit.uk · 1 year ago

    Awesome, thanks for the insight.

    I’m showing my age here, but much like we had math coprocessors running beside 286- and 386-generation CPUs to take on floating-point operations, and then graphics cards offloaded geometry math to GPUs - are we looking at AI-specific dies or chips to handle AI functions?

    Excuse my oversimplification, this isn’t my field of expertise!

    • beefcat@beehaw.org · 1 year ago

      Not a dedicated chip per se; the trend is to build it directly into the SoC (on mobile devices) or into the dedicated GPU.

    • Kevin Herrera@beehaw.org · 1 year ago

      A while back, Apple added what they call the “Neural Engine”: hardware dedicated to the efficient execution of ML workloads.

      https://en.m.wikipedia.org/wiki/Apple_A11

      They have been refining it ever since. I would not be surprised if they made advancements in both the hardware and the software used for local generative AI.