Intel doesn't make its own discrete GPU, but it has built something that specializes in processing 4K graphics. It won't run Crysis, though, if you were wondering.
The chipmaker showed off its Intel Visual Compute Accelerator 2 at the NAB show in Las Vegas this week. It has the build of a GPU but is designed for server applications and not for PCs.
The VCA 2 is aimed at streaming 4K video, graphics, and virtual-reality content from the cloud. Servers with the accelerator installed could be used to stream or broadcast video.
The VCA 2 pairs three Intel Xeon E3-1500 v5 processors with the 4K-capable Iris Pro Graphics P580. The P580 also appears in Skull Canyon, Intel's gaming-oriented mini-PC.
Like a GPU, the VCA 2 plugs into a PCI-Express 3.0 slot, but it is not meant to serve as a PC's main CPU or GPU.
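For a sense of the kind of work such a card offloads, here's a minimal sketch of hardware-accelerated 4K transcoding on Intel graphics. Intel hasn't detailed the VCA 2's software stack, so this is only an illustration: it assumes an ffmpeg build with Intel Quick Sync (QSV) support, and the file names are hypothetical.

```python
import subprocess

# Illustrative only: not the VCA 2's actual software stack.
# Assumes an ffmpeg build with Intel Quick Sync (QSV) support;
# input/output file names are hypothetical.
def transcode_4k_qsv(src: str, dst: str, bitrate: str = "15M") -> None:
    """Transcode a 4K clip to HEVC on Intel graphics hardware via QSV."""
    cmd = [
        "ffmpeg",
        "-hwaccel", "qsv",       # decode on the graphics chip's video engine
        "-c:v", "h264_qsv",      # hardware H.264 decoder
        "-i", src,
        "-c:v", "hevc_qsv",      # hardware HEVC encoder
        "-b:v", bitrate,         # target bitrate for the 4K stream
        dst,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    transcode_4k_qsv("input_4k.mp4", "output_4k_hevc.mp4")
```

The point is the division of labor: the host CPU merely orchestrates, while decode and encode run on the graphics hardware's dedicated video engines, which is the role a card like the VCA 2 plays inside a streaming server.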
It uses year-old GPU technology rather than the newer integrated graphics found in Intel's 7th Generation Core chips.
You won't be able to buy the VCA 2 off the shelf; instead, it will be sold directly to server and device makers. Intel declined to comment on when it will be available.
The VCA 2 is an upgrade from its predecessor, simply called the Visual Compute Accelerator. The original VCA used the older Iris Pro Graphics P6300, found in Broadwell chips, along with older Xeon E3 v4 processors.
At NAB, Haivision showed its KB 4K Encoder, which is powered by Intel's VCA 2. The device helped stitch and stream 360-degree content captured by Nokia's OZO Live 4K virtual-reality camera, with the results delivered to VR headsets.
Intel isn't really known for its GPU prowess, though the integrated graphics in some Kaby Lake chips support 4K. PCs with Intel's integrated GPUs can handle VR content but have a long way to go before they compete with AMD or Nvidia GPUs.
Larrabee, from 2010, was Intel's first attempt at a discrete high-end GPU, but the product was unceremoniously abandoned after a prototype was shown. Technical challenges led to its cancellation, though its byproducts lived on in the Xeon Phi supercomputing chips and in Intel's integrated graphics.
A full-fledged GPU still doesn't exist in Intel's arsenal, and that's a big hole given the growing popularity of gaming, virtual reality, and machine learning. Nvidia and AMD GPUs power most servers used for deep learning, natural language processing, and other machine-learning tasks.
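To make that machine-learning point concrete, here's a minimal sketch of the kind of math those servers spend their time on. PyTorch is an assumed framework here, not one the article mentions; the snippet falls back to the CPU if no CUDA-capable GPU is present.

```python
import torch

# Illustrative only: why deep-learning servers lean on GPUs.
# Assumes PyTorch is installed; uses the CPU if no CUDA GPU is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Deep-learning workloads boil down to large matrix multiplications,
# which GPUs spread across thousands of parallel cores.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b  # runs on the GPU when one is present

print(f"Computed a 4096x4096 matrix product on: {device}")
```

On a CPU alone, multiplications at this scale become the bottleneck, which is why server buyers shopping for machine learning look to Nvidia and AMD rather than Intel's integrated graphics.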
Intel has said FPGAs can mimic GPUs to an extent. But GPUs are much more flexible than FPGAs, which are typically configured to accelerate one specific task at a time.