Intel has showcased its next-generation Xeon Phi chip for Artificial Intelligence (AI). Code-named Knights Mill, the chip is geared toward high-performance deep-learning applications.
Intel said the chip would add more floating-point calculation capability, which is important for powering machine learning algorithms. The company did not reveal further technical details about Knights Mill. Intel says its processors already power 97 percent of the servers deployed to support machine learning workloads. According to the chipmaker, however, only 7% of all servers are being used for machine learning, and only 0.1% are running deep neural nets, a subset of machine learning that emulates the neurons and synapses of the brain to make sense of unstructured data.
However, Nvidia has challenged Intel's claims about the performance of its Xeon Phi cards. Intel recently published benchmarks comparing Xeon Phi with Nvidia's technology, claiming that the Xeon Phi processor is 2.3 times faster than Nvidia GPUs for machine learning algorithms. Nvidia has disputed these claims, saying that most of the benchmarking relied on outdated software and hardware that doesn't offer a true side-by-side comparison. Nvidia claimed that if Intel had used the latest technology, Nvidia's GPUs would train machine learning models 30% faster than Intel's chips.
Intel said the Xeon Phi chip would be out sometime in 2017. Earlier this month, the company acquired machine learning startup Nervana.