Apple is reportedly developing a dedicated chip for devices like the iPhone that would handle artificial intelligence tasks, such as facial and speech recognition, according to a report on Friday.
Referred to internally as “Apple Neural Engine,” the silicon is Apple’s attempt to leapfrog rivals in the burgeoning AI market, which has surged over the past year with products like Amazon’s Alexa and Google Assistant. According to a source familiar with the matter, the chip is designed to handle complex tasks that would otherwise require human intelligence to accomplish, Bloomberg reports.
Though Apple devices already sport forms of AI technology — the Siri virtual assistant and basic computer vision assets — a dedicated chip would further improve user experience. In addition, offloading AI-related computational processing from existing A-series SoCs could improve the battery life of portable devices like iPhone and iPad. If it comes to fruition, the strategy would be similar to chips introduced by competing manufacturers, including Google and its Tensor Processing Unit.
Apple has tested Apple Neural Engine in prototype iPhones, and is thinking about offloading core applications including Photos facial recognition, speech recognition and the iOS predictive keyboard to the chip, the report says. The source claims Apple plans to open up third-party developer access to the AI silicon, much like APIs for other key hardware features like Touch ID.
Whether the chip will be ready in time for inclusion in an iPhone revision later this year is unknown, though today’s report speculates Apple could announce work on Apple Neural Engine at WWDC next month.
Apple’s interest in AI, and related augmented reality tech, is well documented. CEO Tim Cook has on multiple occasions hinted that Apple-branded AR solutions are on the horizon. The company has been less forthcoming about its ambitions for AI.
That cloak of secrecy is slowly lifting, however. At a conference last year, Apple Director of Artificial Intelligence Research Russ Salakhutdinov said employees working on AI research are now allowed to publish their findings and interface with academics in the field. Some believe the shift in company policy was designed to retain high-value talent, as many researchers prefer to discuss their work with peers.
Just weeks after the IP embargo lifted, Apple published its first AI research paper focusing on advanced methods of training computer vision algorithms to recognize objects using synthetic images.
Apple has been aggressively building out its artificial intelligence and augmented reality teams through acquisitions and individual hires. Last August, for example, the company snapped up machine learning startup Turi for around $200 million. That purchase came less than a year after Apple bought another machine learning startup, Perceptio, and natural language processing firm VocalIQ to bolster in-house tech like Siri and certain facets of iOS, macOS, tvOS and CarPlay.
Earlier this year, Apple was inducted into the Partnership on AI as a founding member, with Siri co-founder and Apple AI expert Tom Gruber named to the group’s board of directors.
Most recently, Apple in February revealed plans to expand its Seattle offices, which act as a hub for the company’s AI research and development team. The company is also working on “very different” AI tech at its R&D facility in Yokohama, Japan.