An AI accelerator is a class of microprocessor[1] or computer system[2] designed as hardware acceleration for artificial intelligence applications, especially artificial neural networks, machine vision and machine learning. Typical applications include algorithms for robotics, internet of things and other data-intensive or sensor-driven tasks.[3] They are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures or in-memory computing capability.[4] A number of vendor-specific terms exist for devices in this category, and it is an emerging technology without a dominant design. AI accelerators are found in many consumer devices, including smartphones, tablets, and personal computers; see the section titled "Examples" below.


History of AI acceleration

Computer systems have frequently complemented the CPU with special-purpose accelerators for specialized tasks, known as coprocessors. Notable application-specific hardware units include video cards for graphics, sound cards, graphics processing units and digital signal processors. As deep learning and artificial intelligence workloads rose in prominence in the 2010s, specialized hardware units were developed or adapted from existing products to accelerate these tasks.

Early attempts

As early as 1993, digital signal processors were used as neural network accelerators, e.g., to accelerate optical character recognition software.[5] In the 1990s, there were also attempts to create parallel high-throughput systems for workstations aimed at various applications, including neural network simulations.[6][7][8] FPGA-based accelerators were also first explored in the 1990s for both inference[9] and training.[10] ANNA was a neural net CMOS accelerator developed by Yann LeCun.[11]

Heterogeneous computing

Heterogeneous computing refers to incorporating a number of specialized processors in a single system, or even a single chip, each optimized for a specific type of task. Architectures such as the Cell microprocessor[12] have features significantly overlapping with AI accelerators, including support for packed low-precision arithmetic, dataflow architecture, and prioritizing throughput over latency. The Cell microprocessor was subsequently applied to a number of tasks,[13][14][15] including AI.[16][17][18]

In the 2000s, CPUs also gained increasingly wide SIMD units, driven by video and gaming workloads, as well as support for packed low-precision data types.[19]
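
The payoff of packed low-precision SIMD support is that a single instruction processes many narrow operands at once. The following is a minimal illustrative sketch (not from the cited sources), assuming an x86-64 CPU with AVX2, of the 8-bit dot product idiom used in quantized neural network inference:

```cpp
// Sketch: packed low-precision SIMD on a CPU. Assumes AVX2; compile with -mavx2.
#include <immintrin.h>
#include <cstddef>
#include <cstdint>

// Dot product of an unsigned-int8 vector with a signed-int8 vector,
// 32 elements per iteration. _mm256_maddubs_epi16 multiplies unsigned
// bytes of `va` by signed bytes of `vb` and adds adjacent pairs into
// 16-bit lanes (with saturation); _mm256_madd_epi16 widens to 32 bits.
int32_t dot_u8_i8(const uint8_t* a, const int8_t* b, std::size_t n) {
    __m256i acc = _mm256_setzero_si256();
    const __m256i ones = _mm256_set1_epi16(1);
    for (std::size_t i = 0; i + 32 <= n; i += 32) {
        __m256i va = _mm256_loadu_si256(reinterpret_cast<const __m256i*>(a + i));
        __m256i vb = _mm256_loadu_si256(reinterpret_cast<const __m256i*>(b + i));
        __m256i pairs = _mm256_maddubs_epi16(va, vb);                 // 16 x int16
        acc = _mm256_add_epi32(acc, _mm256_madd_epi16(pairs, ones));  // 8 x int32
    }
    alignas(32) int32_t lane[8];
    _mm256_store_si256(reinterpret_cast<__m256i*>(lane), acc);
    int32_t sum = 0;
    for (int32_t v : lane) sum += v;   // reduce the 8 lanes (tail elements omitted)
    return sum;
}
```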

Use of GPU

Graphics processing units or GPUs are specialized hardware for the manipulation of images and calculation of local image properties. The mathematical bases of neural networks and image manipulation are similar, embarrassingly parallel tasks involving matrices, leading GPUs to become increasingly used for machine learning tasks.[20][21][22] As of 2016, GPUs are popular for AI work, and they continue to evolve in a direction facilitating deep learning, both for training[23] and for inference in devices such as self-driving cars.[24] GPU developers such as Nvidia are also adding connective capability, such as NVLink, for the kind of dataflow workloads AI benefits from.[25] As GPUs have been increasingly applied to AI acceleration, GPU manufacturers have incorporated neural-network-specific hardware to further accelerate these tasks.[26][27] Tensor cores are intended to speed up the training of neural networks.[27]
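
To see why this workload suits GPUs, note that each output element of a matrix product depends only on one row and one column of the inputs, so all output elements can be computed independently. A minimal C++ sketch (illustrative only; the function name is ours):

```cpp
// Sketch: why dense neural-network math maps well onto GPUs. Every element of
// C = A * B is independent of the others, so a GPU can assign one hardware
// thread per (i, j) pair; the two outer loops below simply disappear.
#include <cstddef>
#include <vector>

// C (MxN) = A (MxK) * B (KxN), all row-major.
void matmul(const std::vector<float>& A, const std::vector<float>& B,
            std::vector<float>& C, std::size_t M, std::size_t K, std::size_t N) {
    for (std::size_t i = 0; i < M; ++i)        // on a GPU: thread index i
        for (std::size_t j = 0; j < N; ++j) {  // on a GPU: thread index j
            float acc = 0.0f;
            for (std::size_t k = 0; k < K; ++k)
                acc += A[i * K + k] * B[k * N + j];
            C[i * N + j] = acc;                // no thread writes another's output
        }
}
```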

Use of FPGAs

Deep learning frameworks are still evolving, making it hard to design custom hardware. Reconfigurable devices such as field-programmable gate arrays (FPGA) make it easier to evolve hardware, frameworks and software alongside each other.[9][10][28]

Microsoft has used FPGA chips to accelerate inference.[29][30] The application of FPGAs to AI acceleration motivated Intel to acquire Altera with the aim of integrating FPGAs in server CPUs, which would be capable of accelerating AI as well as general purpose tasks.[31]

Emergence of dedicated AI accelerator ASICs

While GPUs and FPGAs perform far better[quantify] than CPUs for AI-related tasks, a factor of up to 10 in efficiency[32][33] may be gained with a more specific design, via an application-specific integrated circuit (ASIC).[citation needed] These accelerators employ strategies such as optimized memory use[citation needed] and the use of lower-precision arithmetic to accelerate calculation and increase throughput of computation.[34][35] Low-precision floating-point formats adopted for AI acceleration include half-precision and the bfloat16 floating-point format.[36][37][38][39][40][41][42]
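
As a concrete illustration of the bfloat16 format mentioned above: bfloat16 keeps float32's sign bit and 8-bit exponent but truncates the 23-bit mantissa to 7 bits, so conversion amounts to keeping the top 16 bits of the float32 bit pattern. A minimal sketch (ours, not from the cited sources; NaN handling omitted):

```cpp
// Sketch: bfloat16 <-> float32 conversion. bfloat16 is the high half of a
// float32: 1 sign bit, 8 exponent bits, 7 mantissa bits.
#include <cstdint>
#include <cstring>

uint16_t float_to_bfloat16(float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits));               // reinterpret float bits
    uint32_t rounding = 0x7FFFu + ((bits >> 16) & 1u);  // round to nearest even
    return static_cast<uint16_t>((bits + rounding) >> 16);
}

float bfloat16_to_float(uint16_t h) {
    uint32_t bits = static_cast<uint32_t>(h) << 16;     // restore to the high half
    float f;
    std::memcpy(&f, &bits, sizeof(f));
    return f;
}
```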

In-memory computing architectures

In June 2017, IBM researchers announced an architecture, in contrast to the von Neumann architecture, based on in-memory computing and phase-change memory arrays applied to temporal correlation detection, with the intent to generalize the approach to heterogeneous computing and massively parallel systems.[43] In October 2018, IBM researchers announced an architecture based on in-memory processing and modeled on the human brain's synaptic network to accelerate deep neural networks.[44] The system is based on phase-change memory arrays.[45]
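
The principle behind such in-memory computing can be sketched as a resistive crossbar performing a matrix-vector product in place: weights are stored as conductances, input voltages are applied to the columns, and by Ohm's and Kirchhoff's laws each row wire sums the resulting currents. A simplified simulation (our illustration of the general principle, not IBM's design):

```cpp
// Sketch: analog in-memory matrix-vector multiply on a resistive crossbar.
// G[i][j] is the programmed conductance at row i, column j; V[j] is the
// voltage applied to column j. The current collected on row wire i is
// I[i] = sum_j G[i][j] * V[j], i.e. the memory array computes I = G * V.
#include <cstddef>
#include <vector>

std::vector<double> crossbar_mvm(const std::vector<std::vector<double>>& G,
                                 const std::vector<double>& V) {
    std::vector<double> I(G.size(), 0.0);
    for (std::size_t row = 0; row < G.size(); ++row)
        for (std::size_t col = 0; col < V.size(); ++col)
            I[row] += G[row][col] * V[col];   // currents sum on the row wire
    return I;
}
```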


Nomenclature

As of 2016, the field is still in flux and vendors are pushing their own marketing terms for what amounts to an "AI accelerator", in the hope that their designs and APIs will become the dominant design. There is no consensus on the boundary between these devices, nor on the exact form they will take; however, several examples clearly aim to fill this new space, with a fair amount of overlap in capabilities.

In the past, when consumer graphics accelerators emerged, the industry eventually adopted Nvidia's self-assigned term, "the GPU",[46] as the collective noun for "graphics accelerators", which had taken many forms before settling on an overall pipeline implementing a model presented by Direct3D.


Examples

Stand alone products

  • Google's Tensor Processing Unit is an accelerator specifically designed by Google for its TensorFlow framework, which is extensively used for convolutional neural networks. It focuses on a high volume of 8-bit precision arithmetic (see the quantization sketch after this list). The first generation, from 2015, focused on inference, while the second generation, announced in May 2017, added capability for neural network training. The third-generation TPU was announced on 8 May 2018. In July 2018 the Edge TPU was announced; it is Google's purpose-built ASIC chip designed to run its TensorFlow Lite machine learning (ML) models at the edge.[47]
  • Adapteva Epiphany is a many-core coprocessor featuring a network-on-chip scratchpad memory model, suitable for a dataflow programming model, which should be suitable for many machine learning tasks.[citation needed]
  • Intel Nervana NNP (Neural Network Processor) (a.k.a. "Lake Crest"), which Intel claims is the first commercially available chip with a purpose-built architecture for deep learning. Facebook was a partner in the design process.[48][49]
  • Movidius Myriad 2 is a many-core VLIW AI accelerator complemented with fixed-function video units.
  • Mobileye's EyeQ is a processor specialized for vision processing for self-driving cars.[50]
  • NM500 is, as of 2016, the latest in a series of accelerator chips for radial basis function neural nets from General Vision.[51]
  • Kendryte K210 contains a 64-bit RISC-V CPU and the KPU, a general-purpose neural network processor with built-in convolution, batch normalization, activation, and pooling operations.
  • Qualcomm announced the Cloud AI 100, an inference accelerator.
  • Habana Labs' Goya (HL-1000) is for inference and is currently in production. Habana's Gaudi (HL-2000) is for training and is scheduled to sample in Q2 2019.
  • Tesla's FSD chip includes two neural net processing units with 72 trillion operations per second (TOPS).[52]
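
As referenced in the Tensor Processing Unit entry above, 8-bit accelerator arithmetic typically relies on affine quantization, in which a real value x is stored as q = round(x / scale) + zero_point and recovered as x ≈ (q − zero_point) * scale. A minimal sketch of this common scheme (our illustration, not Google's implementation; assumes max_val > min_val; requires C++17):

```cpp
// Sketch: affine 8-bit quantization as commonly used with AI accelerators.
#include <algorithm>
#include <cmath>
#include <cstdint>

struct QuantParams { float scale; int32_t zero_point; };

// Derive scale/zero-point so [min_val, max_val] maps onto [0, 255].
QuantParams choose_params(float min_val, float max_val) {
    float scale = (max_val - min_val) / 255.0f;
    int32_t zp = static_cast<int32_t>(std::lround(-min_val / scale));
    return { scale, std::clamp(zp, 0, 255) };
}

uint8_t quantize(float x, QuantParams p) {
    int32_t q = static_cast<int32_t>(std::lround(x / p.scale)) + p.zero_point;
    return static_cast<uint8_t>(std::clamp(q, 0, 255));  // saturate to 8 bits
}

float dequantize(uint8_t q, QuantParams p) {
    return (static_cast<int32_t>(q) - p.zero_point) * p.scale;
}
```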

GPU based products

FPGA based products

AI accelerating co-processors

  • Qualcomm's Hexagon DSPs have supported AI acceleration since the Snapdragon 820, released in March 2015, via the Qualcomm Snapdragon Neural Processing Engine SDK.[58]
    • Qualcomm's Snapdragon 855 contains its 4th-generation on-device AI engine, including a dedicated Tensor Accelerator.
  • Cadence's Tensilica IP is a family of neural network processors and neural-network-optimized digital signal processor IP cores: the Tensilica Vision P6 was announced in May 2016, the Tensilica Vision C5 DSP was released in May 2017, and the Tensilica Vision Q6 DSP was released in April 2018.[62][63][64] The Tensilica DNA 100 Processor was announced in September 2018.[65] The Tensilica Vision Q7 DSP was announced in May 2019.[66]
  • Imagination Technologies' PowerVR Series2NX NNA (Neural Net Accelerator) is an IP core licensed for integration into chips, first announced in September 2017.[67] In December 2018, the PowerVR Series3NX and Series3NX-F were announced.[68]
  • Apple's Neural Engine is an AI accelerator core within Apple-designed processors. The Apple A11 Bionic SoC,[69] released in September 2017, featured a dual-core Neural Engine. The Apple A12 Bionic SoC, released in September 2018, featured an octa-core Neural Engine.
  • Samsung's Exynos 9820 has an integrated Neural Processing Unit (NPU), which allows the processor to perform AI-related functions seven times faster than its predecessor, expanding the AI capabilities of mobile devices for tasks ranging from photo enhancement to advanced AR features.[70]
  • Cambricon Technologies' Machine Learning Unit (MLU) family of neural processors, such as the MLU-100 and MLU-200.[71]
  • HiSilicon's Neural Processing Unit is a neural network accelerator within HiSilicon's Kirin SoCs. The Kirin 970,[72] with an NPU from Cambricon Technologies, was released in October 2017. The Kirin 980, with a dual-core NPU from Cambricon Technologies, was released in October 2018.
  • Google's Pixel Visual Core (PVC) is a fully programmable image, vision and AI processor for mobile devices, first featured in the Google Pixel 2 released in October 2017.
  • Arm's ML Processor is dedicated IP for accelerating neural network inference, first announced as Project Trillium in January 2018.[73]
  • CEVA's NeuPro family of AI processors. The NP500, NP1000, NP2000 and NP4000 were first announced in January 2018, each containing one programmable vector DSP and one hardwired implementation of 8-bit or 16-bit neural network layers, with performance ranging from 2 to 12.5 TOPS.[74]
  • Universal Multifunction Accelerator (UMA) by Manjeera Digital Systems in Hyderabad is an accelerator with a proprietary architecture based on Middle Stratum Operations.[75][76][77]
  • DinoPlusAI's latency-optimized AI processor platform provides deterministic ultra-low latency alongside computing capacity and power efficiency. The DinoPlusAI processor features a scalable architecture, software stack and user interfaces.

Research and unreleased products

  • In December 2017, Tesla Motors confirmed a rumour that it was developing an AI chip for autonomous driving. Jim Keller worked on this project between at least early 2016 and early 2018.[78]
  • MIT Eyeriss is an accelerator design aimed explicitly at convolutional neural networks, using a scratchpad memory and network-on-chip architecture.[79]
  • Georgia Tech has designed a neuro-inspired processor for performing online reinforcement learning for ultra-low power robotics. It employs mixed-signal design techniques to reduce the operating power.[80]
  • Nullhop is an accelerator designed at the Institute of Neuroinformatics of ETH Zürich and University of Zürich based on sparse representation of feature maps. The second generation of the architecture is commercialized by the university spin-off Synthara Technologies.[81][82]
  • Kalray's MPPA manycore processor includes an accelerator for convolutional neural nets.[83]
  • SpiNNaker is a many-core design specialized for simulating a large neural network.
  • Graphcore IPU is a graph-based AI accelerator.[84]
  • DPU, by Wave Computing, is a dataflow architecture.[85]
  • STMicroelectronics at the start of 2017 presented a demonstrator SoC manufactured in a 28 nm process containing a deep CNN accelerator.[86]
  • TrueNorth is a manycore design based on spiking neurons rather than traditional arithmetic.[87][88]
  • Intel Loihi is an experimental neuromorphic chip.[89]
  • BrainChip in September 2017 introduced a commercial PCI Express card with a Xilinx Kintex Ultrascale FPGA running neuromorphic neural cores applying pattern recognition on 600 video images per second using 16 watts of power.[90]
  • IIT Madras is designing a spiking neuron accelerator for big-data analytics.[91]
  • Several memristor-based AI accelerators have been proposed which leverage the in-memory computing capability of memristors.[4]
  • AlphaICs is designing an agent-based coprocessor called Real AI Processor (RAP) to enable perception and decision making in a chip.[92]

Potential applications

See also


  1. ^ "Intel unveils Movidius Compute Stick USB AI Accelerator". July 21, 2017. Archived from the original on August 11, 2017. Retrieved August 11, 2017.
  2. ^ "Inspurs unveils GX4 AI Accelerator". June 21, 2017.
  3. ^ "Google Developing AI Processors".Google using its own AI accelerators.
  4. ^ a b "A Survey of ReRAM-based Architectures for Processing-in-memory and Neural Networks", S. Mittal, Machine Learning and Knowledge Extraction, 2018
  5. ^ "convolutional neural network demo from 1993 featuring DSP32 accelerator".
  6. ^ "design of a connectionist network supercomputer".
  7. ^ "The end of general purpose computers (not)".This presentation covers a past attempt at neural net accelerators, notes the similarity to the modern SLI GPGPU processor setup, and argues that general purpose vector accelerators are the way forward (in relation to RISC-V hwacha project. Argues that NN's are just dense and sparse matrices, one of several recurring algorithms)
  8. ^ Ramacher, U.; Raab, W.; Hachmann, J.A.U.; Beichter, J.; Bruls, N.; Wesseling, M.; Sicheneder, E.; Glass, J.; Wurz, A.; Manner, R. (1995). Proceedings of 9th International Parallel Processing Symposium. pp. 774–781. doi:10.1109/IPPS.1995.395862. ISBN 978-0-8186-7074-9.
  9. ^ a b "Space Efficient Neural Net Implementation".
  10. ^ a b "A Generic Building Block for Hopfield Neural Networks with On-Chip Learning" (PDF). 1996.
  11. ^ Application of the ANNA Neural Network Chip to High-Speed Character Recognition
  12. ^ "Synergistic Processing in Cell's Multicore Architecture". 2006.
  13. ^ De Fabritiis, G. (2007). "Performance of Cell processor for biomolecular simulations". Computer Physics Communications. 176 (11–12): 660–664. arXiv:physics/0611201. doi:10.1016/j.cpc.2007.02.107.
  14. ^ "Video Processing and Retrieval on Cell architecture". CiteSeerX
  15. ^ Benthin, Carsten; Wald, Ingo; Scherbaum, Michael; Friedrich, Heiko (2006). 2006 IEEE Symposium on Interactive Ray Tracing. pp. 15–23. doi:10.1109/RT.2006.280210. ISBN 978-1-4244-0693-7.
  16. ^ "Development of an artificial neural network on a heterogeneous multicore architecture to predict a successful weight loss in obese individuals" (PDF).
  17. ^ Kwon, Bomjun; Choi, Taiho; Chung, Heejin; Kim, Geonho (2008). 2008 5th IEEE Consumer Communications and Networking Conference. pp. 1030–1034. doi:10.1109/ccnc08.2007.235. ISBN 978-1-4244-1457-4.
  18. ^ Duan, Rubing; Strey, Alfred (2008). Euro-Par 2008 – Parallel Processing. Lecture Notes in Computer Science. 5168. pp. 665–675. doi:10.1007/978-3-540-85451-7_71. ISBN 978-3-540-85450-0.
  19. ^ "Improving the performance of video with AVX". February 8, 2012.
  20. ^ "microsoft research/pixel shaders/MNIST".
  21. ^ "how the gpu came to be used for general computation".
  22. ^ "imagenet classification with deep convolutional neural networks" (PDF).
  23. ^ "nvidia driving the development of deep learning". May 17, 2016.
  24. ^ "nvidia introduces supercomputer for self driving cars". January 6, 2016.
  25. ^ "how nvlink will enable faster easier multi GPU computing". November 14, 2014.
  26. ^ "A Survey on Optimized Implementation of Deep Learning Models on the NVIDIA Jetson Platform", 2019
  27. ^ a b Harris, Mark (May 11, 2017). "CUDA 9 Features Revealed: Volta, Cooperative Groups and More". Retrieved August 12, 2017.
  28. ^ "FPGA Based Deep Learning Accelerators Take on ASICs". The Next Platform. August 23, 2016. Retrieved September 7, 2016.
  29. ^ "microsoft extends fpga reach from bing to deep learning". August 27, 2015.
  30. ^ Chung, Eric; Strauss, Karin; Fowers, Jeremy; Kim, Joo-Young; Ruwase, Olatunji; Ovtcharov, Kalin (February 23, 2015). "Accelerating Deep Convolutional Neural Networks Using Specialized Hardware" (PDF). Microsoft Research.
  31. ^ "A Survey of FPGA-based Accelerators for Convolutional Neural Networks", Mittal et al., NCAA, 2018
  32. ^ "Google boosts machine learning with its Tensor Processing Unit". May 19, 2016. Retrieved September 13, 2016.
  33. ^ "Chip could bring deep learning to mobile devices". February 3, 2016. Retrieved September 13, 2016.
  34. ^ "Deep Learning with Limited Numerical Precision" (PDF).
  35. ^ Rastegari, Mohammad; Ordonez, Vicente; Redmon, Joseph; Farhadi, Ali (2016). "XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks". arXiv:1603.05279 [cs.CV].
  36. ^ Khari Johnson (May 23, 2018). "Intel unveils Nervana Neural Net L-1000 for accelerated AI training". VentureBeat. Retrieved May 23, 2018. ...Intel will be extending bfloat16 support across our AI product lines, including Intel Xeon processors and Intel FPGAs.
  37. ^ Michael Feldman (May 23, 2018). "Intel Lays Out New Roadmap for AI Portfolio". TOP500 Supercomputer Sites. Retrieved May 23, 2018. Intel plans to support this format across all their AI products, including the Xeon and FPGA lines
  38. ^ Lucian Armasu (May 23, 2018). "Intel To Launch Spring Crest, Its First Neural Network Processor, In 2019". Tom's Hardware. Retrieved May 23, 2018. Intel said that the NNP-L1000 would also support bfloat16, a numerical format that’s being adopted by all the ML industry players for neural networks. The company will also support bfloat16 in its FPGAs, Xeons, and other ML products. The Nervana NNP-L1000 is scheduled for release in 2019.
  39. ^ "Available TensorFlow Ops | Cloud TPU | Google Cloud". Google Cloud. Retrieved May 23, 2018. This page lists the TensorFlow Python APIs and graph operators available on Cloud TPU.
  40. ^ Elmar Haußmann (April 26, 2018). "Comparing Google's TPUv2 against Nvidia's V100 on ResNet-50". RiseML Blog. Archived from the original on April 26, 2018. Retrieved May 23, 2018. For the Cloud TPU, Google recommended we use the bfloat16 implementation from the official TPU repository with TensorFlow 1.7.0. Both the TPU and GPU implementations make use of mixed-precision computation on the respective architecture and store most tensors with half-precision.
  41. ^ Tensorflow Authors (February 28, 2018). "ResNet-50 using BFloat16 on TPU". Google. Retrieved May 23, 2018.[permanent dead link]
  42. ^ Joshua V. Dillon; Ian Langmore; Dustin Tran; Eugene Brevdo; Srinivas Vasudevan; Dave Moore; Brian Patton; Alex Alemi; Matt Hoffman; Rif A. Saurous (November 28, 2017). TensorFlow Distributions (Report). arXiv:1711.10604. Bibcode:2017arXiv171110604D. Retrieved May 23, 2018. All operations in TensorFlow Distributions are numerically stable across half, single, and double floating-point precisions (as TensorFlow dtypes: tf.bfloat16 (truncated floating point), tf.float16, tf.float32, tf.float64). Class constructors have a validate_args flag for numerical asserts.
  43. ^ Abu Sebastian; Tomas Tuma; Nikolaos Papandreou; Manuel Le Gallo; Lukas Kull; Thomas Parnell; Evangelos Eleftheriou (2017). "Temporal correlation detection using computational phase-change memory". Nature Communications. 8. arXiv:1706.00511. doi:10.1038/s41467-017-01481-9.
  44. ^ "A new brain-inspired architecture could improve how computers handle data and advance AI". American Institute of Physics. October 3, 2018. Retrieved October 5, 2018.
  45. ^ Carlos Ríos; Nathan Youngblood; Zengguang Cheng; Manuel Le Gallo; Wolfram H.P. Pernice; C David Wright; Abu Sebastian; Harish Bhaskaran (2018). "In-memory computing on a photonic platform". arXiv:1801.06228 [cs.ET].
  46. ^ "NVIDIA launches the World's First Graphics Processing Unit, the GeForce 256".
  47. ^ Kundu, Kishalaya (July 26, 2018). "Google Announces Edge TPU, Cloud IoT Edge at Cloud Next 2018". Beebom. Retrieved February 2, 2019.
  48. ^ Kampman, Jeff (October 17, 2017). "Intel unveils purpose-built Neural Network Processor for deep learning". Tech Report. Retrieved October 18, 2017.
  49. ^ "Intel Nervana Neural Network Processors (NNP) Redefine AI Silicon". Retrieved October 20, 2017.
  50. ^ "The Evolution of EyeQ".
  51. ^ "NM500, Neuromorphic chip with 576 neurons". Archived from the original on October 3, 2017. Retrieved October 3, 2017.
  52. ^ "FSD Chip - Tesla". Retrieved April 30, 2019.
  53. ^ "Nvidia goes beyond the GPU for AI with Volta".
  54. ^ "The NVIDIA Turing GPU Architecture Deep Dive: Prelude to GeForce RTX". AnandTech.
  55. ^ "nvidia dgx-1" (PDF).
  56. ^ Frumusanu, Andrei. "Investigating NVIDIA's Jetson AGX: A Look at Xavier and Its Carmel Cores". Retrieved February 2, 2019.
  57. ^ Smith, Ryan (December 12, 2016). "AMD Announces Radeon Instinct: GPU Accelerators for Deep Learning, Coming in 2017". Anandtech. Retrieved December 12, 2016.
  58. ^ a b "On-Device AI with Qualcomm Snapdragon Neural Processing Engine SDK". Qualcomm Developer Network. Retrieved February 2, 2019.
  59. ^ "NEC SX-Aurora TSUBASA".
  60. ^ "AI Acceleration-with-NEC's New Vector Computer".
  61. ^ "InAccel".
  62. ^ "Cadence Announces New Tensilica Vision P6 DSP Targeting Embedded Neural Network Applications". Retrieved May 30, 2019.
  63. ^ "Cadence Unveils Industry's First Neural Network DSP IP for Automotive, Surveillance, Drone and Mobile Markets".
  64. ^ Frumusanu, Andrei. "Cadence Announces Tensilica Vision Q6 DSP". Retrieved February 2, 2019.
  65. ^ Frumusanu, Andrei. "Cadence Announces The Tensilica DNA 100 IP: Bigger Artificial Intelligence". Retrieved February 2, 2019.
  66. ^ Frumusanu, Andrei. "Cadence Announces Tensilica Vision Q7 DSP". Retrieved May 30, 2019.
  67. ^ "The highest performance neural network inference accelerator".
  68. ^ Oh, Nate. "Imagination Announces PowerVR Series9XTP, Series9XMP, and Series9XEP GPU Cores". Retrieved February 2, 2019.
  69. ^ "The iPhone X's new neural engine exemplifies Apple's approach to AI". The Verge. Retrieved September 23, 2017.
  70. ^ "Exynos 9 Series (9820) - The Next-level Processor for the Mobile Future". Retrieved March 31, 2019.
  71. ^ Cutress, Ian. "Cambricon, Makers of Huawei's Kirin NPU IP, Build A Big AI Chip and PCIe Card". Retrieved February 2, 2019.
  72. ^ "HUAWEI Reveals the Future of Mobile AI at IFA 2017".
  73. ^ Cutress, Ian. "Hot Chips 2018: Arm's Machine Learning Core Live Blog". Retrieved February 2, 2019.
  74. ^ "A Family of AI Processors for Deep Learning at the Edge".
  75. ^ Manjeera Digital System, UMA. "Universal Multifunction Accelerator". Manjeera Digital Systems. Retrieved June 28, 2018.
  76. ^ Manjeera Digital Systems, Universal Multifunction Accelerator. "Revolutionise Processing". Indian Express. Retrieved June 28, 2018.
  77. ^ AI Chip, UMA (May 10, 2018). "AI Chip from Hyderabad" (News Paper). Telangana Today. Retrieved June 28, 2018.
  78. ^ Lambert, Fred (December 8, 2017). "Elon Musk confirms that Tesla is working on its own new AI chip led by Jim Keller".
  79. ^ Chen, Yu-Hsin; Krishna, Tushar; Emer, Joel; Sze, Vivienne (2016). "Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks". IEEE International Solid-State Circuits Conference, ISSCC 2016, Digest of Technical Papers. pp. 262–263.
  80. ^ "Mixed-signal Processing Powers Bio-mimetic CMOS Chip to Enable Neural Learning in Autonomous Micro-Robots | IEN".
  81. ^ Aimar, Alessandro; et al. (2017). "NullHop: A Flexible Convolutional Neural Network Accelerator Based on Sparse Representations of Feature Maps". arXiv:1706.01406 [cs.CV].
  82. ^ "Synthara Technologies".
  83. ^ "kalray MPPA" (PDF).
  84. ^ "Graphcore Technology".
  85. ^ "Wave Computing's DPU architecture". August 23, 2017.
  86. ^ "A 2.9 TOPS/W Deep Convolutional Neural Network SoC in FD-SOI 28nm for Intelligent Embedded Systems" (PDF).
  87. ^ "yann lecun on IBM truenorth".argues that spiking neurons have never produced leading quality results, and that 8-16 bit precision is optimal, pushes the competing 'neuflow' design
  88. ^ "IBM cracks open new era of neuromorphic computing". TrueNorth is incredibly efficient: The chip consumes just 72 milliwatts at max load, which equates to around 400 billion synaptic operations per second per watt — or about 176,000 times more efficient than a modern CPU running the same brain-like workload, or 769 times more efficient than other state-of-the-art neuromorphic approaches
  89. ^ "Intel's New Self-Learning Chip Promises to Accelerate Artificial Intelligence".
  90. ^ "BrainChip Accelerator". Archived from the original on October 3, 2017. Retrieved October 3, 2017.
  91. ^ "India preps RISC-V Processors - Shakti targets servers, IoT, analytics". The Shakti project now includes plans for at least six microprocessor designs as well as associated fabrics and an accelerator chip
  92. ^ "AlphaICs".
  93. ^ "drive px".
  94. ^ "design of a machine vision system for weed control" (PDF). Archived from the original (PDF) on June 23, 2010. Retrieved June 17, 2016.
  95. ^ "qualcomm research brings server class machine learning to every data devices". October 2015.
  96. ^ "movidius powers worlds most intelligent drone". March 16, 2016.

External links