
- Published: September 2025
- Pages: 280
- Tables: 56
- Figures: 49
The global AI chip market is experiencing unprecedented growth in 2025. The first quarter of 2025 demonstrated the market's robust health, with 75 startups collectively raising over $2 billion. AI chips and enabling technologies emerged as major winners, with companies developing optical communications technology for chips and data center infrastructure pulling in over $400 million. Notably, six companies each raised at least $100 million during Q1 alone.
Recent funding rounds throughout 2024-2025 reveal sustained investor confidence across diverse AI chip technologies. Major European investments include VSORA's $46 million raise led by Otium for high-performance AI inference chips and Axelera AI's €61.6 million grant from the EuroHPC Joint Undertaking for RISC-V-based AI acceleration platforms. Asian markets showed strong momentum, with Rebellions securing $124 million in Series B funding led by KT Corp for domain-specific AI processors, while HyperAccel raised $40 million for generative AI inference solutions.
Emerging technologies attracted significant capital, particularly in neuromorphic computing and analog processing. Innatera Nanosystems raised €15 million for brain-inspired processors using spiking neural networks, while Semron secured €7.3 million for analog in-memory computing using memcapacitors. These investments highlight the industry's push toward ultra-low power edge AI solutions.
Optical and photonic technologies dominated large funding rounds, with Celestial AI raising $250 million in Series C1 funding led by Fidelity Management & Research Company for its photonic fabric technology. Quantum computing platforms also attracted substantial investment, including QuEra Computing's $230 million financing from Google and SoftBank Vision Fund for neutral-atom quantum computers. Government support continued expanding globally, with Japan's NEDO providing significant subsidies, including EdgeCortix's combined $46.7 million in government funding for AI chiplet development. European initiatives showed strong momentum through the European Innovation Council Fund's participation in multiple rounds, supporting companies such as NeuReality ($20 million) and CogniFiber ($5 million).
North American companies maintained strong fundraising activity, with Etched raising $120 million for transformer-specific ASICs and Groq securing $640 million in Series D funding for language processing units. Tenstorrent's massive $693 million Series D round, led by Samsung Securities, demonstrated continued confidence in RISC-V-based AI processor IP.
The sustained investment flows reflect fundamental shifts in AI computing requirements. Industry analysts project that the market for generative AI inference will grow faster than training in 2025 and beyond, driving demand for specialized inference accelerators. Companies such as Recogni ($102 million), SiMa.ai ($70 million), and Blaize ($106 million) received substantial funding specifically for inference-optimized solutions.
Edge computing represents a critical growth vector, with companies developing ultra-low power solutions attracting significant investment. Blumind's $14.1 million raise for analog AI inference chips and Mobilint's $15.3 million Series B for edge NPU chips demonstrate investor recognition of the edge AI opportunity.
The competitive landscape continues evolving with new architectural approaches gaining traction. Fractile's $15 million seed funding for in-memory processing chips and Vaire Computing's $4.5 million raise for adiabatic reversible computing represent novel approaches to addressing AI's energy consumption challenges.
AI chip startups secured a cumulative US$7.6 billion in venture capital funding globally across the second, third, and fourth quarters of 2024, and 2025 has maintained this momentum across diverse technology categories, from photonic interconnects to neuromorphic processors, positioning the industry for continued rapid expansion and technological innovation.
Data center and cloud infrastructure represent the primary growth drivers. Chip sales are set to soar in 2025, led by generative AI and data center build-outs, even as traditional PC and mobile markets remain subdued. The investment focus reflects this trend, with optical interconnect and photonic technologies receiving substantial attention from venture capitalists and strategic investors. Government funding has also become increasingly strategic, with governments worldwide investing more heavily in chip design tools and related research to boost onshore chip production.
The Global Artificial Intelligence (AI) Chips Market 2026-2036 provides comprehensive analysis of the rapidly evolving AI semiconductor industry, covering market dynamics, technological innovations, competitive landscapes, and future growth opportunities across multiple application sectors. This strategic market intelligence report examines the complete AI chip ecosystem from emerging neuromorphic processors to established GPU architectures, delivering critical insights for semiconductor manufacturers, technology investors, system integrators, and enterprise decision-makers navigating the AI revolution.
Report contents include:
- Market size forecasts and revenue projections by chip type, application, and region (2026-2036)
- Technology readiness levels and commercialization timelines for next-generation AI accelerators
- Competitive analysis of 147+ companies including NVIDIA, AMD, Intel, Google, Amazon, and emerging AI chip startups
- Supply chain analysis covering fab investments, advanced packaging technologies, and manufacturing capabilities
- Government funding initiatives and policy impacts across US, Europe, China, and Asia-Pacific regions
- Edge AI vs. cloud computing trends and architectural requirements
- AI Chip Definition & Core Technologies - Hardware acceleration principles, software co-design methodologies, and key performance capabilities
- Historical Development Analysis - Evolution from general-purpose processors to specialized AI accelerators and neuromorphic computing
- Application Landscape - Comprehensive coverage of data centers, automotive, smartphones, IoT, robotics, and emerging use cases
- Architectural Classifications - Training vs. inference optimizations, edge vs. cloud requirements, and power efficiency considerations
- Computing Requirements Analysis - Memory bandwidth, processing throughput, and latency specifications across different AI workloads
- Semiconductor Packaging Evolution - 1D to 3D integration technologies, chiplet architectures, and advanced packaging solutions
- Regional Market Dynamics - China's domestic chip initiatives, US CHIPS Act implications, European Chips Act strategic goals, and Asia-Pacific manufacturing hubs
- Edge AI Deployment Strategies - Edge vs. cloud trade-offs, inference optimization, and distributed AI architectures
- AI Chip Fabrication & Technology Infrastructure
- Supply Chain Ecosystem - Foundry capabilities, IDM strategies, and manufacturing bottlenecks analysis
- Fab Investment Trends - Capital expenditure analysis, capacity expansion plans, and technology node roadmaps
- Manufacturing Innovations - Chiplet integration, 3D fabrication techniques, algorithm-hardware co-design, and advanced lithography
- Instruction Set Architectures - RISC vs. CISC implementations for AI workloads and specialized ISA developments
- Programming & Execution Models - Von Neumann architecture limitations and alternative computing paradigms
- Transistor Technology Roadmap - FinFET scaling, GAAFET transitions, and next-generation device architectures
- Advanced Packaging Technologies - 2.5D packaging implementations, heterogeneous integration, and system-in-package solutions
- AI Chip Architectures & Design Innovations
- Distributed Parallel Processing - Multi-core architectures, interconnect technologies, and scalability solutions
- Optimized Data Flow Architectures - Memory hierarchy optimization, data movement minimization, and bandwidth enhancement
- Design Flexibility Analysis - Specialized vs. general-purpose trade-offs and programmability requirements
- Training vs. Inference Hardware - Architectural differences, precision requirements, and performance optimization strategies
- Software Programmability Frameworks - Development tools, compiler optimizations, and deployment ecosystems
- Architectural Innovation Trends - Specialized processing units, dataflow optimization, model compression techniques
- Biologically-Inspired Designs - Neuromorphic computing principles and spike-based processing architectures
- Analog Computing Revival - Mixed-signal processing, in-memory computing, and energy efficiency benefits
- Photonic Connectivity Solutions - Optical interconnects, silicon photonics integration, and bandwidth scaling
- Sustainability Considerations - Energy efficiency metrics, green data center requirements, and lifecycle management
- Comprehensive AI Chip Type Analysis
- Training Accelerators - High-performance computing requirements, multi-GPU scaling, and distributed training architectures
- Inference Accelerators - Real-time processing optimization, edge deployment considerations, and latency minimization
- Automotive AI Chips - ADAS implementations, autonomous driving processors, and safety-critical system requirements
- Smart Device AI Chips - Mobile processors, power efficiency optimization, and on-device AI capabilities
- Cloud Data Center Chips - Hyperscale deployment strategies, rack-level optimization, and cooling considerations
- Edge AI Chips - Power-constrained environments, real-time processing, and connectivity requirements
- Neuromorphic Chips - Brain-inspired architectures, spike-based processing, and ultra-low power applications
- FPGA-Based Solutions - Reconfigurable computing, rapid prototyping, and application-specific optimization
- Multi-Chip Modules - Heterogeneous integration strategies, chiplet ecosystems, and system-level optimization
- Emerging Technologies - Novel materials (2D, photonic, spintronic), advanced packaging, and next-generation computing paradigms
- Memory Technologies - HBM stacks, GDDR implementations, SRAM optimization, and emerging memory solutions
- CPU Integration - AI acceleration in general-purpose processors and hybrid computing architectures
- GPU Evolution - Data center GPU trends, NVIDIA ecosystem analysis, AMD competitive positioning, and Intel market entry
- Custom ASIC Development - Cloud service provider strategies, Amazon Trainium/Inferentia, Microsoft Maia, Meta MTIA analysis
- Alternative Architectures - Spatial accelerators, CGRAs, and heterogeneous matrix-based solutions
- Market Applications & Vertical Analysis
- Data Center Market - Hyperscale deployment trends, cloud infrastructure requirements, and performance benchmarking
- Automotive Sector - Autonomous driving chip requirements, power management, and safety certification processes
- Industry 4.0 Applications - Smart manufacturing, predictive maintenance, and industrial automation use cases
- Smartphone Integration - Mobile AI processor evolution, performance improvements, and competitive landscape
- Tablet Computing - AI acceleration in consumer devices and productivity applications
- IoT & Industrial IoT - Edge computing requirements, sensor integration, and connectivity solutions
- Personal Computing - AI-enabled laptops, desktop acceleration, and parallel computing applications
- Drones & Robotics - Real-time processing requirements, power constraints, and autonomous operation capabilities
- Wearables & AR/VR - Ultra-low power AI, gesture recognition, and immersive computing applications
- Sensor Applications - Smart sensors, structural health monitoring, and distributed sensing networks
- Life Sciences - Medical imaging acceleration, drug discovery applications, and diagnostic AI systems
- Financial Analysis & Market Forecasts
- Cost Structure Analysis - Design, manufacturing, testing, and operational cost breakdowns across technology nodes
- Revenue Projections by Chip Type - Market size forecasts segmented by GPU, ASIC, FPGA, and emerging technologies (2020-2036)
- Market Revenue by Application - Vertical market analysis with growth projections across all major sectors
- Regional Revenue Analysis - Geographic market distribution, growth rates, and competitive positioning by region
- Comprehensive Company Profiles including AiM Future, Aistorm, Advanced Micro Devices (AMD), Alpha ICs, Amazon Web Services (AWS), Ambarella Inc., Anaflash, Andes Technology, Apple, Arm, Astrus Inc., Axelera AI, Axera Semiconductor, Baidu Inc., BirenTech, Black Sesame Technologies, Blaize, Blumind Inc., Brainchip Holdings Ltd., Cambricon, Ccvui (Xinsheng Intelligence), Celestial AI, Cerebras Systems, Ceremorphic, ChipIntelli, CIX Technology, CogniFiber, Corerain Technologies, DeGirum, Denglin Technology, DEEPX, d-Matrix, Eeasy Technology, EdgeCortix, Efinix, EnCharge AI, Enerzai, Enfabrica, Enflame, Esperanto Technologies, Etched.ai, Evomotion, Expedera, Flex Logix, Fractile, FuriosaAI, Gemesys, Google, Graphcore, GreenWaves Technologies, Groq, Gwanak Analog Co. Ltd., Hailo, Horizon Robotics, Houmo.ai, Huawei, HyperAccel, IBM, Iluvatar CoreX, Innatera Nanosystems, Intel, Intellifusion, Intelligent Hardware Korea (IHWK), Inuitive, Jeejio, Kalray SA, Kinara, KIST (Korea Institute of Science and Technology), Kneron, Krutrim, Kunlunxin Technology, Lightmatter, Lightstandard Technology, Lightelligence, Lumai, Luminous Computing, MatX, MediaTek, MemryX, Meta, Microsoft, Mobilint, Modular, Moffett AI, Moore Threads, Mythic, Nanjing SemiDrive Technology, Nano-Core Chip, National Chip, Neuchips, NeuronBasic, NeuReality, NeuroBlade, NextVPU, Nextchip Co. Ltd., NXP Semiconductors, Nvidia, Oculi, OpenAI, Panmnesia and more....
1 INTRODUCTION 16
- 1.1 What is an AI chip? 16
- 1.1.1 AI Acceleration 16
- 1.1.2 Hardware & Software Co-Design 17
- 1.2 Key capabilities 18
- 1.3 History of AI Chip Development 18
- 1.4 Applications 19
- 1.5 AI Chip Architectures 20
- 1.6 Computing requirements 22
- 1.7 Semiconductor packaging 23
- 1.7.1 Evolution from 1D to 3D semiconductor packaging 23
- 1.8 AI chip market landscape 24
- 1.8.1 China 25
- 1.8.2 USA 26
- 1.8.2.1 The US CHIPS and Science Act of 2022 26
- 1.8.3 Europe 27
- 1.8.3.1 The European Chips Act of 2022 27
- 1.8.4 Rest of Asia 28
- 1.8.4.1 South Korea 28
- 1.8.4.2 Japan 28
- 1.8.4.3 Taiwan 29
- 1.9 Edge AI 29
- 1.9.1 Edge vs Cloud 30
- 1.9.2 Edge devices that utilize AI chips 30
- 1.9.3 Players in edge AI chips 31
- 1.9.4 Inference at the edge 33
- 1.10 Market drivers 33
- 1.11 Government funding and initiatives 34
- 1.12 Funding and investments 35
- 1.13 Market challenges 38
- 1.14 Market players 39
- 1.15 Future Outlook for AI Chips 40
- 1.15.1 Specialization 41
- 1.15.2 3D System Integration 41
- 1.15.3 Software Abstraction Layers 41
- 1.15.4 Edge-Cloud Convergence 41
- 1.15.5 Environmental Sustainability 41
- 1.15.6 Neuromorphic Photonics 41
- 1.15.7 New Materials 42
- 1.15.8 Efficiency Improvements 43
- 1.15.9 Automated Chip Generation 43
- 1.16 AI roadmap 44
- 1.17 Large AI Models 45
2 AI CHIP FABRICATION 47
- 2.1 Supply chain 47
- 2.2 Fab investments and capabilities 48
- 2.3 Manufacturing advances 50
- 2.3.1 Chiplets 50
- 2.3.2 3D Fabrication 50
- 2.3.3 Algorithm-Hardware Co-Design 51
- 2.3.4 Advanced Lithography 51
- 2.3.5 Novel Devices 52
- 2.4 Instruction Set Architectures 53
- 2.4.1 Instruction Set Architectures (ISAs) for AI workloads 53
- 2.4.2 CISC and RISC ISAs for AI accelerators 54
- 2.5 Programming Models and Execution Models 55
- 2.5.1 Programming model vs execution model 55
- 2.5.2 Von Neumann Architecture 57
- 2.6 Transistors 58
- 2.6.1 Increasing Transistor Count 58
- 2.6.2 Planar FET to FinFET 59
- 2.6.3 GAAFET, MBCFET, RibbonFET 60
- 2.7 Advanced Semiconductor Packaging 64
- 2.7.1 1D to 3D semiconductor packaging 64
- 2.7.2 2.5D packaging 66
3 AI CHIP ARCHITECTURES 67
- 3.1 Distributed Parallel Processing 67
- 3.2 Optimized Data Flow 68
- 3.3 Flexible vs. Specialized Designs 68
- 3.4 Hardware for Training vs. Inference 69
- 3.5 Software Programmability 70
- 3.6 Architectural Optimization Goals 70
- 3.7 Innovations 71
- 3.7.1 Specialized Processing Units 71
- 3.7.2 Dataflow Optimization 72
- 3.7.3 Model Compression 72
- 3.7.4 Biologically-Inspired Designs 73
- 3.7.5 Analog Computing 74
- 3.7.6 Photonic Connectivity 74
- 3.8 Sustainability 75
- 3.8.1 Energy Efficiency 75
- 3.8.2 Green Data Centers 75
- 3.8.3 Eco-Electronics 75
- 3.8.4 Reusable Architectures & IP 76
- 3.8.5 Regulated Lifecycles 76
- 3.8.6 AI for Sustainability 76
- 3.8.7 AI Model Efficiency 76
- 3.9 Companies, by architecture 77
- 3.10 Hardware Architectures 78
- 3.10.1 ASICs, FPGAs, and GPUs used for neural network architectures 78
- 3.10.2 Types of AI Chips 79
- 3.10.3 TRL 80
- 3.10.4 Commercial AI chips 81
- 3.10.5 Emerging AI chips 83
4 TYPES OF AI CHIPS 84
- 4.1 Training Accelerators 84
- 4.2 Inference Accelerators 87
- 4.3 Automotive AI Chips 88
- 4.4 Smart Device AI Chips 90
- 4.5 Cloud Data Center Chips 92
- 4.6 Edge AI Chips 93
- 4.7 Neuromorphic Chips 94
- 4.8 FPGA-Based Solutions 96
- 4.9 Multi-Chip Modules 97
- 4.10 Emerging technologies 98
- 4.10.1 Novel Materials 98
- 4.10.1.1 2D materials 98
- 4.10.1.2 Photonic materials 98
- 4.10.1.3 Spintronic materials 99
- 4.10.1.4 Phase change materials 99
- 4.10.1.5 Neuromorphic materials 100
- 4.10.2 Advanced Packaging 101
- 4.10.3 Software Abstraction 101
- 4.10.4 Environmental Sustainability 102
- 4.11 Specialized components 102
- 4.11.1 Sensor Interfacing 102
- 4.11.2 Memory Technologies 103
- 4.11.2.1 HBM stacks 103
- 4.11.2.2 GDDR 104
- 4.11.2.3 SRAM 104
- 4.11.2.4 STT-RAM 104
- 4.11.2.5 ReRAM 104
- 4.11.3 Software Frameworks 104
- 4.11.4 Data Center Design 105
- 4.12 AI-Capable Central Processing Units (CPUs) 106
- 4.12.1 Core architecture 106
- 4.12.2 CPU requirements 108
- 4.13 Graphics Processing Units (GPUs) 109
- 4.13.1 Types of AI GPUs 109
- 4.13.2 Data center GPUs key features 110
- 4.13.2.1 NVIDIA 111
- 4.13.2.2 AMD 112
- 4.13.2.3 Intel 112
- 4.14 Custom AI ASICs for Cloud Service Providers (CSPs) 113
- 4.14.1 Amazon Trainium and Inferentia 115
- 4.14.2 Microsoft Maia 116
- 4.14.3 Meta MTIA 117
- 4.15 Other AI Chips 118
- 4.15.1 Heterogeneous Matrix-Based AI Accelerators 118
- 4.15.1.1 Habana 118
- 4.15.1.2 Cambricon Technologies 119
- 4.15.1.3 Huawei 120
- 4.15.1.4 Baidu 121
- 4.15.1.5 Qualcomm 122
- 4.15.2 Spatial AI Accelerators 124
- 4.15.2.1 Cerebras 124
- 4.15.2.2 Graphcore 125
- 4.15.2.3 Groq 126
- 4.15.2.4 SambaNova 127
- 4.15.2.5 Untether AI 128
- 4.15.3 Coarse-Grained Reconfigurable Arrays (CGRAs) 128
5 AI CHIP MARKETS 130
- 5.1 Market map 130
- 5.2 Data Centers 132
- 5.2.1 Market overview 132
- 5.2.2 Market players 132
- 5.2.3 Hardware 132
- 5.2.4 Trends 133
- 5.3 Automotive 134
- 5.3.1 Market overview 134
- 5.3.2 Market outlook 135
- 5.3.3 Autonomous Driving 135
- 5.3.3.1 Market players 136
- 5.3.4 Increasing power demands 137
- 5.3.5 Market players 137
- 5.4 Industry 4.0 138
- 5.4.1 Market overview 138
- 5.4.2 Applications 138
- 5.4.3 Market players 138
- 5.5 Smartphones 139
- 5.5.1 Market overview 139
- 5.5.2 Commercial examples 142
- 5.5.3 Smartphone chipset market 142
- 5.5.4 Process nodes 142
- 5.6 Tablets 144
- 5.6.1 Market overview 144
- 5.6.2 Market players 144
- 5.7 IoT & IIoT 145
- 5.7.1 Market overview 145
- 5.7.2 AI on the IoT edge 146
- 5.7.3 Consumer smart appliances 147
- 5.7.4 Market players 147
- 5.8 Computing 148
- 5.8.1 Market overview 148
- 5.8.2 Personal computers 149
- 5.8.3 Parallel computing 149
- 5.8.4 Low-precision computing 150
- 5.8.5 Market players 150
- 5.9 Drones & Robotics 151
- 5.9.1 Market overview 151
- 5.9.2 Market players 152
- 5.10 Wearables, AR glasses and hearables 152
- 5.10.1 Market overview 152
- 5.10.2 Applications 153
- 5.10.3 Market players 154
- 5.11 Sensors 155
- 5.11.1 Market overview 155
- 5.11.2 Challenges 156
- 5.11.3 Applications 156
- 5.11.4 Market players 156
- 5.12 Life Sciences 158
- 5.12.1 Market overview 158
- 5.12.2 Applications 158
- 5.12.3 Market players 159
6 GLOBAL MARKET REVENUES AND COSTS 159
- 6.1 Costs 160
- 6.2 Revenues by chip type, 2020-2036 161
- 6.3 Revenues by market, 2020-2036 162
- 6.4 Revenues by region, 2020-2036 164
7 COMPANY PROFILES 166 (147 company profiles)
8 APPENDIX 277
- 8.1 Research Methodology 277
9 REFERENCES 278
List of Tables
- Table 1. Markets and applications for AI chips. 20
- Table 2. AI Chip Architectures. 21
- Table 3. Computing requirements and constraints. 22
- Table 4. Computing requirements and constraints by applications. 22
- Table 5. Advantages and disadvantages of edge AI. 29
- Table 6. Edge vs Cloud. 30
- Table 7. Edge devices that utilize AI chips. 31
- Table 8. Players in edge AI chips. 32
- Table 9. Market drivers for AI Chips. 34
- Table 10. AI chip government funding and initiatives. 34
- Table 11. AI chips funding and investment, by company. 35
- Table 12. Market challenges in AI chips. 39
- Table 13. Key players in AI chips. 40
- Table 14. AI Chip Supply Chain. 48
- Table 15. Fab investments and capabilities. 49
- Table 16. Comparison of AI chip fabrication capabilities between IDMs (integrated device manufacturers) and dedicated foundries. 49
- Table 17. Programming model vs execution model. 55
- Table 18. Von Neumann compared with common programming models. 57
- Table 19. Goals driving the exploration into AI chip architectures. 70
- Table 20. Concepts from neuroscience influence architecture. 73
- Table 21. Companies by Architecture. 77
- Table 22. Types of training accelerators for AI chips. 86
- Table 23. Types of inference accelerators for AI chips. 88
- Table 24. Types of Automotive AI chips. 90
- Table 25. Smart device AI chips. 91
- Table 26. Types of cloud data center AI chips. 93
- Table 27. Key types of edge AI chips. 94
- Table 28. Types of neuromorphic chips and their attributes. 95
- Table 29. Types of FPGA-based solutions for AI acceleration. 96
- Table 30. Types of multi-chip module (MCM) integration approaches for AI chips. 97
- Table 31. 2D materials in AI hardware. 98
- Table 32. Photonic materials for AI hardware. 99
- Table 33. Spintronic materials for AI hardware. 99
- Table 34. Phase change materials for AI hardware. 100
- Table 35. Neuromorphic materials in AI hardware. 100
- Table 36. Techniques for combining chiplets and dies using advanced packaging for AI chips. 101
- Table 37. Types of sensors. 103
- Table 38. AI ASICs based on application. 113
- Table 39. Key AI chip products and solutions targeting automotive applications. 136
- Table 40. AI versus non-AI smartphones. 140
- Table 41. Key chip fabrication process nodes used by various mobile AI chip designers. 143
- Table 42. AI versus non-AI tablets. 145
- Table 43. Market players in AI chips for personal, parallel, and low-precision computing. 150
- Table 44. AI chip company products for drones and robotics. 152
- Table 45. Applications of AI chips in wearable devices. 153
- Table 46. Applications of AI chips in sensors and structural health monitoring. 156
- Table 47. Applications of AI chips in life sciences. 158
- Table 48. AI chip cost analysis: design, operation, and fabrication. 160
- Table 49. Design, manufacturing, testing, and operational costs associated with leading-edge process nodes for AI chips. 160
- Table 50. Assembly, test, and packaging (ATP) costs associated with manufacturing AI chips. 160
- Table 51. Global market revenues by chip type, 2020-2036 (billions USD). 161
- Table 52. Global market revenues by market, 2020-2036 (billions USD). 162
- Table 53. Global market revenues by region, 2020-2036 (billions USD). 164
- Table 54. AMD AI chip range. 168
- Table 55. Applications of CV3-AD685 in autonomous driving. 172
- Table 56. Evolution of Apple Neural Engine. 175
List of Figures
- Figure 1. Nvidia H200 AI Chip. 16
- Figure 2. History of AI development. 19
- Figure 3. AI roadmap. 45
- Figure 4. Device architecture roadmap. 62
- Figure 5. TRL of AI chip technologies. 80
- Figure 6. Nvidia A100 GPU. 85
- Figure 7. Google Cloud TPUs. 85
- Figure 8. Groq Node. 86
- Figure 9. Intel Movidius Myriad X. 87
- Figure 10. Qualcomm Cloud AI 100. 88
- Figure 11. Tesla FSD Chip. 90
- Figure 12. Qualcomm Snapdragon. 91
- Figure 13. Xeon CPUs for data center. 108
- Figure 14. Google TPU. 114
- Figure 15. Colossus™ MK2 IPU processor. 125
- Figure 16. AI chip market map. 130
- Figure 17. Global market revenues by chip type, 2020-2036 (billions USD). 162
- Figure 18. Global market revenues by market, 2020-2036 (billions USD). 164
- Figure 19. Global market revenues by region, 2020-2036 (billions USD). 165
- Figure 20. AMD Radeon Instinct. 168
- Figure 21. AMD Ryzen 7040. 169
- Figure 22. Alveo V70. 169
- Figure 23. Versal Adaptive SOC. 169
- Figure 24. AMD’s MI300 chip. 170
- Figure 25. Cerebras WSE-2. 185
- Figure 26. DeepX NPU DX-GEN1. 190
- Figure 27. InferX X1. 199
- Figure 28. “Warboy” (AI Inference Chip). 201
- Figure 29. Google TPU. 202
- Figure 30. GrAI VIP. 203
- Figure 31. Colossus™ MK2 GC200 IPU. 204
- Figure 32. GreenWave’s GAP8 and GAP9 processors. 205
- Figure 33. Journey 5. 209
- Figure 34. IBM Telum processor. 212
- Figure 35. 11th Gen Intel® Core™ S-Series. 215
- Figure 36. Envise. 222
- Figure 37. Pentonic 2000. 226
- Figure 38. Meta Training and Inference Accelerator (MTIA). 228
- Figure 39. Azure Maia 100 and Cobalt 100 chips. 229
- Figure 40. Mythic MP10304 Quad-AMP PCIe Card. 233
- Figure 41. Nvidia H200 AI chip. 241
- Figure 42. Grace Hopper Superchip. 242
- Figure 43. Panmnesia memory expander module (top) and chassis loaded with switch and expander modules (below). 245
- Figure 44. Cloud AI 100. 248
- Figure 45. Peta Op chip. 250
- Figure 46. Cardinal SN10 RDU. 253
- Figure 47. MLSoC™. 258
- Figure 48. Grayskull. 264
- Figure 49. Tesla D1 chip. 265
The report includes these components:
- PDF report download/by email. Print edition also available.
- Comprehensive Excel spreadsheet of all data.
- Mid-year Update
Payment methods: Visa, Mastercard, American Express, PayPal, Bank Transfer. To order by Bank Transfer (Invoice), select this option from the payment methods menu after adding to cart, or contact info@futuremarketsinc.com.