WO2022039334A1 - Neural Network Processing Unit - Google Patents
Neural Network Processing Unit
- Publication number
- WO2022039334A1 (PCT/KR2020/019488)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- neural network
- data
- npu
- artificial neural
- memory
- Prior art date
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3234—Power saving characterised by the action undertaken
- G06F1/329—Power saving characterised by the action undertaken by task scheduling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/76—Architectures of general purpose stored program computers
- G06F15/80—Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0659—Command handling arrangements, e.g. command buffers, queues, command scheduling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0463—Neocognitrons
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- the present disclosure relates to a neural network processing unit, and more particularly, to a low power neural network processing unit.
- the human brain is made up of numerous nerve cells called neurons. Each neuron is connected to hundreds or thousands of other neurons through connections called synapses.
- An artificial neural network (ANN) is a network of artificial neurons modeled on this biological structure.
- These artificial neural network models are divided into 'single-layer neural network' and 'multi-layer neural network' according to the number of layers.
- a general multilayer neural network consists of an input layer, a hidden layer, and an output layer.
- the input layer is a layer that receives external data, and the number of neurons in the input layer is the same as the number of input variables.
- At least one hidden layer is located between the input layer and the output layer, receives a signal from the input layer, extracts characteristics, and transmits the extracted characteristics to the output layer.
- An output layer receives signals from at least one hidden layer and outputs them to the outside. Each input signal between neurons is multiplied by its connection strength, a value between 0 and 1, and the results are then summed.
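- For illustration only, the weighted-sum behavior of a single fully connected layer described above can be sketched as follows; the array shapes and the ReLU-style activation are assumptions, not part of the disclosure.

```python
import numpy as np

def dense_layer(inputs: np.ndarray, weights: np.ndarray, bias: np.ndarray) -> np.ndarray:
    """Weighted sum of a fully connected layer: each input signal is multiplied by its
    connection strength (weight) and the products are summed per output neuron."""
    z = inputs @ weights + bias          # weighted sum
    return np.maximum(z, 0.0)            # illustrative activation choice

# Example: 3 input variables -> 4 hidden neurons
x = np.array([0.2, 0.7, 0.1])
w = np.random.rand(3, 4)                 # connection strengths between 0 and 1
b = np.zeros(4)
print(dense_layer(x, w, b))
```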
- A deep neural network (DNN) is an artificial neural network with multiple hidden layers, and convolutional neural networks (CNNs) are a representative type of DNN.
- the convolutional neural network is configured in a form in which convolutional channels and pooling channels are repeated.
- Convolution is a mathematical operation.
- A convolutional neural network recognizes objects by extracting the image features of each channel with a matrix-type kernel and by providing robustness to movement or distortion through pooling.
- A feature map is obtained by convolving the input data with the kernel, and an activation function such as the Rectified Linear Unit (ReLU) is then applied to generate the activation map of the corresponding channel. Pooling may then be applied.
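- A minimal sketch of the convolution, ReLU activation, and pooling sequence described above, assuming a single channel, a small matrix-type kernel, and 2x2 max pooling chosen purely for illustration:

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid 2-D convolution of the input data with a matrix-type kernel (feature map)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x: np.ndarray) -> np.ndarray:
    """Activation map of the corresponding channel."""
    return np.maximum(x, 0.0)

def max_pool2x2(x: np.ndarray) -> np.ndarray:
    """2x2 max pooling, providing robustness to small shifts or distortions."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

image = np.random.rand(8, 8)
kernel = np.random.rand(3, 3)
feature_map = max_pool2x2(relu(conv2d(image, kernel)))
```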
- the neural network that actually classifies the pattern is located at the rear end of the feature extraction neural network, and is called a fully connected layer.
- most computations are performed through convolution or matrix multiplication.
- The necessary kernels are read from memory quite frequently.
- A significant part of the operation time of a convolutional neural network is spent reading the kernels corresponding to each channel from memory.
- a memory consists of a plurality of memory cells, and each memory cell of the memory has a unique memory address.
- When the processor issues a read command for a kernel stored in the memory, a latency of several clock cycles may occur before the memory cell corresponding to the memory address is accessed.
- As AI reasoning capability develops, inference services using artificial intelligence, such as acoustic recognition, voice recognition, image recognition, object detection, driver drowsiness detection, danger-moment detection, and gesture detection, are provided to various electronic devices such as artificial intelligence speakers, smartphones, smart refrigerators, VR devices, AR devices, AI CCTV, AI robot vacuum cleaners, tablets, laptop computers, autonomous vehicles, bipedal robots, quadrupedal walking robots, and industrial robots.
- This artificial neural network inference service repeatedly trains the artificial neural network with a large amount of learning data, and infers various and complex data through the learned artificial neural network model. Accordingly, various services are being provided to the above-described electronic devices using artificial neural network technology.
- Edge computing refers to computing that occurs at the edge, or periphery, of a network; a terminal that directly produces data, or an electronic device located close to such a terminal, may be referred to as an edge device.
- a computing system that is separated from the server of the data center in the cloud computing system, is located at the end of the cloud computing system, and communicates with the server of the data center may be defined as an edge device.
- Edge devices are also used to perform necessary tasks immediately and reliably, for example in autonomous robots or autonomous vehicles that must process massive amounts of data within 1/1,000 of a second, and such use is increasing significantly.
- the inventor of the present disclosure intends to provide a specialized reasoning function to various electronic devices to which an edge device can be applied with low power and low cost by using artificial neural network model technology.
- The inventor of the present disclosure recognized that, since the hardware performance of the various electronic devices differs and the performance and characteristics of the required inference functions also differ, the artificial neural network technology applicable to each electronic device differs as well and should be optimized in consideration of the characteristics of each device.
- The inventors of the present disclosure have carried out extensive research and development in order to develop an independent, low-power, low-cost neural network processing unit that can be optimized for each of various electronic devices.
- The inventor of the present disclosure recognized that it is necessary to develop an independent neural network processing unit in which processing elements optimized for inference are implemented, in order to optimize an artificial neural network inference technology that can be embedded in each electronic device and operate independently.
- The inventor of the present disclosure recognized that a central processing unit (CPU), which can be embedded in various electronic devices, is optimized for serial operation and is not well suited to the artificial neural network operation method, which processes massive data in parallel, so inference of an artificial neural network model on a CPU is inefficient.
- It was also recognized that a graphics processing unit (GPU), which can be embedded in various electronic devices, is optimized for image processing and is relatively more advantageous than a central processing unit for artificial neural network model inference; however, because it is not optimized for artificial neural network model inference, there is a cost problem in applying it to various electronic devices, a problem of excessive power consumption, and a high manufacturing cost.
- The inventor of the present disclosure recognized that most electronic devices have difficulty embedding artificial neural network model inference technology due to problems such as power consumption, heat generation, memory, computation speed, and cost, and recognized that it is necessary to develop a neural network processing unit that can improve on these problems.
- an object of the present disclosure is to provide an independent, low-power, low-cost neural network processing unit capable of improving the above-described problems.
- There is also a neuromorphic analog artificial neural network integrated circuit that mimics the human brain.
- Such a neuromorphic integrated circuit is an analog multiplier array composed of NOR logic; it has the advantage of being implemented with two simple transistors and of providing an inference function at low power using analog voltages, but it is sensitive to various noise sources such as electromagnetic interference (EMI) and requires analog-to-digital conversion (ADC) circuitry at its interfaces.
- Another problem to be solved by the present disclosure is to provide an independent, low-power, low-cost neural network processing unit including processing elements implemented digitally rather than analog, which can improve the above-described problems.
- the inventors of the present disclosure have recognized that there is a need for a standalone, low-power, low-cost neural network processing unit that can efficiently perform at least one or more specialized artificial neural network model inference.
- The inventor of the present disclosure also recognized the need for an independent, low-power, low-cost neural network processing unit configured to efficiently perform not only the above-described inference operation of an artificial neural network model specialized for one specific function, but also a plurality of inference operations of different artificial neural network models specialized for different functions.
- It was recognized that, in a neural network processing unit that can be embedded in various electronic devices, some processing elements can be allocated to the inference operation of a first artificial neural network model, another part to the inference operation of a second artificial neural network model, and the rest stopped, so that the resources of the neural network processing unit are optimized while power consumption is minimized. That is, it was also recognized that a neural network processing unit is needed that can, for example, perform speech recognition inference and gesture inference, or infer different image data input from a plurality of cameras.
- Another problem to be solved by the present disclosure is to provide a neural network processing unit capable of inferring at least one or more specialized artificial neural network models capable of improving the above-described problems and reducing power consumption.
- the inventors of the present disclosure have recognized the fact that it is necessary to minimize the data size of the trained artificial neural network model in order to implement a standalone low-power and low-cost neural network processing unit.
- each trained artificial neural network model may provide a respective specialized inference function.
- each trained artificial neural network model may have a significant amount of weight data, input data, and calculation values.
- the operation value may be referred to as node data or feature map data.
- For example, the weight data of an artificial neural network model with the VGG16 deep learning architecture that the inventor of the present disclosure trained to recognize 30 specific keywords was 550 MB in size.
- The image data from the cameras of an autonomous vehicle to be processed over one hour of autonomous driving may be 3 TB or more.
- the inventor of the present disclosure recognized the problem that the memory size of the neural network processing unit and the number of processing elements should be exponentially increased for a massive amount of artificial neural network model reasoning operation.
- The inventor of the present disclosure recognized the problem that, as the number of processing elements and the size of the memory are increased, the physical size of the neural network processing unit grows against semiconductor limitations, resulting in increased power consumption, increased heat generation, and lowered computation speed.
- The inventors of the present disclosure recognized that the number of operation bits of the plurality of processing elements must be adjustable for the implementation of a standalone, low-power, low-cost neural network processing unit. That is, it was recognized that the operation of a specific layer of a specific artificial neural network model can be set as an 8-bit operation while the operations of other layers are set as 2-bit operations, reducing the amount of computation and the power consumption while minimizing the deterioration of inference accuracy.
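- The following sketch illustrates the idea of assigning different operation bit widths to different layers (for example, 8-bit for one layer and 2-bit for another); the uniform quantization scheme and the layer names are assumptions made only for illustration.

```python
import numpy as np

def quantize_uniform(x: np.ndarray, bits: int) -> np.ndarray:
    """Uniform symmetric quantization of a tensor to the given number of bits."""
    levels = 2 ** (bits - 1) - 1
    max_abs = np.max(np.abs(x))
    scale = max_abs / levels if max_abs > 0 else 1.0
    return np.round(x / scale).clip(-levels, levels) * scale

# Hypothetical per-layer bit widths: 8-bit where accuracy is sensitive, fewer bits elsewhere.
layer_bits = {"conv1": 8, "conv2": 2, "fc": 4}
weights = {name: np.random.randn(16, 16) for name in layer_bits}
quantized = {name: quantize_uniform(w, layer_bits[name]) for name, w in weights.items()}
```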
- Another problem to be solved by the present disclosure is to provide a lightweight artificial neural network model capable of improving the above-described problems, and to provide a neural network processing unit configured to compute the lightweight artificial neural network model, and an operating method thereof.
- Another problem to be solved by the present disclosure is to provide a neural network processing unit configured to use minimal power and memory by lightening an artificial neural network model under optimal conditions so that a specific function can be inferred with a predetermined accuracy or more, and to provide an operating method thereof.
- Another problem to be solved by the present disclosure is to provide a neural network processing unit, and an operating method thereof, including an artificial neural network model in which the number of bits of the weight data, input data, or operation values of at least one layer is reduced in consideration of at least one of inference accuracy, computational amount, memory, and power consumption for each layer of the trained artificial neural network model.
- Another problem to be solved by the present disclosure is to provide a neural network processing unit including an artificial neural network model in which the number of bits of the weight data, input data, or operation values of at least one layer is reduced, with a relative priority set among the conditions when at least one of inference accuracy, computational amount, memory, and power consumption is considered for each layer of the trained artificial neural network model.
- another object of the present disclosure is to provide a method of reducing the weight of an artificial neural network model by setting a target power consumption of the neural network processing unit so that the power consumption of the neural network processing unit is equal to or less than the target power consumption.
- Another problem to be solved by the present disclosure is to set the target inference speed of the neural network processing unit to provide a method of reducing the weight of the artificial neural network model so that the inference speed of the neural network processing unit is greater than or equal to the target inference speed.
- Another problem to be solved by the present disclosure is to set the memory size of the neural network processing unit to provide a method of reducing the weight of an artificial neural network model in which memory use can be efficient during inference of the neural network processing unit.
- Another problem to be solved by the present disclosure is to provide a method of lightening an artificial neural network model by setting a maximum MAC limit for one inference of the neural network processing unit, so that the neural network processing unit reduces the amount of computation when performing an inference operation.
- Another problem to be solved by the present disclosure is to provide a neural network processing unit configured to optimize the number of bits of values of a plurality of processing elements for each layer in consideration of the characteristics of each layer of the artificial neural network model applied to the neural network processing unit.
- Another problem to be solved by the present disclosure is to provide a neural network processing unit, configured such that the number of bits of values of a plurality of processing elements is equal to or less than the number of bits of weight data of a lightweight artificial neural network model.
- the inventor of the present disclosure recognized the fact that various problems occur when providing a cloud-type artificial intelligence inference recognition function configured to always operate to various electronic devices.
- the response speed may be slow depending on the cloud server state, and the service may not be provided depending on the state of the communication network.
- the electronic device when only the cloud-based voice recognition function is provided to the electronic device, the electronic device may be connected to the cloud-based artificial intelligence voice recognition service through a communication network to receive the voice recognition function. That is, when only the cloud-based voice recognition inference function is always provided to the electronic device, the electronic device must be connected to the big data-based cloud artificial intelligence voice recognition service in real time. This connection method is inefficient. In more detail, most of the acoustic data transmitted by the electronic device through the communication network for 24 hours may be ambient noise. In addition, the noise data transmitted in this way increases the traffic of the communication network, wastes unnecessary power, and transmits unnecessary queries to the server, which can cause problems in delaying the response speed of the cloud AI voice recognition service. In particular, when the response speed of voice recognition is delayed, the user's voice recognition satisfaction may be reduced. In particular, an electronic device operated by a battery has a problem in that the operation time of the electronic device may be significantly reduced due to unnecessary power consumption.
- An object of the present disclosure is to provide a neural network processing unit, and a corresponding artificial neural network model, that recognizes a user's voice command or keyword and, according to the recognized voice command or keyword, controls a function of the electronic device or controls the power of specific components of the electronic device, thereby reducing power consumption of the electronic device that would otherwise be wasted unnecessarily.
- Another problem to be solved by the present disclosure is to provide a low-power, low-cost neural network processing unit having a lightweight AI speech recognition model in consideration of the processing power of the neural network processing unit.
- Another problem to be solved by the present disclosure is to provide a low-power, low-cost neural network processing unit having a lightweight AI sound recognition model in consideration of the processing power of the neural network processing unit.
- Another problem to be solved by the present disclosure is to provide a low-power, low-cost neural network processing unit having a lightweight AI keyword recognition model in consideration of the processing power of the neural network processing unit.
- Another problem to be solved by the present disclosure is to provide a low-power, low-cost neural network processing unit having a lightweight AI event recognition model in consideration of the processing power of the neural network processing unit.
- Another problem to be solved by the present disclosure is to provide a neural network processing unit in which a plurality of artificial neural network models, among a lightweight AI speech recognition model, AI sound recognition model, AI keyword recognition model, and AI event recognition model, are embedded in consideration of the processing power of the neural network processing unit, and to provide a method of allocating the processing elements and memory resources of the neural network processing unit so that it can simultaneously infer at least a plurality of artificial neural network models.
- Another problem to be solved by the present disclosure is to provide an electronic device having a lightweight AI keyword recognition model and a neural network processing unit that infers it, to control the power mode of the electronic device according to keyword recognition by the artificial neural network model, and to control the connection of the electronic device with a cloud artificial intelligence voice recognition service through a communication network, thereby significantly reducing unnecessary communication network traffic, unnecessary power consumption, and unnecessary queries.
- the inventors of the present disclosure paid attention to a convolutional neural network in order to increase the accuracy of keyword recognition.
- the convolutional neural network lacks the ability to infer continuity of words, but has the advantage of providing relatively good accuracy in inferring image similarity.
- the inventor of the present disclosure has researched a technique for imaging voice data of a keyword, and specifically, a technique for converting acoustic data corresponding to a specific keyword into a two-dimensional image.
- Another problem to be solved by the present disclosure is to provide a neural network processing unit in which an AI keyword image recognition model trained to recognize imaged keywords is embedded.
- In order to increase the accuracy of keyword recognition, the inventor of the present disclosure recognized that the recognition rate of the AI keyword recognition model can be improved when the training data of the AI keyword recognition model is generated in consideration of the learning characteristics of the AI keyword recognition model.
- Another problem to be solved by the present disclosure is to provide training data of an AI keyword image recognition model trained to recognize imaged keywords.
- A neural network processing unit according to an example of the present disclosure includes a processing element array, an NPU memory system configured to store at least some data of an artificial neural network model processed in the processing element array, and an NPU scheduler configured to control the processing element array and the NPU memory system based on structure data of the artificial neural network model or artificial neural network data locality information.
- the processing element array may include a plurality of processing elements configured to perform a MAC operation.
- the NPU scheduler may be further configured to control the read and write order of the processing element array and the NPU memory system.
- the NPU scheduler may be further configured to control the processing element array and the NPU memory system by analyzing the structural data of the artificial neural network model or the artificial neural network data locality information.
- The NPU scheduler may further be provided with node data of each layer of the artificial neural network model, arrangement structure data of the layers, and weight data of the connection networks connecting the nodes of each layer.
- the NPU scheduler may be further configured to schedule the operation order of the artificial neural network model based on the structural data of the artificial neural network model or the artificial neural network data locality information.
- the NPU scheduler may be configured to schedule an operation order of a plurality of processing elements included in the processing element array based on the arrangement structure data of layers of the artificial neural network model among the structural data of the artificial neural network model.
- the NPU scheduler may be configured to access a memory address value in which node data of a layer of an artificial neural network model and weight data of a connection network are stored based on structural data of the artificial neural network model or locality information of artificial neural network data.
- the NPU scheduler may be configured to control the NPU memory system and the processing element array so that operations are performed in a set scheduling order.
- The NPU scheduler may be configured to schedule the processing order based on the structure data of the artificial neural network model or the artificial neural network data locality information, from the input layer to the output layer of the artificial neural network.
- the NPU scheduler may be configured to improve the memory reuse rate by controlling the NPU memory system by utilizing the scheduling sequence based on the structural data of the artificial neural network model or the artificial neural network data locality information.
- the NPU scheduler may be configured to reuse a memory address value in which the first operation value of the first scheduling is stored as a memory address value corresponding to the node data of the second layer of the second scheduling that is the next scheduling of the first scheduling.
- the NPU scheduler may be configured to reuse a value of a memory address in which an operation result is stored in a subsequent operation.
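- A simplified sketch, with assumed data structures rather than the actual scheduler, of how an operation order derived from the layer arrangement structure data can let the memory address holding one scheduling step's operation result be reused as the input address of the next step:

```python
# Minimal illustration: schedule layers in order and reuse the buffer that holds
# the previous step's operation result as the input of the next step.
layers = ["input", "conv1", "conv2", "fc", "output"]   # arrangement structure data (assumed)

current, nxt = "ping", "pong"                           # two reusable memory regions

schedule = []
for step, layer in enumerate(layers):
    schedule.append({"step": step, "layer": layer,
                     "read_from": current, "write_to": nxt})
    current, nxt = nxt, current                         # output buffer becomes next input

for entry in schedule:
    print(entry)
```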
- the NPU memory system may include static memory.
- the NPU memory system may include at least one of SRAM, MRAM, STT-MRAM, eMRAM, HBM, and OST-MRAM.
- An edge device according to an example of the present disclosure includes a central processing unit, a main memory system configured to store an artificial neural network model, a system bus that controls communication between the central processing unit and the main memory system, and a neural network processing unit comprising a processing element array, an NPU memory system, an NPU scheduler configured to control the processing element array and the NPU memory system, and an NPU interface, wherein the NPU interface is configured to communicate with the central processing unit through the system bus, and the NPU interface may be configured to communicate directly with the main memory system for data related to the artificial neural network model.
- Edge devices include mobile phones, smartphones, artificial intelligence speakers, digital broadcasting terminals, navigation devices, wearable devices, smart watches, smart refrigerators, smart TVs, digital signage, VR devices, AR devices, AI CCTV, AI robot vacuum cleaners, tablets, laptop computers, autonomous vehicles, autonomous drones, autonomous bipedal robots, autonomous quadrupedal robots, autonomous mobility devices, artificial intelligence robots, PDAs, and PMPs.
- the NPU memory system of the neural network processing unit may be configured such that the read/write speed of the inference operation of the artificial neural network model is relatively faster than that of the main memory system, and consumes relatively less power.
- the neural network processing unit may be configured to improve the memory reuse rate of the NPU memory system, based on the structural data of the artificial neural network model or the artificial neural network data locality information.
- the neural network processing unit may be configured to acquire data of at least one of a number of memories, a memory type, a data transfer rate, and a memory size of the main memory system.
- the neural network processing unit controls the reuse of data stored inside the NPU memory system based on the structural data of the artificial neural network model or the artificial neural network data locality information, and the neural network processing unit is configured not to request memory access to the main memory system when data is reused.
- the NPU memory system does not include DRAM, and the NPU memory system may include a static memory configured to have relatively faster read and write speeds and relatively less power consumption than the main memory system.
- the NPU memory system may be configured to control scheduling by comparing the data size of the artificial neural network model to be called from the main memory system and the memory size of the NPU memory system.
- A neural network processing unit according to another example includes a processing element array, an NPU memory system configured to store an artificial neural network model processed in the processing element array, and an NPU scheduler configured to control the processing element array and the NPU memory system based on structure data of the artificial neural network model or artificial neural network data locality information, wherein the processing element array is configured to perform MAC operations, and the processing element array may be configured to quantize and output the MAC operation results.
- a first input of each processing element of the array of processing elements may be configured to receive a variable value, and a second input of each processing element of the array of processing elements may be configured to receive a constant value.
- the processing element may be configured to include a multiplier, an adder, an accumulator and a bit quantization unit.
- The NPU scheduler may be configured to recognize reusable variable values and reusable constant values based on the structure data of the artificial neural network model or the artificial neural network data locality information, and to control the NPU memory system so that memory is reused for the reusable variable values and the reusable constant values.
- The processing element array may be configured to reduce the number of bits of its operation values in consideration of the MAC operation characteristics and power consumption characteristics of the processing element array.
- the NPU memory system may be a low-power memory system configured to reuse a specific memory address in which weight data is stored in consideration of the data size and operation step of the artificial neural network model.
- the NPU scheduler stores the MAC operation value of the neural network model according to the scheduling order in a specific memory address of the NPU memory system, and the specific memory address in which the MAC operation value is stored may be input data of the MAC operation of the next scheduling order.
- The NPU system memory may be configured to preserve the weight data stored in the NPU system memory while the inference operation continues.
- the number of updates of the memory address in which the input data of the first input of each processing element of the processing element array is stored may be greater than the number of updates of the memory address in which the input data of the second input is stored.
- The NPU system memory may be configured to reuse the MAC operation values stored in the NPU system memory while the inference operation continues.
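- A behavioral sketch of a single processing element as described above (multiplier, adder, accumulator, and bit quantization unit); the bit widths and the clipping behavior are illustrative assumptions.

```python
class ProcessingElement:
    """Behavioral model of one MAC processing element with a bit quantization unit."""

    def __init__(self, out_bits: int = 8):
        self.acc = 0                     # accumulator
        self.out_bits = out_bits

    def mac(self, variable_in: int, constant_weight: int) -> None:
        """Multiplier and adder: accumulate the product of the two inputs."""
        self.acc += variable_in * constant_weight

    def read_quantized(self) -> int:
        """Bit quantization unit: clip the accumulated value to the output bit width."""
        lo, hi = -(2 ** (self.out_bits - 1)), 2 ** (self.out_bits - 1) - 1
        return max(lo, min(hi, self.acc))

pe = ProcessingElement(out_bits=8)
for x, w in [(3, 5), (-2, 7), (4, 1)]:   # first input: variable value, second input: constant value
    pe.mac(x, w)
print(pe.read_quantized())
```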
- A neural network processing unit according to another example includes a processing element array comprising a plurality of processing elements and a plurality of register files, an NPU memory system configured to store an artificial neural network model processed in the processing element array, and an NPU scheduler configured to control the processing element array and the NPU memory system based on structure data of the artificial neural network model or artificial neural network data locality information.
- the memory size of each of the plurality of register files is relatively smaller than the memory size of the NPU memory system, and the maximum transfer rate of each of the plurality of register files may be relatively faster than the maximum transfer rate of the NPU memory system.
- Each of the plurality of register files may be configured with a memory size at which its maximum transfer rate is relatively faster than that of the NPU memory system.
- The memory size of each of the plurality of register files is relatively smaller than the memory size of the NPU memory system, and the power consumption of each of the plurality of register files at the same transfer rate may be relatively less than the power consumption of the NPU memory system at the same transfer rate.
- Each of the plurality of register files may be configured with a memory size that is relatively smaller than that of the NPU memory system, based on power consumption at the same transfer rate.
- Each of the plurality of register files may be configured as a memory having a relatively faster maximum transfer rate than the NPU memory system, and having a relatively smaller power consumption based on the same transfer rate.
- The NPU memory system may further include a first memory, a second memory, and a third memory having a hierarchical structure, and the NPU scheduler may control the first memory, the second memory, and the third memory based on the hierarchical structure, using the structure data or artificial neural network data locality information of the artificial neural network model running in the neural network processing unit, to improve the memory reuse rate of the NPU memory system.
- the first memory may be configured to communicate with the second memory and the third memory
- the second memory and the third memory may be configured to communicate with the plurality of processing elements and the plurality of register files.
- the NPU memory system may be configured to have a plurality of memory hierarchical structures optimized for memory reuse.
- the NPU scheduler may be configured to determine the size of data for each scheduling order, and to sequentially store data for each scheduling order within the available limit of the first memory.
- the NPU scheduler may be configured to selectively store some of the data stored in the first memory in one of the second memory and the third memory by comparing the memory reuse rates.
- a memory reuse rate of data stored in the second memory may be higher than a memory reuse rate of data stored in the third memory.
- The NPU scheduler may be configured to delete the duplicate data in the first memory when the corresponding data is stored in the second memory.
- Data stored in the third memory may be configured to have reusable variable characteristics.
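- A sketch, under assumed sizes and reuse counts, of placing data into such a hierarchy so that more frequently reused data resides in the faster second memory and less reused data in the third memory:

```python
# Hypothetical tensors with (size in bytes, expected reuse count per inference).
tensors = {"weights_l1": (4096, 12), "featmap_l1": (8192, 2),
           "weights_l2": (2048, 12), "featmap_l2": (4096, 1)}

second_memory_capacity = 6144     # assumed capacity of the faster second memory
second_memory, third_memory = {}, {}

# Place the most frequently reused tensors in the second memory first.
used = 0
for name, (size, reuse) in sorted(tensors.items(), key=lambda kv: -kv[1][1]):
    if used + size <= second_memory_capacity:
        second_memory[name] = size
        used += size
    else:
        third_memory[name] = size

print("second memory:", second_memory)
print("third memory:", third_memory)
```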
- An edge device according to another example includes a microphone configured to sense acoustic data, a camera configured to sense image data, and a neural network processing unit configured to perform at least two different inference operations, wherein the neural network processing unit is configured to drive a trained AI keyword recognition model to infer a keyword based on the acoustic data, and may be configured to drive a trained AI gesture recognition model to infer a gesture based on the image data in response to the keyword inference result.
- The edge device may further include a central processing unit and a power control unit; the central processing unit instructs entry into a first mode when there is no input for a predetermined time, and the power control unit may be configured to supply power to the microphone and cut off power to the camera in the first mode.
- the neural network processing unit may be configured to stop the inference operation of the AI gesture recognition model in the first mode.
- the central processing unit may receive the inference result of the AI keyword recognition model of the neural network processing unit and instruct it to enter the second mode, and the power control unit may be configured to supply power to the camera in the second mode.
- the neural network processing unit may be configured to perform an inference operation of the AI gesture recognition model in the second mode.
- the neural network processing unit may be a standalone neural network processing unit.
- Image data and sound data including privacy data may be configured to be deleted after an inference operation by the neural network processing unit.
- the AI gesture recognition model may be configured to drive in response to an inference result of the AI keyword recognition model of the neural network processing unit.
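- A control-flow sketch of the two-mode behavior described above: keyword inference on acoustic data is always on, while camera power and gesture inference are enabled only after a keyword is recognized. All class, function, and model names here are placeholders, not the actual models of the disclosure.

```python
class PowerControl:
    def __init__(self): self.powered = set()
    def power_on(self, name): self.powered.add(name)
    def power_off(self, name): self.powered.discard(name)

def run_step(sound, image, power_ctrl, infer_keyword, infer_gesture):
    """One cycle: keyword inference is always on; the camera is powered and
    gesture inference runs only in response to a recognized keyword."""
    keyword = infer_keyword(sound)                 # always-on AI keyword recognition model
    if keyword is None:
        power_ctrl.power_off("camera")             # first mode: camera power cut off
        return keyword, None
    power_ctrl.power_on("camera")                  # second mode: camera powered
    gesture = infer_gesture(image)                 # AI gesture recognition model
    return keyword, gesture

# Toy usage with placeholder inference functions (assumptions for illustration only).
pc = PowerControl()
print(run_step("hi npu", "frame", pc,
               lambda s: "wake" if "npu" in s else None,
               lambda i: "swipe"))
```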
- An edge device according to another example includes an input unit configured to provide a plurality of kinds of sensed data and a neural network processing unit configured to drive a plurality of artificial neural network models, wherein a first artificial neural network model among the plurality of artificial neural network models is an artificial neural network model that always operates, and a second artificial neural network model among the plurality of artificial neural network models may be an artificial neural network model configured to operate only under preset conditions.
- the second artificial neural network model may be configured to be driven or not according to the inference result of the first artificial neural network model.
- the neural network processing unit may further include an NPU scheduler, and the NPU scheduler may be configured to determine a scheduling order based on structural data of a plurality of artificial neural network models or artificial neural network data locality information.
- The neural network processing unit may further include a plurality of processing elements, and the NPU scheduler may determine a scheduling order based on the node data of each layer of the plurality of artificial neural network models, the data size of the weights of each connection network, and the structure data or artificial neural network data locality information of the plurality of artificial neural network models, and may allocate processing elements according to the determined scheduling order.
- the neural network processing unit may further include an NPU memory system, and the NPU scheduler may be configured to set the priority of data stored in the NPU memory system.
- A neural network processing unit according to another example includes a processing element array, an NPU memory system configured to store an artificial neural network model processed in the processing element array, an NPU scheduler configured to control the processing element array and the NPU memory system, and an NPU batch mode configured to utilize the artificial neural network model to infer a plurality of different input data.
- a plurality of different input data may be a plurality of image data.
- NPU batch mode may be configured to increase the motion frame by combining a plurality of image data.
- The NPU batch mode may be configured to reuse the weight data of the artificial neural network model to perform inference operations on a plurality of different input data.
- the NPU batch mode may be configured to convert a plurality of input data into one continuous data.
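- A sketch of the batch-mode idea: the same weight data is loaded once and reused across inference operations on multiple different inputs, for example frames from several cameras; the shapes and the simple matrix-multiply model are assumptions for illustration.

```python
import numpy as np

def batch_infer(frames, weights):
    """Reuse one set of weight data for several different inputs instead of
    reloading the weights for every input."""
    w = weights                      # loaded once, kept resident for the whole batch
    return [np.maximum(frame @ w, 0.0) for frame in frames]

camera_frames = [np.random.rand(1, 64) for _ in range(4)]   # inputs from 4 cameras
shared_weights = np.random.rand(64, 10)
outputs = batch_infer(camera_frames, shared_weights)
```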
- A neural network processing unit according to another example includes at least one processing element, an NPU memory system capable of storing an artificial neural network model that can be inferred by the at least one processing element, and an NPU scheduler configured to control the at least one processing element and the NPU memory system based on structure data of the artificial neural network model or artificial neural network data locality information.
- the NPU scheduler may be configured to further receive structural data of the neural network processing unit.
- the structure data of the neural network processing unit may include at least one of a memory size of the NPU memory system, a hierarchical structure of the NPU memory system, data on the number of at least one processing element, and an operator structure of the at least one processing element.
- A neural network processing unit according to another example includes an artificial neural network model trained to perform an inference function, a processing element array configured to infer input data using the artificial neural network model, an NPU memory system configured to communicate with the processing element array, and an NPU scheduler configured to control the processing element array and the NPU memory system, and the artificial neural network model may be optimized by at least one artificial-intelligence-based optimization algorithm in consideration of the memory size of the NPU memory system.
- the artificial neural network model may be optimized in the neural network processing unit via an optimization system configured to communicate with the neural network processing unit.
- the artificial neural network model may be optimized based on at least one of structural data of the artificial neural network model or locality information of artificial neural network data and structural data of a processing unit.
- A quantization algorithm may be applied to the artificial neural network model.
- A pruning algorithm may be applied, then a quantization algorithm, and then a retraining algorithm.
- Alternatively, a quantization algorithm may be applied, then a retraining algorithm, and then a model compression algorithm.
- the artificial neural network model may include a plurality of layers, each of the plurality of layers may include weight data, and each weight data may be pruned.
- At least one or more weight data having a relatively larger data size among weight data may be preferentially pruned.
- At least one or more weight data having a relatively larger computational amount may be preferentially pruned.
- the artificial neural network model includes a plurality of layers, each of the plurality of layers includes node data and weight data, and the weight data may be quantized, respectively.
- Node data may be quantized.
- Node data and weight data of at least one layer may be quantized, respectively.
- At least one or more weight data having a relatively larger size among weight data may be preferentially quantized.
- At least one or more node data having a relatively larger size among node data may be preferentially quantized.
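- A sketch, with assumed tensors and thresholds, of the size-prioritized lightweighting described above, where the layers with larger weight data are pruned first:

```python
import numpy as np

def prune_small_weights(w: np.ndarray, ratio: float = 0.5) -> np.ndarray:
    """Zero out the smallest-magnitude weights (magnitude pruning)."""
    threshold = np.quantile(np.abs(w), ratio)
    return np.where(np.abs(w) < threshold, 0.0, w)

layers = {"fc1": np.random.randn(512, 512), "fc2": np.random.randn(128, 128)}

# Prune layers in order of decreasing weight-data size (largest first).
# Quantization could be prioritized by the same ordering.
for name in sorted(layers, key=lambda n: -layers[n].size):
    layers[name] = prune_small_weights(layers[name])
```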
- the array of processing elements may include at least one processing element, wherein the at least one processing element may be configured to compute node data and weight data, each having a quantized number of bits.
- the processing element array may further include a bit quantization unit, and the number of bits of output data of the processing element array may be configured to be quantized by the bit quantization unit.
- The artificial neural network model may be a quantized artificial neural network model, and the NPU memory system may be configured to store the data of the artificial neural network model in correspondence with the number of bits of the plurality of quantized weight data and the number of bits of the plurality of quantized node data of the artificial neural network model.
- The processing element array may be configured to receive the plurality of quantized weight data and the plurality of quantized node data from the NPU memory system in correspondence with the number of bits of the plurality of quantized weight data and the number of bits of the plurality of quantized node data of the artificial neural network model.
- A neural network processing unit according to another example includes an artificial neural network model, a plurality of processing elements configured to process the artificial neural network model, an NPU memory system configured to supply data of the artificial neural network model to the processing element array, and an NPU scheduler configured to control the processing element array and the NPU memory system, wherein the artificial neural network model may be quantized by at least one grouping policy.
- the at least one grouping policy may use at least one of an operation order of the artificial neural network model, a computational amount size of the artificial neural network model, and a memory usage size of the artificial neural network model as a criterion for determining the at least one grouping policy.
- the at least one grouping policy is a plurality of grouping policies, and in each of the plurality of grouping policies, an order of the grouping policies may be determined by a respective weight value.
- The artificial neural network model may be quantized in the order of the data groups determined according to the at least one grouping policy.
- A quantization-aware training algorithm may be applied to the artificial neural network model.
- When the artificial neural network model is quantized according to the grouping policy, it may be quantized with reference to the inference accuracy of the artificial neural network model that includes the data group being quantized.
- the number of bits of the artificial neural network model including the quantized data group may be reduced.
- when the estimated inference accuracy of the artificial neural network model including the quantized data group falls below the preset target inference accuracy, the quantization can be terminated by restoring the data group being quantized to the state at which the target inference accuracy was still satisfied.
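- The grouping-policy quantization described above can be pictured with the following hedged sketch (the helpers evaluate_accuracy and quantize_group are hypothetical): data groups are visited in the order given by the grouping policy, each group's bit width is reduced step by step, and when the estimated inference accuracy falls below the target, the group is restored to the last bit width that still met the target and quantization terminates.

```python
def quantize_by_grouping_policy(groups, evaluate_accuracy, quantize_group,
                                target_accuracy, min_bits=2, start_bits=8):
    """groups: list of data groups already ordered by the grouping policy
    (e.g. by operation order, computational amount, or memory usage).
    evaluate_accuracy(): returns estimated inference accuracy of the model.
    quantize_group(group, bits): applies bits-wide quantization to the group."""
    bit_widths = {id(g): start_bits for g in groups}
    for group in groups:                       # follow the grouping-policy order
        bits = bit_widths[id(group)]
        while bits > min_bits:
            quantize_group(group, bits - 1)    # try reducing the number of bits
            if evaluate_accuracy() < target_accuracy:
                quantize_group(group, bits)    # restore last acceptable bit width
                return bit_widths              # terminate quantization
            bits -= 1
            bit_widths[id(group)] = bits
    return bit_widths
```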
- an edge device connected to the neural network processing unit is further included, and the quantization may be performed according to a grouping policy order determined based on data of at least one of the computational processing power of the edge device, the computational processing power of the neural network processing unit, the available memory bandwidth of the edge device, the available memory bandwidth of the neural network processing unit, the maximum memory bandwidth of the edge device, the maximum memory bandwidth of the neural network processing unit, the maximum memory latency of the edge device, and the maximum memory latency of the neural network processing unit.
- the neural network processing unit includes an artificial neural network model, a processing element array configured to process the artificial neural network model, an NPU memory system configured to supply data of the artificial neural network model to the processing element array, and an NPU scheduler configured to control the processing element array and the NPU memory system, and node data of at least one layer or weight data of at least one layer of the artificial neural network model may be quantized based on the memory size of the NPU memory system.
- the NPU memory system may be configured to store the node data of at least one layer or the weight data of at least one layer of the artificial neural network model with their quantized numbers of bits.
- each input unit of the plurality of processing elements may be configured to operate in correspondence with information on the number of bits of the quantized input data.
- the edge device includes a main memory system and a neural network processing unit configured to communicate with the main memory system, the neural network processing unit comprising a processing element array, an NPU memory system, and an NPU scheduler configured to control the processing element array and the NPU memory system, and the quantized node data, quantized weight data, quantized weight kernel, and/or quantized feature map of the artificial neural network model stored in the main memory system and the NPU memory system may be quantized with reference to the memory size of the main memory system and the memory size of the NPU memory system.
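- One way to read the memory-size criterion above is sketched below (the helper and the candidate bit widths are assumptions for illustration, not the disclosed algorithm): the bit width of the weight data and feature maps is lowered until the quantized model fits within the memory size of the NPU memory system.

```python
def choose_bit_width(num_weight_params, num_feature_elements,
                     npu_memory_bytes, candidate_bits=(32, 16, 8, 4)):
    """Pick the largest bit width whose total weight + feature-map footprint
    fits in the NPU memory system; a simplified reading of memory-size-based
    quantization, not the disclosed method."""
    for bits in candidate_bits:                         # from widest to narrowest
        total_bytes = (num_weight_params + num_feature_elements) * bits // 8
        if total_bytes <= npu_memory_bytes:
            return bits
    return min(candidate_bits)

# example: 5M weights and 2M feature-map elements into a 4 MiB NPU memory system
print(choose_bit_width(5_000_000, 2_000_000, 4 * 1024 * 1024))  # -> 4
```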
- FIG. 1 is a schematic conceptual diagram illustrating an edge device that can be implemented with various modifications including a neural network processing unit that can be applied to examples of the present disclosure.
- FIG. 2 is a schematic conceptual diagram illustrating a neural network processing unit according to an example of the present disclosure.
- FIG. 3 is a schematic conceptual diagram illustrating one processing element of an array of processing elements that may be applied to an example of the present disclosure.
- FIG. 4 is a table schematically illustrating energy consumption per unit operation of a neural network processing unit.
- FIG. 5 is a schematic conceptual diagram illustrating an exemplary artificial neural network model that can be applied to examples of the present disclosure.
- FIG. 6 is a schematic conceptual diagram illustrating a neural network processing unit according to another example of the present disclosure.
- FIG. 7 is a schematic conceptual diagram illustrating a neural network processing unit according to another example of the present disclosure.
- FIG. 8 is a schematic conceptual diagram illustrating characteristics of a maximum transfer rate according to a memory size of an exemplary register file of a neural network processing unit and a memory size of an exemplary NPU memory system composed of SRAM according to another example of the present disclosure.
- FIG. 9 is a schematic conceptual diagram illustrating power consumption characteristics at the same transfer rate according to the memory size of an exemplary register file of a neural network processing unit and the memory size of an exemplary NPU memory system composed of SRAM according to another example of the present disclosure.
- FIG. 10 is a schematic conceptual diagram illustrating a neural network processing unit according to another example of the present disclosure.
- FIG. 11 is a schematic conceptual diagram illustrating an optimization system capable of optimizing an artificial neural network model that can be processed by a neural network processing unit according to an example of the present disclosure.
- FIG. 12 is a schematic conceptual diagram illustrating an edge device according to another example of the present disclosure.
- In interpreting the components, they are interpreted as including an error range even if there is no separate explicit description.
- When the positional relationship between two components is described with terms such as 'on', 'above', 'next to', or 'adjacent to', one or more other elements may be positioned between the two components unless 'directly' or 'immediately' is used.
- A reference to a device or layer being 'on' another device or layer includes a case where it is directly on the other device or layer as well as a case where another layer or device is interposed in between.
- FIG. 1 is a schematic conceptual diagram illustrating an edge device that can be implemented with various modifications including a neural network processing unit that can be applied to examples of the present disclosure.
- an edge device 1000 illustrates one example of various electronic devices that may be variously modified.
- the edge device 1000 includes an artificial neural network and may be, for example, a mobile phone, a smart phone, an artificial intelligence speaker, a digital broadcast terminal, a navigation device, a wearable device, a smart watch, a smart refrigerator, a smart TV, a digital signage, a VR device, an AR device, an artificial intelligence CCTV, an artificial intelligence robot vacuum cleaner, a tablet, a laptop computer, a self-driving car, a self-driving drone, a self-driving bipedal robot, a self-driving quadrupedal robot, an autonomous driving mobility device, an artificial intelligence robot, a personal digital assistant (PDA), or a portable multimedia player (PMP).
- the edge device 1000 includes a neural network processing unit 100 and may refer to various electronic devices that can be utilized for edge computing by using an artificial neural network model inferred by the neural network processing unit 100 .
- edge computing means an edge or a periphery where computing occurs, and may refer to a terminal that directly produces data or various electronic devices located close to the terminal.
- the neural network processing unit 100 may also be referred to as a neural processing unit (NPU).
- edge device 1000 is not limited to the above-described electronic devices.
- FIG. 1 is just one example of the edge device 1000 and shows various components that the edge device 1000 may include.
- examples according to the present disclosure are not limited thereto, and each component may be selectively included or excluded according to the purpose and configuration of the example. That is, some of the components shown in FIG. 1 may not be essential components in some cases, and each example may preferably include or exclude some of the components shown in FIG. 1 from the viewpoint of optimization.
- the edge device 1000 includes at least a neural network processing unit 100, and may selectively further include at least a portion of a wireless communication unit 1010, an input unit 1020, a sensing unit 1030, an output unit 1040, an interface unit 1050, a system bus 1060, a main memory system 1070, a central processing unit 1080, and a power control unit 1090. Also, the edge device 1000 may be configured to communicate with the cloud artificial intelligence service 1100 through the wireless communication unit 1010.
- the system bus 1060 is configured to control data communication of respective components of the edge device 1000 . That is, the system bus 1060 is a transportation system of the edge device 1000 .
- the system bus 1060 may be referred to as a computer bus. All components of the edge device 1000 may have a unique address, and the system bus 1060 may connect each component through the address.
- the system bus 1060 may process three types of data, for example. First, the system bus 1060 may process an address in which data is stored in the main memory system 1070 when data is transmitted. Second, the system bus 1060 may process meaningful data such as an operation result stored in a corresponding address. Third, the system bus 1060 may process address data and data flow, such as how to process data, when and where data should be moved. However, examples according to the present disclosure are not limited thereto.
- Various control signals generated by the central processing unit 1080 may be transmitted to corresponding components through the system bus 1060 .
- the wireless communication unit 1010 enables wireless communication between the edge device 1000 and the wireless communication system, between the edge device 1000 and another edge device, or between the edge device 1000 and the cloud artificial intelligence service 1100 . It may include one or more communication modules.
- the wireless communication unit 1010 may include at least one of a mobile communication module 1011 , a wireless Internet module 1012 , a short-range communication module 1013 , and a location information module 1014 .
- the mobile communication module 1011 of the wireless communication unit 1010 means a module for transmitting and receiving wireless data with at least one of a base station, an external terminal, and a server on a mobile communication network constructed according to technical standards or communication methods for mobile communication.
- the mobile communication module 1011 may be built-in or external to the edge device 1000 .
- the technical standards are, for example, Global System for Mobile communication (GSM), Code Division Multi Access (CDMA), Code Division Multi Access 2000 (CDMA2000), Enhanced Voice-Data Optimized or Enhanced Voice-Data Only (EVDO), Wideband CDMA (WCDMA), High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), Long Term Evolution (LTE), Long Term Evolution-Advanced (LTE-A), and Fifth Generation (5G).
- the wireless Internet module 1012 of the wireless communication unit 1010 means a module for wireless Internet access.
- the wireless Internet module 1012 may be built-in or external to the edge device 1000 .
- the wireless Internet module 1012 is configured to transmit and receive wireless data in a communication network according to wireless Internet technologies.
- Wireless Internet technologies are, for example, Wireless LAN (WLAN), Wireless-Fidelity (Wi-Fi), Wireless Fidelity (Wi-Fi) Direct, Digital Living Network Alliance (DLNA), Wireless Broadband (WiBro), and World Interoperability for Microwave Access (WiMAX).
- the wireless Internet module 1012 for performing wireless Internet access through a mobile communication network may be understood as a type of the mobile communication module 1011 .
- the short-range communication module 1013 of the wireless communication unit 1010 refers to a module for short-range communication.
- Short-range communication technologies are, for example, Bluetooth™, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wideband (UWB), ZigBee, Near Field Communication (NFC), Wi-Fi (Wireless-Fidelity), Wi-Fi Direct, and Wireless USB (Wireless Universal Serial Bus).
- another edge device is a wearable device such as a smart watch, smart glass, or HMD (head mounted display) capable of exchanging data with the edge device 1000 according to the present disclosure.
- examples according to the present disclosure are not limited thereto.
- the location information module 1014 of the wireless communication unit 1010 means a module for acquiring the location of the edge device 1000 .
- Location information technologies include, for example, a Global Positioning System (GPS) module or a Wireless Fidelity (Wi-Fi) module.
- the edge device 1000 may acquire the location of the edge device 1000 using data transmitted from a GPS satellite.
- when the edge device 1000 utilizes the Wi-Fi module, the location of the edge device 1000 may be obtained based on the data of the wireless access point (AP) that transmits wireless data to or receives wireless data from the Wi-Fi module.
- the edge device 1000 may be connected to the cloud artificial intelligence service 1100, and the edge device 1000 may request various types of artificial intelligence services from the cloud artificial intelligence service 1100 in the form of a query.
- for example, the edge device 1000 may transmit voice data such as “How is the weather today?” to the cloud artificial intelligence service 1100 through the wireless communication unit 1010, and the cloud artificial intelligence service 1100 may transmit the inference result of the received voice data to the edge device 1000 through the wireless communication unit 1010.
- the input unit 1020 may include various data input to the edge device 1000 or various components that provide data.
- the input unit 1020 may include a camera 1021 for inputting image data, a microphone 1022 for inputting sound data, a user input module 1023 for receiving data from a user, a proximity sensor 1024 for detecting a distance, an illuminance sensor 1025 for detecting an amount of ambient light, a radar 1026 for sensing a target object by emitting radio waves of a specific frequency, and a lidar 1027 for sensing an object by emitting a laser.
- the input unit 1020 may be configured to perform a function of providing at least one of image data, sound data, user input data, and distance data.
- the camera 1021 of the input unit 1020 may be a camera for image processing, gesture recognition, object recognition, event recognition, etc. inferred by the neural network processing unit 100 .
- the camera 1021 of the input unit 1020 may provide still image or moving image data.
- Image data of the camera 1021 of the input unit 1020 may be transmitted to the central processing unit 1080 .
- the central processing unit 1080 may be configured to transmit the image data to the neural network processing unit 100 .
- the central processing unit 1080 may perform image processing, and the processed image data may be transmitted to the neural network processing unit 100.
- the present invention is not limited thereto, and the system bus 1060 may also transmit image data to the neural network processing unit 100 .
- Image data of the camera 1021 of the input unit 1020 may be transmitted to the neural network processing unit 100 .
- the neural network processing unit 100 may be configured to transmit the inferred result to the central processing unit 1080 .
- inference operations such as image processing, gesture recognition, object recognition, event recognition, etc. may be performed according to the artificial neural network model operated in the neural network processing unit 100, and the inferred result will be transmitted to the central processing unit 1080.
- the present invention is not limited thereto, and the neural network processing unit 100 may transmit the inferred result to a component other than the central processing unit 1080 through the system bus 1060 .
- At least one camera 1021 of the input unit 1020 may be configured.
- the camera 1021 of the input unit 1020 may be a plurality of cameras that provide image data in front, rear, left, and right directions for autonomous driving of an autonomous vehicle.
- a vehicle indoor camera for detecting the condition of the indoor driver may be further included.
- the camera 1021 of the input unit 1020 may be a plurality of cameras having different angles of view in the smart phone.
- the camera 1021 of the input unit 1020 may be configured as at least one of a visible light camera, a near infrared camera, and a thermal imaging camera.
- the present invention is not limited thereto, and the camera 1021 may be configured as a composite image sensor configured to simultaneously detect visible light and near-infrared rays.
- the edge device 1000 may store the image data in the form of a batch mode and provide it to the neural network processing unit 100.
- the microphone 1022 of the input unit 1020 converts external sound data into electrical voice data and outputs it.
- the voice data may be output as analog data or digital data.
- Various noise removal algorithms for removing noise generated in the process of receiving external sound data may be implemented in the microphone 1022 .
- At least one microphone 1022 of the input unit 1020 may be configured.
- the plurality of microphones 1022 may be microphones disposed in each of two earphones positioned at both ears.
- Sound data of the microphone 1022 of the input unit 1020 may be transmitted to the central processing unit 1080 .
- the sound data may be transmitted to the neural network processing unit 100 through the system bus 1060 .
- the central processing unit 1080 may convert the sound data into the frequency domain by Fourier transform, and the converted sound data may be transmitted to the neural network processing unit 100 .
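- A minimal sketch of this kind of pre-processing, assuming NumPy and illustrative frame sizes (none of which are specified by the disclosure): the central processing unit converts frames of time-domain sound data into frequency-domain magnitudes before handing them to the neural network processing unit.

```python
import numpy as np

def sound_to_frequency_frames(samples, frame_size=512, hop=256):
    """Convert time-domain sound data into frequency-domain frames via FFT,
    as a pre-processing step before transfer to the neural network processing unit."""
    frames = []
    for start in range(0, len(samples) - frame_size + 1, hop):
        frame = samples[start:start + frame_size]
        spectrum = np.fft.rfft(frame * np.hanning(frame_size))  # windowed FFT
        frames.append(np.abs(spectrum))                          # magnitude spectrum
    return np.stack(frames) if frames else np.empty((0, frame_size // 2 + 1))

# example: one second of 16 kHz audio -> (frames, bins) array for the NPU
audio = np.random.randn(16000)
print(sound_to_frequency_frames(audio).shape)
```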
- the present invention is not limited thereto, and it is also possible to transmit the sound data to the neural network processing unit 100 through a component other than the central processing unit 1080 through the system bus 1060.
- the acoustic data of the microphone 1022 of the input unit 1020 may be transmitted to the neural network processing unit 100 .
- the neural network processing unit 100 may be configured to transmit the inferred result to the central processing unit 1080 .
- inference operations such as sound processing, keyword recognition, noise removal, sentence recognition, and translation into another language may be performed according to the artificial neural network model operated in the neural network processing unit 100, and the inferred result may be transmitted to the central processing unit 1080.
- the present invention is not limited thereto, and the neural network processing unit 100 may also pass the inferred result to components other than the central processing unit 1080, such as the power control unit 1090, the wireless communication unit 1010, the interface unit 1050, the output unit 1040, or the main memory system 1070.
- the user input module 1023 of the input unit 1020 may include, for example, a touch button, a push button, a touch panel, a mouse, a keyboard, and a touch pad.
- the neural network processing unit 100 may be configured to receive data from the user input module 1023 according to the artificial neural network model being operated, and to perform a corresponding reasoning operation.
- examples according to the present disclosure are not limited thereto.
- the user input module 1023 of the input unit 1020 is for receiving data from the user, and when data is input through the user input module 1023, the central processing unit 1080 may control the edge device 1000 to correspond to the input data.
- the user input module 1023 may include a mechanical input means, a button, a switch, and a touch input means.
- the touch input means may include a visual key displayed on the touch screen through software processing or a touch key disposed on a portion other than the touch screen.
- the touch screen may sense a touch input to the display module 1041 by using at least one of various touch methods such as a resistive method, a capacitive method, an infrared method, an ultrasonic method, and a magnetic field method.
- the touch screen may be configured to detect the position, area, pressure, and the like of the touch object.
- the capacitive touch screen may be configured to convert a change in pressure applied to a specific part or capacitance occurring in a specific part into an electrical input signal.
- the touch object may be a finger, a touch pen or a stylus pen, a pointer, or the like.
- the proximity sensor 1024 of the input unit 1020 means a sensor that detects the presence or absence of an object approaching the edge device 1000 or an object existing in the vicinity without mechanical contact using the force of an electromagnetic field or infrared rays.
- the proximity sensor 1024 includes a transmission type photoelectric sensor, a direct reflection type photoelectric sensor, a mirror reflection type photoelectric sensor, a high frequency oscillation type proximity sensor, a capacitive type proximity sensor, a magnetic type proximity sensor, an infrared proximity sensor, and the like.
- the neural network processing unit 100 may be configured to receive a signal from the proximity sensor 1024 according to an artificial neural network model to be operated, and to perform a corresponding reasoning operation.
- examples according to the present disclosure are not limited thereto.
- the illuminance sensor 1025 of the input unit 1020 refers to a sensor capable of detecting the amount of ambient light of the edge device 1000 by using a photodiode.
- the neural network processing unit 100 may be configured to receive a signal from the illuminance sensor 1025 according to the artificial neural network model being operated, and to perform a corresponding reasoning operation.
- examples according to the present disclosure are not limited thereto.
- the radar 1026 of the input unit 1020 may detect a signal reflected by an object by transmitting electromagnetic waves, and may provide data such as a distance and an angular velocity of the object.
- the edge device 1000 may be configured to include a plurality of radars 1026 .
- the radar 1026 may be configured to include at least one of a short range radar, a middle range radar, and a long range radar.
- the neural network processing unit 100 may be configured to receive data from the radar 1026 according to the artificial neural network model being operated, and to perform a corresponding reasoning operation.
- examples according to the present disclosure are not limited thereto.
- the lidar 1027 of the input unit 1020 may provide surrounding 3D spatial data by irradiating an optical signal in a predetermined manner and analyzing the optical energy reflected by the object.
- the edge device 1000 may be configured to include a plurality of lidars 1027 .
- the neural network processing unit 100 may be configured to receive data of the lidar 1027 according to the artificial neural network model being operated, and to perform a corresponding reasoning operation.
- examples according to the present disclosure are not limited thereto.
- the input unit 1020 is not limited to the above-described examples, and may be configured to further include at least one of an acceleration sensor, a magnetic sensor, a gravity sensor (G-sensor), a gyroscope sensor, a motion sensor, a fingerprint recognition sensor, an ultrasonic sensor, a battery gauge, a barometer, a hygrometer, a thermometer, a radiation sensor, a heat sensor, a gas detection sensor, and a chemical detection sensor.
- the neural network processing unit 100 of the edge device 1000 may be configured to receive data from the input unit 1020 according to the artificial neural network model being operated, and to perform a corresponding reasoning operation.
- the edge device 1000 may be configured to provide various input data input from the input unit 1020 to the neural network processing unit 100 to perform various reasoning operations. It is also possible for input data to be input to the neural network processing unit 100 after being pre-processed in the central processing unit 1080 .
- the neural network processing unit 100 may be configured to selectively receive input data of each of the camera 1021 , the radar 1026 , and the lidar 1027 , and infer surrounding environment data for autonomous driving.
- the neural network processing unit 100 may be configured to receive input data of the camera 1021 and the radar 1026 and infer surrounding environment data required for autonomous driving.
- the output unit 1040 is for generating an output related to the visual, auditory, or tactile sense, and may include at least one of a display module 1041, a sound output module 1042, a haptic module 1043, and an optical output module 1044.
- the display module 1041 may be a liquid crystal panel or an organic light emitting display panel including a plurality of pixel arrays.
- examples according to the present disclosure are not limited thereto.
- the sound output module 1042 may be a receiver of a phone, a speaker that outputs sound, a buzzer, or the like.
- the light output module 1044 may output an optical signal for notifying the occurrence of an event by using the light of the light source of the edge device 1000 . Examples of the generated event may be message reception, missed call, alarm, schedule notification, email reception, data reception through an application, and the like.
- the interface unit 1050 serves as a passage with all external devices connected to the edge device 1000 .
- the interface unit 1050 receives data from an external device, receives power and transmits it to each component inside the edge device 1000, or allows data inside the edge device 1000 to be transmitted to an external device.
- a wired/wireless headset port, an external charger port, a wired/wireless data port, a memory card port, a port for connecting a device equipped with an identification module, an audio I/O (Input/Output) port, a video I/O (Input/Output) port, an earphone port, etc. may be included in the interface unit 1050.
- the main memory system 1070 is a device for storing data under the control of the edge device 1000 .
- the main memory system 1070 may selectively include a volatile memory and a non-volatile memory.
- the volatile memory device may be a memory device that stores data only when power is supplied and loses stored data when power supply is cut off.
- the nonvolatile memory device may be a device in which data is stored even when power supply is interrupted.
- a program for the operation of the central processing unit 1080 or the neural network processing unit 100 may be stored, and input/output data may be temporarily stored.
- the main memory system 1070 may include at least one type of storage medium among a flash memory type, a hard disk type, a solid state disk type, a silicon disk drive (SDD) type, a multimedia card micro type, a card type memory (such as SD or XD memory), random access memory (RAM), dynamic random access memory (DRAM), high bandwidth memory (HBM), static random access memory (SRAM), magnetic random access memory (MRAM), spin-transfer torque magnetic random access memory (STT-MRAM), embedded magnetic random access memory (eMRAM), orthogonal spin transfer magnetic random access memory (OST-MRAM), phase change RAM (PRAM), ferroelectric RAM (FeRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, and an optical disk.
- Various artificial neural network models to be described later may be stored in a volatile and/or non-volatile memory device of the main memory system 1070. However, the present invention is not limited thereto. At least one of the artificial neural network models may be stored in the volatile memory of the neural network processing unit 100 according to a command of the edge device 1000 to provide a reasoning function.
- the central processing unit 1080 may control the overall operation of the edge device 1000 .
- the central processing unit 1080 may be a central processing unit (CPU), an application processor (AP), or a digital signal processing unit (DSP).
- the central processing unit 1080 may control the edge device 1000 or perform various commands.
- the central processing unit 1080 may provide or receive data necessary for the neural network processing unit 100 .
- the central processing unit 1080 may control various components connected to the system bus 1060 .
- the power control unit 1090 is configured to control the power of each component.
- the central processing unit 1080 may be configured to control the power control unit 1090 .
- the power control unit 1090 receives external power and internal power and supplies power to each component included in the edge device 1000 .
- the power control unit 1090 may include a battery. When control data is not provided from the central processing unit 1080 for a specific time, the power control unit 1090 may selectively cut off the supply of power to each component of the edge device 1000.
- the neural network processing unit 100 may be configured to operate at all times, infer a specific situation, and provide weather data to the central processing unit 1080 .
- the central processing unit 1080 may control the power control unit 1090 to supply power to a specific component of the edge device 1000 according to an inference result of the neural network processing unit 100 .
- the neural network processing unit 100 is configured to perform various artificial neural network inference operations.
- the neural network processing unit 100 is characterized in that it is configured to efficiently compute artificial neural network reasoning operations that are inefficient for the central processing unit 1080 to compute.
- the neural network processing unit 100 may call the artificial neural network model trained to make a specific inference from the main memory system 1070 .
- the neural network processing unit 100 will be described in more detail.
- FIG. 2 is a schematic conceptual diagram illustrating a neural network processing unit according to an example of the present disclosure.
- the neural network processing unit 100 is configured to include a processing element array 110 , an NPU memory system 120 , an NPU scheduler 130 , and an NPU interface 140 .
- the neural network processing unit 100 may include a processing element array 110, an NPU memory system 120 configured to store an artificial neural network model that can be inferred by the processing element array 110 or to store at least some data of the artificial neural network model, and an NPU scheduler 130 configured to control the processing element array 110 and the NPU memory system 120 based on the structural data or the artificial neural network data locality information of the artificial neural network model.
- the artificial neural network model may include structural data of the artificial neural network model or locality information of the artificial neural network data.
- the artificial neural network model may refer to an AI recognition model trained to perform a specific reasoning function.
- the NPU interface 140 may communicate with various components of the edge device 1000, such as the central processing unit 1080, the main memory system 1070, the input unit 1020, and the wireless communication unit 1010, through the system bus 1060.
- the central processing unit 1080 may instruct the operation of a specific artificial neural network model to the neural network processing unit 100 through the NPU interface 140 .
- the neural network processing unit 100 may call the data of the artificial neural network model stored in the main memory system 1070 through the NPU interface 140 to the NPU memory system 120 .
- the NPU interface 140 may transmit data provided from the camera 1021 and the microphone 1022 of the input unit 1020 to the neural network processing unit 100 .
- the neural network processing unit 100 may provide the inference result of the artificial neural network model to the central processing unit 1080 through the NPU interface 140 .
- the NPU interface 140 may be configured to perform data communication with various components capable of communicating with the neural network processing unit 100 .
- the neural network processing unit 100 is configured to communicate directly with the main memory system 1070 without going through the system bus 1060 . Therefore, the NPU interface has the effect of directly receiving various data of the artificial neural network model that can be stored in the main memory system 1070 .
- the edge device 1000 includes a central processing unit 1080, a main memory system 1070 configured to store an artificial neural network model, and a system bus 1060 for controlling communication between the central processing unit 1080 and the main memory system 1070.
- the NPU interface 140 is configured to communicate with the central processing unit 1080 through the system bus 1060, and the NPU interface 140 may communicate data related to the artificial neural network model directly with the main memory system 1070.
- examples of the present disclosure are not limited thereto.
- otherwise, the processing order of the neural network processing unit 100 may be pushed back by other high-priority processing generated by the central processing unit 1080 of the edge device 1000 that is independent of the neural network processing unit 100, and this problem can be improved. Therefore, according to the above-described configuration, the neural network processing unit 100 can operate stably while improving the response speed delay problem.
- However, the neural network processing unit 100 is not limited to including the NPU interface 140; it is also possible to configure the neural network processing unit 100 without the NPU interface 140, in which case the neural network processing unit 100 may be configured to communicate directly via the system bus 1060 of the edge device 1000.
- the NPU scheduler 130 is configured to control the operation of the processing element array 110 for the reasoning operation of the neural network processing unit 100 and the read and write order of the NPU memory system 120.
- the NPU scheduler 130 may be configured to control the processing element array 110 and the NPU memory system 120 by analyzing the structural data or the artificial neural network data locality information of the artificial neural network model.
- the NPU scheduler 130 may analyze the structure of the artificial neural network model to be operated in the processing element array 110 or may be provided with the analyzed structure information.
- the data that the artificial neural network model may include are node data of each layer, arrangement structure data of the layers, weight data of each connection network connecting the nodes of each layer, and artificial neural network data locality information.
- Data of the artificial neural network may be stored in a memory provided inside the NPU scheduler 130 or the NPU memory system 120 .
- the NPU scheduler 130 may utilize the necessary data by accessing the memory in which the data of the artificial neural network is stored.
- the present invention is not limited thereto, and structural data of the artificial neural network model or locality information of artificial neural network data may be generated based on data such as node data and weight data of the artificial neural network model. It is also possible that the weight data is referred to as a weight kernel.
- the node data may also be referred to as a feature map.
- the data in which the structure of the artificial neural network model is defined may be generated when the artificial neural network model is designed or learning is completed.
- the present invention is not limited thereto.
- the NPU scheduler 130 may schedule the operation sequence of the artificial neural network model based on structural data of the artificial neural network model or locality information of the artificial neural network data.
- the neural network processing unit 100 may sequentially process calculations for each layer according to the structure of the artificial neural network model. That is, when the structure of the artificial neural network model is determined, the operation order for each kernel or layer can be determined. This information can be defined as structural data of the artificial neural network model.
- the NPU scheduler 130 may obtain, based on the structural data of the artificial neural network model or the artificial neural network data locality information, the memory address values at which the node data of the layers of the artificial neural network model and the weight data of the connection networks are stored. For example, the NPU scheduler 130 may obtain the memory address values at which the node data of the layers of the artificial neural network model and the weight data of the connection networks stored in the main memory system 1070 are located.
- the NPU scheduler 130 may bring the node data of the layer of the artificial neural network model to be driven and the weight data of the connection network from the main memory system 1070 and store it in the NPU memory system 120 .
- Node data of each layer may have a corresponding respective memory address value.
- Weight data of each connection network may have respective memory address values.
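- The address bookkeeping described above can be pictured with a simple table; the identifiers, offsets, and sizes below are hypothetical and only illustrate how each layer's node data and each connection network's weight data can be associated with a memory address value that the NPU scheduler looks up when copying data into the NPU memory system.

```python
# hypothetical address map: data of the artificial neural network model as stored
# in the main memory system; addresses and sizes are illustrative only
address_map = {
    ("layer1", "node_data"):   {"address": 0x1000_0000, "size": 4096},
    ("conn1",  "weight_data"): {"address": 0x1000_1000, "size": 16384},
    ("layer2", "node_data"):   {"address": 0x1000_5000, "size": 2048},
    ("conn2",  "weight_data"): {"address": 0x1000_5800, "size": 8192},
}

def fetch_to_npu_memory(name, kind, npu_memory):
    """Look up the memory address value of the requested node/weight data and
    copy it into the NPU memory system (represented here as a dict)."""
    entry = address_map[(name, kind)]
    npu_memory[(name, kind)] = ("copied_from", hex(entry["address"]), entry["size"])
    return npu_memory[(name, kind)]

npu_memory_system = {}
print(fetch_to_npu_memory("layer1", "node_data", npu_memory_system))
```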
- the NPU scheduler 130 may schedule the operation order of the processing element array 110 based on the structural data of the artificial neural network model or the artificial neural network data locality information, for example, the arrangement structure data of the layers of the artificial neural network model.
- the NPU scheduler 130 may operate differently from the scheduling concept of the central processing unit 1080 because it schedules based on the structural data of the artificial neural network model or the artificial neural network data locality information.
- the scheduling of the central processing unit 1080 operates to achieve the best efficiency in consideration of fairness, efficiency, stability, reaction time, and the like. That is, it is scheduled to perform the most processing within the same time in consideration of priority and operation time.
- the NPU scheduler 130 may determine the processing order based on the structural data of the artificial neural network model or the artificial neural network data locality information.
- the NPU scheduler 130 may determine the processing order based on the structural data of the neural network model or the neural network data locality information and/or the structural data of the neural network processing unit 100 to be used.
- the present disclosure is not limited to the structure data of the neural network processing unit 100 .
- the processing order may be determined by using at least one piece of data among the memory size of the NPU memory system 120, the hierarchical structure of the NPU memory system 120, the number data of the processing elements PE1 to PE12, and the operator structures of the processing elements PE1 to PE12. That is, the structure data of the neural network processing unit 100 may include at least one piece of data among the memory size of the NPU memory system 120, the hierarchical structure of the NPU memory system 120, the number data of the processing elements PE1 to PE12, and the operator structures of the processing elements PE1 to PE12.
- the present disclosure is not limited to the structure data of the neural network processing unit 100 .
- the memory size of the NPU memory system 120 includes information about the memory capacity.
- the hierarchical structure of the NPU memory system 120 includes information about the connection relationship between specific layers of the hierarchical structure.
- the operator structure of the processing elements PE1 to PE12 includes information about the components inside the processing element.
- the neural network processing unit 100 may be configured to include at least one processing element, an NPU memory system 120 that can store an artificial neural network model that can be inferred by the at least one processing element or at least some data of the artificial neural network model, and an NPU scheduler 130 configured to control the at least one processing element and the NPU memory system 120 based on the structural data of the artificial neural network model or the artificial neural network data locality information.
- the NPU scheduler 130 may be configured to further receive the structure data of the neural network processing unit 100.
- the structure data of the neural network processing unit 100 includes the memory size of the NPU memory system 120 , the hierarchical structure of the NPU memory system 120 , the number data of at least one processing element, and the operator structure of the at least one processing element. It may be configured to include at least one piece of data.
- when the compiler compiles the artificial neural network model so that the artificial neural network model is executed in the neural network processing unit 100, the artificial neural network data locality of the artificial neural network model at the processing element array-NPU memory system level may be configured.
- the compiler may be implemented as separate software. However, the present invention is not limited thereto.
- the compiler can properly configure the data locality of the artificial neural network model at the processing element array-NPU memory system level according to the algorithms applied to the artificial neural network model and the hardware operating characteristics of the neural network processing unit 100.
- the artificial neural network data locality of the artificial neural network model may be configured differently according to a method in which the neural network processing unit 100 calculates the corresponding artificial neural network model.
- the neural network data locality of the neural network model may be configured based on algorithms such as feature map tiling, a stationary technique of processing elements, and memory reuse.
- the neural network data locality of the neural network model is the number of processing elements of the neural network processing unit 100 , the memory capacity of the NPU memory system 120 for storing feature maps and weights, and the memory in the neural network processing unit 100 . It may be configured based on a hierarchical structure or the like.
- the compiler configures the neural network data locality of the neural network model at the processing element array-NPU memory system level in word units of the neural network processing unit 100 to determine the order of data required for arithmetic processing.
- the word unit may vary according to quantization of the corresponding kernel, and may be, for example, 4 bits, 8 bits, 16 bits, or 32 bits. However, the present invention is not limited thereto.
- the neural network data locality of the artificial neural network model existing at the processing element array-NPU memory system level may be defined as operation order information of the artificial neural network model processed by the processing element array 110 .
- when the NPU scheduler 130 receives the artificial neural network data locality information, the NPU scheduler 130 can know the operation order of the artificial neural network model in word units, so that there is an effect that the necessary data can be stored in advance from the main memory system 1070 into the NPU memory system 120.
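- As a hedged illustration (the word-unit access order and tile identifiers below are assumptions, not the compiler's actual output), the artificial neural network data locality can be thought of as an ordered list of word-unit accesses that the NPU scheduler walks through, prefetching the next required words from the main memory system into the NPU memory system before the processing element array asks for them.

```python
from collections import deque

# hypothetical word-unit access order produced at compile time:
# (data kind, layer/kernel id, word index), in the exact order the PEs will need it
access_order = deque([
    ("weight", "conv1", 0), ("feature", "input", 0),
    ("weight", "conv1", 1), ("feature", "input", 1),
    ("weight", "conv2", 0), ("feature", "conv1_out", 0),
])

def prefetch(npu_memory, lookahead=2):
    """Prefetch the next `lookahead` words from main memory into NPU memory
    so the processing element array does not wait for data."""
    for word in list(access_order)[:lookahead]:
        if word not in npu_memory:
            npu_memory[word] = "resident"      # simulate DRAM -> SRAM copy
    return npu_memory

npu_mem = {}
while access_order:
    prefetch(npu_mem, lookahead=2)             # data is already on chip ...
    word = access_order.popleft()              # ... when the PEs consume it
    assert npu_mem[word] == "resident"
```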
- the NPU scheduler 130 may be configured to store the neural network data locality information and/or structure data of the artificial neural network.
- the above-described structural data means structural data of the concept of layers and kernel units of an artificial neural network model.
- the above-described structural data may be utilized at an algorithm level.
- the aforementioned artificial neural network data locality means processing order information of the neural network processing unit 100 determined when a corresponding artificial neural network model is converted to operate in a specific neural network processing unit by a compiler.
- the artificial neural network data locality may mean, when the neural network processing unit 100 processes a specific artificial neural network model, word-unit order information of the data that the neural network processing unit 100 requires for operation processing, which is determined according to the structure and calculation algorithm of the artificial neural network model. A word unit may mean an element unit, which is the basic unit that the neural network processing unit 100 can process. The artificial neural network data locality can be utilized at the hardware-memory level.
- the NPU scheduler 130 predicts in advance the memory read/write operations to be requested by the neural network processing unit 100 based on the structural data or the artificial neural network data locality, and can store in advance, in the NPU memory system 120, the data of the main memory system 1070 to be processed by the neural network processing unit 100. Accordingly, there is an effect of minimizing or substantially eliminating the data supply delay.
- the NPU scheduler 130 may determine the processing order even if only the structural data of the artificial neural network of the artificial neural network model or the artificial neural network data locality information is utilized at least. That is, the NPU scheduler 130 may determine the operation order by using the structural data from the input layer to the output layer of the artificial neural network or artificial neural network data locality information. For example, an input layer operation may be scheduled first and an output layer operation may be scheduled last. Therefore, when the NPU scheduler 130 receives the structural data or the artificial neural network data locality information of the artificial neural network model, it is possible to know all the operation order of the artificial neural network model. Accordingly, there is an effect that all scheduling orders can be determined.
- the NPU scheduler 130 may determine the processing order in consideration of the structural data of the artificial neural network model or artificial neural network data locality information and the structural data of the neural network processing unit 100, and processing optimization for each determined order is also possible.
- when the NPU scheduler 130 receives both the structural data or the artificial neural network data locality information of the artificial neural network model and the structural data of the neural network processing unit 100, there is an effect that the operation efficiency of each scheduling order determined based on the structural data of the artificial neural network model or the artificial neural network data locality information can be further improved.
- For example, the NPU scheduler 130 may obtain data of an artificial neural network having four layers and three connection networks, each having weight data, connecting the layers. In this case, a method for the NPU scheduler 130 to schedule the processing sequence based on the structural data of the artificial neural network model or the artificial neural network data locality information will be described below by way of example.
- the NPU scheduler 130 may set the input data for the inference operation as the node data of the first layer, which is the input layer of the artificial neural network model, and may schedule the MAC (multiply and accumulate) operation of the node data of the first layer and the weight data of the first connection network corresponding to the first layer to be performed first.
- the examples of the present disclosure are not limited to the MAC operation, and the artificial neural network operation may be performed using multipliers and adders that can be variously modified and implemented to perform the artificial neural network operation.
- a corresponding operation may be referred to as a first operation
- a result of the first operation may be referred to as a first operation value
- a corresponding scheduling may be referred to as a first scheduling.
- the NPU scheduler 130 may set the first operation value as the node data of the second layer corresponding to the first connection network, and may schedule the MAC operation of the node data of the second layer and the weight data of the second connection network corresponding to the second layer to be performed after the first scheduling.
- a corresponding operation may be referred to as a second operation
- a result of the second operation may be referred to as a second operation value
- a corresponding scheduling may be referred to as a second scheduling.
- the NPU scheduler 130 may set the second operation value as the node data of the third layer corresponding to the second connection network, and may schedule the MAC operation of the node data of the third layer and the weight data of the third connection network corresponding to the third layer to be performed after the second scheduling.
- a corresponding operation may be referred to as a third operation
- a result of the third operation may be referred to as a third operation value
- a corresponding scheduling may be referred to as a third scheduling.
- the NPU scheduler 130 may set the third operation value as the node data of the fourth layer, which is the output layer corresponding to the third connection network, and may schedule the inference result stored in the node data of the fourth layer to be stored in the NPU memory system 120.
- the corresponding scheduling may be referred to as a fourth scheduling.
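- The first through fourth schedulings described above amount to the following loop (a sketch under the assumption of fully connected layers; the disclosure is not limited to this layer type): each scheduling performs the MAC operation of the current layer's node data with the corresponding connection network's weight data, and the result becomes the node data of the next layer.

```python
import numpy as np

def run_schedulings(input_data, connection_weights):
    """input_data: node data of the first (input) layer.
    connection_weights: [W1, W2, W3] weight data of the three connection networks.
    Returns the node data of the fourth (output) layer, i.e. the inference result."""
    node_data = input_data
    for weights in connection_weights:
        # each scheduling: MAC of current layer node data with connection weights
        node_data = weights @ node_data      # multiply-and-accumulate per output node
    return node_data                          # the fourth scheduling stores this result

# example: 4 layers with sizes 8 -> 16 -> 16 -> 4
W1, W2, W3 = np.random.randn(16, 8), np.random.randn(16, 16), np.random.randn(4, 16)
x = np.random.randn(8)
print(run_schedulings(x, [W1, W2, W3]).shape)   # (4,)
```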
- the inference result value may be transmitted and utilized to various components of the edge device 1000 .
- the neural network processing unit 100 may transmit the inference result to the central processing unit 1080, and the edge device 1000 may perform an operation corresponding to the specific keyword.
- the NPU scheduler 130 may control the NPU memory system 120 and the processing element array 110 so that the operation is performed in the first scheduling, the second scheduling, the third scheduling, and the fourth scheduling order. That is, the NPU scheduler 130 may be configured to control the NPU memory system 120 and the processing element array 110 so that operations are performed in a set scheduling order.
- the neural network processing unit 100 may be configured to schedule a processing order based on a structure of layers of an artificial neural network and operation order data corresponding to the structure.
- the scheduled processing order may be at least one or more.
- since the neural network processing unit 100 can predict all operation orders, it is also possible to schedule the next operation, and it is also possible to schedule the operations in a specific order.
- the NPU scheduler 130 may be configured to schedule a processing order based on structural data from an input layer to an output layer of an artificial neural network of an artificial neural network model or artificial neural network data locality information.
- the NPU scheduler 130 controls the NPU memory system 120 by utilizing the scheduling sequence based on the structural data of the artificial neural network model or the artificial neural network data locality information, so that there is an effect of improving the operation rate of the neural network processing unit and improving the memory reuse rate.
- the operation value of one layer may have a characteristic that becomes input data of the next layer.
- when the neural network processing unit 100 controls the NPU memory system 120 according to the scheduling order, there is an effect of improving the memory reuse rate of the NPU memory system 120.
- Memory reuse can be determined by the number of times the data stored in the memory is read. For example, if specific data is stored in the memory and then the specific data is read only once and then the corresponding data is deleted or overwritten, the memory reuse rate may be 100%. For example, if specific data is stored in the memory, the specific data is read 4 times, and then the corresponding data is deleted or overwritten, the memory reuse rate may be 400%. That is, the memory reuse rate may be defined as the number of reuse of stored data once. That is, memory reuse may mean reusing data stored in the memory or a specific memory address in which specific data is stored.
- the NPU scheduler 130 is configured to receive the structural data or the artificial neural network data locality information of the artificial neural network model, and can determine the order in which the operations of the artificial neural network are performed based on the received structural data of the artificial neural network model or the artificial neural network data locality information.
- the NPU scheduler 130 may recognize the fact that the operation result of the node data of a specific layer of the artificial neural network model and the weight data of the specific connection network becomes the node data of the corresponding next layer. That is, the neural network processing unit 100 of the edge device 1000 may be configured to improve the memory reuse rate of the NPU memory system 120 based on the structural data of the artificial neural network model or the artificial neural network data locality information.
- the NPU scheduler 130 may reuse the value of the memory address in which the specific operation result is stored in the subsequent operation. Accordingly, the memory reuse rate may be improved.
- the neural network processing unit 100 stores the calculated output feature map data in the NPU memory system 120, and the NPU scheduler 130 and/or the NPU memory system 120 may be controlled accordingly.
- the first operation value of the above-described first scheduling is set as node data of the second layer of the second scheduling.
- the NPU scheduler 130 may reset the memory address value corresponding to the first operation value of the first scheduling stored in the NPU memory system 120 to the memory address value corresponding to the node data of the second layer of the second scheduling. That is, the memory address value can be reused. Therefore, by reusing the data at the memory address of the first scheduling, the NPU scheduler 130 has the effect that the NPU memory system 120 can utilize it as the second-layer node data of the second scheduling without a separate memory write operation.
- the second operation value of the above-described second scheduling is set as node data of the third layer of the third scheduling.
- the NPU scheduler 130 may reset the memory address value corresponding to the second operation value of the second scheduling stored in the NPU memory system 120 to the memory address value corresponding to the node data of the third layer of the third scheduling. That is, the memory address value can be reused. Therefore, by reusing the data at the memory address of the second scheduling, the NPU scheduler 130 has the effect that the NPU memory system 120 can utilize it as the third-layer node data of the third scheduling without a separate memory write operation.
- the third operation value of the above-described third scheduling is set as node data of the fourth layer of the fourth scheduling.
- the NPU scheduler 130 may reset the memory address value corresponding to the third operation value of the third scheduling stored in the NPU memory system 120 to the memory address value corresponding to the node data of the fourth layer of the fourth scheduling. That is, the memory address value can be reused. Therefore, by reusing the data at the memory address of the third scheduling, the NPU scheduler 130 allows the NPU memory system 120 to use that data as the fourth layer node data of the fourth scheduling without a separate memory write operation.
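- The address-reuse behaviour described above can be sketched as follows (a simplified, hypothetical Python illustration; it assumes the scheduler keeps a table mapping data names to memory addresses, and the names address_table, schedule_step, and reuse_as_input are illustrative only):

```python
# Hypothetical sketch of the scheduler's address reuse: the output of
# scheduling step k is mapped to the same address as the input of step k+1,
# so no separate memory write is needed between steps.
address_table = {}

def schedule_step(step, output_name, output_address):
    # Record where the operation value of this scheduling step is stored.
    address_table[(step, output_name)] = output_address

def reuse_as_input(prev_step, prev_output, next_step, next_input):
    # Instead of copying data, point the next step's input at the same address.
    address = address_table[(prev_step, prev_output)]
    address_table[(next_step, next_input)] = address
    return address

schedule_step(1, "first_operation_value", 0x2000)
addr = reuse_as_input(1, "first_operation_value", 2, "second_layer_node_data")
assert addr == 0x2000  # same address, no extra read/write to move the data
```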
- the NPU scheduler 130 is configured to control the NPU memory system 120 by determining the scheduling order and whether memory reuse is possible. In this case, the NPU scheduler 130 can provide efficient scheduling by analyzing the structural data of the artificial neural network model or the artificial neural network data locality information. In addition, memory usage can be reduced because data required for a memory-reusable operation is not redundantly stored in the NPU memory system 120. In addition, the NPU scheduler 130 can increase the efficiency of the NPU memory system 120 by accounting for the memory usage reduced by memory reuse.
- the NPU scheduler 130 may be configured to determine the scheduling order based on the artificial neural network data locality information and to store the required data in the NPU memory system 120 in advance. Therefore, when the processing element array 110 performs operations according to the scheduled order, the data prepared in advance in the NPU memory system 120 can be used without requesting data from the main memory system 1070.
- the NPU scheduler 130 may be configured to monitor the resource usage of the NPU memory system 120 and the resource usage of the processing elements PE1 to PE12, based on the structural data of the neural network processing unit 100. Accordingly, the hardware resource utilization efficiency of the neural network processing unit 100 can be improved.
- the NPU scheduler 130 of the neural network processing unit 100 has the effect of reusing the memory by utilizing the structural data of the artificial neural network model or the artificial neural network data locality information.
- when the artificial neural network model is a deep neural network, the number of layers and the number of connections can be significantly increased, and in this case, the effect of memory reuse can be further maximized.
- if the NPU scheduler 130 cannot determine whether the values stored in the NPU memory system 120 can be reused, the NPU scheduler 130 unnecessarily allocates a memory address for each processing step, and substantially the same data must be copied from one memory address to another. Therefore, unnecessary memory read and write operations occur, and duplicate values are stored in the NPU memory system 120, which may cause memory to be wasted unnecessarily.
- the processing element array 110 refers to a configuration in which a plurality of processing elements PE1 to PE12 configured to calculate node data of an artificial neural network and weight data of a connection network are disposed.
- Each processing element may be configured to include a multiply and accumulate (MAC) operator and/or an Arithmetic Logic Unit (ALU) operator.
- MAC multiply and accumulate
- ALU Arithmetic Logic Unit
- processing element array 110 may be referred to as at least one processing element including a plurality of operators.
- the processing element array 110 is configured to include a plurality of processing elements PE1 to PE12.
- the plurality of processing elements PE1 to PE12 illustrated in FIG. 2 is merely an example for convenience of description, and the number of the plurality of processing elements PE1 to PE12 is not limited.
- the size or number of the processing element array 110 may be determined by the number of the plurality of processing elements PE1 to PE12 .
- the size of the processing element array 110 may be implemented in the form of an N x M matrix. where N and M are integers greater than zero.
- the processing element array 110 may include N x M processing elements. That is, there may be more than one processing element.
- the size of the processing element array 110 may be designed in consideration of the characteristics of the artificial neural network model in which the neural network processing unit 100 operates. In other words, the number of processing elements may be determined in consideration of the data size of the artificial neural network model to be operated, the required operating speed, the required power consumption, and the like.
- the size of the data of the artificial neural network model may be determined in correspondence with the number of layers of the artificial neural network model and the weight data size of each layer.
- the size of the processing element array 110 of the neural network processing unit 100 is not limited. As the number of processing elements of the processing element array 110 increases, the parallel computing power of the working artificial neural network model increases, but the manufacturing cost and physical size of the neural network processing unit 100 may increase.
- the artificial neural network model operated in the neural network processing unit 100 may be an artificial neural network trained to detect 30 specific keywords, that is, an AI keyword recognition model.
- the size of the processing element array 110 of the neural network processing unit 100 may be designed to be 4 x 3 in consideration of the computational amount characteristic.
- the neural network processing unit 100 may be configured to include 12 processing elements.
- the present invention is not limited thereto, and the number of the plurality of processing elements PE1 to PE12 may be selected within a range of, for example, 8 to 16,384. That is, examples of the present disclosure are not limited to the number of processing elements.
- the processing element array 110 is configured to perform functions such as addition, multiplication, and accumulation necessary for an artificial neural network operation.
- the processing element array 110 may be configured to perform a multiplication and accumulation (MAC) operation.
- MAC multiplication and accumulation
- the first processing element PE1 of the processing element array 110 will be described as an example.
- FIG. 3 is a schematic conceptual diagram illustrating one processing element of an array of processing elements that may be applied to an example of the present disclosure.
- the neural network processing unit 100 includes the processing element array 110, the NPU memory system 120 configured to store an artificial neural network model that can be inferred by the processing element array 110 or to store at least some data of the artificial neural network model, and the NPU scheduler 130 configured to control the processing element array 110 and the NPU memory system 120 based on the structural data of the artificial neural network model or the artificial neural network data locality information. The processing element array 110 may be configured to perform a MAC operation, and the processing element array 110 may be configured to quantize and output a MAC operation result.
- examples of the present disclosure are not limited thereto.
- the NPU memory system 120 may store all or part of the artificial neural network model according to the memory size and the data size of the artificial neural network model.
- the first processing element PE1 may be configured to include a multiplier 111 , an adder 112 , an accumulator 113 , and a bit quantization unit 114 .
- examples according to the present disclosure are not limited thereto, and the processing element array 110 may be modified in consideration of the computational characteristics of the artificial neural network.
- the multiplier 111 multiplies the received (N)bit data and (M)bit data.
- the operation value of the multiplier 111 is output as (N+M)bit data.
- N and M are integers greater than zero.
- the first input unit for receiving (N)bit data may be configured to receive a value having a characteristic such as a variable
- the second input unit for receiving the (M)bit data may be configured to receive a value having a characteristic such as a constant.
- when the NPU scheduler 130 distinguishes between the variable value characteristic and the constant value characteristic, the NPU scheduler 130 can increase the memory reuse rate of the NPU memory system 120.
- the input data of the multiplier 111 is not limited to constant values and variable values.
- when the input data of the processing element is handled according to the characteristics of constant values and variable values, the computational efficiency of the neural network processing unit 100 may be improved.
- the neural network processing unit 100 is not limited to the characteristics of constant values and variable values of input data.
- a value having a variable-like characteristic, or a variable, means a value whose memory address content is updated whenever incoming input data is updated.
- the node data of each layer may be a MAC operation value that reflects the weight data of the artificial neural network model, and when inferring object recognition from video data with the artificial neural network model, since the input image changes every frame, the node data of each layer changes.
- a value having a characteristic such as a constant, or a constant, means a value whose memory address content is preserved regardless of updates to incoming input data.
- the weight data of the connection network is a unique inference determination criterion of the artificial neural network model, and even if object recognition of moving image data is inferred with the artificial neural network model, the weight data of the connection network may not change.
- the multiplier 111 may be configured to receive one variable and one constant.
- the variable value input to the first input unit may be node data of a layer of an artificial neural network, and the node data may be input data of an input layer of an artificial neural network, an accumulated value of a hidden layer, and an accumulated value of an output layer.
- the constant value input to the second input unit may be weight data of a connection network of an artificial neural network.
- NPU scheduler 130 may be configured to improve the memory reuse rate in consideration of the characteristics of the constant value.
- the variable value is the operation value of each layer, and the NPU scheduler 130 may recognize reusable variable values based on the structural data of the artificial neural network model or the artificial neural network data locality information, and may control the NPU memory system 120 to reuse the memory.
- the constant value is the weight data of each connection network, and the NPU scheduler 130 may recognize the constant values of connection networks that are repeatedly used based on the structural data of the artificial neural network model or the artificial neural network data locality information, and may control the NPU memory system 120 to reuse the memory.
- the NPU scheduler 130 may recognize reusable variable values and reusable constant values based on the structural data of the artificial neural network model or the artificial neural network data locality information, and may be configured to control the NPU memory system 120 accordingly.
- since the processing element knows that when 0 is input to one of the first input unit and the second input unit of the multiplier 111 the operation result is 0 even if no operation is performed, the operation of the multiplier 111 may be limited so that the operation is not performed.
- the multiplier 111 may be configured to operate in a zero skipping manner.
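- A zero-skipping behaviour of this kind can be sketched as below (a hypothetical Python illustration; in hardware this would correspond to a bypass path rather than a software conditional):

```python
# Hypothetical sketch of zero skipping: if either operand is zero, the
# multiplication (and the following accumulation) can be skipped entirely.
def mac_with_zero_skipping(pairs):
    acc = 0
    for node_value, weight in pairs:
        if node_value == 0 or weight == 0:
            continue  # skip the multiply; the contribution is known to be 0
        acc += node_value * weight
    return acc

print(mac_with_zero_skipping([(3, 2), (0, 7), (5, 0), (4, 1)]))  # 10
```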
- the number of bits of data input to the first input unit and the second input unit may be determined according to quantization of node data and weight data of each layer of the artificial neural network model. For example, node data of the first layer may be quantized to 5 bits and weight data of the first layer may be quantized to 7 bits.
- the first input unit may be configured to receive 5-bit data
- the second input unit may be configured to receive 7-bit data. That is, the number of bits of data input to each input unit may be different from each other.
- the processing element may be configured to receive quantization information of data input to each input unit.
- the artificial neural network data locality information may include quantization information of input data and output data of a processing element.
- the neural network processing unit 100 may convert the number of quantized bits in real time when the quantized data stored in the NPU memory system 120 is input to the inputs of the processing element. That is, the number of quantized bits may differ for each layer, and when the number of bits of the input data is converted, the processing element may be configured to receive the bit number information from the neural network processing unit 100 in real time and convert the number of bits in real time to generate the input data.
- the accumulator 113 accumulates the operation value of the multiplier 111 and the operation value of the accumulator 113 by using the adder 112 as many times as (L) loops. Accordingly, the number of bits of data of the output unit and the input unit of the accumulator 113 may be output as (N+M+log2(L))bits. where L is an integer greater than 0.
- the accumulator 113 may receive an initialization signal (initialization reset) to initialize the data stored in the accumulator 113 to 0.
- an initialization signal initialization reset
- examples according to the present disclosure are not limited thereto.
- the bit quantization unit 114 may reduce the number of bits of data output from the accumulator 113 .
- the bit quantization unit 114 may be controlled by the NPU scheduler 130 .
- the number of bits of the quantized data may be output as (X) bits. where X is an integer greater than 0.
- the processing element array 110 is configured to perform a MAC operation, and the processing element array 110 has an effect of quantizing and outputting the MAC operation result.
- such quantization has the effect of further reducing power consumption as (L)loops increases.
- when the power consumption is reduced, the heat generation of the edge device can also be reduced.
- when the heat generation is reduced, the possibility of malfunction due to high temperature of the neural network processing unit 100 is reduced.
- the output data (X) bit of the bit quantization unit 114 may be node data of a next layer or input data of convolution. If the artificial neural network model has been quantized, the bit quantization unit 114 may be configured to receive quantized information from the artificial neural network model. However, the present invention is not limited thereto, and the NPU scheduler 130 may be configured to analyze the artificial neural network model to extract quantized information. Accordingly, the output data (X) bits may be converted into the quantized number of bits to correspond to the quantized data size and output. The output data (X) bit of the bit quantization unit 114 may be stored in the NPU memory system 120 as the number of quantized bits.
- the processing element array 110 of the neural network processing unit 100 includes a multiplier 111 , an adder 112 , an accumulator 113 , and a bit quantization unit 114 .
- the processing element array 110 may reduce the number of bits of (N+M+log2(L))bit data output from the accumulator 113 by the bit quantization unit 114 to the number of bits of (X)bit.
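- The bit-width relationship described above can be written out as a small sketch (hypothetical Python only; the bit widths follow the (N), (M), (L), and (X) notation of this description, and the values used are illustrative):

```python
import math

# Hypothetical sketch of the processing element's bit-width budget:
# multiplier output is (N+M) bits, the accumulator grows to
# (N+M+log2(L)) bits over L accumulations, and the bit quantization
# unit truncates the result back to (X) bits.
def multiplier_bits(n, m):
    return n + m

def accumulator_bits(n, m, loops):
    return n + m + math.ceil(math.log2(loops))

def quantize(value, out_bits):
    # Drop lower bits so the result fits in out_bits (unsigned example).
    in_bits = max(value.bit_length(), 1)
    shift = max(in_bits - out_bits, 0)
    return value >> shift

print(multiplier_bits(5, 7))            # 12-bit multiplier output
print(accumulator_bits(5, 7, 8))        # 15-bit accumulator after 8 loops
print(quantize(0b111111111111000, 8))   # 255: truncated to 8 bits
```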
- the NPU scheduler 130 may control the bit quantization unit 114 to reduce the number of bits of the output data by a predetermined bit from a least significant bit (LSB) to a most significant bit (MSB). When the number of bits of output data is reduced, power consumption, calculation amount, and memory usage may be reduced.
- LSB least significant bit
- MSB most significant bit
- the reduction in the number of bits of the output data may be determined by comparing the reduction in power consumption, the amount of computation, and the amount of memory usage compared to the reduction in inference accuracy of the artificial neural network model. It is also possible to determine the quantization level by determining the target inference accuracy of the artificial neural network model and testing it while gradually reducing the number of bits. The quantization level may be determined for each operation value of each layer.
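- One way to realize the "test while gradually reducing the number of bits" approach is sketched below (hypothetical Python; quantize_model and evaluate_accuracy stand in for model-specific routines that are not specified in this description):

```python
# Hypothetical sketch: lower the bit width step by step and keep the smallest
# width whose inference accuracy still meets the target accuracy.
def choose_bit_width(model, eval_data, target_accuracy,
                     quantize_model, evaluate_accuracy,
                     start_bits=16, min_bits=2):
    chosen = start_bits
    for bits in range(start_bits, min_bits - 1, -1):
        quantized = quantize_model(model, bits)              # assumed helper
        accuracy = evaluate_accuracy(quantized, eval_data)   # assumed helper
        if accuracy >= target_accuracy:
            chosen = bits          # still accurate enough at this width
        else:
            break                  # accuracy dropped below target; stop
    return chosen
```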
- the number of bits of the (N)bit data and the (M)bit data of the multiplier 111 may be adjusted, and the number of bits of the operation value may be reduced to (X)bit by the bit quantization unit 114.
- the processing element array 110 has the effect of reducing power consumption while improving the MAC operation speed, and has the effect of performing the convolution operation of the artificial neural network more efficiently.
- the NPU memory system 120 of the neural network processing unit 100 may be a memory system configured in consideration of MAC operation characteristics and power consumption characteristics of the processing element array 110 .
- the neural network processing unit 100 may be configured to reduce the number of bits of an operation value of the processing element array 110 in consideration of MAC operation characteristics and power consumption characteristics of the processing element array 110 .
- the NPU memory system 120 of the neural network processing unit 100 may be configured to minimize power consumption of the neural network processing unit 100 .
- the NPU memory system 120 of the neural network processing unit 100 may be a memory system configured to control the memory with low power in consideration of the data size and operation step of the artificial neural network model to be operated.
- the NPU memory system 120 of the neural network processing unit 100 may be a low-power memory system configured to reuse a specific memory address in which weight data is stored in consideration of the data size and operation step of the artificial neural network model.
- the neural network processing unit 100 may provide various activation functions for imparting non-linearity. For example, a sigmoid function, a hyperbolic tangent function, or a ReLU function may be provided.
- the activation function may be selectively applied after the MAC operation.
- the operation value to which the activation function is applied may be referred to as an activation map.
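- Applying an activation function to the MAC result to obtain an activation map value might look like the following sketch (hypothetical Python; this description only states that a sigmoid, hyperbolic tangent, or ReLU function may be provided):

```python
import math

# Hypothetical sketch: the activation function is applied selectively
# after the MAC operation, producing the activation map value.
ACTIVATIONS = {
    "relu": lambda x: max(0.0, x),
    "sigmoid": lambda x: 1.0 / (1.0 + math.exp(-x)),
    "tanh": math.tanh,
}

def activate(mac_value, kind=None):
    if kind is None:
        return mac_value            # applying an activation is optional
    return ACTIVATIONS[kind](mac_value)

print(activate(-2.5, "relu"))              # 0.0
print(round(activate(0.0, "sigmoid"), 3))  # 0.5
```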
- FIG. 4 is a table schematically illustrating the energy consumed per unit operation of the neural network processing unit 100. Energy consumption can be divided into memory access, addition operation, and multiplication operation.
- 8b Add refers to an 8-bit integer addition operation of the adder 112 and may consume approximately 0.03 pJ of energy.
- 16b Add refers to a 16-bit integer addition operation of the adder 112 and may consume approximately 0.05 pJ of energy.
- 32b Add refers to a 32-bit integer addition operation of the adder 112 and may consume approximately 0.1 pJ of energy.
- 16b FP Add refers to a 16-bit floating-point addition operation of the adder 112 and may consume approximately 0.4 pJ of energy.
- 32b FP Add refers to a 32-bit floating-point addition operation of the adder 112 and may consume approximately 0.9 pJ of energy.
- 8b Mult refers to an 8-bit integer multiplication operation of the multiplier 111 and may consume approximately 0.2 pJ of energy.
- 32b Mult refers to a 32-bit integer multiplication operation of the multiplier 111 and may consume approximately 3.1 pJ of energy.
- 16b FP Mult refers to a 16-bit floating-point multiplication operation of the multiplier 111 and may consume approximately 1.1 pJ of energy.
- 32b FP Mult refers to a 32-bit floating-point multiplication operation of the multiplier 111 and may consume approximately 3.7 pJ of energy.
- 32b SRAM Read refers to a 32-bit data read access when the internal memory of the NPU memory system 120 is a static random access memory (SRAM). Reading 32-bit data from the NPU memory system 120 may consume 5 pJ of energy.
- 32b DRAM Read refers to a 32-bit data read access when the memory of the main memory system 1070 of the edge device 1000 is DRAM. Reading 32-bit data from the main memory system 1070 into the NPU memory system 120 may consume 640 pJ of energy. The energy unit is the picojoule (pJ).
- for example, between an 8-bit integer multiplication and a 32-bit floating-point multiplication, the energy consumption per unit operation differs by approximately 18.5 times.
- when 32-bit data is read from the main memory system 1070 of the edge device 1000 composed of DRAM and when 32-bit data is read from the NPU memory system 120 composed of SRAM, the energy consumption per unit operation differs by approximately 128 times.
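- The ratios quoted above can be checked directly from the per-operation energies listed for FIG. 4 (a hypothetical Python sketch using only the values stated in this description):

```python
# Per-operation energy in picojoules, taken from the FIG. 4 description above.
energy_pj = {
    "8b Add": 0.03, "16b Add": 0.05, "32b Add": 0.1,
    "16b FP Add": 0.4, "32b FP Add": 0.9,
    "8b Mult": 0.2, "32b Mult": 3.1,
    "16b FP Mult": 1.1, "32b FP Mult": 3.7,
    "32b SRAM Read": 5.0, "32b DRAM Read": 640.0,
}

# 32-bit floating-point multiply vs. 8-bit integer multiply: ~18.5x.
print(round(energy_pj["32b FP Mult"] / energy_pj["8b Mult"], 1))        # 18.5
# 32-bit DRAM read vs. 32-bit SRAM read: ~128x.
print(energy_pj["32b DRAM Read"] / energy_pj["32b SRAM Read"])          # 128.0
```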
- the NPU memory system 120 of the neural network processing unit 100 may be configured to include a high-speed static memory such as SRAM and not to include DRAM.
- the neural network processing unit according to examples of the present disclosure is not limited to the SRAM.
- the NPU memory system 120 does not include DRAM, and the NPU memory system 120 may be configured to include a memory having relatively higher read and write speeds and relatively lower power consumption than the main memory system 1070.
- the NPU memory system 120 of the neural network processing unit 100 may be configured so that the read/write speed of the inference operation of the artificial neural network model is relatively faster than that of the main memory system 1070 and the power consumption is relatively lower.
- Static memories capable of high-speed driving such as SRAM include SRAM, MRAM, STT-MRAM, eMRAM, and OST-MRAM. Furthermore, MRAM, STT-MRAM, eMRAM, and OST-MRAM are static memories and have non-volatile characteristics. Therefore, when the edge device 1000 is rebooted after the power is cut off, there is an effect that the artificial neural network model does not need to be provided again from the main memory system 1070 .
- examples according to the present disclosure are not limited thereto.
- the neural network processing unit 100 has an effect of significantly reducing power consumption by the DRAM during the reasoning operation of the artificial neural network model.
- the memory cell of the SRAM of the NPU memory system 120 is composed of, for example, 4 to 6 transistors to store one bit of data.
- examples according to the present disclosure are not limited thereto.
- the memory cell of the MRAM of the NPU memory system 120 is composed of, for example, one magnetic tunnel junction (MTJ) and one transistor to store one bit of data.
- MTJ magnetic tunnel junction
- the neural network processing unit 100 may be configured to check data of at least one of a number of memories, a memory type, a data transfer rate, and a memory size of the main memory system 1070 of the edge device 1000 .
- the neural network processing unit 100 may request system data from the edge device 1000 to receive data such as the memory size and speed of the main memory system 1070.
- examples according to the present disclosure are not limited thereto.
- the neural network processing unit 100 may operate to minimize memory access with the main memory system 1070 .
- the NPU memory system 120 may be a low-power memory system configured to reuse a specific memory address in consideration of the data size and operation step of the artificial neural network model.
- the neural network processing unit 100 of the edge device 1000 controls the reuse of data stored inside the NPU memory system 120 based on the structural data of the artificial neural network model or the artificial neural network data locality information, and the neural network processing unit 100 may be configured not to make a memory access request to the main memory system 1070 when data is reused.
- the neural network processing unit 100 can minimize access requests to the main memory system 1070 based on the structural data or the artificial neural network data locality information of the artificial neural network model to be operated in the neural network processing unit 100, and can increase the reuse frequency of the data stored inside the NPU memory system 120. Therefore, the frequency of use of the static memory of the NPU memory system 120 can be increased, which has the effect of reducing the power consumption of the neural network processing unit 100 and improving the operation speed.
- the neural network processing unit 100 controls the reuse of data stored inside the NPU memory system 120 based on the structural data of the artificial neural network model or the artificial neural network data locality information, and the neural network processing unit 100 may be configured not to make a memory access request to the main memory system 1070 when data is reused.
- the neural network processing unit 100 receives the artificial neural network model to be driven from the main memory system 1070 when the edge device 1000 boots and stores it in the NPU memory system 120, and the neural network processing unit 100 may be configured to perform an inference operation based on the structural data of the artificial neural network model or the artificial neural network data locality information stored in the NPU memory system 120.
- when the neural network processing unit 100 operates so as to reuse the data stored inside the NPU memory system 120, the neural network processing unit 100 can minimize the number of data access requests to the main memory system 1070 and thereby reduce power consumption. Furthermore, the neural network processing unit 100 may receive the artificial neural network model data only once, when the edge device 1000 first boots, and subsequent inference operations may be configured to reuse the data stored in the NPU memory system 120.
- when the artificial neural network model stored in the neural network processing unit 100 is an AI keyword recognition model, voice data for the keyword inference operation is received from the microphone 1022 and provided to the neural network processing unit 100, and when the artificial neural network model and that data are stored in the NPU memory system 120, the data reuse rate can be improved by analyzing the reusable data among the stored data.
- when the artificial neural network model stored in the neural network processing unit 100 is an AI object recognition model, image data for the object inference operation is received from the camera 1021 and provided to the neural network processing unit 100, and when the artificial neural network model and that data are stored in the NPU memory system 120, the data reuse rate can be improved by analyzing the reusable data among the stored data.
- the neural network processing unit 100 may compare the data size of the artificial neural network model to be called from the main memory system 1070 with the memory size of the NPU memory system 120 to determine whether it can operate only with the data stored in the NPU memory system 120. If the memory size of the NPU memory system 120 is larger than the data size of the artificial neural network model, the neural network processing unit 100 may be configured to operate only with the data stored in the NPU memory system 120.
- the neural network processing unit 100 may be configured to compare the data size of the artificial neural network model to be called from the main memory system 1070 and the memory size of the NPU memory system 120 .
- the neural network processing unit 100 may be configured to compare the data size of the artificial neural network model stored in the main memory system 1070 and the memory size of the NPU memory system 120 .
- when the data size of the artificial neural network model is smaller than the memory size of the NPU memory system 120, the inference operation can be repeated while reusing the data stored in the NPU memory system 120.
- the NPU memory system 120 may reduce additional memory accesses. Therefore, the NPU memory system 120 can reduce the operation time required for memory access and the power consumption required for memory access. That is, power consumption is reduced and the inference speed is improved at the same time.
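- The size comparison described above amounts to a simple decision, sketched below (hypothetical Python; the sizes and function names are illustrative only):

```python
# Hypothetical sketch: if the whole model fits in the NPU memory system,
# inference can be repeated from on-chip memory without further accesses
# to the main memory system.
def plan_memory_usage(model_size_bytes, npu_memory_bytes):
    if model_size_bytes <= npu_memory_bytes:
        return "load once, then reuse data in the NPU memory system"
    return "stream parts of the model from the main memory system"

print(plan_memory_usage(2 * 1024 * 1024, 8 * 1024 * 1024))  # fits on-chip
```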
- FIG. 5 is a schematic conceptual diagram illustrating an exemplary artificial neural network model that can be applied to examples of the present disclosure.
- the exemplary artificial neural network model 1300 of FIG. 5 may be an artificial neural network trained by the edge device 1000 or by a separate machine learning apparatus.
- the artificial neural network model 1300 may be an artificial neural network trained to perform various inference functions, such as object recognition and voice recognition.
- the artificial neural network model 1300 may be a convolutional neural network (CNN), which is a type of a deep neural network (DNN).
- CNN convolutional neural network
- a convolutional neural network may be composed of one or several convolutional layers, a pooling layer, and a combination of fully connected layers.
- the convolutional neural network has a structure suitable for learning and inference of two-dimensional data, and can be trained through a backpropagation algorithm.
- the artificial neural network model 1300 is not limited to a deep neural network.
- the artificial neural network model 1300 can be implemented with deep neural network (DNN) models such as VGG, VGG16, DenseNet, fully convolutional networks (FCN) having an encoder-decoder structure, SegNet, DeconvNet, DeepLAB V3+, U-net, SqueezeNet, Alexnet, ResNet18, MobileNet-v2, GoogLeNet, Resnet-v2, Resnet50, Resnet101, Inception-v3, and the like.
- DNN deep neural network
- the present disclosure is not limited to the above-described models.
- the artificial neural network model 1300 may be an ensemble model based on at least two different models.
- the artificial neural network model 1300 may be stored in the NPU memory system 120 of the neural network processing unit 100 .
- the artificial neural network model 1300 may be stored in the non-volatile memory and/or the volatile memory of the main memory system 1070 of the edge device 1000 and then loaded into the neural network processing unit 100 when the artificial neural network model 1300 is operated.
- the present invention is not limited thereto, and the artificial neural network model 1300 may be provided to the neural network processing unit 100 by the wireless communication unit 1010 of the edge device 1000 .
- the artificial neural network model 1300 includes an input layer 1310, a first connection network 1320, a first hidden layer 1330, a second connection network 1340, a second hidden layer 1350, a third connection network 1360, and an output layer 1370 .
- the artificial neural network model according to the examples of the present disclosure is not limited thereto.
- the first hidden layer 1330 and the second hidden layer 1350 may be referred to as a plurality of hidden layers.
- the input layer 1310 may include, for example, x1 and x2 input nodes. That is, the input layer 1310 may include node data including two node values.
- the NPU scheduler 130 may set a memory address in which the input data of the input layer 1310 is stored in the NPU memory system 120 .
- the first connection network 1320 may include, for example, networks having weight data including 6 weight values connecting each node of the input layer 1310 and each node of the first hidden layer 1330 .
- Each connection network includes respective weight data.
- the NPU scheduler 130 may set a memory address in which the weight data of the first connection network 1320 is stored in the NPU memory system 120 . Weight data of each connection network is multiplied with corresponding input node data, and an accumulated value of the multiplied values is stored in the first hidden layer 1330 .
- the first hidden layer 1330 may include nodes a1, a2, and a3 for example. That is, the first hidden layer 1330 may include node data including three node values.
- the NPU scheduler 130 may set a memory address in which node data of the first hidden layer 1330 is stored in the NPU memory system 120 .
- the second connection network 1340 may include, for example, networks having weight data including nine weight values connecting each node of the first hidden layer 1330 and each node of the second hidden layer 1350.
- the NPU scheduler 130 may set a memory address in which the weight data of the second connection network 1340 is stored in the NPU memory system 120 .
- the weight data of each connection network is multiplied with the corresponding input node data, and the accumulated value of the multiplied values is stored in the second hidden layer 1350 .
- the second hidden layer 1350 may include nodes b1, b2, and b3 for example. That is, the second hidden layer 1350 may include node data including three node values.
- the NPU scheduler 130 may set a memory address in which the node data of the second hidden layer 1350 is stored in the NPU memory system 120 .
- the third connection network 1360 may include, for example, networks having weight data including six weight values connecting each node of the second hidden layer 1350 and each node of the output layer 1370 .
- Each connection network includes respective weight data.
- the NPU scheduler 130 may set a memory address in which the weight data of the third connection network 1360 is stored in the NPU memory system 120 . Weight data of each connection network is multiplied with corresponding input node data, and an accumulated value of the multiplied values is stored in the output layer 1370 .
- the output layer 1370 may include y1 and y2 nodes for example. That is, the output layer 1370 may include node data including two node values.
- the NPU scheduler 130 may set a memory address in which node data of the output layer 1370 is stored in the NPU memory system 120 .
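- The exemplary 2-3-3-2 structure of FIG. 5 and the per-layer and per-connection-network address assignment just described can be summarized in a sketch (hypothetical Python; the addresses, element sizes, and function names are illustrative only, not the disclosed addressing scheme):

```python
# Hypothetical sketch of the FIG. 5 model: layer node counts and the weight
# counts of the connection networks between them, with a simple sequential
# address assignment such as the NPU scheduler might perform.
layers = {"input": 2, "hidden1": 3, "hidden2": 3, "output": 2}
connections = [("input", "hidden1"), ("hidden1", "hidden2"), ("hidden2", "output")]

def assign_addresses(layers, connections, element_bytes=1, base=0x0000):
    table, offset = {}, base
    for name, nodes in layers.items():
        table[f"{name}_nodes"] = (offset, nodes * element_bytes)
        offset += nodes * element_bytes
    for src, dst in connections:
        count = layers[src] * layers[dst]     # fully connected: 6, 9, 6 weights
        table[f"{src}->{dst}_weights"] = (offset, count * element_bytes)
        offset += count * element_bytes
    return table

for key, (addr, size) in assign_addresses(layers, connections).items():
    print(f"{key}: address 0x{addr:04X}, {size} bytes")
```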
- the NPU scheduler 130 may analyze or receive the structure of the artificial neural network model to be operated in the processing element array 110.
- the artificial neural network data that the artificial neural network model can include may include node data of each layer, arrangement structure data of the layers, weight data of each connection network connecting the nodes of each layer, or artificial neural network data locality information.
- since the NPU scheduler 130 has received the structural data or the artificial neural network data locality information of the exemplary artificial neural network model 1300, the NPU scheduler 130 can identify the operation sequence from the input to the output of the artificial neural network model 1300 for each layer, or for each word unit processed by the neural network processing unit.
- the NPU scheduler 130 may set the memory address in which the MAC operation values of each layer are stored in the NPU memory system 120 in consideration of the scheduling order.
- the specific memory address may be a MAC operation value of the input layer 1310 and the first connection network 1320 , and may be input data of the first hidden layer 1330 at the same time.
- the present disclosure is not limited to the MAC operation value, and the MAC operation value may also be referred to as an artificial neural network operation value.
- since the NPU scheduler 130 knows that the MAC operation result of the input layer 1310 and the first connection network 1320 is the input data of the first hidden layer 1330, it can be controlled to use the same memory address. That is, the NPU scheduler 130 may reuse the MAC operation value based on the structural data of the artificial neural network model or the artificial neural network data locality information. Therefore, the NPU memory system 120 can provide a memory reuse function.
- the NPU scheduler 130 stores the MAC operation value of the artificial neural network model 1300 in a specific memory address of the NPU memory system 120 according to the scheduling order, and the specific memory address in which the MAC operation value is stored can be used as input data for the MAC operation of the next scheduling sequence.
- the MAC operation will be described in detail from the perspective of the first processing element PE1.
- the first processing element PE1 may be designated to perform a MAC operation of the a1 node of the first hidden layer 1330 .
- the first processing element PE1 inputs the x1 node data of the input layer 1310 to the first input unit of the multiplier 111 , and the weight data between the x1 node and the a1 node to the second input unit.
- the adder 112 adds the calculated value of the multiplier 111 and the calculated value of the accumulator 113 .
- the operation value of the adder 112 may be the same as the operation value of the multiplier 111 .
- the counter value of (L)loops may be 1.
- the first processing element PE1 inputs the x2 node data of the input layer 1310 to the first input unit of the multiplier 111 , and the weight data between the x2 node and the a1 node to the second input unit.
- the adder 112 adds the calculated value of the multiplier 111 and the calculated value of the accumulator 113 .
- since (L)loops is 1, the multiplication value of the x1 node data and the weight between the x1 node and the a1 node calculated in the previous step is stored in the accumulator 113. Accordingly, the adder 112 generates the MAC operation value of the x1 node and the x2 node corresponding to the a1 node.
- the NPU scheduler 130 may terminate the MAC operation of the first processing element PE1 based on the structural data of the artificial neural network model or the artificial neural network data locality information.
- the accumulator 113 may be initialized by inputting an initialization signal (initialization reset). That is, the counter value of (L)loops can be initialized to 0.
- the bit quantization unit 114 may be adjusted appropriately according to the accumulated value. In more detail, as (L)loops increases, the number of bits of the output value increases. At this time, the NPU scheduler 130 may remove predetermined lower bits so that the number of bits of the operation value of the first processing element PE1 becomes (X) bits.
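- The PE1 walkthrough above, two multiply-accumulate loops for node a1 followed by quantization and an accumulator reset, can be mirrored in a sketch (hypothetical Python; the input and weight values are illustrative only):

```python
# Hypothetical sketch of the first processing element computing node a1:
# loop 1 accumulates x1 * w(x1, a1), loop 2 adds x2 * w(x2, a1), then the
# result is bit-quantized and the accumulator is reset for the next node.
def pe_mac(inputs, weights, quantize):
    acc = 0                     # accumulator 113, initialized to 0
    for x, w in zip(inputs, weights):
        acc += x * w            # multiplier 111 + adder 112, one (L) loop each
    out = quantize(acc)         # bit quantization unit 114
    acc = 0                     # initialization reset before the next node
    return out

a1 = pe_mac(inputs=[3, 5], weights=[2, 4], quantize=lambda v: v & 0xFF)
print(a1)  # 26 = 3*2 + 5*4, kept within 8 bits
```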
- the MAC operation will be described in detail in terms of the second processing element (PE2).
- the second processing element PE2 may be designated to perform a MAC operation of the a2 node of the first hidden layer 1330 .
- the second processing element PE2 inputs the x1 node data of the input layer 1310 to the first input unit of the multiplier 111 , and the weight data between the x1 node and the a2 node to the second input unit.
- the adder 112 adds the calculated value of the multiplier 111 and the calculated value of the accumulator 113 .
- the operation value of the adder 112 may be the same as the operation value of the multiplier 111 .
- the counter value of (L)loops may be 1.
- the second processing element PE2 inputs the x2 node data of the input layer 1310 to the first input unit of the multiplier 111 , and the weight data between the x2 node and the a2 node to the second input unit.
- the adder 112 adds the calculated value of the multiplier 111 and the calculated value of the accumulator 113 .
- since (L)loops is 1, the multiplication value of the x1 node data and the weight between the x1 node and the a2 node calculated in the previous step is stored in the accumulator 113. Accordingly, the adder 112 generates the MAC operation value of the x1 node and the x2 node corresponding to the a2 node.
- the NPU scheduler 130 may terminate the MAC operation of the second processing element PE2 based on the structural data of the artificial neural network model or the artificial neural network data locality information.
- the accumulator 113 may be initialized by inputting an initialization signal (initialization reset). That is, the counter value of (L)loops can be initialized to 0.
- the bit quantization unit 114 may be adjusted appropriately according to the accumulated value.
- the MAC operation will be described in detail in terms of the third processing element (PE3).
- the third processing element PE3 may be designated to perform the MAC operation of the a3 node of the first hidden layer 1330 .
- the third processing element PE3 inputs the x1 node data of the input layer 1310 to the first input unit of the multiplier 111 , and the weight data between the x1 node and the a3 node to the second input unit.
- the adder 112 adds the calculated value of the multiplier 111 and the calculated value of the accumulator 113 .
- the operation value of the adder 112 may be the same as the operation value of the multiplier 111 .
- the counter value of (L)loops may be 1.
- the third processing element PE3 inputs the x2 node data of the input layer 1310 to the first input unit of the multiplier 111 , and the weight data between the x2 node and the a3 node to the second input unit.
- the adder 112 adds the calculated value of the multiplier 111 and the calculated value of the accumulator 113 .
- since (L)loops is 1, the multiplication value of the x1 node data and the weight between the x1 node and the a3 node calculated in the previous step is stored in the accumulator 113. Accordingly, the adder 112 generates the MAC operation value of the x1 node and the x2 node corresponding to the a3 node.
- the NPU scheduler 130 may terminate the MAC operation of the third processing element PE3 based on the structural data of the artificial neural network model or the artificial neural network data locality information.
- the accumulator 113 may be initialized by inputting an initialization signal (initialization reset). That is, the counter value of (L)loops can be initialized to 0.
- the bit quantization unit 114 may be adjusted appropriately according to the accumulated value.
- the NPU scheduler 130 of the neural network processing unit 100 may perform the MAC operation of the first hidden layer 1330 using the three processing elements PE1 to PE3 at the same time.
- the MAC operation will be described in detail in terms of the fourth processing element (PE4).
- the fourth processing element PE4 may be designated to perform a MAC operation of the b1 node of the second hidden layer 1350 .
- the fourth processing element PE4 inputs the a1 node data of the first hidden layer 1330 to the first input unit of the multiplier 111 , and the weight data between the a1 node and the b1 node to the second input unit. .
- the adder 112 adds the calculated value of the multiplier 111 and the calculated value of the accumulator 113 .
- the operation value of the adder 112 may be the same as the operation value of the multiplier 111 .
- the counter value of (L)loops may be 1.
- the fourth processing element PE4 inputs the a2 node data of the first hidden layer 1330 to the first input unit of the multiplier 111 , and the weight data between the a2 node and the b1 node to the second input unit. .
- the adder 112 adds the calculated value of the multiplier 111 and the calculated value of the accumulator 113 .
- the adder 112 generates MAC operation values of the a1 node and the a2 node corresponding to the b1 node.
- the counter value of (L)loops may be 2.
- the fourth processing element PE4 inputs the a3 node data of the first hidden layer 1330 to the first input unit of the multiplier 111 and the weight data between the a3 node and the b1 node to the second input unit.
- the adder 112 adds the calculated value of the multiplier 111 and the calculated value of the accumulator 113 .
- since (L)loops is 2, the MAC operation values of the a1 node and the a2 node corresponding to the b1 node calculated in the previous step are stored. Accordingly, the adder 112 generates the MAC operation value of the a1 node, the a2 node, and the a3 node corresponding to the b1 node.
- the NPU scheduler 130 may terminate the MAC operation of the fourth processing element PE4 based on the structural data of the artificial neural network model or the artificial neural network data locality information.
- the accumulator 113 may be initialized by inputting an initialization signal (initialization reset). That is, the counter value of (L)loops can be initialized to 0.
- the bit quantization unit 114 may be adjusted appropriately according to the accumulated value.
- the MAC operation will be described in detail in terms of the fifth processing element (PE5).
- the fifth processing element PE5 may be designated to perform the MAC operation of the b2 node of the second hidden layer 1350 .
- the fifth processing element PE5 inputs the a1 node data of the first hidden layer 1330 to the first input unit of the multiplier 111 , and the weight data between the a1 node and the b2 node to the second input unit. .
- the adder 112 adds the calculated value of the multiplier 111 and the calculated value of the accumulator 113 .
- the operation value of the adder 112 may be the same as the operation value of the multiplier 111 .
- the counter value of (L)loops may be 1.
- the fifth processing element PE5 inputs the a2 node data of the first hidden layer 1330 to the first input unit of the multiplier 111 , and the weight data between the a2 node and the b2 node to the second input unit. .
- the adder 112 adds the calculated value of the multiplier 111 and the calculated value of the accumulator 113 .
- the adder 112 generates MAC operation values of the a1 node and the a2 node corresponding to the b2 node.
- the counter value of (L)loops may be 2.
- the fifth processing element PE5 inputs the a3 node data of the first hidden layer 1330 to the first input unit of the multiplier 111 and the weight data between the a3 node and the b2 node to the second input unit.
- the adder 112 adds the calculated value of the multiplier 111 and the calculated value of the accumulator 113 .
- since (L)loops is 2, the MAC operation values of the a1 node and the a2 node corresponding to the b2 node calculated in the previous step are stored. Accordingly, the adder 112 generates the MAC operation value of the a1 node, the a2 node, and the a3 node corresponding to the b2 node.
- the NPU scheduler 130 may terminate the MAC operation of the fifth processing element PE5 based on the structural data of the artificial neural network model or the artificial neural network data locality information.
- the accumulator 113 may be initialized by inputting an initialization signal (initialization reset). That is, the counter value of (L)loops can be initialized to 0.
- the bit quantization unit 114 may be adjusted appropriately according to the accumulated value.
- the MAC operation will be described in detail from the perspective of the sixth processing element (PE6).
- the sixth processing element PE6 may be designated to perform a MAC operation of the b3 node of the second hidden layer 1350 .
- the sixth processing element PE6 inputs the a1 node data of the first hidden layer 1330 to the first input unit of the multiplier 111 and the weight data between the a1 node and the b3 node to the second input unit. .
- the adder 112 adds the calculated value of the multiplier 111 and the calculated value of the accumulator 113 .
- the operation value of the adder 112 may be the same as the operation value of the multiplier 111 .
- the counter value of (L)loops may be 1.
- the sixth processing element PE6 inputs the a2 node data of the first hidden layer 1330 to the first input unit of the multiplier 111 , and the weight data between the a2 node and the b3 node to the second input unit. .
- the adder 112 adds the calculated value of the multiplier 111 and the calculated value of the accumulator 113 .
- the adder 112 generates MAC operation values of the a1 node and the a2 node corresponding to the b3 node.
- the counter value of (L)loops may be 2.
- the sixth processing element PE6 inputs the a3 node data of the first hidden layer 1330 to the first input unit of the multiplier 111 and the weight data between the a3 node and the b3 node to the second input unit.
- the adder 112 adds the calculated value of the multiplier 111 and the calculated value of the accumulator 113 .
- since (L)loops is 2, the MAC operation values of the a1 node and the a2 node corresponding to the b3 node calculated in the previous step are stored. Accordingly, the adder 112 generates the MAC operation value of the a1 node, the a2 node, and the a3 node corresponding to the b3 node.
- the NPU scheduler 130 may terminate the MAC operation of the sixth processing element PE6 based on the structural data of the artificial neural network model or the artificial neural network data locality information.
- the accumulator 113 may be initialized by inputting an initialization signal (initialization reset). That is, the counter value of (L)loops can be initialized to 0.
- the bit quantization unit 114 may be adjusted appropriately according to the accumulated value.
- the NPU scheduler 130 of the neural network processing unit 100 may perform the MAC operation of the second hidden layer 1350 using the three processing elements PE4 to PE6 at the same time.
- the MAC operation will be described in detail in terms of the seventh processing element (PE7).
- the seventh processing element PE7 may be designated to perform a MAC operation of the y1 node of the output layer 1370 .
- the seventh processing element PE7 inputs the b1 node data of the second hidden layer 1350 to the first input unit of the multiplier 111 , and the weight data between the b1 node and the y1 node to the second input unit. .
- the adder 112 adds the calculated value of the multiplier 111 and the calculated value of the accumulator 113 .
- the operation value of the adder 112 may be the same as the operation value of the multiplier 111 .
- the counter value of (L)loops may be 1.
- the seventh processing element PE7 inputs the b2 node data of the second hidden layer 1350 to the first input unit of the multiplier 111 , and the weight data between the b2 node and the y1 node to the second input unit. .
- the adder 112 adds the calculated value of the multiplier 111 and the calculated value of the accumulator 113 .
- the adder 112 generates MAC operation values of the b1 node and the b2 node corresponding to the y1 node.
- the counter value of (L)loops may be 2.
- the seventh processing element PE7 inputs the b3 node data of the second hidden layer 1350 to the first input unit of the multiplier 111, and the weight data between the b3 node and the y1 node to the second input unit.
- the adder 112 adds the calculated value of the multiplier 111 and the calculated value of the accumulator 113 .
- since (L)loops is 2, the MAC operation values of the b1 node and the b2 node corresponding to the y1 node calculated in the previous step are stored. Accordingly, the adder 112 generates the MAC operation value of the b1 node, the b2 node, and the b3 node corresponding to the y1 node.
- the NPU scheduler 130 may terminate the MAC operation of the seventh processing element PE7 based on the structural data of the artificial neural network model or the artificial neural network data locality information.
- the accumulator 113 may be initialized by inputting an initialization signal (initialization reset). That is, the counter value of (L)loops can be initialized to 0.
- the bit quantization unit 114 may be adjusted appropriately according to the accumulated value.
- the MAC operation will be described in detail in terms of the eighth processing element (PE8).
- the eighth processing element PE8 may be designated to perform the MAC operation of the y2 node of the output layer 1370 .
- the eighth processing element PE8 inputs the b1 node data of the second hidden layer 1350 to the first input unit of the multiplier 111 , and the weight data between the b1 node and the y2 node to the second input unit. .
- the adder 112 adds the calculated value of the multiplier 111 and the calculated value of the accumulator 113 .
- the operation value of the adder 112 may be the same as the operation value of the multiplier 111 .
- the counter value of (L)loops may be 1.
- the eighth processing element PE8 inputs the b2 node data of the second hidden layer 1350 to the first input unit of the multiplier 111 , and the weight data between the b2 node and the y2 node to the second input unit. .
- the adder 112 adds the calculated value of the multiplier 111 and the calculated value of the accumulator 113 .
- the adder 112 generates MAC operation values of the b1 node and the b2 node corresponding to the y2 node.
- the counter value of (L)loops may be 2.
- the eighth processing element PE8 inputs the b3 node data of the second hidden layer 1350 to the first input unit of the multiplier 111, and the weight data between the b3 node and the y2 node to the second input unit.
- the adder 112 adds the calculated value of the multiplier 111 and the calculated value of the accumulator 113 .
- since (L)loops is 2, the MAC operation values of the b1 node and the b2 node corresponding to the y2 node calculated in the previous step are stored. Accordingly, the adder 112 generates the MAC operation value of the b1 node, the b2 node, and the b3 node corresponding to the y2 node.
- the NPU scheduler 130 may terminate the MAC operation of the eighth processing element PE8 based on the structural data of the artificial neural network model or the artificial neural network data locality information.
- the accumulator 113 may be initialized by inputting an initialization signal (initialization reset). That is, the counter value of (L)loops can be initialized to 0.
- the bit quantization unit 114 may be adjusted appropriately according to the accumulated value.
- the NPU scheduler 130 of the neural network processing unit 100 may perform the MAC operation of the output layer 1370 using the two processing elements PE7 to PE8 at the same time.
- the reasoning operation of the artificial neural network model 1300 may be completed. That is, it may be determined that the artificial neural network model 1300 has completed the reasoning operation of one frame. If the neural network processing unit 100 infers video data in real time, image data of the next frame may be input to the x1 and x2 input nodes of the input layer 1310 . In this case, the NPU scheduler 130 may store the image data of the next frame in the memory address storing the input data of the input layer 1310 . If this process is repeated for each frame, the neural network processing unit 100 may process the inference operation in real time. Also, there is an effect that the memory address once set can be reused.
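- The frame-by-frame reuse of the input memory address described above might be sketched as follows (hypothetical Python; run_inference stands in for the scheduled MAC operations, which are not re-implemented here):

```python
# Hypothetical sketch: each new frame is written to the same memory address
# that holds the input-layer data, so the addresses set for the first frame
# can be reused for every subsequent frame.
def real_time_inference(frames, input_buffer, run_inference):
    results = []
    for frame in frames:
        input_buffer[:] = frame          # overwrite the input-layer buffer in place
        results.append(run_inference())  # assumed helper: runs the scheduled MACs
    return results

buffer = [0.0, 0.0]                      # x1, x2 input nodes of the input layer
frames = [[0.1, 0.2], [0.3, 0.4]]
print(real_time_inference(frames, buffer, run_inference=lambda: sum(buffer)))
```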
- when the neural network processing unit 100 performs the inference operation of the artificial neural network model 1300 using the NPU scheduler 130, the operation scheduling order may be determined based on the structural data of the artificial neural network model 1300 or the artificial neural network data locality information.
- the NPU scheduler 130 may set a memory address required for the NPU memory system 120 based on the operation scheduling order.
- the NPU scheduler 130 may set a memory address for reusing the memory based on the structural data of the neural network model 1300 or the locality information of the artificial neural network data.
- the NPU scheduler 130 may perform a speculation operation by designating the processing elements PE1 to PE8 required for the speculation operation.
- the number of (L)loops of the accumulator of the processing element may be set to L-1. That is, even when the weight data of the artificial neural network increases, the accumulator has the effect of easily performing the inference operation simply by increasing the number of accumulations.
- the NPU scheduler 130 of the neural network processing unit 100 may control the processing element array 110 and the NPU memory system 120 based on the structural data of the artificial neural network model, including the structural data of the input layer 1310, the first connection network 1320, the first hidden layer 1330, the second connection network 1340, the second hidden layer 1350, the third connection network 1360, and the output layer 1370, or based on the artificial neural network data locality information.
- the NPU scheduler 130 may set, in the NPU memory system 120, memory address values corresponding to the node data of the input layer 1310, the weight data of the first connection network 1320, the node data of the first hidden layer 1330, the weight data of the second connection network 1340, the node data of the second hidden layer 1350, the weight data of the third connection network 1360, and the node data of the output layer 1370.
- the NPU scheduler 130 may schedule the operation sequence of the artificial neural network model based on structural data of the artificial neural network model or locality information of the artificial neural network data.
- the NPU scheduler 130 may obtain a memory address value in which node data of a layer of an artificial neural network model and weight data of a connection network are stored on the basis of structural data of the artificial neural network model or locality information of artificial neural network data.
- the NPU scheduler 130 may obtain a memory address value in which node data of a layer of an artificial neural network model and weight data of a connection network stored in the main memory system 1070 are stored. Therefore, the NPU scheduler 130 may bring the node data of the layer of the artificial neural network model to be driven and the weight data of the connection network from the main memory system 1070 and store it in the NPU memory system 120 .
- Node data of each layer may have a corresponding respective memory address value.
- Weight data of each connection network may have respective memory address values.
- the NPU scheduler 130 may schedule the operation order of the processing element array 110 based on the structural data of the artificial neural network model, for example, the arrangement structure data of the layers of the artificial neural network model, or based on the artificial neural network data locality information configured at compile time.
- the NPU scheduler 130 may obtain the node data of the four artificial neural network layers and the weight data of the three connection networks connecting those layers, that is, the connection network data.
- for example, a method by which the NPU scheduler 130 schedules the processing sequence based on the structural data of the artificial neural network model or the artificial neural network data locality information is described below.
- the NPU scheduler 130 sets the input data for the inference operation as the node data of the first layer, which is the input layer 1310 of the artificial neural network model 1300, and may schedule the MAC operation of the node data of the first layer and the weight data of the first connection network corresponding to the first layer to be performed first.
- a corresponding operation may be referred to as a first operation
- a result of the first operation may be referred to as a first operation value
- a corresponding scheduling may be referred to as a first scheduling.
- the NPU scheduler 130 sets the first operation value as the node data of the second layer corresponding to the first connection network, and may schedule the MAC operation of the node data of the second layer and the weight data of the second connection network corresponding to the second layer to be performed after the first scheduling.
- a corresponding operation may be referred to as a second operation
- a result of the second operation may be referred to as a second operation value
- a corresponding scheduling may be referred to as a second scheduling.
- the NPU scheduler 130 sets the second operation value as the node data of the third layer corresponding to the second connection network, and may schedule the MAC operation of the node data of the third layer and the weight data of the third connection network corresponding to the third layer to be performed after the second scheduling.
- a corresponding operation may be referred to as a third operation
- a result of the third operation may be referred to as a third operation value
- a corresponding scheduling may be referred to as a third scheduling.
- the NPU scheduler 130 sets the third operation value as the node data of the fourth layer, which is the output layer 1370 corresponding to the third connection network, and may schedule the inference result stored in the node data of the fourth layer to be stored in the NPU memory system 120.
- the corresponding scheduling may be referred to as a fourth scheduling.
- the inference result value may be transmitted and utilized to various components of the edge device 1000 .
- the neural network processing unit 100 transmits the inference result to the central processing unit 1080, and the edge device 1000 may perform an operation corresponding to the specific keyword.
- the NPU scheduler 130 may drive the first to third processing elements PE1 to PE3 in the first scheduling.
- the NPU scheduler 130 may drive the fourth to sixth processing elements PE4 to PE6 in the second scheduling.
- the NPU scheduler 130 may drive the seventh to eighth processing elements PE7 to PE8 in the third scheduling.
- the NPU scheduler 130 may output an inference result in the fourth scheduling.
- the NPU scheduler 130 may control the NPU memory system 120 and the processing element array 110 so that the operation is performed in the first scheduling, the second scheduling, the third scheduling, and the fourth scheduling order. That is, the NPU scheduler 130 may be configured to control the NPU memory system 120 and the processing element array 110 so that operations are performed in a set scheduling order.
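- As a non-authoritative sketch of how a scheduling order like the first to fourth scheduling above could be derived from the layer and connection-network arrangement, the following hypothetical Python is illustrative only; the layer names and dictionary fields are assumptions, not the actual NPU scheduler 130.

```python
# Hypothetical sketch: deriving a layer-by-layer scheduling order from the
# arrangement of layers and connection networks (not the actual NPU scheduler 130).

layers = ["input_1310", "hidden_1330", "hidden_1350", "output_1370"]
connections = ["conn_1320", "conn_1340", "conn_1360"]

def build_schedule(layers, connections):
    schedule = []
    for step, conn in enumerate(connections, start=1):
        schedule.append({
            "order": step,                 # first, second, third scheduling ...
            "inputs": layers[step - 1],    # node data consumed by this step
            "weights": conn,               # connection-network weight data
            "outputs": layers[step],       # node data produced by this step
        })
    # final step: store the inference result of the output layer
    schedule.append({"order": len(connections) + 1, "store_result_of": layers[-1]})
    return schedule

for step in build_schedule(layers, connections):
    print(step)
```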
- the neural network processing unit 100 may be configured to schedule a processing order based on a structure of layers of an artificial neural network and operation order data corresponding to the structure.
- the NPU scheduler 130 has the effect of improving the memory reuse rate by controlling the NPU memory system 120 by utilizing the scheduling sequence based on the structural data of the artificial neural network model or the artificial neural network data locality information.
- the operation value of one layer may have a characteristic that becomes input data of the next layer.
- since the neural network processing unit 100 controls the NPU memory system 120 according to the scheduling order, there is an effect of improving the memory reuse rate of the NPU memory system 120.
- the NPU scheduler 130 is configured to receive the structural data of the artificial neural network model or the artificial neural network data locality information, and can determine the order in which the operations of the artificial neural network are performed based on the provided structural data of the artificial neural network model or the artificial neural network data locality information.
- the NPU scheduler 130 may grasp the fact that the operation result of the node data of a specific layer of the artificial neural network model and the weight data of the specific connection network becomes the node data of the corresponding layer. Therefore, the NPU scheduler 130 may reuse the value of the memory address in which the operation result is stored in the subsequent operation.
- the first operation value of the above-described first scheduling is set as node data of the second layer of the second scheduling.
- the NPU scheduler 130 can reset the memory address value corresponding to the first operation value of the first scheduling stored in the NPU memory system 120 to the memory address value corresponding to the node data of the second layer of the second scheduling. That is, the memory address value can be reused. Therefore, since the NPU scheduler 130 reuses the memory address value of the first scheduling, there is an effect that the NPU memory system 120 can utilize it as the second layer node data of the second scheduling without a separate memory write operation.
- the second operation value of the above-described second scheduling is set as node data of the third layer of the third scheduling.
- the NPU scheduler 130 can reset the memory address value corresponding to the second operation value of the second scheduling stored in the NPU memory system 120 to the memory address value corresponding to the node data of the third layer of the third scheduling. That is, the memory address value can be reused. Therefore, since the NPU scheduler 130 reuses the memory address value of the second scheduling, there is an effect that the NPU memory system 120 can utilize it as the third layer node data of the third scheduling without a separate memory write operation.
- the third operation value of the above-described third scheduling is set as node data of the fourth layer of the fourth scheduling.
- the NPU scheduler 130 can reset the memory address value corresponding to the third operation value of the third scheduling stored in the NPU memory system 120 to the memory address value corresponding to the node data of the fourth layer of the fourth scheduling. That is, the memory address value can be reused. Therefore, since the NPU scheduler 130 reuses the memory address value of the third scheduling, there is an effect that the NPU memory system 120 can utilize it as the fourth layer node data of the fourth scheduling without a separate memory write operation.
- the NPU scheduler 130 is configured to control the NPU memory system 120 by determining the scheduling order and whether memory can be reused. In this case, there is an effect that the NPU scheduler 130 can provide optimized scheduling by analyzing the structural data of the artificial neural network model or the artificial neural network data locality information. In addition, there is an effect of reducing memory usage because the data required for memory reuse is not redundantly stored in the NPU memory system 120. In addition, the NPU scheduler 130 has the effect of optimizing the NPU memory system 120 by calculating the memory usage reduced by the memory reuse.
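- As a non-authoritative sketch of this memory-address reuse, the following hypothetical Python re-labels the buffer holding one scheduling step's operation value as the next step's node data instead of copying it; the class and variable names are assumptions for illustration only.

```python
# Hypothetical sketch of memory-address reuse between consecutive scheduling steps.
# The output buffer of scheduling k is re-labelled as the input buffer of
# scheduling k+1 instead of being copied, so no separate write is issued.

class AddressMap:
    def __init__(self):
        self.table = {}                  # logical data name -> memory address

    def assign(self, name: str, address: int):
        self.table[name] = address

    def reuse(self, produced: str, consumed_as: str):
        # Point the next layer's node data at the address already holding the
        # previous operation value; no separate memory write is needed.
        self.table[consumed_as] = self.table[produced]

amap = AddressMap()
amap.assign("first_operation_value", 0x1000)
amap.reuse("first_operation_value", "layer2_node_data")
print(hex(amap.table["layer2_node_data"]))   # 0x1000, the same buffer reused
```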
- the first input of the first processing element PE1, the (N)bit input, may be configured to receive a variable value, and the second input, the (M)bit input, may be configured to receive a constant value. Also, such a configuration may be identically set for the other processing elements of the processing element array 110. That is, one input of a processing element may be configured to receive a variable value and the other input to receive a constant value. Accordingly, there is an effect that the number of data updates of the constant value can be reduced.
- the NPU scheduler 130 may utilize the structural data of the artificial neural network model 1300 or the artificial neural network data locality information to set the node data of the input layer 1310, the first hidden layer 1330, the second hidden layer 1350, and the output layer 1370 as variables, and to set the weight data of the first connection network 1320, the weight data of the second connection network 1340, and the weight data of the third connection network 1360 as constants. That is, the NPU scheduler 130 may distinguish a constant value from a variable value.
- the present disclosure is not limited to constants and variable data types, and in essence, it is possible to improve the reuse rate of the NPU memory system 120 by distinguishing a value that is frequently changed and a value that is not.
- the NPU memory system 120 may be configured to preserve the weight data of the connection networks stored in the NPU memory system 120 while the inference operation of the neural network processing unit 100 continues. Accordingly, there is an effect of reducing memory read/write operations.
- the NPU memory system 120 may be configured to reuse the MAC operation value stored in the NPU memory system 120 while the inference operation is continued.
- the number of data updates of the memory address storing the (N)bit input data of the first input of each processing element of the processing element array 110 may be greater than the number of data updates of the memory address storing the (M)bit input data of the second input. That is, there is an effect that the number of data updates of the second input unit may be smaller than the number of data updates of the first input unit.
- FIG. 6 is a schematic conceptual diagram illustrating a neural network processing unit according to another example of the present disclosure.
- the neural network processing unit 200 includes a processing element array 110, an NPU memory system 120, an NPU scheduler 130, and an NPU interface 140, and further includes an NPU batch mode (BATCH MODE) 250.
- compared with the neural network processing unit 100 according to an example of the present disclosure, the neural network processing unit 200 is substantially the same except for the NPU batch mode 250; hereinafter, duplicate descriptions may be omitted for convenience of description.
- the neural network processing unit 200 may be configured to operate in a batch mode.
- the batch mode refers to a mode in which multiple different reasoning operations can be performed simultaneously using one artificial neural network model.
- the number of cameras 1021 of the edge device 1000 may be plural.
- the neural network processing unit 200 may be configured to activate the NPU batch mode 250. If the NPU batch mode 250 is activated, the neural network processing unit 200 may operate in the batch mode.
- the neural network processing unit 200 may activate the NPU batch mode 250.
- the neural network processing unit 200 may be configured to sequentially infer the input images of six cameras using one artificial neural network model trained for pedestrian recognition, vehicle recognition, obstacle recognition, and the like.
- the neural network processing unit 200 may include the processing element array PE1 to PE12 and an NPU memory system configured to store an artificial neural network model that can be inferred by the processing element array PE1 to PE12, or at least some data of the artificial neural network model.
- the NPU memory system 120 may store all or part of the artificial neural network model according to the memory size and the data size of the artificial neural network model.
- in the conventional case, the neural network processing unit allocates a separate artificial neural network model for each image inference and must set up separate memory for the inference operation of each artificial neural network model. That is, in the conventional case, the neural network processing unit must individually drive six artificial neural network models, and the artificial neural network models cannot determine any correlation with each other within the neural network processing unit. That is, in the conventional method, six artificial neural network models must be driven in parallel at the same time, which may cause a problem that the memory of the NPU memory system required is six times that of the batch mode according to another example of the present disclosure.
- the NPU memory system may not be able to store six artificial neural network models at once.
- the neural network processing unit stores six artificial neural network models in the main memory system, and after the inference operation of one artificial neural network model is finished through the NPU interface, the data of the other artificial neural network model is transferred from the main memory system to the NPU memory system.
- the difference in power consumption between SRAM of the NPU memory system and DRAM of the main memory system is large, and time consumption of reading the artificial neural network model from the main memory system to the NPU memory system occurs.
- it was necessary to increase the number of neural network processing units, and there was a problem in that power consumption and cost increased accordingly.
- the batch mode of the neural network processing unit 200 is characterized in that it is configured to sequentially infer a plurality of input data using one artificial neural network model.
- the NPU batch mode 250 of the neural network processing unit 200 may be configured to convert a plurality of input data into one continuous data.
- the NPU batch mode 250 may convert the 30 frames of video data input from each of six cameras into 180 frames of video data. That is, the NPU batch mode 250 may increase the number of operation frames by combining a plurality of image data.
- the NPU batch mode 250 may sequentially infer the frames of the input images input from the six cameras by using one artificial neural network model. That is, the NPU batch mode 250 may perform the inference operation of a plurality of different input data by recycling the weight data of the artificial neural network model.
- the NPU batch mode 250 of the neural network processing unit 200 may store the weight data of the first layer of the artificial neural network model in the NPU memory system 120.
- six first operation values or first feature maps may be obtained by calculating a plurality of input data, for example, six camera input images and weight data of the first layer. That is, six calculation values may be sequentially calculated using the weight data of the first layer. The calculated plurality of first operation values may be reused in an operation of a next layer.
- the NPU batch mode 250 of the neural network processing unit 200 may store the weight data of the second layer of the artificial neural network model in the NPU memory system 120.
- second calculation values may be obtained by using the plurality of calculated first calculation values and weight data of the second layer. That is, six calculation values may be sequentially calculated using the second layer weight data. Accordingly, the reuse rate of weight data of the second layer may be improved.
- the NPU batch mode 250 of the neural network processing unit 200 may store the weight data of the third layer of the artificial neural network model in the NPU memory system 120.
- third operation values may be obtained by using the plurality of calculated second operation values and weight data of the third layer. That is, six calculation values may be sequentially calculated using the third layer weight data. Accordingly, the reuse rate of the weight data of the third layer may be improved.
- the NPU batch mode 250 of the neural network processing unit 200 may operate in the above-described manner.
- the weight data of each layer can be reused as much as the number of input data. Accordingly, there is an effect that the memory reuse rate of weight data can be further improved. In particular, as the number of input data increases, memory reuse may be proportionally improved.
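- As a non-authoritative sketch of this layer-by-layer weight reuse, the following hypothetical Python processes every input frame through one layer's weight data before moving on to the next layer; the helper names are placeholders, not part of this disclosure.

```python
# Hypothetical sketch of the batch-mode idea: keep one layer's weight data
# resident and run every camera's frame through it before moving to the next
# layer, so the weight data is reused once per input instead of being reloaded.

def load_weights(layer_idx):
    return f"weights_L{layer_idx}"              # stand-in for a read from NPU memory

def mac(feature, weights):
    return f"{feature}*{weights}"               # stand-in for the layer computation

def batch_inference(camera_frames, num_layers):
    features = list(camera_frames)              # e.g. 6 input images
    for layer in range(num_layers):
        w = load_weights(layer)                 # weight data loaded once per layer
        features = [mac(f, w) for f in features]  # reused for every input frame
    return features

print(batch_inference(["cam1", "cam2", "cam3"], num_layers=2))
```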
- weight data size of the artificial neural network model is often relatively larger than the data size of the input image. Therefore, reusing weight data is more effective in reducing power consumption, improving operation speed, and reducing memory read time.
- since the main memory system 1070 transfers data through the system bus 1060, the memory transfer speed may be significantly lower than that of the NPU memory system 120, and power consumption may increase rapidly.
- since the NPU batch mode 250 can perform a plurality of inference operations using one artificial neural network model, there is an effect of alleviating the above-mentioned problems.
- the NPU scheduler 130 has an effect of inferring the input data of the six cameras by using one artificial neural network model.
- for example, if the data size of one artificial neural network model is 7 MB and the SRAM memory size of the NPU memory system 120 is 10 MB, all data of the artificial neural network model can be stored in the NPU memory system 120 by the NPU batch mode 250.
- if the NPU memory system 120 is a static memory such as SRAM and the main memory system 1070 is a DRAM, the NPU memory system 120 may operate 10 times faster than the main memory system 1070 due to the characteristics of the memory type.
- in the batch mode, the amount of inference operation is increased by 6 times, but since only the NPU memory system 120 needs to be used, there is an effect that the operation speed is substantially faster than in the conventional manner. That is, since the neural network processing unit 200 can compare the data size of the artificial neural network model with the memory size of the NPU memory system 120, there is an effect that communication with the main memory system 1070 can be eliminated or minimized.
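- As a non-authoritative sketch of this decision, the following hypothetical Python compares the model's data size with the on-chip memory size to decide whether main-memory traffic can be avoided; the figures reuse the 7 MB / 10 MB example above and the function name is an assumption.

```python
# Hypothetical sketch: deciding whether a model fits entirely in on-chip SRAM,
# which determines whether main-memory (DRAM) transfers can be avoided.

def fits_on_chip(model_size_mb: float, npu_sram_mb: float) -> bool:
    return model_size_mb <= npu_sram_mb

model_size_mb = 7          # example model size from the text
npu_sram_mb = 10           # example NPU memory system size from the text
if fits_on_chip(model_size_mb, npu_sram_mb):
    print("keep all weight data in the NPU memory system; skip DRAM transfers")
else:
    print("stream layer data from the main memory system as needed")
```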
- the above-described configuration is not limited to the NPU batch mode 250, and it is also possible to apply it to other examples of the present disclosure without the NPU batch mode 250.
- the NPU batch mode 250 is not limited to being placed inside the neural network processing unit 200, and it is also possible for the NPU batch mode 250 to be included in the edge device 1000.
- FIG. 7 is a schematic conceptual diagram illustrating a neural network processing unit according to another example of the present disclosure.
- the neural network processing unit 300 is configured to include a processing element array 310 , an NPU memory system 120 , an NPU scheduler 130 , and an NPU interface 140 .
- the neural network processing unit 300 is substantially the same as the neural network processing unit 100 according to an example of the present disclosure, except for the processing element array 310; hereinafter, duplicate descriptions may be omitted for convenience of description.
- the processing element array 310 is configured to further include, in addition to the plurality of processing elements PE1 to PE12, respective register files RF1 to RF12 corresponding to the respective processing elements PE1 to PE12.
- the plurality of processing elements PE1 to PE12 and the plurality of register files RF1 to RF12 illustrated in FIG. 7 are merely examples for convenience of description, and the number of the plurality of processing elements PE1 to PE12 and the plurality of register files RF1 to RF12 is not limited.
- the size or number of the processing element array 310 may be determined by the number of the plurality of processing elements PE1 to PE12 and the plurality of register files RF1 to RF12 .
- the size of the processing element array 310 and the plurality of register files RF1 to RF12 may be implemented in the form of an N ⁇ M matrix. where N and M are integers greater than zero.
- the array size of the processing element array 310 may be designed in consideration of the characteristics of the artificial neural network model in which the neural network processing unit 300 operates.
- the memory size of the register file may be determined in consideration of the data size of the artificial neural network model to be operated, the required operating speed, the required power consumption, and the like.
- the register files RF1 to RF12 of the neural network processing unit 300 are static memory units directly connected to the processing elements PE1 to PE12.
- the register files RF1 to RF12 may include, for example, flip-flops and/or latches.
- the register files RF1 to RF12 may be configured to store MAC operation values of the corresponding processing elements PE1 to PE12.
- the register files RF1 to RF12 may be configured to provide weight data and/or node data to, or receive them from, the NPU memory system 120.
- FIG. 8 is a schematic conceptual diagram illustrating characteristics of a maximum transfer rate according to a memory size of an exemplary register file of a neural network processing unit and a memory size of an exemplary NPU memory system composed of SRAM according to another example of the present disclosure.
- the X axis of FIG. 8 means the memory size.
- the Y axis means the maximum transmission rate (MHz).
- the NPU memory system 120 of FIG. 8 is exemplarily made of SRAM.
- the register file may have a characteristic that the maximum transfer rate (MHz) rapidly decreases as the memory size increases. However, in the NPU memory system 120, the maximum transfer rate (MHz) degradation characteristic according to the increase in the memory size is relatively less compared to the register file.
- the memory size of the register files RF1 to RF12 of the neural network processing unit 300 is relatively smaller than the memory size of the NPU memory system 120, and the maximum transfer rate of the register files may be configured to be relatively faster.
- the memory size of each of the plurality of register files RF1 to RF12 is relatively smaller than the memory size of the NPU memory system 120, and the maximum transfer rate of each of the plurality of register files RF1 to RF12 may be configured to be relatively faster than the maximum transfer rate of the NPU memory system 120.
- the memory size of each register file is 30 KB or less, and the memory size of the NPU memory system 120 may be larger than the memory size of the register file.
- each register file is 20 KB or less and the memory size of the NPU memory system 120 is larger than the memory size of the register file.
- the present disclosure is not limited thereto.
- the processing element array 310 may include register files of a memory size having a relatively faster maximum transfer rate than the NPU memory system 120.
- each of the plurality of register files may be configured to have a memory size having a relatively faster maximum transfer rate than the NPU memory system 120 . Accordingly, there is an effect of improving the operating speed of the neural network processing unit 300 .
- FIG. 9 is a schematic conceptual diagram illustrating power consumption characteristics based on the same transfer rate according to the memory size of an exemplary register file of a neural network processing unit and the memory size of an exemplary NPU memory system composed of SRAM according to another example of the present disclosure. .
- the X axis of FIG. 9 means the memory size.
- the Y-axis means power consumption (mW/GHz) based on the same transmission rate. That is, when the transmission speed is the same at 1 GHz, it means a change in the power consumption characteristic according to the size of the memory.
- the NPU memory system 120 of FIG. 9 is exemplarily made of SRAM.
- the register file may have a characteristic in which power consumption (mW/GHz) based on the same transmission rate rapidly increases as the memory size increases. However, the NPU memory system 120 has relatively less characteristics of increasing the power consumption (mW/GHz) based on the same transfer rate than the register file even when the size of the memory is increased.
- the memory size of the register files RF1 to RF12 of the neural network processing unit 300 is relatively smaller than the memory size of the NPU memory system 120, and the register files may be configured to have relatively less power consumption based on the same transfer rate.
- the memory size of each of the plurality of register files RF1 to RF12 is relatively smaller than the memory size of the NPU memory system 120, and the power consumption of each of the plurality of register files RF1 to RF12 based on the same transfer rate may be configured to be relatively less than the power consumption of the NPU memory system 120 based on the same transfer rate.
- the processing element array 310 has relatively less power consumption based on the same transfer rate than the NPU memory system 120 .
- accordingly, there is an effect that the neural network processing unit 300 according to another example of the present disclosure can reduce power consumption relative to the neural network processing unit 100 according to an example of the present disclosure.
- each of the plurality of register files may be configured to have a memory size that is relatively smaller in power consumption based on the same transfer rate than the NPU memory system 120 . Accordingly, there is an effect of reducing the power consumption of the neural network processing unit 300 .
- each of the plurality of register files RF1 to RF12 may be configured as a memory having a relatively faster maximum transfer rate than the NPU memory system 120 and relatively less power consumption based on the same transfer rate. Therefore, the power consumption of the neural network processing unit 300 can be reduced and, at the same time, the operation speed of the neural network processing unit 300 can be improved.
- FIG. 10 is a schematic conceptual diagram illustrating a neural network processing unit according to another example of the present disclosure.
- the neural network processing unit 400 is configured to include a processing element array 410 , an NPU memory system 420 , an NPU scheduler 130 , and an NPU interface 140 .
- the neural network processing unit 400 is substantially the same as the neural network processing unit 300 according to an example of the present disclosure, except for the NPU memory system 420; hereinafter, duplicate descriptions may be omitted for convenience of description.
- the NPU memory system 420 is configured to include a first memory 421 , a second memory 422 and a third memory 423 .
- the first memory 421 , the second memory 422 , and the third memory 423 may be logically separated memories or physically separated memories.
- the NPU scheduler 130 may control the first memory 421, the second memory 422, and the third memory 423 based on the structural data of the artificial neural network model or the artificial neural network data locality information. Accordingly, the NPU scheduler 130 may be configured to improve the memory reuse rate of the NPU memory system 420 by controlling the first memory 421, the second memory 422, and the third memory 423 of the NPU memory system 420 based on the structural data of the artificial neural network model driven in the neural network processing unit 400 or the artificial neural network data locality information.
- the first memory 421 is configured to communicate with the main memory system 1070, the second memory 422, and the third memory 423.
- the second memory 422 and the third memory 423 are configured to communicate with the plurality of processing elements PE1 to PE12 and the plurality of register files RF1 to RF12 .
- the neural network processing unit 400 may be configured to have a plurality of memory hierarchical structures optimized for memory reuse.
- the main memory system 1070 may be a first layer memory.
- the first memory 421 may be a memory of the second layer.
- the second memory 422 and the third memory 423 may be memories of a third layer.
- the plurality of register files RF1 to RF12 may be a memory of the fourth layer.
- the NPU scheduler 130 may be configured to store data containing information about the memory hierarchy and the size of each memory.
- the main memory system 1070 of the edge device 1000 may be configured to store all data of the artificial neural network model.
- the main memory system 1070 may store at least one artificial neural network model.
- the NPU memory system 420 may store all or part of the artificial neural network model according to the memory size and the data size of the artificial neural network model.
- the first artificial neural network model may be an AI keyword recognition model configured to infer a specific keyword.
- the second artificial neural network model may be an AI gesture recognition model configured to infer a specific gesture.
- examples according to the present disclosure are not limited thereto.
- the memory size of the main memory system 1070 may be larger than that of the first memory 421 .
- the memory size of the first memory 421 may be larger than the memory size of the second memory 422 or the third memory 423 .
- the memory size of each of the second memory 422 and the third memory 423 may be larger than each of the plurality of register files RF1 to RF12 .
- examples according to the present disclosure are not limited thereto.
- the main memory system 1070 may be a first layer memory.
- the memory of the first layer may have the largest memory size and the largest power consumption.
- examples according to the present disclosure are not limited thereto.
- the first memory 421 may be a memory of the second layer.
- the first memory 421 of the neural network processing unit 400 may be configured to store some or all data of the artificial neural network model.
- the memory of the second layer may have a smaller memory size than the memory of the first layer, consume less power than the memory of the first layer, and may have a faster transfer speed than the memory of the first layer.
- the NPU scheduler 130 may utilize the structural data of the artificial neural network model or the artificial neural network data locality information to store all scheduling data, such as the node data of the layers and the weight data of the connection networks, in the first memory 421.
- the edge device 1000 may continuously reuse the data stored in the NPU memory system 420 to provide an effect of improving the reasoning speed of the neural network processing unit 400 and reducing power consumption.
- the NPU scheduler 130 may utilize the structural data of the artificial neural network model or the artificial neural network data locality information to store some data, such as the node data of the layers and the weight data of the connection networks, in the first memory 421 in the determined scheduling order.
- the NPU scheduler 130 stores data up to a specific scheduling order, and as the scheduled scheduling is sequentially completed , it is possible to secure a storage space of the first memory 421 by reusing a part of the scheduling data for which calculation has been completed, and deleting or overwriting data that is not reused.
- the NPU scheduler 130 may analyze the secured storage space and store data necessary for the subsequent scheduling sequence.
- since the NPU scheduler 130 does not schedule with a scheduling algorithm used in a conventional CPU, for example, a priority setting algorithm considering fairness, efficiency, stability, reaction time, and the like, but provides scheduling based on the structural data of the artificial neural network model or the artificial neural network data locality information, the operation order can be determined according to the structure of the artificial neural network model, and since the entire operation sequence can be scheduled in advance, there is an effect that the memory reuse of the NPU memory system 420 can be optimized even when the storage space of the first memory 421 is insufficient.
- for example, when the AI gesture recognition model is a VGG16 convolutional artificial neural network model, the AI gesture recognition model may contain weight data of 16 connection networks, node data of the corresponding layers, and operation sequence data according to the layer structure of the AI gesture recognition model. Accordingly, the NPU scheduler 130 may determine the scheduling order, for example, as the first to the sixteenth scheduling based on the structural data of the artificial neural network model or the artificial neural network data locality information. In this case, each scheduling may be configured to perform a MAC operation that performs convolution.
- the NPU scheduler 130 may check the size of the node data of each layer and the weight data of the connection network. Therefore, the NPU scheduler 130 may calculate the memory size required for each inference operation for each scheduling. Accordingly, the NPU scheduler 130 may determine the data that can be stored in the first memory 421 .
- for example, when the NPU scheduler 130 processes the first to the sixteenth scheduling, it may be calculated that the data size required for the operations from the first scheduling to the sixth scheduling is larger than the size of the first memory 421. Accordingly, only the data corresponding to the first to the fifth scheduling may be stored from the main memory system 1070 to the first memory 421. After the first scheduling is completed, the NPU scheduler 130 compares the data size of the first scheduling with the data size of the sixth scheduling, deletes or overwrites the data of the first scheduling stored in the first memory 421, and may store the data of the sixth scheduling in the first memory 421.
- the NPU scheduler 130 compares the data size of the second scheduling with the data size of the seventh scheduling, deletes or overwrites the data of the second scheduling stored in the first memory 421, and may store the data of the seventh scheduling in the first memory 421.
- the NPU scheduler 130 may store the data of the new scheduling in the first memory 421 when the data size of the completed scheduling is larger than the data size of the scheduling to be newly stored. In other words, if the data size of the first scheduling is 500 KB and the data size of the sixth scheduling is 450 KB, the memory usage will not increase even if the data of the sixth scheduling is stored by deleting or overwriting the data of the first scheduling.
- the NPU scheduler 130 may not store the data of the new scheduling in the first memory 421 when the data size of the completed scheduling is smaller than the data size of the scheduling to be newly stored.
- if the data size of the first scheduling is 450 KB and the data size of the sixth scheduling is 850 KB, an additional memory space of 400 KB is required even if the data of the first scheduling is deleted or overwritten.
- the NPU scheduler 130 does not store the data of the sixth scheduling in the first memory 421 .
- if the data size of the second scheduling is 400 KB, the sum of the data sizes of the first scheduling and the second scheduling becomes 850 KB, so after the second scheduling is also completed, the data of the first and second scheduling can be deleted or overwritten and the data of the sixth scheduling can be stored.
- the NPU scheduler 130 of the neural network processing unit 400 may determine the data size for each scheduling order and sequentially delete, overwrite, and/or store data accordingly.
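- As a non-authoritative sketch of this swap-in policy, the following hypothetical Python frees the data of completed scheduling steps and loads the next step only when the freed space suffices; the sizes loosely echo the 450/400/850 KB discussion above and the function name is an assumption.

```python
# Hypothetical sketch of the first-memory management policy described above:
# data of a completed scheduling step is deleted/overwritten, and the next
# step's data is loaded only when the freed space is large enough to hold it.

def manage_first_memory(schedule_sizes_kb, capacity_kb):
    resident = []            # (step, size) currently held in the first memory
    used = 0
    next_step = 0
    # Prefill as many early scheduling steps as fit (e.g. steps 1..5).
    while next_step < len(schedule_sizes_kb) and used + schedule_sizes_kb[next_step] <= capacity_kb:
        resident.append((next_step, schedule_sizes_kb[next_step]))
        used += schedule_sizes_kb[next_step]
        next_step += 1
    # As each resident step completes, try to swap in the next pending step.
    for completed, size in list(resident):
        used -= size                              # delete or overwrite completed data
        if next_step < len(schedule_sizes_kb) and used + schedule_sizes_kb[next_step] <= capacity_kb:
            used += schedule_sizes_kb[next_step]
            print(f"step {completed + 1} done -> load step {next_step + 1}")
            next_step += 1
        else:
            print(f"step {completed + 1} done")

# Example loosely following the 450/400/.../850 KB discussion above.
manage_first_memory([450, 400, 300, 200, 150, 850], capacity_kb=1500)
```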
- the second memory 422 and the third memory 423 may be memories of a third layer.
- the memory of the third layer may have a smaller memory size than the memory of the second layer and may have a higher maximum transfer rate.
- the second memory 422 and the third memory 423 of the neural network processing unit 400 may be configured to selectively store a portion of data stored in the first memory 421 in consideration of a memory reuse rate. That is, the NPU scheduler 130 may be configured to selectively store some of the data stored in the first memory 421 in one of the second memory 422 and the third memory 423 in consideration of the memory reuse rate. .
- a memory reuse rate of data stored in the second memory 422 may be higher than a memory reuse rate of data stored in the third memory 423 .
- the weight data of the network of the artificial neural network model may not be changed until the artificial neural network model is updated. That is, when the weight data of the connection network of the artificial neural network model is stored in the second memory 422 , the stored data may have a constant characteristic.
- examples according to the present disclosure are not limited thereto.
- the weight data of the connection network stored in the second memory 422 can be reused, for example, 30 times per second and 1,800 times per minute.
- the second memory 422 is configured to store data having a constant characteristic, so that the memory reuse rate can be improved, and power consumption can be reduced.
- the first memory 421 may delete duplicate data. That is, the NPU scheduler 130 may be configured to delete the duplicated data of the first memory 421 when the data of the constant characteristic is stored in the second memory 422 .
- Data stored in the third memory 423 may have reusable variable characteristics.
- examples according to the present disclosure are not limited thereto.
- the MAC operation value of a specific layer of the artificial neural network model can be reused as node data of the next scheduling layer. That is, when the node data of the layer of the artificial neural network model is stored in the third memory 423 , the stored data may have a reused variable characteristic.
- examples according to the present disclosure are not limited thereto.
- the third memory 423 has the effect that the MAC operation value of each layer can be reused according to the scheduling order. Therefore, there is an effect of improving the memory efficiency of the NPU memory system 420.
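- As a non-authoritative sketch of the second/third memory split described above, the following hypothetical Python places constant data (connection-network weight data) in the second memory and reusable variable data (layer MAC outputs) in the third memory; the names are assumptions for illustration only.

```python
# Hypothetical sketch of the second/third memory split: constant data
# (connection-network weights) go to the second memory and reusable variable
# data (layer MAC results) go to the third memory.

def place_data(items):
    second_memory, third_memory = [], []      # memories 422 and 423
    for name, kind in items:
        if kind == "constant":                # e.g. weight data, unchanged until the model is updated
            second_memory.append(name)
        elif kind == "reusable_variable":     # e.g. a layer's MAC output reused by the next scheduling
            third_memory.append(name)
    return second_memory, third_memory

items = [("conn_1320_weights", "constant"),
         ("conn_1340_weights", "constant"),
         ("layer1_mac_output", "reusable_variable")]
print(place_data(items))
```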
- the second memory 422 and the third memory 423 are configured to communicate with the register files RF1 to RF12 and/or the processing elements PE1 to PE12.
- the register files RF1 to RF12 may be configured as memories of a small size having a faster maximum transfer rate than the NPU memory system 420 and relatively less power consumption based on the same transfer rate.
- the first memory 421 may be configured as a non-volatile memory
- the second memory 422 and the third memory 423 may be configured as a volatile memory. Therefore, when the edge device 1000 is rebooted after the power is cut off, there is an effect that the artificial neural network model does not need to be provided again from the main memory system 1070 .
- the register files RF1 to RF12 may be configured as volatile memory.
- examples according to the present disclosure are not limited thereto. According to the above-described configuration, the neural network processing unit 400 has an effect of reducing the power consumption by the DRAM during the inference operation of the artificial neural network model.
- the first memory 421 , the second memory 422 , and the third memory 423 may be configured as non-volatile memories. Therefore, when the edge device 1000 is rebooted after the power is cut off, there is an effect that the artificial neural network model does not need to be provided again from the main memory system 1070 .
- the register files RF1 to RF12 may be configured as volatile memory.
- examples according to the present disclosure are not limited thereto. According to the above-described configuration, the neural network processing unit 400 has an effect of reducing the power consumption by the DRAM during the inference operation of the artificial neural network model.
- the NPU scheduler 130 may utilize the artificial neural network structural data or the artificial neural network data locality information, together with the structural data and memory hierarchy of the neural network processing unit, to schedule so that the data reuse rate improves from the memory of the layer furthest from the processing elements up to the memory of the layer closest to the processing elements.
- the hierarchical structure of the memory of the neural network processing unit 100 may not be limited to the NPU memory system 420 , and may be extended to the main memory system 1070 of the edge device 1000 .
- the main memory system 1070 may be the memory of the first layer.
- the first memory 421 may be a memory of the second layer.
- the second memory 422 and the third memory 423 may be memories of the third layer.
- the register files RF1 to RF12 may be a memory of the fourth layer.
- the neural network processing unit 400 may be operated so that, based on the structural data of the neural network processing unit 400, data is stored in the memory of the layer closest to the processing elements in order of highest memory reuse rate.
- examples according to the present disclosure are not limited to the main memory system 1070 , and the neural network processing unit 400 may be configured to include the main memory system 1070 .
- FIG. 11 is a schematic conceptual diagram illustrating an optimization system capable of optimizing an artificial neural network model that can be processed by a neural network processing unit according to an example of the present disclosure.
- the optimization system 1500 may be configured to optimize the artificial neural network model driven in the neural network processing unit 100 in the neural network processing unit 100 .
- the optimization system 1500 is a separate external system of the edge device 1000 and refers to a system configured to optimize an artificial neural network model used in the neural network processing unit 100 according to an example of the present disclosure. Therefore, the optimization system 1500 may be referred to as a dedicated artificial neural network model emulator or artificial neural network model simulator of the neural network processing unit 100 .
- the conventional artificial neural network model is an artificial neural network model learned without considering the hardware characteristics of the neural network processing unit 100 and the edge device 1000 . That is, the conventional artificial neural network model was learned without considering the hardware limitations of the neural network processing unit 100 and the edge device 1000 . Therefore, when processing the conventional artificial neural network model, the processing performance in the neural network processing unit may not be optimized. For example, processing performance degradation may be due to inefficient memory management of edge devices and processing of massive amounts of computation of artificial neural network models. Therefore, the conventional edge device that processes the conventional artificial neural network model may have a problem of high power consumption or low processing speed.
- the optimization system 1500 is designed to optimize the artificial neural network model using the structural data of the artificial neural network model or the artificial neural network data locality information and the hardware characteristic data of the neural network processing unit 100 and the edge device 1000.
- when the optimized artificial neural network model is processed in the edge device 1000 including the neural network processing unit 100, there is an effect of providing relatively improved performance and reduced power consumption compared to the unoptimized artificial neural network model.
- the optimization system 1500 may be configured to include an artificial neural network model reading module 1510 , an optimization module 1520 , an artificial neural network model evaluation module 1530 , and an artificial neural network model updating module 1540 .
- the module may be a module composed of hardware, a module composed of firmware, or a module composed of software, and the optimization system 1500 is not limited to hardware, firmware, or software.
- the optimization system 1500 may be configured to communicate with the edge device 1000 .
- the present invention is not limited thereto, and the optimization system 1500 may be configured to separately receive the edge device 1000 and data of the neural network processing unit 100 included in the edge device 1000 .
- the optimization system 1500 may be configured to receive structural data of an artificial neural network model to be optimized or locality information of artificial neural network data.
- Structural data or artificial neural network data locality information of an artificial neural network model includes node data of a layer of an artificial neural network model, weight data of a connection network, arrangement structure data of layers of an artificial neural network model, or locality information of artificial neural network data, activation map, and weight kernel. It may be configured to include at least one piece of data.
- the weight kernel may be a weight kernel used for convolution for extracting features of a certain portion of input data while scanning it.
- the weight kernel traverses each channel of the input data, calculates a convolution, and then generates a feature map for each channel.
- the optimization system 1500 may be configured to further receive structural data of the neural network processing unit 100 .
- the present invention is not limited thereto.
- Structural data of the neural network processing unit 100 may be configured to include at least one of the memory size of the NPU memory system 120, the hierarchical structure of the NPU memory system 120, the maximum transfer rate and access latency of the respective memories in the hierarchical structure of the NPU memory system 120, the number of processing elements PE1 to PE12, and the operator structure of the processing elements PE1 to PE12.
- the optimization system 1500 may be configured to further receive structure data of the main memory system 1070 of the edge device 1000 .
- the present invention is not limited thereto.
- the structural data of the main memory system 1070 may be configured to include at least one of the memory size of the main memory system 1070, the hierarchical structure of the main memory system 1070, the access latency of the main memory system 1070, and the maximum transfer rate of the main memory system 1070.
- the optimization system 1500 may be configured to provide a simulation function that can predict the performance of the artificial neural network model when it operates in the edge device 1000, based on the structural data of the artificial neural network model, the structural data of the neural network processing unit 100, and the structural data of the main memory system 1070 of the edge device 1000. That is, the optimization system 1500 may be an emulator or simulator of an artificial neural network model that can operate in the edge device 1000.
- the optimization system 1500 is the structure data of the artificial neural network model or locality information of the artificial neural network data, the structure data of the neural network processing unit 100, and the structure of the main memory system 1070 of the edge device 1000 It can be configured to readjust various values of the artificial neural network model based on the data. Therefore, the optimization system 1500 may be configured to lighten the artificial neural network model so that the artificial neural network model is optimized for the edge device 1000 .
- the optimization system 1500 may receive various data from the edge device 1000 and the neural network processing unit 100 to simulate an artificial neural network model to be used.
- the optimization system 1500 may determine whether the artificial neural network model can be smoothly operated in the edge device 1000 by simulating the corresponding artificial neural network model.
- based on data such as the computational processing power, available memory bandwidth, maximum bandwidth, and memory access latency of the edge device 1000 and the neural network processing unit 100, and the computation amount and memory usage of each layer of the artificial neural network model, the optimization system 1500 can provide simulation results such as computation time, number of inferences per second, hardware resource usage, inference accuracy, and power consumption when a specific artificial neural network model operates on a specific edge device including a specific neural network processing unit.
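- As a non-authoritative sketch of such an estimate, the following hypothetical Python combines per-layer computation and data-movement figures into a crude roofline-style latency bound; all numbers and the function name are illustrative assumptions, not measured results.

```python
# Hypothetical sketch of a coarse performance estimate like the simulation
# results described above; the formulas and numbers are illustrative only.

def estimate(layers, macs_per_sec, mem_bandwidth_bytes_per_sec):
    """layers: list of (mac_count, bytes_moved) per layer."""
    compute_time = sum(macs for macs, _ in layers) / macs_per_sec
    memory_time = sum(b for _, b in layers) / mem_bandwidth_bytes_per_sec
    latency = max(compute_time, memory_time)        # crude roofline-style bound
    return {"latency_s": latency, "inferences_per_s": 1.0 / latency}

layers = [(2_000_000, 500_000), (8_000_000, 1_200_000), (1_000_000, 300_000)]
print(estimate(layers, macs_per_sec=50e9, mem_bandwidth_bytes_per_sec=4e9))
```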
- the optimization system 1500 may be configured to optimize the artificial neural network model through the optimization module 1520 when it is determined that a specific artificial neural network model is difficult to operate smoothly in the edge device 1000 .
- the optimization system 1500 may be configured to optimize the artificial neural network model to a specific condition by providing various cost functions.
- the optimization system 1500 may be configured to lighten the weight data and node data of each layer of the artificial neural network model so as to satisfy the condition that the inference accuracy is maintained above a specific value.
- the optimization system 1500 is configured to lighten the weight data and node data of each layer of the artificial neural network model so that the data size of the artificial neural network model becomes less than or equal to a specific value and the condition of the smallest decrease in inference accuracy is satisfied.
- the artificial neural network model reading module 1510 is configured to receive an artificial neural network model to be optimized from the outside.
- the artificial neural network model to be optimized may be a conventional artificial neural network model trained by a user who wants to utilize the neural network processing unit 100 .
- the artificial neural network model reading module 1510 may be configured to analyze the artificial neural network model to be optimized to extract structural data of the artificial neural network model or locality information of the artificial neural network data.
- the present invention is not limited thereto, and the artificial neural network model reading module 1510 may be configured to receive structural data of the artificial neural network model or locality information of the artificial neural network data from the outside.
- the artificial neural network model reading module 1510 may be additionally configured to further receive structural data of the neural network processing unit 100 .
- the artificial neural network model reading module 1510 may be configured to additionally receive structural data of the main memory system 1070 of the edge device 1000 .
- the optimization module 1520 may optimize the artificial neural network model based on the structural data of the artificial neural network model provided from the artificial neural network model reading module 1510 or the artificial neural network data locality information.
- the optimization module 1520 may be configured to further receive at least one of the structural data of the neural network processing unit 100 and the structural data of the memory system 1070 from the artificial neural network model reading module 1510 .
- the optimization module 1520 can optimize the artificial neural network model based on the structural data of the artificial neural network model or the artificial neural network data locality information provided from the artificial neural network model reading module 1510, together with additional structural data such as the structural data of the neural network processing unit 100 and the structural data of the main memory system 1070.
- the optimization module 1520 may be configured to selectively utilize at least one of a quantization algorithm, a pruning algorithm, a retraining algorithm, a quantization aware retraining algorithm, a model compression algorithm, and an artificial intelligence (AI) based model optimization algorithm.
- the pruning algorithm is a technology that can reduce the amount of computation of an artificial neural network model.
- the pruning algorithm may be configured to substitute 0 for small values close to 0 among weight data and/or weight kernels of all layers of the artificial neural network model.
- when specific weight data is replaced with 0 by the pruning algorithm, the pruning algorithm can provide substantially the same effect as disconnecting the connection network of the artificial neural network model having the corresponding weight data.
- the multiplier 111 of the first processing element PE1 of the neural network processing unit 100 may process convolutional matrix multiplication. If the value input to the first input unit or the second input unit of the multiplier 111 is 0, the result value of the multiplier 111 becomes 0 regardless of the values of the other operands. Accordingly, the first processing element PE1 may be configured to determine a case in which data input to at least one input unit of the multiplier 111 is 0 and skip a corresponding multiplication operation.
- the counter value of (L)loops of the accumulator 113 may be decreased by the number of zero data. That is, there is an effect that the unnecessary multiplication operation of the multiplier 111 and the subsequent unnecessary addition operation of the adder 112 and the unnecessary accumulation operation of the accumulator 113 can be skipped.
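- the zero-skipping behavior described above could be modeled, purely for illustration, as the following sketch of a sequence of multiply-accumulate steps; the function name and the loop counter are hypothetical simplifications of the processing element hardware:

```python
def multiply_accumulate(node_data, weight_data, acc=0):
    """Sketch of one processing element: skip the multiply, add, and
    accumulate steps whenever either operand is 0, because the product
    would be 0 and would not change the accumulator."""
    loops_executed = 0
    for x, w in zip(node_data, weight_data):
        if x == 0 or w == 0:
            continue  # skipped operation: the (L)loops counter is effectively decreased
        acc += x * w
        loops_executed += 1
    return acc, loops_executed

acc, loops = multiply_accumulate([3, 0, 2, 5], [1, 4, 0, 2])
print(acc, loops)  # only 2 of the 4 multiplications are actually performed
```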
- accordingly, the computational efficiency of the processing element array 110 of the neural network processing unit 100 can be increased, and the computational processing speed can also be increased.
- the pruning algorithm may be configured to evaluate the inference accuracy while gradually increasing, or gradually decreasing, the level of the threshold value that replaces small values close to 0 with 0 among the weight data and/or the weight kernels of all layers of the artificial neural network model.
- for example, the pruning algorithm may be configured to increase the level of the threshold in such a way that weight data of 0.01 or less is replaced with 0, and then weight data of 0.02 or less is replaced with 0.
- as the level of the threshold value increases, there is an effect that the conversion rate of weight data substituted with zero may increase.
- the artificial neural network model evaluation module 1530 may evaluate the artificial neural network model to which the pruning algorithm is applied. If the evaluated inference accuracy is higher than the target inference accuracy, the artificial neural network model evaluation module 1530 may be configured to instruct the optimization module 1520 to gradually increase the level of the substitution threshold value of the pruning algorithm of the artificial neural network model. Whenever the substitution threshold value changes, the artificial neural network model evaluation module 1530 is configured to evaluate the inference accuracy.
- the artificial neural network model evaluation module 1530 may be configured to gradually reduce the amount of computation of the artificial neural network model by repeatedly instructing the pruning algorithm until the evaluated inference accuracy falls below the target inference accuracy. In this case, the artificial neural network model evaluation module 1530 may be configured to store and evaluate the artificial neural network models of various versions to which the threshold value is selectively applied.
- the optimization system 1500 evaluates and compares various versions of pruned artificial neural network models, and thereby has the effect of generating an artificial neural network model having a relatively high conversion rate of weight data replaced with zero while minimizing the decrease in accuracy of the artificial neural network model.
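- the iterative threshold search described above could be sketched as the loop below; `evaluate_accuracy`, `target_accuracy`, the step size, and the model representation are hypothetical stand-ins for the evaluation performed by the artificial neural network model evaluation module 1530:

```python
import numpy as np

def prune_weights(weights, threshold):
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def search_pruning_threshold(model, evaluate_accuracy, target_accuracy,
                             start=0.01, step=0.01, max_threshold=1.0):
    """Raise the pruning threshold step by step, evaluate each candidate version,
    and stop once the evaluated inference accuracy drops below the target."""
    best_model, best_threshold = model, 0.0
    threshold = start
    while threshold <= max_threshold:
        candidate = {name: prune_weights(w, threshold) for name, w in model.items()}
        accuracy = evaluate_accuracy(candidate)
        if accuracy < target_accuracy:
            break                    # keep the last version that met the target
        best_model, best_threshold = candidate, threshold
        threshold += step            # e.g. 0.01 -> 0.02 -> 0.03 ...
    return best_model, best_threshold
```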
- the pruning algorithm may be configured to increase or decrease the level of a threshold value for substituting small values close to zero among weight data and/or weight kernels of at least one connection network among all layers of the artificial neural network model.
- for example, the pruning algorithm may be configured to increase the level of the threshold in such a way that the weight data of 0.01 or less of a specific layer is replaced with 0 and evaluated, and then the weight data of 0.02 or less of the corresponding layer is replaced with 0 and evaluated.
- as the level of the threshold value of the weight data of a specific layer increases, there is an effect that the conversion rate of the weight data substituted with 0 in the specific layer may increase.
- the pruning algorithm may be configured to preferentially prune weight data of a specific layer among weight data of a plurality of layers.
- the pruning algorithm may preferentially select and prune a layer having the largest data size among weight data of a plurality of layers.
- the pruning algorithm may preferentially select and prune weight data of some layers in an order of the largest data size among weight data of a plurality of layers. For example, weight data of the upper three layers having the largest data size may be preferentially pruned.
- if the pruning threshold level of the weight data of the layer with the largest data size among the layers of the artificial neural network model is set higher than the pruning threshold level of the weight data of the layer with the smallest data size, the weight data conversion rate of the layer with the largest data size may be higher than the weight data conversion rate of the layer with the smallest data size.
- that is, the weight data conversion rate of layers with a large data size may be higher than the weight data conversion rate of layers with a small data size.
- accordingly, the pruning degree of weight data of layers with a large data size can be relatively large, and as a result, there is an effect that the total amount of computation of the artificial neural network model can be further reduced and power consumption can be reduced while the deterioration of inference accuracy is minimized.
- the pruning algorithm may preferentially prune a layer with the largest amount of computation among weight data of a plurality of layers.
- the pruning algorithm may preferentially select and prune weight data of some layers in an order of the greatest amount of computation among weight data of a plurality of layers.
- for example, the pruning algorithm may preferentially prune the weight data of the top three layers with the largest amount of computation.
- when a specific layer is pruned first, the corresponding layer affects the operation of all other layers. That is, the degree of optimization of the layer pruned later may vary depending on the layer pruned first.
- therefore, the weight data conversion rate of the layer with the largest amount of computation may be higher than the weight data conversion rate of the layer with the smallest amount of computation.
- the weight data conversion rate of layers with a large amount of computation of the artificial neural network model may be higher than the weight data conversion rate of layers with a small amount of computation.
- the artificial neural network model evaluation module 1530 may be configured to instruct the optimization module 1520 to gradually increase the level of the substitution threshold value of the weight data of the layer being pruned. Whenever the substitution threshold value is changed, the artificial neural network model evaluation module 1530 is configured to evaluate the inference accuracy of the artificial neural network model to which the pruned layer is applied.
- the artificial neural network model evaluation module 1530 may be configured to gradually reduce the amount of computation of the artificial neural network model by repeatedly applying the pruning algorithm until the evaluated inference accuracy falls below the target inference accuracy. In this case, the artificial neural network model evaluation module 1530 may be configured to store and evaluate the artificial neural network models of various versions to which the substitution threshold value is selectively applied.
- when the pruning of the weight data of a specific layer is optimized, the optimization system 1500 ends the pruning of the corresponding layer and fixes the weight data of the optimized layer.
- the optimization system 1500 may then prune the weight data of each layer of the artificial neural network model in such a way that the pruning of the weight data of another layer starts. Therefore, there is an effect that the pruning degree of the weight data of each layer of the artificial neural network model can be optimized for each layer.
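- a layer-by-layer version of this procedure could be sketched as follows; the ordering of layers by weight data size, the per-layer threshold search, and all function names are hypothetical simplifications of the behavior described above:

```python
import numpy as np

def prune_layer(weights, threshold):
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def prune_model_per_layer(model, evaluate_accuracy, target_accuracy,
                          step=0.01, max_threshold=1.0):
    """Prune one layer at a time, starting with the layer holding the most
    weight data; once a layer's threshold search ends, its weights are fixed
    and pruning moves on to the next layer."""
    order = sorted(model, key=lambda name: model[name].size, reverse=True)
    for name in order:
        best, threshold = model[name], step
        while threshold <= max_threshold:
            candidate = dict(model, **{name: prune_layer(model[name], threshold)})
            if evaluate_accuracy(candidate) < target_accuracy:
                break                      # accuracy dropped: keep the previous version
            best = candidate[name]
            threshold += step
        model[name] = best                 # fix the optimized weights of this layer
    return model
```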
- if the pruning degree of the weight data of all layers of the artificial neural network model is the same, the degree of degradation of the inference accuracy of the artificial neural network model can be significantly increased.
- however, according to the present disclosure, the pruning degree of layers that are not sensitive to pruning can be set higher than the pruning degree of layers that are sensitive to pruning.
- the pruning degree of the artificial neural network model can be optimized for each layer.
- for example, in the case of an artificial neural network model such as VGG16, even if more than 90% of the weight data is replaced with 0, the inference accuracy of the artificial neural network model is hardly degraded, and the pruned weight data does not need to be actually calculated by the processing elements. Accordingly, the artificial neural network model to which the pruning algorithm is applied has the effect of reducing the actual amount of computation of the neural network processing unit 100 without substantially reducing the inference accuracy, thereby reducing power consumption and improving the computation speed.
- the quantization algorithm is a technology that can reduce the data size of an artificial neural network model.
- the quantization algorithm may be configured to selectively reduce the number of bits of node data of each layer of the artificial neural network model and weight data of each connection network.
- the quantization algorithm can reduce the data size of the artificial neural network model stored in the NPU memory system 120, and can also provide an effect of reducing the size of the input data supplied to each input unit of the processing element.
- the first input unit of the first processing element PE1 of the neural network processing unit 100 may be configured to receive quantized (N) bit data, and the second input unit may be configured to receive quantized (M) bit data.
- when each input unit of the multiplier 111 of the first processing element PE1 receives data of the quantized number of bits, the corresponding adder 112 and the accumulator 113 may also be configured to operate with the data of the quantized number of bits.
- the first input unit may be configured to receive node data of a layer quantized to (N) bits
- the second input unit may be configured to receive weight data of a layer quantized to (M) bits.
- the number of bits of data input to the first input unit may be different from the number of bits of data input to the second input unit.
- 5-bit data may be input to the first input unit and 7-bit data may be input to the second input unit.
- output data of the bit quantization unit 114 may be input data of a next layer. Accordingly, if the node data of the next layer has been quantized to a specific number of bits, the bit quantization unit 114 may be configured to correspondingly convert the number of bits of the output data.
- the power consumption of the multiplier 111 and the adder 112 of the processing element can be reduced according to the quantization level.
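- the mixed-precision behavior of the processing element described above could be illustrated with the following sketch; the clamping to (N) and (M) bits and the final conversion by the bit quantization unit are modeled with hypothetical helper functions and example bit widths:

```python
def quantize(value: int, bits: int) -> int:
    """Clamp a signed integer to the range representable with the given number of bits."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, value))

def pe_multiply_accumulate(node_data, weight_data, n_bits=5, m_bits=7, out_bits=8):
    """Sketch of a processing element: the first input receives N-bit node data,
    the second input receives M-bit weight data, and the bit quantization unit
    converts the accumulated result to the number of bits of the next layer."""
    acc = 0
    for x, w in zip(node_data, weight_data):
        acc += quantize(x, n_bits) * quantize(w, m_bits)
    return quantize(acc, out_bits)  # output of the bit quantization unit

print(pe_multiply_accumulate([3, -7, 12], [40, -5, 9]))
```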
- the optimization module 1520 may quantize node data of a specific layer of the artificial neural network model, weight data of a specific layer, a specific weight kernel, and/or a specific feature map based on the memory size of the NPU memory system 120, and the artificial neural network model evaluation module 1530 may be configured to evaluate the size of the quantized data for each quantized node data, weight data, weight kernel, and/or feature map.
- the size of the target data may be determined based on the memory size of the NPU memory system 120 . Accordingly, the optimization module 1520 may be configured to gradually reduce the number of bits of node data and/or weight data of a specific layer of the artificial neural network model to be quantized in order to achieve a target data size.
- the node data size of a specific layer of the artificial neural network model, the weight data size of the specific layer, the data size of the specific weight kernel, and/or the data size of the specific feature map may be quantized to be smaller than the memory size of the NPU memory system 120 .
- the artificial neural network model evaluation module 1530 may be configured to evaluate the inference accuracy of the quantized node data, weight data, weight kernel, and/or feature map that are smaller than the memory size of the NPU memory system 120 .
- when the size of the specific data of the artificial neural network model is smaller than the memory size of the NPU memory system 120, there is an effect that the inference operation can be efficiently performed using only the internal memory of the NPU memory system 120 .
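- the memory-size-driven quantization described above could be sketched as the loop below; `quantize_layer_data`, `data_size`, and the bit step are hypothetical placeholders for the behavior of the optimization module 1520 and the evaluation module 1530:

```python
def quantize_to_fit_memory(layer_data, quantize_layer_data, data_size,
                           memory_size_bytes, start_bits=32, min_bits=1):
    """Reduce the number of bits of a layer's data step by step until its
    quantized size fits within the NPU memory system, or the minimum bit
    width is reached."""
    bits = start_bits
    candidate = layer_data
    while bits >= min_bits:
        candidate = quantize_layer_data(layer_data, bits)
        if data_size(candidate) <= memory_size_bytes:
            break                 # the quantized data now fits in the internal memory
        bits -= 1
    return candidate, bits
```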
- the optimization module 1520 may quantize node data of a specific layer of the artificial neural network model, weight data of a specific layer, a specific weight kernel, and/or a specific feature map based on the memory size of the NPU memory system 120 and the memory size of the main memory system 1070, and the artificial neural network model evaluation module 1530 may be configured to evaluate the size of the quantized data for each quantized node data, weight data, weight kernel, and/or feature map.
- the size of the target data may be determined based on the memory sizes of the NPU memory system 120 and the main memory system 1070 .
- the optimization module 1520 may be configured to gradually reduce the number of bits of node data and/or weight data of a specific layer of the artificial neural network model to be quantized in order to achieve a target data size.
- the node data size of a specific layer of the artificial neural network model, the weight data size of the specific layer, the data size of the specific weight kernel, and/or the data size of the specific feature map may be quantized to be smaller than the memory size of the NPU memory system 120 and/or the memory size of the main memory system 1070 .
- the artificial neural network model evaluation module 1530 may be configured to evaluate the inference accuracy of the quantized node data, weight data, weight kernel, and/or feature map quantized to be smaller than the memory size of the NPU memory system 120 and/or the memory size of the main memory system 1070 .
- when the size of specific data of the artificial neural network model is smaller than the memory size of the NPU memory system 120 and/or the memory size of the main memory system 1070, there is an effect that the inference operation can be efficiently performed by selectively using the memory of the NPU memory system 120 and the memory of the main memory system 1070 .
- the main memory system 1070 of the edge device 1000 may be configured to include on-chip memory and/or off-chip memory.
- the neural network processing unit 100 may be configured to divide and store the data of the artificial neural network model in the NPU memory system 120 and the main memory system 1070 based on the maximum transfer speed and access latency information of each memory of the main memory system 1070 .
- for example, data that fits within the memory size of the NPU memory system 120 may be stored in the NPU memory system 120, and data larger than the memory size of the NPU memory system 120 may be stored in the main memory system 1070 .
- a weight kernel having a high frequency of use and a small data size may be stored in the NPU memory system 120 , and the feature map may be stored in the main memory system 1070 .
- when the weight data, node data, input data, and the like of the artificial neural network model are stored in each memory based on information such as the available memory bandwidth, maximum memory bandwidth, and memory access latency of each memory, the memory management efficiency of the neural network processing unit 100 may be improved, so that the operating speed of the neural network processing unit 100 may be improved.
- the node data of each layer and the weight data of each layer of the quantized artificial neural network model may be configured to have an individually quantized number of bits.
- when the neural network processing unit 100 stores the node data of each quantized layer and the weight data of each quantized layer in the NPU memory system 120, the data may be stored in the NPU memory system 120 with the individually quantized number of bits.
- the conventional memory may be configured to store data in units of 8 bits, 16 bits, 32 bits, 64 bits, or 128 bits.
- the NPU memory system 120 may be configured to store the weight data and node data of the artificial neural network model as the number of quantized bits.
- the weight data of the layer quantized to 3 bits may be stored in the NPU memory system 120 or the main memory system 1070 in units of 3 bits.
- the node data of the layer quantized to 7 bits may be stored in the NPU memory system 120 in units of 7 bits.
- the NPU memory system 120 may be configured to store 8 weight data quantized to 4 bits in a memory cell stored in units of 32 bits.
- the neural network processing unit 100 has the effect of efficiently storing the quantized data in the NPU memory system 120 because it has quantized bit number data for each layer.
- the memory usage and the amount of calculation may be reduced.
- since data can be stored with the quantized number of bits in the NPU memory system 120 or the main memory system 1070, there is an effect that memory usage efficiency can be improved.
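- the idea of storing data with the quantized number of bits rather than a fixed word size could be sketched as below; packing eight 4-bit weights into one 32-bit memory cell matches the example above, and the helper name and unsigned-value handling are hypothetical:

```python
def pack_values(values, bits_per_value, word_bits=32):
    """Pack unsigned quantized values into fixed-width memory words.

    Assumes word_bits is a multiple of bits_per_value. Eight 4-bit weights
    fit into a single 32-bit cell, so memory usage is far lower than storing
    each weight in its own 8/16/32-bit slot.
    """
    words, current, used = [], 0, 0
    mask = (1 << bits_per_value) - 1
    for v in values:
        current |= (v & mask) << used
        used += bits_per_value
        if used >= word_bits:
            words.append(current)
            current, used = 0, 0
    if used:
        words.append(current)
    return words

# 8 weights quantized to 4 bits occupy exactly one 32-bit word.
print(pack_values([1, 5, 9, 12, 3, 7, 15, 2], bits_per_value=4))
```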
- the quantization algorithm may be configured to evaluate inference accuracy while gradually reducing the number of bits of node data and weight data of all layers of the artificial neural network model.
- the quantization algorithm may be configured to reduce the number of bits of node data and weight data of all layers from 32 bits to 31 bits, and then from 31 bits to 30 bits.
- the number of bits may be quantized within a range in which inference accuracy for each layer is not substantially deteriorated.
- the artificial neural network model evaluation module 1530 may evaluate the artificial neural network model to which the quantization algorithm is applied. If the estimated inference accuracy is higher than the target inference accuracy, the neural network model evaluation module 1530 may be configured to instruct the optimization module 1520 to gradually reduce the number of bits of the quantization algorithm of the artificial neural network model. Whenever the number of bits decreases, the neural network model evaluation module 1530 is configured to evaluate the inference accuracy. The artificial neural network model evaluation module 1530 may be configured to gradually lighten the data size of the artificial neural network model by repeatedly instructing the quantization algorithm until the evaluated inference accuracy falls below the target inference accuracy. In this case, the artificial neural network model evaluation module 1530 may be configured to store and evaluate various versions of the artificial neural network model to which the number of bits is selectively applied.
- the optimization system 1500 evaluates and compares various versions of quantized artificial neural network models, and thereby has the effect of creating an artificial neural network model in which the number of bits of node data and weight data of all layers is minimized while minimizing the decrease in inference accuracy of the artificial neural network model.
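- the bit-reduction loop described above could be sketched as follows; `quantize_model` and `evaluate_accuracy` are hypothetical stand-ins for the optimization module 1520 and the artificial neural network model evaluation module 1530:

```python
def reduce_bit_width(model, quantize_model, evaluate_accuracy,
                     target_accuracy, start_bits=32, step=1):
    """Reduce the number of bits of node data and weight data step by step
    (e.g. 32 -> 31 -> 30 ...), evaluate each version, and keep the smallest
    bit width whose evaluated accuracy still meets the target."""
    best_bits, best_model = start_bits, model
    bits = start_bits - step
    while bits > 0:
        candidate = quantize_model(model, bits)
        if evaluate_accuracy(candidate) < target_accuracy:
            break                     # the previous bit width is the final one
        best_bits, best_model = bits, candidate
        bits -= step
    return best_model, best_bits
```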
- the quantization algorithm may be configured to reduce the number of bits of node data and/or weight data of at least one connection network among all layers of the artificial neural network model. For example, the quantization algorithm may be configured to reduce the number of bits of the node data of a specific layer by reducing the node data of that layer from 32 bits to 31 bits and evaluating it, and then reducing the node data of the corresponding layer from 31 bits to 30 bits and evaluating it. When the number of bits of node data and/or weight data of a specific layer is decreased, there is an effect that the quantization level of the specific layer may be increased.
- the number of quantized bits may be evaluated while being reduced in a specific bit unit.
- the specific bit unit may be 1 bit or more, and the present disclosure is not limited to a particular number of bits. In this case, the number of bits of each layer may be individually reduced within a range in which the evaluated inference accuracy is not substantially deteriorated.
- the quantization algorithm may be configured to preferentially quantize node data and/or weight data of a specific layer among node data and/or weight data of a plurality of layers.
- the quantization algorithm may quantize by preferentially selecting node data and/or weight data of a layer having the largest data size among node data and/or weight data of a plurality of layers.
- the quantization algorithm may preferentially select and quantize node data and/or weight data of some layers in the order of the largest data size among node data and/or weight data of a plurality of layers. For example, weight data of the upper three layers having the largest data size may be preferentially quantized.
- if the quantization level of the node data and/or weight data of the layer having the largest data size among the layers of the artificial neural network model is set higher than the quantization level of the node data and/or weight data of the layer having the smallest data size, the quantization level of the layer having the largest data size may be higher than the quantization level of the layer having the smallest data size.
- the quantization level of layers with a large data size may be higher than the quantization level of layers with a small data size.
- accordingly, the quantization level of node data and/or weight data of layers having large data may be relatively higher, and as a result, there is an effect that the overall size of the artificial neural network model becomes lighter and power consumption is reduced while the degradation of inference accuracy is minimized.
- the quantization algorithm may be configured to perform optimization based on structural data of an artificial neural network model or locality information of artificial neural network data.
- the quantization order of the quantization algorithm may be determined based on various criteria based on structural data of an artificial neural network model or locality information of artificial neural network data.
- the artificial neural network model evaluation module 1530 may be configured to perform optimization based on the structural data of the neural network processing unit 100 .
- the quantization algorithm may preferentially quantize a layer with the largest amount of computation among node data and/or weight data of a plurality of layers.
- the quantization algorithm may preferentially select and quantize node data and/or weight data of some layers in an order of the greatest amount of computation among node data and/or weight data of a plurality of layers.
- the quantization algorithm may preferentially quantize node data and/or weight data of a layer of an upper group with the largest amount of computation.
- the number of upper groups may be, for example, three, five, or the like.
- the quantization algorithm can preferentially quantize weight data with high frequency of use.
- for example, the weight data of a specific layer may be used more frequently than the weight data of another layer.
- when a specific layer is quantized first, the corresponding layer affects the operation of all other layers. That is, the degree of optimization of the layer quantized later may vary depending on the layer quantized first.
- if node data and/or weight data of a layer with a small amount of computation or a small data size but sensitive to quantization are quantized first, the quantization efficiency of the node data and/or weight data of the layer with the largest amount of computation or the largest data size may be reduced.
- conversely, if node data and/or weight data of a layer with a large amount of computation or a large data size and less sensitive to quantization are quantized first, there is an effect that the overall quantization efficiency of the artificial neural network model can be improved, even if the quantization efficiency of node data and/or weight data of a layer with a small amount of computation or a small data size is lowered.
- accordingly, the number of bits of the data of the layer with the largest amount of computation may be smaller than the number of bits of the data of the layer with the smallest amount of computation.
- the number of bits of data of a specific layer may be the sum of the number of bits of node data of the corresponding layer and the number of bits of weight data.
- the quantization level of layers with a large amount of computation of the artificial neural network model may be higher than the quantization level of layers with a small amount of computation.
- the artificial neural network model evaluation module 1530 may be configured to instruct the optimization module 1520 to gradually decrease the number of bits of node data and/or weight data of the layer being quantized. Whenever the number of bits decreases, the artificial neural network model evaluation module 1530 is configured to evaluate the inference accuracy of the artificial neural network model to which the quantized layer is applied.
- the artificial neural network model evaluation module 1530 may be configured to gradually reduce the data size of the artificial neural network model by repeatedly applying the quantization algorithm until the evaluated inference accuracy falls below the target inference accuracy. In this case, the artificial neural network model evaluation module 1530 may be configured to store and evaluate various versions of the artificial neural network model to which the number of bits is selectively applied.
- the neural network model evaluation module 1530 is configured to evaluate both the inference accuracy and the data size of the layer while repeating the act of reducing the number of bits.
- the final number of bits of a specific quantized layer may be determined as the number of bits at which the data size of the layer becomes smaller than the target data size of the layer. If the evaluated inference accuracy is lower than the target inference accuracy, the final number of bits may be determined as the number of bits just before the evaluated inference accuracy fell below the target inference accuracy.
- when the quantization of node data and/or weight data of a specific layer is optimized, the optimization system 1500 terminates quantization of the corresponding layer and fixes the number of bits of node data and/or weight data of the optimized layer. The optimization system 1500 may then quantize node data and/or weight data of each layer of the artificial neural network model in a manner that starts quantization of node data and/or weight data of another layer. Accordingly, there is an effect that the quantization degree of node data and/or weight data of each layer of the artificial neural network model can be optimized for each layer.
- if the quantization of node data and/or weight data of all layers of the artificial neural network model is the same, the degree of deterioration in inference accuracy of the artificial neural network model may be significantly increased.
- however, the quantization level of layers that are not sensitive to quantization can be applied higher than the quantization level of layers that are sensitive to quantization, so that the quantization level of the artificial neural network model can be optimized for each node data and/or weight data of the layer.
- the artificial neural network model evaluation module 1530 may be configured to perform optimization based on structural data of the main memory system 1070 of the edge device 1000 .
- the optimization module 1520 may quantize node data of a specific layer and/or weight data of a specific layer of the artificial neural network model based on the memory size of the memory system 1070, and the artificial neural network model evaluation module 1530 may be configured to evaluate the size of the data of the quantized layer for each quantized layer.
- the target data size of the layer may be determined based on the memory size of the NPU memory system 120 .
- the artificial neural network model evaluation module 1530 may be configured to evaluate both the inference accuracy and the data size of the layer while repeating the act of reducing the number of bits of a specific layer.
- the final number of bits of a specific quantized layer may be determined as the number of bits at which the data size of the layer becomes smaller than the target data size of the layer. If the evaluated inference accuracy is lower than the target inference accuracy, the final number of bits may be determined as the number of bits just before the evaluated inference accuracy fell below the target inference accuracy.
- node data of each layer and weight data of each layer of the quantized artificial neural network model may be configured to have an individually optimized number of bits. According to the above configuration, since the node data of each layer and the weight data of each layer of the quantized artificial neural network model are individually quantized, there is an effect that the node data of each layer and the weight data of each layer can have an individually optimized number of bits without substantially reducing the inference accuracy of the artificial neural network model.
- since the artificial neural network model can be quantized in consideration of the hardware characteristics of the neural network processing unit 100 and/or the edge device 1000, there is an effect that it can be optimized for the neural network processing unit 100 and/or the edge device 1000 .
- the optimization system 1500 may be provided with the artificial neural network model to be quantized through the artificial neural network model reading module 1510 .
- the optimization system 1500 may be configured to analyze structural data of an artificial neural network model to be optimized or locality information of artificial neural network data. Furthermore, the optimization system 1500 may be configured to further analyze the structural data of the artificial neural network model to be optimized or the locality information of the artificial neural network data together with the structural data of the neural network processing unit 100 . Furthermore, the optimization system 1500 may be configured to further analyze the structural data of the artificial neural network model to be optimized or the locality information of the artificial neural network data, the structural data of the neural network processing unit 100, and the structural data of the edge device 1000 .
- the above-described structural data or artificial neural network data locality information analysis can be implemented in one of the artificial neural network model reading module 1510, the optimization module 1520, and the artificial neural network model evaluation module 1530 of the optimization system 1500.
- the optimization module 1520 will be described as an example of analyzing at least one piece of structural data or artificial neural network data locality information, but the present disclosure is not limited thereto.
- the optimization module 1520 may be configured to apply a quantization algorithm to an artificial neural network model to be optimized.
- the optimization module 1520 may be configured to perform a grouping policy of a quantization algorithm.
- the grouping policy may be a policy of grouping quantized data among structural data of an artificial neural network model or locality information of artificial neural network data according to a specific criterion.
- the quantization algorithm can group and quantize data having features or a common denominator.
- the degree of reduction in the number of bits of the quantization algorithm can be improved and the reduction in inference accuracy can be minimized.
- the grouping policy may be subdivided into a first grouping policy, a second grouping policy, a third grouping policy, and a fourth grouping policy.
- the first grouping policy is a grouping policy based on the operation order or scheduling order of the artificial neural network model.
- the optimization system 1500 may analyze the operation sequence of the artificial neural network model by analyzing the structural data of the artificial neural network model or locality information of the artificial neural network data. Therefore, when the first grouping policy is selected, the quantization order may be determined by grouping by layer of the artificial neural network model or by weight kernel.
- the first grouping policy may determine a quantization order by grouping at least one layer.
- node data and/or weight data of at least one layer may be grouped.
- node data and/or weight data of at least one layer may be grouped and quantized by a predetermined number of bits. That is, one layer or a plurality of layers may be quantized according to an operation order for each analyzed group.
- the first grouping policy may determine a quantization order by grouping at least one weight kernel.
- at least one weight kernel and/or feature map may be grouped.
- at least one weight kernel and/or feature map may be grouped and quantized by a predetermined number of bits. That is, one weight kernel or a plurality of weight kernels may be quantized according to an operation order for each analyzed group.
- the reason for the first grouping policy is that, in the case of a specific artificial neural network model, the earlier a layer is in the operation order, the greater the degradation of inference accuracy during quantization may be.
- this is because the quantization level of layers or convolutions that are placed at the back and are insensitive to quantization can be improved while minimizing the reduction in inference accuracy of the quantized artificial neural network model.
- the present disclosure is not limited thereto, and in a specific artificial neural network model, a layer disposed at the back may be more sensitive to quantization.
- the second grouping policy is a grouping policy based on the size of the computational amount of the artificial neural network model.
- the optimization system 1500 may analyze the amount of computation of the artificial neural network model by analyzing the structural data of the artificial neural network model or locality information of the artificial neural network data. Therefore, when the second grouping policy is selected, the quantization order can be determined by grouping each layer or weight kernel of the artificial neural network model.
- the second grouping policy may determine a quantization order by grouping at least one layer.
- node data and/or weight data of at least one layer may be grouped.
- node data and/or weight data of at least one layer may be grouped and quantized by a predetermined number of bits. That is, one layer or a plurality of layers may be quantized according to the order of the amount of computation for each analyzed group.
- the second grouping policy may determine a quantization order by grouping at least one weight kernel.
- at least one weight kernel and/or feature map may be grouped.
- at least one weight kernel and/or feature map may be grouped and quantized by a predetermined number of bits. That is, one weight kernel or a plurality of weight kernels may be quantized according to the order of the amount of computation for each analyzed group.
- the reason for the second grouping policy is that, in the case of a specific artificial neural network model, the difference in the amount of computation for each layer or for each weight kernel may be quite large. In this case, if a layer or a convolution with a large amount of computation is quantized first and the inference accuracy degradation due to quantization is verified in advance, the reduction in the inference accuracy of the quantized artificial neural network model can be minimized while reducing the amount of computation.
- the present disclosure is not limited thereto.
- the third grouping policy is a grouping policy based on the memory usage size of the artificial neural network model.
- the optimization system 1500 may analyze the memory usage size of the artificial neural network model by analyzing the structural data of the artificial neural network model or locality information of the artificial neural network data. Therefore, when the third grouping policy is selected, the quantization order can be determined by grouping by layer of the artificial neural network model or by weight kernel.
- the third grouping policy may determine a quantization order by grouping at least one layer.
- node data and/or weight data of at least one layer may be grouped.
- node data and/or weight data of at least one layer may be grouped and quantized by a predetermined number of bits. That is, one layer or a plurality of layers may be quantized according to the order of memory usage size for each analyzed group.
- the third grouping policy may determine a quantization order by grouping at least one weight kernel.
- at least one weight kernel and/or feature map may be grouped.
- at least one weight kernel and/or feature map may be grouped and quantized by a predetermined number of bits. That is, one weight kernel or a plurality of weight kernels may be quantized according to the order of memory usage size for each analyzed group.
- the reason for the third grouping policy is that, in the case of a specific artificial neural network model, the difference in memory usage size for each layer or for each weight kernel may be quite large. In this case, if a layer or a convolution with a large memory usage size is quantized first and the inference accuracy degradation due to quantization is verified in advance, the reduction in the inference accuracy of the quantized artificial neural network model can be minimized while reducing the memory usage.
- the present disclosure is not limited thereto.
- the fourth grouping policy is a policy for selectively applying the first to third grouping policies. Since each of the above-described policies may have advantages and disadvantages, the first to third grouping policies may be selectively used according to the characteristics of the artificial neural network model. For example, in the case of a specific artificial neural network model, the quantization order may be determined in the order of memory usage size as the first priority. However, as the second priority, the quantization order may be readjusted in the order of the amount of computation. In addition, as the third priority, the quantization order may be readjusted in the order of operation. When the grouping policies are mixed and applied, the optimization system 1500 may be configured to provide grouping policies of various combinations by giving respective weight values to the grouping policies.
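- the selection and mixing of grouping policies could be sketched as below; the per-group statistics (`operation_order`, `compute`, `memory`) and the weighting scheme of the fourth policy are hypothetical names for the criteria discussed above:

```python
def order_groups(groups, policy, weights=(1.0, 0.0, 0.0)):
    """Order data groups (layers or weight kernels) for quantization.

    policy 1: operation/scheduling order   policy 2: amount of computation
    policy 3: memory usage size            policy 4: weighted mix of 1-3
    Each group is a dict such as:
    {"name": "conv1", "operation_order": 0, "compute": 1.2e9, "memory": 3.5e6}
    """
    if policy == 1:
        return sorted(groups, key=lambda g: g["operation_order"])
    if policy == 2:
        return sorted(groups, key=lambda g: g["compute"], reverse=True)
    if policy == 3:
        return sorted(groups, key=lambda g: g["memory"], reverse=True)
    # policy 4: combine the three criteria with user-supplied weight values
    w_order, w_compute, w_memory = weights
    def score(g):
        return (-w_order * g["operation_order"]
                + w_compute * g["compute"]
                + w_memory * g["memory"])
    return sorted(groups, key=score, reverse=True)
```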
- the optimization system 1500 completes data grouping for quantization of the artificial neural network model according to a policy selected among the grouping policies described above, and the ordered data groups may be sequentially quantized.
- the optimization system 1500 is configured to sequentially quantize the grouped data ordered according to the grouping policy.
- the quantization algorithm can quantize data according to the quantization level. When the quantization level is 1, the grouped data can be quantized by 1 bit. When the quantization level is 2, the grouped data can be quantized by 2 bits. That is, the quantization level may be an integer of 1 or more.
- the optimization module 1520 may selectively apply a quantization recognition learning algorithm to improve inference accuracy of an artificial neural network model to which quantization is applied.
- the quantization recognition learning algorithm quantizes a data group and then performs re-learning to compensate for the decrease in inference accuracy of the artificial neural network model due to quantization.
- the artificial neural network model evaluation module 1530 is configured to evaluate the inference accuracy, memory usage requirements, power consumption, and computational performance, for example, the number of inferences per second, of the artificial neural network model including the data group being quantized, as determined by the grouping policy.
- the artificial neural network model evaluation module 1530 may be configured to provide the evaluation result through various cost functions.
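- one possible cost function combining the evaluation criteria above is sketched below; the individual weight values and the normalization against the limits are hypothetical choices, not values given in the disclosure:

```python
def evaluation_cost(metrics, limits, weights=None):
    """Combine inference accuracy, memory usage, power consumption, and
    inferences per second into a single cost value (lower is better).

    metrics/limits are dicts such as:
    {"accuracy": 0.91, "memory_bytes": 4e6, "power_mw": 250, "inferences_per_s": 30}
    """
    weights = weights or {"accuracy": 10.0, "memory_bytes": 1.0,
                          "power_mw": 1.0, "inferences_per_s": 5.0}
    cost = 0.0
    # penalize accuracy below the target
    cost += weights["accuracy"] * max(0.0, limits["accuracy"] - metrics["accuracy"])
    # penalize memory and power above their limits
    cost += weights["memory_bytes"] * max(0.0, metrics["memory_bytes"] / limits["memory_bytes"] - 1.0)
    cost += weights["power_mw"] * max(0.0, metrics["power_mw"] / limits["power_mw"] - 1.0)
    # penalize throughput below the required inferences per second
    cost += weights["inferences_per_s"] * max(0.0, 1.0 - metrics["inferences_per_s"] / limits["inferences_per_s"])
    return cost
```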
- if the evaluated inference accuracy is higher than the target inference accuracy, the artificial neural network model evaluation module 1530 instructs the optimization module 1520 to further reduce the number of bits of the artificial neural network model including the data group being quantized. Then, the inference accuracy of the artificial neural network model including the data group being quantized with the further reduced number of bits is evaluated again.
- the above-described quantization of the data group being quantized may be repeated until the estimated inference accuracy of the artificial neural network model becomes smaller than the target inference accuracy.
- when the evaluated inference accuracy falls below the target inference accuracy, the artificial neural network model evaluation module 1530 may be configured to restore the data group being quantized to the point at which the inference accuracy satisfied the target inference accuracy, and to determine the number of quantized bits by terminating the quantization of the data group being quantized.
- quantization of the data group in the next order may be started, and quantization of each data group may be repeated in the same order. Therefore, all data groups may be quantized according to a preset grouping policy order.
- the artificial neural network model evaluation module 1530 may be configured to simulate performance when the quantized artificial neural network model operates in the neural network processing unit 100 according to the grouping policy order.
- the artificial neural network model evaluation module 1530 of the optimization system 1500 receives various data from the edge device 1000 and the neural network processing unit 100 and provides a simulation result of the quantized artificial neural network model according to the grouping policy order. can be
- by simulating the quantized artificial neural network model according to the grouping policy order, the optimization system 1500 can determine whether the artificial neural network model can be smoothly operated in the edge device 1000 including the neural network processing unit 100 .
- using data such as the arithmetic processing power, available memory bandwidth, maximum memory bandwidth, and memory access latency of the edge device 1000 and the neural network processing unit 100, and the amount of computation and memory usage of each layer of the artificial neural network model, the optimization system 1500 may provide simulation results or simulation estimates of the computation time, number of inferences per second, hardware resource usage, inference accuracy, and power consumption when the quantized artificial neural network model operates on a specific edge device including a specific neural network processing unit according to the grouping policy order.
- the optimization system 1500 may be configured to determine the priority of the grouping policy in consideration of a combination of the application requirements of the edge device 1000, the neural network processing unit 100, and/or the artificial neural network model, for example, limitations on inference accuracy, memory size, power consumption, and inferences per second.
- for example, in consideration of the application characteristics of a specific edge device, when the inference speed per second should be prioritized among the performance criteria of the specific edge device, the optimization system 1500 may change the grouping policy order of the optimization module 1520 to reduce the amount of computation, so that the artificial neural network model can be optimized.
- for example, when memory usage should be prioritized among the performance criteria of a specific edge device, the optimization system 1500 may change the grouping policy order of the optimization module 1520 to reduce memory usage, so that the artificial neural network model can be optimized.
- the model compression algorithm is a technique for compressing weight data, activation map, or feature map of an artificial neural network model.
- the compression technique may utilize conventional known compression techniques. Therefore, the data size of the artificial neural network model can be reduced.
- according to the above-described configuration, there is an effect that the data size can be reduced when the artificial neural network model compressed by the optimization module 1520 into data of a relatively smaller size is stored in the NPU memory system 120 .
- the re-learning algorithm is a technology capable of compensating for the inference accuracy that is lowered when the various algorithms of the optimization module 1520 are applied. For example, when techniques such as a quantization algorithm, a pruning algorithm, and a model compression algorithm are applied, the inference accuracy of the artificial neural network model may be reduced. In this case, the optimization system 1500 may retrain the artificial neural network model lightened by pruning, quantization, and/or model compression. In this case, there is an effect that the accuracy of the retrained artificial neural network model can be increased again. Accordingly, there is an effect of improving the performance of the portable artificial neural network apparatus 100 having limited hardware resources.
- the optimization system 1500 may be configured to include at least one or more training data sets and a corresponding evaluation data set to perform a re-learning algorithm.
- the transfer learning algorithm is a kind of re-learning algorithm and can be included in the re-learning algorithm. Transfer learning algorithms can also be configured to use Knowledge Distillation techniques.
- the knowledge distillation technology is a technology for learning a lightweight artificial neural network model to be applied to the edge device 1000 by using an artificial neural network model with a relatively larger data size that has been trained well as a reference model.
- for example, the previously trained artificial neural network model with a relatively larger data size may be an artificial neural network model composed of about 100 layers including input layers, hidden layers, and output layers, while the artificial neural network model to be applied to the edge device 1000 may be composed of a relatively smaller number of layers.
- a large artificial neural network model with a relatively large number of layers and a relatively large amount of weight data can implement a relatively high level of artificial intelligence, but in an environment with limited hardware resources such as the edge device 1000, it is difficult to smoothly process such a large amount of computation.
- the transfer learning algorithm may be configured to store an artificial neural network model for reference.
- the artificial neural network models to be referenced may be stored in the optimization system 1500 as reference models.
- the quantization-aware re-learning algorithm is a kind of re-learning algorithm and can be included in the re-learning algorithm.
- the quantization recognition learning algorithm is a technology for re-learning a quantized artificial neural network model using learning data. When the quantized artificial neural network model is retrained with training data once again, there is an effect that the inference accuracy of the artificial neural network model can be improved.
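- quantization-aware retraining is commonly implemented by inserting fake quantization into the forward pass and retraining against the original training data; a minimal numpy sketch of the fake quantization step (with a hypothetical min-max scale) is shown below:

```python
import numpy as np

def fake_quantize(x: np.ndarray, bits: int) -> np.ndarray:
    """Simulate low-bit quantization during retraining: quantize to the target
    number of bits and immediately dequantize, so the forward pass sees the
    quantization error while values stay in floating point for training."""
    levels = (1 << bits) - 1
    x_min, x_max = x.min(), x.max()
    scale = (x_max - x_min) / levels if x_max > x_min else 1.0
    q = np.round((x - x_min) / scale)
    return q * scale + x_min

w = np.array([0.11, -0.42, 0.73, 0.05])
print(fake_quantize(w, bits=4))  # a retraining loop would use these values in its forward pass
```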
- the artificial intelligence-based optimization algorithm is a method in which the artificial intelligence of the optimization system 1500 searches the structure of the artificial neural network model by reinforcement learning using the various algorithms of the optimization module 1520 to generate an optimally lightweight artificial neural network model, or performs the weight reduction process on its own to obtain an optimal weight reduction result, rather than relying only on a predefined weight reduction method such as a quantization algorithm, a pruning algorithm, a re-learning algorithm, or a model compression algorithm.
- the optimization system 1500 may be configured to selectively apply a plurality of optimization algorithms to the artificial neural network model to be optimized through the optimization module 1520 .
- the optimization system 1500 may be configured to provide a lightweight artificial neural network model by applying a pruning algorithm.
- the optimization system 1500 may be configured to provide a lightweight artificial neural network model by applying a pruning algorithm and then applying a quantization algorithm.
- the optimization system 1500 may be configured to provide a lightweight artificial neural network model by applying a pruning algorithm, then applying a quantization algorithm, and then applying a re-learning algorithm.
- the optimization system 1500 may be configured to provide a lightweight artificial neural network model by applying the pruning algorithm, then applying the quantization algorithm, then applying the re-learning algorithm, and then applying the model compression algorithm.
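- the sequential application of optimization algorithms described above could be sketched as a simple pipeline; every function name here is a hypothetical placeholder for the corresponding algorithm of the optimization module 1520:

```python
from typing import Callable, Iterable

def optimize_model(model, steps: Iterable[Callable], evaluate: Callable,
                   target_accuracy: float):
    """Apply pruning, quantization, retraining, and model compression in order,
    evaluating after each step; stop early if accuracy falls below the target."""
    for step in steps:
        candidate = step(model)
        if evaluate(candidate) < target_accuracy:
            break          # keep the last model that still met the target
        model = candidate
    return model

# Usage sketch (hypothetical step functions):
# pipeline = [apply_pruning, apply_quantization, retrain, compress]
# optimized = optimize_model(model, pipeline, evaluate_accuracy, target_accuracy=0.9)
```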
- the optimization system 1500 has the effect of optimizing the artificial neural network model so that it can operate in the neural network processing unit 100 applied to the edge device 1000, by sequentially applying at least one of the various optimization algorithms included in the optimization module 1520 .
- the artificial neural network model evaluation module 1530 is configured to receive the evaluation data set of the artificial neural network model to be verified. For example, if the neural network model is a car inference model, the evaluation data set may be image files of various cars. The artificial neural network model evaluation module 1530 is configured to input the evaluation data set into the artificial neural network model to determine inference accuracy. Depending on the estimated inference accuracy and/or the degree of weight reduction, the optimization system 1500 may repeat or terminate the optimization.
- the artificial neural network model update module 1540 is configured to update the optimized model and provide it to an external system connected to the optimization system 1500 . Accordingly, the edge device 1000 may receive an optimized artificial neural network model from the optimization system 1500 .
- FIG. 12 is a schematic conceptual diagram illustrating an edge device according to another example of the present disclosure.
- the edge device 2000 according to another example of the present disclosure is an example configured to provide various values to users by applying various examples of the present disclosure.
- the edge device 2000 is characterized in that it is configured to provide keyword recognition and gesture recognition through one neural network processing unit 2100 . That is, one neural network processing unit 2100 is configured to provide a plurality of inference functions. According to the above-described configuration, the edge device 2000 may perform a plurality of inference operations with one neural network processing unit, thereby reducing the number of parts and manufacturing costs of the edge device 2000 .
- edge device 1000 and the edge device 2000 according to another example of the present disclosure include a plurality of substantially similar configurations. Therefore, for the sake of convenience of description, the following redundant description may be omitted.
- the edge device 2000 may be implemented with various modifications.
- the neural network processing unit 2100 may be configured substantially the same as one of the above-described neural network processing unit 100, neural network processing unit 200, neural network processing unit 300, and neural network processing unit 400.
- the edge device 2000 may be, for example, a mobile phone, a smart phone, an artificial intelligence speaker, a digital broadcasting terminal, a navigation system, a smart refrigerator, a smart TV, an artificial intelligence CCTV, a tablet, a notebook computer, an autonomous vehicle, a personal digital assistant (PDA), or a personal multimedia player (PMP).
- the central processing unit 1080 may be configured to control the power control unit 1090 and the neural network processing unit 2100 .
- the power control unit 1090 may be configured to selectively supply or cut off power of each component of the edge device 2000 .
- examples of the present disclosure are not limited thereto.
- the edge device 2000 may be configured to include an input unit 1020 including at least a neural network processing unit 2100 , a camera 1021 , and a microphone 1022 .
- the camera 1021 and the microphone 1022 may also be referred to as an input unit 1020 configured to provide a plurality of sensed data.
- the input unit 1020 is not limited to the camera 1021 and the microphone 1022 , and may be composed of a combination of components of the above-described input unit 1020 .
- the edge device 2000 includes at least a microphone 1022 configured to sense acoustic data, a camera 1021 configured to sense image data, and a neural network processing unit 2100 configured to perform at least two different inference operations. can be configured.
- the neural network processing unit 2100 may be configured to drive the trained AI keyword recognition model to infer keywords based on acoustic data.
- the neural network processing unit 2100 may drive the AI gesture recognition model trained to infer a gesture based on the image data in response to the keyword inference result.
- the edge device 2000 may be in the first mode to reduce power consumption.
- the first mode may also be referred to as a sleep mode.
- the first mode may be set when there is no special input to the edge device 2000 for a predetermined time. If there is no input for a predetermined time, the central processing unit 1080 may instruct the power control unit 1090 to put the edge device 2000 in the first mode.
- the power control unit 1090 may be configured to supply power to the microphone 1022 and cut off power to the camera 1021 in the first mode. According to the above configuration, the edge device 2000 has an effect of blocking power consumption of the camera 1021 in the first mode. In addition, there is an effect of preventing misjudgment of a gesture due to an unintended operation of the user in the first mode.
- the power control unit 1090 may be configured to selectively cut off power of various components.
- the power control unit 1090 may be configured to cut off the power of the output unit.
- the power control unit 1090 may be configured to cut off the power of the wireless communication unit.
- the power control unit 1090 may provide various power cutoff policies according to the configuration of the edge device 2000 .
- the neural network processing unit 2100 may be configured to stop inference of the AI gesture recognition model in the first mode. Accordingly, the edge device 2000 has an effect of simultaneously reducing power consumption of the camera 1021 and the neural network processing unit 2100 .
- in the first mode, the neural network processing unit 2100 may be configured to receive acoustic data and drive the AI keyword recognition model to infer keywords capable of driving the AI gesture recognition model. That is, when a specific keyword is input into the microphone 1022 in the first mode, the AI gesture recognition model may be driven in response to the inference result of the AI keyword recognition model of the neural network processing unit 2100 .
- the AI keyword recognition model may be an artificial neural network trained to recognize only specific keywords.
- the specific keywords may be, for example, “Alexa”, “Hey Siri”, “Volume up”, “Volume down”, “Search”, “Turn on”, “Turn off”, “Internet”, “Music”, “Movie”, and the like.
- the keywords may be 1 to 100 frequently used keyword commands.
- the edge device 2000 may be switched from the first mode to the second mode according to the inference result.
- the central processing unit 1080 may receive the inference result of the AI keyword recognition model of the neural network processing unit 2100 and instruct the power control unit 1090 to set the edge device 2000 to the second mode.
- the power control unit 1090 may be configured to supply power to the camera 1021 in the second mode.
- the neural network processing unit 2100 may be configured to perform an inference operation of the AI gesture recognition model in the second mode.
- the AI gesture recognition model may be an artificial neural network trained to recognize only a specific gesture.
- the specific gestures may be specific hand gestures, body movements, facial expressions, and the like.
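- the first-mode / second-mode control flow described above could be summarized with the following sketch; the device objects and function names (`microphone`, `camera`, `keyword_model`, `gesture_model`, `power`, `on_result`) are hypothetical and only illustrate the sequence of events:

```python
def run_edge_device(microphone, camera, keyword_model, gesture_model, power,
                    on_result=print):
    """First mode: only the microphone and the keyword model are active.
    Second mode: the camera is powered on and the gesture model runs,
    triggered by the keyword inference result."""
    power.off(camera)                       # first mode: camera power is cut off
    while True:
        keyword = keyword_model.infer(microphone.read())
        if keyword is None:
            continue                        # no keyword: stay in the low-power first mode
        power.on(camera)                    # keyword detected: switch to the second mode
        gesture = gesture_model.infer(camera.read())
        on_result(keyword, gesture)         # application-level handling of the inference results
        power.off(camera)                   # return to the first mode
```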
- the neural network processing unit 2100 of the edge device 2000 may be configured as an independent neural network processing unit. That is, the edge device 2000 may be configured to perform inference operations independently with its own neural network processing unit. If the edge device instead received an artificial neural network inference service from a cloud-computing-based server over a wireless communication network, the camera 1021 and microphone 1022 data for inference would have to be stored in the main memory system 1070 and transmitted through the wireless communication unit.
- the main memory system 1070 is inefficient compared to the NPU memory system in terms of power consumption.
- therefore, the power consumption of the edge device 2000 can be reduced by the neural network processing unit 2100, which can operate independently.
- the acoustic data and the image data may include private data. If the edge device continuously transmits recordings of the user's conversations or images of the user's private life through the wireless communication unit, a privacy problem may occur.
- the edge device 2000 may perform an inference operation on the data of the input unit 1020 that may include privacy data using the neural network processing unit 2100 and then delete that data. That is, image data and sound data that may include privacy data may be deleted after the inference operation by the neural network processing unit 2100.
- the edge device 2000 may be configured to block transmission of data of the input unit 1020 that may include privacy data through the wireless communication unit.
- the edge device 2000 may be configured not to store data of the input unit 1020 that may include privacy data in the main memory system 1070 .
- the edge device 2000 may be configured to classify the data of the input unit 1020 that may include privacy data as privacy data.
- the edge device 2000 thus provides convenience to the user, reduces power consumption, and at the same time blocks the privacy data leakage problem.
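A minimal sketch of the on-device privacy handling described above follows. The helper names (infer_on_device, handle_sensor_data) and the buffer dictionary are hypothetical; the document only states that camera and microphone data that may contain privacy data is inferred locally and then deleted rather than stored in the main memory system or transmitted.

```python
from typing import Any, Dict

PRIVACY_SOURCES = {"camera", "microphone"}   # may capture conversations or private scenes

def infer_on_device(raw: bytes) -> Dict[str, Any]:
    # Stand-in for the on-device NPU inference call.
    return {"label": "keyword" if raw else "none"}

def handle_sensor_data(source: str, buffers: Dict[str, bytes]) -> Dict[str, Any]:
    result = infer_on_device(buffers[source])
    if source in PRIVACY_SOURCES:
        # Discard after inference: never written to main memory,
        # never handed to the wireless communication unit.
        del buffers[source]
    return result

buffers = {"microphone": b"pcm-audio-frame"}
print(handle_sensor_data("microphone", buffers), buffers)   # {'label': 'keyword'} {}
```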
- the neural network processing unit 2100 may be optimized to provide a multitasking function.
- the neural network processing unit 2100 may be configured to drive at least two artificial neural network models to provide at least two different inference operations.
- the neural network processing unit 2100 may be configured to drive another artificial neural network model according to the inference result of one artificial neural network model. That is, one artificial neural network model may operate at all times, while the other artificial neural network model may operate only under specific conditions.
- the edge device 2000 includes an input unit 1020 configured to provide a plurality of sensing data and a neural network processing unit 2100 configured to drive a plurality of artificial neural network models. A first artificial neural network model among the plurality of artificial neural network models is an artificial neural network model that always operates, and a second artificial neural network model among the plurality of artificial neural network models may be an artificial neural network model that operates only under specific conditions. In addition, whether the second artificial neural network model is driven may be controlled according to the inference result of the first artificial neural network model.
- the NPU scheduler of the neural network processing unit 2100 may be configured to determine a scheduling order based on structural data or artificial neural network data locality information of a plurality of artificial neural network models.
- processing elements of the neural network processing unit 2100 may be selectively assigned. For example, when the number of processing elements is 100, 30 processing elements may be allocated for the inference operation of the first artificial neural network model and 50 processing elements may be allocated for the inference operation of the second artificial neural network model. In this case, the remaining unallocated processing elements may be controlled not to operate.
- the NPU scheduler may determine a scheduling order based on the size of the node data of each layer of the plurality of artificial neural network models and of the weight data of each connection network, together with their structural data, and may allocate processing elements according to the scheduling order.
- accordingly, one neural network processing unit 2100 can infer a plurality of artificial neural network models in parallel at the same time.
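The selective assignment of processing elements can be pictured with the following sketch, reusing the illustrative counts from the example above (100 processing elements, of which 30 and 50 are granted to two models). The greedy allocation function is an assumption added here, not the scheduler's actual algorithm.

```python
TOTAL_PES = 100  # illustrative total number of processing elements

def allocate_processing_elements(models):
    """models: list of (name, requested_pes) in scheduling order."""
    allocation, used = {}, 0
    for name, requested in models:
        granted = min(requested, TOTAL_PES - used)  # grant what still fits
        allocation[name] = granted
        used += granted
    allocation["idle"] = TOTAL_PES - used  # unallocated PEs are controlled not to operate
    return allocation

print(allocate_processing_elements([("keyword_model", 30), ("gesture_model", 50)]))
# {'keyword_model': 30, 'gesture_model': 50, 'idle': 20}
```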
- the NPU scheduler can check the size of the node data of each layer and of the weight data of each connection network of the plurality of artificial neural network models by using the structural data of the artificial neural network models or the artificial neural network data locality information, and can calculate the amount of memory required for the inference operation of each scheduling step. Accordingly, the NPU scheduler may store the data required for each scheduling order within the available memory limit of the NPU memory system of the neural network processing unit 2100 capable of multitasking.
- the NPU scheduler may be configured to set the priority of data stored in the NPU memory system.
- the AI keyword recognition model may be configured to always operate in the neural network processing unit 2100 . Accordingly, the neural network processing unit 2100 may set the priority of data of the AI keyword recognition model to be high.
- the AI gesture recognition model may be configured to operate only under specific conditions in the neural network processing unit 2100 . Accordingly, the neural network processing unit 2100 may set the priority of data of the AI gesture recognition model to be low.
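One way to realize the priority handling described above is a capacity-limited store that evicts only lower-priority data, so that the always-operating keyword model stays resident while the conditionally driven gesture model is loaded on demand. The NpuMemoryCache class, its sizes, and its priority values are illustrative assumptions, not the scheduler's actual policy.

```python
import heapq

class NpuMemoryCache:
    def __init__(self, capacity: int) -> None:
        self.capacity = capacity
        self.used = 0
        self.entries = []  # min-heap of (priority, name, size); lowest priority evicted first

    def store(self, name: str, size: int, priority: int) -> bool:
        while self.used + size > self.capacity and self.entries:
            lowest_priority, victim, victim_size = self.entries[0]
            if lowest_priority >= priority:
                return False  # everything resident is at least as important; keep it
            heapq.heappop(self.entries)
            self.used -= victim_size
        if self.used + size > self.capacity:
            return False  # still does not fit (e.g. larger than the whole memory)
        heapq.heappush(self.entries, (priority, name, size))
        self.used += size
        return True

cache = NpuMemoryCache(capacity=100)
print(cache.store("keyword_model_weights", 60, priority=10))  # True: always-on model stays resident
print(cache.store("gesture_model_weights", 60, priority=1))   # False: low priority, streamed per use
print([name for _, name, _ in cache.entries])                 # ['keyword_model_weights']
```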
- a neural network processing unit may include a processing element array, an NPU memory system configured to store an artificial neural network model processed in the processing element array or to store at least some data of the artificial neural network model, and an NPU scheduler configured to control the processing element array and the NPU memory system based on the structural data of the artificial neural network model or the artificial neural network data locality information.
- the processing element array may include a plurality of processing elements configured to perform a MAC operation.
- the NPU scheduler may be further configured to control the read and write order of the processing element array and the NPU memory system.
- the NPU scheduler may be further configured to control the processing element array and the NPU memory system by analyzing the structural data of the artificial neural network model or the artificial neural network data locality information.
- the NPU scheduler may further store node data of each layer of the artificial neural network model, arrangement structure data of the layers, and weight data of the connection networks connecting the nodes of each layer.
- the NPU scheduler may be further configured to schedule the operation order of the artificial neural network model based on the structural data of the artificial neural network model or the artificial neural network data locality information.
- the NPU scheduler may be configured to schedule an operation order of a plurality of processing elements included in the processing element array based on the arrangement structure data of layers of the artificial neural network model among the structural data of the artificial neural network model.
- the NPU scheduler may be configured to access a memory address value in which node data of a layer of an artificial neural network model and weight data of a connection network are stored based on structural data of the artificial neural network model or locality information of artificial neural network data.
- the NPU scheduler may be configured to control the NPU memory system and the processing element array so that operations are performed in a set scheduling order.
- the NPU scheduler may be configured to schedule the processing order.
- the NPU scheduler of the neural network processing unit may be configured to schedule the processing order based on the structural data or artificial neural network data locality information from the input layer to the output layer of the artificial neural network of the artificial neural network model.
- the NPU scheduler may be configured to improve the memory reuse rate by controlling the NPU memory system by utilizing the scheduling sequence based on the structural data of the artificial neural network model or the artificial neural network data locality information.
- the NPU scheduler may be configured to reuse a memory address value in which the first operation value of the first scheduling is stored as a memory address value corresponding to the node data of the second layer of the second scheduling that is the next scheduling of the first scheduling.
- the NPU scheduler may be configured to reuse a value of a memory address in which an operation result is stored in a subsequent operation.
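The address reuse between consecutive scheduling steps can be sketched as a simple ping-pong scheme over two on-chip buffers, where the buffer written by scheduling step N is read back as the node-data input of step N+1 instead of being round-tripped through main memory. The buffer layout and the placeholder run_layer computation are assumptions added for illustration.

```python
buffers = {"A": [0.0] * 1024, "B": [0.0] * 1024}  # two on-chip regions of the NPU memory

def run_layer(layer_id: int, src: str, dst: str) -> None:
    # Placeholder compute: each layer reads from src and writes its result to dst.
    buffers[dst][0] = buffers[src][0] + layer_id

def run_network(num_layers: int) -> str:
    src, dst = "A", "B"
    for layer_id in range(1, num_layers + 1):
        run_layer(layer_id, src, dst)
        src, dst = dst, src  # the previous output address becomes the next input address
    return src  # buffer holding the final result

final = run_network(4)
print("final activations live in on-chip buffer", final, buffers[final][0])
```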
- the NPU memory system may include static memory.
- the NPU memory system may include at least one of SRAM, MRAM, STT-MRAM, eMRAM, and OST-MRAM.
- an edge device includes a central processing unit, a main memory system configured to store an artificial neural network model, a system bus for controlling communication between the central processing unit and the main memory system, and a neural network processing unit comprising a processing element array, an NPU memory system, an NPU scheduler configured to control the processing element array and the NPU memory system, and an NPU interface, wherein the NPU interface is configured to communicate with the central processing unit through the system bus, and the NPU interface may be configured to communicate data related to the artificial neural network model directly with the main memory system.
- edge devices include mobile phones, smart phones, artificial intelligence speakers, digital broadcasting terminals, navigation devices, wearable devices, smart watches, smart refrigerators, smart TVs, digital signage, VR devices, AR devices, artificial intelligence CCTVs, artificial intelligence robot cleaners, tablets, laptop computers, autonomous vehicles, autonomous drones, autonomous bipedal robots, autonomous quadrupedal robots, autonomous mobility devices, artificial intelligence robots, PDAs, and PMPs.
- the NPU memory system of the neural network processing unit may be configured such that the read/write speed of the inference operation of the artificial neural network model is relatively faster than that of the main memory system, and consumes relatively less power.
- the neural network processing unit may be configured to improve the memory reuse rate of the NPU memory system, based on the structural data of the artificial neural network model or the artificial neural network data locality information.
- the neural network processing unit may be configured to acquire data of at least one of a number of memories, a memory type, a data transfer rate, and a memory size of the main memory system.
- the neural network processing unit controls the reuse of data stored inside the NPU memory system based on the structural data of the artificial neural network model or the artificial neural network data locality information, and is configured not to issue a memory access request to the main memory system when data is reused.
- the NPU memory system does not include DRAM, and the NPU memory system may include a static memory configured to have relatively faster read and write speeds and relatively less power consumption than the main memory system.
- the NPU memory system may be configured such that scheduling is controlled by comparing the data size of the artificial neural network model to be loaded from the main memory system with the memory size of the NPU memory system.
- a neural network processing unit includes a processing element array, an NPU memory system configured to store an artificial neural network model processed in the processing element array or to store at least some data of the artificial neural network model, and an NPU scheduler configured to control the processing element array and the NPU memory system based on the structural data of the artificial neural network model or the artificial neural network data locality information, wherein the processing element array is configured to perform MAC operations and may be configured to quantize and output the MAC operation results.
- a first input of each processing element of the array of processing elements may be configured to receive a variable value, and a second input of each processing element of the array of processing elements may be configured to receive a constant value.
- the processing element may be configured to include a multiplier, an adder, an accumulator and a bit quantization unit.
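A toy model of such a processing element is sketched below: a multiply feeding an accumulating add, followed by a bit quantization step that saturates the accumulated value to a configured output bit width, with the first input carrying variable node data and the second input a reusable constant weight. The Python class and the 8-bit saturation range are illustrative assumptions that only mirror the multiplier/adder/accumulator/bit-quantization-unit structure named above.

```python
class ProcessingElement:
    def __init__(self, out_bits: int = 8) -> None:
        self.acc = 0
        self.out_bits = out_bits

    def mac(self, node_value: int, weight_value: int) -> None:
        # Multiplier + adder + accumulator in one step.
        self.acc += node_value * weight_value

    def quantized_output(self) -> int:
        # Bit quantization unit: saturate the accumulator to the configured bit width.
        lo, hi = -(1 << (self.out_bits - 1)), (1 << (self.out_bits - 1)) - 1
        return max(lo, min(hi, self.acc))

pe = ProcessingElement(out_bits=8)
weights = [3, -2, 5]                         # second input: constant, reusable values
for node, w in zip([10, 7, -4], weights):    # first input: variable node data
    pe.mac(node, w)
print(pe.quantized_output())                 # -4
```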
- the NPU scheduler may be configured to recognize reusable variable values and reusable constant values based on the structural data of the artificial neural network model or the artificial neural network data locality information, and to control the NPU memory system so that memory is reused by means of those reusable variable values and reusable constant values.
- the number of bits of an operation value of the processing element array may be reduced in consideration of the MAC operation characteristics and the power consumption characteristics of the processing element array.
- the NPU memory system may be a low-power memory system configured to reuse a specific memory address in which weight data is stored in consideration of the data size and operation step of the artificial neural network model.
- the NPU scheduler stores the MAC operation value of the artificial neural network model according to the scheduling order at a specific memory address of the NPU memory system, and the specific memory address at which the MAC operation value is stored may serve as the input data of the MAC operation of the next scheduling order.
- the NPU memory system may be configured to preserve the weight data of the connection networks stored in the NPU memory system while the inference operation continues.
- the number of updates of the memory address in which the input data of the first input of each processing element of the processing element array is stored may be greater than the number of updates of the memory address in which the input data of the second input is stored.
- the NPU memory system may be configured to reuse the MAC operation value stored in the NPU memory system while the inference operation continues.
- a neural network processing unit includes a processing element array including a plurality of processing elements and a plurality of register files, an NPU memory system configured to store an artificial neural network model processed in the processing element array or at least some data of the artificial neural network model, and an NPU scheduler configured to control the processing element array and the NPU memory system based on the structural data of the artificial neural network model or the artificial neural network data locality information.
- the memory size of each of the plurality of register files is relatively smaller than the memory size of the NPU memory system, and the maximum transfer rate of each of the plurality of register files may be relatively faster than the maximum transfer rate of the NPU memory system.
- each of the plurality of register files may be configured with a memory size that provides a relatively faster maximum transfer rate than the memory size of the NPU memory system.
- the memory size of each of the plurality of register files is relatively smaller than the memory size of the NPU memory system, and the power consumption of each of the plurality of register files at the same transfer rate may be relatively less than the power consumption of the NPU memory system at the same transfer rate.
- each of the plurality of register files may be configured with a memory size that is relatively smaller than the memory size of the NPU memory system for the same power consumption at the same transfer rate.
- each of the plurality of register files may be configured as a memory having a relatively faster maximum transfer rate than the NPU memory system and relatively smaller power consumption at the same transfer rate.
- the NPU memory system further includes a first memory, a second memory, and a third memory having a hierarchical structure, and the NPU scheduler may control the first memory, the second memory, and the third memory based on this hierarchical structure and on the NPU structure data or the artificial neural network data locality information of the artificial neural network model running in the neural network processing unit, so as to improve the memory reuse rate of the NPU memory system.
- the first memory may be configured to communicate with the second memory and the third memory
- the second memory and the third memory may be configured to communicate with the plurality of processing elements and the plurality of register files.
- the NPU memory system may be configured to have a plurality of memory hierarchical structures optimized for memory reuse.
- the NPU scheduler may be configured to determine the size of data for each scheduling order, and to sequentially store data for each scheduling order within the available limit of the first memory.
- the NPU scheduler may be configured to selectively store some of the data stored in the first memory in one of the second memory and the third memory by comparing the memory reuse rate.
- a memory reuse rate of data stored in the second memory may be higher than a memory reuse rate of data stored in the third memory.
- the NPU scheduler may be configured to delete the duplicate data in the first memory when the corresponding data is stored in the second memory.
- Data stored in the third memory may be configured to have reusable variable characteristics.
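The placement of data across the first, second, and third memories according to reuse rate might look like the following sketch. The reuse-rate thresholds (0.5 and 0.1) and the place_in_hierarchy helper are assumptions; the document only states that higher-reuse data is kept in the second memory, that reusable variable data goes to the third memory, and that duplicates in the first memory can be deleted.

```python
def place_in_hierarchy(items):
    """items: list of (name, reuse_rate) with reuse_rate in [0, 1]."""
    first, second, third = [], [], []
    for name, reuse in sorted(items, key=lambda x: -x[1]):
        first.append(name)              # staged in the first memory on load
        if reuse >= 0.5:
            second.append(name)         # high reuse: keep close to the processing elements
            first.remove(name)          # delete the duplicate copy from the first memory
        elif reuse >= 0.1:
            third.append(name)          # moderate reuse: reusable variable data
            first.remove(name)
    return {"first": first, "second": second, "third": third}

print(place_in_hierarchy([("weights_L1", 0.9), ("feature_map_L1", 0.3), ("bias_L9", 0.05)]))
# {'first': ['bias_L9'], 'second': ['weights_L1'], 'third': ['feature_map_L1']}
```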
- An edge device includes a microphone configured to sense acoustic data, a camera configured to sense image data, and a neural network processing unit configured to perform at least two different inference operations, wherein the neural network processing unit is configured to drive the trained AI keyword recognition model to infer a keyword based on the acoustic data, and the neural network processing unit may be configured to drive the trained AI gesture recognition model to infer a gesture based on the image data in response to the keyword inference result.
- the edge device further includes a central processing unit and a power control unit, the central processing unit instructs the device to enter the first mode when there is no input for a predetermined time, and the power control unit may be configured to supply power to the microphone and cut off power to the camera in the first mode.
- the neural network processing unit may be configured to stop the inference operation of the AI gesture recognition model in the first mode.
- the central processing unit may receive the inference result of the AI keyword recognition model of the neural network processing unit and instruct it to enter the second mode, and the power control unit may be configured to supply power to the camera in the second mode.
- the neural network processing unit may be configured to perform an inference operation of the AI gesture recognition model in the second mode.
- the neural network processing unit may be a standalone neural network processing unit.
- Image data and sound data including privacy data may be configured to be deleted after an inference operation by the neural network processing unit.
- the AI gesture recognition model may be configured to be driven in response to an inference result of the AI keyword recognition model of the neural network processing unit.
- An edge device includes an input unit configured to provide a plurality of sensing data and a neural network processing unit configured to drive a plurality of artificial neural network models, wherein a first artificial neural network model among the plurality of artificial neural network models is an artificial neural network model that always operates, and a second artificial neural network model among the plurality of artificial neural network models may be an artificial neural network model configured to operate only under preset conditions.
- whether the second artificial neural network model is driven may be controlled according to the inference result of the first artificial neural network model.
- the neural network processing unit may further include an NPU scheduler, and the NPU scheduler may be configured to determine a scheduling order based on structural data of a plurality of artificial neural network models or artificial neural network data locality information.
- the neural network processing unit further includes a plurality of processing elements, and the NPU scheduler may determine a scheduling order based on the node data of each layer of the plurality of artificial neural network models, the data size of the weights of each connection network, and the structural data or artificial neural network data locality information of the plurality of artificial neural network models, and may allocate processing elements according to the determined scheduling order.
- the neural network processing unit may further include an NPU memory system, and the NPU scheduler may be configured to set the priority of data stored in the NPU memory system.
- a neural network processing unit includes a processing element array, an NPU memory system configured to store an artificial neural network model processed in the processing element array or to store at least some data of the artificial neural network model, an NPU scheduler configured to control the processing element array and the NPU memory system, and an NPU batch mode configured to infer a plurality of different input data by utilizing one artificial neural network model.
- a plurality of different input data may be a plurality of image data.
- the NPU batch mode may be configured to increase the number of image frames processed per operation by combining a plurality of image data.
- the NPU batch mode may be configured to recycle the weight data of the artificial neural network model to perform inference operations on a plurality of different input data.
- the NPU batch mode may be configured to convert a plurality of input data into one continuous data.
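The weight-recycling behavior of the batch mode can be illustrated with a small NumPy sketch in which one weight matrix, loaded once, is applied to several input frames concatenated into a single batch. The matrix shapes and the single matmul standing in for the full model are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((64, 10))   # loaded into the NPU memory system once

def infer_batch(frames: np.ndarray) -> np.ndarray:
    # frames: (batch, 64). Combining frames into one batch lets the weight data
    # be recycled across every input in a single pass instead of refetched per frame.
    return frames @ weights

frames = rng.standard_normal((4, 64))     # e.g. four camera inputs combined into one batch
print(infer_batch(frames).shape)          # (4, 10)
```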
- a neural network processing unit includes at least one processing element, an NPU memory system that can store an artificial neural network model inferable by the at least one processing element or at least some data of the artificial neural network model, and an NPU scheduler configured to control the at least one processing element and the NPU memory system based on the structural data of the artificial neural network model or the artificial neural network data locality information.
- the NPU scheduler may be configured to further receive structural data of the neural network processing unit or artificial neural network data locality information.
- the structure data of the neural network processing unit may include at least one of a memory size of the NPU memory system, a hierarchical structure of the NPU memory system, data on the number of at least one processing element, and an operator structure of the at least one processing element.
- the neural network processing unit includes an artificial neural network model trained to perform an inference function, a processing element array configured to infer input data by utilizing the artificial neural network model, an NPU memory system configured to communicate with the processing element array, and an NPU scheduler configured to control the processing element array and the NPU memory system, and the artificial neural network model may be optimized by at least one artificial-intelligence-based optimization algorithm in consideration of the memory size of the NPU memory system.
- the artificial neural network model may be optimized for the neural network processing unit by an optimization system configured to communicate with the neural network processing unit.
- the artificial neural network model may be optimized based on at least one of the structural data of the artificial neural network model or the artificial neural network data locality information and the structural data of the neural network processing unit.
- a quantization algorithm may be applied to the artificial neural network model.
- a pruning algorithm may be applied, then a quantization algorithm may be applied, and then a retraining algorithm may be applied.
- alternatively, a quantization algorithm may be applied, then a retraining algorithm may be applied, and then a model compression algorithm may be applied.
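The alternative optimization orderings listed above amount to different pipelines over the same set of stages. The sketch below uses stub functions for pruning, quantization, retraining, and model compression purely to show the ordering; none of the stubs reflects the actual algorithms.

```python
def prune(model):        return model + ["pruned"]
def quantize(model):     return model + ["quantized"]
def retrain(model):      return model + ["retrained"]
def compress(model):     return model + ["compressed"]

PIPELINES = {
    "prune_quant_retrain":    [prune, quantize, retrain],
    "quant_retrain_compress": [quantize, retrain, compress],
}

def optimize(model, pipeline_name: str):
    model = list(model)
    for stage in PIPELINES[pipeline_name]:   # stages run in the listed order
        model = stage(model)
    return model

print(optimize(["base_model"], "quant_retrain_compress"))
# ['base_model', 'quantized', 'retrained', 'compressed']
```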
- the artificial neural network model may include a plurality of layers, each of the plurality of layers may include weight data, and each weight data may be pruned.
- At least one weight data having a relatively larger data size among the weight data may be pruned preferentially.
- At least one weight data requiring a relatively larger amount of computation may be pruned preferentially.
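A plausible reading of this pruning priority is sketched below: layers are visited in decreasing order of weight-data size, and within each layer the smallest-magnitude weights are zeroed. The layer contents, the 30% pruning ratio, and the magnitude criterion are assumptions added for illustration, not values taken from this document.

```python
layers = [
    {"name": "fc1", "weights": [0.9, -0.02, 0.5, 0.01, -0.7, 0.03]},
    {"name": "fc2", "weights": [0.2, -0.1]},
]

def prune_smallest(weights, ratio=0.3):
    # Zero out the smallest-magnitude fraction of weights in one layer.
    k = int(len(weights) * ratio)
    keep_threshold = sorted(abs(w) for w in weights)[k] if k else 0.0
    return [0.0 if abs(w) < keep_threshold else w for w in weights]

# Larger weight data is pruned preferentially: visit layers by weight count, descending.
for layer in sorted(layers, key=lambda l: len(l["weights"]), reverse=True):
    layer["weights"] = prune_smallest(layer["weights"])
print(layers)
```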
- the artificial neural network model includes a plurality of layers, each of the plurality of layers includes node data and weight data, and the weight data may be quantized, respectively.
- Node data may be quantized.
- Node data and weight data of at least one layer may be quantized, respectively.
- At least one or more weight data having a relatively larger size among weight data may be preferentially quantized.
- At least one or more node data having a relatively larger size among node data may be preferentially quantized.
- the array of processing elements may include at least one processing element, wherein the at least one processing element may be configured to compute node data and weight data, each having a quantized number of bits.
- the processing element array may further include a bit quantization unit, and the number of bits of output data of the processing element array may be configured to be quantized by the bit quantization unit.
- the artificial neural network model may be a quantized artificial neural network model, and the NPU memory system may be configured to store the data of the artificial neural network model in correspondence with the number of bits of the plurality of quantized weight data and the number of bits of the plurality of quantized node data of the artificial neural network model.
- the artificial neural network model may be a quantized artificial neural network model, and the processing element array may be configured to receive, from the NPU memory system, the plurality of quantized weight data and the plurality of quantized node data in correspondence with the number of bits of the plurality of quantized weight data and the number of bits of the plurality of quantized node data of the artificial neural network model.
- the neural network processing unit includes an artificial neural network model, a plurality of processing elements configured to process the artificial neural network model, an NPU memory system configured to supply data of the artificial neural network model to the processing element array, and an NPU scheduler configured to control the processing element array and the NPU memory system, and the artificial neural network model may be quantized by at least one grouping policy.
- the at least one grouping policy may use at least one of an operation order of the artificial neural network model, a computational amount size of the artificial neural network model, and a memory usage size of the artificial neural network model as a criterion for determining the at least one grouping policy.
- the at least one grouping policy is a plurality of grouping policies, and in each of the plurality of grouping policies, an order of the grouping policies may be determined by a respective weight value.
- the artificial neural network model may be quantized in the order of data groups arranged according to the at least one grouping policy.
- a quantization-aware training algorithm may be applied to the artificial neural network model.
- when the artificial neural network model is quantized by the grouping policy, it may be quantized with reference to the inference accuracy of the artificial neural network model including the data group being quantized.
- the number of bits of the artificial neural network model including the quantized data group may be reduced.
- when the evaluated inference accuracy of the artificial neural network model including the quantized data group falls below the preset target inference accuracy, the quantization may be terminated by restoring the data group being quantized to the last state at which the target inference accuracy was satisfied.
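The accuracy-guarded, group-wise quantization loop implied by the last few items can be sketched as follows. The evaluate_accuracy function is a hypothetical stand-in for evaluating the partially quantized model on a validation set, and the 8-bit starting point, 2-bit floor, and 0.90 target are illustrative values only.

```python
def evaluate_accuracy(bit_widths):
    # Toy proxy: accuracy degrades as the total bit budget shrinks.
    return 0.80 + 0.005 * sum(bit_widths.values())

def quantize_groups(groups, target_accuracy=0.90, min_bits=2):
    bit_widths = {g: 8 for g in groups}          # start every group at 8 bits
    for group in groups:                         # groups visited in grouping-policy order
        while bit_widths[group] > min_bits:
            bit_widths[group] -= 1
            if evaluate_accuracy(bit_widths) < target_accuracy:
                bit_widths[group] += 1           # restore the last setting that met the target
                break
    return bit_widths

print(quantize_groups(["conv1_weights", "conv2_weights", "fc_weights"]))
# {'conv1_weights': 4, 'conv2_weights': 8, 'fc_weights': 8}
```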
- the neural network processing unit is further connected to an edge device, and the artificial neural network model may be quantized according to a grouping policy order determined based on data of at least one of the computational processing power of the edge device, the computational processing power of the neural network processing unit, the available memory bandwidth of the edge device, the available memory bandwidth of the neural network processing unit, the maximum memory bandwidth of the edge device, the maximum memory bandwidth of the neural network processing unit, the maximum memory latency of the edge device, and the maximum memory latency of the neural network processing unit.
- the neural network processing unit includes an artificial neural network model, a plurality of processing elements configured to process the artificial neural network model, an NPU memory system configured to supply data of the artificial neural network model to the processing element array, and an NPU scheduler configured to control the processing element array and the NPU memory system, and node data of at least one layer or weight data of at least one layer of the artificial neural network model may be quantized based on the memory size of the NPU memory system.
- the NPU memory system may be configured to store the node data of at least one layer or the weight data of at least one layer of the artificial neural network model with their quantized numbers of bits.
- each input unit of the plurality of processing elements may be configured to operate in correspondence with information on the number of bits of the quantized input data.
- the edge device includes a main memory system and a neural network processing unit configured to communicate with the main memory system, the neural network processing unit comprising a processing element array, an NPU memory system, and an NPU scheduler configured to control the processing element array and the NPU memory system, and the quantized node data, quantized weight data, quantized weight kernels, and/or quantized feature maps of the artificial neural network model stored in the main memory system and the NPU memory system may be quantized with reference to the memory size of the main memory system and the memory size of the NPU memory system.
- Neural Network Processing Units 100, 200, 300, 400, 2100
- NPU memory system 120
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Mathematical Physics (AREA)
- Data Mining & Analysis (AREA)
- Molecular Biology (AREA)
- Computer Hardware Design (AREA)
- Human Computer Interaction (AREA)
- Neurology (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Social Psychology (AREA)
- Psychiatry (AREA)
- Medical Informatics (AREA)
- Databases & Information Systems (AREA)
- Image Analysis (AREA)
- Computational Mathematics (AREA)
- Pure & Applied Mathematics (AREA)
- Mathematical Optimization (AREA)
- Mathematical Analysis (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
Claims (20)
- A neural network processing unit comprising: a processing element array; an NPU memory system configured to store at least some data of an artificial neural network model processed in the processing element array; and an NPU scheduler configured to control the processing element array and the NPU memory system based on structural data of the artificial neural network model or artificial neural network data locality information.
- The neural network processing unit of claim 1, wherein the processing element array comprises a plurality of processing elements configured to perform a MAC operation.
- The neural network processing unit of claim 1, wherein the NPU scheduler is further configured to control a read and write order of the processing element array and the NPU memory system.
- The neural network processing unit of claim 1, wherein the NPU scheduler is further configured to analyze the structural data of the artificial neural network model or the artificial neural network data locality information to control the processing element array and the NPU memory system.
- The neural network processing unit of claim 1, wherein the NPU scheduler is further configured to schedule an operation order of the artificial neural network model based on the structural data of the artificial neural network model or the artificial neural network data locality information.
- The neural network processing unit of claim 1, wherein the NPU scheduler is configured to access memory address values in which node data of a layer of the artificial neural network model and weight data of a connection network are stored, based on the structural data of the artificial neural network model or the artificial neural network data locality information.
- The neural network processing unit of claim 1, wherein the NPU scheduler is configured to schedule a processing order based on structural data or artificial neural network data locality information from an input layer to an output layer of the artificial neural network of the artificial neural network model.
- The neural network processing unit of claim 1, wherein the NPU scheduler is configured to control the NPU memory system by utilizing a scheduling order based on the structural data of the artificial neural network model or the artificial neural network data locality information so as to improve a memory reuse rate.
- The neural network processing unit of claim 1, wherein the NPU scheduler is configured to reuse a memory address value in which a first operation value of a first scheduling is stored as a memory address value corresponding to node data of a second layer of a second scheduling that is the next scheduling of the first scheduling.
- The neural network processing unit of claim 1, wherein the NPU scheduler is configured to reuse a value of a memory address in which an operation result is stored in a subsequent operation.
- The neural network processing unit of claim 1, wherein the NPU memory system comprises a static memory.
- The neural network processing unit of claim 11, wherein the NPU memory system comprises at least one of SRAM, MRAM, STT-MRAM, eMRAM, HBM, and OST-MRAM.
- An edge device comprising: a central processing unit; a main memory system configured to store an artificial neural network model; a system bus configured to control communication between the central processing unit and the main memory system; and a neural network processing unit comprising a processing element array, an NPU memory system, an NPU scheduler configured to control the processing element array and the NPU memory system, and an NPU interface, wherein the NPU interface is configured to communicate with the central processing unit through the system bus, and the NPU interface is configured to communicate data related to the artificial neural network model directly with the main memory system.
- The edge device of claim 13, wherein the edge device is one of a mobile phone, a smart phone, an artificial intelligence speaker, a digital broadcasting terminal, a navigation device, a wearable device, a smart watch, a smart refrigerator, a smart TV, a digital signage, a VR device, an AR device, an artificial intelligence CCTV, an artificial intelligence robot cleaner, a tablet, a laptop computer, an autonomous vehicle, an autonomous drone, an autonomous bipedal robot, an autonomous quadrupedal robot, an autonomous mobility device, an artificial intelligence robot, a PDA, and a PMP.
- The edge device of claim 13, wherein the NPU memory system of the neural network processing unit is configured such that the read/write speed for an inference operation of the artificial neural network model is relatively faster than that of the main memory system and such that it consumes relatively less power.
- The edge device of claim 13, wherein the neural network processing unit is configured to improve a memory reuse rate of the NPU memory system based on the structural data of the artificial neural network model or the artificial neural network data locality information.
- The edge device of claim 13, wherein the neural network processing unit is configured to acquire data on at least one of a number of memories, a memory type, a data transfer rate, and a memory size of the main memory system.
- A neural network processing unit comprising: a processing element array; an NPU memory system configured to store an artificial neural network model processed in the processing element array; and an NPU scheduler configured to control the processing element array and the NPU memory system based on structural data of the artificial neural network model or artificial neural network data locality information, wherein the processing element array is configured to perform a MAC operation, and the processing element array is configured to quantize and output a MAC operation result.
- The neural network processing unit of claim 18, wherein the processing element is configured to include a multiplier, an adder, an accumulator, and a bit quantization unit.
- The neural network processing unit of claim 18, wherein the NPU scheduler is further configured to recognize a reusable variable value and a reusable constant value based on the structural data of the artificial neural network model or the artificial neural network data locality information, and to control the NPU memory system to reuse memory using the reusable variable value and the reusable constant value.
Priority Applications (10)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020237022536A KR102647686B1 (ko) | 2020-08-21 | 2020-12-31 | Neural network processing unit configured to drive a quantized artificial neural network model |
KR1020237022541A KR20230106733A (ko) | 2020-08-21 | 2020-12-31 | Electronic device including a neural network processing unit |
US17/431,152 US11977916B2 (en) | 2020-08-21 | 2020-12-31 | Neural processing unit |
KR1020237022529A KR102647690B1 (ko) | 2020-08-21 | 2020-12-31 | Neural network processing unit configured to drive an optimized artificial neural network model |
KR1020227004135A KR102530548B1 (ko) | 2020-08-21 | 2020-12-31 | Neural network processing unit |
KR1020247008700A KR20240038165A (ko) | 2020-08-21 | 2020-12-31 | Electronic device performing an inference operation |
CN202080027203.3A CN114402336A (zh) | 2020-08-21 | 2020-12-31 | 神经处理单元 |
KR1020247010033A KR20240042266A (ko) | 2020-08-21 | 2020-12-31 | Electronic device performing an inference operation |
KR1020237015212A KR20230070515A (ko) | 2020-08-21 | 2020-12-31 | Neural network processing unit |
KR1020237022542A KR102649071B1 (ko) | 2020-08-21 | 2020-12-31 | Neural network processing unit configured to drive a pruned artificial neural network model |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2020-0105509 | 2020-08-21 | ||
KR20200105509 | 2020-08-21 | ||
KR10-2020-0107324 | 2020-08-25 | ||
KR20200107324 | 2020-08-25 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022039334A1 true WO2022039334A1 (ko) | 2022-02-24 |
Family
ID=80322962
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2020/019488 WO2022039334A1 (ko) | 2020-08-21 | 2020-12-31 | 신경망 프로세싱 유닛 |
Country Status (4)
Country | Link |
---|---|
US (1) | US11977916B2 (ko) |
KR (8) | KR102647690B1 (ko) |
CN (1) | CN114402336A (ko) |
WO (1) | WO2022039334A1 (ko) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220121927A1 (en) * | 2020-10-21 | 2022-04-21 | Arm Limited | Providing neural networks |
US11886973B2 (en) | 2022-05-30 | 2024-01-30 | Deepx Co., Ltd. | Neural processing unit including variable internal memory |
US20240037150A1 (en) * | 2022-08-01 | 2024-02-01 | Qualcomm Incorporated | Scheduling optimization in sequence space |
KR20240032707A (ko) * | 2022-08-29 | 2024-03-12 | 주식회사 딥엑스 | 인공신경망의 분산 연산 시스템 및 방법 |
WO2024076163A1 (ko) * | 2022-10-06 | 2024-04-11 | 오픈엣지테크놀로지 주식회사 | 신경망 연산방법과 이를 위한 npu 및 컴퓨팅 장치 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9525636B2 (en) * | 2014-10-20 | 2016-12-20 | Telefonaktiebolaget L M Ericsson (Publ) | QoS on a virtual interface over multi-path transport |
US10019668B1 (en) * | 2017-05-19 | 2018-07-10 | Google Llc | Scheduling neural network processing |
US20190266015A1 (en) * | 2018-02-27 | 2019-08-29 | Microsoft Technology Licensing, Llc | Deep neural network workload scheduling |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20110037184A (ko) * | 2009-10-06 | 2011-04-13 | 한국과학기술원 | 뉴로-퍼지 시스템과 병렬처리 프로세서가 결합된, 파이프라이닝 컴퓨터 시스템, 이를 이용하여 영상에서 물체를 인식하는 방법 및 장치 |
US10761849B2 (en) * | 2016-09-22 | 2020-09-01 | Intel Corporation | Processors, methods, systems, and instruction conversion modules for instructions with compact instruction encodings due to use of context of a prior instruction |
US11531877B2 (en) * | 2017-11-10 | 2022-12-20 | University of Pittsburgh—of the Commonwealth System of Higher Education | System and method of deploying an artificial neural network on a target device |
CN111542808B (zh) * | 2017-12-26 | 2024-03-22 | 三星电子株式会社 | 预测电子设备上运行应用的线程的最优数量的方法和系统 |
EP3731089B1 (en) | 2017-12-28 | 2023-10-04 | Cambricon Technologies Corporation Limited | Scheduling method and related apparatus |
US20190332924A1 (en) * | 2018-04-27 | 2019-10-31 | International Business Machines Corporation | Central scheduler and instruction dispatcher for a neural inference processor |
KR102135632B1 (ko) | 2018-09-28 | 2020-07-21 | 포항공과대학교 산학협력단 | 뉴럴 프로세싱 장치 및 그것의 동작 방법 |
KR20200057814A (ko) * | 2018-11-13 | 2020-05-27 | 삼성전자주식회사 | 뉴럴 네트워크를 이용한 데이터 처리 방법 및 이를 지원하는 전자 장치 |
US11275558B2 (en) * | 2018-11-30 | 2022-03-15 | Advanced Micro Devices, Inc. | Sorting instances of input data for processing through a neural network |
KR20200075185A (ko) * | 2018-12-17 | 2020-06-26 | 삼성전자주식회사 | 뉴럴 프로세싱 시스템 및 그것의 동작 방법 |
KR20200095300A (ko) * | 2019-01-31 | 2020-08-10 | 삼성전자주식회사 | 뉴럴 네트워크의 컨볼루션 연산을 처리하는 방법 및 장치 |
US11281496B2 (en) * | 2019-03-15 | 2022-03-22 | Intel Corporation | Thread group scheduling for graphics processing |
US11782755B2 (en) | 2019-12-02 | 2023-10-10 | Intel Corporation | Methods, systems, articles of manufacture, and apparatus to optimize thread scheduling |
-
2020
- 2020-12-31 KR KR1020237022529A patent/KR102647690B1/ko active IP Right Grant
- 2020-12-31 KR KR1020247010033A patent/KR20240042266A/ko unknown
- 2020-12-31 KR KR1020227004135A patent/KR102530548B1/ko active IP Right Grant
- 2020-12-31 KR KR1020237022542A patent/KR102649071B1/ko active IP Right Grant
- 2020-12-31 KR KR1020237022541A patent/KR20230106733A/ko not_active Application Discontinuation
- 2020-12-31 KR KR1020237015212A patent/KR20230070515A/ko active Application Filing
- 2020-12-31 KR KR1020247008700A patent/KR20240038165A/ko active Application Filing
- 2020-12-31 KR KR1020237022536A patent/KR102647686B1/ko active IP Right Grant
- 2020-12-31 CN CN202080027203.3A patent/CN114402336A/zh active Pending
- 2020-12-31 WO PCT/KR2020/019488 patent/WO2022039334A1/ko active Application Filing
- 2020-12-31 US US17/431,152 patent/US11977916B2/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9525636B2 (en) * | 2014-10-20 | 2016-12-20 | Telefonaktiebolaget L M Ericsson (Publ) | QoS on a virtual interface over multi-path transport |
US10019668B1 (en) * | 2017-05-19 | 2018-07-10 | Google Llc | Scheduling neural network processing |
US20190266015A1 (en) * | 2018-02-27 | 2019-08-29 | Microsoft Technology Licensing, Llc | Deep neural network workload scheduling |
Non-Patent Citations (2)
Title |
---|
SHUNZHI YANG; ZHENG GONG; KAI YE; YUNGEN WEI; ZHENG HUANG; ZHENHUA HUANG: "EdgeCNN: Convolutional Neural Network Classification Model with small inputs for Edge Computing", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 30 September 2019 (2019-09-30), 201 Olin Library Cornell University Ithaca, NY 14853 , XP081485076 * |
ZEBIN TAHMINA; SCULLY PATRICIA J.; PEEK NIELS; CASSON ALEXANDER J.; OZANYAN KRIKOR B.: "Design and Implementation of a Convolutional Neural Network on an Edge Computing Smartphone for Human Activity Recognition", IEEE ACCESS, IEEE, USA, vol. 7, 1 January 1900 (1900-01-01), USA , pages 133509 - 133520, XP011747558, DOI: 10.1109/ACCESS.2019.2941836 * |
Also Published As
Publication number | Publication date |
---|---|
US11977916B2 (en) | 2024-05-07 |
CN114402336A (zh) | 2022-04-26 |
KR20220025143A (ko) | 2022-03-03 |
KR20240042266A (ko) | 2024-04-01 |
KR20230106731A (ko) | 2023-07-13 |
KR20230106733A (ko) | 2023-07-13 |
KR102647690B1 (ko) | 2024-03-14 |
US20230168921A1 (en) | 2023-06-01 |
KR102530548B1 (ko) | 2023-05-12 |
KR20240038165A (ko) | 2024-03-22 |
KR20230106732A (ko) | 2023-07-13 |
KR102647686B1 (ko) | 2024-03-14 |
KR20230070515A (ko) | 2023-05-23 |
KR102649071B1 (ko) | 2024-03-19 |
KR20230106734A (ko) | 2023-07-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
- WO2022039334A1 (ko) | Neural network processing unit | |
WO2017082543A1 (en) | Electronic device and method for controlling the same | |
- WO2018088794A2 (ko) | Method by which device corrects image, and device therefor | |
WO2020138624A1 (en) | Apparatus for noise canceling and method for the same | |
WO2018182202A1 (en) | Electronic device and method of executing function of electronic device | |
WO2020091210A1 (en) | System and method of integrating databases based on knowledge graph | |
- WO2020246634A1 (ko) | Artificial intelligence device capable of controlling operation of another device, and operating method thereof | |
- WO2020213750A1 (ko) | Artificial intelligence apparatus for recognizing object, and method therefor | |
WO2019225961A1 (en) | Electronic device for outputting response to speech input by using application and operation method thereof | |
WO2017164708A1 (en) | Electronic device and method of providing information in electronic device | |
- WO2020235696A1 (ko) | Artificial intelligence device for converting text and speech into one another in consideration of style, and method therefor | |
- WO2018199483A1 (ko) | Method and apparatus for managing intelligent agent | |
- WO2020213758A1 (ko) | Artificial intelligence device that interacts by speech, and method therefor | |
- WO2020184748A1 (ko) | Artificial intelligence device for controlling an auto-stop system on the basis of traffic information, and method therefor | |
- WO2018199379A1 (ko) | Artificial intelligence device | |
- WO2019135621A1 (ko) | Video playback device and control method thereof | |
- WO2021006404A1 (ko) | Artificial intelligence server | |
- WO2020209693A1 (ko) | Electronic device for updating artificial intelligence model, server, and operation method therefor | |
- WO2021029457A1 (ko) | Artificial intelligence server for providing information to user, and method therefor | |
EP3603040A1 (en) | Electronic device and method of executing function of electronic device | |
WO2022154457A1 (en) | Action localization method, device, electronic equipment, and computer-readable storage medium | |
- WO2022075668A1 (ko) | System for distributed processing of artificial intelligence model, and operation method thereof | |
WO2016013892A1 (en) | Device and method for processing image | |
- WO2020218635A1 (ko) | Speech synthesis device using artificial intelligence, operation method of speech synthesis device, and computer-readable recording medium | |
WO2021162481A1 (en) | Electronic device and control method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 20227004135 Country of ref document: KR Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20950414 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 310723) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20950414 Country of ref document: EP Kind code of ref document: A1 |