WO2020118555A1 - Network model data access method and device and electronic device - Google Patents

Network model data access method and device and electronic device Download PDF

Info

Publication number
WO2020118555A1
WO2020118555A1 (application PCT/CN2018/120563, CN2018120563W)
Authority
WO
WIPO (PCT)
Prior art keywords
calculation
configuration parameters
input data
memory
network
Prior art date
Application number
PCT/CN2018/120563
Other languages
French (fr)
Chinese (zh)
Inventor
徐欣
Original Assignee
深圳鲲云信息科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳鲲云信息科技有限公司
Priority to CN201880083680.4A (patent CN111542818B)
Priority to PCT/CN2018/120563
Publication of WO2020118555A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G06F 17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present invention relates to the field of data processing, and more specifically, to a network model data access method, device, and electronic equipment.
  • The requirements placed on complex deep learning network models are increasingly high.
  • The data transmission volume of a single network layer is very large.
  • Each network layer needs to transmit a large amount of data during the simulation process.
  • The amount of data transferred in regression testing can even reach hundreds of millions of values, so data reading is slow, which makes the verification test time very long.
  • The purpose of the present invention is to address the above defects in the prior art by providing a network model data access method, device, and electronic device, solving the problem of slow reading of network model data.
  • a method for accessing network model data includes:
  • the calculation result is stored in the simulation memory.
  • the method further includes:
  • the calculation result in the simulation memory is read and compared with the historical calculation result in the database corresponding to the input data and configuration parameters to obtain a verification result.
  • the input data includes image data acquired by a peripheral device
  • the input data in the simulation memory includes image data acquired by a peripheral device or a calculation result corresponding to a previous network layer.
  • Reading the calculation result from the simulation memory and comparing it with the historical calculation result for the input data and configuration parameters in the database to obtain a verification result includes:
  • the calculation engine includes any one of the following:
  • a calculation engine based on a field programmable gate array (FPGA);
  • a calculation engine based on an application-specific integrated circuit (ASIC);
  • a calculation engine based on a graphics processing unit (GPU).
  • the method further includes:
  • In a second aspect, a network model data access device is provided, which includes:
  • An acquisition module for acquiring input data and configuration parameters, the configuration parameters including configuration parameters of each network layer;
  • A simulation storage module, which is used to store the input data and configuration parameters into the simulation memory in the form of an index array;
  • the simulation configuration module is configured to read the configuration parameters of the corresponding network layer from the simulation memory, and configure the calculation engine of the corresponding network layer according to the configuration parameters;
  • a calculation module configured to read the input data in the simulation memory to the calculation engine for calculation, and obtain a calculation result
  • the result storage module is used to store the calculation result in the simulation memory.
  • An electronic device is provided, including a memory, a processor, and a computer program stored on the memory and executable on the processor; when the processor executes the computer program, the steps of the network model data access method provided by the embodiments of the present invention are implemented.
  • A computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the steps of the network model data access method provided in the embodiments of the present invention are implemented.
  • The beneficial effects brought by the present invention are: obtaining input data and configuration parameters, the configuration parameters including the configuration parameters of each network layer; storing the input data and configuration parameters in a simulation memory, the simulation memory including an array and an array index; reading the configuration parameters of the corresponding network layer from the simulation memory, and configuring the calculation engine of the corresponding network layer according to the configuration parameters; reading the input data from the simulation memory into the calculation engine for calculation to obtain the calculation result; and storing the calculation result in the simulation memory.
  • the hit rate of data reading can be improved, thereby increasing the speed of data reading, and thereby increasing the data processing speed of the entire network model .
  • FIG. 1 is a schematic flowchart of a network model data access method according to an embodiment of the present invention
  • FIG. 2 is a schematic flowchart of another network model data access method according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of data access of a verification network layer according to an embodiment of the invention.
  • FIG. 4 is a schematic structural diagram of a network model data access device according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of another network model data access device according to an embodiment of the present invention.
  • Neural networks have broad and attractive prospects in the fields of system identification, pattern recognition, and intelligent control. In intelligent control in particular, the self-learning capability of neural networks has drawn strong interest and is regarded as one of the keys to solving the problem of controller adaptability in automatic control.
  • A neural network is a complex network system formed by a large number of simple processing units (called neurons) that are widely interconnected. It reflects many basic characteristics of human brain function and is a highly complex nonlinear dynamic learning system. Neural networks have large-scale parallelism, distributed storage and processing, self-organization, and self-adaptive and self-learning capabilities, and are particularly suitable for imprecise and fuzzy information-processing problems that must consider many factors and conditions simultaneously.
  • The development of neural networks is an interdisciplinary effort involving neuroscience, mathematics, cognitive science, computer science, artificial intelligence, information science, cybernetics, robotics, microelectronics, psychology, optical computing, molecular biology, and other fields.
  • The basis of neural networks is the neuron.
  • Neurons are biological models based on the nerve cells of biological nervous systems. In studying biological nervous systems to explore the mechanisms of artificial intelligence, researchers formalized the neuron mathematically, producing the mathematical model of the neuron.
  • Neural network is a highly nonlinear dynamic system. Although the structure and function of each neuron are not complicated, the dynamic behavior of the neural network is very complicated; therefore, the neural network can express various phenomena in the actual physical world.
  • the neural network model is described based on the mathematical model of the neuron.
  • Artificial neural network (Artificial Neural Network) is a description of the first-order characteristics of the human brain system. Simply put, it is a mathematical model.
  • the neural network model is represented by the characteristics of network topology nodes and learning rules.
  • The main attractions of neural networks are: parallel distributed processing, high robustness and fault tolerance, distributed storage and learning capability, and the ability to closely approximate complex nonlinear relationships.
  • the invention provides a network model data access method, device and electronic equipment.
  • FIG. 1 is a schematic flowchart of a network model data access method according to an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps:
  • The aforementioned input data may be acquired by an external device; it may be image data, and the external device may be a sensor such as a camera, or a storage device holding image data such as a mobile hard disk or a database server.
  • The input data can also be generated internally, for example by image-processing software.
  • the above configuration parameters include configuration parameters of each layer of the network.
  • the above-mentioned each layer of the network refers to each layer of the network in the network model.
  • The above configuration parameters may be the configuration parameters used by the network model during training, for example convolutional-layer parameters such as weights, convolution kernel parameters, and stride parameters.
  • The above-mentioned network model refers to a pre-trained network model. Training data can be obtained from various databases and then preprocessed, for example by cropping, compression, and data cleaning.
  • The above-mentioned simulation memory may be a virtual memory; storing the input data and configuration parameters in the simulation memory increases the read and write speed when each network layer reads input data or writes calculation results.
  • the above array is used to store data, and the above array index is used to index into the array, thereby improving the reading speed of data.
  • the hit rate of data reading can be improved, thereby increasing the speed of data access.
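As a minimal illustration of the array-plus-index storage just described (a Python sketch; the `SimMemory` class and its method names are hypothetical, not from the patent), a read resolves the index first and then accesses the array directly, rather than scanning stored data:

```python
class SimMemory:
    """Toy model of the simulation memory: values live in arrays, and
    an index maps a name to its array position, so a read is a direct
    lookup rather than a scan over the stored data."""

    def __init__(self):
        self.arrays = []   # storage: one array per stored block of data
        self.index = {}    # index: name -> position in self.arrays

    def store(self, name, values):
        """Append a new array and record its position in the index."""
        self.index[name] = len(self.arrays)
        self.arrays.append(list(values))

    def load(self, name):
        """Resolve the index, then read the array directly."""
        return self.arrays[self.index[name]]

mem = SimMemory()
mem.store("layer1/input", [0.1, 0.2, 0.3])   # step 102: input data
mem.store("layer1/result", [0.4, 0.5])       # step 105: calculation result
print(mem.load("layer1/input"))              # -> [0.1, 0.2, 0.3]
```

Per-layer data occupies its own array, matching the description below of different layers using different storage areas with corresponding indexes.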
  • The above input data may also be the calculation result produced by running the same input through a network layer with the same configuration parameters as the upper-layer network. For example, suppose network model 01 needs to be verified and tested: the configuration parameter of its first layer is A, the input data is B, and the calculated result is C1. Suppose a network layer with the same configuration parameter A exists as a layer in a mature network model 02.
  • The calculation result of that layer on input data B is C2 (C2 can be regarded as the expected calculation result). The calculation result C2 and the input data B are read into the simulation memory as input data. In other words, input data B is the initial input of the network model 01 under test, C1 is calculated in the first layer, and the next layer after the first (the second layer) can read either C1 or C2 for its calculation.
  • the above configuration parameters may be preset parameters.
  • the above configuration parameters correspond to each network layer.
  • The algorithm of each network layer may be different, and the input-data requirements and processing methods may also differ.
  • Each network layer can therefore be configured with different parameters to implement its algorithm; the specific parameters that need to be configured can be set according to the model or algorithm.
  • the above calculation engine is used to calculate the input data according to the configuration parameters.
  • The foregoing input data can be understood as the input required by each network layer: it may be the input data of the first layer of the network model, or the calculation result of the previous layer.
  • The input data of different layers can be stored in different storage areas, that is, in different arrays (or different regions of an array), with corresponding indexes established for these arrays.
  • the input data can be read in the analog memory through the computing engine of the corresponding network layer.
  • In step 104, after the input data is calculated, the calculation result is obtained; it can be written back into the simulation memory, stored in a new storage area, with a new index established to address the result data.
  • The new storage area may be a new array or a new subscript range within an existing array.
  • Input data and configuration parameters are obtained, the configuration parameters including the configuration parameters of each network layer; the input data and configuration parameters are stored in a simulation memory that includes an array and an array index; the configuration parameters of the corresponding network layer are read from the simulation memory, and the calculation engine of that layer is configured according to them; the input data is read from the simulation memory into the calculation engine for calculation, obtaining the calculation result; and the calculation result is stored in the simulation memory.
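The five steps (101-105) summarized above can be sketched end to end in Python (all names are illustrative, and the calculation engine is stubbed as a plain function; this is a sketch of the flow, not the patented implementation):

```python
def run_layer(mem, layer, engine):
    """One pass of steps 103-105 against a dict-backed simulation
    memory: read the layer's configuration, run the engine on the
    layer's input, and store the result back."""
    config = mem[f"{layer}/config"]     # step 103: read configuration
    data = mem[f"{layer}/input"]        # step 104: read input data
    result = engine(data, config)       # step 104: calculate
    mem[f"{layer}/result"] = result     # step 105: store the result
    return result

# Steps 101-102: obtain input data and per-layer configuration and
# store them in the simulation memory (here, an indexed dict).
mem = {
    "conv1/input": [1.0, 2.0, 3.0],
    "conv1/config": {"scale": 2.0},
}
scale_engine = lambda data, cfg: [x * cfg["scale"] for x in data]
print(run_layer(mem, "conv1", scale_engine))   # -> [2.0, 4.0, 6.0]
print(mem["conv1/result"])                     # -> [2.0, 4.0, 6.0]
```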
  • The above calculation result may be referred to as a simulation result. The network model data access method provided in the embodiments of the present invention may be applied to any equipment that needs to perform network model data access, such as computers, servers, and mobile phones.
  • FIG. 2 is a schematic flowchart of another network model data access method according to an embodiment of the present invention. As shown in FIG. 2, the method includes the following steps:
  • The calculation result is stored in the simulation memory, so that the next network layer can read it as input data more quickly.
  • The above calculation result may be the calculation result of a certain network layer or the final calculation result of the entire network model. It is understandable that the historical calculation results in the database refer to the results obtained by feeding the same input data into a network under the same configuration parameters; they can be regarded as the expected results for that configuration and input.
  • For example, suppose the calculation result of input data A with configuration parameter B on a certain network layer is C.
  • The corresponding historical calculation result may be the result, say C1, obtained from input data A and configuration parameter B on the matching layer of a mature network; comparison verification then compares C with C1.
  • Similarly, suppose the calculation result of input data A with configuration parameter B through the entire network model is D.
  • The corresponding historical calculation result may be the result, say D1, obtained from the same input and parameters in a mature network model; comparison verification then compares D with D1.
  • the above historical calculation results are used to verify the network model.
  • The comparison result is either that the results are the same or that they differ.
  • If the calculation result differs from the corresponding historical calculation result in the database, the hardware on-board test of the network model fails; if they are the same, the hardware test of the network model passes.
  • The above database may be a remote database or a local database, and the calculation results in the database may be obtained from a mature network model; they are used for comparison against the results of data simulation testing. The comparison and verification can be performed by the CPU.
  • The historical calculation results may also be stored into the simulation memory; when the next network layer starts its calculation, it reads the historical calculation result as its input data, so that each layer's input is the expected input, thereby improving the precision of the verification test.
  • When the test is run on the hardware board, the calculation result is read from the simulation memory into the processor for comparison; because this data reading is fast, it reduces the simulation time and thus the overall verification test time.
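The pass/fail comparison against a historical result can be sketched as follows (a hypothetical tolerance-based check; the patent does not specify how equality between results is decided, so the tolerance is an assumption):

```python
def verify(calc_result, expected, tol=1e-6):
    """Compare a simulated calculation result with the expected
    (historical) result from the database; the hardware test passes
    only when every element matches within the tolerance."""
    if len(calc_result) != len(expected):
        return False
    return all(abs(a - b) <= tol for a, b in zip(calc_result, expected))

# Historical (expected) result for some input A and configuration B:
historical = [2.0, 4.0, 6.0]
assert verify([2.0, 4.0, 6.0], historical)        # same -> test passes
assert not verify([2.0, 4.0, 6.5], historical)    # different -> test fails
```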
  • the input data includes image data acquired by a peripheral device
  • the input data in the simulation memory includes image data acquired by a peripheral device or a calculation result corresponding to a previous network layer.
  • The above-mentioned peripherals may be data-collection devices equipped with memory, such as cameras, mobile phones, and tablet computers, or devices that provide input data, such as hard disks and cloud databases; the data in these devices is read in.
  • the input data can be read through the simulation memory to increase the data reading speed.
  • The above-mentioned image data may be image data for face recognition, vehicle recognition, object recognition, or the like.
  • the above-mentioned simulation memory obtains input data and configuration parameters from peripheral devices. During simulation, the network model reads input data and configuration parameters from the simulation memory.
  • The simulation memory may also obtain historical calculation results from a database and store them as input data.
  • the input data obtained from the peripherals can be regarded as the initial input data.
  • the obtained calculation results can be input as input data to the next layer of networks for calculation.
  • Reading the calculation result from the simulation memory and comparing it with the historical calculation result corresponding to the input data and configuration parameters in the database to obtain a verification result includes:
  • The above database may store the historical calculation result (expected calculation result) corresponding to each network layer. In the verification test of the network model, the calculation result of each layer can be stored in the simulation memory as soon as that layer finishes, then read from the simulation memory into the processor, where it is compared with the historical (expected) calculation result for that layer from the database. Each layer is calculated once and compared once; verifying every layer in this way improves the accuracy of the verification test.
  • Alternatively, the database may store the historical (expected) calculation result corresponding to the entire network model; verification can then wait until all network layers have finished, read the final calculation result into the processor, and compare it with the historical (expected) result in the database.
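The two verification modes just described, per-layer and whole-model, can be sketched as follows (hypothetical helper names; exact equality is used here for brevity, and the "database" is a plain dict of expected results):

```python
def verify_per_layer(results, expected_by_layer):
    """Mode 1: compare each layer's result against that layer's
    expected result, producing a pass/fail verdict per layer."""
    return {layer: results[layer] == expected_by_layer[layer]
            for layer in expected_by_layer}

def verify_final(results, expected_final, final_layer):
    """Mode 2: compare only the final result after all layers finish."""
    return results[final_layer] == expected_final

results = {"conv1": [2.0, 4.0], "fc1": [0.9]}    # from simulation memory
expected = {"conv1": [2.0, 4.0], "fc1": [0.8]}   # from the database
print(verify_per_layer(results, expected))   # {'conv1': True, 'fc1': False}
print(verify_final(results, [0.9], "fc1"))   # True
```

Per-layer comparison localizes which layer diverged; whole-model comparison is cheaper but reports only a single verdict.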
  • the calculation engine includes any one of the following:
  • a calculation engine based on a field programmable gate array (FPGA);
  • a calculation engine based on an application-specific integrated circuit (ASIC);
  • a calculation engine based on a graphics processing unit (GPU).
  • The above-mentioned field programmable gate array makes digital circuit system design very flexible, significantly shortens the system development cycle, and reduces the size of the digital circuit system and the number of chip types used; it can be used for image acquisition or image recognition.
  • The above-mentioned application-specific integrated circuit is highly specialized and, once designed, can be dedicated to the verification testing of the network model hardware.
  • The above-mentioned graphics processor has graphics-acceleration functions and can handle complex image calculations, offloading this computation from the CPU, speeding up image-data calculation, and also increasing the speed at which the CPU compares calculation results.
  • The above calculation engines are only preferred choices in this embodiment and should not be construed as limiting it.
  • the method further includes:
  • the above new network layer refers to the next network layer.
  • The network-layer configuration parameters in the network model can be configured all at once, or configured before each calculation begins; for example, the parameters of the next network layer can be configured after the calculation of the current layer is completed. It can be understood that the parameters of each layer are stored in different arrays in the simulation memory and can be obtained through the corresponding index. The calculation process is repeated for all network layers in the network model until the last layer finishes; the final calculation result is then stored in the simulation memory, where it can conveniently be compared with the historical (expected) calculation result from the database.
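Repeating the configure/calculate/store cycle over all layers, with each layer's result feeding the next layer as input, can be sketched as follows (illustrative names only; the engine is again stubbed as a plain function):

```python
def run_model(mem, layers, engine):
    """Run every layer in order against a dict-backed simulation
    memory: re-configure the engine per layer, use the previous
    layer's output as input, and keep every result in memory so the
    final one can be compared against the database afterwards."""
    data = mem["model/input"]
    for layer in layers:
        config = mem[f"{layer}/config"]   # re-configure engine per layer
        data = engine(data, config)       # calculate on previous output
        mem[f"{layer}/result"] = data     # store for later verification
    return data

mem = {
    "model/input": [1.0, 2.0],
    "l1/config": {"scale": 2.0},
    "l2/config": {"scale": 0.5},
}
engine = lambda d, cfg: [x * cfg["scale"] for x in d]
print(run_model(mem, ["l1", "l2"], engine))   # -> [1.0, 2.0]
```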
  • In a second aspect, a network model data access device is provided, as shown in FIG. 4.
  • the device includes:
  • the obtaining module 301 is used to obtain input data and configuration parameters, where the configuration parameters include configuration parameters of each network layer;
  • the simulation storage module 302 is used to store the input data and configuration parameters in the form of an index array into the simulation memory;
  • the simulation configuration module 303 is configured to read the configuration parameters of the corresponding network layer from the simulation memory, and configure the calculation engine of the corresponding network layer according to the configuration parameters;
  • the calculation module 304 is configured to read the input data in the simulation memory into the calculation engine for calculation, and obtain a calculation result;
  • the result storage module 305 is used to store the calculation result in the simulation memory.
  • the device further includes:
  • the verification module 306 is configured to read and compare the calculation result in the simulation memory with the historical calculation result corresponding to the input data and configuration parameters in the database to obtain a verification result.
  • the input data includes image data acquired by a peripheral device
  • the input data in the simulation memory includes image data acquired by a peripheral device or a calculation result corresponding to a previous network layer.
  • the verification module 306 is further configured to obtain historical calculation results of input data and configuration parameters corresponding to each network layer in the database and compare and verify the calculation results of each network layer respectively; or
  • the verification module 306 is also used to obtain historical calculation results of input data and configuration parameters corresponding to the entire network model in the database, and perform comparison and verification on final calculation results of all network layers.
  • the calculation engine includes any one of the following:
  • a calculation engine based on a field programmable gate array (FPGA);
  • a calculation engine based on an application-specific integrated circuit (ASIC);
  • a calculation engine based on a graphics processing unit (GPU).
  • the simulation configuration module 303 is further configured to read configuration parameters corresponding to the new network layer from the simulation memory, and update the calculation engine corresponding to the network layer according to the configuration parameters;
  • the calculation module 304 is further configured to take the calculation result of the previous network layer in the simulation memory as input data and read it into the updated calculation engine for calculation, obtaining a new calculation result;
  • the result storage module 305 is also used to store the new calculation result in the simulation memory;
  • the result storage module 305 is also used to store into the simulation memory the final calculation result obtained by repeating the above steps until all network layers have been calculated.
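The module structure described above (modules 301 through 306) can be mirrored by one illustrative class whose methods correspond to the modules; all names and the dict-backed memory are assumptions for illustration, not the patented design:

```python
class NetworkModelDataAccessDevice:
    """Illustrative grouping of the device's modules 301-306."""

    def __init__(self):
        self.mem = {}   # simulation memory (modules 302 and 305 write here)

    def acquire(self, input_data, configs):          # module 301
        """Obtain input data and per-layer configuration parameters."""
        self.mem["input"] = input_data
        for layer, cfg in configs.items():
            self.mem[f"{layer}/config"] = cfg        # module 302: store

    def run_layer(self, layer, engine, data):        # modules 303-305
        cfg = self.mem[f"{layer}/config"]            # 303: configure
        result = engine(data, cfg)                   # 304: calculate
        self.mem[f"{layer}/result"] = result         # 305: store result
        return result

    def verify(self, layer, expected):               # module 306
        """Compare a stored result with the expected (historical) one."""
        return self.mem[f"{layer}/result"] == expected

dev = NetworkModelDataAccessDevice()
dev.acquire([1, 2, 3], {"conv1": {"scale": 3}})
out = dev.run_layer("conv1", lambda d, c: [x * c["scale"] for x in d],
                    dev.mem["input"])
print(out, dev.verify("conv1", [3, 6, 9]))   # -> [3, 6, 9] True
```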
  • An embodiment of the present invention provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor; when the processor executes the computer program, the steps of the network model data access method provided by the embodiments of the present invention are implemented.
  • An embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the network model data access method provided by an embodiment of the present invention.
  • the disclosed device may be implemented in other ways.
  • the device embodiments described above are only schematic.
  • the division of the unit is only a logical function division.
  • In actual implementation there may be other division manners; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or software program modules.
  • the integrated unit is implemented in the form of a software program module and sold or used as an independent product, it may be stored in a computer-readable memory.
  • The technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a memory and includes several instructions to enable a computer device (which may be a personal computer, server, network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application.
  • The aforementioned memory includes: USB flash disk, read-only memory (ROM), random access memory (RAM), mobile hard disk, magnetic disk, optical disk, and other media that can store program code.
  • The program may be stored in a computer-readable memory, and the memory may include: flash disk, read-only memory (ROM), random access memory (RAM), magnetic disk, optical disk, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Pure & Applied Mathematics (AREA)
  • Biophysics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Algebra (AREA)
  • Neurology (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Disclosed are a network model data access method and device and an electronic device. The method comprises the following steps: obtaining input data and configuration parameters, the configuration parameters including the configuration parameters of each network layer (101); storing the input data and the configuration parameters in a simulation memory, the simulation memory including an array and an array index (102); reading the configuration parameters of a corresponding network layer from the simulation memory, and configuring a calculation engine of the corresponding network layer according to the configuration parameters (103); reading the input data from the simulation memory into the calculation engine for calculation to obtain a calculation result (104); and storing the calculation result in the simulation memory (105). By storing input data and calculation results in a simulation memory and using index arrays, the method improves the hit rate of data reading, thereby increasing the data reading speed and the data processing speed of the entire network model.

Description

Network model data access method, device and electronic device
Technical Field
The present invention relates to the field of data processing, and more particularly, to a network model data access method, device and electronic device.
Background Art
With the development of AI technology, the demands placed on complex deep learning network models keep rising. In a deep learning network model, the volume of data transferred by a single network layer is very large. To guarantee the accuracy of hardware on-board testing, many or even all of the network layers usually need to be simulated, including reading input data and writing calculation results to memory. Because each network layer handles a very large amount of data, every layer must transfer large volumes of data during simulation; in regression testing the amount of transferred data can reach hundreds of millions of values. Data reading is therefore slow, which makes verification testing take a very long time.
Summary of the Invention
The purpose of the present invention is to provide a network model data access method, device and electronic device that address the above defects of the prior art and solve the problem of slow network model data reading.
The purpose of the present invention is achieved by the following technical solutions:
In a first aspect, a network model data access method is provided. The method comprises:
obtaining input data and configuration parameters, the configuration parameters comprising configuration parameters of each network layer;
storing the input data and configuration parameters in a simulated memory in the form of indexed arrays;
reading the configuration parameters of the corresponding network layer from the simulated memory, and configuring a calculation engine of the corresponding network layer according to the configuration parameters;
reading the input data from the simulated memory into the calculation engine for calculation, and obtaining a calculation result;
storing the calculation result in the simulated memory.
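The five steps above can be sketched in a few lines of Python. This is a minimal illustrative model only, not the patented implementation: the class and region names (`SimulatedMemory`, `data/input`, `out/layer0`) are assumptions, and the "calculation engine" is reduced to a toy scaling operation.

```python
class SimulatedMemory:
    """Toy model of the simulated memory: arrays plus an array index."""

    def __init__(self):
        self.arrays = []   # backing arrays, one per storage region
        self.index = {}    # region name -> position in self.arrays

    def store(self, region, data):
        # Create a new region (new array + index entry) or overwrite one.
        if region not in self.index:
            self.index[region] = len(self.arrays)
            self.arrays.append(list(data))
        else:
            self.arrays[self.index[region]] = list(data)

    def load(self, region):
        # An index lookup resolves directly to the backing array.
        return self.arrays[self.index[region]]

mem = SimulatedMemory()
mem.store("data/input", [1.0, 2.0, 3.0])   # step 102: store input data
mem.store("cfg/layer0", [("scale", 2.0)])  # step 102: store layer config
cfg = dict(mem.load("cfg/layer0"))         # step 103: configure the engine
result = [x * cfg["scale"] for x in mem.load("data/input")]  # step 104
mem.store("out/layer0", result)            # step 105: store the result
```

Because every read goes through the index rather than a search, a lookup resolves in one step, which is the "hit rate" benefit the method describes.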
Optionally, after the calculation result is stored in the simulated memory, the method further comprises:
reading the calculation result from the simulated memory and comparing it, for verification, with the historical calculation result in a database corresponding to the input data and configuration parameters, to obtain a verification result.
Optionally, the input data comprises image data acquired by a peripheral, and the input data in the simulated memory comprises image data acquired by a peripheral or a calculation result corresponding to the previous network layer.
Optionally, reading the calculation result from the simulated memory and comparing it with the historical calculation result in the database corresponding to the input data and configuration parameters, to obtain a verification result, comprises:
obtaining from the database the historical calculation results corresponding to the input data and configuration parameters of each network layer, and comparing and verifying the calculation result of each network layer separately; or
obtaining from the database the historical calculation result corresponding to the input data and configuration parameters of the entire network model, and comparing and verifying the final calculation result of all network layers.
Optionally, the calculation engine comprises any one of the following:
a calculation engine based on a field programmable gate array (FPGA);
a calculation engine based on an application-specific integrated circuit (ASIC);
a calculation engine based on a graphics processing unit (GPU).
Optionally, after the calculation result is stored in the simulated memory, the method further comprises:
reading the configuration parameters corresponding to a new network layer from the simulated memory, and updating the calculation engine of the corresponding network layer according to the configuration parameters;
reading the calculation result of the previous network layer from the simulated memory as input data into the updated calculation engine for calculation, to obtain a new calculation result;
storing the new calculation result in the simulated memory;
repeating the above steps until all network layers have been calculated, and storing the resulting final calculation result in the simulated memory.
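The layer-by-layer repetition above can be sketched as a loop over the simulated memory. This is a hedged illustration: the memory is modelled as a plain dictionary, the per-layer configuration as a (scale, bias) pair, and the engine as an affine function — real engines (FPGA, ASIC, GPU) would be driven through their own interfaces.

```python
def run_layers(memory, num_layers):
    """Run every layer; each layer reads its config and the previous
    layer's result from the simulated memory and stores its output."""
    data = memory["data/input"]  # initial input data
    for layer in range(num_layers):
        scale, bias = memory[f"cfg/layer{layer}"]  # reconfigure the engine
        data = [x * scale + bias for x in data]    # engine computes
        memory[f"out/layer{layer}"] = data         # store this layer's result
    return data  # final calculation result, also kept in memory

memory = {
    "data/input": [1.0, 2.0],
    "cfg/layer0": (2.0, 0.0),  # layer 0: scale by 2
    "cfg/layer1": (1.0, 1.0),  # layer 1: add 1
}
final = run_layers(memory, 2)
```

Every intermediate result stays resident in the simulated memory, so the next layer (and any later verification step) reads it without touching slower storage.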
In a second aspect, a network model data access device is provided. The device comprises:
an acquisition module, configured to obtain input data and configuration parameters, the configuration parameters comprising configuration parameters of each network layer;
a simulated storage module, configured to store the input data and configuration parameters in a simulated memory in the form of indexed arrays;
a simulation configuration module, configured to read the configuration parameters of the corresponding network layer from the simulated memory, and configure a calculation engine of the corresponding network layer according to the configuration parameters;
a calculation module, configured to read the input data from the simulated memory into the calculation engine for calculation, to obtain a calculation result;
a result storage module, configured to store the calculation result in the simulated memory.
In a third aspect, an electronic device is provided, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the network model data access method provided by the embodiments of the present invention.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the network model data access method provided by the embodiments of the present invention.
The present invention brings the following beneficial effects: input data and configuration parameters are obtained, the configuration parameters comprising configuration parameters of each network layer; the input data and configuration parameters are stored in a simulated memory, the simulated memory comprising arrays and an array index; the configuration parameters of the corresponding network layer are read from the simulated memory, and the calculation engine of the corresponding network layer is configured according to the configuration parameters; the input data is read from the simulated memory into the calculation engine for calculation, obtaining a calculation result; and the calculation result is stored in the simulated memory. By storing the input data and calculation results in the simulated memory and indexing into arrays, the hit rate of data reads is improved, thereby increasing the speed of data reading and in turn the data processing speed of the entire network model.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a network model data access method according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of another network model data access method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of data access of a verification network layer according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a network model data access device according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of another network model data access device according to an embodiment of the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present application.
The terms "first", "second", "third", "fourth" and the like in the specification, claims and drawings of the present application are used to distinguish different objects, not to describe a particular order. In addition, the terms "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, product or device that comprises a series of steps or units is not limited to the listed steps or units, but optionally further comprises steps or units that are not listed, or optionally further comprises other steps or units inherent to the process, method, product or device.
Reference herein to an "embodiment" means that a specific feature, structure or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearance of this phrase in various places in the specification does not necessarily refer to the same embodiment, nor to an independent or alternative embodiment mutually exclusive with other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
Since the advent of mathematical methods that simulate actual human neural networks, people have gradually become accustomed to referring to such artificial neural networks simply as neural networks. Neural networks have broad and attractive prospects in fields such as system identification, pattern recognition and intelligent control. In intelligent control in particular, people are especially interested in the self-learning capability of neural networks, and regard this important feature as one of the keys to solving the long-standing problem of controller adaptability in automatic control.
A neural network (NN) is a complex network system formed by a large number of simple processing units (called neurons) that are extensively interconnected. It reflects many basic features of human brain function and is a highly complex nonlinear dynamic learning system. Neural networks offer massive parallelism, distributed storage and processing, self-organization, adaptivity and self-learning, and are particularly suited to imprecise and fuzzy information-processing problems in which many factors and conditions must be considered simultaneously. The development of neural networks is related to neuroscience, mathematical science, cognitive science, computer science, artificial intelligence, information science, cybernetics, robotics, microelectronics, psychology, optical computing, molecular biology and other fields, and is an emerging interdisciplinary subject.
The foundation of neural networks is the neuron.
A neuron is a biological model based on the nerve cells of the biological nervous system. In studying the biological nervous system to explore the mechanisms of artificial intelligence, researchers formalized the neuron mathematically, producing the mathematical model of the neuron.
A large number of neurons of the same form connected together make up a neural network. A neural network is a highly nonlinear dynamic system. Although the structure and function of each individual neuron are simple, the dynamic behavior of the network as a whole is very complex; a neural network can therefore express a wide range of phenomena in the physical world.
The neural network model is described on the basis of the mathematical model of the neuron. An artificial neural network is a description of the first-order characteristics of the human brain system; simply put, it is a mathematical model. A neural network model is represented by its network topology, node characteristics and learning rules. The great appeal of neural networks lies mainly in their parallel distributed processing, high robustness and fault tolerance, distributed storage and learning capability, and their ability to closely approximate complex nonlinear relationships.
Among the research topics in the field of control, the control of uncertain systems has long been one of the central themes of control theory, yet the problem has never been effectively solved. Using the learning ability of a neural network so that it automatically learns the characteristics of an uncertain system during control, and thus automatically adapts to how those characteristics vary over time in pursuit of optimal control, is clearly a very exciting idea and approach.
There are now dozens of artificial neural network models. Typical and widely applied models include the BP neural network, the Hopfield network, the ART network and the Kohonen network.
The present invention provides a network model data access method, device and electronic device.
The purpose of the present invention is achieved by the following technical solutions:
In a first aspect, referring to FIG. 1, FIG. 1 is a schematic flowchart of a network model data access method according to an embodiment of the present invention. As shown in FIG. 1, the method comprises the following steps:
101. Obtain input data and configuration parameters, the configuration parameters comprising configuration parameters of each network layer.
The above input data may be acquired by an external device and may be image data. The external device may be a sensor, such as a camera, or a storage device that stores image data, such as a portable hard disk or a database server; the input data may also be generated internally, for example by imaging software. The above configuration parameters comprise the configuration parameters of each network layer, where each network layer refers to a layer of the network model. The configuration parameters may be the parameters used when training the network model, such as the weight parameters, convolution kernel parameters and stride parameters of a convolutional layer. The above network model refers to a pre-trained network model; the training data may be obtained from various databases and then processed, where the processing may include cropping, compression, data cleaning and the like.
102. Store the input data and configuration parameters in a simulated memory, the simulated memory comprising arrays and an array index.
The above simulated memory may be a virtual memory. Storing the input data and configuration parameters in the simulated memory increases read and write speed when each network layer reads input data or writes calculation data. The arrays are used to store data, and the array index is used to index into the arrays, which improves data reading speed. Accessing input data and calculation data through the simulated memory improves the hit rate of data reads and thus the data access speed. In some possible embodiments, the input data may also be a calculation result obtained by computing the same input data with a network that has the same configuration parameters as the previous layer. For example, suppose a layer of the network model 01 under verification has configuration parameters A and input data B, and its computed result is C1. Suppose there is a network layer with the same configuration parameters A that belongs to a mature network model 02, and that this layer's result for input data B is C2 (C2 can be regarded as the expected calculation result). The calculation result C2 of that layer, together with the input data B, is read into the simulated memory as input data. In other words, input data B is the initial input data of the network model 01 under test; the first layer computes C1, and the layer after the first layer, i.e. the second layer, may read either C1 or C2 for its calculation. This prevents the situation in verification testing where a calculation deviation in one layer makes the subsequent deviations so large that the layer where the deviation first appeared, or indeed all deviating layers, can no longer be identified.
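The C1/C2 fallback described above can be sketched in a few lines. This is an interpretive sketch only: the function name and the idea of storing the golden result alongside the computed one are assumptions, chosen to illustrate why a deviation in one layer need not corrupt every later layer.

```python
def next_layer_input(computed, golden=None):
    """Choose what the next layer reads from the simulated memory:
    prefer the golden (expected) result when one was stored alongside,
    so a deviation in this layer does not propagate further."""
    return golden if golden is not None else computed

c1 = [0.9, 2.1]  # C1: this layer's computed result, slightly off
c2 = [1.0, 2.0]  # C2: expected result from the mature model 02
```

With the golden result available, the second layer computes from C2; without it, it falls back to C1, reproducing the two cases described in the text.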
103. Read the configuration parameters of the corresponding network layer from the simulated memory, and configure the calculation engine of the corresponding network layer according to the configuration parameters.
The above configuration parameters may be preset parameters corresponding to each network layer. In a network model, the algorithm of each layer may differ, as may its input data requirements and processing; each layer can therefore be configured with different parameters to implement its algorithm, and the specific parameters to be configured can be set according to the model or the algorithm. The calculation engine is used to compute the input data according to the configuration parameters.
104. Read the input data from the simulated memory into the calculation engine for calculation, and obtain a calculation result.
The above input data can be understood as the input data required by each layer; it may be the input data of the first layer of the network model, or the calculation result of the previous layer. When input data is stored in the simulated memory, the input data of different layers can be stored in different storage regions, i.e. in different arrays, and corresponding indexes are established for those arrays. The input data may be read from the simulated memory by the calculation engine of the corresponding network layer.
105. Store the calculation result in the simulated memory.
In step 104, after the input data has been computed and a calculation result obtained, the result can be written back to the simulated memory into a new storage region, and a new index is established to index the result data. The new storage region may be a new array, or a new subscript range within an existing array. In this way, once the next network layer is ready, it can quickly read the calculation result from the simulated memory, which increases the data access speed of the entire network model and thus the simulation speed.
In this embodiment, input data and configuration parameters are obtained, the configuration parameters comprising configuration parameters of each network layer; the input data and configuration parameters are stored in a simulated memory comprising arrays and an array index; the configuration parameters of the corresponding network layer are read from the simulated memory, and the calculation engine of the corresponding network layer is configured accordingly; the input data is read from the simulated memory into the calculation engine for calculation, obtaining a calculation result; and the calculation result is stored in the simulated memory. By storing the input data and calculation results in the simulated memory and indexing into arrays, the hit rate of data reads is improved, thereby increasing data reading speed and in turn the data processing speed of the entire network model.
It should be noted that the above calculation result may also be called a simulation result. The network model data access method provided by the embodiments of the present invention can be applied to devices that need to perform data access, such as computers, servers and mobile phones.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of another network model data access method according to an embodiment of the present invention. As shown in FIG. 2, the method comprises the following steps:
201. Obtain input data and configuration parameters, the configuration parameters comprising configuration parameters of each network layer.
202. Store the input data and configuration parameters in a simulated memory, the simulated memory comprising arrays and an array index.
203. Read the configuration parameters of the corresponding network layer from the simulated memory, and configure the calculation engine of the corresponding network layer according to the configuration parameters.
204. Read the input data from the simulated memory into the calculation engine for calculation, and obtain a calculation result.
205. Store the calculation result in the simulated memory.
206. Read the calculation result from the simulated memory and compare it with the historical calculation result in the database corresponding to the input data and configuration parameters, to obtain a verification result.
In step 205, storing the calculation result in the simulated memory increases reading speed when the next layer reads that result as input data. In step 206, the calculation result may be the result of a particular network layer or the final result of the entire network model. It should be understood that a historical calculation result in the database refers to the result obtained by feeding the same input data with the same configuration parameters into a network model, and can be regarded as the expected result for that configuration and input. For example, if computing input data A with configuration parameters B on a certain layer yields calculation result C, the historical calculation result may be the result obtained by computing input data A with configuration parameters B on a corresponding layer, say C1, and the comparison verification compares C with C1. Likewise, if computing input data A with configuration parameters B through the entire network model yields D, the historical calculation result may be the result computed with input data A and configuration parameters B by a mature network model, say D1, and the comparison verification compares D with D1. The historical calculation results are used to verify the network model, and the comparison result is either a match or a mismatch: if the calculation result differs from the corresponding historical result in the database, the hardware on-board test of the network model fails; if they are the same, the test passes. It should be noted that the database may be a remote database or a local database, and the calculation results in the database may be results obtained from a mature network model, used for comparison against the calculation results of the simulation under test. The comparison may be performed by a CPU. In some possible embodiments, when the calculation results of every layer are compared and verified, if a layer's calculation result differs from the historical result, the historical result can be stored in the simulated memory; when the next layer starts its calculation, that historical result is read into the next layer as input data. This guarantees that the input data of every layer is the expected input data, which improves the precision of the verification test.
In this embodiment, the calculation results stored in the simulated memory are read into the processor and compared with the historical results in the database, implementing the hardware on-board test. Because the calculation results are read from the simulated memory into the processor for comparison, data reading is fast, which reduces simulation time and therefore verification testing time.
Optionally, the input data comprises image data acquired by a peripheral, and the input data in the simulated memory comprises image data acquired by a peripheral or a calculation result corresponding to the previous network layer.
The above peripheral may be a data acquisition device equipped with a memory, such as a camera, mobile phone or tablet computer, or a device that provides input data, such as a hard disk or cloud database used to store data. The data from such devices is read into the simulated memory, and during verification testing the input data is read through the simulated memory, which increases data reading speed. The image data may be image data for face recognition, vehicle recognition, object recognition and the like. The simulated memory obtains input data and configuration parameters from the peripheral; during simulation, the network model reads the input data and configuration parameters from the simulated memory. In some possible embodiments, the simulated memory may also obtain historical calculation results from the database and store them as input data. The input data obtained from the peripheral can be regarded as initial input data; after it has been computed by one network layer, the resulting calculation result can be fed as input data into the next layer for calculation.
Optionally, reading the calculation result in the simulation memory and comparing it against the historical calculation result in the database corresponding to the input data and configuration parameters, to obtain a verification result, includes:
obtaining from the database the historical calculation results corresponding to the input data and configuration parameters of each network layer, and comparing the calculation result of each network layer against them separately; or
obtaining from the database the historical calculation result corresponding to the input data and configuration parameters of the entire network model, and comparing the final calculation result of all network layers against it.
In this embodiment, referring to FIG. 3, when the calculation result of every network layer in the network model needs to be verified, the database may store a historical calculation result (expected calculation result) for each network layer. During the verification test of the network model, each time a layer has been computed, its calculation result can be stored in the simulation memory and then read out of the simulation memory into the processor, and the processor obtains the historical calculation result (expected calculation result) of that layer from the database for comparison. Each layer is computed once and compared once; verifying every layer in this way improves the accuracy of the verification test. When only the final calculation result of the network model needs to be verified, the database may store a historical calculation result (expected calculation result) corresponding to the entire network model; in that case the comparison can wait until all network layers in the network model have been computed, and the final calculation result and the historical calculation result (expected calculation result) in the database are then read into the processor for comparison.
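The two comparison modes just described can be sketched as follows. This is a minimal Python illustration assuming a fixed numeric tolerance and a database already reduced to in-memory lists; the function names are invented for the example, not taken from the application.

```python
def verify_per_layer(layer_results, expected_per_layer, tol=1e-6):
    # Mode 1: each layer's result, read back from the simulation memory,
    # is compared against the expected (historical) result of that layer.
    return [all(abs(a - b) <= tol for a, b in zip(got, exp))
            for got, exp in zip(layer_results, expected_per_layer)]


def verify_final(layer_results, expected_final, tol=1e-6):
    # Mode 2: only the final result, produced after all layers have run,
    # is compared against the expected result of the whole model.
    return all(abs(a - b) <= tol
               for a, b in zip(layer_results[-1], expected_final))


results = [[1.0, 2.0], [2.0, 4.0]]           # one result per network layer
expected = [[1.0, 2.0], [2.0, 4.0]]          # historical (expected) results
print(verify_per_layer(results, expected))   # prints [True, True]
print(verify_final(results, expected[-1]))   # prints True
```

Mode 1 trades more comparisons for finer localization of an error; Mode 2 performs a single comparison after the whole model has run.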
Optionally, the calculation engine includes any one of the following:
a calculation engine based on a field-programmable gate array (FPGA);
a calculation engine based on an application-specific integrated circuit (ASIC);
a calculation engine based on a graphics processing unit (GPU).
The field-programmable gate array makes the design of digital circuit systems very flexible, significantly shortens the system development cycle, and reduces both the size of the digital circuit system and the number of chip types used; it can be used for image acquisition or image recognition. The application-specific integrated circuit is highly specialized and, once designed, can be dedicated to the on-board verification test of the network model hardware. The graphics processing unit provides graphics acceleration and can handle the computation of complex images, freeing the CPU from that computation; this improves the image-data computation while also increasing the speed at which the CPU compares the calculation results. Of course, the above calculation engines are merely preferred choices in this embodiment and should not be construed as limiting this embodiment.
After storing the calculation result in the simulation memory, the method further includes:
reading the configuration parameters corresponding to a new network layer from the simulation memory, and updating the calculation engine of the corresponding network layer according to the configuration parameters;
reading the calculation result of the previous network layer in the simulation memory into the updated calculation engine as input data for calculation, to obtain a new calculation result;
storing the new calculation result in the simulation memory;
repeating the above steps until all network layers have been computed, and storing the final calculation result in the simulation memory.
After the computation of one network layer is complete, the computation of the next network layer is carried out; the new network layer above refers to that next network layer. The configuration parameters of the network layers in the network model may be configured all at once, or configured before each computation starts; for example, the parameters of the next layer may be configured after the computation of the current layer has finished. It can be understood that the parameters of each network layer are stored in different arrays in the simulation memory and can be retrieved by the corresponding index into the simulation memory. The computation process is repeated for all network layers in the network model until the last layer has been computed; the final calculation result is then obtained and stored in the simulation memory, from which it can conveniently be retrieved and compared with the historical calculation result (expected calculation result) in the database.
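The layer-by-layer loop described above can be sketched as follows. This is an illustrative Python model only: the real calculation engine is reconfigured hardware (FPGA/ASIC/GPU), while here each layer is reduced to a hypothetical affine function so that the control flow can be shown.

```python
def configure_engine(config):
    # Stand-in for (re)configuring the hardware calculation engine from
    # the layer's parameters, fetched by index from the simulation memory.
    return lambda xs: [x * config["weight"] + config["bias"] for x in xs]


def run_model(initial_input, layer_configs):
    results = []                    # plays the role of the simulation memory
    data = initial_input
    for config in layer_configs:    # one pass per network layer
        engine = configure_engine(config)
        data = engine(data)         # previous result becomes the new input
        results.append(data)        # new result stored back for the next layer
    return results                  # results[-1] is the final result to compare


configs = [{"weight": 2.0, "bias": 1.0}, {"weight": 0.5, "bias": 0.0}]
out = run_model([1.0, 3.0], configs)
print(out[-1])  # prints [1.5, 3.5]: the final result kept for comparison
```

The design point is that the engine is rebuilt from fresh parameters on every iteration, matching the description of updating the calculation engine for each new network layer.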
In a second aspect, as shown in FIG. 4, a network model data access apparatus is provided. The apparatus includes:
an obtaining module 301, configured to obtain input data and configuration parameters, the configuration parameters including the configuration parameters of each network layer;
a simulation storage module 302, configured to store the input data and the configuration parameters in a simulation memory in the form of indexed arrays;
a simulation configuration module 303, configured to read the configuration parameters of the corresponding network layer from the simulation memory, and configure the calculation engine of the corresponding network layer according to the configuration parameters;
a calculation module 304, configured to read the input data in the simulation memory into the calculation engine for calculation, to obtain a calculation result;
a result storage module 305, configured to store the calculation result in the simulation memory.
Optionally, as shown in FIG. 5, the apparatus further includes:
a verification module 306, configured to read the calculation result in the simulation memory and compare it against the historical calculation result in the database corresponding to the input data and the configuration parameters, to obtain a verification result.
Optionally, the input data includes image data acquired by a peripheral, and the input data in the simulation memory includes image data acquired by a peripheral or a calculation result corresponding to a previous network layer.
Optionally, the verification module 306 is further configured to obtain from the database the historical calculation results corresponding to the input data and configuration parameters of each network layer and compare the calculation result of each network layer against them separately; or
the verification module 306 is further configured to obtain from the database the historical calculation result corresponding to the input data and configuration parameters of the entire network model and compare the final calculation result of all network layers against it.
Optionally, the calculation engine includes any one of the following:
a calculation engine based on a field-programmable gate array (FPGA);
a calculation engine based on an application-specific integrated circuit (ASIC);
a calculation engine based on a graphics processing unit (GPU).
Optionally, the simulation configuration module 303 is further configured to read the configuration parameters corresponding to a new network layer from the simulation memory and update the calculation engine of the corresponding network layer according to the configuration parameters;
the calculation module 304 is further configured to read the calculation result of the previous network layer in the simulation memory into the updated calculation engine as input data for calculation, to obtain a new calculation result;
the result storage module 305 is further configured to store the new calculation result in the simulation memory;
the result storage module 305 is further configured to store in the simulation memory the final calculation result obtained by repeating the above steps until all network layers have been computed.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps in the network model data access method provided by the embodiments of the present invention.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps in the network model data access method provided by the embodiments of the present invention.
It should be noted that, for brevity of description, each of the foregoing method embodiments is expressed as a series of action combinations; however, those skilled in the art should appreciate that the present application is not limited by the described order of actions, because according to the present application certain steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also appreciate that the embodiments described in the specification are all optional embodiments, and the actions and modules involved are not necessarily required by the present application.
In the above embodiments, the description of each embodiment has its own emphasis; for a part not detailed in one embodiment, reference may be made to the relevant descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of the units is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. Furthermore, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software program module.
If the integrated unit is implemented in the form of a software program module and sold or used as an independent product, it may be stored in a computer-readable memory. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
A person of ordinary skill in the art can understand that all or part of the steps in the various methods of the above embodiments may be completed by a program instructing relevant hardware; the program may be stored in a computer-readable memory, and the memory may include a flash drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
The embodiments of the present application have been described in detail above; specific examples have been used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method of the present application and its core idea. Meanwhile, a person of ordinary skill in the art, based on the idea of the present application, may make changes to the specific implementations and the scope of application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

  1. A network model data access method, characterized in that the method comprises:
    obtaining input data and configuration parameters, the configuration parameters comprising the configuration parameters of each network layer;
    storing the input data and the configuration parameters in a simulation memory, the simulation memory comprising arrays and array indices;
    reading the configuration parameters of the corresponding network layer from the simulation memory, and configuring the calculation engine of the corresponding network layer according to the configuration parameters;
    reading the input data in the simulation memory into the calculation engine for calculation, to obtain a calculation result; and
    storing the calculation result in the simulation memory.
  2. The method according to claim 1, characterized in that, after the storing of the calculation result in the simulation memory, the method further comprises:
    reading the calculation result in the simulation memory and comparing it against the historical calculation result in the database corresponding to the input data and the configuration parameters, to obtain a verification result.
  3. The method according to claim 2, characterized in that the input data comprises image data acquired by a peripheral, and the input data in the simulation memory comprises image data acquired by a peripheral or a calculation result corresponding to a previous network layer.
  4. The method according to claim 3, characterized in that the reading of the calculation result in the simulation memory and comparing it against the historical calculation result in the database corresponding to the input data and the configuration parameters to obtain a verification result comprises:
    obtaining from the database the historical calculation results corresponding to the input data and configuration parameters of each network layer, and comparing the calculation result of each network layer against them separately; or
    obtaining from the database the historical calculation result corresponding to the input data and configuration parameters of the entire network model, and comparing the final calculation result of all network layers against it.
  5. The method according to any one of claims 1 to 4, characterized in that the calculation engine comprises any one of the following:
    a calculation engine based on a field-programmable gate array (FPGA);
    a calculation engine based on an application-specific integrated circuit (ASIC);
    a calculation engine based on a graphics processing unit (GPU).
  6. The method according to claim 2, characterized in that, after the storing of the calculation result in the simulation memory, the method further comprises:
    reading the configuration parameters corresponding to a new network layer from the simulation memory, and updating the calculation engine of the corresponding network layer according to the configuration parameters;
    reading the calculation result of the previous network layer in the simulation memory into the updated calculation engine as input data for calculation, to obtain a new calculation result;
    storing the new calculation result in the simulation memory; and
    repeating the above steps until all network layers have been computed, and storing the final calculation result in the simulation memory.
  7. A network model data access apparatus, characterized in that the apparatus comprises:
    an obtaining module, configured to obtain input data and configuration parameters, the configuration parameters comprising the configuration parameters of each network layer;
    a simulation storage module, configured to store the input data and the configuration parameters in a simulation memory in the form of indexed arrays;
    a simulation configuration module, configured to read the configuration parameters of the corresponding network layer from the simulation memory, and configure the calculation engine of the corresponding network layer according to the configuration parameters;
    a calculation module, configured to read the input data in the simulation memory into the calculation engine for calculation, to obtain a calculation result; and
    a result storage module, configured to store the calculation result in the simulation memory.
  8. The apparatus according to claim 7, characterized in that the apparatus further comprises:
    a comparison and verification module, configured to read the calculation result in the simulation memory and compare it against the historical calculation result in the database corresponding to the input data and the configuration parameters, to obtain a verification result.
  9. An electronic device, characterized by comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps in the network model data access method according to any one of claims 1 to 6.
  10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, wherein the computer program, when executed by a processor, implements the steps in the network model data access method according to any one of claims 1 to 6.
PCT/CN2018/120563 2018-12-12 2018-12-12 Network model data access method and device and electronic device WO2020118555A1 (en)


Publications (1)

Publication Number Publication Date
WO2020118555A1 true WO2020118555A1 (en) 2020-06-18



Also Published As

Publication number Publication date
CN111542818A (en) 2020-08-14
CN111542818B (en) 2023-06-06


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18943176

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18943176

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 30.09.2021)