WO2019200545A1 - Method for operation of network model and related product - Google Patents
Method for operation of network model and related product
- Publication number: WO2019200545A1 (application PCT/CN2018/083436)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- network model
- weight data
- output result
- data
- updated
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24133—Distances to prototypes
- G06F18/24137—Distances to cluster centroïds
- G06F18/2414—Smoothing the distance, e.g. radial basis function networks [RBFN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/30—Circuit design
- G06F30/36—Circuit design at the analogue level
- G06F30/367—Design verification, e.g. using simulation, simulation program with integrated circuit emphasis [SPICE], direct methods or relaxation methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/445—Program loading or initiating
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/10—Interfaces, programming languages or software development kits, e.g. for simulating neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
- G06N5/046—Forward inferencing; Production systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Definitions
- The present application relates to the field of information processing technologies, and in particular to a method for operating a network model and related products.
- The embodiments of the present application provide a method for operating a network model and related products, which support both a simulated run of the network model and a run in the real hardware environment; the simulated run allows the network model to be tested in advance, improving computing precision and user experience.
- The run in the real hardware environment allows the network model to be deployed directly on the target hardware platform to perform high-performance computing.
- A method for operating a network model comprises the following steps: receiving a weight data group sent by a network model compiler; updating the n layers of weight data of the network model according to the weight data group to obtain an updated network model; and
- extracting preset data, inputting the preset data as input data into the updated network model for operation to obtain an output result, and displaying the output result.
- An operating platform for a network model includes:
- a transceiver unit configured to receive a weight data group sent by a network model compiler;
- an updating unit configured to update the n layers of weight data of the network model according to the weight data group to obtain an updated network model; and
- a processing unit configured to extract preset data, input the preset data as input data into the updated network model for operation to obtain an output result, and display the output result.
- A computer-readable storage medium stores a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method of the first aspect.
- A computer program product comprises a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform the method of the first aspect.
- In the technical solution, the updated network model is run in simulation to obtain an output result, and the output result is then displayed, so that the user can judge from the output result whether the network model suits the corresponding hardware structure; this improves the user experience.
- The run in the real hardware environment allows the network model to be deployed directly on the target hardware platform to perform high-performance computing.
- FIG. 1 is a schematic flowchart of a method for operating a network model according to an embodiment of the present application.
- FIG. 2 is a schematic structural diagram of an operating platform for a network model according to an embodiment of the present application.
- Reference to "an embodiment" herein means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application.
- The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor are they independent or alternative embodiments mutually exclusive with other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
- Neural networks have broad and attractive prospects in the fields of system identification, pattern recognition, and intelligent control. In intelligent control in particular, people are especially interested in the self-learning capability of neural networks, and regard this important feature as one of the keys to solving the problem of controller adaptability in automatic control.
- A neural network is a complex network system formed by a large number of simple processing units (called neurons) that are widely interconnected. It reflects many basic features of human brain function and is a highly complex nonlinear dynamic learning system. Neural networks feature massive parallelism, distributed storage and processing, self-organization, adaptivity, and self-learning, and are particularly well suited to imprecise and fuzzy information-processing problems that require many factors and conditions to be considered simultaneously.
- The development of neural networks is related to neuroscience, mathematical science, cognitive science, computer science, artificial intelligence, information science, cybernetics, robotics, microelectronics, psychology, optical computing, molecular biology, and more; it is an emerging interdisciplinary field.
- The basis of neural networks is the neuron.
- A neuron is a biological model based on the nerve cells of the biological nervous system. In studying the biological nervous system to explore the mechanisms of artificial intelligence, people formalized the neuron mathematically, producing the mathematical model of the neuron.
- A large number of neurons of the same form, connected together, make up a neural network.
- The neural network is a highly nonlinear dynamic system. Although the structure and function of each individual neuron are simple, the dynamic behavior of a network of neurons is very complex; a neural network can therefore express a wide variety of phenomena in the real physical world.
- The neural network model is described on the basis of the mathematical model of the neuron.
- An artificial neural network is a description of the first-order properties of the human brain system. Simply put, it is a mathematical model.
- A neural network model is represented by its network topology, node characteristics, and learning rules.
- The great appeal of neural networks includes: parallel distributed processing, high robustness and fault tolerance, distributed storage and learning capability, and the ability to approximate complex nonlinear relationships to arbitrary accuracy.
- Typical neural network models in wide use include the BP neural network, the Hopfield network, the ART network, and the Kohonen network.
- FIG. 1 shows a method for operating a network model according to the present application.
- The method is performed by a neural network chip, which may specifically include a dedicated neural network chip, such as an AI chip; of course, in practical applications,
- the chip may also be a general-purpose processing chip such as a CPU or an FPGA.
- The present application does not limit the specific form of the above neural network chip. As shown in FIG. 1, the method includes the following steps:
- Step S101: Receive a weight data group sent by a network model compiler.
- The weight data group sent by the network model compiler in step S101 may be received in a number of ways.
- For example, in one optional technical solution of the present application, it may be received wirelessly, including but not limited to Bluetooth, Wi-Fi, and the like; of course, in another optional technical solution of the present application, it may be received over a wired connection, including but not limited to a bus, a port, or pins.
- Step S102: Update the n layers of weight data of the network model according to the weight data group to obtain the updated network model.
- An implementation of step S102 may specifically include:
- extracting the weight data corresponding to each layer from the weight data group, and replacing the original weight data of the network model with the weight data corresponding to each layer.
- Step S103: Extract preset data, input the preset data as input data into the updated network model, perform the operation to obtain an output result, and display the output result.
- The preset data in the above step may be labelled data, and the data may be stored in the software memory of the chip.
- An implementation of step S103 may specifically be:
- extracting the preset data, and inputting the preset data as input data into the updated network model, which calls the software memory to perform the operation and obtain the output result.
- An implementation of step S103 may alternatively include: traversing all computing nodes of the network model, importing the parameter values in the weight data group, reserving storage space in the software memory, traversing all computing nodes in computation order, applying a scheduling strategy for heterogeneous computing, calling the compute function of the designated node according to the scheduling strategy, and collecting the results to obtain the output result.
- In the technical solution, the updated network model is run in simulation to obtain an output result, and the output result is then displayed, so that the user can judge from the output result whether the network model suits the corresponding hardware structure; this improves the user experience.
- Training may include: inputting a large number of labelled samples (generally 50 or more) into the original neural network model (whose weight data group holds initial values) and performing multiple iterations to update the initial weights. Each iteration includes an n-layer forward operation and an n-layer inverse operation, and the weight gradients of the n-layer inverse operation update the weights of the corresponding layers; computing over many samples updates the weight data group repeatedly.
- The trained neural network model receives the data to be computed and performs the n-layer forward operation on that data with the trained weight data group to obtain the output result of the forward operation.
- The output result can be analyzed to obtain the computation result of the neural network; for example, if the model is a neural network model for face recognition, the computation result is read as a match or a mismatch.
- Training a neural network model requires a very large amount of computation, because in the n-layer forward operation and the n-layer inverse operation, every layer involves a large amount of computation; in a face-recognition neural network model,
- most of the operations in each layer are convolutions.
- The convolution input data has thousands of rows and thousands of columns, so a single convolution over data of this size may involve up to 10^6 multiplications.
- The demands on the processor are therefore very high, and performing such operations incurs a large overhead.
- Moreover, this computation requires multiple iterations across n layers, and every sample must be computed once, which further increases the computational overhead. Such computational overhead is currently unattainable on an FPGA; the excessive computational overhead and power consumption would require a very high hardware configuration.
- The present application further provides an operating platform for a network model.
- The operating platform of the network model includes:
- a transceiver unit 201 configured to receive a weight data group sent by a network model compiler;
- The weight data group sent by the network model compiler may be received by the transceiver unit 201 in a number of ways.
- For example, it may be received wirelessly, including but not limited to Bluetooth or Wi-Fi, or over a wired connection, including but not limited to a bus, a port, or pins.
- an updating unit 202 configured to update the n layers of weight data of the network model according to the weight data group to obtain the updated network model; and
- a processing unit 203 configured to extract preset data, input the preset data as input data into the updated network model for operation to obtain an output result, and display the output result.
- In the technical solution, the updated network model is run in simulation to obtain an output result, and the output result is then displayed, so that the user can judge from the output result whether the network model suits the corresponding hardware structure; this improves the user experience.
- The updating unit 202 is specifically configured to extract the weight data corresponding to each layer from the weight data group and replace the original weight data of the network model with the weight data corresponding to each layer to obtain the updated network model.
- The processing unit 203 is specifically configured to input the preset data as input data into the updated network model, which calls the software memory to perform the operation and obtain the output result.
- The processing unit 203 is specifically configured to traverse all computing nodes of the network model, import the parameter values in the weight data group, reserve storage space in the software memory, traverse all computing nodes in computation order, apply a scheduling strategy for heterogeneous computing, call the compute function of the designated node according to the scheduling strategy, and collect the results to obtain the output result.
- The present application further provides a computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method shown in FIG. 1 and refinements of that method.
- The present application further provides a computer program product comprising a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform the method shown in FIG. 1 and refinements of that method.
- The disclosed apparatus may be implemented in other ways.
- The device embodiments described above are merely illustrative.
- The division of the units is only a division by logical function; in actual implementation there may be other divisions, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
- The mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical or take other forms.
- The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- The functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
- The above integrated unit may be implemented in the form of hardware or in the form of a software program module.
- The integrated unit, if implemented in the form of a software program module and sold or used as an independent product, may be stored in a computer-readable memory.
- Such a computer-readable memory includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application.
- The foregoing memory includes media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
Abstract
The present disclosure provides a method for operating a network model and related products. The method includes the following steps: receiving a weight data group sent by a network model compiler; updating the n layers of weight data of the network model according to the weight data group to obtain an updated network model; and extracting preset data, inputting the preset data as input data into the updated network model for operation to obtain an output result, and displaying the output result. The technical solution provided by the present application has the advantage of a good user experience.
Description
The present application relates to the field of information processing technologies, and in particular to a method for operating a network model and related products.
With the continuous development of information technology and people's ever-growing needs, the demand for timely information keeps rising. Network models such as neural network models are applied ever more widely as technology develops, and devices such as computers and servers can both train network models and run them. However, not every platform can carry out the training of existing neural networks, so a network model trained on one platform is often transferred to another platform for use. There is then no guarantee that the transferred model will fit the new hardware structure, which lowers the computing precision of the platform and degrades the user experience.
Summary of the Application
The embodiments of the present application provide a method for operating a network model and related products, which support both a simulated run of the network model and a run in the real hardware environment. The simulated run allows the network model to be trial-run in advance, improving computing precision and user experience; the run in the real hardware environment allows the network model to be deployed directly on the target hardware platform to perform high-performance computing.
In a first aspect, a method for operating a network model is provided, the method comprising the following steps:
receiving a weight data group sent by a network model compiler;
updating the n layers of weight data of the network model according to the weight data group to obtain an updated network model; and
extracting preset data, inputting the preset data as input data into the updated network model for operation to obtain an output result, and displaying the output result.
In a second aspect, an operating platform for a network model is provided, the operating platform comprising:
a transceiver unit configured to receive a weight data group sent by a network model compiler;
an updating unit configured to update the n layers of weight data of the network model according to the weight data group to obtain an updated network model; and
a processing unit configured to extract preset data, input the preset data as input data into the updated network model for operation to obtain an output result, and display the output result.
In a third aspect, a computer-readable storage medium is provided, storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method of the first aspect.
In a fourth aspect, a computer program product is provided, comprising a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform the method of the first aspect.
After the network model is updated, the technical solution provided by the present application runs the network model in simulation to obtain an output result and then displays that result, so that the user can judge from the output result whether the network model suits the corresponding hardware structure; this improves the user experience. The run in the real hardware environment allows the network model to be deployed directly on the target hardware platform to perform high-performance computing.
To describe the technical solutions in the embodiments of the present application more clearly, the accompanying drawings required for the description of the embodiments are briefly introduced below. Evidently, the drawings described below show some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of a method for operating a network model according to an embodiment of the present application.
FIG. 2 is a schematic structural diagram of an operating platform for a network model according to an embodiment of the present application.
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Evidently, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
The terms "first", "second", "third", "fourth", and the like in the specification, the claims, and the accompanying drawings of the present application are used to distinguish different objects rather than to describe a particular order. Furthermore, the terms "include" and "have", and any variants thereof, are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device comprising a series of steps or units is not limited to the listed steps or units, but optionally further comprises steps or units that are not listed, or optionally further comprises other steps or units inherent to the process, method, product, or device.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor are they independent or alternative embodiments mutually exclusive with other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
Since mathematical methods for simulating real human neural networks came into being, people have gradually become accustomed to calling such artificial neural networks simply neural networks. Neural networks have broad and attractive prospects in fields such as system identification, pattern recognition, and intelligent control. In intelligent control in particular, people are especially interested in the self-learning capability of neural networks, and regard this important feature as one of the keys to solving the long-standing problem of controller adaptability in automatic control.
A neural network (NN) is a complex network system formed by a large number of simple processing units (called neurons) that are widely interconnected. It reflects many basic features of human brain function and is a highly complex nonlinear dynamic learning system. Neural networks feature massive parallelism, distributed storage and processing, self-organization, adaptivity, and self-learning, and are particularly suited to imprecise and fuzzy information-processing problems that require many factors and conditions to be considered simultaneously. The development of neural networks is related to neuroscience, mathematical science, cognitive science, computer science, artificial intelligence, information science, cybernetics, robotics, microelectronics, psychology, optical computing, molecular biology, and more; it is an emerging interdisciplinary field.
The basis of neural networks is the neuron.
A neuron is a biological model based on the nerve cells of the biological nervous system. In studying the biological nervous system to explore the mechanisms of artificial intelligence, people formalized the neuron mathematically, producing the mathematical model of the neuron.
A large number of neurons of the same form, connected together, make up a neural network. The neural network is a highly nonlinear dynamic system. Although the structure and function of each individual neuron are simple, the dynamic behavior of a network of neurons is very complex; a neural network can therefore express a wide variety of phenomena in the real physical world.
A neural network model is described on the basis of the mathematical model of the neuron. An artificial neural network is a description of the first-order properties of the human brain system. Simply put, it is a mathematical model. A neural network model is represented by its network topology, node characteristics, and learning rules. The great appeal of neural networks lies mainly in: parallel distributed processing, high robustness and fault tolerance, distributed storage and learning capability, and the ability to approximate complex nonlinear relationships to arbitrary accuracy.
Among research topics in the control field, the control of uncertain systems has long been one of the central themes of control theory, yet the problem has never been solved effectively. Using the learning capability of a neural network, the network can automatically learn the characteristics of an uncertain system while controlling it, and thus automatically adapt as those characteristics vary over time, with the aim of achieving optimal control of the system; this is clearly an exciting intention and approach.
There are now dozens of artificial neural network models. Typical models in wide use include the BP neural network, the Hopfield network, the ART network, and the Kohonen network.
Referring to FIG. 1, FIG. 1 shows a method for operating a network model according to the present application. The method is performed by a neural network chip. The neural network chip may specifically include a dedicated neural network chip, such as an AI chip; of course, in practical applications it may also include a general-purpose processing chip such as a CPU or an FPGA. The present application does not limit the specific form of the neural network chip. As shown in FIG. 1, the method includes the following steps:
Step S101: receive a weight data group sent by a network model compiler.
The weight data group sent by the network model compiler in step S101 may be received in a number of ways. For example, in one optional technical solution of the present application it may be received wirelessly, including but not limited to Bluetooth, Wi-Fi, and the like; of course, in another optional technical solution of the present application it may be received over a wired connection, including but not limited to a bus, a port, or pins.
Step S102: update the n layers of weight data of the network model according to the weight data group to obtain an updated network model.
An implementation of step S102 may specifically include:
extracting the weight data corresponding to each layer from the weight data group, and replacing the original weight data of the network model with the weight data corresponding to each layer.
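As an illustration only, the per-layer replacement described above might look like the following minimal sketch; the `NetworkModel` class, the dict-shaped weight data group, and the shape check are assumptions made for the example, not details taken from the patent.

```python
# Hypothetical sketch of step S102: swap in the weight data received
# from the network model compiler, layer by layer.
from typing import Dict, List

import numpy as np


class NetworkModel:
    """A toy n-layer model that keeps one weight array per layer."""

    def __init__(self, weights: List[np.ndarray]):
        self.weights = weights  # weights[i] holds the weight data of layer i


def update_weights(model: NetworkModel,
                   weight_group: Dict[int, np.ndarray]) -> NetworkModel:
    """Replace the original weight data of each listed layer with the
    received weight data, leaving unlisted layers untouched."""
    for layer, new_w in weight_group.items():
        if new_w.shape != model.weights[layer].shape:
            raise ValueError(f"layer {layer}: weight shape mismatch")
        model.weights[layer] = new_w
    return model
```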
Step S103: extract preset data, input the preset data as input data into the updated network model for operation to obtain an output result, and display the output result.
The preset data in the above step may be labelled data, and the data may be stored in the software memory of the chip.
An implementation of step S103 may specifically be:
extracting the preset data, and inputting the preset data as input data into the updated network model, which calls the software memory to perform the operation and obtain the output result.
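Continuing the hypothetical `NetworkModel` from the sketch above, the in-memory trial run could be pictured as follows; the fully connected ReLU layers are an assumption, since the patent does not fix the layer types.

```python
# Minimal sketch of step S103: run the labelled preset data through the
# updated model entirely in software memory and return the output result.
def simulate(model: NetworkModel, preset: np.ndarray) -> np.ndarray:
    x = preset
    for w in model.weights:         # n-layer forward operation
        x = np.maximum(x @ w, 0.0)  # one assumed fully connected ReLU layer
    return x

# Example use, with the objects from the previous sketch:
#   output = simulate(update_weights(model, weight_group), preset_data)
#   print(output)  # display the output result so the user can judge suitability
```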
An implementation of step S103 may alternatively include:
traversing all computing nodes of the network model, importing the parameter values in the weight data group, reserving storage space in the software memory, traversing all computing nodes in computation order, applying a scheduling strategy for heterogeneous computing, calling the compute function of the designated node according to the scheduling strategy, and collecting the results to obtain the output result.
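The node-by-node execution just described can be sketched as below; the `Node` structure, the device tag, and the single-host dispatch are hypothetical simplifications, and a real scheduling strategy for heterogeneous computing would be considerably more involved.

```python
# Assumption-laden sketch of the alternative step S103: traverse the compute
# graph in computation order, call each node's compute function, and collect
# the results into the storage reserved in software memory.
from dataclasses import dataclass
from typing import Callable, Dict, List

import numpy as np


@dataclass
class Node:
    name: str
    inputs: List[str]                   # names of upstream results
    compute: Callable[..., np.ndarray]  # the node's compute function
    device: str = "cpu"                 # toy heterogeneous tag: "cpu", "npu", ...


def run_graph(nodes: List[Node],
              params: Dict[str, np.ndarray],
              feed: Dict[str, np.ndarray]) -> np.ndarray:
    results: Dict[str, np.ndarray] = {}  # storage reserved in software memory
    results.update(params)               # imported parameter values (weights)
    results.update(feed)                 # the preset input data
    for node in nodes:                   # assumed to be in computation order
        args = [results[name] for name in node.inputs]
        # A real scheduling strategy would route the call to node.device;
        # in this sketch every compute function simply runs on the host.
        results[node.name] = node.compute(*args)
    return results[nodes[-1].name]       # the collected output result


# Example: a two-node graph computing relu(x @ w)
graph = [
    Node("matmul", ["x", "w"], lambda x, w: x @ w),
    Node("relu", ["matmul"], lambda z: np.maximum(z, 0.0)),
]
out = run_graph(graph, params={"w": np.eye(3)}, feed={"x": np.ones((2, 3))})
```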
After the network model is updated, the technical solution provided by the present application runs the network model in simulation to obtain an output result and then displays that result, so that the user can judge from the output result whether the network model suits the corresponding hardware structure; this improves the user experience.
A refinement of the above technical solution is described below. A neural network model has two major parts: training and forward operation. Training is the process of optimizing the neural network model, and a specific implementation may include: inputting a large number of labelled samples (generally 50 or more) one by one into the original neural network model (the weight data group at this point holds initial values) and performing multiple iterations to update the initial weights. Each iteration includes an n-layer forward operation and an n-layer inverse operation, and the weight gradients of the n-layer inverse operation update the weights of the corresponding layers; after computing over many samples, the weight data group has been updated many times and the training of the neural network model is complete. The trained neural network model then receives the data to be computed and performs the n-layer forward operation on that data with the trained weight data group to obtain the output result of the forward operation, and analyzing the output result yields the computation result of the neural network. For example, if the model is a neural network model for face recognition, its computation result is read as a match or a mismatch.
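The training loop in the preceding paragraph can be condensed into the following hedged sketch; the mean-squared-error loss, the ReLU activations, and the learning rate are illustrative choices, since the patent only states that each iteration performs an n-layer forward operation and an n-layer inverse operation whose weight gradients update the corresponding layers.

```python
# Illustrative training sketch: repeated n-layer forward and n-layer inverse
# operations over the labelled samples, with each layer's weights updated
# from its weight gradient. Loss, activation, and learning rate are
# assumptions, not details from the patent.
import numpy as np


def train(weights, samples, labels, iterations=10, lr=0.01):
    for _ in range(iterations):               # multiple iteration operations
        for x, y in zip(samples, labels):     # 50 or more labelled samples
            acts = [x]                        # n-layer forward operation
            for w in weights:
                acts.append(np.maximum(acts[-1] @ w, 0.0))
            delta = (acts[-1] - y) * (acts[-1] > 0)    # output error (MSE, ReLU)
            for i in reversed(range(len(weights))):    # n-layer inverse operation
                grad_w = np.outer(acts[i], delta)      # weight gradient, layer i
                delta = (weights[i] @ delta) * (acts[i] > 0)
                weights[i] -= lr * grad_w              # update this layer's weights
    return weights
```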
Training a neural network model requires a very large amount of computation, because in the n-layer forward operation and the n-layer inverse operation every single layer involves a large amount of computation. Taking a face-recognition neural network model as an example, most of the operations in each layer are convolutions, and the input data of a convolution has thousands of rows and thousands of columns, so a single convolution over data of this size may involve up to 10^6 multiplications (for example, even one pass over a roughly 1000×1000 input already touches on the order of 10^6 values). This places very high demands on the processor and incurs a large overhead, all the more so because such operations must go through many iterations and n layers, and every sample must be computed once, which further increases the computational overhead. Such computational overhead is currently unattainable on an FPGA: the excessive computational overhead and power consumption would require a hardware configuration whose cost is clearly unrealistic for an FPGA device. An FPGA therefore completes the training of a neural network model by being configured with a weight data group; but the user has no way of knowing whether an FPGA device suits that configured weight data group. Here, preset data is run through the chip's internal operations, that is, the software memory is called to run the network model, so that suitability can be determined from the output result, which improves the user experience.
The present application further provides an operating platform for a network model. Referring to FIG. 2, the operating platform of the network model includes:
a transceiver unit 201 configured to receive a weight data group sent by a network model compiler;
The weight data group sent by the network model compiler may be received by the transceiver unit 201 in a number of ways. For example, in one optional technical solution of the present application it may be received wirelessly, including but not limited to Bluetooth, Wi-Fi, and the like; of course, in another optional technical solution of the present application it may be received over a wired connection, including but not limited to a bus, a port, or pins.
an updating unit 202 configured to update the n layers of weight data of the network model according to the weight data group to obtain an updated network model; and
a processing unit 203 configured to extract preset data, input the preset data as input data into the updated network model for operation to obtain an output result, and display the output result.
After the network model is updated, the technical solution provided by the present application runs the network model in simulation to obtain an output result and then displays that result, so that the user can judge from the output result whether the network model suits the corresponding hardware structure; this improves the user experience.
Optionally,
the updating unit 202 is specifically configured to extract the weight data corresponding to each layer from the weight data group and replace the original weight data of the network model with the weight data corresponding to each layer to obtain the updated network model.
Optionally,
the processing unit 203 is specifically configured to input the preset data as input data into the updated network model, which calls the software memory to perform the operation and obtain the output result.
The processing unit 203 is specifically configured to traverse all computing nodes of the network model, import the parameter values in the weight data group, reserve storage space in the software memory, traverse all computing nodes in computation order, apply a scheduling strategy for heterogeneous computing, call the compute function of the designated node according to the scheduling strategy, and collect the results to obtain the output result.
The present application further provides a computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method shown in FIG. 1 and refinements of that method.
The present application further provides a computer program product comprising a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform the method shown in FIG. 1 and refinements of that method.
It should be noted that, for brevity of description, each of the foregoing method embodiments is expressed as a series of action combinations; however, those skilled in the art should know that the present application is not limited by the described order of actions, because according to the present application some steps may be performed in other orders or simultaneously. Those skilled in the art should also know that the embodiments described in the specification are all optional embodiments, and the actions and modules involved are not necessarily required by the present application.
In the above embodiments, the description of each embodiment has its own emphasis; for a part that is not detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division of the units is only a division by logical function, and there may be other divisions in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical or take other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software program module.
If the integrated unit is implemented in the form of a software program module and sold or used as an independent product, it may be stored in a computer-readable memory. Based on such an understanding, the technical solution of the present application essentially, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
A person of ordinary skill in the art may understand that all or some of the steps in the various methods of the above embodiments may be completed by a program instructing the relevant hardware. The program may be stored in a computer-readable memory, and the memory may include a flash drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
The embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method of the present application and its core idea. Meanwhile, a person of ordinary skill in the art may make changes to the specific implementations and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.
Claims (10)
- A method for operating a network model, characterized in that the method comprises the following steps: receiving a weight data group sent by a network model compiler; updating the n layers of weight data of the network model according to the weight data group to obtain an updated network model; and extracting preset data, inputting the preset data as input data into the updated network model for operation to obtain an output result, and displaying the output result.
- The method according to claim 1, characterized in that updating the n layers of weight data of the network model according to the weight data group to obtain the updated network model specifically comprises: extracting the weight data corresponding to each layer from the weight data group, and replacing the original weight data of the network model with the weight data corresponding to each layer to obtain the updated network model.
- The method according to claim 1, characterized in that inputting the preset data as input data into the updated network model for operation to obtain the output result specifically comprises: inputting the preset data as input data into the updated network model, which calls the software memory to perform the operation and obtain the output result.
- The method according to claim 1, characterized in that inputting the preset data as input data into the updated network model for operation to obtain the output result specifically comprises: traversing all computing nodes of the network model, importing the parameter values in the weight data group, reserving storage space in the software memory, traversing all computing nodes in computation order, applying a scheduling strategy for heterogeneous computing, calling the compute function of the designated node according to the scheduling strategy, and collecting the results to obtain the output result.
- An operating platform for a network model, characterized in that the operating platform comprises: a transceiver unit configured to receive a weight data group sent by a network model compiler; an updating unit configured to update the n layers of weight data of the network model according to the weight data group to obtain an updated network model; and a processing unit configured to extract preset data, input the preset data as input data into the updated network model for operation to obtain an output result, and display the output result.
- The operating platform according to claim 5, characterized in that the updating unit is specifically configured to extract the weight data corresponding to each layer from the weight data group and replace the original weight data of the network model with the weight data corresponding to each layer to obtain the updated network model.
- The operating platform according to claim 5, characterized in that the processing unit is specifically configured to input the preset data as input data into the updated network model, which calls the software memory to perform the operation and obtain the output result.
- The operating platform according to claim 5, characterized in that the processing unit is specifically configured to traverse all computing nodes of the network model, import the parameter values in the weight data group, reserve storage space in the software memory, traverse all computing nodes in computation order, apply a scheduling strategy for heterogeneous computing, call the compute function of the designated node according to the scheduling strategy, and collect the results to obtain the output result.
- A computer-readable storage medium, characterized in that it stores a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1 to 4.
- A computer program product, characterized in that the computer program product comprises a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform the method according to any one of claims 1 to 4.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201880001817.7A CN109313673A (zh) | 2018-04-17 | 2018-04-17 | Method for operation of network model and related product |
US17/044,502 US20210042621A1 (en) | 2018-04-17 | 2018-04-17 | Method for operation of network model and related product |
PCT/CN2018/083436 WO2019200545A1 (zh) | 2018-04-17 | 2018-04-17 | Method for operation of network model and related product |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2018/083436 WO2019200545A1 (zh) | 2018-04-17 | 2018-04-17 | Method for operation of network model and related product |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019200545A1 (zh) | 2019-10-24 |
Family
ID=65221735
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/083436 WO2019200545A1 (zh) | 2018-04-17 | 2018-04-17 | 网络模型的运行方法及相关产品 |
Country Status (3)
Country | Link |
---|---|
US (1) | US20210042621A1 (zh) |
CN (1) | CN109313673A (zh) |
WO (1) | WO2019200545A1 (zh) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109492241B (zh) * | 2018-08-10 | 2020-03-10 | 中科寒武纪科技股份有限公司 | Conversion method and apparatus, computer device, and storage medium |
CN109918237B (zh) * | 2019-04-01 | 2022-12-09 | 中科寒武纪科技股份有限公司 | Method for determining abnormal network layer and related product |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102004446A (zh) * | 2010-11-25 | 2011-04-06 | 福建师范大学 | Adaptive method for BP neurons with a multilayer structure |
US20140330402A1 (en) * | 2013-05-02 | 2014-11-06 | Aspen Technology, Inc. | Computer Apparatus And Method using Model Structure Information of Model Predictive Control |
CN106295799A (zh) * | 2015-05-12 | 2017-01-04 | 核工业北京地质研究院 | Implementation method for a deep-learning multilayer neural network |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103323772B (zh) * | 2012-03-21 | 2016-02-10 | 北京光耀能源技术股份有限公司 | Method for analyzing the operating state of a wind turbine based on a neural network model |
US11244225B2 (en) * | 2015-07-10 | 2022-02-08 | Samsung Electronics Co., Ltd. | Neural network processor configurable using macro instructions |
CN106357419A (zh) | 2015-07-16 | 2017-01-25 | 中兴通讯股份有限公司 | Network management data processing method and apparatus |
CN106529820A (zh) | 2016-11-21 | 2017-03-22 | 北京中电普华信息技术有限公司 | Method and system for predicting operational indicators |
US10795836B2 (en) * | 2017-04-17 | 2020-10-06 | Microsoft Technology Licensing, Llc | Data processing performance enhancement for neural networks using a virtualized data iterator |
US11373266B2 (en) * | 2017-05-05 | 2022-06-28 | Intel Corporation | Data parallelism and halo exchange for distributed machine learning |
US10019668B1 (en) * | 2017-05-19 | 2018-07-10 | Google Llc | Scheduling neural network processing |
2018
- 2018-04-17 WO PCT/CN2018/083436 patent/WO2019200545A1/zh active Application Filing
- 2018-04-17 CN CN201880001817.7A patent/CN109313673A/zh active Pending
- 2018-04-17 US US17/044,502 patent/US20210042621A1/en not_active Abandoned
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102004446A (zh) * | 2010-11-25 | 2011-04-06 | 福建师范大学 | Adaptive method for BP neurons with a multilayer structure |
US20140330402A1 (en) * | 2013-05-02 | 2014-11-06 | Aspen Technology, Inc. | Computer Apparatus And Method using Model Structure Information of Model Predictive Control |
CN106295799A (zh) * | 2015-05-12 | 2017-01-04 | 核工业北京地质研究院 | Implementation method for a deep-learning multilayer neural network |
Non-Patent Citations (1)
Title |
---|
YAN, MING: "Hardware Implementation of Neural Network based on FPGA", CHINESE MASTER'S THESES, no. 02, 15 February 2009 (2009-02-15) * |
Also Published As
Publication number | Publication date |
---|---|
CN109313673A (zh) | 2019-02-05 |
US20210042621A1 (en) | 2021-02-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019200544A1 (zh) | Application development method for a network model and related product | |
Lu et al. | Brain intelligence: go beyond artificial intelligence | |
WO2022042002A1 (zh) | Training method for a semi-supervised learning model, image processing method, and device | |
Wu et al. | Evolving RBF neural networks for rainfall prediction using hybrid particle swarm optimization and genetic algorithm | |
WO2019091020A1 (zh) | Weight data storage method and neural network processor based on the method | |
WO2023221928A1 (zh) | Recommendation method, training method, and apparatus | |
CN108809694B (zh) | Service orchestration method, system, and apparatus, and computer-readable storage medium | |
CN110674869A (zh) | Classification processing and training method and apparatus for a graph convolutional neural network model | |
CN107578014A (zh) | Information processing apparatus and method | |
CN116415654A (zh) | Data processing method and related device | |
CN109754068A (zh) | Transfer learning method based on a deep-learning pre-trained model, and terminal device | |
CN109101624A (zh) | Dialogue processing method and apparatus, electronic device, and storage medium | |
WO2023040147A1 (zh) | Neural network training method and apparatus, storage medium, and computer program | |
JP7488375B2 (ja) | Neural network generation method, device, and computer-readable storage medium | |
WO2022184124A1 (zh) | Physiological electrical signal classification method and apparatus, computer device, and storage medium | |
WO2019200545A1 (zh) | Method for operation of network model and related product | |
CN111651989B (zh) | Named entity recognition method and apparatus, storage medium, and electronic apparatus | |
CN110837567A (zh) | Method and system for implementing knowledge graph embedding | |
CN116992151A (zh) | Online course recommendation method based on a two-tower graph convolutional neural network | |
CN117744759A (zh) | Text information recognition method and apparatus, storage medium, and electronic device | |
CN108229640B (zh) | Emotion expression method, apparatus, and robot | |
CN107665349A (zh) | Training method and apparatus for multiple targets in a classification model | |
CN108737491A (zh) | Information pushing method and apparatus, storage medium, and electronic apparatus | |
WO2019200548A1 (zh) | Network model compiler and related product | |
Khouas et al. | Training Machine Learning models at the Edge: A Survey |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18915511; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 29.01.2021) |
122 | Ep: pct application non-entry in european phase | Ref document number: 18915511; Country of ref document: EP; Kind code of ref document: A1 |