CN110874550A - Data processing method, apparatus, device and system - Google Patents

Data processing method, apparatus, device and system

Info

Publication number
CN110874550A
Authority
CN
China
Prior art keywords
neural network
network model
data
neuron
activation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811016411.4A
Other languages
Chinese (zh)
Inventor
贾贝
傅蓉蓉
高帅
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201811016411.4A priority Critical patent/CN110874550A/en
Priority to PCT/CN2019/085468 priority patent/WO2020042658A1/en
Publication of CN110874550A publication Critical patent/CN110874550A/en
Pending legal-status Critical Current

Classifications

    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects (under G06V20/50 Context or environment of the image; G06V20/00 Scenes; scene-specific elements; GPHYSICS, G06 COMPUTING)
    • G06N3/04 Architecture, e.g. interconnection topology (under G06N3/02 Neural networks; G06N3/00 Computing arrangements based on biological models)
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06N3/082 Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections


Abstract

The application discloses a data processing method comprising the following steps: an edge device obtains an initial neural network model that includes at least one neuron, and inputs data to be processed into a trained neural network model to obtain result data. The result data is obtained by processing the data to be processed with the neurons of the trained neural network model; the trained neural network model is obtained by training a pruned neural network model with N groups of training samples, and the pruned neural network model is obtained by pruning the at least one neuron of the initial neural network model according to each neuron's activation information. In this way, the accuracy of data processing is improved, the amount of computation is reduced, and waste of computing resources is avoided.

Description

Data processing method, apparatus, device and system

Technical Field

The present invention relates to the field of computer technology, and in particular to a data processing method, apparatus, device and system.

Background

With the development of deep learning technology, and in particular the popularization of convolutional neural networks, deep learning has been widely applied in fields such as image processing and face recognition. At present, for reasons of model generality and design cost, a single general-purpose neural network model is usually designed for a given application scenario to perform data processing.

In a video surveillance scenario, such a general model does not take the actual surveillance environment into account, such as differences in the height and angle at which cameras are installed. If the same neural network model is used to process the images captured by different cameras, the accuracy of image processing suffers. In addition, because the model is designed to be general, it contains many neurons and many network layers, so in some specific scenarios computing resources are wasted.

Summary of the Invention

The present application discloses a data processing method, apparatus, device and system that can provide a neural network model suited to an edge device itself, or to the scene it is currently in, and use that model for the corresponding data processing, thereby improving the accuracy of data processing and avoiding waste of computing resources.

In a first aspect, the present application discloses a data processing method comprising: an edge device obtains a trained neural network model and inputs data to be processed into it to obtain result data for that data. The trained neural network model is obtained by training a pruned neural network model with N groups of training samples. The pruned neural network model is obtained by pruning the initial neural network model according to the activation information of each neuron in the initial neural network model.

Considering the large amount of computation required for model training and the limited computing resources of edge devices, the neural network model is usually trained on the data center side. In other words, the data center prunes the neurons of the initial neural network model according to each neuron's activation information to obtain the pruned neural network model. The data center then trains the pruned model with N groups of training samples to obtain the trained neural network model, and sends the trained model to the edge device so that the edge device can process data with it.

A neuron's activation information is the information the neuron produces when it is used for data processing in the initial neural network model. The activation information includes, but is not limited to, the activation value, the activation count and the average activation value, each described in detail below.

By implementing the above process, a neural network model suited to the edge device itself (or to the scene the edge device is currently in) can be obtained and used for data processing, which improves the accuracy of data processing. In addition, compared with a general-purpose neural network model of the prior art, the model is smaller, data processing is faster, and computing resources are not wasted.

In one possible implementation, each group of training samples includes input data and output data, where the output data is obtained by processing the input data with the initial neural network model. Specifically, the edge device obtains the initial neural network model from the data center, collects multiple groups of input data from the device itself or its current scene, and processes them with the initial neural network model to obtain the corresponding output data. Each pair of input data and output data then forms one group of training samples, yielding multiple groups. The edge device sends these groups of training samples to the data center, so that the data center can retrain the initial neural network model with them and obtain a trained neural network model suited to that edge device (or to the scene it is currently in).
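This sample-collection flow can be sketched as follows. The function `initial_model` and the two-element inputs are hypothetical stand-ins, since the application does not fix a concrete model or data format:

```python
def initial_model(input_data):
    # Hypothetical stand-in for the initial neural network model
    # downloaded from the data center; here it just tags the input.
    return {"label": "person" if sum(input_data) > 1.0 else "background"}

def collect_training_samples(inputs):
    """Run the initial model on locally collected inputs and pair each
    input with the model's output to form one training sample."""
    samples = []
    for x in inputs:
        y = initial_model(x)       # no manual annotation needed
        samples.append((x, y))     # one (input, output) training pair
    return samples

# The edge device would then upload `samples` to the data center.
collected = [[0.2, 0.1], [0.9, 0.8]]
samples = collect_training_samples(collected)
```

Because the output half of each pair is produced by the model itself, no user labeling step appears anywhere in the flow, matching the "no manual annotation" point above.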

Optionally, each time the initial neural network model is used for data processing, the activation information of every neuron in the model can also be recorded.

By implementing the above steps, the edge device obtains a customized trained neural network model that fits the training samples of the device itself, or of the scene it is in, and therefore produces more accurate results. Moreover, the training samples require no manual annotation, which reduces user effort while improving the accuracy of model training.

In one possible implementation, the activation information includes at least one of the following: the activation value, the activation count and the average activation value. The activation value is the output value of a neuron each time it is used for data processing in the initial neural network model. The activation count is the number of times, across multiple runs, that the neuron's activation value is less than or equal to a preset threshold (the fourth threshold). The average activation value is the mean of the neuron's activation values across multiple runs. Correspondingly, the data center's pruning of the initial neural network model includes at least one of the following: when the activation information includes the activation value, the data center deletes (prunes) the neurons of the initial neural network model whose activation value is less than or equal to a first threshold; when the activation information includes the activation count, the data center deletes the neurons whose activation count is less than or equal to a second threshold; when the activation information includes the average activation value, the data center deletes the neurons whose average activation value is less than or equal to a third threshold.
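The three pruning criteria can be sketched as follows; the threshold values and the per-neuron statistics records are illustrative assumptions, not part of the application:

```python
def prune(neurons, first_thr=0.0, second_thr=0, third_thr=0.0):
    """Drop neurons whose recorded activation statistics fall at or
    below the corresponding threshold, as in the scheme above.
    Thresholds and the record fields are illustrative assumptions."""
    kept = []
    for n in neurons:
        if n["activation_value"] <= first_thr:   # criterion 1: activation value
            continue
        if n["activation_count"] <= second_thr:  # criterion 2: activation count
            continue
        if n["avg_activation"] <= third_thr:     # criterion 3: average activation
            continue
        kept.append(n)
    return kept

neurons = [
    {"id": 0, "activation_value": 0.7, "activation_count": 12, "avg_activation": 0.4},
    {"id": 1, "activation_value": 0.0, "activation_count": 0,  "avg_activation": 0.0},
]
pruned = prune(neurons)   # neuron 1 is deleted, neuron 0 survives
```

In an actual model, deleting a neuron would also remove its incoming and outgoing connections; the sketch only shows the threshold tests.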

By implementing the above process, the data center can prune neurons from the initial neural network model according to each neuron's activation information to obtain a pruned neural network model, which can then be trained into a model adapted to the edge device or its scene. This reduces the model's size, reduces waste of computing resources, and improves the efficiency of data processing.

In one possible implementation, the trained neural network model is obtained by the data center updating the parameters of the pruned neural network model with a loss function. The loss function indicates the error between training data and prediction data, where the training data is the output produced before the fully connected layer when the input data is fed into the initial neural network model, and the prediction data is the output produced before the fully connected layer when the input data is fed into the pruned neural network model. Specifically, the data center trains the pruned model multiple times with the N groups of training samples, uses the loss function to correct the model's parameters in each round, and selects the parameters from the round with the smallest loss value as the parameters of the trained neural network model.
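The select-the-minimum-loss-round idea can be illustrated with a deliberately tiny sketch: a single scalar parameter and a random update step stand in for the unspecified model and training procedure, and a squared error stands in for the unspecified loss function.

```python
import random

def loss(pred, target):
    # Squared-error stand-in for the unspecified loss: the gap between
    # the pruned model's output (prediction) and the reference output
    # of the initial model (training data).
    return (pred - target) ** 2

def train(samples, epochs=20):
    """Illustrative loop: after each parameter update, remember the
    parameter value whose total loss over the samples was smallest."""
    random.seed(0)
    w = 0.0                                 # single scalar "model parameter"
    best_w, best_loss = w, float("inf")
    for _ in range(epochs):
        w += random.uniform(-0.5, 0.5)      # stand-in for one update round
        total = sum(loss(w * x, y) for x, y in samples)
        if total < best_loss:               # keep the minimum-loss parameters
            best_loss, best_w = total, w
    return best_w, best_loss

samples = [(1.0, 2.0), (2.0, 4.0)]          # underlying relation: y = 2x
w, l = train(samples)
```

A real data center would use gradient-based updates rather than random steps; the point of the sketch is only the bookkeeping that retains the round with the smallest loss value.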

By implementing the above process, the data center can use the loss function to obtain a more accurate trained neural network model, so that the edge device can later use it directly for data processing, improving the accuracy of data processing.

In a second aspect, the present application provides a data processing apparatus comprising functional modules or units for executing the method described in the first aspect or in any possible implementation of the first aspect.

In a third aspect, the present application provides an edge device (for example a smart camera or a roadside monitoring device) comprising a processor, a memory, a communication interface and a bus; the processor, communication interface and memory communicate with one another over the bus; the communication interface receives and sends data; the memory stores instructions; and the processor invokes the instructions in the memory to execute the method described in the first aspect or in any possible implementation of the first aspect.

In a fourth aspect, the present application provides a data processing system comprising a data center and an edge device. The data center stores an initial neural network model and trains it with N groups of training samples to obtain a trained neural network model. The edge device comprises a processor, a memory, a communication interface and a bus; the processor, communication interface and memory communicate with one another over the bus; the communication interface receives and sends data; the memory stores instructions; and the processor invokes the instructions in the memory to execute the method described in the first aspect or in any possible implementation of the first aspect.

In a fifth aspect, the present application provides a computer-readable storage medium storing instructions that, when run on a computer, cause the computer to execute the method described in the first aspect.

In a sixth aspect, the present application provides a computer program product containing instructions that, when run on a computer, cause the computer to execute the methods described in the above aspects.

On the basis of the implementations provided in the above aspects, the present application may further combine them to provide additional implementations.

Brief Description of the Drawings

FIG. 1 is a schematic structural diagram of a YOLO model provided by an embodiment of the present invention.

FIG. 2 is a schematic diagram of the network framework of a data processing system provided by an embodiment of the present invention.

FIG. 3 is a schematic flowchart of a data processing method provided by an embodiment of the present invention.

FIG. 4A and FIG. 4B are schematic structural diagrams of two network layers provided by an embodiment of the present invention.

FIG. 5 is a schematic structural diagram of a data processing apparatus provided by an embodiment of the present invention.

FIG. 6 is a schematic structural diagram of an edge device provided by an embodiment of the present invention.

Detailed Description

The technical solutions in the embodiments of the present invention are described in detail below with reference to the accompanying drawings.

First, some technical terms involved in the present invention are introduced.

An edge device is a device installed on the edge-network side. For example, in a video surveillance scenario, the edge device may be a monitoring device or a smart camera installed on a road.

A neural network model is a complex network system formed by interconnecting a large number of simple processing units called neurons. It is described in terms of a mathematical model of neurons, and offers large-scale parallelism, distributed storage and processing, and self-adaptation and self-learning. Neurons are described below. In practical applications, a neural network model is composed of at least one of the following kinds of network layer: convolutional layers, activation layers, pooling layers and fully connected layers; the number of layers of each kind is not limited by the present invention and may be one or more. Each network layer consists of one or more neurons, whose parameters (weights) may also be called the model parameters of the neural network model. Neurons within a network layer may be connected to one another, and neurons in different network layers may be connected to one another; FIG. 4A below specifically shows neurons in two adjacent network layers connected to each other.

A neuron is also called a node. Each neuron represents a particular output function, also called its excitation function. Using a neural network for data processing in essence means using the neurons of the model for data processing, that is, computing with each neuron's excitation function. Understandably, because neural network models differ, the excitation function of the same neuron may also differ between models, which the present invention does not limit.

A convolutional layer is a network layer used for feature extraction. In a convolutional layer, a convolution operation is applied to the input data to extract its deep-level feature data. Taking an image as the input data, the image is fed into the convolutional layer, which convolves it to obtain the image's hidden, deep-level feature image.

A pooling layer is a network layer used for data compression. An activation layer applies an activation operation to the input data, which in essence is the computation of a specific function that strengthens the data's capacity for non-linear expression. A fully connected layer is the network layer that acts as the classifier in a neural network model; every neuron (node) in the fully connected layer is connected to the neurons of the previous network layer, so as to combine the feature data output by those neurons and produce the model's final output result.

In the present invention, neural network models include, but are not limited to, convolutional neural network models, recurrent neural network models, deep neural network models, feedforward neural network models, deep belief network models, generative adversarial network models and other neural network models. For ease of description, the following embodiments take the YOLO model, a convolutional neural network model, as an example. The YOLO model is a convolutional network used for object detection, for example detecting target objects in an image such as vehicles or dogs. The embodiments involving the YOLO model are briefly introduced below.

FIG. 1 is a schematic diagram of the logical structure of a neural network model provided by an embodiment of the present invention; this model may also be called a YOLO model. As shown in the figure, the YOLO model includes nine network layers, of which the first seven (layers 1 to 7) are convolutional layers and the last two (layers 8 and 9) are fully connected layers. The number of neurons in each network layer may differ.

Activation information is the information associated with a neuron being activated, such as the neuron's activation value, activation count and average activation value. A neuron being activated here means that the neuron is used for a data operation (data computation) in the neural network model; the neuron is then considered to have been activated.

The activation value is the output value of a neuron when it is activated. In other words, it is the output value obtained when the neuron is used to perform a data operation on the input data.

The activation count is the number of times, when a neuron is activated repeatedly, that its output value is greater than a preset threshold. In other words, across multiple data computations with the neuron, it is the counted number of times the neuron's activation value (output value) exceeded the preset threshold. The preset threshold is set by the user or the system; for example, it may be derived by the user or the system from a series of experimental statistics, or set as an empirical value from practical experience, which the present invention does not limit.

The average activation value is the mean of a neuron's output values across multiple activations. In other words, across multiple data computations with the neuron, it is the average of the activation values obtained in those computations.
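Recording the three statistics defined above can be sketched as follows; the class name and the threshold value are illustrative assumptions:

```python
class ActivationRecorder:
    """Track one neuron's activation statistics as defined above.
    The preset threshold used for the activation count is an assumption."""

    def __init__(self, threshold=0.0):
        self.threshold = threshold
        self.values = []                 # every recorded activation value

    def record(self, output_value):
        # Called once per data computation with this neuron.
        self.values.append(output_value)

    def activation_count(self):
        # Times the neuron's output value exceeded the preset threshold.
        return sum(1 for v in self.values if v > self.threshold)

    def average_activation(self):
        # Mean of all recorded activation values.
        return sum(self.values) / len(self.values) if self.values else 0.0

rec = ActivationRecorder(threshold=0.5)
for v in (0.2, 0.8, 1.0):
    rec.record(v)
```

One such record per neuron is enough for the pruning decisions described in the first aspect, since each criterion only reads these per-neuron statistics.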

As an example, in an image recognition application the edge device is a smart camera. The edge device periodically collects multiple images of the current scene and feeds each image into the neural network model for processing to obtain the corresponding result data, which indicates the category the image belongs to, such as forest, beach or starry sky. Each time the model processes an image, it is in essence the neurons of the model that process it; accordingly, over many images the edge device uses those neurons many times. During each round of image processing, the edge device records each neuron's activation information, such as its activation value, activation count and average activation value, so that this information can later be used to update the neural network model, as detailed below.

The loss function, also called the cost function, is the optimization function of a neural network model and measures how wrong its predictions are. Training a neural network model is in essence a search for the minimum of the loss function, that is, for the point where the difference between the predicted result and the true result (also called the loss value) is smallest, or where prediction and truth are closest. Understandably, in actual training the data center trains the model parameters many times and takes the parameters from the round in which the loss function's value is smallest as the parameters of the trained neural network model, thereby obtaining the trained model.

For example, take an image recognition model as the neural network model. Suppose the training samples are multiple images, each consisting of one or more pixels, each pixel having its own pixel value (the true input data). Each image contains a target object, and the pixel values of the image region where the target object is located differ from those of the other regions; in other words, the target object can be distinguished or identified by the pixel values of its image region.

Correspondingly, the data center feeds the images into the image recognition model and performs inference on the pixels of each image to obtain prediction data for each pixel. The data center then uses a preset loss function to compare each pixel's prediction data against its true data and correct the model parameters of the image recognition model. Repeating this, the data center finds the round in which the difference between the pixels' prediction data and true data is smallest and takes the model parameters of the image recognition model trained (corrected) in that round as the parameters of the trained model, thereby obtaining the trained image recognition model.

FIG. 2 is a schematic diagram of the network framework of a data processing system according to an embodiment of the present invention. As shown, the data processing system 100 includes a data center 102 and an edge device 104. The data center 102, also called the cloud, comprises multiple servers on which a trained neural network model is deployed (specifically, the initial neural network model or the trained neural network model described below) for the edge device 104 to download and use.

In practical applications, because data processing imposes high energy-consumption requirements on edge devices, which have comparatively low computing power, training of the neural network model is usually performed in the data center 102.

The training process of the neural network model is briefly described below. Specifically, the data center obtains an initial neural network model (the model to be trained or optimized) and a training sample set. The initial model is then trained multiple times using the training sample set. Among these training runs, the one with the smallest loss-function value is selected, and the model parameters produced by that run are taken as the parameters of the trained neural network model, thereby obtaining the trained model. The training of the neural network model is described in detail later in the present invention.
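The selection step above — train several times and keep the parameters of the minimum-loss run — can be sketched as follows. This is a minimal illustration; `select_best_parameters` and the toy runs are hypothetical stand-ins, since the patent does not specify the update rule or loss expression here.

```python
def select_best_parameters(initial_params, training_runs):
    """Run several training passes and keep the parameters of the
    pass whose loss-function value is smallest, as described above."""
    best_params, best_loss = initial_params, float("inf")
    for run in training_runs:
        params, loss = run(initial_params)  # one training pass
        if loss < best_loss:
            best_params, best_loss = params, loss
    return best_params, best_loss

# Toy illustration: each "run" returns (model parameters, loss value).
runs = [
    lambda p: ({"w": 0.9}, 0.42),
    lambda p: ({"w": 0.7}, 0.18),  # smallest loss -> selected
    lambda p: ({"w": 0.8}, 0.25),
]
best_params, best_loss = select_best_parameters({"w": 1.0}, runs)
```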

The initial neural network model can be determined according to the actual application scenario. For example, in an image recognition scenario, the initial model may be a convolutional neural network model such as the YOLO model mentioned above. In a speech recognition scenario, the initial model may be a deep neural network model, which may include one or more network layers, with the neurons of adjacent layers connected to each other; the present invention does not limit this.

The training sample set includes N groups of training samples, each of which is used to train the initial neural network model to obtain the trained model. Each group of training samples includes input data and output data in one-to-one correspondence. N is a positive integer. The training samples (i.e., the input and output data) may differ across application scenarios.

For example, in an image recognition scenario, suppose an edge device wants to use the neural network model to recognize the sun at different times of day in images. Correspondingly, when selecting the training sample set, the input data should include sun images from different periods of the day, and the output data should include the position of the sun in each image, or other feature information identifying the period to which the image belongs, so that the trained neural network model can accurately classify the period of the sun in an image.

Correspondingly, after obtaining the above training sample set (i.e., multiple groups of training samples comprising input and output data), the data center can train the initial neural network model using the sun images in the set and the real period of each image, obtaining the trained model. Specifically, the data center inputs each sun image into the initial model, which processes the pixels of the image and, based on the position of the sun or other feature information identifying the period, predicts the period to which the image belongs. Further, the data center uses the preset loss function to compute the difference between the real and predicted periods for the image. By repeating this process multiple times, the data center determines the model parameters of the run with the smallest difference and takes them as the parameters of the trained neural network model, thereby obtaining the trained model.

In the present invention, the data center trains the neural network model using training samples that include both input and output data; specifically, the initial model is trained in an unsupervised, intelligent manner without human participation. Compared with the traditional technique of training with manually labeled input data, this avoids the errors introduced by human participation, improving training accuracy, and, since no human participation is required, also improves the convenience of model training.

For example, in an image recognition application scenario, traditional techniques usually select annotated images as training samples, where each image is labeled with the objects it contains, such as dogs, people, or vehicles, so that the labeled images can be used directly to train the initial neural network model.

In the present invention, however, the data center uses unlabeled images as training samples for the initial neural network model. Specifically, a training sample includes input data (here, the image) and output data (here, the objects contained in the image). The output data is obtained by taking the image as input to the initial model, which computes over the pixels of the image and predicts the objects it contains based on their feature information (e.g., object contours or object identifiers). Thus, the present invention trains the neural network model with unannotated training samples, improving training convenience while avoiding the errors introduced by manual annotation, thereby improving model accuracy.

The edge device 104 can obtain the initial or trained neural network model from the data center 102 and use it for the corresponding data processing. The actual data processing performed with the obtained model differs across application scenarios.

For example, in an image recognition scenario, the input data is an image containing feature information of a vehicle to be recognized, such as the vehicle's identifier or contour. Correspondingly, the edge device inputs the image into the trained neural network model and obtains the vehicle in the image from this feature information. As another example, in a speech recognition scenario, the input data is the speech to be recognized. The edge device inputs the speech into the trained model, which recognizes and processes feature information such as frequency or wavelength to obtain, for instance, the text corresponding to the speech; the present invention does not limit this.

Optionally, the above training sample set (N groups of training samples) may be data collected by the edge device 104 and sent to the data center 102, so that the data center can use it to train the initial neural network model. Specifically, the edge device 104 first obtains the initial model from the data center 102. It then collects N groups of input data, through itself or other edge devices, and inputs each group into the initial model to obtain the corresponding output data. In this way, the edge device obtains a training sample set of N groups, each including input and output data; details are not repeated here.

Understandably, the types of input and output data may differ across application scenarios, including but not limited to image, speech, and text data; see the foregoing examples for details. Specifically, when the edge device has data collection capability, it collects the corresponding input data itself. Taking an image processing scenario as an example, an edge device with a camera can capture one or more images of the current scene as input data. Conversely, when the edge device lacks data collection capability, it can obtain the input data through other edge devices that have it. The edge device then derives the corresponding training samples from the obtained input data, as described in the foregoing embodiments.

The number of edge devices 104 is not limited in the present invention; there may be one or more, with n shown in the figure as an example, n being a positive integer. An edge device may specifically be a smart camera, smartphone, tablet, handheld computer, notebook, or similar device, which the present invention does not limit.

The following example illustrates how an edge device obtains a training sample set. Take an image classification scenario in which the edge device is a monitoring device. The edge device collects multiple images of the current scene as input data for the neural network model. It then obtains the initial model from the data center and processes each input image with it to obtain the category of each image, e.g., person, starry sky, seascape, or beach. Correspondingly, the edge device takes each input image and its category as one group of training samples, obtaining a training sample set of multiple groups. Optionally, the edge device sends this set to the data center so that the data center can use it to retrain the initial neural network model; details are not repeated here.

Next, embodiments of the data processing method of the present invention are introduced. FIG. 3 is a schematic flowchart of a data processing method provided by an embodiment of the present invention. The method shown in FIG. 3 includes the following steps:

Step S301: the edge device obtains the initial neural network model from the data center.

In the present invention, the initial neural network model, also called the full model, may be a general-purpose model pre-trained by the data center on a training sample set and applicable to all or some application scenarios, e.g., the YOLO model commonly used in the field of object detection.

Step S302: the edge device obtains N groups of input data, inputs them into the initial neural network model, and obtains the output data for the N groups as well as the activation information of each neuron in the initial model.

When performing data processing (inference) with the initial neural network model, the edge device collects the corresponding input data, which varies by scenario (e.g., image, text, or speech data; see the foregoing embodiments). The edge device inputs this data into the initial model for computation and obtains the corresponding output data. Following this principle, the edge device performs N inference computations with the initial model; that is, it collects N groups of input data and runs the initial model N times to obtain the output data corresponding to each group.

Optionally, the edge device takes each pair of input and output data it obtains as one group of training samples. After computing over the N groups of input data with the initial neural network model, the edge device has N groups of training samples, each including input data and the output data produced by processing that input with the initial model. The initial model is used once for each output obtained.
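The sample-collection step above can be sketched as follows; `toy_model` is a hypothetical stand-in for the initial neural network model, since the patent does not fix a concrete model here.

```python
def collect_training_samples(model, inputs):
    """Run each group of input data through the initial model once;
    each (input, output) pair forms one group of training samples."""
    return [(x, model(x)) for x in inputs]

toy_model = lambda x: 2 * x  # stand-in for the initial neural network model
samples = collect_training_samples(toy_model, [1, 2, 3])  # N = 3 groups
```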

Optionally, each time the edge device processes a group of input data with the initial neural network model, it is in essence computing over that input with the model's neurons. During computation, the edge device can also record the activation information produced when each neuron in the initial model computes (is activated). In other words, each time the edge device computes with the neurons of the initial model, it can record each neuron's activation information, which includes but is not limited to any one or combination of: activation value, activation count, and average activation value. For details of the activation information, see the foregoing embodiments.

Specifically, each time the edge device computes over input data with the neurons of the initial neural network model, it records each neuron's activation value. When the edge device uses the neurons multiple times (e.g., M times) for data processing, it can also count and record each neuron's activation count and average activation value. The activation count of a neuron is the number of its M activation values that are greater than or equal to a preset threshold; M is a positive integer less than or equal to N. The average activation value of a neuron is the mean of its M activation values.
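The two statistics just defined can be computed as follows; the activation values and the threshold of 0.5 are assumed for illustration, as the patent does not fix concrete values.

```python
def activation_statistics(activations, threshold):
    """Given a neuron's M recorded activation values, return its
    activation count (number of values >= threshold) and its
    average activation value (mean of the M values)."""
    count = sum(1 for a in activations if a >= threshold)
    average = sum(activations) / len(activations)
    return count, average

# One neuron observed over M = 4 inference runs, threshold 0.5 (assumed).
count, avg = activation_statistics([0.0, 0.6, 0.8, 0.2], 0.5)
```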

Step S303: the edge device sends the N groups of training samples and the activation information of each neuron in the initial neural network model to the data center.

Correspondingly, the data center receives the N groups of training samples and the activation information of each neuron. Each training sample includes input data and output data, the output data being computed by taking the input data as input to the initial neural network model; for the input and output data, see the foregoing embodiments.

The edge device sends the N groups of training samples and each neuron's activation information to the data center so that the data center can use them to retrain the initial neural network model and obtain a model adapted to the edge device (or its deployment scenario). This makes it possible to train dedicated neural network models for different edge devices, meeting their actual needs and improving the practicality of model processing. In other words, the present invention can perform personalized retraining of the neural network model according to the edge device or its deployment scenario, meeting the device's real-time requirements.

Step S304: the data center prunes the neurons of the initial neural network model according to each neuron's activation information to obtain a pruned neural network model.

In the present invention, the data center deletes, based on each neuron's activation information, the neurons that satisfy any one or more of the following conditions, obtaining the corresponding pruned neural network model. The conditions are:

1) The neuron's activation value is less than or equal to a first threshold;

2) The neuron's activation count is less than or equal to a second threshold;

3) The neuron's average activation value is less than or equal to a third threshold.

This trims away the neurons that are not useful for the edge device or its deployment scenario, reducing the model size and the amount of computation, thereby saving computing resources.

The first, second, and third thresholds may be user- or system-defined; they may be the same or different, and the present invention does not limit them. For example, if the system wants a pruned neural network model with higher computational accuracy, the three thresholds can be set relatively large, e.g., 5; conversely, if the system wants a pruned model with lower computational accuracy, they can be set smaller, e.g., 0.01. Optionally, the three thresholds may be obtained by the system from statistical data, or be empirical values set by the user based on practical experience, which the present invention does not limit.
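The three pruning conditions can be sketched as follows. The per-neuron records and the threshold values are hypothetical illustrations; the patent only specifies the "any one or more conditions" rule itself.

```python
def should_prune(neuron, t1, t2, t3):
    """A neuron is deleted if it satisfies ANY of the three conditions:
    activation value <= t1, activation count <= t2,
    or average activation value <= t3."""
    return (neuron["activation"] <= t1
            or neuron["count"] <= t2
            or neuron["average"] <= t3)

def prune(neurons, t1, t2, t3):
    """Keep only the neurons that satisfy none of the conditions."""
    return [n for n in neurons if not should_prune(n, t1, t2, t3)]

# Hypothetical activation records for three neurons.
neurons = [
    {"id": 1, "activation": 0.0, "count": 9, "average": 0.7},  # pruned: value
    {"id": 2, "activation": 0.8, "count": 1, "average": 0.6},  # pruned: count
    {"id": 3, "activation": 0.9, "count": 8, "average": 0.5},  # kept
]
kept = prune(neurons, t1=0.0, t2=2, t3=0.1)
```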

For example, FIG. 4A shows the structure of two network layers of the YOLO model. As shown in FIG. 4A, the Nth network layer (layer N for short; it may specifically be any of layers 1 to 8 of the YOLO model in FIG. 1) includes 6 neurons, On1, On2, ..., On6, and layer N+1 includes 4 neurons, O(n+1)1, O(n+1)2, ..., O(n+1)4. Adjacent layers are fully connected, i.e., each neuron of layer N is connected to every neuron of layer N+1. During pruning, suppose the activation values of the 6 neurons of layer N are 0, 1, 0, 0, 1, 1, and those of the 4 neurons of layer N+1 are 1, 0, 1, 1. Correspondingly, the data center prunes the neurons whose activation value is less than or equal to 0, obtaining the pruned neural network model shown in FIG. 4B. That is, the data center deletes neurons On1, On3, and On4 of layer N and neuron O(n+1)2 of layer N+1, yielding the two-layer pruned neural network model shown in FIG. 4B.
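The FIG. 4A/4B example can be reproduced numerically; the neuron names follow the figure, and the threshold of 0 matches the "less than or equal to 0" rule described above.

```python
def prune_layer(names, activations, threshold=0.0):
    """Keep only the neurons whose activation value exceeds the
    threshold (<= threshold means deleted, as in the FIG. 4A example)."""
    return [n for n, a in zip(names, activations) if a > threshold]

layer_n  = prune_layer(["On1", "On2", "On3", "On4", "On5", "On6"],
                       [0, 1, 0, 0, 1, 1])
layer_n1 = prune_layer(["O(n+1)1", "O(n+1)2", "O(n+1)3", "O(n+1)4"],
                       [1, 0, 1, 1])
```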

Step S305: the data center trains the pruned neural network model using the N groups of training samples to obtain the trained neural network model.

In the present invention, while training the pruned neural network model, the data center uses a loss function to update the model parameters of the pruned model and obtain the trained model. The loss function indicates the error loss between the training data and the prediction data, where the training data is the data obtained by feeding the input data into a first neural network model, and the prediction data is the data obtained by feeding the input data into a second neural network model.

Here, the first neural network model is the initial neural network model excluding a preset classification algorithm, and the second neural network model is the pruned neural network model excluding the preset classification algorithm. The preset classification algorithm is the algorithm or rule used by the model to compute the output result, e.g., the image classification rule softmax.

In practical applications, the preset classification algorithm is usually placed in the model's fully connected layers. In that case, the training data is specifically the data output before the fully connected layers when the input data is fed into the initial neural network model, and the prediction data is specifically the data output before the fully connected layers when the input data is fed into the pruned neural network model.

For example, referring to the YOLO model shown in FIG. 1, let the initial neural network model be a first YOLO model and the pruned model be a second YOLO model; both include 7 convolutional layers and 2 fully connected layers, but the neurons in each layer may differ. In this example, the training data is the output of the last convolutional layer (the seventh convolutional layer, i.e., the layer before the fully connected layers) when the input data is fed into the first YOLO model, and the prediction data is the output of the seventh (last) convolutional layer when the input data is fed into the second YOLO model.
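A loss between the two pre-fully-connected outputs can be sketched as below. Mean-squared error is an assumed choice here, and the layer outputs are hypothetical; the patent leaves the exact loss-function expression open and only says it may differ across models.

```python
def feature_loss(training_data, prediction_data):
    """Mean-squared error between the full model's pre-FC-layer output
    (training data) and the pruned model's pre-FC-layer output
    (prediction data). MSE is an assumed loss; the patent does not
    fix the expression."""
    assert len(training_data) == len(prediction_data)
    return sum((t - p) ** 2
               for t, p in zip(training_data, prediction_data)) / len(training_data)

# Hypothetical seventh-convolutional-layer outputs of the two YOLO models.
loss = feature_loss([1.0, 2.0, 3.0], [1.0, 2.5, 2.5])
```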

Understandably, to ensure training accuracy, the data center can train the pruned neural network model multiple times with the N groups of training samples to obtain a more accurate trained model. During training, the data center uses the preset loss function to correct the model parameters. Model training is essentially a process in which the data center repeatedly computes the value of the loss function and selects the model parameters corresponding to the smallest value as the parameters of the trained neural network model, thereby obtaining the trained model. Understandably, the specific expression of the loss function may differ across neural network models; an example is given later in the present invention.

Step S306: the data center updates the initial neural network model to the trained neural network model.

Step S307: the edge device obtains the trained neural network model from the data center.

Step S308: the edge device obtains data to be processed, inputs it into the trained neural network model, and obtains the result data corresponding to the data to be processed.

The data center can save the trained neural network model. Optionally, it can replace the initial model with the trained one, i.e., update the initial neural network model to the trained model, for edge devices to download and use.

Optionally, following the above training principle, the data center can train, for each edge device, a trained neural network model suited to that device or its scenario. After training the respective models for at least one edge device, the data center can store each trained model in association with its edge device (specifically, with the device's identifier), to indicate for which edge device the trained model was personalized, or to which device it applies.

Correspondingly, an edge device can obtain its trained neural network model from the data center according to actual needs. Specifically, the edge device sends the data center an acquisition request carrying its identifier (e.g., device name or device ID), requesting the trained model adapted to it. Upon receiving the request, the data center looks up the trained neural network model associated with that identifier and sends it to the edge device.
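The identifier-based lookup can be sketched as a key-value association; the store contents and device identifiers below are hypothetical, since the patent does not specify a storage format.

```python
# Hypothetical store: each trained model saved in association with the
# identifier of the edge device it was personalized for.
model_store = {
    "camera-01": "trained-model-for-camera-01",
    "camera-02": "trained-model-for-camera-02",
}

def handle_acquisition_request(device_id):
    """Look up the trained model associated with the requesting device;
    return None if no model was trained for that identifier."""
    return model_store.get(device_id)

model = handle_acquisition_request("camera-01")
```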

Correspondingly, the edge device receives the trained neural network model sent by the data center and subsequently performs the corresponding data processing with it. For example, the edge device can obtain data to be processed, through itself or other devices, and input it into the trained model to obtain the corresponding result data, which indicates the result for the data to be processed. The data to be processed and the result data differ across application scenarios.

For example, in an image classification scenario, the data to be processed is an image to be classified, composed of at least one pixel. Correspondingly, the edge device can input the image to be classified into the trained neural network model and use each neuron in the trained neural network model to compute over each pixel of the image, thereby obtaining the result data corresponding to the image. The result data indicates the category to which the image belongs, for example a person image, a starry-sky image, a beach image, or a forest image.
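The per-pixel classification step above can be sketched with a toy single-layer model. The weights, labels, and 4-pixel "image" below are invented for illustration; the patent does not specify the network architecture.

```python
# Toy sketch of classifying an image with a trained model: every pixel
# of the flattened image contributes to each output neuron's weighted
# sum, and the class with the largest output wins. Weights and labels
# are made-up assumptions.

def classify(pixels, weights, labels):
    # One output neuron per class; each neuron sums over all pixels.
    scores = []
    for neuron_weights in weights:
        scores.append(sum(w * p for w, p in zip(neuron_weights, pixels)))
    best = max(range(len(scores)), key=lambda k: scores[k])
    return labels[best]

labels = ["person", "starry sky", "beach", "forest"]
weights = [
    [1.0, 0.0, 0.0, 0.0],   # "person" neuron
    [0.0, 1.0, 0.0, 0.0],   # "starry sky" neuron
    [0.0, 0.0, 1.0, 0.0],   # "beach" neuron
    [0.0, 0.0, 0.0, 1.0],   # "forest" neuron
]

image = [0.1, 0.9, 0.2, 0.3]  # a 4-pixel "image" to be classified
result = classify(image, weights, labels)
```

A real classifier would have many layers and nonlinearities, but the result data in both cases is the predicted category label.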

As another example, in a speech recognition scenario, the data to be processed may be speech to be recognized. Correspondingly, the edge device can input the speech to be recognized into the trained neural network model, use each neuron in the trained neural network model to compute over the speech, and obtain the result data corresponding to the speech. Here the result data may be the text corresponding to the speech to be recognized; in other words, the trained neural network model performs speech-to-text conversion.

To facilitate understanding of the solution of the present invention, a YOLO model for vehicle detection is described in detail below.

First, the data center can collect surveillance images from different traffic intersections and use them to train an initial YOLO model. An edge device can obtain this initial YOLO model from the data center according to actual needs, so that the model can later be retrained for the device itself or for the scenario it is deployed in. Further, the edge device can collect vehicle images in its current scenario over a preset period (for example, one month), and input these vehicle images into the initial YOLO model for processing, so as to identify the feature information of the vehicles in the images (for example, vehicle identifier, license plate number, and vehicle outline). The edge device then obtains, from this feature information, the target vehicles contained in the vehicle images. At the same time, the activation value of each neuron in the initial YOLO model can be recorded. The edge device also sends the vehicle images, the target vehicles contained in them, and the activation value of each neuron in the initial YOLO model to the data center.
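Recording each neuron's activation value during inference, as the edge device does above, can be sketched with a toy fully connected ReLU network. The layer sizes, weights, and the `(layer, neuron)` keying are assumptions made only for this sketch.

```python
# Sketch: run a toy fully connected ReLU network and record every
# neuron's activation value alongside the output, as the edge device
# does before sending activations back to the data center. Weights
# and layer shapes are illustrative assumptions.

def relu(x):
    return x if x > 0.0 else 0.0

def forward_and_record(x, layers):
    """layers: list of weight matrices (one row per neuron). Returns
    the final layer's output and a dict {(layer, neuron): activation}."""
    activations = {}
    for li, weights in enumerate(layers):
        y = []
        for ni, row in enumerate(weights):
            a = relu(sum(w * v for w, v in zip(row, x)))
            activations[(li, ni)] = a  # recorded activation value
            y.append(a)
        x = y
    return x, activations

layers = [
    [[1.0, -1.0], [0.5, 0.5]],  # layer 0: two neurons
    [[1.0, 1.0]],               # layer 1: one neuron
]
out, acts = forward_and_record([2.0, 1.0], layers)
```

The `acts` dictionary is exactly the kind of per-neuron record that the edge device would transmit to the data center for the pruning step.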

Correspondingly, based on each neuron's activation value, the data center deletes (prunes) the neurons whose activation value is 0 from the initial YOLO model to obtain a corresponding pruned neural network model. Further, the data center takes the vehicle images and the target vehicles they contain as training samples, and uses multiple groups of such samples to retrain the pruned neural network model. Specifically, the data center can adjust the model parameters of the pruned neural network model using the loss function of formula (1) below, so as to obtain the trained YOLO model.
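The zero-activation pruning step can be sketched as follows. The representation of a layer as a list of per-neuron weight rows paired with recorded activations is an illustrative assumption.

```python
# Sketch of the pruning step: drop every neuron whose recorded
# activation value is 0, keeping the rest of the layer intact.
# Representing a layer as weight rows plus a parallel list of
# recorded activations is an assumption made for this sketch.

def prune_zero_activation(weights, activations):
    """Keep only the weight rows whose recorded activation is nonzero."""
    return [row for row, a in zip(weights, activations) if a != 0.0]

layer_weights = [[0.2, 0.4], [0.0, 0.0], [0.5, -0.1]]
recorded = [0.7, 0.0, 1.3]  # recorded activation value per neuron

pruned = prune_zero_activation(layer_weights, recorded)  # 2 neurons survive
```

The surviving, smaller layer is what the data center then retrains with the training samples.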

loss = Σ_{i=1}^{t} (F_i − P_i)²    (1)

Here, loss is the loss function; i indexes a pixel of the vehicle image; and t is the total number of pixels making up the vehicle image, in other words, the vehicle image contains t pixels. P_i is the data output after pixel i is processed by the pruned neural network model with the fully connected layer removed (specifically, the softmax classification rule deployed in the fully connected layer). F_i is the data output after pixel i is processed by the initial YOLO model with the fully connected layer removed (again, the softmax classification rule deployed in the fully connected layer).

The data center trains and adjusts the model parameters of the pruned neural network model according to the loss function shown in formula (1). Optionally, to ensure model accuracy, the group of model parameters with the smallest loss value found during training is taken as the model parameters of the trained YOLO model, thereby obtaining the trained YOLO model; the present invention does not limit or detail this here. Optionally, the data center can store the identifier of the edge device in association with the trained YOLO model, so that the edge device side can later obtain the trained YOLO model from the data center according to actual needs.
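The selection of the lowest-loss parameters during retraining can be sketched as follows. The squared-error form of the loss, the candidate parameter sets, and the toy "pruned model" are all assumptions made for this sketch; the extracted text describes formula (1) only through its variables.

```python
# Sketch: compute a distillation-style loss between the initial model's
# pre-fully-connected outputs F_i and the pruned model's outputs P_i
# over all t pixels, and keep the candidate parameter set with the
# smallest loss. The squared-error form and the toy candidates are
# assumptions.

def loss(F, P):
    # Formula (1), assumed here to be a sum of squared per-pixel errors.
    return sum((f - p) ** 2 for f, p in zip(F, P))

def pick_best_params(F, candidates, run_pruned_model):
    """candidates: list of parameter sets; run_pruned_model(params)
    returns the pruned model's per-pixel outputs P for those params."""
    return min(candidates, key=lambda params: loss(F, run_pruned_model(params)))

F = [1.0, 2.0, 3.0]                 # initial model's outputs (fixed target)
candidates = [0.5, 1.0, 1.5]        # toy scalar "parameter sets"
run = lambda s: [s * f for f in F]  # toy pruned model: scales the output

best = pick_best_params(F, candidates, run)
```

In practice the parameters would be updated by gradient descent on the same loss rather than by enumerating candidates, but the selection criterion, the minimum loss against the initial model's outputs, is the same.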

Correspondingly, the edge device can obtain the trained YOLO model from the data center for subsequent vehicle detection. For example, suppose the edge device is a monitoring device deployed on a road; it can capture an image to be processed that contains a vehicle to be detected. The edge device then takes this image as the input of the trained YOLO model and uses the neurons of each network layer in the trained YOLO model to compute over each pixel of the image, identifying the feature information of the vehicle to be detected (for example, vehicle identifier, license plate number, and vehicle outline) and thereby recognizing the vehicle, so that vehicle detection can be carried out conveniently and efficiently.

By implementing the embodiments of the present invention, different trained neural network models can be trained for different edge devices or for the scenarios in which edge devices are deployed, forming customized neural network models; an edge device can then analyze and process data based on its customized neural network model, which improves both accuracy and processing efficiency. Moreover, in the embodiments of the present invention there is no need to label the collected data manually: the neural network model in the data center analyzes and processes the sample data sent by each edge device to obtain the processed data, which further improves the processing efficiency of each edge device. Since each edge device can process data with its customized neural network model, processing results can be obtained in a short time, reducing the time consumed by data processing.

The embodiments of the data processing method provided by the embodiments of the present invention have been described in detail above with reference to FIG. 1 to FIG. 3. The data processing apparatus, device, and system provided by the embodiments of the present invention are described below with reference to FIG. 5 and FIG. 6.

FIG. 5 is a schematic structural diagram of a data processing apparatus provided by an embodiment of the present invention. The data processing apparatus 500 shown in FIG. 5 is applied on the edge device side and may include an acquisition module 501 and a processing module 502, where:

the acquisition module 501 is configured to acquire an initial neural network model, where the initial neural network model includes at least one neuron, and the neuron is used to process the input data of the initial neural network model to obtain activation information of the neuron; and

the processing module 502 is configured to input data to be processed into a trained neural network model to obtain result data;

where the result data is obtained by processing the data to be processed with the neurons in the trained neural network model; the trained neural network model is obtained by training a pruned neural network model with N groups of training samples; the pruned neural network model is obtained by pruning the at least one neuron of the initial neural network model according to the respective activation information of the at least one neuron; the activation information is the respective information of the at least one neuron when data processing is performed with the at least one neuron of the initial neural network model; and N is a positive integer.

In a possible implementation, a training sample includes input data and output data, where the output data is obtained by computing on the input data with the initial neural network model.

In a possible implementation, the activation information includes at least one of the following: an activation value, an activation count, and an average activation value. Pruning the at least one neuron of the initial neural network model according to the respective activation information of the at least one neuron includes: when the activation information includes an activation value, deleting, according to the respective activation values of the at least one neuron, the neurons of the initial neural network model whose activation value is less than or equal to a first threshold; when the activation information includes an activation count, deleting, according to the respective activation counts of the at least one neuron, the neurons of the initial neural network model whose activation count is less than or equal to a second threshold; and when the activation information includes an average activation value, deleting, according to the respective average activation values of the at least one neuron, the neurons of the initial neural network model whose average activation value is less than or equal to a third threshold. Here, the activation value is the output value of a neuron each time data processing is performed with that neuron of the initial neural network model; the activation count is the number of times, over M uses of the neuron of the initial neural network model for data processing, that the neuron's activation value is greater than or equal to a fourth threshold; the average activation value is the mean of the neuron's activation values over M uses of the neuron of the initial neural network model for data processing; and M is a positive integer less than or equal to N.
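The three pruning criteria can be sketched together. The threshold values, the per-neuron activation history format, and the use of the most recent activation for the "value" criterion are assumptions made for this sketch.

```python
# Sketch of the three pruning criteria described above. Each neuron has
# a history of M recorded activation values; a neuron is deleted when
# its activation value, its activation count, or its average activation
# falls at or below the corresponding threshold. All thresholds and the
# use of the most recent value for the "value" criterion are assumptions.

def neurons_to_delete(history, criterion, threshold, fourth_threshold=0.5):
    """history: {neuron_id: [activation values over M runs]}."""
    doomed = []
    for nid, values in history.items():
        if criterion == "value":
            stat = values[-1]  # most recent activation value (assumption)
        elif criterion == "count":
            # runs in which the activation reached the fourth threshold
            stat = sum(1 for v in values if v >= fourth_threshold)
        else:  # "average"
            stat = sum(values) / len(values)
        if stat <= threshold:
            doomed.append(nid)
    return doomed

history = {"n1": [0.0, 0.0, 0.0], "n2": [0.9, 0.8, 0.7], "n3": [0.0, 0.6, 0.0]}

dead_by_value = neurons_to_delete(history, "value", 0.0)    # first threshold
dead_by_count = neurons_to_delete(history, "count", 1)      # second threshold
dead_by_avg   = neurons_to_delete(history, "average", 0.1)  # third threshold
```

Note how the criteria disagree: a neuron that fires rarely but strongly ("n3") is pruned by the value and count criteria but survives the average criterion, which is why the implementation may combine them.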

In a possible implementation, the trained neural network model is further obtained by updating the parameters of the pruned neural network model using a loss function, where the loss function indicates the error loss between training data and prediction data: the training data is the data output before the fully connected layer when the input data of a training sample is fed into the initial neural network model, and the prediction data is the data output before the fully connected layer when the input data of the training sample is fed into the pruned neural network model.

In a possible implementation, the acquisition module 501 is specifically configured to acquire the initial neural network model from a data center.

It should be understood that the apparatus 500 of this embodiment of the present invention may be implemented by an application-specific integrated circuit (ASIC) or a programmable logic device (PLD), where the PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof. When the data processing method shown in FIG. 3 is implemented in software, the apparatus and its modules may also be software modules.

The data processing apparatus 500 provided by this embodiment of the present invention may correspondingly execute the method provided by the foregoing embodiments of the present invention, and the functions and/or other operations of each module in the apparatus 500 respectively implement the flow steps of the corresponding method in FIG. 3; for brevity, details are not repeated here.

By implementing the embodiments of the present invention, different trained neural network models can be designed for different edge devices or for the scenarios in which edge devices are deployed. This facilitates subsequent data processing with a neural network model suited to the edge device or its deployment scenario, improving the accuracy of data processing.

FIG. 6 is a schematic structural diagram of an edge device provided by an embodiment of the present invention. The edge device 600 shown in FIG. 6 may include one or more processors 601, a communication interface 602, and a memory 603; the processor 601, the communication interface 602, and the memory 603 may be connected through a bus, or may communicate by other means such as wireless transmission. The embodiment of the present invention takes connection through a bus 604 as an example. The memory 603 is used to store instructions, and the processor 601 is used to execute the instructions stored in the memory 603. The memory 603 stores program code, and the processor 601 can call the program code stored in the memory 603 to perform the following operations:

acquiring an initial neural network model, where the initial neural network model includes at least one neuron, and the neuron is used to process the input data of the initial neural network model to obtain activation information of the neuron; and

inputting data to be processed into a trained neural network model to obtain result data;

where the result data is obtained by processing the data to be processed with the neurons in the trained neural network model; the trained neural network model is obtained by training a pruned neural network model with N groups of training samples; the pruned neural network model is obtained by pruning the at least one neuron of the initial neural network model according to the respective activation information of the at least one neuron; the activation information is the respective information of the at least one neuron when data processing is performed with the at least one neuron of the initial neural network model; and N is a positive integer.

Optionally, in this embodiment of the present invention, the processor 601 may call the program code stored in the memory 603 to perform all or some of the steps described in the method embodiment of FIG. 3 above, and/or other content described herein; details are not repeated here.

It should be understood that the processor 601 may consist of one or more general-purpose processors, for example a central processing unit (CPU). The processor 601 can be used to run the programs of the functional modules in the related program code, which may specifically include, but are not limited to, the acquisition module and/or the processing module described above. That is, by executing the program code, the processor 601 can perform the functions of any one or more of the above functional modules. For details about each functional module mentioned here, refer to the relevant descriptions in the foregoing embodiments; details are not repeated here.

The communication interface 602 may be a wired interface (for example, an Ethernet interface) or a wireless interface (for example, a cellular network interface or a wireless local area network interface), and is used to communicate with other modules or devices. For example, in this embodiment of the present application, the communication interface 602 may specifically be used to receive the initial neural network model or the trained neural network model sent by the data center.

The memory 603 may include volatile memory, for example random access memory (RAM); the memory may also include non-volatile memory, for example read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory 603 may also include a combination of the above types of memory. The memory 603 can be used to store a group of program code, so that the processor 601 can call the program code stored in the memory 603 to implement the functions of the functional modules involved in the embodiments of the present invention.

It should be understood that the edge device 600 according to this embodiment of the present invention may correspond to the data processing apparatus 500 shown in FIG. 5 of the embodiments of the present invention, and may correspond to the edge device acting as the executing entity in the method shown in FIG. 3; the above steps and other operations and/or functions of each module in the edge device respectively implement the corresponding flows of the methods in FIG. 3, and for brevity are not repeated here.

It should be noted that FIG. 6 is only one possible implementation of the embodiment of the present invention; in practical applications, the edge device may include more or fewer components, which is not limited here. For content not shown or described in this embodiment of the present invention, refer to the relevant descriptions in the embodiments of FIG. 1 to FIG. 3 above; details are not repeated here.

By implementing the embodiments of the present invention, different trained neural network models can be designed for different edge devices or for the scenarios in which edge devices are deployed. This facilitates subsequent data processing with a neural network model suited to the edge device or its deployment scenario, improving the accuracy of data processing.

An embodiment of the present invention further provides a data processing system, which includes the data center 102 and the edge device 104 shown in FIG. 2 above, where the initial neural network model or the trained neural network model is deployed in the data center 102. The edge device includes a processor, a memory, a communication interface, and a bus; the processor, the communication interface, and the memory communicate with one another through the bus; the communication interface is used to receive and send data; the memory is used to store instructions; and the processor is used to call the instructions in the memory to perform all or some of the implementation steps described in the method embodiment of FIG. 3, which are not repeated here.

The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the above embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded or executed on a computer, the processes or functions described in the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wirelessly (for example, infrared, radio, or microwave). The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device such as a server or data center containing one or more sets of usable media. The usable media may be magnetic media (for example, floppy disks, hard disks, or magnetic tapes), optical media (for example, DVDs), or semiconductor media; a semiconductor medium may be a solid-state drive (SSD).

The above descriptions are merely specific embodiments of the present invention. Any variations or substitutions that a person skilled in the art can conceive based on the specific embodiments provided by the present invention shall fall within the protection scope of the present invention.

Claims (12)

1. A data processing method, wherein the method comprises:
an edge device acquiring an initial neural network model, wherein the initial neural network model comprises at least one neuron, and the neuron is used to process input data of the initial neural network model to obtain activation information of the neuron; and
the edge device inputting data to be processed into a trained neural network model to obtain result data;
wherein the result data is obtained by processing the data to be processed with neurons in the trained neural network model; the trained neural network model is obtained by training a pruned neural network model with N groups of training samples; the pruned neural network model is obtained by pruning the at least one neuron of the initial neural network model according to the respective activation information of the at least one neuron; the activation information is the respective information of the at least one neuron when data processing is performed with the at least one neuron of the initial neural network model; and N is a positive integer.

2. The method according to claim 1, wherein a training sample comprises input data and output data, and the output data is obtained by computing on the input data with the initial neural network model.

3. The method according to claim 1 or 2, wherein the activation information comprises at least one of the following: an activation value, an activation count, and an average activation value; and
pruning the at least one neuron of the initial neural network model according to the respective activation information of the at least one neuron comprises:
when the activation information comprises an activation value, deleting, according to the respective activation values of the at least one neuron, the neurons of the initial neural network model whose activation value is less than or equal to a first threshold;
when the activation information comprises an activation count, deleting, according to the respective activation counts of the at least one neuron, the neurons of the initial neural network model whose activation count is less than or equal to a second threshold; and
when the activation information comprises an average activation value, deleting, according to the respective average activation values of the at least one neuron, the neurons of the initial neural network model whose average activation value is less than or equal to a third threshold;
wherein the activation value is the output value of a neuron each time data processing is performed with that neuron of the initial neural network model; the activation count is the number of times, over M uses of the neuron of the initial neural network model for data processing, that the neuron's activation value is greater than or equal to a fourth threshold; the average activation value is the mean of the neuron's activation values over M uses of the neuron of the initial neural network model for data processing; and M is a positive integer less than or equal to N.

4. The method according to any one of claims 1 to 3, wherein the trained neural network model being obtained by training the pruned neural network model with N groups of training samples comprises:
the trained neural network model being obtained by updating parameters of the pruned neural network model with a loss function based on the N groups of training samples;
wherein the loss function indicates the error loss between training data and prediction data; the training data is the data output before the fully connected layer when the input data of a training sample is fed into the initial neural network model; and the prediction data is the data output before the fully connected layer when the input data of the training sample is fed into the pruned neural network model.

5. The method according to any one of claims 1 to 4, wherein the edge device acquiring the initial neural network model comprises:
the edge device acquiring the initial neural network model from a data center.

6. An edge device, wherein the edge device comprises an acquisition module and a processing module;
the acquisition module is configured to acquire an initial neural network model, wherein the initial neural network model comprises at least one neuron, and the neuron is used to process input data of the initial neural network model to obtain activation information of the neuron; and
the processing module is configured to input data to be processed into a trained neural network model to obtain result data;
wherein the result data is obtained by processing the data to be processed with neurons in the trained neural network model; the trained neural network model is obtained by training a pruned neural network model with N groups of training samples; the pruned neural network model is obtained by pruning the at least one neuron of the initial neural network model according to the respective activation information of the at least one neuron; the activation information is the respective information of the at least one neuron when data processing is performed with the at least one neuron of the initial neural network model; and N is a positive integer.

7. The edge device according to claim 6, wherein a training sample comprises input data and output data, and the output data is obtained by computing on the input data with the initial neural network model.

8. The edge device according to claim 6 or 7, wherein the activation information comprises at least one of the following: an activation value, an activation count, and an average activation value; and
pruning the at least one neuron of the initial neural network model according to the respective activation information of the at least one neuron comprises:
when the activation information comprises an activation value, deleting, according to the respective activation values of the at least one neuron, the neurons of the initial neural network model whose activation value is less than or equal to a first threshold;
when the activation information comprises an activation count, deleting, according to the respective activation counts of the at least one neuron, the neurons of the initial neural network model whose activation count is less than or equal to a second threshold; and
when the activation information comprises an average activation value, deleting, according to the respective average activation values of the at least one neuron, the neurons of the initial neural network model whose average activation value is less than or equal to a third threshold;
wherein the activation value is the output value of a neuron each time data processing is performed with that neuron of the initial neural network model; the activation count is the number of times, over M uses of the neuron of the initial neural network model for data processing, that the neuron's activation value is greater than or equal to a fourth threshold; the average activation value is the mean of the neuron's activation values over M uses of the neuron of the initial neural network model for data processing; and M is a positive integer less than or equal to N.

9. The edge device according to any one of claims 6 to 8, wherein the trained neural network model is further obtained by updating parameters of the pruned neural network model with a loss function;
wherein the loss function indicates the error loss between training data and prediction data; the training data is the data output before the fully connected layer when the input data of a training sample is fed into the initial neural network model; and the prediction data is the data output before the fully connected layer when the input data of the training sample is fed into the pruned neural network model.

10. The edge device according to any one of claims 6 to 9, wherein the acquisition module is specifically configured to acquire the initial neural network model from a data center.

11. An edge device, comprising a memory and a processor coupled to the memory, wherein the memory is configured to store instructions and the processor is configured to execute the instructions, and when executing the instructions the processor performs the steps of the method according to any one of claims 1 to 5.

12. A data processing system, comprising a data center and an edge device, wherein the data center is configured to store the initial neural network model and to train the initial neural network model to obtain a trained neural network model, and the edge device is the edge device according to any one of claims 6 to 9, or the edge device according to claim 11.
A data processing system, comprising a data center and an edge device, wherein the data center is used to store the initial neural network model and train the initial neural network model to Obtain the trained neural network model; the edge device is the edge device according to any one of claims 6-9 above; or, the edge device according to claim 11 above.
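Claims 3 and 8 describe pruning by per-neuron activation statistics (activation value, activation count, or average activation value against a threshold), and claims 4 and 9 describe fine-tuning the pruned model with an error loss over the features output before the fully connected layer. The following is a minimal NumPy sketch of both ideas, not the patented implementation: the names are illustrative, reading the "activation value" criterion as a peak over the M passes is an assumption, and the mean-squared-error form of the loss is an assumed, common choice the claims do not specify.

```python
import numpy as np

def prune_mask(activations, mode="average", threshold=0.0, fire_threshold=0.5):
    """activations: (M, num_neurons) recorded outputs over M passes.
    Returns a boolean mask: True = keep the neuron, False = delete it."""
    acts = np.asarray(activations, dtype=float)
    if mode == "value":
        # Assumed reading of the "first threshold" test: keep a neuron only
        # if some pass produced an activation above the threshold.
        stat = acts.max(axis=0)
    elif mode == "count":
        # Claim 8's "fourth threshold" decides whether a pass counts as an
        # activation; the resulting count faces the "second threshold".
        stat = (acts >= fire_threshold).sum(axis=0)
    elif mode == "average":
        # Mean activation over the M passes vs. the "third threshold".
        stat = acts.mean(axis=0)
    else:
        raise ValueError(f"unknown mode: {mode}")
    return stat > threshold  # <= threshold -> pruned

def feature_loss(teacher_feats, student_feats):
    # Claims 4 and 9 compare the data output before the fully connected
    # layer of the initial (teacher) and pruned (student) models; MSE is
    # one plausible error loss (assumption).
    t = np.asarray(teacher_feats, dtype=float)
    s = np.asarray(student_feats, dtype=float)
    return float(((t - s) ** 2).mean())

# Activations of 3 neurons recorded over M = 2 data-processing passes.
acts = np.array([[0.9, 0.0, 0.2],
                 [0.8, 0.1, 0.0]])
keep = prune_mask(acts, mode="average", threshold=0.05)  # neuron 1 averages 0.05 and is pruned
loss = feature_loss([[1.0, 2.0]], [[1.0, 4.0]])
```

The threshold comparison is "less than or equal to deletes", mirroring the claim language, which is why the mask uses a strict `>` for survivors.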
CN201811016411.4A 2018-08-31 2018-08-31 Data processing method, apparatus, device and system Pending CN110874550A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811016411.4A CN110874550A (en) 2018-08-31 2018-08-31 Data processing method, apparatus, device and system
PCT/CN2019/085468 WO2020042658A1 (en) 2018-08-31 2019-05-05 Data processing method, device, apparatus, and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811016411.4A CN110874550A (en) 2018-08-31 2018-08-31 Data processing method, apparatus, device and system

Publications (1)

Publication Number Publication Date
CN110874550A true CN110874550A (en) 2020-03-10

Family

ID=69642635

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811016411.4A Pending CN110874550A (en) 2018-08-31 2018-08-31 Data processing method, apparatus, device and system

Country Status (2)

Country Link
CN (1) CN110874550A (en)
WO (1) WO2020042658A1 (en)


Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020183568A1 (en) * 2019-03-11 2020-09-17 三菱電機株式会社 Driving assistance device and driving assistance method
CN111476364B (en) * 2020-03-18 2025-06-20 深圳赛安特技术服务有限公司 Image processing method and related equipment
CN111522657B (en) * 2020-04-14 2022-07-22 北京航空航天大学 A Decentralized Device Collaborative Deep Learning Inference Method
CN111967591B (en) * 2020-06-29 2024-07-02 上饶市纯白数字科技有限公司 Automatic pruning method and device for neural network and electronic equipment
CN111783997B (en) * 2020-06-29 2024-04-23 杭州海康威视数字技术股份有限公司 Data processing method, device and equipment
CN113935390A (en) * 2020-06-29 2022-01-14 中兴通讯股份有限公司 Data processing method, system, device and storage medium
CN112001483A (en) * 2020-08-14 2020-11-27 广州市百果园信息技术有限公司 A method and apparatus for pruning a neural network model
CN114254745B (en) * 2020-09-25 2025-06-13 北京四维图新科技股份有限公司 Pruning method, data processing method and device
CN112784967B (en) * 2021-01-29 2023-07-25 北京百度网讯科技有限公司 Information processing method and device and electronic equipment
CN112786028B (en) * 2021-02-07 2024-03-26 百果园技术(新加坡)有限公司 Acoustic model processing method, apparatus, device and readable storage medium
CN113011581B (en) * 2021-02-23 2023-04-07 北京三快在线科技有限公司 Neural network model compression method and device, electronic equipment and readable storage medium
CN116822635B (en) * 2023-05-12 2024-09-24 中国科学院深圳先进技术研究院 Track generation method, device, equipment and storage medium
CN119892625B (en) * 2025-03-28 2025-06-13 浙江华和万润信息科技有限公司 Information management method, system, terminal and storage medium based on cloud platform

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105512723A (en) * 2016-01-20 2016-04-20 南京艾溪信息科技有限公司 Artificial neural network calculating device and method for sparse connection
CN105640577A (en) * 2015-12-16 2016-06-08 深圳市智影医疗科技有限公司 Method and system automatically detecting local lesion in radiographic image
US20170061281A1 (en) * 2015-08-27 2017-03-02 International Business Machines Corporation Deep neural network training with native devices
US20170286830A1 (en) * 2016-04-04 2017-10-05 Technion Research & Development Foundation Limited Quantized neural network training and inference
CN107239825A (en) * 2016-08-22 2017-10-10 北京深鉴智能科技有限公司 Consider the deep neural network compression method of load balancing
CN107609598A (en) * 2017-09-27 2018-01-19 武汉斗鱼网络科技有限公司 Image authentication model training method, device and readable storage medium storing program for executing
US20180114114A1 (en) * 2016-10-21 2018-04-26 Nvidia Corporation Systems and methods for pruning neural networks for resource efficient inference
CN108229533A (en) * 2017-11-22 2018-06-29 深圳市商汤科技有限公司 Image processing method, model pruning method, device and equipment
CN108416440A (en) * 2018-03-20 2018-08-17 上海未来伙伴机器人有限公司 A kind of training method of neural network, object identification method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107247989B (en) * 2017-06-15 2020-11-24 北京图森智途科技有限公司 A real-time computer vision processing method and device
CN108229679A (en) * 2017-11-23 2018-06-29 北京市商汤科技开发有限公司 Convolutional neural networks de-redundancy method and device, electronic equipment and storage medium


Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111523640A (en) * 2020-04-09 2020-08-11 北京百度网讯科技有限公司 Training method and device of neural network model
CN111523640B (en) * 2020-04-09 2023-10-31 北京百度网讯科技有限公司 Training method and device for neural network model
US11888705B2 (en) 2020-04-30 2024-01-30 EMC IP Holding Company LLC Method, device, and computer program product for processing data
CN113592059A (en) * 2020-04-30 2021-11-02 伊姆西Ip控股有限责任公司 Method, apparatus and computer program product for processing data
WO2021218095A1 (en) * 2020-04-30 2021-11-04 深圳市商汤科技有限公司 Image processing method and apparatus, and electronic device and storage medium
CN112085281B (en) * 2020-09-11 2023-03-10 支付宝(杭州)信息技术有限公司 Method and device for detecting safety of business prediction model
CN112085281A (en) * 2020-09-11 2020-12-15 支付宝(杭州)信息技术有限公司 Method and device for detecting safety of business prediction model
CN114422380B (en) * 2020-10-09 2023-06-09 维沃移动通信有限公司 Neural network information transmission method, device, communication equipment and storage medium
CN114422380A (en) * 2020-10-09 2022-04-29 维沃移动通信有限公司 Neural network information transmission method, device, communication equipment and storage medium
WO2022126902A1 (en) * 2020-12-18 2022-06-23 平安科技(深圳)有限公司 Model compression method and apparatus, electronic device, and medium
CN114692816A (en) * 2020-12-31 2022-07-01 华为技术有限公司 Processing method and device for neural network model
CN114692816B (en) * 2020-12-31 2023-08-25 华为技术有限公司 Neural network model processing method and device
WO2023279975A1 (en) * 2021-07-06 2023-01-12 华为技术有限公司 Model processing method, federated learning method, and related device
CN114925821A (en) * 2022-01-05 2022-08-19 华为技术有限公司 Compression method of neural network model and related system
CN114925821B (en) * 2022-01-05 2023-06-27 华为技术有限公司 A neural network model compression method and related system

Also Published As

Publication number Publication date
WO2020042658A1 (en) 2020-03-05

Similar Documents

Publication Publication Date Title
CN110874550A (en) Data processing method, apparatus, device and system
CN110321910B Point cloud-oriented feature extraction method, apparatus and device
CN111401516B (en) Searching method for neural network channel parameters and related equipment
CN111797983B (en) A method and device for constructing a neural network
CN110414401B (en) PYNQ-based intelligent monitoring system and monitoring method
US20160283864A1 (en) Sequential image sampling and storage of fine-tuned features
US11551076B2 (en) Event-driven temporal convolution for asynchronous pulse-modulated sampled signals
WO2016036664A1 (en) Event-driven spatio-temporal short-time fourier transform processing for asynchronous pulse-modulated sampled signals
CN111738403B (en) A neural network optimization method and related equipment
CN112905997B (en) Method, device and system for detecting poisoning attack facing deep learning model
CN117157678A (en) Methods and systems for graph-based panoramic segmentation
CN110210513A (en) Data classification method, device and terminal device
CN109446897B (en) Scene recognition method and device based on image context information
WO2024041479A1 (en) Data processing method and apparatus
CN112668631B (en) Mobile terminal community pet identification method based on convolutional neural network
CN108875693A (en) A kind of image processing method, device, electronic equipment and its storage medium
CN116432736A (en) Neural network model optimization method, device and computing equipment
CN111079837B (en) Method for detecting, identifying and classifying two-dimensional gray level images
CN111079507A (en) Behavior recognition method and device, computer device and readable storage medium
CN109376736A (en) A video small object detection method based on deep convolutional neural network
CN114742224A (en) Pedestrian re-identification method, device, computer equipment and storage medium
CN111783688A (en) A classification method of remote sensing image scene based on convolutional neural network
CN115018039A (en) Neural network distillation method, target detection method and device
CN110163206A (en) Licence plate recognition method, system, storage medium and device
CN112949590B (en) Cross-domain pedestrian re-identification model construction method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200310