WO2020216227A9 - Image classification method and apparatus, and data processing method and apparatus - Google Patents

Image classification method and apparatus, and data processing method and apparatus

Info

Publication number
WO2020216227A9
Authority
WO
WIPO (PCT)
Prior art keywords
mask
tensors
convolution
image
groups
Prior art date
Application number
PCT/CN2020/086015
Other languages
French (fr)
Chinese (zh)
Other versions
WO2020216227A1 (en)
Inventor
韩凯 (Kai Han)
王云鹤 (Yunhe Wang)
许春景 (Chunjing Xu)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2020216227A1 publication Critical patent/WO2020216227A1/en
Publication of WO2020216227A9 publication Critical patent/WO2020216227A9/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • The N groups of mask tensors can be read from a register relatively quickly (reading parameters from a register is faster than fetching them from external storage), which can improve the execution speed of the above method to a certain extent.
  • the convolution kernel parameters of the M reference convolution kernels and the N groups of mask tensors are obtained by training the neural network on training images.
  • processing the multimedia data according to multiple convolution feature maps of the multimedia data includes: classifying or identifying the multimedia data according to the multiple convolution feature maps of the multimedia data.
  • the above-mentioned multimedia data is text, sound, picture (image), video, animation, etc.
  • performing deconvolution processing on multiple convolution feature maps of the road image to obtain the semantic segmentation result of the road image includes: splicing the multiple convolution feature maps of the road image to obtain a target convolution feature map of the road image, and performing deconvolution on the target convolution feature map of the road image to obtain the semantic segmentation result of the road image.
  • each group of mask tensors in the N groups of mask tensors is composed of multiple mask tensors, and the number of bits occupied by the elements of the N groups of mask tensors is less than the number of bits occupied by the elements of the convolution kernel parameters of the M reference convolution kernels.
  • Each reference convolution kernel in the M reference convolution kernels corresponds to a group of mask tensors in the N groups of mask tensors.
  • FIG. 7 is a schematic flowchart of an image classification method according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram of the process of image classification using a neural network.
  • FIG. 13 is a schematic diagram of the hardware structure of the neural network training device according to an embodiment of the present application.
  • Object detection on terminal equipment
  • when a user takes a selfie with a mobile phone, the mobile phone can automatically recognize the face according to the neural network model and automatically capture it to generate a prediction box.
  • the neural network model in Figure 4 can be a target detection convolutional neural network model located in a mobile phone.
  • the target detection convolutional neural network model has fewer parameters (its convolution kernels contain fewer parameters) and can therefore be deployed on mobile phones with limited storage resources.
  • the prediction box shown in FIG. 4 is only for illustration; for ease of understanding, the prediction box is shown directly in the picture. In fact, the prediction box is displayed on the shooting interface of the mobile phone.
  • the camera of an autonomous vehicle will capture the road image in real time.
  • the smart device in the autonomous vehicle needs to segment the captured road image to separate the road surface, roadbed, vehicles, pedestrians, and other objects, and feed this information back to the control system of the autonomous vehicle so that it can drive in the correct road area. Since autonomous driving has extremely high safety requirements, the smart device in the autonomous vehicle needs to be able to quickly process and analyze captured real-time road images to obtain semantic segmentation results.
  • A deep neural network (DNN) is also called a multi-layer neural network.
  • DNN can be understood as a neural network with multiple hidden layers.
  • the layers of a DNN are divided according to their positions.
  • the layers of a DNN can be divided into three categories: input layer, hidden layers, and output layer.
  • the first layer is the input layer, the last layer is the output layer, and all the layers in between are hidden layers.
  • the layers are fully connected, that is, any neuron in the i-th layer is connected to every neuron in the (i+1)-th layer.
  • the coefficient from the k-th neuron in the (L-1)-th layer to the j-th neuron in the L-th layer is defined as the weight W_jk^L.
  • during training, the neural network can use an error back propagation (BP) algorithm to adjust the parameters of the initial neural network model so that the reconstruction error loss of the model becomes smaller and smaller. Specifically, the input signal is propagated forward until the output produces an error loss, and the parameters of the initial neural network model are updated by back-propagating the error loss information so that the error loss converges.
  • the back propagation algorithm is a back propagation process dominated by the error loss, and aims to obtain the optimal parameters of the neural network model, such as the weight matrices.
  • the I/O interface 112 returns the processing result, such as the denoised image obtained as described above, to the client device 140 to provide it to the user.
  • the initial convolutional layers (such as 221) often extract more general features, which can also be called low-level features; as the depth of the convolutional neural network increases, the features extracted by the subsequent convolutional layers (for example, 226) become more and more complex, such as high-level semantic features, and features with higher-level semantics are more suitable for the problem to be solved.
  • the maximum pooling operator can take the pixel with the largest value within a specific range as the result of the maximum pooling.
  • the operators in the pooling layer should also be related to the image size.
  • the size of the image output after processing by the pooling layer can be smaller than the size of the image of the input pooling layer, and each pixel in the image output by the pooling layer represents the average value or the maximum value of the corresponding sub-region of the image input to the pooling layer.
  • the arithmetic circuit fetches the data corresponding to matrix B from the weight memory 502 and buffers it on each PE in the arithmetic circuit.
  • the arithmetic circuit fetches the data of matrix A from the input memory 501 and performs matrix operations with matrix B, and the partial or final results of the obtained matrix are stored in an accumulator 508.
  • the execution device 110 in FIG. 1 introduced above can execute each step of the image classification method or data processing method of the embodiment of this application.
  • the CNN model shown in FIG. 2 and the chip shown in FIG. 3 can also be used to execute the methods of the embodiments of this application.
  • the image classification method of the embodiment of the present application and the data processing method of the embodiment of the present application will be described in detail below with reference to the accompanying drawings.
  • the size of each mask tensor in the first group of mask tensors is the same as the size of the first reference convolution kernel.
  • all mask tensors in each group of mask tensors in the foregoing N groups of mask tensors satisfy pairwise orthogonality.
  • in the second way, convolution processing is first performed on the image to be processed according to the M reference convolution kernels to obtain M reference convolution feature maps, and then multiple convolution feature maps of the image to be processed are obtained according to the M reference convolution feature maps and the N groups of mask tensors.
  • F_11 to F_ks denote the multiple sub-convolution kernels, X denotes the image block to be processed, ⊙ denotes the element-wise (Hadamard) multiplication operation, Y denotes the convolution feature map obtained by the convolution, B_i denotes the i-th reference convolution kernel, and M_j denotes the j-th mask tensor.
  • suppose the sizes of these 3 convolution feature maps are c1×d1×d2, c2×d1×d2, and c3×d1×d2; then the target feature map obtained by splicing them has size c×d1×d2, where c = c1+c2+c3.
  • compared with the elements of the reference convolution kernel parameters, the elements of the mask tensors occupy less storage space; therefore, obtaining the sub-convolution kernels by combining the reference convolution kernels with the mask tensors reduces the number of convolution kernel parameters and realizes compression of the convolution kernel parameters, so that the neural network can be deployed on devices with limited storage resources to perform image classification tasks.
  • sub-convolution kernel A, sub-convolution kernel B, and sub-convolution kernel C are essentially convolution kernels in a neural network and are used to perform convolution processing on input data.
  • sub-convolution kernel A performs convolution processing on the input data to obtain feature map A, sub-convolution kernel B performs convolution processing on the input data to obtain feature map B, and sub-convolution kernel C performs convolution processing on the input data to obtain feature map C.
  • the updates to the convolution kernel parameters of the first reference convolution kernel and to the parameters of the first group of mask tensors can be determined from their gradients together with parameters such as the learning rate. After S6 is executed, S2 to S5 can be executed repeatedly until the preset loss function converges.
  • the data processing method shown in Figure 12 can be applied to the scene shown in Figure 5.
  • the multimedia data is a face image.
  • the convolution feature map of the face image can be obtained.
  • the identity of the person being photographed can be determined.
  • FIG. 15 is a schematic diagram of the hardware structure of a data processing device according to an embodiment of the present application.
  • the data processing device 5000 shown in FIG. 15 is similar to the image classification device 4000 in FIG. 14.
  • the data processing device 5000 includes a memory 5001, a processor 5002, a communication interface 5003, and a bus 5004. Among them, the memory 5001, the processor 5002, and the communication interface 5003 implement communication connections between each other through the bus 5004.
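The core operation described in the bullets above, forming sub-convolution kernels as Hadamard products of a reference convolution kernel with a group of mask tensors, can be sketched as follows. This is a minimal illustration, not the patent's implementation: the kernel shape, the number of masks T, and the use of binary {0, 1} mask elements are all assumptions chosen for the example (the patent only requires that mask elements occupy fewer bits than the kernel parameters).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes (assumptions, not taken from the patent): one reference
# kernel of size c_in x k x k and a group of T mask tensors of the same size.
c_in, k, T = 8, 3, 4
reference_kernel = rng.standard_normal((c_in, k, k)).astype(np.float32)  # 32-bit parameters
masks = rng.integers(0, 2, size=(T, c_in, k, k)).astype(np.float32)      # 0/1 elements, 1 bit each in principle

# Each sub-convolution kernel is the Hadamard (element-wise) product of the
# reference kernel with one mask tensor from its corresponding group.
sub_kernels = reference_kernel[None, :, :, :] * masks  # shape (T, c_in, k, k)

print(sub_kernels.shape)  # (4, 8, 3, 3)
```

Only the one 32-bit reference kernel and the T low-bit masks need to be stored; the T full-precision sub-kernels are reconstructed on the fly, which is where the storage saving comes from.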

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

Provided in the present application are an image classification method and apparatus, which relate to the field of artificial intelligence, and specifically to the field of computer vision. The image classification method comprises: acquiring the convolution kernel parameters of the reference convolution kernels of a neural network and the mask tensors of the neural network; performing a Hadamard product operation on each reference convolution kernel of the neural network and the mask tensors corresponding to that reference convolution kernel to obtain a plurality of sub-convolution kernels; performing convolution processing on an image to be processed according to the plurality of sub-convolution kernels; and classifying the image to be processed according to the convolution feature maps finally obtained from the convolution, so as to obtain a classification result of the image to be processed. Because a mask tensor occupies less storage space than a convolution kernel, devices with limited storage resources can also deploy a neural network comprising the reference convolution kernels and the mask tensors, such that image classification is realized.

Description

Image classification method, data processing method and apparatus
This application claims priority to Chinese patent application No. 201910335678.8, filed with the Chinese Patent Office on April 24, 2019 and entitled "Image classification method, data processing method and apparatus", the entire content of which is incorporated herein by reference.
Technical field
This application relates to the field of artificial intelligence, and more specifically, to an image classification method, a data processing method, and an apparatus.
Background
With the rapid development of artificial intelligence technology, the processing capability of neural networks has become stronger and stronger, and the number of parameters they contain has grown accordingly. As a result, these neural networks often require a large amount of storage space for their parameters when they are deployed or applied, which hinders the deployment and application of neural networks on devices with limited storage resources.
Take neural networks for image classification as an example. Many neural networks used for image classification (especially neural networks with relatively complex structures and powerful functions) contain a large number of parameters, so it is difficult to deploy them on devices with relatively limited storage space (for example, mobile phones, cameras, and smart home devices), which limits the application of neural networks. Therefore, how to reduce the storage overhead of neural networks is a problem that needs to be solved.
Summary of the invention
This application provides an image classification method, a data processing method, and an apparatus, so that a neural network can be deployed on devices with limited storage resources and perform image classification.
According to a first aspect, an image classification method is provided. The method includes: obtaining convolution kernel parameters of M reference convolution kernels of a neural network; obtaining N groups of mask tensors of the neural network; performing a Hadamard product operation on each of the M reference convolution kernels and the group of mask tensors corresponding to that reference convolution kernel in the N groups of mask tensors, to obtain multiple sub-convolution kernels; performing convolution processing on an image to be processed according to the multiple sub-convolution kernels, to obtain multiple convolution feature maps; and classifying the image to be processed according to the multiple convolution feature maps, to obtain a classification result of the image to be processed.
Here, M and N are both positive integers, each of the N groups of mask tensors is composed of multiple mask tensors, an element of the N groups of mask tensors occupies fewer bits in storage than an element of the convolution kernel parameters of the M reference convolution kernels, and each of the M reference convolution kernels corresponds to one group of mask tensors in the N groups of mask tensors.
In addition, a reference convolution kernel is a relatively basic convolution kernel used to obtain the other sub-convolution kernels of the neural network; it may also be called a base convolution kernel.
The above image classification method may be executed by an image classification apparatus. The image classification apparatus may be an electronic device with an image processing function, such as a mobile terminal (for example, a smartphone), a computer, a personal digital assistant, a wearable device, a vehicle-mounted device, an Internet of Things device, or another device capable of image processing.
Optionally, the above method further includes: obtaining the image to be processed.
The image to be processed may be an image or picture to be classified.
The image to be processed may be obtained either from a camera or from an album.
Specifically, when the above method is executed by an image classification apparatus, the image may be captured by a camera of the image classification apparatus (for example, taken in real time), or the image to be processed may be obtained from an album stored in the internal storage space of the image classification apparatus.
Optionally, the convolution kernel parameters of the M reference convolution kernels are stored in a register.
Optionally, obtaining the convolution kernel parameters of the M reference convolution kernels of the neural network includes: obtaining (reading) the convolution kernel parameters of the M reference convolution kernels of the neural network from a register.
When the convolution kernel parameters of the M reference convolution kernels are stored in a register, they can be obtained from the register relatively quickly (reading from a register is faster than fetching from external storage), which can improve the execution speed of the above method to a certain extent.
Optionally, the N groups of mask tensors are stored in a register.
Optionally, obtaining the N groups of mask tensors of the neural network includes: obtaining (reading) the N groups of mask tensors of the neural network from a register.
When the N groups of mask tensors are stored in a register, they can be obtained from the register relatively quickly (reading parameters from a register is faster than fetching them from external storage), which can improve the execution speed of the above method to a certain extent.
The above register may specifically be a weight memory.
In this application, when the image to be processed is classified, only the convolution kernel parameters of the reference convolution kernels and the corresponding mask tensors need to be obtained from storage; convolution processing of the image to be processed, and hence its classification, can then be realized using the reference convolution kernels and the corresponding mask tensors, without obtaining the parameters of every convolution kernel of the neural network. This reduces the storage overhead incurred when the neural network is deployed, so that the neural network can be deployed on devices with limited storage resources and perform image classification.
Specifically, compared with the elements of the reference convolution kernel parameters, the elements of the mask tensors occupy less storage space. Therefore, obtaining the sub-convolution kernels by combining the reference convolution kernels with the mask tensors reduces the number of convolution kernel parameters and realizes compression of the convolution kernel parameters, so that the neural network can be deployed on devices with limited storage resources to perform image classification tasks.
Optionally, each of the N groups of mask tensors contains T mask tensors, and performing the Hadamard product operation on each of the M reference convolution kernels and its corresponding group of mask tensors to obtain multiple sub-convolution kernels includes: performing the Hadamard product operation on each of the M reference convolution kernels and its corresponding group of mask tensors in the N groups of mask tensors, to obtain M×T sub-convolution kernels.
Specifically, for one reference convolution kernel, performing the Hadamard product operation between that kernel and the T mask tensors in its corresponding group yields T sub-convolution kernels; therefore, for the M reference convolution kernels, a total of M×T sub-convolution kernels can be obtained by performing the Hadamard product operations with the corresponding mask tensors.
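The storage bookkeeping behind the M×T sub-convolution kernels can be made concrete with a small back-of-the-envelope sketch. All the numbers below (M, N, T, the kernel size, and the 1-bit mask elements) are hypothetical choices for illustration; the patent only requires that mask elements occupy fewer bits than the reference-kernel parameters.

```python
# Hypothetical sizes, chosen only to illustrate the bookkeeping: M reference
# kernels of c_in * k * k 32-bit elements each, plus N groups of T mask
# tensors whose elements are assumed to take 1 bit each.
M, N, T = 16, 16, 4
c_in, k = 64, 3
kernel_elems = c_in * k * k  # elements per kernel / per mask tensor

# Compressed: store the reference kernels plus the masks.
bits_compressed = M * kernel_elems * 32 + N * T * kernel_elems * 1
# Uncompressed: store all M*T sub-convolution kernels at full precision.
bits_uncompressed = M * T * kernel_elems * 32

ratio = bits_uncompressed / bits_compressed
print(f"{bits_uncompressed} bits vs {bits_compressed} bits, ~{ratio:.1f}x smaller")
```

With these example numbers the compressed representation is roughly 3.6 times smaller; the saving grows with T, since the T sub-kernels per reference kernel are never stored explicitly.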
Optionally, classifying the image to be processed according to the multiple convolution feature maps to obtain the classification result of the image to be processed includes: splicing the multiple convolution feature maps to obtain a target convolution feature map; and classifying the image to be processed according to the target convolution feature map to obtain the classification result of the image to be processed.
The widths and heights of the multiple convolution feature maps should be the same. Splicing the multiple convolution feature maps essentially superimposes their channels, yielding a target convolution feature map whose number of channels is the sum of the numbers of channels of the multiple convolution feature maps.
For example, suppose there are 3 convolution feature maps whose sizes are c1×d1×d2, c2×d1×d2, and c3×d1×d2. Then the target feature map obtained by splicing these 3 convolution feature maps has size c×d1×d2, where c = c1+c2+c3.
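The splicing step in the example above amounts to channel-wise concatenation, as in this NumPy sketch (the channel counts and spatial size are arbitrary illustrative values):

```python
import numpy as np

# Three feature maps with the same spatial size d1 x d2 but different
# channel counts c1, c2, c3 (arbitrary example values).
c1, c2, c3, d1, d2 = 4, 6, 2, 5, 5
f1 = np.zeros((c1, d1, d2))
f2 = np.ones((c2, d1, d2))
f3 = np.full((c3, d1, d2), 2.0)

# Splicing = stacking along the channel axis; widths and heights must match.
target = np.concatenate([f1, f2, f3], axis=0)
print(target.shape)  # (12, 5, 5), i.e. c = c1 + c2 + c3
```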
It should be understood that each of the M reference convolution kernels corresponds to one group of mask tensors in the N groups of mask tensors, and one group of mask tensors in the N groups may correspond to one or more of the M reference convolution kernels.
With reference to the first aspect, in some implementations of the first aspect, N is less than M, and at least two of the M reference convolution kernels correspond to the same group of mask tensors in the N groups of mask tensors.
When N is less than M, multiple reference convolution kernels correspond to the same group of mask tensors (share a group of mask tensors); this case may be called mask tensor sharing. With mask tensor sharing, some reference convolution kernels are combined with the same mask tensors in the Hadamard product operation to obtain sub-convolution kernels, which further reduces the number of mask tensors and can further reduce the storage overhead.
Further, when N=1, only one group of mask tensors needs to be stored, and the storage saving is even more pronounced.
With reference to the first aspect, in some implementations of the first aspect, N=M, and the M reference convolution kernels are in one-to-one correspondence with the N groups of mask tensors.
Each reference convolution kernel corresponds to one group of mask tensors, and each group of mask tensors corresponds to one reference convolution kernel; this one-to-one correspondence may be called mask tensor independence. In this case, no group of mask tensors is shared between reference convolution kernels. Compared with mask tensor sharing, the independent case contains slightly more parameters, but because each reference convolution kernel is combined with a different group of mask tensors to obtain its sub-convolution kernels, the image features ultimately extracted according to these sub-convolution kernels are more distinguishable and discriminative, which can improve the image classification effect to a certain extent.
With reference to the first aspect, in some implementations of the first aspect, at least some of the mask tensors in at least one of the N groups of mask tensors are pairwise orthogonal.
When two mask tensors are orthogonal, their parameters differ substantially. The sub-convolution kernels obtained by performing the Hadamard product of these two mask tensors with the same or different reference convolution kernels will therefore also differ substantially, so that when convolution processing is performed with the corresponding sub-convolution kernels, the extracted image features are more distinguishable and discriminative, which can improve the image classification effect to a certain extent.
Optionally, all mask tensors in at least one of the N groups of mask tensors are pairwise orthogonal.
When any two mask tensors in at least one of the N groups of mask tensors are pairwise orthogonal, the features extracted from the image by convolution processing according to the reference convolution kernels and the mask tensors are richer, which can improve the final processing effect on the image.
Optionally, all mask tensors in each of the N groups of mask tensors are pairwise orthogonal.
When all mask tensors in each of the N groups of mask tensors are pairwise orthogonal, the features extracted from the image by convolution processing according to the reference convolution kernels and the mask tensors are richer, which can improve the final processing effect on the image.
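Pairwise orthogonality of mask tensors can be checked by flattening each tensor and testing that every pairwise inner product is zero. The sketch below is illustrative only: the patent does not fix the mask values, and the toy 0/1 masks with disjoint non-zero positions are simply one easy way for binary masks to end up pairwise orthogonal.

```python
import numpy as np

def pairwise_orthogonal(masks):
    """Return True if every pair of mask tensors is orthogonal, treating
    each tensor as a flattened vector (inner product equals zero)."""
    flat = [np.asarray(m).reshape(-1) for m in masks]
    return all(
        np.dot(flat[i], flat[j]) == 0
        for i in range(len(flat)) for j in range(i + 1, len(flat))
    )

# Toy 2x2 binary masks with disjoint supports (hypothetical values).
m1 = np.array([[1, 0], [0, 0]])
m2 = np.array([[0, 1], [0, 0]])
m3 = np.array([[0, 0], [1, 1]])
print(pairwise_orthogonal([m1, m2, m3]))  # True
```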
可选地,上述神经网络的基准卷积核由上述M个基准卷积核组成,上述神经网络的掩码张量由上述N组掩码张量组成。Optionally, the reference convolution kernel of the aforementioned neural network is composed of the aforementioned M reference convolution kernels, and the mask tensor of the aforementioned neural network is composed of the aforementioned N groups of mask tensors.
其中,M和N的大小可以根据神经网络构建的情况来确定。例如,上述M和N可以根据神经网络的网络结构的复杂度以及神经网络的应用需求来确定,当上述神经网络的网络结构的复杂度较高或者应用需求较高(例如,对处理能力要求较高)时,可以将M和/或N设置成较大的数值,而当上述神经网络的网络结构比较简单或者应用需求较低(例如,对处理能力要求较低)时,可以将M和/或N设置成较小的数值。The values of M and N may be determined according to how the neural network is constructed. For example, M and N may be determined according to the complexity of the network structure and the application requirements of the neural network: when the network structure is complex or the application requirements are high (for example, high processing capability is required), M and/or N may be set to larger values; when the network structure is relatively simple or the application requirements are low (for example, low processing capability is required), M and/or N may be set to smaller values.
可选地,上述M个基准卷积核大小完全相同或者完全不同或者部分相同。Optionally, the sizes of the aforementioned M reference convolution kernels are completely the same or completely different or partially the same.
当M个基准卷积核中存在不同大小的基准卷积核时,能够最终从待处理图像中提取出更丰富的图像特征。具体地,不同的基准卷积核与相应的掩码张量进行哈达玛积运算时得到的子卷积核一般也不相同,根据这些不同的子卷积核能够从待处理图像中提取出更全面更有区别性的特征。When the M reference convolution kernels include kernels of different sizes, richer image features can ultimately be extracted from the image to be processed. Specifically, the sub-convolution kernels obtained when different reference convolution kernels perform the Hadamard product with their corresponding mask tensors are generally also different, and these different sub-convolution kernels make it possible to extract more comprehensive and more distinctive features from the image to be processed.
进一步的,当M个基准卷积核的大小均不相同时,能够进一步的从待处理图像中提取出更丰富的图像特征,便于后续对待处理图像进行更好的分类。Furthermore, when the M reference convolution kernels all differ in size, even richer image features can be extracted from the image to be processed, facilitating better subsequent classification of the image.
可选地,上述N组掩码张量完全相同或者完全不同或者部分相同。Optionally, the above N groups of mask tensors are completely the same or completely different or partially the same.
应理解,上述N组掩码张量中的每组掩码张量内部包含的各个掩码张量的大小相同。It should be understood that the mask tensors contained within each of the above N groups of mask tensors are of the same size.
上述M个基准卷积核中的每个基准卷积核都会对应N组掩码张量中的一组掩码张量,在本申请中,由于基准卷积核能够与对应的一组掩码张量中的掩码张量进行哈达玛积运算,因此,基准卷积核的大小与其对应的掩码张量的大小相同。只有这样,基准卷积核才能与对应的掩码张量进行哈达玛积运算,从而得到子卷积核。Each of the above M reference convolution kernels corresponds to one of the N groups of mask tensors. In this application, since a reference convolution kernel performs the Hadamard product with the mask tensors in its corresponding group, the size of the reference convolution kernel is the same as the size of its corresponding mask tensors. Only then can the reference convolution kernel perform the Hadamard product with the corresponding mask tensors to obtain the sub-convolution kernels.
可选地,上述N组掩码张量中的任意一组掩码张量与对应的基准卷积核的大小相同。Optionally, any group of mask tensors in the aforementioned N groups of mask tensors has the same size as the corresponding reference convolution kernel.
也就是说,在与某个基准卷积核相对应的一组掩码张量中,每个掩码张量的大小都与对应的基准卷积核的大小相同。That is to say, in a set of mask tensors corresponding to a certain reference convolution kernel, the size of each mask tensor is the same as the size of the corresponding reference convolution kernel.
如果上述N组掩码张量中的第一组掩码张量与M个基准卷积核中的第一基准卷积核相对应,那么,该第一组掩码张量中的每个掩码张量的大小与第一基准卷积核的大小相同。If the first group of mask tensors among the above N groups corresponds to the first reference convolution kernel among the M reference convolution kernels, then the size of each mask tensor in the first group is the same as the size of the first reference convolution kernel.
具体地,如果第一基准卷积核大小为c×d1×d2,其中,c表示通道数,d1和d2分别表示高和宽。那么,第一组掩码张量中的任意一个第一掩码张量的大小也为c×d1×d2(其中,c为通道数,d1和d2分别是高和宽)。Specifically, if the size of the first reference convolution kernel is c×d1×d2, where c denotes the number of channels and d1 and d2 denote the height and width respectively, then the size of any first mask tensor in the first group of mask tensors is also c×d1×d2 (where c is the number of channels, and d1 and d2 are the height and width respectively).
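To make the size relationship concrete, the sketch below (with illustrative values for c, d1, d2 and the number of masks, none taken from this application) forms sub-convolution kernels as the element-wise (Hadamard) product of one c×d1×d2 reference kernel with each mask tensor of identical shape:

```python
import numpy as np

c, d1, d2 = 3, 5, 5                      # channels, height, width (example values)
rng = np.random.default_rng(0)

base_kernel = rng.standard_normal((c, d1, d2)).astype(np.float32)  # reference kernel
# A hypothetical group of s binary mask tensors, each the same size as the kernel.
s = 4
mask_group = rng.choice(np.array([-1, 1], dtype=np.int8), size=(s, c, d1, d2))

# Hadamard product is element-wise multiplication, so the shapes must match exactly.
sub_kernels = base_kernel[None, :, :, :] * mask_group   # shape (s, c, d1, d2)

print(sub_kernels.shape)  # (4, 3, 5, 5)
```

One float32 reference kernel plus s one-bit-per-element masks thus stand in for s independent float32 kernels, which is the source of the storage saving claimed above.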
结合第一方面,在第一方面的某些实现方式中,上述M个基准卷积核的卷积核参数以及N组掩码张量是根据训练图像对神经网络进行训练得到的。With reference to the first aspect, in some implementations of the first aspect, the convolution kernel parameters of the M reference convolution kernels and the N sets of mask tensors are obtained by training the neural network according to the training image.
其中,上述训练图像的图像类别与待处理图像的图像类别相同。例如,当待处理图像为人体运动的图像时,训练图像可以是包含人体各种运动类型的图像。Wherein, the image category of the aforementioned training image is the same as the image category of the image to be processed. For example, when the image to be processed is an image of human motion, the training image may be an image containing various types of human motion.
具体地,在构建神经网络时,可以根据需要构建的网络的性能需求,网络结构的复杂性以及存储相应的卷积核参数和掩码张量的参数需要的存储空间的大小等因素,来确定M和N的数值以及每组掩码张量所包含的掩码张量的个数,然后初始化M个基准卷积核的卷积核参数以及N组掩码张量(也就是为这些基准卷积核和掩码张量设置一个初始值),并构造一个损失函数。接下来,就可以利用训练图像对神经网络进行训练,在训练的过程中可以根据损失函数的大小来更新基准卷积核以及掩码张量中的参数值,当该损失函数收敛或者损失函数的函数值满足要求,或者训练次数达到预设次数时,可以停止训练,将此时基准卷积核和掩码张量中的参数值确定为基准卷积核和掩码张量的最终的参数值,接下来,就可以根据需要将包含相应参数值(也就是训练得到的基准卷积核和掩码张量的最终的参数值)的神经网络部署到需要的设备上去,进而能够利用部署该神经网络的设备进行图像分类。Specifically, when constructing the neural network, the values of M and N and the number of mask tensors contained in each group may be determined according to factors such as the performance requirements of the network to be constructed, the complexity of the network structure, and the storage space required for the corresponding convolution kernel parameters and mask tensor parameters. The convolution kernel parameters of the M reference convolution kernels and the N groups of mask tensors are then initialized (that is, initial values are set for these reference convolution kernels and mask tensors), and a loss function is constructed. Next, the neural network can be trained with the training images; during training, the parameter values in the reference convolution kernels and mask tensors are updated according to the value of the loss function. When the loss function converges, or its value meets the requirements, or the number of training iterations reaches a preset number, training can be stopped, and the parameter values in the reference convolution kernels and mask tensors at that time are taken as their final parameter values. The neural network containing these parameter values (that is, the final parameter values of the reference convolution kernels and mask tensors obtained by training) can then be deployed on the required device as needed, and the device on which the neural network is deployed can be used for image classification.
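The training loop described above can be sketched in miniature. The example below is purely illustrative: a toy least-squares objective on one flattened "kernel", real-valued masks, and manual gradient descent stand in for the actual network, loss function, and optimizer, none of which are specified in this application. It only demonstrates the pattern of initializing the reference-kernel and mask parameters, constructing a loss, and updating both parameter sets until the loss converges or an iteration cap is reached.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8                                   # flattened kernel size (toy value)
x = rng.standard_normal(d)              # one "training sample"
target = 3.0                            # its regression target

base = 0.1 * rng.standard_normal(d)     # initialized reference-kernel parameters
mask = 0.1 * rng.standard_normal(d)     # initialized mask-tensor parameters

lr = 0.02
for _ in range(3000):                   # preset maximum number of iterations
    sub = base * mask                   # Hadamard product -> sub-kernel
    pred = float(sub @ x)               # toy "forward pass"
    loss = (pred - target) ** 2         # constructed loss function
    if loss < 1e-10:                    # stop once the loss has converged
        break
    g = 2.0 * (pred - target)           # dloss/dpred
    grad_base = g * mask * x            # gradients w.r.t. both parameter sets
    grad_mask = g * base * x
    base -= lr * grad_base              # update reference-kernel parameters
    mask -= lr * grad_mask              # update mask-tensor parameters

print(loss < 1e-6)
```

After training, `base` and `mask` together play the role of the final reference-kernel and mask-tensor parameter values that would be deployed to the target device.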
第二方面,提供了一种图像分类方法,该方法包括:获取神经网络的M个基准卷积核的卷积核参数;获取神经网络的N组掩码张量;根据M个基准卷积核对待处理图像进行卷积处理,得到待处理图像的M个基准卷积特征图;对M个基准卷积特征图和N组掩码张量进行哈达玛积运算,得到待处理图像的多个卷积特征图;根据待处理图像的多个卷积特征图对待处理图像进行分类,得到待处理图像的分类结果。In a second aspect, an image classification method is provided. The method includes: obtaining convolution kernel parameters of M reference convolution kernels of a neural network; obtaining N groups of mask tensors of the neural network; performing convolution processing on an image to be processed according to the M reference convolution kernels to obtain M reference convolution feature maps of the image to be processed; performing the Hadamard product operation on the M reference convolution feature maps and the N groups of mask tensors to obtain multiple convolution feature maps of the image to be processed; and classifying the image to be processed according to the multiple convolution feature maps to obtain a classification result of the image to be processed.
其中,上述M和N均为正整数,上述N组掩码张量中的每组掩码张量由多个掩码张量组成,N组掩码张量中的元素存储时占用的比特数小于M个基准卷积核中卷积核参数中的元素存储时占用的比特数,M个基准卷积核中的每个基准卷积核对应N组掩码张量中的一组掩码张量。Here, both M and N are positive integers; each of the N groups of mask tensors consists of multiple mask tensors; the elements of the N groups of mask tensors occupy fewer bits in storage than the elements of the convolution kernel parameters of the M reference convolution kernels; and each of the M reference convolution kernels corresponds to one of the N groups of mask tensors.
可选地,上述方法还包括:获取待处理图像。Optionally, the above method further includes: acquiring the image to be processed.
上述待处理图像可以是待分类的图像或者图片。The foregoing image to be processed may be an image or picture to be classified.
上述获取待处理图像,既可以从摄像头获取,也可以从相册中获取。The image to be processed can be obtained from a camera or an album.
可选地,上述M个基准卷积核的卷积核参数存储在寄存器中。Optionally, the convolution kernel parameters of the aforementioned M reference convolution kernels are stored in a register.
可选地,上述获取神经网络的M个基准卷积核的卷积核参数,包括:从寄存器中获取(读取)神经网络的M个基准卷积核的卷积核参数。Optionally, the foregoing acquiring the convolution kernel parameters of the M reference convolution kernels of the neural network includes: acquiring (reading) the convolution kernel parameters of the M reference convolution kernels of the neural network from a register.
当上述M个基准卷积核的卷积核参数存储在寄存器中时,能够较为快速地从寄存器中获取M个基准卷积核的卷积核参数(相对于从外部存储中获取,从寄存器中获取参数的速度会更快一些),能够在一定程度上提高上述方法执行速度。When the convolution kernel parameters of the M reference convolution kernels are stored in a register, they can be obtained from the register relatively quickly (obtaining parameters from a register is faster than obtaining them from external storage), which can improve the execution speed of the above method to a certain extent.
可选地,上述N组掩码张量存储在寄存器中。Optionally, the above N groups of mask tensors are stored in a register.
可选地,上述获取神经网络的N组掩码张量,包括:从寄存器中获取(读取)神经网络的N组掩码张量。Optionally, the foregoing obtaining N groups of mask tensors of the neural network includes: obtaining (reading) the N groups of mask tensors of the neural network from a register.
当上述N组掩码张量存储在寄存器中时,能够较为快速地从寄存器中获取N组掩码张量(相对于从外部存储中获取,从寄存器中获取参数的速度会更快一些),能够在一定程度上提高上述方法执行速度。When the above N groups of mask tensors are stored in a register, they can be obtained from the register relatively quickly (obtaining parameters from a register is faster than obtaining them from external storage), which can improve the execution speed of the above method to a certain extent.
上述寄存器具体可以是权重存储器。The aforementioned register may specifically be a weight memory.
本申请中,在对待处理图像进行分类处理时,只需要从存储空间中获取基准卷积核的卷积核参数以及相应的掩码张量,就能够利用基准卷积核以及相应的掩码张量实现对待处理图像的卷积处理,进而实现对待处理图像的分类,而不必获取神经网络中每个卷积核的参数,可以减少神经网络部署时产生的存储开销,使得神经网络能够部署在一些存储资源有限的设备上并进行图像分类处理。In this application, when classifying the image to be processed, only the convolution kernel parameters of the reference convolution kernels and the corresponding mask tensors need to be obtained from storage; the reference convolution kernels and the corresponding mask tensors can then be used to perform convolution processing on the image to be processed and thus classify it, without obtaining the parameters of every convolution kernel in the neural network. This can reduce the storage overhead incurred when deploying the neural network, enabling the neural network to be deployed on devices with limited storage resources to perform image classification.
可选地,上述根据多个卷积特征图对待处理图像进行分类,得到待处理图像的分类结果,包括:对多个卷积特征图进行拼接,得到目标卷积特征图;根据目标卷积特征图对待处理图像进行分类,得到待处理图像的分类结果。Optionally, the foregoing classifying of the image to be processed according to the multiple convolution feature maps to obtain a classification result includes: concatenating the multiple convolution feature maps to obtain a target convolution feature map; and classifying the image to be processed according to the target convolution feature map to obtain the classification result of the image to be processed.
结合第二方面,在第二方面的某些实现方式中,N小于M,M个基准卷积核中的至少两个基准卷积核对应N组掩码张量中的一组掩码张量。With reference to the second aspect, in some implementations of the second aspect, N is less than M, and at least two reference convolution kernels in the M reference convolution kernels correspond to one group of mask tensors in the N group of mask tensors.
当N小于M时,会出现多个基准卷积核共同对应同(共享)一组掩码张量的情况,这种情况可以称为掩码张量共享的情况。在掩码张量共享的情况下,在进行哈达玛积运算时,部分基准卷积核会与相同的掩码张量进行运算得到子卷积核,这样能够进一步减少掩码张量的数量,可以进一步的减少存储开销。When N is less than M, multiple reference convolution kernels jointly correspond to (share) the same group of mask tensors; this can be referred to as mask tensor sharing. With mask tensor sharing, some reference convolution kernels are combined with the same mask tensors in the Hadamard product operation to obtain sub-convolution kernels. This further reduces the number of mask tensors and can further reduce the storage overhead.
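The storage saving described above can be illustrated with a back-of-the-envelope calculation. All sizes below are hypothetical examples (not values from this application), assuming float32 reference kernels and 1-bit mask elements:

```python
# Suppose a layer needs 64 sub-convolution kernels of size c x d1 x d2.
c, d1, d2 = 64, 3, 3
elems = c * d1 * d2                 # elements per kernel / per mask

n_sub = 64                          # sub-kernels required by the layer
full_bits = n_sub * elems * 32      # storing 64 independent float32 kernels

# Mask tensors independent: 8 base kernels, 8 groups of 8 masks (M == N).
M, N, per_group = 8, 8, 8
indep_bits = M * elems * 32 + N * per_group * elems * 1   # 1-bit mask elements

# Mask tensor sharing: still 8 base kernels, but only 4 groups (N < M),
# so pairs of base kernels share one group of masks.
M2, N2 = 8, 4
shared_bits = M2 * elems * 32 + N2 * per_group * elems * 1

print(full_bits, indep_bits, shared_bits)  # 1179648 184320 165888
```

The sharing variant stores the fewest bits, at the cost of pairs of sub-kernels being derived from identical masks, which matches the trade-off discussed in the surrounding text.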
结合第二方面,在第二方面的某些实现方式中,N组掩码张量中至少一组掩码张量中的至少部分掩码张量满足两两正交。With reference to the second aspect, in some implementations of the second aspect, at least part of the mask tensors in at least one set of mask tensors in the N sets of mask tensors satisfy pairwise orthogonality.
每个基准卷积核对应一组掩码张量,每一组掩码张量也对应一个基准卷积核,基准卷积核与掩码张量组是一一对应的关系,这种情况可以称为掩码张量独立的情况。在这种情况下,基准卷积核之间并没有共享掩码张量组。相比于掩码张量共享的情况,虽然掩码张量独立的情况下包含的参数量稍微大一点,但是由于每个基准卷积核都是与不同组的掩码张量进行运算来得到子卷积核的,这样就使得最终根据这些子卷积核提取到的图像特征更具区分性和判别性,能够在一定程度上提高图像分类的效果。When each reference convolution kernel corresponds to one group of mask tensors and each group of mask tensors corresponds to one reference convolution kernel, the reference convolution kernels and mask tensor groups are in one-to-one correspondence; this can be referred to as the mask-tensor-independent case. In this case, no group of mask tensors is shared between reference convolution kernels. Compared with mask tensor sharing, the mask-tensor-independent case contains slightly more parameters, but since each reference convolution kernel is combined with a different group of mask tensors to obtain its sub-convolution kernels, the image features ultimately extracted with these sub-convolution kernels are more distinctive and discriminative, which can improve the image classification effect to a certain extent.
结合第二方面,在第二方面的某些实现方式中,上述N组掩码张量中至少一组掩码张量中的至少部分掩码张量满足两两正交。当两个掩码张量满足正交时,说明这两个掩码张量中的参数的差异较大,根据这两个掩码张量与相同基准卷积核或者不同基准卷积核做哈达玛积运算,得到的子卷积核之间的差异也会比较大,从而使得根据相应的子卷积核进行卷积处理时,提取到的图像特征更具区分性和判别性,能够在一定程度上提高图像分类的效果。With reference to the second aspect, in some implementations of the second aspect, at least some of the mask tensors in at least one of the N groups of mask tensors are pairwise orthogonal. When two mask tensors are orthogonal, the parameters in the two mask tensors differ considerably. Consequently, the sub-convolution kernels obtained by performing the Hadamard product of these two mask tensors with the same or different reference convolution kernels also differ considerably, so that the image features extracted when convolution is performed with the corresponding sub-convolution kernels are more distinctive and discriminative, which can improve the image classification effect to a certain extent.
结合第二方面,在第二方面的某些实现方式中,上述M个基准卷积核的卷积核参数以及N组掩码张量是根据训练图像对神经网络进行训练得到的。With reference to the second aspect, in some implementations of the second aspect, the convolution kernel parameters of the M reference convolution kernels and the N sets of mask tensors are obtained by training the neural network according to the training image.
应理解,在上述第一方面中对相关内容的扩展、限定、解释和说明也适用于第二方面中相同的内容,这里在第二方面中不再详细描述。It should be understood that the expansion, limitation, explanation and description of the related content in the above-mentioned first aspect are also applicable to the same content in the second aspect, which will not be described in detail in the second aspect here.
第三方面,提供了一种数据处理方法,该方法包括:获取神经网络的M个基准卷积核的卷积核参数;获取神经网络的N组掩码张量;对M个基准卷积核中的每个基准卷积核,以及每个基准卷积核在N组掩码张量中对应的一组掩码张量进行哈达玛积运算,得到多个子卷积核;根据多个子卷积核分别对多媒体数据进行卷积处理,得到多媒体数据的多个卷积特征图;根据多媒体数据的多个卷积特征图对多媒体数据进行处理。In a third aspect, a data processing method is provided. The method includes: obtaining convolution kernel parameters of M reference convolution kernels of a neural network; obtaining N groups of mask tensors of the neural network; performing the Hadamard product operation on each of the M reference convolution kernels and its corresponding group of mask tensors among the N groups to obtain multiple sub-convolution kernels; performing convolution processing on multimedia data with the multiple sub-convolution kernels respectively to obtain multiple convolution feature maps of the multimedia data; and processing the multimedia data according to its multiple convolution feature maps.
其中,上述M和N均为正整数,上述N组掩码张量中的每组掩码张量由多个掩码张量组成,N组掩码张量中的元素存储时占用的比特数小于M个基准卷积核中卷积核参数中的元素存储时占用的比特数,M个基准卷积核中的每个基准卷积核对应N组掩码张量中的一组掩码张量。Here, both M and N are positive integers; each of the N groups of mask tensors consists of multiple mask tensors; the elements of the N groups of mask tensors occupy fewer bits in storage than the elements of the convolution kernel parameters of the M reference convolution kernels; and each of the M reference convolution kernels corresponds to one of the N groups of mask tensors.
可选地,上述多媒体数据是文字、声音、图片(图像)、视频、动画等等。Optionally, the above multimedia data is text, sound, pictures (images), video, animation, and so on.
可选地,当上述多媒体数据为图像时,根据多媒体数据的多个卷积特征图对多媒体数据进行处理,包括:根据多媒体数据的多个卷积特征图对多媒体数据进行分类或者识别。Optionally, when the foregoing multimedia data is an image, processing the multimedia data according to multiple convolution feature maps of the multimedia data includes: classifying or identifying the multimedia data according to the multiple convolution feature maps of the multimedia data.
可选地,当上述多媒体数据为图像时,根据多媒体数据的多个卷积特征图对多媒体数据进行处理,包括:根据多媒体数据的多个卷积特征图对多媒体数据进行图像处理。Optionally, when the foregoing multimedia data is an image, processing the multimedia data according to multiple convolution feature maps of the multimedia data includes: performing image processing on the multimedia data according to the multiple convolution feature maps of the multimedia data.
例如,对获取到的人脸图像进行卷积处理,得到人脸图像的卷积特征图,然后对该人脸图像的卷积特征图进行处理,生成与人脸表情相对应的动画表情。或者,也可以将其他的表情迁移到输入的人脸图像中再输出。For example, convolution processing is performed on an acquired face image to obtain a convolution feature map of the face image, and the convolution feature map is then processed to generate an animated expression corresponding to the facial expression. Alternatively, other expressions can be transferred onto the input face image before output.
本申请中,在利用神经网络对多媒体数据进行处理时,只需要获取神经网络的基准卷积核的卷积核参数以及相应的掩码张量,就能够利用基准卷积核以及相应的掩码张量实现对待处理数据的卷积处理,从而能够减少利用神经网络进行卷积处理时的存储开销,进而使得神经网络能够部署到更多存储资源受限的设备上并对多媒体数据进行处理。In this application, when a neural network is used to process multimedia data, only the convolution kernel parameters of the reference convolution kernels of the neural network and the corresponding mask tensors need to be obtained; the reference convolution kernels and the corresponding mask tensors can then be used to perform convolution processing on the data to be processed. This reduces the storage overhead of convolution processing with the neural network, enabling the neural network to be deployed on more devices with limited storage resources to process multimedia data.
第四方面,提供了一种数据处理方法,该方法包括:获取神经网络的M个基准卷积核的卷积核参数;获取神经网络的N组掩码张量;根据M个基准卷积核对多媒体数据进行卷积处理,得到多媒体数据的M个基准卷积特征图;对M个基准卷积特征图和N组掩码张量进行哈达玛积运算,得到多媒体数据的多个卷积特征图;根据多媒体数据的多个卷积特征图对多媒体数据进行处理。In a fourth aspect, a data processing method is provided. The method includes: obtaining convolution kernel parameters of M reference convolution kernels of a neural network; obtaining N groups of mask tensors of the neural network; performing convolution processing on multimedia data according to the M reference convolution kernels to obtain M reference convolution feature maps of the multimedia data; performing the Hadamard product operation on the M reference convolution feature maps and the N groups of mask tensors to obtain multiple convolution feature maps of the multimedia data; and processing the multimedia data according to its multiple convolution feature maps.
其中,上述M和N均为正整数,上述N组掩码张量中的每组掩码张量由多个掩码张量组成,N组掩码张量中的元素存储时占用的比特数小于M个基准卷积核中卷积核参数中的元素存储时占用的比特数,M个基准卷积核中的每个基准卷积核对应N组掩码张量中的一组掩码张量。Here, both M and N are positive integers; each of the N groups of mask tensors consists of multiple mask tensors; the elements of the N groups of mask tensors occupy fewer bits in storage than the elements of the convolution kernel parameters of the M reference convolution kernels; and each of the M reference convolution kernels corresponds to one of the N groups of mask tensors.
可选地,上述多媒体数据是文字、声音、图片(图像)、视频、动画等等。Optionally, the above multimedia data is text, sound, pictures (images), video, animation, and so on.
可选地,当上述多媒体数据为图像时,根据多媒体数据的多个卷积特征图对多媒体数据进行处理,包括:根据多媒体数据的多个卷积特征图对多媒体数据进行分类或者识别。Optionally, when the foregoing multimedia data is an image, processing the multimedia data according to multiple convolution feature maps of the multimedia data includes: classifying or identifying the multimedia data according to the multiple convolution feature maps of the multimedia data.
可选地,当上述多媒体数据为图像时,根据多媒体数据的多个卷积特征图对多媒体数据进行处理,包括:根据多媒体数据的多个卷积特征图对多媒体数据进行图像处理。Optionally, when the foregoing multimedia data is an image, processing the multimedia data according to multiple convolution feature maps of the multimedia data includes: performing image processing on the multimedia data according to the multiple convolution feature maps of the multimedia data.
例如,对获取到的人脸图像进行卷积处理,得到人脸图像的卷积特征图,然后对该人脸图像的卷积特征图进行处理,生成与人脸表情相对应的动画表情。For example, convolution processing is performed on the acquired face image to obtain a convolution feature map of the face image, and then the convolution feature map of the face image is processed to generate an animated expression corresponding to the facial expression.
第五方面,提供了一种图像处理方法,该方法包括:获取神经网络的M个基准卷积核的卷积核参数;获取神经网络的N组掩码张量;根据M个基准卷积核的卷积核参数和N组掩码张量对道路画面进行卷积处理,得到道路画面的多个卷积特征图;对道路画面的多个卷积特征图进行反卷积处理,获得道路画面的语义分割结果。In a fifth aspect, an image processing method is provided. The method includes: obtaining convolution kernel parameters of M reference convolution kernels of a neural network; obtaining N groups of mask tensors of the neural network; performing convolution processing on a road image according to the convolution kernel parameters of the M reference convolution kernels and the N groups of mask tensors to obtain multiple convolution feature maps of the road image; and performing deconvolution processing on the multiple convolution feature maps of the road image to obtain a semantic segmentation result of the road image.
其中,上述M和N均为正整数,上述N组掩码张量中的每组掩码张量由多个掩码张量组成,N组掩码张量中的元素存储时占用的比特数小于M个基准卷积核中卷积核参数中的元素存储时占用的比特数,M个基准卷积核中的每个基准卷积核对应N组掩码张量中的一组掩码张量。Here, both M and N are positive integers; each of the N groups of mask tensors consists of multiple mask tensors; the elements of the N groups of mask tensors occupy fewer bits in storage than the elements of the convolution kernel parameters of the M reference convolution kernels; and each of the M reference convolution kernels corresponds to one of the N groups of mask tensors.
本申请中,在利用神经网络对道路画面进行图像处理时,只需要获取神经网络的基准卷积核的卷积核参数以及相应的掩码张量,就能够利用基准卷积核以及相应的掩码张量实现对道路画面的卷积处理,从而能够减少利用神经网络进行卷积处理时的存储开销,进而使得神经网络能够部署到更多存储资源受限的设备上并对道路画面进行图像处理。In this application, when a neural network is used to perform image processing on road images, only the convolution kernel parameters of the reference convolution kernels of the neural network and the corresponding mask tensors need to be obtained; the reference convolution kernels and the corresponding mask tensors can then be used to perform convolution processing on the road image. This reduces the storage overhead of convolution processing with the neural network, enabling the neural network to be deployed on more devices with limited storage resources to perform image processing on road images.
可选地,上述方法还包括:获取道路画面。Optionally, the above method further includes: acquiring a road picture.
上述方法的执行主体可以是自动驾驶车辆中的图像处理装置,上述道路画面可以是路边的监控设备获取到的,也可以是自动驾驶车辆根据摄像头实时获取的图像。The execution subject of the foregoing method may be an image processing device in an autonomous driving vehicle, and the foregoing road image may be acquired by a roadside monitoring device, or may be an image acquired by an autonomous driving vehicle in real time according to a camera.
可选地,上述对道路画面的多个卷积特征图进行反卷积处理,获得道路画面的语义分割结果,包括:对道路画面的多个卷积特征图进行拼接处理,得到道路画面的目标卷积特征图;对道路画面的目标卷积特征图进行反卷积处理,获得道路画面的语义分割结果。Optionally, performing deconvolution processing on the multiple convolution feature maps of the road image to obtain the semantic segmentation result of the road image includes: concatenating the multiple convolution feature maps of the road image to obtain a target convolution feature map of the road image; and performing deconvolution processing on the target convolution feature map of the road image to obtain the semantic segmentation result of the road image.
上述道路画面的多个卷积特征图的宽和高应当是相同的,上述对多个卷积特征图进行拼接实质上就是将上述多个卷积特征图的通道数叠加,得到一个通道数是多个卷积特征图的通道数总和的目标卷积特征图。The widths and heights of the multiple convolution feature maps of the road image should be the same. Concatenating the multiple convolution feature maps essentially stacks their channels, yielding a target convolution feature map whose number of channels is the sum of the numbers of channels of the multiple convolution feature maps.
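The splicing described above (channel-wise stacking of feature maps that share the same width and height) corresponds to concatenation along the channel axis. A minimal sketch with hypothetical shapes in a channels-first (C, H, W) layout:

```python
import numpy as np

h, w = 16, 16                                  # all feature maps share height/width
fmaps = [np.zeros((4, h, w)),                  # 4-channel feature map
         np.zeros((8, h, w)),                  # 8-channel feature map
         np.zeros((4, h, w))]                  # 4-channel feature map

target_fmap = np.concatenate(fmaps, axis=0)    # stack along the channel axis
print(target_fmap.shape)  # (16, 16, 16): 4 + 8 + 4 channels, same H and W
```

If the feature maps did not share the same height and width, this concatenation would fail, which is why the text requires equal widths and heights before splicing.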
可选地,上述根据M个基准卷积核的卷积核参数和N组掩码张量对道路画面进行卷积处理,得到道路画面的多个卷积特征图,包括:对M个基准卷积核中的每个基准卷积核,以及每个基准卷积核在N组掩码张量中对应的一组掩码张量进行哈达玛积运算,得到多个子卷积核;根据多个子卷积核分别对道路画面进行卷积处理,得到道路画面的多个卷积特征图。Optionally, performing convolution processing on the road image according to the convolution kernel parameters of the M reference convolution kernels and the N groups of mask tensors to obtain multiple convolution feature maps of the road image includes: performing the Hadamard product operation on each of the M reference convolution kernels and its corresponding group of mask tensors among the N groups to obtain multiple sub-convolution kernels; and performing convolution processing on the road image with the multiple sub-convolution kernels respectively to obtain the multiple convolution feature maps of the road image.
可选地,上述根据M个基准卷积核的卷积核参数和N组掩码张量对道路画面进行卷积处理,得到道路画面的多个卷积特征图,包括:根据M个基准卷积核对道路画面进行卷积处理,得到道路画面的M个基准卷积特征图;对M个基准卷积特征图和N组掩码张量进行哈达玛积运算,得到道路画面的多个卷积特征图。Optionally, performing convolution processing on the road image according to the convolution kernel parameters of the M reference convolution kernels and the N groups of mask tensors to obtain multiple convolution feature maps of the road image includes: performing convolution processing on the road image according to the M reference convolution kernels to obtain M reference convolution feature maps of the road image; and performing the Hadamard product operation on the M reference convolution feature maps and the N groups of mask tensors to obtain the multiple convolution feature maps of the road image.
通过先采用基准卷积核对道路画面进行卷积处理,在得到一个基准的卷积特征图之后,再结合掩码张量来获取道路画面的多个卷积特征图,能够减少卷积计算的次数,可以在一定程度上起到降低运算量的效果。By first performing convolution processing on the road image with the reference convolution kernels to obtain reference convolution feature maps, and only then combining them with the mask tensors to obtain the multiple convolution feature maps of the road image, the number of convolution computations can be reduced, which lowers the computational load to a certain extent.
第六方面,提供了一种图像处理方法,该方法包括:获取神经网络的M个基准卷积核的卷积核参数;获取神经网络的N组掩码张量;对M个基准卷积核中的每个基准卷积核,以及每个基准卷积核在N组掩码张量中对应的一组掩码张量进行哈达玛积运算,得到多个子卷积核;根据多个子卷积核分别对人脸图像进行卷积处理,得到人脸图像的多个卷积特征图;将人脸图像的多个卷积特征图与人脸图像对应身份证件的图像的卷积特征图进行对比,得到人脸图像的验证结果。In a sixth aspect, an image processing method is provided. The method includes: obtaining convolution kernel parameters of M reference convolution kernels of a neural network; obtaining N groups of mask tensors of the neural network; performing the Hadamard product operation on each of the M reference convolution kernels and its corresponding group of mask tensors among the N groups to obtain multiple sub-convolution kernels; performing convolution processing on a face image with the multiple sub-convolution kernels respectively to obtain multiple convolution feature maps of the face image; and comparing the multiple convolution feature maps of the face image with the convolution feature maps of the image of the identity document corresponding to the face image to obtain a verification result for the face image.
其中,上述M和N均为正整数,上述N组掩码张量中的每组掩码张量由多个掩码张量组成,N组掩码张量中的元素存储时占用的比特数小于M个基准卷积核中卷积核参数中的元素存储时占用的比特数,M个基准卷积核中的每个基准卷积核对应N组掩码张量中的一组掩码张量。Here, both M and N are positive integers; each of the N groups of mask tensors consists of multiple mask tensors; the elements of the N groups of mask tensors occupy fewer bits in storage than the elements of the convolution kernel parameters of the M reference convolution kernels; and each of the M reference convolution kernels corresponds to one of the N groups of mask tensors.
本申请中,在利用神经网络对人脸图像进行图像处理时,只需要获取神经网络的基准卷积核的卷积核参数以及相应的掩码张量,就能够利用基准卷积核以及相应的掩码张量实现对人脸图像的卷积处理,从而能够减少利用神经网络进行卷积处理时的存储开销,进而使得神经网络能够部署到更多存储资源受限的设备上并对人脸图像进行图像处理。In this application, when a neural network is used to perform image processing on a face image, only the convolution kernel parameters of the reference convolution kernels of the neural network and the corresponding mask tensors need to be obtained; the reference convolution kernels and the corresponding mask tensors can then be used to perform convolution processing on the face image. This reduces the storage overhead of convolution processing with the neural network, enabling the neural network to be deployed on more devices with limited storage resources to perform image processing on face images.
可选地,上述方法还包括:获取人脸图像。Optionally, the above method further includes: acquiring a face image.
可选地,上述根据M个基准卷积核的卷积核参数和N组掩码张量对人脸图像进行卷积处理,得到人脸图像的多个卷积特征图,包括:对M个基准卷积核中的每个基准卷积核,以及每个基准卷积核在N组掩码张量中对应的一组掩码张量进行哈达玛积运算,得到多个子卷积核;根据多个子卷积核分别对人脸图像进行卷积处理,得到人脸图像的多个卷积特征图。Optionally, performing convolution processing on the face image according to the convolution kernel parameters of the M reference convolution kernels and the N groups of mask tensors to obtain multiple convolution feature maps of the face image includes: performing the Hadamard product operation on each of the M reference convolution kernels and its corresponding group of mask tensors among the N groups to obtain multiple sub-convolution kernels; and performing convolution processing on the face image with the multiple sub-convolution kernels respectively to obtain the multiple convolution feature maps of the face image.
可选地,上述根据M个基准卷积核的卷积核参数和N组掩码张量对人脸图像进行卷积处理,得到人脸图像的多个卷积特征图,包括:根据M个基准卷积核对人脸图像进行卷积处理,得到人脸图像的M个基准卷积特征图;对M个基准卷积特征图和N组掩码张量进行哈达玛积运算,得到人脸图像的多个卷积特征图。Optionally, performing convolution processing on the face image according to the convolution kernel parameters of the M reference convolution kernels and the N groups of mask tensors to obtain multiple convolution feature maps of the face image includes: performing convolution processing on the face image according to the M reference convolution kernels to obtain M reference convolution feature maps of the face image; and performing the Hadamard product operation on the M reference convolution feature maps and the N groups of mask tensors to obtain the multiple convolution feature maps of the face image.
通过先采用基准卷积核对人脸图像进行卷积处理,在得到一个基准的卷积特征图之后,再结合掩码张量来获取道路画面的多个卷积特征图,能够减少卷积计算的次数,可以在一定程度上起到降低运算量的效果。By first using the reference convolution kernel to perform convolution processing on the face image, after obtaining a reference convolution feature map, combine the mask tensor to obtain multiple convolution feature maps of the road image, which can reduce the convolution calculation The number of times can reduce the amount of calculation to a certain extent.
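The second variant can be sketched similarly. This is a minimal numpy illustration under the assumption (not stated in the text) that each mask tensor in this variant matches the spatial size of the reference convolution feature map:

```python
import numpy as np

# One hypothetical 3x3 reference convolution feature map: the reference
# kernel is applied to the image only once, and each mask then derives one
# feature map from it by a Hadamard (element-wise) product.
base_map = np.arange(9, dtype=float).reshape(3, 3)
mask_group = [np.ones((3, 3)), np.eye(3)]        # hypothetical mask group
feature_maps = [base_map * m for m in mask_group]
```

The convolution runs once per reference kernel instead of once per sub-kernel; the per-mask work is only an element-wise product.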
In a seventh aspect, an image classification apparatus is provided. The image classification apparatus includes: a memory, configured to store convolution kernel parameters of M reference convolution kernels of a neural network and N groups of mask tensors; and a processor, configured to obtain the convolution kernel parameters of the M reference convolution kernels of the neural network and the N groups of mask tensors, and to perform the following operations: performing a Hadamard product operation on each of the M reference convolution kernels and the group of mask tensors corresponding to that reference convolution kernel in the N groups of mask tensors, to obtain multiple sub-convolution kernels; performing convolution processing on an image to be processed with each of the multiple sub-convolution kernels, to obtain multiple convolution feature maps; and classifying the image to be processed according to the multiple convolution feature maps, to obtain a classification result of the image to be processed.

M and N are both positive integers, each of the N groups of mask tensors consists of multiple mask tensors, the elements of the N groups of mask tensors occupy fewer bits in storage than the elements of the convolution kernel parameters of the M reference convolution kernels, and each of the M reference convolution kernels corresponds to one of the N groups of mask tensors.
In this application, when classification processing is performed on the image to be processed, only the convolution kernel parameters of the reference convolution kernels and the corresponding mask tensors need to be obtained from the storage space; the reference convolution kernels and the corresponding mask tensors can then be used to perform convolution processing on the image to be processed and thus classify it, without obtaining the parameters of every convolution kernel in the neural network. This reduces the storage overhead of deploying the neural network, so that the neural network can be deployed on devices with limited storage resources to perform image classification processing.
Optionally, classifying the image to be processed according to the multiple convolution feature maps to obtain the classification result of the image to be processed includes: splicing the multiple convolution feature maps to obtain a target convolution feature map; and classifying the image to be processed according to the target convolution feature map, to obtain the classification result of the image to be processed.
With reference to the seventh aspect, in some implementations of the seventh aspect, N is less than M, and at least two of the M reference convolution kernels correspond to the same group of mask tensors in the N groups of mask tensors.

With reference to the seventh aspect, in some implementations of the seventh aspect, at least some of the mask tensors in at least one of the N groups of mask tensors are pairwise orthogonal.
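Pairwise orthogonality of mask tensors can be checked by flattening each tensor and verifying that every pair has a zero inner product. The group below is a hypothetical example with disjoint supports, which is one simple way binary masks end up pairwise orthogonal:

```python
import numpy as np

def pairwise_orthogonal(masks):
    # Flatten each mask tensor and test every pair's inner product.
    flat = [m.ravel() for m in masks]
    return all(np.dot(flat[i], flat[j]) == 0
               for i in range(len(flat))
               for j in range(i + 1, len(flat)))

# Hypothetical group: each mask selects a disjoint set of positions.
group = [np.array([[1.0, 0.0], [0.0, 0.0]]),
         np.array([[0.0, 1.0], [0.0, 0.0]]),
         np.array([[0.0, 0.0], [1.0, 1.0]])]
```

Orthogonal masks keep the sub-convolution kernels derived from one reference kernel distinct from one another.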
With reference to the seventh aspect, in some implementations of the seventh aspect, the convolution kernel parameters of the M reference convolution kernels and the N groups of mask tensors are obtained by training the neural network on training images.
It should be understood that the image classification apparatus of the seventh aspect corresponds to the image classification method of the first aspect, and the image classification apparatus of the seventh aspect can execute the image classification method of the first aspect. The expansions, limitations, explanations and descriptions of the relevant content in the first aspect also apply to the same content in the seventh aspect, so the relevant content of the seventh aspect is not described in detail here.
In an eighth aspect, an image classification apparatus is provided. The image classification apparatus includes: a memory, configured to store convolution kernel parameters of M reference convolution kernels of a neural network and N groups of mask tensors; and a processor, configured to obtain the convolution kernel parameters of the M reference convolution kernels of the neural network and the N groups of mask tensors, and to perform the following operations: performing convolution processing on an image to be processed with the M reference convolution kernels, to obtain M reference convolution feature maps of the image to be processed; performing a Hadamard product operation on the M reference convolution feature maps and the N groups of mask tensors, to obtain multiple convolution feature maps of the image to be processed; and classifying the image to be processed according to the multiple convolution feature maps, to obtain a classification result of the image to be processed.

M and N are both positive integers, each of the N groups of mask tensors consists of multiple mask tensors, the elements of the N groups of mask tensors occupy fewer bits in storage than the elements of the convolution kernel parameters of the M reference convolution kernels, and each of the M reference convolution kernels corresponds to one of the N groups of mask tensors.

In this application, when classification processing is performed on the image to be processed, only the convolution kernel parameters of the reference convolution kernels and the corresponding mask tensors need to be obtained from the storage space; the reference convolution kernels and the corresponding mask tensors can then be used to perform convolution processing on the image to be processed and thus classify it, without obtaining the parameters of every convolution kernel in the neural network. This reduces the storage overhead of deploying the neural network, so that the neural network can be deployed on devices with limited storage resources to perform image classification processing.

Optionally, classifying the image to be processed according to the multiple convolution feature maps to obtain the classification result of the image to be processed includes: splicing the multiple convolution feature maps to obtain a target convolution feature map; and classifying the image to be processed according to the target convolution feature map, to obtain the classification result of the image to be processed.

With reference to the eighth aspect, in some implementations of the eighth aspect, N is less than M, and at least two of the M reference convolution kernels correspond to the same group of mask tensors in the N groups of mask tensors.

With reference to the eighth aspect, in some implementations of the eighth aspect, at least some of the mask tensors in at least one of the N groups of mask tensors are pairwise orthogonal.

With reference to the eighth aspect, in some implementations of the eighth aspect, the convolution kernel parameters of the M reference convolution kernels and the N groups of mask tensors are obtained by training the neural network on training images.

It should be understood that the image classification apparatus of the eighth aspect corresponds to the image classification method of the second aspect, and the image classification apparatus of the eighth aspect can execute the image classification method of the second aspect. The expansions, limitations, explanations and descriptions of the relevant content in the second aspect also apply to the same content in the eighth aspect, so the relevant content of the eighth aspect is not described in detail here.
In a ninth aspect, a data processing apparatus is provided. The data processing apparatus includes: a memory, configured to store convolution kernel parameters of M reference convolution kernels of a neural network and N groups of mask tensors; and a processor, configured to obtain the convolution kernel parameters of the M reference convolution kernels of the neural network and the N groups of mask tensors, and to perform the following operations: performing a Hadamard product operation on each of the M reference convolution kernels and the group of mask tensors corresponding to that reference convolution kernel in the N groups of mask tensors, to obtain multiple sub-convolution kernels; performing convolution processing on multimedia data with each of the multiple sub-convolution kernels, to obtain multiple convolution feature maps of the multimedia data; and processing the multimedia data according to the multiple convolution feature maps of the multimedia data.

M and N are both positive integers, each of the N groups of mask tensors consists of multiple mask tensors, the elements of the N groups of mask tensors occupy fewer bits in storage than the elements of the convolution kernel parameters of the M reference convolution kernels, and each of the M reference convolution kernels corresponds to one of the N groups of mask tensors.

It should be understood that the data processing apparatus of the ninth aspect corresponds to the data processing method of the third aspect, and the data processing apparatus of the ninth aspect can execute the data processing method of the third aspect. The expansions, limitations, explanations and descriptions of the relevant content in the third aspect also apply to the same content in the ninth aspect, so the relevant content of the ninth aspect is not described in detail here.
In a tenth aspect, a data processing apparatus is provided. The data processing apparatus includes: a memory, configured to store convolution kernel parameters of M reference convolution kernels of a neural network and N groups of mask tensors; and a processor, configured to obtain the convolution kernel parameters of the M reference convolution kernels of the neural network and the N groups of mask tensors, and to perform the following operations: performing convolution processing on multimedia data with the M reference convolution kernels, to obtain M reference convolution feature maps of the multimedia data; performing a Hadamard product operation on the M reference convolution feature maps and the N groups of mask tensors, to obtain multiple convolution feature maps of the multimedia data; and processing the multimedia data according to the multiple convolution feature maps of the multimedia data.

M and N are both positive integers, each of the N groups of mask tensors consists of multiple mask tensors, the elements of the N groups of mask tensors occupy fewer bits in storage than the elements of the convolution kernel parameters of the M reference convolution kernels, and each of the M reference convolution kernels corresponds to one of the N groups of mask tensors.

It should be understood that the data processing apparatus of the tenth aspect corresponds to the data processing method of the fourth aspect, and the data processing apparatus of the tenth aspect can execute the data processing method of the fourth aspect. The expansions, limitations, explanations and descriptions of the relevant content in the fourth aspect also apply to the same content in the tenth aspect, so the relevant content of the tenth aspect is not described in detail here.
In an eleventh aspect, a computer-readable medium is provided. The computer-readable medium stores program code for execution by a device, and the program code includes instructions for executing the method in any one of the first to sixth aspects.

In a twelfth aspect, a computer program product containing instructions is provided. When the computer program product runs on a computer, the computer is caused to execute the method in any one of the first to sixth aspects.

In a thirteenth aspect, a chip is provided. The chip includes a processor and a data interface, and the processor reads, through the data interface, instructions stored in a memory to execute the method in any one of the first to sixth aspects.

Optionally, as an implementation, the chip may further include a memory in which instructions are stored, and the processor is configured to execute the instructions stored in the memory. When the instructions are executed, the processor is configured to execute the method in any one of the first to sixth aspects.
Brief Description of the Drawings

FIG. 1 is a schematic structural diagram of a system architecture provided by an embodiment of the present application;

FIG. 2 is a schematic diagram of image classification using a convolutional neural network model provided by an embodiment of the present application;

FIG. 3 is a schematic diagram of a chip hardware structure provided by an embodiment of the present application;

FIG. 4 is a schematic diagram of a mobile phone selfie scenario;

FIG. 5 is a schematic diagram of a face verification scenario;

FIG. 6 is a schematic diagram of a speech recognition and machine translation scenario;

FIG. 7 is a schematic flowchart of an image classification method according to an embodiment of the present application;

FIG. 8 is a schematic diagram of obtaining sub-convolution kernels from a reference convolution kernel and mask tensors;

FIG. 9 is a schematic diagram of obtaining sub-convolution kernels from a reference convolution kernel and mask tensors;

FIG. 10 is a schematic diagram of a process of image classification using a neural network;

FIG. 11 is a schematic diagram of a process of obtaining convolution kernel parameters of reference convolution kernels and mask tensors;

FIG. 12 is a schematic flowchart of a data processing method according to an embodiment of the present application;

FIG. 13 is a schematic diagram of the hardware structure of a neural network training apparatus according to an embodiment of the present application;

FIG. 14 is a schematic diagram of the hardware structure of an image classification apparatus according to an embodiment of the present application;

FIG. 15 is a schematic diagram of the hardware structure of a data processing apparatus according to an embodiment of the present application.
Detailed Description of Embodiments
The technical solutions in this application are described below with reference to the accompanying drawings.

The embodiments of this application provide an image classification method and a data processing method.

The data processing method of the embodiments of this application can be applied in various scenarios in fields such as computer vision; for example, it can be applied in scenarios such as face recognition, image classification, target detection, and semantic segmentation.

To give a more concrete understanding of the application scenarios of the data processing method of the embodiments of this application, specific scenarios are described below as examples.
Object detection on terminal devices:

This is a target detection problem. When a user takes a photo with a terminal device (for example, a mobile phone or a tablet), the terminal device can automatically capture targets such as faces and animals (in this process the terminal device recognizes and captures faces or other objects), which helps the terminal device with automatic focusing, beautification, and so on. Therefore, a terminal device needs a small, fast target detection convolutional neural network model, so as to bring users a better experience and improve the product quality of the terminal device.

For example, as shown in FIG. 4, when a user takes a selfie with a mobile phone, the phone can automatically recognize the face according to a neural network model and automatically capture it, generating a prediction box. The neural network model in FIG. 4 may be a target detection convolutional neural network model located in the phone. This model has relatively few parameters (the convolution kernels contain relatively few parameters) and can therefore be deployed on phones with limited storage resources. In addition, it should be understood that the prediction box shown in FIG. 4 is only illustrative; for ease of understanding it is drawn directly on the picture, whereas in practice the prediction box is displayed on the shooting interface of the phone.
Semantic segmentation in autonomous driving scenarios:

The cameras of an autonomous vehicle capture road images in real time. To enable the vehicle to recognize different objects on the road, the smart device in the vehicle needs to segment the captured road images so as to separate objects such as the road surface, roadbed, vehicles, and pedestrians, and feed this information back to the vehicle's control system so that the vehicle drives in the correct road area. Since autonomous driving has extremely high safety requirements, the smart device in the vehicle must be able to quickly process and analyze the captured real-time road images to obtain semantic segmentation results.
Face verification at entrance gates:

This is an image similarity comparison problem. At the gates at the entrances of high-speed rail stations, airports, and the like, when a passenger performs face authentication, a camera captures a face image. A convolutional neural network can be used to extract image features from the captured face image, and a similarity is then computed between the extracted features and the image features of the identity document stored in the system; if the similarity is high, the verification succeeds.

For example, as shown in FIG. 5, a neural network model processes the captured face image to obtain feature A, and processes the image of the identity document to obtain feature B. Next, the similarity between feature A and feature B can be used to determine whether the person being photographed and the person on the identity document are the same person. If the similarity between feature A and feature B meets the requirement (for example, the similarity is greater than or equal to a preset similarity threshold), it can be determined that the person being photographed and the person on the identity document are the same person.
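A common way to compare two such feature vectors is cosine similarity against a preset threshold. The sketch below is illustrative: the embeddings, the threshold of 0.9, and cosine similarity itself are assumptions, since the text does not fix a particular similarity measure.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two feature vectors, in [-1, 1].
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

feature_a = np.array([0.6, 0.8, 0.0])   # hypothetical feature of captured face
feature_b = np.array([0.6, 0.7, 0.1])   # hypothetical feature of ID photo
same_person = cosine_similarity(feature_a, feature_b) >= 0.9
```

Cosine similarity is often preferred over raw distance here because it is insensitive to the overall magnitude of the embeddings.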
Simultaneous interpretation with a translator:

This is a speech recognition and machine translation problem. Convolutional neural networks are also commonly used recognition models for speech recognition and machine translation. In a simultaneous interpretation scenario, an efficient neural network must be used to achieve real-time speech recognition and translation, so as to bring a better user experience.

For example, as shown in FIG. 6, the input speech is the English "Hello world!". The received speech is recognized by a neural network model, machine translation is performed according to the recognition result, and the corresponding translation, the Chinese "世界，你好!", is output. The translation here may include the speech of the translation, the text of the translation, or both.
In the several application scenarios above (object detection on terminal devices, semantic segmentation in autonomous driving scenarios, face verification at entrance gates, and simultaneous interpretation with a translator), neural network models with relatively high performance need to be adopted for the corresponding data processing. In many cases, however, the storage space of the devices on which they must be deployed is limited. Therefore, how to deploy neural networks with relatively high performance but relatively few parameters on these storage-constrained devices, and then perform data processing, is an important problem. Accordingly, this application provides a data processing method that deploys neural network models with fewer parameters, so that even devices with limited storage resources can process data efficiently; the specific process is described in detail below.

Since the embodiments of this application involve extensive application of neural networks, for ease of understanding, the related terms and concepts of neural networks that may be involved in the embodiments of this application are first introduced below.
(1) Neural network

A neural network can be composed of neural units. A neural unit can be an operation unit that takes x_s and an intercept of 1 as inputs, and the output of the operation unit can be:

h_{W,b}(x) = f(W^T x) = f(sum_{s=1}^{n} W_s * x_s + b)

where s = 1, 2, ..., n, n is a natural number greater than 1, W_s is the weight of x_s, and b is the bias of the neural unit. f is the activation function of the neural unit, which is used to introduce nonlinear characteristics into the neural network to convert the input signal of the neural unit into an output signal. The output signal of the activation function can serve as the input of the next convolutional layer, and the activation function can be a sigmoid function. A neural network is a network formed by connecting many such single neural units together, that is, the output of one neural unit can be the input of another neural unit. The input of each neural unit can be connected to the local receptive field of the previous layer to extract the features of the local receptive field, and the local receptive field can be a region composed of several neural units.
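The neural-unit formula above can be sketched directly. The sigmoid activation follows the text; the input, weight, and bias values are arbitrary illustrations:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neural_unit(x, w, b):
    # h = f(sum_s w_s * x_s + b), with f taken as the sigmoid function.
    return sigmoid(np.dot(w, x) + b)

out = neural_unit(np.array([1.0, 2.0]), np.array([0.5, -0.25]), 0.0)
```

With these values the weighted sum is exactly zero, so the unit outputs sigmoid(0) = 0.5.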
(2) Deep neural network

A deep neural network (DNN), also called a multi-layer neural network, can be understood as a neural network with multiple hidden layers. A DNN is divided according to the positions of the different layers: the layers inside a DNN fall into three categories: the input layer, the hidden layers, and the output layer. Generally, the first layer is the input layer, the last layer is the output layer, and the layers in between are all hidden layers. The layers are fully connected, that is, any neuron in the i-th layer must be connected to any neuron in the (i+1)-th layer.
Although a DNN looks complicated, the work of each layer is not complicated. In simple terms, each layer computes the following linear relationship expression: y = α(W * x + b), where x is the input vector, y is the output vector, b is the offset vector, W is the weight matrix (also called the coefficients), and α() is the activation function. Each layer simply performs this operation on the input vector x to obtain the output vector y. Since a DNN has many layers, there are also many coefficients W and offset vectors b. These parameters are defined in the DNN as follows, taking the coefficient W as an example: in a three-layer DNN, the linear coefficient from the 4th neuron of the second layer to the 2nd neuron of the third layer is defined as W^3_{24}, where the superscript 3 represents the layer in which the coefficient W is located, and the subscripts correspond to the output third-layer index 2 and the input second-layer index 4.

In summary, the coefficient from the k-th neuron of the (L-1)-th layer to the j-th neuron of the L-th layer is defined as W^L_{jk}.

It should be noted that the input layer has no W parameters. In a deep neural network, more hidden layers allow the network to better characterize complex real-world situations. In theory, a model with more parameters has higher complexity and greater "capacity", which means it can accomplish more complex learning tasks. Training a deep neural network is the process of learning the weight matrices; its ultimate goal is to obtain the weight matrices of all layers of the trained deep neural network (the weight matrices formed by the vectors W of many layers).
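The per-layer expression y = α(W * x + b) can be sketched as follows. ReLU is substituted for α purely for illustration (the text does not fix a particular activation), and the weights are arbitrary:

```python
import numpy as np

def dnn_layer(x, W, b):
    # y = alpha(W @ x + b); alpha is taken to be ReLU here for illustration.
    return np.maximum(W @ x + b, 0.0)

# W[j, k] is the coefficient from neuron k of the previous layer to
# neuron j of this layer, matching the W^L_jk indexing convention above.
W = np.array([[1.0, -1.0],
              [0.5,  0.5]])
x = np.array([2.0, 1.0])
b = np.array([0.0, -1.0])
y = dnn_layer(x, W, b)
```

Stacking this operation layer after layer, with a fresh W and b each time, is exactly what the DNN described above does.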
(3) Convolutional neural network

A convolutional neural network (CNN) is a deep neural network with a convolutional structure. A convolutional neural network contains a feature extractor composed of convolutional layers and sub-sampling layers, and the feature extractor can be regarded as a filter. A convolutional layer is a neuron layer in the convolutional neural network that performs convolution processing on the input signal. In a convolutional layer of a convolutional neural network, a neuron can be connected to only some of the neurons of the adjacent layers. A convolutional layer usually contains several feature planes, and each feature plane can be composed of some rectangularly arranged neural units. Neural units in the same feature plane share weights, and the shared weights here are the convolution kernel. Sharing weights can be understood as meaning that the way image information is extracted is independent of position. The convolution kernel can be initialized in the form of a matrix of random size, and during the training of the convolutional neural network the convolution kernel can obtain reasonable weights through learning. In addition, a direct benefit of sharing weights is reducing the connections between the layers of the convolutional neural network while also reducing the risk of overfitting.
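The benefit of weight sharing described above can be quantified with a simple parameter count. All sizes below are hypothetical:

```python
# A single 3x3 convolution kernel slid over a 32x32 input reuses the same
# 9 weights at every position; it produces a 30x30 "valid" output. A fully
# connected mapping to the same output size would need one weight per
# input-output pair.
kernel_params = 3 * 3
fully_connected_params = (32 * 32) * (30 * 30)
```

This five-orders-of-magnitude gap is the "fewer connections, less overfitting" benefit mentioned above.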
(4)循环神经网络(recurrent neural networks,RNN)是用来处理序列数据的。在传统的神经网络模型中,是从输入层到隐含层再到输出层,层与层之间是全连接的,而每一层层内的各个节点是无连接的。这种普通的神经网络虽然解决了很多难题,但是却仍然对很多问题无能为力。例如,你要预测句子的下一个单词是什么,一般需要用到前面的单词,因为一个句子中前后单词并不是独立的。RNN之所以称为循环神经网络,即一个序列当前的输出与前面的输出也有关。具体的表现形式为网络会对前面的信息进行记忆并应用于当前输出的计算中,即隐含层本层之间的节点不再无连接而是有连接的,并且隐含层的输入不仅包括输入层的输出还包括上一时刻隐含层的输出。理论上,RNN能够对任何长度的序列数据进行处理。对于RNN的训练和对传统的CNN或DNN的训练一样。(4) Recurrent neural networks (RNN) are used to process sequence data. In the traditional neural network model, data flows from the input layer to the hidden layer and then to the output layer; adjacent layers are fully connected, while the nodes within each layer are unconnected. Although such ordinary neural networks have solved many difficult problems, they are still powerless for many others. For example, to predict the next word of a sentence, you generally need the preceding words, because the words in a sentence are not independent of each other. RNNs are called recurrent neural networks because the current output of a sequence also depends on the previous outputs. Concretely, the network memorizes earlier information and applies it to the computation of the current output: the nodes within the hidden layer are no longer unconnected but connected, and the input of the hidden layer includes not only the output of the input layer but also the output of the hidden layer at the previous moment. In theory, an RNN can process sequence data of any length. An RNN is trained in the same way as a traditional CNN or DNN.
既然已经有了卷积神经网络,为什么还要循环神经网络?原因很简单,在卷积神经网络中,有一个前提假设是:元素之间是相互独立的,输入与输出也是独立的,比如猫和狗。但现实世界中,很多元素都是相互连接的,比如股票随时间的变化,再比如一个人说了:我喜欢旅游,其中最喜欢的地方是云南,以后有机会一定要去。这里填空,人类应该都知道是填“云南”。因为人类会根据上下文的内容进行推断,但如何让机器做到这一步?RNN就应运而生了。RNN旨在让机器像人一样拥有记忆的能力。因此,RNN的输出就需要依赖当前的输入信息和历史的记忆信息。Given that convolutional neural networks already exist, why are recurrent neural networks still needed? The reason is simple: a convolutional neural network is built on the premise that elements are independent of each other, and that inputs and outputs are independent too, such as cats and dogs. But in the real world many elements are interconnected, for example the movement of stocks over time, or someone saying: "I like traveling; my favorite place is Yunnan; when I get the chance I will definitely go to ____." Any human knows the blank should be filled with "Yunnan", because humans make inferences from context. But how can a machine do this? This is where the RNN comes in. An RNN is designed to give machines a memory, like humans have. Therefore, the output of an RNN depends on both the current input information and the memorized historical information.
(5)损失函数(5) Loss function
在训练深度神经网络的过程中,因为希望深度神经网络的输出尽可能的接近真正想要预测的值,所以可以通过比较当前网络的预测值和真正想要的目标值,再根据两者之间的差异情况来更新每一层神经网络的权重向量(当然,在第一次更新之前通常会有初始化的过程,即为深度神经网络中的各层预先配置参数),比如,如果网络的预测值高了,就调整权重向量让它预测低一些,不断地调整,直到深度神经网络能够预测出真正想要的目标值或与真正想要的目标值非常接近的值。因此,就需要预先定义“如何比较预测值和目标值之间的差异”,这便是损失函数(loss function)或目标函数(objective function),它们是用于衡量预测值和目标值的差异的重要方程。其中,以损失函数举例,损失函数的输出值(loss)越高表示差异越大,那么深度神经网络的训练就变成了尽可能缩小这个loss的过程。In the process of training a deep neural network, because we want the output of the deep neural network to be as close as possible to the value we really want to predict, we can compare the current network's predicted value with the truly desired target value, and then update the weight vector of each layer of the neural network according to the difference between the two (of course, there is usually an initialization process before the first update, i.e., parameters are pre-configured for each layer of the deep neural network). For example, if the network's predicted value is too high, the weight vectors are adjusted to make the prediction lower, and the adjustment continues until the deep neural network can predict the truly desired target value or a value very close to it. Therefore, it is necessary to define in advance "how to compare the difference between the predicted value and the target value". This is the role of the loss function or objective function: important equations for measuring the difference between the predicted value and the target value. Taking the loss function as an example, a higher output value (loss) of the loss function indicates a larger difference, so training a deep neural network becomes the process of reducing this loss as much as possible.
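A minimal illustration of the loss-function idea described above. Mean squared error is used here purely as one common example; the embodiments do not prescribe a specific loss:

```python
def mse_loss(predictions, targets):
    """Mean squared error: a larger value means predictions are further from the targets."""
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(predictions)

# Training drives this value down: the closer the predictions, the smaller the loss.
far = mse_loss([0.1, 0.9], [1.0, 0.0])    # poor predictions
near = mse_loss([0.9, 0.1], [1.0, 0.0])   # better predictions
print(far > near)  # True: smaller prediction/target difference -> smaller loss
```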
(6)反向传播算法(6) Backpropagation algorithm
神经网络可以采用误差反向传播(back propagation,BP)算法在训练过程中修正初始的神经网络模型中参数的大小,使得神经网络模型的重建误差损失越来越小。具体地,前向传递输入信号直至输出会产生误差损失,通过反向传播误差损失信息来更新初始的神经网络模型中参数,从而使误差损失收敛。反向传播算法是以误差损失为主导的反向传播运动,旨在得到最优的神经网络模型的参数,例如权重矩阵。A neural network can use an error back propagation (BP) algorithm to correct the values of the parameters of the initial neural network model during training, so that the reconstruction error loss of the neural network model becomes smaller and smaller. Specifically, forward propagation of the input signal up to the output produces an error loss, and the parameters of the initial neural network model are updated by back-propagating the error loss information, so that the error loss converges. The back propagation algorithm is a back propagation movement dominated by the error loss, and aims to obtain the optimal parameters of the neural network model, such as the weight matrices.
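The loop of propagating the error backward and updating a parameter against its gradient can be sketched for a single weight (an illustrative toy example with a squared-error loss; the function names and constants are invented here and are not from the embodiments):

```python
# One weight, squared-error loss L(w) = (w*x - y)**2, so dL/dw = 2*(w*x - y)*x.
def train_step(w, x, y, lr):
    pred = w * x
    grad = 2 * (pred - y) * x   # back-propagated error signal for this weight
    return w - lr * grad        # update the weight against the gradient

w = 0.0
for _ in range(50):
    w = train_step(w, x=2.0, y=4.0, lr=0.05)
print(w)  # converges toward 2.0, since 2.0 * 2.0 == 4.0 makes the loss zero
```

Each iteration shrinks the loss, which is exactly the convergence behavior the paragraph describes.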
(7)像素值(7) Pixel value
图像的像素值可以是一个红绿蓝(RGB)颜色值,像素值可以是表示颜色的长整数。例如,像素值为256*Red+100*Green+76*Blue,其中,Blue代表蓝色分量,Green代表绿色分量,Red代表红色分量。各个颜色分量中,数值越小,亮度越低,数值越大,亮度越高。对于灰度图像来说,像素值可以是灰度值。The pixel value of an image can be a red-green-blue (RGB) color value, and the pixel value can be a long integer representing a color. For example, a pixel value is 256*Red+100*Green+76*Blue, where Blue represents the blue component, Green represents the green component, and Red represents the red component. For each color component, a smaller value means lower brightness and a larger value means higher brightness. For a grayscale image, the pixel value can be a grayscale value.
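Using the weighting given in the text (256*Red + 100*Green + 76*Blue), the packing can be written as a one-line helper (illustrative only; the function name is invented here):

```python
def pixel_value(red, green, blue):
    """Pack RGB components into a single integer using the text's weighting."""
    return 256 * red + 100 * green + 76 * blue

print(pixel_value(1, 2, 3))  # 256*1 + 100*2 + 76*3 = 684
```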
如图1所示,本申请实施例提供了一种系统架构100。在图1中,数据采集设备160用于采集训练数据。针对本申请实施例的图像分类方法来说,训练数据可以包括训练图像以及训练图像对应的分类结果,其中,训练图像的分类结果可以是人工预先标注的结果。而针对本申请实施例的数据处理方法来说,训练数据的具体类型与待处理的数据的数据类型相同,并与数据处理的具体过程有关,例如,当待处理的数据为待处理图像,本申请实施例的数据处理方法是对待处理的图像进行降噪处理的话,那么,本申请实施例的数据处理方法对应的训练数据可以包括原始图像,以及在原始图像上加上噪声之后的噪声图像。As shown in FIG. 1, an embodiment of the present application provides a system architecture 100. In FIG. 1, a data collection device 160 is used to collect training data. For the image classification method of the embodiment of the present application, the training data may include training images and classification results corresponding to the training images, where the classification results of the training images may be manually pre-labeled. For the data processing method of the embodiment of the present application, the specific type of the training data matches the data type of the data to be processed and is related to the specific data processing procedure. For example, if the data to be processed is an image and the data processing method of the embodiment of the present application performs noise reduction on that image, then the training data corresponding to the data processing method may include original images and the noisy images obtained by adding noise to those original images.
在采集到训练数据之后,数据采集设备160将这些训练数据存入数据库130,训练设备120基于数据库130中维护的训练数据训练得到目标模型/规则101。After the training data is collected, the data collection device 160 stores the training data in the database 130, and the training device 120 trains to obtain the target model/rule 101 based on the training data maintained in the database 130.
下面对训练设备120基于训练数据得到目标模型/规则101进行描述:训练设备120对输入的原始图像进行处理,将输出的图像与原始图像进行对比,直到训练设备120输出的图像与原始图像的差值小于一定的阈值,从而完成目标模型/规则101的训练。The following describes how the training device 120 obtains the target model/rule 101 based on the training data: the training device 120 processes the input original image and compares the output image with the original image, until the difference between the image output by the training device 120 and the original image is smaller than a certain threshold, thereby completing the training of the target model/rule 101.
上述目标模型/规则101能够用于实现本申请实施例的图像分类方法或者数据处理方法,即,将待处理图像通过相关预处理后输入该目标模型/规则101,即可得到去噪处理后的图像。本申请实施例中的目标模型/规则101具体可以为神经网络。需要说明的是,在实际的应用中,所述数据库130中维护的训练数据不一定都来自于数据采集设备160的采集,也有可能是从其他设备接收得到的。另外需要说明的是,训练设备120也不一定完全基于数据库130维护的训练数据进行目标模型/规则101的训练,也有可能从云端或其他地方获取训练数据进行模型训练,上述描述不应该作为对本申请实施例的限定。The above-mentioned target model/rule 101 can be used to implement the image classification method or data processing method of the embodiment of the present application, that is, the image to be processed is input into the target model/rule 101 after relevant preprocessing to obtain the denoising processed image image. The target model/rule 101 in the embodiment of the present application may specifically be a neural network. It should be noted that in actual applications, the training data maintained in the database 130 may not all come from the collection of the data collection device 160, and may also be received from other devices. In addition, it should be noted that the training device 120 does not necessarily perform the training of the target model/rule 101 completely based on the training data maintained by the database 130. It may also obtain training data from the cloud or other places for model training. The above description should not be used as a reference to this application. Limitations of Examples.
根据训练设备120训练得到的目标模型/规则101可以应用于不同的系统或设备中,如应用于图1所示的执行设备110,所述执行设备110可以是终端,如手机终端,平板电脑,笔记本电脑,增强现实(augmented reality,AR)/虚拟现实(virtual reality,VR)设备,车载终端等,还可以是服务器或者云端等。在图1中,执行设备110配置输入/输出(input/output,I/O)接口112,用于与外部设备进行数据交互,用户可以通过客户设备140向I/O接口112输入数据,所述输入数据在本申请实施例中可以包括:客户设备输入的待处理图像。The target model/rule 101 obtained through training by the training device 120 can be applied to different systems or devices, such as the execution device 110 shown in FIG. 1. The execution device 110 may be a terminal, such as a mobile phone terminal, a tablet computer, a notebook computer, an augmented reality (AR)/virtual reality (VR) device, a vehicle-mounted terminal, or the like, and may also be a server, a cloud, or the like. In FIG. 1, the execution device 110 is configured with an input/output (I/O) interface 112 for data interaction with external devices. A user can input data to the I/O interface 112 through a client device 140; in this embodiment of the application, the input data may include the image to be processed input by the client device.
预处理模块113和预处理模块114用于根据I/O接口112接收到的输入数据(如待处理图像)进行预处理,在本申请实施例中,也可以没有预处理模块113和预处理模块114(也可以只有其中的一个预处理模块),而直接采用计算模块111对输入数据进行处理。The preprocessing module 113 and the preprocessing module 114 are used to perform preprocessing according to the input data (such as the image to be processed) received by the I/O interface 112. In this embodiment of the application, the preprocessing module 113 and the preprocessing module 114 may also be absent (or only one of the two preprocessing modules may be present), and the calculation module 111 may be used directly to process the input data.
在执行设备110对输入数据进行预处理,或者在执行设备110的计算模块111执行计算等相关的处理过程中,执行设备110可以调用数据存储系统150中的数据、代码等以用于相应的处理,也可以将相应处理得到的数据、指令等存入数据存储系统150中。When the execution device 110 preprocesses the input data, or when the calculation module 111 of the execution device 110 performs computation or other related processing, the execution device 110 may call data, code, etc. in the data storage system 150 for the corresponding processing, and may also store the data, instructions, etc. obtained by the corresponding processing into the data storage system 150.
最后,I/O接口112将处理结果,如上述得到的去噪处理后的图像返回给客户设备140,从而提供给用户。Finally, the I/O interface 112 returns the processing result, such as the denoising processed image obtained as described above, to the client device 140 to provide it to the user.
值得说明的是,训练设备120可以针对不同的目标或称不同的任务,基于不同的训练数据生成相应的目标模型/规则101,该相应的目标模型/规则101即可以用于实现上述目标或完成上述任务,从而为用户提供所需的结果。It is worth noting that the training device 120 can generate corresponding target models/rules 101 based on different training data for different goals, also called different tasks, and the corresponding target models/rules 101 can then be used to achieve the above goals or complete the above tasks, thereby providing the user with the desired results.
在附图1中所示情况下,用户可以手动给定输入数据,该手动给定可以通过I/O接口112提供的界面进行操作。另一种情况下,客户设备140可以自动地向I/O接口112发送输入数据,如果要求客户设备140自动发送输入数据需要获得用户的授权,则用户可以在客户设备140中设置相应权限。用户可以在客户设备140查看执行设备110输出的结果,具体的呈现形式可以是显示、声音、动作等具体方式。客户设备140也可以作为数据采集端,采集如图所示输入I/O接口112的输入数据及输出I/O接口112的输出结果作为新的样本数据,并存入数据库130。当然,也可以不经过客户设备140进行采集,而是由I/O接口112直接将如图所示输入I/O接口112的输入数据及输出I/O接口112的输出结果,作为新的样本数据存入数据库130。In the case shown in FIG. 1, the user can manually set input data, and the manual setting can be operated through the interface provided by the I/O interface 112. In another case, the client device 140 can automatically send input data to the I/O interface 112. If the client device 140 is required to automatically send the input data and the user's authorization is required, the user can set the corresponding authority in the client device 140. The user can view the result output by the execution device 110 on the client device 140, and the specific presentation form may be a specific manner such as display, sound, and action. The client device 140 can also be used as a data collection terminal to collect the input data of the input I/O interface 112 and the output result of the output I/O interface 112 as new sample data, and store it in the database 130 as shown in the figure. Of course, it is also possible not to collect through the client device 140, but the I/O interface 112 directly uses the input data input to the I/O interface 112 and the output result of the output I/O interface 112 as a new sample as shown in the figure. The data is stored in the database 130.
值得注意的是,附图1仅是本申请实施例提供的一种系统架构的示意图,图中所示设备、器件、模块等之间的位置关系不构成任何限制,例如,在附图1中,数据存储系统150相对执行设备110是外部存储器,在其它情况下,也可以将数据存储系统150置于执行设备110中。It is worth noting that FIG. 1 is only a schematic diagram of a system architecture provided by an embodiment of the present application, and the positional relationships between the devices, components, modules, etc. shown in the figure do not constitute any limitation. For example, in FIG. 1, the data storage system 150 is external memory relative to the execution device 110; in other cases, the data storage system 150 may also be placed in the execution device 110.
如图1所示,根据训练设备120训练得到目标模型/规则101,该目标模型/规则101在本申请实施例中可以是本申请中的神经网络,具体的,本申请实施例提供的神经网络可以是CNN,深度卷积神经网络(deep convolutional neural networks,DCNN),循环神经网络(recurrent neural network,RNN)等等。As shown in FIG. 1, the target model/rule 101 is obtained through training by the training device 120. In this embodiment of the application, the target model/rule 101 may be the neural network of the present application; specifically, the neural network provided in the embodiment of the present application may be a CNN, a deep convolutional neural network (DCNN), a recurrent neural network (RNN), and so on.
由于CNN是一种非常常见的神经网络,下面结合图2重点对CNN的结构进行详细的介绍。如上文的基础概念介绍所述,卷积神经网络是一种带有卷积结构的深度神经网络,是一种深度学习(deep learning)架构,深度学习架构是指通过机器学习的算法,在不同的抽象层级上进行多个层次的学习。作为一种深度学习架构,CNN是一种前馈(feed-forward)人工神经网络,该前馈人工神经网络中的各个神经元可以对输入其中的图像作出响应。Since the CNN is a very common neural network, the structure of the CNN is described in detail below with reference to FIG. 2. As mentioned in the introduction of basic concepts above, a convolutional neural network is a deep neural network with a convolutional structure and is a deep learning architecture. A deep learning architecture refers to performing multiple levels of learning at different levels of abstraction through machine learning algorithms. As a deep learning architecture, the CNN is a feed-forward artificial neural network in which each neuron can respond to the image input into it.
如图2所示,卷积神经网络(CNN)200可以包括输入层210,卷积层/池化层220(其中池化层为可选的),以及神经网络层230。下面对这些层的相关内容做详细介绍。As shown in FIG. 2, a convolutional neural network (CNN) 200 may include an input layer 210, a convolutional layer/pooling layer 220 (the pooling layer is optional), and a neural network layer 230. The following is a detailed introduction to the relevant content of these layers.
卷积层/池化层220:Convolutional layer/pooling layer 220:
卷积层:Convolutional layer:
如图2所示卷积层/池化层220可以包括如示例221-226层,举例来说:在一种实现中,221层为卷积层,222层为池化层,223层为卷积层,224层为池化层,225为卷积层,226为池化层;在另一种实现方式中,221、222为卷积层,223为池化层,224、225为卷积层,226为池化层。即卷积层的输出可以作为随后的池化层的输入,也可以作为另一个卷积层的输入以继续进行卷积操作。As shown in FIG. 2, the convolutional layer/pooling layer 220 may include layers 221-226. For example: in one implementation, layer 221 is a convolutional layer, layer 222 is a pooling layer, layer 223 is a convolutional layer, layer 224 is a pooling layer, layer 225 is a convolutional layer, and layer 226 is a pooling layer; in another implementation, layers 221 and 222 are convolutional layers, layer 223 is a pooling layer, layers 224 and 225 are convolutional layers, and layer 226 is a pooling layer. That is, the output of a convolutional layer can be used as the input of a subsequent pooling layer, or as the input of another convolutional layer to continue the convolution operation.
下面将以卷积层221为例,介绍一层卷积层的内部工作原理。The following will take the convolutional layer 221 as an example to introduce the internal working principle of a convolutional layer.
卷积层221可以包括很多个卷积算子,卷积算子也称为核,其在图像处理中的作用相当于一个从输入图像矩阵中提取特定信息的过滤器,卷积算子本质上可以是一个权重矩阵,这个权重矩阵通常被预先定义,在对图像进行卷积操作的过程中,权重矩阵通常在输入图像上沿着水平方向一个像素接着一个像素(或两个像素接着两个像素……这取决于步长stride的取值)的进行处理,从而完成从图像中提取特定特征的工作。该权重矩阵的大小应该与图像的大小相关,需要注意的是,权重矩阵的纵深维度(depth dimension)和输入图像的纵深维度是相同的,在进行卷积运算的过程中,权重矩阵会延伸到输入图像的整个深度。因此,和一个单一的权重矩阵进行卷积会产生一个单一纵深维度的卷积化输出,但是大多数情况下不使用单一权重矩阵,而是应用多个尺寸(行×列)相同的权重矩阵,即多个同型矩阵。每个权重矩阵的输出被堆叠起来形成卷积图像的纵深维度,这里的维度可以理解为由上面所述的“多个”来决定。不同的权重矩阵可以用来提取图像中不同的特征,例如一个权重矩阵用来提取图像边缘信息,另一个权重矩阵用来提取图像的特定颜色,又一个权重矩阵用来对图像中不需要的噪点进行模糊化等。该多个权重矩阵尺寸(行×列)相同,经过该多个尺寸相同的权重矩阵提取后的卷积特征图的尺寸也相同,再将提取到的多个尺寸相同的卷积特征图合并形成卷积运算的输出。The convolutional layer 221 may include many convolution operators. A convolution operator is also called a kernel; its role in image processing is equivalent to a filter that extracts specific information from the input image matrix. A convolution operator can essentially be a weight matrix, which is usually predefined. When convolving an image, the weight matrix is typically moved along the horizontal direction of the input image one pixel at a time (or two pixels at a time, etc., depending on the value of the stride) to extract specific features from the image. The size of the weight matrix should be related to the size of the image. Note that the depth dimension of the weight matrix is the same as the depth dimension of the input image; during the convolution operation, the weight matrix extends through the entire depth of the input image. Therefore, convolving with a single weight matrix produces a convolutional output with a single depth dimension, but in most cases a single weight matrix is not used; instead, multiple weight matrices of the same size (rows × columns), i.e., multiple matrices of the same shape, are applied. The outputs of the weight matrices are stacked to form the depth dimension of the convolutional image, where the dimension can be understood as being determined by the "multiple" mentioned above. Different weight matrices can be used to extract different features of the image: for example, one weight matrix is used to extract image edge information, another weight matrix is used to extract a specific color of the image, and yet another weight matrix is used to blur unwanted noise in the image, and so on. These weight matrices have the same size (rows × columns), so the convolutional feature maps extracted by them also have the same size; the extracted feature maps of the same size are then combined to form the output of the convolution operation.
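The stacking of per-kernel outputs into a depth dimension can be sketched as follows (plain Python, illustrative only; the example kernels are invented). Each same-size kernel produces one feature map, and the number of kernels determines the output depth:

```python
def conv2d(image, kernel):
    """Valid 2D convolution of a single-channel image with one kernel."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(len(image[0]) - kw + 1)]
            for i in range(len(image) - kh + 1)]

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
# Two same-size kernels, e.g. one responding to horizontal and one to vertical structure.
kernels = [[[1, -1], [1, -1]],
           [[1, 1], [-1, -1]]]
# One feature map per kernel; stacking them gives output depth == number of kernels.
feature_maps = [conv2d(image, k) for k in kernels]
print(len(feature_maps))  # 2
```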
这些权重矩阵中的权重值在实际应用中需要经过大量的训练得到,通过训练得到的权重值形成的各个权重矩阵可以用来从输入图像中提取信息,从而使得卷积神经网络200进行正确的预测。The weight values in these weight matrices need to be obtained through a lot of training in practical applications. Each weight matrix formed by the weight values obtained through training can be used to extract information from the input image, so that the convolutional neural network 200 can make correct predictions. .
当卷积神经网络200有多个卷积层的时候,初始的卷积层(例如221)往往提取较多的一般特征,该一般特征也可以称之为低级别的特征;随着卷积神经网络200深度的加深,越往后的卷积层(例如226)提取到的特征越来越复杂,比如高级别的语义之类的特征,语义越高的特征越适用于待解决的问题。When the convolutional neural network 200 has multiple convolutional layers, the initial convolutional layers (for example, 221) often extract more general features, which may also be called low-level features. As the depth of the convolutional neural network 200 increases, the features extracted by the later convolutional layers (for example, 226) become more and more complex, such as high-level semantic features; features with higher-level semantics are more applicable to the problem to be solved.
池化层:Pooling layer:
由于常常需要减少训练参数的数量,因此卷积层之后常常需要周期性的引入池化层,在如图2中220所示例的221-226各层,可以是一层卷积层后面跟一层池化层,也可以是多层卷积层后面接一层或多层池化层。在图像处理过程中,池化层的唯一目的就是减少图像的空间大小。池化层可以包括平均池化算子和/或最大池化算子,以用于对输入图像进行采样得到较小尺寸的图像。平均池化算子可以在特定范围内对图像中的像素值进行计算产生平均值作为平均池化的结果。最大池化算子可以在特定范围内取该范围内值最大的像素作为最大池化的结果。另外,就像卷积层中用权重矩阵的大小应该与图像尺寸相关一样,池化层中的运算符也应该与图像的大小相关。通过池化层处理后输出的图像尺寸可以小于输入池化层的图像的尺寸,池化层输出的图像中每个像素点表示输入池化层的图像的对应子区域的平均值或最大值。Since it is often necessary to reduce the number of training parameters, a pooling layer often needs to be introduced periodically after a convolutional layer. In layers 221-226 illustrated at 220 in FIG. 2, one convolutional layer may be followed by one pooling layer, or multiple convolutional layers may be followed by one or more pooling layers. In image processing, the sole purpose of the pooling layer is to reduce the spatial size of the image. The pooling layer may include an average pooling operator and/or a maximum pooling operator for sampling the input image to obtain a smaller image. The average pooling operator computes the average of the pixel values within a specific window as the result of average pooling. The maximum pooling operator takes the pixel with the largest value within a specific window as the result of maximum pooling. In addition, just as the size of the weight matrix in the convolutional layer should be related to the image size, the operators in the pooling layer should also be related to the image size. The image output after processing by the pooling layer can be smaller than the image input to the pooling layer, and each pixel of the output image represents the average or maximum value of the corresponding sub-region of the input image.
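The average and maximum pooling operators described above can be sketched as follows (illustrative plain Python with a non-overlapping 2×2 window; window size and inputs are invented for the example). A 4×4 input becomes a 2×2 output, each output pixel summarizing its sub-region:

```python
def pool2d(image, size, reduce_fn):
    """Non-overlapping pooling: apply reduce_fn to each size-by-size window."""
    out = []
    for i in range(0, len(image) - size + 1, size):
        row = []
        for j in range(0, len(image[0]) - size + 1, size):
            window = [image[i + di][j + dj]
                      for di in range(size) for dj in range(size)]
            row.append(reduce_fn(window))
        out.append(row)
    return out

image = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
max_pooled = pool2d(image, 2, max)                        # keep the largest pixel
avg_pooled = pool2d(image, 2, lambda w: sum(w) / len(w))  # keep the mean pixel
print(max_pooled)  # [[6, 8], [14, 16]]
print(avg_pooled)  # [[3.5, 5.5], [11.5, 13.5]]
```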
神经网络层230:Neural network layer 230:
在经过卷积层/池化层220的处理后,卷积神经网络200还不足以输出所需要的输出信息。因为如前所述,卷积层/池化层220只会提取特征,并减少输入图像带来的参数。然而为了生成最终的输出信息(所需要的类信息或其他相关信息),卷积神经网络200需要利用神经网络层230来生成一个或者一组所需要的类的数量的输出。因此,在神经网络层230中可以包括多层隐含层(如图2所示的231、232至23n)以及输出层240,该多层隐含层中所包含的参数可以根据具体的任务类型的相关训练数据进行预先训练得到,例如该任务类型可以包括图像识别,图像分类,图像超分辨率重建等等。After processing by the convolutional layer/pooling layer 220, the convolutional neural network 200 is not yet able to output the required output information, because, as described above, the convolutional layer/pooling layer 220 only extracts features and reduces the parameters brought by the input image. To generate the final output information (the required class information or other related information), the convolutional neural network 200 needs the neural network layer 230 to generate one output or a group of outputs whose number equals the number of required classes. Therefore, the neural network layer 230 may include multiple hidden layers (231, 232 to 23n shown in FIG. 2) and an output layer 240. The parameters contained in the hidden layers can be obtained by pre-training on training data relevant to a specific task type; for example, the task type may include image recognition, image classification, image super-resolution reconstruction, and so on.
在神经网络层230中的多层隐含层之后,也就是整个卷积神经网络200的最后层为输出层240,该输出层240具有类似分类交叉熵的损失函数,具体用于计算预测误差,一旦整个卷积神经网络200的前向传播(如图2由210至240方向的传播为前向传播)完成,反向传播(如图2由240至210方向的传播为反向传播)就会开始更新前面提到的各层的权重值以及偏差,以减少卷积神经网络200的损失,及卷积神经网络200通过输出层输出的结果和理想结果之间的误差。After the multiple hidden layers of the neural network layer 230, i.e., as the final layer of the entire convolutional neural network 200, is the output layer 240. The output layer 240 has a loss function similar to categorical cross-entropy, which is specifically used to calculate the prediction error. Once the forward propagation of the entire convolutional neural network 200 (propagation in the direction from 210 to 240 in FIG. 2 is forward propagation) is completed, back propagation (propagation in the direction from 240 to 210 in FIG. 2 is back propagation) starts updating the weight values and biases of the aforementioned layers, so as to reduce the loss of the convolutional neural network 200, i.e., the error between the result output by the convolutional neural network 200 through the output layer and the ideal result.
需要说明的是,如图2所示的卷积神经网络200仅作为一种卷积神经网络的示例,在具体的应用中,卷积神经网络还可以以其他网络模型的形式存在。It should be noted that the convolutional neural network 200 shown in FIG. 2 is only used as an example of a convolutional neural network. In specific applications, the convolutional neural network may also exist in the form of other network models.
图3为本申请实施例提供的一种芯片硬件结构,该芯片包括神经网络处理器50。该芯片可以被设置在如图1所示的执行设备110中,用以完成计算模块111的计算工作。该芯片也可以被设置在如图1所示的训练设备120中,用以完成训练设备120的训练工作并输出目标模型/规则101。如图2所示的卷积神经网络中各层的算法均可在如图3所示的芯片中得以实现。FIG. 3 is a chip hardware structure provided by an embodiment of the application, and the chip includes a neural network processor 50. The chip may be set in the execution device 110 as shown in FIG. 1 to complete the calculation work of the calculation module 111. The chip can also be set in the training device 120 as shown in FIG. 1 to complete the training work of the training device 120 and output the target model/rule 101. The algorithms of each layer in the convolutional neural network as shown in Figure 2 can be implemented in the chip as shown in Figure 3.
神经网络处理器NPU 50作为协处理器挂载到主中央处理器(central processing unit,CPU)(host CPU)上,由主CPU分配任务。NPU的核心部分为运算电路503,控制器504控制运算电路503提取存储器(权重存储器或输入存储器)中的数据并进行运算。The neural network processor NPU 50 is mounted as a coprocessor onto a host central processing unit (CPU), and the host CPU assigns tasks. The core part of the NPU is the arithmetic circuit 503; the controller 504 controls the arithmetic circuit 503 to fetch data from memory (the weight memory or the input memory) and perform operations.
在一些实现中,运算电路503内部包括多个处理单元(process engine,PE)。在一些实现中,运算电路503是二维脉动阵列。运算电路503还可以是一维脉动阵列或者能够执行例如乘法和加法这样的数学运算的其它电子线路。在一些实现中,运算电路503是通用的矩阵处理器。In some implementations, the arithmetic circuit 503 includes multiple processing units (process engines, PE). In some implementations, the arithmetic circuit 503 is a two-dimensional systolic array. The arithmetic circuit 503 may also be a one-dimensional systolic array or other electronic circuits capable of performing mathematical operations such as multiplication and addition. In some implementations, the arithmetic circuit 503 is a general-purpose matrix processor.
举例来说,假设有输入矩阵A,权重矩阵B,输出矩阵C。运算电路从权重存储器502中取矩阵B相应的数据,并缓存在运算电路中每一个PE上。运算电路从输入存储器501中取矩阵A数据与矩阵B进行矩阵运算,得到的矩阵的部分结果或最终结果,保存在累加器(accumulator)508中。For example, suppose there is an input matrix A, a weight matrix B, and an output matrix C. The arithmetic circuit fetches the corresponding data of matrix B from the weight memory 502 and buffers it on each PE in the arithmetic circuit. The arithmetic circuit fetches matrix A data and matrix B from the input memory 501 to perform matrix operations, and the partial or final result of the obtained matrix is stored in an accumulator 508.
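The matrix operation described above, fetching matrices A and B and accumulating partial products, can be sketched as follows (an illustrative software analogue of the data flow, not a model of the actual circuit or of the PE array):

```python
def matmul(A, B):
    """C = A x B, accumulating partial products per output element,
    analogous to how partial results are collected in the accumulator 508."""
    n, k, m = len(A), len(B), len(B[0])
    C = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            acc = 0                      # per-output accumulator
            for p in range(k):
                acc += A[i][p] * B[p][j]
            C[i][j] = acc
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```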
向量计算单元507可以对运算电路的输出做进一步处理,如向量乘,向量加,指数运算,对数运算,大小比较等等。例如,向量计算单元507可以用于神经网络中非卷积/非FC层的网络计算,如池化(pooling),批归一化(batch normalization),局部响应归一化(local response normalization)等。The vector calculation unit 507 can perform further processing on the output of the arithmetic circuit, such as vector multiplication, vector addition, exponential operation, logarithmic operation, size comparison and so on. For example, the vector calculation unit 507 can be used for network calculations in the non-convolutional/non-FC layer of the neural network, such as pooling, batch normalization, local response normalization, etc. .
在一些实现中,向量计算单元507能将经处理的输出的向量存储到统一缓存器506。例如,向量计算单元507可以将非线性函数应用到运算电路503的输出,例如累加值的向量,用以生成激活值。在一些实现中,向量计算单元507生成归一化的值、合并值,或二者均有。在一些实现中,处理过的输出的向量能够用作到运算电路503的激活输入,例如用于在神经网络中的后续层中的使用。In some implementations, the vector calculation unit 507 can store the processed output vector in the unified buffer 506. For example, the vector calculation unit 507 may apply a nonlinear function to the output of the arithmetic circuit 503, such as a vector of accumulated values, to generate activation values. In some implementations, the vector calculation unit 507 generates normalized values, merged values, or both. In some implementations, the processed output vector can be used as an activation input to the arithmetic circuit 503, for example for use in a subsequent layer of the neural network.
统一存储器506用于存放输入数据以及输出数据。The unified memory 506 is used to store input data and output data.
直接存储器访问控制器(direct memory access controller,DMAC)505将外部存储器中的输入数据搬运到输入存储器501和/或统一存储器506,将外部存储器中的权重数据存入权重存储器502,以及将统一存储器506中的数据存入外部存储器。A direct memory access controller (DMAC) 505 transfers input data from the external memory to the input memory 501 and/or the unified memory 506, stores weight data from the external memory into the weight memory 502, and stores data from the unified memory 506 into the external memory.
总线接口单元(bus interface unit,BIU)510,用于通过总线实现主CPU、DMAC和取指存储器509之间进行交互。The bus interface unit (BIU) 510 is used to implement interaction between the main CPU, the DMAC, and the fetch memory 509 through the bus.
与控制器504连接的取指存储器(instruction fetch buffer)509,用于存储控制器504使用的指令;An instruction fetch buffer 509 connected to the controller 504 is used to store instructions used by the controller 504;
控制器504,用于调用取指存储器509中缓存的指令,实现控制该运算加速器的工作过程。The controller 504 is configured to invoke the instructions cached in the instruction fetch memory 509 to control the working process of the operation accelerator.
一般地,统一存储器506,输入存储器501,权重存储器502以及取指存储器509均为片上(On-Chip)存储器,外部存储器为该NPU外部的存储器,该外部存储器可以为双倍数据率同步动态随机存储器(double data rate synchronous dynamic random access memory,简称DDR SDRAM)、高带宽存储器(high bandwidth memory,HBM)或其他可读可写的存储器。Generally, the unified memory 506, the input memory 501, the weight memory 502, and the instruction fetch memory 509 are all on-chip memories, while the external memory is memory external to the NPU. The external memory may be a double data rate synchronous dynamic random access memory (DDR SDRAM), a high bandwidth memory (HBM), or another readable and writable memory.
其中,图2所示的卷积神经网络中各层的运算可以由运算电路503或向量计算单元507执行。The operations of each layer in the convolutional neural network shown in FIG. 2 can be executed by the arithmetic circuit 503 or the vector calculation unit 507.
上文中介绍的图1中的执行设备110能够执行本申请实施例的图像分类方法或者数据处理方法的各个步骤,图2所示的CNN模型和图3所示的芯片也可以用于执行本申请实施例的图像分类方法或者数据处理方法的各个步骤。下面结合附图对本申请实施例的图像分类方法和本申请实施例的数据处理方法进行详细的介绍。The execution device 110 in FIG. 1 described above can execute the steps of the image classification method or the data processing method of the embodiments of the present application, and the CNN model shown in FIG. 2 and the chip shown in FIG. 3 can also be used to execute the steps of the image classification method or the data processing method of the embodiments of the present application. The image classification method of the embodiments of the present application and the data processing method of the embodiments of the present application are described in detail below with reference to the accompanying drawings.
The image classification method and the data processing method of the embodiments of the present application introduced below involve performing convolution processing on the image or data to be processed. The result of the convolution processing may be called a convolution feature map, or simply a feature map.
图7是本申请实施例的图像分类方法的示意性流程图。图7所示的方法可以由图像分类装置执行,该图像分类装置可以是具有图像处理功能的电子设备。该电子设备具体可以是移动终端(例如,智能手机),电脑,个人数字助理,可穿戴设备,车载设备,物联网设备或者其他能够进行图像处理的设备。Fig. 7 is a schematic flowchart of an image classification method according to an embodiment of the present application. The method shown in FIG. 7 may be executed by an image classification device, which may be an electronic device with image processing functions. The electronic device may specifically be a mobile terminal (for example, a smart phone), a computer, a personal digital assistant, a wearable device, a vehicle-mounted device, an Internet of Things device or other devices capable of image processing.
图7所示的方法包括步骤1001至1004,下面分别对这些步骤进行详细的描述。The method shown in FIG. 7 includes steps 1001 to 1004, which are described in detail below.
1001、获取神经网络的M个基准卷积核的卷积核参数。1001. Obtain convolution kernel parameters of M reference convolution kernels of the neural network.
其中,上述M为正整数。Wherein, the above M is a positive integer.
1002、获取神经网络的N组掩码张量。1002. Obtain N sets of mask tensors of the neural network.
Each of the N groups of mask tensors consists of multiple mask tensors, and an element of the N groups of mask tensors occupies fewer bits in storage than an element of the convolution kernel parameters of the M reference convolution kernels (in general, the storage space occupied by the elements of a mask tensor is far smaller than that occupied by the elements of the convolution kernel parameters). Each of the M reference convolution kernels corresponds to one group of mask tensors among the N groups of mask tensors.
上述M个基准卷积核的卷积核参数以及N组掩码张量可以存储在寄存器中。此时,可以从寄存器中读取上述M个基准卷积核的卷积核参数以及N组掩码张量。该寄存器具体可以是权重寄存器,也就是神经网络中用于存储卷积核参数的寄存器。The convolution kernel parameters of the M reference convolution kernels and the N sets of mask tensors can be stored in registers. At this time, the convolution kernel parameters of the M reference convolution kernels and the N sets of mask tensors can be read from the register. The register may specifically be a weight register, that is, a register used to store convolution kernel parameters in a neural network.
应理解,上述神经网络的基准卷积核由上述M个基准卷积核组成,上述神经网络的掩码张量由上述N组掩码张量组成。该神经网络在部署时只需要保存M个基准卷积核的卷积核参数,以及N组掩码张量即可,而不必再逐个存储每个卷积核的参数。能够节省神经网络部署时所需要的存储空间,使得该神经网络也能够部署到一些存储资源受限的设备上。It should be understood that the reference convolution kernel of the foregoing neural network is composed of the foregoing M reference convolution kernels, and the mask tensor of the foregoing neural network is composed of the foregoing N groups of mask tensors. The neural network only needs to save the convolution kernel parameters of M reference convolution kernels and N groups of mask tensors during deployment, instead of storing the parameters of each convolution kernel one by one. The storage space required for the deployment of the neural network can be saved, so that the neural network can also be deployed on some devices with limited storage resources.
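As an illustrative sketch of the storage saving described above (the kernel sizes and counts below are made-up numbers, and the assumptions of 32-bit kernel elements and 1-bit mask elements are introduced here for illustration, not taken from this application):

```python
# Storage comparison: n ordinary kernels vs. M reference kernels + N groups of masks.
# Assumptions (illustrative only): kernel elements are 32-bit floats,
# mask elements are 1-bit values, and each of the N groups holds s masks.
c, d1, d2 = 64, 3, 3          # channels, kernel height, kernel width
n = 256                       # sub-convolution kernels needed by the layer
M, s = 32, 8                  # M reference kernels, s masks per group (M * s == n)
N = 1                         # one shared group of masks (mask tensor sharing)

elems = c * d1 * d2                        # elements per kernel / per mask
bits_plain = n * elems * 32                # store all n kernels directly
bits_compressed = M * elems * 32 + N * s * elems * 1

assert M * s == n
assert bits_compressed < bits_plain
print(bits_plain, bits_compressed, bits_plain / bits_compressed)
```

Under these made-up numbers the deployed parameters shrink by roughly a factor of eight, which is why the scheme suits devices with limited storage resources.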
In addition, the values of M and N can be determined according to how the neural network is constructed. For example, M and N can be determined according to the complexity of the network structure and the application requirements of the neural network: when the network structure is complex or the application requirements are high (for example, high processing capability is required), M and/or N can be set to larger values; when the network structure is relatively simple or the application requirements are low (for example, low processing capability is required), M and/or N can be set to smaller values.
应理解,上述M个基准卷积核的大小可以完全相同、完全不同或者部分相同。It should be understood that the sizes of the aforementioned M reference convolution kernels may be completely the same, completely different, or partially the same.
当M个基准卷积核中存在不同大小的基准卷积核时,能够从待处理图像中提取出较多的图像特征。When there are reference convolution kernels of different sizes among the M reference convolution kernels, more image features can be extracted from the image to be processed.
进一步的,当M个基准卷积核的大小均不相同时,能够进一步的从待处理图像中提取出更多的图像特征,便于后续对待处理图像进行更好的分类。Further, when the sizes of the M reference convolution kernels are all different, more image features can be further extracted from the image to be processed, so that the subsequent image to be processed can be better classified.
Similar to the M reference convolution kernels, the N groups of mask tensors may be completely the same, completely different, or partially the same.
可选地,上述N组掩码张量中的每组掩码张量内部包含的各个掩码张量的大小相同。Optionally, the mask tensors contained in each of the above N groups of mask tensors have the same size.
Optionally, each of the M reference convolution kernels corresponds to one group of mask tensors among the N groups of mask tensors, and one group of mask tensors among the N groups may correspond to one or more of the M reference convolution kernels.
可选地,上述N组掩码张量中的任意一组掩码张量与对应的基准卷积核的大小相同。Optionally, any group of mask tensors in the aforementioned N groups of mask tensors has the same size as the corresponding reference convolution kernel.
也就是说,在与某个基准卷积核相对应的一组掩码张量中,每个掩码张量的大小都与对应的基准卷积核的大小相同。That is to say, in a set of mask tensors corresponding to a certain reference convolution kernel, the size of each mask tensor is the same as the size of the corresponding reference convolution kernel.
If the first group of mask tensors among the N groups of masks corresponds to the first reference convolution kernel among the M reference convolution kernels, then the size of each mask tensor in the first group of mask tensors is the same as the size of the first reference convolution kernel.
Specifically, if the size of the first reference convolution kernel is c×d₁×d₂, where c denotes the number of channels and d₁ and d₂ denote the height and width respectively, then the size of any first mask tensor in the first group of mask tensors is also c×d₁×d₂.
In this application, since the size of a mask tensor is the same as the size of the corresponding reference convolution kernel, the convolution kernel obtained through the operation between the reference convolution kernel and the mask tensor also has the same size as the reference convolution kernel. Convolution kernels of the same size can therefore be obtained from the reference convolution kernels and the mask tensors, which facilitates subsequent uniform processing of the image to be processed with the obtained convolution kernels.
When the size of a reference convolution kernel is the same as the size of a mask tensor, the Hadamard product operation between the reference convolution kernel and the mask tensor can be performed normally, so that a sub-convolution kernel can be obtained from the reference convolution kernel and the mask tensor.
The Hadamard product operation, which may also be called element-wise multiplication, is an operation on matrices. If A=(aᵢⱼ) and B=(bᵢⱼ) are two matrices of the same order and cᵢⱼ=aᵢⱼ×bᵢⱼ, then the matrix C=(cᵢⱼ) is called the Hadamard product (or basic product) of A and B.
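The Hadamard product defined above can be checked with a minimal numpy example (numpy is used here purely for illustration):

```python
import numpy as np

# Hadamard product: two same-shaped matrices multiplied element by element,
# c_ij = a_ij * b_ij.
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
C = A * B   # numpy's * on same-shaped arrays is exactly the Hadamard product

# element-wise products: 1*5=5, 2*6=12, 3*7=21, 4*8=32
assert C.tolist() == [[5, 12], [21, 32]]
```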
可选地,上述掩码张量为L值掩码张量。也就是说,对于某个掩码张量来说,该掩码张量中的元素的取值可能有L种。其中,L为大于或者等于2的正整数。Optionally, the above-mentioned mask tensor is an L-value mask tensor. In other words, for a certain mask tensor, there may be L kinds of values of the elements in the mask tensor. Among them, L is a positive integer greater than or equal to 2.
一般来说,L的取值越小,掩码张量占用的存储空间越小。Generally speaking, the smaller the value of L, the smaller the storage space occupied by the mask tensor.
可选地,上述中的掩码张量为二值化掩码张量,此时,掩码张量中每个元素只有两种可能的取值,占用的比特位大大减少。Optionally, the above-mentioned mask tensor is a binary mask tensor. At this time, each element in the mask tensor has only two possible values, and the occupied bits are greatly reduced.
当信息库中的掩码张量为二值化掩码张量时,占用的存储空间很小,节省存储空间的效果比较明显。When the mask tensor in the information database is a binary mask tensor, the storage space occupied is small, and the effect of saving storage space is obvious.
The candidate values of the elements of a binarized mask tensor may be {0, 1}, {0, −1}, or {1, −1}.
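The following sketch illustrates, under the assumption of 1-bit-per-element packing (realized here with `np.packbits`, which is one possible implementation, not one prescribed by this application), how little storage a binarized mask needs compared with a float32 kernel of the same size:

```python
import numpy as np

c, d1, d2 = 16, 3, 3
kernel = np.random.randn(c, d1, d2).astype(np.float32)            # 32 bits per element
mask = np.random.randint(0, 2, size=(c, d1, d2)).astype(np.uint8) # values in {0, 1}

packed = np.packbits(mask.reshape(-1))   # 1 bit per mask element
print(kernel.nbytes, packed.nbytes)      # 576 bytes vs. 18 bytes

# The mask is recovered exactly when it is needed for the Hadamard product.
restored = np.unpackbits(packed)[: c * d1 * d2].reshape(c, d1, d2)
assert (restored == mask).all()
```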
For the M reference convolution kernels and the N groups of mask tensors, the value of M is generally greater than or equal to N. That is, each of the M reference convolution kernels can correspond to one group of mask tensors among the N groups, and one group of mask tensors among the N groups can correspond to one or more of the M reference convolution kernels. When M>N or M=N, the M reference convolution kernels and the N groups of mask tensors have different correspondences; the two cases M>N and M=N are introduced below.
第一种情况:M>NThe first case: M>N
在第一种情况下,M个基准卷积核中的至少两个基准卷积核共同对应N组掩码张量中的一组掩码张量。In the first case, at least two reference convolution kernels in the M reference convolution kernels jointly correspond to one group of mask tensors in the N group of mask tensors.
For example, suppose M=3 and N=2, the M reference convolution kernels include a first reference convolution kernel, a second reference convolution kernel, and a third reference convolution kernel, and the N groups of mask tensors include a first group of mask tensors and a second group of mask tensors. Then the correspondence between the M reference convolution kernels and the N groups of mask tensors can be as shown in Table 1.
Table 1

Reference convolution kernel | Mask tensor group
First reference convolution kernel | First group of mask tensors
Second reference convolution kernel | First group of mask tensors
Third reference convolution kernel | Second group of mask tensors
As shown in Table 1, the first reference convolution kernel and the second reference convolution kernel both correspond to the first group of mask tensors, and the third reference convolution kernel corresponds to the second group of mask tensors. When performing convolution processing on the image to be processed according to the reference convolution kernels and the mask tensors, convolution processing can be performed according to the first reference convolution kernel with the first group of mask tensors, the second reference convolution kernel with the first group of mask tensors, and the third reference convolution kernel with the second group of mask tensors, finally obtaining the convolution feature maps of the image to be processed.
In the first case, N can also be equal to 1. In this case all M reference convolution kernels correspond to the same group of mask tensors, i.e., the mask tensors are shared by multiple reference convolution kernels (this can be called mask tensor sharing). Sharing mask tensors can further reduce the storage overhead caused by the mask tensors.
In the first case above, multiple reference convolution kernels may correspond to the same group of mask tensors; that is, different reference convolution kernels can share the same mask tensors. The first case can therefore also be called the mask tensor sharing case.
下面结合图8对掩码张量共享的情况做进一步的说明。The following describes the sharing of the mask tensor with reference to FIG. 8.
As shown in FIG. 8, reference convolution kernel 1 and reference convolution kernel 2 share one group of mask tensors, and this shared group includes mask tensor 1 and mask tensor 2. Operating reference convolution kernel 1 with mask tensor 1 and mask tensor 2 respectively yields sub-convolution kernel 1 and sub-convolution kernel 2, and operating reference convolution kernel 2 with mask tensor 1 and mask tensor 2 respectively yields sub-convolution kernel 3 and sub-convolution kernel 4.
When performing the operation on reference convolution kernel 1 and mask tensor 1, specifically, a Hadamard product operation (that is, element-wise multiplication) can be performed on reference convolution kernel 1 and mask tensor 1 to obtain the parameters of sub-convolution kernel 1; the other sub-convolution kernels are calculated similarly.
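The generation of the four sub-convolution kernels of FIG. 8 can be sketched as follows (random arrays stand in for trained parameters; the shapes are illustrative):

```python
import numpy as np

c, d1, d2 = 8, 3, 3
base_kernels = [np.random.randn(c, d1, d2) for _ in range(2)]   # reference kernels 1 and 2
shared_masks = [np.random.choice([0.0, 1.0], size=(c, d1, d2))  # masks 1 and 2, shared
                for _ in range(2)]

# Every (reference kernel, mask) pair yields one sub-convolution kernel
# via the Hadamard product, so 2 x 2 = 4 sub-kernels are generated.
sub_kernels = [b * m for b in base_kernels for m in shared_masks]

assert len(sub_kernels) == 4
assert all(k.shape == (c, d1, d2) for k in sub_kernels)
```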
下面结合公式对掩码张量共享时的相关运算进行详细描述。The following describes in detail the related operations when the mask tensor is shared with the formula.
Suppose the input data (corresponding to the image to be processed above) is X ∈ ℝ^(c×h×w), where c is the number of channels and h and w denote the length and width of the input data respectively (when the input data is an image, h and w denote the length and width of the image). A convolution kernel in the neural network can be written as F ∈ ℝ^(c×d₁×d₂), where c again denotes the number of channels and d₁×d₂ denotes the size of the convolution kernel. In a neural network, a convolutional layer often contains many convolution kernels, and the convolution operation of a convolutional layer can be expressed by formula (1).
[Y₁, …, Yₙ] = [F₁*X, …, Fₙ*X]    (1)
In formula (1), X denotes the input data, F₁, F₂, …, Fₙ denote the n convolution kernels of the convolutional layer, * denotes the convolution operation, and [Y₁, …, Yₙ] ∈ ℝ^(n×H′×W′) is the convolution feature map output after convolution processing of the input data, where H′ and W′ denote the length and width of the output convolution feature map respectively.
As can be seen from formula (1), the convolution operation of one convolutional layer often involves computing a large number of convolution kernel parameters. To reduce the parameters of the convolution kernels, one reference convolution kernel and one group of mask tensors can be used to generate a large number of sub-convolution kernels.
The following describes, with formulas, how multiple sub-convolution kernels are obtained from reference convolution kernels and binarized mask tensors (binarized mask tensors are taken as an example here), and how the convolution operation is then performed.
Suppose the reference convolution kernels are Bᵢ ∈ ℝ^(c×d₁×d₂) (i = 1, …, k) and the binarized mask tensors are Mⱼ (j = 1, …, s), each of size c×d₁×d₂. Then multiple sub-convolution kernels can be obtained by performing the Hadamard product operation on the reference convolution kernels and the binarized mask tensors; the specific calculation process can be as shown in formula (2).
[F₁₁, …, Fₖₛ] = [B₁ ∘ M₁, …, Bₖ ∘ Mₛ]    (2)
In formula (2), Bᵢ denotes the i-th reference convolution kernel, where i ranges over [1, k]; Mⱼ denotes the j-th binarized mask tensor, where j ranges over [1, s]; and ∘ denotes the Hadamard product operation (which may also be called element-wise multiplication). Operating one reference convolution kernel with the s binarized mask tensors yields s sub-convolution kernels. In this way, with k reference convolution kernels and s binary masks (the k reference convolution kernels share the s binarized masks), the same number (k×s = n) of sub-convolution kernels can be obtained as in the original convolution operation (which, as shown in formula (1), performs the convolution calculation with n convolution kernels). The calculation process of using these sub-convolution kernels to perform the convolution calculation and obtain the output feature maps of n channels is shown in formula (3).
[Y₁₁, …, Yₖₛ] = [(B₁ ∘ M₁)*X, …, (Bₖ ∘ Mₛ)*X]    (3)
That is, n sub-convolution kernels are obtained from k reference convolution kernels and s binary mask tensors, and performing the convolution operation with these n sub-convolution kernels achieves the same effect as directly using n convolution kernels in the traditional scheme, while using k reference convolution kernels and s binary masks greatly reduces the number of parameters. Specifically, since k is smaller than n, the number of convolution kernel parameters is reduced; in addition, the binarized masks have extremely low storage requirements, and compared with the convolution kernels, few parameters need to be saved for them. Therefore, combining k reference convolution kernels with s binary masks can reduce the number of parameters.
When k reference convolution kernels and s binarized mask tensors are used to obtain the n sub-convolution kernels, the convolution kernel parameters can be compressed; the specific parameter compression ratio can be as shown in formula (4).
r₁ = (storage occupied by the k reference convolution kernels and the s binarized mask tensors) / (storage occupied by n ordinary convolution kernels)    (4)
In formula (4), r₁ is the parameter compression ratio, k is the number of reference convolution kernels, n is the number of sub-convolution kernels, c is the number of channels of a convolution kernel, d₁ and d₂ are the height and width of a convolution kernel, and s is the number of binarized mask tensors.
由公式(4)可知,相对于直接采用n个卷积核的方式,采用k个基准卷积核和s个二值化掩码张量的方式能够实现对卷积核参数的有效压缩。It can be seen from formula (4) that, compared to the method of directly using n convolution kernels, the method of using k reference convolution kernels and s binarization mask tensors can achieve effective compression of convolution kernel parameters.
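As an illustrative back-of-the-envelope check of this compression (assuming 32-bit kernel elements and 1-bit mask elements, which are assumptions made here for illustration rather than values taken from this application):

```python
def compression_ratio(n, k, s, c, d1, d2, kernel_bits=32, mask_bits=1):
    """Storage of (k reference kernels + s shared masks) relative to n plain kernels.

    Assumes kernel elements take kernel_bits bits and mask elements take
    mask_bits bits; a smaller ratio means stronger compression.
    """
    plain = n * c * d1 * d2 * kernel_bits
    compressed = k * c * d1 * d2 * kernel_bits + s * c * d1 * d2 * mask_bits
    return compressed / plain

# k * s = n sub-kernels: 16 reference kernels and 8 shared masks replace 128 kernels.
r = compression_ratio(n=128, k=16, s=8, c=64, d1=3, d2=3)
assert r < 1.0
print(round(r, 4))   # → 0.127
```

Note that c, d₁ and d₂ cancel out of the ratio, so under these assumptions the compression depends only on n, k, s and the element bit widths.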
第二种情况:M=NThe second case: M=N
在第二种情况下,M个基准卷积核与N组掩码张量一一对应(这种对应的方式可以称为掩码张量独立)。In the second case, the M reference convolution kernels have a one-to-one correspondence with N sets of mask tensors (this correspondence method can be called mask tensor independence).
For example, suppose M=3 and N=3, the M reference convolution kernels include a first reference convolution kernel, a second reference convolution kernel, and a third reference convolution kernel, and the N groups of mask tensors include a first group of mask tensors, a second group of mask tensors, and a third group of mask tensors. Then the correspondence between the M reference convolution kernels and the N groups of mask tensors can be as shown in Table 2.
Table 2

Reference convolution kernel | Mask tensor group
First reference convolution kernel | First group of mask tensors
Second reference convolution kernel | Second group of mask tensors
Third reference convolution kernel | Third group of mask tensors
As shown in Table 2, the first reference convolution kernel corresponds to the first group of mask tensors, the second reference convolution kernel corresponds to the second group of mask tensors, and the third reference convolution kernel corresponds to the third group of mask tensors. When performing convolution processing on the image to be processed according to the reference convolution kernels and the mask tensors, convolution processing can be performed according to the first reference convolution kernel with the first group of mask tensors, the second reference convolution kernel with the second group of mask tensors, and the third reference convolution kernel with the third group of mask tensors respectively, finally obtaining the convolution feature maps of the image to be processed.
下面结合公式对掩码张量独立的相关内容进行详细说明。The following describes the related content of mask tensor independence in detail with the formula.
With k reference convolution kernels and ks binarized masks, the same number (k×s = n) of sub-convolution kernels as in the original convolution operation are generated, and the process of performing the convolution calculation according to these sub-convolution kernels can be as shown in formula (5).
[Y₁₁, …, Yₖₛ] = [(B₁ ∘ M₁₁)*X, …, (Bₖ ∘ Mₖₛ)*X]    (5)
Compared with the mask tensor sharing scheme, the mask tensor independent scheme involves a slightly larger number of parameters, but since each reference convolution kernel corresponds to a different group of mask tensors, the features generated by the final convolution are more distinctive and discriminative.
当采用k个基准卷积核和ks个二值化掩码张量得到n子卷积核时,能够实现对卷积核参数的压缩,具体的参数压缩率可以如公式(6)所示。When k reference convolution kernels and ks binarization mask tensors are used to obtain n sub-convolution kernels, the parameters of the convolution kernels can be compressed, and the specific parameter compression ratio can be as shown in formula (6).
r₂ = (storage occupied by the k reference convolution kernels and the ks binarized mask tensors) / (storage occupied by n ordinary convolution kernels)    (6)
In formula (6), r₂ is the parameter compression ratio, k is the number of reference convolution kernels, n is the number of sub-convolution kernels, c is the number of channels of a convolution kernel, d₁ and d₂ are the height and width of a convolution kernel, and ks is the number of binarized mask tensors.
由公式(6)可知,采用k个基准卷积核和ks个二值化掩码张量得到n子卷积核,也能够实现对卷积核参数的有效压缩。It can be seen from formula (6) that using k reference convolution kernels and ks binarization mask tensors to obtain n subconvolution kernels can also achieve effective compression of convolution kernel parameters.
为了更形象的理解掩码张量独立的情况,下面结合图9进行说明。In order to understand the situation of mask tensor independence more vividly, the following description is made with reference to FIG. 9.
As shown in FIG. 9, reference convolution kernel 1 and reference convolution kernel 2 correspond to different groups of mask tensors: reference convolution kernel 1 corresponds to the first group of mask tensors, and reference convolution kernel 2 corresponds to the second group. The first group of mask tensors includes mask tensor 1 and mask tensor 2, and the second group includes mask tensor 3 and mask tensor 4. When obtaining the sub-convolution kernels, reference convolution kernel 1 is operated with mask tensor 1 and mask tensor 2 respectively to obtain sub-convolution kernel 1 and sub-convolution kernel 2, and reference convolution kernel 2 is operated with mask tensor 3 and mask tensor 4 respectively to obtain sub-convolution kernel 3 and sub-convolution kernel 4.
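The mask-independent generation of FIG. 9 can be sketched in the same way as the shared case; the only change is that each reference convolution kernel indexes its own group of masks (shapes and values are illustrative, with random arrays standing in for trained parameters):

```python
import numpy as np

c, d1, d2 = 8, 3, 3
k, s = 2, 2
base_kernels = [np.random.randn(c, d1, d2) for _ in range(k)]
# k groups of s masks each: k * s masks in total instead of s shared ones.
mask_groups = [[np.random.choice([0.0, 1.0], size=(c, d1, d2)) for _ in range(s)]
               for _ in range(k)]

# Reference kernel i is combined only with the masks of its own group i.
sub_kernels = [base_kernels[i] * mask_groups[i][j]
               for i in range(k) for j in range(s)]

assert len(sub_kernels) == k * s   # same n = k * s sub-kernels as the shared case
```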
可选地,上述N组掩码张量中至少一组掩码张量中的至少部分掩码张量满足两两正交。Optionally, at least part of the mask tensors in the at least one group of mask tensors in the above N groups of mask tensors satisfy pairwise orthogonality.
When the convolution kernels of a neural network are used to perform convolution processing on an input image, in general, the greater the differences between the convolution kernels, the richer the features they extract, and the better the processing results that can be obtained. Therefore, when at least some of the mask tensors in at least one of the N groups of mask tensors are pairwise orthogonal, richer features are more likely to be obtained in the subsequent convolution processing, which may improve the final processing effect.
可选地,上述N组掩码张量中至少一组掩码张量中的全部掩码张量满足两两正交。Optionally, all mask tensors in at least one group of mask tensors in the foregoing N groups of mask tensors satisfy pairwise orthogonality.
When any two mask tensors in at least one of the N groups of mask tensors are pairwise orthogonal, the image features extracted by the convolution processing based on the reference convolution kernels and the mask tensors are richer, which can improve the final processing effect on the image.
可选地,上述N组掩码张量中每组掩码张量中的全部掩码张量满足两两正交。Optionally, all mask tensors in each group of mask tensors in the foregoing N groups of mask tensors satisfy pairwise orthogonality.
When all the mask tensors in each of the N groups of mask tensors are pairwise orthogonal, the image features extracted by the convolution processing based on the reference convolution kernels and the mask tensors are richer still, which can improve the final processing effect on the image.
Suppose a group of mask tensors contains s binary mask tensors. These s binary mask tensors can be vectorized and assembled into a matrix M. For any two of the s binarized mask tensors to satisfy the pairwise orthogonality requirement, the matrix M should be approximately an orthogonal matrix, so that the convolution kernels generated from the s binarized mask tensors and a reference convolution kernel differ clearly from one another. Therefore, a regularization term as shown in formula (7) can be added to the above s binary mask tensors:
L_orth = ‖ MᵀM / (c·d₁·d₂) − I ‖_F    (7)
In formula (7), I is an identity matrix, ‖·‖_F is the Frobenius norm, d₁ and d₂ denote the height and width of a convolution kernel, c is the number of input channels of a convolution kernel, and L_orth denotes the regularization term. Constrained by this regularization term, the correlations among the above s binary mask tensors become very small, so that the convolution kernels generated from the same reference convolution kernel are also more diverse and distinguishable.
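A sketch of such an orthogonality regularization term follows. The normalization by c·d₁·d₂ is an assumption chosen so that the Gram matrix of ±1-valued masks has a unit diagonal; it is not necessarily the exact form used in formula (7):

```python
import numpy as np

c, d1, d2 = 4, 3, 3
s = 3
# s binary masks with values in {-1, +1}, vectorized into the columns of M.
masks = np.random.choice([-1.0, 1.0], size=(s, c * d1 * d2))
M = masks.T                                   # shape (c*d1*d2, s)

gram = M.T @ M / (c * d1 * d2)                # diagonal is exactly 1 for +-1 masks
l_orth = np.linalg.norm(gram - np.eye(s), ord='fro')

# l_orth is 0 if and only if the s mask vectors are pairwise orthogonal,
# so minimizing it during training pushes the masks toward orthogonality.
assert gram.shape == (s, s)
assert np.allclose(np.diag(gram), 1.0)
assert l_orth >= 0.0
```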
1003、根据M个基准卷积核的卷积核参数和N组掩码张量对待处理图像进行卷积处理,得到待处理图像的多个卷积特征图。1003. Perform convolution processing on the image to be processed according to the convolution kernel parameters of the M reference convolution kernels and the N sets of mask tensors to obtain multiple convolution feature maps of the image to be processed.
应理解,在上述步骤1003之前,可以先获取待处理图像。It should be understood that before step 1003, the image to be processed may be acquired first.
The image to be processed may be an image or picture to be classified. When the method shown in FIG. 7 is executed by an electronic device, the image to be processed may be an image captured by the electronic device through a camera, or it may be an image stored inside the electronic device (for example, a picture in the album of the electronic device).
在步骤1003中对待处理图像进行处理,得到待处理图像的多个卷积特征图的具体实现方式有多种,下面对其中常见的两种方式进行介绍。In step 1003, the image to be processed is processed to obtain multiple convolution feature maps of the image to be processed. There are many specific implementation manners. Two common methods are introduced below.
First manner: first obtain multiple convolution kernels, and then use these convolution kernels to perform convolution processing on the image to be processed, to obtain the multiple convolution feature maps of the image to be processed.
Specifically, in the first manner, the specific process of obtaining the multiple convolution feature maps of the image to be processed includes:
(1) Perform a Hadamard product operation on each of the M reference convolution kernels and the group of mask tensors corresponding to that reference convolution kernel in the N groups of mask tensors, to obtain multiple sub-convolution kernels;
(2) Perform convolution processing on the image to be processed separately by using the multiple sub-convolution kernels, to obtain the multiple convolution feature maps.
Second manner: first perform convolution processing on the image to be processed according to the M reference convolution kernels to obtain M reference convolution feature maps, and then obtain the multiple convolution feature maps of the image to be processed according to the M reference convolution feature maps and the N groups of mask tensors.
Specifically, in the second manner, the specific process of obtaining the multiple convolution feature maps of the image to be processed includes:
(3) Perform convolution processing on the image to be processed according to the M reference convolution kernels, to obtain M reference convolution feature maps of the image to be processed;
(4) Perform a Hadamard product operation on the M reference convolution feature maps and the N groups of mask tensors, to obtain the multiple convolution feature maps of the image to be processed.
The second manner can reduce the number of convolution computations: when there are M reference convolution kernels, only M convolution computations are required, rather than generating M*N convolution kernels and then performing M*N convolution operations. This is likely to reduce the overall computational complexity and improve data processing efficiency.
It should be understood that the foregoing reference convolution feature map refers to a convolution feature map obtained by performing convolution processing on the image to be processed with a reference convolution kernel.
The above second computation manner may also be referred to as an efficient forward computation manner. In this manner, by moving the convolution computation forward and performing it with the reference convolution kernels, the amount of convolution computation can be reduced. The reduction in the amount of convolution computation in the second manner is described below with reference to specific formulas.
For an image block X ∈ R^(c×d_1×d_2), when the traditional convolution computation manner is used, the image block is multiplied element-wise by each convolution kernel and the products are then summed, as shown in formula (8).
Y = [sum(F_1 ∘ X), sum(F_2 ∘ X), …, sum(F_n ∘ X)]    (8)
In the above formula (8), F_1 to F_n denote the n convolution kernels, X denotes the image block to be processed, ∘ denotes element-wise multiplication, sum(·) denotes summation over all elements, and Y denotes the resulting convolution output. Assuming that each of F_1 to F_n has size c×d_1×d_2, the traditional convolution process shown in formula (8) requires ncd_1d_2 multiplications and ncd_1d_2 additions.
In contrast, when multiple sub-convolution kernels are obtained from the reference convolution kernels and the mask tensors, and the image block is then convolved with these sub-convolution kernels, the computation process can be as shown in formula (9): the image block is multiplied element-wise by each sub-convolution kernel and the products are then summed.
Y = [sum(X ∘ (B_1 ∘ M_1)), sum(X ∘ (B_1 ∘ M_2)), …, sum(X ∘ (B_k ∘ M_s))]    (9)
In the above formula (9), F_11 to F_ks are the multiple sub-convolution kernels (F_ij = B_i ∘ M_j), X denotes the image block to be processed, ∘ denotes element-wise multiplication, Y denotes the resulting convolution output, B_i denotes the i-th reference convolution kernel, and M_j denotes the j-th mask tensor.
As can be seen from the above formula (9), the element-wise product X ∘ B_i of the image block and a reference convolution kernel is computed s times, while it actually needs to be computed only once, with the result cached. Denoting the cached intermediate result as C_i = X ∘ B_i, formula (9) can be simplified into formula (10).
Y = [sum(C_1 ∘ M_1), sum(C_1 ∘ M_2), …, sum(C_k ∘ M_s)]    (10)
When M_j is a binarized mask tensor, C_i ∘ M_j here can be implemented by a masking operation that takes very little time. The above efficient forward computation comprises kcd_1d_2 multiplications, ncd_1d_2 additions, and ncd_1d_2 negligible masking operations. Compared with the traditional convolution operation, the reference convolution kernels reduce the number of multiplications by a ratio of r_2 = s, which greatly reduces the number of multiplication operations and lowers the computational complexity.
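As a hedged numerical illustration (not the patent's implementation), the following sketch checks that the cached form of formula (10) matches the naive sub-kernel form of formula (9) for a single image block, and counts the multiplications saved; all variable names and sizes are illustrative.

```python
import numpy as np

# k reference kernels B_i, s binary masks M_j, one image block X,
# all of size (c, d1, d2); sum(A ∘ B) is implemented as np.sum(A * B).
rng = np.random.default_rng(1)
c, d1, d2, k, s = 3, 5, 5, 2, 4
X = rng.standard_normal((c, d1, d2))
B = rng.standard_normal((k, c, d1, d2))              # reference kernels
M = rng.choice([-1.0, 1.0], size=(s, c, d1, d2))     # binary mask tensors

# Formula (9): n = k*s sub-kernels F_ij = B_i ∘ M_j, each applied to X.
naive = np.array([np.sum(X * (B[i] * M[j]))
                  for i in range(k) for j in range(s)])

# Formula (10): cache C_i = X ∘ B_i once, then apply the cheap mask and sum.
C = X[None] * B                                      # only k*c*d1*d2 multiplications
cached = np.array([np.sum(C[i] * M[j])
                   for i in range(k) for j in range(s)])

assert np.allclose(naive, cached)
# Multiplications for the products with X: n*c*d1*d2 versus k*c*d1*d2.
print(k * s * c * d1 * d2, "->", k * c * d1 * d2)    # ratio n/k = s
```

The printed ratio between the two counts equals s, matching the reduction ratio r_2 = s stated above.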
1004. Classify the image to be processed according to the multiple convolution feature maps of the image to be processed, to obtain a classification result of the image to be processed.
Optionally, the classifying the image to be processed according to the multiple convolution feature maps to obtain the classification result of the image to be processed includes: splicing the multiple convolution feature maps to obtain a target convolution feature map; and classifying the image to be processed according to the target convolution feature map to obtain the classification result of the image to be processed.
The widths and heights of the multiple convolution feature maps should be the same. The foregoing splicing of the multiple convolution feature maps essentially superimposes their channel dimensions, to obtain a target convolution feature map whose number of channels is the sum of the numbers of channels of the multiple convolution feature maps.
For example, suppose there are 3 convolution feature maps in total, whose sizes are c_1×d_1×d_2, c_2×d_1×d_2, and c_3×d_1×d_2 respectively. Then the size of the target feature map obtained by splicing the 3 convolution feature maps is c×d_1×d_2, where c = c_1 + c_2 + c_3.
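The channel-wise splicing above can be sketched as follows (a minimal illustration; the channel counts and spatial size are chosen for the example and are not fixed by the patent):

```python
import numpy as np

# Three convolution feature maps with the same spatial size (d1 x d2) but
# different channel counts c1, c2, c3, spliced along the channel axis.
d1, d2 = 4, 6
maps = [np.zeros((c_i, d1, d2)) for c_i in (2, 3, 5)]   # c1, c2, c3
target = np.concatenate(maps, axis=0)
print(target.shape)  # (10, 4, 6), i.e. c = c1 + c2 + c3 = 10
```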
In this application, when classification processing is performed on the image to be processed, only the convolution kernel parameters of the reference convolution kernels and the corresponding mask tensors need to be obtained from the storage space. The convolution processing of the image to be processed, and hence its classification, can be implemented by using the reference convolution kernels and the corresponding mask tensors, without obtaining the parameters of every convolution kernel in the neural network. This can reduce the storage overhead generated when the neural network is deployed, so that the neural network can be deployed on devices with limited storage resources to perform image classification processing.
Specifically, compared with the elements of the parameters of a reference convolution kernel, the elements of a mask tensor occupy less storage space. Therefore, obtaining sub-convolution kernels by combining reference convolution kernels with mask tensors reduces the number of convolution kernel parameters and achieves compression of the convolution kernel parameters, so that the neural network can be deployed on devices with limited storage resources to perform image classification tasks.
The specific reasons why the neural network model corresponding to the image classification method of the embodiments of this application can reduce storage overhead are analyzed below. For a convolutional layer in a neural network, the number of parameters of its convolution kernels is n×c×d_1×d_2, where n is the number of convolution kernels contained in the convolutional layer, c is the number of channels of a convolution kernel, and d_1 and d_2 are the height and width of a convolution kernel respectively. The amount of computation for the convolutional layer to perform convolution on one input image is h×w×n×c×d_1×d_2 multiplications and additions, where h and w denote the height and width of the convolution feature map output by the convolutional layer.
Because parameter redundancy exists among the n convolution kernels in a convolutional layer, while keeping the dimensions of the input features and output features of the convolutional layer unchanged, a small number of k reference convolution kernels (k < n) and binarized masks with extremely low storage requirements can be used. Through pairwise combination of the reference convolution kernels and the mask tensors, n sub-convolution kernels can be derived, where the parameters of the sub-convolution kernels all come from the reference convolution kernels and the binarized masks. In this way, the number of convolution kernel parameters can be reduced, and the storage overhead generated by saving convolution kernel parameters during neural network deployment can be reduced.
To describe the image classification method of the embodiments of this application more vividly, the entire process of the embodiments of this application is described below with reference to FIG. 10. As shown in FIG. 10, the reference convolution kernels are operated with the mask tensors to obtain the sub-convolution kernels of the neural network. These sub-convolution kernels can process the input picture (here, a picture of a cat) to obtain the convolution feature maps of the input picture. Next, the classifier of the neural network can process the convolution feature maps of the input picture to obtain the probability that the input picture belongs to each picture category (the probability that this input picture is a cat is the highest). Then, a category whose probability value is greater than a certain value can be determined as the category of the input picture (because the probability that the input picture is a cat is the highest, the category of the input picture can be determined as cat), and the category information of the input picture is output.
As can be seen from FIG. 10, for the neural network, only the convolution kernel parameters of the reference convolution kernels and the mask tensors need to be saved, and many sub-convolution kernels can then be derived, without saving the parameters of each sub-convolution kernel. This saves the storage space occupied when the neural network is deployed or applied, and facilitates deploying the neural network on devices with limited storage resources, so as to classify or recognize images on these devices.
Still taking the process shown in FIG. 10 as an example, the neural network based on reference convolution kernels includes N convolutional layers (FIG. 10 shows one of them). Assume that an original convolutional layer in the neural network contains a total of 16 ordinary convolution kernels of size 3*7*7. Then, when the mask tensors are used in an independent manner, the convolutional layer may need 4 full-stack convolution kernels of size 3*7*7 and 16 binarized mask tensors of size 3*7*7. Each reference convolution kernel can be multiplied element-wise with its corresponding 4 mask tensors to obtain 4 sub-convolution kernels, so that a total of 16 sub-convolution kernels can be generated from the 4 reference convolution kernels to replace the 16 ordinary convolution kernels of the original network. In this case, the number of parameters of the full-stack convolution kernels of this layer is 4*3*7*7 = 588, the number of parameters of the binary mask tensors (counted in 32-bit units) is 16*3*7*7/32 = 73.5, and the total is 588 + 73.5 = 661.5. In contrast, the number of parameters of the convolutional layer using ordinary convolution kernels is 16*3*7*7 = 2352, so the parameters are compressed by a factor of 2352/661.5 ≈ 3.56, thereby achieving effective compression of the parameters.
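The compression arithmetic in this example can be checked directly (each binary mask element is counted as 1 bit, i.e. 1/32 of a 32-bit kernel parameter):

```python
# Parameter counting for the FIG. 10 example: 16 ordinary 3*7*7 kernels
# replaced by 4 full-stack kernels plus 16 binary (1-bit) mask tensors.
kernel_size = 3 * 7 * 7                      # c * d1 * d2 = 147
ordinary = 16 * kernel_size                  # 2352 parameters
full_stack = 4 * kernel_size                 # 588 parameters
masks = 16 * kernel_size / 32                # 73.5 (1-bit entries in 32-bit units)
total = full_stack + masks                   # 661.5
print(ordinary, total, round(ordinary / total, 2))  # 2352 661.5 3.56
```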
It should be understood that the image classification method shown in FIG. 7 can be applied to the scenario shown in FIG. 4. Specifically, after the image to be shot is obtained through the mobile phone's camera for a self-portrait, image classification can be performed on the image to be shot according to the method shown in FIG. 7. After the classification result is obtained, a prediction box is generated on the shooting interface according to the image classification result, which facilitates better shooting by the user.
The image classification method shown in FIG. 7 can also be applied in autonomous driving scenarios: the road pictures captured while the vehicle is driving are classified by the image classification method shown in FIG. 7 to recognize objects of different categories, so as to obtain a semantic segmentation result of the road.
Optionally, the convolution kernel parameters of the reference convolution kernels in the foregoing reference convolution kernel parameter library and the mask tensors are obtained by training the neural network according to training images.
The image category of the foregoing training images is the same as the image category of the image to be processed. For example, when the image to be processed is an image of human motion, the training images may be images containing various types of human motion.
Specifically, when constructing the neural network, the values of M and N and the number of mask tensors contained in each group of mask tensors can be determined according to factors such as the performance requirements of the network to be constructed, the complexity of the network structure, and the size of the storage space required to store the corresponding convolution kernel parameters and mask tensor parameters. Then the convolution kernel parameters of the M reference convolution kernels and the N groups of mask tensors are initialized (that is, initial values are set for these reference convolution kernels and mask tensors), and a loss function is constructed. Next, the neural network can be trained with the training images. During training, the parameter values in the reference convolution kernels and the mask tensors can be updated according to the value of the loss function. When the loss function converges, or the value of the loss function meets the requirement, or the number of training iterations reaches a preset number, the training can be stopped, and the parameter values in the reference convolution kernels and the mask tensors at that time are determined as their final parameter values. Then, the neural network containing the corresponding parameter values (that is, the final parameter values of the reference convolution kernels and the mask tensors obtained by training) can be deployed on the required devices as needed, and the devices on which the neural network is deployed can perform image classification.
To better understand the process of obtaining the convolution kernel parameters of the reference convolution kernels and the mask tensors, the process of obtaining the convolution kernel parameters of one reference convolution kernel and one group of mask tensors is described below with reference to FIG. 11.
FIG. 11 is a schematic diagram of the process of obtaining the convolution kernel parameters of a reference convolution kernel and the mask tensors.
The process shown in FIG. 11 includes steps S1 to S7. Through these steps, the convolution kernel parameters of the reference convolution kernel and the parameters of the mask tensors can be obtained.
These steps are described in detail below.
S1. Initialize the reference convolution kernel and the mask tensors.
It should be understood that in S1, the convolution kernel parameters of one reference convolution kernel and the values of the elements in the corresponding group of mask tensors can be initialized. Through the initialization operation, the first reference convolution kernel and the first group of mask tensors shown in FIG. 11 can be obtained, where the first group of mask tensors includes mask tensor 1, mask tensor 2, and mask tensor 3 (not shown in FIG. 11).
S2. Generate sub-convolution kernels according to the first reference convolution kernel and the first group of mask tensors.
In S2, the sub-convolution kernels generated according to the first reference convolution kernel and the first group of mask tensors specifically include sub-convolution kernel A, sub-convolution kernel B, and sub-convolution kernel C.
Specifically, in S2, sub-convolution kernel A can be generated according to the first reference convolution kernel and mask tensor 1, sub-convolution kernel B according to the first reference convolution kernel and mask tensor 2, and sub-convolution kernel C according to the first reference convolution kernel and mask tensor 3.
The foregoing sub-convolution kernels A, B, and C are essentially convolution kernels in the neural network, and are used to perform convolution processing on the input data.
S3. Process the input data by using the sub-convolution kernels, to obtain convolution feature maps of the input data.
Specifically, in S3, sub-convolution kernel A performs convolution processing on the input data to obtain feature map A, sub-convolution kernel B performs convolution processing on the input data to obtain feature map B, and sub-convolution kernel C performs convolution processing on the input data to obtain feature map C.
The foregoing input data may specifically be an image to be processed.
In addition, when obtaining the convolution feature maps of the input data, the first reference convolution kernel may also first be used to process the input data to obtain an initial convolution feature map; then feature map A is generated according to the initial feature map and mask tensor 1, feature map B according to the initial feature map and mask tensor 2, and feature map C according to the initial feature map and mask tensor 3. This manner can reduce the number of convolution operations and the amount of computation.
S4. Splice feature map A, feature map B, and feature map C to obtain a spliced feature map.
S5. Determine, according to the spliced feature map, whether the preset loss function converges.
When it is determined in S5 that the loss function has not converged, the training of the neural network model has not yet met the requirement, and S6 is then performed.
S6. Update the convolution kernel parameters of the first reference convolution kernel and/or the parameters in the first group of mask tensors according to a certain gradient.
In S6, the gradients for updating the convolution kernel parameters of the first reference convolution kernel and the parameters of the first group of mask tensors can be determined according to parameters such as the learning rate. After S6 is performed, S2 to S5 can be repeated until the preset loss function converges.
When it is determined in S5 that the loss function has converged, the training of the neural network model has met the requirement, and S7 can then be performed.
S7. Obtain the convolution kernel parameters of the first reference convolution kernel and the parameters of the mask tensors in the first group of mask tensors.
It should be understood that, for ease of understanding and description, FIG. 11 uses only one reference convolution kernel and one group of mask tensors as an example. When there are multiple reference convolution kernels and multiple groups of mask tensors, the process shown in FIG. 11 can also be used to determine the convolution kernel parameters of the reference convolution kernels and the parameters of the mask tensors, except that the convolution kernel parameters of the multiple reference convolution kernels and the parameters of the multiple groups of mask tensors need to be initialized during initialization, and when the parameters are updated, the convolution kernel parameters of the multiple reference convolution kernels and/or the parameters of the multiple groups of mask tensors also need to be updated.
During the training of the neural network, convolution computation needs to be performed and the loss function corresponding to the neural network model needs to be calculated. The convolution kernel parameters of the reference convolution kernels and the mask tensors obtained when the loss function converges are the final reference convolution kernel parameters and mask tensors. These processes are described in detail below with reference to formulas.
The convolution operation can be implemented by matrix multiplication. Specifically, before the convolution computation, the input feature map can be divided into l = H×W blocks (each block of size d_1×d_2×c), and these blocks can be vectorized; the vectors corresponding to these blocks are shown in formula (11).
X = [vec(x_1), vec(x_2), …, vec(x_l)] ∈ R^(cd_1d_2 × l)    (11)
Similarly, the output feature map can be vectorized, with the result shown in formula (12); all the sub-convolution kernels can also be vectorized, with the result shown in formula (13).
Y = [vec(y_1), vec(y_2), …, vec(y_n)] ∈ R^(l × n)    (12)
F = [vec(F_11), vec(F_12), …, vec(F_ks)] ∈ R^(cd_1d_2 × n)    (13)
Here, taking the case of shared mask tensors as an example, there are two variables to be optimized, shown in formula (14) and formula (15) respectively.
B = [vec(B_1), vec(B_2), …, vec(B_k)] ∈ R^(cd_1d_2 × k)    (14)
M = [vec(M_1), vec(M_2), …, vec(M_k)]    (15)
In formula (14) and formula (15), B denotes the reference convolution kernels and M denotes the mask tensors. Specifically, the reference convolution kernels include B_1, …, B_k, and the mask tensors include M_1, …, M_k.
The convolution operation with the reference convolution kernels can be expressed by formula (16).
Figure PCTCN2020086015-appb-000033
The objective function of the neural network based on the foregoing reference convolution kernels is shown in formula (17).
min L = L_0(B, M) + λL_ortho(M)    (17)
In formula (17), L_0 is the task-related loss function, such as the cross-entropy loss for a classification task, and L_ortho(M) is the orthogonality loss function; η below denotes the learning rate. As shown in formulas (18) and (19), the gradients of the two variables can be calculated through the standard backpropagation algorithm.
∂L/∂B = ∂L_0/∂B    (18)
∂L/∂M = ∂L_0/∂M + λ·∂L_ortho(M)/∂M    (19)
Next, B can be updated according to formula (20).
B ← B − η·∂L/∂B    (20)
When updating M, since it is binarized, gradient descent cannot be applied directly. Therefore, a proxy variable H can first be defined, as shown in formula (21).
M = sin(H)    (21)
Next, the gradient is calculated according to formula (22), and the variable H is updated according to formula (23); updating the variable H indirectly updates M.
∂L/∂H = ∂L/∂M ∘ cos(H)    (22)
H ← H − η·∂L/∂H    (23)
After each update of B and M, whether formula (17) converges can be determined. If formula (17) does not converge, B and M continue to be updated and formula (17) is calculated again. If formula (17) converges, the corresponding B and M are the final parameters to be determined.
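A hedged sketch of one proxy-variable update step for the mask (the gradient with respect to M is a random placeholder here, since the full network and task loss are not reproduced, and the final sign-based binarization is our assumption rather than the patent's stated rule):

```python
import numpy as np

# Formulas (21)-(23): the binary mask is represented through a real-valued
# proxy H with M = sin(H); gradients flow to H via the chain rule
# dL/dH = dL/dM ∘ cos(H), and H is updated by gradient descent.
rng = np.random.default_rng(2)
H = rng.standard_normal((3, 7, 7))
eta = 0.1                                   # learning rate η

M = np.sin(H)                               # formula (21)
dL_dM = rng.standard_normal(H.shape)        # placeholder gradient w.r.t. M
dL_dH = dL_dM * np.cos(H)                   # formula (22), chain rule
H = H - eta * dL_dH                         # formula (23)

# One plausible binarization of the learned mask (our assumption).
M_binary = np.sign(np.sin(H))
print(np.unique(M_binary))                  # values drawn from {-1.0, 1.0}
```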
The image classification method of the embodiments of this application has been described in detail above with reference to FIG. 7 to FIG. 11. The data processing method of the embodiments of this application is described below with reference to FIG. 12.
FIG. 12 is a schematic flowchart of a data processing method according to an embodiment of this application. The method shown in FIG. 12 may be performed by a data processing apparatus, which may be an electronic device having data processing (especially multimedia data processing) functions. The electronic device may specifically be a mobile terminal (for example, a smartphone), a computer, a personal digital assistant, a wearable device, a vehicle-mounted device, an Internet of Things device, or another device capable of image processing.
The method shown in FIG. 12 includes steps 2001 to 2004, which are respectively described below.
2001、获取神经网络的M个基准卷积核的卷积核参数。2001. Obtain the convolution kernel parameters of the M reference convolution kernels of the neural network.
其中,上述M为正整数。Wherein, the above M is a positive integer.
2002、获取所述神经网络的N组掩码张量。2002. Obtain N groups of mask tensors of the neural network.
其中,上述N为正整数,N组掩码张量中的每组掩码张量由多个掩码张量组成,N组掩码张量中的元素存储时占用的比特数小于M个基准卷积核中卷积核参数中的元素存储时占用的比特数,M个基准卷积核中的每个基准卷积核对应N组掩码张量中的一组掩码张量。Among them, the above N is a positive integer, each mask tensor in the N groups of mask tensors is composed of multiple mask tensors, and the number of bits occupied by the elements in the N groups of mask tensors is less than that of the M reference convolution kernels. The number of bits occupied by the elements in the convolution kernel parameters when storing, each of the M reference convolution kernels corresponds to a group of mask tensors in the N groups of mask tensors.
应理解,上述步骤2001和2002的执行过程与图7所示的方法中的步骤1001和步骤1002执行的过程相同,上文中对步骤1001和步骤1002的相关描述也适用于步骤2001和2002,为了避免不必要的重复,这里不再重复介绍。It should be understood that the execution process of steps 2001 and 2002 above is the same as that of steps 1001 and 1002 in the method shown in FIG. 7, and the description of steps 1001 and 1002 above also applies to steps 2001 and 2002. To avoid unnecessary repetition, details are not repeated here.
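As a rough illustration of the storage comparison described above (full-precision reference convolution kernels versus low-bit mask tensors), the following sketch counts storage bits for a hypothetical layer. All sizes here (M, s, the kernel shape, and the use of 1-bit binary masks) are assumptions chosen for illustration and are not taken from the application:

```python
# Hypothetical layer: 64 output channels of 3x3 kernels over 64 input channels.
# Conventional CNN: 64 full float32 convolution kernels.
# Sketched scheme: M float32 reference kernels, each paired with a group of
# s binary (1-bit) mask tensors, yielding M * s = 64 sub-kernels.
M, s = 16, 4
k, c_in = 3, 64

conventional_bits = 64 * k * k * c_in * 32   # 64 float32 kernels
reference_bits = M * k * k * c_in * 32       # M float32 reference kernels
mask_bits = M * s * k * k * c_in * 1         # 1 bit per mask element
proposed_bits = reference_bits + mask_bits

# The masks add little on top of the reference kernels, so total storage drops.
print(conventional_bits // 8, "bytes vs", proposed_bits // 8, "bytes")
```

Under these assumed sizes the sketched scheme needs roughly 3.5x fewer bits than storing all 64 full-precision kernels, which is the kind of storage saving the embodiment aims at.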
2003、根据M个基准卷积核以及N组掩码张量对多媒体数据进行卷积处理,得到多媒体数据的多个卷积特征图。2003. Perform convolution processing on multimedia data according to M reference convolution kernels and N groups of mask tensors to obtain multiple convolution feature maps of the multimedia data.
上述步骤2003中得到多媒体数据的多个卷积特征图的过程与图7所示的方法中的步骤1003类似,主要的区别在于,步骤1003是对待处理图像进行卷积处理,而步骤2003是对多媒体数据进行处理。因此,步骤2003的具体处理过程可以参见上文中步骤1003的处理过程。The process of obtaining multiple convolution feature maps of the multimedia data in step 2003 is similar to step 1003 in the method shown in FIG. 7. The main difference is that step 1003 performs convolution processing on the image to be processed, while step 2003 processes multimedia data. Therefore, for the specific processing procedure of step 2003, refer to the processing procedure of step 1003 above.
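To make the convolution step concrete, the sketch below derives sub-convolution kernels as the Hadamard (element-wise) product of one reference kernel with each mask tensor in its group, then convolves an input with them. The shapes, the random binary masks, and the naive single-channel "valid" convolution are illustrative assumptions, not the application's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed shapes: one 3x3 reference kernel and a group of s=2 binary masks.
base = rng.standard_normal((3, 3)).astype(np.float32)           # reference kernel
masks = rng.integers(0, 2, size=(2, 3, 3)).astype(np.float32)   # mask tensors

# Hadamard product of the reference kernel with each mask tensor in its
# group yields the sub-convolution kernels applied to the input.
sub_kernels = masks * base   # shape (2, 3, 3)

def conv2d_valid(img, kernel):
    """Naive 'valid' 2D cross-correlation, sufficient for this sketch."""
    H, W = img.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

img = rng.standard_normal((8, 8)).astype(np.float32)   # stand-in input data
feature_maps = [conv2d_valid(img, k) for k in sub_kernels]
print(len(feature_maps), feature_maps[0].shape)        # 2 (6, 6)
```

Only the reference kernel is stored in full precision; each additional sub-kernel costs just one low-bit mask, which is where the storage saving comes from.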
2004、根据多媒体数据的多个卷积特征图对多媒体数据进行处理。2004. Process multimedia data according to multiple convolution feature maps of multimedia data.
上述多媒体数据可以是文字、声音、图片(图像)、视频、动画等等。The above-mentioned multimedia data may be text, sound, picture (image), video, animation, etc.
具体地,当上述多媒体数据为待处理图像时,可以根据多个卷积特征图对多媒体数据进行识别或者分类。Specifically, when the foregoing multimedia data is an image to be processed, the multimedia data may be identified or classified according to multiple convolution feature maps.
或者,当多媒体数据为待处理图像时,可以根据多个卷积特征图对多媒体数据进行图像处理。例如,对获取到的人脸图像进行卷积处理,得到人脸图像的卷积特征图,然后对该人脸图像的卷积特征图进行处理,生成与人脸表情相对应的动画表情。或者,也可以将其他的表情迁移到输入的人脸图像中再输出。Alternatively, when the multimedia data is an image to be processed, image processing may be performed on the multimedia data according to the multiple convolution feature maps. For example, convolution processing is performed on an acquired face image to obtain convolution feature maps of the face image, and the convolution feature maps of the face image are then processed to generate an animated expression corresponding to the facial expression. Alternatively, other expressions may be transferred to the input face image before output.
本申请中,在利用神经网络对多媒体数据进行处理时,只需要获取神经网络的基准卷积核的卷积核参数以及相应的掩码张量,就能够利用基准卷积核以及相应的掩码张量实现对待处理数据的卷积处理,从而能够减少利用神经网络进行卷积处理时的存储开销,进而使得神经网络能够部署到更多存储资源受限的设备上并对多媒体数据进行处理。In this application, when a neural network is used to process multimedia data, only the convolution kernel parameters of the reference convolution kernels of the neural network and the corresponding mask tensors need to be obtained, and the reference convolution kernels and the corresponding mask tensors can then be used to implement convolution processing on the data to be processed. This reduces the storage overhead of convolution processing with the neural network, so that the neural network can be deployed on more devices with limited storage resources to process multimedia data.
图12所示的数据处理方法可以应用在图5所示的场景下,此时,多媒体数据就是人脸图像,通过对人脸图像进行卷积处理,能够得到人脸图像的卷积特征图,接下来,再将人脸图像的卷积特征图与相应身份证件对应的卷积特征图进行对比,就能够确定被拍摄者的身份。The data processing method shown in Figure 12 can be applied to the scene shown in Figure 5. At this time, the multimedia data is a face image. By convolution processing on the face image, the convolution feature map of the face image can be obtained. Next, by comparing the convolution feature map of the face image with the convolution feature map corresponding to the corresponding ID document, the identity of the person being photographed can be determined.
为了验证本申请实施例采用基准卷积核和掩码张量降低存储开销的效果,下面采用ImageNet数据集对本申请实施例的基准卷积核的效果进行测试。在这里,将使用了全栈卷积核的CNN叫做最小可用网络(minimum viable networks,MVnet)。表3示出了本申请实施例的图像分类方法分别利用标准模型VGG-16和ResNet-50在ImageNet数据集上进行测试的结果。To verify the effect of using reference convolution kernels and mask tensors to reduce storage overhead in the embodiments of this application, the ImageNet data set is used below to test the effect of the reference convolution kernels of the embodiments of this application. Here, a CNN that uses the full-stack convolution kernels is called a minimum viable network (MVnet). Table 3 shows the results of testing the image classification method of the embodiments of this application with the standard models VGG-16 and ResNet-50 on the ImageNet data set.
在测试采用本申请实施例的基准卷积核和掩码张量的效果时,并不改变现有神经网络模型的结构(层数、每层的卷积核尺寸、参数等),而是仅仅根据本申请所提出的基准卷积核的计算方式来对每一层的卷积核个数进行减少。When testing the effect of using the reference convolution kernels and mask tensors of the embodiments of this application, the structure of the existing neural network model (the number of layers, the convolution kernel size of each layer, the parameters, and so on) is not changed; only the number of convolution kernels in each layer is reduced according to the computation manner of the reference convolution kernels proposed in this application.
表3示出了本申请采用基准卷积核在ImageNet 2012数据集上的结果统计,其中,MVNet-A表示使用了掩码张量共享的基准卷积核的CNN,MVNet-B表示使用了掩码张量独立的基准卷积核的CNN,括号中的s表示掩码张量的个数。Table 3 shows statistics of the results of using the reference convolution kernels of this application on the ImageNet 2012 data set, where MVNet-A denotes a CNN using reference convolution kernels with shared mask tensors, MVNet-B denotes a CNN using reference convolution kernels with independent mask tensors, and s in parentheses denotes the number of mask tensors.
表3table 3
Figure PCTCN2020086015-appb-000039
如表3所示,在VGG-16模型下,无论是采用掩码张量共享的基准卷积核还是掩码张量独立的基准卷积核对应的前1预测错误率和前5预测错误率与之前的方法基本保持一致,但是相对应的参数量以及相应的内存开销都有明显的减少。尤其是采用掩码张量共享的基准卷积核减少的内存开销更为明显。As shown in Table 3, under the VGG-16 model, the top-1 and top-5 prediction error rates of both the reference convolution kernels with shared mask tensors and the reference convolution kernels with independent mask tensors are basically consistent with previous methods, while the corresponding number of parameters and memory overhead are significantly reduced. The memory saving is especially pronounced for the reference convolution kernels with shared mask tensors.
在ResNet-50模型下,无论是采用掩码张量共享的基准卷积核还是掩码张量独立的基准卷积核,对应的参数量以及内存开销也都有明显的减少,同时,前1预测错误率和前5预测错误率与之前的方法基本保持一致。Under the ResNet-50 model, whether reference convolution kernels with shared mask tensors or with independent mask tensors are used, the corresponding number of parameters and memory overhead are also significantly reduced, while the top-1 and top-5 prediction error rates remain basically consistent with previous methods.
在表3的最后两行,在掩码张量独立的情况下,当采用了更小的基准卷积核和更多的掩码张量时,对应的参数量和内存开销又有进一步的减少。As shown in the last two rows of Table 3, with independent mask tensors, using smaller reference convolution kernels together with more mask tensors further reduces the corresponding number of parameters and memory overhead.
表3示出的主要是利用本申请所提出的基准卷积核替换现有深度卷积神经网络模型中传统的卷积核后减少存储开销的效果。Table 3 mainly shows the effect of reducing the storage overhead after replacing the traditional convolution kernel in the existing deep convolutional neural network model with the reference convolution kernel proposed in this application.
另外,在表3中,MV Net-A(s=4)、MV Net-B(s=4)以及MV Net-B(s=32)均采用了前向计算的方式(先采用基准卷积核对待处理图像进行卷积处理,然后再结合掩码张量得到待处理图像的卷积特征图)来获得卷积特征图。由表3可知,这些情况下都很大程度上减少了乘法量,起到了减少运算量的效果。In addition, in Table 3, MV Net-A (s=4), MV Net-B (s=4), and MV Net-B (s=32) all use forward computation (first convolving the image to be processed with the reference convolution kernels, and then combining the mask tensors to obtain the convolution feature maps of the image to be processed) to obtain the convolution feature maps. As can be seen from Table 3, the amount of multiplication is greatly reduced in these cases, reducing the amount of computation.
另外,在上述表3中,第一列分别表示不同的方法或者架构,其中,相关方法或架构相应的论文链接如下:In addition, in Table 3 above, the first column respectively represents different methods or architectures. Among them, the links to papers corresponding to related methods or architectures are as follows:
BN low-rank:https://arxiv.org/pdf/1511.06067.pdfBN low-rank: https://arxiv.org/pdf/1511.06067.pdf
ThiNet-Conv,ThiNet-30:http://openaccess.thecvf.com/content_ICCV_2017/papers/Luo_ThiNet_A_Filter_ICCV_2017_paper.pdfThiNet-Conv, ThiNet-30: http://openaccess.thecvf.com/content_ICCV_2017/papers/Luo_ThiNet_A_Filter_ICCV_2017_paper.pdf
ShiftResNet:http://openaccess.thecvf.com/content_cvpr_2018/papers/Wu_Shift_A_Zero_CVPR_2018_paper.pdfShiftResNet: http://openaccess.thecvf.com/content_cvpr_2018/papers/Wu_Shift_A_Zero_CVPR_2018_paper.pdf
Versatile-v2:https://papers.nips.cc/paper/7433-learning-versatile-filters-for-efficient-convolutional-neural-networksVersatile-v2: https://papers.nips.cc/paper/7433-learning-versatile-filters-for-efficient-convolutional-neural-networks
实际上,还可以将本申请提出的基准卷积核嵌入到一些轻量级的深度卷积神经网络模型中,来验证其参数量和内存开销减少的效果。如表4所示,将本申请提供的基准卷积核和掩码张量嵌入到MobileNet中,替换其中传统的卷积核,并在ImageNet数据集上进行训练。虽然MobileNet中绝大多数的卷积核尺寸都是1x1的,但是应用本申请所提出的基准卷积核仍然可以将其内存和计算开销减少近一半。In fact, the reference convolution kernels proposed in this application can also be embedded into some lightweight deep convolutional neural network models to verify the reduction in the number of parameters and memory overhead. As shown in Table 4, the reference convolution kernels and mask tensors provided in this application are embedded into MobileNet to replace its traditional convolution kernels, and the model is trained on the ImageNet data set. Although most of the convolution kernels in MobileNet are of size 1x1, applying the reference convolution kernels proposed in this application can still reduce its memory and computation overhead by nearly half.
表4Table 4
方法Method 内存Memory 乘法量Multiplications 前1预测错误率(%)Top 1 prediction error rate (%)
MobileNet-v1MobileNet-v1 16.116.1 569569 29.429.4
MV Net-B(s=2,MobileNet-v1)MV Net-B(s=2,MobileNet-v1) 10.510.5 299299 29.929.9
MobileNet-v2MobileNet-v2 13.213.2 300300 28.228.2
MV Net-B(s=4,MobileNet-v2)MV Net-B(s=4,MobileNet-v2) 7.57.5 9393 29.929.9
如表4所示,MV Net-B(s=2,MobileNet-v1)是在原来的结构MobileNet-v1上嵌入了基准卷积核,其中,s=2表示一组掩码张量所包含的掩码张量的个数,MV Net-B(s=2,MobileNet-v1)与MobileNet-v1相比,内存和乘法量都有明显的减少。MV Net-B(s=4,MobileNet-v2)是在原来的结构MobileNet-v2上嵌入了基准卷积核,其中,s=4表示一组掩码张量所包含的掩码张量的个数,MV Net-B(s=4,MobileNet-v2)与MobileNet-v2相比,内存和乘法量也都有明显的减少(内存减少了几乎一半)。As shown in Table 4, MV Net-B (s=2, MobileNet-v1) embeds the reference convolution kernels into the original MobileNet-v1 structure, where s=2 denotes the number of mask tensors contained in a group of mask tensors; compared with MobileNet-v1, MV Net-B (s=2, MobileNet-v1) significantly reduces both the memory and the amount of multiplication. MV Net-B (s=4, MobileNet-v2) embeds the reference convolution kernels into the original MobileNet-v2 structure, where s=4 denotes the number of mask tensors contained in a group of mask tensors; compared with MobileNet-v2, MV Net-B (s=4, MobileNet-v2) also significantly reduces both the memory and the amount of multiplication (the memory is reduced by almost half).
另外,在表4中,MV Net-B(s=2,MobileNet-v1)和MV Net-B(s=4,MobileNet-v2)均采用了前向计算的方式(先采用基准卷积核对待处理图像进行卷积处理,然后再结合掩码张量得到待处理图像的卷积特征图)来获得卷积特征图。In addition, in Table 4, MV Net-B (s=2, MobileNet-v1) and MV Net-B (s=4, MobileNet-v2) both use forward computation (first convolving the image to be processed with the reference convolution kernels, and then combining the mask tensors to obtain the convolution feature maps of the image to be processed) to obtain the convolution feature maps.
其中,MV Net-B(s=2,MobileNet-v1)与MobileNet-v1的传统计算方式相比(采用各个子卷积核对待处理图像进行处理,得到卷积特征图),乘法量下降了接近一半。MV Net-B(s=4,MobileNet-v2)与MobileNet-v2的传统计算方式相比,乘法量下降了超过三倍。Compared with the traditional computation manner of MobileNet-v1 (processing the image to be processed with each sub-convolution kernel to obtain the convolution feature maps), MV Net-B (s=2, MobileNet-v1) reduces the amount of multiplication by nearly half. Compared with the traditional computation manner of MobileNet-v2, MV Net-B (s=4, MobileNet-v2) reduces the amount of multiplication by more than a factor of three.
由此可见,将本申请实施例提出的基准卷积核嵌入到一些轻量级的深度卷积神经网络模型之后存储开销减少的效果非常明显,另外,当基准卷积核再结合前向计算的方式进行计算时,计算量减少的效果也比较明显。It can be seen that embedding the reference convolution kernels proposed in the embodiments of this application into some lightweight deep convolutional neural network models reduces the storage overhead very noticeably. In addition, when the reference convolution kernels are combined with the forward computation manner, the reduction in the amount of computation is also evident.
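As a back-of-the-envelope account of why the forward computation manner reduces multiplications, the sketch below compares multiplication counts for a hypothetical layer. All sizes, and the assumption that deriving each of the M*s output maps from the M reference results costs about one multiply per output element, are illustrative and not taken from the application:

```python
# Assumed layer shapes for illustration only.
H = W = 32        # spatial size of the input feature map
k, c_in = 3, 64   # kernel size and input channels
M, s = 16, 4      # M reference kernels, s masks each -> M*s output channels

# Conventional computation: one full convolution per output channel.
mults_conventional = (M * s) * H * W * k * k * c_in

# Forward computation sketched in the text: convolve once per reference
# kernel, then derive the M*s feature maps from those results with cheap
# mask-based combinations (counted here as one multiply per output element).
mults_reference = M * H * W * k * k * c_in
mults_masking = (M * s) * H * W
mults_forward = mults_reference + mults_masking

print(mults_forward < mults_conventional)   # True
```

Under these assumptions the forward manner needs roughly s times fewer multiplications, which matches the direction of the savings reported in Tables 3 and 4.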
应理解,在上述表3和表4对比测试效果的时候,并未给出每种情况下基准卷积核的个数以及掩码张量的组数,这主要是因为每种情况下的基准卷积核的个数以及掩码张量的组数需要根据具体应用的网络架构来确定。It should be understood that when comparing the test results in Table 3 and Table 4 above, the number of reference convolution kernels and the number of groups of mask tensors in each case are not given. This is mainly because the number of reference convolution kernels and the number of groups of mask tensors in each case need to be determined according to the network architecture of the specific application.
图13是本申请实施例提供的神经网络训练装置的硬件结构示意图。图13所示的神经网络训练装置3000(该装置3000具体可以是一种计算机设备)包括存储器3001、处理器3002、通信接口3003以及总线3004。其中,存储器3001、处理器3002、通信接口3003通过总线3004实现彼此之间的通信连接。FIG. 13 is a schematic diagram of the hardware structure of a neural network training device provided by an embodiment of the present application. The neural network training device 3000 shown in FIG. 13 (the device 3000 may specifically be a computer device) includes a memory 3001, a processor 3002, a communication interface 3003, and a bus 3004. Among them, the memory 3001, the processor 3002, and the communication interface 3003 implement communication connections between each other through the bus 3004.
存储器3001可以是只读存储器(read only memory,ROM),静态存储设备,动态存储设备或者随机存取存储器(random access memory,RAM)。存储器3001可以存储程序,当存储器3001中存储的程序被处理器3002执行时,处理器3002和通信接口3003用于执行本申请实施例的神经网络的训练方法的各个步骤。The memory 3001 may be a read only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM). The memory 3001 may store a program. When the program stored in the memory 3001 is executed by the processor 3002, the processor 3002 and the communication interface 3003 are used to execute each step of the neural network training method of the embodiment of the present application.
处理器3002可以采用通用CPU,微处理器,应用专用集成电路(application specific integrated circuit,ASIC),图形处理器(graphics processing unit,GPU)或者一个或多个集成电路,用于执行相关程序,以实现本申请实施例的神经网络的训练装置中的单元所需执行的功能,或者执行本申请方法实施例的神经网络的训练方法。The processor 3002 may be a general-purpose CPU, a microprocessor, an application-specific integrated circuit (ASIC), a graphics processing unit (GPU), or one or more integrated circuits, configured to execute related programs to implement the functions required by the units in the neural network training apparatus of the embodiments of this application, or to execute the neural network training method of the method embodiments of this application.
处理器3002还可以是一种集成电路芯片,具有信号的处理能力。在实现过程中,本申请的神经网络的训练方法的各个步骤可以通过处理器3002中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器3002还可以是通用处理器、数字信号处理器(digital signal processing,DSP)、专用集成电路(ASIC)、现成可编程门阵列(field programmable gate array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件,可以实现或者执行本申请实施例中公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器、闪存、只读存储器、可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器3001,处理器3002读取存储器3001中的信息,结合其硬件完成本申请实施例的神经网络的训练装置中包括的单元所需执行的功能,或者执行本申请方法实施例的神经网络的训练方法。The processor 3002 may alternatively be an integrated circuit chip with a signal processing capability. In an implementation process, each step of the neural network training method of this application may be completed by an integrated logic circuit of hardware in the processor 3002 or by instructions in the form of software. The processor 3002 may alternatively be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of this application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in the embodiments of this application may be directly performed by a hardware decoding processor, or performed by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 3001. The processor 3002 reads the information in the memory 3001 and completes, in combination with its hardware, the functions required by the units included in the neural network training apparatus of the embodiments of this application, or executes the neural network training method of the method embodiments of this application.
通信接口3003使用例如但不限于收发器一类的收发装置,来实现装置3000与其他设备或通信网络之间的通信。例如,可以通过通信接口3003获取训练数据(如本申请实施例中的原始图像和在原始图像上加上噪声后得到的噪声图像)。The communication interface 3003 uses a transceiver device such as but not limited to a transceiver to implement communication between the device 3000 and other devices or communication networks. For example, training data (such as the original image in the embodiment of the present application and the noise image obtained after adding noise to the original image) can be obtained through the communication interface 3003.
总线3004可包括在装置3000各个部件(例如,存储器3001、处理器3002、通信接口3003)之间传送信息的通路。The bus 3004 may include a path for transferring information between various components of the device 3000 (for example, the memory 3001, the processor 3002, and the communication interface 3003).
图14是本申请实施例的图像分类装置的硬件结构示意图。图14所示的图像分类装置4000包括存储器4001、处理器4002、通信接口4003以及总线4004。其中,存储器4001、处理器4002、通信接口4003通过总线4004实现彼此之间的通信连接。FIG. 14 is a schematic diagram of the hardware structure of an image classification device according to an embodiment of the present application. The image classification device 4000 shown in FIG. 14 includes a memory 4001, a processor 4002, a communication interface 4003, and a bus 4004. Among them, the memory 4001, the processor 4002, and the communication interface 4003 implement communication connections between each other through the bus 4004.
存储器4001可以是ROM,静态存储设备和RAM。存储器4001可以存储程序,当存储器4001中存储的程序被处理器4002执行时,处理器4002和通信接口4003用于执行本申请实施例的图像分类方法的各个步骤。The memory 4001 may be ROM, static storage device and RAM. The memory 4001 may store a program. When the program stored in the memory 4001 is executed by the processor 4002, the processor 4002 and the communication interface 4003 are used to execute each step of the image classification method of the embodiment of the present application.
处理器4002可以采用通用的CPU、微处理器、ASIC、GPU或者一个或多个集成电路,用于执行相关程序,以实现本申请实施例的图像分类装置中的单元所需执行的功能,或者执行本申请方法实施例的图像分类方法。The processor 4002 may be a general-purpose CPU, a microprocessor, an ASIC, a GPU, or one or more integrated circuits, configured to execute related programs to implement the functions required by the units in the image classification apparatus of the embodiments of this application, or to execute the image classification method of the method embodiments of this application.
处理器4002还可以是一种集成电路芯片,具有信号的处理能力。在实现过程中,本申请实施例的图像分类方法的各个步骤可以通过处理器4002中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器4002还可以是通用处理器、DSP、ASIC、FPGA或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件,可以实现或者执行本申请实施例中公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器、闪存、只读存储器、可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器4001,处理器4002读取存储器4001中的信息,结合其硬件完成本申请实施例的图像分类装置中包括的单元所需执行的功能,或者执行本申请方法实施例的图像分类方法。The processor 4002 may alternatively be an integrated circuit chip with a signal processing capability. In an implementation process, each step of the image classification method of the embodiments of this application may be completed by an integrated logic circuit of hardware in the processor 4002 or by instructions in the form of software. The processor 4002 may alternatively be a general-purpose processor, a DSP, an ASIC, an FPGA or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of this application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in the embodiments of this application may be directly performed by a hardware decoding processor, or performed by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 4001. The processor 4002 reads the information in the memory 4001 and completes, in combination with its hardware, the functions required by the units included in the image classification apparatus of the embodiments of this application, or executes the image classification method of the method embodiments of this application.
通信接口4003使用例如但不限于收发器一类的收发装置,来实现装置4000与其他设备或通信网络之间的通信。例如,可以通过通信接口4003获取训练数据。The communication interface 4003 uses a transceiver device such as but not limited to a transceiver to implement communication between the device 4000 and other devices or a communication network. For example, the training data can be obtained through the communication interface 4003.
总线4004可包括在装置4000各个部件(例如,存储器4001、处理器4002、通信接口4003)之间传送信息的通路。The bus 4004 may include a path for transferring information between various components of the device 4000 (for example, the memory 4001, the processor 4002, and the communication interface 4003).
应注意,尽管装置3000和4000仅仅示出了存储器、处理器、通信接口,但是在具体实现过程中,本领域的技术人员应当理解,装置3000和4000还包括实现正常运行所必须的其他器件。同时,根据具体需要,本领域的技术人员应当理解,装置3000和4000还可包括实现其他附加功能的硬件器件。此外,本领域的技术人员应当理解,装置3000和4000也可仅仅包括实现本申请实施例所必须的器件,而不必包括图13或图14中所示的全部器件。It should be noted that although the devices 3000 and 4000 only show a memory, a processor, and a communication interface, in a specific implementation process, those skilled in the art should understand that the devices 3000 and 4000 also include other devices necessary for normal operation. At the same time, according to specific needs, those skilled in the art should understand that the devices 3000 and 4000 may also include hardware devices that implement other additional functions. In addition, those skilled in the art should understand that the devices 3000 and 4000 may also only include the necessary components for implementing the embodiments of the present application, and not necessarily all the components shown in FIG. 13 or FIG. 14.
图15是本申请实施例的数据处理装置的硬件结构示意图。图15所示的数据处理装置5000与图14中的图像分类装置4000类似,数据处理装置5000包括存储器5001、处理器5002、通信接口5003以及总线5004。其中,存储器5001、处理器5002、通信接口5003通过总线5004实现彼此之间的通信连接。FIG. 15 is a schematic diagram of the hardware structure of a data processing device according to an embodiment of the present application. The data processing device 5000 shown in FIG. 15 is similar to the image classification device 4000 in FIG. 14. The data processing device 5000 includes a memory 5001, a processor 5002, a communication interface 5003, and a bus 5004. Among them, the memory 5001, the processor 5002, and the communication interface 5003 implement communication connections between each other through the bus 5004.
存储器5001可以是ROM,静态存储设备和RAM。存储器5001可以存储程序,当存储器5001中存储的程序被处理器5002执行时,处理器5002和通信接口5003用于执行本申请实施例的数据处理方法的各个步骤。The memory 5001 may be a ROM, a static storage device, or a RAM. The memory 5001 may store a program. When the program stored in the memory 5001 is executed by the processor 5002, the processor 5002 and the communication interface 5003 are configured to execute each step of the data processing method of the embodiments of this application.
处理器5002可以采用通用的CPU、微处理器、ASIC、GPU或者一个或多个集成电路,用于执行相关程序,以实现本申请实施例的数据处理装置中的单元所需执行的功能,或者执行本申请方法实施例的数据处理方法。The processor 5002 may be a general-purpose CPU, a microprocessor, an ASIC, a GPU, or one or more integrated circuits, configured to execute related programs to implement the functions required by the units in the data processing apparatus of the embodiments of this application, or to execute the data processing method of the method embodiments of this application.
上文中对图14所示的图像分类装置4000内部的模块和单元的相关描述内容也适用于图15中的数据处理装置5000内部的模块和单元,为了避免不必要的重复,这里适当省略相关描述。The foregoing descriptions of the modules and units inside the image classification apparatus 4000 shown in FIG. 14 are also applicable to the modules and units inside the data processing apparatus 5000 in FIG. 15. To avoid unnecessary repetition, the relevant descriptions are omitted here.
可以理解,上述装置3000相当于图1中的训练设备120,上述装置4000和装置5000相当于图1中的执行设备110。It can be understood that the foregoing apparatus 3000 is equivalent to the training device 120 in FIG. 1, and the foregoing apparatus 4000 and apparatus 5000 are equivalent to the execution device 110 in FIG. 1.
另外,上述装置4000具体可以是具有图像分类功能的电子设备,上述装置5000具体可以是具有数据处理(尤其是多媒体数据处理)功能的电子设备,这里的电子设备具体可以是移动终端(例如,智能手机)、电脑、个人数字助理、可穿戴设备、车载设备、物联网设备等等。In addition, the apparatus 4000 may specifically be an electronic device with an image classification function, and the apparatus 5000 may specifically be an electronic device with a data processing (especially multimedia data processing) function. The electronic device here may specifically be a mobile terminal (for example, a smartphone), a computer, a personal digital assistant, a wearable device, a vehicle-mounted device, an Internet of Things device, and so on.
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described in combination with the embodiments disclosed herein can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether these functions are executed by hardware or software depends on the specific application and design constraint conditions of the technical solution. Professionals and technicians can use different methods for each specific application to implement the described functions, but such implementation should not be considered beyond the scope of this application.
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。Those skilled in the art can clearly understand that, for the convenience and conciseness of description, the specific working process of the above-described system, device, and unit can refer to the corresponding process in the foregoing method embodiment, which will not be repeated here.
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the described apparatus embodiments are merely examples. For example, the division of the units is merely a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。In addition, the functional units in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。If the functions are implemented in the form of a software functional unit and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。The above are only specific implementations of this application, but the protection scope of this application is not limited to this. Any person skilled in the art can easily think of changes or substitutions within the technical scope disclosed in this application. Should be covered within the scope of protection of this application. Therefore, the protection scope of this application should be subject to the protection scope of the claims.

Claims (18)

  1. 一种图像分类方法,其特征在于,包括:An image classification method, characterized in that it includes:
    获取神经网络的M个基准卷积核的卷积核参数,M为正整数;Obtain the convolution kernel parameters of the M reference convolution kernels of the neural network, where M is a positive integer;
    获取所述神经网络的N组掩码张量,N为正整数,所述N组掩码张量中的每组掩码张量由多个掩码张量组成,所述N组掩码张量中的元素存储时占用的比特数小于M个基准卷积核中卷积核参数中的元素存储时占用的比特数,所述M个基准卷积核中的每个基准卷积核对应所述N组掩码张量中的一组掩码张量;obtaining N groups of mask tensors of the neural network, where N is a positive integer, each group of mask tensors in the N groups of mask tensors is composed of multiple mask tensors, the number of bits occupied in storage by the elements in the N groups of mask tensors is less than the number of bits occupied in storage by the elements in the convolution kernel parameters of the M reference convolution kernels, and each of the M reference convolution kernels corresponds to one group of mask tensors in the N groups of mask tensors;
    对所述M个基准卷积核中的每个基准卷积核,以及所述每个基准卷积核在所述N组掩码张量中对应的一组掩码张量进行哈达玛积运算,得到多个子卷积核;performing a Hadamard product operation on each of the M reference convolution kernels and the group of mask tensors corresponding to the reference convolution kernel in the N groups of mask tensors, to obtain multiple sub-convolution kernels;
    根据所述多个子卷积核分别对待处理图像进行卷积处理,得到多个卷积特征图;Performing convolution processing on the image to be processed respectively according to the multiple sub-convolution kernels to obtain multiple convolution feature maps;
    根据所述多个卷积特征图对所述待处理图像进行分类,得到所述待处理图像的分类结果。The image to be processed is classified according to the multiple convolution feature maps to obtain a classification result of the image to be processed.
  2. The method according to claim 1, characterized in that N is less than M, and at least two of the M reference convolution kernels correspond to a same group of mask tensors in the N groups of mask tensors.
  3. The method according to claim 1 or 2, characterized in that at least some of the mask tensors in at least one of the N groups of mask tensors are pairwise orthogonal.
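Claims 1 to 3 generate sub-convolution kernels as the Hadamard (element-wise) product of a shared full-precision reference kernel with a group of low-bit mask tensors. The NumPy sketch below illustrates that construction; the tensor shapes, the number of masks per group, and the use of binary (1-bit) masks are illustrative assumptions, not values fixed by the claims:

```python
import numpy as np

rng = np.random.default_rng(0)

# One reference kernel (out of M), stored at full precision: C x k x k.
base_kernel = rng.standard_normal((8, 3, 3)).astype(np.float32)

# A group of binary mask tensors (1 bit per element when stored),
# each with the same shape as the reference kernel.
masks = rng.integers(0, 2, size=(4, 8, 3, 3)).astype(np.float32)

# Hadamard product: each mask carves one sub-convolution kernel
# out of the shared reference kernel.
sub_kernels = masks * base_kernel  # shape (4, 8, 3, 3)

# Wherever a mask element is 0, the sub-kernel weight is 0.
assert sub_kernels.shape == (4, 8, 3, 3)
```

In this scheme the four sub-kernels share one full-precision kernel's storage plus four 1-bit masks, which is where the claimed bit-count saving comes from.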
  4. An image classification method, characterized in that it comprises:
    obtaining convolution kernel parameters of M reference convolution kernels of a neural network, where M is a positive integer;
    obtaining N groups of mask tensors of the neural network, where N is a positive integer, each of the N groups of mask tensors consists of a plurality of mask tensors, an element of the N groups of mask tensors occupies fewer bits in storage than an element of the convolution kernel parameters of the M reference convolution kernels, and each of the M reference convolution kernels corresponds to one group of mask tensors in the N groups of mask tensors;
    performing convolution processing on a to-be-processed image using the M reference convolution kernels, to obtain M reference convolution feature maps of the to-be-processed image;
    performing a Hadamard product operation on the M reference convolution feature maps and the N groups of mask tensors, to obtain a plurality of convolution feature maps of the to-be-processed image; and
    classifying the to-be-processed image according to the plurality of convolution feature maps of the to-be-processed image, to obtain a classification result of the to-be-processed image.
  5. The method according to claim 4, characterized in that N is less than M, and at least two of the M reference convolution kernels correspond to a same group of mask tensors in the N groups of mask tensors.
  6. The method according to claim 4 or 5, characterized in that at least some of the mask tensors in at least one of the N groups of mask tensors are pairwise orthogonal.
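Claims 4 to 6 reorder the computation: the image is convolved once with each reference kernel, and the mask tensors are then applied to the resulting reference feature maps by a Hadamard product, so several feature maps are derived from a single convolution pass. A minimal single-channel NumPy sketch; the image and mask sizes and the naive "valid" convolution are illustrative assumptions:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive single-channel 'valid' cross-correlation."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1), dtype=image.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(1)
image = rng.standard_normal((6, 6)).astype(np.float32)
kernel = rng.standard_normal((3, 3)).astype(np.float32)

# Claim 4's ordering: one convolution with the reference kernel...
base_map = conv2d_valid(image, kernel)  # shape (4, 4)

# ...then a Hadamard product with each mask (here sized to the feature map)
# yields several convolution feature maps from the single pass.
masks = rng.integers(0, 2, size=(3, 4, 4)).astype(np.float32)
feature_maps = masks * base_map  # shape (3, 4, 4)
```

The design choice is a compute trade-off: claim 1 masks the weights (many small convolutions), while claim 4 masks the outputs (one convolution per reference kernel, then cheap element-wise products).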
  7. A data processing method, characterized in that it comprises:
    obtaining convolution kernel parameters of M reference convolution kernels of a neural network, where M is a positive integer;
    obtaining N groups of mask tensors of the neural network, where N is a positive integer, each of the N groups of mask tensors consists of a plurality of mask tensors, an element of the N groups of mask tensors occupies fewer bits in storage than an element of the convolution kernel parameters of the M reference convolution kernels, and each of the M reference convolution kernels corresponds to one group of mask tensors in the N groups of mask tensors;
    performing a Hadamard product operation on each of the M reference convolution kernels and the group of mask tensors corresponding to that reference convolution kernel in the N groups of mask tensors, to obtain a plurality of sub-convolution kernels;
    performing convolution processing on multimedia data separately using the plurality of sub-convolution kernels, to obtain a plurality of convolution feature maps of the multimedia data; and
    processing the multimedia data according to the plurality of convolution feature maps of the multimedia data.
  8. A data processing method, characterized in that it comprises:
    obtaining convolution kernel parameters of M reference convolution kernels of a neural network, where M is a positive integer;
    obtaining N groups of mask tensors of the neural network, where N is a positive integer, each of the N groups of mask tensors consists of a plurality of mask tensors, an element of the N groups of mask tensors occupies fewer bits in storage than an element of the convolution kernel parameters of the M reference convolution kernels, and each of the M reference convolution kernels corresponds to one group of mask tensors in the N groups of mask tensors;
    performing convolution processing on multimedia data using the M reference convolution kernels, to obtain M reference convolution feature maps of the multimedia data;
    performing a Hadamard product operation on the M reference convolution feature maps and the N groups of mask tensors, to obtain a plurality of convolution feature maps of the multimedia data; and
    processing the multimedia data according to the plurality of convolution feature maps of the multimedia data.
  9. An image classification apparatus, characterized in that it comprises:
    a memory, configured to store convolution kernel parameters of M reference convolution kernels of a neural network and N groups of mask tensors, where M and N are both positive integers, each of the N groups of mask tensors consists of a plurality of mask tensors, an element of the N groups of mask tensors occupies fewer bits in storage than an element of the convolution kernel parameters of the M reference convolution kernels, and each of the M reference convolution kernels corresponds to one group of mask tensors in the N groups of mask tensors; and
    a processor, configured to obtain the convolution kernel parameters of the M reference convolution kernels of the neural network and the N groups of mask tensors, and to perform the following operations:
    performing a Hadamard product operation on each of the M reference convolution kernels and the group of mask tensors corresponding to that reference convolution kernel in the N groups of mask tensors, to obtain a plurality of sub-convolution kernels;
    performing convolution processing on a to-be-processed image separately using the plurality of sub-convolution kernels, to obtain a plurality of convolution feature maps; and
    classifying the to-be-processed image according to the plurality of convolution feature maps, to obtain a classification result of the to-be-processed image.
  10. The apparatus according to claim 9, characterized in that N is less than M, and at least two of the M reference convolution kernels correspond to a same group of mask tensors in the N groups of mask tensors.
  11. The apparatus according to claim 9 or 10, characterized in that at least some of the mask tensors in at least one of the N groups of mask tensors are pairwise orthogonal.
  12. An image classification apparatus, characterized in that it comprises:
    a memory, configured to store convolution kernel parameters of M reference convolution kernels of a neural network and N groups of mask tensors, where M and N are both positive integers, each of the N groups of mask tensors consists of a plurality of mask tensors, an element of the N groups of mask tensors occupies fewer bits in storage than an element of the convolution kernel parameters of the M reference convolution kernels, and each of the M reference convolution kernels corresponds to one group of mask tensors in the N groups of mask tensors; and
    a processor, configured to obtain the convolution kernel parameters of the M reference convolution kernels of the neural network and the N groups of mask tensors, and to perform the following operations:
    performing convolution processing on a to-be-processed image using the M reference convolution kernels, to obtain M reference convolution feature maps of the to-be-processed image;
    performing a Hadamard product operation on the M reference convolution feature maps and the N groups of mask tensors, to obtain a plurality of convolution feature maps of the to-be-processed image; and
    classifying the to-be-processed image according to the plurality of convolution feature maps of the to-be-processed image, to obtain a classification result of the to-be-processed image.
  13. The apparatus according to claim 12, characterized in that N is less than M, and at least two of the M reference convolution kernels correspond to a same group of mask tensors in the N groups of mask tensors.
  14. The apparatus according to claim 12 or 13, characterized in that at least some of the mask tensors in at least one of the N groups of mask tensors are pairwise orthogonal.
  15. A data processing apparatus, characterized in that it comprises:
    a memory, configured to store convolution kernel parameters of M reference convolution kernels of a neural network and N groups of mask tensors, where M and N are both positive integers, each of the N groups of mask tensors consists of a plurality of mask tensors, an element of the N groups of mask tensors occupies fewer bits in storage than an element of the convolution kernel parameters of the M reference convolution kernels, and each of the M reference convolution kernels corresponds to one group of mask tensors in the N groups of mask tensors; and
    a processor, configured to obtain the convolution kernel parameters of the M reference convolution kernels of the neural network and the N groups of mask tensors, and to perform the following operations:
    performing a Hadamard product operation on each of the M reference convolution kernels and the group of mask tensors corresponding to that reference convolution kernel in the N groups of mask tensors, to obtain a plurality of sub-convolution kernels;
    performing convolution processing on multimedia data separately using the plurality of sub-convolution kernels, to obtain a plurality of convolution feature maps of the multimedia data; and
    processing the multimedia data according to the plurality of convolution feature maps of the multimedia data.
  16. A data processing apparatus, characterized in that it comprises:
    a memory, configured to store convolution kernel parameters of M reference convolution kernels of a neural network and N groups of mask tensors, where M and N are both positive integers, each of the N groups of mask tensors consists of a plurality of mask tensors, an element of the N groups of mask tensors occupies fewer bits in storage than an element of the convolution kernel parameters of the M reference convolution kernels, and each of the M reference convolution kernels corresponds to one group of mask tensors in the N groups of mask tensors; and
    a processor, configured to obtain the convolution kernel parameters of the M reference convolution kernels of the neural network and the N groups of mask tensors, and to perform the following operations:
    performing convolution processing on multimedia data using the M reference convolution kernels, to obtain M reference convolution feature maps of the multimedia data;
    performing a Hadamard product operation on the M reference convolution feature maps and the N groups of mask tensors, to obtain a plurality of convolution feature maps of the multimedia data; and
    processing the multimedia data according to the plurality of convolution feature maps of the multimedia data.
  17. A computer-readable storage medium, characterized in that the computer-readable medium stores program code for execution by a device, the program code comprising instructions for executing the method according to any one of claims 1 to 8.
  18. A chip, characterized in that the chip comprises a processor and a data interface, and the processor reads, through the data interface, instructions stored in a memory to execute the method according to any one of claims 1 to 8.
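The claims repeatedly rely on mask elements occupying fewer bits in storage than full-precision kernel weights. A back-of-the-envelope comparison of the two storage schemes, with all concrete numbers (kernel count, shapes, bit widths) chosen for illustration rather than taken from the patent:

```python
# Storage comparison implied by the claims: low-bit masks expand a few
# full-precision reference kernels into many effective sub-kernels.
weights_per_kernel = 8 * 3 * 3  # C x k x k, illustrative shape
float_bits = 32                 # bits per full-precision weight
mask_bits = 1                   # bits per binary mask element

# Naive scheme: 64 independent kernels, all at full precision.
naive_bits = 64 * weights_per_kernel * float_bits

# Claimed scheme: M = 16 reference kernels, each expanded into 4
# sub-kernels by a group of 4 one-bit masks (64 effective kernels).
shared_bits = (16 * weights_per_kernel * float_bits
               + 64 * weights_per_kernel * mask_bits)

ratio = naive_bits / shared_bits
print(round(ratio, 2))  # → 3.56
```

With these illustrative numbers the masked representation needs roughly 3.6x less storage for the same number of effective kernels; sharing one mask group across several reference kernels (claims 2, 5, 10, 13) shrinks it further.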
PCT/CN2020/086015 2019-04-24 2020-04-22 Image classification method and apparatus, and data processing method and apparatus WO2020216227A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910335678.8 2019-04-24
CN201910335678.8A CN110188795B (en) 2019-04-24 2019-04-24 Image classification method, data processing method and device

Publications (2)

Publication Number Publication Date
WO2020216227A1 WO2020216227A1 (en) 2020-10-29
WO2020216227A9 true WO2020216227A9 (en) 2020-11-26

Family

ID=67715037

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/086015 WO2020216227A1 (en) 2019-04-24 2020-04-22 Image classification method and apparatus, and data processing method and apparatus

Country Status (2)

Country Link
CN (1) CN110188795B (en)
WO (1) WO2020216227A1 (en)

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110188795B (en) * 2019-04-24 2023-05-09 华为技术有限公司 Image classification method, data processing method and device
CN110738235B (en) * 2019-09-16 2023-05-30 平安科技(深圳)有限公司 Pulmonary tuberculosis judging method, device, computer equipment and storage medium
CN110780923B (en) * 2019-10-31 2021-09-14 合肥工业大学 Hardware accelerator applied to binary convolution neural network and data processing method thereof
CN110995688B (en) * 2019-11-27 2021-11-16 深圳申朴信息技术有限公司 Personal data sharing method and device for internet financial platform and terminal equipment
CN110991643B (en) * 2019-12-25 2024-01-30 北京奇艺世纪科技有限公司 Model deployment method and device, electronic equipment and storage medium
CN111126572B (en) * 2019-12-26 2023-12-08 北京奇艺世纪科技有限公司 Model parameter processing method and device, electronic equipment and storage medium
CN111275166B (en) * 2020-01-15 2023-05-02 华南理工大学 Convolutional neural network-based image processing device, equipment and readable storage medium
CN111260037B (en) * 2020-02-11 2023-10-13 深圳云天励飞技术股份有限公司 Convolution operation method and device of image data, electronic equipment and storage medium
CN111381968B (en) * 2020-03-11 2023-04-25 中山大学 Convolution operation optimization method and system for efficiently running deep learning task
CN111539462B (en) * 2020-04-15 2023-09-19 苏州万高电脑科技有限公司 Image classification method, system, device and medium for simulating biological vision neurons
CN111860582B (en) * 2020-06-11 2021-05-11 北京市威富安防科技有限公司 Image classification model construction method and device, computer equipment and storage medium
CN111708641B (en) * 2020-07-14 2024-03-19 腾讯科技(深圳)有限公司 Memory management method, device, equipment and computer readable storage medium
CN111860522B (en) * 2020-07-23 2024-02-02 中国平安人寿保险股份有限公司 Identity card picture processing method, device, terminal and storage medium
CN112215243A (en) * 2020-10-30 2021-01-12 百度(中国)有限公司 Image feature extraction method, device, equipment and storage medium
CN112686249B (en) * 2020-12-22 2022-01-25 中国人民解放军战略支援部队信息工程大学 Grad-CAM attack method based on anti-patch
WO2022141511A1 (en) * 2020-12-31 2022-07-07 深圳市优必选科技股份有限公司 Image classification method, computer device, and storage medium
CN112686320B (en) * 2020-12-31 2023-10-13 深圳市优必选科技股份有限公司 Image classification method, device, computer equipment and storage medium
CN113138957A (en) * 2021-03-29 2021-07-20 北京智芯微电子科技有限公司 Chip for neural network inference and method for accelerating neural network inference
CN113392899B (en) * 2021-06-10 2022-05-10 电子科技大学 Image classification method based on binary image classification network
CN113536943B (en) * 2021-06-21 2024-04-12 上海赫千电子科技有限公司 Road traffic sign recognition method based on image enhancement
CN113537325B (en) * 2021-07-05 2023-07-11 北京航空航天大学 Deep learning method for image classification based on extracted high-low layer feature logic
CN113537492B (en) * 2021-07-19 2024-04-26 第六镜科技(成都)有限公司 Model training and data processing method, device, equipment, medium and product
CN113642589B (en) * 2021-08-11 2023-06-06 南方科技大学 Image feature extraction method and device, computer equipment and readable storage medium
CN114491399A (en) * 2021-12-30 2022-05-13 深圳云天励飞技术股份有限公司 Data processing method and device, terminal equipment and computer readable storage medium
CN114239814B (en) * 2022-02-25 2022-07-08 杭州研极微电子有限公司 Training method of convolution neural network model for image processing
CN115294381B (en) * 2022-05-06 2023-06-30 兰州理工大学 Small sample image classification method and device based on feature migration and orthogonal prior
CN115170917B (en) * 2022-06-20 2023-11-07 美的集团(上海)有限公司 Image processing method, electronic device and storage medium
CN115797709B (en) * 2023-01-19 2023-04-25 苏州浪潮智能科技有限公司 Image classification method, device, equipment and computer readable storage medium
CN117314938B (en) * 2023-11-16 2024-04-05 中国科学院空间应用工程与技术中心 Image segmentation method and device based on multi-scale feature fusion decoding

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9202144B2 (en) * 2013-10-30 2015-12-01 Nec Laboratories America, Inc. Regionlets with shift invariant neural patterns for object detection
CN104517103A (en) * 2014-12-26 2015-04-15 广州中国科学院先进技术研究所 Traffic sign classification method based on deep neural network
WO2017129325A1 (en) * 2016-01-29 2017-08-03 Fotonation Limited A convolutional neural network
CN106127297B (en) * 2016-06-02 2019-07-12 中国科学院自动化研究所 The acceleration of depth convolutional neural networks based on tensor resolution and compression method
US9779786B1 (en) * 2016-10-26 2017-10-03 Xilinx, Inc. Tensor operations and acceleration
US10037490B2 (en) * 2016-12-13 2018-07-31 Google Llc Performing average pooling in hardware
US11586905B2 (en) * 2017-10-11 2023-02-21 Arizona Board Of Regents On Behalf Of Arizona State University Systems and methods for customizing kernel machines with deep neural networks
CN107886164A (en) * 2017-12-20 2018-04-06 东软集团股份有限公司 A kind of convolutional neural networks training, method of testing and training, test device
CN108229360B (en) * 2017-12-26 2021-03-19 美的集团股份有限公司 Image processing method, device and storage medium
CN108304795B (en) * 2018-01-29 2020-05-12 清华大学 Human skeleton behavior identification method and device based on deep reinforcement learning
CN110188795B (en) * 2019-04-24 2023-05-09 华为技术有限公司 Image classification method, data processing method and device

Also Published As

Publication number Publication date
WO2020216227A1 (en) 2020-10-29
CN110188795A (en) 2019-08-30
CN110188795B (en) 2023-05-09


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20794289

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20794289

Country of ref document: EP

Kind code of ref document: A1