CN108304921A - Training method and image processing method and apparatus for a convolutional neural network - Google Patents

Training method and image processing method and apparatus for a convolutional neural network

Info

Publication number
CN108304921A
Authority
CN
China
Prior art keywords
neural networks
convolutional neural
training
layer
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810136118.5A
Other languages
Chinese (zh)
Other versions
CN108304921B (en)
Inventor
程光亮
石建萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN201810136118.5A priority Critical patent/CN108304921B/en
Publication of CN108304921A publication Critical patent/CN108304921A/en
Application granted granted Critical
Publication of CN108304921B publication Critical patent/CN108304921B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/29Graphical models, e.g. Bayesian networks

Abstract

The invention discloses a training method and an image processing method for a convolutional neural network, as well as an apparatus, a computer storage medium, a computer-readable storage medium, and a computer program. The training method of the convolutional neural network includes: deleting, in a first convolutional neural network, at least one non-linear rectification layer located after a linear structure layer, to obtain a second convolutional neural network; and performing supervised training on the second convolutional neural network based on training images and annotation information of the training images. The image processing method includes: merging at least one serial branch and/or at least one parallel branch of a trained second convolutional neural network, to obtain a third convolutional neural network, where at least one non-linear rectification layer located after a linear structure layer has been deleted from the second convolutional neural network; inputting an image into the third convolutional neural network; and processing the image through the third convolutional neural network, to obtain a processing result of the image.

Description

Training method and image processing method and apparatus for a convolutional neural network
Technical field
The present application relates to the field of deep learning technology, and more particularly to a training method and an image processing method for a convolutional neural network, as well as an apparatus, a computer device, a computer-readable storage medium, and a computer program.
Background technology
With the development of deep convolutional neural networks, deeper or wider network structures keep raising the accuracy records on computer vision datasets. However, deepening or widening a network inevitably increases its running time during training and testing. In recent years, in order to run deep convolutional neural networks on low-power, low-compute platforms, more and more researchers have begun to focus on lightweight networks that require less time for training and testing.
How to design lightweight convolutional neural networks that achieve shorter training or testing times has therefore become an important direction of deep learning research.
Summary of the invention
Embodiments of the present invention provide a training method and an image processing method for a convolutional neural network, as well as an apparatus, a computer device, a computer-readable storage medium, and a computer program.
An embodiment of the present invention provides a training method for a convolutional neural network, the method including:
deleting, in a first convolutional neural network, at least one non-linear rectification layer located after a linear structure layer, to obtain a second convolutional neural network;
performing supervised training on the second convolutional neural network based on training images and annotation information of the training images.
In the above solution, the first convolutional neural network includes residual units;
the deleting, in the first convolutional neural network, of at least one non-linear rectification layer located after a linear structure layer, to obtain the second convolutional neural network, includes:
deleting, in the first convolutional neural network, some or all of the non-linear rectification layers located after linear structure layers in the residual units, to obtain the second convolutional neural network.
In the above solution, the linear structure layer includes: a convolutional layer or a batch normalization layer.
In the above solution, the performing of supervised training on the second convolutional neural network based on training images and the annotation information of the training images includes:
initializing training parameters of the second convolutional neural network, and iteratively performing the following steps: inputting a training image into the second convolutional neural network, computing a processing result of the training image based on the training parameters, and updating the training parameters, until the processing result of the training image converges; where convergence of the processing result of the training image means that the difference between the processing result of the training image and the annotation information of the training image is less than or equal to a target threshold.
An embodiment of the present invention provides an image processing method, the method including:
merging at least one serial branch and/or at least one parallel branch of a trained second convolutional neural network, to obtain a third convolutional neural network; where at least one non-linear rectification layer located after a linear structure layer has been deleted from the second convolutional neural network;
inputting an image into the third convolutional neural network;
processing the image through the third convolutional neural network, to obtain a processing result of the image.
In the above solution, the serial branch includes at least two concatenated linear structure layers;
merging the serial branch includes: merging one linear structure layer in the serial branch into another linear structure layer.
In the above solution, the at least two concatenated linear structure layers include: two concatenated convolutional layers with convolution kernels of different sizes;
merging the serial branch includes: deleting one convolutional layer and merging the parameters of the deleted convolutional layer into the other convolutional layer.
In the above solution, the at least two concatenated linear structure layers include: a concatenated convolutional layer and batch normalization layer;
merging the serial branch includes: deleting the batch normalization layer and merging the batch-normalization scale and offset parameters into the convolution kernel parameters of the convolutional layer.
In the above solution, the parallel branch includes: a linear structure branch and a skip branch in parallel;
merging the parallel branch includes: deleting the skip branch and adding 1 to the center of the convolution kernel of the convolutional layer in the linear structure branch.
In the above solution, the parallel branch includes: two parallel linear structure branches;
merging the parallel branch includes: deleting one linear structure branch of the parallel branch and adding the weight vectors in the convolution kernel of the deleted linear structure branch to the center of the convolution kernel of the other linear structure branch.
In the above solution, the processing of the image through the third convolutional neural network includes:
performing object recognition, classification, image segmentation, or object detection/tracking on the image through the third convolutional neural network.
In the above solution, some or all of the non-linear rectification layers located after linear structure layers in the residual units of the second convolutional neural network have been deleted.
In the above solution, the linear structure layer includes: a convolutional layer or a batch normalization layer.
An embodiment of the present invention provides a training apparatus for a convolutional neural network, the apparatus including:
a deleting unit, configured to delete, in a first convolutional neural network, at least one non-linear rectification layer located after a linear structure layer, to obtain a second convolutional neural network;
a training unit, configured to perform supervised training on the second convolutional neural network based on training images and annotation information of the training images.
In the above solution, the first convolutional neural network includes residual units;
the deleting unit is configured to delete, in the first convolutional neural network, some or all of the non-linear rectification layers located after linear structure layers in the residual units, to obtain the second convolutional neural network.
In the above solution, the linear structure layer includes: a convolutional layer or a batch normalization layer.
In the above solution, the training unit is configured to initialize training parameters of the second convolutional neural network, and iteratively perform the following steps: inputting a training image into the second convolutional neural network, computing a processing result of the training image based on the training parameters, and updating the training parameters, until the processing result of the training image converges; where convergence of the processing result of the training image means that the difference between the processing result of the training image and the annotation information of the training image is less than or equal to a target threshold.
An embodiment of the present invention provides an image processing apparatus, the apparatus including:
a merging unit, configured to merge at least one serial branch and/or at least one parallel branch of a trained second convolutional neural network, to obtain a third convolutional neural network; where at least one non-linear rectification layer located after a linear structure layer has been deleted from the second convolutional neural network;
an image processing unit, configured to input an image into the third convolutional neural network, and process the image through the third convolutional neural network to obtain a processing result of the image.
In the above solution, the serial branch includes at least two concatenated linear structure layers;
the merging unit is configured to merge one linear structure layer in the serial branch into another linear structure layer.
In the above solution, the at least two concatenated linear structure layers include: two concatenated convolutional layers with convolution kernels of different sizes;
the merging unit is configured to delete one convolutional layer and merge the parameters of the deleted convolutional layer into the other convolutional layer.
In the above solution, the at least two concatenated linear structure layers include: a concatenated convolutional layer and batch normalization layer;
the merging unit is configured to delete the batch normalization layer and merge the batch-normalization scale and offset parameters into the convolution kernel parameters of the convolutional layer.
In the above solution, the parallel branch includes: a linear structure branch and a skip branch in parallel;
the merging unit is configured to merge the parallel branch by: deleting the skip branch and adding 1 to the center of the convolution kernel of the convolutional layer in the linear structure branch.
In the above solution, the parallel branch includes: two parallel linear structure branches;
the merging unit is configured to delete one linear structure branch of the parallel branch and add the weight vectors in the convolution kernel of the deleted linear structure branch to the center of the convolution kernel of the other linear structure branch.
In the above solution, the image processing unit is configured to perform object recognition, classification, image segmentation, or object detection/tracking on the image through the third convolutional neural network.
In the above solution, some or all of the non-linear rectification layers located after linear structure layers in the residual units of the second convolutional neural network have been deleted.
In the above solution, the linear structure layer includes: a convolutional layer or a batch normalization layer.
An embodiment of the present invention provides a computer device, the computer device including a memory and a processor, the memory storing computer-executable instructions, where the processor, when running the computer-executable instructions on the memory, implements the training method or the image processing method for a convolutional neural network provided by the embodiments of the present invention.
An embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the training method or the image processing method for a convolutional neural network provided by the embodiments of the present invention.
An embodiment of the present invention provides a computer program, including computer instructions, where the computer instructions, when run in a processor of a device, implement the training method or the image processing method for a convolutional neural network provided by the embodiments of the present invention.
In the technical solution of the embodiments of the present invention, at least one serial branch and/or at least one parallel branch of a trained second convolutional neural network can be merged to obtain a third convolutional neural network, where at least one non-linear rectification layer located after a linear structure layer has been deleted from the second convolutional neural network; an image is input into the third convolutional neural network; and the image is processed through the third convolutional neural network, to obtain a processing result of the image. In this way, compared with the second convolutional neural network, the third convolutional neural network has its serial branches and/or parallel branches merged, which greatly reduces the test time of the convolutional neural network while minimizing the impact on test accuracy.
With the technical solution of the embodiments of the present invention, compression and acceleration of convolutional neural networks can be achieved, greatly reducing the running time of the network and the video memory it requires.
Description of the drawings
The accompanying drawings, which constitute a part of the specification, illustrate embodiments of the present application and, together with the description, serve to explain the principles of the application.
The application can be understood more clearly from the following detailed description with reference to the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of the training method for a convolutional neural network according to an embodiment of the present invention;
Fig. 2 (a) is a schematic diagram of the first kind of residual unit before simplification;
Fig. 2 (b) is a schematic diagram of the first kind of residual unit after simplification;
Fig. 2 (c) is a schematic diagram of the second kind of residual unit before simplification;
Fig. 2 (d) is a schematic diagram of the second kind of residual unit after simplification;
Fig. 3 is a schematic flowchart of the image processing method according to an embodiment of the present invention;
Fig. 4 (a) is a schematic diagram of the skip module before simplification;
Fig. 4 (b) is a schematic diagram of the projection module before simplification;
Fig. 4 (c) is a schematic diagram of the skip module and the projection module after simplification;
Fig. 5 (a) is a schematic structural diagram of the second convolutional neural network;
Fig. 5 (b) is a schematic structural diagram of the third convolutional neural network;
Fig. 6 is a schematic structural diagram of the training apparatus for a convolutional neural network according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of the image processing apparatus according to an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of the computer device according to an embodiment of the present invention.
Detailed description of the embodiments
In order to understand the features and technical content of the embodiments of the present invention more thoroughly, the implementation of the embodiments of the present invention is described in detail below with reference to the accompanying drawings. The attached drawings are for reference and illustration only and are not intended to limit the embodiments of the present invention.
Meanwhile, it should be understood that, for ease of description, the sizes of the various parts shown in the drawings are not drawn according to actual proportional relationships.
The following description of at least one exemplary embodiment is merely illustrative and is in no way intended as any limitation on the application or its use.
Techniques, methods, and devices known to a person of ordinary skill in the relevant art may not be discussed in detail, but, where appropriate, such techniques, methods, and devices should be considered part of the specification.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further discussed in subsequent drawings.
The embodiments of the present application can be applied to electronic devices such as terminal devices, computer systems, and servers, which can operate together with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments including any of the above systems, and the like.
Electronic devices such as terminal devices, computer systems, and servers can be described in the general context of computer-system-executable instructions (such as program modules) executed by a computer system. Generally, program modules may include routines, programs, target programs, components, logic, data structures, and the like, which perform specific tasks or implement specific abstract data types. The computer system/server can be implemented in a distributed cloud computing environment, in which tasks are performed by remote processing devices linked through a communication network. In a distributed cloud computing environment, program modules may be located on local or remote computing system storage media including storage devices.
Fig. 1 is a schematic flowchart of the training method for a convolutional neural network according to an embodiment of the present invention. As shown in Fig. 1, the method includes the following steps:
Step 101: Delete, in a first convolutional neural network, at least one non-linear rectification layer located after a linear structure layer, to obtain a second convolutional neural network.
In the embodiment of the present invention, the first convolutional neural network can be any residual network. The first convolutional neural network includes residual units; some or all of the non-linear rectification layers located after linear structure layers in the residual units of the first convolutional neural network are deleted, to obtain the second convolutional neural network.
In the embodiment of the present invention, the linear structure layer includes: a convolutional layer or a batch normalization layer.
Referring to Fig. 2 (a), Fig. 2 (b), Fig. 2 (c), and Fig. 2 (d): Fig. 2 (a) and Fig. 2 (c) show two classical residual units. Classical residual units come in two structures: a bottleneck structure composed of a 1*1 convolution, a 3*3 convolution, and a 1*1 convolution (as shown in Fig. 2 (a)), and a structure formed by concatenating two 3*3 convolutions (as shown in Fig. 2 (c)). Fig. 2 (b) is Fig. 2 (a) with the ReLU layers (namely the non-linear rectification layers) following the BN layers (namely the batch normalization layers) deleted; likewise, Fig. 2 (d) is Fig. 2 (c) with the ReLU layers following the BN layers deleted. It should be noted that the embodiment of the present invention does not delete the ReLU layer following the summation (sum) layer. It can be seen that what the embodiment of the present invention deletes is only part of the ReLU layers in a residual unit, namely the ReLU layers located after linear structure layers.
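By way of illustration only, the following is a minimal sketch (PyTorch; the class and parameter names are illustrative, not taken from the patent) of the bottleneck residual unit of Fig. 2 (b), in which the ReLU layers after the conv/BN pairs are removed and only the ReLU after the summation layer is kept:

```python
import torch
import torch.nn as nn

class SimplifiedBottleneck(nn.Module):
    """Illustrative bottleneck residual unit of Fig. 2(b): ReLUs after BN deleted."""
    def __init__(self, channels, bottleneck_channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, bottleneck_channels, 1, bias=False),
            nn.BatchNorm2d(bottleneck_channels),
            # ReLU deleted here (was after this BN)
            nn.Conv2d(bottleneck_channels, bottleneck_channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(bottleneck_channels),
            # ReLU deleted here (was after this BN)
            nn.Conv2d(bottleneck_channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)  # kept: this ReLU follows the sum layer

    def forward(self, x):
        return self.relu(x + self.body(x))
```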
Step 102: Perform supervised training on the second convolutional neural network based on training images and annotation information of the training images.
Optionally, training parameters of the second convolutional neural network are initialized, and the following steps are performed iteratively: inputting a training image into the second convolutional neural network, computing a processing result of the training image based on the training parameters, and updating the training parameters, until the processing result of the training image converges; where convergence of the processing result of the training image means that the difference between the processing result of the training image and the annotation information of the training image is less than or equal to a target threshold.
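A minimal sketch of this supervised-training loop might look as follows (PyTorch; the optimizer, loss function, and convergence threshold are illustrative assumptions, not values fixed by the embodiment):

```python
import torch

def train(model, loader, epochs=100, threshold=1e-3):
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    criterion = torch.nn.CrossEntropyLoss()
    for epoch in range(epochs):
        running_loss = 0.0
        for images, labels in loader:          # training images and their annotation information
            outputs = model(images)            # processing result of the training images
            loss = criterion(outputs, labels)  # difference from the annotation information
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()                   # update the training parameters
            running_loss += loss.item()
        if running_loss / len(loader) <= threshold:  # convergence: difference <= target threshold
            break
```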
In the technical solution of the embodiment of the present invention, at least one non-linear rectification layer located after a linear structure layer is deleted from the first convolutional neural network, to obtain the second convolutional neural network; supervised training is performed on the second convolutional neural network based on training images and annotation information of the training images. In this way, compared with the first convolutional neural network, the second convolutional neural network has non-linear rectification layers deleted, which greatly reduces the training time of the convolutional neural network without affecting training accuracy.
In most classical deep convolutional neural networks, each convolutional layer (Convolution) is followed by a batch normalization (BN, Batch Normalization) layer and a non-linear rectification (ReLU, Rectified Linear Units) layer. The main function of the BN layer is to counter the "internal covariate shift" phenomenon; the ReLU layer, as a non-linear rectification layer, mainly keeps the positive values in its input and sets the negative values to 0.
A residual network (ResNet) is a deep convolutional neural network composed of residual units. The applicant found through experiments that removing the ReLU layers in the residual units not only does not harm the classification performance of the network, but can instead improve the processing performance of the network.
Based on the above experimental observation, the embodiment of the present invention proposes a novel convolutional neural network (namely the second convolutional neural network), which is realized by deleting, from the structure of an old convolutional neural network (namely the first convolutional neural network), the non-linear rectification layers (i.e. ReLU layers) located after linear structure layers. This novel convolutional neural network simplifies network training, and can even obtain goal-task processing performance similar to or better than that of the residual network.
During network training, the training parameters are continuously optimized; that is, during training a training parameter is a dynamically changing value, whereas during network testing the training parameters have already been optimized, i.e. each training parameter is a fixed optimal value. Therefore, the trained convolutional neural network can be further simplified: for example, the serial branches and/or parallel branches in the trained convolutional neural network can be merged, finally yielding a network structure composed only of 3*3 convolutional layers and ReLU layers, which shortens the running time of the network and greatly reduces the required video memory. Experiments show that the convolutional neural network proposed by the embodiment of the present invention can obtain a 2-4x test speedup, and the required memory is only 1/4 of that of the original residual network. Thus the convolutional neural network proposed by the embodiment of the present invention can run on low-power, low-compute platforms while maintaining processing accuracy fully consistent with the old network.
It has been verified through a large number of experiments that removing some of the ReLU layers in residual units not only does not reduce task processing accuracy, but can instead improve it. This also proves that the convention in traditional residual networks of "concatenating a ReLU layer after each convolutional layer or batch normalization layer" is not necessary. Therefore, the embodiment of the present invention appropriately reduces the use of ReLU layers in the network, to obtain a network structure with similar or better task processing accuracy on tasks such as classification, image semantic segmentation, and object detection.
Fig. 3 is a schematic flowchart of the image processing method according to an embodiment of the present invention. As shown in Fig. 3, the method includes the following steps:
Step 301: Merge at least one serial branch and/or at least one parallel branch of a trained second convolutional neural network, to obtain a third convolutional neural network; where at least one non-linear rectification layer located after a linear structure layer has been deleted from the second convolutional neural network.
In the embodiment of the present invention, the trained second convolutional neural network may be, but is not limited to being, obtained by the training method shown in Fig. 1, and the training may be supervised or unsupervised.
In one embodiment, some or all of the non-linear rectification layers located after linear structure layers in the residual units of the second convolutional neural network have been deleted.
In one embodiment, the linear structure layer is a network layer with a linear processing function, which may include but is not limited to: a convolutional layer or a batch normalization layer; the convolutional layer may include but is not limited to a Convolution layer or a pooling layer.
In the embodiment of the present invention, merging at least one serial branch and/or at least one parallel branch of the trained second convolutional neural network means replacing the processed branch with an equivalent traditional basic network layer. For example, a serial branch is turned by the merging process into a single linear structure layer with processing capability equivalent to that of the serial branch; similarly, a parallel branch is turned by the merging process into a single linear structure layer with processing capability equivalent to that of the parallel branch; and so on. Through such merging, the more complex parts of the second convolutional neural network are converted into traditional network layers with equivalent processing capability, which increases test running speed, shortens test time, and reduces resources such as memory required for running the convolutional neural network, while minimizing the impact on test accuracy.
Optionally, the merging process includes one or more of the following modes:
Mode one: in the case where the serial branch includes at least two concatenated linear structure layers, one linear structure layer in the serial branch is merged into another linear structure layer.
Here, the two concatenated linear structure layers can have the same structure (e.g. both are convolutional layers, whose kernel sizes can be the same or different) or different structures (e.g. one is a convolutional layer and the other is a BN layer).
For example: the at least two concatenated linear structure layers include two concatenated convolutional layers with convolution kernels of different sizes. In this case, one convolutional layer is deleted and the parameters of the deleted convolutional layer are merged into the other convolutional layer. Optionally, the kernel parameters of the deleted convolutional layer are merged at the center of the other convolutional layer's kernel (for example, by way of parameter matrix multiplication), as sketched below.
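The following is a minimal sketch of such a serial merge (PyTorch), under the simplifying assumptions that the first of the two layers is a 1*1 convolution applied before a K*K convolution and that biases are ignored; the channel mixing of the 1*1 layer is absorbed into the K*K kernel by matrix multiplication over the intermediate channels, after which the 1*1 layer can be deleted:

```python
import torch

@torch.no_grad()
def merge_serial_1x1_then_kxk(w1x1, wkxk):
    # w1x1: (mid, in, 1, 1) weights of the first (1*1) convolution
    # wkxk: (out, mid, K, K) weights of the second (K*K) convolution
    # merged[o, i, k, s] = sum_m wkxk[o, m, k, s] * w1x1[m, i, 0, 0]
    return torch.einsum('omks,mi->oiks', wkxk, w1x1[:, :, 0, 0])
```

Running a single convolution with the merged kernel then produces the same output as running the two original convolutions in sequence, saving one layer's run time.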
For example: the at least two concatenated linear structure layers include a concatenated convolutional layer and batch normalization layer. In this case, the batch normalization layer is deleted, and the batch-normalization scale and offset parameters are merged into the convolution kernel parameters of the convolutional layer. Suppose the output of the convolutional layer is z, with z = W*x + b, where W and b are the weight matrix and the offset of the convolutional layer respectively. After the batch-normalization scale and offset parameters are merged into the convolution kernel parameters of the convolutional layer, the result is z' = c*(W*x + b) + d = (c*W)*x + (c*b + d), where c is related to the batch-normalization scale and d is related to the batch-normalization offset parameter.
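A minimal sketch of this fold (PyTorch), where c = gamma / sqrt(running_var + eps) and d = beta - c * running_mean play the roles of the scale and offset factors above:

```python
import torch

@torch.no_grad()
def fold_bn_into_conv(conv, bn):
    """Merge a trained BatchNorm2d into the preceding Conv2d, then drop the BN layer."""
    c = bn.weight / torch.sqrt(bn.running_var + bn.eps)  # per-channel scale
    d = bn.bias - c * bn.running_mean                    # per-channel offset
    conv.weight.mul_(c.reshape(-1, 1, 1, 1))             # W' = c * W
    if conv.bias is None:
        conv.bias = torch.nn.Parameter(torch.zeros_like(d))
    conv.bias.mul_(c).add_(d)                            # b' = c * b + d
```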
Mode two: in the case where the parallel branch includes a parallel linear structure branch and a skip branch, the skip branch is deleted and 1 is added to the center of the convolution kernel of the convolutional layer in the linear structure branch.
Suppose the weight matrix of the convolutional layer in the linear structure branch is W, with elements W_{i,j}(k,s), where i denotes the i-th weight filter, j denotes the j-th channel of the i-th weight filter, and (k,s) denotes the weight position. Adding 1 at the center of the convolution kernel of the convolutional layer gives W'_{i,j}(k,s) = W_{i,j}(k,s) + 1 when j = i and (k,s) is the kernel center, and W'_{i,j}(k,s) = W_{i,j}(k,s) otherwise.
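A minimal sketch of this skip-branch absorption (PyTorch), assuming a 3*3 convolutional layer with equal input and output channel counts and "same" padding, so that the modified convolution conv'(x) equals conv(x) + x:

```python
import torch

@torch.no_grad()
def absorb_skip_branch(conv):
    out_ch, in_ch, kh, kw = conv.weight.shape
    assert out_ch == in_ch and kh % 2 == 1 and kw % 2 == 1
    for i in range(out_ch):
        # W'[i, i, center] = W[i, i, center] + 1: the conv now also passes x through
        conv.weight[i, i, kh // 2, kw // 2] += 1.0
```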
Mode three: in the case where the parallel branch includes two parallel linear structure branches, one linear structure branch of the parallel branch is deleted, and the weight vectors in the convolution kernel of the deleted linear structure branch are added to the center of the convolution kernel of the other linear structure branch.
Suppose the weight matrix of the convolutional layer in the retained linear structure branch is W, with elements W_{i,j}(k,s), and the weight matrix of the convolutional layer in the deleted linear structure branch is V, with elements V_{i,j}, where i denotes the i-th weight filter, j denotes the j-th channel of the i-th weight filter, and (k,s) denotes the weight position. Adding the weight vectors of the deleted branch's convolution kernel to the center of the retained branch's convolution kernel gives W'_{i,j}(k,s) = W_{i,j}(k,s) + V_{i,j} when (k,s) is the kernel center, and W'_{i,j}(k,s) = W_{i,j}(k,s) otherwise.
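A minimal sketch of this parallel merge (PyTorch), for the case where the deleted branch is a 1*1 convolution merged into a parallel 3*3 convolution, assuming matching stride and "same" padding so the two branch outputs align:

```python
import torch

@torch.no_grad()
def absorb_1x1_branch(conv3x3, conv1x1):
    kh, kw = conv3x3.weight.shape[2] // 2, conv3x3.weight.shape[3] // 2
    # add each filter's 1*1 weight vector at the spatial center of the 3*3 kernel
    conv3x3.weight[:, :, kh, kw] += conv1x1.weight[:, :, 0, 0]
    if conv1x1.bias is not None:               # carry the 1*1 bias over as well
        if conv3x3.bias is None:
            conv3x3.bias = torch.nn.Parameter(conv1x1.bias.clone())
        else:
            conv3x3.bias.add_(conv1x1.bias)
```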
In an optional implementation, the parallel branches included in the second convolutional neural network may include at least one of the following two basic structural units: the projection module (projection-module) and the skip module (skip-module). Fig. 4 (a) shows the skip module and Fig. 4 (b) shows the projection module. The skip module includes a 3*3 convolutional layer and directly connects the input and the output through a skip layer; the projection module convolves the input feature map with a 3*3 convolutional layer and a 1*1 convolutional layer respectively, and fuses the two results as the output of the module. The modules shown in Fig. 4 (a) and Fig. 4 (b) can undergo network merging through the scheme of the embodiment of the present invention, yielding the network structure module shown in Fig. 4 (c). This network structure module is similar to a traditional network structure, i.e. it contains no branches or residual structures, which greatly saves network test time and required video memory.
Step 302: Input an image into the third convolutional neural network; process the image through the third convolutional neural network, to obtain a processing result of the image.
In one embodiment, object recognition, classification, image segmentation, or object detection/tracking is performed on the image through the third convolutional neural network. Here, object recognition on an image can identify any type of object, such as people or things.
Fig. 5 (a) is a schematic structural diagram of the second convolutional neural network. As shown in Fig. 5 (a), the second convolutional neural network is obtained by deleting, from the first convolutional neural network, the ReLU layers following the convolutional layers or BN layers. The second convolutional neural network includes multiple residual units, and in each residual unit only the ReLU layer following the sum layer is kept.
Fig. 5 (b) is a schematic structural diagram of the third convolutional neural network. As shown in Fig. 5 (b), the third convolutional neural network is obtained by merging the serial branches and parallel branches in the second convolutional neural network; the resulting third convolutional neural network has a simplified structure.
It should be understood that the number of each kind of layer structure described in the embodiment of the present invention is not limited. The technical solution of the embodiment of the present invention is described in further detail below with reference to Fig. 4 (a), Fig. 4 (b), and Fig. 4 (c).
(1) Merging a BN layer into the convolutional layer before it
In general, a BN layer involves two steps: a normalization operation and a scaling/translation operation. During training, the parameters are updated by computing the mean and variance of the training data in each iteration. During prediction, these factors no longer change with the data, i.e. they are fixed; the operation of the BN layer can then be regarded as a linear structure. Furthermore, it is known that the operation of a convolutional layer can also be regarded as a linear operation. It should be understood that two linear operations can be merged into one; that is, at the test phase the BN layer can be merged into the convolutional layer and handled together with it. This saves the run time and video memory the network would otherwise need to carry out the BN operations. For the skip module shown in Fig. 4 (a) and the projection module shown in Fig. 4 (b), their BN layers can be deleted in this way.
(2) Merging parallel branches
1) Merging the parallel branches of the skip module
After the BN layer is merged into the convolutional layer, the skip module turns into a skip operation and a 3*3 convolution operation. The skip operation and the 3*3 convolution operation are two parallel operations, and the corresponding network structure is a parallel branch structure; the skip operation can be merged into the 3*3 convolution operation. The basic operation is: add 1 to the center of the i-th channel of the i-th convolution kernel. As shown in Fig. 4 (c), the skip module can thereby be merged into the basic unit needed by a traditional network, eliminating one of the branches and the information fusion operation between the different branches, which significantly reduces network running time and required video memory.
2) Merging the parallel branches of the projection module
After the BN layer is merged into the convolutional layer, the projection module turns into two convolution operations, namely a 1*1 convolution operation and a 3*3 convolution operation. The 1*1 convolution operation and the 3*3 convolution operation are two parallel operations, and the corresponding network structure is a parallel branch structure; the 1*1 convolution operation can likewise be merged into the 3*3 convolution operation. The basic operation is: add the weight vector in the i-th 1*1 convolution kernel to the center of the i-th 3*3 convolution kernel. As shown in Fig. 4 (c), the projection module can likewise be merged into the basic unit needed by a traditional network, eliminating one of the branches and the information fusion operation between the different branches, which significantly reduces network running time and required video memory.
Through the above operations, the technical solution of the embodiment of the present invention has the following two advantages: (1) the network structure of the embodiment of the present invention includes skip modules and projection modules similar to those of a residual network, and introducing these modules can overcome the gradient vanishing problem that traditional neural networks suffer from during training, so that effective gradient back-propagation can be carried out and the network can be trained; (2) the network structure of the embodiment of the present invention uses a residual-like network structure during training and a traditional-like network structure during testing, saving the run time and video memory needed by the BN layers and the multi-branch structure, which can significantly accelerate network running speed.
The technical solution of the embodiment of the present invention: 1) can be used for research on network acceleration and memory saving; 2) makes it possible to run deep convolutional neural networks on low-power, low-compute platforms (such as field-programmable gate arrays (FPGA, Field-Programmable Gate Array), PX2, and TX1); 3) is not limited to image classification tasks, and also extends to image semantic segmentation and object detection tasks; 4) the network merging idea can be used for further research on network acceleration and model compression.
Fig. 6 is a schematic structural diagram of the training apparatus for a convolutional neural network according to an embodiment of the present invention. As shown in Fig. 6, the training apparatus of the convolutional neural network includes:
a deleting unit 601, configured to delete, in a first convolutional neural network, at least one non-linear rectification layer located after a linear structure layer, to obtain a second convolutional neural network;
a training unit 602, configured to perform supervised training on the second convolutional neural network based on training images and annotation information of the training images.
In one embodiment, the first convolutional neural network includes residual units;
the deleting unit 601 is configured to delete, in the first convolutional neural network, some or all of the non-linear rectification layers located after linear structure layers in the residual units, to obtain the second convolutional neural network.
In one embodiment, the linear structure layer includes: a convolutional layer or a batch normalization layer.
In one embodiment, the training unit 602 is configured to initialize training parameters of the second convolutional neural network, and iteratively perform the following steps: inputting a training image into the second convolutional neural network, computing a processing result of the training image based on the training parameters, and updating the training parameters, until the processing result of the training image converges; where convergence of the processing result of the training image means that the difference between the processing result of the training image and the annotation information of the training image is less than or equal to a target threshold.
Those skilled in the art will appreciate that the functions implemented by each unit in the training apparatus of the convolutional neural network shown in Fig. 6 can be understood with reference to the foregoing description of the training method of the convolutional neural network. The functions of each unit in the training apparatus of the convolutional neural network shown in Fig. 6 can be realized by a program running on a processor, or by specific logic circuits.
Fig. 7 is a schematic structural diagram of the image processing apparatus according to an embodiment of the present invention. As shown in Fig. 7, the image processing apparatus includes:
a merging unit 701, configured to merge at least one serial branch and/or at least one parallel branch of a trained second convolutional neural network, to obtain a third convolutional neural network; where at least one non-linear rectification layer located after a linear structure layer has been deleted from the second convolutional neural network;
an image processing unit 702, configured to input an image into the third convolutional neural network, and process the image through the third convolutional neural network to obtain a processing result of the image.
In one embodiment, the serial branch includes at least two concatenated linear structure layers;
the merging unit 701 is configured to merge one linear structure layer in the serial branch into another linear structure layer.
In one embodiment, the at least two concatenated linear structure layers include: two concatenated convolutional layers with convolution kernels of different sizes;
the merging unit 701 is configured to delete one convolutional layer and merge the parameters of the deleted convolutional layer into the other convolutional layer.
In one embodiment, the at least two concatenated linear structure layers include: a concatenated convolutional layer and batch normalization layer;
the merging unit 701 is configured to delete the batch normalization layer and merge the batch-normalization scale and offset parameters into the convolution kernel parameters of the convolutional layer.
In one embodiment, the parallel branch includes: a linear structure branch and a skip branch in parallel;
the merging unit 701 is configured to merge the parallel branch by: deleting the skip branch and adding 1 to the center of the convolution kernel of the convolutional layer in the linear structure branch.
In one embodiment, the parallel branch includes: two parallel linear structure branches;
the merging unit 701 is configured to delete one linear structure branch of the parallel branch and add the weight vectors in the convolution kernel of the deleted linear structure branch to the center of the convolution kernel of the other linear structure branch.
In one embodiment, the image processing unit 702 is configured to perform object recognition, classification, image segmentation, or object detection/tracking on the image through the third convolutional neural network.
In one embodiment, some or all of the non-linear rectification layers located after linear structure layers in the residual units of the second convolutional neural network have been deleted.
In one embodiment, the linear structure layer includes: a convolutional layer or a batch normalization layer.
Those skilled in the art will appreciate that the functions implemented by each unit in the image processing apparatus shown in Fig. 7 can be understood with reference to the foregoing description of the image processing method. The functions of each unit in the image processing apparatus shown in Fig. 7 can be realized by a program running on a processor, or by specific logic circuits.
If the above apparatus of the embodiment of the present invention is implemented in the form of software function modules and sold or used as an independent product, it can also be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the embodiment of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which can be a personal computer, a server, a network device, or the like) to execute all or part of the method of each embodiment of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read Only Memory), a magnetic disk, or an optical disk. In this way, the embodiment of the present invention is not limited to any specific combination of hardware and software.
Correspondingly, an embodiment of the present invention also provides a computer storage medium in which computer-executable instructions are stored, where the computer-executable instructions, when executed by a processor, implement the above method of the embodiment of the present invention.
Fig. 8 is a schematic structural diagram of the computer device according to an embodiment of the present invention. As shown in Fig. 8, the computer device 100 may include one or more (only one is shown in the figure) processors 1002 (the processor 1002 may include, but is not limited to, a processing unit such as a microcontroller unit (MCU, Micro Controller Unit) or a programmable logic device (FPGA, Field Programmable Gate Array)), a memory 1004 for storing data, and a transmission device 1006 for communication functions. Those of ordinary skill in the art will appreciate that the structure shown in Fig. 8 is only illustrative and does not limit the structure of the above electronic device. For example, the computer device 100 may also include more or fewer components than shown in Fig. 8, or have a configuration different from that shown in Fig. 8.
The memory 1004 can be used to store software programs and modules of application software, such as the program instructions/modules corresponding to the methods in the embodiments of the present invention. The processor 1002 runs the software programs and modules stored in the memory 1004, thereby executing various functional applications and data processing, that is, implementing the above methods. The memory 1004 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some instances, the memory 1004 may further include memory located remotely from the processor 1002, and such remote memory can be connected to the computer device 100 through a network. Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The transmission device 1006 is used to receive or send data via a network. A specific example of the above network may include a wireless network provided by a communication provider of the computer device 100. In one example, the transmission device 1006 includes a network adapter (NIC, Network Interface Controller), which can be connected to other network devices through a base station so as to communicate with the Internet. In another example, the transmission device 1006 can be a radio frequency (RF, Radio Frequency) module, which is used to communicate with the Internet wirelessly.
The technical solutions described in the embodiments of the present invention can be combined arbitrarily as long as there is no conflict.
In the several embodiments provided by the present invention, it should be understood that the disclosed method and smart device can be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is only a division by logical function, and there may be other division manners in actual implementation, e.g. multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the above integrated unit can be implemented in the form of hardware, or in the form of hardware plus software functional units.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that could readily occur to those familiar with the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention.

Claims (10)

1. A training method for a convolutional neural network, characterized in that the method comprises:
deleting, in a first convolutional neural network, at least one non-linear rectification layer located after a linear structure layer, to obtain a second convolutional neural network;
performing supervised training on the second convolutional neural network based on training images and annotation information of the training images.
2. The method according to claim 1, characterized in that the first convolutional neural network comprises residual units;
the deleting, in the first convolutional neural network, of at least one non-linear rectification layer located after a linear structure layer, to obtain the second convolutional neural network, comprises:
deleting, in the first convolutional neural network, some or all of the non-linear rectification layers located after linear structure layers in the residual units, to obtain the second convolutional neural network.
3. The method according to claim 1 or 2, characterized in that the linear structure layer comprises: a convolutional layer or a batch normalization layer.
4. The method according to any one of claims 1 to 3, characterized in that the performing of supervised training on the second convolutional neural network based on training images and the annotation information of the training images comprises:
initializing training parameters of the second convolutional neural network, and iteratively performing the following steps: inputting a training image into the second convolutional neural network, computing a processing result of the training image based on the training parameters, and updating the training parameters, until the processing result of the training image converges; wherein convergence of the processing result of the training image means that the difference between the processing result of the training image and the annotation information of the training image is less than or equal to a target threshold.
5. An image processing method, characterized in that the method comprises:
merging at least one serial branch and/or at least one parallel branch of a trained second convolutional neural network, to obtain a third convolutional neural network; wherein at least one non-linear rectification layer located after a linear structure layer has been deleted from the second convolutional neural network;
inputting an image into the third convolutional neural network;
processing the image through the third convolutional neural network, to obtain a processing result of the image.
6. A training apparatus for a convolutional neural network, characterized in that the apparatus comprises:
a deleting unit, configured to delete, in a first convolutional neural network, at least one non-linear rectification layer located after a linear structure layer, to obtain a second convolutional neural network;
a training unit, configured to perform supervised training on the second convolutional neural network based on training images and annotation information of the training images.
7. An image processing apparatus, characterized in that the apparatus comprises:
a merging unit, configured to merge at least one serial branch and/or at least one parallel branch of a trained second convolutional neural network, to obtain a third convolutional neural network; wherein at least one non-linear rectification layer located after a linear structure layer has been deleted from the second convolutional neural network;
an image processing unit, configured to input an image into the third convolutional neural network, and process the image through the third convolutional neural network to obtain a processing result of the image.
8. A computer device, characterized in that the computer device comprises a memory and a processor, the memory storing computer-executable instructions, wherein the processor, when running the computer-executable instructions on the memory, implements the method according to any one of claims 1 to 4 or the method according to claim 5.
9. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1 to 4 or the method according to claim 5.
10. A computer program, characterized in that it comprises computer instructions, wherein the computer instructions, when run in a processor of a device, implement the method according to any one of claims 1 to 4 or the method according to claim 5.
CN201810136118.5A 2018-02-09 2018-02-09 Convolutional neural network training method and image processing method and device Active CN108304921B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810136118.5A CN108304921B (en) 2018-02-09 2018-02-09 Convolutional neural network training method and image processing method and device

Publications (2)

Publication Number Publication Date
CN108304921A 2018-07-20
CN108304921B CN108304921B (en) 2021-02-02

Family

ID=62865006

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810136118.5A Active CN108304921B (en) 2018-02-09 2018-02-09 Convolutional neural network training method and image processing method and device

Country Status (1)

Country Link
CN (1) CN108304921B (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100094594A1 (en) * 2008-10-09 2010-04-15 Masahide Yamashita Predictive failure reporting system, predictive failure reporting method, and method for maintaining image forming apparatus
CN105957086A (en) * 2016-05-09 2016-09-21 西北工业大学 Remote sensing image change detection method based on optimized neural network model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIANGYU ZHANG et al.: "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices", arXiv:1707.01083v2 [cs.CV] *
SUI Jianghua: "Research on AND-OR Fuzzy Neural Networks and Their Application in Ship Control" *

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110969217A (en) * 2018-09-28 2020-04-07 杭州海康威视数字技术股份有限公司 Method and device for processing image based on convolutional neural network
CN110969217B (en) * 2018-09-28 2023-11-17 杭州海康威视数字技术股份有限公司 Method and device for image processing based on convolutional neural network
CN110971901B (en) * 2018-09-29 2022-08-02 杭州海康威视数字技术股份有限公司 Processing method, device and equipment of convolutional neural network and storage medium
CN110971901A (en) * 2018-09-29 2020-04-07 杭州海康威视数字技术股份有限公司 Convolutional neural network processing method and device
CN109360202A (en) * 2018-10-26 2019-02-19 京东方科技集团股份有限公司 X-ray dose measurement method and device, and training method and device therefor
WO2020125806A1 (en) * 2018-12-17 2020-06-25 Shanghai United Imaging Intelligence Co., Ltd. Systems and methods for image segmentation
US11836925B2 (en) 2018-12-17 2023-12-05 Shanghai United Imaging Intelligence Co., Ltd. Systems and methods for image segmentation
US11341734B2 (en) 2018-12-17 2022-05-24 Shanghai United Imaging Intelligence Co., Ltd. Systems and methods for image segmentation
US20220284687A1 (en) * 2018-12-17 2022-09-08 Shanghai United Imaging Intelligence Co., Ltd. Systems and methods for image segmentation
WO2020134828A1 (en) * 2018-12-27 2020-07-02 深圳云天励飞技术有限公司 Neural network inference structure optimization method and device
CN109816098B (en) * 2019-01-25 2021-09-07 京东方科技集团股份有限公司 Processing method and evaluation method of neural network, and data analysis method and device
CN109816098A (en) * 2019-01-25 2019-05-28 京东方科技集团股份有限公司 Neural network processing method and evaluation method, and data analysis method and device
CN109872288A (en) * 2019-01-31 2019-06-11 深圳大学 Network training method, device, terminal and storage medium for image denoising
WO2020221200A1 (en) * 2019-04-28 2020-11-05 华为技术有限公司 Neural network construction method, image processing method and devices
CN110222693A (en) * 2019-06-03 2019-09-10 第四范式(北京)技术有限公司 Method and apparatus for constructing a character recognition model and recognizing characters
CN110222693B (en) * 2019-06-03 2022-03-08 第四范式(北京)技术有限公司 Method and device for constructing character recognition model and recognizing characters
CN110473520A (en) * 2019-07-19 2019-11-19 上海麦图信息科技有限公司 Deep-learning-based method for discriminating Chinese and English speech in air traffic control
CN110689020A (en) * 2019-10-10 2020-01-14 湖南师范大学 Segmentation method of mineral flotation froth image and electronic equipment
TWI807204B (en) * 2020-07-28 2023-07-01 鴻海精密工業股份有限公司 Image processing method, device, electronic equipment and storage medium
CN111798520A (en) * 2020-09-08 2020-10-20 平安国际智慧城市科技股份有限公司 Image processing method, device, equipment and medium based on convolutional neural network
CN111798520B (en) * 2020-09-08 2020-12-22 平安国际智慧城市科技股份有限公司 Image processing method, device, equipment and medium based on convolutional neural network
CN112669452A (en) * 2021-01-08 2021-04-16 天津大学 Object positioning method based on convolutional neural network multi-branch structure
CN113158982A (en) * 2021-05-17 2021-07-23 广东中卡云计算有限公司 Semi-intrusive target key point marking method
WO2023078051A1 (en) * 2021-11-05 2023-05-11 杭州研极微电子有限公司 Quantization-aware training method and apparatus, and device, medium and convolutional neural network
CN116562338A (en) * 2022-01-27 2023-08-08 美的集团(上海)有限公司 Multi-branch convolution structure, neural network model, and determination method and determination device thereof

Also Published As

Publication number Publication date
CN108304921B (en) 2021-02-02

Similar Documents

Publication Publication Date Title
CN108304921A (en) The training method and image processing method of convolutional neural networks, device
US11361585B2 (en) Method and system for face recognition via deep learning
Jabri et al. Revisiting visual question answering baselines
CN110366734B (en) Optimizing neural network architecture
US20190354868A1 (en) Multi-task neural networks with task-specific paths
US20170228618A1 (en) Video classification method and apparatus
CN105701120B Method and apparatus for determining semantic matching degree
CN108805259A Neural network model training method and device, storage medium and terminal device
CN107958285A Neural network mapping method and device for an embedded system
CN110532996A Video classification method, information processing method and server
CN109791625A Performing face recognition using an artificial neural network
CN110147711A Video scene recognition method and apparatus, storage medium and electronic device
KR20190031318A (en) Domain Separation Neural Networks
CN110766080B (en) Method, device and equipment for determining labeled sample and storage medium
CN108885682A Processing cell images using a neural network
CN104933428B Face recognition method and device based on tensor description
CN106897746A (en) Data classification model training method and device
CN112396106B (en) Content recognition method, content recognition model training method, and storage medium
CN110109543A C-VEP recognition method based on subject transfer
CN110288007A Data annotation method and apparatus, and electronic device
CN109902716A Training method for an alignment classification model and image classification method
CN110457677A (en) Entity-relationship recognition method and device, storage medium, computer equipment
CN112507106B (en) Deep learning model training method and device and FAQ similarity discrimination method
CN111954860A (en) System and method for predicting fine-grained antagonistic multi-player movements
CN109447096A Machine-learning-based scan path prediction method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant