CN110399918A - Method and apparatus for target recognition - Google Patents

Method and apparatus for target recognition

Info

Publication number
CN110399918A
CN110399918A
Authority
CN
China
Prior art keywords
neural network
network model
layers
default
cut
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910671904.XA
Other languages
Chinese (zh)
Other versions
CN110399918B (en)
Inventor
陈海波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenlan Robot Shanghai Co ltd
Original Assignee
Deep Blue Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Deep Blue Technology Shanghai Co Ltd
Priority to CN201910671904.XA
Publication of CN110399918A
Application granted
Publication of CN110399918B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/082Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and apparatus for target recognition. The method comprises: obtaining image data for target recognition; and inputting the image data into a neural network model to perform target recognition and obtain a recognition result, where the neural network model is obtained by training a preset neural network model on training samples. During training of the preset neural network model, it is determined whether the model contains batch normalization (BN) layers; if so, the BN layers of the preset neural network model are pruned; otherwise, the weights of the network layers of the preset neural network model are pruned. By combining direct and indirect structured sparsification in the preset neural network model, pruning both the structured weights themselves and the BN layers, and performing target recognition with the pruned model, the invention makes target recognition faster and more efficient.

Description

Method and apparatus for target recognition
Technical field
The present invention relates to the field of target recognition, and in particular to a method and apparatus for target recognition.
Background technique
With the development of computer technology and neural network technology, more and more people perform target recognition with neural network models. For a neural network to achieve good pattern recognition results it must be deep enough, but for a particular problem excessive depth also brings greater over-fitting risk and training difficulty, and a network that is too deep sometimes contributes little to recognition performance in a concrete scenario; it is therefore sometimes necessary to prune the network at the layer level. Network pruning removes the redundant parts of a network by changing its structure. Depending on the pruning object, network pruning can be divided into multiple granularities, such as layer-level pruning and connection-level pruning.
The pruning object at the connection level is a specific network connection or a specific parameter, and the result is usually a sparser network. Connection-level pruning is finer-grained and more controllable, and its impact on network performance is minimal. However, it causes the network to lose its regular structure: the pruned weight tensors become sparse, so sparse-tensor storage and computation rules are required, which is unfavorable for parallelism.
The pruning object at the layer level is an entire network layer, which mainly suits models with many layers; the result is that the neural network becomes shallower. Removing several blocks of a deep residual network is in fact a form of layer-level pruning. The pruning object at the neuron level is a single neuron or filter, and the result is that the neural network becomes thinner. The target of connection-level pruning is a single connection weight, and the result is that the neural network becomes sparser.
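These granularities can be sketched as follows; the weight matrix, the 0.1 weight threshold, and the 0.85 row-norm criterion are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Hypothetical 4x3 weight matrix of one network layer.
W = np.array([[0.9, -0.01, 0.5],
              [0.02, 0.8, -0.03],
              [0.7, 0.6, 0.01],
              [-0.02, 0.03, 0.9]])

# Connection-level pruning: zero out individual small weights. The tensor
# keeps its shape but becomes sparse (the network becomes "sparser").
conn_pruned = np.where(np.abs(W) > 0.1, W, 0.0)

# Neuron-level pruning: drop whole rows (output neurons / filters) with a
# small norm, so the layer becomes "thinner".
row_norms = np.linalg.norm(W, axis=1)
neuron_pruned = W[row_norms > 0.85]
```

Layer-level pruning, by contrast, would remove this entire matrix from the model at once.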
In summary, when target recognition is performed with a neural network model, using only a single pruning method, and pruning too much, brings new problems along with the gains. What is currently needed is a pruning scheme that combines the effects of the two pruning methods above to assist target recognition, so that target recognition works better.
Summary of the invention
The present invention provides a method and apparatus for target recognition, specifically including the following.
According to a first aspect of the present invention, a method of target recognition is provided, the method comprising:
obtaining image data for target recognition;
inputting the image data into a neural network model to perform target recognition and obtain a recognition result, where the neural network model is obtained by training a preset neural network model on training samples; during training of the preset neural network model, it is determined whether the preset neural network model contains batch normalization (BN) layers; if so, the BN layers of the preset neural network model are pruned; otherwise, the weights of the network layers of the preset neural network model are pruned.
In one possible implementation, each time the preset neural network model has been trained on n training samples, if the current preset neural network model is determined to contain BN layers, the BN layers of the current preset neural network model are pruned; if it is determined not to contain BN layers, the weights of the network layers of the current preset neural network model are pruned; n is a preset positive integer smaller than the total number of training samples.
In one possible implementation, a first penalty term is added to the loss function of the preset neural network model, and the weights of the network layers of the preset neural network model are pruned accordingly.
In one possible implementation, the first penalty term contains a first adjustment coefficient for adjusting the weight values of the i-th network layer of the preset neural network model, and a cutting weight-value range;
for the i-th network layer of the preset neural network model, its weight values are adjusted according to the first adjustment coefficient in the first penalty term;
according to the cutting weight-value range in the first penalty term, the weights of the i-th network layer of the preset neural network model are pruned.
In one possible implementation, by dynamically adjusting the first adjustment coefficient in the first penalty term, the weight values of the i-th network layer of the preset neural network model are adjusted so that as many weights of the i-th layer as possible fall within the cutting weight-value range.
In one possible implementation, the first penalty term L1 is: L1 = (λ1 / N) · Σ |x_i|;
where x_i is a weight of the i-th network layer satisfying ε1_i < |x_i| ≤ C1_i, [ε1_i, C1_i] is the cutting weight-value range of the i-th network layer, N is the number of weights of the i-th network layer whose values fall within the cutting weight-value range, and λ1 is the first adjustment coefficient.
In one possible implementation, a second penalty term is added for the BN layers of the preset neural network model, and the BN layers of the preset neural network model are pruned accordingly.
In one possible implementation, the second penalty term contains a second adjustment coefficient for adjusting the BN layer output values of the preset neural network model, and a cutting BN-output-value range;
for the BN layers of the preset neural network model, the BN layer output values are adjusted according to the second adjustment coefficient in the second penalty term;
according to the cutting BN-output-value range in the second penalty term, the BN layers of the preset neural network model are pruned.
In one possible implementation, by dynamically adjusting the second adjustment coefficient in the second penalty term, the BN layer output values of the preset neural network model are adjusted so that as many BN layers as possible have output values within the cutting BN-output-value range.
In one possible implementation, the second penalty term L2 is: L2 = λ2 · Σ_{k∈τ} g(k);
where g(k) is a BN layer output value satisfying ε2 < |g(k)| ≤ C2, [ε2, C2] is the cutting BN-output-value range, and λ2 is the second adjustment coefficient.
According to a second aspect of the present invention, an apparatus for target recognition is provided. The apparatus comprises a memory and a processor, the memory stores an executable program, and when the executable program is executed the processor performs the following process:
obtaining image data for target recognition;
inputting the image data into a neural network model to perform target recognition and obtain a recognition result, where the neural network model is obtained by training a preset neural network model on training samples; during training of the preset neural network model, it is determined whether the preset neural network model contains batch normalization (BN) layers; if so, the BN layers of the preset neural network model are pruned; otherwise, the weights of the network layers of the preset neural network model are pruned.
According to a third aspect of the present invention, a computer storage medium is provided. The computer storage medium stores a computer program which, when executed, implements the above method.
Compared with the prior art, the method and apparatus for target recognition provided by the present invention have the following advantages and beneficial effects:
by combining direct and indirect structured sparsification in the preset neural network model, pruning both the structured weights themselves and the BN layers, the invention greatly accelerates the training used for sparsification, and the degree of pruning can be increased while network performance is kept unchanged.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a method of target recognition provided by Embodiment 1 of the present invention;
Fig. 2 is a curve graph of BN layer output values provided by Embodiment 1 of the present invention;
Fig. 3 is a schematic diagram of an apparatus for target recognition provided by Embodiment 2 of the present invention;
Fig. 4 is a schematic diagram of a device for target recognition provided by Embodiment 2 of the present invention.
Specific embodiment
To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below in conjunction with the drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art, based on the embodiments of the present invention and without creative effort, fall within the protection scope of the present invention.
The application scenarios described in the embodiments of the present invention are intended to explain the technical solutions of the embodiments more clearly and do not limit the technical solutions provided by the embodiments; those of ordinary skill in the art will appreciate that, as new application scenarios appear, the technical solutions provided by the embodiments remain applicable to similar technical problems. In the description of the present invention, unless otherwise indicated, "plurality" means two or more.
Neural network pruning removes the redundant parts of a neural network by changing its structure. Depending on the pruning object, neural network pruning can be divided into multiple granularities, such as layer-level pruning and connection-level pruning. Connection-level pruning causes the network to lose its regular structure: the pruned weight tensors become sparse, so sparse-tensor storage and computation rules are needed, which is unfavorable for parallelism. When target recognition is performed with a neural network model, using only a single pruning method, and pruning too much, brings new problems along with the gains; what is currently needed is a pruning scheme that combines the effects of the two pruning methods above to assist target recognition, so that target recognition works better.
The embodiments of the present invention therefore provide a method and apparatus for target recognition.
For the above scenarios, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings of the specification.
Embodiment 1
This embodiment provides a method of target recognition which, as shown in Fig. 1, specifically comprises the following steps:
Step 101: obtain image data for target recognition;
Step 102: input the image data into a neural network model to perform target recognition and obtain a recognition result, where the neural network model is obtained by training a preset neural network model on training samples; during training of the preset neural network model, it is determined whether the preset neural network model contains batch normalization (BN) layers; if so, the BN layers of the preset neural network model are pruned; otherwise, the weights of the network layers of the preset neural network model are pruned.
In the above method, by combining direct and indirect structured sparsification in the preset neural network model, pruning both the structured weights themselves and the BN layers, the training used for sparsification is greatly accelerated and the degree of pruning can be increased while network performance is kept unchanged; performing target recognition with the pruned neural network model makes target recognition faster and more efficient.
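The BN-or-weights branch of steps 101 and 102 can be sketched as follows; the layer dictionaries, field names, and the (0, 0.2] cutting ranges are illustrative assumptions:

```python
def prune_model(layers, bn_range=(0.0, 0.2), w_range=(0.0, 0.2)):
    """Prune BN layers if the model has any, otherwise prune layer weights."""
    if any(layer["type"] == "bn" for layer in layers):
        # Cut whole BN layers whose output-scale value falls in (eps, cap].
        eps, cap = bn_range
        return [l for l in layers
                if not (l["type"] == "bn" and eps < abs(l["gamma"]) <= cap)]
    # No BN layers: zero out the small weights inside each layer instead.
    eps, cap = w_range
    for l in layers:
        l["weights"] = [0.0 if eps < abs(w) <= cap else w for w in l["weights"]]
    return layers
```

In the first branch the network loses whole layers (layer-level pruning); in the second it keeps its shape but becomes sparse (weight pruning).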
As an optional implementation, each time the preset neural network model has been trained on n training samples, if the current preset neural network model is determined to contain BN layers, the BN layers of the current preset neural network model are pruned; if it is determined not to contain BN layers, the weights of the network layers of the current preset neural network model are pruned; n is a preset positive integer smaller than the total number of training samples.
This embodiment does not limit the specific value of n; n can be 1, 10, or any positive integer smaller than the total number of training samples. For example, in an implementation, the preset neural network model may be pruned once each time it has been trained on 1 training sample, or once each time it has been trained on 10 training samples.
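The prune-every-n-samples schedule can be sketched as (the function names here are illustrative):

```python
def train_with_periodic_pruning(samples, n, train_step, prune_step):
    """Run one pruning pass each time n more training samples have been used."""
    for i, sample in enumerate(samples, start=1):
        train_step(sample)
        if i % n == 0:
            prune_step()
```

With 10 samples and n = 3, for instance, pruning would run after samples 3, 6 and 9.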
A BN layer, like a convolutional layer, an activation layer or a fully connected layer, is one layer of the neural network model. For every layer of the neural network model other than the output layer, the parameter updates of the preceding layers during training shift the distribution of that layer's input data. The input of each layer therefore needs to be preprocessed by a BN layer: for example, the input data X3 of the third network layer is normalized to mean 0 and variance 1 before being fed into the third layer for computation, which solves the problem of shifting data distributions.
Because normalizing the output of one network layer before feeding it into the next would affect the features that layer has learned, learnable reconstruction parameters γ and β are introduced so that the network can learn to recover the feature distribution the original network was meant to learn: y(k) = γ(k)·x̂(k) + β(k), where each neuron x(k) of the neural network model has such a pair of parameters γ and β.
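A minimal sketch of the BN transform just described, normalization to mean 0 and variance 1 followed by the learnable rescaling with γ and β:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Normalize each feature of the batch to mean 0 / variance 1 ...
    x_hat = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)
    # ... then let the learnable gamma and beta recover the original
    # feature distribution if that is what the network needs.
    return gamma * x_hat + beta

out = batch_norm(np.array([[1.0, 2.0], [3.0, 4.0]]), gamma=2.0, beta=5.0)
```

After normalization each feature has mean 0, so β sets the output mean and γ its spread.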
If the current preset neural network model is determined to contain BN layers, the BN layers of the current preset neural network model are pruned. A specific implementation is as follows.
As an optional implementation, a second penalty term is added for the BN layers of the preset neural network model, and the BN layers of the preset neural network model are pruned accordingly.
The second penalty term contains a second adjustment coefficient for adjusting the BN layer output values of the preset neural network model, and a cutting BN-output-value range;
for the BN layers of the preset neural network model, the BN layer output values are adjusted according to the second adjustment coefficient in the second penalty term;
by dynamically adjusting the second adjustment coefficient in the second penalty term, the BN layer output values of the preset neural network model are adjusted so that as many BN layers as possible have output values within the cutting BN-output-value range;
according to the cutting BN-output-value range in the second penalty term, the BN layers of the preset neural network model are pruned.
The second penalty term L2 is: L2 = λ2 · Σ_{k∈τ} g(k);
where g(k) is a BN layer output value satisfying ε2 < |g(k)| ≤ C2, [ε2, C2] is the cutting BN-output-value range, and λ2 is the second adjustment coefficient.
This embodiment does not limit the specific value of λ2; λ2 is usually a number smaller than 1. During training of the neural network model, the curve of all BN layer output values sorted in ascending order is plotted, as shown in Fig. 2, where curve S1 is the curve without the penalty term and curve S2 is the curve with the penalty term added. In a specific implementation, λ2 is dynamically adjusted according to the actual training situation so that, on the premise of guaranteeing the training effect, training the model drives as many BN layer output values as possible close to ε2.
This embodiment does not limit the values of ε2 and C2; ε2 is usually 0. Assuming the value of C2 is 0.2, BN layers whose output values are greater than 0 and at most 0.2 are pruned.
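A sketch of this second penalty term and the resulting cut, using the example values ε2 = 0 and C2 = 0.2; the per-layer output values g(k) and the value of λ2 are illustrative assumptions:

```python
import numpy as np

lam2, eps2, cap2 = 0.01, 0.0, 0.2
g = np.array([0.9, 0.15, 0.6, 0.05])      # per-BN-layer output values g(k)

# Second penalty term L2 = λ2 · Σ g(k), added to the training loss to
# push BN output values toward zero during training.
L2 = lam2 * np.abs(g).sum()

# Cut the BN layers whose output value falls in (ε2, C2].
keep = ~((np.abs(g) > eps2) & (np.abs(g) <= cap2))
remaining = g[keep]
```

Here two of the four BN layers fall inside the cutting range and would be removed from the model.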
A neural network model is formed by nodes connected layer by layer through edges, and each edge carries a weight. Pruning the weights of a network layer means that when the weights on certain edges are very small, those edges are considered unimportant and are removed.
If the current preset neural network model is determined not to contain BN layers, the weights of the network layers of the current preset neural network model are pruned. A specific implementation is as follows.
As an optional implementation, a first penalty term is added to the loss function of the preset neural network model, and the weights of the network layers of the preset neural network model are pruned accordingly.
The first penalty term contains a first adjustment coefficient for adjusting the weight values of the i-th network layer of the preset neural network model, and a cutting weight-value range;
for the i-th network layer of the preset neural network model, its weight values are adjusted according to the first adjustment coefficient in the first penalty term;
by dynamically adjusting the first adjustment coefficient in the first penalty term, the weight values of the i-th network layer of the preset neural network model are adjusted so that as many weights of the i-th layer as possible fall within the cutting weight-value range;
according to the cutting weight-value range in the first penalty term, the weights of the i-th network layer of the preset neural network model are pruned.
As an optional implementation, the first penalty term L1 is: L1 = (λ1 / N) · Σ |x_i|;
where x_i is a weight of the i-th network layer satisfying ε1_i < |x_i| ≤ C1_i, [ε1_i, C1_i] is the cutting weight-value range of the i-th network layer, N is the number of weights of the i-th network layer whose values fall within the cutting weight-value range, and λ1 is the first adjustment coefficient.
This embodiment does not limit the specific value of λ1; λ1 is usually a number smaller than 1. In a specific implementation, λ1 is dynamically adjusted according to the actual training situation so that, on the premise of guaranteeing the training effect, training the model drives as many weight values as possible close to ε1.
This embodiment does not limit the values of ε1 and C1; ε1 is usually 0. Assuming the value of C1 is 0.2, weights whose values are greater than 0 and at most 0.2 are pruned.
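An analogous sketch for the first penalty term with ε1 = 0 and C1 = 0.2; the weight values and λ1 are illustrative, and the averaged form of L1 over the N in-range weights is an assumed reading of the per-layer penalty:

```python
import numpy as np

lam1, eps1, cap1 = 0.01, 0.0, 0.2
x = np.array([0.5, 0.15, -0.05, 0.8, -0.12])   # weights of the i-th layer

# Weights inside the cutting range (ε1, C1].
in_range = (np.abs(x) > eps1) & (np.abs(x) <= cap1)
N = int(in_range.sum())

# First penalty term: average magnitude of the in-range weights, scaled
# by the first adjustment coefficient λ1 and added to the training loss.
L1 = lam1 / N * np.abs(x[in_range]).sum()

# Cutting: zero out the in-range weights; the tensor keeps its shape.
pruned = np.where(in_range, 0.0, x)
```

Unlike the BN case, which removes whole layers, this branch only sparsifies each layer in place.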
Embodiment 2
Based on the same inventive concept, this embodiment provides an apparatus for target recognition which, as shown in Fig. 3, comprises a processor 301 and a memory 302, where the memory 302 stores an executable program and, when the executable program is executed, the processor 301 performs the following process:
obtaining image data for target recognition;
inputting the image data into a neural network model to perform target recognition and obtain a recognition result, where the neural network model is obtained by training a preset neural network model on training samples; during training of the preset neural network model, it is determined whether the preset neural network model contains batch normalization (BN) layers; if so, the BN layers of the preset neural network model are pruned; otherwise, the weights of the network layers of the preset neural network model are pruned.
As an optional implementation, the processor 301 is specifically configured to:
each time the preset neural network model has been trained on n training samples, if the current preset neural network model is determined to contain BN layers, prune the BN layers of the current preset neural network model; if it is determined not to contain BN layers, prune the weights of the network layers of the current preset neural network model; n is a preset positive integer smaller than the total number of training samples.
As an optional implementation, the processor 301 is specifically configured to prune the weights of the network layers of the current preset neural network model by:
adding a first penalty term to the loss function of the preset neural network model, and pruning the weights of the network layers of the preset neural network model accordingly.
As an optional implementation, the first penalty term contains a first adjustment coefficient for adjusting the weight values of the i-th network layer of the preset neural network model, and a cutting weight-value range;
the processor 301 is specifically configured to:
for the i-th network layer of the preset neural network model, adjust its weight values according to the first adjustment coefficient in the first penalty term;
according to the cutting weight-value range in the first penalty term, prune the weights of the i-th network layer of the preset neural network model.
As an optional implementation, the processor 301 is specifically configured to:
by dynamically adjusting the first adjustment coefficient in the first penalty term, adjust the weight values of the i-th network layer of the preset neural network model so that as many weights of the i-th layer as possible fall within the cutting weight-value range.
As an optional implementation, the first penalty term L1 is: L1 = (λ1 / N) · Σ |x_i|;
where x_i is a weight of the i-th network layer satisfying ε1_i < |x_i| ≤ C1_i, [ε1_i, C1_i] is the cutting weight-value range of the i-th network layer, N is the number of weights of the i-th network layer whose values fall within the cutting weight-value range, and λ1 is the first adjustment coefficient.
As an optional implementation, the processor 301 is specifically configured to prune the BN layers of the current preset neural network model by:
adding a second penalty term for the BN layers of the preset neural network model, and pruning the BN layers of the preset neural network model accordingly.
As an optional implementation, the second penalty term contains a second adjustment coefficient for adjusting the BN layer output values of the preset neural network model, and a cutting BN-output-value range;
the processor 301 is specifically configured to:
for the BN layers of the preset neural network model, adjust the BN layer output values according to the second adjustment coefficient in the second penalty term;
according to the cutting BN-output-value range in the second penalty term, prune the BN layers of the preset neural network model.
As an optional implementation, the processor 301 is specifically configured to:
by dynamically adjusting the second adjustment coefficient in the second penalty term, adjust the BN layer output values of the preset neural network model so that as many BN layers as possible have output values within the cutting BN-output-value range.
As an optional implementation, the second penalty term L2 is: L2 = λ2 · Σ_{k∈τ} g(k);
where g(k) is a BN layer output value satisfying ε2 < |g(k)| ≤ C2, [ε2, C2] is the cutting BN-output-value range, and λ2 is the second adjustment coefficient.
Based on the same inventive concept, this embodiment also provides a device for target recognition which, as shown in Fig. 4, comprises:
an image data acquisition unit 401, configured to obtain image data for target recognition;
a target recognition unit 402, configured to input the image data into a neural network model to perform target recognition and obtain a recognition result, where the neural network model is obtained by training a preset neural network model on training samples; during training of the preset neural network model, it is determined whether the preset neural network model contains batch normalization (BN) layers; if so, the BN layers of the preset neural network model are pruned; otherwise, the weights of the network layers of the preset neural network model are pruned.
As an optional implementation, the target recognition unit 402 is specifically configured to:
each time the preset neural network model has been trained on n training samples, if the current preset neural network model is determined to contain BN layers, prune the BN layers of the current preset neural network model; if it is determined not to contain BN layers, prune the weights of the network layers of the current preset neural network model; n is a preset positive integer smaller than the total number of training samples.
As an alternative embodiment, the target-recognition unit 402 prunes the weights of the network layers of the current preset neural network model by:
adding a first penalty term to the loss function of the preset neural network model, and pruning the weights of the network layers of the preset neural network model accordingly.
As an alternative embodiment, the first penalty term includes a first adjustment coefficient for adjusting the weight values of the i-th network layer in the preset neural network model, and a weight-value pruning range;
the target-recognition unit 402 is specifically configured to:
adjust the weight values of the i-th network layer in the preset neural network model according to the first adjustment coefficient in the first penalty term; and
prune the weights of the i-th network layer in the preset neural network model according to the weight-value pruning range in the first penalty term.
As an alternative embodiment, the target-recognition unit 402 is specifically configured to:
dynamically adjust the first adjustment coefficient in the first penalty term so as to adjust the weight values of the i-th network layer in the preset neural network model, maximizing the number of weights of the i-th network layer whose values fall within the weight-value pruning range.
As an alternative embodiment, the first penalty term L1 is as follows:
where x_i is a weight of the i-th network layer satisfying ε1_i < |x_i| ≤ C1_i, [ε1_i, C1_i] is the weight-value pruning range of the i-th network layer, N is the number of weights of the i-th network layer whose values fall within the pruning range, and λ1 is the first adjustment coefficient.
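The exact expression for L1 appears only as a formula image in the original publication, so the following is a hedged sketch of one plausible reading: an L1-style penalty, scaled by the first adjustment coefficient λ1, over the N weights whose magnitudes lie inside the pruning band ε1_i < |x_i| ≤ C1_i, after which the in-band weights are pruned (zeroed). All names are illustrative.

```python
# Banded weight penalty and pruning, per the (assumed) reading above.
def banded_weight_penalty(weights, eps1, c1, lam1):
    """Return (penalty, N) over weights whose magnitude is in (eps1, c1]."""
    in_band = [abs(x) for x in weights if eps1 < abs(x) <= c1]
    return lam1 * sum(in_band), len(in_band)

def prune_banded_weights(weights, eps1, c1):
    """Zero every weight whose magnitude falls inside the pruning band."""
    return [0.0 if eps1 < abs(x) <= c1 else x for x in weights]

w = [0.005, 0.03, 0.08, -0.04, 0.9]
penalty, n_in_band = banded_weight_penalty(w, eps1=0.01, c1=0.1, lam1=0.5)
print(n_in_band, prune_banded_weights(w, 0.01, 0.1))
# 3 [0.005, 0.0, 0.0, 0.0, 0.9]
```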
As an alternative embodiment, the target-recognition unit 402 prunes the BN layers of the current preset neural network model by:
adding a second penalty term for the BN layers of the preset neural network model, and pruning the BN layers of the preset neural network model accordingly.
As an alternative embodiment, the second penalty term includes a second adjustment coefficient for adjusting the BN-layer output values in the preset neural network model, and a BN-layer output-value pruning range;
the target-recognition unit 402 is specifically configured to:
adjust the BN-layer output values in the preset neural network model according to the second adjustment coefficient in the second penalty term; and
prune the BN layers of the preset neural network model according to the BN-layer output-value pruning range in the second penalty term.
As an alternative embodiment, the target-recognition unit 402 is specifically configured to:
dynamically adjust the second adjustment coefficient in the second penalty term so as to adjust the BN-layer output values in the preset neural network model, maximizing the number of BN layers whose output values fall within the BN-layer output-value pruning range.
As an alternative embodiment, the second penalty term L2 is: L2 = λ2·Σ_{k∈τ} g(k);
where g(k) is a BN-layer output value satisfying ε2 < |g(k)| ≤ C2, [ε2, C2] is the BN-layer output-value pruning range, and λ2 is the second adjustment coefficient.
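The second penalty term can be sketched directly from its definition, L2 = λ2·Σ_{k∈τ} g(k), taking τ as the set of BN-layer output values with ε2 < |g(k)| ≤ C2. Variable names are illustrative; BN layers whose outputs land in this range are the pruning candidates.

```python
# Second penalty term over BN-layer output values in the pruning range.
def second_penalty(bn_outputs, eps2, c2, lam2):
    """Return (L2, number of pruning-candidate BN layers)."""
    in_range = [g for g in bn_outputs if eps2 < abs(g) <= c2]
    return lam2 * sum(in_range), len(in_range)

outputs = [0.002, 0.05, 0.2, 1.5]        # per-BN-layer output values
l2, candidates = second_penalty(outputs, eps2=0.01, c2=0.5, lam2=0.1)
print(candidates, round(l2, 4))  # 2 0.025
```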
Embodiment Three
An embodiment of the present invention further provides a computer-readable non-volatile storage medium including program code which, when run on a computing terminal, causes the computing terminal to perform the steps of the method of Embodiment One of the present invention.
Those skilled in the art will appreciate that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data-processing device to produce a machine, such that the instructions executed by the processor create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data-processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data-processing device, so that a series of operational steps is performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. If such modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is intended to include them as well.

Claims (12)

1. A method of target recognition, characterized by comprising:
obtaining image data for target recognition; and
inputting the image data into a neural network model for target recognition to obtain a recognition result, wherein the neural network model is obtained by training a preset neural network model with training samples; during training of the preset neural network model, determining whether the preset neural network model includes batch-normalization (BN) layers; if so, pruning the BN layers of the preset neural network model; otherwise, pruning the weights of the network layers of the preset neural network model.
2. The method according to claim 1, wherein:
after every n training samples have been used to train the preset neural network model, the BN layers of the current preset neural network model are pruned if it is determined to include BN layers, and the weights of the network layers of the current preset neural network model are pruned if it is determined not to include BN layers, n being a preset positive integer smaller than the total number of training samples.
3. The method according to claim 1 or 2, wherein pruning the weights of the network layers of the current preset neural network model comprises:
adding a first penalty term to the loss function of the preset neural network model, and pruning the weights of the network layers of the preset neural network model accordingly.
4. The method according to claim 3, wherein the first penalty term includes a first adjustment coefficient for adjusting the weight values of the i-th network layer in the preset neural network model, and a weight-value pruning range;
for the i-th network layer in the preset neural network model, the weight values of the i-th network layer are adjusted according to the first adjustment coefficient in the first penalty term; and
the weights of the i-th network layer in the preset neural network model are pruned according to the weight-value pruning range in the first penalty term.
5. The method according to claim 4, wherein:
the first adjustment coefficient in the first penalty term is adjusted dynamically so as to adjust the weight values of the i-th network layer in the preset neural network model, maximizing the number of weights of the i-th network layer whose values fall within the weight-value pruning range.
6. The method according to claim 4, wherein:
the first penalty term L1 is as follows:
where x_i is a weight of the i-th network layer satisfying ε1_i < |x_i| ≤ C1_i, [ε1_i, C1_i] is the weight-value pruning range of the i-th network layer, N is the number of weights of the i-th network layer whose values fall within the pruning range, and λ1 is the first adjustment coefficient.
7. The method according to claim 1 or 2, wherein pruning the BN layers of the current preset neural network model comprises:
adding a second penalty term for the BN layers of the preset neural network model, and pruning the BN layers of the preset neural network model accordingly.
8. The method according to claim 7, wherein the second penalty term includes a second adjustment coefficient for adjusting the BN-layer output values in the preset neural network model, and a BN-layer output-value pruning range;
for the BN layers in the preset neural network model, the BN-layer output values are adjusted according to the second adjustment coefficient in the second penalty term; and
the BN layers of the preset neural network model are pruned according to the BN-layer output-value pruning range in the second penalty term.
9. The method according to claim 8, wherein:
the second adjustment coefficient in the second penalty term is adjusted dynamically so as to adjust the BN-layer output values in the preset neural network model, maximizing the number of BN layers whose output values fall within the BN-layer output-value pruning range.
10. The method according to claim 8, wherein:
the second penalty term L2 is: L2 = λ2·Σ_{k∈τ} g(k);
where g(k) is a BN-layer output value satisfying ε2 < |g(k)| ≤ C2, [ε2, C2] is the BN-layer output-value pruning range, and λ2 is the second adjustment coefficient.
11. A target-recognition device, characterized in that the device comprises a processor and a memory, wherein the memory stores an executable program and the processor, when the executable program is executed, implements the following process:
obtaining image data for target recognition; and
inputting the image data into a neural network model for target recognition to obtain a recognition result, wherein the neural network model is obtained by training a preset neural network model with training samples; during training of the preset neural network model, determining whether the preset neural network model includes batch-normalization (BN) layers; if so, pruning the BN layers of the preset neural network model; otherwise, pruning the weights of the network layers of the preset neural network model.
12. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 10.
CN201910671904.XA 2019-07-24 2019-07-24 Target identification method and device Active CN110399918B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910671904.XA CN110399918B (en) 2019-07-24 2019-07-24 Target identification method and device

Publications (2)

Publication Number Publication Date
CN110399918A true CN110399918A (en) 2019-11-01
CN110399918B CN110399918B (en) 2021-11-19

Family

ID=68324992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910671904.XA Active CN110399918B (en) 2019-07-24 2019-07-24 Target identification method and device

Country Status (1)

Country Link
CN (1) CN110399918B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108197666A (en) * 2018-01-30 2018-06-22 咪咕文化科技有限公司 A kind of processing method, device and the storage medium of image classification model
CN109409507A (en) * 2018-10-31 2019-03-01 上海鹰瞳医疗科技有限公司 Neural network construction method and equipment
CN109598340A (en) * 2018-11-15 2019-04-09 北京知道创宇信息技术有限公司 Method of cutting out, device and the storage medium of convolutional neural networks
CN109671020A (en) * 2018-12-17 2019-04-23 北京旷视科技有限公司 Image processing method, device, electronic equipment and computer storage medium


Also Published As

Publication number Publication date
CN110399918B (en) 2021-11-19

Similar Documents

Publication Publication Date Title
CN111950723B (en) Neural network model training method, image processing method, device and terminal equipment
CN109299716A (en) Training method, image partition method, device, equipment and the medium of neural network
CN104933428B (en) A kind of face identification method and device based on tensor description
CN110880036A (en) Neural network compression method and device, computer equipment and storage medium
CN108664893A (en) A kind of method for detecting human face and storage medium
CN108229647A (en) The generation method and device of neural network structure, electronic equipment, storage medium
CN106709565A (en) Optimization method and device for neural network
CN111814907B (en) Quantum generation countermeasure network algorithm based on condition constraint
CN108171663A (en) The image completion system for the convolutional neural networks that feature based figure arest neighbors is replaced
CN106339719A (en) Image identification method and image identification device
CN109558901A (en) A kind of semantic segmentation training method and device, electronic equipment, storage medium
CN110263818A (en) Method, apparatus, terminal and the computer readable storage medium of resume selection
CN109325516A (en) A kind of integrated learning approach and device towards image classification
CN107506350A (en) A kind of method and apparatus of identification information
CN107229966A (en) A kind of model data update method, apparatus and system
CN110097177A (en) A kind of network pruning method based on pseudo- twin network
CN108805257A (en) A kind of neural network quantization method based on parameter norm
CN116416561A (en) Video image processing method and device
CN112288087A (en) Neural network pruning method and device, electronic equipment and storage medium
CN107871103A (en) Face authentication method and device
WO2023207039A1 (en) Data processing method and apparatus, and device and storage medium
CN110378389A (en) A kind of Adaboost classifier calculated machine creating device
CN110309774A (en) Iris segmentation method, apparatus, storage medium and electronic equipment
CN112132062B (en) Remote sensing image classification method based on pruning compression neural network
CN108549899A (en) A kind of image-recognizing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240508

Address after: Room 6227, No. 999, Changning District, Shanghai 200050

Patentee after: Shenlan robot (Shanghai) Co.,Ltd.

Country or region after: China

Address before: Unit 1001, 369 Weining Road, Changning District, Shanghai, 200336 (actual floor: 9th)

Patentee before: DEEPBLUE TECHNOLOGY (SHANGHAI) Co.,Ltd.

Country or region before: China