CN108108754A - Training method for a re-identification network, re-identification method, device and system - Google Patents

Training method for a re-identification network, re-identification method, device and system

Info

Publication number
CN108108754A
CN108108754A (application number CN201711360237.0A)
Authority
CN
China
Prior art keywords
distance
pictures
loss
convolutional neural
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711360237.0A
Other languages
Chinese (zh)
Other versions
CN108108754B (en)
Inventor
罗浩
张弛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Maigewei Technology Co Ltd
Original Assignee
Beijing Maigewei Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Maigewei Technology Co Ltd filed Critical Beijing Maigewei Technology Co Ltd
Priority to CN201711360237.0A priority Critical patent/CN108108754B/en
Publication of CN108108754A publication Critical patent/CN108108754A/en
Application granted granted Critical
Publication of CN108108754B publication Critical patent/CN108108754B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a training method for a re-identification network, together with a re-identification method, device and system. A batch of training data is obtained; the feature vectors of the N pictures in the batch are extracted, the pairwise distances between feature vectors are calculated from each picture's feature vector, and a distance matrix is obtained. From the calculated distance matrix, the positive sample pair with the maximum distance and the negative sample pair with the minimum distance are selected, and these two boundary sample pairs are used to calculate the loss of the convolutional neural network, which is then used to train the model. By learning the hardest positive and negative sample pairs when computing the loss of the convolutional neural network, the generalization ability of the convolutional neural network model can be increased and the recognition precision improved.

Description

Training method for a re-identification network, re-identification method, device and system
Technical field
The present invention relates to the technical field of image recognition, and in particular to a training method for a re-identification network, and to a re-identification method, device and system.
Background technology
Pedestrian re-identification is an extremely important problem in security surveillance video applications. Pedestrian re-identification refers to detecting whether a pedestrian seen by one camera has previously appeared in other cameras. At present, re-identification is mainly carried out by two methods: representation learning and metric learning. Representation learning treats each pedestrian as a class, converting pedestrian re-identification into an image classification problem. Metric learning extracts a feature from each pedestrian picture, calculates the distance between the features of two pictures, and then randomly selects positive and negative sample pairs from the training samples. With this random selection, the samples fed to the convolutional neural network are largely easy-to-distinguish sample pairs, which limits the generalization ability of the convolutional neural network.
Summary of the invention
In view of this, an object of the present invention is to provide a training method for a re-identification network, and a re-identification method, device and system, which can increase the generalization ability of a convolutional neural network model and improve recognition precision.
In a first aspect, an embodiment of the present invention provides a training method for a re-identification network, the method comprising:
obtaining a batch of training data, the batch comprising N pictures, where N is a positive integer;
passing each picture through a convolutional neural network to obtain the feature vector corresponding to each picture;
calculating the pairwise distances between the feature vectors, and obtaining a distance matrix from the distances;
obtaining a positive sample pair distance and a negative sample pair distance from the distance matrix;
obtaining the loss of the convolutional neural network from the positive sample pair distance and the negative sample pair distance, and training the convolutional neural network according to the loss;
acquiring another batch of training data and repeating the above steps to train the convolutional neural network, until the loss of the convolutional neural network converges.
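Under stated assumptions — a stand-in linear projection in place of the convolutional neural network, Euclidean distances, and a hinge-style boundary sample loss as defined later in the description — one iteration of the steps above can be sketched in numpy; this is a sketch of the procedure, not the patent's implementation:

```python
import numpy as np

def training_step(images, ids, W, alpha=0.3):
    """One sketched iteration: features -> distance matrix -> hardest pairs -> loss."""
    feats = images @ W                                   # stand-in for the CNN forward pass
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)       # regularization
    M = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=2)  # N x N distances
    same = ids[:, None] == ids[None, :]                  # same identifier ID?
    off_diag = ~np.eye(len(ids), dtype=bool)
    d_pos = M[same & off_diag].max()   # hardest positive pair: maximum same-ID distance
    d_neg = M[~same].min()             # hardest negative pair: minimum cross-ID distance
    L_e = max(d_pos - d_neg + alpha, 0.0)                # boundary sample loss
    return d_pos, d_neg, L_e
```

In a real training loop the returned loss would be combined with the classification loss and back-propagated through the network, and the loop would repeat over new batches until the loss converges.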
Further, each picture has a corresponding training label, the training label comprising the identifier ID of the identified object, and obtaining the positive sample pair distance and the negative sample pair distance from the distance matrix comprises:
choosing from the distance matrix the distances with identical identifier IDs and the distances with different identifier IDs;
regrouping the distances with identical identifier IDs to obtain a positive sample distance matrix;
regrouping the distances with different identifier IDs to obtain a negative sample distance matrix;
choosing the maximum distance from the positive sample distance matrix as the positive sample pair distance;
choosing the minimum distance from the negative sample distance matrix as the negative sample pair distance.
Further, obtaining the loss of the convolutional neural network from the positive sample pair distance and the negative sample pair distance comprises:
obtaining the boundary sample loss from the positive sample pair distance and the negative sample pair distance;
obtaining the loss of the convolutional neural network from the boundary sample loss and the weighted average of the classification losses.
Further, obtaining the boundary sample loss from the positive sample pair distance and the negative sample pair distance comprises:
calculating the boundary sample loss according to the following formula:
L_e = {max(M_P) − min(M_N) + α}_+
where L_e is the boundary sample loss, max(M_P) is the positive sample pair distance, min(M_N) is the negative sample pair distance, and α is a manually set boundary threshold.
Further, obtaining the loss of the convolutional neural network from the boundary sample loss and the weighted average of the classification losses comprises:
calculating the loss of the convolutional neural network according to the following formula:
Loss = λ·L_ID + (1 − λ)·L_e
where Loss is the loss of the convolutional neural network, L_ID is the weighted average of the classification losses of the N pictures, L_e is the boundary sample loss, and λ is a weight parameter, λ ∈ (0, 1).
Further, calculating the pairwise distances between the feature vectors and obtaining the distance matrix from the distances comprises:
performing regularization on the feature vector corresponding to each picture to obtain the regularized feature vector corresponding to each picture;
calculating, from each picture's regularized feature vector, the distance between each picture and the other N−1 pictures, so as to obtain the distance matrix.
Further, calculating, from each picture's regularized feature vector, the distance between each picture and the other N−1 pictures so as to obtain the distance matrix comprises:
calculating the distance between each picture and the other N−1 pictures according to the following formula:
d(picture 1, picture 2) = ||f_n1 − f_n2||
where d(picture 1, picture 2) is the distance between the first picture and the second picture, f_n1 is the regularized feature vector of the first picture, and f_n2 is the regularized feature vector of the second picture.
Further, the N pictures in the batch of training data comprise P different pedestrians, each pedestrian having K different pictures, where the different pictures of each pedestrian are placed consecutively.
In a second aspect, an embodiment of the present invention also provides a re-identification method, the method comprising:
obtaining a picture to be queried and a set of pedestrian pictures to be searched;
passing the picture to be queried and at least one picture from the set of pedestrian pictures to be searched through a trained convolutional neural network to obtain the feature vector of the picture to be queried and the feature vector of the at least one picture from the set of pedestrian pictures to be searched, where the trained convolutional neural network is obtained by the above training method for a re-identification network;
calculating the distance between the feature vector of the picture to be queried and the feature vector of the at least one picture from the set of pedestrian pictures to be searched;
determining the identity of the pedestrian in the picture to be queried according to the distance.
In a third aspect, an embodiment of the present invention also provides a training device for a re-identification network, the device comprising:
a batch training data acquisition module, for obtaining a batch of training data, the batch comprising N pictures, where N is a positive integer;
a first feature vector acquisition module, for passing each picture through a convolutional neural network to obtain the feature vector corresponding to each picture;
a distance matrix acquisition module, for calculating the pairwise distances between the feature vectors and obtaining a distance matrix from the distances;
a sample pair distance acquisition module, for obtaining a positive sample pair distance and a negative sample pair distance from the distance matrix;
a training module, for obtaining the loss of the convolutional neural network from the positive sample pair distance and the negative sample pair distance, training the convolutional neural network according to the loss, acquiring another batch of training data, and repeating the training of the convolutional neural network until the loss of the convolutional neural network converges.
In a fourth aspect, an embodiment of the present invention also provides a re-identification device, the device comprising:
a picture acquisition module, for obtaining a picture to be queried and a set of pedestrian pictures to be searched;
a second feature vector acquisition module, for passing the picture to be queried and at least one picture from the set of pedestrian pictures to be searched through a trained convolutional neural network to obtain the feature vector of the picture to be queried and the feature vector of the at least one picture from the set of pedestrian pictures to be searched, where the trained convolutional neural network is obtained by the training method for a re-identification network as described above;
a distance calculation module, for calculating the distance between the feature vector of the picture to be queried and the feature vector of the at least one picture from the set of pedestrian pictures to be searched;
a determining module, for determining the identity of the pedestrian in the picture to be queried according to the distance.
In a fifth aspect, an embodiment of the present invention also provides a training system for a re-identification network, comprising a memory and a processor, the memory storing a computer program that can run on the processor, wherein the processor, when executing the computer program, implements the steps of the above training method for a re-identification network.
In a sixth aspect, an embodiment of the present invention also provides a re-identification system, comprising a memory and a processor, the memory storing a computer program that can run on the processor, wherein the processor, when executing the computer program, implements the steps of the above re-identification method.
In a seventh aspect, an embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when run by a processor, performs the steps of the above training method for a re-identification network or of the above re-identification method.
An embodiment of the present invention provides a training method for a re-identification network, and a re-identification method, device and system: a batch of training data is obtained; the feature vectors of the N pictures in the batch are obtained, the pairwise distances between the feature vectors are calculated from each picture's feature vector, and a distance matrix is obtained; from the calculated distance matrix, the positive sample pair with the maximum distance and the negative sample pair with the minimum distance are selected, and these two boundary sample pairs are used to calculate the loss of the convolutional neural network so as to train the model. By learning the hardest positive and negative sample pairs when computing the loss of the convolutional neural network, the generalization ability of the convolutional neural network model can be increased and the recognition precision improved.
Other features and advantages of the present invention will be set forth in the following description and will in part become apparent from the description or be understood by implementing the present invention. The objects and other advantages of the present invention are realized and obtained by the structures specifically noted in the description, the claims and the accompanying drawings.
To make the above objects, features and advantages of the present invention clearer and more comprehensible, preferred embodiments are specifically cited below and described in detail in conjunction with the accompanying drawings.
Description of the drawings
In order to more clearly illustrate the specific embodiments of the present invention or the technical solutions of the prior art, the drawings needed in the description of the specific embodiments or the prior art are briefly described below. It is apparent that the drawings described below show some embodiments of the present invention, and for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative labor.
Fig. 1 is a schematic diagram of the electronic equipment provided by Embodiment 1 of the present invention;
Fig. 2 is a flow chart of the training method for a re-identification network provided by Embodiment 2 of the present invention;
Fig. 3 is a flow chart of step S103 of the training method for a re-identification network provided by Embodiment 2 of the present invention;
Fig. 4 is a flow chart of step S104 of the training method for a re-identification network provided by Embodiment 2 of the present invention;
Fig. 5 is a flow chart of step S105 of the training method for a re-identification network provided by Embodiment 2 of the present invention;
Fig. 6 is a flow chart of the re-identification method provided by Embodiment 3 of the present invention;
Fig. 7 is a schematic diagram of the training device for a re-identification network provided by Embodiment 4 of the present invention;
Fig. 8 is a schematic diagram of the re-identification device provided by Embodiment 5 of the present invention.
Reference numerals:
10 - batch training data acquisition module; 20 - first feature vector acquisition module; 30 - distance matrix acquisition module; 40 - sample pair distance acquisition module; 50 - training module; 70 - picture acquisition module; 80 - second feature vector acquisition module; 90 - distance calculation module; 91 - determining module; 100 - electronic equipment; 102 - processor; 104 - storage device; 106 - input device; 108 - output device; 110 - image acquisition device; 112 - bus system.
Specific embodiments
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention are clearly and completely described below in conjunction with the accompanying drawings. It is apparent that the described embodiments are part of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative labor belong to the scope of protection of the present invention.
For ease of understanding, the embodiments of the present invention are described in detail below.
Embodiment one:
Fig. 1 is a schematic diagram of the electronic equipment provided by Embodiment 1 of the present invention.
With reference to Fig. 1, an exemplary electronic device 100 for implementing the training method for a re-identification network and the re-identification method, device and system of the embodiments of the present invention includes one or more processors 102, one or more storage devices 104, an input device 106, an output device 108 and an image acquisition device 110, these components being interconnected through a bus system 112 and/or a connection mechanism of other forms (not shown). It should be noted that the components and structure of the electronic device 100 shown in Fig. 1 are only exemplary and not restrictive; as needed, the electronic device may also have other components and structures.
The processor 102 may be a central processing unit (CPU) or a processing unit of another form with data-processing capability and/or instruction-execution capability, and may control other components in the electronic device 100 to perform desired functions.
The storage device 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may, for example, include random access memory (RAM) and/or cache memory. The non-volatile memory may, for example, include read-only memory (ROM), a hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 102 may run the program instructions to realize the client functionality (implemented by the processor) of the embodiments of the invention described below and/or other desired functions. Various application programs and various data, such as data used and/or generated by the application programs, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions, and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 108 may output various information (for example, images or sounds) to the outside (for example, a user), and may include one or more of a display, a loudspeaker, and the like.
The image acquisition device 110 may shoot images desired by the user (such as photos, videos, etc.) and store the shot images in the storage device 104 for use by other components.
Illustratively, the exemplary electronic device for implementing the training method for a re-identification network and the re-identification method, device and system provided according to the embodiments of the present invention may be implemented on a mobile terminal such as a smart phone or a tablet computer.
Embodiment two:
Fig. 2 is a flow chart of the training method for a re-identification network provided by Embodiment 2 of the present invention.
With reference to Fig. 2, this method comprises the following steps:
Step S101: obtain a batch of training data, the batch comprising N pictures, where N is a positive integer;
Optionally, for each training batch, the batch of training data includes N pictures, N = P × K, where P is the number of different pedestrians and K is the number of different photos of each pedestrian, and the different photos of each pedestrian are placed consecutively. In this way, the K × K blocks on the diagonal of the calculated distance matrix contain the positive sample distances, and the remaining entries are negative sample distances.
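As an illustration of this batch layout, the following numpy sketch (with hypothetical names; `person_images` is assumed to map each pedestrian's identifier ID to its list of picture names) draws P pedestrians with K pictures each and places same-ID pictures consecutively:

```python
import numpy as np

def build_batch_index(person_images, P, K, rng=None):
    """Pick P pedestrians and K pictures each; same-ID pictures are placed
    consecutively, so positive pairs sit in K x K blocks on the diagonal
    of the resulting N x N distance matrix (N = P * K)."""
    if rng is None:
        rng = np.random.default_rng(0)
    persons = rng.choice(sorted(person_images), size=P, replace=False)
    batch, labels = [], []
    for pid in persons:
        picks = rng.choice(person_images[pid], size=K, replace=False)
        batch.extend(picks)              # K consecutive pictures of this pedestrian
        labels.extend([pid] * K)
    return batch, np.array(labels)
```

The consecutive placement is what lets the training step later read positive pair distances off the diagonal blocks without any extra bookkeeping.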
The convolutional neural network may be a custom convolutional neural network or a pre-trained convolutional neural network. If a custom convolutional neural network is used, whether to pre-process the batch of training data is chosen according to actual demand; if a pre-trained convolutional neural network is used, the batch of training data needs to be pre-processed. For example, with a pre-trained residual network, the concrete processing is: the picture is resized to a 224×224 pixel image in BGR channel format and, for example, the per-channel mean of the pre-training data is subtracted, as shown in formula (1):
I′ = I − μ_BGR  (1)
By formula (1), the batch of training data is pre-processed when a pre-trained convolutional neural network is used, so that the batch of training data meets the format requirements of the pictures.
To prevent the convolutional neural network from over-fitting the batch of training data, data augmentation is necessary; augmentation methods include horizontal flipping, blurring, random shearing, illumination variation, and so on.
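The named augmentations could be sketched as follows — a dependency-free approximation in which blurring is omitted and random shearing is approximated by a random crop padded back to the original size:

```python
import numpy as np

def augment(img, rng):
    """Sketch of augmentations on an H x W x 3 uint8 image: horizontal flip,
    random crop (padded back to size), and illumination variation."""
    out = img.astype(np.float32)
    if rng.random() < 0.5:
        out = out[:, ::-1, :]                          # horizontal flip
    h, w, _ = out.shape
    dy = rng.integers(0, h // 8 + 1)
    dx = rng.integers(0, w // 8 + 1)
    out = out[dy:, dx:, :]                             # random crop from one corner
    out = np.pad(out, ((0, dy), (0, dx), (0, 0)), mode="edge")  # restore size
    out = out * rng.uniform(0.8, 1.2)                  # illumination variation
    return np.clip(out, 0, 255).astype(np.uint8)
```

A production pipeline would typically draw these transforms per sample each epoch so the network rarely sees the exact same picture twice.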
Step S102: pass each picture through the convolutional neural network to obtain the feature vector corresponding to each picture;
Specifically, each picture is input into the convolutional neural network, and through the forward-propagation algorithm of the convolutional neural network the feature vector of that picture can be output; the N pictures are thus input into the convolutional neural network so as to calculate the feature vectors of the different pictures.
Step S103: calculate the pairwise distances between the feature vectors, and obtain a distance matrix from the distances;
Here, after the feature vector corresponding to each picture is calculated, the distance between each picture and the remaining N−1 pictures is calculated to obtain the distances between the different pictures, and the distance matrix M of size N × N is then obtained from those distances. It should be noted that the distance between each picture and the remaining N−1 pictures may be the Euclidean distance, the cosine distance, or the Mahalanobis distance, without limitation.
Step S104: obtain the positive sample pair distance and the negative sample pair distance from the distance matrix;
Here, according to the corresponding training labels, one pair is selected from the positive samples and one pair from the negative samples of the distance matrix M obtained by calculation, so that four pictures are finally selected as the sample group, and the boundary sample loss is calculated.
Step S105: obtain the loss of the convolutional neural network from the positive sample pair distance and the negative sample pair distance, and train the convolutional neural network according to the loss;
Step S106: acquire another batch of training data and repeat the above steps to train the convolutional neural network, until the loss of the convolutional neural network converges.
In the embodiment of the present invention, a batch of training data is obtained; the feature vectors of the N pictures in the batch are obtained, the pairwise distances between the feature vectors are calculated from each picture's feature vector, and the distance matrix is obtained; from the calculated distance matrix, the positive sample pair with the maximum distance and the negative sample pair with the minimum distance are selected, and these two boundary sample pairs are used to calculate the loss of the convolutional neural network so as to train the model. By learning the hardest positive and negative sample pairs when computing the loss of the convolutional neural network, the generalization ability of the convolutional neural network model can be increased and the recognition precision improved.
Further, with reference to Fig. 3, step S103 comprises the following steps:
Step S201: perform regularization on the feature vector corresponding to each picture to obtain the regularized feature vector corresponding to each picture;
Specifically, after the feature vector corresponding to each picture is calculated, the extracted feature vectors need to be regularized. For example, if the feature vectors of the first picture and the second picture obtained after the convolutional neural network are f1 and f2 respectively, the feature vectors are regularized according to formula (2):
f_n = f / |f|  (2)
where |f| is the modulus of feature vector f; the regularized feature vectors of f1 and f2 are then f_n1 and f_n2.
Step S202: calculate, from each picture's regularized feature vector, the distance between each picture and the other N−1 pictures, so as to obtain the distance matrix.
Further, in step S202, the distance between different pictures is calculated according to formula (3):
d(picture 1, picture 2) = ||f_n1 − f_n2||  (3)
where d(picture 1, picture 2) is the distance between the first picture and the second picture, f_n1 is the regularized feature vector of the first picture, and f_n2 is the regularized feature vector of the second picture.
Specifically, the distance between each picture and the other N−1 pictures can be calculated by formula (3), and the distance matrix M of size N × N is then obtained from the calculated distances. The distance calculated by formula (3) may be the Euclidean distance.
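Formulas (2) and (3) together amount to unit-length normalization followed by pairwise Euclidean distances; a minimal numpy sketch:

```python
import numpy as np

def distance_matrix(features, eps=1e-12):
    """Regularize each feature vector to unit length (formula (2)) and
    compute the N x N Euclidean distance matrix M (formula (3))."""
    fn = features / (np.linalg.norm(features, axis=1, keepdims=True) + eps)
    M = np.linalg.norm(fn[:, None, :] - fn[None, :, :], axis=2)
    return fn, M
```

Normalizing first bounds every distance to [0, 2], which makes a fixed margin threshold α meaningful across batches.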
Further, each picture has a corresponding training label, the training label comprising the identifier ID of the identified object. With reference to Fig. 4, step S104 comprises the following steps:
Step S301: choose from the distance matrix the distances with identical identifier IDs and the distances with different identifier IDs;
Optionally, the training data includes N pictures, and each picture corresponds to one training label; the training label represents the identifier ID of the identified object, and the identifier ID can be represented by a nonnegative integer. The nonnegative integer is converted into a one-hot vector, in which exactly one element is 1 and the other elements are 0; the element that is 1 indicates which object is identified, i.e. the identifier ID. For example, the first picture corresponds to the first training label, expressed as the one-hot vector [1, 0, 0, ..., 0]; the second picture corresponds to the second training label, expressed as [0, 1, 0, ..., 0]; the third picture corresponds to the third training label, expressed as [0, 0, 1, ..., 0]; and so on for the other pictures. The dimension of the vector equals the total number of identifier ID classes, which is not repeated here.
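The one-hot conversion described above can be sketched as:

```python
import numpy as np

def to_one_hot(ids, num_classes):
    """Turn nonnegative-integer identifier IDs into one-hot label vectors:
    exactly one element per row is 1, marking which object is identified."""
    one_hot = np.zeros((len(ids), num_classes))
    one_hot[np.arange(len(ids)), ids] = 1.0
    return one_hot
```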
Step S302: regroup the distances with identical identifier IDs to obtain the positive sample distance matrix;
Step S303: regroup the distances with different identifier IDs to obtain the negative sample distance matrix;
Step S304: choose the maximum distance from the positive sample distance matrix as the positive sample pair distance;
Step S305: choose the minimum distance from the negative sample distance matrix as the negative sample pair distance.
Specifically, from the calculated distance matrix M and according to the training labels, the distances corresponding to positive sample pairs with identical identifier IDs are regrouped to obtain the positive sample distance matrix M_P, and the distances corresponding to negative sample pairs with different IDs are regrouped to obtain the negative sample distance matrix M_N; then the maximum distance max(M_P) — the least alike positive pair — is chosen from the positive sample distance matrix, and the minimum distance min(M_N) — the most alike negative pair — is chosen from the negative sample distance matrix. In this way, one pair is selected from the positive samples and one pair from the negative samples, four pictures are finally selected as the sample group, and the boundary sample loss is defined.
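The regrouping and selection described above can be sketched as follows, masking ineligible entries with ±infinity so that self-distances on the diagonal are never picked as positives:

```python
import numpy as np

def hardest_pairs(M, ids):
    """Regroup distances by training label: same-ID entries form the positive
    sample distance matrix M_P, different-ID entries the negative sample
    distance matrix M_N; return max(M_P) and min(M_N)."""
    same = ids[:, None] == ids[None, :]
    off_diag = ~np.eye(len(ids), dtype=bool)
    M_P = np.where(same & off_diag, M, -np.inf)   # positive pair distances only
    M_N = np.where(~same, M, np.inf)              # negative pair distances only
    return M_P.max(), M_N.min()
```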
Further, with reference to Fig. 5, step S105 comprises the following steps:
Step S401: obtain the boundary sample loss from the positive sample pair distance and the negative sample pair distance;
Step S402: obtain the loss of the convolutional neural network from the boundary sample loss and the weighted average of the classification losses.
Here, the classification loss is calculated as the cross entropy between the picture classification result and the one-hot vector.
Further, in step S401, the boundary sample loss is calculated according to formula (4):
L_e = {max(M_P) − min(M_N) + α}_+  (4)
where L_e is the boundary sample loss, max(M_P) is the positive sample pair distance, min(M_N) is the negative sample pair distance, and α is a manually set boundary threshold.
Specifically, by calculating the boundary sample loss L_e, positive samples and negative samples can be separated in the feature space.
When calculating the loss of the convolutional neural network, four pictures are used, whereas the traditional method uses a triplet, in which one picture is shared. The present application selects one pair from the positive samples and one pair from the negative samples, that is, it chooses the hardest samples for the calculation. Here α is a boundary threshold greater than 0, which can be set to 0.3, and {a}_+ is the nonlinear function max(0, a); the boundary sample loss L_e can separate positive samples and negative samples in the feature space. However, since the present application chooses the hardest samples for the loss calculation, it is easy to over-fit the training set in this way, so L_ID is used as a constraint, so as to obtain the loss of the convolutional neural network.
Further, in step S402, the loss of the convolutional neural network is calculated according to formula (5):
Loss = λLID + (1 − λ)Le (5)
wherein Loss is the loss of the convolutional neural network, LID is the weighted average of the classification losses of the N pictures, Le is the boundary sample loss, and λ is a weight parameter, λ ∈ (0, 1). λ may be set to 0.5.
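Formula (5) as a minimal sketch (λ = 0.5 follows the value suggested above; the function name is illustrative):

```python
def network_loss(l_id, l_e, lam=0.5):
    """Formula (5): weighted combination of the classification loss L_ID and
    the boundary sample loss Le; lam plays the role of the weight λ ∈ (0, 1)."""
    assert 0.0 < lam < 1.0
    return lam * l_id + (1.0 - lam) * l_e
```

For instance, with LID = 1.0 and Le = 2.3, the combined loss at λ = 0.5 is 1.65.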
Embodiment three:
Fig. 6 is a flowchart of the re-identification method of a re-identification network provided by Embodiment three of the present invention.
With reference to Fig. 6, the method comprises the following steps:
Step S501: obtaining a picture to be queried and a set of pedestrian pictures to be searched;
Step S502: passing the picture to be queried and at least one picture in the set of pedestrian pictures to be searched through the trained convolutional neural network to obtain the feature vector of the picture to be queried and the feature vector of the at least one picture in the set of pedestrian pictures to be searched, wherein the trained convolutional neural network is obtained by the above training method for a re-identification network;
Step S503: calculating the distance between the feature vector of the picture to be queried and the feature vector of the at least one picture in the set of pedestrian pictures to be searched;
Step S504: determining the identity of the pedestrian in the picture to be queried according to the distance.
After training by the training method for a re-identification network is completed, detection can be carried out by the re-identification method of the re-identification network: the distance between the feature vector of the picture to be queried and the feature vector of the at least one picture in the set of pedestrian pictures to be searched is calculated, and the minimum distance is chosen among the calculated distances; if the minimum distance is less than a set threshold, the corresponding picture is the most similar picture in the set of pedestrian pictures to be searched and shows the same pedestrian as the picture to be queried.
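The detection step just described can be sketched as follows (an illustrative sketch; Euclidean distance between feature vectors is an assumption, as are the function and parameter names):

```python
import numpy as np

def re_identify(query_feat, gallery_feats, threshold):
    """Find the gallery picture whose feature vector is closest to the query;
    if the minimum distance is below the threshold, treat it as the same
    pedestrian and return its index, otherwise return None."""
    q = np.asarray(query_feat, dtype=float)
    g = np.asarray(gallery_feats, dtype=float)
    d = np.linalg.norm(g - q, axis=1)   # distance to every gallery picture
    i = int(d.argmin())                 # the most similar gallery picture
    return i if d[i] < threshold else None
```

Raising the threshold makes the matcher more permissive; below the threshold no identity is asserted.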
Embodiment four:
The embodiment of the present invention also provides a training device for a re-identification network, which is mainly used for performing the training method for a re-identification network provided in the above content of the embodiments of the present invention. The training device for a re-identification network provided by the embodiment of the present invention is specifically introduced below.
Fig. 7 is a schematic diagram of the training device for a re-identification network provided by Embodiment four of the present invention.
With reference to Fig. 7, the device comprises a batch training data acquisition module 10, a first feature vector acquisition module 20, a distance matrix acquisition module 30, a sample pair distance acquisition module 40 and a training module 50.
The batch training data acquisition module 10 is used for obtaining batch training data, the batch training data comprising N pictures, wherein N is a positive integer;
the first feature vector acquisition module 20 is used for passing each picture through the convolutional neural network respectively to obtain the feature vector corresponding to each picture;
the distance matrix acquisition module 30 is used for calculating the pairwise distances between the feature vectors and obtaining a distance matrix according to the distances;
the sample pair distance acquisition module 40 is used for obtaining the positive sample pair distance and the negative sample pair distance according to the distance matrix;
the training module 50 is used for obtaining the loss of the convolutional neural network according to the positive sample pair distance and the negative sample pair distance, and training the convolutional neural network according to the loss; another batch of training data is then reacquired and the training of the convolutional neural network is repeated until the loss of the convolutional neural network converges.
Further, each picture corresponds respectively to a training label, the training label comprising the identifier ID of the identified object; the sample pair distance acquisition module 40 is specifically used for:
choosing, from the distance matrix, the distances with identical identifier IDs and the distances with different identifier IDs;
regrouping the distances with identical identifier IDs to obtain a positive sample distance matrix;
regrouping the distances with different identifier IDs to obtain a negative sample distance matrix;
choosing the maximum distance from the positive sample distance matrix as the positive sample pair distance;
choosing the minimum distance from the negative sample distance matrix as the negative sample pair distance.
Further, the training module 50 is specifically used for:
obtaining the boundary sample loss according to the positive sample pair distance and the negative sample pair distance;
obtaining the loss of the convolutional neural network according to a weighted average of the boundary sample loss and the classification loss.
Further, the training module 50 is specifically used for:
calculating the boundary sample loss according to the following formula:
Le = {max(MP) − min(MN) + α}+
wherein Le is the boundary sample loss, max(MP) is the positive sample pair distance, min(MN) is the negative sample pair distance, and α is a manually set boundary threshold.
Further, the training module 50 is specifically used for:
calculating the loss of the convolutional neural network according to the following formula:
Loss = λLID + (1 − λ)Le
wherein Loss is the loss of the convolutional neural network, LID is the weighted average of the classification losses of the N pictures, Le is the boundary sample loss, and λ is a weight parameter, λ ∈ (0, 1).
Further, the distance matrix acquisition module 30 is specifically used for:
performing regularization on the feature vector corresponding to each picture to obtain the regularized feature vector corresponding to each picture;
calculating the distance between each picture and the other N−1 pictures respectively according to the regularized feature vector corresponding to each picture, so as to obtain the distance matrix.
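A minimal sketch of the two operations performed by this module, assuming Euclidean distance between the regularized (L2-normalized) feature vectors — an assumption, since the patent's exact distance formula appears only as an image and is not reproduced in the text:

```python
import numpy as np

def distance_matrix(features):
    """Regularize (L2-normalize) each picture's feature vector, then compute
    the N x N matrix of pairwise distances between the regularized vectors."""
    f = np.asarray(features, dtype=float)
    f = f / np.linalg.norm(f, axis=1, keepdims=True)  # regularization step
    # for unit vectors: ||a - b||^2 = 2 - 2 a.b; clip guards tiny negatives
    d2 = np.clip(2.0 - 2.0 * (f @ f.T), 0.0, None)
    return np.sqrt(d2)
```

The resulting matrix is symmetric with a zero diagonal, as required for the positive/negative pair regrouping described above.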
Further, the distance matrix acquisition module 30 is specifically used for:
calculating the distance between each picture and the other N−1 pictures according to the following formula:
wherein d(picture 1, picture 2) is the distance between the first picture and the second picture, fn1 is the regularized feature vector of the first picture, and fn2 is the regularized feature vector of the second picture.
Further, the N pictures in the batch training data comprise P different pedestrians, each pedestrian comprising K different pictures, wherein the different pictures corresponding to each pedestrian are placed consecutively.
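The batch layout described here (P pedestrians × K pictures each, placed consecutively) can be sketched as follows (an illustrative sketch; the data structure mapping each pedestrian to its picture IDs is an assumption):

```python
import random

def make_batch(pictures_by_pedestrian, P, K):
    """Assemble a batch of N = P*K pictures: P different pedestrians, K
    different pictures each, with each pedestrian's pictures consecutive."""
    pedestrians = random.sample(list(pictures_by_pedestrian), P)
    batch = []
    for pid in pedestrians:
        # K distinct pictures of this pedestrian, placed back to back
        batch.extend(random.sample(pictures_by_pedestrian[pid], K))
    return batch
```

Consecutive placement makes the same-ID (positive) blocks of the distance matrix contiguous, which simplifies the regrouping into MP and MN.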
It is to be understood that, in some embodiments, the batch training data acquisition module 10, the first feature vector acquisition module 20, the distance matrix acquisition module 30, the sample pair distance acquisition module 40 and the training module 50 may also be implemented by the processor 102 in the electronic equipment 100 shown in Fig. 1.
The training device for a re-identification network provided by the embodiment of the present invention has the same technical features as the training method for a re-identification network provided by the above embodiments, so it can also solve the same technical problems and achieve the same technical effects.
Embodiment five:
The embodiment of the present invention also provides a re-identification device, which is mainly used for performing the re-identification method provided in the above content of the embodiments of the present invention. The re-identification device provided by the embodiment of the present invention is specifically introduced below.
Fig. 8 is a schematic diagram of the re-identification device provided by Embodiment five of the present invention.
With reference to Fig. 8, the device comprises: a picture acquisition module 70, a second feature vector acquisition module 80, a distance calculation module 90 and a determining module 91;
the picture acquisition module 70 is used for obtaining the picture to be queried and the set of pedestrian pictures to be searched;
the second feature vector acquisition module 80 is used for passing the picture to be queried and at least one picture in the set of pedestrian pictures to be searched through the trained convolutional neural network to obtain the feature vector of the picture to be queried and the feature vector of the at least one picture in the set of pedestrian pictures to be searched, wherein the trained convolutional neural network is obtained by the training method for a re-identification network;
the distance calculation module 90 is used for calculating the distance between the feature vector of the picture to be queried and the feature vector of the at least one picture in the set of pedestrian pictures to be searched;
the determining module 91 is used for determining the identity of the pedestrian in the picture to be queried according to the distance.
It is to be understood that, in some embodiments, the picture acquisition module 70, the second feature vector acquisition module 80, the distance calculation module 90 and the determining module 91 may also be implemented by the processor 102 in the electronic equipment 100 shown in Fig. 1.
The re-identification device provided by the embodiment of the present invention has the same technical features as the re-identification method provided by the above embodiments, so it can also solve the same technical problems and achieve the same technical effects.
The computer program product provided by the embodiment of the present invention comprises a computer-readable storage medium storing program code; the instructions included in the program code may be used for performing the method described in the preceding method embodiments; for specific implementation, reference may be made to the method embodiments, and details are not described here again.
It is apparent to those skilled in the art that, for convenience and simplicity of description, for the specific working processes of the system and the device described above, reference may be made to the corresponding processes in the preceding method embodiments, and details are not described here again.
In addition, in the description of the embodiments of the present invention, unless otherwise clearly defined and limited, the terms "installation", "connected" and "connection" should be interpreted broadly: for example, a connection may be a fixed connection, a detachable connection or an integral connection; it may be a mechanical connection or an electrical connection; it may be a direct connection, an indirect connection through an intermediary, or a connection inside two elements. For the person of ordinary skill in the art, the concrete meaning of the above terms in the present invention can be understood according to the concrete situation.
If the function is realized in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes instructions which cause a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The foregoing storage medium includes various media that can store program code, such as a USB flash disk, a mobile hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disc or an optical disc.
In the description of the present invention, it should be noted that terms indicating orientation or positional relationships, such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner" and "outer", are based on the orientations or positional relationships shown in the drawings; they are used merely for the convenience of describing the present invention and simplifying the description, rather than indicating or implying that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore are not to be construed as limiting the present invention. In addition, the terms "first", "second" and "third" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance.
Finally, it should be noted that the embodiments described above are merely specific embodiments of the present invention, used to illustrate the technical solution of the present invention rather than to limit it, and the scope of protection of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that any person skilled in the art may, within the technical scope disclosed by the present invention, still modify the technical solutions recorded in the foregoing embodiments, readily conceive of variations, or make equivalent substitutions for some of the technical features; and these modifications, variations or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all be covered within the scope of protection of the present invention. Therefore, the scope of protection of the present invention shall be subject to the scope of protection of the claims.

Claims (14)

1. A training method for a re-identification network, characterized in that the method comprises:
obtaining batch training data, the batch training data comprising N pictures, wherein N is a positive integer;
passing each picture through a convolutional neural network respectively to obtain the feature vector corresponding to each picture;
calculating the pairwise distances between the feature vectors, and obtaining a distance matrix according to the distances;
obtaining a positive sample pair distance and a negative sample pair distance according to the distance matrix;
obtaining the loss of the convolutional neural network according to the positive sample pair distance and the negative sample pair distance, and training the convolutional neural network according to the loss;
reacquiring another batch of training data and repeating the above steps to train the convolutional neural network, until the loss of the convolutional neural network converges.
2. The training method for a re-identification network according to claim 1, characterized in that each picture corresponds respectively to a training label, the training label comprising the identifier ID of the identified object, and the obtaining a positive sample pair distance and a negative sample pair distance according to the distance matrix comprises:
choosing, from the distance matrix, the distances with identical identifier IDs and the distances with different identifier IDs;
regrouping the distances with identical identifier IDs to obtain a positive sample distance matrix;
regrouping the distances with different identifier IDs to obtain a negative sample distance matrix;
choosing the maximum distance from the positive sample distance matrix as the positive sample pair distance;
choosing the minimum distance from the negative sample distance matrix as the negative sample pair distance.
3. The training method for a re-identification network according to claim 1, characterized in that the obtaining the loss of the convolutional neural network according to the positive sample pair distance and the negative sample pair distance comprises:
obtaining a boundary sample loss according to the positive sample pair distance and the negative sample pair distance;
obtaining the loss of the convolutional neural network according to a weighted average of the boundary sample loss and a classification loss.
4. The training method for a re-identification network according to claim 3, characterized in that the obtaining a boundary sample loss according to the positive sample pair distance and the negative sample pair distance comprises:
calculating the boundary sample loss according to the following formula:
Le = {max(MP) − min(MN) + α}+
wherein Le is the boundary sample loss, max(MP) is the positive sample pair distance, min(MN) is the negative sample pair distance, and α is a manually set boundary threshold.
5. The training method for a re-identification network according to claim 3, characterized in that the obtaining the loss of the convolutional neural network according to a weighted average of the boundary sample loss and a classification loss comprises:
calculating the loss of the convolutional neural network according to the following formula:
Loss = λLID + (1 − λ)Le
wherein Loss is the loss of the convolutional neural network, LID is the weighted average of the classification losses of the N pictures, Le is the boundary sample loss, and λ is a weight parameter, λ ∈ (0, 1).
6. The training method for a re-identification network according to claim 1, characterized in that the calculating the pairwise distances between the feature vectors and obtaining a distance matrix according to the distances comprises:
performing regularization on the feature vector corresponding to each picture to obtain the regularized feature vector corresponding to each picture;
calculating the distance between each picture and the other N−1 pictures respectively according to the regularized feature vector corresponding to each picture, so as to obtain the distance matrix.
7. The training method for a re-identification network according to claim 6, characterized in that the calculating the distance between each picture and the other N−1 pictures respectively according to the regularized feature vector corresponding to each picture, so as to obtain the distance matrix, comprises:
calculating the distance between each picture and the other N−1 pictures according to the following formula:
wherein d(picture 1, picture 2) is the distance between the first picture and the second picture, fn1 is the regularized feature vector of the first picture, and fn2 is the regularized feature vector of the second picture.
8. The training method for a re-identification network according to claim 1, characterized in that the N pictures in the batch training data comprise P different pedestrians, each pedestrian comprising K different pictures, wherein the different pictures corresponding to each pedestrian are placed consecutively.
9. A re-identification method, characterized in that the method comprises:
obtaining a picture to be queried and a set of pedestrian pictures to be searched;
passing the picture to be queried and at least one picture in the set of pedestrian pictures to be searched through a trained convolutional neural network to obtain the feature vector of the picture to be queried and the feature vector of the at least one picture in the set of pedestrian pictures to be searched, wherein the trained convolutional neural network is obtained by the training method for a re-identification network according to any one of claims 1 to 8;
calculating the distance between the feature vector of the picture to be queried and the feature vector of the at least one picture in the set of pedestrian pictures to be searched;
determining the identity of the pedestrian in the picture to be queried according to the distance.
10. A training device for a re-identification network, characterized in that the device comprises:
a batch training data acquisition module, for obtaining batch training data, the batch training data comprising N pictures, wherein N is a positive integer;
a first feature vector acquisition module, for passing each picture through a convolutional neural network respectively to obtain the feature vector corresponding to each picture;
a distance matrix acquisition module, for calculating the pairwise distances between the feature vectors and obtaining a distance matrix according to the distances;
a sample pair distance acquisition module, for obtaining a positive sample pair distance and a negative sample pair distance according to the distance matrix;
a training module, for obtaining the loss of the convolutional neural network according to the positive sample pair distance and the negative sample pair distance, and training the convolutional neural network according to the loss; and for reacquiring another batch of training data and repeating the training of the convolutional neural network until the loss of the convolutional neural network converges.
11. A re-identification device, characterized in that the device comprises:
a picture acquisition module, for obtaining a picture to be queried and a set of pedestrian pictures to be searched;
a second feature vector acquisition module, for passing the picture to be queried and at least one picture in the set of pedestrian pictures to be searched through a trained convolutional neural network to obtain the feature vector of the picture to be queried and the feature vector of the at least one picture in the set of pedestrian pictures to be searched, wherein the trained convolutional neural network is obtained by the training method for a re-identification network according to any one of claims 1 to 8;
a distance calculation module, for calculating the distance between the feature vector of the picture to be queried and the feature vector of the at least one picture in the set of pedestrian pictures to be searched;
a determining module, for determining the identity of the pedestrian in the picture to be queried according to the distance.
12. A training system for a re-identification network, characterized by comprising a memory and a processor, the memory storing a computer program that can be run on the processor, wherein the processor, when executing the computer program, realizes the steps of the method according to any one of claims 1 to 8.
13. A re-identification system, characterized by comprising a memory and a processor, the memory storing a computer program that can be run on the processor, wherein the processor, when executing the computer program, realizes the steps of the method according to claim 9.
14. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, wherein the computer program, when run by a processor, performs the steps of the method according to any one of claims 1 to 8 or claim 9.
CN201711360237.0A 2017-12-15 2017-12-15 Training and re-recognition method, device and system for re-recognition network Active CN108108754B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711360237.0A CN108108754B (en) 2017-12-15 2017-12-15 Training and re-recognition method, device and system for re-recognition network

Publications (2)

Publication Number Publication Date
CN108108754A true CN108108754A (en) 2018-06-01
CN108108754B CN108108754B (en) 2022-07-22

Family

ID=62216607

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711360237.0A Active CN108108754B (en) 2017-12-15 2017-12-15 Training and re-recognition method, device and system for re-recognition network

Country Status (1)

Country Link
CN (1) CN108108754B (en)

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109034109A (en) * 2018-08-16 2018-12-18 新智数字科技有限公司 A kind of pedestrian based on clustering algorithm recognition methods and device again
CN109063768A (en) * 2018-08-01 2018-12-21 北京旷视科技有限公司 Vehicle recognition methods, apparatus and system again
CN109063790A (en) * 2018-09-27 2018-12-21 北京地平线机器人技术研发有限公司 Object identifying model optimization method, apparatus and electronic equipment
CN109063776A (en) * 2018-08-07 2018-12-21 北京旷视科技有限公司 Image identifies network training method, device and image recognition methods and device again again
CN109063607A (en) * 2018-07-17 2018-12-21 北京迈格威科技有限公司 The method and device that loss function for identifying again determines
CN109145991A (en) * 2018-08-24 2019-01-04 北京地平线机器人技术研发有限公司 Image group generation method, image group generating means and electronic equipment
CN109145766A (en) * 2018-07-27 2019-01-04 北京旷视科技有限公司 Model training method, device, recognition methods, electronic equipment and storage medium
CN109165589A (en) * 2018-08-14 2019-01-08 北京颂泽科技有限公司 Vehicle based on deep learning recognition methods and device again
CN109214271A (en) * 2018-07-17 2019-01-15 北京迈格威科技有限公司 The method and device that loss function for identifying again determines
CN109242029A (en) * 2018-09-19 2019-01-18 广东省智能制造研究所 Identify disaggregated model training method and system
CN109344842A (en) * 2018-08-15 2019-02-15 天津大学 A kind of pedestrian's recognition methods again based on semantic region expression
CN109472248A (en) * 2018-11-22 2019-03-15 广东工业大学 A kind of pedestrian recognition methods, system and electronic equipment and storage medium again
CN109684950A (en) * 2018-12-12 2019-04-26 联想(北京)有限公司 A kind of processing method and electronic equipment
CN109697457A (en) * 2018-11-26 2019-04-30 上海图森未来人工智能科技有限公司 Object weighs the training method of identifying system, object recognition methods and relevant device again
CN109977822A (en) * 2019-03-15 2019-07-05 广州市网星信息技术有限公司 Data supply method, model training method, device, system, equipment and medium
CN110414326A (en) * 2019-06-18 2019-11-05 平安科技(深圳)有限公司 Sample data processing method, device, computer installation and storage medium
CN110414550A (en) * 2019-06-14 2019-11-05 北京迈格威科技有限公司 Training method, device, system and the computer-readable medium of human face recognition model
CN110765943A (en) * 2019-10-23 2020-02-07 深圳市商汤科技有限公司 Network training and recognition method and device, electronic equipment and storage medium
CN111027434A (en) * 2018-12-29 2020-04-17 北京地平线机器人技术研发有限公司 Training method and device for pedestrian recognition model and electronic equipment
CN111178403A (en) * 2019-12-16 2020-05-19 北京迈格威科技有限公司 Method and device for training attribute recognition model, electronic equipment and storage medium
CN111488798A (en) * 2020-03-11 2020-08-04 北京迈格威科技有限公司 Fingerprint identification method and device, electronic equipment and storage medium
CN111626212A (en) * 2020-05-27 2020-09-04 腾讯科技(深圳)有限公司 Method and device for identifying object in picture, storage medium and electronic device
CN112085041A (en) * 2019-06-12 2020-12-15 北京地平线机器人技术研发有限公司 Training method and training device for neural network and electronic equipment
CN112257553A (en) * 2020-10-20 2021-01-22 大连理工大学 Pedestrian re-identification method based on cyclic matrix
CN112381147A (en) * 2020-11-16 2021-02-19 虎博网络技术(上海)有限公司 Dynamic picture similarity model establishing method and device and similarity calculating method and device
WO2021043168A1 (en) * 2019-09-05 2021-03-11 华为技术有限公司 Person re-identification network training method and person re-identification method and apparatus
CN112613341A (en) * 2020-11-25 2021-04-06 北京迈格威科技有限公司 Training method and device, fingerprint identification method and device, and electronic device
CN112749565A (en) * 2019-10-31 2021-05-04 华为终端有限公司 Semantic recognition method and device based on artificial intelligence and semantic recognition equipment
CN113361568A (en) * 2021-05-18 2021-09-07 北京迈格威科技有限公司 Target identification method, device and electronic system
CN112446270B (en) * 2019-09-05 2024-05-14 华为云计算技术有限公司 Training method of pedestrian re-recognition network, pedestrian re-recognition method and device

Citations (9)

Publication number Priority date Publication date Assignee Title
US20130303085A1 (en) * 2012-05-11 2013-11-14 Research In Motion Limited Near field communication tag data management
US20150133344A1 (en) * 2008-09-12 2015-05-14 University Of Washington Sequence tag directed subassembly of short sequencing reads into long sequencing reads
CN105808732A (en) * 2016-03-10 2016-07-27 北京大学 Integration target attribute identification and precise retrieval method based on depth measurement learning
CN106778527A (en) * 2016-11-28 2017-05-31 中通服公众信息产业股份有限公司 A kind of improved neutral net pedestrian recognition methods again based on triple losses
CN106803063A (en) * 2016-12-21 2017-06-06 华中科技大学 A kind of metric learning method that pedestrian recognizes again
CN106919909A (en) * 2017-02-10 2017-07-04 华中科技大学 The metric learning method and system that a kind of pedestrian recognizes again
CN107038448A (en) * 2017-03-01 2017-08-11 中国科学院自动化研究所 Target detection model building method
CN107103281A (en) * 2017-03-10 2017-08-29 中山大学 Face identification method based on aggregation Damage degree metric learning
CN107330396A (en) * 2017-06-28 2017-11-07 华中科技大学 A kind of pedestrian's recognition methods again based on many attributes and many strategy fusion study

Patent Citations (10)

Publication number Priority date Publication date Assignee Title
US20150133344A1 (en) * 2008-09-12 2015-05-14 University Of Washington Sequence tag directed subassembly of short sequencing reads into long sequencing reads
US20170247687A1 (en) * 2008-09-12 2017-08-31 University Of Washington Error detection in sequence tag directed subassemblies of short sequencing reads
US20130303085A1 (en) * 2012-05-11 2013-11-14 Research In Motion Limited Near field communication tag data management
CN105808732A (en) * 2016-03-10 2016-07-27 北京大学 Integration target attribute identification and precise retrieval method based on depth measurement learning
CN106778527A (en) * 2016-11-28 2017-05-31 中通服公众信息产业股份有限公司 A kind of improved neutral net pedestrian recognition methods again based on triple losses
CN106803063A (en) * 2016-12-21 2017-06-06 华中科技大学 A kind of metric learning method that pedestrian recognizes again
CN106919909A (en) * 2017-02-10 2017-07-04 华中科技大学 The metric learning method and system that a kind of pedestrian recognizes again
CN107038448A (en) * 2017-03-01 2017-08-11 中国科学院自动化研究所 Target detection model building method
CN107103281A (en) * 2017-03-10 2017-08-29 中山大学 Face identification method based on aggregation Damage degree metric learning
CN107330396A (en) * 2017-06-28 2017-11-07 华中科技大学 A kind of pedestrian's recognition methods again based on many attributes and many strategy fusion study

Non-Patent Citations (3)

Title
GUANGTAO ZHAI et al.: "Perceptual image quality assessment: a survey", Science China (Information Sciences), no. 11, 10 December 2016 (2016-12-10) *
QI WANG et al.: "Discriminative fine-grained network for vehicle re-identification using two-stage re-ranking", Science China (Information Sciences), no. 11, 2 August 2016 (2016-08-02) *
LYU Yongqiang et al.: "Small-sample food image recognition fusing triplet convolutional neural networks and relation networks", Computer Science (《计算机科学》), no. 01, 15 March 2015 (2015-03-15) *

Cited By (43)

Publication number Priority date Publication date Assignee Title
CN109063607B (en) * 2018-07-17 2022-11-25 北京迈格威科技有限公司 Method and device for determining loss function for re-identification
CN109063607A (en) * 2018-07-17 2018-12-21 北京迈格威科技有限公司 Method and device for determining a loss function for re-identification
CN109214271B (en) * 2018-07-17 2022-10-18 北京迈格威科技有限公司 Method and device for determining loss function for re-identification
CN109214271A (en) * 2018-07-17 2019-01-15 北京迈格威科技有限公司 Method and device for determining a loss function for re-identification
CN109145766B (en) * 2018-07-27 2021-03-23 北京旷视科技有限公司 Model training method and device, recognition method, electronic device and storage medium
CN109145766A (en) * 2018-07-27 2019-01-04 北京旷视科技有限公司 Model training method and device, recognition method, electronic device and storage medium
CN109063768A (en) * 2018-08-01 2018-12-21 北京旷视科技有限公司 Vehicle re-identification method, apparatus and system
CN109063776A (en) * 2018-08-07 2018-12-21 北京旷视科技有限公司 Image re-recognition network training method and device and image re-recognition method and device
CN109063776B (en) * 2018-08-07 2021-08-10 北京旷视科技有限公司 Image re-recognition network training method and device and image re-recognition method and device
CN109165589A (en) * 2018-08-14 2019-01-08 北京颂泽科技有限公司 Vehicle re-identification method and device based on deep learning
CN109344842A (en) * 2018-08-15 2019-02-15 天津大学 A pedestrian re-identification method based on semantic region representation
CN109034109B (en) * 2018-08-16 2021-03-23 新智数字科技有限公司 Pedestrian re-identification method and device based on clustering algorithm
CN109034109A (en) * 2018-08-16 2018-12-18 新智数字科技有限公司 Pedestrian re-identification method and device based on clustering algorithm
CN109145991A (en) * 2018-08-24 2019-01-04 北京地平线机器人技术研发有限公司 Image group generation method, image group generating means and electronic equipment
CN109242029A (en) * 2018-09-19 2019-01-18 广东省智能制造研究所 Recognition and classification model training method and system
CN109063790A (en) * 2018-09-27 2018-12-21 北京地平线机器人技术研发有限公司 Object recognition model optimization method, apparatus and electronic device
CN109472248A (en) * 2018-11-22 2019-03-15 广东工业大学 Pedestrian re-identification method and system, electronic equipment and storage medium
CN109472248B (en) * 2018-11-22 2022-03-25 广东工业大学 Pedestrian re-identification method and system, electronic equipment and storage medium
CN109697457A (en) * 2018-11-26 2019-04-30 上海图森未来人工智能科技有限公司 Training method for an object re-identification system, object re-identification method and related device
CN109684950A (en) * 2018-12-12 2019-04-26 联想(北京)有限公司 A processing method and electronic device
CN111027434B (en) * 2018-12-29 2023-07-11 北京地平线机器人技术研发有限公司 Training method and device of pedestrian recognition model and electronic equipment
CN111027434A (en) * 2018-12-29 2020-04-17 北京地平线机器人技术研发有限公司 Training method and device for pedestrian recognition model and electronic equipment
CN109977822A (en) * 2019-03-15 2019-07-05 广州市网星信息技术有限公司 Data supply method, model training method, device, system, equipment and medium
CN112085041A (en) * 2019-06-12 2020-12-15 北京地平线机器人技术研发有限公司 Training method and training device for neural network and electronic equipment
CN110414550A (en) * 2019-06-14 2019-11-05 北京迈格威科技有限公司 Training method, device, system and the computer-readable medium of human face recognition model
CN110414550B (en) * 2019-06-14 2022-07-29 北京迈格威科技有限公司 Training method, device and system of face recognition model and computer readable medium
CN110414326A (en) * 2019-06-18 2019-11-05 平安科技(深圳)有限公司 Sample data processing method, device, computer device and storage medium
CN110414326B (en) * 2019-06-18 2024-05-07 平安科技(深圳)有限公司 Sample data processing method, device, computer device and storage medium
WO2021043168A1 (en) * 2019-09-05 2021-03-11 华为技术有限公司 Person re-identification network training method and person re-identification method and apparatus
CN112446270B (en) * 2019-09-05 2024-05-14 华为云计算技术有限公司 Training method of pedestrian re-recognition network, pedestrian re-recognition method and device
CN110765943A (en) * 2019-10-23 2020-02-07 深圳市商汤科技有限公司 Network training and recognition method and device, electronic equipment and storage medium
CN112749565A (en) * 2019-10-31 2021-05-04 华为终端有限公司 Semantic recognition method and device based on artificial intelligence and semantic recognition equipment
CN111178403B (en) * 2019-12-16 2023-10-17 北京迈格威科技有限公司 Method, device, electronic equipment and storage medium for training attribute identification model
CN111178403A (en) * 2019-12-16 2020-05-19 北京迈格威科技有限公司 Method and device for training attribute recognition model, electronic equipment and storage medium
CN111488798B (en) * 2020-03-11 2023-12-29 天津极豪科技有限公司 Fingerprint identification method, fingerprint identification device, electronic equipment and storage medium
CN111488798A (en) * 2020-03-11 2020-08-04 北京迈格威科技有限公司 Fingerprint identification method and device, electronic equipment and storage medium
CN111626212A (en) * 2020-05-27 2020-09-04 腾讯科技(深圳)有限公司 Method and device for identifying object in picture, storage medium and electronic device
CN111626212B (en) * 2020-05-27 2023-09-26 腾讯科技(深圳)有限公司 Method and device for identifying object in picture, storage medium and electronic device
CN112257553A (en) * 2020-10-20 2021-01-22 大连理工大学 Pedestrian re-identification method based on circulant matrix
CN112381147A (en) * 2020-11-16 2021-02-19 虎博网络技术(上海)有限公司 Dynamic picture similarity model establishing method and device and similarity calculating method and device
CN112381147B (en) * 2020-11-16 2024-04-26 虎博网络技术(上海)有限公司 Dynamic picture similarity model establishment and similarity calculation method and device
CN112613341A (en) * 2020-11-25 2021-04-06 北京迈格威科技有限公司 Training method and device, fingerprint identification method and device, and electronic device
CN113361568A (en) * 2021-05-18 2021-09-07 北京迈格威科技有限公司 Target identification method, device and electronic system

Also Published As

Publication number Publication date
CN108108754B (en) 2022-07-22

Similar Documents

Publication Publication Date Title
CN108108754A (en) Re-identification network training method, re-identification method, device and system
US11495264B2 (en) Method and system of clipping a video, computing device, and computer storage medium
CN110473141A (en) Image processing method, device, storage medium and electronic equipment
JP6309549B2 (en) Deformable expression detector
CN108734162A (en) Target identification method, system, equipment and storage medium in commodity image
CN111541943B (en) Video processing method, video operation method, device, storage medium and equipment
WO2016144578A1 (en) Methods and systems for generating enhanced images using multi-frame processing
CN106650615B (en) An image processing method and terminal
CN110023989B (en) Sketch image generation method and device
CN103353881B (en) Method and device for searching application
CN111914908B (en) Image recognition model training method, image recognition method and related equipment
CN107689035A (en) A homography matrix determination method and device based on convolutional neural networks
CN107172354A (en) Video processing method, device, electronic equipment and storage medium
CN111126347B (en) Human eye state identification method, device, terminal and readable storage medium
CN109063691A (en) A face recognition base library optimization method and system
CN108875931A (en) Neural network training and image processing method, device and system
CN110929785A (en) Data classification method and device, terminal equipment and readable storage medium
CN107564063A (en) A virtual object display method and device based on convolutional neural networks
CN107918767A (en) Object detection method, device, electronic equipment and computer-readable medium
CN104539942B (en) Video shot-change detection method and device based on frame-difference clustering
CN109063776A (en) Image re-recognition network training method and device and image re-recognition method and device
WO2015064292A1 (en) Image feature amount-related processing system, processing method, and program
CN111382791B (en) Deep learning task processing method, image recognition task processing method and device
CN110956131B (en) Single-target tracking method, device and system
CN108289176A (en) One kind, which is taken pictures, searches topic method, searches topic device and terminal device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant