CN109117823A - A cross-scene pedestrian re-identification method based on a multilayer neural network - Google Patents

A cross-scene pedestrian re-identification method based on a multilayer neural network

Info

Publication number
CN109117823A
CN109117823A (application number CN201811010519.2A)
Authority
CN
China
Prior art keywords
sample
pedestrian
layer
neural network
multilayer neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811010519.2A
Other languages
Chinese (zh)
Inventor
Gu Xiaoqing (顾晓清)
Ni Tongguang (倪彤光)
Wang Hongyuan (王洪元)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changzhou University
Original Assignee
Changzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changzhou University filed Critical Changzhou University
Priority to CN201811010519.2A priority Critical patent/CN109117823A/en
Publication of CN109117823A publication Critical patent/CN109117823A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 - Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/50 - Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/56 - Extraction of image or video features relating to colour
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/467 - Encoded features or binary features, e.g. local binary patterns [LBP]

Abstract

The invention discloses a cross-scene pedestrian re-identification method based on a multilayer neural network, comprising the following steps: (1) capture pedestrian video from a camera in the current scene and extract video frames; (2) perform feature extraction and dimensionality reduction on the pedestrian images in the video frames obtained in step 1, form sample pairs to build the target-domain training set Xt, and obtain the test set Xo; (3) process the labeled data from a related scene and form sample pairs to build the source-domain training set Xs; (4) build the training set X, X = [Xs, Xt]; (5) train the multilayer neural network model using X; (6) use the model obtained in step 5 to identify the samples to be recognized in the test set Xo. The invention selects a multilayer neural network, which can realize complex nonlinear mappings, as the learning model, and uses the idea of transfer learning to add the labeled data of a related scene into the model learning of the new scene, so that the learning of the new scene is more accurate and effective.

Description

A cross-scene pedestrian re-identification method based on a multilayer neural network
Technical field
The present invention relates to the fields of computer vision and pattern recognition, and in particular to a cross-scene pedestrian re-identification method based on a multilayer neural network.
Background art
With the widespread deployment of cameras in public places, applications based on pedestrian images and video data have received increasing attention. One important application is pedestrian re-identification. Pedestrian re-identification refers to the technique of searching multi-period pedestrian video data captured by non-overlapping cameras for the same pedestrian. As public safety receives more and more attention, pedestrian re-identification has attracted growing interest. With the application of pedestrian re-identification in various new fields, an important practical problem is how to deploy a pedestrian re-identification system in a new scene. A new scene often lacks a large amount of labeled data, and labeling data is very time-consuming and labor-intensive; the shortage of labeled training data affects the establishment of a pedestrian re-identification model in the new scene and easily causes misidentification of targets.
An effective way to solve this problem is to introduce the idea of transfer learning. According to this idea, if the data of a related scene (the source domain) is used to assist the training of the current scene (the target domain), the performance of the system can be improved. Although the data of these related fields may be outdated, or their distribution may be inconsistent with that of the current scene, the valuable information they contain can help the data of the current scene establish an effective identification system. At present, more and more researchers try to study pedestrian re-identification models with transfer learning methods. For example, Wang Hongyuan et al. invented an asymmetric multi-task discriminative model based on transfer learning for pedestrian re-identification (A pedestrian re-identification method based on transfer learning, patent application number 201711112527.3); Zhang Dongping et al. invented a pedestrian re-identification method based on transfer learning (Pedestrian re-identification method based on transfer learning, patent application number 201510445055.8), which first learns the model parameters on the source-domain data and then transfers them to the target domain to obtain the model of the target-domain data. However, both methods have defects. The former projects the data from the original space to a new feature space through a linear transformation, which often cannot capture the nonlinear structure of pedestrian images. In the latter, the target-domain data does not participate in building the source-domain model, so the difference between the two domains is not fully considered; in addition, this method does not consider the class imbalance of the data, which easily leads to insufficient local learning ability of the model. In view of the current status and shortcomings of pedestrian re-identification methods, the present invention proposes a cross-scene pedestrian re-identification method based on a multilayer neural network.
Summary of the invention
The main object of the present invention is to exploit the nonlinear and high-precision processing capability of neural networks and to provide a cross-scene pedestrian re-identification method based on a multilayer neural network that is easy to operate and highly reliable, with an emphasis on improving the low identification accuracy of the cross-scene pedestrian re-identification models established by existing methods.
The present invention adopts the following technical solution:
A cross-scene pedestrian re-identification method based on a multilayer neural network, comprising the following steps:
Step 1. Capture pedestrian video from a camera in the current scene and extract video frames;
Step 2. Perform feature extraction and dimensionality reduction on the pedestrian images in the video frames obtained in step 1 to obtain the labeled target-domain training data {(xti, yti), i = 1, ..., Nt} under the current scene and the unlabeled target-domain test set {xoi, i = 1, ..., No}, where xti and xoi are target-domain training samples and target-domain test samples respectively, yti is the label of xti, and Nt and No are the numbers of target-domain training samples and test samples respectively; according to the sample labels, training samples with the same label form positive sample pairs and training samples with different labels form negative sample pairs, and the sample pairs constitute the target-domain training set Xt;
Step 3. Apply the feature extraction and dimensionality reduction method described in step 2 to the labeled data {(xsi, ysi), i = 1, ..., Ns} from a related scene; according to the sample labels, training samples with the same label form positive sample pairs and training samples with different labels form negative sample pairs, and the sample pairs constitute the source-domain training set Xs, where xsi is a source-domain training sample, ysi is the label of xsi, and Ns is the number of source-domain training samples;
Step 4. Build the training set X, X = [Xs, Xt];
Step 5. Train the multilayer neural network model using the training set X;
Step 6. Use the multilayer neural network model obtained in step 5 to re-identify the sample z to be re-identified in the target-domain test set Xo;
In step 2, the feature extraction and dimensionality reduction of the pedestrian images in the video frames obtained in step 1 is characterized in that: the image from which features are to be extracted is first normalized and divided into blocks of 16 × 16 pixels, with a 50% overlap between neighbouring blocks of the image in both the horizontal and vertical directions; features are then extracted from the partitioned image: the RGB, YCbCr and HS color features, 8 color channels in total, are extracted and 16-bin histograms are built, and HOG and LBP features are extracted and their histograms are built; each pedestrian image has 75 blocks in total, and according to the above feature extraction each block yields a 484-dimensional feature vector; finally, principal component analysis is used to reduce the dimensionality of the high-dimensional pedestrian image features.
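As an illustration of this block-wise feature pipeline, the following sketch extracts overlapping 16 × 16 blocks, builds 16-bin histograms over the 8 color channels, adds HOG and LBP histograms, and applies PCA. Python, scikit-image and scikit-learn are illustration choices, not part of the patent, and parameters the text does not fix (HOG cell size, LBP radius, histogram ranges) are assumptions, so the per-block dimensionality will generally differ from the 484 dimensions stated above.

```python
# Hedged sketch of the block-wise feature extraction and PCA reduction.
import numpy as np
from skimage import img_as_float
from skimage.color import rgb2ycbcr, rgb2hsv, rgb2gray
from skimage.feature import hog, local_binary_pattern
from sklearn.decomposition import PCA

def block_features(img_rgb, block=16, stride=8, bins=16):
    """Color/HOG/LBP histograms from 16x16 blocks with 50% overlap (stride 8)."""
    img = img_as_float(img_rgb)                       # normalized RGB in [0, 1]
    gray = rgb2gray(img)
    # 8 color channels: R, G, B, Y, Cb, Cr, H, S
    channels = np.dstack([img, rgb2ycbcr(img), rgb2hsv(img)[..., :2]])
    h, w = gray.shape
    feats = []
    for y in range(0, h - block + 1, stride):         # 50% vertical overlap
        for x in range(0, w - block + 1, stride):     # 50% horizontal overlap
            patch = channels[y:y + block, x:x + block]
            gpatch = gray[y:y + block, x:x + block]
            f = [np.histogram(patch[..., c], bins=bins)[0]   # 16-bin color hists
                 for c in range(patch.shape[2])]
            f.append(hog(gpatch, orientations=9,             # HOG histogram
                         pixels_per_cell=(8, 8), cells_per_block=(1, 1)))
            lbp = local_binary_pattern(gpatch, P=8, R=1, method="uniform")
            f.append(np.histogram(lbp, bins=10, range=(0, 10))[0])  # LBP histogram
            feats.append(np.concatenate(f))
    return np.concatenate(feats)          # 75 blocks for a 128x48 image

def reduce_dim(feature_matrix, n_components=300):
    """PCA reduction of stacked per-image vectors (n_components <= n_images)."""
    return PCA(n_components=n_components).fit_transform(feature_matrix)
```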
In step 5, the multilayer neural network model is trained using the training set X, characterized in that the multilayer neural network consists of an input layer, multiple hidden layers and an output layer, where the first layer is the input layer, the second to the M-th layers are hidden layers, and the last layer, i.e. the (M+1)-th layer, is the output layer; the layers are fully connected, i.e. every neuron of one layer is connected to every neuron of the next layer.
Training the multilayer neural network model using the training set X comprises the following steps:
Step 5.1. Initialize the weight matrices W(m) and bias vectors b(m) of layers 1 to M, where b(m) is initialized as the zero vector and each component W(m)ij of W(m) follows a uniform distribution,
where W(m)ij denotes the element in the i-th row and j-th column of W(m); when m = 1, n equals the number of neurons in the first layer of the neural network, and when m = 2, ..., M, n equals the number of neurons in the (m-1)-th layer of the neural network;
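A minimal sketch of this initialization is given below; the exact uniform range is given by a formula that is not reproduced in this text, so the common 1/sqrt(n) bound is assumed purely for illustration (Python and NumPy are likewise illustration choices).

```python
import numpy as np

def init_layers(sizes, rng=None):
    """sizes = [n_input, n_layer_1, ..., n_layer_M]; returns lists (W, b)."""
    rng = rng or np.random.default_rng(0)
    W, b = [], []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        limit = 1.0 / np.sqrt(n_in)                    # assumed uniform bound
        W.append(rng.uniform(-limit, limit, size=(n_out, n_in)))  # W(m)
        b.append(np.zeros(n_out))                      # b(m) initialized to 0
    return W, b
```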
Step 5.2. The first layer of the multilayer neural network receives the input data set, i.e. the training set X;
Step 5.3. In each iteration, the output h(1) of the first layer of the multilayer neural network is computed as h(1) = φ(W(1)x + b(1)),
where φ(·) is a nonlinear activation function, which can be the tanh sigmoid function, and x denotes any sample in X;
Step 5.4. Take h(1) as the input of the hidden layers and propagate forward layer by layer; each layer takes the output of the previous layer as its input, and the output h(m) of the m-th layer of the multilayer neural network is computed as h(m) = φ(W(m)h(m-1) + b(m)),
where the output of the M-th layer is denoted f(x).
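The forward pass of steps 5.3 and 5.4 can be sketched as follows; it reuses init_layers() from the previous sketch and uses the tanh activation mentioned in the embodiment.

```python
import numpy as np

def forward(W, b, x):
    """Propagate one sample through all M layers; returns f(x) and all h(m)."""
    h = np.asarray(x, dtype=float)
    hs = []
    for Wm, bm in zip(W, b):
        h = np.tanh(Wm @ h + bm)          # h(m) = phi(W(m) h(m-1) + b(m))
        hs.append(h)
    return h, hs                          # h is f(x), the M-th layer output
```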
Step 5.5. From the f(x) obtained in step 5.4, establish the cross-scene similarity measure SMCS (similarity measure for cross scenario) between the source domain and the target domain, formula (5),
where ||·||2 denotes the 2-norm; the first term of formula (5) is the mean of f(x) over all pedestrian samples of the target-domain training set, the second term is the mean of f(x) over the pedestrian samples of the positive pairs in the source domain, and the third term is the mean of f(x) over the pedestrian samples of the negative pairs in the source domain; the pedestrian samples of the positive pairs are denoted xsi+ and their number is Ns+, and the pedestrian samples of the negative pairs are denoted xsi- and their number is Ns - Ns+.
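The exact SMCS formula is not reproduced in this text; the sketch below only assumes that the measure combines the three means described above through squared 2-norm distances, so the specific combination is an assumption of this illustration.

```python
import numpy as np

def smcs(f_target, f_source_pos, f_source_neg):
    """f_* are arrays of network outputs f(x), one row per pedestrian sample."""
    mu_t  = f_target.mean(axis=0)        # mean over target-domain training samples
    mu_sp = f_source_pos.mean(axis=0)    # mean over source positive-pair samples
    mu_sn = f_source_neg.mean(axis=0)    # mean over source negative-pair samples
    # Assumed combination: small distance to positive pairs, large to negatives.
    return np.sum((mu_t - mu_sp) ** 2) - np.sum((mu_t - mu_sn) ** 2)
```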
Step 5.6. At the output layer of the multilayer neural network, establish the optimization function J of the multilayer neural network model, formula (6),
where g(·) is the logistic loss function, α and β are positive constants, the sample pairs (xi, xj) in the training sets of the source domain Xs and the target domain Xt participate in the computation of the first term of formula (6), and i and j denote the indices of the samples; if the sample pair (xi, xj) is a positive pair, then lij = 1; if the sample pair (xi, xj) is a negative pair, then lij is defined by formula (7),
where τ is a positive constant and ||·||F denotes the Frobenius norm.
Formula (6) is solved for W(m) and b(m) by the gradient descent method, W(m) ← W(m) - λ·∂J/∂W(m), b(m) ← b(m) - λ·∂J/∂b(m),
where λ is the learning rate; differentiating formula (6) yields the specific formulas for ∂J/∂W(m) and ∂J/∂b(m) given below, and the values of W(m) and b(m) are adjusted layer by layer from the output layer toward the input layer,
where T denotes matrix transposition; when m = 1, 2, ..., M-1, the specific formulas for ∂J/∂W(m) and ∂J/∂b(m) are as follows:
When m = M, the specific formulas for ∂J/∂W(m) and ∂J/∂b(m) are as follows:
where Θ denotes the element-wise (dot) product, and c and the remaining intermediate variables are calculated with the following formulas.
Step 5.7. Using the values of W(m) and b(m) obtained in step 5.6, compute the value of formula (6) and denote it Jk, where k is the index of the current iteration; compute its difference from the optimization function value Jk-1 obtained in the previous iteration and judge whether |Jk - Jk-1| is less than ε or whether k exceeds the maximum number of iterations; if not, go to step 5.3; if so, the current W(m) and b(m) are the optimal solution of the model, the training of the model is finished, and the method proceeds to step 6.
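The outer loop of steps 5.6 and 5.7 (gradient descent with the stopping rule |Jk - Jk-1| < ε or k exceeding the maximum number of iterations) can be sketched as below. The gradient formulas themselves are not reproduced in this text, so loss_and_grads is a hypothetical callback standing in for them; the default hyperparameters mirror the values used in the embodiment below (λ = 0.3, ε = 0.01, 1000 iterations).

```python
def train(W, b, loss_and_grads, lr=0.3, eps=0.01, max_iter=1000):
    """Gradient descent on J; loss_and_grads(W, b) -> (J, dJ/dW list, dJ/db list)."""
    J_prev = None
    for k in range(1, max_iter + 1):
        J, dW, db = loss_and_grads(W, b)      # J_k and its gradients
        for m in range(len(W)):               # update every layer's parameters
            W[m] -= lr * dW[m]                # W(m) <- W(m) - lr * dJ/dW(m)
            b[m] -= lr * db[m]                # b(m) <- b(m) - lr * dJ/db(m)
        if J_prev is not None and abs(J - J_prev) < eps:
            break                             # |J_k - J_{k-1}| < eps
        J_prev = J
    return W, b
```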
In step 6, the multilayer neural network model obtained in step 5 is used to re-identify the sample z in the target-domain test set Xo, characterized in that: the sample z to be re-identified and the sample of each image to be matched in the data set Xo are substituted into the multilayer neural network model obtained in step 5, and the corresponding value is computed for each xoi ∈ Xo; if the value is less than the threshold τ, it is determined that the target pedestrian in the target pedestrian image and the pedestrian to be matched in the test set are the same pedestrian, and if the value is greater than the threshold τ, it is determined that they are not the same pedestrian.
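Step 6 can be sketched as the following matching routine, which reuses forward() from the earlier sketch; the patent's matching formula is not reproduced here, so the squared 2-norm distance between f(z) and f(xoi) is an assumed stand-in, thresholded at τ as described above.

```python
import numpy as np

def re_identify(W, b, z, X_o, tau):
    """Return indices of gallery samples in X_o judged to match the probe z."""
    f_z, _ = forward(W, b, z)
    matches = []
    for i, x_oi in enumerate(X_o):
        f_o, _ = forward(W, b, x_oi)
        d = np.sum((f_z - f_o) ** 2)     # assumed distance between f(z) and f(x_oi)
        if d < tau:                      # below threshold tau: same pedestrian
            matches.append(i)
    return matches
```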
Compared with the prior art, the invention has the following advantages:
1) High identification accuracy: this pedestrian re-identification method selects a multilayer neural network, which can realize arbitrarily complex nonlinear mappings, as the specific intelligent learning model; the number of layers of the multilayer neural network can be set freely according to actual needs, and the invention has strong robustness, memory capability, nonlinear mapping capability and powerful self-learning capability.
2) The invention uses the idea of transfer learning to add the labeled data of a related scene into the model learning of the new scene, which helps the learning of the model in the new scene; in addition, the SMCS measure uses the label information of the source-domain samples to capture the global and local distribution differences between the source domain and the target domain, so that the learning of the new scene is more accurate and effective.
3) Simple and convenient prediction: the method realizes automatic pedestrian re-identification, and the user operation is simple and convenient.
Detailed description of the invention
Fig. 1 is the overall flow chart of the cross-scene pedestrian re-identification method based on a multilayer neural network of the present invention.
Specific embodiment
To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is described in more detail below in conjunction with a specific embodiment and with reference to the attached drawing.
The overall implementation flow of the invention is shown in Fig. 1; the specific implementation is as follows:
Step 1. Capture pedestrian video from a camera in the current scene and extract video frames; this embodiment uses images from the i-LIDS data set as the video frame images of the current scene.
Step 2. Perform feature extraction and dimensionality reduction on the pedestrian images in the video frames of the i-LIDS data set. The i-LIDS data set contains 119 pedestrians and 476 images in total. In the specific implementation, the images from which features are to be extracted are first normalized, each image being resized to 128 × 48 pixels; this embodiment uses region blocks of 16 × 16 pixels, each block of the image being shifted by 8 pixels at a time, giving a 50% overlap in both the horizontal and vertical directions. Features are then extracted from the partitioned images: the RGB, YCbCr and HS color features, 8 color channels in total, are extracted and 16-bin histograms are built, and HOG and LBP features are extracted and their histograms are built. Each pedestrian image thus has 75 blocks; according to the above feature extraction, each block yields a 484-dimensional feature vector, so each image has 36300 feature dimensions in total. This embodiment uses principal component analysis to reduce the high-dimensional pedestrian image features to 300 dimensions. The i-LIDS data set is divided into a training set and a test set: in this embodiment the target-domain training set Xt consists of 76 pedestrian images in total, and according to the sample labels, training samples with the same label form 15 positive pairs and training samples with different labels form 23 negative pairs; the target-domain test set Xo contains 400 images in total, and the target-domain training and test samples are denoted xti and xoi respectively.
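The construction of the positive and negative sample pairs described above might look like the following sketch; the pair counts (15 positive, 23 negative) mirror this embodiment, but the random sampling strategy itself is an assumption of this illustration, and the names y_t, pos_t and neg_t are hypothetical.

```python
import itertools
import random

def build_pairs(labels, n_pos, n_neg, seed=0):
    """labels: per-image pedestrian identities; returns lists of index pairs."""
    rng = random.Random(seed)
    all_pairs = list(itertools.combinations(range(len(labels)), 2))
    pos = [(i, j) for i, j in all_pairs if labels[i] == labels[j]]   # same label
    neg = [(i, j) for i, j in all_pairs if labels[i] != labels[j]]   # different
    return (rng.sample(pos, min(n_pos, len(pos))),
            rng.sample(neg, min(n_neg, len(neg))))

# Target-domain pairs for this embodiment: 15 positive and 23 negative pairs.
# pos_t, neg_t = build_pairs(y_t, n_pos=15, n_neg=23)
```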
Step 3. Apply the feature extraction and dimensionality reduction method described in step 2 to the labeled data {(xsi, ysi), i = 1, ..., Ns} from a related scene, where xsi is a source-domain training sample, ysi is the label of xsi, and Ns is the number of source-domain training samples. This embodiment uses the CAVIAR data set as the source-domain training set: 50 pedestrians and 1000 images of the CAVIAR data set are selected, and according to the sample labels, training samples with the same label form 250 positive pairs and training samples with different labels form 250 negative pairs; the sample pairs constitute the source-domain training set Xs.
Step 4. Build the training set X, X = [Xs, Xt].
Step 5. Train the multilayer neural network model using the training set X. The multilayer neural network consists of an input layer, multiple hidden layers and an output layer, where the first layer is the input layer, the second to the M-th layers are hidden layers, and the last layer, i.e. the (M+1)-th layer, is the output layer; the layers are fully connected, i.e. every neuron of one layer is connected to every neuron of the next layer. The detailed steps of training the multilayer neural network model using the training set X are as follows:
Step 5.1. Initialize the weight matrices W(m) and bias vectors b(m) of layers 1 to M, where b(m) is initialized as the zero vector and each component W(m)ij of W(m) follows a uniform distribution,
where W(m)ij denotes the element in the i-th row and j-th column of W(m); when m = 1, n equals the number of neurons in the first layer, and when m = 2, ..., M, n equals the number of neurons in the (m-1)-th layer. In this embodiment M = 3, the neural network has 4 layers in total, and the numbers of neurons from the first layer to the top layer are 400, 300, 200 and 200 (see the short snippet below);
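Reusing init_layers() from the earlier sketch, the embodiment's architecture would be set up as follows; the mapping of the 400-neuron input layer onto the extracted features is not detailed in the text, so this is purely illustrative.

```python
# M = 3 weight matrices for a four-layer network sized 400 -> 300 -> 200 -> 200.
W, b = init_layers([400, 300, 200, 200])
```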
Step 5.2. The first layer of the multilayer neural network receives the input data set, i.e. the training set X;
Step 5.3. In each iteration, the output h(1) of the first layer of the multilayer neural network is computed as h(1) = φ(W(1)x + b(1)),
where x denotes any sample in X and φ(·) is a nonlinear activation function; in this embodiment φ(·) is the tanh function, φ(z) = (e^z - e^(-z)) / (e^z + e^(-z)).
Step 5.4. Take h(1) as the input of the hidden layers and propagate forward layer by layer; each layer takes the output of the previous layer as its input, and the outputs of the second and third layers of the multilayer neural network are computed as h(2) = φ(W(2)h(1) + b(2)) and h(3) = φ(W(3)h(2) + b(3)) respectively.
Step 5.5. From the f(x) obtained in step 5.4, establish the cross-scene similarity measure SMCS (similarity measure for cross scenario) between the source domain and the target domain, formula (5),
where ||·||2 denotes the 2-norm; the first term of formula (5) is the mean of f(x) over all pedestrian samples of the target domain, the second term is the mean of f(x) over the pedestrian samples of the positive pairs in the source domain, and the third term is the mean of f(x) over the pedestrian samples of the negative pairs in the source domain; the pedestrian samples of the positive pairs are denoted xsi+ and their number is Ns+, and the pedestrian samples of the negative pairs are denoted xsi- and their number is Ns - Ns+.
Step 5.6. At the output layer of the multilayer neural network, establish the optimization function J of the multilayer neural network model, formula (6),
where g(·) is the logistic loss function with parameter λ, λ = 1 in this embodiment; α and β are positive constants, α = 0.1 and β = 10 in this embodiment; the sample pairs (xi, xj) in the training sets of the source domain Xs and the target domain Xt participate in the computation of the first term of formula (6), and i and j denote the indices of the samples; if the sample pair (xi, xj) is a positive pair, then lij = 1; if the sample pair (xi, xj) is a negative pair, then lij is defined by formula (7),
where ||·||F denotes the Frobenius norm and τ is a positive constant, τ = 3 in this embodiment;
Formula (6) is solved for W(m) and b(m) by the gradient descent method, W(m) ← W(m) - λ·∂J/∂W(m), b(m) ← b(m) - λ·∂J/∂b(m),
where λ is the learning rate, λ = 0.3 in this embodiment; differentiating formula (6) yields the specific formulas for ∂J/∂W(m) and ∂J/∂b(m) given below, and the values of W(m) and b(m) are adjusted layer by layer from the output layer toward the input layer,
where T denotes matrix transposition; when m = 1, 2, ..., M-1, the specific formulas for ∂J/∂W(m) and ∂J/∂b(m) are as follows:
when m = M, the specific formulas for ∂J/∂W(m) and ∂J/∂b(m) are as follows:
where Θ denotes the element-wise (dot) product, and c and the remaining intermediate variables are calculated with the following formulas.
Step 5.7. Using the values of W(m) and b(m) obtained in step 5.6, compute the value of formula (6) and denote it Jk, where k is the index of the current iteration; compute its difference from the optimization function value Jk-1 obtained in the previous iteration and judge whether |Jk - Jk-1| is less than ε or whether k exceeds the maximum number of iterations; if not, go to step 5.3; if so, the current W(m) and b(m) are the optimal solution of the model, the training of the model is finished, and the method proceeds to step 6; in this embodiment ε = 0.01 and the maximum number of iterations is 1000.
Step 6. Use the multilayer neural network model obtained in step 5 to re-identify the sample z in the target-domain test set Xo: the sample z to be re-identified and the sample of each image to be matched in the data set Xo are substituted into the multilayer neural network model obtained in step 5, and the corresponding value is computed for each xoi ∈ Xo; if the value is less than the threshold τ, it is determined that the target pedestrian in the target pedestrian image and the pedestrian to be matched in the test set are the same pedestrian, and if the value is greater than the threshold τ, it is determined that they are not the same pedestrian.
In this embodiment, the method of the invention is compared with several existing mainstream pedestrian re-identification methods; the comparison results are shown in Table 1. As can be seen from Table 1, among all the compared methods the method of the present invention achieves the highest recognition accuracy, which has reached the advanced level in the field.
Table 1: Comparison of the recognition accuracy of the method of the present invention with KISSME, DDML, GPLMNN, cAMT-DCA and OurTransD
The examples discussed above are only intended to illustrate the present invention, not to limit it. Those skilled in the art can make various other modifications and changes according to the technical content disclosed by the present invention without departing from its essence, and these modifications and changes remain within the protection scope of the present invention.

Claims (1)

1. A cross-scene pedestrian re-identification method based on a multilayer neural network, characterized by comprising the following steps:
Step 1. Capture pedestrian video from a camera in the current scene and extract video frames;
Step 2. Perform feature extraction and dimensionality reduction on the pedestrian images in the video frames obtained in step 1 to obtain the labeled target-domain training data {(xti, yti), i = 1, ..., Nt} under the current scene and the unlabeled target-domain test set {xoi, i = 1, ..., No}, where xti and xoi are target-domain training samples and target-domain test samples respectively, yti is the label of xti, and Nt and No are the numbers of target-domain training samples and test samples respectively; according to the sample labels, training samples with the same label form positive sample pairs and training samples with different labels form negative sample pairs, and the sample pairs constitute the target-domain training set Xt;
Step 3. Apply the feature extraction and dimensionality reduction method described in step 2 to the labeled data {(xsi, ysi), i = 1, ..., Ns} from a related scene; according to the sample labels, training samples with the same label form positive sample pairs and training samples with different labels form negative sample pairs, and the sample pairs constitute the source-domain training set Xs, where xsi is a source-domain training sample, ysi is the label of xsi, and Ns is the number of source-domain training samples;
Step 4. Build the training set X, X = [Xs, Xt];
Step 5. Train the multilayer neural network model using the training set X;
Step 6. Use the multilayer neural network model obtained in step 5 to re-identify the sample z to be re-identified in the target-domain test set Xo;
wherein in step 2 the feature extraction and dimensionality reduction of the pedestrian images in the video frames obtained in step 1 is characterized in that: the image from which features are to be extracted is first normalized and divided into blocks of 16 × 16 pixels, with a 50% overlap between neighbouring blocks of the image in both the horizontal and vertical directions; features are then extracted from the partitioned image: the RGB, YCbCr and HS color features, 8 color channels in total, are extracted and 16-bin histograms are built, and HOG and LBP features are extracted and their histograms are built; each pedestrian image has 75 blocks in total, and according to the above feature extraction each block yields a 484-dimensional feature vector; finally, principal component analysis is used to reduce the dimensionality of the high-dimensional pedestrian image features;
wherein in step 5 the multilayer neural network model is trained using the training set X, characterized in that the multilayer neural network consists of an input layer, multiple hidden layers and an output layer, where the first layer is the input layer, the second to the M-th layers are hidden layers, and the last layer, i.e. the (M+1)-th layer, is the output layer; the layers are fully connected, i.e. every neuron of one layer is connected to every neuron of the next layer;
training the multilayer neural network model using the training set X comprises the following steps:
Step 5.1. Initialize the weight matrices W(m) and bias vectors b(m) of layers 1 to M, where b(m) is initialized as the zero vector and each component W(m)ij of W(m) follows a uniform distribution,
where W(m)ij denotes the element in the i-th row and j-th column of W(m); when m = 1, n equals the number of neurons in the first layer of the neural network, and when m = 2, ..., M, n equals the number of neurons in the (m-1)-th layer of the neural network;
Step 5.2. The first layer of the multilayer neural network receives the input data set, i.e. the training set X;
Step 5.3. In each iteration, the output h(1) of the first layer of the multilayer neural network is computed as h(1) = φ(W(1)x + b(1)),
where φ(·) is a nonlinear activation function, which can be the tanh sigmoid function, and x denotes any sample in X;
Step 5.4. Take h(1) as the input of the hidden layers and propagate forward layer by layer; each layer takes the output of the previous layer as its input, and the output h(m) of the m-th layer of the multilayer neural network is computed as h(m) = φ(W(m)h(m-1) + b(m)),
where the output of the M-th layer is denoted f(x);
Step 5.5. From the f(x) obtained in step 5.4, establish the cross-scene similarity measure SMCS (similarity measure for cross scenario) between the source domain and the target domain, formula (5),
where ||·||2 denotes the 2-norm; the first term of formula (5) is the mean of f(x) over all pedestrian samples of the target domain, the second term is the mean of f(x) over the pedestrian samples of the positive pairs in the source domain, and the third term is the mean of f(x) over the pedestrian samples of the negative pairs in the source domain; the pedestrian samples of the positive pairs are denoted xsi+ and their number is Ns+, and the pedestrian samples of the negative pairs are denoted xsi- and their number is Ns - Ns+;
Step 5.6. At the output layer of the multilayer neural network, establish the optimization function J of the multilayer neural network model, formula (6),
where g(·) is the logistic loss function, α and β are positive constants, the sample pairs (xi, xj) in the training sets of the source domain Xs and the target domain Xt participate in the computation of the first term of formula (6), and i and j denote the indices of the samples; if the sample pair (xi, xj) is a positive pair, then lij = 1; if the sample pair (xi, xj) is a negative pair, then lij is defined by formula (7),
where τ is a positive constant and ||·||F denotes the Frobenius norm;
Formula (6) is solved for W(m) and b(m) by the gradient descent method, W(m) ← W(m) - λ·∂J/∂W(m), b(m) ← b(m) - λ·∂J/∂b(m),
where λ is the learning rate; differentiating formula (6) yields the specific formulas for ∂J/∂W(m) and ∂J/∂b(m) given below, and the values of W(m) and b(m) are adjusted layer by layer from the output layer toward the input layer,
where T denotes matrix transposition; when m = 1, 2, ..., M-1, the specific formulas for ∂J/∂W(m) and ∂J/∂b(m) are as follows:
when m = M, the specific formulas for ∂J/∂W(m) and ∂J/∂b(m) are as follows:
where Θ denotes the element-wise (dot) product, and c and the remaining intermediate variables are calculated with the following formulas;
Step 5.7. Using the values of W(m) and b(m) obtained in step 5.6, compute the value of formula (6) and denote it Jk, where k is the index of the current iteration; compute its difference from the optimization function value Jk-1 obtained in the previous iteration and judge whether |Jk - Jk-1| is less than ε or whether k exceeds the maximum number of iterations; if not, go to step 5.3; if so, the current W(m) and b(m) are the optimal solution of the model, the training of the model is finished, and the method proceeds to step 6;
wherein in step 6 the multilayer neural network model obtained in step 5 is used to re-identify the sample z in the target-domain test set Xo, characterized in that: the sample z to be re-identified and the sample of each image to be matched in the data set Xo are substituted into the multilayer neural network model obtained in step 5, and the corresponding value is computed for each xoi ∈ Xo; if the value is less than the threshold τ, it is determined that the target pedestrian in the target pedestrian image and the pedestrian to be matched in the test set are the same pedestrian, and if the value is greater than the threshold τ, it is determined that the target pedestrian in the target pedestrian image and the pedestrian to be matched in the test set are not the same pedestrian.
CN201811010519.2A 2018-08-31 2018-08-31 A cross-scene pedestrian re-identification method based on a multilayer neural network Pending CN109117823A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811010519.2A CN109117823A (en) 2018-08-31 2018-08-31 A cross-scene pedestrian re-identification method based on a multilayer neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811010519.2A CN109117823A (en) 2018-08-31 2018-08-31 A cross-scene pedestrian re-identification method based on a multilayer neural network

Publications (1)

Publication Number Publication Date
CN109117823A true CN109117823A (en) 2019-01-01

Family

ID=64860364

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811010519.2A Pending CN109117823A (en) 2018-08-31 2018-08-31 A kind of across the scene pedestrian based on multilayer neural network knows method for distinguishing again

Country Status (1)

Country Link
CN (1) CN109117823A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111488760A (en) * 2019-01-25 2020-08-04 复旦大学 Few-sample pedestrian re-identification method based on deep multi-example learning
WO2020186914A1 (en) * 2019-03-20 2020-09-24 北京沃东天骏信息技术有限公司 Person re-identification method and apparatus, and storage medium
WO2020258714A1 (en) * 2019-06-24 2020-12-30 深圳云天励飞技术有限公司 Rider re-identification method, apparatus and device
CN113780135A (en) * 2021-08-31 2021-12-10 中国科学技术大学先进技术研究院 Cross-scene VOCs gas leakage detection method and system and storage medium
CN114758081A (en) * 2022-06-15 2022-07-15 之江实验室 Pedestrian re-identification three-dimensional data set construction method and device based on nerve radiation field

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107862300A (en) * 2017-11-29 2018-03-30 东华大学 A kind of descending humanized recognition methods of monitoring scene based on convolutional neural networks

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107862300A (en) * 2017-11-29 2018-03-30 东华大学 A kind of descending humanized recognition methods of monitoring scene based on convolutional neural networks

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
TONGGUANG NI et al.: "Discriminative deep transfer metric learning for cross-scenario person re-identification", Journal of Electronic Imaging *
WANG Chong et al.: "Person re-identification based on cross-scene transfer learning" (基于跨场景迁移学习的行人再识别), 《计算机工程与设计》 (Computer Engineering and Design) *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111488760A (en) * 2019-01-25 2020-08-04 复旦大学 Few-sample pedestrian re-identification method based on deep multi-example learning
CN111488760B (en) * 2019-01-25 2023-05-02 复旦大学 Few-sample pedestrian re-recognition method based on deep multi-example learning
WO2020186914A1 (en) * 2019-03-20 2020-09-24 北京沃东天骏信息技术有限公司 Person re-identification method and apparatus, and storage medium
CN111723611A (en) * 2019-03-20 2020-09-29 北京沃东天骏信息技术有限公司 Pedestrian re-identification method and device and storage medium
WO2020258714A1 (en) * 2019-06-24 2020-12-30 深圳云天励飞技术有限公司 Rider re-identification method, apparatus and device
CN113780135A (en) * 2021-08-31 2021-12-10 中国科学技术大学先进技术研究院 Cross-scene VOCs gas leakage detection method and system and storage medium
CN113780135B (en) * 2021-08-31 2023-08-04 中国科学技术大学先进技术研究院 Cross-scene VOCs gas leakage detection method, system and storage medium
CN114758081A (en) * 2022-06-15 2022-07-15 之江实验室 Pedestrian re-identification three-dimensional data set construction method and device based on nerve radiation field
WO2023093186A1 (en) * 2022-06-15 2023-06-01 之江实验室 Neural radiation field-based method and apparatus for constructing pedestrian re-identification three-dimensional data set

Similar Documents

Publication Publication Date Title
CN108596277B (en) Vehicle identity recognition method and device and storage medium
CN109117823A (en) A cross-scene pedestrian re-identification method based on a multilayer neural network
Li et al. Infrared and visible image fusion using a deep learning framework
CN106250870B (en) A kind of pedestrian's recognition methods again of joint part and global similarity measurement study
CN105469041B (en) Face point detection system based on multitask regularization and layer-by-layer supervision neural network
CN110599537A (en) Mask R-CNN-based unmanned aerial vehicle image building area calculation method and system
CN110517293A (en) Method for tracking target, device, system and computer readable storage medium
CN106650630A (en) Target tracking method and electronic equipment
CN110246181B (en) Anchor point-based attitude estimation model training method, attitude estimation method and system
CN108389220B (en) Remote sensing video image motion target real-time intelligent cognitive method and its device
CN105096307B (en) The method of detection object in paired stereo-picture
CN106446930A (en) Deep convolutional neural network-based robot working scene identification method
CN109978918A (en) A kind of trajectory track method, apparatus and storage medium
CN109410171B (en) Target significance detection method for rainy image
CN110176024B (en) Method, device, equipment and storage medium for detecting target in video
CN107689157B (en) Traffic intersection passable road planning method based on deep learning
CN108470178B (en) Depth map significance detection method combined with depth credibility evaluation factor
CN106529441B (en) Depth motion figure Human bodys' response method based on smeared out boundary fragment
CN104517095A (en) Head division method based on depth image
CN106127812A (en) A kind of passenger flow statistical method of non-gate area, passenger station based on video monitoring
CN113610046B (en) Behavior recognition method based on depth video linkage characteristics
CN110516512B (en) Training method of pedestrian attribute analysis model, pedestrian attribute identification method and device
Madani et al. A human-like visual-attention-based artificial vision system for wildland firefighting assistance
CN108875505A (en) Pedestrian neural network based recognition methods and device again
CN111612024A (en) Feature extraction method and device, electronic equipment and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20190101)