CN108121970A - Pedestrian re-identification method based on difference matrix and matrix metric - Google Patents

Pedestrian re-identification method based on difference matrix and matrix metric

Info

Publication number
CN108121970A
CN108121970A (application CN201711417699.1A)
Authority
CN
China
Prior art keywords
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201711417699.1A
Other languages
Chinese (zh)
Inventor
胡瑞敏
王正
兰佳梅
李嘉麒
梁超
陈军
陈宇静
渠慎明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201711417699.1A priority Critical patent/CN108121970A/en
Publication of CN108121970A publication Critical patent/CN108121970A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53Recognition of crowd images, e.g. recognition of crowd congestion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/30Scenes; Scene-specific elements in albums, collections or shared content, e.g. social network photos or video
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103Static body considered as a whole, e.g. static pedestrian or occupant recognition

Abstract

The present invention relates to a pedestrian re-identification method based on a difference matrix and a matrix metric; its objective is to exploit the differences between distinct pedestrians to improve re-identification performance. The method first obtains a feature description of each pedestrian, then converts the feature descriptor from a feature vector into a difference matrix. Exploiting the differences between distinct pedestrians, it introduces an intra-difference projection matrix L1 and an inter-difference projection matrix L2, so that the distance between two images is measured by a matrix metric instead of a vector metric. This metric not only uses the individual appearance information of a single pedestrian but also, at a deeper level, exploits the relations between different pedestrians, improving the accuracy of matching the same pedestrian across multiple cameras.

Description

Pedestrian re-identification method based on difference matrix and matrix metric
Technical field
The invention belongs to the field of surveillance video retrieval and relates to a pedestrian re-identification method, in particular to a pedestrian re-identification method based on a difference matrix and a matrix metric.
Background technology
China has invested vast resources in building city-wide video surveillance networks. The deployment and popularization of video surveillance systems have profoundly changed how public security organs solve criminal cases, and video investigation technology has seen great development and application. Yet the results do not always match the investment. In actual video investigations, large numbers of investigators must screen surveillance video recorded around the time and place of a crime, progressively widening the search scope, in order to find the trajectory and whereabouts of the same pedestrian target in video captured by multiple cameras, and then lock onto, investigate, and track the suspect; this consumes enormous manpower and time. The timeliness requirements of public security work drive the development of pedestrian re-identification technology. Pedestrian re-identification is the technology of using computer vision and machine learning methods to judge whether a pedestrian appearing in one camera has also appeared in other cameras. This technology helps video investigators quickly and accurately find the trajectory and whereabouts of a suspect; it is of great significance for public security departments in improving case-solving rates and safeguarding the lives and property of the people.
Existing pedestrian re-identification methods can be divided into two classes: the first class mainly constructs robust visual features and then measures similarity with a standard distance function (such as the Euclidean distance); the second class mainly learns a suitable metric to obtain a more accurate distance measure. All of these methods consider only the appearance variation of an individual pedestrian, without considering the difference relations between that pedestrian and other pedestrians.
Chinese patent document CN106548139A, published 2017-03-29, discloses a pedestrian re-identification method that mainly extracts color histograms with a sliding window during feature extraction; in metric computation, it computes the Euclidean distance between the feature of the target to be searched and the dimensions of the searched image feature vector that are not zero. This method does not consider the relations between different pedestrians, so its results still leave room for optimization.
Chinese patent document CN106599795A, published 2017-04-26, discloses a low-resolution pedestrian re-identification method based on learning a scale-distance gradient function; the invention introduces a scale-distance gradient function and generates feasible and infeasible scale-distance gradient functions for positive and negative samples respectively. This method learns a distance metric but does not consider the relations between different samples, so it also leaves room for optimization.
Chinese patent document CN105224937A, published 2016-01-06, discloses a fine-grained semantic-color pedestrian re-identification method based on body-part position constraints; the invention introduces fine-grained semantic color representations and body-part position constraint relations to improve re-identification performance. However, it involves only the part-position constraints of a single pedestrian and does not consider part-position constraints between different pedestrians, so it also leaves room for optimization.
Chinese patent document CN105930768A, published 2016-09-07, discloses a target re-identification method based on spatio-temporal constraints, in which the target description information includes visual features, cross-camera temporal features, and camera spatial features. That method mainly considers spatio-temporal constraints, which is a different research angle from the present difference-matrix and matrix-metric based pedestrian re-identification method.
Chinese patent document CN105138998A, published 2015-12-09, discloses a pedestrian re-identification method and system based on viewpoint-adaptive subspace learning; the method learns a transformation matrix through a viewpoint-adaptive subspace learning algorithm and then uses the transformation matrix for distance computation and re-identification. That method obtains a transformation matrix from viewpoint changes for metric computation, which is a different angle from the present difference-matrix based metric computation.
Summary of the invention
In view of the deficiencies of the prior art, the present invention provides a pedestrian re-identification method based on a difference matrix and a matrix metric. The method converts the feature descriptor from a feature vector into a difference matrix and, exploiting the intra-difference projection matrix and the inter-difference projection matrix of different pedestrians, transforms the distance measurement between image pairs from a vector metric into a matrix metric. This metric not only uses the individual appearance information of a single pedestrian but also, at a deeper level, exploits the relations between different pedestrians, improving the accuracy of matching the same pedestrian across multiple cameras.
The technical solution adopted in the present invention is:
A pedestrian re-identification method based on a difference matrix and a matrix metric, characterized by comprising the following steps:
Step 1: Convert the pedestrian feature descriptions under different cameras from vectors into difference matrix descriptions, specifically including:
Step 1.1: Define the feature descriptions under different cameras. Specifically: let O = {o_1, o_2, ..., o_M} denote the M labeled persons under the two cameras A and B. For pedestrian o_i, the feature under camera A or camera B is described as x_i^A or x_i^B, where N_f denotes the dimension of the feature vector; {x_i^A}_{i=1}^{M} and {x_i^B}_{i=1}^{M} denote the two training sets under camera A and camera B respectively, {x_p^A}_{p=1}^{N} denotes the test query data under camera A, and {x_q^B}_{q=1}^{N} denotes the test data under camera B, where N is the number of test data under camera B;
Step 1.2: Convert the feature descriptions from feature vectors into difference matrices. Specifically: given the features r_1, r_2, ..., r_{N_r} of the reference images under the corresponding camera, for the description x of an image I, the difference matrix of an image of camera A is described as X_p^A = [x_p^A - r_1, x_p^A - r_2, ..., x_p^A - r_{N_r}], and the difference matrix of an image of camera B is described as X_q^B = [x_q^B - r_1, x_q^B - r_2, ..., x_q^B - r_{N_r}]. In this way, the feature description is converted from a feature vector into a difference matrix;
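By way of illustration only (this code is not part of the patent; all function and variable names are ours), a minimal NumPy sketch of the vector-to-difference-matrix conversion of step 1.2 might look as follows:

```python
import numpy as np

def difference_matrix(x, refs):
    """Convert one feature vector into a difference matrix.

    x    : (N_f,) feature vector of a pedestrian image
    refs : (N_f, N_r) reference features, one column per reference image
    Returns the (N_f, N_r) matrix whose j-th column is x - refs[:, j].
    """
    return x[:, None] - refs

# Toy example: N_f = 4 feature dimensions, N_r = 3 reference images.
x_pA = np.random.rand(4)
refs_A = np.random.rand(4, 3)
X_pA = difference_matrix(x_pA, refs_A)  # shape (4, 3)
```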
Step 2: Introduce the intra-difference projection matrix and the inter-difference projection matrix of different pedestrians. Specifically, introduce the intra-difference projection matrix L_1 and the inter-difference projection matrix L_2 so that the differences between the same pedestrian become smaller and the differences between different pedestrians become larger, where N_r denotes the number of reference person images under each camera. For each image pair (X_p^A, X_q^B), a matrix distance is computed to represent the distance between the pedestrians; the matrix distance is computed with the Frobenius norm and expressed as:

$$d(X_p^A, X_q^B) = \left\| L_1 \left( X_p^A - X_q^B \right) L_2 \right\|_F^2;$$
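Continuing the sketch above (again an illustration under our own naming assumptions), the Frobenius-norm matrix distance is a few lines; L1 multiplies on the left and acts on the feature dimension, L2 multiplies on the right and acts on the reference dimension:

```python
import numpy as np

def matrix_distance(X_a, X_b, L1, L2):
    """d(X_a, X_b) = ||L1 (X_a - X_b) L2||_F^2 (squared Frobenius norm)."""
    D = L1 @ (X_a - X_b) @ L2
    return float(np.sum(D * D))
```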
Step 3: Learn the newly proposed matrix metric, specifically including:
Step 3.1: Construct the objective function of matrix metric learning, specifically:
For a pair of difference matrices X_i^A and X_i^B of the same person under different cameras, the inconsistency under different cameras can be effectively reduced; this part is called the consistency term. For another pair X_i^A and X_j^B, where i ≠ j, the distinguishability of the matrices is maintained; this part we call the discriminative term;
Step 3.2: Sparsify the inter-difference projection matrix. Specifically, for the inter-difference projection matrix L_2 of different pedestrians, consider that not all persons are useful for the differences: a small group of persons has strong discriminative ability and can reduce noise. We therefore make a sparse selection over the differences using the following norm; an L_{2,1} sparsity is imposed on the projection matrix L_2, and the formula is defined as follows:

$$E_{spr}(L_2) = \|L_2\|_{2,1} \quad (4)$$
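For reference, the L_{2,1} norm of equation (4) sums the Euclidean norms of the rows of L_2, so minimizing it drives whole rows to zero; a hypothetical helper (our naming, not the patent's):

```python
import numpy as np

def l21_norm(L2):
    """||L2||_{2,1}: sum of the Euclidean norms of the rows of L2.
    Rows driven to zero correspond to deselected differences."""
    return float(np.sum(np.linalg.norm(L2, axis=1)))
```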
Step 3.3: Obtain the final objective function. Specifically, E_con, E_dis, and E_spr are combined into one objective function

$$E(L_1, L_2) = E_{con}(L_1, L_2) + E_{dis}(L_1, L_2) + \mu E_{spr}(L_2) \quad (5)$$

where E_con(L_1, L_2) is the consistency term of L_1 and L_2, E_dis(L_1, L_2) is the discriminative term of L_1 and L_2, and μ is the weight parameter of the sparsity term E_spr(L_2);
Step 4: Perform the optimization, specifically including:
Step 4.1: Compute the gradients of the objective function; the formulas are as follows:

$$\frac{\partial E(L_1,L_2)}{\partial L_1} = \frac{\partial E_{con}(L_1,L_2)}{\partial L_1} + \frac{\partial E_{dis}(L_1,L_2)}{\partial L_1} \quad (6)$$

$$\frac{\partial E(L_1,L_2)}{\partial L_2} = \frac{\partial E_{con}(L_1,L_2)}{\partial L_2} + \frac{\partial E_{dis}(L_1,L_2)}{\partial L_2} + \mu \frac{\partial E_{spr}(L_2)}{\partial L_2} \quad (7)$$

where:

$$\frac{\partial E_{con}(L_1,L_2)}{\partial L_1} = \frac{2}{M} \sum_{i=1}^{M} L_1 Z_i L_2 L_2^T Z_i^T \quad (8)$$

$$\frac{\partial E_{con}(L_1,L_2)}{\partial L_2} = \frac{2}{M} \sum_{i=1}^{M} Z_i^T L_1^T L_1 Z_i L_2 \quad (9)$$

and:

$$\frac{\partial E_{dis}(L_1,L_2)}{\partial L_1} = \frac{2}{S} \sum_{k=1}^{S} g(e(s_k)) \left( L_1 U_k L_2 L_2^T U_k^T - L_1 V_k L_2 L_2^T V_k^T \right) \quad (10)$$

$$\frac{\partial E_{dis}(L_1,L_2)}{\partial L_2} = \frac{2}{S} \sum_{k=1}^{S} g(e(s_k)) \left( U_k^T L_1^T L_1 U_k L_2 - V_k^T L_1^T L_1 V_k L_2 \right) \quad (11)$$

$$\frac{\partial E_{spr}(L_2)}{\partial L_2} = 2 D L_2 \quad (12)$$

where

$$g(z) = \left( 1 + e^{-\beta z} \right)^{-1} \quad (13)$$

is the derivative of the logistic loss function l_β(z),

$$Z_i = X_i^A - X_i^B, \quad U_k = X_i^A - X_i^B, \quad V_k = X_i^A - X_j^B, \quad (14)(15)(16)$$

and D is a diagonal matrix with D_mm = 1/(2||l_m||_2), where l_m denotes the m-th row of L_2;
Step 4.2: Learn with an iterative optimization algorithm. This metric is learned with the following iterative updates:

$$L_1^{n+1} = L_1^n - \lambda_1 \nabla E(L_1) \quad (17)$$

$$L_2^{n+1} = L_2^n - \lambda_2 \nabla E(L_2) \quad (18)$$

where L_1^n and L_2^n denote the results at the n-th iteration, and λ_1 > 0, λ_2 > 0 are step sizes determined automatically at each gradient update. The iteration stops when the number of iterations reaches 1000 or when |E^{n+1} - E^n| < ε with ε = 1 × 10^{-8}.
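A compact sketch of this iterative scheme (illustrative only; fixed step sizes `lam1`/`lam2` stand in for the automatically determined ones, and `grad_L1`, `grad_L2`, `energy` stand in for equations (6), (7), and (5)):

```python
def learn_metric(L1, L2, grad_L1, grad_L2, energy,
                 lam1=1e-3, lam2=1e-3, max_iter=1000, eps=1e-8):
    """Gradient descent on E(L1, L2) with the stopping rule above:
    at most 1000 iterations, or |E^{n+1} - E^n| < eps."""
    E_prev = energy(L1, L2)
    for _ in range(max_iter):
        L1 = L1 - lam1 * grad_L1(L1, L2)  # update (17)
        L2 = L2 - lam2 * grad_L2(L1, L2)  # update (18)
        E_cur = energy(L1, L2)
        if abs(E_cur - E_prev) < eps:     # convergence test
            break
        E_prev = E_cur
    return L1, L2
```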
In the above pedestrian re-identification method based on a difference matrix and a matrix metric, the consistency term E_con and the discriminative term E_dis of step 3.1 are respectively defined as:

$$E_{con}(L_1, L_2) = \frac{1}{M} \sum_{i=1}^{M} d(X_i^A, X_i^B) \quad (1)$$

$$E_{dis}(L_1, L_2) = \frac{1}{S} \sum_{k=1}^{S} l_\beta(e(s_k)) \quad (2)$$

where l_β(z) and e(s_k) are defined as follows:

$$l_\beta(z) = \frac{1}{\beta} \log\left(1 + e^{\beta z}\right) \quad (3)$$

e(s_k) is defined as follows: a sample triple is defined as s_k = {X_i^A, X_i^B, X_j^B} with i ≠ j, S is the size of this collection, and each sample s_k must satisfy d(X_i^A, X_i^B) < d(X_i^A, X_j^B); the error function is accordingly expressed as e(s_k) = d(X_i^A, X_i^B) - d(X_i^A, X_j^B).
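For concreteness, the logistic loss (3) and the triplet error e(s_k) can be sketched as follows, reusing `matrix_distance` from the earlier sketch (illustrative code with our own helper names, not the patent's):

```python
import numpy as np

def logistic_loss(z, beta=1.0):
    """l_beta(z) = (1/beta) * log(1 + exp(beta*z)), equation (3)."""
    return float(np.log1p(np.exp(beta * z)) / beta)

def triplet_error(X_iA, X_iB, X_jB, L1, L2):
    """e(s_k) = d(X_i^A, X_i^B) - d(X_i^A, X_j^B); negative values mean
    the same-person pair is already closer than the cross-person pair."""
    return (matrix_distance(X_iA, X_iB, L1, L2)
            - matrix_distance(X_iA, X_jB, L1, L2))
```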
Compared with existing pedestrian re-identification techniques based on distance metrics, the present invention has the following advantages and beneficial effects: 1) unlike the prior art, the present invention considers not only the variation of the same pedestrian but also the variation relations between different pedestrians, enabling more effective re-identification; 2) the present invention transforms the distance measurement from a vector metric into a matrix metric, and this optimization at the distance-metric level gives the method strong extensibility and applicability.
Description of the drawings
Fig. 1 is the method for the present invention flow chart.
Fig. 2 is a simplified flow chart of the method of the present invention.
Specific embodiment
To help those of ordinary skill in the art understand and implement the present invention, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the implementation examples described here are only intended to illustrate and explain the present invention, not to limit it.
It should be noted that the present invention was supported by National Natural Science Foundation of China projects (U1611461, U1404618, 61671336), a National 863 Program project (2015AA016306), a Ministry of Public Security technical research plan project (2016JSYJA12), a Hubei Province technological innovation major project (2016AAA015), a Jiangsu Province Natural Science Foundation project (BK20160386), and a regional science and technology development plan project (172102210186).
The present invention is a pedestrian re-identification method based on a difference matrix and a matrix metric. The method converts the feature descriptor from a feature vector into a difference matrix and, using the intra-difference and inter-difference projection matrices between different pedestrians, transforms the distance measurement between image pairs from a vector metric into a matrix metric. This metric not only uses the individual appearance information of a single pedestrian but also, at a deeper level, exploits the relations between different pedestrians, improving the accuracy of matching the same pedestrian across multiple cameras.
Referring to Fig. 1, this embodiment uses MATLAB7 as the simulation experiment platform and is tested on the VIPeR data set. The VIPeR data set contains 632 outdoor pedestrian image pairs captured by two cameras, each image being 128*48 pixels. We randomly select three non-overlapping subsets on each data set: a training set, a test set, and a reference set. On the VIPeR data set, the selected reference set contains 100 image pairs, and the test and training sets contain 200 image pairs each.
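Such a random non-overlapping split can be reproduced with a few lines (an index-level sketch; the seed and variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
idx = rng.permutation(632)   # indices of the 632 VIPeR image pairs
ref_idx = idx[:100]          # 100 reference pairs
train_idx = idx[100:300]     # 200 training pairs
test_idx = idx[300:500]      # 200 test pairs
```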
The flow of the present invention is as follows:
Step 1: Convert the pedestrian feature descriptions under different cameras from vector descriptions into difference matrix descriptions; the specific implementation includes two sub-steps:
Step 1.1: Let O = {o_1, o_2, ..., o_M} denote the M labeled persons under the two cameras A and B. For pedestrian o_i, the feature under camera A (or camera B) is described as x_i^A (or x_i^B), where N_f denotes the dimension of the feature vector; {x_i^A}_{i=1}^{M} and {x_i^B}_{i=1}^{M} denote the two training sets under camera A and camera B respectively, {x_p^A}_{p=1}^{N} denotes the test query data under camera A, and {x_q^B}_{q=1}^{N} denotes the test data under camera B, where N is the number of test data under camera B;
Step 1.2: Given the features r_1, r_2, ..., r_{N_r} of the reference images under the corresponding camera, for the description x of an image I, the difference matrix of an image under camera A is described as X_p^A = [x_p^A - r_1, x_p^A - r_2, ..., x_p^A - r_{N_r}], and the difference matrix of an image under camera B is described as X_q^B = [x_q^B - r_1, x_q^B - r_2, ..., x_q^B - r_{N_r}]; in this way, the feature description is converted from a feature vector into a difference matrix;
Step 2: Introduce the intra-difference projection matrix L_1 and the inter-difference projection matrix L_2 between different pedestrians, so that the differences between the same pedestrian become smaller and the differences between different pedestrians become larger, where N_r denotes the number of reference person images under each camera. For each image pair (X_p^A, X_q^B), using the Frobenius norm, the new distance metric becomes

$$d(X_p^A, X_q^B) = \left\| L_1 \left( X_p^A - X_q^B \right) L_2 \right\|_F^2;$$
Step 3: Learn the newly proposed matrix metric; the specific implementation includes the following sub-steps:
Step 3.1: Construct the objective function of matrix metric learning. This objective function consists of two parts. For the difference matrices X_i^A and X_i^B of the same person under different cameras, the inconsistency under different cameras can be effectively reduced; we call this the consistency term. For another pair X_i^A and X_j^B, where i ≠ j, the distinguishability of the matrices is maintained; we call this the discriminative term. The consistency term E_con and the discriminative term E_dis can be respectively defined as:

$$E_{con}(L_1, L_2) = \frac{1}{M} \sum_{i=1}^{M} d(X_i^A, X_i^B) \quad (1)$$

$$E_{dis}(L_1, L_2) = \frac{1}{S} \sum_{k=1}^{S} l_\beta(e(s_k)) \quad (2)$$

where l_β(z) and e(s_k) are defined as follows:

$$l_\beta(z) = \frac{1}{\beta} \log\left(1 + e^{\beta z}\right) \quad (3)$$

e(s_k) is defined as follows: a sample triple is defined as s_k = {X_i^A, X_i^B, X_j^B} with i ≠ j, S is the size of this collection, and each sample s_k must satisfy d(X_i^A, X_i^B) < d(X_i^A, X_j^B); the error function is accordingly expressed as e(s_k) = d(X_i^A, X_i^B) - d(X_i^A, X_j^B);
Step 3.2: For the inter-difference projection matrix L_2, considering that not all persons are useful for the differences (a small group of persons has strong discriminative ability and can reduce noise), we make a sparse selection over the differences using the L_{2,1} norm; the sparsity formula is defined as follows:

$$E_{spr}(L_2) = \|L_2\|_{2,1} \quad (4)$$
Step 3.3: Finally, we combine E_con, E_dis, and E_spr into one objective function

$$E(L_1, L_2) = E_{con}(L_1, L_2) + E_{dis}(L_1, L_2) + \mu E_{spr}(L_2) \quad (5)$$
Step 4: The optimization of the algorithm is performed with gradient descent; the specific implementation includes the following sub-steps:
Step 4.1: Compute the gradients of the objective function; the formulas are as follows:

$$\frac{\partial E(L_1,L_2)}{\partial L_1} = \frac{\partial E_{con}(L_1,L_2)}{\partial L_1} + \frac{\partial E_{dis}(L_1,L_2)}{\partial L_1} \quad (6)$$

$$\frac{\partial E(L_1,L_2)}{\partial L_2} = \frac{\partial E_{con}(L_1,L_2)}{\partial L_2} + \frac{\partial E_{dis}(L_1,L_2)}{\partial L_2} + \mu \frac{\partial E_{spr}(L_2)}{\partial L_2} \quad (7)$$

where:

$$\frac{\partial E_{con}(L_1,L_2)}{\partial L_1} = \frac{2}{M} \sum_{i=1}^{M} L_1 Z_i L_2 L_2^T Z_i^T \quad (8)$$

$$\frac{\partial E_{con}(L_1,L_2)}{\partial L_2} = \frac{2}{M} \sum_{i=1}^{M} Z_i^T L_1^T L_1 Z_i L_2 \quad (9)$$

and:

$$\frac{\partial E_{dis}(L_1,L_2)}{\partial L_1} = \frac{2}{S} \sum_{k=1}^{S} g(e(s_k)) \left( L_1 U_k L_2 L_2^T U_k^T - L_1 V_k L_2 L_2^T V_k^T \right) \quad (10)$$

$$\frac{\partial E_{dis}(L_1,L_2)}{\partial L_2} = \frac{2}{S} \sum_{k=1}^{S} g(e(s_k)) \left( U_k^T L_1^T L_1 U_k L_2 - V_k^T L_1^T L_1 V_k L_2 \right) \quad (11)$$

$$\frac{\partial E_{spr}(L_2)}{\partial L_2} = 2 D L_2 \quad (12)$$

where

$$g(z) = \left( 1 + e^{-\beta z} \right)^{-1} \quad (13)$$

is the derivative of the logistic loss function l_β(z),

$$Z_i = X_i^A - X_i^B, \quad U_k = X_i^A - X_i^B, \quad V_k = X_i^A - X_j^B, \quad (14)(15)(16)$$

and D is a diagonal matrix with D_mm = 1/(2||l_m||_2), where l_m denotes the m-th row of L_2;
Step 4.2: This metric is learned with an iterative optimization algorithm; the optimization process is as follows:

$$L_1^{n+1} = L_1^n - \lambda_1 \nabla E(L_1) \quad (17)$$

$$L_2^{n+1} = L_2^n - \lambda_2 \nabla E(L_2) \quad (18)$$

where λ_1 > 0, λ_2 > 0 are step sizes determined automatically at each gradient update; the iteration stops when the number of iterations reaches 1000 or when |E^{n+1} - E^n| < ε with ε = 1 × 10^{-8}.
Step 5: Compute the CMC values after ranking optimization. Here the CMC value refers to the probability, over N queries, that the correct pedestrian target is contained in the top r returned results; for a given r, a higher CMC value indicates better pedestrian retrieval performance.
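A minimal sketch of this CMC computation (assuming a precomputed query-to-gallery distance matrix in which gallery index i holds the same identity as query index i; names are ours):

```python
import numpy as np

def cmc(dist, ranks=(1, 5, 10, 25)):
    """dist[i, j]: distance from query i to gallery j; the correct match
    for query i sits at gallery index i. Returns CMC values (%) at `ranks`."""
    order = np.argsort(dist, axis=1)  # gallery indices, best match first
    # 0-based rank at which the correct gallery item appears for each query
    hit_rank = np.argmax(order == np.arange(len(dist))[:, None], axis=1)
    return [100.0 * np.mean(hit_rank < r) for r in ranks]
```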
The above process performs k queries for each test sample and computes and outputs the average CMC value over the k queries; here k is 10. The average CMC values of the pedestrian re-identification methods of documents 1, 2, 3, and 4 are shown in Table 1. From Table 1 it can be seen that the retrieval performance of the ranking-optimized pedestrian re-identification method of the present invention is significantly improved.
Table 1. Average CMC values (%) on VIPeR for the top r = 1, 5, 10, and 25 returned results

Method                                     r=1    r=5    r=10   r=25
Document 1 method                          29.1   52.5   65.9   79.9
Document 2 method                          33.3   65.1   78.3   88.5
Document 3 + document 4 method             31.2   59.8   74.0   83.5
Document 3 method + the present method     37.3   67.4   80.3   89.5
It should be understood that the parts of this specification that are not described in detail belong to the prior art.
It should be understood that the above description of the preferred embodiment is relatively detailed and therefore should not be considered a limitation on the scope of patent protection of the present invention. Under the enlightenment of the present invention, those of ordinary skill in the art can make replacements or modifications without departing from the scope protected by the claims of the present invention, and these all fall within the protection scope of the present invention; the claimed scope of the present invention shall be determined by the appended claims.

Claims (2)

1. A pedestrian re-identification method based on a difference matrix and a matrix metric, characterized by comprising the following steps:
Step 1: Convert the pedestrian feature descriptions under different cameras from vectors into difference matrix descriptions, specifically including:
Step 1.1: Define the feature descriptions under different cameras. Specifically: let O = {o_1, o_2, ..., o_M} denote the M labeled persons under the two cameras A and B; for pedestrian o_i, the feature under camera A or camera B is described as x_i^A or x_i^B, where N_f denotes the dimension of the feature vector; {x_i^A}_{i=1}^{M} and {x_i^B}_{i=1}^{M} denote the two training sets under camera A and camera B respectively, {x_p^A}_{p=1}^{N} denotes the test query data under camera A, and {x_q^B}_{q=1}^{N} denotes the test data under camera B, where N is the number of test data under camera B;
Step 1.2: Convert the feature descriptions from feature vectors into difference matrices. Specifically: given the features r_1, r_2, ..., r_{N_r} of the reference images under the corresponding camera, for the description x of an image I, the difference matrix of an image of camera A is described as X_p^A = [x_p^A - r_1, x_p^A - r_2, ..., x_p^A - r_{N_r}], and the difference matrix of an image of camera B is described as X_q^B = [x_q^B - r_1, x_q^B - r_2, ..., x_q^B - r_{N_r}]; in this way, the feature description is converted from a feature vector into a difference matrix;
Step 2: Introduce the intra-difference projection matrix and the inter-difference projection matrix of different pedestrians. Specifically, introduce the intra-difference projection matrix L_1 and the inter-difference projection matrix L_2 of different pedestrians so that the differences between the same pedestrian become smaller and the differences between different pedestrians become larger, where N_r denotes the number of reference person images under each camera; for each image pair (X_p^A, X_q^B), a matrix distance is computed to represent the distance between the pedestrians, where the matrix distance is computed with the Frobenius norm and expressed as:
$$d(X_p^A, X_q^B) = \left\| L_1 \left( X_p^A - X_q^B \right) L_2 \right\|_F^2;$$
Step 3: Learn the newly proposed matrix metric, specifically including:
Step 3.1: Construct the objective function of matrix metric learning, specifically:
For a pair of difference matrices X_i^A and X_i^B of the same person under different cameras, the inconsistency under different cameras can be effectively reduced; this part is called the consistency term. For another pair X_i^A and X_j^B, where i ≠ j, the distinguishability of the matrices is maintained; this part we call the discriminative term;
Step 3.2: Sparsify the inter-difference projection matrix. Specifically, for the inter-difference projection matrix L_2 of different pedestrians, considering that not all persons are useful for the differences (a small group of persons has strong discriminative ability and can reduce noise), we make a sparse selection over the differences using the following norm; an L_{2,1} sparsity is imposed on the projection matrix L_2, and the formula is defined as follows:

$$E_{spr}(L_2) = \|L_2\|_{2,1} \quad (4)$$
Step 3.3: Obtain the final objective function. Specifically, E_con, E_dis, and E_spr are combined into one objective function

$$E(L_1, L_2) = E_{con}(L_1, L_2) + E_{dis}(L_1, L_2) + \mu E_{spr}(L_2) \quad (5)$$

where E_con(L_1, L_2) is the consistency term of L_1 and L_2, E_dis(L_1, L_2) is the discriminative term of L_1 and L_2, and μ is the weight parameter of the sparsity term E_spr(L_2);
Step 4: Perform the optimization, specifically including:
Step 4.1: Compute the gradients of the objective function; the formulas are as follows:
$$\frac{\partial E(L_1,L_2)}{\partial L_1} = \frac{\partial E_{con}(L_1,L_2)}{\partial L_1} + \frac{\partial E_{dis}(L_1,L_2)}{\partial L_1} \quad (6)$$

$$\frac{\partial E(L_1,L_2)}{\partial L_2} = \frac{\partial E_{con}(L_1,L_2)}{\partial L_2} + \frac{\partial E_{dis}(L_1,L_2)}{\partial L_2} + \mu \frac{\partial E_{spr}(L_2)}{\partial L_2} \quad (7)$$

where:

$$\frac{\partial E_{con}(L_1,L_2)}{\partial L_1} = \frac{2}{M} \sum_{i=1}^{M} L_1 Z_i L_2 L_2^T Z_i^T \quad (8)$$

$$\frac{\partial E_{con}(L_1,L_2)}{\partial L_2} = \frac{2}{M} \sum_{i=1}^{M} Z_i^T L_1^T L_1 Z_i L_2 \quad (9)$$

and:

$$\frac{\partial E_{dis}(L_1,L_2)}{\partial L_1} = \frac{2}{S} \sum_{k=1}^{S} g(e(s_k)) \left( L_1 U_k L_2 L_2^T U_k^T - L_1 V_k L_2 L_2^T V_k^T \right) \quad (10)$$

$$\frac{\partial E_{dis}(L_1,L_2)}{\partial L_2} = \frac{2}{S} \sum_{k=1}^{S} g(e(s_k)) \left( U_k^T L_1^T L_1 U_k L_2 - V_k^T L_1^T L_1 V_k L_2 \right) \quad (11)$$

$$\frac{\partial E_{spr}(L_2)}{\partial L_2} = 2 D L_2 \quad (12)$$
where

$$g(z) = \left( 1 + e^{-\beta z} \right)^{-1} \quad (13)$$

is the derivative of the logistic loss function l_β(z),
$$Z_i = X_i^A - X_i^B \quad (14)$$

$$U_k = X_i^A - X_i^B \quad (15)$$

$$V_k = X_i^A - X_j^B \quad (16)$$
D is a diagonal matrix with D_mm = 1/(2||l_m||_2), where l_m denotes the m-th row of L_2;
Step 4.2: Learn with an iterative optimization algorithm. This metric is learned with the following optimization process:
$$L_1^{n+1} = L_1^n - \lambda_1 \nabla E(L_1) \quad (17)$$

$$L_2^{n+1} = L_2^n - \lambda_2 \nabla E(L_2) \quad (18)$$
where L_1^n and L_2^n denote the results at the n-th iteration, and λ_1 > 0, λ_2 > 0 are step sizes determined automatically at each gradient update; the iteration stops when the number of iterations reaches 1000 or when |E^{n+1} - E^n| < ε with ε = 1 × 10^{-8}.
2. The pedestrian re-identification method based on a difference matrix and a matrix metric according to claim 1, characterized in that, in step 3.1, the consistency term E_con and the discriminative term E_dis are respectively defined as:
$$E_{con}(L_1, L_2) = \frac{1}{M} \sum_{i=1}^{M} d(X_i^A, X_i^B) \quad (1)$$

$$E_{dis}(L_1, L_2) = \frac{1}{S} \sum_{k=1}^{S} l_\beta(e(s_k)) \quad (2)$$
where l_β(z) and e(s_k) are defined as follows:
$$l_\beta(z) = \frac{1}{\beta} \log\left(1 + e^{\beta z}\right) \quad (3)$$
e(s_k) is defined as follows: a sample triple is defined as s_k = {X_i^A, X_i^B, X_j^B} with i ≠ j, S is the size of this collection, and each sample s_k must satisfy d(X_i^A, X_i^B) < d(X_i^A, X_j^B), where the error function is expressed as e(s_k) = d(X_i^A, X_i^B) - d(X_i^A, X_j^B).
CN201711417699.1A 2017-12-25 2017-12-25 Pedestrian re-identification method based on difference matrix and matrix metric Withdrawn CN108121970A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711417699.1A CN108121970A (en) 2017-12-25 2017-12-25 Pedestrian re-identification method based on difference matrix and matrix metric

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711417699.1A CN108121970A (en) 2017-12-25 2017-12-25 Pedestrian re-identification method based on difference matrix and matrix metric

Publications (1)

Publication Number Publication Date
CN108121970A true CN108121970A (en) 2018-06-05

Family

ID=62231620

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711417699.1A Withdrawn CN108121970A (en) 2017-12-25 2017-12-25 Pedestrian re-identification method based on difference matrix and matrix metric

Country Status (1)

Country Link
CN (1) CN108121970A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108985216A (en) * 2018-07-10 2018-12-11 常州大学 A kind of pedestrian head detection method based on multiple logistic regression Fusion Features
CN109800794A (en) * 2018-12-27 2019-05-24 上海交通大学 A kind of appearance similar purpose identifies fusion method and system across camera again
CN116193274A (en) * 2023-04-27 2023-05-30 北京博瑞翔伦科技发展有限公司 Multi-camera safety control method and system

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106803063A (en) * 2016-12-21 2017-06-06 华中科技大学 A kind of metric learning method that pedestrian recognizes again

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106803063A (en) * 2016-12-21 2017-06-06 华中科技大学 A kind of metric learning method that pedestrian recognizes again

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zheng Wang et al.: "Person Re-identification via Discrepancy Matrix and Matrix Metric", IEEE Transactions on Cybernetics *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108985216A (en) * 2018-07-10 2018-12-11 常州大学 A kind of pedestrian head detection method based on multiple logistic regression Fusion Features
CN108985216B (en) * 2018-07-10 2022-01-25 常州大学 Pedestrian head detection method based on multivariate logistic regression feature fusion
CN109800794A (en) * 2018-12-27 2019-05-24 上海交通大学 A kind of appearance similar purpose identifies fusion method and system across camera again
CN116193274A (en) * 2023-04-27 2023-05-30 北京博瑞翔伦科技发展有限公司 Multi-camera safety control method and system

Similar Documents

Publication Publication Date Title
CN111126360B (en) Cross-domain pedestrian re-identification method based on unsupervised combined multi-loss model
CN107194341B (en) Face recognition method and system based on fusion of Maxout multi-convolution neural network
CN110598543B (en) Model training method based on attribute mining and reasoning and pedestrian re-identification method
CN111222434A (en) Method for obtaining evidence of synthesized face image based on local binary pattern and deep learning
CN109961051A (en) A kind of pedestrian&#39;s recognition methods again extracted based on cluster and blocking characteristic
CN111881714A (en) Unsupervised cross-domain pedestrian re-identification method
CN110796057A (en) Pedestrian re-identification method and device and computer equipment
CN109063649B (en) Pedestrian re-identification method based on twin pedestrian alignment residual error network
CN104268593A (en) Multiple-sparse-representation face recognition method for solving small sample size problem
CN107463954B (en) A kind of template matching recognition methods obscuring different spectrogram picture
CN109034035A (en) Pedestrian&#39;s recognition methods again based on conspicuousness detection and Fusion Features
CN112464730B (en) Pedestrian re-identification method based on domain-independent foreground feature learning
CN106991355A (en) The face identification method of the analytical type dictionary learning model kept based on topology
CN105574475A (en) Common vector dictionary based sparse representation classification method
CN107862680B (en) Target tracking optimization method based on correlation filter
Chandran et al. Missing child identification system using deep learning and multiclass SVM
CN103473545A (en) Text-image similarity-degree measurement method based on multiple features
CN108121970A (en) A kind of recognition methods again of the pedestrian based on difference matrix and matrix measures
CN109635647B (en) Multi-picture multi-face clustering method based on constraint condition
CN110852152A (en) Deep hash pedestrian re-identification method based on data enhancement
CN113920472A (en) Unsupervised target re-identification method and system based on attention mechanism
CN109670423A (en) A kind of image identification system based on deep learning, method and medium
CN103714340A (en) Self-adaptation feature extracting method based on image partitioning
CN113158891B (en) Cross-camera pedestrian re-identification method based on global feature matching
CN110969101A (en) Face detection and tracking method based on HOG and feature descriptor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 2018-06-05