CN108764167A - Spatio-temporal correlation based target re-identification method and system - Google Patents

Spatio-temporal correlation based target re-identification method and system

Info

Publication number
CN108764167A
CN108764167A (application CN201810543066.3A)
Authority
CN
China
Prior art keywords
target
camera
time
probability
checked
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810543066.3A
Other languages
Chinese (zh)
Other versions
CN108764167B (en)
Inventor
张重阳
孔熙雨
归琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN201810543066.3A priority Critical patent/CN108764167B/en
Publication of CN108764167A publication Critical patent/CN108764167A/en
Application granted granted Critical
Publication of CN108764167B publication Critical patent/CN108764167B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a spatio-temporal correlation based target re-identification method. The method combines the pixel motion rate of a target in video data to estimate, for each video segment, the probability distribution of the time the target needs to cross the distance to a given adjacent camera. Based on this crossing-time probability, candidate targets appearing in the video can first be pre-screened: candidates falling outside the plausible crossing-time interval are filtered out, reducing the probability that a visually similar target is mismatched as the tracked target. The invention further relates to a spatio-temporal correlation based target re-identification system. The matching results produced by the present invention are constrained by spatio-temporal position and target motion information; compared with the original unconstrained matching that relies only on visual features, the re-identification accuracy can be effectively improved.

Description

Spatio-temporal correlation based target re-identification method and system
Technical field
The present invention relates to target re-identification technology, and specifically to a spatio-temporal correlation based target re-identification method and a corresponding target re-identification system.
Background technology
Target re-identification is the problem of using computer vision techniques to judge whether a specific target is present in an image or video sequence. Specifically, when tracking a specific target in video, the video source comes from a fixed position, so when the target leaves the field of view, cross-video relay tracking is needed; the problem of detecting that specific target in other video sources then belongs to target re-identification.
Target re-identification performs feature matching using the target's visual features extracted from images, and provides possible candidate targets. Since features of different targets can be similar, feature matching may also produce candidate targets that are not the real tracked target.
A search of the prior art shows that, although target re-identification technology is widely used in relay tracking, the re-identification modules involved mostly use only the visual feature information of the target in the image. For example, patent CN201210201004.7, "Intelligent vision sensing network moving target relay tracking system based on GPS and GIS", combines GPS and GIS information, i.e. screens with spatial information, but its use of temporal and spatial information stops at drawing the GIS map and target trajectory; it contains no technique that uses temporal and spatial information directly to improve re-identification accuracy. Its re-identification module is still based only on visual features, and may therefore, due to visual similarity, generate a large number of very unlikely candidate targets in unreasonable time intervals.
In addition, there are patents that combine the target's time and location information, such as application CN201610221993.4, a target re-identification method based on spatio-temporal constraints. That kind of method assigns each pair of adjacent cameras a shortest travel time and, according to a Weibull distribution and the measured candidate appearance time, gives the probability that the target appears at that moment, then combines it with visual matching features into a joint probability distribution. However, it does not consider the target's real-time motion state; the probability that the target appears at a particular moment relies entirely on the measured time. There are two main problems. First, the moments in that patent's description are the local clock times of each camera, not unified global timing information; because the clocks of different cameras are not synchronized, the moments are not synchronized either, which directly affects the whole prediction result. Second, and more importantly, that patent only considers common quantities such as crossing time and spatial distance, and does not consider individual factors such as the target's displacement direction and speed: a shortest travel time is directly assigned to each pair of adjacent cameras, which carries no essentially different information from the path distance between adjacent cameras in the GIS data and merely embodies a spatial distance constraint. In practice, individual differences such as speed exist between targets: some move fast, some slowly. To estimate more accurately when a target will appear in a camera's field of view, the target's real-time motion information must be combined to compute each target's likely arrival time and predict with it. For example, suppose two visually similar targets in the field of view of camera A both move toward camera B; one is the tracked target and the other is not. Under the algorithm of that patent, the probability distributions of the two targets' arrival moments are identical Weibull distributions. But if one target moves fast and the other slowly, their arrival times at B differ considerably; the conclusion obtained with the method of CN201610221993.4 is then quite different from reality, causing inaccurate candidate screening.
Further searching has so far found no target re-identification method that combines visual features and spatio-temporal constraints, also incorporates the motion information of individual targets, and uses unified global timing for time synchronization.
Summary of the invention
In view of the fact that existing target re-identification methods mostly rely on visual features and make insufficient use of spatio-temporal correlation information, the present invention provides a spatio-temporal correlation based target re-identification method.
To achieve the above object, the present invention adopts the following technical scheme. The invention combines the pixel motion rate of a target in the video data to estimate, for each video segment, the probability distribution of the time the target needs to cross the distance to a given adjacent camera. Based on this crossing-time probability, candidate targets appearing in the video can first be pre-screened: candidates beyond the plausible crossing-time interval are filtered out, reducing the probability that a similar target is mismatched as the tracked target. The matching results produced are thus constrained by spatio-temporal position and target motion information; compared with the original unconstrained matching that relies only on visual features, the re-identification accuracy can be effectively improved.
According to a first object of the present invention, a spatio-temporal correlation based target re-identification method is provided, including:
For a selected query target a in camera Ci, record its initial time ts and start tracking it; use the tracking result to obtain its pixel motion rate Va and motion direction information, and extract the visual features of query target a used for re-identification;
Using GIS information, obtain the set of M cameras that are spatially adjacent to camera Ci and match query target a's direction of advance; for each adjacent camera Mj in the set, obtain the actual path length Li,j from camera Ci to camera Mj using GIS or manual measurement;
The crossing time ti,j of query target a from camera Ci to the adjacent camera Mj, given a fixed path length Li,j, is predicted with a linear velocity-time model as t̂i,j. Using this predicted crossing time t̂i,j, the targets appearing in camera Mj within a time interval centred on ts + t̂i,j and bounded by multiples of δ are taken as the candidate targets for re-identification, where δ is the statistical standard deviation of t̂i,j; that is, t̂i,j is assumed to follow a normal distribution whose standard deviation δ is obtained from training data;
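As an illustrative sketch (not from the patent text), the candidate admission step above could look as follows. A ±3δ window is assumed here, since the patent's exact interval bound appears only in a formula image that is not reproduced in this text; all names are hypothetical.

```python
def candidate_window(t_s, t_hat, delta):
    # Admission window centred on the predicted arrival time t_s + t_hat.
    # The +/- 3*delta half-width is an assumption for illustration.
    return (t_s + t_hat - 3.0 * delta, t_s + t_hat + 3.0 * delta)

def filter_candidates(appearances, t_s, t_hat, delta):
    # appearances: list of (target_id, t_e) first-appearance times in Mj,
    # all on the globally synchronized clock.
    lo, hi = candidate_window(t_s, t_hat, delta)
    return [tid for tid, t_e in appearances if lo <= t_e <= hi]
```

With ts = 100, t̂i,j = 10 and δ = 2, the window is (104, 116): a candidate first appearing at t_e = 112 is kept, while one appearing at t_e = 300 is filtered out before any visual matching is done.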
For each candidate target b in camera Mj, extract its visual features for re-identification, and use the globally unified timing information synchronously acquired when candidate target b first appears in camera Mj as the target's appearance time te in camera Mj. Using motion tracking, compute each candidate target b's pixel motion rate Vb in camera Mj, and predict its crossing time t̂b with the linear velocity-time model. For each pair (Vb, Li,j) in camera Mj, candidate target b's crossing time t̂b is assumed to follow a normal distribution with mean tmean and variance σ². Based on this distribution, compute the probability Ptimespace that candidate target b, under the condition (Vb, Li,j), appears in camera Mj at moment te, with (te − ts) ~ N(tmean, σ²);
Based on the visual features of query target a and candidate target b, compute for each candidate target b its re-identification probability Pvision using a target re-identification method; multiply each candidate's Pvision by its Ptimespace, take the product as the target's re-identification probability, and rank candidates by this probability to give the final re-identification result.
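The fusion and ranking step can be sketched as a simple score product, under the assumption (stated in the text) that the final score is Pvision × Ptimespace; the function name and data layout are hypothetical.

```python
def rerank(candidates):
    # candidates: list of (target_id, p_vision, p_timespace).
    # The final re-identification score is the product of the visual
    # matching probability and the spatio-temporal probability.
    scored = [(tid, p_v * p_t) for tid, p_v, p_t in candidates]
    scored.sort(key=lambda item: item[1], reverse=True)
    return scored

ranked = rerank([("b1", 0.90, 0.05), ("b2", 0.60, 0.70), ("b3", 0.80, 0.30)])
```

Note how the spatio-temporal term demotes "b1": despite its strong visual score (0.90), its implausible crossing time (0.05) pushes it below the motion-consistent candidates.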
Preferably, obtaining the pixel motion rate Va and motion direction information from the tracking result means: the pixel motion rate is a velocity whose distance unit is the image pixel and whose time unit is the image acquisition interval; it is unrelated to the target's real-world motion rate, so no calibration and no additional acquisition equipment are needed. For the motion direction, camera calibration information is combined: the image plane is divided into direction intervals of N degrees each, and whichever interval the target's motion direction falls into is taken as the target's motion direction.
Preferably, using GIS information to obtain the set of M cameras spatially adjacent to camera Ci and matching query target a's direction of advance means: query target a's motion-direction interval, taken as the centre, plus the two spatially adjacent intervals, form the direction search range for adjacent cameras; the cameras within this direction range that are spatially adjacent to camera Ci constitute the set of adjacent cameras matching query target a's direction of advance.
Preferably, predicting the crossing time ti,j of query target a from camera Ci to adjacent camera Mj, given a fixed path length Li,j, with the linear velocity-time model means:
For a target with pixel rate Va, the time to cross path Li,j satisfies the linear relationship t̂i,j = α · (Li,j / Va) + β,
where α and β are model parameters. This linear model is fitted from training data collected offline to obtain its parameters; the model parameters can also be dynamically updated by learning from data collected online.
Preferably, during image acquisition, a GPS or BeiDou global timing module carried by the acquisition device, or other global timing equipment, is read to obtain globally unified timing information as the generation time of the device when acquiring each frame; the generation time of the image in which candidate target b first appears in Mj is taken as its appearance time te.
Preferably, predicting candidate target b's crossing time t̂b with the linear velocity-time model means:
For a target with pixel rate Vb, the time to cross path Li,j satisfies the linear relationship t̂b = η · (Li,j / Vb) + θ,
where η and θ are model parameters. This linear model is fitted from training data collected offline to obtain its parameters; the model parameters can also be dynamically updated by learning from data collected online.
Preferably, assuming for each pair (Vb, Li,j) in camera Mj that candidate target b's crossing time t̂b follows a normal distribution with mean tmean and variance σ² means:
The pixel motion rate of candidate target b is quantized into M speed grades; a pixel rate Vb falling into a speed grade is replaced by the mean rate Vmean of that grade's interval. For each given condition combination (Vmean, Li,j), candidate target b's crossing time t̂b follows a normal distribution with parameters (tmean, σ²). These normal-distribution parameters are obtained by fitting training data collected offline, and can also be dynamically updated from data collected online.
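The speed-grade lookup could be sketched as below. The grade edges, path identifier and the (tmean, σ) values are all hypothetical placeholders standing in for an offline-fitted table.

```python
def speed_grade(v, edges):
    # edges are ascending grade boundaries; this gives M = len(edges) + 1 grades.
    grade = 0
    for edge in edges:
        if v >= edge:
            grade += 1
    return grade

# Hypothetical offline-fitted table: (speed grade, path id) -> (t_mean, sigma).
CROSSING_PARAMS = {
    (0, "Ci->Mj"): (14.0, 3.0),  # slow targets: longer mean, larger spread
    (1, "Ci->Mj"): (9.0, 2.0),
    (2, "Ci->Mj"): (6.0, 1.5),   # fast targets
}

def crossing_distribution(v, path_id, edges=(5.0, 12.0)):
    return CROSSING_PARAMS[(speed_grade(v, edges), path_id)]
```

Quantizing to grade means keeps the table small: one (tmean, σ) pair per (grade, path) combination rather than per raw velocity value.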
Preferably, computing, based on this distribution, the probability Ptimespace that candidate target b under condition (Vb, Li,j) appears in camera Mj at moment te, with (te − ts) ~ N(tmean, σ²), means:
Assume candidate target b is the query target; then its true crossing time from camera Ci to camera Mj is tb = te − ts. Because the crossing time t̂b follows a normal distribution, the normal-distribution model and this true crossing time are used to compute the probability of crossing from Ci to Mj in that time, and this probability is taken as the probability Ptimespace that candidate target b, under condition (Vb, Li,j), appears in camera Mj at moment te.
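One concrete reading of this step, sketched under the assumption that Ptimespace is the normal density evaluated at the observed crossing time (the patent does not spell out density versus interval probability):

```python
import math

def normal_pdf(x, mean, sigma):
    return math.exp(-((x - mean) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

def p_timespace(t_s, t_e, t_mean, sigma):
    # Likelihood that the observed crossing time t_e - t_s was drawn
    # from the N(t_mean, sigma^2) model for the candidate's (Vb, Lij) pair.
    return normal_pdf(t_e - t_s, t_mean, sigma)
```

A candidate whose observed crossing time equals tmean gets the maximum value; candidates equally far on either side of tmean score the same, which is the symmetry the normal model implies.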
According to a second object of the present invention, a spatio-temporal correlation based target re-identification system is provided, including:
A target detection and tracking module: for a selected query target a in camera Ci, records its initial time ts, starts tracking, and obtains its pixel motion rate Va and motion direction information from the tracking result;
A visual feature extraction and re-identification module: based on the results of the target detection and tracking module, extracts the re-identification visual features of each query target and candidate target; uses GIS information to obtain the set of M cameras spatially adjacent to camera Ci and matching query target a's direction of advance; for each adjacent camera Mj in the set, obtains the actual path length Li,j from camera Ci to camera Mj by GIS or manual measurement; predicts query target a's crossing time ti,j from camera Ci to the adjacent camera Mj, for a fixed path length Li,j, with the linear velocity-time model as t̂i,j; and, using the predicted crossing time t̂i,j, takes the targets appearing in camera Mj within a time interval centred on ts + t̂i,j and bounded by multiples of δ as the candidate targets for re-identification, where δ is the statistical standard deviation of t̂i,j, i.e. t̂i,j is assumed to follow a normal distribution whose standard deviation δ is obtained from training data;
A spatio-temporal correlation and target filtering module: for each candidate target b in camera Mj, uses the globally unified timing information synchronously acquired when b first appears in camera Mj as the target's appearance time te in camera Mj; computes each candidate's pixel motion rate Vb in camera Mj by motion tracking; similarly predicts its crossing time t̂b with a linear velocity-time model; for each pair (Vb, Li,j) in camera Mj, assumes candidate target b's crossing time t̂b follows a normal distribution with mean tmean and variance σ², and, based on this distribution, computes the probability Ptimespace that b under condition (Vb, Li,j) appears in camera Mj at moment te, with (te − ts) ~ N(tmean, σ²);
A probability computation and re-ranking module: based on the visual features of query target a and candidate target b, computes each candidate target b's re-identification probability Pvision with a target re-identification method; multiplies each candidate's Pvision by its Ptimespace, takes the product as the target's re-identification probability, and ranks by this probability to give the final re-identification result.
The visual features of query target a and candidate target b described above include, but are not limited to, traditional hand-crafted features such as colour and texture, and deep features learned by deep neural network models. The present invention analyses video data combined with actual distance variation information and computes the target's motion velocity in real time; by estimating the target's expected appearance time, candidates outside the reasonable interval are filtered out, improving the accuracy of target re-identification.
Compared with the prior art, the embodiments of the present invention have the following effects:
The method and system of the present invention combine target motion information to improve re-identification accuracy, relative to target re-identification that normally uses only image data or only time and location data.
The method and system of the present invention use the temporal and spatial association of cross-camera video data, together with the motion information of individual targets, to associate re-identification candidates in space-time; candidates unassociated in space-time are filtered out or down-weighted, giving a more accurate candidate range and effectively improving re-identification precision.
Description of the drawings
Fig. 1 is a module schematic diagram of a spatio-temporal correlation based target re-identification embodiment in one embodiment of the present invention.
Detailed description of the embodiments
The present invention is described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art further understand the present invention, but do not limit the invention in any way. It should be noted that, without departing from the inventive concept, those of ordinary skill in the art can make various modifications and improvements, all of which belong to the protection scope of the present invention.
The present invention combines the pixel motion rate of a target in video data to estimate, for each video segment, the probability distribution of the time the target needs to cross the distance to a given adjacent camera; based on this crossing-time probability, candidates appearing in the video can first be pre-screened, filtering out candidates beyond the plausible crossing-time interval and reducing the probability that a similar target is mismatched as the tracked target.
Specifically, an embodiment of the spatio-temporal correlation based target re-identification method of the present invention can proceed by the following steps:
S1: For a selected query target a in camera Ci, record its initial time ts and start tracking; use the tracking result to obtain its pixel motion rate Va and motion direction information, and extract its visual features for re-identification, which include, but are not limited to, traditional hand-crafted features such as colour and texture, and deep features learned by deep neural network models.
S2: Using GIS information, obtain the set of M cameras spatially adjacent to Ci and matching the query target's direction of advance; for each adjacent camera Mj in the set, obtain the actual path length Li,j of Ci → Mj using GIS or manual measurement.
S3: Predict the crossing time ti,j of query target a from Ci to the adjacent camera Mj, given a fixed path length Li,j, with a linear velocity-time model as t̂i,j. Using the predicted crossing time t̂i,j, the targets appearing in Mj within a time interval centred on ts + t̂i,j can be taken as the candidate targets for re-identification.
S4: For each candidate target b in Mj, on the one hand extract its visual features for re-identification (including, but not limited to, traditional hand-crafted features such as colour and texture, and deep features learned by deep neural network models); on the other hand, use the globally unified timing information synchronously acquired when candidate target b first appears in Mj as the target's appearance time te in Mj. Compute each candidate's pixel motion rate Vb in Mj with motion tracking; likewise, predict its crossing time t̂b with a linear velocity-time model. For each pair (Vb, Li,j) in Mj, candidate target b's crossing time t̂b can be assumed to follow a normal distribution with mean tmean and variance σ². Based on this distribution, compute the probability Ptimespace that b under condition (Vb, Li,j) appears in Mj at moment te, with (te − ts) ~ N(tmean, σ²).
S5: Based on the visual features of query target a and candidate target b, compute each candidate target b's re-identification probability Pvision using a target re-identification method; multiply each candidate's Pvision by its Ptimespace, take the product as the target's re-identification probability, and rank by this probability for the final re-identification result.
Of course, the execution order of the above steps can also be adjusted according to actual conditions and need not follow the steps strictly; this will be understood by those skilled in the art.
In this embodiment, in S1, obtaining the pixel motion rate Va and motion direction information from the tracking result means: the pixel motion rate is a velocity whose distance unit is the image pixel and whose time unit is the image acquisition interval; it is unrelated to the target's real-world motion rate, so no calibration and no additional acquisition equipment are needed. For the motion direction, camera calibration information is combined: the image plane is divided into direction intervals of N degrees each, and whichever interval the target's motion direction falls into is taken as the target's motion direction.
In this embodiment, in S2, using GIS information to obtain the set of M cameras spatially adjacent to C and matching the query target's direction of advance means: the target's motion-direction interval, taken as the centre, plus its two spatially adjacent intervals, form the direction search range for adjacent cameras; the cameras within this direction range that are spatially adjacent to C constitute the set of adjacent cameras matching the query target's direction of advance.
In this embodiment, in S3, predicting the crossing time ti,j of query target a from Ci to the adjacent camera Mj, given a fixed path length Li,j, with a linear velocity-time model specifically means: for a target with pixel rate Va, the time to cross path Li,j satisfies the linear relationship t̂i,j = α · (Li,j / Va) + β, where α and β are model parameters. This linear model can be fitted with training data collected offline to obtain the model parameters, which can also be dynamically updated by learning from data collected online.
In this embodiment, in S4, using the globally unified timing information synchronously acquired when each candidate target b first appears in Mj as the target's appearance time te specifically means: during image acquisition, a GPS or BeiDou global timing module carried by the acquisition device, or other global timing equipment, is read to obtain globally unified timing information as the generation time of the device when acquiring each frame, and the generation time of the image in which target b first appears in Mj is taken as the appearance time te.
In this embodiment, in S4, predicting the crossing time t̂b with a linear velocity-time model specifically means: for a target with pixel rate Vb, the time to cross path Li,j satisfies the linear relationship t̂b = η · (Li,j / Vb) + θ, where η and θ are model parameters. This linear model can be fitted with training data collected offline, and the parameters can also be dynamically updated from data collected online.
In this embodiment, in S4, assuming for each pair (Vb, Li,j) in Mj that candidate target b's crossing time t̂b follows a normal distribution with mean tmean and variance σ² specifically means: candidate target b's pixel motion rate is quantized into M speed grades; a pixel rate Vb falling into a grade is replaced by the mean rate Vmean of that grade's interval; for each given condition combination (Vmean, Li,j), target b's crossing time t̂b follows a normal distribution with parameters (tmean, σ²), which are obtained by fitting training data collected offline and can also be dynamically updated from data collected online.
In this embodiment, in S4, computing, based on this distribution, the probability Ptimespace that b under condition (Vb, Li,j) appears in Mj at moment te, with (te − ts) ~ N(tmean, σ²), means: assume target b is the query target; then its true crossing time from Ci to Mj is tb = te − ts. Because the crossing time t̂b follows a normal distribution, the normal-distribution model and this true crossing time are used to compute the probability of crossing Ci → Mj in that time, and this probability is taken as the probability that b under condition (Vb, Li,j) appears in Mj at moment te.
On the basis of the above method, the present invention uses motion-information modelling to improve target re-identification accuracy. In another, system embodiment, the target re-identification system mainly consists of a target detection and tracking module, a visual feature extraction and re-identification module, a spatio-temporal correlation and target filtering module, and a probability computation and re-ranking module, wherein:
The target detection and tracking module: for a selected query target a in camera Ci, records its initial time ts, starts tracking, and obtains its pixel motion rate Va and motion direction information from the tracking result;
The visual feature extraction and re-identification module: based on the results of the target detection and tracking module, extracts the re-identification visual features of each query target and candidate target (including, but not limited to, traditional hand-crafted features such as colour and texture, and deep features learned by deep neural network models); uses GIS information to obtain the set of M cameras spatially adjacent to camera Ci and matching query target a's direction of advance; for each adjacent camera Mj in the set, obtains the actual path length Li,j from camera Ci to camera Mj by GIS or manual measurement; predicts query target a's crossing time ti,j from camera Ci to the adjacent camera Mj, for a fixed path length Li,j, with a linear velocity-time model as t̂i,j; and, using the predicted crossing time t̂i,j, takes the targets appearing in camera Mj within a time interval centred on ts + t̂i,j and bounded by multiples of δ as the candidate targets for re-identification, where δ is the statistical standard deviation of t̂i,j, i.e. t̂i,j is assumed to follow a normal distribution whose standard deviation δ is obtained from training data;
The spatio-temporal correlation and target filtering module: for each candidate target b in camera Mj, uses the globally unified timing information synchronously acquired when b first appears in camera Mj as the target's appearance time te in camera Mj; computes each candidate's pixel motion rate Vb in camera Mj by motion tracking; similarly predicts its crossing time t̂b with a linear velocity-time model; for each pair (Vb, Li,j) in camera Mj, assumes candidate target b's crossing time t̂b follows a normal distribution with mean tmean and variance σ², and, based on this distribution, computes the probability Ptimespace that b under condition (Vb, Li,j) appears in camera Mj at moment te, with (te − ts) ~ N(tmean, σ²);
Identification probability calculates and the module that reorders:Based on the visual information feature of target a to be checked and candidate target b, utilize Target recognition methods again calculates its identification probability P to each candidate target bvision;By the P of each candidate target bvisionWith PtimespaceIt is multiplied, obtained product is ranked up, the most termination identified again as target weight identification probability by the probability Fruit.
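The fusion performed across these three modules can be sketched as follows; this is a non-authoritative sketch, where the function names and the use of a Gaussian pdf value as Ptimespace are illustrative assumptions rather than the patent's reference implementation:

```python
import math

def gaussian_pdf(x, mean, var):
    # Likelihood of x under a normal distribution N(mean, var).
    return math.exp(-(x - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def rerank(candidates, t_s, t_mean, var):
    # candidates: list of (name, p_vision, t_e) tuples.
    # Fuse the visual probability with the spatio-temporal probability
    # P_timespace = N(t_e - t_s; t_mean, var) and sort by the product.
    scored = []
    for name, p_vision, t_e in candidates:
        p_time = gaussian_pdf(t_e - t_s, t_mean, var)
        scored.append((name, p_vision * p_time))
    return sorted(scored, key=lambda kv: kv[1], reverse=True)
```

A visually strong candidate whose crossing time is implausible is demoted below a temporally plausible one, which is the intended effect of the product fusion.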
Referring to Fig. 1, in one embodiment:
The present invention is illustrated with cross-camera pedestrian re-identification in a video surveillance system as an embodiment. Cross-camera pedestrian re-identification in a video surveillance system means that when a specific pedestrian target a appearing in an initial camera C leaves C and enters the view of other cameras in the surveillance network, the above spatio-temporally correlated target re-identification method constrains the candidate targets using cross-camera spatio-temporal correlation and fuses this with the target's own visual information, so that the joint probability assists the visual features in determining the probability that each candidate target is the same target as query target a.
For the initial camera C, the captured video is fed into the object detection and tracking module, and a selected pedestrian target a is tracked as the query target; advanced tracking methods, such as correlation filtering combined with deep features, may be used. From the tracking result, i.e., the target tracking boxes over consecutive frames, the change of the tracking-box center over time is computed to obtain the target's pixel motion rate Va and motion direction. Similarly, in camera B adjacent to C, the same object detection and tracking module performs detection and tracking analysis to obtain information such as the pixel motion rate of each detected candidate target.
The target detection boxes and related content output by the object detection and tracking modules of query camera C and candidate camera B are fed into the visual feature extraction and re-identification module, which extracts visual features and computes the re-identification probability Pvision based on these features.
The pixel motion rate Va and motion direction of target a, output by the object detection and tracking modules of query camera C and candidate camera B, are sent to the spatio-temporal correlation and target filtering module, which, combined with information such as GIS data, screens the candidate targets by spatio-temporal correlation and computes the re-identification probability Ptimespace under each candidate target's spatio-temporal constraint.
The two probabilities output by the visual feature extraction and re-identification module and by the spatio-temporal correlation and target filtering module are sent together to the re-identification probability calculation and re-ranking module, which multiplies them to obtain a new re-identification probability and re-ranks the candidates by this probability to obtain the final re-identification result.
The working process and realized functions of the system in the above embodiment are as follows:
(1) For each detected target and candidate target, spatio-temporal information comprising target motion information, location information, and unified timing information is generated, providing accurate, specific, and synchronized spatio-temporal information for video analysis.
(2) Visual features and spatio-temporal constraints are combined, together with the motion information of individual targets, and globally unified timing is used for time synchronization in the target re-identification method.
The modules in the above embodiment of the system of the present invention may be implemented with the techniques of the corresponding steps of the above target re-identification method, which are not repeated here.
In summary, the target re-identification method and system of the present invention combine visual features and spatio-temporal constraints with the motion information of individual targets, and use globally unified timing for time synchronization, fusing the three to improve target re-identification accuracy. The matching results produced by the present invention are constrained by spatio-temporal position and target motion information; compared with the original unconstrained matching that relies only on visual features, the re-identification accuracy can be effectively improved.
It should be noted that the steps in the target re-identification method provided by the present invention can be realized with the corresponding modules, devices, units, etc. of the target re-identification system, and those skilled in the art can refer to the technical scheme of the system to realize the step flow of the method; that is, the embodiments of the system can be regarded as preferred examples for realizing the method, which are not described here.
Those skilled in the art will appreciate that, in addition to realizing the system provided by the present invention and its devices in the form of pure computer-readable program code, the method steps can be logically programmed so that the system and its devices realize the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Therefore, the system provided by the present invention and its devices can be regarded as a kind of hardware component, and the devices included therein for realizing various functions can also be regarded as structures within the hardware component; the devices for realizing various functions can likewise be regarded either as software modules implementing the method or as structures within the hardware component.
Specific embodiments of the present invention have been described above. It is to be understood that the invention is not limited to the above particular implementations, and those skilled in the art can make various variations or modifications within the scope of the claims without affecting the substantive content of the present invention.

Claims (9)

1. A spatio-temporally correlated target re-identification method, characterized by comprising:
for a selected query target a in camera Ci, recording its initial time ts and starting tracking, obtaining its pixel motion rate Va and motion direction from the tracking result, and extracting the visual features of query target a used for re-identification;
using GIS information, obtaining the set of M cameras that are spatially adjacent to camera Ci and match the advancing direction of query target a, and for each adjacent camera Mj in the set, obtaining the actual path length Li,j from camera Ci to camera Mj from GIS data or manual measurement;
predicting the crossing time ti,j of query target a from camera Ci to the adjacent camera Mj, given a fixed path length Li,j, with a linear rate-time model as t̂i,j = α·Li,j/Va + β, and using the predicted crossing time t̂i,j to take the targets appearing in camera Mj within the time interval [ts + t̂i,j − 3δ, ts + t̂i,j + 3δ] as the candidate targets for re-identification, where δ is the standard deviation of t̂i,j, i.e., t̂i,j is assumed to follow a normal distribution whose standard deviation δ is estimated from training data;
for each candidate target b in camera Mj, extracting its visual features used for re-identification, and taking the globally unified timing information synchronously acquired when candidate target b first appears in camera Mj as the target's appearance time te in camera Mj; computing the pixel motion rate Vb of each candidate target b in camera Mj by motion tracking, and predicting its crossing time t̂b with the linear rate-time model given (Vb, Li,j); for each pair (Vb, Li,j) in camera Mj, assuming the crossing time tb of candidate target b follows a normal distribution with mean tmean and variance σ², and, based on this distribution, computing the probability Ptimespace = N(te − ts; tmean, σ²) that candidate target b appears in camera Mj at time te under the condition (Vb, Li,j);
computing, based on the visual features of query target a and candidate target b, the re-identification probability Pvision of each candidate target b with a target re-identification method; and multiplying Pvision by Ptimespace for each candidate target b, taking the resulting product as the target re-identification probability, and ranking by this probability to obtain the final re-identification result.
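The candidate-window step of claim 1 can be sketched as follows. The linear model and the ±3δ window follow the claim; the function names and the dictionary of first-appearance times are illustrative assumptions:

```python
def predict_crossing_time(L_ij, V_a, alpha, beta):
    # Linear rate-time model of claim 4: t_hat = alpha * L_ij / V_a + beta.
    return alpha * L_ij / V_a + beta

def candidate_window(t_s, t_hat, delta):
    # Time interval in camera M_j within which targets are kept as candidates.
    return (t_s + t_hat - 3.0 * delta, t_s + t_hat + 3.0 * delta)

def filter_candidates(appearances, t_s, t_hat, delta):
    # appearances: {target_id: t_e}; keep targets whose first appearance
    # in camera M_j falls inside the predicted window.
    lo, hi = candidate_window(t_s, t_hat, delta)
    return [tid for tid, t_e in appearances.items() if lo <= t_e <= hi]
```

Targets outside the window never reach the visual re-identification stage, which is how the spatio-temporal constraint prunes the candidate set.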
2. The spatio-temporally correlated target re-identification method according to claim 1, characterized in that, in obtaining the pixel motion rate Va and motion direction from the tracking result: the pixel motion rate is the motion velocity measured in image-pixel distance per image acquisition interval, and is unrelated to the real-world motion rate of the target; the motion direction is obtained by combining camera calibration information, dividing the image plane into one direction interval every N degrees, and taking the interval into which the target's motion direction falls as the motion direction of the target.
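The N-degree direction quantization of claim 2 can be sketched as follows; the choice of indexing the intervals from zero is an illustrative assumption:

```python
def direction_bin(angle_deg, n_deg):
    # Quantize a motion direction (in degrees) into intervals of n_deg each;
    # the interval index stands in for the target's motion direction.
    return int((angle_deg % 360.0) // n_deg)
```

Negative or >360° angles are wrapped into [0, 360) before binning, so directions measured across the 0° boundary still land in a valid interval.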
3. The spatio-temporally correlated target re-identification method according to claim 1, characterized in that using GIS information to obtain the set of M cameras spatially adjacent to camera Ci and matching the advancing direction of query target a means: taking the direction interval of query target a as the center, plus its two spatially adjacent intervals, as the search range of adjacent cameras; the cameras that lie within this direction range and are spatially adjacent to camera Ci constitute the set of adjacent cameras matching the advancing direction of query target a.
4. The spatio-temporally correlated target re-identification method according to claim 1, characterized in that predicting the crossing time ti,j of query target a from camera Ci to the adjacent camera Mj, given a fixed path length Li,j, with the linear rate-time model means:
for a target with pixel rate Va, the time to cross path length Li,j satisfies the linear relationship t̂i,j = α·Li,j/Va + β,
where α and β are model parameters; this linear relationship model is fitted offline on collected training data to obtain the model parameters, and the model parameters can be dynamically updated by learning from data collected online.
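The offline fitting of α and β described in claim 4 reduces to ordinary least-squares regression of observed crossing times against L/V. A minimal sketch, with the sample format assumed for illustration:

```python
def fit_linear_rate_time(samples):
    # samples: list of (L_ij, V, t_observed) triples.
    # Fit t = alpha * (L / V) + beta by ordinary least squares on x = L / V.
    xs = [L / V for L, V, _ in samples]
    ts = [t for _, _, t in samples]
    n = len(samples)
    mean_x = sum(xs) / n
    mean_t = sum(ts) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxt = sum((x - mean_x) * (t - mean_t) for x, t in zip(xs, ts))
    alpha = sxt / sxx
    beta = mean_t - alpha * mean_x
    return alpha, beta
```

The same routine can be re-run on data collected online to realize the dynamic parameter update mentioned in the claim.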
5. The spatio-temporally correlated target re-identification method according to claim 1, characterized in that, at image acquisition time, globally unified timing information is obtained by reading the acquisition device's built-in GPS or BeiDou global timing module, or other global timing devices and modules, as the generation time of the device when acquiring this frame of image, and the generation time of the image in which candidate target b first appears in camera Mj is taken as the appearance time te.
6. The spatio-temporally correlated target re-identification method according to claim 1, characterized in that predicting the crossing time t̂b with the linear rate-time model means:
for a target with pixel rate Vb, the time to cross path length Li,j satisfies the linear relationship t̂b = η·Li,j/Vb + θ,
where η and θ are model parameters; this linear relationship model is fitted offline on collected training data to obtain the model parameters, and the model parameters can be dynamically updated by learning from data collected online.
7. The spatio-temporally correlated target re-identification method according to claim 1, characterized in that assuming, for each pair (Vb, Li,j) in camera Mj, that the crossing time tb of candidate target b follows a normal distribution with mean tmean and variance σ² means:
the pixel motion rate of candidate target b is quantized into M rate grades, and a pixel rate Vb falling into a rate grade is replaced by the mean rate Vmean of that grade interval; for each given condition combination (Vmean, Li,j), the crossing time tb of candidate target b follows a normal distribution with parameters (tmean, σ²); the parameters (tmean, σ²) of this normal distribution are fitted from training data collected offline, and these model parameters can also be dynamically updated by learning from data collected online.
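Claim 7's rate-grade quantization and per-grade Gaussian fitting might be sketched as follows; the equal-width grade boundaries and the population-variance estimator are assumptions for illustration:

```python
def rate_grade(v, v_min, v_max, m):
    # Quantize pixel rate v into one of m equal-width grades over [v_min, v_max].
    if v >= v_max:
        return m - 1
    return int((v - v_min) / (v_max - v_min) * m)

def fit_grade_gaussians(samples, v_min, v_max, m):
    # samples: list of (V_b, crossing_time) pairs.
    # Returns {grade: (t_mean, variance)} fitted per rate grade.
    buckets = {}
    for v, t in samples:
        buckets.setdefault(rate_grade(v, v_min, v_max, m), []).append(t)
    params = {}
    for g, ts in buckets.items():
        mean = sum(ts) / len(ts)
        var = sum((t - mean) ** 2 for t in ts) / len(ts)
        params[g] = (mean, var)
    return params
```

Replacing Vb by the mean rate of its grade keeps the number of fitted (tmean, σ²) pairs small even when observed rates vary continuously.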
8. The spatio-temporally correlated target re-identification method according to claim 1, characterized in that computing, based on this distribution, the probability Ptimespace that candidate target b appears in camera Mj at time te under the condition (Vb, Li,j) means:
assuming candidate target b is the query target, its true crossing time from camera Ci to camera Mj is tb = te − ts; since the crossing time follows a normal distribution, the probability of crossing from Ci to Mj in this time is computed from the normal distribution model and this true crossing time as Ptimespace = N(tb; tmean, σ²), and this probability is taken as the probability that candidate target b appears in camera Mj at time te under the condition (Vb, Li,j).
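Evaluating Ptimespace from the true crossing time te − ts under the fitted normal distribution, per claim 8, can be sketched as follows; using the Gaussian pdf value directly as the probability score is an assumed normalization choice:

```python
import math

def p_timespace(t_e, t_s, t_mean, var):
    # Likelihood of the true crossing time t_b = t_e - t_s under N(t_mean, var).
    t_b = t_e - t_s
    return math.exp(-(t_b - t_mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)
```

The score is maximal when the observed crossing time equals tmean and decays smoothly for earlier or later appearances, which is what lets it re-weight rather than hard-reject candidates.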
9. A spatio-temporally correlated target re-identification system, characterized by comprising:
an object detection and tracking module: for a selected query target a in camera Ci, records its initial time ts and starts tracking, and obtains its pixel motion rate Va and motion direction from the tracking result;
a visual feature extraction and re-identification module: extracts, based on the results of the object detection and tracking module, the visual features used for re-identification of each query target and each candidate target; obtains, using GIS information, the set of M cameras that are spatially adjacent to camera Ci and match the advancing direction of query target a; for each adjacent camera Mj in the set, obtains the actual path length Li,j from camera Ci to camera Mj from GIS data or manual measurement; predicts the crossing time ti,j of query target a from camera Ci to the adjacent camera Mj, given a fixed path length Li,j, with a linear rate-time model as t̂i,j = α·Li,j/Va + β; and, using the predicted crossing time t̂i,j, takes the targets appearing in camera Mj within the time interval [ts + t̂i,j − 3δ, ts + t̂i,j + 3δ] as the candidate targets for re-identification, where δ is the standard deviation of t̂i,j, i.e., t̂i,j is assumed to follow a normal distribution whose standard deviation δ is estimated from training data;
a spatio-temporal correlation and target filtering module: for each candidate target b in camera Mj, takes the globally unified timing information synchronously acquired when b first appears in camera Mj as the target's appearance time te in camera Mj; computes the pixel motion rate Vb of each candidate target in camera Mj by motion tracking; similarly predicts its crossing time t̂b with a linear rate-time model; and, for each pair (Vb, Li,j) in camera Mj, assumes the crossing time tb of candidate target b follows a normal distribution with mean tmean and variance σ², and, based on this distribution, computes the probability Ptimespace = N(te − ts; tmean, σ²) that b appears in camera Mj at time te under the condition (Vb, Li,j);
a re-identification probability calculation and re-ranking module: computes, based on the visual features of query target a and candidate target b, the re-identification probability Pvision of each candidate target b with a target re-identification method; multiplies Pvision by Ptimespace for each candidate target b, takes the resulting product as the target re-identification probability, and ranks by this probability to obtain the final re-identification result.
CN201810543066.3A 2018-05-30 2018-05-30 Space-time correlated target re-identification method and system Active CN108764167B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810543066.3A CN108764167B (en) 2018-05-30 2018-05-30 Space-time correlated target re-identification method and system


Publications (2)

Publication Number Publication Date
CN108764167A true CN108764167A (en) 2018-11-06
CN108764167B CN108764167B (en) 2020-09-29

Family

ID=64004566

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810543066.3A Active CN108764167B (en) 2018-05-30 2018-05-30 Space-time correlated target re-identification method and system

Country Status (1)

Country Link
CN (1) CN108764167B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109558831A (en) * 2018-11-27 2019-04-02 成都索贝数码科技股份有限公司 It is a kind of fusion space-time model across camera shooting head's localization method
CN109598240A (en) * 2018-12-05 2019-04-09 深圳市安软慧视科技有限公司 Video object quickly recognition methods and system again
CN110087039A (en) * 2019-04-30 2019-08-02 苏州科达科技股份有限公司 Monitoring method, device, equipment, system and storage medium
CN110110598A (en) * 2019-04-01 2019-08-09 桂林电子科技大学 The pedestrian of a kind of view-based access control model feature and space-time restriction recognition methods and system again
CN110264497A (en) * 2019-06-11 2019-09-20 浙江大华技术股份有限公司 Track determination method and device, the storage medium, electronic device of duration
CN110728702A (en) * 2019-08-30 2020-01-24 深圳大学 High-speed cross-camera single-target tracking method and system based on deep learning
CN110796074A (en) * 2019-10-28 2020-02-14 桂林电子科技大学 Pedestrian re-identification method based on space-time data fusion
CN111061825A (en) * 2019-12-10 2020-04-24 武汉大学 Method for identifying matching and correlation of space-time relationship between mask and reloading camouflage identity
WO2020093829A1 (en) * 2018-11-09 2020-05-14 阿里巴巴集团控股有限公司 Method and device for real-time statistical analysis of pedestrian flow in open space
CN111178284A (en) * 2019-12-31 2020-05-19 珠海大横琴科技发展有限公司 Pedestrian re-identification method and system based on spatio-temporal union model of map data
CN111666823A (en) * 2020-05-14 2020-09-15 武汉大学 Pedestrian re-identification method based on individual walking motion space-time law collaborative identification
CN113688776A (en) * 2021-09-06 2021-11-23 北京航空航天大学 Space-time constraint model construction method for cross-field target re-identification

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8098888B1 (en) * 2008-01-28 2012-01-17 Videomining Corporation Method and system for automatic analysis of the trip of people in a retail space using multiple cameras
CN103810476A (en) * 2014-02-20 2014-05-21 中国计量学院 Method for re-identifying pedestrians in video monitoring network based on small-group information correlation
CN107133575A (en) * 2017-04-13 2017-09-05 中原智慧城市设计研究院有限公司 A kind of monitor video pedestrian recognition methods again based on space-time characteristic
CN107255468A (en) * 2017-05-24 2017-10-17 纳恩博(北京)科技有限公司 Method for tracking target, target following equipment and computer-readable storage medium
CN107545256A (en) * 2017-09-29 2018-01-05 上海交通大学 A kind of camera network pedestrian recognition methods again of combination space-time and network consistency


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHOUCHENG NI: "LEARNING DISCRIMINATIVE AND SHAREABLE PATCHES FOR SCENE CLASSIFICATION", 《IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP2016)》 *
章东平: "距离度量学习的摄像网络中行人重识别", 《中国计量大学学报》 *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020093829A1 (en) * 2018-11-09 2020-05-14 阿里巴巴集团控股有限公司 Method and device for real-time statistical analysis of pedestrian flow in open space
CN109558831A (en) * 2018-11-27 2019-04-02 成都索贝数码科技股份有限公司 It is a kind of fusion space-time model across camera shooting head's localization method
CN109558831B (en) * 2018-11-27 2023-04-07 成都索贝数码科技股份有限公司 Cross-camera pedestrian positioning method fused with space-time model
CN109598240A (en) * 2018-12-05 2019-04-09 深圳市安软慧视科技有限公司 Video object quickly recognition methods and system again
CN109598240B (en) * 2018-12-05 2019-11-05 深圳市安软慧视科技有限公司 Video object quickly recognition methods and system again
CN110110598A (en) * 2019-04-01 2019-08-09 桂林电子科技大学 The pedestrian of a kind of view-based access control model feature and space-time restriction recognition methods and system again
CN110087039A (en) * 2019-04-30 2019-08-02 苏州科达科技股份有限公司 Monitoring method, device, equipment, system and storage medium
CN110087039B (en) * 2019-04-30 2021-09-14 苏州科达科技股份有限公司 Monitoring method, device, equipment, system and storage medium
CN110264497A (en) * 2019-06-11 2019-09-20 浙江大华技术股份有限公司 Track determination method and device, the storage medium, electronic device of duration
CN110264497B (en) * 2019-06-11 2021-09-17 浙江大华技术股份有限公司 Method and device for determining tracking duration, storage medium and electronic device
CN110728702B (en) * 2019-08-30 2022-05-20 深圳大学 High-speed cross-camera single-target tracking method and system based on deep learning
CN110728702A (en) * 2019-08-30 2020-01-24 深圳大学 High-speed cross-camera single-target tracking method and system based on deep learning
CN110796074A (en) * 2019-10-28 2020-02-14 桂林电子科技大学 Pedestrian re-identification method based on space-time data fusion
CN110796074B (en) * 2019-10-28 2022-08-12 桂林电子科技大学 Pedestrian re-identification method based on space-time data fusion
CN111061825A (en) * 2019-12-10 2020-04-24 武汉大学 Method for identifying matching and correlation of space-time relationship between mask and reloading camouflage identity
CN111178284A (en) * 2019-12-31 2020-05-19 珠海大横琴科技发展有限公司 Pedestrian re-identification method and system based on spatio-temporal union model of map data
CN111666823A (en) * 2020-05-14 2020-09-15 武汉大学 Pedestrian re-identification method based on individual walking motion space-time law collaborative identification
CN113688776A (en) * 2021-09-06 2021-11-23 北京航空航天大学 Space-time constraint model construction method for cross-field target re-identification
CN113688776B (en) * 2021-09-06 2023-10-20 北京航空航天大学 Space-time constraint model construction method for cross-field target re-identification

Also Published As

Publication number Publication date
CN108764167B (en) 2020-09-29

Similar Documents

Publication Publication Date Title
CN108764167A (en) A kind of target of space time correlation recognition methods and system again
CN106096577B (en) A kind of target tracking method in camera distribution map
CN103164706B (en) Object counting method and device based on video signal analysis
CN105894542B (en) A kind of online method for tracking target and device
CN102542289B (en) Pedestrian volume statistical method based on plurality of Gaussian counting models
CN107145851A (en) Constructions work area dangerous matter sources intelligent identifying system
CN106355604B (en) Tracking image target method and system
CN109886241A (en) Driver fatigue detection based on shot and long term memory network
CN103310444B (en) A kind of method of the monitoring people counting based on overhead camera head
WO2007018523A2 (en) Method and apparatus for stereo, multi-camera tracking and rf and video track fusion
CN107145862A (en) A kind of multiple features matching multi-object tracking method based on Hough forest
CN109948471A (en) Based on the traffic haze visibility detecting method for improving InceptionV4 network
Dridi Tracking individual targets in high density crowd scenes analysis of a video recording in hajj 2009
CN106447698B (en) A kind of more pedestrian tracting methods and system based on range sensor
CN106504274A (en) A kind of visual tracking method and system based under infrared camera
CN102663491B (en) Method for counting high density population based on SURF characteristic
CN101303726A (en) System for tracking infrared human body target based on corpuscle dynamic sampling model
CN109583373A (en) A kind of pedestrian identifies implementation method again
Wang et al. Multiple-human tracking by iterative data association and detection update
CN112684430A (en) Indoor old person walking health detection method and system, storage medium and terminal
Liciotti et al. An intelligent RGB-D video system for bus passenger counting
Makrigiorgis et al. Extracting the fundamental diagram from aerial footage
CN109977796A (en) Trail current detection method and device
Bazo et al. Baptizo: A sensor fusion based model for tracking the identity of human poses
CN102184409A (en) Machine-vision-based passenger flow statistics method and system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant