CN103927762A - Target vehicle automatic tracking method and device - Google Patents

Target vehicle automatic tracking method and device

Info

Publication number
CN103927762A
CN103927762A (application CN201310011103.3A; granted as CN103927762B)
Authority
CN
China
Prior art keywords
target
candidate region
information
similarity parameter
target model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310011103.3A
Other languages
Chinese (zh)
Other versions
CN103927762B (en)
Inventor
章合群
潘石柱
张兴明
傅利泉
朱江明
吴军
吴坚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN201310011103.3A
Publication of CN103927762A
Application granted
Publication of CN103927762B
Legal status: Active

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a target vehicle automatic tracking method and device, used to obtain continuous tracking information for target vehicles and thereby provide a richer evidentiary basis for vehicle violation enforcement. The method comprises the following steps: receiving a current frame image; determining detected targets from the detection information contained in the current frame image and establishing a detected-target set; for any tracked target in a tracked-target set, when it is determined that the tracked target matches none of the detected targets in the detected-target set, determining the target region corresponding to the tracked target; determining a current candidate region from the region of the current frame image that corresponds to the target-frame region information; determining a target model for the target region and an observation model for the current candidate region; and determining the tracking trajectory of the tracked target from the determined target model and observation model.

Description

Target vehicle automatic tracking method and device
Technical field
The present invention relates to the technical field of video surveillance, and in particular to a target vehicle automatic tracking method and device.
Background technology
Conventional electronic police systems can be divided, by implementation, into induction-coil systems and video systems. In a coil system, a vehicle triggers a ground induction coil, and the resulting signal pulse prompts the camera to capture the target. Coil systems, however, have the following shortcomings: 1) laying coils requires cutting the road surface, which damages the streetscape and increases labor cost; 2) traffic on ordinary roads is heavy, coils wear out easily, and maintenance cost rises; 3) when several non-motor vehicles move together with the target, the coil is easily falsely triggered and the capture is erroneous; 4) the capture signal carries some delay, so at higher vehicle speeds the captured position can deviate; 5) only single-frame captures are possible, which suffice to identify running a red light but cannot identify violations such as illegal lane changes, going straight from a turn lane, turning from a through lane, speeding, driving on lane markings, driving against traffic, a yellow-plate vehicle occupying a restricted lane, a motor vehicle occupying a bicycle lane, or illegal parking.
A video electronic police system, by contrast, processes every frame, analyzes the video content, detects and recognizes license plates, infers vehicle violations from them, and performs the capture. To preserve the real-time performance of the overall system, current video systems can detect and recognize plates only in a limited area (normally at and below the stop line) and can therefore identify only a limited set of violations such as running a red light, illegal lane changes, driving on lane markings, and driving against traffic. Because they recognize plates only in a limited area, existing video systems yield results of limited scope and cannot provide a richer evidentiary basis for vehicle violation enforcement.
Summary of the invention
Embodiments of the present invention provide a target vehicle automatic tracking method for obtaining continuous tracking information of a target vehicle, thereby providing a richer evidentiary basis for vehicle violation enforcement.
An embodiment of the present invention provides a target vehicle automatic tracking method, comprising:
receiving a current frame image;
extracting the detected targets in the current frame image and establishing a detected-target set;
for each tracked target in a tracked-target set, if the tracked target matches none of the detected targets in the detected-target set, determining from target-frame region information the target region corresponding to the tracked target in the previous frame image, wherein the tracked-target set is established from historical image information and the target-frame region information comprises the center position information and the dimension information of the target-frame region;
taking the region of the current frame image that corresponds to the target-frame region information as the current candidate region, determining the characteristic information of the target region as the target model of the target region, and determining the characteristic information of the candidate region as the observation model of the candidate region;
determining the tracking trajectory of the tracked target from the determined target model and observation model.
An embodiment of the present invention provides a target vehicle automatic tracking device, comprising:
a receiving unit, configured to receive a current frame image;
a target-set establishing unit, configured to extract the detected targets in the current frame image and establish a detected-target set;
a first determining unit, configured, for each tracked target in a tracked-target set, to determine from target-frame region information the target region corresponding to the tracked target in the previous frame image when the tracked target matches none of the detected targets in the detected-target set, the tracked-target set being established from historical image information;
a second determining unit, configured to take the region of the current frame image that corresponds to the target-frame region information as the current candidate region, to determine the characteristic information of the target region as the target model of the target region, and to determine the characteristic information of the candidate region as the observation model of the candidate region;
a third determining unit, configured to determine the tracking trajectory of the tracked target from the determined target model and observation model.
With the target vehicle automatic tracking method and device provided by the embodiments of the present invention, if a tracked target in the tracked-target set matches none of the detected targets in the currently established detected-target set, the region corresponding to the target-frame region information in the previous frame image is determined as the target region, the region corresponding to that same target-frame region information in the current frame image is determined as the current candidate region, the target model of the target region and the observation model of the candidate region are determined respectively, and the tracking trajectory of the tracked target is then determined from the target model and the observation model. In this way, after a tracked target in the tracked-target set is lost, its motion trajectory in subsequent image frames can be estimated from its image information in the previous frame, thereby realizing automatic tracking of the tracked target.
Other features and advantages of the present invention will be set forth in the description that follows, will in part be apparent from the specification, or may be learned by practicing the present invention. The objects and other advantages of the present invention can be realized and obtained by the structures particularly pointed out in the written specification, the claims, and the accompanying drawings.
Brief description of the drawings
The accompanying drawings described herein are provided for a further understanding of the present invention and form a part of it; the schematic embodiments of the present invention and their description serve to explain the present invention and do not unduly limit it. In the drawings:
Fig. 1 is a schematic flowchart of the target vehicle automatic tracking method in an embodiment of the present invention;
Fig. 2 is a schematic flowchart of matching between tracked targets and detected targets in an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of the target vehicle automatic tracking device in an embodiment of the present invention.
Detailed description of the embodiments
In order to obtain continuous tracking information of a target vehicle and thereby provide a richer evidentiary basis for vehicle violation enforcement, embodiments of the present invention provide a target vehicle automatic tracking method and device.
The preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood that the preferred embodiments described here serve only to describe and explain the present invention and are not intended to limit it, and that, where no conflict arises, the embodiments of the present invention and the features in them may be combined with one another.
For a better understanding of the embodiments of the present invention, the mean-shift (MeanShift) algorithm is first reviewed. MeanShift is a non-parametric fast pattern-matching algorithm based on kernel-function probability density estimation. It usually takes a color histogram as the descriptive feature of the target, weights the histogram's probability distribution with a kernel function, computes the similarity between the target model and the candidate region, and obtains the MeanShift motion vector of the target by iteratively seeking the similarity maximum.
Given n sample points {x_i^s}, i = 1, ..., n, in a d-dimensional space R^d with center x_0, the target model can be expressed as \hat{q} = \{\hat{q}_u\}, u = 0, 1, ..., m-1, with \sum_{u=0}^{m-1} \hat{q}_u = 1, where m is the maximum number of quantization levels of the feature space and u is a quantized feature value. The probability of u occurring in the target model is

$$\hat{q}_u = C \sum_{i=1}^{n} k\!\left(\left\|\frac{x_i^s - x_0}{h}\right\|^2\right) \delta\!\left[b(x_i^s) - u\right] \qquad (1)$$

where C is a normalization coefficient, k(x) is the profile function of the kernel, h is the kernel window width, \|(x_i^s - x_0)/h\|^2 is the squared normalized distance of each point of the target model to the center x_0, and \delta is the Kronecker delta, i.e. \delta[b(x_i^s) - u] = 1 if b(x_i^s) = u and 0 otherwise, with b(\cdot) mapping a pixel position to the quantized level of its pixel value.

Correspondingly, let {x_i^s}, i = 1, ..., n_h, be the sample points of the candidate region with center y. The observation model of the candidate region can be expressed as \hat{p}(y) = \{\hat{p}_u(y)\}, u = 0, 1, ..., m-1, with \sum_{u=0}^{m-1} \hat{p}_u(y) = 1, where m is again the maximum number of quantization levels. The probability of u occurring in the observation model is

$$\hat{p}_u(y) = C_h \sum_{i=1}^{n_h} k\!\left(\left\|\frac{x_i^s - y}{h}\right\|^2\right) \delta\!\left[b(x_i^s) - u\right] \qquad (2)$$

where C_h is a normalization coefficient, k(x) is the profile function of the kernel, h is the kernel window width, \|(x_i^s - y)/h\|^2 is the squared normalized distance of each point of the observation model to the center y, and \delta is the Kronecker delta.

Target tracking can therefore be reduced to finding the optimal y that makes \hat{p}(y) most similar to \hat{q}, with the similarity measured by the Bhattacharyya coefficient:

$$\hat{\rho}(y) \equiv \rho[\hat{p}(y), \hat{q}] = \sum_{u=1}^{m} \sqrt{\hat{p}_u(y)\, \hat{q}_u} \qquad (3)$$

Concretely, the maximization proceeds as follows: take the previous center position y_0 of the target as the initial value of the current computation, then search the neighborhood of y_0 for the optimal target position y_1 that maximizes \hat{\rho}(y). A Taylor expansion of \rho[\hat{p}(y), \hat{q}] about \hat{p}_u(y_0) gives

$$\rho[\hat{p}(y), \hat{q}] \approx \frac{1}{2} \sum_{u=1}^{m} \sqrt{\hat{p}_u(y_0)\, \hat{q}_u} + \frac{1}{2} \sum_{u=1}^{m} \hat{p}_u(y) \sqrt{\frac{\hat{q}_u}{\hat{p}_u(y_0)}} \qquad (4)$$

Substituting (2) into (4) and omitting higher-order terms yields

$$\rho[\hat{p}(y), \hat{q}] \approx \frac{1}{2} \sum_{u=1}^{m} \sqrt{\hat{p}_u(y_0)\, \hat{q}_u} + \frac{C_h}{2} \sum_{i=1}^{n_h} w_i\, k\!\left(\left\|\frac{y - x_i}{h}\right\|^2\right) \qquad (5)$$

where

$$w_i = \sum_{u=1}^{m} \delta\!\left[b(x_i) - u\right] \sqrt{\frac{\hat{q}_u}{\hat{p}_u(y_0)}} \qquad (6)$$

From (5), the first term is independent of y, so the maximum can be obtained by running the MeanShift iteration on the second term of (5) to find the optimal target position y_1. The iteration formula for the optimal target position is

$$y_1(x) = \frac{\sum_{i=1}^{n_h} G\!\left(\frac{x_i - x}{h}\right) w(x_i)\, x_i}{\sum_{i=1}^{n_h} G\!\left(\frac{x_i - x}{h}\right) w(x_i)} \qquad (7)$$

where w(x_i) is the weight of each target point in the observation model, and the kernel G(x) has profile function g(x) = -k'(x), i.e. G(x) = g(\|x\|^2).
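For illustration, the following is a minimal NumPy sketch of the kernel-weighted histogram of equations (1)/(2) and one position update per equations (6)/(7). The function names, the 32-level quantization, and the choice of the unit Gaussian profile k(x) = e^{-x} (the profile adopted later in this embodiment) are assumptions of the sketch, not part of the patent text:

```python
import numpy as np

def hist_model(gray_q, cx, cy, w, h, m=32):
    # Kernel-weighted histogram of quantized gray levels, eqs. (1)/(2):
    # each pixel contributes k(||(x - x0)/h||^2) to the bin of its level.
    # gray_q is an integer image with values in [0, m).
    ys, xs = np.mgrid[0:gray_q.shape[0], 0:gray_q.shape[1]]
    r2 = ((xs - cx) / w) ** 2 + ((ys - cy) / h) ** 2
    k = np.exp(-r2)  # profile of the unit Gaussian kernel, k(x) = e^{-x}
    hist = np.bincount(gray_q.ravel(), weights=k.ravel(), minlength=m)
    return hist / hist.sum()  # the normalization coefficient C / C_h

def meanshift_step(gray_q, q, cx, cy, w, h, m=32):
    # One update of eq. (7): weights w_i from eq. (6), then the
    # kernel-weighted centroid gives the new center estimate y1.
    p = hist_model(gray_q, cx, cy, w, h, m)
    ys, xs = np.mgrid[0:gray_q.shape[0], 0:gray_q.shape[1]]
    r2 = ((xs - cx) / w) ** 2 + ((ys - cy) / h) ** 2
    g = np.exp(-r2)  # g(x) = -k'(x) = e^{-x} for the Gaussian profile
    wi = np.sqrt(q[gray_q] / np.maximum(p[gray_q], 1e-12))  # eq. (6)
    den = (g * wi).sum()
    return (g * wi * xs).sum() / den, (g * wi * ys).sum() / den  # eq. (7)
```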
Based on the foregoing, in an embodiment of the present invention the tracked-target information in the tracked-target chain can be read in order and each tracked target tracked automatically. The implementation of the embodiment is described below taking a single target as an example.
Fig. 1 is a schematic flowchart of the target vehicle automatic tracking method provided by an embodiment of the present invention, comprising the following steps:
S101: receive a current frame image;
S102: extract the detected targets in the current frame image and establish a detected-target set;
In a concrete implementation, the target vehicle plate-number information can be recognized in the regions of the current frame image corresponding to at least one piece of preset search-region information, where the search-region information comprises the position information and the dimension information of a search region, and each recognized target vehicle plate number is one detected target. Thus, after the current frame image is received, the detection and recognition results in the frame are added to the detected-target set; the detection result mainly comprises the position information and dimension information of the target vehicle plate, and the recognition result mainly comprises the number information and color information of the target vehicle plate. Preferably, in order to locate the initial position of a valid detected target quickly and accurately, the size of the search region (generally a rectangular box) can be configured, for example 200x200 pixels in a 2-megapixel image and 300x400 pixels in a 5-megapixel image. Detection is performed in the configured search regions and the recognition results are extracted to establish the detected-target set, which comprises at least one detected target; each detected target comprises at least the number information and color information of the target vehicle plate, and may of course also comprise the dimension information and position information of the plate.
Thus each detected target can, without limitation, comprise the following four attributes: the number information, color information, position information, and dimension information of the target vehicle plate.
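For concreteness, a minimal Python sketch of such a four-attribute detection record and the per-frame detection set; all field names and types here are illustrative assumptions, not terminology from the patent:

```python
from dataclasses import dataclass

@dataclass
class DetectedTarget:
    plate_number: str  # recognized plate string (number information)
    plate_color: str   # plate color information
    position: tuple    # (x, y) position of the plate box
    size: tuple        # (width, height) of the plate box

# One detection set per frame: every plate recognized inside the
# preset search regions becomes one DetectedTarget in this list.
detection_set: list[DetectedTarget] = []
```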
S103: for each tracked target in the tracked-target set, if the tracked target matches none of the detected targets in the detected-target set, determine the target region corresponding to the tracked target;
Here, the tracked-target set is established from historical image information. Concretely, it is composed of those detected targets, from detected-target sets established from the detection information extracted in previously acquired images, that satisfy a given condition. Clearly, each tracked target can likewise, without limitation, comprise the four attributes listed above for a detected target. Accordingly, for each tracked target, the preset search-region information in step S102 can take the position of that tracked target as the position of one search region: when the current frame image is received, a search region is determined in it, anchored at the tracked target's position and sized by the preset search-region dimension information. It follows that the preset search-region information contains as many entries as there are tracked targets in the tracked-target set.
In a concrete implementation, the target region corresponding to a tracked target is the region determined, in the previous frame image, from the target-frame region information of that tracked target, where the target-frame region information can, without limitation, comprise the center position information and the dimension information of the target-frame region.
S104: determine the current candidate region from the region of the current frame image that corresponds to this target-frame region information;
S105: determine the target model of the target region and the observation model of the current candidate region, respectively;
S106: determine the tracking trajectory of the tracked target from the determined target model and observation model.
In step S103, for each tracked target in the tracked-target set and each detected target in the detected-target set, the plate-number information of the target vehicle is used as the sole matching criterion between a detected target and a tracked target: a single character is considered matched only when both its content and its position in the plate-number information agree; the matched characters are assembled in order into a match substring whose length is then counted. The concrete matching flow, shown in Fig. 2, can comprise the following steps:
S1031: for each detected target in the detected-target set, extract the plate-number information from the detection information of the detected target;
S1032: judge whether the extracted plate-number information is empty; if so, return to step S1031; otherwise, execute step S1033;
S1033: for this detected target, take each tracked target in the tracked-target set in turn and judge whether some tracked target lies within its search range; if so, execute step S1034; otherwise, execute step S1038;
S1034: for each such tracked target in the tracked-target set, count the number of characters of the tracked target that match the characters of the detected target;
Concretely, extract in turn each character of the plate-number information of the tracked target, and judge whether it is identical to the character at the same position in the plate-number information of the detected target; if identical, the character match succeeds and the count of matched characters is incremented;
S1035: judge whether the number of matched characters exceeds a preset threshold; if so, execute step S1036; if not, execute step S1037;
S1036: mark this tracked target as successfully matched, and execute step S1039;
S1037: mark this tracked target as unmatched, and end the flow;
In a concrete implementation, since a conventional Chinese license plate generally consists of 7 characters, the preset threshold can, without limitation, be set to 4 in order to guarantee both the validity of the match and a certain error tolerance. In particular, if a tracked target matches several detected targets, the detected target with the greatest number of matched characters can be taken as the optimal result.
S1038: add this detected target to the tracked-target set, and return to step S1031;
S1039: update the corresponding information of this tracked target with the detection information of the detected target.
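For illustration, a minimal Python sketch of the character-position matching of steps S1034-S1037; reading "exceeds the threshold" as at-least-4-of-7 and breaking ties by the maximal match count (per the note above) are assumptions of the sketch:

```python
def match_count(track_plate: str, det_plate: str) -> int:
    # A character matches only if content AND position both agree (S1034).
    return sum(1 for a, b in zip(track_plate, det_plate) if a == b)

def best_match(track_plate, detections, threshold=4):
    # S1035-S1037: take the detection with the most matching characters;
    # succeed only if the count reaches the threshold (4 is the preferred
    # value for the usual 7-character Chinese plate; ">=" is an assumption).
    best = max(detections,
               key=lambda d: match_count(track_plate, d.plate_number),
               default=None)
    if best and match_count(track_plate, best.plate_number) >= threshold:
        return best  # mark as successfully matched
    return None      # mark as unmatched
```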
In an embodiment of the present invention, for any tracked target in the tracked-target set whose matching fails, automatic tracking can be started with the tracked target's image information in the previous frame image as the initial condition. The classical MeanShift algorithm performs well only for single-target tracking based on color features and is sensitive to target scale. The embodiment of the present invention addresses the automatic tracking of multiple vehicle targets in traffic application scenarios while adapting the target-frame size; to guarantee real-time performance, only the gray-level information of the image is used for tracking.
Concretely, for a tracked target whose matching fails, the target region corresponding to the tracked target in the previous frame image is obtained from the target-frame region information, which comprises the target-frame center point information and the target-frame dimension information; the region determined in the previous frame image by the preset target-frame center point and size is taken as the target region, and the target-region image is cropped out.
Preferably, since mainstream high-definition cameras generally provide 2-megapixel or 5-megapixel images, to guarantee real-time automatic tracking of multiple targets the image can, in the embodiment of the present invention, be scaled to 1/4 of its original width and 1/4 of its original height before processing. In particular, since the target region is a license plate and therefore small, and since the theory behind MeanShift implies that the higher the overlap of the target between two consecutive frames, the more accurate the computed MeanShift motion vector, the target-frame region can be expanded by a fixed rule before the target-region image is cropped. For example, the region obtained by expanding the target frame by 20 pixels in each of the four directions (up, down, left, right) can be taken as the target-region image. After the target-region image is determined, its characteristic information is stored as the target model. Likewise, the region determined by the target-frame region information on the current frame image is taken as the current candidate region, and the characteristic information of the current candidate-region image is stored as the observation model; when the candidate-region image is obtained, the candidate region can also be expanded by 20 pixels in each of the four directions.
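For illustration, a minimal OpenCV/NumPy sketch of this downscale-and-expand preprocessing; the use of cv2.resize, the clamping at the image border, and applying the 20-pixel margin after scaling (rather than before) are assumptions of the sketch:

```python
import cv2

def prepare_roi(img, cx, cy, w, h, margin=20, scale=0.25):
    # Downscale the frame to 1/4 width and 1/4 height for real-time
    # multi-target tracking, then crop the target frame expanded by
    # `margin` pixels on all four sides, clamped to the image bounds.
    small = cv2.resize(img, None, fx=scale, fy=scale)
    cx, cy, w, h = (int(v * scale) for v in (cx, cy, w, h))
    x0 = max(cx - w // 2 - margin, 0)
    y0 = max(cy - h // 2 - margin, 0)
    x1 = min(cx + w // 2 + margin, small.shape[1])
    y1 = min(cy + h // 2 + margin, small.shape[0])
    return small[y0:y1, x0:x1]
```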
It should be noted that in the embodiment of the present invention the characteristic information of an image can, without limitation, comprise the image's gray levels and their histogram statistics, its gradients and their histogram statistics, and image texture information extracted with operators such as corner points, LBP (local binary patterns), or SIFT (scale-invariant feature transform).
Based on the target model and observation model determined above, the kernel is taken to be the unit Gaussian function, whose profile function is k(x) = e^{-x}. On this basis, in the embodiment of the present invention, step S106 can perform automatic tracking according to the following steps:
Step 1061: determine the gray probability density histograms of the target model and of the observation model, respectively.
Concretely, the gray probability density histogram of the target model is determined according to the formula

$$\mathrm{ModelHist}(i) = \sum_{x}\sum_{y} k(R^2)\,\delta[\mathrm{gray}(x,y) - i]$$

where, over the two-dimensional image space, k(x) = e^{-x};

$$R^2 = \left(\frac{x - x_0}{w}\right)^2 + \left(\frac{y - y_0}{h}\right)^2;$$

(x, y) is the coordinate of an arbitrary point of the target region; (x_0, y_0) is the center point coordinate of the target region; w is half the target-region width; h is half the target-region height; R^2 is the squared normalized distance from a point (x, y) of the target region to the current center point (x_0, y_0); i is the gray quantization level (in a concrete implementation the total number of quantization levels can, without limitation, be 32, in which case i = 0, 1, 2, ..., 31); and gray(x, y) is the quantization level of the gray value at (x, y) after quantization.
Similarly, the gray probability density histogram of the observation model can be determined according to the formula

$$\mathrm{NewHist}(i) = \sum_{x'}\sum_{y'} k(R'^2)\,\delta[\mathrm{gray}(x',y') - i], \qquad R'^2 = \left(\frac{x' - x'_0}{w'}\right)^2 + \left(\frac{y' - y'_0}{h'}\right)^2,$$

where (x', y') is the coordinate of an arbitrary point of the candidate region; (x'_0, y'_0) is the center point coordinate of the candidate region; w' is half the candidate-region width; and h' is half the candidate-region height.
Step 1062: determine the similarity parameter between the observation model and the target model.
The traditional similarity measure is the Bhattacharyya coefficient \rho = \sum_b \sqrt{p_b\, q_b}; since this formula performs a multiplication and a square root for every gray level, the larger m is, the more time the computation takes and the greater the processing overhead of the device. In the embodiment of the present invention, to reduce device resource overhead, a simpler and effective similarity formula is adopted:

$$\mathrm{sim} = \rho \sum_{b=0}^{m-1} \min(p_b, q_b)$$

where \rho is a normalization coefficient, b is a quantized gray level, p_b is the probability of b occurring in the observation model, q_b is the probability of b occurring in the target model, m is the maximum number of quantization levels (for example m = 32), and min() takes the minimum of its arguments.
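For illustration, a minimal sketch of this intersection-style similarity, assuming the normalization coefficient rho simply normalizes both histograms to unit sum so that a perfect match yields 1.0:

```python
import numpy as np

def similarity(model_hist, new_hist):
    # Histogram-intersection similarity (step 1062): bin-wise minima
    # summed over the m quantized gray levels -- no per-bin multiply
    # or square root as in the Bhattacharyya coefficient.
    p = model_hist / max(model_hist.sum(), 1e-12)
    q = new_hist / max(new_hist.sum(), 1e-12)
    return float(np.minimum(p, q).sum())
```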
Step 1063: judge whether the computed similarity parameter meets a preset condition; if so, end the flow; otherwise, execute step 1064.
In a concrete implementation, the similarity parameter is determined to meet the preset condition when any of the following three conditions holds (a small sketch of this stopping test follows the list):
Condition one: judge whether the determined similarity parameter is greater than a preset value Th_1; if so, the similarity parameter meets the preset condition, otherwise it does not. The higher Th_1 is, the greater the confidence in the current mean-shift result; in a concrete implementation Th_1 can, without limitation, be set to 0.99.
Condition two: judge whether the distance between the center point of the re-determined current candidate region and the center position of the previously determined current candidate region is less than or equal to a preset value Th_2; if so, the similarity parameter meets the preset condition, otherwise it does not. The smaller Th_2 is, the greater the confidence in the current mean-shift result; in a concrete implementation Th_2 can, without limitation, be set to 0. The method for re-determining the center point of the current candidate region is described in step 1064.
Condition three: judge whether the number of times the step of determining the similarity parameter has been re-executed exceeds a preset value Th_3; if so, the similarity parameter meets the preset condition, otherwise it does not. This amounts to judging whether the number of MeanShift iterations exceeds Th_3: according to classical MeanShift theory, once the iteration count passes a certain threshold, further iterations barely affect the final result but do increase the time cost; preferably, Th_3 can, without limitation, be set to 5.
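A small sketch of this three-way stopping test, with the preferred values Th_1 = 0.99, Th_2 = 0, Th_3 = 5 as defaults:

```python
def converged(sim, step_dist, n_iter, th1=0.99, th2=0.0, th3=5):
    # Stop when any condition holds: similarity high enough (Th_1),
    # center displacement small enough (Th_2), or iteration budget
    # exhausted (Th_3).
    return sim > th1 or step_dist <= th2 or n_iter > th3
```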
Step 1064: re-determine the center position information of the current candidate region, re-determine the current candidate region from the re-determined center position information and the preset target-frame region size information, and return to steps 1061-1062.
In a concrete implementation, the center position information of the current candidate region can be re-determined as follows:
Determine the probability density kernel weight of each target point of the observation model according to the formula

$$\omega(x,y) = g(R^2)\,\frac{\mathrm{ModelHist}(i)}{\mathrm{NewHist}(i)}, \qquad g(x) = e^{-x}, \quad i = \mathrm{gray}(x,y),$$

then compute the local gray probability density extreme point, i.e. the re-determined center position of the current candidate region:

$$M_x = \sum_{x}\sum_{y} \omega(x,y)\,x, \qquad M_y = \sum_{x}\sum_{y} \omega(x,y)\,y,$$

where (M_x, M_y) is the recomputed center position of the current candidate region.
After one MeanShift iteration, the Euclidean distance between the new center point (M_x, M_y) and the previous center point (x_0, y_0), i.e. the MeanShift step length, is

$$\mathrm{StepDist} = \sqrt{(x_0 - M_x)^2 + (y_0 - M_y)^2},$$

after which the new center point (M_x, M_y) is assigned to (x_0, y_0).
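For illustration, a minimal NumPy sketch of one such center update; dividing by the total weight, in the spirit of equation (7), and the numerical floor on NewHist are assumptions added to make the sketch runnable:

```python
import numpy as np

def recenter(gray_q, model_hist, new_hist, cx, cy, w, h):
    # Step 1064: per-pixel weight g(R^2) * ModelHist(i)/NewHist(i),
    # with i the pixel's quantized gray level, then the weighted
    # centroid (M_x, M_y) and the MeanShift step length StepDist.
    ys, xs = np.mgrid[0:gray_q.shape[0], 0:gray_q.shape[1]]
    r2 = ((xs - cx) / w) ** 2 + ((ys - cy) / h) ** 2
    ratio = model_hist[gray_q] / np.maximum(new_hist[gray_q], 1e-12)
    wgt = np.exp(-r2) * ratio            # g(x) = e^{-x}
    mx = (wgt * xs).sum() / wgt.sum()    # normalization per eq. (7)
    my = (wgt * ys).sum() / wgt.sum()
    step = np.hypot(cx - mx, cy - my)    # StepDist
    return mx, my, step                  # new center replaces (x0, y0)
```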
In a concrete implementation, after each iteration the target-frame region information needs to be revised according to the MeanShift algorithm. The method commonly taken in the MeanShift field revises the target frame in increments of +/-10%, computing repeatedly within each iteration and taking as optimal the size at which the similarity between the observation-model histogram and the target-model histogram is highest (rho maximal). This method works fairly well when the target-frame size grows, but cannot handle adaptation when the size shrinks. In the electronic police application scenario, however, the normal movement tendency of a vehicle is from near view to far view, so the target-frame size shrinks gradually. In the present invention, the particularity of the scene is exploited: from the bottom edge of the image up to the stop line is the near-field scene, where the target is large, its features are distinct, and plate detection and recognition results are good, so size adaptation can be carried out during automatic tracking.
On this basis, the vehicle target automatic tracking method provided by the embodiment of the present invention can further comprise the following steps before re-determining the center position information of the current candidate region:
judging, from preset first-position information and the re-determined center position information of the current candidate region, whether the center point of the candidate region has passed the preset first position;
if so, revising the target-frame region size information according to the formula

$$\mathrm{CurrSize} = \mathrm{BaseSize} \times \left(1 - \mathrm{ScaleRatio} \times \frac{\mathrm{CurrLineDist}}{\mathrm{LineDist}}\right)$$

where CurrSize is the revised target-frame size information; BaseSize is the target-frame size information before revision; ScaleRatio, CurrScaleRatio, and LineDist are preset values; and CurrLineDist is the straight-line distance from the center point of the current candidate region to the first position.
In particular, after the target-frame region size information is revised, the method further comprises: judging, from preset second-position information and the re-determined center position information of the current candidate region, whether the center point of the candidate region has passed the second position; and, if so, deleting the tracked target from the tracked-target set.
Concretely, in the embodiment of the present invention, the initial position for size adaptation (the first position) is set at the stop line, and the farthest position (the second position, which is also the position where tracking ends) is set at the nearest edge of the far-view zebra crossing. Normally the straight-line distance from the stop line to the nearest edge of the far-view zebra crossing is 700-900 pixels, over which the target-region image shrinks on average to 0.36 of its full size. Therefore, in the embodiment of the present invention, the mean stop-line distance is taken as LineDist = 800 pixels and the maximum shrink range as ScaleRatio = 1 - 0.36 = 0.64. Writing CurrScaleRatio for the scaling rate at the current position, CurrLineDist for the straight-line distance from the current position to the stop line, and BaseSize for the original target size, the current size CurrSize is:

$$\mathrm{CurrSize} = \mathrm{BaseSize} \times (1 - \mathrm{CurrScaleRatio}) = \mathrm{BaseSize} \times \left(1 - \mathrm{ScaleRatio} \times \frac{\mathrm{CurrLineDist}}{\mathrm{LineDist}}\right)$$

In a concrete implementation, while the target-frame center point is at or below the stop line, no size adaptation is performed; only once it has passed the stop line is the target-frame size revised according to the formula above, the revised size being used at the next automatic tracking step. When a tracked target has passed the nearest edge of the far-view zebra crossing, the distance covered already suffices for judging and capturing the various violations, and the automatically tracked target can be deleted early; likewise, a target that has turned left or right and reached the edge of the video is deleted early. As a special case, for a vehicle driving against traffic, by the overall configuration of the electronic police system no plate can be recognized above the stop line, so there is no detection or recognition result there; plate detection and recognition results appear only after it travels downward across the stop line, and if automatic tracking is started at that point, it proceeds without size adaptation.
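For illustration, a sketch of this size adaptation; treating positions at or before the stop line as "no change" and clamping the distance at LineDist are assumptions drawn from the surrounding description:

```python
def adapt_size(base_size, curr_line_dist, line_dist=800.0, scale_ratio=0.64):
    # Shrink the target frame linearly with distance past the stop line:
    # at the far zebra-crossing edge (curr_line_dist == line_dist) the
    # frame has shrunk by scale_ratio, i.e. to 0.36 of base_size.
    if curr_line_dist <= 0:  # at or before the stop line: no adaptation
        return base_size
    d = min(curr_line_dist, line_dist)  # clamp; past this the target is deleted
    return base_size * (1.0 - scale_ratio * d / line_dist)
```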
It should be noted that the numeric values above are merely preferred examples for the embodiment of the present invention; in a concrete implementation they can be adjusted to actual needs, and the embodiment of the present invention places no limitation on them.
In a concrete implementation, after the MeanShift iteration described above, the MeanShift result is considered valid if the rho value is greater than a preset threshold Th_4; otherwise, the target is deleted early. Under normal circumstances each linearly moving target is tracked independently without mutual interference, and the rho values are all fairly large; but when targets occlude one another, turn or make a U-turn, or the ambient light changes abruptly, the rho value may drop markedly. Therefore, to guarantee the continuity of tracking while suppressing mean-shift results with large deviations, Th_4 can, without limitation, be set to 0.3 in the embodiment of the present invention.
For a valid MeanShift result, the rho value is further compared with a preset threshold Th_5; if it is greater than or equal to this threshold, the current result is considered very close to the true target position and can be used to update the target model. Th_5 can, without limitation, be set to 0.9, and the target-model update method is as follows:
If the target-frame size after the adaptive revision of the current MeanShift result is smaller than the target-frame size of the target template, then, taking the target-model center point as the reference, a region of the revised MeanShift target-frame size is cropped as the target region; otherwise the update is performed at the original size of the target model. The update formula is pixelModel(x, y) = alpha * pixelCurr(x, y) + (1 - alpha) * pixelModel(x, y), where (x, y) is the offset of each point of the current target frame relative to the target-frame origin of the pre-update target model; pixelModel(x, y) on the left side is the gray value of the corresponding pixel of the updated target model and on the right side that of the pre-update target model; pixelCurr(x, y) is the gray value of the corresponding pixel of the current tracking result; and alpha is a preset value with alpha in (0, 1); preferably, for fast updating, alpha can, without limitation, be set to 0.9.
If the rho value is less than Th_5, the target template is not updated. In the special cases of a vehicle making a U-turn or vehicles driving close together and occluding one another, the rho value after each frame's MeanShift iteration stays below Th_5 and the target model is then never updated; but as long as the target-frame size adaptation is carried out, accurate automatic tracking results can still be obtained.
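For illustration, a minimal blend implementing this update rule; the float conversion is an assumption added for numerical safety:

```python
import numpy as np

def update_template(model, current, alpha=0.9):
    # Blend the current tracking result into the target model when
    # rho >= Th_5: pixelModel = alpha*pixelCurr + (1-alpha)*pixelModel.
    # Both gray patches must already have the same shape.
    return alpha * current.astype(np.float32) + (1.0 - alpha) * model.astype(np.float32)
```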
Based on the same inventive concept, an embodiment of the present invention also provides a target vehicle automatic tracking device. Since the principle by which this device solves the problem is similar to that of the target vehicle automatic tracking method, the implementation of the device can refer to the implementation of the method, and repeated parts are not described again.
Fig. 3 is a schematic structural diagram of the target vehicle automatic tracking device provided by an embodiment of the present invention, comprising:
a receiving unit 301, configured to receive a current frame image;
a target-set establishing unit 302, configured to extract the detected targets in the current frame image and establish a detected-target set;
a first determining unit 303, configured, for each tracked target in a tracked-target set, to determine from target-frame region information the target region corresponding to the tracked target in the previous frame image when it is determined that the tracked target matches none of the detected targets in the detected-target set, the tracked-target set being established from historical image information;
a second determining unit 304, configured to take the region of the current frame image that corresponds to the target-frame region information as the current candidate region, to determine the characteristic information of the target region as the target model of the target region, and to determine the characteristic information of the candidate region as the observation model of the candidate region;
a third determining unit 305, configured to determine the tracking trajectory of the tracked target from the determined target model and observation model.
In a concrete implementation, the target vehicle automatic tracking device provided by the embodiment of the present invention can further comprise a first judging unit 306 and a fourth determining unit 307, wherein:
the third determining unit 305 can be configured to determine, from the determined target model and observation model, the similarity parameter between the observation model and the target model according to a preset algorithm; and, when the judgment result of the first judging unit 306 is negative, to re-determine the current candidate region from the center position information re-determined by the fourth determining unit 307 and the dimension information of the target-frame region, and to re-execute the step of determining the similarity parameter until the similarity parameter meets the preset condition;
the first judging unit 306 is configured to judge whether the similarity parameter determined by the third determining unit 305 meets the preset condition;
the fourth determining unit 307 is configured to re-determine the center position information of the current candidate region when the judgment result of the first judging unit 306 is negative.
Here, the third determining unit 305 can comprise:
a first determining subunit, configured to determine the gray probability density histograms of the target model and of the observation model, respectively;
a second determining subunit, configured to determine the similarity parameter between the gray probability density histogram of the target model and that of the observation model according to the formula

$$\mathrm{sim} = \rho \sum_{b=0}^{m-1} \min(p_b, q_b)$$

where rho is a normalization coefficient; b is a quantized gray level; p_b is the probability of b occurring in the observation model; q_b is the probability of b occurring in the target model; m is the number of quantization levels; and min() takes the minimum of its arguments.
The first determining subunit is specifically configured to determine the gray probability density histogram of the target model according to the formula

$$\mathrm{ModelHist}(i) = \sum_{x}\sum_{y} k(R^2)\,\delta[\mathrm{gray}(x,y) - i], \qquad k(x) = e^{-x}, \quad R^2 = \left(\frac{x - x_0}{w}\right)^2 + \left(\frac{y - y_0}{h}\right)^2,$$

where (x, y) is the coordinate of an arbitrary point of the target region; (x_0, y_0) is the center point coordinate of the target region; w is half the target-region width; h is half the target-region height; i is the gray quantization level; and gray(x, y) is the quantization level of the gray value at (x, y) after quantization; and to determine the gray probability density histogram of the observation model according to the formula

$$\mathrm{NewHist}(i) = \sum_{x'}\sum_{y'} k(R'^2)\,\delta[\mathrm{gray}(x',y') - i], \qquad R'^2 = \left(\frac{x' - x'_0}{w'}\right)^2 + \left(\frac{y' - y'_0}{h'}\right)^2,$$

where (x', y') is the coordinate of an arbitrary point of the candidate region; (x'_0, y'_0) is the center point coordinate of the candidate region; w' is half the candidate-region width; and h' is half the candidate-region height.
In a concrete implementation, the fourth determining unit 307 comprises:
a third determining subunit, configured to determine the probability density kernel weight of each target point of the observation model according to the formula

$$\omega(x,y) = g(R^2)\,\frac{\mathrm{ModelHist}(i)}{\mathrm{NewHist}(i)}, \qquad g(x) = e^{-x};$$

a fourth determining subunit, configured to re-determine the center position information of the current candidate region according to the formulas

$$M_x = \sum_{x}\sum_{y} \omega(x,y)\,x, \qquad M_y = \sum_{x}\sum_{y} \omega(x,y)\,y,$$

where (M_x, M_y) is the recomputed center position information of the current candidate region.
In a concrete implementation, the vehicle target automatic tracking device provided by the embodiment of the present invention can further comprise:
an expanding unit, configured to expand the target region by a predetermined number of pixels in each of the four directions (up, down, left, right) before the second determining unit 304 determines the target model of the target region.
In a concrete implementation, the vehicle target automatic tracking device provided by the embodiment of the present invention can further comprise:
a second judging unit, configured to judge, from preset first-position information and the center position information of the current candidate region re-determined by the fourth determining unit 307, whether the center point of the candidate region has passed the preset first position;
a revising unit, configured, when the judgment result of the second judging unit is affirmative, to revise the target-frame region size information according to the formula

$$\mathrm{CurrSize} = \mathrm{BaseSize} \times \left(1 - \mathrm{ScaleRatio} \times \frac{\mathrm{CurrLineDist}}{\mathrm{LineDist}}\right)$$

where CurrSize is the revised target-frame size information; BaseSize is the target-frame size information before revision; ScaleRatio is a first preset value; CurrScaleRatio is a second preset value; CurrLineDist is the straight-line distance from the center point of the current candidate region to the first position; and LineDist is a third preset value.
In a concrete implementation, the vehicle target automatic tracking device provided by the embodiment of the present invention can further comprise:
a third judging unit, configured, after the revising unit revises the target-frame region size information, to judge, from preset second-position information and the re-determined center position information of the current candidate region, whether the center point of the candidate region has passed the second position;
a first deleting unit, configured to delete the tracked target from the tracked-target set when the judgment result of the third judging unit is affirmative.
In a concrete implementation, the first judging unit 306 can be configured to judge whether the similarity parameter is greater than a fourth preset value and, if so, determine that the similarity parameter meets the preset condition, otherwise that it does not; or to judge whether the distance between the center point of the re-determined current candidate region and the center point of the previously determined current candidate region is less than or equal to a fifth preset value and, if so, determine that the similarity parameter meets the preset condition, otherwise that it does not; or to judge whether the number of times the step of determining the similarity parameter has been re-executed exceeds a sixth preset value and, if so, determine that the similarity parameter meets the preset condition, otherwise that it does not.
In a concrete implementation, the vehicle target automatic tracking device provided by the embodiment of the present invention can further comprise:
an updating unit, configured, if the similarity parameter is greater than a seventh preset value, to update the target model according to the formula pixelModel(x, y) = alpha * pixelCurr(x, y) + (1 - alpha) * pixelModel(x, y), where (x, y) is the offset of each point of the current target frame relative to the target-frame origin of the pre-update target model; pixelModel(x, y) on the left side is the gray value of the corresponding pixel of the updated target model and on the right side that of the pre-update target model; pixelCurr(x, y) is the gray value of the corresponding pixel of the current tracking result; alpha is an eighth preset value; and the seventh preset value is less than the fourth preset value.
In a concrete implementation, the vehicle target automatic tracking device provided by the embodiment of the present invention can further comprise:
a second deleting unit, configured, before the first judging unit judges whether the similarity parameter meets the preset condition, to delete the tracked target from the tracked-target set if the similarity parameter is less than a ninth preset value.
With the target vehicle automatic tracking method and device provided by the embodiments of the present invention, if a tracked target in the tracked-target set matches none of the detected targets in the currently established detected-target set, the region corresponding to the target-frame region information in the previous frame image is determined as the target region and the region corresponding to that information in the current frame image as the current candidate region, and the target model of the target region and the observation model of the candidate region are determined respectively. Further, the similarity parameter between the observation model and the target model is computed with a preset algorithm; if it does not meet the preset condition, the center point of the current candidate region is re-determined, the current candidate region is re-determined from that center point and the dimension information in the target-frame region information, the similarity parameter between the target model and the observation model of the re-determined current candidate region is computed, and these steps are repeated until the similarity parameter meets the preset condition. In this way, after a tracked target in the tracked-target set is lost, its motion trajectory in subsequent image frames can be estimated from its image information in the previous frame, thereby realizing automatic tracking of the tracked target.
Those skilled in the art will appreciate that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks in them, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data-processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data-processing device produce means for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data-processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means that implement the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data-processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
Although preferred embodiments of the present invention have been described, those skilled in the art, once apprised of the basic inventive concept, can make further changes and modifications to these embodiments. The appended claims are therefore intended to be construed as covering the preferred embodiments and all changes and modifications falling within the scope of the present invention.
Obviously, those skilled in the art can make various changes and variations to the present invention without departing from its spirit and scope. If these modifications and variations of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to encompass them.

Claims (22)

1. A target vehicle automatic tracking method, characterized by comprising:
receiving a current frame image;
extracting the detected targets in the current frame image and establishing a detected-target set;
for each tracked target in a tracked-target set, if the tracked target matches none of the detected targets in the detected-target set, determining, from the target-frame region information corresponding to the tracked target, the target region corresponding to the tracked target in the previous frame image, wherein the tracked-target set is established from historical image information and the target-frame region information comprises the center position information and the dimension information of the target-frame region;
taking the region of the current frame image that corresponds to the target-frame region information as the current candidate region, determining the characteristic information of the target region as the target model of the target region, and determining the characteristic information of the candidate region as the observation model of the candidate region;
determining the tracking trajectory of the tracked target from the determined target model and observation model.
2. The method of claim 1, characterized in that determining the tracking trajectory of the tracked target from the determined target model and observation model specifically comprises:
determining, from the determined target model and observation model, the similarity parameter between the observation model and the target model according to a preset algorithm;
judging whether the similarity parameter meets a preset condition;
when the judgment result is negative, re-determining the center position information of the current candidate region, re-determining the current candidate region from the re-determined center position information and the dimension information of the target-frame region, and returning to the step of determining the similarity parameter until the similarity parameter meets the preset condition.
3. The method of claim 2, characterized in that determining, from the determined target model and observation model, the similarity parameter between the observation model and the target model according to a preset algorithm specifically comprises:
determining the gray probability density histograms of the target model and of the observation model, respectively;
determining the similarity parameter between the gray probability density histogram of the target model and that of the observation model according to the formula

$$\mathrm{sim} = \rho \sum_{b=0}^{m-1} \min(p_b, q_b)$$

wherein:
rho is a normalization coefficient;
b is a quantized gray level;
p_b is the probability of b occurring in the observation model;
q_b is the probability of b occurring in the target model;
m is the number of quantization levels;
min() takes the minimum of its arguments.
4. The method of claim 3, characterized in that:
the gray probability density histogram corresponding to the target model is determined according to the following formula:
ModelHist(i) = Σ_x Σ_y k(R²) · gray(x, y), wherein:
k(x) = e^(−x);
R² = ((x − x₀)/w)² + ((y − y₀)/h)²;
(x, y) represents the coordinates of any point in the target area;
(x₀, y₀) represents the center point coordinates of the target area;
w represents half the width of the target area;
h represents half the height of the target area;
i represents the gray quantization level;
gray(x, y) represents the quantized level of the gray value at (x, y);
the gray probability density histogram corresponding to the observation model is determined according to the following formula:
NewHist(i) = Σ_{x′} Σ_{y′} k(R′²) · gray(x′, y′), wherein:
R′² = ((x′ − x′₀)/w′)² + ((y′ − y′₀)/h′)²;
(x′, y′) represents the coordinates of any point in the candidate region;
(x′₀, y′₀) represents the center point coordinates of the candidate region;
w′ represents half the width of the candidate region;
h′ represents half the height of the candidate region.
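The formula as printed multiplies the kernel weight by the quantized gray value; the standard kernel-histogram reading, adopted as an assumption in the sketch below, accumulates each pixel's kernel weight k(R²) = e^(−R²) into the bin of its quantized gray level i. The bin count m = 16 and the 8-bit input range are also assumptions:

```python
import numpy as np

def kernel_histogram(patch, m=16):
    """Kernel-weighted gray histogram of a rectangular patch.

    Each pixel contributes k(R^2) = exp(-R^2) to the bin of its
    quantized gray level, where R^2 = ((x-x0)/w)^2 + ((y-y0)/h)^2
    is the normalized distance from the patch center, so central
    pixels weigh more than border pixels.
    """
    patch = np.asarray(patch, dtype=float)
    H, W = patch.shape
    y0, x0 = (H - 1) / 2.0, (W - 1) / 2.0
    h, w = H / 2.0, W / 2.0
    ys, xs = np.mgrid[0:H, 0:W]
    R2 = ((xs - x0) / w) ** 2 + ((ys - y0) / h) ** 2
    k = np.exp(-R2)
    # Quantize 8-bit gray values into m levels.
    levels = np.clip((patch * m / 256).astype(int), 0, m - 1)
    hist = np.zeros(m)
    np.add.at(hist, levels.ravel(), k.ravel())
    return hist
```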
5. The method of claim 4, characterized in that redefining the center position information of the current candidate region specifically comprises:
determining the probability density kernel weight parameter of each target point in the observation model according to the following formula:
ω(x, y) = g(R²) · ModelHist(i) / NewHist(i), wherein:
g(x) = e^(−x);
redefining the center position information of the current candidate region according to the following formulas:
M_x = Σ_x Σ_y ω(x, y) · x,  M_y = Σ_x Σ_y ω(x, y) · y, wherein:
(M_x, M_y) represents the recalculated center position information of the current candidate region.
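A sketch of this center relocation under the same assumptions, reusing kernel_histogram from the sketch above. The claim writes M_x and M_y without an explicit denominator; the division by the total weight Σω used here is the standard mean-shift normalization and is added as an assumption:

```python
import numpy as np

def meanshift_center(patch, model_hist, m=16):
    """One mean-shift relocation of the candidate-region center.

    omega(x, y) = g(R^2) * ModelHist(i) / NewHist(i), with
    g(x) = exp(-x) and i the quantized gray level at (x, y); the new
    center is the omega-weighted mean of pixel coordinates.
    """
    patch = np.asarray(patch, dtype=float)
    H, W = patch.shape
    y0, x0 = (H - 1) / 2.0, (W - 1) / 2.0
    h, w = H / 2.0, W / 2.0
    ys, xs = np.mgrid[0:H, 0:W]
    R2 = ((xs - x0) / w) ** 2 + ((ys - y0) / h) ** 2
    g = np.exp(-R2)
    levels = np.clip((patch * m / 256).astype(int), 0, m - 1)
    new_hist = kernel_histogram(patch, m)  # observation-model histogram
    # Per-pixel ratio ModelHist(i)/NewHist(i), guarded against /0.
    ratio = model_hist[levels] / np.maximum(new_hist[levels], 1e-12)
    omega = g * ratio
    Mx = (omega * xs).sum() / omega.sum()
    My = (omega * ys).sum() / omega.sum()
    return Mx, My  # new center, in patch coordinates
```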
6. The method of claim 2, characterized in that, before determining the target model corresponding to the target area, the method further comprises:
expanding the target area by a predetermined number of pixels in each of the four neighborhood directions (up, down, left and right).
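A one-function sketch of this expansion; clamping at the frame border is an added assumption, since the claim does not specify boundary handling:

```python
def expand_box(x, y, w, h, pad, frame_w, frame_h):
    """Grow the (x, y, w, h) target box by pad pixels on each of the
    four sides, clamped so the expanded box stays inside the frame."""
    x1 = max(x - pad, 0)
    y1 = max(y - pad, 0)
    x2 = min(x + w + pad, frame_w)
    y2 = min(y + h + pad, frame_h)
    return x1, y1, x2 - x1, y2 - y1
```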
7. The method of claim 2, characterized in that, before redefining the center position information of the current candidate region, the method further comprises:
judging, according to preset first position information and the redefined center position information of the current candidate region, whether the center point of the candidate region passes the preset first position;
when the judgment result is yes, correcting the target frame area size information according to the following formula:
CurrSize = BaseSize · (1 − ScaleRatio · CurrLineDist / LineDist), wherein:
CurrSize represents the corrected target frame area size information;
BaseSize represents the target frame area size information before correction;
ScaleRatio is the first preset value;
CurrScaleRatio is the second preset value;
CurrLineDist is the straight-line distance from the center point of the current candidate region to the first position;
LineDist is the third preset value.
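The correction is a single linear shrink. A sketch with the claim's own parameter names; reading the first position as, for example, a stop line is an assumption:

```python
def corrected_size(base_size, scale_ratio, curr_line_dist, line_dist):
    """CurrSize = BaseSize * (1 - ScaleRatio * CurrLineDist / LineDist):
    the target frame shrinks linearly with the center's straight-line
    distance past the first position."""
    return base_size * (1.0 - scale_ratio * curr_line_dist / line_dist)
```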
8. The method of claim 7, characterized in that, after correcting the target frame area size information, the method further comprises:
judging, according to preset second position information and the redefined center position information of the current candidate region, whether the center point of the candidate region passes the second position;
when the judgment result is yes, deleting the tracking target from the tracking target set.
9. The method of any one of claims 2 to 8, characterized in that whether the similarity parameter meets the preset condition is judged according to one of the following methods:
judging whether the similarity parameter is greater than a fourth preset value; if so, determining that the similarity parameter meets the preset condition, otherwise determining that it does not; or
judging whether the distance between the center point of the redefined current candidate region and the center point of the previously determined current candidate region is less than or equal to a fifth preset value; if so, determining that the similarity parameter meets the preset condition, otherwise determining that it does not; or
judging whether the number of times the step of determining the similarity parameter has been returned to exceeds a sixth preset value; if so, determining that the similarity parameter meets the preset condition, otherwise determining that it does not.
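Claims 2 and 9 together describe one mean-shift iteration per frame with three alternative stopping tests. A sketch combining the helpers from the earlier sketches; every threshold value below is a placeholder for the corresponding preset value, and the candidate box is assumed to stay inside the frame:

```python
def track_one_frame(frame, model_hist, cx, cy, w, h,
                    sim_thresh=0.8, move_eps=1.0, max_iters=20, m=16):
    """One frame of mean-shift tracking with the three stopping tests:
    similarity above a threshold, center movement below an epsilon, or
    the iteration budget exhausted. (cx, cy) is the box center, w and h
    its half-width and half-height."""
    sim = 0.0
    for _ in range(max_iters):                      # sixth preset value
        patch = frame[int(cy - h):int(cy + h), int(cx - w):int(cx + w)]
        new_hist = kernel_histogram(patch, m)       # observation model
        sim = histogram_similarity(new_hist, model_hist)
        if sim > sim_thresh:                        # fourth preset value
            break
        mx, my = meanshift_center(patch, model_hist, m)
        dx = mx - (patch.shape[1] - 1) / 2.0        # patch -> frame offset
        dy = my - (patch.shape[0] - 1) / 2.0
        cx, cy = cx + dx, cy + dy
        if dx * dx + dy * dy <= move_eps ** 2:      # fifth preset value
            break
    return cx, cy, sim
```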
10. The method of claim 9, characterized in that, if the similarity parameter is greater than a seventh preset value, the seventh preset value being less than the fourth preset value, the method further comprises:
updating the target model according to the following formula:
pixelModel(x, y) = α · pixelCurr(x, y) + (1 − α) · pixelModel(x, y), wherein:
(x, y) is the offset of each point in the current target frame relative to the target frame origin of the target model before updating;
pixelModel(x, y) is the gray value of the corresponding pixel in the target model after updating;
pixelCurr(x, y) is the gray value of the corresponding pixel in the current target frame;
α is the eighth preset value.
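This update is a per-pixel exponential running average. A short sketch, with α = 0.1 as a placeholder for the eighth preset value:

```python
import numpy as np

def update_model(model, current, alpha=0.1):
    """Per-pixel running-average update of the gray-level target model:
    model <- alpha * current + (1 - alpha) * model. A small alpha makes
    the model adapt slowly, damping drift onto occluders."""
    model = np.asarray(model, dtype=float)
    current = np.asarray(current, dtype=float)
    return alpha * current + (1.0 - alpha) * model
```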
11. The method of claim 2, characterized in that, before judging whether the similarity parameter meets the preset condition, the method further comprises:
deleting the tracking target from the tracking target set if the similarity parameter is less than a ninth preset value.
12. An automatic target vehicle tracking device, characterized by comprising:
a receiving unit, configured to receive a current frame image;
a target set establishing unit, configured to extract the detection targets in the current frame image and establish a detection target set;
a first determining unit, configured to, for each tracking target comprised in the tracking target set, if the tracking target matches none of the detection targets comprised in the detection target set, determine the target area corresponding to the tracking target in the previous frame image according to the target frame area information corresponding to the tracking target, wherein the tracking target set is obtained according to historical image information, and the target frame area information comprises center position information and dimension information of the target frame region;
a second determining unit, configured to take the region corresponding to the target frame area information in the current frame image as the current candidate region, determine the characteristic information of the target area as the target model corresponding to the target area, and determine the characteristic information of the candidate region as the observation model corresponding to the candidate region;
a third determining unit, configured to determine the tracking trajectory of the tracking target according to the determined target model and observation model.
13. The device of claim 12, characterized by further comprising a first judging unit and a fourth determining unit, wherein:
the third determining unit is specifically configured to determine the similarity parameter between the observation model and the target model according to a preset algorithm, based on the determined target model and observation model; and, when the judgment result of the first judging unit is no, to redefine the current candidate region according to the center position information redefined by the fourth determining unit and the dimension information of the search region, and to perform the step of determining the similarity parameter again, until the similarity parameter meets the preset condition;
the first judging unit is configured to judge whether the similarity parameter determined by the third determining unit meets the preset condition;
the fourth determining unit is configured to redefine the center position information of the current candidate region when the judgment result of the first judging unit is no.
14. The device of claim 13, characterized in that the third determining unit comprises:
a first determining subunit, configured to determine the gray probability density histograms corresponding to the target model and to the observation model respectively;
a second determining subunit, configured to determine the similarity parameter between the gray probability density histogram of the target model and that of the observation model according to the following formula:
ρ = Σ_{b=1}^{m} min(p_b, q_b), wherein:
ρ represents the normalization coefficient (the similarity parameter);
b represents a quantized gray level;
p_b represents the probability that level b occurs in the observation model;
q_b represents the probability that level b occurs in the target model;
m represents the number of quantization levels;
min() represents the minimum-value operation.
15. The device of claim 14, characterized in that:
the first determining subunit is specifically configured to determine the gray probability density histogram corresponding to the target model according to the following formula:
ModelHist(i) = Σ_x Σ_y k(R²) · gray(x, y), wherein: k(x) = e^(−x); R² = ((x − x₀)/w)² + ((y − y₀)/h)²; (x, y) represents the coordinates of any point in the target area; (x₀, y₀) represents the center point coordinates of the target area; w represents half the width of the target area; h represents half the height of the target area; i represents the gray quantization level; gray(x, y) represents the quantized level of the gray value at (x, y);
and to determine the gray probability density histogram corresponding to the observation model according to the following formula:
NewHist(i) = Σ_{x′} Σ_{y′} k(R′²) · gray(x′, y′), wherein: R′² = ((x′ − x′₀)/w′)² + ((y′ − y′₀)/h′)²; (x′, y′) represents the coordinates of any point in the candidate region; (x′₀, y′₀) represents the center point coordinates of the candidate region; w′ represents half the width of the candidate region; h′ represents half the height of the candidate region.
16. The device of claim 15, characterized in that the fourth determining unit comprises:
a third determining subunit, configured to determine the probability density kernel weight parameter of each target point in the observation model according to the following formula: ω(x, y) = g(R²) · ModelHist(i) / NewHist(i), wherein: g(x) = e^(−x);
a fourth determining subunit, configured to redefine the center position information of the current candidate region according to the following formulas: M_x = Σ_x Σ_y ω(x, y) · x, M_y = Σ_x Σ_y ω(x, y) · y, wherein: (M_x, M_y) represents the recalculated center position information of the current candidate region.
17. The device of claim 12, characterized by further comprising:
an expanding unit, configured to, before the second determining unit determines the target model corresponding to the target area, expand the target area by a predetermined number of pixels in each of the four neighborhood directions (up, down, left and right).
18. The device of claim 13, characterized by further comprising:
a second judging unit, configured to judge, according to preset first position information and the center position information of the current candidate region redefined by the fourth determining unit, whether the center point of the candidate region passes the preset first position;
a correcting unit, configured to, when the judgment result of the second judging unit is yes, correct the dimension information of the target frame region according to the following formula: CurrSize = BaseSize · (1 − ScaleRatio · CurrLineDist / LineDist), wherein: CurrSize represents the corrected target frame area size information; BaseSize represents the target frame area size information before correction; ScaleRatio is the first preset value; CurrScaleRatio is the second preset value; CurrLineDist is the straight-line distance from the center point of the current candidate region to the first position; LineDist is the third preset value.
19. The device of claim 18, characterized by further comprising:
a third judging unit, configured to judge, after the correcting unit corrects the dimension information of the target frame region, according to preset second position information and the redefined center position information of the current candidate region, whether the center point of the candidate region passes the second position;
a first deleting unit, configured to delete the tracking target from the tracking target set when the judgment result of the third judging unit is yes.
20. The device of any one of claims 13 to 19, characterized in that:
the first judging unit is specifically configured to judge whether the similarity parameter is greater than a fourth preset value and, if so, determine that the similarity parameter meets the preset condition, otherwise determine that it does not; or to judge whether the distance between the center point of the redefined current candidate region and the center point of the previously determined current candidate region is less than or equal to a fifth preset value and, if so, determine that the similarity parameter meets the preset condition, otherwise determine that it does not; or to judge whether the number of times the step of determining the similarity parameter has been returned to exceeds a sixth preset value and, if so, determine that the similarity parameter meets the preset condition, otherwise determine that it does not.
21. The device of claim 20, characterized by further comprising:
an updating unit, configured to, if the similarity parameter is greater than a seventh preset value, update the target model according to the following formula: pixelModel(x, y) = α · pixelCurr(x, y) + (1 − α) · pixelModel(x, y), wherein: (x, y) is the offset of each point in the current target frame relative to the target frame origin of the target model before updating; pixelModel(x, y) is the gray value of the corresponding pixel in the target model after updating; pixelCurr(x, y) is the gray value of the corresponding pixel in the current target frame; α is the eighth preset value; and the seventh preset value is less than the fourth preset value.
22. The device of claim 13, characterized by further comprising:
a second deleting unit, configured to, before the first judging unit judges whether the similarity parameter meets the preset condition, delete the tracking target from the tracking target set if the similarity parameter is less than a ninth preset value.
CN201310011103.3A 2013-01-11 2013-01-11 Target vehicle automatic tracking method and device Active CN103927762B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310011103.3A CN103927762B (en) 2013-01-11 2013-01-11 Target vehicle automatic tracking method and device

Publications (2)

Publication Number Publication Date
CN103927762A (en) 2014-07-16
CN103927762B CN103927762B (en) 2017-03-22

Family

Family ID: 51145973

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310011103.3A Active CN103927762B (en) 2013-01-11 2013-01-11 Target vehicle automatic tracking method and device

Country Status (1)

Country Link
CN (1) CN103927762B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1912950A (en) * 2006-08-25 2007-02-14 浙江工业大学 Device for monitoring vehicle breaking regulation based on all-position visual sensor
JP2011193198A (en) * 2010-03-15 2011-09-29 Omron Corp Monitoring camera terminal
CN102567705A (en) * 2010-12-23 2012-07-11 北京邮电大学 Method for detecting and tracking night running vehicle

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104268889A (en) * 2014-09-30 2015-01-07 重庆大学 Vehicle tracking method based on automatic characteristic weight correction
CN105730336B (en) * 2014-12-10 2018-12-21 比亚迪股份有限公司 Reverse aid and vehicle
CN105730336A (en) * 2014-12-10 2016-07-06 比亚迪股份有限公司 Reverse driving assistant and vehicle
CN104700432A (en) * 2015-03-24 2015-06-10 银江股份有限公司 Self-adaptive adhered vehicle separating method
CN104700432B (en) * 2015-03-24 2017-11-03 银江股份有限公司 A kind of adaptive adhesion Method of Vehicle Segmentation
CN106909935A (en) * 2017-01-19 2017-06-30 博康智能信息技术有限公司上海分公司 A kind of method for tracking target and device
CN106909934A (en) * 2017-01-19 2017-06-30 博康智能信息技术有限公司上海分公司 A kind of method for tracking target and device based on adaptable search
CN108986472A (en) * 2017-05-31 2018-12-11 杭州海康威视数字技术股份有限公司 One kind turns around vehicle monitoring method and device
CN108986472B (en) * 2017-05-31 2020-10-30 杭州海康威视数字技术股份有限公司 Method and device for monitoring vehicle turning round
CN109213134A (en) * 2017-07-03 2019-01-15 百度在线网络技术(北京)有限公司 The method and apparatus for generating automatic Pilot strategy
CN110930429B (en) * 2018-09-19 2023-03-31 杭州海康威视数字技术股份有限公司 Target tracking processing method, device and equipment and readable medium
CN110930429A (en) * 2018-09-19 2020-03-27 杭州海康威视数字技术股份有限公司 Target tracking processing method, device and equipment and readable medium
CN109389624A (en) * 2018-10-22 2019-02-26 中国科学院福建物质结构研究所 Model drift rejection method and its device based on measuring similarity
CN109389624B (en) * 2018-10-22 2022-04-19 中国科学院福建物质结构研究所 Model drift suppression method and device based on similarity measurement
CN109813327A (en) * 2019-02-01 2019-05-28 安徽中科美络信息技术有限公司 A kind of vehicle driving trace absent compensation method
CN110287817B (en) * 2019-06-05 2021-09-21 北京字节跳动网络技术有限公司 Target recognition and target recognition model training method and device and electronic equipment
CN110287817A (en) * 2019-06-05 2019-09-27 北京字节跳动网络技术有限公司 Target identification and the training method of Model of Target Recognition, device and electronic equipment
CN110458045A (en) * 2019-07-22 2019-11-15 浙江大华技术股份有限公司 Acquisition methods, image processing method and the device of response probability histogram
CN110969097A (en) * 2019-11-18 2020-04-07 浙江大华技术股份有限公司 Linkage tracking control method, equipment and storage device for monitored target
CN110969097B (en) * 2019-11-18 2023-05-12 浙江大华技术股份有限公司 Method, equipment and storage device for controlling linkage tracking of monitoring target
CN111832549A (en) * 2020-06-29 2020-10-27 深圳市优必选科技股份有限公司 Data labeling method and device
CN111832549B (en) * 2020-06-29 2024-04-23 深圳市优必选科技股份有限公司 Data labeling method and device
CN113361388A (en) * 2021-06-03 2021-09-07 北京百度网讯科技有限公司 Image data correction method and device, electronic equipment and automatic driving vehicle
CN113361388B (en) * 2021-06-03 2023-11-24 北京百度网讯科技有限公司 Image data correction method and device, electronic equipment and automatic driving vehicle

Also Published As

Publication number Publication date
CN103927762B (en) 2017-03-22

Similar Documents

Publication Publication Date Title
CN103927762A (en) Target vehicle automatic tracking method and device
Aker et al. Using deep networks for drone detection
EP3686779B1 (en) Method and device for attention-based lane detection without post-processing by using lane mask and testing method and testing device using the same
CN103530600B (en) Licence plate recognition method under complex illumination and system
US7623681B2 (en) System and method for range measurement of a preceding vehicle
JP5223675B2 (en) Vehicle detection device, vehicle detection method, and vehicle detection program
CN115049700A (en) Target detection method and device
CN116363319B (en) Modeling method, modeling device, equipment and medium for building roof
CN111435436A (en) Perimeter anti-intrusion method and device based on target position
CN103871081A (en) Method for tracking self-adaptive robust on-line target
CN114219829A (en) Vehicle tracking method, computer equipment and storage device
Revilloud et al. A lane marker estimation method for improving lane detection
CN114332781A (en) Intelligent license plate recognition method and system based on deep learning
CN104463238B (en) A kind of automobile logo identification method and system
CN113763412B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN103295003B (en) A kind of vehicle checking method based on multi-feature fusion
CN111401143A (en) Pedestrian tracking system and method
Zeng et al. Enhancing underground visual place recognition with Shannon entropy saliency
CN114820765A (en) Image recognition method and device, electronic equipment and computer readable storage medium
WO2018042208A1 (en) Street asset mapping
CN117671617A (en) Real-time lane recognition method in container port environment
CN111428567A (en) Pedestrian tracking system and method based on affine multi-task regression
CN111144361A (en) Road lane detection method based on binaryzation CGAN network
CN113836251B (en) Cognitive map construction method, device, equipment and medium
CN107578037B (en) Lane line detection method based on analog property estimation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant