CN110246153A - Real-time detection and tracking method for moving targets based on video surveillance - Google Patents

Real-time detection and tracking method for moving targets based on video surveillance

Info

Publication number
CN110246153A
CN110246153A (application CN201910365305.5A)
Authority
CN
China
Prior art keywords
point
pixel
frame
kth
frame image
Prior art date
Legal status: Pending
Application number
CN201910365305.5A
Other languages
Chinese (zh)
Inventor
王东洁
王卫
陈昌建
唐飞
罗艳丽
卫彪
李凯
尚兵兵
刘江明
王利梅
莫申林
李志学
汪彬彬
张宇
何丹娜
王微
张超
童强
高鑫
产文涛
潘思宇
杨春合
苏翔
袁泉
范留洋
童少康
赵成亮
应普
徐冬雨
孙晓伟
Current Assignee
Anhui Sun Create Electronic Co Ltd
Original Assignee
Anhui Sun Create Electronic Co Ltd
Priority date
Filing date
Publication date
Application filed by Anhui Sun Create Electronic Co Ltd
Priority to CN201910365305.5A
Publication of CN110246153A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/215 Motion-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a real-time detection and tracking method for moving targets based on video surveillance, comprising the following steps: in a scene captured by dual cameras, detect the feature points of every frame image in the video with the Harris corner detection algorithm; classify the feature points according to the epipolar geometry constraint to obtain the foreground feature points and background feature points of the image; obtain the foreground region from the foreground feature points; fuse the foreground feature points with the foreground region through the perspective transform of a homography matrix to obtain the moving targets of the image; enhance the background feature points by frame differencing to obtain the enhanced image; detect the moving targets by fitting minimum bounding rectangles; and track the moving targets with a Kalman filter based on the attraction center of the minimum bounding rectangle of the moving-target region. The present invention reduces the motion interference between the multi-target moving foreground and the moving background, and improves the accuracy and robustness of moving-target detection.

Description

Real-time detection and tracking method for moving targets based on video surveillance
Technical field
The invention belongs to the field of intelligent video surveillance, and in particular relates to a real-time detection and tracking method for moving targets based on video surveillance.
Background technique
Video surveillance is widely used in safe-city video platforms. Intelligent surveillance mainly covers the following research directions: moving-target detection, moving-target tracking, moving-object segmentation, target classification and recognition, event detection, and behavior understanding and description. In the large, complex public scenes of a safe city with high pedestrian flow, moving targets must first be detected and the events around them recognized, so real-time detection and tracking of multiple moving targets is particularly important.
Commonly used moving-target detection methods include background subtraction, frame differencing, and optical flow. Background subtraction detects moving targets from the difference between the current image and the background, and is easily affected by changes in ambient illumination; frame differencing extracts moving targets from temporal differences, and performs poorly on slow-moving targets; optical flow cannot obtain the exact contour of a moving target, has high computational complexity and poor noise immunity, and cannot guarantee real-time performance.
During video capture, the pedestrian flow in public places is high and moving targets are numerous, so multi-target detection and tracking in such scenes is extremely difficult.
Summary of the invention
To address the problems in the prior art, the present invention provides a real-time detection and tracking method for moving targets based on video surveillance, which reduces the motion interference between the multi-target moving foreground and the moving background and improves the accuracy and robustness of moving-target detection.
The invention adopts the following technical scheme:
As shown in Fig. 1, a real-time detection and tracking method for moving targets based on video surveillance is characterized by comprising the following steps:
S1: in a scene captured by dual cameras, detect the feature points of every frame image in the video with the Harris corner detection algorithm;
S2: classify the feature points according to the epipolar geometry constraint to obtain the foreground feature points and background feature points of the image;
S3: obtain the foreground region from the foreground feature points of the image;
S4: fuse the foreground feature points with the foreground region through the perspective transform of a homography matrix to obtain the moving targets in the image;
S5: enhance the background feature points by frame differencing to obtain the enhanced image;
S6: detect the moving targets in the enhanced image by fitting minimum bounding rectangles;
S7: track the detected moving targets with a Kalman filter based on the attraction center of the minimum bounding rectangle of the moving-target region.
Preferably, step S1 comprises the following steps:
S11: compute the partial derivative Ix in the x direction and the partial derivative Iy in the y direction of each pixel I(x, y) of every frame image in the video:
Ix = I(x, y) ⊗ (-1, 0, 1), Iy = I(x, y) ⊗ (-1, 0, 1)^T
where x and y denote the abscissa and ordinate of the pixel, Ix and Iy denote the gradients of pixel I(x, y) along the abscissa and ordinate directions, ⊗ is the convolution operator, and T denotes transposition;
S12: compute the gradient products Ix^2 = Ix*Ix, Iy^2 = Iy*Iy and Ixy = Ix*Iy for the x-x, y-y and x-y directions, and suppress Gaussian noise in Ix^2, Iy^2 and Ixy by Gaussian filtering;
S13: compute the corner response cim of each pixel of every frame image; according to the Harris corner detection algorithm, when cim exceeds the set threshold and is a local maximum within the set region, the point at the local maximum is regarded as a corner, and the corner is a feature point p(xt, yt).
Further preferably, step S2 comprises the following steps:
S21: in the scene captured by the dual cameras, obtain the epipolar line according to the epipolar geometry constraint; the equation of the epipolar line is Ax' + By' + C = 0, where A, B and C are coefficients and x' and y' denote the abscissa and ordinate along the epipolar line;
S22: compute the orthogonal distance d between a feature point p(xt, yt) and the epipolar line:
d = |A*xt + B*yt + C| / sqrt(A^2 + B^2)
If the orthogonal distance d between p(xt, yt) and the epipolar line is greater than the set distance threshold Th, the feature point is classified as a foreground feature point; otherwise it is a background feature point.
Still further preferably, step S3 comprises the following steps:
S31: extract the foreground feature points of the (k-1)-th frame image; their intersection with the foreground feature points of the k-th frame image constitutes the supplementary foreground feature points, where k = 2, 3, 4, 5, ...; merge all supplementary foreground feature points between every two adjacent frames to obtain the supplementary foreground feature point set;
S32: let Pk be the set of all supplementary foreground feature points before the k-th frame image, defined as
Pk = { I(x,y,k-1) | I(x+Δx,y+Δy,k) ∈ foreground feature points }
where I(x,y,k-1) is the set of foreground feature points of the (k-1)-th frame image, I(x+Δx,y+Δy,k) is the intersection of the foreground feature points of the k-th and (k-1)-th frame images, x and y denote the abscissa and ordinate of a foreground feature point in the (k-1)-th frame image, x+Δx and y+Δy denote its abscissa and ordinate in the k-th frame, Δx denotes the offset of the abscissa in the k-th frame image relative to the (k-1)-th frame image, and Δy denotes the offset of the ordinate in the k-th frame image relative to the (k-1)-th frame image;
Pk therefore represents the foreground region of the k-th frame.
Still further preferably, step S4 comprises the following steps:
S41: in the scene captured by the dual cameras, the two cameras are camera a and camera b; let pa and pb denote the points where camera a and camera b are located; point pb is projected along the direction from pb to pa onto the image plane corresponding to camera a, where Ka and Kb are the intrinsic matrices of camera a and camera b, and Hba maps the view of camera b to the image plane corresponding to camera a.
Let R be the rotation matrix from pb to pa, t the displacement from pb to pa, n the normal vector, and d' the plane distance between the image plane corresponding to camera a and the image plane corresponding to camera b; then Hba = R - t·n^T / d', and the homography matrix is set as H = Ka·Hba·(Kb)^(-1).
Suppose the homogeneous coordinates of corresponding points in the two image planes are (x', y', 1) and (x, y, 1), and let h11, h12, h13, h21, h22, h23, h31, h32, h33 denote the elements of the homography matrix H; applying the perspective transform with H and converting gives
x' = (h11*x + h12*y + h13) / (h31*x + h32*y + h33), y' = (h21*x + h22*y + h23) / (h31*x + h32*y + h33);
S42: for every frame image, the moving target D is solved from the background feature points using the least median of squares method, where x'i and y'i are the abscissa and ordinate of a point in the image plane corresponding to camera a, Σ denotes summation, and m is the number of foreground feature points.
Still further preferably, step S5 comprises the following steps:
S51: apply the perspective transform to the (k-1)-th frame image to obtain the perspective-transformed frame image, then apply frame differencing to the perspective-transformed frame image and the k-th frame image to obtain the k-th frame mask value BI(xi, yi, k) separating the moving target from the background:
BI(xi, yi, k) = 255 if |frame(xi, yi, k) - frame(xi, yi, k-1)'| > Y, otherwise BI(xi, yi, k) = 0
where frame(xi, yi, k) is a pixel of the k-th frame image, frame(xi, yi, k-1)' is the corresponding pixel of the perspective-transformed frame image, and Y is the set empirical difference threshold; a pixel with BI(xi, yi, k) = 0 is a background feature point, and a pixel with BI(xi, yi, k) = 255 is a moving-region pixel;
S52: compute the average optical-flow vector V of the background feature points between the (k-1)-th and k-th frames:
V = (1/N) * Σ Vi
where Vi denotes the optical-flow vector of a background feature point and N is the number of background feature points;
the background region is enhanced by adding the average optical-flow vector V to the optical-flow vector Vi of every background feature point, thereby obtaining the enhanced image.
Still further preferably, step S6 comprises the following steps:
S61: scan the enhanced image from top to bottom; when a pixel with value greater than BI' appears for the first time, record its abscissa as t1; when a pixel with value greater than BI' appears for the last time, record its abscissa as t2. Scan the enhanced k-th frame image from left to right; when a pixel with value greater than BI' appears for the first time, record its ordinate as l1; when a pixel with value greater than BI' appears for the last time, record its ordinate as l2. BI' denotes the pixel-value threshold.
This gives the row extent T = (t1 - t2) and the column extent L = (l1 - l2).
S62: with t1 as the x coordinate, l1 as the y coordinate, T as the width and L as the height, draw the minimum bounding rectangle with MATLAB's rectangle('Position', [x, y, width, height]) function; the minimum bounding rectangle in every frame image follows the motion of the moving object, realizing the detection of moving objects.
Still further preferably, step S7 comprises the following steps:
S71: compute the attraction center of the minimum bounding rectangle as the mean position of its points:
center = (1/N') * Σ Pi(x, y)
where Pi(x, y) is the coordinate of a point in the minimum bounding rectangle and N' is the number of all points in the minimum bounding rectangle;
S72: compute the search region SR:
SR = αW * βH
where W and H are the width and height of the minimum bounding rectangle, and α and β are the set empirical width and height coefficients; target tracking is realized by the Kalman filter based on the attraction center of the minimum bounding rectangle of the moving-target region.
Still further preferably, the pixel-value threshold BI' is set to 0.5, and the width coefficient α and the height coefficient β are both set to 2.
The advantages and beneficial effects of the present invention are:
1) When obtaining the foreground region, the present invention extracts the foreground feature points of the (k-1)-th frame and takes their intersection with the foreground feature points of the k-th frame as the supplementary foreground feature points; all supplementary foreground feature points between every two adjacent frames are merged into the supplementary foreground feature point set, which is taken as the foreground region. Because every two adjacent frames are linked throughout the extraction process, the extraction accuracy and continuity of the foreground region are good, which in turn improves the accuracy and robustness of moving-target detection.
2) The present invention binarizes the image with the perspective transform and frame differencing, computes the average optical-flow vector V of the background feature points between the (k-1)-th and k-th frames, and adds V to the optical-flow vector Vi of every background feature point to obtain the enhanced image. This compensation reduces the motion interference between the multi-target moving foreground and the moving background.
Description of the drawings
Fig. 1 is the flowchart of the method of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
As shown in Fig. 1, a real-time detection and tracking method for moving targets based on video surveillance comprises the following steps:
S1: detect the feature points of every frame image in the video with the Harris corner detection algorithm.
Specifically, step S1 comprises the following steps:
1) compute the partial derivative Ix in the x direction and the partial derivative Iy in the y direction of each pixel I(x, y) of every frame image in the video:
Ix = I(x, y) ⊗ (-1, 0, 1), Iy = I(x, y) ⊗ (-1, 0, 1)^T
where x and y denote the abscissa and ordinate of the pixel, Ix and Iy denote the gradients of pixel I(x, y) along the abscissa and ordinate directions, ⊗ is the convolution operator, and T denotes transposition;
2) compute the gradient products Ix^2 = Ix*Ix, Iy^2 = Iy*Iy and Ixy = Ix*Iy for the x-x, y-y and x-y directions, and suppress Gaussian noise in Ix^2, Iy^2 and Ixy by Gaussian filtering;
3) compute the corner response cim of each pixel of every frame image; according to the Harris corner detection algorithm, when cim exceeds the set threshold and is a local maximum within the set region, the point at the local maximum is regarded as a corner, and the corner is a feature point p(xt, yt).
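For illustration only (not part of the patent text), the following Python sketch implements steps 1)-3) under stated assumptions: the derivative kernel (-1, 0, 1), the Gaussian window sigma, the threshold, the non-maximum-suppression window, and the classic Harris response cim = det(M) - k*trace(M)^2 are this editor's assumptions, since the patent's own formula images are not reproduced here.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter, maximum_filter

def harris_feature_points(img, k=0.04, thresh=1e6, nms_size=7):
    """Sketch of steps 1)-3): gradients -> smoothed products -> corner response.

    k, thresh and nms_size are illustrative values, not taken from the patent.
    """
    I = img.astype(np.float64)
    # 1) directional derivatives via convolution with (-1, 0, 1) and its transpose
    kx = np.array([[-1.0, 0.0, 1.0]])
    Ix = convolve(I, kx)
    Iy = convolve(I, kx.T)
    # 2) gradient products, Gaussian-filtered to suppress noise
    Ix2 = gaussian_filter(Ix * Ix, sigma=1.5)
    Iy2 = gaussian_filter(Iy * Iy, sigma=1.5)
    Ixy = gaussian_filter(Ix * Iy, sigma=1.5)
    # 3) corner response; det(M) - k*trace(M)^2 is the classic Harris measure
    cim = (Ix2 * Iy2 - Ixy ** 2) - k * (Ix2 + Iy2) ** 2
    # keep points that exceed the threshold and are local maxima in a window
    local_max = (cim == maximum_filter(cim, size=nms_size))
    ys, xs = np.nonzero((cim > thresh) & local_max)
    return np.stack([xs, ys], axis=1)  # feature points p(xt, yt)
```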
S2: classify the feature points according to the epipolar geometry constraint to obtain the foreground feature points and background feature points of the image.
Specifically, step S2 comprises the following steps:
1) in the scene captured by the dual cameras, obtain the epipolar line according to the epipolar geometry constraint; the equation of the epipolar line is Ax' + By' + C = 0, where A, B and C are coefficients and x' and y' denote the abscissa and ordinate along the epipolar line;
2) compute the orthogonal distance d between a feature point p(xt, yt) and the epipolar line:
d = |A*xt + B*yt + C| / sqrt(A^2 + B^2)
If the orthogonal distance d between p(xt, yt) and the epipolar line is greater than the set distance threshold Th, the feature point is classified as a foreground feature point; otherwise it is a background feature point.
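As a minimal illustrative sketch of step 2), the classification reduces to a point-to-line distance test. The epipolar-line coefficients A, B, C would in practice come from the fundamental matrix of the two cameras; here they are passed in as assumed inputs.

```python
import numpy as np

def classify_by_epipolar_distance(points, A, B, C, Th):
    """points: (m, 2) array of feature points p(xt, yt).

    Splits points into (foreground, background) by the distance
    d = |A*x + B*y + C| / sqrt(A^2 + B^2) against threshold Th.
    """
    x, y = points[:, 0], points[:, 1]
    d = np.abs(A * x + B * y + C) / np.hypot(A, B)
    return points[d > Th], points[d <= Th]
```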
S3: obtain the foreground region from the foreground feature points of the image.
Specifically, step S3 comprises the following steps:
1) extract the foreground feature points of the (k-1)-th frame image; their intersection with the foreground feature points of the k-th frame image constitutes the supplementary foreground feature points, where k = 2, 3, 4, 5, ...; merge all supplementary foreground feature points between every two adjacent frames to obtain the supplementary foreground feature point set;
2) let Pk be the set of all supplementary foreground feature points before the k-th frame image, defined as
Pk = { I(x,y,k-1) | I(x+Δx,y+Δy,k) ∈ foreground feature points }
where I(x,y,k-1) is the set of foreground feature points of the (k-1)-th frame image, I(x+Δx,y+Δy,k) is the intersection of the foreground feature points of the k-th and (k-1)-th frame images, x and y denote the abscissa and ordinate of a foreground feature point in the (k-1)-th frame image, x+Δx and y+Δy denote its abscissa and ordinate in the k-th frame, Δx denotes the offset of the abscissa in the k-th frame image relative to the (k-1)-th frame image, and Δy denotes the offset of the ordinate in the k-th frame image relative to the (k-1)-th frame image;
Pk therefore represents the foreground region of the k-th frame.
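The following sketch shows one way the set Pk of step 2) might be accumulated. It assumes foreground feature points are stored per frame as sets of (x, y) tuples and that the inter-frame offsets (Δx, Δy) have already been estimated (e.g., by optical flow); the helper names and the offsets dictionary are hypothetical, not from the patent.

```python
def supplementary_foreground(fg_prev, fg_curr, offsets):
    """fg_prev / fg_curr: sets of (x, y) foreground points of frames k-1 and k.
    offsets: dict mapping a point of frame k-1 to its (dx, dy) shift into frame k.

    Keeps a point of frame k-1 only if its shifted position is still a
    foreground point of frame k, i.e. Pk = {I(x,y,k-1) | I(x+dx,y+dy,k) in fg}.
    """
    out = set()
    for (x, y) in fg_prev:
        dx, dy = offsets.get((x, y), (0, 0))
        if (x + dx, y + dy) in fg_curr:
            out.add((x, y))
    return out

def foreground_region(per_frame_points, per_frame_offsets):
    """Merge the supplementary points of every pair of adjacent frames."""
    region = set()
    for k in range(1, len(per_frame_points)):
        region |= supplementary_foreground(
            per_frame_points[k - 1], per_frame_points[k], per_frame_offsets[k - 1]
        )
    return region
```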
S4: fuse the foreground feature points with the foreground region through the perspective transform of the homography matrix to obtain the moving targets of the image.
Specifically, step S4 comprises the following steps:
1) in the scene captured by the dual cameras, the two cameras are camera a and camera b; let pa and pb denote the points where camera a and camera b are located; point pb is projected along the direction from pb to pa onto the image plane corresponding to camera a, where Ka and Kb are the intrinsic matrices of camera a and camera b, and Hba maps the view of camera b to the image plane corresponding to camera a.
Let R be the rotation matrix from pb to pa, t the displacement from pb to pa, n the normal vector, and d' the plane distance between the image plane corresponding to camera a and the image plane corresponding to camera b; then Hba = R - t·n^T / d', and the homography matrix is set as H = Ka·Hba·(Kb)^(-1).
Suppose the homogeneous coordinates of corresponding points in the two image planes are (x', y', 1) and (x, y, 1), and let h11, h12, h13, h21, h22, h23, h31, h32, h33 denote the elements of the homography matrix H; applying the perspective transform with H and converting gives
x' = (h11*x + h12*y + h13) / (h31*x + h32*y + h33), y' = (h21*x + h22*y + h23) / (h31*x + h32*y + h33);
2) for every frame image, the moving target D is solved from the background feature points using the least median of squares method, where x'i and y'i are the abscissa and ordinate of a point in the image plane corresponding to camera a, Σ denotes summation, and m is the number of foreground feature points.
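To illustrate the homography machinery of steps 1)-2), the sketch below estimates H from matched feature points with OpenCV and applies the perspective transform written above. OpenCV's cv2.LMEDS flag performs least-median-of-squares estimation, matching the robust method named in step 2); flagging points with large transfer error as moving-target candidates is this editor's reading, not a verbatim formula from the patent, and the error threshold is illustrative.

```python
import cv2
import numpy as np

def moving_candidates_by_homography(pts_b, pts_a, err_thresh=3.0):
    """pts_b, pts_a: (m, 2) float arrays of matched points in the two views.

    Estimates H by least median of squares, warps pts_b through H, and
    returns the points of view a whose transfer error exceeds err_thresh.
    """
    H, inlier_mask = cv2.findHomography(pts_b, pts_a, method=cv2.LMEDS)
    # perspective transform of homogeneous coordinates (x, y, 1)
    ones = np.ones((len(pts_b), 1))
    proj = (H @ np.hstack([pts_b, ones]).T).T
    proj = proj[:, :2] / proj[:, 2:3]           # divide by h31*x + h32*y + h33
    err = np.linalg.norm(proj - pts_a, axis=1)  # per-point transfer error
    return pts_a[err > err_thresh], H
```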
S5: enhance the background feature points by frame differencing to obtain the enhanced image.
Specifically, step S5 comprises the following steps:
1) apply the perspective transform to the (k-1)-th frame image to obtain the perspective-transformed frame image, then apply frame differencing to the perspective-transformed frame image and the k-th frame image to obtain the k-th frame mask value BI(xi, yi, k) separating the moving target from the background:
BI(xi, yi, k) = 255 if |frame(xi, yi, k) - frame(xi, yi, k-1)'| > Y, otherwise BI(xi, yi, k) = 0
where frame(xi, yi, k) is a pixel of the k-th frame image, frame(xi, yi, k-1)' is the corresponding pixel of the perspective-transformed frame image, and Y is the set empirical difference threshold; a pixel with BI(xi, yi, k) = 0 is a background feature point, and a pixel with BI(xi, yi, k) = 255 is a moving-region pixel;
2) compute the average optical-flow vector V of the background feature points between the (k-1)-th and k-th frames:
V = (1/N) * Σ Vi
where Vi denotes the optical-flow vector of a background feature point and N is the number of background feature points;
the background region is enhanced by adding the average optical-flow vector V to the optical-flow vector Vi of every background feature point, thereby obtaining the enhanced image.
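A sketch of steps 1)-2) under stated assumptions: the previous frame is warped by H, a binary mask is formed by thresholding the absolute difference at Y, and the mean flow of the background points is added back to each background flow vector. Grayscale uint8 frames are assumed, and pyramidal Lucas-Kanade (cv2.calcOpticalFlowPyrLK) is used here only as one plausible way to obtain the per-point flow vectors Vi; the patent does not name a flow method.

```python
import cv2
import numpy as np

def difference_mask(frame_k, frame_prev, H, Y=25):
    """BI = 255 where |frame_k - warp(frame_prev)| > Y, else 0 (background)."""
    h, w = frame_k.shape[:2]
    warped = cv2.warpPerspective(frame_prev, H, (w, h))
    diff = cv2.absdiff(frame_k, warped)
    return np.where(diff > Y, 255, 0).astype(np.uint8)

def enhanced_background_flow(frame_prev, frame_k, bg_points):
    """Flow vectors Vi of the background points, each enhanced by the
    average flow V, as in step 2)."""
    p0 = bg_points.astype(np.float32).reshape(-1, 1, 2)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(frame_prev, frame_k, p0, None)
    flows = (p1 - p0).reshape(-1, 2)[status.ravel() == 1]  # Vi
    V = flows.mean(axis=0)                                 # average flow vector
    return flows + V                                       # enhanced vectors
```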
S6: detect the moving targets by fitting minimum bounding rectangles.
Specifically, step S6 comprises the following steps:
1) scan the enhanced image from top to bottom; when a pixel with value greater than 0.5 appears for the first time, record its abscissa as t1; when a pixel with value greater than 0.5 appears for the last time, record its abscissa as t2. Scan the enhanced k-th frame image from left to right; when a pixel with value greater than 0.5 appears for the first time, record its ordinate as l1; when a pixel with value greater than 0.5 appears for the last time, record its ordinate as l2.
This gives the row extent T = (t1 - t2) and the column extent L = (l1 - l2).
2) with t1 as the x coordinate, l1 as the y coordinate, T as the width and L as the height, draw the minimum bounding rectangle with MATLAB's rectangle('Position', [x, y, width, height]) function; the minimum bounding rectangle in every frame image follows the motion of the moving object, realizing the detection of moving objects.
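The scan of steps 1)-2) can be expressed compactly with numpy: find the first and last rows and columns containing a pixel above the threshold BI' and form the rectangle [x, y, width, height]. The names t1, t2, l1, l2 follow the patent; the abs() around the extents is this editor's addition, since a top-to-bottom scan gives t1 ≤ t2.

```python
import numpy as np

def min_bounding_rect(enhanced, BI=0.5):
    """Return the bounding rectangle (t1, l1, T, L) of pixels above BI',
    or None if the frame contains no moving pixels."""
    rows = np.nonzero((enhanced > BI).any(axis=1))[0]  # top-to-bottom scan
    cols = np.nonzero((enhanced > BI).any(axis=0))[0]  # left-to-right scan
    if rows.size == 0 or cols.size == 0:
        return None
    t1, t2 = rows[0], rows[-1]
    l1, l2 = cols[0], cols[-1]
    return (t1, l1, abs(int(t1) - int(t2)), abs(int(l1) - int(l2)))
```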
S7: track the moving targets with the Kalman filter based on the attraction center of the minimum bounding rectangle of the moving-target region.
Specifically, step S7 comprises the following steps:
1) compute the attraction center of the minimum bounding rectangle as the mean position of its points:
center = (1/N') * Σ Pi(x, y)
where Pi(x, y) is the coordinate of a point in the minimum bounding rectangle and N' is the number of all points in the minimum bounding rectangle;
2) compute the search region SR:
SR = αW * βH
where W and H are the width and height of the minimum bounding rectangle, and α and β are both set to 2; target tracking is realized by the Kalman filter based on the attraction center of the minimum bounding rectangle of the moving-target region.
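Finally, a sketch of the Kalman tracker of step S7, built on cv2.KalmanFilter with a constant-velocity state (x, y, vx, vy): the attraction center of the bounding rectangle (its geometric center, which equals the mean of the points inside a filled rectangle) is the measurement, and a detection is accepted only inside the search region SR = αW * βH around the prediction. The state model and noise covariances are illustrative assumptions; the patent does not specify them.

```python
import cv2
import numpy as np

def make_tracker(cx, cy):
    """Constant-velocity Kalman filter initialized at the attraction center."""
    kf = cv2.KalmanFilter(4, 2)  # state (x, y, vx, vy), measurement (x, y)
    kf.transitionMatrix = np.array(
        [[1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    kf.statePost = np.array([[cx], [cy], [0], [0]], np.float32)
    return kf

def track_step(kf, rect, alpha=2.0, beta=2.0):
    """rect: (x, y, W, H) of the new detection, or None if none was found."""
    pred = kf.predict()
    if rect is not None:
        x, y, W, H = rect
        cx, cy = x + W / 2.0, y + H / 2.0  # attraction center of the rectangle
        # accept the detection only inside the search region SR = alpha*W x beta*H
        if abs(cx - pred[0, 0]) <= alpha * W / 2 and abs(cy - pred[1, 0]) <= beta * H / 2:
            kf.correct(np.array([[cx], [cy]], np.float32))
    return pred[0, 0], pred[1, 0]
```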
In conclusion, the present invention provides a real-time detection and tracking method for moving targets based on video surveillance, which reduces the motion interference between the multi-target moving foreground and the moving background and improves the accuracy and robustness of moving-target detection.

Claims (9)

1. A real-time detection and tracking method for moving targets based on video surveillance, characterized by comprising the following steps:
S1: in a scene captured by dual cameras, detecting the feature points of every frame image in the video with the Harris corner detection algorithm;
S2: classifying the feature points according to the epipolar geometry constraint to obtain the foreground feature points and background feature points of the image;
S3: obtaining the foreground region from the foreground feature points of the image;
S4: fusing the foreground feature points with the foreground region through the perspective transform of a homography matrix to obtain the moving targets in the image;
S5: enhancing the background feature points by frame differencing to obtain the enhanced image;
S6: detecting the moving targets in the enhanced image by fitting minimum bounding rectangles;
S7: tracking the detected moving targets with a Kalman filter based on the attraction center of the minimum bounding rectangle of the moving-target region.
2. The real-time detection and tracking method for moving targets based on video surveillance according to claim 1, characterized in that step S1 comprises the following steps:
S11: computing the partial derivative Ix in the x direction and the partial derivative Iy in the y direction of each pixel I(x, y) of every frame image in the video:
Ix = I(x, y) ⊗ (-1, 0, 1), Iy = I(x, y) ⊗ (-1, 0, 1)^T
where x and y denote the abscissa and ordinate of the pixel, Ix and Iy denote the gradients of pixel I(x, y) along the abscissa and ordinate directions, ⊗ is the convolution operator, and T denotes transposition;
S12: computing the gradient products Ix^2 = Ix*Ix, Iy^2 = Iy*Iy and Ixy = Ix*Iy for the x-x, y-y and x-y directions, and suppressing Gaussian noise in Ix^2, Iy^2 and Ixy by Gaussian filtering;
S13: computing the corner response cim of each pixel of every frame image; according to the Harris corner detection algorithm, when cim exceeds the set threshold and is a local maximum within the set region, the point at the local maximum is regarded as a corner, and the corner is a feature point p(xt, yt).
3. The real-time detection and tracking method for moving targets based on video surveillance according to claim 2, characterized in that step S2 comprises the following steps:
S21: in the scene captured by the dual cameras, obtaining the epipolar line according to the epipolar geometry constraint, the equation of the epipolar line being Ax' + By' + C = 0, where A, B and C are coefficients and x' and y' denote the abscissa and ordinate along the epipolar line;
S22: computing the orthogonal distance d between a feature point p(xt, yt) and the epipolar line:
d = |A*xt + B*yt + C| / sqrt(A^2 + B^2)
if the orthogonal distance d between p(xt, yt) and the epipolar line is greater than the set distance threshold Th, classifying the feature point as a foreground feature point, otherwise as a background feature point.
4. The real-time detection and tracking method for moving targets based on video surveillance according to claim 3, characterized in that step S3 comprises the following steps:
S31: extracting the foreground feature points of the (k-1)-th frame image, their intersection with the foreground feature points of the k-th frame image constituting the supplementary foreground feature points, where k = 2, 3, 4, 5, ...; merging all supplementary foreground feature points between every two adjacent frames to obtain the supplementary foreground feature point set;
S32: letting Pk be the set of all supplementary foreground feature points before the k-th frame image, defined as
Pk = { I(x,y,k-1) | I(x+Δx,y+Δy,k) ∈ foreground feature points }
where I(x,y,k-1) is the set of foreground feature points of the (k-1)-th frame image, I(x+Δx,y+Δy,k) is the intersection of the foreground feature points of the k-th and (k-1)-th frame images, x and y denote the abscissa and ordinate of a foreground feature point in the (k-1)-th frame image, x+Δx and y+Δy denote its abscissa and ordinate in the k-th frame, Δx denotes the offset of the abscissa in the k-th frame image relative to the (k-1)-th frame image, and Δy denotes the offset of the ordinate in the k-th frame image relative to the (k-1)-th frame image;
Pk therefore representing the foreground region of the k-th frame.
5. The real-time detection and tracking method for moving targets based on video surveillance according to claim 4, characterized in that step S4 comprises the following steps:
S41: in the scene captured by the dual cameras, the two cameras being camera a and camera b, letting pa and pb denote the points where camera a and camera b are located, projecting point pb along the direction from pb to pa onto the image plane corresponding to camera a, where Ka and Kb are the intrinsic matrices of camera a and camera b, and Hba maps the view of camera b to the image plane corresponding to camera a;
letting R be the rotation matrix from pb to pa, t the displacement from pb to pa, n the normal vector, and d' the plane distance between the image plane corresponding to camera a and the image plane corresponding to camera b, then Hba = R - t·n^T / d', and the homography matrix is set as H = Ka·Hba·(Kb)^(-1);
supposing the homogeneous coordinates of corresponding points in the two image planes are (x', y', 1) and (x, y, 1), and letting h11, h12, h13, h21, h22, h23, h31, h32, h33 denote the elements of the homography matrix H, applying the perspective transform with H and converting gives
x' = (h11*x + h12*y + h13) / (h31*x + h32*y + h33), y' = (h21*x + h22*y + h23) / (h31*x + h32*y + h33);
S42: for every frame image, solving for the moving target D from the background feature points using the least median of squares method, where x'i and y'i are the abscissa and ordinate of a point in the image plane corresponding to camera a, Σ denotes summation, and m is the number of foreground feature points.
6. The real-time detection and tracking method for moving targets based on video surveillance according to claim 5, characterized in that step S5 comprises the following steps:
S51: applying the perspective transform to the (k-1)-th frame image to obtain the perspective-transformed frame image, then applying frame differencing to the perspective-transformed frame image and the k-th frame image to obtain the k-th frame mask value BI(xi, yi, k) separating the moving target from the background:
BI(xi, yi, k) = 255 if |frame(xi, yi, k) - frame(xi, yi, k-1)'| > Y, otherwise BI(xi, yi, k) = 0
where frame(xi, yi, k) is a pixel of the k-th frame image, frame(xi, yi, k-1)' is the corresponding pixel of the perspective-transformed frame image, and Y is the set empirical difference threshold; a pixel with BI(xi, yi, k) = 0 is a background feature point, and a pixel with BI(xi, yi, k) = 255 is a moving-region pixel;
S52: computing the average optical-flow vector V of the background feature points between the (k-1)-th and k-th frames:
V = (1/N) * Σ Vi
where Vi denotes the optical-flow vector of a background feature point and N is the number of background feature points;
enhancing the background region by adding the average optical-flow vector V to the optical-flow vector Vi of every background feature point, thereby obtaining the enhanced image.
7. The real-time detection and tracking method for moving targets based on video surveillance according to claim 6, characterized in that step S6 comprises the following steps:
S61: scanning the enhanced image from top to bottom, recording the abscissa of the first pixel with value greater than BI' as t1 and the abscissa of the last pixel with value greater than BI' as t2; scanning the enhanced k-th frame image from left to right, recording the ordinate of the first pixel with value greater than BI' as l1 and the ordinate of the last pixel with value greater than BI' as l2, where BI' denotes the pixel-value threshold;
obtaining the row extent T = (t1 - t2) and the column extent L = (l1 - l2);
S62: with t1 as the x coordinate, l1 as the y coordinate, T as the width and L as the height, drawing the minimum bounding rectangle with MATLAB's rectangle('Position', [x, y, width, height]) function; the minimum bounding rectangle in every frame image following the motion of the moving object, realizing the detection of moving objects.
8. The real-time detection and tracking method for moving targets based on video surveillance according to claim 7, characterized in that step S7 comprises the following steps:
S71: computing the attraction center of the minimum bounding rectangle as the mean position of its points:
center = (1/N') * Σ Pi(x, y)
where Pi(x, y) is the coordinate of a point in the minimum bounding rectangle and N' is the number of all points in the minimum bounding rectangle;
S72: computing the search region SR:
SR = αW * βH
where W and H are the width and height of the minimum bounding rectangle, and α and β are the set empirical width and height coefficients; target tracking is realized by the Kalman filter based on the attraction center of the minimum bounding rectangle of the moving-target region.
9. The real-time detection and tracking method for moving targets based on video surveillance according to claim 8, characterized in that the pixel-value threshold BI' is set to 0.5, and the width coefficient α and the height coefficient β are both set to 2.
CN201910365305.5A 2019-04-30 2019-04-30 Real-time detection and tracking method for moving targets based on video surveillance Pending CN110246153A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910365305.5A CN110246153A (en) 2019-04-30 2019-04-30 Real-time detection and tracking method for moving targets based on video surveillance

Publications (1)

Publication Number Publication Date
CN110246153A (en) 2019-09-17

Family

ID=67883629

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910365305.5A Pending CN110246153A (en) 2019-04-30 2019-04-30 A kind of moving target real-time detection tracking based on video monitoring

Country Status (1)

Country Link
CN (1) CN110246153A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002093486A2 (en) * 2001-05-11 2002-11-21 Koninklijke Philips Electronics N.V. Motion-based tracking with pan-tilt zoom camera
CN102156995A (en) * 2011-04-21 2011-08-17 北京理工大学 Video movement foreground dividing method in moving camera
CN104408701A (en) * 2014-12-03 2015-03-11 中国矿业大学 Large-scale scene video image stitching method
US20180211107A1 (en) * 2015-06-22 2018-07-26 Photomyne Ltd. System and Method for Detecting Objects in an Image
CN108399627A (en) * 2018-03-23 2018-08-14 云南大学 Video interframe target method for estimating, device and realization device
CN109087330A (en) * 2018-06-08 2018-12-25 中国人民解放军军事科学院国防科技创新研究院 It is a kind of based on by slightly to the moving target detecting method of smart image segmentation

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
柏森 et al. (eds.): Information Hiding Algorithms and Applications (《信息隐藏算法及应用》), Beijing: National Defense Industry Press, 30 September 2015 *
潘志安 et al.: "Real-time tracking algorithm for multiple moving targets in mobile-camera video" (移动摄像视频的多运动目标实时跟踪算法), Control Engineering (《控制工程》) *
赵小川 (ed.): MATLAB Image Processing: Program Implementation and Modular Simulation (《MATLAB图像处理 程序实现与模块化仿真》), Beijing: Beihang University Press, 31 January 2014 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110689554A (en) * 2019-09-25 2020-01-14 深圳大学 Background motion estimation method and device for infrared image sequence and storage medium
WO2021057455A1 (en) * 2019-09-25 2021-04-01 深圳大学 Background motion estimation method and apparatus for infrared image sequence, and storage medium
CN110689554B (en) * 2019-09-25 2022-04-12 深圳大学 Background motion estimation method and device for infrared image sequence and storage medium
CN111738093A (en) * 2020-05-28 2020-10-02 哈尔滨工业大学 Automatic speed measuring method for curling balls based on gradient characteristics
CN111738093B (en) * 2020-05-28 2024-03-29 哈尔滨工业大学 Automatic speed measuring method for curling balls based on gradient characteristics
CN111860192A (en) * 2020-06-24 2020-10-30 国网宁夏电力有限公司检修公司 Moving object identification method and system
CN112001949A (en) * 2020-08-13 2020-11-27 地平线(上海)人工智能技术有限公司 Method and device for determining moving speed of target point, readable storage medium and equipment
CN112001949B (en) * 2020-08-13 2023-12-05 地平线(上海)人工智能技术有限公司 Method, device, readable storage medium and equipment for determining target point moving speed

Similar Documents

Publication Publication Date Title
CN110246153A Real-time detection and tracking method for moving targets based on video surveillance
Lipton Local application of optic flow to analyse rigid versus non-rigid motion
KR101175097B1 (en) Panorama image generating method
CN108537212B (en) Student behavior detection method based on motion estimation
Strehl et al. Detecting moving objects in airborne forward looking infra-red sequences
Toropov et al. Traffic flow from a low frame rate city camera
US9576204B2 (en) System and method for automatic calculation of scene geometry in crowded video scenes
Pan et al. Dual pixel exploration: Simultaneous depth estimation and image restoration
KR20050066400A (en) Apparatus and method for the 3d object tracking using multi-view and depth cameras
CN111383252B (en) Multi-camera target tracking method, system, device and storage medium
Linger et al. Aerial image registration for tracking
TWI509568B (en) Method of detecting multiple moving objects
Gondal et al. On dynamic scene geometry for view-invariant action matching
JP4913801B2 (en) Shielding object image identification apparatus and method
Riche et al. 3D Saliency for abnormal motion selection: The role of the depth map
Almomani et al. Segtrack: A novel tracking system with improved object segmentation
Shimada et al. Change detection on light field for active video surveillance
Sincan et al. Moving object detection by a mounted moving camera
CN113409334B (en) Centroid-based structured light angle point detection method
Yu et al. Accurate motion detection in dynamic scenes based on ego-motion estimation and optical flow segmentation combined method
Evans et al. Suppression of detection ghosts in homography based pedestrian detection
CN105354576B (en) A kind of method of target's feature-extraction, target's feature-extraction module, object module creation module and intelligent image monitoring device
Heflin et al. Correcting rolling-shutter distortion of CMOS sensors using facial feature detection
Yan et al. Turning an urban scene video into a cinemagraph
Zhang et al. 3d pedestrian tracking based on overhead cameras

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 2019-09-17)