CN107240120A - Method and device for tracking a moving target in video - Google Patents


Publication number
CN107240120A
CN107240120A (application CN201710254328.XA)
Authority
CN
China
Prior art keywords
moving target
target
tracking
video
viewing angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710254328.XA
Other languages
Chinese (zh)
Other versions
CN107240120B (en)
Inventor
毛丽娟
盛斌
李震
郑鹭宾
赵刚
郑凌寒
张沛
蒋妍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Shanghai University of Sport
Original Assignee
Shanghai Jiaotong University
Shanghai University of Sport
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University and Shanghai University of Sport
Priority to CN201710254328.XA
Publication of CN107240120A
Application granted
Publication of CN107240120B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/20 — Analysis of motion
    • G06T 7/246 — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 — Image acquisition modality
    • G06T 2207/10016 — Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a method and device for tracking a moving target in video. The method comprises the following steps: calculating the occlusion rate of the moving target in the current frame of the video captured from a first viewing angle; calculating the learning rate of a spatio-temporal context model from that occlusion rate, and updating the spatio-temporal context model of the moving target according to the learning rate; obtaining the image feature value of the moving target in the current frame, and updating the context prior model of the moving target according to the image feature value; and convolving the updated spatio-temporal context model with the updated context prior model to obtain the tracking position of the moving target in the next frame of the first-view video. The method has low computational complexity, high tracking efficiency and high tracking accuracy. Correspondingly, the present invention also provides a device for tracking a moving target in video.

Description

Method and device for tracking a moving target in video
Technical field
The present invention relates to the technical field of video tracking, and in particular to a method and device for tracking a moving target in video.
Background art
With the rapid development of information technology, computer vision techniques are increasingly applied to video tracking, especially in the analysis of sports event video, where tracking moving targets by computer vision can significantly reduce labor cost and improve analysis accuracy. Tracking algorithms based on online machine learning have developed rapidly in recent years, for example online Boosting algorithms, tracking-learning-detection algorithms and trackers based on compressed sensing. However, because these online-learning tracking methods must continually learn new models, their computational complexity is high, which hurts tracking efficiency, and they are prone to tracking drift, which lowers tracking accuracy.
Summary of the invention
In view of the low tracking efficiency and low tracking accuracy of conventional moving-target tracking methods, it is necessary to provide a fast and accurate method and device for tracking a moving target in video.
A method for tracking a moving target in video comprises the following steps:
calculating the occlusion rate of the moving target in the current frame of the video captured from a first viewing angle;
calculating the learning rate of a spatio-temporal context model according to the occlusion rate of the moving target, and updating the spatio-temporal context model of the moving target according to the learning rate;
obtaining the image feature value of the moving target in the current frame, and updating the context prior model of the moving target according to the image feature value;
convolving the updated spatio-temporal context model with the updated context prior model to obtain the tracking position of the moving target in the next frame of the first-view video.
In the above method, the learning rate of the spatio-temporal context model is calculated from the occlusion rate of the moving target in the current frame of the first-view video, and the spatio-temporal context model of the moving target is updated according to that learning rate; the context prior model of the moving target is then updated according to the image feature value; finally, the updated spatio-temporal context model and context prior model are convolved to obtain the tracking position of the moving target in the next frame of the first-view video. Tracking and localization of the target in the next frame is thus achieved simply by updating the spatio-temporal context model and the context prior model: only a model update is required, rather than repeatedly learning new models, which effectively reduces computational complexity and improves tracking efficiency. Moreover, because the learning rate of the spatio-temporal context model is determined dynamically from the occlusion state of the moving target, the method avoids learning a wrong model when the target is occluded by other objects, effectively prevents tracking drift, and greatly improves tracking accuracy.
In one of the embodiments, the step of calculating the occlusion rate of the moving target in the current frame of the first-view video includes:
detecting whether the tracking boxes of different moving targets in the current frame intersect;
when the tracking boxes of different moving targets intersect, calculating the length and width of the overlapping part between the tracking boxes, and calculating the occluded area of the moving target from that length and width;
obtaining the pre-stored tracking-box area of the moving target, and calculating the occlusion rate of the moving target as the ratio of the occluded area to the tracking-box area.
In one of the embodiments, the learning rate is calculated using the following equation:
wherein:
e is the base of the natural logarithm;
ΔS is the occlusion rate of the moving target;
k and a second coefficient are constant parameters.
In one of the embodiments, the step of obtaining the image feature value of the moving target in the current frame includes:
obtaining the color intensity of the moving target on the red channel, the green channel and the blue channel in the current frame;
assigning a corresponding color-intensity weight to the color intensity on each of the red, green and blue channels;
computing a weighted sum of the color intensities on the channels to obtain the image feature value of the moving target in the current frame.
In one of the embodiments, the method further includes:
extracting the boundary region of the tracked venue, building a top-view two-dimensional model of the venue, and projecting the tracking position to a first projection coordinate in the top-view two-dimensional model.
In one of the embodiments, the method further includes:
obtaining a video captured from a second viewing angle, and calculating the second projection coordinate, in the top-view two-dimensional model of the venue, of the tracking position of the moving target in the frame of the second-view video corresponding to the next frame;
comparing the occlusion rate of the moving target in the current frame of the first-view video and the occlusion rate of the moving target in the current frame of the second-view video with a preset occlusion-rate threshold;
when both occlusion rates are less than or equal to the preset threshold, calculating the target projection coordinate of the moving target in the top-view two-dimensional model from the first projection coordinate and the second projection coordinate;
when the occlusion rate in the first-view video exceeds the preset threshold, choosing the second projection coordinate as the target projection coordinate; when the occlusion rate in the second-view video exceeds the preset threshold, choosing the first projection coordinate as the target projection coordinate.
In one of the embodiments, after the step of choosing the second projection coordinate as the target projection coordinate in the top-view two-dimensional model, the method further includes: correcting the first projection coordinate according to the second projection coordinate;
and after the step of choosing the first projection coordinate as the target projection coordinate in the top-view two-dimensional model, the method further includes: correcting the second projection coordinate according to the first projection coordinate.
A device for tracking a moving target in video includes:
an occlusion-rate calculation module, for calculating the occlusion rate of the moving target in the current frame of the first-view video;
a spatio-temporal context model update module, for calculating the learning rate of the spatio-temporal context model according to the occlusion rate of the moving target, and updating the spatio-temporal context model of the moving target according to the learning rate;
a context prior model update module, for obtaining the image feature value of the moving target in the current frame and updating the context prior model of the moving target according to the image feature value;
a tracking module, for convolving the updated spatio-temporal context model with the updated context prior model to obtain the tracking position of the moving target in the next frame of the first-view video.
In one of the embodiments, the spatio-temporal context model update module includes:
an intersection detection submodule, for detecting whether the tracking boxes of different moving targets in the current frame intersect;
an occluded-area calculation submodule, for calculating, when the tracking boxes of different moving targets intersect, the length and width of the overlapping part between the tracking boxes, and calculating the occluded area of the moving target from that length and width;
an occlusion-rate calculation submodule, for obtaining the pre-stored tracking-box area of the moving target and calculating the occlusion rate of the moving target as the ratio of the occluded area to the tracking-box area.
In one of the embodiments, the learning-rate calculation module calculates the learning rate using the following equation:
wherein:
e is the base of the natural logarithm;
ΔS is the occlusion rate of the moving target;
k and a second coefficient are constant parameters.
Brief description of the drawings
Fig. 1 is a flow chart of the method for tracking a moving target in video in one embodiment;
Fig. 2 is a flow chart of calculating the occlusion rate of the moving target in one embodiment;
Fig. 3 is a schematic diagram of the principle for computing the occluded area of the moving target in one embodiment;
Fig. 4 is a flow chart of the method for tracking a moving target in video in another embodiment;
Fig. 5 is a schematic interface display of the spatio-temporal context information of the moving target in one embodiment;
Fig. 6 is a schematic display of the top-view two-dimensional model of the tracked venue in one embodiment;
Fig. 7 is a structural diagram of the device for tracking a moving target in video in one embodiment;
Fig. 8 is a structural diagram of the spatio-temporal context model update module in one embodiment;
Fig. 9 is a structural diagram of the context prior model update module in one embodiment;
Fig. 10 is a structural diagram of the spatio-temporal context model update module in another embodiment.
Detailed description of the embodiments
To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the drawings and embodiments. It should be appreciated that the specific embodiments described herein only explain the present invention and are not intended to limit it.
Referring to Fig. 1, a method for tracking a moving target in video comprises the following steps:
Step 102: calculating the occlusion rate of the moving target in the current frame of the first-view video.
Specifically, the occlusion rate of the moving target indicates the degree to which the target is occluded, and is calculated from the occluded area of the target. The terminal detects whether the moving target is occluded; when it is, the occlusion rate of the moving target is calculated, and otherwise the occlusion rate is 0.
Step 104: calculating the learning rate of the spatio-temporal context model according to the occlusion rate of the moving target, and updating the spatio-temporal context model of the moving target according to the learning rate.
Specifically, the spatial context model captures the spatial relations, including distance and direction, between the moving target and its local context. Because a video sequence is continuous, the temporal context dependency is also very important for the tracking result: the spatio-temporal context model of the moving target in each frame is learned, via the learning rate, from the spatio-temporal context model and the spatial context model of the target in the previous frame. When the moving target is occluded by other objects, its appearance model changes and the credibility of its spatial context model decreases. In this embodiment, the learning rate of the spatio-temporal context model of the moving target is therefore adjusted to prevent learning a wrong model. Specifically, the learning rate of the spatio-temporal context model is determined dynamically from the occlusion state of the moving target, and the spatio-temporal context model of the moving target for the next frame of the first-view video is updated according to that learning rate.
Step 106: obtaining the image feature value of the moving target in the current frame, and updating the context prior model of the moving target according to the image feature value.
Specifically, the context prior model reflects the spatial composition of the current local context of the moving target itself, and the image features of the context region are related to its spatial structure. In this embodiment, the terminal obtains the image feature value of the moving target in the current frame and updates the context prior model of the moving target according to the image feature value.
Step 108: convolving the updated spatio-temporal context model with the updated context prior model to obtain the tracking position of the moving target in the next frame of the first-view video.
In the above method, the learning rate of the spatio-temporal context model is calculated from the occlusion rate of the moving target in the current frame of the first-view video, and the spatio-temporal context model of the moving target is updated according to that learning rate; the context prior model of the moving target is then updated according to the image feature value; finally, the updated spatio-temporal context model and context prior model are convolved to obtain the tracking position of the moving target in the next frame of the first-view video. Tracking and localization of the target in the next frame is thus achieved simply by updating the spatio-temporal context model and the context prior model: only a model update is required, rather than repeatedly learning new models, which effectively reduces computational complexity and improves tracking efficiency. Moreover, because the learning rate of the spatio-temporal context model is determined dynamically from the occlusion state of the moving target, the method avoids learning a wrong model when the target is occluded by other objects, effectively prevents tracking drift, and greatly improves tracking accuracy.
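By way of illustration only, step 108 can be sketched as follows. This is a minimal sketch of a generic spatio-temporal context (STC) update: the function name, the array shapes, and the use of an FFT-based circular convolution are assumptions, not the patented implementation.

```python
import numpy as np

def stc_track_step(stc_model, prior_model):
    """One tracking step: convolve the spatio-temporal context model with the
    context prior model (done as an element-wise product in the Fourier
    domain) and take the peak of the resulting confidence map as the next
    tracking position. Both inputs are 2-D arrays of the same shape over the
    local context region."""
    conf = np.real(np.fft.ifft2(np.fft.fft2(stc_model) * np.fft.fft2(prior_model)))
    # Peak of the confidence map gives the new target position (x, y).
    dy, dx = np.unravel_index(np.argmax(conf), conf.shape)
    return (int(dx), int(dy)), conf
```

Computing the convolution in the Fourier domain keeps the per-frame cost low, which is consistent with the low computational complexity the method claims.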
As shown in Fig. 2, in one embodiment, step 102 includes:
Step 1022: detecting whether the tracking boxes of different moving targets in the current frame intersect.
To guarantee the accuracy of the initial positions of the moving targets and lay a foundation for later tracking, in this embodiment the initial position of each moving target in the first frame of the first-view video is calibrated manually through human-computer interaction: a tracking box is selected manually, determining the initial position of each moving target. Specifically, in this embodiment the tracking box is a rectangle. During tracking, the terminal detects in real time whether the tracking boxes of different moving targets intersect. If they intersect, the moving targets occlude each other and step 1024 is executed; otherwise the moving target is not occluded and its occlusion rate is directly taken as 0.
Step 1024: when the tracking boxes of different moving targets intersect, calculating the length and width of the overlapping part between the tracking boxes, and calculating the occluded area of the moving target from that length and width.
As shown in Fig. 3, in this embodiment a coordinate system is set up with the upper-left corner as the origin, the X axis pointing right and the Y axis pointing down. After the current frame has been tracked, the tracking position of each moving target gives the vertex coordinates of its tracking box. For ease of calculation, this embodiment works with the upper-left and lower-right vertices of each tracking box: tracking box K1 has upper-left vertex (minX1, minY1) and lower-right vertex (maxX1, maxY1), and tracking box K2 has upper-left vertex (minX2, minY2) and lower-right vertex (maxX2, maxY2). Tracking boxes K1 and K2 have two intersection points, E and F. From the abscissa of the lower-right vertex of K1 and the ordinate of the upper-left vertex of K2, the coordinate of E is (maxX1, minY2); similarly, from the abscissa of the upper-left vertex of K2 and the ordinate of the lower-right vertex of K1, the coordinate of F is (minX2, maxY1).
Once the coordinates of E and F are known, the length and width of the overlap between K1 and K2 can be calculated: the difference between the abscissa of E and the abscissa of the upper-left vertex of K2 gives the width of the overlap, and the difference between the ordinate of F and the ordinate of the upper-left vertex of K2 gives its length. The product of length and width is then the occluded area of the moving target: Soverlap = (maxX1 - minX2) * (maxY1 - minY2).
In this embodiment, the occluded area is obtained by first computing the intersection-point coordinates and from them the length and width of the overlap. Note, however, that the above embodiment does not limit how the occluded area may be computed. For example, in other embodiments the occluded area can be computed directly from the upper-left and lower-right vertex coordinates of K1 and K2. Still taking Fig. 3 as an example, in one embodiment define minX = max(minX1, minX2), i.e. the larger of minX1 and minX2; maxX = min(maxX1, maxX2), the smaller of maxX1 and maxX2; minY = max(minY1, minY2), the larger of minY1 and minY2; and maxY = min(maxY1, maxY2), the smaller of maxY1 and maxY2. During tracking, the terminal compares minX with maxX and minY with maxY in real time, judges from the comparison whether the tracking boxes overlap, and computes the occluded area when an occlusion occurs. Specifically, if minX < maxX and minY < maxY, then K1 and K2 overlap, the moving target is occluded, and the occluded area is Soverlap = (maxX - minX) * (maxY - minY). As shown in Fig. 3, in this embodiment minX = minX2, minY = minY2, maxX = maxX1 and maxY = maxY1, so Soverlap = (maxX1 - minX2) * (maxY1 - minY2).
Step 1026: obtaining the pre-stored tracking-box area of the moving target, and calculating the occlusion rate of the moving target as the ratio of the occluded area to the tracking-box area.
Specifically, in step 1022 the tracking box of the moving target is calibrated and its area is calculated and stored; after step 1024 yields the occluded area of the moving target, the terminal reads the tracking-box area and calculates the occlusion rate of the moving target as
ΔS = Soverlap / S0,
where Soverlap is the occluded area of the moving target and S0 is the tracking-box area.
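As an illustrative sketch of steps 1022 to 1026 (the function name and the (minX, minY, maxX, maxY) box layout are assumptions), the occlusion rate can be computed from two tracking boxes as:

```python
def occlusion_rate(box1, box2):
    """Occlusion rate of the target tracked by box1: overlap area Soverlap of
    the two tracking boxes divided by the tracking-box area S0. Boxes are
    (minX, minY, maxX, maxY) tuples in the image coordinate system of the
    description (origin at the upper-left corner, Y pointing down)."""
    minX = max(box1[0], box2[0])
    minY = max(box1[1], box2[1])
    maxX = min(box1[2], box2[2])
    maxY = min(box1[3], box2[3])
    if minX >= maxX or minY >= maxY:   # boxes do not intersect: no occlusion
        return 0.0
    s_overlap = (maxX - minX) * (maxY - minY)
    s0 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    return s_overlap / s0
```

This follows the min/max variant of the embodiment, which avoids computing the intersection points E and F explicitly.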
In one embodiment, in step 104, the learning rate is calculated using the equation given above, wherein e is the base of the natural logarithm, ΔS is the occlusion rate of the moving target, and k and a second coefficient are constant parameters. Specifically, k ranges from 2 to 4 and the second coefficient ranges from 1.5 to 2.5; in one embodiment, k = 3.
In one embodiment, in step 106, the step of obtaining the image feature value of the moving target in the current frame includes: obtaining the color intensity of the moving target on the red channel, the green channel and the blue channel in the current frame; assigning a corresponding color-intensity weight to the color intensity on each channel; and computing a weighted sum of the color intensities on the channels to obtain the image feature value of the moving target in the current frame.
Specifically, the weight assigned to the color intensity of each channel is determined by how much the color intensities of different moving targets differ on that channel: the greater the difference on a channel, the larger the weight of that channel. In this embodiment, the color feature value of each moving target is thus determined by the color differences between different moving targets and used to update the context prior model, which ensures that the context prior model tracks accurately and further improves tracking accuracy.
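As a minimal sketch of the weighted channel sum (the function name, the HxWx3 RGB array layout, and the given weights are assumptions; the weight-selection step itself is application-specific), the per-pixel image feature value can be computed as:

```python
import numpy as np

def weighted_color_feature(patch, weights):
    """Per-pixel image feature value of a target patch: the R, G and B color
    intensities weighted and summed. `patch` is an HxWx3 array (channels in
    R, G, B order); `weights` are the per-channel weights, chosen larger for
    channels that best separate this target from the other targets."""
    patch = patch.astype(np.float64)
    return (patch[..., 0] * weights[0]
            + patch[..., 1] * weights[1]
            + patch[..., 2] * weights[2])
```

The resulting single-channel map can then serve as the image feature from which the context prior model is updated.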
In one embodiment, the method further includes: extracting the boundary region of the tracked venue, building a top-view two-dimensional model of the venue, and projecting the tracking position to a first projection coordinate in the top-view two-dimensional model.
The target coordinates and position relations obtained in steps 102 to 108 are all positions in the original view captured by the camera at the first viewing angle. To visualize the tracking results for trajectory analysis, this embodiment builds a top-view two-dimensional model of the venue that simultaneously displays the tracking position of every moving target. Each moving target has a target identifier in this top-view two-dimensional model; after each frame is tracked, the identifier of the moving target is moved from the first projection coordinate corresponding to the previous frame to the first projection coordinate corresponding to the newly determined tracking position.
In general, the two-dimensional model of the tracked venue is a top view, while the original video is usually a side view shot at some angle. In this embodiment, the viewing angle and data scale are transformed according to the camera position and angle so that the tracking position of the moving target is displayed synchronously on the top-view two-dimensional model of the venue. Specifically, this embodiment establishes the transformation between the original video image and the two-dimensional model by a homogeneous transformation. The projective transformation of a two-dimensional plane is expressed, in homogeneous coordinates, as the product of a vector and a 3x3 matrix, x' = Hx, where the homography matrix has the form
H = [h11 h12 h13; h21 h22 h23; h31 h32 1].
A planar homography has eight degrees of freedom, so solving for the eight unknowns of the transformation matrix yields the homography and completes the projective transformation of the target. Each pair of corresponding points yields two equations from the matrix product above, so solving for all unknowns requires four sets of equations; hence the homography matrix can be obtained from just four pairs of corresponding point coordinates. Specifically, in this embodiment the four vertex coordinates of the venue are determined by extracting the boundary region of the tracked venue, the transformation matrix is solved, and the two-dimensional projection is realized. Because this embodiment computes the two-dimensional projection of the three-dimensional video image through the homography matrix, no camera parameters need to be obtained; the video analysis system is easy to use and the transformation is flexible.
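The four-point solution described above can be sketched as follows. This is a generic direct linear solution with h33 fixed to 1, under assumed function names, not the patent's own code:

```python
import numpy as np

def homography_from_4_points(src, dst):
    """Solve the 8 unknowns of the 3x3 homography H (with h33 fixed to 1)
    from four point correspondences, each pair contributing two linear
    equations. src and dst are lists of four (x, y) tuples, e.g. the venue
    corners in the video image and in the top-view model."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h11*x + h12*y + h13) / (h31*x + h32*y + 1), similarly for v
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def project(H, pt):
    """Apply H to an image point, returning top-view model coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)
```

With the four venue corners as correspondences, `project` maps any tracked image position to its projection coordinate in the top-view two-dimensional model.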
In one embodiment, the method further includes: obtaining a video captured from a second viewing angle, and calculating the second projection coordinate, in the top-view two-dimensional model, of the tracking position of the moving target in the frame of the second-view video corresponding to the next frame; comparing the occlusion rate of the moving target in the current frame of the first-view video and the occlusion rate of the moving target in the current frame of the second-view video with a preset occlusion-rate threshold; when both occlusion rates are less than or equal to the preset threshold, calculating the target projection coordinate of the moving target in the top-view two-dimensional model from the first projection coordinate and the second projection coordinate; when the occlusion rate in the first-view video exceeds the preset threshold, choosing the second projection coordinate as the target projection coordinate; and when the occlusion rate in the second-view video exceeds the preset threshold, choosing the first projection coordinate as the target projection coordinate.
Specifically, the determination of the tracking position of the moving target in the next frame of the second-view video, and the transformation of that position into the second projection coordinate in the top-view two-dimensional model, follow the same process and principle as for the first-view video and the first projection coordinate, and are not repeated here.
In a multi-target tracking scene, complex target motion makes large or even complete occlusion very likely. When two tracking boxes are superimposed, drift or jumps may occur during tracking; moreover, when a moving target is heavily occluded, even if no drift occurs, the coordinate obtained for the occluded object is inaccurate, because the distance between objects in an occlusion relationship cannot be judged from that viewing angle. Therefore, this embodiment uses the occlusion state of the moving target to judge whether the first projection coordinate obtained from the first view or the second projection coordinate obtained from the second view is wrong. If the target is severely occluded in one view, i.e. its occlusion rate exceeds the preset occlusion-rate threshold, the projection coordinate obtained from that view is considered wrong and the projection coordinate obtained from the other view is chosen as the final target projection coordinate. If the occlusion rates in both views are less than or equal to the preset threshold, the projection coordinates obtained from both views are correct; in that case weights are assigned to the first and second projection coordinates and a weighted combination is computed, yielding the target projection coordinate of the moving target in the top-view two-dimensional model, so that the tracking result is optimized from both projection coordinates and accurate tracking is ensured.
Specifically, during tracking the size of the tracking box is fixed, but under the camera visual angle the tracked moving target appears larger when near and smaller when far. Therefore, in one embodiment, the preset shielding-rate threshold used to judge occlusion is defined to depend on the distance of the target from the camera in the two-dimensional model: the preset shielding-rate threshold is calculated according to the distance of the moving target from the camera in the overhead two-dimensional model of the tracking site, and the weights of the next-frame tracking position in the first-visual-angle video and of the next-frame tracking position in the second-visual-angle video are calculated according to the distances of the moving target from the cameras in the first-visual-angle video and the second-visual-angle video.
In the present embodiment, the videos under different visual angles are tracked simultaneously, the tracking position of the moving target obtained under each visual angle is projected into the overhead two-dimensional model of the tracking site, and the tracking results of the same moving target under the two visual angles are unified according to the occlusion of the moving target. This double-visual-angle video tracking optimizes the tracking result, ensures accurate tracking, and greatly improves tracking accuracy.
In one embodiment, after the step of choosing the second projection coordinate as the target projection coordinate of the moving target in the overhead two-dimensional model of the tracking site, the method further includes: correcting the first projection coordinate according to the second projection coordinate. After the step of choosing the first projection coordinate as the target projection coordinate of the moving target in the overhead two-dimensional model of the tracking site, the method further includes: correcting the second projection coordinate according to the first projection coordinate. In the present embodiment, when the tracking result under one visual angle is wrong, the wrong tracking result is corrected by the tracking result under the other visual angle, and the space-time context model is updated according to the corrected tracking result, so as to ensure that subsequent tracking results are accurate and further improve tracking accuracy.
Further, to facilitate understanding of the technical scheme, the tracking of moving targets in video described above is explained in detail below, in conjunction with Fig. 4 to Fig. 6, taking football video tracking as an example. For purposes of illustration, the two football teams are denoted team A and team B; a team A player is marked with a rectangle in the overhead two-dimensional model of the court, and a team B player is marked with a circle.
The tracking of moving target, comprises the following steps in a kind of video:
1) moving target initial position, is determined, motion target tracking frame is demarcated.
First, t two field pictures are read, each sportsman's (i.e. moving target) in t frames is determined by manually demarcating tracking box Initial position.Specifically, during artificial demarcation sportsman's tracking box, tracking box can be selected using mouse, respectively under the first visual angle The initial position of sportsman is demarcated in the first frame in video in video and under the second visual angle, determines regarding under the first visual angle In video in frequency and under the second visual angle in the first frame sportsman initial position.Further, the demarcation of sportsman's initial position is completed Afterwards, terminal further calculates and stores the tracking box area of each sportsman's tracking box.
2) Calculate the shielding rate of the moving target in the current frame.

Specifically, the shielding rate of each player in the corresponding current frame of the video under the first visual angle and of the video under the second visual angle is calculated respectively, according to the tracking box area of each player's tracking box and the area of the player that is occluded in the current frame. The calculation principle and process of the shielding rate of a moving target are described in detail in the foregoing embodiments and are not repeated here.
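As an illustration, the box-overlap computation behind the shielding rate can be sketched as follows; the function names, the (x, y, w, h) box convention and the choice of taking the largest pairwise overlap as the occluded area are assumptions for the sketch, not the patent's exact procedure:

```python
def overlap_area(box_a, box_b):
    """Overlap area of two axis-aligned tracking boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ow = min(ax + aw, bx + bw) - max(ax, bx)   # overlap width
    oh = min(ay + ah, by + bh) - max(ay, by)   # overlap height
    if ow <= 0 or oh <= 0:
        return 0.0                             # the boxes do not intersect
    return float(ow * oh)

def shielding_rate(box, other_boxes):
    """Occluded fraction of `box`: occluded area / tracking-box area."""
    occluded = max((overlap_area(box, b) for b in other_boxes), default=0.0)
    return occluded / float(box[2] * box[3])
```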
3) Calculate the learning rate of the space-time context model and update the space-time context model.

Temporal context information is the temporal association between consecutive frames; spatial context information is the combination of the tracked target and the background image within a determined range around it. To track a target using spatio-temporal context information, a tracking model must first be established. Specifically, the target tracking problem is essentially the probability problem of where the target appears. Let o be the target to be tracked and x a two-dimensional coordinate point on the image; P(x | o) denotes the probability that target o appears at coordinate x, which converts target tracking into the computational problem of maximum confidence.
Let:

M(x) = P(x | o);  Formula (4)
Then, when the confidence map M(x) takes its maximum value, the corresponding coordinate x can be regarded as the position where target o most probably appears. As shown in Fig. 5, the range of the solid box is the target region, and the range of the outer dashed box is the local context region. The target position is represented by the target center coordinate x*, and z is a point in the local context region. Define the local context region of target x* as Ω_c(x*), and define the context feature set of this local region as X^C = {c(z) = (I(z), z) | z ∈ Ω_c(x*)}, where I(z) is the image feature value at coordinate z. Using the total probability formula, with the local context features as intermediate quantities, formula (4) can be expanded to obtain:

M(x) = Σ_{c(z)∈X^C} P(x | c(z), o) · P(c(z) | o);  Formula (5)
Here, P(x | c(z), o) represents the probability that the target appears at point x given the target o and its local context feature c(z); it establishes the spatial context model of the spatial relationship between the tracked target position and its context information. P(c(z) | o) represents the probability that the context feature c(z) appears around target o; it is the context prior probability of target o, i.e. the appearance prior model of the current local context. The context prior model ensures that, when target position prediction is performed by computing the confidence map M(x), the context selected is similar in appearance to the context of the target position in the previous frame, while the spatial context model ensures that the selected new target position is not only similar in appearance to the former target but also reasonable in spatial position, thereby avoiding, to a certain extent, interference from other objects of similar appearance and the resulting drift phenomenon in tracking.

Based on the above, in the present embodiment, each part of formula (5) is modeled mathematically in advance, specifically including confidence map modeling, spatial context model modeling and context prior model modeling.
First, the confidence map is modeled as follows. Since the target position in the first frame of the video is known (obtained from the tracking box calibrated in the initial frame), the confidence map M(x) should satisfy the property that the closer x is to the target position x*, the greater its confidence. Therefore, let:

M(x) = b · e^(−|(x − x*)/α|^β);  Formula (6)

where b is a normalization parameter, α is a scale parameter, and β is a parameter controlling the shape of the function curve. α is related to the size of the tracked target, with a value range of 1.75 to 2.75; the value range of β is 0.5 to 1.5. In one embodiment, α = 2.25 and β = 1.
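The confidence map described above can be sketched in a few lines of Python (a sketch assuming a Euclidean pixel distance to the target center; the parameter defaults follow the embodiment, α = 2.25, β = 1):

```python
import numpy as np

def confidence_map(shape, center, b=1.0, alpha=2.25, beta=1.0):
    """M(x) = b * exp(-|dist(x, x*) / alpha| ** beta): higher near the center."""
    ys, xs = np.indices(shape)
    dist = np.hypot(xs - center[0], ys - center[1])   # distance to target center x*
    return b * np.exp(-np.abs(dist / alpha) ** beta)
```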
Next, the spatial context model P(x | c(z), o) is modeled as follows. The spatial context model is concerned with the spatial position relationship between the tracked target and its local context, including both distance and direction relations, so P(x | c(z), o) is defined as a non-radially-symmetric function:

P(x | c(z), o) = h^sc(x − z);  Formula (7)

where x is the target position and z is a position in its local context. Then, even if two points z1 and z2 are equidistant from the target center x*, as long as their positions differ, h^sc(x* − z1) ≠ h^sc(x* − z2); that is, they represent different contexts with respect to x*, which effectively distinguishes different spatial relationships and prevents ambiguity.
Finally, the context prior model P(c(z) | o) is modeled as follows. The context prior model reflects the spatial composition of the current local context itself; intuitively, it should be related to the image features of the context space and their spatial structure. Therefore, let:

P(c(z) | o) = I(z) · ω_σ(z − x*);  Formula (8)

where I(z) is the image feature value at point z in the local context region and ω_σ(·) is a weight function.
Specifically, the tracking process can be compared with how the human eye follows an object: a context region closer to the tracked target is considered more related to the target and therefore more important, while a context region farther from the tracked target is considered a less related part and therefore less important. Accordingly, define:

ω_σ(Δ) = λ · e^(−|Δ|²/σ²);  Formula (9)

where Δ is the distance between two points; λ is a normalization parameter that keeps the value of P(c(z) | o) between 0 and 1, so as to satisfy the definition of a probability function; and σ is a scale parameter related to the size of the tracked target.
Substituting formula (9) into formula (8) gives the context prior model as follows:

P(c(z) | o) = λ · I(z) · e^(−|z − x*|²/σ²);  Formula (10)

That is, the spatial composition of the local context is modeled as the Gaussian-weighted sum of the image feature values of the points in this region.
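A minimal sketch of this Gaussian-weighted context prior (function and parameter names are assumed; λ is kept as a plain multiplier rather than an exact probability normalizer):

```python
import numpy as np

def context_prior(features, center, sigma=5.0, lam=1.0):
    """P(c(z) | o) = lam * I(z) * exp(-|z - x*|^2 / sigma^2)."""
    ys, xs = np.indices(features.shape)
    d2 = (xs - center[0]) ** 2 + (ys - center[1]) ** 2   # squared distance to x*
    return lam * features * np.exp(-d2 / sigma ** 2)
```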
Further, in the present embodiment, after the above confidence map modeling, spatial context model modeling and context prior model modeling are completed, the space-time context model is updated according to the confidence map, the spatial context model and the context prior model.

First, substituting formula (6), formula (7) and formula (10) into formula (5) gives:

M(x) = b · e^(−|(x − x*)/α|^β) = Σ_{z∈Ω_c(x*)} h^sc(x − z) · I(z) · ω_σ(z − x*);  Formula (11)

where h^sc(x − z) is the spatial context model, i.e. the object to be calculated and learned for each frame image.
According to the definition of the convolution operator ⊗, formula (11) can be transformed into:

M(x) = h^sc(x) ⊗ (I(x) · ω_σ(x − x*));  Formula (12)
According to the convolution theorem:

F(M(x)) = F(h^sc(x)) ⊙ F(I(x) · ω_σ(x − x*));  Formula (13)

Then:

h^sc(x) = F^(−1)( F(M(x)) / F(I(x) · ω_σ(x − x*)) );  Formula (14)

where F and F^(−1) denote the Fourier transform and the inverse Fourier transform, respectively, and ⊙ denotes element-wise multiplication.
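The division in the Fourier domain makes learning the spatial context model a one-line deconvolution; the sketch below adds a small eps term to guard against division by zero, which is an added assumption rather than part of the patent's formula:

```python
import numpy as np

def learn_spatial_context(conf_map, prior, eps=1e-8):
    """h_sc = F^{-1}( F(M) / F(I * w_sigma) ), solved with the 2-D FFT."""
    H = np.fft.fft2(conf_map) / (np.fft.fft2(prior) + eps)
    return np.real(np.fft.ifft2(H))
```

Convolving the learned h_sc back with the same prior reproduces the confidence map, which provides a simple sanity check of the model.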
Suppose that at frame t the target center position x*_t and the local context region Ω_c(x*) of the target are known; the spatial context model of the tracked target and its local context region in frame t can then be calculated and is denoted h^sc_t. Since what is processed is a continuous video sequence, the temporal context dependency is also of great importance for the tracking result. To take this dimension into account, a space-time context model learning rate ρ is set, and the space-time context model with which each frame tracks the target is expressed as two parts, the historical space-time context model and the newly learned spatial context model, as follows:

H^stc_(t+1) = (1 − ρ) · H^stc_t + ρ · h^sc_t;  Formula (16)

where h^sc_t is the spatial context model of frame t; H^stc_t is the space-time context model of frame t, and H^stc_(t+1) is the space-time context model of frame t+1.
In general, in a situation where there are multiple tracked targets of similar appearance, when occlusion occurs the appearance model of the target has undergone a large change; if the space-time context model still learns and updates at the same speed, it will continually learn an erroneous model and eventually lose the tracked target. The present embodiment dynamically determines the learning rate according to the occlusion of the moving target: the learning rate ρ is a dynamically, automatically updated value, which effectively prevents updating too fast and completely losing historical model information. Specifically, when the tracked target is occluded by other objects, the target appearance model changes and the confidence of the spatial context model decreases, so the learning rate must be reduced to prevent learning an erroneous model and to ensure accurate tracking. In the present embodiment, the learning rate of the space-time context model is calculated according to the shielding rate of the moving target, and the space-time context model of the moving target obtained above is updated with it. The learning rate of the space-time context model is calculated according to formula (3), which is not repeated here.
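The occlusion-aware update can be sketched as follows; the exponential decay used for the learning rate is only an assumed stand-in for formula (3), which is not reproduced in this section, while the model update itself follows the formula above:

```python
import numpy as np

def learning_rate(shielding_rate, rho0=0.075, k=5.0):
    """Assumed occlusion-aware learning rate: rho shrinks as occlusion grows.

    rho = rho0 * exp(-k * shielding_rate) is a hypothetical stand-in for the
    patent's formula (3); rho0 and k are illustrative constants.
    """
    return rho0 * np.exp(-k * shielding_rate)

def update_stc_model(H_stc, h_sc, rho):
    """H_stc_{t+1} = (1 - rho) * H_stc_t + rho * h_sc_t."""
    return (1.0 - rho) * H_stc + rho * h_sc
```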
4) Obtain the image feature value of the moving target and update the context prior model of the moving target.

Specifically, the image feature value of the moving target is calculated by the following formula:

I(x) = w1 · I_R(x) + w2 · I_G(x) + w3 · I_B(x);  Formula (17)

where I_R(x) is the color intensity of x on the red channel; I_G(x) is the color intensity of x on the green channel; I_B(x) is the color intensity of x on the blue channel; and w1, w2, w3 are weights with w1 + w2 + w3 = 1. In one embodiment, the team uniform colors of the two teams differ most obviously on the R channel, so I_R(x) is assigned the larger weight: w1 = 0.4, w2 = 0.3, w3 = 0.3.
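This per-pixel weighted sum of the RGB channels is straightforward in numpy; a minimal sketch with the embodiment's weights:

```python
import numpy as np

def image_feature(rgb, weights=(0.4, 0.3, 0.3)):
    """I(x) = w1*I_R(x) + w2*I_G(x) + w3*I_B(x) for an (H, W, 3) image."""
    w1, w2, w3 = weights
    return w1 * rgb[..., 0] + w2 * rgb[..., 1] + w3 * rgb[..., 2]
```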
5) Perform a convolution operation on the updated space-time context model and context prior model to obtain the tracking position of the moving target in the next frame.

Specifically, at frame t+1, once the updated space-time context model H^stc_(t+1) is known, the context prior model of frame t+1 is calculated again, and the confidence map of frame t+1 can then be obtained by convolution calculation through formulas (7) to (11), as follows:

M_(t+1)(x) = F^(−1)( F(H^stc_(t+1)(x)) ⊙ F(I_(t+1)(x) · ω_σ(x − x*_t)) );  Formula (18)

Then, when the confidence map M_(t+1)(x) of frame t+1 takes its maximum value, the corresponding x is regarded as the center position x*_(t+1) of the moving target in frame t+1, which determines the tracking position of the moving target; that is, the tracking position of the tracked player in the next frame of the video under the first visual angle and the tracking position of the tracked player in the next frame of the video under the second visual angle are determined respectively.
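The prediction step then reduces to one convolution in the Fourier domain followed by an argmax over the resulting confidence map (a sketch; in the method above the prior is rebuilt around the previous target center before this call):

```python
import numpy as np

def track_step(H_stc, prior):
    """Return the (x, y) argmax of M = F^{-1}( F(H_stc) ⊙ F(prior) )."""
    conf = np.real(np.fft.ifft2(np.fft.fft2(H_stc) * np.fft.fft2(prior)))
    iy, ix = np.unravel_index(np.argmax(conf), conf.shape)
    return ix, iy
```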
6) Establish the overhead two-dimensional model of the court; project the tracking position of the next-frame moving target in the video under the first visual angle to the first projection coordinate in the overhead two-dimensional model of the court, and project the tracking position of the next-frame moving target in the video under the second visual angle to the second projection coordinate in the overhead two-dimensional model of the court.

Specifically, in the present embodiment, four corner points of the half-court of the football pitch are chosen as the four reference points for calculating the planar homography matrix. First, the court edge region is extracted through a series of thresholding techniques in digital image processing and Hough-transform line detection; the scattered line segments are then merged to obtain the linear equations of the court sidelines and four groups of calibration point coordinates; finally, the transformation matrices of the two visual angles are obtained from the four groups of calibration point coordinates. The specific overhead two-dimensional model of the court is shown in Fig. 6.
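With four point correspondences the planar homography can be solved directly. The sketch below uses the direct linear transformation with h33 fixed to 1, an assumed solver since the patent does not specify one:

```python
import numpy as np

def homography_from_points(src, dst):
    """Planar homography from four (x, y) -> (u, v) correspondences (DLT, h33 = 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def project(H, point):
    """Apply homography H to an image point, returning court-model coordinates."""
    p = H @ np.array([point[0], point[1], 1.0])
    return p[0] / p[2], p[1] / p[2]
```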
7) Detect whether the first projection coordinate and the second projection coordinate are wrong.

Specifically, the shielding rate of the current-frame player in the video under the first visual angle and the shielding rate of the current-frame player in the video under the second visual angle are compared respectively with the preset shielding-rate threshold, to judge whether the first projection coordinate and the second projection coordinate are wrong. If the shielding rate of the current-frame player in the video under the first visual angle is greater than the preset shielding-rate threshold, the first projection coordinate is wrong; if the shielding rate of the current-frame player in the video under the second visual angle is greater than the preset shielding-rate threshold, the second projection coordinate is wrong. If both shielding rates are less than or equal to the preset shielding-rate threshold, the first projection coordinate and the second projection coordinate are both correct. In the present embodiment, the preset shielding-rate threshold is calculated according to the distance of the player from the camera in the overhead two-dimensional model of the court, where the distance of the player from the camera is computed (formula (19)) from the coordinate [x, y] of the current-frame player in the overhead two-dimensional model of the court, height and width being the height and width of the overhead two-dimensional model, respectively.
Then, presetting shielding rate threshold value is:
Threshold=γ e-μ·Δd;Formula (20)
Wherein, threshold is default shielding rate threshold value;γ and μ are all normal parameter, and γ is used to adjust default shielding rate Changes of threshold scope, μ is used to adjust default shielding rate changes of threshold speed.
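The distance-adaptive error check of formula (20) can be sketched as follows; the default values of γ and μ are hypothetical, chosen only for illustration:

```python
import math

def projection_is_wrong(shielding_rate, delta_d, gamma=0.5, mu=1.0):
    """True when the player's shielding rate exceeds the preset threshold
    threshold = gamma * exp(-mu * delta_d) of formula (20)."""
    return shielding_rate > gamma * math.exp(-mu * delta_d)
```

Because the threshold shrinks with distance, a far-away target, which appears small and is more easily occluded, trips the error branch sooner than a nearby one.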
8) When the first projection coordinate or the second projection coordinate is wrong, choose the projection coordinate of the other visual angle as the target projection coordinate of the player.

Specifically, the shooting angles of the video under the first visual angle and the video under the second visual angle are different: a player occluded in the video shot under the first visual angle will not be occluded at the same time in the video shot under the second visual angle, so the first projection coordinate and the second projection coordinate will not be wrong simultaneously. Therefore, when the first projection coordinate is wrong, the second projection coordinate is chosen as the target projection coordinate of the player and the tracking of frame t ends; when the second projection coordinate is wrong, the first projection coordinate is chosen as the target projection coordinate of the player and the tracking of frame t ends.
Further, in one embodiment, when the tracking result under one visual angle is wrong, the wrong tracking result is also corrected by the tracking result under the other visual angle, to ensure that subsequent tracking results are accurate. Suppose the first projection coordinate is wrong. Under the first visual angle, the maximum-likelihood position given by the confidence map for the tracked player on which tracking drift occurs is P1, and the projection matrix that transforms the first-visual-angle video to the overhead two-dimensional model of the court is H1; under the second visual angle, the maximum-likelihood position of the tracked player is P2, and the projection matrix that transforms the second-visual-angle video to the overhead two-dimensional model of the court is H2. The second projection coordinate of P2 in the overhead two-dimensional model of the court is then H2·P2. Since P2 is the result of correct tracking, the erroneously tracked position P1 is updated to the correct tracking position under the first visual angle:

P1 = H1^(−1) · H2 · P2;  Formula (21)

Similarly, if the second projection coordinate is wrong, the second projection coordinate is corrected according to the first projection coordinate; the specific correction principle is identical to that for the first projection coordinate and is not repeated. After the correction of the first or second projection coordinate is completed, the space-time context model under the corresponding visual angle is further updated according to the corrected tracking result, ensuring that subsequent tracking results are accurate.
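Formula (21) in code: the reliable view's image position is lifted into the court model with H2 and pulled back into the drifting view with the inverse of H1, in homogeneous coordinates (a sketch assuming the 3×3 matrices estimated in step 6):

```python
import numpy as np

def correct_position(P2, H1, H2):
    """P1 = H1^{-1} · H2 · P2 (formula (21)), with homogeneous normalization."""
    p = np.linalg.inv(H1) @ H2 @ np.array([P2[0], P2[1], 1.0])
    return p[0] / p[2], p[1] / p[2]
```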
9) When the first projection coordinate and the second projection coordinate are both correct, calculate the target projection coordinate of the player according to the first projection coordinate and the second projection coordinate.

When the first projection coordinate and the second projection coordinate are both correct, the target projection coordinate of the player is determined by mutual auxiliary adjustment of the first projection coordinate and the second projection coordinate, and once the target projection coordinate is determined the tracking of frame t ends. Specifically, in the image after projective transformation, the position of a player is clearer at places nearer the camera, while at positions far from the camera the player is stretched and deformed and his exact position is blurrier. Therefore, when the target is nearer the camera under a given visual angle, the tracking result in the video shot by that camera is considered more reliable; that is, the tracking result obtained under that visual angle carries a greater weight when the target position is finally determined. The weight values of the first projection coordinate and the second projection coordinate are therefore determined by the distance of the target from each camera.
Suppose that, under the first visual angle, the camera is at the position shown in Fig. 6. Define the position of the camera under the first visual angle as the origin, and let the coordinate of team B player M in the overhead two-dimensional model of the court be pos_model1 = [x1 y1]; then:

Δd1 = sqrt(x1² + y1²);  Formula (22)

As shown in Fig. 6, the camera under the second visual angle is positioned opposite the camera under the first visual angle. The tracking result for player M obtained by the camera under the second visual angle, transformed into the overhead two-dimensional model of the court, is pos_model2 = [x2 y2], and the distance of player M from the camera under the second visual angle is:

Δd2 = sqrt((width − x2)² + (height − y2)²);  Formula (23)

where width and height are the width and height of the overhead two-dimensional model of the court, respectively.

Then, after the first projection coordinate and the second projection coordinate are fused, the final position of player M in the overhead two-dimensional model of the court is pos_final = [x y], a weighted combination of pos_model1 and pos_model2 whose weights are determined by the distances Δd1 and Δd2, the coordinate from the nearer camera carrying the larger weight.
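A concrete reading of this fusion, under the assumptions that camera 1 sits at the model origin, camera 2 at the opposite corner [width, height], and that each view's weight is inversely proportional to the player's distance from that view's camera; the exact weighting formula is not reproduced in the source, so this normalization is hypothetical:

```python
import math

def fuse_positions(pos1, pos2, width, height):
    """Fuse the two court-model coordinates of a player into pos_final.

    Weighting each view by the *other* camera's distance gives the nearer
    camera the larger weight; this concrete form is an assumption.
    """
    d1 = math.hypot(pos1[0], pos1[1])                    # distance to camera 1 (origin)
    d2 = math.hypot(width - pos2[0], height - pos2[1])   # distance to camera 2 (opposite corner)
    w1, w2 = d2 / (d1 + d2), d1 / (d1 + d2)              # nearer camera -> larger weight
    return (w1 * pos1[0] + w2 * pos2[0], w1 * pos1[1] + w2 * pos2[1])
```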
Further, in one embodiment, football videos under two visual angles were tracked according to steps 1) to 9) above. The tracking operation was implemented on a PC with the following hardware environment: central processing unit Intel Core i5 at a clock frequency of 2.5 GHz, with 8 GB of memory; the programming environment was Matlab 2014a. The original videos under the two visual angles were in avi format, each frame 1696×1080 pixels, each video about 20 MB in size and about 18 seconds long at 30 frames per second, about 540 frames in total. In the present embodiment, the tracking rate reached 1 s/frame and the tracking accuracy reached 100%.
Referring to Fig. 7, a tracking device 700 for a moving target in video includes:

a shielding rate computing module 702, for calculating the shielding rate of the moving target of the current frame in the video under the first visual angle;

a space-time context model update module 704, for calculating the learning rate of the space-time context model according to the shielding rate of the moving target, and updating the space-time context model of the moving target according to the learning rate;

a context prior model update module 706, for obtaining the image feature value of the moving target in the current frame, and updating the context prior model of the moving target according to the image feature value;

a tracking module 708, for performing a convolution operation on the updated space-time context model and context prior model to obtain the tracking position of the moving target of the next frame in the video under the first visual angle.
As shown in Fig. 8, in one embodiment, the space-time context model update module 704 includes:

an intersection detection submodule 7042, for detecting whether there is an intersection point between the tracking boxes of different moving targets in the current frame;

an occluded-area calculating submodule 7044, for calculating, when there is an intersection point between the tracking boxes of different moving targets, the length and width of the overlapping part between the tracking boxes of the different moving targets, and calculating the occluded area of the moving target according to the length and width;

a shielding rate calculating submodule 7046, for obtaining the pre-stored tracking box area of the moving target, and calculating the shielding rate of the moving target as the ratio of the occluded area to the tracking box area.
In one of the embodiments, the space-time context model update module 704 calculates the learning rate using the following formula:

where e is the base of the natural logarithm; ΔS is the shielding rate of the moving target; and k and the remaining constant are constant parameters.
As shown in Fig. 9, in one embodiment, the context prior model update module 706 includes:

a color intensity acquisition submodule 7062, for obtaining the color intensity of the moving target in the current frame on the red channel, the color intensity on the green channel and the color intensity on the blue channel;

a color intensity weight selection submodule 7064, for assigning corresponding color intensity weight values to the color intensity of the moving target on the red channel, the color intensity on the green channel, and the color intensity on the blue channel;

an image feature value computing submodule 7066, for performing a weighted summation of the color intensities on the channels to obtain the image feature value of the moving target in the current frame.
As shown in Fig. 10, in one embodiment, the tracking device 700 for a moving target in video further includes:

a two-dimensional model projection module 710, for extracting the tracking site boundary region, establishing the overhead two-dimensional model of the tracking site, and projecting the tracking position to the first projection coordinate in the overhead two-dimensional model of the tracking site.

In one embodiment, the tracking device 700 for a moving target in video is used to obtain the video under the second visual angle, and to calculate the second projection coordinate, in the overhead two-dimensional model of the tracking site, of the tracking position of the moving target of the video frame corresponding to the next frame in the video under the second visual angle. As shown in Fig. 10, the tracking device 700 for a moving target in video further includes:
a shielding rate comparison module 712, for comparing the shielding rate of the current-frame moving target in the video under the first visual angle and the shielding rate of the current-frame moving target in the video under the second visual angle, respectively, with the preset shielding-rate threshold;

a first target projection coordinate selection module 714, for calculating, when the shielding rate of the current-frame moving target in the video under the first visual angle and the shielding rate of the current-frame moving target in the video under the second visual angle are both less than or equal to the preset shielding-rate threshold, the target projection coordinate of the moving target in the overhead two-dimensional model of the tracking site according to the first projection coordinate and the second projection coordinate;

a second target projection coordinate selection module 716, for choosing, when the shielding rate of the current-frame moving target in the video under the first visual angle is greater than the preset shielding-rate threshold, the second projection coordinate as the target projection coordinate of the moving target in the overhead two-dimensional model of the tracking site; and, when the shielding rate of the current-frame moving target in the video under the second visual angle is greater than the preset shielding-rate threshold, choosing the first projection coordinate as the target projection coordinate of the moving target in the overhead two-dimensional model of the tracking site.
As shown in Fig. 10, in one embodiment, the tracking device 700 for a moving target in video further includes:

a projection coordinate correction module 718, for correcting the first projection coordinate according to the second projection coordinate when the shielding rate of the current-frame moving target in the video under the first visual angle is greater than the preset shielding-rate threshold; and correcting the second projection coordinate according to the first projection coordinate when the shielding rate of the current-frame moving target in the video under the second visual angle is greater than the preset shielding-rate threshold.
The technical features of the embodiments described above can be combined arbitrarily. For conciseness of description, not all possible combinations of the technical features of the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered within the scope recorded in this specification.

The embodiments described above express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of the present invention, and these all belong to the protection scope of the present invention. Therefore, the protection scope of the present patent should be determined by the appended claims.

Claims (10)

1. A method for tracking a moving target in video, characterized by comprising the following steps:

calculating the shielding rate of the moving target of the current frame in the video under a first visual angle;

calculating the learning rate of a space-time context model according to the shielding rate of the moving target, and updating the space-time context model of the moving target according to the learning rate;

obtaining the image feature value of the moving target in the current frame, and updating the context prior model of the moving target according to the image feature value;

performing a convolution operation on the updated space-time context model and the context prior model to obtain the tracking position of the moving target of the next frame in the video under the first visual angle.
2. The method according to claim 1, characterized in that the step of calculating the shielding rate of the moving target of the current frame in the video under the first visual angle includes:

detecting whether there is an intersection point between the tracking boxes of different moving targets in the current frame;

when there is an intersection point between the tracking boxes of different moving targets, calculating the length and width of the overlapping part between the tracking boxes of the different moving targets, and calculating the occluded area of the moving target according to the length and width;

obtaining the pre-stored tracking box area of the moving target, and calculating the shielding rate of the moving target as the ratio of the occluded area to the tracking box area.
3. The method according to claim 1, characterized in that the learning rate is calculated using the following formula:
(formula not reproduced; it appears as an image in the original publication)
wherein:
e is the base of the natural logarithm;
ΔS is the occlusion rate of the moving target; and
k and a further constant (whose symbol likewise appears only as an image) are constant parameters.
4. The method according to claim 1, characterized in that the step of obtaining the image feature value of the moving target in the current frame comprises:
obtaining the color intensity of the moving target in the current frame on the red channel, the color intensity on the green channel, and the color intensity on the blue channel;
assigning corresponding color-intensity weight values to the color intensity of the moving target on the red channel, the color intensity on the green channel, and the color intensity on the blue channel; and
performing a weighted summation of the color intensities on the channels to obtain the image feature value of the moving target in the current frame.
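The weighted channel summation can be sketched as below (editorial illustration; the patent does not specify the weight values, so the common luminance coefficients are used here purely as an example):

```python
import numpy as np

def image_feature_value(frame_rgb, weights=(0.30, 0.59, 0.11)):
    """Weighted sum of per-channel color intensities.

    frame_rgb: H x W x 3 array over the moving target's region,
    channels ordered red, green, blue. The default weights are the
    standard luminance coefficients, an assumption for illustration.
    """
    r, g, b = frame_rgb[..., 0], frame_rgb[..., 1], frame_rgb[..., 2]
    wr, wg, wb = weights
    # Weighted summation over the three color channels.
    return wr * r + wg * g + wb * b
```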
5. The method according to claim 1, characterized by further comprising:
extracting the boundary region of the tracking site, establishing a top-view two-dimensional model of the tracking site, and projecting the tracking position to a first projection coordinate in the top-view two-dimensional model of the tracking site.
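A planar homography is one standard way to realize such an image-to-top-view projection; the sketch below is an editorial illustration (the matrix `H`, its estimation from the extracted site-boundary corners, and all names are assumptions — the patent does not specify the projection method):

```python
import numpy as np

def project_to_top_view(H, point):
    """Apply a 3x3 planar homography H (image plane -> top-view model).

    In practice H would be estimated once per camera, e.g. by matching
    the extracted site-boundary corners to the corners of the top-view
    two-dimensional model.
    """
    x, y = point
    v = H @ np.array([x, y, 1.0])   # homogeneous coordinates
    return (v[0] / v[2], v[1] / v[2])  # de-homogenize
```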
6. The method according to claim 5, characterized by further comprising:
obtaining a video captured from a second visual angle, and calculating a second projection coordinate, in the top-view two-dimensional model of the tracking site, of the tracking position of the moving target in the frame of the video captured from the second visual angle that corresponds to the next frame;
comparing the occlusion rate of the current-frame moving target in the video captured from the first visual angle and the occlusion rate of the current-frame moving target in the video captured from the second visual angle, respectively, with a preset occlusion-rate threshold;
when the occlusion rate of the current-frame moving target in the video captured from the first visual angle and the occlusion rate of the current-frame moving target in the video captured from the second visual angle are both less than or equal to the preset occlusion-rate threshold, calculating, from the first projection coordinate and the second projection coordinate, the target projection coordinate of the moving target in the top-view two-dimensional model of the tracking site; and
when the occlusion rate of the current-frame moving target in the video captured from the first visual angle is greater than the preset occlusion-rate threshold, selecting the second projection coordinate as the target projection coordinate of the moving target in the top-view two-dimensional model of the tracking site; when the occlusion rate of the current-frame moving target in the video captured from the second visual angle is greater than the preset occlusion-rate threshold, selecting the first projection coordinate as the target projection coordinate of the moving target in the top-view two-dimensional model of the tracking site.
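The two-view selection logic can be sketched as follows (editorial illustration; the threshold value and the use of a simple mean when both views are reliable are assumptions — the claim only requires combining the two coordinates in that case):

```python
def fuse_projections(p1, p2, occ1, occ2, threshold=0.5):
    """Choose the top-view target projection coordinate from two views.

    p1, p2: (x, y) projection coordinates from the first and second view.
    occ1, occ2: current-frame occlusion rates in each view.
    """
    if occ1 > threshold:
        return p2   # first view too occluded: trust the second view
    if occ2 > threshold:
        return p1   # second view too occluded: trust the first view
    # Both views at or below the threshold: combine (here, a mean).
    return ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)
```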
7. The method according to claim 6, characterized in that:
after the step of selecting the second projection coordinate as the target projection coordinate of the moving target in the top-view two-dimensional model of the tracking site, the method further comprises:
correcting the first projection coordinate according to the second projection coordinate; and
after the step of selecting the first projection coordinate as the target projection coordinate of the moving target in the top-view two-dimensional model of the tracking site, the method further comprises:
correcting the second projection coordinate according to the first projection coordinate.
8. A device for tracking a moving target in a video, characterized by comprising:
an occlusion-rate calculation module, configured to calculate the occlusion rate of the moving target in a current frame of a video captured from a first visual angle;
a spatio-temporal context model update module, configured to calculate a learning rate of a spatio-temporal context model according to the occlusion rate of the moving target, and to update the spatio-temporal context model of the moving target according to the learning rate;
a context prior model update module, configured to obtain an image feature value of the moving target in the current frame, and to update a context prior model of the moving target according to the image feature value; and
a tracking module, configured to perform a convolution operation on the updated spatio-temporal context model and the context prior model to obtain a tracking position of the moving target in a next frame of the video captured from the first visual angle.
9. The device according to claim 8, characterized in that the spatio-temporal context model update module comprises:
an intersection detection sub-module, configured to detect whether there is an intersection between the tracking boxes of different moving targets in the current frame;
a shielded-area calculation sub-module, configured to, when there is an intersection between the tracking boxes of different moving targets, calculate the length and the width of the overlapping part between the tracking boxes of the different moving targets, and to calculate, from the length and the width, the shielded area of the moving target that is occluded; and
an occlusion-rate calculation sub-module, configured to obtain a pre-stored tracking-box area of the moving target, and to calculate the occlusion rate of the moving target as the ratio of the shielded area to the tracking-box area.
10. The device according to claim 8, characterized in that the spatio-temporal context model update module calculates the learning rate using the following formula:
(formula not reproduced; it appears as an image in the original publication)
wherein:
e is the base of the natural logarithm;
ΔS is the occlusion rate of the moving target; and
k and a further constant (whose symbol likewise appears only as an image) are constant parameters.
CN201710254328.XA 2017-04-18 2017-04-18 Method and device for tracking moving target in video Active CN107240120B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710254328.XA CN107240120B (en) 2017-04-18 2017-04-18 Method and device for tracking moving target in video


Publications (2)

Publication Number Publication Date
CN107240120A true CN107240120A (en) 2017-10-10
CN107240120B CN107240120B (en) 2019-12-17

Family

ID=59983446

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710254328.XA Active CN107240120B (en) 2017-04-18 2017-04-18 Method and device for tracking moving target in video

Country Status (1)

Country Link
CN (1) CN107240120B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108022254A (en) * 2017-11-09 2018-05-11 华南理工大学 A kind of space-time contextual target tracking based on sign point auxiliary
CN109636828A (en) * 2018-11-20 2019-04-16 北京京东尚科信息技术有限公司 Object tracking methods and device based on video image
CN111223104A (en) * 2018-11-23 2020-06-02 杭州海康威视数字技术股份有限公司 Package extraction and tracking method and device and electronic equipment
CN111241872A (en) * 2018-11-28 2020-06-05 杭州海康威视数字技术股份有限公司 Video image shielding method and device
CN112489086A (en) * 2020-12-11 2021-03-12 北京澎思科技有限公司 Target tracking method, target tracking device, electronic device, and storage medium
CN115712354A (en) * 2022-07-06 2023-02-24 陈伟 Man-machine interaction system based on vision and algorithm

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11889227B2 (en) 2020-10-05 2024-01-30 Samsung Electronics Co., Ltd. Occlusion processing for frame rate conversion using deep learning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105631895A (en) * 2015-12-18 2016-06-01 重庆大学 Temporal-spatial context video target tracking method combining particle filtering
CN105976401A (en) * 2016-05-20 2016-09-28 河北工业职业技术学院 Target tracking method and system based on partitioned multi-example learning algorithm
CN106127798A (en) * 2016-06-13 2016-11-16 重庆大学 Dense space-time contextual target tracking based on adaptive model
CN106485732A (en) * 2016-09-09 2017-03-08 南京航空航天大学 A kind of method for tracking target of video sequence


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108022254A (en) * 2017-11-09 2018-05-11 华南理工大学 A kind of space-time contextual target tracking based on sign point auxiliary
CN108022254B (en) * 2017-11-09 2022-02-15 华南理工大学 Feature point assistance-based space-time context target tracking method
CN109636828A (en) * 2018-11-20 2019-04-16 北京京东尚科信息技术有限公司 Object tracking methods and device based on video image
CN111223104A (en) * 2018-11-23 2020-06-02 杭州海康威视数字技术股份有限公司 Package extraction and tracking method and device and electronic equipment
CN111223104B (en) * 2018-11-23 2023-10-10 杭州海康威视数字技术股份有限公司 Method and device for extracting and tracking package and electronic equipment
CN111241872A (en) * 2018-11-28 2020-06-05 杭州海康威视数字技术股份有限公司 Video image shielding method and device
CN111241872B (en) * 2018-11-28 2023-09-22 杭州海康威视数字技术股份有限公司 Video image shielding method and device
CN112489086A (en) * 2020-12-11 2021-03-12 北京澎思科技有限公司 Target tracking method, target tracking device, electronic device, and storage medium
CN115712354A (en) * 2022-07-06 2023-02-24 陈伟 Man-machine interaction system based on vision and algorithm

Also Published As

Publication number Publication date
CN107240120B (en) 2019-12-17

Similar Documents

Publication Publication Date Title
CN107240120A (en) The tracking and device of moving target in video
CN109903312B (en) Football player running distance statistical method based on video multi-target tracking
CN111462200B (en) Cross-video pedestrian positioning and tracking method, system and equipment
CN106651942B Feature-point-based three-dimensional rotation detection and rotation-axis localization method
CN104778690B Multi-target localization method based on a camera network
US8401304B2 (en) Detecting an object in an image using edge detection and morphological processing
CN109558879A Visual SLAM method and apparatus based on point-line features
CN106780620A Table tennis trajectory recognition, positioning and tracking system and method
Zheng et al. A novel projective-consistent plane based image stitching method
CN110825234A (en) Projection type augmented reality tracking display method and system for industrial scene
WO2019164498A1 (en) Methods, devices and computer program products for global bundle adjustment of 3d images
CN102243765A (en) Multi-camera-based multi-objective positioning tracking method and system
CN106780564B Anti-interference contour tracking method based on a model prior
CN112364865B (en) Method for detecting small moving target in complex scene
WO2022191140A1 (en) 3d position acquisition method and device
Dai et al. Geometry-based object association and consistent labeling in multi-camera surveillance
CN105138979A Method for detecting the head of a moving human body based on stereo vision
CN107274477A Background modeling method based on the surface layer of three-dimensional space
CN117036404A (en) Monocular thermal imaging simultaneous positioning and mapping method and system
Lin et al. Vanishing point-based image transforms for enhancement of probabilistic occupancy map-based people localization
CN115880643A (en) Social distance monitoring method and device based on target detection algorithm
Wang et al. Research on omnidirectional ORB-SLAM2 for mobile robots
CN117593650B (en) Moving point filtering vision SLAM method based on 4D millimeter wave radar and SAM image segmentation
Lu et al. A new real time environment perception method based on visual image for micro UAS flight control
Han et al. 3D Geographic Trajectories’ Generation and Visualization of Dynamic Objects in Surveillance Videos

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant