CN101751677A - Target continuous tracking method based on multi-camera - Google Patents

Target continuous tracking method based on multi-camera

Info

Publication number
CN101751677A
Authority
CN
China
Prior art keywords
target
zone
moving target
camera
moving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN200810240364A
Other languages
Chinese (zh)
Other versions
CN101751677B (en)
Inventor
谭铁牛 (Tan Tieniu)
黄凯奇 (Huang Kaiqi)
蔡莹皓 (Cai Yinghao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Institute of Automation of CAS
Original Assignee
Shenyang Institute of Automation of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Institute of Automation of CAS filed Critical Shenyang Institute of Automation of CAS
Priority to CN200810240364, patent CN101751677B
Publication of CN101751677A
Application granted
Publication of CN101751677B
Legal status: Active
Anticipated expiration

Abstract

The invention discloses a target continuous tracking method based on multiple cameras. Built on multi-channel video, the method comprises target detection, target tracking, target feature extraction, computation of arrival and departure zones, computation of camera zone connectivity, target matching, data association, and continuous target tracking. The zone connectivity between cameras is combined with target matching information to judge whether a target is new or has already appeared under another camera, and continuous tracking of the moving target is realized accordingly. Continuous target tracking across multiple cameras is of great significance for networked, wide-area intelligent monitoring systems: the method is applicable to wide-area intelligent visual surveillance, enlarges the monitored area, and automatically processes and analyzes the mass of data acquired by multiple cameras.

Description

Target continuous tracking method based on multiple cameras
Technical field
The invention belongs to the field of pattern recognition and relates to technologies such as image processing and computer vision, and in particular to processing for continuous target tracking based on multiple cameras.
Background technology
At present, a considerable number of camera surveillance systems have been deployed in China to monitor various environments, areas, and places in real time. These cameras record a large amount of video data, which is usually monitored by dedicated operators. Because the workload is heavy, prolonged watching fatigues the operators and dulls their response to critical incidents, so oversights occur easily. To eliminate these drawbacks, computers are urgently needed that can, without human participation, track and recognize the moving targets and their behavior in the image sequences captured by the cameras, automatically raise early warnings for anomalous events, and help prevent crime.
Intelligent video surveillance is the application of computer vision and pattern recognition to video monitoring: by automatically analyzing and understanding video content, it supplies the key information needed for monitoring and early warning. Current intelligent visual surveillance usually works under a single camera, automatically detecting, tracking, and analyzing the behavior of the targets of interest (moving targets) in the monitored scene to realize real-time monitoring. However, the field of view of a single camera is limited, so wide-area monitoring requires multiple cameras working together. As the number of cameras grows rapidly, traditional passive human monitoring can no longer meet the demands of the task, and research on automatically analyzing the content of video captured by many cameras to realize true wide-area monitoring is still at an early stage.
A key problem in wide-area intelligent video surveillance based on multiple cameras is how to exploit spatially adjacent cameras to enlarge the monitored range and continuously track moving targets that cross the fields of view of several cameras. The fields of view of the cameras may overlap, or there may be non-overlapping regions. Cameras with overlapping fields of view share part of their visible area; cameras with non-overlapping fields of view leave blind areas between them in which a target's motion cannot be observed, making the moving target discontinuous in space and time. In practice, to cover a large scene at reasonable installation cost, cameras with non-overlapping fields of view are widely used in surveillance systems to monitor various environments, areas, and places.
Current research on multi-camera target tracking mainly follows two approaches: methods based on calibration information and methods based on the homography constraint. Methods based on calibration information require calibrated cameras, which is impractical in real deployments because the cameras must be re-calibrated whenever their positions change. Both approaches also require overlapping fields of view between the cameras, which limits their application in wide-area intelligent video surveillance systems.
Wide-area intelligent video surveillance with multiple cameras is built on the basis of single-camera monitoring. Its main techniques are: detecting and tracking the targets of interest (moving targets) in each video channel, extracting features from each moving target, performing target matching with the help of the connectivity between camera zones, and realizing continuous tracking of targets across the multi-channel cameras through data association.
Summary of the invention
To overcome the limitation that the prior art cannot be directly applied to wide-area monitoring, the invention discloses a target continuous tracking method based on multiple cameras. Specifically, when a moving target leaves the field of view of one camera and enters the field of view of another, the method recognizes the moving target, gives it a unique label, and hands over the detection and tracking task, thereby realizing continuous tracking of the moving target in a wide-area scene.
To this end, the target continuous tracking method based on multiple cameras provided by the invention operates on multi-channel video and comprises target detection, target tracking, target feature extraction, computation of arrival and departure zones, computation of camera zone connectivity, target matching, data association, and continuous target tracking, with the following steps:
Step S1: perform target detection on the image sequence collected by each camera channel to obtain the moving targets;
Step S2: track each moving target with a Kalman filter to obtain its complete trajectory under each camera channel;
Step S3: compute the color histogram of each moving target's image region for color feature extraction and its mean contour for shape feature extraction, obtaining a robust feature description of the target;
Step S4: cluster the start and end positions of the moving-target trajectories with the k-means technique to obtain the arrival and departure zones under each camera channel;
Step S5: determine, from experience or from video data, the connectivity between each camera's arrival and departure zones and those of the other cameras;
Step S6: according to the connectivity between camera zones, match moving targets across different cameras to obtain the similarity between them;
Step S7: from the similarity and timing information between moving targets, compute the correspondence between a moving target departing under camera i and a moving target arriving under camera j, realizing data association;
Step S8: assign a unique label to each moving target crossing several cameras, realizing continuous tracking of moving targets under multiple cameras.
Beneficial effects of the invention: by combining moving-target detection, tracking, and matching with camera zone connectivity, the invention realizes continuous tracking of moving targets in a wide-area scene, solving the problem that a single camera's limited field of view cannot monitor a large scene. Whether the cameras' fields of view overlap or not, the method can label and track the targets of interest (moving targets) in the scene. Continuous target tracking based on multiple cameras is of great significance for networked, wide-area intelligent monitoring systems: it serves wide-area intelligent visual surveillance, enlarges the monitored region, and automatically processes and analyzes the mass of data acquired by multiple cameras.
Description of drawings
Fig. 1 is a flowchart of the target continuous tracking method based on multiple cameras.
Fig. 2 shows an example of the positional relationship of multiple cameras.
Fig. 3 is a flowchart of moving-target detection.
Fig. 4(a) and Fig. 4(b) show example results of moving-target detection.
Fig. 5 shows the normalized foreground regions corresponding to the moving targets.
Fig. 6 shows the moving-target tracking flow.
Fig. 7(a) to Fig. 7(d) show example results of moving-target tracking.
Fig. 8 is a flowchart of target feature extraction.
Fig. 9 is a flowchart of computing the arrival and departure zones.
Fig. 10(a) and Fig. 10(b) are schematic diagrams of a camera's arrival and departure zones.
Fig. 11 shows the connectivity between the cameras' arrival and departure zones.
Fig. 12 is a flowchart of target matching and data association.
Fig. 13(a) to Fig. 13(c) are schematic diagrams of continuous tracking of a moving object.
Fig. 14 is a flowchart of continuous target tracking.
Embodiment
The detailed issues involved in the technical solution of the invention are described below with reference to the accompanying drawings. It should be noted that the described embodiments are intended only to aid understanding of the invention and do not limit it in any way.
The minimal hardware configuration for implementing the method is: a computer with a Pentium 4 3.0 GHz CPU and 512 MB of memory; surveillance cameras with a resolution of at least 320 x 240; and a multi-channel video capture card running at 25 frames per second. On hardware of this level, an implementation programmed in C++ achieves real-time processing; other configurations are not described further.
Fig. 1 gives the flowchart of the target continuous tracking method based on multiple cameras, involving camera 1, camera 2, ..., camera N. First, target detection is performed on the targets of interest (moving targets) in cameras 1 to N; then the moving targets in each video channel are tracked to obtain each target's complete trajectory in each scene. Next, features are extracted from each moving target, and the arrival and departure zones of cameras 1 to N are computed. The feature extraction results, combined with the connectivity between the zones of cameras 1 to N, realize target matching; finally, data association realizes continuous tracking of targets across the multi-channel cameras.
Fig. 2 gives an example of the positional relationship of multiple cameras: two outdoor cameras (camera 1 and camera 2) and one indoor camera (camera 3). The dashed lines mark each camera's field of view; in this example the fields of view of cameras 1, 2, and 3 do not overlap. Note that Fig. 2 is only one example of a multi-camera layout: the cameras' fields of view may be non-overlapping or overlapping.
Fig. 3 gives the flowchart of moving-target detection, whose steps are as follows:
Step S11: build a background model from the image sequence collected by each camera channel;
Step S12: extract the changed regions of the image sequence from the background model to obtain the moving targets;
Step S13: apply morphological operations and connected-component analysis to the detected targets to obtain an accurate segmentation of the moving targets.
In surveillance, moving objects are the focus of monitoring, and the purpose of moving-target detection is to extract the changed regions of the image sequence from the background image. The Gaussian mixture model method adapts well to the background, tolerating changes in object brightness, slight background motion, and slowly moving targets; the invention therefore uses the Gaussian mixture model method to build the background model of the image sequence. The binary foreground image produced by the mixture model contains some noise points and holes; the invention removes the noise with median filtering and removes the holes in the foreground image with morphological erosion and dilation.
The extracted foreground of the image sequence is 320 x 240. A foreground region may contain several moving targets; connected-component analysis yields the foreground contours and an accurate segmentation of the moving targets, and the foreground regions and image regions corresponding to all moving targets are normalized to the same size.
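As a concrete illustration of the background modelling and foreground extraction described above, the sketch below keeps a single running Gaussian per pixel (the patent uses a full mixture of Gaussians per pixel, and implemented the method in C++) and flags pixels more than k standard deviations from the background mean as foreground. The learning rate alpha and the factor k are assumed values, not taken from the patent.

```python
import numpy as np

def update_background(mean, var, frame, alpha=0.05, k=2.5):
    """One step of a simplified per-pixel Gaussian background model.

    mean/var hold the background model; frame is the new grayscale image.
    Returns the updated model and a binary foreground mask. alpha and k
    are illustrative parameters (assumptions, not from the patent).
    """
    diff = np.abs(frame - mean)
    foreground = diff > k * np.sqrt(var)
    # Update the model only where the pixel matched the background,
    # so moving objects are not absorbed into the background.
    bg = ~foreground
    mean = np.where(bg, (1 - alpha) * mean + alpha * frame, mean)
    var = np.where(bg, (1 - alpha) * var + alpha * diff ** 2, var)
    return mean, var, foreground.astype(np.uint8)
```

In a full pipeline the mask would then be cleaned with median filtering, morphological erosion/dilation, and connected-component analysis, as the patent describes.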
Fig. 4(a) and Fig. 4(b) give example results of moving-target detection: Fig. 4(a) is an original image from the sequence, and Fig. 4(b) is the binary foreground image obtained after the Gaussian mixture model, median filtering, and morphological operations. Fig. 5 shows the normalized foreground regions corresponding to the moving targets.
Fig. 6 gives the moving-target tracking flow:
Step S21: from the moving-target detection result, compute the centroid position of the image region corresponding to each moving target in the previous frame; denote such a target by A.
Step S22: predict the centroid position of moving target A in the current frame from its position in the previous frame using a Kalman filter;
Step S23: compare the predicted value with the observed centroid position of each moving target B in the current frame;
Step S24: compute the Euclidean distance dist(A, B) between moving target A and moving target B.
Step S25: if this is the first comparison for moving target A, save dist(A, B) as the minimum mindist together with the corresponding target B; otherwise save the smaller of dist(A, B) and mindist as mindist, together with the corresponding target B.
Step S26: if mindist is below a threshold T, the target in the current frame corresponding to moving target A in the previous frame has been found; connect the target's centroid positions in the previous and current frames to obtain the target trajectory.
Step S27: if mindist is above the threshold T, then moving target B is a newly appearing target, or moving target A has disappeared from the current scene. In the embodiment the threshold T is 5 pixels; other cases are not described further.
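The loop of steps S21 to S27 can be sketched as a constant-velocity Kalman filter plus nearest-centroid association. The state model and the noise covariances Q and R below are assumed choices; the 5-pixel gate T is the value given in the embodiment.

```python
import numpy as np

class CentroidKalman:
    """Minimal constant-velocity Kalman filter for one target centroid."""
    def __init__(self, x, y):
        self.s = np.array([x, y, 0.0, 0.0])   # state: position and velocity
        self.P = np.eye(4) * 10.0             # state covariance (assumed init)
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = 1.0     # constant-velocity transition
        self.H = np.eye(2, 4)                 # we observe position only
        self.Q = np.eye(4) * 0.01             # process noise (assumed)
        self.R = np.eye(2) * 1.0              # measurement noise (assumed)

    def predict(self):
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.s[:2]

    def update(self, z):
        y = np.asarray(z, float) - self.H @ self.s
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.s = self.s + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

def associate(predicted, detections, T=5.0):
    """Index of the nearest detected centroid within T pixels, else None
    (a new target appeared, or the tracked target disappeared)."""
    if not detections:
        return None
    d = [np.linalg.norm(predicted - np.asarray(c, float)) for c in detections]
    i = int(np.argmin(d))
    return i if d[i] < T else None
```

One filter instance per tracked target reproduces the per-target prediction of step S22, and `associate` implements the minimum-distance gating of steps S24 to S27.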
Fig. 7(a) to Fig. 7(d) give example results of moving-target tracking, showing frames 132, 211, 247, and 316 of the image sequence respectively. Each moving target in the figures is labelled; all four frames show labels 010 and 011, and identical labels indicate the same moving target.
Fig. 8 shows the flowchart of target feature extraction, whose steps are as follows:
Step S31: normalize the image regions and the corresponding foreground regions of all moving targets to the same size;
Step S32: compute the color histogram of each moving target's image region for color feature extraction;
Step S33: compute the mean contour of each moving target's image region for shape feature extraction.
Because the cameras are far from the moving targets and the resolution is low, face and gait features cannot be extracted effectively. The invention instead extracts appearance features such as color features, which portray a moving target well enough to make targets discriminable. Color features may include color templates, histograms, moments, and dominant colors; shape features may include the mean contour and shape context. Note that color and shape features are easily affected by changes in viewpoint and illumination, so the extracted features should be robust to moderate illumination and viewpoint changes. Color constancy or color normalization can be applied to a moving target as pre-processing before extracting its color features; likewise, pose normalization can rectify the target's shape to a common pose before extracting its shape features.
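A minimal version of the color-histogram feature of step S32 might look as follows; the histogram-intersection similarity is one common choice, not a measure the patent fixes, and the bin count is an assumption.

```python
import numpy as np

def color_histogram(region, bins=8):
    """Normalized per-channel color histogram of a target's image region.

    region is an (H, W, 3) uint8 array, already cropped to the target and
    resized to a common size (step S31). The three per-channel histograms
    are concatenated and L1-normalized so regions remain comparable.
    """
    hists = [np.histogram(region[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def histogram_similarity(h1, h2):
    """Histogram intersection: 1.0 for identical normalized histograms."""
    return float(np.minimum(h1, h2).sum())
```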
Fig. 9 shows the flowchart for computing the arrival and departure zones. Step S41: moving-target tracking yields each moving target's complete trajectory in each scene; record the start and end positions of all trajectories over a period of time. Step S42: cluster the coordinates of the start and end positions of all moving targets over that period with the k-means technique, obtaining the arrival and departure zones in each camera channel's scene.
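The clustering of step S42 can be sketched with plain Lloyd's k-means over the recorded start and end coordinates; the number of zones k and the random initialization are assumptions, since the patent only names the k-means technique.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's k-means, clustering trajectory start/end points
    into candidate arrival and departure zones (step S42 sketch)."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, float)
    centers = pts[rng.choice(len(pts), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([pts[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels
```

The resulting cluster centers correspond to zone locations; a zone's extent could then be estimated from the spread of its member points (e.g. the elliptical regions of Fig. 10).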
Fig. 10(a) and Fig. 10(b) show schematic diagrams of the arrival and departure zones in camera 2 and camera 3. The elliptical regions mark the arrival and departure zones of moving targets, and "+" marks the start or end positions of trajectories. Note that the same zone can serve both as an arrival zone and as a departure zone: if a moving target is about to leave a camera's field of view, the region around its trajectory end is a departure zone; if a moving target appears in a camera's field of view for the first time, the region around its trajectory start is an arrival zone. The arrival and departure zones can also be determined from experience.
The computation of connectivity between camera zones is as follows: the connectivity between the cameras' arrival and departure zones is determined either from experience or from video data. Fig. 11 shows the connectivity between the cameras' arrival and departure zones. In the figure, the arrival and departure zones of camera 1 are zones 1, 2, 3, and 4; those of camera 2 are zones 5, 6, and 7; and those of camera 3 are zones 8, 9, and 10. For example, the line from zone 6 to zone 10 indicates that a moving target departing from zone 6 may appear in zone 10 after a period of time: after the target disappears from the field of view of camera 2 it enters a blind area, and may appear in the field of view of camera 3 some time later. The connectivity between arrival and departure zones can be specified from experience, or learned automatically from video data and target matching information. Related techniques for learning the connectivity automatically from video data include "Inference of non-overlapping camera network topology by measuring statistical dependence", Proceedings of the International Conference on Computer Vision, 2005, pp. 1842-1849, and "Bridging the Gaps between Cameras", Proceedings of Computer Vision and Pattern Recognition, 2004, pp. 205-210. Note that the connectivity may change over time, and that it is not bidirectional. During working hours, for example, moving targets seldom pass from zone 10 to zone 6, so the connection from zone 10 to zone 6 does not exist even though the connection from zone 6 to zone 10 does; off hours, most moving targets passing through zone 10 appear in zone 6 afterwards. The connectivity between arrival and departure zones can be updated online by the related techniques. The line from zone 10 to zone 6 indicates that a target departing from zone 10 may appear in zone 6 after a period of time; similarly for the lines from zone 2 to zone 5, from zone 5 to zone 2, from zone 2 to zone 10, and from zone 10 to zone 2.
Fig. 12 shows the flowchart of target matching and data association. The target matching steps are as follows:
Step S61: check whether connectivity exists between a departure zone under camera i and an arrival zone under camera j;
Step S62: if such connectivity exists, compute the similarity between every moving target departing from that zone under camera i and every moving target arriving in that zone under camera j. Data association then uses the similarity and timing information between moving targets to compute the correspondence between targets departing under camera i and targets arriving under camera j.
The target matching process computes the similarity between moving-target features, and the data association process derives the correspondence of moving targets across cameras. Once the connectivity between the cameras' arrival and departure zones has been computed, a moving object need not be matched against every moving object appearing under the other cameras. For example, in Fig. 11, a moving target leaving zone 6 cannot appear in zone 2 after a while, so targets departing from zone 6 need not be matched against targets arriving in zone 2; only targets appearing in and departing from connected zones need be matched. Thus for each moving target x1 departing from zone 6, we only need to match it against the moving targets x2 arriving in zone 10 within a period of time, i.e. compute the similarity Sim(x1, x2). Exploiting the connectivity between departure and arrival zones reduces the number of target matches, improving the algorithm's efficiency and reducing the chance of mismatches. The similarity of moving targets x1 and x2 depends on their feature similarity Sim_app(x1, x2) and on x1's departure time t1 and x2's arrival time t2; the computation of Sim_app(x1, x2) depends on the form of the extracted features. If the moment t1 at which target x1 leaves one scene is later than the moment t2 at which it arrives in another, the two scenes share a common region, i.e. the fields of view of the two camera channels overlap. Because the cameras are adjacent, if target x2's arrival time t2 is much later than target x1's departure time t1, i.e. t2 - t1 exceeds a threshold T1, then x1 and x2 cannot correspond to the same moving target; in the embodiment T1 is 2 minutes. In general, if x1 and x2 correspond to the same moving target, t1 and t2 satisfy a temporal relationship, e.g. t2 - t1 obeys a single-Gaussian or multi-Gaussian distribution. Given x1's departure time t1 and x2's arrival time t2, the probability that x1 and x2 correspond to the same moving target, measured from the temporal relationship, is Sim_time(x1, x2, t2 - t1). When computing the similarity of x1 and x2, the feature similarity Sim_app(x1, x2) and the temporal measure Sim_time(x1, x2, t2 - t1) can generally be assumed independent, so the similarity of x1 and x2 can be expressed as:
Sim(x1, x2) = Sim_app(x1, x2) * Sim_time(x1, x2, t2 - t1)
Once the target matching result is obtained, data association can be performed, i.e. judging whether x1 and x2 correspond to the same moving object. For each moving target x1 departing from a departure zone, compute its similarity with every moving target x2 arriving in a connected arrival zone that satisfies the temporal relationship, and record the maximum value maxsim and the corresponding target x2. If the maximum similarity maxsim exceeds a threshold T2, then x2 is taken to be x1's corresponding target under the other camera.
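The similarity product and the maximum-similarity association rule above can be sketched as follows. The single-Gaussian transit-time parameters (mean_transit, std) and the threshold T2 are assumed values; T1 is the 2-minute cutoff given in the embodiment, and the appearance similarity is passed in as a function since its form depends on the extracted features.

```python
import math

def sim_time(t1, t2, mean_transit=60.0, std=20.0, T1=120.0):
    """Temporal term Sim_time: a single-Gaussian model of the transit
    time t2 - t1 between a departure zone and an arrival zone.
    mean_transit/std are assumed; pairs with t2 - t1 > T1 (2 minutes in
    the embodiment) are ruled out entirely."""
    dt = t2 - t1
    if dt > T1:
        return 0.0        # too far apart in time to be the same target
    return math.exp(-0.5 * ((dt - mean_transit) / std) ** 2)

def best_match(x1_feat, t1, arrivals, sim_app, T2=0.5):
    """Data association for one departed target: pick the arrival with
    the highest Sim = Sim_app * Sim_time, accepted only above threshold
    T2. arrivals is a list of (feature, arrival_time) pairs."""
    best, best_sim = None, 0.0
    for i, (feat, t2) in enumerate(arrivals):
        s = sim_app(x1_feat, feat) * sim_time(t1, t2)
        if s > best_sim:
            best, best_sim = i, s
    return (best, best_sim) if best_sim > T2 else (None, best_sim)
```

Multiplying the two terms implements the independence assumption stated above: a candidate must be plausible both in appearance and in timing to score highly.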
Fig. 13(a) to Fig. 13(c) show schematic diagrams of continuous tracking of a moving object. From the data association step, the correspondence of moving targets across the cameras is obtained. Here moving target 001 crosses the fields of view of camera 1 and camera 2 and finally appears in the field of view of camera 3; identical labels indicate the same moving target.
Fig. 14 shows the flowchart of continuous target tracking, whose steps are as follows:
Step S81: assign a unique label, according to the correspondences between moving targets, to each moving target that crosses the fields of view of several cameras;
Step S82: hand over the moving-target detection and tracking task.
In Fig. 13, when moving target 001 crosses out of the field of view of camera 1 and arrives in the field of view of camera 2, the target is given a unique label and the detection and tracking task is handed over from camera 1 to camera 2, i.e. the detection and tracking module under camera 2 takes over. Likewise, when the moving target arrives in the field of view of camera 3, the detection and tracking module of camera 3 takes over. Label 010 is moving target 002 under the first camera; 011 is moving target 001 under the first camera; 035 is moving target 001 under the second camera; 085 is moving target 001 under the third camera; and 081 is moving target 003 under the third camera.
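The unique labelling and handover of steps S81 and S82 amount to maintaining a map from per-camera track IDs to one global ID per physical target. The sketch below reuses the local labels quoted from Fig. 13 (011, 035, 085 for target 001); the data structure itself is an assumed implementation, not one the patent specifies.

```python
class GlobalLabeler:
    """Keep one global label per physical target as it hands over
    between cameras (steps S81-S82 sketch)."""
    def __init__(self):
        self.next_id = 1
        self.global_id = {}   # (camera, local_track_id) -> global id

    def new_target(self, camera, local_id):
        """A target with no matching predecessor: assign a fresh global id."""
        gid = self.next_id
        self.next_id += 1
        self.global_id[(camera, local_id)] = gid
        return gid

    def hand_over(self, from_cam, from_id, to_cam, to_id):
        """Data association decided these two tracks are the same target:
        propagate the global id to the new camera's local track."""
        gid = self.global_id[(from_cam, from_id)]
        self.global_id[(to_cam, to_id)] = gid
        return gid
```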
The above are only embodiments of the invention, but the scope of protection of the invention is not limited thereto. Any transformation or replacement that a person skilled in the art could readily conceive within the technical scope disclosed by the invention should be covered by the invention; therefore, the scope of protection of the invention should be determined by the scope of the appended claims.

Claims (9)

1. A target continuous tracking method based on multiple cameras, operating on multi-channel video, characterized in that it comprises target detection, target tracking, target feature extraction, computation of arrival and departure zones, computation of camera zone connectivity, target matching, data association, and continuous target tracking, with the following steps:
Step S1: perform target detection on the image sequence collected by each camera channel to obtain the moving targets;
Step S2: track each moving target with a Kalman filter to obtain its complete trajectory under each camera channel;
Step S3: compute the color histogram of each moving target's image region for color feature extraction and its mean contour for shape feature extraction, obtaining a robust feature description of the target;
Step S4: cluster the start and end positions of the moving-target trajectories with the k-means technique to obtain the arrival and departure zones under each camera channel;
Step S5: determine, from experience or from video data, the connectivity between each camera's arrival and departure zones and those of the other cameras;
Step S6: according to the connectivity between camera zones, match moving targets across different cameras to obtain the similarity between them;
Step S7: from the similarity and timing information between moving targets, compute the correspondence between a moving target departing under camera i and a moving target arriving under camera j, realizing data association;
Step S8: assign a unique label to each moving target crossing several cameras, realizing continuous tracking of moving targets under multiple cameras.
2. The target continuous tracking method based on multiple cameras according to claim 1, characterized in that the moving target detection step is as follows:
Step S11: build a background model from the image sequence captured by each camera;
Step S12: extract the changed regions of the image sequence against the background model, so as to obtain the moving targets;
Step S13: apply morphological operations and connected-component analysis to the detected moving targets, so as to obtain an accurate segmentation of the moving targets.
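The detection steps of claim 2 can be sketched as follows. This is a minimal illustration only, not the patent's implementation: it uses a median background model for step S11, a simple intensity threshold (the value 30 is an assumption) for step S12, and a BFS flood-fill in place of full morphological processing for step S13.

```python
import numpy as np
from collections import deque

def detect_moving_targets(frames, frame, thresh=30):
    """Steps S11-S13 (sketch): background model, change detection,
    and connected-component analysis on the foreground mask."""
    # S11: build a background model from the image sequence
    background = np.median(np.stack(frames), axis=0)
    # S12: extract changed regions against the background model
    mask = np.abs(frame.astype(float) - background) > thresh
    # S13 (simplified): 4-connected component analysis via BFS;
    # morphological cleanup is omitted for brevity
    labels = np.zeros(mask.shape, dtype=int)
    blobs, current = [], 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                current += 1
                labels[i, j] = current
                q, pixels = deque([(i, j)]), []
                while q:
                    y, x = q.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            q.append((ny, nx))
                blobs.append(pixels)
    return blobs
```

On a synthetic sequence whose background is static, a bright region entering one frame is returned as a single blob of pixels, from which a centroid can be computed for the tracking step.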
3. The target continuous tracking method based on multiple cameras according to claim 1, characterized in that the target tracking step is as follows:
Step S21: from the moving target detection result, compute the centroid of the image region of moving target A in the previous frame;
Step S22: predict the centroid of moving target A in the current frame using Kalman filtering;
Step S23: for each moving target B in the current frame, compute the centroid of its image region;
Step S24: compute the Euclidean distance dist(A, B) between moving target A and moving target B;
Step S25: if this is the first comparison for moving target A, save dist(A, B) as mindist together with the corresponding moving target B; otherwise, save the smaller of dist(A, B) and the current mindist as mindist, together with the corresponding moving target B;
Step S26: if mindist is less than a threshold T, connect the centroids of the moving target across the image sequence, so as to obtain the complete trajectory of the moving target in each video stream;
Step S27: if mindist is greater than the threshold T, then B is a newly appearing moving target, or A has disappeared from the current scene.
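Steps S21 through S27 can be sketched with a constant-velocity Kalman filter over the target centroid plus nearest-neighbor gating. The noise covariances and the threshold default are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

class CentroidKalman:
    """Constant-velocity Kalman filter over a centroid (x, y).
    State is [x, y, vx, vy]; noise magnitudes are assumed values."""
    def __init__(self, xy):
        self.x = np.array([xy[0], xy[1], 0.0, 0.0])
        self.P = np.eye(4) * 10.0
        self.F = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                           [0, 0, 1, 0], [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
        self.Q = np.eye(4) * 0.01
        self.R = np.eye(2) * 1.0

    def predict(self):
        # S22: predicted centroid of target A in the current frame
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        # fold the matched detection back into the state estimate
        y = np.asarray(z, float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

def associate(predicted, detections, T=5.0):
    """S23-S27: index of the detection nearest the prediction, or None
    when mindist exceeds T (new target appeared, or this one vanished)."""
    dists = [np.linalg.norm(np.asarray(d, float) - predicted)
             for d in detections]
    mindist = min(dists)
    return dists.index(mindist) if mindist < T else None
```

Feeding the filter a few frames of roughly linear motion lets the prediction land near the next true centroid, so the threshold test of steps S26/S27 separates continuation of a track from appearance/disappearance.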
4. The target continuous tracking method based on multiple cameras according to claim 1, characterized in that the target feature extraction step is as follows:
Step S31: normalize the image regions and corresponding foreground regions of all moving targets to the same size;
Step S32: compute the color histogram of each moving target's image region to extract color features;
Step S33: compute the mean contour of each moving target's image region to extract shape features.
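The feature extraction of claim 4 can be sketched as follows, assuming patches already normalized to one size (step S31). The per-channel histogram for step S32 and histogram intersection as the similarity are standard choices; treating the mean contour of step S33 as an average of binary foreground masks is a simplification of mine, not the patent's exact descriptor.

```python
import numpy as np

def color_histogram(patch, bins=8):
    """S32: per-channel color histogram of a normalized target patch,
    concatenated across channels and L1-normalized."""
    hist = np.concatenate([
        np.histogram(patch[..., c], bins=bins, range=(0, 256))[0]
        for c in range(patch.shape[-1])
    ]).astype(float)
    return hist / hist.sum()

def mean_contour(masks):
    """S33 (simplified): average the same-size binary foreground masks
    of a target over time as a crude shape descriptor."""
    return np.mean(np.stack(masks).astype(float), axis=0)

def histogram_intersection(h1, h2):
    """Similarity of two L1-normalized histograms; 1.0 means identical."""
    return float(np.minimum(h1, h2).sum())
```

These descriptors feed the cross-camera similarity computation of claims 7 and 8.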
5. The target continuous tracking method based on multiple cameras according to claim 1, characterized in that the step of computing the arrival regions and departure regions is as follows:
Step S41: extract the start and end points of each moving-target trajectory obtained by target tracking;
Step S42: cluster the start and end points of all moving targets over a period of time using the k-means technique, so as to obtain the arrival region and departure region under each camera.
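Steps S41 and S42 can be sketched with a minimal k-means over trajectory endpoints; this illustrative routine (iteration count and seeding are assumptions) recovers arrival/departure regions as the cluster centers.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means used to cluster trajectory start/end points
    (steps S41-S42) into arrival and departure regions."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, float)
    centers = pts[rng.choice(len(pts), size=k, replace=False)]
    for _ in range(iters):
        # assign each endpoint to its nearest region center
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned endpoints
        new = np.array([pts[labels == j].mean(axis=0)
                        if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels
```

With endpoints gathered over a period of time, each resulting center approximates one entry/exit region of the camera view.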
6. The target continuous tracking method based on multiple cameras according to claim 1, characterized in that the connectivity between camera regions is computed by determining, either from experience or from video data, the connectivity between the arrival regions and departure regions of the cameras.
7. The target continuous tracking method based on multiple cameras according to claim 1, characterized in that the target matching step is as follows:
Step S61: check whether connectivity exists between a departure region under camera i and an arrival region under camera j;
Step S62: if such connectivity exists, compute the similarity between all moving targets departing from that region of camera i and all moving targets arriving in that region of camera j.
8. The target continuous tracking method based on multiple cameras according to claim 1, characterized in that the data association computes, from the similarity and timing information between moving targets, the correspondence between moving targets departing from camera i and moving targets subsequently arriving at camera j.
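The data association of claim 8 can be sketched as a greedy matching that combines appearance similarity with a transit-time window. The thresholds `min_sim` and `max_transit` and the greedy strategy are illustrative assumptions; the patent does not fix these values.

```python
import numpy as np

def associate_across_cameras(departed, arrived,
                             min_sim=0.5, max_transit=30.0):
    """Greedy data association (sketch of claim 8): match each target
    that departed from camera i with the most similar target that later
    arrived at camera j within a plausible transit-time window.
    departed/arrived: lists of (target_id, feature_vector, timestamp),
    where feature vectors are L1-normalized histograms."""
    pairs, used = [], set()
    for dep_id, dep_feat, dep_t in departed:
        best, best_sim = None, min_sim
        for arr_id, arr_feat, arr_t in arrived:
            # arrival must follow departure, within the time window
            if arr_id in used or not (0 < arr_t - dep_t <= max_transit):
                continue
            sim = float(np.minimum(dep_feat, arr_feat).sum())
            if sim > best_sim:
                best, best_sim = arr_id, sim
        if best is not None:
            pairs.append((dep_id, best))
            used.add(best)
    return pairs
```

Each returned pair links a departing track to an arriving one, which is what the unique-identifier step of claim 9 consumes.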
9. The target continuous tracking method based on multiple cameras according to claim 1, characterized in that the continuous target tracking step is as follows:
Step S81: assign a unique identifier to each moving target passing through the fields of view of multiple cameras, according to the correspondences between moving targets;
Step S82: hand over the moving target detection and tracking task between cameras.
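Steps S81 and S82 can be sketched as a registry that maps camera-local track labels to one global identifier, so that a matched target keeps its identity when the tracking task is handed to the next camera. The class and method names here are illustrative, not from the patent.

```python
class GlobalIdRegistry:
    """Sketch of steps S81/S82: map each (camera, local label) pair to
    one global identifier so a target keeps its identity across cameras."""
    def __init__(self):
        self._next = 1
        self._global = {}  # (camera, local_label) -> global id

    def new_target(self, camera, local_label):
        """A target with no cross-camera correspondence gets a fresh id."""
        gid = self._next
        self._next += 1
        self._global[(camera, local_label)] = gid
        return gid

    def hand_over(self, from_cam, from_label, to_cam, to_label):
        """Data association matched the tracks: reuse the existing id,
        handing the detection/tracking task to the next camera."""
        gid = self._global[(from_cam, from_label)]
        self._global[(to_cam, to_label)] = gid
        return gid
```

In the Figure 13 scenario, moving target 001 carries labels 011, 035 and 085 under cameras 1, 2 and 3 respectively, yet resolves to a single global identifier.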
CN 200810240364 2008-12-17 2008-12-17 Target continuous tracking method based on multi-camera Active CN101751677B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200810240364 CN101751677B (en) 2008-12-17 2008-12-17 Target continuous tracking method based on multi-camera


Publications (2)

Publication Number Publication Date
CN101751677A true CN101751677A (en) 2010-06-23
CN101751677B CN101751677B (en) 2013-01-02

Family

ID=42478620

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200810240364 Active CN101751677B (en) 2008-12-17 2008-12-17 Target continuous tracking method based on multi-camera

Country Status (1)

Country Link
CN (1) CN101751677B (en)


Cited By (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102176246A (en) * 2011-01-30 2011-09-07 西安理工大学 Camera relay relationship determining method of multi-camera target relay tracking system
CN102736079A (en) * 2012-07-10 2012-10-17 中国船舶重工集团公司第七二四研究所 Realization method for tracing boats at state of passing through bridges by using boat traffic navigation system
CN104284145A (en) * 2013-07-11 2015-01-14 松下电器产业株式会社 Tracking assistance device, a tracking assistance system and a tracking assistance method
CN104284145B (en) * 2013-07-11 2017-09-26 松下电器产业株式会社 Track servicing unit, tracking accessory system and tracking householder method
CN104038729A (en) * 2014-05-05 2014-09-10 重庆大学 Cascade-type multi-camera relay tracing method and system
CN104079872A (en) * 2014-05-16 2014-10-01 大连理工大学 Video image processing and human-computer interaction method based on content
CN104008371A (en) * 2014-05-22 2014-08-27 南京邮电大学 Regional suspicious target tracking and recognizing method based on multiple cameras
CN104008371B (en) * 2014-05-22 2017-02-15 南京邮电大学 Regional suspicious target tracking and recognizing method based on multiple cameras
CN104123732A (en) * 2014-07-14 2014-10-29 中国科学院信息工程研究所 Online target tracking method and system based on multiple cameras
CN104299236A (en) * 2014-10-20 2015-01-21 中国科学技术大学先进技术研究院 Target locating method based on scene calibration and interpolation combination
CN104299236B (en) * 2014-10-20 2018-02-27 中国科学技术大学先进技术研究院 A kind of object localization method based on scene calibration combined interpolation
CN104376577A (en) * 2014-10-21 2015-02-25 南京邮电大学 Multi-camera multi-target tracking algorithm based on particle filtering
CN104463900A (en) * 2014-12-31 2015-03-25 天津汉光祥云信息科技有限公司 Method for automatically tracking target among multiple cameras
CN104539909A (en) * 2015-01-15 2015-04-22 安徽大学 Video monitoring method and video monitoring server
CN104881637A (en) * 2015-05-09 2015-09-02 广东顺德中山大学卡内基梅隆大学国际联合研究院 Multimode information system based on sensing information and target tracking and fusion method thereof
CN104881637B (en) * 2015-05-09 2018-06-19 广东顺德中山大学卡内基梅隆大学国际联合研究院 Multimodal information system and its fusion method based on heat transfer agent and target tracking
CN105184829B (en) * 2015-08-28 2017-12-12 华中科技大学 A kind of tight quarters target detection and high-precision method for positioning mass center
CN105184829A (en) * 2015-08-28 2015-12-23 华中科技大学 Closely spatial object detection and high-precision centroid location method
CN106934326A (en) * 2015-12-29 2017-07-07 同方威视技术股份有限公司 Method, system and equipment for safety inspection
CN105741325B (en) * 2016-03-15 2019-09-03 上海电气集团股份有限公司 A kind of method and movable object tracking equipment of tracked mobile target
CN105741325A (en) * 2016-03-15 2016-07-06 上海电气集团股份有限公司 Moving target tracking method and moving target tracking equipment
CN107493441B (en) * 2016-06-12 2020-03-06 杭州海康威视数字技术股份有限公司 Abstract video generation method and device
CN107493441A (en) * 2016-06-12 2017-12-19 杭州海康威视数字技术股份有限公司 A kind of summarized radio generation method and device
US11025877B2 (en) 2016-08-09 2021-06-01 Sony Corporation Multi-camera system, camera, processing method of camera, confirmation apparatus, and processing method of confirmation apparatus
CN109565563A (en) * 2016-08-09 2019-04-02 索尼公司 Multicamera system, camera, the processing method of camera, confirmation device and the processing method for confirming device
CN106778746A (en) * 2016-12-23 2017-05-31 成都赫尔墨斯科技有限公司 A kind of anti-unmanned plane method of multiple target
CN107341819B (en) * 2017-05-09 2020-04-28 深圳市速腾聚创科技有限公司 Target tracking method and storage medium
CN107065586A (en) * 2017-05-23 2017-08-18 中国科学院自动化研究所 Interactive intelligent home services system and method
CN107065586B (en) * 2017-05-23 2020-02-07 中国科学院自动化研究所 Interactive intelligent home service system and method
CN109427074A (en) * 2017-08-31 2019-03-05 深圳富泰宏精密工业有限公司 Image analysis system and method
CN108399411A (en) * 2018-02-26 2018-08-14 北京三快在线科技有限公司 A kind of multi-cam recognition methods and device
CN110047097A (en) * 2019-03-27 2019-07-23 深圳职业技术学院 A kind of target Continuous tracking of multiple-camera collaboration
CN110047097B (en) * 2019-03-27 2019-11-29 深圳职业技术学院 A kind of target Continuous tracking of multiple-camera collaboration
CN110110670A (en) * 2019-05-09 2019-08-09 杭州电子科技大学 Data correlation method in pedestrian tracking based on Wasserstein measurement
CN110322471A (en) * 2019-07-18 2019-10-11 华中科技大学 Method, apparatus, equipment and the storage medium of panoramic video concentration
CN110322471B (en) * 2019-07-18 2021-02-19 华中科技大学 Method, device and equipment for concentrating panoramic video and storage medium
CN112200841A (en) * 2020-09-30 2021-01-08 杭州海宴科技有限公司 Cross-domain multi-camera tracking method and device based on pedestrian posture

Also Published As

Publication number Publication date
CN101751677B (en) 2013-01-02

Similar Documents

Publication Publication Date Title
CN101751677B (en) Target continuous tracking method based on multi-camera
Kim et al. Fast and robust algorithm of tracking multiple moving objects for intelligent video surveillance systems
CN104123544A (en) Video analysis based abnormal behavior detection method and system
CN101389004B (en) Moving target classification method based on on-line study
CN102436662B (en) Human body target tracking method in nonoverlapping vision field multi-camera network
CN108055501A (en) A kind of target detection and the video monitoring system and method for tracking
CN101635835A (en) Intelligent video monitoring method and system thereof
CN104200466B (en) A kind of method for early warning and video camera
CN103164858A (en) Adhered crowd segmenting and tracking methods based on superpixel and graph model
CN101344965A (en) Tracking system based on binocular camera shooting
Berclaz et al. Multi-camera tracking and atypical motion detection with behavioral maps
CN102819847B (en) Based on the movement locus extracting method of PTZ dollying head
CN103761514A (en) System and method for achieving face recognition based on wide-angle gun camera and multiple dome cameras
CN103997624A (en) Overlapped domain dual-camera target tracking system and method
Karpagavalli et al. Estimating the density of the people and counting the number of people in a crowd environment for human safety
Tao et al. Real-time detection and tracking of moving object
CN105426820B (en) More people's anomaly detection methods based on safety monitoring video data
TW201904265A (en) Abnormal motion detection method and system
Hsieh et al. Grid-based template matching for people counting
CN103888731A (en) Structured description device and system for mixed video monitoring by means of gun-type camera and dome camera
CN103426008A (en) Vision human hand tracking method and system based on on-line machine learning
CN102065275B (en) Multi-target tracking method in intelligent video monitoring system
CN103489202A (en) Intrusion detection method based on videos
Shafie et al. Smart video surveillance system for vehicle detection and traffic flow control
Yang et al. An effective background subtraction under a continuosly and rapidly varying illumination

Legal Events

Date Code Title Description
PB01 Publication
C06 Publication
SE01 Entry into force of request for substantive examination
C10 Entry into substantive examination
GR01 Patent grant
C14 Grant of patent or utility model