CN107563272B - Target matching method in non-overlapping vision monitoring system


Info

Publication number
CN107563272B
CN107563272B (application CN201710447010.3A)
Authority
CN
China
Prior art keywords
target
view
pedestrian
targets
steps
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710447010.3A
Other languages
Chinese (zh)
Other versions
CN107563272A (en)
Inventor
赵高鹏
沈玉鹏
李双双
刘天宇
王超尘
王建宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN201710447010.3A priority Critical patent/CN107563272B/en
Publication of CN107563272A publication Critical patent/CN107563272A/en
Application granted granted Critical
Publication of CN107563272B publication Critical patent/CN107563272B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Abstract

The invention discloses a target matching method in a non-overlapping vision monitoring system, which comprises the following steps: 1: establishing background models of view A and view B; 2: tracking the pedestrian target in view A and, when the target is about to leave view A, segmenting the complete pedestrian target I_A by background subtraction; 3: extracting the color name features of the pedestrian target as the target appearance model M_A; 4: detecting pedestrian targets in view B and segmenting candidate pedestrian targets I_B by background subtraction; 5: converting the pedestrian targets I_A and I_B from RGB space to HSV space for brightness correction; 6: extracting the color name features of the brightness-corrected I_B as the target appearance model M_B; 7: determining the spatio-temporal constraints of the monitoring network; 8: constructing the maximum a posteriori probability problem from M_A, M_B and the spatio-temporal constraints of the monitoring network. The method is robust to the influence of illumination changes and environmental differences on target matching under non-overlapping views.

Description

Target matching method in non-overlapping vision monitoring system
Technical Field
The invention belongs to the field of computer vision, in particular the technical field of video surveillance, and relates to a target matching method in a non-overlapping vision monitoring system.
Background
With the development of camera surveillance technology, large numbers of monitoring cameras have been deployed in public places, and pedestrians in video are among the most important monitoring objects. Owing to geographical and cost constraints, it is impractical to cover the entire monitored area with cameras. Joint monitoring with cameras whose views do not overlap is therefore increasingly used in surveillance systems. Correctly identifying the same target in different cameras at different moments helps in understanding the target's behavior across the whole monitored scene. However, differences in camera pose and viewing direction cause the geometric features of a target to differ between cameras. Compared with other features, color features depend less on image size, orientation, and viewing angle, and are more robust. Nevertheless, because of camera parameters and illumination conditions, the brightness of a target also differs between cameras, so brightness must be corrected before color features are used. The topology of the monitoring network must also be considered in order to raise the target matching rate across non-overlapping views as much as possible.
The brightness transfer function (Brightness Transfer Function, BTF) is a color correction method that provides a conversion model for the change in a target's color characteristics between two cameras. Javed O et al., in Appearance Modeling for Tracking in Multiple Non-Overlapping Cameras, propose a method that uses the target itself as a medium for establishing a brightness mapping between camera monitoring scenes: because the same target shows different brightness characteristics in two scenes with different lighting, it reflects the brightness difference between the two scenes.
For establishing spatio-temporal constraints between cameras, Javed O et al., in Tracking Across Multiple Non-Overlapping Cameras, use a mixture probability density estimator based on Parzen windows and Gaussian kernels to estimate the topology and target path probabilities of multiple non-overlapping cameras. Ellis et al., in Learning a Multi-Camera Topology, automatically establish the spatio-temporal topological relationship of a multi-camera network from a large amount of target observation data by unsupervised learning. However, these automatic calibration methods have significant limitations; in some cases the topology cannot be obtained, or the result is inaccurate.
Methods based on a Bayesian estimation model fuse the similarity of the targets' appearance models with the spatio-temporal constraints of the monitoring network to achieve target matching across non-overlapping views. For example, Huang et al., in Object Identification in a Bayesian Context, build a Bayesian estimation framework that fuses the time, position, and speed of a vehicle as it leaves the view with the color information of the target vehicle to match and identify vehicles detected by two adjacent cameras on a highway.
Disclosure of Invention
The purpose of the invention is as follows: the invention provides a target matching method in a non-overlapping vision monitoring system. First, brightness correction is performed by adjusting the pixel-value distribution of the V channel in the image's HSV space; then the color name features of the brightness-corrected target are extracted as the target appearance model; then, under a Bayesian estimation framework, the target appearance models are fused with the spatio-temporal constraints of the monitoring network to construct a maximum a posteriori probability problem; finally, whether the targets are the same is judged by comparing the posterior probability with a threshold. The method is robust to the influence of illumination changes and environmental differences on target matching under non-overlapping views.
The specific implementation steps of the invention are as follows:
A target matching method in a non-overlapping vision monitoring system comprises the following steps:
Step 1: establishing background models of view A and view B;
Step 2: tracking the pedestrian target in view A and, when the target is about to leave view A, segmenting the complete pedestrian target I_A by background subtraction;
Step 3: extracting the color name features of the pedestrian target as the target appearance model M_A;
Step 4: detecting pedestrian targets in view B and segmenting candidate pedestrian targets I_B by background subtraction;
Step 5: converting the pedestrian targets I_A and I_B from RGB space to HSV space for brightness correction;
Step 6: extracting the color name features of the brightness-corrected I_B as the target appearance model M_B;
Step 7: determining the spatio-temporal constraints of the monitoring network;
Step 8: constructing the maximum a posteriori probability problem from M_A, M_B and the spatio-temporal constraints of the monitoring network, and judging whether the targets are the same.
Furthermore, in step 1 the background models of views A and B are established by cropping the key regions where targets leave or enter the views as the background.
Furthermore, in step 2 the pedestrian target in view A is calibrated manually with a rectangular box and tracked with the DSST algorithm.
Furthermore, the color name features of the pedestrian target extracted in step 3 comprise the labels black, blue, brown, gray, green, orange, pink, purple, red, white, and yellow.
Furthermore, in step 4 binarization is performed with Otsu's algorithm to obtain a binary image, from which the complete pedestrian target I_B is segmented.
Further, in step 5 the pedestrian targets I_A and I_B are converted from RGB space to HSV space, comprising the following steps:
Step 5-1: converting the pedestrian targets I_A and I_B in view A and view B from RGB space to HSV space, and extracting the V channel;
Step 5-2: adjusting the Gaussian distribution of the V-channel pixel values:

I_v = (δ_tv / δ_sv)·(s_v - mean(s_v)) + mean(t_v)

where s_v and t_v are the V-channel pixel values of the source image and the target image respectively, δ_sv and δ_tv are the standard deviations of the V channels of the source image and the target image respectively, mean(s_v) and mean(t_v) are the means of the V channels of the source image and the target image respectively, and I_v is the V-channel pixel value of the source image after brightness correction;
Step 5-3: converting the HSV image with the corrected V channel back to RGB space, completing the brightness correction.
Further, determining the spatio-temporal constraints of the monitoring network in step 7 comprises the following steps:
Step 7-1: using samples calibrated manually in advance, extracting the time interval from a target leaving view A to entering view B, and training a Gaussian mixture model:

p(t=T|a=b) = π₁·p(t=T; μ₁, δ₁) + π₂·p(t=T; μ₂, δ₂)

where a is a target in view A and b is a target in view B; p(t=T; μ₁, δ₁) is the Gaussian probability density used to fit the transfer time of faster pedestrians, μ₁ is the average transfer time of faster pedestrians, δ₁ is the standard deviation of the transfer time of faster pedestrians, and π₁ is the weight of the Gaussian probability density fitting faster pedestrians; p(t=T; μ₂, δ₂) is the Gaussian probability density used to fit the transfer time of slower pedestrians, μ₂ is the average transfer time of slower pedestrians, δ₂ is the standard deviation of the transfer time of slower pedestrians, and π₂ is the weight of the Gaussian probability density fitting slower pedestrians;
Step 7-2: using samples calibrated in advance, extracting the position where the target leaves view A and the position where it enters view B, and training a single Gaussian model to fit the position change:

p(d=D|a=b) = p(d=D; μ, δ)

where a is a target in view A and b is a target in view B; p(d=D; μ, δ) is the Gaussian probability density used to fit the position change, μ is the average position change between view A and view B, and δ is the standard deviation of the position change.
Further, in step 8 the maximum a posteriori probability problem is constructed from M_A, M_B and the spatio-temporal constraints of the monitoring network, comprising the following steps:
Step 8-1: constructing the maximum a posteriori probability problem from the target appearance models M_A, M_B and the spatio-temporal constraints of the monitoring network:

p(a=b|M_A, M_B; T; D) = p(M_A, M_B; T; D|a=b)·p(a=b) / p(M_A, M_B; T; D)

where p(a=b) is the prior probability that the target transfers from view A to view B, assumed to be uniformly distributed; p(M_A, M_B; T; D) is represented by a constant scale factor, so that:

p(a=b|M_A, M_B; T; D) ∝ p(M_A, M_B; T; D|a=b)

p(M_A, M_B; T; D|a=b) depends on the target appearance model probability p(M_A, M_B|a=b), the time transition probability p(t=T|a=b), and the position transition probability p(d=D|a=b). Assuming these three probabilities are mutually independent:

p(M_A, M_B; T; D|a=b) = p(M_A, M_B|a=b) × p(t=T|a=b) × p(d=D|a=b)

The target appearance model probability is defined using the square of the Bhattacharyya coefficient.
The matching problem of the targets is then summarized as: given target a in view A, find the target b* in view B such that:

b* = argmax_{b∈B} p(M_A, M_B; T; D|a=b)

Step 8-2: a threshold θ is set to judge whether the targets are the same: when p(M_A, M_B; T; D|a=b) > θ, a and b are considered to be the same target.
Further, a slower pedestrian is one whose walking speed is less than the average pedestrian walking speed, and a faster pedestrian is one whose walking speed is greater than the average pedestrian walking speed.
The beneficial effects are as follows: the method is simple and clear, target matching is fast and accurate, it adapts to different illumination environments and environmental differences, and it effectively alleviates the problem that cameras cannot fully cover the monitored area.
Drawings
FIG. 1 is a flow chart of the target matching method in a non-overlapping vision monitoring system of the present invention.
Fig. 2 shows the backgrounds of views A and B in an embodiment of the invention.
Fig. 3 shows the last-frame tracking result, the target binary image, the target segmentation image, and the target color name image for the target in view A in an embodiment of the present invention.
Fig. 4 shows, for the detection results in view B in an embodiment of the present invention, the target binary image, the target segmentation image, the target color name image without brightness correction, and the target color name image with brightness correction.
Fig. 5 shows the posterior probabilities, computed over the first 5 frames of each detection result in view B, that the detection and the target in view A are the same target in an embodiment of the present invention.
Detailed Description
The invention relates to a target matching method in a non-overlapping vision monitoring system, which comprises the following steps:
Step 1: establishing background models of view A and view B; to reduce the computation of background modeling, the key regions where targets leave or enter the views are cropped as the background, as shown in fig. 2;
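Although the patent does not fix a particular background modeling technique, a per-pixel temporal median over the first frames of the cropped key region is one simple way to realize step 1. The sketch below assumes that choice; the video path and region coordinates are illustrative, not from the patent.

```python
# Minimal sketch of step 1 (assumed technique: temporal median background).
import cv2
import numpy as np

def build_background(video_path, region, n_frames=100):
    """Median background of the key region where targets leave/enter a view."""
    x, y, w, h = region
    cap = cv2.VideoCapture(video_path)
    frames = []
    while len(frames) < n_frames:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame[y:y + h, x:x + w])  # crop the key region only
    cap.release()
    # the per-pixel median suppresses transient foreground objects
    return np.median(np.stack(frames), axis=0).astype(np.uint8)

background_A = build_background("camA.avi", region=(0, 100, 320, 240))  # assumed values
```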
Step 2: a target to be tracked in monitoring view A is calibrated manually with a rectangular box, and the target pedestrian is tracked with the DSST (Discriminative Scale Space Tracker) algorithm. When one side of the rectangle surrounding the target reaches the video frame boundary, that frame is taken as the target's last frame in view A. A binary image containing the target is obtained with the background subtraction method and Otsu's algorithm, and the complete pedestrian target I_A is segmented according to the binary image. Fig. 3 shows, from left to right, the target's last frame in view A, the target binary image, the target segmentation image, and the target color name image;
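For step 2, dlib's correlation_tracker is an implementation of the scale-space correlation-filter tracker from the DSST paper (Danelljan et al., 2014), so it can stand in for the tracker used here. A sketch under that assumption, with the video path and the hand-drawn initial rectangle as assumed values:

```python
# Sketch of step 2: DSST-style tracking until the box touches the frame border.
import cv2
import dlib

cap = cv2.VideoCapture("camA.avi")
ok, frame = cap.read()
tracker = dlib.correlation_tracker()
tracker.start_track(frame, dlib.rectangle(140, 60, 190, 180))  # manual calibration

last_frame, last_box = frame, (140, 60, 190, 180)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    tracker.update(frame)
    p = tracker.get_position()
    x1, y1 = int(p.left()), int(p.top())
    x2, y2 = int(p.right()), int(p.bottom())
    last_frame, last_box = frame, (x1, y1, x2, y2)
    h, w = frame.shape[:2]
    # patent's stop rule: when one side of the rectangle reaches the video
    # frame boundary, this is the target's last frame in view A
    if x1 <= 0 or y1 <= 0 or x2 >= w - 1 or y2 >= h - 1:
        break
cap.release()
```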
Step 3: using color name (Color Name, CN) features, all pixels are summarized into 11 color semantic labels: black, blue, brown, gray, green, orange, pink, purple, red, white, yellow. Each pixel in the pedestrian target I_A is assigned one color label, as shown in fig. 3. All pixels of the pedestrian target are thus mapped into an 11-dimensional color name vector, which is normalized to obtain a normalized color name histogram as the target appearance model M_A;
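The published color name descriptor assigns the 11 labels through a lookup table learned from real-world images (van de Weijer et al.); the sketch below substitutes a simple nearest-prototype assignment for that table, so the prototype RGB values are illustrative assumptions, while the output has the same form as the model M_A: a normalized 11-bin histogram.

```python
# Sketch of step 3: 11-bin color name histogram (nearest-prototype stand-in
# for the learned color-name lookup table).
import numpy as np

CN_PROTOTYPES = np.array([          # assumed prototype RGB values
    [0, 0, 0],        # black
    [0, 0, 255],      # blue
    [139, 69, 19],    # brown
    [128, 128, 128],  # gray
    [0, 255, 0],      # green
    [255, 165, 0],    # orange
    [255, 192, 203],  # pink
    [128, 0, 128],    # purple
    [255, 0, 0],      # red
    [255, 255, 255],  # white
    [255, 255, 0],    # yellow
], dtype=np.float32)

def color_name_histogram(rgb_crop, mask=None):
    """rgb_crop: HxWx3 RGB target crop; mask: HxW boolean foreground mask."""
    px = rgb_crop.reshape(-1, 3).astype(np.float32)
    if mask is not None:
        px = px[mask.reshape(-1)]
    # label each pixel with its nearest color prototype
    dists = np.linalg.norm(px[:, None, :] - CN_PROTOTYPES[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    hist = np.bincount(labels, minlength=11).astype(np.float64)
    return hist / hist.sum()   # normalized color name histogram (the model M)
```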
Step 4, performing target detection in the view field B by using a background difference method, performing binarization by using an Ojin algorithm, and dividing a complete pedestrian target I according to a binary image B Fig. 4 (a), (B), and (c) are respectively, from left to right, the candidate objects detected in the view B, and the candidate objects are respectively, from left to right, the detection result of the candidate objects in the view B, the binary image of the candidate objects, the segmentation image of the candidate objects, and the color name image of the candidate objects;
Step 5: I_A and I_B are converted from RGB space to HSV space for brightness correction. Step 5 specifically comprises the following steps:
Step 5-1: converting the pedestrian targets I_A and I_B in monitoring views A and B from RGB space to HSV space, and extracting the V channel;
Step 5-2: adjusting the Gaussian distribution of the V-channel pixel values:

I_v = (δ_tv / δ_sv)·(s_v - mean(s_v)) + mean(t_v)

where s_v and t_v are the V-channel pixel values of the source image and the target image respectively, δ_sv and δ_tv are the standard deviations of the V channels of the source image and the target image respectively, mean(s_v) and mean(t_v) are the means of the V channels of the source image and the target image respectively, and I_v is the V-channel pixel value of the source image after brightness correction;
Step 5-3: converting the HSV image with the corrected V channel back to RGB space, completing the brightness correction, as shown in fig. 4;
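A compact sketch of step 5: the mean and standard deviation of the source crop's V channel are matched to those of the reference crop, following the formula in step 5-2. The epsilon guarding against a zero standard deviation is an added safeguard. Here I_B would be corrected toward I_A via correct_brightness(crop_B, crop_A).

```python
# Sketch of step 5: V-channel brightness correction in HSV space.
import cv2
import numpy as np

def correct_brightness(source_bgr, target_bgr):
    src = cv2.cvtColor(source_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    tgt = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    s_v, t_v = src[..., 2], tgt[..., 2]
    # I_v = (delta_tv / delta_sv) * (s_v - mean(s_v)) + mean(t_v)
    i_v = (t_v.std() / (s_v.std() + 1e-6)) * (s_v - s_v.mean()) + t_v.mean()
    src[..., 2] = np.clip(i_v, 0, 255)
    # back to RGB space after correcting the V channel
    return cv2.cvtColor(src.astype(np.uint8), cv2.COLOR_HSV2BGR)
```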
Step 6: using the color name features, each pixel in the detected target I_B is assigned a color label, as shown in fig. 4; all pixels of the target are thus mapped into an 11-dimensional color name vector, which is normalized to obtain a normalized color name histogram as the target appearance model M_B;
Step 7: determining the spatio-temporal constraints of the monitoring network. Step 7 specifically comprises the following steps:
Step 7-1: using samples calibrated manually in advance, extracting the time interval from a target leaving view A to entering view B, and training a Gaussian mixture model:

p(t=T|a=b) = π₁·p(t=T; μ₁, δ₁) + π₂·p(t=T; μ₂, δ₂)

where a is a target in view A and b is a target in view B; p(t=T; μ₁, δ₁) is the Gaussian probability density used to fit the transfer time of faster pedestrians, μ₁ is the average transfer time of faster pedestrians, δ₁ is the standard deviation of the transfer time of faster pedestrians, and π₁ is the weight of the Gaussian probability density fitting faster pedestrians; p(t=T; μ₂, δ₂) is the Gaussian probability density used to fit the transfer time of slower pedestrians, μ₂ is the average transfer time of slower pedestrians, δ₂ is the standard deviation of the transfer time of slower pedestrians, and π₂ is the weight of the Gaussian probability density fitting slower pedestrians. Because pedestrian speeds divide into faster and slower, the two-component Gaussian mixture fits the transfer time more accurately than a single Gaussian model;
Step 7-2: using samples calibrated in advance, extracting the position where the target leaves view A and the position where it enters view B, and training a single Gaussian model to fit the position change:

p(d=D|a=b) = p(d=D; μ, δ)

where a is a target in view A and b is a target in view B; p(d=D; μ, δ) is the Gaussian probability density used to fit the position change, μ is the average position change between view A and view B, and δ is the standard deviation of the position change;
Step 8: the maximum a posteriori probability problem is constructed from M_A, M_B and the spatio-temporal constraints of the monitoring network, and whether the targets are the same is judged. Step 8 specifically comprises the following steps:
Step 8-1: constructing the maximum a posteriori probability problem from the target appearance models M_A, M_B and the spatio-temporal constraints of the monitoring network:

p(a=b|M_A, M_B; T; D) = p(M_A, M_B; T; D|a=b)·p(a=b) / p(M_A, M_B; T; D)

where p(a=b) is the prior probability that the target transfers from view A to view B, assumed to be uniformly distributed; p(M_A, M_B; T; D) is represented by a constant scale factor, so that:

p(a=b|M_A, M_B; T; D) ∝ p(M_A, M_B; T; D|a=b)

p(M_A, M_B; T; D|a=b) depends on the target appearance model probability p(M_A, M_B|a=b), the time transition probability p(t=T|a=b), and the position transition probability p(d=D|a=b). Assuming these three probabilities are mutually independent:

p(M_A, M_B; T; D|a=b) = p(M_A, M_B|a=b) × p(t=T|a=b) × p(d=D|a=b)
The Bhattacharyya coefficient measures the similarity of two discrete probability distributions, and the square of the Bhattacharyya coefficient is used to define the target appearance model probability; the square is chosen instead of the coefficient itself in order to strengthen the effect of the appearance model in the posterior probability:

BC(M_A, M_B) = Σ_{i=1..11} √(M_A(i)·M_B(i))

p(M_A, M_B|a=b) = BC(M_A, M_B)²
The time transition probability is measured with the Gaussian mixture model trained in step 7-1:

p(t=T|a=b) = π₁·p(t=T; μ₁, δ₁) + π₂·p(t=T; μ₂, δ₂)
the single Gaussian model trained in the step 7-2 is used for measuring the position transition probability:
p(d=D|a=b)=p(d=D;μ,δ)
the matching problem of the targets at this time is summarized as: given object a in view A, find object B in view B * Such that:
b * =argmax b∈B p(M A ,M B ;T;D|a=b)
Step 8-2: a threshold θ is set to judge whether the targets are the same: when p(M_A, M_B; T; D|a=b) > θ, a and b are considered to be the same target.
Examples:
in order to illustrate the effectiveness of the algorithm of the present invention, the experiments to accomplish target matching in a non-overlapping view monitoring system are as follows:
(1) Experimental data and parameter settings
The test dataset uses the Cam1 and Cam2 sequences from "Dataset 1: three-camera network with non-overlapping views" of the public NLPR_MCT dataset.
40 pedestrians going from Cam1 to Cam2 are selected as training samples. The time transition probability obtained by training is the two-component Gaussian mixture p(t=T|a=b) = π₁·p(t=T; μ₁, δ₁) + π₂·p(t=T; μ₂, δ₂), and the position transition probability is the single Gaussian p(d=D|a=b) = p(d=D; μ, δ) [the fitted parameter values are given only as equation images in the original].
selecting a threshold t=10 -5 If p (M) A ,M B ;T;D|a=b)>And T, the targets are considered to be consistent. Target 4 in Cam1 (the sequence number of the target is described in the data set animation folder) is selected as a matching target, and the corresponding target is identified in Cam 2.
(2) Analysis of experimental results
Fig. 5 shows the posterior probability that each target detected in Cam2 is the same target as the selected target (target 4) in Cam1; the posterior probability is computed over the first 5 frames of each detected target. As can be seen from fig. 5, with the threshold θ = 10⁻⁵, re-matching of the target is achieved accurately.

Claims (6)

1. A target matching method in a non-overlapping vision monitoring system comprises the following steps:
Step 1: establishing background models of view A and view B;
Step 2: tracking the pedestrian target in view A and, when the target is about to leave view A, segmenting the complete pedestrian target I_A by background subtraction;
Step 3: extracting the color name features of the pedestrian target as the target appearance model M_A;
Step 4: detecting pedestrian targets in view B and segmenting candidate pedestrian targets I_B by background subtraction;
Step 5: converting the pedestrian targets I_A and I_B from RGB space to HSV space for brightness correction, specifically comprising the following steps:
Step 5-1: converting the pedestrian targets I_A and I_B in view A and view B from RGB space to HSV space, and extracting the V channel;
Step 5-2: adjusting the Gaussian distribution of the V-channel pixel values:

I_v = (δ_tv / δ_sv)·(s_v - mean(s_v)) + mean(t_v)

where s_v and t_v are the V-channel pixel values of the source image and the target image respectively, δ_sv and δ_tv are the standard deviations of the V channels of the source image and the target image respectively, mean(s_v) and mean(t_v) are the means of the V channels of the source image and the target image respectively, and I_v is the V-channel pixel value of the source image after brightness correction;
Step 5-3: converting the HSV image with the corrected V channel back to RGB space, completing the brightness correction;
step 6: extracting luminance corrected I B Color name feature of (c) as target expression model M B
Step 7: determining the spatio-temporal constraints of the monitoring network, comprising the following steps:
Step 7-1: using samples calibrated manually in advance, extracting the time interval from a target leaving view A to entering view B, and training a Gaussian mixture model:

p(t=T|a=b) = π₁·p(t=T; μ₁, δ₁) + π₂·p(t=T; μ₂, δ₂)

where a is a target in view A and b is a target in view B; p(t=T; μ₁, δ₁) is the Gaussian probability density used to fit the transfer time of faster pedestrians, μ₁ is the average transfer time of faster pedestrians, δ₁ is the standard deviation of the transfer time of faster pedestrians, and π₁ is the weight of the Gaussian probability density fitting faster pedestrians; p(t=T; μ₂, δ₂) is the Gaussian probability density used to fit the transfer time of slower pedestrians, μ₂ is the average transfer time of slower pedestrians, δ₂ is the standard deviation of the transfer time of slower pedestrians, and π₂ is the weight of the Gaussian probability density fitting slower pedestrians;
Step 7-2: using samples calibrated in advance, extracting the position where the target leaves view A and the position where it enters view B, and training a single Gaussian model to fit the position change:

p(d=D|a=b) = p(d=D; μ, δ)

where a is a target in view A and b is a target in view B; p(d=D; μ, δ) is the Gaussian probability density used to fit the position change, μ is the average position change between view A and view B, and δ is the standard deviation of the position change;
step 8: from M A 、M B And monitoring the space-time constraint construction maximum posterior probability problem of the network, judging whether the targets are consistent or not, and specifically comprising the following steps:
step 8-1, representing the model M by the target A 、M B The space-time constraint construction maximum posterior probability problem of a monitoring network:
Figure FDA0004120658200000021
where p (a=b) is the probability of transition of the target from view a to view B, assuming thatUniformly distributed; p (M) A ,M B The method comprises the steps of carrying out a first treatment on the surface of the T is a T; d) Using a constant scale factor representation, then:
p(a=b|M A ,M B ;T;D)∝p(M A ,M B ;T;D|a=b)
p(M A ,M B the method comprises the steps of carrying out a first treatment on the surface of the T is a T; d| a=b) depends on the target-dependent performance model probability p (M A ,M B A=b), time transition probability p (t=t|a=b), and position transition probability p (d=d|a=b); assuming that these three probabilities are all distributed independently, then:
p(M A ,M B ;T;D|a=b)=p(M A ,M B |a=b)×p(t=T|a=b)×p(d=D|a=b)
the target is defined by the square of the coefficient of pasteurization,
the matching problem of the targets at this time is summarized as: given object a in view A, object B is found in view B * Such that:
b * =argmax b∈B p(M A ,M B ;T;D|a=b)
step 8-2, by setting a threshold value
Figure FDA0004120658200000022
To determine whether the targets are consistent, when +.>
Figure FDA0004120658200000023
When a and b are considered to be the same target.
2. The target matching method in a non-overlapping vision monitoring system according to claim 1, wherein in step 1 the background models of views A and B are established by cropping the key regions where targets leave or enter the views as the background.
3. The target matching method in a non-overlapping vision monitoring system according to claim 1, wherein in step 2 the pedestrian target in view A is calibrated manually with a rectangular box and tracked with the DSST algorithm.
4. The target matching method in a non-overlapping vision monitoring system according to claim 1, wherein the color name features of the pedestrian target extracted in step 3 comprise black, blue, brown, gray, green, orange, pink, purple, red, white, and yellow.
5. The target matching method in a non-overlapping vision monitoring system according to claim 1, wherein in step 4 binarization is performed with Otsu's algorithm to obtain a binary image, from which the complete pedestrian target I_B is segmented.
6. The target matching method in a non-overlapping vision monitoring system according to claim 1, wherein a slower pedestrian is one whose walking speed is less than the average pedestrian walking speed, and a faster pedestrian is one whose walking speed is greater than the average pedestrian walking speed.
CN201710447010.3A 2017-06-14 2017-06-14 Target matching method in non-overlapping vision monitoring system Active CN107563272B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710447010.3A CN107563272B (en) 2017-06-14 2017-06-14 Target matching method in non-overlapping vision monitoring system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710447010.3A CN107563272B (en) 2017-06-14 2017-06-14 Target matching method in non-overlapping vision monitoring system

Publications (2)

Publication Number Publication Date
CN107563272A CN107563272A (en) 2018-01-09
CN107563272B (en) 2023-06-20

Family

ID=60973187

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710447010.3A Active CN107563272B (en) 2017-06-14 2017-06-14 Target matching method in non-overlapping vision monitoring system

Country Status (1)

Country Link
CN (1) CN107563272B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110110598A (en) * 2019-04-01 2019-08-09 桂林电子科技大学 The pedestrian of a kind of view-based access control model feature and space-time restriction recognition methods and system again

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509118A (en) * 2011-09-28 2012-06-20 安科智慧城市技术(中国)有限公司 Method for monitoring video retrieval
CN102592144A (en) * 2012-01-06 2012-07-18 东南大学 Multi-camera non-overlapping view field-based pedestrian matching method
CN103530638A (en) * 2013-10-29 2014-01-22 无锡赛思汇智科技有限公司 Method for matching pedestrians under multiple cameras
CN105205834A (en) * 2015-07-09 2015-12-30 湖南工业大学 Target detection and extraction method based on Gaussian mixture and shade detection model
CN105261037A (en) * 2015-10-08 2016-01-20 重庆理工大学 Moving object detection method capable of automatically adapting to complex scenes
WO2017092431A1 (en) * 2015-12-01 2017-06-08 乐视控股(北京)有限公司 Human hand detection method and device based on skin colour

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI380677B (en) * 2007-12-31 2012-12-21 Altek Corp Method of color space transformation to transfer prospect for camera sphere

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509118A (en) * 2011-09-28 2012-06-20 安科智慧城市技术(中国)有限公司 Method for monitoring video retrieval
CN102592144A (en) * 2012-01-06 2012-07-18 东南大学 Multi-camera non-overlapping view field-based pedestrian matching method
CN103530638A (en) * 2013-10-29 2014-01-22 无锡赛思汇智科技有限公司 Method for matching pedestrians under multiple cameras
CN105205834A (en) * 2015-07-09 2015-12-30 湖南工业大学 Target detection and extraction method based on Gaussian mixture and shade detection model
CN105261037A (en) * 2015-10-08 2016-01-20 重庆理工大学 Moving object detection method capable of automatically adapting to complex scenes
WO2017092431A1 (en) * 2015-12-01 2017-06-08 乐视控股(北京)有限公司 Human hand detection method and device based on skin colour

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
翁菲; 刘允才. Continuous person tracking in multi-scene video surveillance. Microcomputer Applications. 2010, (No. 06), full text. *
韩敬贤; 齐美彬; 蒋建国. Multi-camera target tracking based on appearance model and spatio-temporal model. Journal of Hefei University of Technology (Natural Science). 2016, (No. 12), full text. *

Also Published As

Publication number Publication date
CN107563272A (en) 2018-01-09

Similar Documents

Publication Publication Date Title
CN105631880B (en) Lane line dividing method and device
CN110544258B (en) Image segmentation method and device, electronic equipment and storage medium
Soriano et al. Adaptive skin color modeling using the skin locus for selecting training pixels
CN108665487B (en) Transformer substation operation object and target positioning method based on infrared and visible light fusion
CN103093203B (en) A kind of human body recognition methods again and human body identify system again
Zang et al. Robust background subtraction and maintenance
AU2006252252B2 (en) Image processing method and apparatus
US20150125074A1 (en) Apparatus and method for extracting skin area to block harmful content image
US9418426B1 (en) Model-less background estimation for foreground detection in video sequences
Huerta et al. Exploiting multiple cues in motion segmentation based on background subtraction
CN109918971B (en) Method and device for detecting number of people in monitoring video
KR101374139B1 (en) Monitoring method through image fusion of surveillance system
EP2340525A1 (en) Detection of vehicles in an image
WO2016165064A1 (en) Robust foreground detection method based on multi-view learning
CN110111351B (en) Pedestrian contour tracking method fusing RGBD multi-modal information
CN111886600A (en) Device and method for instance level segmentation of image
CN111815528A (en) Bad weather image classification enhancement method based on convolution model and feature fusion
Surkutlawar et al. Shadow suppression using RGB and HSV color space in moving object detection
WO2020259416A1 (en) Image collection control method and apparatus, electronic device, and storage medium
CN108491857B (en) Multi-camera target matching method with overlapped vision fields
CN110866889A (en) Multi-camera data fusion method in monitoring system
CN107563272B (en) Target matching method in non-overlapping vision monitoring system
Mouats et al. Fusion of thermal and visible images for day/night moving objects detection
CN110148105B (en) Video analysis method based on transfer learning and video frame association learning
Fan et al. Edge detection of color road image based on lab model

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant