CN107563272A - Target matching method in a non-overlapping field-of-view monitoring system - Google Patents

Target matching method in a non-overlapping field-of-view monitoring system

Info

Publication number
CN107563272A
CN107563272A
Authority
CN
China
Prior art keywords
target
pedestrian
view
space
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710447010.3A
Other languages
Chinese (zh)
Other versions
CN107563272B (en)
Inventor
赵高鹏
沈玉鹏
李双双
刘天宇
王超尘
王建宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN201710447010.3A
Publication of CN107563272A
Application granted
Publication of CN107563272B
Legal status: Active (current)
Anticipated expiration


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a target matching method for a non-overlapping field-of-view monitoring system, comprising the following steps. Step 1: establish background models of view A and view B. Step 2: track the pedestrian target in view A; when the target is about to leave view A, segment the complete pedestrian target I_A using background difference. Step 3: extract the color name features of the pedestrian target as its representation model M_A. Step 4: detect pedestrian targets in view B and segment candidate pedestrian targets I_B using background difference. Step 5: convert I_A and I_B from RGB space to HSV space and perform brightness correction. Step 6: extract the color name features of the brightness-corrected I_B as its representation model M_B. Step 7: determine the space-time constraints of the monitoring network. Step 8: construct a maximum a posteriori probability problem from M_A, M_B and the space-time constraints of the monitoring network. The method is robust to the effects of illumination variation and environmental differences on target matching under non-overlapping fields of view.

Description

Target matching method in a non-overlapping field-of-view monitoring system
Technical Field
The invention belongs to the field of computer vision, relates to the technical field of video surveillance, and in particular to a method for matching targets in a non-overlapping field-of-view monitoring system.
Background
With the development of camera surveillance technology, large numbers of surveillance cameras have been deployed in public places, and pedestrians in video are among the most important monitoring objects. Due to geographical and cost constraints, it is not practical to cover the entire monitored area with cameras, so joint monitoring by cameras with non-overlapping fields of view is increasingly used in surveillance systems. Correctly identifying the same target in different cameras at different times helps in understanding the target's behavior across the whole monitored scene. However, different cameras have different poses and observation directions, so the geometric characteristics of a target differ between cameras. Compared with other features, color features depend less on image size, orientation and viewing angle, and are more robust. However, owing to camera parameters and lighting conditions, the brightness of the same target also differs between cameras; therefore, when color features are used, brightness must be corrected first. Meanwhile, the topology of the monitoring network must be considered in order to raise the target matching rate under non-overlapping fields of view as far as possible.
The brightness transfer function (BTF) is a color-correction method that provides a transfer model for the variation of a target's color characteristics between two cameras. In "Appearance Modeling for Tracking in Multiple Non-overlapping Cameras", Javed O et al. propose using targets as the medium for establishing a brightness mapping between the monitored scenes of two cameras: because the same target exhibits different brightness characteristics in two scenes of different brightness, it reflects the brightness difference between the scenes.
On establishing space-time constraints between cameras, Javed O et al., in "Tracking Across Multiple Non-Overlapping Cameras", estimate the topology and target path probabilities of multiple non-overlapping cameras using a mixed Parzen-window and Gaussian-kernel probability density estimator. Ellis et al., in "Learning a Multi-Camera Topology", automatically establish the temporal-spatial topological relationships of a multi-camera network from a large amount of target observation data through unsupervised learning. However, these automatic calibration methods have significant limitations: the required observation data may be unobtainable, or the resulting topology may be inaccurate.
Methods based on Bayesian estimation fuse the similarity of the targets' representation models with the space-time constraints of the monitoring network to achieve target matching under non-overlapping fields of view. For example, Huang et al., in "Object Identification in a Bayesian Context", establish a Bayesian estimation framework that merges the time, position, speed and color information of a vehicle as it appears in and leaves the field of view, so as to match vehicles detected by two adjacent highway cameras.
Disclosure of Invention
The purpose of the invention is as follows: the invention provides a target matching method for a non-overlapping field-of-view monitoring system. First, the pixel-value distribution of the V channel in the image's HSV space is adjusted to perform brightness correction; then the color name features of the brightness-corrected target are extracted as its representation model; next, under a Bayesian estimation framework, the target representation models and the space-time constraints of the monitoring network are combined to construct a maximum a posteriori probability problem; finally, whether two targets are the same is judged by comparing the posterior probability with a threshold. The method is robust to the effects of illumination variation and environmental differences on target matching under non-overlapping fields of view.
The specific implementation steps of the invention are, in order, as follows:
A method for matching targets in a non-overlapping field-of-view monitoring system comprises the following steps:
Step 1: establish background models of view A and view B;
Step 2: track the pedestrian target in view A, and when the target is about to leave view A, segment the complete pedestrian target I_A using background difference;
Step 3: extract the color name features of the pedestrian target as the target's representation model M_A;
Step 4: detect pedestrian targets in view B, and segment candidate pedestrian targets I_B using background difference;
Step 5: convert the pedestrian targets I_A and I_B from RGB space to HSV space and perform brightness correction;
Step 6: extract the color name features of the brightness-corrected I_B as the target's representation model M_B;
Step 7: determine the space-time constraints of the monitoring network;
Step 8: construct a maximum a posteriori probability problem from M_A, M_B and the space-time constraints of the monitoring network, and judge whether the targets are the same.
Further, step 1 establishes the background models of views A and B by cropping the key regions where targets leave or enter each view and using them as the background.
Further, in step 2 the pedestrian target in view A is initialized by manual calibration with a rectangular box and tracked using the DSST algorithm.
Further, the color names extracted in step 3 comprise black, blue, brown, gray, green, orange, pink, purple, red, white and yellow.
Further, in step 4 the Otsu algorithm is used for binarization to obtain a binary image, from which the complete pedestrian target I_B is segmented.
Further, in step 5 the pedestrian targets I_A and I_B are converted from RGB space to HSV space as follows:
Step 5-1: convert the pedestrian targets I_A and I_B in view A and view B from RGB space to HSV space, and extract the V channel;
Step 5-2: adjust the Gaussian distribution of the V-channel pixel values:
I_v = (δ(t_v) / δ(s_v)) × (s_v - mean(s_v)) + mean(t_v)
where s_v and t_v are the V-channel pixel values of the source and target images respectively, δ(s_v) and δ(t_v) are the standard deviations of the V channels of the source and target images, mean(s_v) and mean(t_v) are the means of the V channels of the source and target images, and I_v is the V-channel pixel value of the source image after brightness correction;
Step 5-3: convert the V-channel-corrected HSV image back to RGB space, completing the brightness correction.
Further, step 7 determines the space-time constraints of the monitoring network as follows:
Step 7-1: using manually calibrated samples, extract the time interval between the target leaving view A and entering view B, and train a Gaussian mixture model:
p(t=T | a=b) = π_1·p(T; μ_1, δ_1) + π_2·p(T; μ_2, δ_2)
where a is a target in view A and b is a target in view B; p(T; μ_1, δ_1) is the Gaussian probability density fitting the transition time of faster pedestrians, μ_1 the average transition time of faster pedestrians, δ_1 the standard deviation of that transition time, and π_1 the weight of the Gaussian probability density fitting faster pedestrians; p(T; μ_2, δ_2) is the Gaussian probability density fitting the transition time of slower pedestrians, μ_2 the average transition time of slower pedestrians, δ_2 the standard deviation of that transition time, and π_2 the weight of the Gaussian probability density fitting slower pedestrians;
Step 7-2: using manually calibrated samples, extract the position at which the target leaves view A and the position at which it enters view B, and train a single Gaussian model to fit the position change:
p(d=D | a=b) = p(D; μ, δ)
where a is a target in view A and b is a target in view B; p(D; μ, δ) is the Gaussian probability density fitting the position change, μ is the average position change between view A and view B, and δ is the standard deviation of the position change.
Further, step 8 constructs the maximum a posteriori probability problem from M_A, M_B and the space-time constraints of the monitoring network as follows:
Step 8-1: construct the maximum a posteriori probability problem from the target representation models M_A, M_B and the space-time constraints of the monitoring network:
p(a=b | M_A, M_B; T; D) = p(M_A, M_B; T; D | a=b) × p(a=b) / p(M_A, M_B; T; D)
where p(a=b) is the transition probability of the target from view A to view B, assumed to be uniformly distributed, and p(M_A, M_B; T; D) is represented by a constant scale factor, so that:
p(a=b | M_A, M_B; T; D) ∝ p(M_A, M_B; T; D | a=b)
p(M_A, M_B; T; D | a=b) depends on the representation model probability p(M_A, M_B | a=b), the time transition probability p(t=T | a=b) and the position transition probability p(d=D | a=b). Assuming these three probabilities are independently distributed:
p(M_A, M_B; T; D | a=b) = p(M_A, M_B | a=b) × p(t=T | a=b) × p(d=D | a=b)
The representation model probability of the target is defined as the square of the Bhattacharyya coefficient:
p(M_A, M_B | a=b) = BC(M_A, M_B)²
The matching problem is then summarized as: given a target a in view A, find the target b* in view B such that:
b* = argmax_{b∈B} p(M_A, M_B; T; D | a=b)
Step 8-2: set a threshold τ to judge whether the targets are the same; when p(M_A, M_B; T; D | a=b) > τ, a and b are considered the same target.
Further, the walking speed of slower pedestrians is below the average pedestrian walking speed, and the walking speed of faster pedestrians is above the average pedestrian walking speed.
Beneficial effects: the method is simple and clear, matches targets quickly and accurately, adapts to different illumination environments and environmental differences, and effectively mitigates the problem that cameras cannot fully cover the monitored area.
Drawings
FIG. 1 is a flow chart of the target matching method in a non-overlapping field-of-view monitoring system of the present invention.
FIG. 2 shows the backgrounds of views A and B in an embodiment of the present invention.
FIG. 3 shows, for the target in view A, the last-frame tracking result, the target binary image, the target segmentation image, and the target color name image in an embodiment of the present invention.
FIG. 4 shows, for the detection results in view B, the target binary image, the target segmentation image, the target color name image without brightness correction, and the target color name image with brightness correction in an embodiment of the present invention.
FIG. 5 shows the posterior probability that the detections in view B (first 5 frames) are the same target as the target in view A in an embodiment of the present invention.
Detailed Description
The invention discloses a target matching method for a non-overlapping field-of-view monitoring system, comprising the following steps:
Step 1: establish background models of view A and view B. To reduce the computation of background modeling, crop the key region where targets leave or enter each view and use it as the background, as shown in FIG. 2;
Step 2: manually calibrate the target to be tracked in view A with a rectangular box, and track the target pedestrian using the DSST (Discriminative Scale Space Tracker) algorithm. When one side of the rectangular box surrounding the target reaches the video frame boundary, that frame is taken as the last frame of the target in view A. Obtain a binary image containing the target using the background difference method and the Otsu algorithm, and segment the complete pedestrian target I_A from it. FIG. 3 shows, from left to right, the last frame of the target in view A, the target binary image, the target segmentation image, and the target color name image.
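As a concrete illustration of this step, the following is a minimal sketch in Python, assuming OpenCV and NumPy are available; the morphological opening is an added cleanup step not specified in the text:

```python
import cv2
import numpy as np

def segment_target(frame_bgr, background_bgr):
    """Background-difference segmentation: difference the current frame against
    the background model, binarize with Otsu's algorithm, and mask out the target."""
    frame_gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    bg_gray = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(frame_gray, bg_gray)          # background difference
    # Otsu's algorithm selects the binarization threshold automatically
    _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    target = cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)  # segmented pedestrian
    return target, mask
```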
and 3, summarizing all pixel points into 11 Color semantic labels by utilizing the Color Name (CN) characteristics: black (black), blue (blue), brown (brown), gray (gray), green (green), orange (orange), pink (pink), purple (purple), red (red), white (white), yellow (yellow). For pedestrian object I A Each pixel in (a) is assigned a color label as shown in fig. 3. Mapping all pixels of the pedestrian target into 11-dimensional color name vectors, and normalizing to obtain a normalized color name histogram as a target expression model M A
Step 4: perform target detection in view B using the background difference method, binarize with the Otsu algorithm, and segment the complete pedestrian target I_B from the binary image. Sub-figures (a), (b) and (c) of FIG. 4 correspond to the candidate targets detected in view B; each shows, from left to right, the detection result of the candidate target in view B, the candidate target binary image, the candidate target segmentation image, and the candidate target color name image.
Step 5: convert I_A and I_B from RGB space to HSV space and perform brightness correction. Step 5 specifically comprises:
Step 5-1: convert the pedestrian targets I_A and I_B in views A and B from RGB space to HSV space, and extract the V channel;
Step 5-2: adjust the Gaussian distribution of the V-channel pixel values:
I_v = (δ(t_v) / δ(s_v)) × (s_v - mean(s_v)) + mean(t_v)
where s_v and t_v are the V-channel pixel values of the source and target images respectively, δ(s_v) and δ(t_v) are the standard deviations of the V channels of the source and target images, mean(s_v) and mean(t_v) are the means of the V channels of the source and target images, and I_v is the V-channel pixel value of the source image after brightness correction;
Step 5-3: convert the V-channel-corrected HSV image back to RGB space, completing the brightness correction, as shown in FIG. 4.
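A sketch of this brightness correction in Python, assuming OpenCV; the V-channel adjustment implements the mean/standard-deviation matching reconstructed from the variable definitions in step 5-2:

```python
import cv2
import numpy as np

def brightness_correct(source_bgr, target_bgr):
    """Match the V-channel distribution of the source image to that of the
    target image: I_v = (d(t_v)/d(s_v)) * (s_v - mean(s_v)) + mean(t_v)."""
    src_hsv = cv2.cvtColor(source_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    tgt_hsv = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    s_v, t_v = src_hsv[:, :, 2], tgt_hsv[:, :, 2]
    corrected = (t_v.std() / max(s_v.std(), 1e-6)) * (s_v - s_v.mean()) + t_v.mean()
    src_hsv[:, :, 2] = np.clip(corrected, 0, 255)
    # convert back to BGR space, completing the correction
    return cv2.cvtColor(src_hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```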
Step 6: using the color name features, assign each pixel of the detected target I_B a color label, as shown in FIG. 4; map all pixels of the target into an 11-dimensional color name vector and normalize it, obtaining a normalized color name histogram as the target representation model M_B.
Step 7: determine the space-time constraints of the monitoring network. Step 7 specifically comprises:
Step 7-1: using manually calibrated samples, extract the time interval between the target leaving view A and entering view B, and train a Gaussian mixture model:
p(t=T | a=b) = π_1·p(T; μ_1, δ_1) + π_2·p(T; μ_2, δ_2)
where a is a target in view A and b is a target in view B; p(T; μ_1, δ_1) is the Gaussian probability density fitting the transition time of faster pedestrians, μ_1 the average transition time of faster pedestrians, δ_1 the standard deviation of that transition time, and π_1 the weight of the Gaussian probability density fitting faster pedestrians; p(T; μ_2, δ_2) is the Gaussian probability density fitting the transition time of slower pedestrians, μ_2 the average transition time of slower pedestrians, δ_2 the standard deviation of that transition time, and π_2 the weight of the Gaussian probability density fitting slower pedestrians. Because pedestrians walk at both faster and slower speeds, the two-component Gaussian mixture fits the transition time more accurately than a single Gaussian model;
Step 7-2: using manually calibrated samples, extract the position at which the target leaves view A and the position at which it enters view B, and train a single Gaussian model to fit the position change:
p(d=D | a=b) = p(D; μ, δ)
where a is a target in view A and b is a target in view B; p(D; μ, δ) is the Gaussian probability density fitting the position change, μ is the average position change between view A and view B, and δ is the standard deviation of the position change;
Step 8: construct the maximum a posteriori probability problem from M_A, M_B and the space-time constraints of the monitoring network, and judge whether the targets are the same. Step 8 specifically comprises:
Step 8-1: construct the maximum a posteriori probability problem from the target representation models M_A, M_B and the space-time constraints of the monitoring network:
p(a=b | M_A, M_B; T; D) = p(M_A, M_B; T; D | a=b) × p(a=b) / p(M_A, M_B; T; D)
where p(a=b) is the transition probability of the target from view A to view B, assumed to be uniformly distributed, and p(M_A, M_B; T; D) is represented by a constant scale factor, so that:
p(a=b | M_A, M_B; T; D) ∝ p(M_A, M_B; T; D | a=b)
p(M_A, M_B; T; D | a=b) depends on the representation model probability p(M_A, M_B | a=b), the time transition probability p(t=T | a=b) and the position transition probability p(d=D | a=b). Assuming these three probabilities are independently distributed:
p(M_A, M_B; T; D | a=b) = p(M_A, M_B | a=b) × p(t=T | a=b) × p(d=D | a=b)
The Bhattacharyya coefficient measures the similarity of two discrete probability distributions; the representation model probability of the target is defined as the square of the Bhattacharyya coefficient. The square, rather than the coefficient itself, is chosen to strengthen the influence of the representation model in the posterior probability:
p(M_A, M_B | a=b) = BC(M_A, M_B)²
The time transition probability is measured with the Gaussian mixture model trained in step 7-1:
p(t=T | a=b) = π_1·p(T; μ_1, δ_1) + π_2·p(T; μ_2, δ_2)
The position transition probability is measured with the single Gaussian model trained in step 7-2:
p(d=D | a=b) = p(D; μ, δ)
the matching problem of the target at this time is summarized as follows: given an object a in view A, a found object B in view B * So that:
b * =argmax b∈B p(M A ,M B ;T;D|a=b)
Step 8-2: set a threshold τ to judge whether the targets are the same; when p(M_A, M_B; T; D | a=b) > τ, a and b are considered the same target.
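Under the assumptions of the previous sketches, the step-8 decision rule can be written as follows: the likelihood is the product of the squared Bhattacharyya coefficient of the two color name histograms, the time transition probability, and the position transition probability, maximized over the candidates in view B and compared against the threshold τ:

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient of two normalized histograms."""
    return float(np.sum(np.sqrt(np.asarray(p) * np.asarray(q))))

def match_score(M_A, M_B, T, D, time_prob, position_prob):
    # p(M_A, M_B; T; D | a=b) = BC(M_A, M_B)^2 * p(t=T | a=b) * p(d=D | a=b)
    return bhattacharyya(M_A, M_B) ** 2 * time_prob(T) * position_prob(D)

def best_match(M_A, candidates, time_prob, position_prob, tau=1e-5):
    """candidates: list of (M_B, T, D) tuples; returns the index of b*,
    or None if no candidate's score exceeds the threshold tau."""
    if not candidates:
        return None
    scores = [match_score(M_A, M_B, T, D, time_prob, position_prob)
              for (M_B, T, D) in candidates]
    best = int(np.argmax(scores))
    return best if scores[best] > tau else None
```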
Example:
to illustrate the effectiveness of the algorithm of the present invention, the experiments to accomplish target matching in a non-overlapping view monitoring system are as follows:
(1) Experimental data and parameter settings
The test data set uses the Cam1 and Cam2 sequences contained in Database 1. Forty pedestrians moving from Cam1 to Cam2 were selected as training samples; the time transition probability obtained by training is:
the position transition probability is:
selecting a threshold T =10 -5 If p (M) A ,M B ;T;D|a=b)&And gt, T, the target is considered to be consistent. Is selected byTarget 4 in Cam1 (the serial number of the target is specified in the dataset annotation folder) is the matching target, and the corresponding target is identified in Cam 2.
(2) Analysis of Experimental results
FIG. 5 shows the posterior probability that each target detected in Cam2 is the same target as the selected target (Target 4) in Cam1, computed over the first 5 frames of each detection. As can be seen from FIG. 5, with the threshold τ = 10^-5, re-matching of the target is achieved accurately.

Claims (9)

1. A method for matching targets in a non-overlapping field-of-view monitoring system, comprising the following steps:
Step 1: establish background models of view A and view B;
Step 2: track the pedestrian target in view A, and when the target is about to leave view A, segment the complete pedestrian target I_A using background difference;
Step 3: extract the color name features of the pedestrian target as the target's representation model M_A;
Step 4: detect pedestrian targets in view B, and segment candidate pedestrian targets I_B using background difference;
Step 5: convert the pedestrian targets I_A and I_B from RGB space to HSV space and perform brightness correction;
Step 6: extract the color name features of the brightness-corrected I_B as the target's representation model M_B;
Step 7: determine the space-time constraints of the monitoring network;
Step 8: construct a maximum a posteriori probability problem from M_A, M_B and the space-time constraints of the monitoring network, and judge whether the targets are the same.
2. The method for matching targets in a non-overlapping field-of-view monitoring system according to claim 1, wherein step 1 establishes the background models of views A and B by cropping the key regions where targets leave or enter each view and using them as the background.
3. The method for matching targets in a non-overlapping field-of-view monitoring system according to claim 1, wherein in step 2 the pedestrian target in view A is initialized by manual calibration with a rectangular box and tracked using the DSST algorithm.
4. The method for matching targets in a non-overlapping field-of-view monitoring system according to claim 1, wherein the color names extracted in step 3 comprise black, blue, brown, gray, green, orange, pink, purple, red, white and yellow.
5. The method for matching targets in a non-overlapping field-of-view monitoring system according to claim 1, wherein in step 4 the Otsu algorithm is used for binarization to obtain a binary image, from which the complete pedestrian target I_B is segmented.
6. The method for matching targets in a non-overlapping field-of-view monitoring system according to claim 1, wherein step 5 converts the pedestrian targets I_A and I_B from RGB space to HSV space as follows:
Step 5-1: convert the pedestrian targets I_A and I_B in view A and view B from RGB space to HSV space, and extract the V channel;
Step 5-2: adjust the Gaussian distribution of the V-channel pixel values:
I_v = (δ(t_v) / δ(s_v)) × (s_v - mean(s_v)) + mean(t_v)
where s_v and t_v are the V-channel pixel values of the source and target images respectively, δ(s_v) and δ(t_v) are the standard deviations of the V channels of the source and target images, mean(s_v) and mean(t_v) are the means of the V channels of the source and target images, and I_v is the V-channel pixel value of the source image after brightness correction;
Step 5-3: convert the V-channel-corrected HSV image back to RGB space, completing the brightness correction.
7. The method for matching targets in a non-overlapping field-of-view monitoring system according to claim 1, wherein step 7 determines the space-time constraints of the monitoring network as follows:
Step 7-1: using manually calibrated samples, extract the time interval between the target leaving view A and entering view B, and train a Gaussian mixture model:
p(t=T | a=b) = π_1·p(T; μ_1, δ_1) + π_2·p(T; μ_2, δ_2)
where a is a target in view A and b is a target in view B; p(T; μ_1, δ_1) is the Gaussian probability density fitting the transition time of faster pedestrians, μ_1 the average transition time of faster pedestrians, δ_1 the standard deviation of that transition time, and π_1 the weight of the Gaussian probability density fitting faster pedestrians; p(T; μ_2, δ_2) is the Gaussian probability density fitting the transition time of slower pedestrians, μ_2 the average transition time of slower pedestrians, δ_2 the standard deviation of that transition time, and π_2 the weight of the Gaussian probability density fitting slower pedestrians;
Step 7-2: using manually calibrated samples, extract the position at which the target leaves view A and the position at which it enters view B, and train a single Gaussian model to fit the position change:
p(d=D | a=b) = p(D; μ, δ)
where a is a target in view A and b is a target in view B; p(D; μ, δ) is the Gaussian probability density fitting the position change, μ is the average position change between view A and view B, and δ is the standard deviation of the position change.
8. The method according to claim 7, wherein step 8 constructs the maximum a posteriori probability problem from M_A, M_B and the space-time constraints of the monitoring network as follows:
Step 8-1: construct the maximum a posteriori probability problem from the target representation models M_A, M_B and the space-time constraints of the monitoring network:
p(a=b | M_A, M_B; T; D) = p(M_A, M_B; T; D | a=b) × p(a=b) / p(M_A, M_B; T; D)
where p(a=b) is the transition probability of the target from view A to view B, assumed to be uniformly distributed, and p(M_A, M_B; T; D) is represented by a constant scale factor, so that:
p(a=b | M_A, M_B; T; D) ∝ p(M_A, M_B; T; D | a=b)
p(M_A, M_B; T; D | a=b) depends on the representation model probability p(M_A, M_B | a=b), the time transition probability p(t=T | a=b) and the position transition probability p(d=D | a=b); assuming these three probabilities are independently distributed:
p(M_A, M_B; T; D | a=b) = p(M_A, M_B | a=b) × p(t=T | a=b) × p(d=D | a=b)
The representation model probability of the target is defined as the square of the Bhattacharyya coefficient:
p(M_A, M_B | a=b) = BC(M_A, M_B)²
The matching problem is then summarized as: given a target a in view A, find the target b* in view B such that:
b* = argmax_{b∈B} p(M_A, M_B; T; D | a=b)
Step 8-2: set a threshold τ to judge whether the targets are the same; when p(M_A, M_B; T; D | a=b) > τ, a and b are considered the same target.
9. The method according to claim 7, wherein the walking speed of slower pedestrians is below the average pedestrian walking speed, and the walking speed of faster pedestrians is above the average pedestrian walking speed.
CN201710447010.3A 2017-06-14 2017-06-14 Target matching method in non-overlapping field-of-view monitoring system Active CN107563272B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710447010.3A CN107563272B (en) 2017-06-14 2017-06-14 Target matching method in non-overlapping field-of-view monitoring system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710447010.3A CN107563272B (en) 2017-06-14 2017-06-14 Target matching method in non-overlapping field-of-view monitoring system

Publications (2)

Publication Number Publication Date
CN107563272A (en) 2018-01-09
CN107563272B CN107563272B (en) 2023-06-20

Family

ID=60973187

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710447010.3A Active CN107563272B (en) 2017-06-14 2017-06-14 Target matching method in non-overlapping field-of-view monitoring system

Country Status (1)

Country Link
CN (1) CN107563272B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090169129A1 (en) * 2007-12-31 2009-07-02 Yun-Chin Li Method for automatically transforming color space and prospect of an imaging device
CN102509118A (en) * 2011-09-28 2012-06-20 安科智慧城市技术(中国)有限公司 Method for monitoring video retrieval
CN102592144A (en) * 2012-01-06 2012-07-18 东南大学 Multi-camera non-overlapping view field-based pedestrian matching method
CN103530638A (en) * 2013-10-29 2014-01-22 无锡赛思汇智科技有限公司 Method for matching pedestrians under multiple cameras
CN105205834A (en) * 2015-07-09 2015-12-30 湖南工业大学 Target detection and extraction method based on Gaussian mixture and shadow detection model
CN105261037A (en) * 2015-10-08 2016-01-20 重庆理工大学 Moving object detection method capable of automatically adapting to complex scenes
WO2017092431A1 (en) * 2015-12-01 2017-06-08 乐视控股(北京)有限公司 Human hand detection method and device based on skin colour

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
翁菲; 刘允才: "Continuous person tracking in multi-scene video surveillance" *
韩敬贤; 齐美彬; 蒋建国: "Multi-camera target tracking based on appearance model and spatio-temporal model" *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110110598A (en) * 2019-04-01 2019-08-09 桂林电子科技大学 Pedestrian re-identification method and system based on visual features and space-time constraints

Also Published As

Publication number Publication date
CN107563272B (en) 2023-06-20

Similar Documents

Publication Publication Date Title
CN110544258B (en) Image segmentation method and device, electronic equipment and storage medium
Jung Efficient background subtraction and shadow removal for monochromatic video sequences
US11443454B2 (en) Method for estimating the pose of a camera in the frame of reference of a three-dimensional scene, device, augmented reality system and computer program therefor
CN103077521B Region-of-interest extraction method for video monitoring
KR101374139B1 (en) Monitoring method through image fusion of surveillance system
CN108022258B (en) Real-time multi-target tracking method based on single multi-frame detector and Kalman filtering
US9418426B1 (en) Model-less background estimation for foreground detection in video sequences
Benedek et al. Study on color space selection for detecting cast shadows in video surveillance
CN104601964A Indoor pedestrian target tracking method and system across cameras with non-overlapping fields of view
CN106683119A (en) Moving vehicle detecting method based on aerially photographed video images
WO2016165064A1 (en) Robust foreground detection method based on multi-view learning
Huerta et al. Exploiting multiple cues in motion segmentation based on background subtraction
Tiwari et al. A survey on shadow detection and removal in images and video sequences
Naufal et al. Preprocessed mask RCNN for parking space detection in smart parking systems
CN108921857A Video image focus area division method for monitoring scenes
WO2020259416A1 (en) Image collection control method and apparatus, electronic device, and storage medium
CN107103301B (en) Method and system for matching discriminant color regions with maximum video target space-time stability
CN108491857B (en) Multi-camera target matching method with overlapped vision fields
Barcellos et al. Shadow detection in camera-based vehicle detection: survey and analysis
Agrawal et al. ABGS Segmenter: pixel wise adaptive background subtraction and intensity ratio based shadow removal approach for moving object detection
CN107563272B Target matching method in non-overlapping field-of-view monitoring system
CN113221603A (en) Method and device for detecting shielding of monitoring equipment by foreign matters
CN109118546A Depth-of-field hierarchical estimation method based on single-frame images
CN110148105B (en) Video analysis method based on transfer learning and video frame association learning
CN114359332A (en) Target tracking method, device, equipment and medium based on depth image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant