CN103164693A - Surveillance video pedestrian detection matching method - Google Patents

Surveillance video pedestrian detection matching method

Info

Publication number: CN103164693A (application CN201310043922.6A; granted publication CN103164693B)
Authority: CN (China)
Prior art keywords: target sequence, target, frame, sequence, similarity
Legal status: Granted; Expired - Fee Related
Other languages: Chinese (zh)
Inventors: 谭毅华 (Tan Yihua), 黄石泉 (Huang Shiquan), 李彦胜 (Li Yansheng), 田金文 (Tian Jinwen)
Original and current assignee: Huazhong University of Science and Technology
Application filed 2013-02-04 by Huazhong University of Science and Technology; priority to CN201310043922.6A

Landscapes

  • Image Analysis (AREA)

Abstract

The invention belongs to the field of video image data processing and discloses a surveillance video pedestrian detection and matching method comprising: a video pedestrian target detection step, an inter-frame common-target association step, a target sequence and to-be-matched target feature extraction step, a feature similarity calculation step, and a target matching decision step. Because adjacent-frame pedestrian targets show no large displacement, the detected location information of a pedestrian is used to associate the common target across frames, yielding a target sequence. Two kinds of target sequence features are extracted: multi-frame gray-level histogram features, and a principal component analysis (PCA) template. Because multi-frame information is used, the method achieves better stability and matching accuracy than single-frame target matching. The matching decision method iteratively fuses the similarities of the two features to reach a judgment and obtain the matching result.

Description

A surveillance video pedestrian detection and matching method
Technical field
The invention belongs to the field of video image data processing, and more specifically relates to a surveillance video pedestrian detection and matching method.
Background technology
In surveillance systems the camera fields of view are mostly non-overlapping. Taking pedestrians as targets, the purpose of studying pedestrian target tracking and matching in a multi-view surveillance system is to let a computer automatically search for a given target across the different surveillance videos of a monitored area. The basic framework for solving the target matching problem generally comprises target feature extraction, building a target feature database, and feature similarity estimation.
Current single-frame target feature extraction for surveillance video, domestic and foreign, falls into two main types: (1) appearance-model matching based on color features, for example color-histogram appearance-model matching and UV chrominance-space model matching; (2) methods based on non-color features, for example SIFT feature matching.
A single feature, however, often cannot describe the target robustly against all video uncertainties, so many attempts improve matching accuracy by fusing multiple features. Yet whether single-feature matching or multi-feature fusion, these methods are all based on single-frame target information; they do not fully exploit the continuous multi-frame pedestrian target information of the video sequence and easily produce unstable matching results.
Summary of the invention
Aiming at the defects of the prior art, the object of the present invention is to provide a surveillance video pedestrian detection and matching method, intended to solve the problem that existing matching methods are all based on single-frame target information and easily produce unstable matching results.
The invention provides a surveillance video pedestrian detection and matching method comprising the following steps:
S1: Select the target image to be matched, and obtain the square-region coordinates of the pedestrian targets by extracting the census transform histogram feature of each single-frame image in the video image sequence;
S2: According to the number b_a of pedestrian targets in frame a of each video image sequence, establish b_a target sequences; starting from frame a+1, successively compute the distance h_c between the square-region center of pedestrian target T_j in the current frame (a+p) and the last target center of each of the b_(a+p-1) target sequences of the previous frame (a+p-1), where a, b_a, j and p are positive integers greater than or equal to 1;
S3: Judge whether the distance h_c is less than a set threshold; if so, go to step S4; if not, go to step S5;
S4: Append the frame-(a+p) pedestrian target T_j to the end of the c-th target sequence, c = 1, ..., b_a;
S5: Create a new target sequence with the frame-(a+p) pedestrian target T_j as its first element;
S6: Extract the gray-level histogram feature and the principal component analysis (PCA) template feature of each frame of every target sequence, and the gray-level histogram feature and PCA template feature of the target image to be matched;
S7: Compute the Bhattacharyya distance between the gray-level histogram feature of each frame of a target sequence and that of the target image to be matched, and take the mean of these distances as the gray-level histogram feature similarity H_i; compute the Euclidean distance between the PCA template feature of the target sequence and that of the target image to be matched, and take it as the PCA template feature similarity P_i;
S8: Obtain the preliminary matching target sequences from the configured iteration width, the similarity difference threshold, and the sorted gray-level histogram feature similarities and PCA template feature similarities, and output the matching target sequence accordingly.
Further, extracting the gray-level histogram feature of each frame of a target sequence in step S6 specifically comprises:
S61: Apply histogram specification to each of the three RGB channels of every frame image of the target sequence to obtain an image with illumination differences removed;
S62: Compute the gray-level histogram of the illumination-corrected image and take it as the gray-level histogram feature of that frame of the target sequence.
Further, the histogram specification uses a mixed Gaussian model with two modes as the specified histogram.
Further, extracting the PCA template feature of a target sequence in step S6 specifically comprises:
Perform principal component analysis on the first m frames of the target sequence: unfold each frame into a row vector A_i of n columns and compute the covariance matrix

C = \frac{1}{m} \sum_{i=1}^{m} (A_i - \bar{A})^T (A_i - \bar{A})

and its eigenvalues and eigenvectors; arrange the n eigenvalues in descending order and take the eigenvectors corresponding to the first k eigenvalues to form the feature space F_{n×k}.
Project the matrix A_{m×n} formed by the first m frame data vectors of the target sequence, and the to-be-matched target vector I_{1×n}, onto the feature space F_{n×k}, obtaining the PCA template feature A_{m×k} of the target sequence and the PCA template feature I_{1×k} of the target to be matched.
Further, in step S8 the gray-level histogram feature similarities and the PCA template feature similarities are each sorted in ascending order.
Further, step S8 specifically comprises:
S81: Initialize the iteration width and the similarity difference threshold;
S82: Take the target sequences that appear simultaneously in the first w sorted gray-level histogram feature similarities and the first w sorted PCA template feature similarities as the preliminary matching target sequences;
S83: When the number of preliminary matching target sequences is 1, output the matching target sequence. When the number is greater than 1, compute the similarity difference value of each preliminary matching target sequence and judge whether it is less than the similarity difference threshold; if so, output the matching target sequence; if not, declare no matching target sequence. When the number is 0, update the iteration width and the similarity difference threshold and return to step S82; if the number is 0 again, declare no matching target sequence.
Further, the range of the iteration width w is 0 < w < N, where N is the number of target sequences.
Further, the initial similarity difference threshold is R = 3 × (σ_1 + σ_2), where σ_1 and σ_2 are the standard deviations of the two normalized feature similarities when the target to be matched and the target sequence are the same target.
Further, the similarity difference value of a preliminary matching target sequence is computed as h = 10 × |P_i − P_0| + |H_i − H_0|, where P_0 and H_0 are the minimum items of the two feature similarity sequences respectively.
By representing the information of a target sequence with target sequence features, the present invention overcomes the instability of single-frame pedestrian target matching. The feature fusion of the invention decides through two rounds of iteration, which effectively eliminates the interference produced because a single feature cannot be robust to all video uncertainties; compared with traditional fusion algorithms based on a Bayesian structure, it has better robustness and matching precision.
Description of drawings
Fig. 1 is the flow chart of the surveillance video pedestrian detection and matching method provided by the embodiment of the present invention;
Fig. 2 is the sub-flow chart of step S8 in the method provided by the embodiment of the present invention;
Fig. 3 is a schematic diagram of the matching results obtained with the method provided by the embodiment of the present invention.
Embodiment
In order to make the objects, technical scheme and advantages of the present invention clearer, the present invention is further elaborated below in conjunction with the drawings and embodiments. It should be understood that the specific embodiments described here only explain the present invention and are not intended to limit it.
As shown in Fig. 1, the surveillance video pedestrian detection and matching method provided by the embodiment of the present invention specifically comprises the following steps:
S1: Select the target image to be matched, and obtain the square-region coordinates of the pedestrian targets by extracting the census transform histogram (Census Transform Histogram, CENTRIST) feature of each single-frame image in the video image sequence;
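The CENTRIST descriptor named in S1 is, in essence, a histogram of census-transform codes. Below is a minimal NumPy sketch of that descriptor; the function names `census_transform` and `centrist` are mine, not the patent's, and the surrounding pedestrian detector that consumes the feature is not reproduced here.

```python
import numpy as np

def census_transform(gray):
    """Census transform: each interior pixel becomes an 8-bit code, one bit
    per 8-neighbour, set when the neighbour is not larger than the centre."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]
    code = np.zeros_like(c)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb <= c).astype(np.int32) << bit
    return code

def centrist(gray):
    """CENTRIST descriptor: normalised 256-bin histogram of census codes."""
    ct = census_transform(gray)
    hist = np.bincount(ct.ravel(), minlength=256).astype(np.float64)
    return hist / hist.sum()
```

On a flat image every neighbour ties with the centre, so every code is 255 and all the histogram mass lands in the last bin.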
S2: According to the number b_a of pedestrian targets in frame a of each video image sequence, establish b_a target sequences; starting from frame a+1, successively compute the distance h_c between the square-region center of pedestrian target T_j in the current frame (a+p) and the last target center of each of the b_(a+p-1) target sequences of the previous frame (a+p-1), where a, b_a, j and p are positive integers greater than or equal to 1;
S3: Judge whether the distance h_c is less than a set threshold; if so, go to step S4; if not, go to step S5. The threshold is an empirical value: by measuring the inter-frame displacement of targets over 10 groups of pedestrian target sequences, the present invention sets it to 60;
S4: Append the frame-(a+p) pedestrian target T_j to the end of the c-th target sequence, c = 1, ..., b_a;
S5: Create a new target sequence with the frame-(a+p) pedestrian target T_j as its first element;
S6: Extract the gray-level histogram feature and the principal component analysis (PCA) template feature of each frame of every target sequence, and the gray-level histogram feature and PCA template feature of the target image to be matched;
S7: Compute the Bhattacharyya distance between the gray-level histogram feature of each frame of a target sequence and that of the target image to be matched, and take the mean of these distances as the gray-level histogram feature similarity H_i; compute the Euclidean distance between the PCA template feature of the target sequence and that of the target image to be matched, and take it as the PCA template feature similarity P_i;
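The two similarity measures of S7 can be sketched as follows. This assumes normalised histograms; the `sqrt(1 - BC)` form of the Bhattacharyya distance is one common convention and the patent does not spell out which variant it uses, so treat the exact formula as an assumption.

```python
import numpy as np

def bhattacharyya(h1, h2):
    """Bhattacharyya distance between two normalised histograms
    (0 for identical distributions, larger = less similar)."""
    bc = float(np.sum(np.sqrt(h1 * h2)))  # Bhattacharyya coefficient
    return float(np.sqrt(max(0.0, 1.0 - bc)))

def histogram_similarity(seq_hists, query_hist):
    """H_i: mean Bhattacharyya distance over all frames of the sequence."""
    return float(np.mean([bhattacharyya(h, query_hist) for h in seq_hists]))

def pca_similarity(seq_template, query_template):
    """P_i: Euclidean distance between sequence and query PCA templates."""
    return float(np.linalg.norm(np.asarray(seq_template) - np.asarray(query_template)))
```

Smaller H_i and P_i therefore mean a better match, which is why step S8 sorts both lists in ascending order.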
S8: Obtain the preliminary matching target sequences from the configured iteration width, the similarity difference threshold, and the sorted gray-level histogram feature similarities and PCA template feature similarities, and output the matching target sequence accordingly.
The algorithm provided by the embodiment of the present invention achieves good matching precision for the same pedestrian target under different illumination environments, viewing angles and postures, and is more accurate than matching with a single feature.
In the embodiment of the invention, extracting the gray-level histogram feature of each frame of a target sequence in step S6 specifically comprises:
S61: Apply histogram specification to each of the three RGB channels of every frame image of the target sequence to obtain an image with illumination differences removed;
S62: Compute the gray-level histogram of the illumination-corrected image and take it as the gray-level histogram feature of that frame of the target sequence.
In the embodiment of the invention, the histogram specification uses a mixed Gaussian model with two modes as the specified histogram:

S_i = a_1 \frac{1}{\sqrt{2\pi\sigma_1^2}} e^{-\frac{(i/256-\mu_1)^2}{2\sigma_1^2}} + a_2 \frac{1}{\sqrt{2\pi\sigma_2^2}} e^{-\frac{(i/256-\mu_2)^2}{2\sigma_2^2}},

T = \sum_{j=0}^{255} \left( a_1 \frac{1}{\sqrt{2\pi\sigma_1^2}} e^{-\frac{(j/256-\mu_1)^2}{2\sigma_1^2}} + a_2 \frac{1}{\sqrt{2\pi\sigma_2^2}} e^{-\frac{(j/256-\mu_2)^2}{2\sigma_2^2}} \right),

G_i = S_i / T,

where a_1, a_2, μ_1, μ_2, σ_1 and σ_2 are determined by maximum likelihood estimation; in the embodiment of the present invention a_1 = 1, a_2 = 0.07, μ_1 = 0.15, μ_2 = 0.75, σ_1 = σ_2 = 0.05.
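Building the two-mode Gaussian target histogram and applying histogram specification might look like this in NumPy. This is a sketch using the embodiment's parameter values; the CDF-matching lookup table is the standard histogram-specification construction (applied per RGB channel in the patent), and the function names are mine.

```python
import numpy as np

def target_hist(a=(1.0, 0.07), mu=(0.15, 0.75), sigma=(0.05, 0.05)):
    """Two-mode Gaussian mixture target histogram G_i over 256 gray levels,
    using the embodiment's parameters, normalised to sum to 1."""
    i = np.arange(256) / 256.0
    s = sum(ak / np.sqrt(2 * np.pi * sk ** 2) * np.exp(-(i - mk) ** 2 / (2 * sk ** 2))
            for ak, mk, sk in zip(a, mu, sigma))
    return s / s.sum()

def specify(gray, g):
    """Histogram specification: remap gray levels so the image's CDF
    matches the CDF of the target histogram g."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    src_cdf = np.cumsum(hist / hist.sum())
    tgt_cdf = np.cumsum(g)
    lut = np.searchsorted(tgt_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[gray]
```

Running `specify` on each RGB channel yields the illumination-corrected image of step S61, whose gray-level histogram then serves as the per-frame feature.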
In the embodiment of the invention, extracting the PCA template feature of a target sequence in step S6 specifically comprises: perform principal component analysis on the first m frames of the target sequence, unfolding each frame into a row vector A_i of n columns and computing the covariance matrix

C = \frac{1}{m} \sum_{i=1}^{m} (A_i - \bar{A})^T (A_i - \bar{A})

and its eigenvalues and eigenvectors; arrange the n eigenvalues in descending order and take the eigenvectors corresponding to the first k eigenvalues to form the feature space F_{n×k}, where k ranges over 1, ..., n. Project the matrix A_{m×n} formed by the first m frame data vectors of the target sequence, and the to-be-matched target vector I_{1×n}, onto the feature space F_{n×k}, obtaining the PCA template feature A_{m×k} of the target sequence and the PCA template feature I_{1×k} of the target to be matched.
In the embodiment of the present invention, feature extraction based on a single-frame target image is often directly affected by the target detection result; step S6 instead performs the subsequent matching with features extracted from the target sequence, which effectively eliminates the instability of single-frame target features and thus improves the final matching precision.
In the embodiment of the invention, in step S8 the gray-level histogram feature similarities and the PCA template feature similarities are each sorted in ascending order.
As shown in Fig. 2, in the embodiment of the invention, step S8 specifically comprises:
S81: Initialize the iteration width w and the similarity difference threshold R;
S82: Take the target sequences that appear simultaneously in the first w sorted gray-level histogram feature similarities and the first w sorted PCA template feature similarities as the preliminary matching target sequences;
S83: When the number of preliminary matching target sequences is 1, output the matching target sequence. When the number is greater than 1, compute the similarity difference value of each preliminary matching target sequence and judge whether it is less than the similarity difference threshold; if so, output the matching target sequence; if not, declare no matching target sequence. When the number is 0, update the iteration width and the similarity difference threshold and return to step S82; if the number is 0 again, declare no matching target sequence.
In the embodiment of the invention, the range of the iteration width w is 0 < w < N, where N is the number of target sequences.
In the embodiment of the invention, the similarity difference threshold is R = 3 × (σ_1 + σ_2), where σ_1 and σ_2 are the standard deviations of the two normalized feature similarities when the target to be matched and the target sequence are the same target. Based on a sample of 20 target sequences and 10 targets to be matched, the computed values are σ_1 = 0.0484 and σ_2 = 0.0498, and R is initialized here to 0.03.
In the embodiment of the invention, the similarity difference value of a preliminary matching target sequence is computed as h = 10 × |P_i − P_0| + |H_i − H_0|, where P_0 and H_0 are the minimum items of the two feature similarity sequences respectively.
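Steps S81 to S83 can be sketched as below. The patent leaves open which candidate's difference value is tested when several remain, and exactly how w and R are updated on the retry; this sketch tests the candidate with the smallest difference value, widens w by one, and doubles R, all of which are assumptions of mine.

```python
import numpy as np

def iterative_match(H, P, w, R, max_rounds=2):
    """Iterative fusion decision (assumed reading of S81-S83).
    H[i], P[i] are the histogram and PCA similarities of target sequence i
    (smaller = more similar). Returns the index of the matched target
    sequence, or None when no match is declared."""
    H = np.asarray(H, dtype=float)
    P = np.asarray(P, dtype=float)
    H0, P0 = H.min(), P.min()  # minimum items of the two similarity lists
    for _ in range(max_rounds):
        top_h = set(np.argsort(H)[:w])           # first w by histogram similarity
        top_p = set(np.argsort(P)[:w])           # first w by PCA similarity
        cand = sorted(top_h & top_p)             # preliminary matching sequences
        if len(cand) == 1:
            return cand[0]
        if len(cand) > 1:
            # similarity difference value h = 10|P_i - P_0| + |H_i - H_0|
            diffs = {i: 10 * abs(P[i] - P0) + abs(H[i] - H0) for i in cand}
            best = min(diffs, key=diffs.get)
            return best if diffs[best] < R else None
        w, R = w + 1, R * 2                      # assumed update rule, then retry
    return None
```

With w = 1 the intersection of the two top-1 lists either pins down a single sequence immediately or triggers the widened second round.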
The matching results obtained with the surveillance video pedestrian detection and matching method of the embodiment are shown in Fig. 3. In the experiment, 6 targets to be matched were selected from chosen frames; pedestrian target detection and the inter-frame target association step then produced 20 target sequences (each target sequence is represented in the figure by one of its frame images); the algorithm of the invention then finds, among the target sequences, the item corresponding to each target to be matched. The matching results show that although a small number of mismatches exist, the effect satisfies the requirements.
The iterative fusion algorithm based on sorted feature similarities comprehensively uses the decision results of the two features through the two sorted similarity lists, avoiding the choice of feature thresholds required by traditional matching algorithms, and thus excludes the interference brought by the overall drift of feature similarities under different environments. The two feature similarity lists are then each normalized, and the relative difference between each to-be-matched target's two feature similarities and the optimum of the corresponding sorted list is used for a further judgment, improving the accuracy.
Those skilled in the art will readily understand that the above are only preferred embodiments of the present invention and are not intended to limit it; any modifications, equivalent replacements and improvements made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (9)

1. A surveillance video pedestrian detection and matching method, characterized by comprising the following steps:
S1: Select the target image to be matched, and obtain the square-region coordinates of the pedestrian targets by extracting the census transform histogram feature of each single-frame image in the video image sequence;
S2: According to the number b_a of pedestrian targets in frame a of each video image sequence, establish b_a target sequences; starting from frame a+1, successively compute the distance h_c between the square-region center of pedestrian target T_j in the current frame (a+p) and the last target center of each of the b_(a+p-1) target sequences of the previous frame (a+p-1), where a, b_a, j and p are positive integers greater than or equal to 1;
S3: Judge whether the distance h_c is less than a set threshold; if so, go to step S4; if not, go to step S5;
S4: Append the frame-(a+p) pedestrian target T_j to the end of the c-th target sequence, c = 1, ..., b_a;
S5: Create a new target sequence with the frame-(a+p) pedestrian target T_j as its first element;
S6: Extract the gray-level histogram feature and the principal component analysis (PCA) template feature of each frame of every target sequence, and the gray-level histogram feature and PCA template feature of the target image to be matched;
S7: Compute the Bhattacharyya distance between the gray-level histogram feature of each frame of a target sequence and that of the target image to be matched, and take the mean of these distances as the gray-level histogram feature similarity H_i; compute the Euclidean distance between the PCA template feature of the target sequence and that of the target image to be matched, and take it as the PCA template feature similarity P_i;
S8: Obtain the preliminary matching target sequences from the configured iteration width, the similarity difference threshold, and the sorted gray-level histogram feature similarities and PCA template feature similarities, and output the matching target sequence accordingly.
2. the method for claim 1, is characterized in that, the grey level histogram feature of extracting each frame of target sequence in step S6 is specially:
S61: three passages of each two field picture RGB of target sequence are done respectively the image that illumination difference was processed and obtained to eliminate to histogram specification;
S62: calculate the grey level histogram of the image of described elimination illumination difference, and with the grey level histogram feature of described grey level histogram as described each frame of target sequence.
3. The method as claimed in claim 2, characterized in that the histogram specification uses a mixed Gaussian model with two modes as the specified histogram.
4. the method for claim 1, is characterized in that, the principal component analysis (PCA) template characteristic of extracting target sequence in step S6 is specially:
Before utilizing target sequence, the m frame is done principal component analysis (PCA), each frame is launched into the vectorial A of the n of delegation row i, calculate covariance matrix
Figure FDA00002815132000021
Eigenwert and proper vector, with n eigenwert according to arranging from big to small and get front k eigenwert characteristic of correspondence vector constitutive characteristic space F N * k
The matrix A that respectively m frame data vector before target sequence is formed M * nWith object vector I to be matched 1 * nAll project to described feature space F N * kAnd obtain the principal component analysis (PCA) template characteristic A of target sequence M * kPrincipal component analysis (PCA) template characteristic I with target to be matched 1 * kThe scope of k is 1,2 ... n.
5. the method for claim 1, is characterized in that, in step S8, grey level histogram characteristic similarity and principal component analysis (PCA) template characteristic similarity sorted according to from small to large order respectively.
6. The method as claimed in claim 5, characterized in that step S8 specifically comprises:
S81: Initialize the iteration width and the similarity difference threshold;
S82: Take the target sequences that appear simultaneously in the first w sorted gray-level histogram feature similarities and the first w sorted PCA template feature similarities as the preliminary matching target sequences;
S83: When the number of preliminary matching target sequences is 1, output the matching target sequence;
when the number is greater than 1, compute the similarity difference value of each preliminary matching target sequence and judge whether it is less than the similarity difference threshold; if so, output the matching target sequence; if not, declare no matching target sequence;
when the number is 0, update the iteration width and the similarity difference threshold and return to step S82; if the number is 0 again, declare no matching target sequence.
7. The method as claimed in claim 6, characterized in that the range of the iteration width w is 0 < w < N, where N is the number of target sequences.
8. The method as claimed in claim 6, characterized in that the initial similarity difference threshold is R = 3 × (σ_1 + σ_2), where σ_1 and σ_2 are the standard deviations of the two normalized feature similarities when the target to be matched and the target sequence are the same target.
9. The method as claimed in claim 6, characterized in that the similarity difference value of a preliminary matching target sequence is computed as h = 10 × |P_i − P_0| + |H_i − H_0|, where P_0 and H_0 are the minimum items of the two feature similarity sequences respectively.
CN201310043922.6A 2013-02-04 2013-02-04 A surveillance video pedestrian detection and matching method Expired - Fee Related CN103164693B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310043922.6A CN103164693B (en) 2013-02-04 2013-02-04 A surveillance video pedestrian detection and matching method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310043922.6A CN103164693B (en) 2013-02-04 2013-02-04 A surveillance video pedestrian detection and matching method

Publications (2)

Publication Number Publication Date
CN103164693A true CN103164693A (en) 2013-06-19
CN103164693B CN103164693B (en) 2016-01-13

Family

ID=48587765

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310043922.6A Expired - Fee Related CN103164693B (en) 2013-02-04 2013-02-04 A surveillance video pedestrian detection and matching method

Country Status (1)

Country Link
CN (1) CN103164693B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090226044A1 (en) * 2008-03-07 2009-09-10 The Chinese University Of Hong Kong Real-time body segmentation system
CN101901486A (en) * 2009-11-17 2010-12-01 华为技术有限公司 Method for detecting moving target and device thereof
CN102663366A (en) * 2012-04-13 2012-09-12 中国科学院深圳先进技术研究院 Method and system for identifying pedestrian target


Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103310204B (en) * 2013-06-28 2016-08-10 中国科学院自动化研究所 Feature based on increment principal component analysis mates face tracking method mutually with model
CN103310204A (en) * 2013-06-28 2013-09-18 中国科学院自动化研究所 Feature and model mutual matching face tracking method based on increment principal component analysis
CN105894020B (en) * 2016-03-30 2019-04-12 重庆大学 Specific objective candidate frame generation method based on Gauss model
CN105894020A (en) * 2016-03-30 2016-08-24 重庆大学 Specific target candidate box generating method based on gauss model
CN105931266A (en) * 2016-04-15 2016-09-07 张志华 Guide system
CN107341430B (en) * 2016-04-28 2021-01-29 鸿富锦精密电子(天津)有限公司 Monitoring device, monitoring method and counting method
CN107341430A (en) * 2016-04-28 2017-11-10 鸿富锦精密电子(天津)有限公司 Monitoring device, monitoring method and counting method
CN109961455A (en) * 2017-12-22 2019-07-02 杭州萤石软件有限公司 Target detection method and device
CN109961455B (en) * 2017-12-22 2022-03-04 杭州萤石软件有限公司 Target detection method and device
US11367276B2 (en) 2017-12-22 2022-06-21 Hangzhou Ezviz Software Co., Ltd. Target detection method and apparatus
CN110163029A (en) * 2018-02-11 2019-08-23 中兴飞流信息科技有限公司 A kind of image-recognizing method, electronic equipment and computer readable storage medium
CN110163029B (en) * 2018-02-11 2021-03-30 中兴飞流信息科技有限公司 Image recognition method, electronic equipment and computer readable storage medium
CN113542725A (en) * 2020-04-22 2021-10-22 百度在线网络技术(北京)有限公司 Video auditing method, video auditing device and electronic equipment
CN113542725B (en) * 2020-04-22 2023-09-05 百度在线网络技术(北京)有限公司 Video auditing method, video auditing device and electronic equipment
CN112380970A (en) * 2020-11-12 2021-02-19 常熟理工学院 Video target detection method based on local area search
CN112380970B (en) * 2020-11-12 2022-02-11 常熟理工学院 Video target detection method based on local area search
CN118116034A (en) * 2024-04-26 2024-05-31 厦门四信通信科技有限公司 Pedestrian retrograde detection method, device, equipment and medium based on AI visual analysis

Also Published As

Publication number Publication date
CN103164693B (en) 2016-01-13

Similar Documents

Publication Publication Date Title
CN103164693B (en) A kind of monitor video pedestrian detection matching process
Xiong et al. Spatiotemporal modeling for crowd counting in videos
CN111797653B (en) Image labeling method and device based on high-dimensional image
US20170345181A1 (en) Video monitoring method and video monitoring system
CN101329765B (en) Method for fusing target matching characteristics of multiple video cameras
CN104598883B (en) Target knows method for distinguishing again in a kind of multiple-camera monitoring network
CN104850850B (en) 2017-07-28 Binocular stereo vision image feature extraction method combining shape and color
CN107506700B (en) Pedestrian re-identification method based on generalized similarity measurement learning
CN107330397B (en) Pedestrian re-identification method based on large-interval relative distance measurement learning
US20150334267A1 (en) Color Correction Device, Method, and Program
CN104881637A (en) Multimode information system based on sensing information and target tracking and fusion method thereof
CN102521565A (en) Garment identification method and system for low-resolution video
CN102243765A (en) Multi-camera-based multi-objective positioning tracking method and system
CN103735269B (en) 2015-07-22 Height measurement method based on video multi-target tracking
CN103971386A (en) Method for foreground detection in dynamic background scenario
CN102706274B (en) System for accurately positioning mechanical part by machine vision in industrially-structured scene
CN105005760A (en) Pedestrian re-identification method based on finite mixture model
CN104700089A (en) Face identification method based on Gabor wavelet and SB2DLPP
Jelača et al. Vehicle matching in smart camera networks using image projection profiles at multiple instances
Lejbolle et al. Attention in multimodal neural networks for person re-identification
CN103150575A (en) Real-time three-dimensional unmarked human body gesture recognition method and system
Xiao et al. Structuring visual words in 3D for arbitrary-view object localization
CN110705363B (en) Commodity specification identification method and device
Oliveira et al. Vehicle re-identification: exploring feature fusion using multi-stream convolutional networks
Lee et al. An intelligent image-based customer analysis service

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160113

Termination date: 20170204