CN109325465A - Gait library construction system and method under multi-camera environment - Google Patents

Gait library construction system and method under multi-camera environment

Info

Publication number
CN109325465A
Authority
CN
China
Prior art keywords
camera
gait
information
target
video camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811206427.1A
Other languages
Chinese (zh)
Other versions
CN109325465B (en)
Inventor
李茂贞
黄正文
张纬栋
王德勇
师文喜
敖乃翔
赵学义
陈东旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yangquan Jinyun Software Development Center
Xinjiang Lianhai Powerise Mdt Infotech Ltd
China Electronics Technology Group Corp CETC
Electronic Science Research Institute of CETC
Original Assignee
Yangquan Jinyun Software Development Center
Xinjiang Lianhai Powerise Mdt Infotech Ltd
China Electronics Technology Group Corp CETC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yangquan Jinyun Software Development Center, Xinjiang Lianhai Powerise Mdt Infotech Ltd and China Electronics Technology Group Corp CETC
Priority to CN201811206427.1A
Publication of CN109325465A
Application granted
Publication of CN109325465B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G06V40/23: Recognition of whole body movements, e.g. for sport training
    • G06V40/25: Recognition of walking or running movements, e.g. gait recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/292: Multi-camera tracking

Abstract

The present invention relates to a gait library construction method and system under a multi-camera environment. The method comprises the steps of: determining the number of cameras and the coverage angle of each camera over the target, so that the target is fully covered by multiple cameras; each camera extracting the target's gait information separately, and the cameras mutually compensating for the information each of them lacks; after every camera has been compensated, adjusting the weight of each camera to obtain a compensated generic feature value, and using the compensated generic feature value to extract the target's gait information and form the gait library. With the method and system of the invention, a more accurate gait library can be established, expanding the application scenarios of the gait library.

Description

Gait library construction system and method under multi-camera environment
Technical field
The present invention relates to the technical field of gait recognition, and in particular to a gait library construction system and method under a multi-camera environment.
Background art
Traditional video analysis identifies a person by extracting facial features. However, a tracked target may deliberately cover the face while committing a crime, avoid looking directly at the camera, or stay far away from the camera so that the captured video is blurred, making the identity difficult to recognize. Cameras can also capture body gait information, including height and posture, behavioural habits, limb defects and the like. At present, the extraction and recognition of gait features and the design of gait libraries are still at an early stage and mainly involve gait feature extraction with a single camera. However, owing to limitations such as installation position and angle, key components of the gait data collected by a single camera are easily missing, which greatly reduces the accuracy of gait recognition. A gait library generated from target gait features collected by a single camera cannot be applied effectively to other camera scenes, so the applicability of the gait library is severely restricted.
Summary of the invention
The purpose of the present invention is to provide a gait library construction system and method under a multi-camera environment.
In order to achieve the above object of the invention, embodiments of the present invention provide the following technical solutions:
In one aspect, the present invention provides a gait library construction method under a multi-camera environment, comprising the following steps:
determining the number of cameras and the coverage angle of each camera over the target, so that the target is fully covered by multiple cameras;
each camera extracting the target's gait information separately, and mutually compensating for the information each camera lacks;
after every camera has been compensated, adjusting the weight of each camera to obtain a compensated generic feature value, and using the compensated generic feature value to extract the target's gait information to form the gait library.
Further, the step in which each camera extracts the target's gait information separately and the cameras mutually compensate for the information each of them lacks comprises:
step 2, arbitrarily selecting one camera to perform a preliminary extraction of the target's gait information;
step 3, based on the gait information extracted in step 2, analysing which information is missing from the gait information;
step 4, for the missing information, compensating with data from the other cameras;
step 5, reselecting another camera to perform a preliminary extraction of the target's gait information, and repeating steps 3 to 4 until every camera has been compensated.
In another aspect, an embodiment of the present invention provides a gait library construction system under a multi-camera environment, comprising the following modules:
a camera determining module, for determining the number of cameras and the coverage angle of each camera over the target, so that the target is fully covered by multiple cameras;
a single-camera compensation module, for compensating for the missing information of each camera;
a multi-camera comprehensive compensation module, for adjusting the weight of each camera to obtain a compensated generic feature value, and using the compensated generic feature value to extract the target's gait information to form the gait library.
Further, the single-camera compensation module comprises the following submodules:
a first extraction submodule, for performing, for each camera, a preliminary extraction of the target's gait information;
an analysis submodule, for analysing the gait information extracted by the first extraction submodule and obtaining the information missing from the gait information;
a compensation submodule, for compensating the missing information obtained by the analysis submodule with data from the other cameras.
Compared with the prior art, the system and method provided by the present invention have the following advantages:
With the method and system of the invention, a more accurate gait library can be established; it provides a highly robust data source for research on view-independent image processing algorithms, realizes a generic gait library design that is independent of viewing angle, and enables gait recognition under arbitrary camera deployments.
Brief description of the drawings
In order to explain the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only certain embodiments of the present invention and are therefore not to be regarded as limiting its scope. For those of ordinary skill in the art, other relevant drawings can also be obtained from these drawings without creative effort.
Fig. 1 is a flowchart of the gait library construction method under a multi-camera environment in the embodiment.
Fig. 2 is a schematic diagram of a scene covered by three cameras.
Fig. 3 is a schematic block diagram of the gait library construction system under a multi-camera environment in the embodiment.
Specific embodiment
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. The components of the embodiments of the present invention, as generally described and illustrated in the drawings herein, may be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments of the present invention provided in the drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, the gait library construction method under a multi-camera environment provided in this embodiment comprises the following steps:
Step 1: determine the number of cameras and the coverage angle of each camera over the target, so that the target is fully covered by multiple cameras. As shown in Fig. 2, taking three cameras as an example, each camera covers an angle of 120 degrees; the central 60 degrees of each view, covering a region of 15 square metres, is taken as the target activity region (the red area in Fig. 2).
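The patent text does not specify the camera layout beyond these angles. As an illustration only, the following sketch assumes three cameras placed on a circle around the activity region, 120 degrees apart and facing its centre, and checks which cameras cover a given target position within their central 60-degree sector:

```python
import math

# Assumed layout (not stated in the patent): three cameras on a circle around
# the activity region, facing its centre, each with a 120-degree field of view.
# A position counts as well covered by a camera when it lies inside that
# camera's central 60-degree sector.
CAMERA_ANGLES_DEG = [90, 210, 330]   # assumed angular positions of the cameras
CAMERA_RADIUS = 5.0                  # assumed distance from the region centre (m)

def covering_cameras(x, y, central_sector_deg=60):
    """Return indices of cameras whose central sector contains the point (x, y)."""
    covered = []
    for idx, pos_angle in enumerate(CAMERA_ANGLES_DEG):
        cx = CAMERA_RADIUS * math.cos(math.radians(pos_angle))
        cy = CAMERA_RADIUS * math.sin(math.radians(pos_angle))
        to_centre = math.degrees(math.atan2(-cy, -cx))        # camera looks at the centre
        to_target = math.degrees(math.atan2(y - cy, x - cx))  # direction to the target
        diff = abs((to_target - to_centre + 180) % 360 - 180)
        if diff <= central_sector_deg / 2:
            covered.append(idx)
    return covered

# A target near the centre of the region should be covered by all three cameras.
print(covering_cameras(0.5, -0.3))   # -> [0, 1, 2]
```

Full coverage in the sense of step 1 then means that every position in the activity region is returned by at least one camera.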
Step 2: arbitrarily select one camera and perform a preliminary extraction of the target's gait information, i.e. continuously track the target's moving-state image variables. For example, as shown in Fig. 2, extract the moving-state image variables of a person after entering the scene, including key components such as clothing, height and pace, and auxiliary components such as illumination and background. A key component is information whose loss would cause the target recognition accuracy to drop sharply.
Step 3: based on the gait information extracted in step 2, analyse the distribution of missing key components in the gait information, i.e. at which time nodes and which positions key components are missing. Key components may be missing because of occlusion, a change of moving direction, or similar causes. A complete information set requires a stable, continuous contribution from every moving-state image variable; when a key-component variable can no longer be supplied continuously to support recognition, it is regarded as missing. For example, when information such as clothing or pace loses its stable recognition output, recognition accuracy suffers.
Step 4: according to the missing-key-component distribution analysed in step 3, use the compensation calculation system to synchronize time nodes and reconstruct the missing data of the faulty camera from the data of the normal cameras. Here the faulty camera is the camera selected in step 2, and the normal cameras are the cameras other than the faulty one. Reconstruction is a process of selection and filling-in that replaces the faulty camera's missing information; in other words, based on the information acquired at the same time node, the information collected by the normal cameras is used to fill in the key components missing from the faulty camera.
Specifically, this step comprises the following sub-steps:
a) Continuously detect the characteristic value of each camera. The characteristic value of each camera is calculated by the formula below, where n indexes the information sources and N is their number. An information source is the source of one characteristic signal; it is defined per image variable of the moving target (for example clothing or pace), and the source set consists of the quantized numerical descriptions of these variables in the image. The information-source weight is a weighted expression of each information source's contribution to the single camera's final characteristic value. The weights are determined from historically optimal allocation values: every target environment has its own optimal values, which can be learned from the distribution of historical data. For example, if only two information sources of one camera are considered (pace and clothing), with weights 0.4 and 0.6 and values α and β respectively (the per-source values in the formula), the characteristic value of that single camera is 0.4α + 0.6β.
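The formula itself appears as an image in the original publication and is not reproduced in this text. A reconstruction consistent with the verbal description and the 0.4α + 0.6β example, offered as an assumption rather than the patent's exact notation, is:

```latex
F_{\mathrm{cam}} = \sum_{n=1}^{N} w_n \, x_n , \qquad \sum_{n=1}^{N} w_n = 1
```

where w_n is the weight of information source n and x_n its quantized value.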
b) Set the variation-threshold set of the characteristic values. The variation threshold of a single camera's characteristic value is calculated from historical characteristic values, as expressed by the formula below, where the historical mean is a threshold record based on past data for similar scenes. Within the detection window (-t/2, t/2), the variation of an information source relative to its normal working state, caused by the various factors that make a source drop out, is recorded into the historical-mean library and provides the basis of comparison for the record produced in step a). The calculation is performed as follows:
The quantity computed denotes the cumulative variation, per unit time, of the value of information source n within the past window (-t/2, t/2). The threshold set records the variation thresholds of all cameras, and one camera corresponds to multiple variation thresholds; whenever the variation of any information source causes tracking to fail, the minimum of the cumulative variation is recorded once.
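The formula for this calculation is likewise an image in the original and absent from this text. One reconstruction of the cumulative variation of information source n over the detection window, stated purely as an assumption, is:

```latex
\Delta_n = \sum_{\tau \in (-t/2,\; t/2)} \bigl| x_n(\tau) - x_n(\tau - 1) \bigr|
```

with the variation threshold for source n then taken from the historical mean of Δ_n recorded for similar scenes.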
c) Compare the tracking detection result of step a) (the detection result includes the characteristic value itself and its variation range) with the variation thresholds (comparing against every threshold in the threshold set, or against any single one, is acceptable). If the characteristic value of a single camera exceeds the variation-threshold set (that is, the camera's characteristic value is greater than the variation threshold it is compared with), the target gait data collected by the multiple cameras is processed collaboratively: based on the synchronized detection window (-t/2, t/2), the weights of the faulty camera's information sources are corrected so that its characteristic value output stays stable and accurate. Stable here means that the change of the characteristic value between two adjacent time nodes remains within a certain range. This collaborative processing is a process of repeatedly calibrating the single faulty camera.
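As an illustration of sub-steps a) to c), the sketch below assumes per-source quantized values reported over time and a simple reweighting rule that pulls a faulty camera's source weights toward those of the normal cameras; the function names and the correction rule are assumptions, not the patent's exact compensation calculation system:

```python
from typing import Dict, List

def characteristic_value(weights: Dict[str, float], values: Dict[str, float]) -> float:
    """Weighted sum of quantized information-source values (e.g. 0.4*alpha + 0.6*beta)."""
    return sum(weights[src] * values[src] for src in weights)

def variation_cumulant(history: List[Dict[str, float]], src: str) -> float:
    """Cumulative step-to-step variation of one information source over the window."""
    return sum(abs(curr[src] - prev[src]) for prev, curr in zip(history, history[1:]))

def calibrate_faulty_camera(weights: Dict[str, float],
                            history: List[Dict[str, float]],
                            thresholds: Dict[str, float],
                            normal_weights: List[Dict[str, float]],
                            rate: float = 0.5) -> Dict[str, float]:
    """If any source exceeds its variation threshold, pull the faulty camera's
    weights toward the average weights of the normal cameras and renormalize."""
    exceeded = [src for src in weights
                if variation_cumulant(history, src) > thresholds[src]]
    if not exceeded:
        return weights
    corrected = dict(weights)
    for src in exceeded:
        reference = sum(w[src] for w in normal_weights) / len(normal_weights)
        corrected[src] = (1 - rate) * corrected[src] + rate * reference
    total = sum(corrected.values())
    return {src: w / total for src, w in corrected.items()}
```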
Another camera is then reselected for a preliminary extraction of the target's gait information, and steps 3 to 4 are repeated, so that the missing key components of every single camera are compensated and repaired.
Step 5: use the compensation calculation system to process the information of the multiple cameras collaboratively, and adjust each camera's contribution weight to the final generic feature value, so as to obtain the compensation-corrected gait generic feature value. By adjusting the weights of the multiple cameras (one camera is represented by one channel) and letting the normal cameras provide as much effective output as possible (effective output meaning the image-variable variations that affect recognition accuracy), the compensated generic feature value is obtained, bringing the moving target closer to accurate recognition. The adjustment amount is determined by the degree of key-component missing found in step 4; the compensation calculation system of step 4 reconstructs the missing information, but the contribution weight of the reconstructed part to the final gait generic feature value must be recalculated in step 5. The generic feature value is the value finally used to support recognition, also called the compensated generic feature value. Its calculation formula is given below, where m denotes the number of channels, i.e. the number of cameras, i denotes the i-th channel (one channel per camera), and j denotes the j-th information source.
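The generic-feature-value formula is also an image in the original and is not reproduced here. A reconstruction consistent with the description (m weighted channels, each aggregating its own information sources), offered as an assumption, is:

```latex
F_{\mathrm{generic}} = \sum_{i=1}^{m} c_i \sum_{j=1}^{N_i} w_{ij}\, x_{ij}, \qquad \sum_{i=1}^{m} c_i = 1
```

where c_i is the contribution weight of channel (camera) i, and w_ij and x_ij are the weight and quantized value of information source j in channel i.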
It should be noted that the detection result of step a) that is compared with the threshold includes both the characteristic value itself and its variation-range information; therefore the stability check here can also be performed by comparing the accumulated variation-range information with the threshold.
Step 6: use the generic feature value calculated in step 5 to extract the target's gait information and form the gait library. The generic feature value allows the related data capable of completing a correct recognition process to be modelled associatively; it is equivalent to a data index, so the target's gait information can be retrieved with the generic feature value to form the gait library. The characteristic value provides index information detached from the time axis and acts as an information label comparable to a digital fingerprint. Association modelling is realized by classifying and aggregating the image-variable information of the acquired targets in the capture environment according to the characteristic values they respectively match. The process of association modelling with characteristic values belongs to the prior art and is not elaborated here. A single data set contains the moving-state image variables of a target (e.g. a person) after entering the scene (key components such as clothing, height and pace, and auxiliary components such as illumination and background). Besides a large number of single data sets, the overall database also contains information about the target scene, such as the size of the capture area and the acquisition equipment.
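The sketch below illustrates the idea of the gait library as an index keyed by the compensated generic feature value; the class names, fields and binning scheme are assumptions for illustration, since the patent only states that the feature value acts like a data index:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class GaitRecord:
    person_id: str
    image_variables: Dict[str, float]   # clothing, height, pace, illumination, ...
    scene_info: Dict[str, str]          # capture-area size, acquisition equipment, ...

@dataclass
class GaitLibrary:
    # Records are aggregated under a quantized feature-value key (assumed binning).
    index: Dict[int, List[GaitRecord]] = field(default_factory=dict)

    def add(self, generic_feature_value: float, record: GaitRecord, bin_width: float = 0.05):
        key = round(generic_feature_value / bin_width)
        self.index.setdefault(key, []).append(record)

    def lookup(self, generic_feature_value: float, bin_width: float = 0.05) -> List[GaitRecord]:
        return self.index.get(round(generic_feature_value / bin_width), [])

library = GaitLibrary()
library.add(0.73, GaitRecord("target-001", {"pace": 0.8, "clothing": 0.6}, {"area": "15 m^2"}))
print(len(library.lookup(0.74)))  # same 0.05-wide bin as 0.73 -> 1
```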
In the scene shown in Fig. 2, a person enters the coverage area head-on from the side of camera A alone, and the information obtained by camera A is not sufficient to provide all the characteristic values required for recognition. At that moment the outputs of cameras B and C are requisitioned and fed into the compensation calculation system, and B and C together provide the critically missing characteristic values. Within the synchronized multi-camera coverage area, the characteristic values needed for recognition analysis that no single camera can capture completely are corrected and supported by the compensation system, yielding a high-precision recognition result.
Referring to Fig. 3, based on the same inventive concept, this embodiment further provides a gait library construction system under a multi-camera environment, comprising the following modules:
a camera determining module, for determining the number of cameras and the coverage angle of each camera over the target, so that the target is fully covered by multiple cameras;
a single-camera compensation module, for compensating for the missing information of each camera;
a multi-camera comprehensive compensation module, for adjusting the weight of each camera to obtain a compensated generic feature value, and using the compensated generic feature value to extract the target's gait information to form the gait library.
The single-camera compensation module comprises the following submodules:
a first extraction submodule, for performing, for each camera, a preliminary extraction of the target's gait information;
an analysis submodule, for analysing the gait information extracted by the first extraction submodule and obtaining the information missing from the gait information;
a compensation submodule, for compensating the missing information obtained by the analysis submodule with data from the other cameras.
The compensation submodule performs compensation in the following way:
step a), continuously detecting the characteristic value of each camera at each time node;
step b), setting the variation-threshold set of the characteristic values;
step c), comparing the characteristic value detected in step a) with the variation thresholds of step b); if the detected characteristic value exceeds the range of the variation-threshold set, adjusting the weighting factor of each camera, and repeating steps a) to c) until the characteristic value of that camera is stable.
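Read as software, the modules and submodules above map onto a structure like the following skeleton; the class and method names are illustrative assumptions, since the patent defines the modules only by their function:

```python
class CameraDeterminingModule:
    def determine(self, num_cameras: int, coverage_angles_deg: list) -> None:
        """Choose the number of cameras and their coverage angles so that the
        target is fully covered by multiple cameras (step 1)."""
        ...

class SingleCameraCompensationModule:
    def compensate(self, faulty_camera, normal_cameras) -> None:
        """Preliminarily extract gait information, analyse missing key components,
        and fill them in with data from the normal cameras (sub-steps a) to c))."""
        ...

class MultiCameraCompensationModule:
    def build_library(self, cameras) -> "GaitLibrary":
        """Adjust per-camera contribution weights, compute the compensated generic
        feature value, and use it to extract gait information into the gait library."""
        ...
```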
The above is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that a person familiar with the technical field can readily conceive within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention.

Claims (6)

1. A gait library construction method under a multi-camera environment, characterized by comprising the following steps:
determining the number of cameras and the coverage angle of each camera over the target, so that the target is fully covered by multiple cameras;
each camera extracting the target's gait information separately, and mutually compensating for the information each camera lacks;
after every camera has been compensated, adjusting the weight of each camera to obtain a compensated generic feature value, and using the compensated generic feature value to extract the target's gait information to form the gait library.
2. The method according to claim 1, characterized in that the step in which each camera extracts the target's gait information separately and the cameras mutually compensate for the information each of them lacks comprises:
step 2, arbitrarily selecting one camera to perform a preliminary extraction of the target's gait information;
step 3, based on the gait information extracted in step 2, analysing which information is missing from the gait information;
step 4, for the missing information, compensating with data from the other cameras;
step 5, reselecting another camera to perform a preliminary extraction of the target's gait information, and repeating steps 3 to 4 until every camera has been compensated.
3. The method according to claim 2, characterized in that step 4 comprises the following steps:
step a), continuously detecting the characteristic value of each camera at each time node;
step b), setting the variation-threshold set of the characteristic values;
step c), comparing the characteristic value detected in step a) with the variation thresholds of step b); if the detected characteristic value exceeds the range of the variation-threshold set, adjusting the weighting factor of each camera, and repeating steps a) to c) until the change of the camera's characteristic value between two adjacent time nodes is within a set range.
4. The method according to claim 3, characterized in that the compensated generic feature value is obtained by the following calculation formula:
5. A gait library construction system under a multi-camera environment, characterized by comprising the following modules:
a camera determining module, for determining the number of cameras and the coverage angle of each camera over the target, so that the target is fully covered by multiple cameras;
a single-camera compensation module, for compensating for the missing information of each camera;
a multi-camera comprehensive compensation module, for adjusting the weight of each camera to obtain a compensated generic feature value, and using the compensated generic feature value to extract the target's gait information to form the gait library.
6. The system according to claim 5, characterized in that the single-camera compensation module comprises the following submodules:
a first extraction submodule, for performing, for each camera, a preliminary extraction of the target's gait information;
an analysis submodule, for analysing the gait information extracted by the first extraction submodule and obtaining the information missing from the gait information;
a compensation submodule, for compensating the missing information obtained by the analysis submodule with data from the other cameras.
CN201811206427.1A 2018-10-17 2018-10-17 Gait library construction system and method under multi-camera environment Active CN109325465B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811206427.1A CN109325465B (en) 2018-10-17 2018-10-17 Gait library construction system and method under multi-camera environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811206427.1A CN109325465B (en) 2018-10-17 2018-10-17 Gait library construction system and method under multi-camera environment

Publications (2)

Publication Number Publication Date
CN109325465A true CN109325465A (en) 2019-02-12
CN109325465B CN109325465B (en) 2021-08-03

Family

ID=65262601

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811206427.1A Active CN109325465B (en) 2018-10-17 2018-10-17 Gait library construction system and method under multi-camera environment

Country Status (1)

Country Link
CN (1) CN109325465B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080174516A1 (en) * 2007-01-24 2008-07-24 Jing Xiao Mosaicing of View Projections
CN104463099A (en) * 2014-11-05 2015-03-25 哈尔滨工程大学 Multi-angle gait recognition method based on semi-supervised coupled measurement of pictures
CN106101535A (en) * 2016-06-21 2016-11-09 北京理工大学 Video stabilization method based on local and global motion disparity compensation
CN108171279A (en) * 2018-01-28 2018-06-15 北京工业大学 Adaptive product Grassmann manifold subspace clustering method for multi-angle video

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111860063A (en) * 2019-04-30 2020-10-30 杭州海康威视数字技术股份有限公司 Gait data construction system, method and device
CN111860063B (en) * 2019-04-30 2023-08-11 杭州海康威视数字技术股份有限公司 Gait data construction system, method and device

Also Published As

Publication number Publication date
CN109325465B (en) 2021-08-03

Similar Documents

Publication Publication Date Title
CN106503615B (en) Indoor human body detecting and tracking and identification system based on multisensor
Zhang et al. Active exposure control for robust visual odometry in HDR environments
US11451192B2 (en) Automated photovoltaic plant inspection system and method
CN106226157B Automatic detection device and method for cracks in concrete structural members
JP2020107349A (en) Object tracking device, object tracking system, and program
CN107357286A Visual positioning and navigation device and method
KR101558467B1 (en) System for revising coordinate in the numerical map according to gps receiver
CN105279480A (en) Method of video analysis
CN105550670A (en) Target object dynamic tracking and measurement positioning method
Dong et al. Robust circular marker localization under non-uniform illuminations based on homomorphic filtering
KR20150021526A (en) Self learning face recognition using depth based tracking for database generation and update
CN105898107B A target object snapshot method and system
CN107024339A A test device and method for a wearable display device
CN106778615B A method, apparatus and service robot for identifying user identity
CN106485751A UAV photographic imaging and data processing method and system applied to pile detection
CN110245592A A method for improving pedestrian re-identification in surveillance scenes
CN109165559A A method and apparatus for generating a trajectory
CN109714530A An aerial camera image focusing method
CN109325465A (en) Gait library under multiple cameras environment constructs system and method
CN107077623A (en) Image quality compensation system and method
CN109035343A A floor relative displacement measurement method based on surveillance cameras
CN110348366A An automated optimal face searching method and device
CN106941580A A method and system for automatic tracking of teacher and student based on a single camera lens
CN110909617B (en) Living body face detection method and device based on binocular vision
CN104104860B (en) Object images detection device and its control method and control program, recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant