CN103366172A - Video-based human body feature extraction method - Google Patents
- Publication number: CN103366172A
- Application number: CN2013101559865A
- Authority: CN (China)
- Prior art keywords: cube; starting point; space; expression; time
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Landscapes
- Image Analysis (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a video-based human body feature extraction method. A group of videos containing human body data is input and a spatio-temporal volume is selected from the video; gradient computation and distance transformation are carried out on the spatio-temporal volume to obtain a spatio-temporal distance-transform volume; and 3D Haar filters are convolved with the spatio-temporal distance-transform volume to extract static and dynamic features. The disclosed scheme improves the discriminative ability of a human body detector and effectively alleviates the low signal-to-noise-ratio problem.
Description
Technical field
The present invention relates to the fields of computer vision and pattern recognition, and more particularly to a human body feature extraction method that jointly exploits appearance information and motion information.
Background art
Human detection is a special case of object detection: the process of determining the positions and sizes of all human bodies in an input image or video sequence. Because of its wide application in intelligent surveillance and in-vehicle driver-assistance safety systems, it has attracted increasing attention from researchers and research institutions. Human body data exhibit significant variation in appearance, viewpoint, and scale, so an effective feature extraction method can markedly improve the robustness of a human detector and reduce its false-alarm rate.
Depending on the data source, human detection can be divided into detection in still images and detection in video sequences. Because human body data vary widely and are affected by the environment, detection in still images is very challenging. In comparison, a video sequence offers more reference information: motion features can be used as auxiliary cues for recognizing humans, or segmentation-based methods can first segment the human region and then run detection on the segmentation result, which can greatly improve detection accuracy and speed. The work of Viola et al., Dalal et al., and Wojek et al. all shows that combining static appearance features with dynamic motion features can significantly improve detection performance. Recent results in video analysis have further verified the effectiveness of spatio-temporal analysis, for example the space-time-descriptor-based video registration of Ukrainitz and Irani, the behavior correlation computation in video sequences of Shechtman and Irani, and the video completion of Wexler et al.
Because most of the human body is covered by clothing, and because the contours and textures of clothing vary widely, the additional effects of illumination, occlusion, and complex backgrounds lead to a low signal-to-noise ratio. This remains one of the main difficulties in human detection.
Summary of the invention
The object of the invention is to address the low signal-to-noise-ratio problem of human body data in the above human detection methods by proposing a human body feature extraction method that fuses human appearance information and motion information in the spatio-temporal domain.
The technical solution that realizes the object of the invention is a video-based human body feature extraction method with the following steps:
1) Input a group of videos containing human body data and select a spatio-temporal volume from them;
2) Carry out gradient computation and distance transformation on the spatio-temporal volume to obtain a spatio-temporal distance-transform volume;
3) Convolve 3D Haar filters with the spatio-temporal distance-transform volume to extract static and dynamic features.
In the above method, the spatio-temporal volume in step 1) is obtained by selecting several adjacent frames from the video as one group and selecting a rectangular image region of the same scale at the same position in every frame of the group; the cuboid image sequence formed by all the rectangular region images is the spatio-temporal volume.
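The selection in step 1) can be sketched as follows; this is a minimal illustration assuming the video is already decoded into a list of grayscale NumPy frames, with the rectangle position, scale, and frame range supplied externally (e.g. by a sliding window). The function name and parameters are illustrative, not from the patent:

```python
import numpy as np

def extract_spatiotemporal_volume(frames, t0, n_frames, x0, y0, w, h):
    """Stack the same w-by-h rectangle from n_frames adjacent frames
    into a cuboid of shape (h, w, n_frames)."""
    crops = [frames[t][y0:y0 + h, x0:x0 + w] for t in range(t0, t0 + n_frames)]
    return np.stack(crops, axis=-1)

# Toy example: 8 synthetic 64x48 frames, a 16x32 window over 5 frames from frame 2.
frames = [np.random.rand(64, 48) for _ in range(8)]
volume = extract_spatiotemporal_volume(frames, t0=2, n_frames=5, x0=10, y0=20, w=16, h=32)
print(volume.shape)  # (32, 16, 5)
```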
In the above method, in step 1) the coordinates of a single frame of the spatio-temporal volume are represented by the x-axis and the y-axis, the frame index is represented by the time axis t, and any point on the spatio-temporal volume can be represented as (x, y, t).
In the above method, step 2) comprises the following concrete steps:
21) Compute the gradient spatio-temporal volume:

G(x, y, t) = sqrt(I_x^2 + I_y^2) + λ·|I_t|   (1)

where (x, y, t) is any pixel of the spatio-temporal volume I; I_x, I_y and I_t denote the first-order partial derivatives of I along the x, y and t directions respectively; sqrt(I_x^2 + I_y^2) denotes the spatial gradient; |I_t| denotes the temporal gradient; and λ is a balance parameter that adjusts the weight between the spatial gradient and the temporal gradient.
22) Threshold the gradient spatio-temporal volume with formula (2) to obtain the contour spatio-temporal volume:

C(x, y, t) = 1 if G(x, y, t) ≥ T, and 0 otherwise   (2)

where T denotes the threshold of the gradient spatio-temporal volume and (x, y, t) is any pixel of the gradient spatio-temporal volume.
23) Use formula (3) to assign to each pixel the distance to its nearest contour point:

D(p) = min over all q with C(q) = 1 of dist(p, q)   (3)

where p is any pixel of the contour spatio-temporal volume C, q is any contour pixel of C, dist(p, q) denotes the distance from p to q, which can be the Euclidean distance or the city-block distance, and min takes the minimum value.
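Step 2) can be sketched as follows, under stated assumptions: the symbol names G, C, D, λ, T follow the formulas above (the original symbols were lost), the partial derivatives are approximated with `numpy.gradient` finite differences, and the Euclidean distance transform comes from `scipy.ndimage.distance_transform_edt`, which measures distance to the nearest zero, so the contour mask is inverted before the call:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def spatiotemporal_distance_transform(volume, lam=0.5, threshold=0.3):
    """Formulas (1)-(3): gradient volume, contour volume, distance-transform volume."""
    # (1) first-order partial derivatives along y, x and t (axes 0, 1, 2).
    I_y, I_x, I_t = np.gradient(volume)
    G = np.sqrt(I_x**2 + I_y**2) + lam * np.abs(I_t)
    # (2) threshold the gradient volume to get the contour volume.
    C = G >= threshold
    # (3) distance from every voxel to its nearest contour voxel;
    # distance_transform_edt measures distance to the nearest zero element,
    # so pass the logical complement of the contour mask.
    D = distance_transform_edt(~C)
    return G, C, D

# Toy usage: a single bright voxel produces contour voxels around it.
vol = np.zeros((6, 6, 6))
vol[3, 3, 3] = 10.0
G, C, D = spatiotemporal_distance_transform(vol)
print(C.any(), D[C].max())  # contour exists; distance is 0 on the contour
```

The city-block alternative mentioned in the text would correspond to `scipy.ndimage.distance_transform_cdt` with `metric='taxicab'`.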
In the above method, step 3) comprises the following concrete steps:
31) Design 3D Haar filters that express static or dynamic features;
32) Generate a 3D Haar filter on the spatio-temporal distance-transform volume;
33) Compute, according to the set method, the sums of the pixel values of the subregions of the 3D Haar filter and their differences to obtain a 3D Haar feature;
34) Apply the 7 kinds of 3D Haar filters respectively, repeating steps 32) and 33) until a predetermined number of 3D Haar features has been generated, and concatenate all 3D Haar features into a feature vector.
In the above method, the 3D Haar filters of step 31) are cuboids formed by splicing different cuboid subregions together in space. The scale of all 7 kinds of 3D Haar filters is w × h × l, where w, h and l respectively denote the cuboid's width, height and length; the k-th kind of 3D Haar filter is denoted H_k and its j-th cuboid subregion is denoted R_k^j. The cuboid subregions R_k^j that form the various 3D Haar filters are defined in Fig. 3. Among the above 7 kinds of 3D Haar filters, H_1, H_2 and H_3 are static 3D Haar filters, and H_4, H_5, H_6 and H_7 are dynamic 3D Haar filters.
In the above method, the length of the 3D Haar filter in step 32) is the same as the length of the spatio-temporal distance-transform volume, while its width and height are generated at any scale within the width and height range of the spatio-temporal distance-transform volume and at any position.
In the above method, in step 33), for each kind of 3D Haar filter H_k, let S_k^j denote the sum of the pixel values in subregion R_k^j, and let F_k denote the difference between the different S_k^j of that filter; each F_k is computed as the difference of the subregion sums of H_k. The features computed with the static filters H_1, H_2 and H_3 are static 3D Haar features, and the features computed with the dynamic filters H_4, H_5, H_6 and H_7 are dynamic 3D Haar features.
Compared with the prior art, the present invention has a notable advantage: the proposed method fuses human appearance information and motion information in the spatio-temporal domain. Because motion features are used as auxiliary cues for recognizing humans, the discriminative ability of the human detector is improved and the low signal-to-noise-ratio problem is effectively alleviated.
Description of drawings
Fig. 1 is the flow chart of the method based on the spatio-temporal model.
Fig. 2 is the spatio-temporal volume representation of the human body.
Fig. 3 is the definition of the 7 types of 3D Haar filters.
Fig. 4 is a schematic diagram of quickly computing the sum of the pixels of a region in the spatio-temporal volume using the integral volume.
Embodiment
The present invention represents the human body as a spatio-temporal volume and uses 3D Haar filters to extract its static and dynamic 3D Haar features; the overall operation flow is shown in Fig. 1. The embodiment is described in further detail below with reference to the accompanying drawings.
Step 1: Select a group of adjacent frames from the video; the number of frames is usually between 5 and 10. Select a rectangular image region of the same scale at the same position in every frame of the group; the image sequence composed of all the rectangular region images is called the spatio-temporal volume.
The representation of the spatio-temporal volume is shown in Fig. 2. Any point on it is represented by (x, y, t), where x and y are the horizontal and vertical coordinates of the single-frame image, t is the time-axis coordinate of the image sequence, and I(x, y, t) denotes the gray value at that point.
Step 2: Carry out gradient computation and distance transformation on the spatio-temporal volume. The concrete steps are as follows:
Step 21: Compute the first-order partial derivatives along the x, y and t directions respectively, and denote the resulting values I_x, I_y and I_t.
Step 22: Compute the gradient spatio-temporal volume:

G(x, y, t) = sqrt(I_x^2 + I_y^2) + λ·|I_t|   (1)

where λ is the weight parameter balancing the spatial gradient and the temporal gradient.
Step 23: Threshold the gradient spatio-temporal volume G with formula (2) to obtain the contour representation of the spatio-temporal volume:

C(x, y, t) = 1 if G(x, y, t) ≥ T, and 0 otherwise   (2)

where T is the threshold of the gradient spatio-temporal volume.
Step 24: Use the distance-transform method to assign to each pixel the distance to its nearest contour point. The distance transform of the spatio-temporal volume uses the following formula:

D(p) = min over all contour pixels q of dist(p, q)   (3)

where dist(p, q) can be the Euclidean distance or the city-block distance.
Step 3: Design the 3D Haar filters that express the static and dynamic features of the spatio-temporal distance-transform volume.
The 3D Haar filters are cuboids formed by splicing different cuboid subregions together in space. The scale of all 7 kinds of 3D Haar filters is w × h × l, where w, h and l respectively denote the cuboid's width, height and length; the k-th kind of 3D Haar filter is denoted H_k and its j-th cuboid subregion is denoted R_k^j. The cuboid subregions that form the various 3D Haar filters are as shown in Fig. 3, each subregion being a cuboid defined by its starting point and extents; for example:
a) As shown in Fig. 3-1, H_1 has 2 subregions.
c) As shown in Fig. 3-3, H_3 has 4 subregions.
d) As shown in Fig. 3-4, H_4 has 4 subregions.
Among the above 7 kinds of 3D Haar filters, H_1, H_2 and H_3 are static 3D Haar filters, and H_4, H_5, H_6 and H_7 are dynamic 3D Haar filters.
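The exact subregion layouts of the 7 filters are defined only in Fig. 3, so the following sketch illustrates just the mechanism with two hypothetical layouts: a left/right spatial split standing in for a static filter, and a first-half/second-half temporal split standing in for a dynamic one. A filter is represented as a +1/-1 weight volume, and its feature is the weighted sum of distance-transform values, i.e. the difference of the two subregion sums:

```python
import numpy as np

def static_lr_filter(w, h, l):
    """Hypothetical static 3D Haar filter: left half +1, right half -1
    (contrasts two spatial subregions, constant over time)."""
    f = np.ones((h, w, l))
    f[:, w // 2:, :] = -1
    return f

def dynamic_t_filter(w, h, l):
    """Hypothetical dynamic 3D Haar filter: first half of the frames +1,
    second half -1 (contrasts the same region at two times)."""
    f = np.ones((h, w, l))
    f[:, :, l // 2:] = -1
    return f

def haar_response(D, y0, x0, t0, filt):
    """Apply a filter at starting point (x0, y0, t0) of volume D: the feature
    is the difference of the subregion sums, i.e. a weighted patch sum."""
    h, w, l = filt.shape
    patch = D[y0:y0 + h, x0:x0 + w, t0:t0 + l]
    return float(np.sum(patch * filt))

D = np.ones((8, 8, 6))  # a constant distance-transform volume
print(haar_response(D, 0, 0, 0, static_lr_filter(4, 4, 6)))  # 0.0
```

On a constant volume both responses are zero, since the positive and negative subregions cover equal volumes; the response becomes nonzero exactly when the contrasted subregions differ.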
Step 4: Use the integral-volume method to compute the 3D Haar features of the spatio-temporal distance-transform volume.
Step 41: Compute the integral volume of the spatio-temporal distance-transform volume as follows:

V(x, y, t) = Σ_{x'≤x, y'≤y, t'≤t} D(x', y', t')   (4)

where V(x, y, t) denotes the integral volume of the spatio-temporal distance-transform volume at (x, y, t), and D(x', y', t') denotes the pixel value of the spatio-temporal distance-transform volume at (x', y', t').
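The integral volume of Step 41 is a triple prefix sum and can be built with one cumulative sum per axis; a sketch (the name `integral_volume` and the symbol V are illustrative):

```python
import numpy as np

def integral_volume(D):
    """Formula (4): V[y, x, t] = sum of D[y', x', t'] over all
    y' <= y, x' <= x, t' <= t, via one cumulative sum per axis."""
    return D.cumsum(axis=0).cumsum(axis=1).cumsum(axis=2)

D = np.arange(24, dtype=float).reshape(2, 3, 4)
V = integral_volume(D)
print(V[-1, -1, -1], D.sum())  # both 276.0: the last entry is the total sum
```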
Step 42: Generate a 3D Haar filter on the spatio-temporal distance-transform volume.
In the present invention, the length of the 3D Haar filter is the same as the length of the spatio-temporal distance-transform volume, while its width and height are generated at any scale within the width and height range of the spatio-temporal distance-transform volume and at any position. For example, a w × h × l 3D Haar filter H_k generated on the spatio-temporal volume with a given starting point must have parameters that keep the whole filter inside the volume.
Step 43: The sum of the pixel values in any subregion R_k^j is obtained from the integral volume at its 8 vertices with 7 additions and subtractions. As shown in Fig. 4, the sum of the pixel values inside the cube A-B-C-D-E-F-G-H, with opposite corners (x_1, y_1, t_1) and (x_2, y_2, t_2), is computed with the following formula:

S = V(x_2, y_2, t_2) − V(x_1−1, y_2, t_2) − V(x_2, y_1−1, t_2) − V(x_2, y_2, t_1−1) + V(x_1−1, y_1−1, t_2) + V(x_1−1, y_2, t_1−1) + V(x_2, y_1−1, t_1−1) − V(x_1−1, y_1−1, t_1−1)
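Step 43 can be sketched as the standard 3D inclusion-exclusion over the 8 corners (the helper names are illustrative; out-of-range corner indices, which arise when the box touches the volume boundary, are treated as zero):

```python
import numpy as np

def integral_volume(D):
    """Triple prefix sum: V[y, x, t] = sum of D over y' <= y, x' <= x, t' <= t."""
    return D.cumsum(axis=0).cumsum(axis=1).cumsum(axis=2)

def box_sum(V, y1, y2, x1, x2, t1, t2):
    """Sum of D over the inclusive box [y1..y2] x [x1..x2] x [t1..t2],
    using 8 integral-volume lookups and 7 additions/subtractions."""
    def v(y, x, t):
        return V[y, x, t] if (y >= 0 and x >= 0 and t >= 0) else 0.0
    return (v(y2, x2, t2)
            - v(y1 - 1, x2, t2) - v(y2, x1 - 1, t2) - v(y2, x2, t1 - 1)
            + v(y1 - 1, x1 - 1, t2) + v(y1 - 1, x2, t1 - 1) + v(y2, x1 - 1, t1 - 1)
            - v(y1 - 1, x1 - 1, t1 - 1))

D = np.random.rand(6, 5, 4)
V = integral_volume(D)
s = box_sum(V, 1, 3, 0, 2, 2, 3)
# s matches the direct sum D[1:4, 0:3, 2:4].sum(), at constant cost
# regardless of the box size.
```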
Step 44: For each kind of 3D Haar filter H_k, compute the difference F_k between its subregion sums S_k^j. The features computed with the static filters H_1, H_2 and H_3 are static 3D Haar features, and the features computed with the dynamic filters H_4, H_5, H_6 and H_7 are dynamic 3D Haar features.
Step 5: Apply the 7 kinds of 3D Haar filters respectively, repeating the operations of steps 42, 43 and 44 until a predetermined number of 3D Haar features has been generated, and concatenate the 3D Haar features into a feature vector.
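Steps 42-44 and Step 5 together amount to a sampling loop. The sketch below makes illustrative assumptions, since the patent leaves the layouts to Fig. 3 and the sampling policy to the "set method": placements are drawn at random, and each filter is reduced to a two-subregion left/right difference computed with integral-volume box sums:

```python
import numpy as np

rng = np.random.default_rng(0)

def integral_volume(D):
    return D.cumsum(axis=0).cumsum(axis=1).cumsum(axis=2)

def box_sum(V, y1, y2, x1, x2, t1, t2):
    def v(y, x, t):
        return V[y, x, t] if (y >= 0 and x >= 0 and t >= 0) else 0.0
    return (v(y2, x2, t2)
            - v(y1 - 1, x2, t2) - v(y2, x1 - 1, t2) - v(y2, x2, t1 - 1)
            + v(y1 - 1, x1 - 1, t2) + v(y1 - 1, x2, t1 - 1) + v(y2, x1 - 1, t1 - 1)
            - v(y1 - 1, x1 - 1, t1 - 1))

def extract_features(D, n_features):
    """Sample n_features filter placements over the distance-transform volume D.
    The filter length always spans the whole time axis, as in Step 42; width,
    height and position are drawn at random (an assumed policy)."""
    H, W, L = D.shape
    V = integral_volume(D)
    feats = []
    for _ in range(n_features):
        w = int(rng.integers(2, W + 1))
        h = int(rng.integers(1, H + 1))
        y0 = int(rng.integers(0, H - h + 1))
        x0 = int(rng.integers(0, W - w + 1))
        mid = x0 + w // 2
        left = box_sum(V, y0, y0 + h - 1, x0, mid - 1, 0, L - 1)
        right = box_sum(V, y0, y0 + h - 1, mid, x0 + w - 1, 0, L - 1)
        feats.append(left - right)  # F_k: difference of the subregion sums
    return np.array(feats)

D = rng.random((12, 8, 5))
fv = extract_features(D, n_features=10)
print(fv.shape)  # (10,)
```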
Claims (7)
1. A video-based human body feature extraction method, characterized in that the steps are as follows:
1) Input a group of videos containing human body data and select a spatio-temporal volume from them;
2) Carry out gradient computation and distance transformation on the spatio-temporal volume to obtain a spatio-temporal distance-transform volume;
3) Convolve 3D Haar filters with the spatio-temporal distance-transform volume to extract static and dynamic features.
2. The video-based human body feature extraction method according to claim 1, characterized in that: the spatio-temporal volume in step 1) is obtained by selecting several adjacent frames from the video as one group and selecting a rectangular image region of the same scale at the same position in every frame of the group, the cuboid image sequence formed by all the rectangular region images being the spatio-temporal volume; the coordinates of a single frame of the spatio-temporal volume are represented by the x-axis and the y-axis, and the frame index is represented by the time axis t.
3. The video-based human body feature extraction method according to claim 1, characterized in that step 2) comprises the following concrete steps:
21) Compute the gradient spatio-temporal volume:

G(x, y, t) = sqrt(I_x^2 + I_y^2) + λ·|I_t|   (1)

where (x, y, t) is any pixel of the spatio-temporal volume I; I_x, I_y and I_t denote the first-order partial derivatives of I along the x, y and t directions respectively; sqrt(I_x^2 + I_y^2) denotes the spatial gradient; |I_t| denotes the temporal gradient; and λ is a balance parameter that adjusts the weight between the spatial gradient and the temporal gradient;
22) Threshold the gradient spatio-temporal volume with formula (2) to obtain the contour spatio-temporal volume:

C(x, y, t) = 1 if G(x, y, t) ≥ T, and 0 otherwise   (2)

where T denotes the threshold of the gradient spatio-temporal volume and (x, y, t) is any pixel of the gradient spatio-temporal volume;
23) Use formula (3) to assign to each pixel the distance to its nearest contour point:

D(p) = min over all q with C(q) = 1 of dist(p, q)   (3)

where p is any pixel of the contour spatio-temporal volume C, q is any contour pixel of C, dist(p, q) denotes the distance from p to q, which can be the Euclidean distance or the city-block distance, and min takes the minimum value.
4. The video-based human body feature extraction method according to claim 1, characterized in that step 3) comprises the following concrete steps:
31) Design 3D Haar filters that express static or dynamic features;
32) Generate a 3D Haar filter on the spatio-temporal distance-transform volume;
33) Compute, according to the set method, the sums of the pixel values of the subregions of the 3D Haar filter and their differences to obtain a 3D Haar feature;
34) Apply the 7 kinds of 3D Haar filters respectively, repeating steps 32) and 33) until a predetermined number of 3D Haar features has been generated, and concatenate all 3D Haar features into a feature vector.
5. The video-based human body feature extraction method according to claim 4, characterized in that: the 3D Haar filters of step 31) are cuboids formed by splicing different cuboid subregions together in space, and the scale of all 7 kinds of 3D Haar filters is w × h × l, where w, h and l respectively denote the cuboid's width, height and length; the k-th kind of 3D Haar filter is denoted H_k and its j-th cuboid subregion is denoted R_k^j; each subregion is a cuboid defined by its starting point and extents, for example H_1 has 2 subregions.
6. The video-based human body feature extraction method according to claim 4, characterized in that: in step 32), the length of the 3D Haar filter is the same as the length of the spatio-temporal distance-transform volume, while its width and height are generated at any scale within the width and height range of the spatio-temporal distance-transform volume and at any position.
7. The video-based human body feature extraction method according to claim 4, characterized in that: in step 33), for each kind of 3D Haar filter H_k, S_k^j denotes the sum of the pixel values in subregion R_k^j and F_k denotes the difference between the different S_k^j; each F_k is computed as the difference of the subregion sums of H_k.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2013101559865A CN103366172A (en) | 2013-04-28 | 2013-04-28 | Video-based human body feature extraction method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103366172A true CN103366172A (en) | 2013-10-23 |
Family
ID=49367474
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2013101559865A Pending CN103366172A (en) | 2013-04-28 | 2013-04-28 | Video-based human body feature extraction method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103366172A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105631463A (en) * | 2014-11-28 | 2016-06-01 | 无锡慧眼电子科技有限公司 | Time-space movement profile feature-based pedestrian detection method |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007041759A (en) * | 2005-08-02 | 2007-02-15 | Hitachi Eng Co Ltd | Personal authentication device and method |
Non-Patent Citations (1)
Title |
---|
LIU YAZHOU: "Research on Human Detection Methods Based on Spatio-temporal Analysis and Multi-granularity Feature Representation", China Doctoral Dissertations Full-text Database, Information Science and Technology, no. 5, 15 May 2011 (2011-05-15) *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20131023 |