CN103366172A - Video-based human body feature extraction method - Google Patents

Video-based human body feature extraction method Download PDF

Info

Publication number
CN103366172A
CN103366172A CN2013101559865A CN201310155986A CN103366172A CN 103366172 A CN103366172 A CN 103366172A CN 2013101559865 A CN2013101559865 A CN 2013101559865A CN 201310155986 A CN201310155986 A CN 201310155986A CN 103366172 A CN103366172 A CN 103366172A
Authority
CN
China
Prior art keywords
cube
starting point
space
expression
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2013101559865A
Other languages
Chinese (zh)
Inventor
刘亚洲
张艳
孙权森
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN2013101559865A priority Critical patent/CN103366172A/en
Publication of CN103366172A publication Critical patent/CN103366172A/en
Pending legal-status Critical Current

Links

Images

Landscapes

  • Image Analysis (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a video-based human body feature extraction method. A group of videos comprising human body data is input, a space-time body is selected from the videos; gradient computation and distance transformation are carried out on the space-time body to obtain a space-time distance transformation body; and a three-dimensional Haar filter and the space-time distance transformation body are used for carrying out convolution so as to extract static features and dynamic features. According to the scheme disclosed by the invention, the discriminating ability of a human body detector is improved, and the problem of the low signal-to-noise ratio is effectively solved.

Description

A kind of characteristics of human body's extracting method based on video
Technical field
The present invention relates to computer vision and area of pattern recognition, more particularly, is a kind of characteristics of human body's extracting method that fully utilizes appearance information and movable information.
Background technology
Human detection is a special case in the object detection, refers to determine in defeated people's image or video sequence the position of all human bodies and the process of size.Because its in intelligent monitoring and vehicle-mounted supplementary security system the widespread use demand and obtained the attention of more and more researchers and research institution.Somatic data itself exists the difference on significant outward appearance, visual angle and the yardstick, and effectively feature extracting method can significantly promote the robustness reduce false alarm of human body detector.
Human detection is different according to data source, can be divided into based on the human detection of still image with based on the human detection of video sequence.Because somatic data self is widely different, and be subject to the impact of environment, therefore very challenging based on the human detection of still image, comparatively speaking, more reference information is arranged in the video sequence, therefore can utilize motion feature to carry out the identification of human body as supplementary; Perhaps adopt based on the method for cutting apart human region is cut apart, and then segmentation result is further detected, can greatly improve like this precision and the speed of detection.The work of Viola etc., Dalal etc. and Wojek etc. shows that all comprehensive utilization static appearance feature and dynamic motion feature can significantly improve the detection performance.Some results of study in video analysis have further been verified the validity of space-time analysis method in the recent period, and for example Ukrainitz and Irani are based on the video registration of space-time description; The video repairing of behavior correlation calculations in the video sequence of Shechtman and Irani and Wexler etc.
Because the most of zone of human body is covered by clothing, but because profile and the texture of clothes vary, add illumination, block the impact with complex background, caused the problem of low signal-to-noise ratio.This problem remains one of difficulties of human detection at present.
Summary of the invention
The object of the invention is to, in above-mentioned Human Detection, the problem of somatic data low signal-to-noise ratio has proposed a kind of characteristics of human body's extracting method that merges human appearance information and movable information in the Space Time territory.
The technical solution that realizes the object of the invention is: a kind of characteristics of human body's extracting method based on video, and step is as follows:
1) one group of video that comprises somatic data of input is therefrom selected a space-time body;
2) carry out gradient calculation and range conversion at space-time body, obtain space-time range conversion body;
3) utilize 3 dimension Haar wave filters and space-time range conversion body to carry out convolution and extract static nature and behavioral characteristics.
In the said method, space-time body in the described step 1) refers to select from video adjacent some frames as one group, and select the rectangular area image of same scale in this same position of organizing every two field picture, the cube image sequence that is formed by all rectangular area images.
In the said method, in the described step 1) single-frame images coordinate of space-time body by
Figure 2013101559865100002DEST_PATH_IMAGE001
Axle Axle represents, the frame sequence number is by time shaft The expression, any point on the space-time body can by (
Figure 289251DEST_PATH_IMAGE004
) expression.
In the said method, described step 2) comprise following concrete steps:
21) compute gradient space-time body:
Figure 2013101559865100002DEST_PATH_IMAGE005
, (1)
Wherein, Any one pixel on the expression space-time body,
Figure 2013101559865100002DEST_PATH_IMAGE007
The expression space-time body,
Figure 440670DEST_PATH_IMAGE008
,
Figure 2013101559865100002DEST_PATH_IMAGE009
With The expression respectively along
Figure 2013101559865100002DEST_PATH_IMAGE011
, With
Figure 2013101559865100002DEST_PATH_IMAGE013
The single order partial derivative of direction,
Figure 794926DEST_PATH_IMAGE014
The representation space gradient, The expression time gradient,
Figure 344987DEST_PATH_IMAGE016
It is the balance parameters of regulating the weight between spatial gradient and the time gradient;
22) adopt formula (2) to carry out thresholding to the gradient space-time body and obtain the profile space and time body:
Figure 2013101559865100002DEST_PATH_IMAGE017
(2)
Wherein,
Figure 743738DEST_PATH_IMAGE018
The threshold value of expression gradient space-time body,
Figure 235899DEST_PATH_IMAGE006
Any one pixel on the expression gradient space-time body;
23) employing formula (3) is given the distance value of this pixel to its nearest point to each pixel:
Figure 2013101559865100002DEST_PATH_IMAGE019
(3)
Wherein, Expression profile space and time body
Figure 432275DEST_PATH_IMAGE020
On any one pixel,
Figure 2013101559865100002DEST_PATH_IMAGE021
Expression profile space and time body On any one contour pixel,
Figure 286278DEST_PATH_IMAGE018
The threshold value of expression gradient space-time body,
Figure 77517DEST_PATH_IMAGE022
Expression
Figure 969381DEST_PATH_IMAGE006
Arrive
Figure 529675DEST_PATH_IMAGE021
Distance metric, can be Euclidean distance or city block distance,
Figure 2013101559865100002DEST_PATH_IMAGE023
Minimum value is got in expression.
In the said method, described step 3) comprises following concrete steps:
31) 3 dimension Haar wave filters of design expression static nature or behavioral characteristics;
32) on space-time range conversion body, generate 3 dimension Haar wave filters;
33) according to the method for setting calculate 3 dimension Haar wave filters each subregion pixel value and difference, obtain 3 dimension Haar features;
34) adopt respectively 7 kind of 3 dimension Haar wave filter, repeating step 32), 33), until generate 3 dimension Haar features of predetermined number, and all 3 dimension Haar features are concatenated into a proper vector;
In the said method, described step 31) 3 dimension Haar wave filters splices the cube that forms by different cube subregions in the space, establish 7 kind of 3 yardstick of tieing up the Haar wave filter and all are
Figure 173758DEST_PATH_IMAGE024
, wherein, Represent respectively cubical wide, high and length, the
Figure 937446DEST_PATH_IMAGE026
Planting 3 dimension Haar wave filters uses
Figure 2013101559865100002DEST_PATH_IMAGE027
Expression,
Figure 695318DEST_PATH_IMAGE027
Individual cube subregion is used
Figure 2013101559865100002DEST_PATH_IMAGE029
Expression forms various 3 dimension Haar wave filters
Figure 865716DEST_PATH_IMAGE027
The cube subregion
Figure 303651DEST_PATH_IMAGE029
As follows respectively:
A)
Figure 724268DEST_PATH_IMAGE030
2 subregions are arranged, are respectively:
With (
Figure 259154DEST_PATH_IMAGE004
) be starting point,
Figure 2013101559865100002DEST_PATH_IMAGE031
Cube
Figure 188583DEST_PATH_IMAGE032
,
With (
Figure 2013101559865100002DEST_PATH_IMAGE033
) be starting point,
Figure 176130DEST_PATH_IMAGE034
Cube
B)
Figure 400438DEST_PATH_IMAGE036
2 subregions are arranged, are respectively:
With (
Figure 274985DEST_PATH_IMAGE004
) be starting point, Cube
Figure 621652DEST_PATH_IMAGE038
,
With (
Figure 2013101559865100002DEST_PATH_IMAGE039
) be starting point,
Figure 847228DEST_PATH_IMAGE037
Cube
Figure 875227DEST_PATH_IMAGE040
C)
Figure 2013101559865100002DEST_PATH_IMAGE041
4 subregions are arranged, are respectively:
With (
Figure 119127DEST_PATH_IMAGE004
) be starting point, Cube
Figure 2013101559865100002DEST_PATH_IMAGE043
,
With (
Figure 21672DEST_PATH_IMAGE033
) be starting point,
Figure 650099DEST_PATH_IMAGE042
Cube
Figure 951768DEST_PATH_IMAGE044
,
With (
Figure 312342DEST_PATH_IMAGE039
) be starting point,
Figure 509580DEST_PATH_IMAGE042
Cube ,
With (
Figure 941699DEST_PATH_IMAGE046
) be starting point,
Figure 645344DEST_PATH_IMAGE042
Cube
Figure 2013101559865100002DEST_PATH_IMAGE047
D)
Figure 504715DEST_PATH_IMAGE048
4 subregions are arranged, are respectively:
With (
Figure 113551DEST_PATH_IMAGE004
) be starting point,
Figure 2013101559865100002DEST_PATH_IMAGE049
Cube ,
With (
Figure 907512DEST_PATH_IMAGE039
) be starting point,
Figure 875468DEST_PATH_IMAGE049
Cube
Figure 2013101559865100002DEST_PATH_IMAGE051
,
With (
Figure 50228DEST_PATH_IMAGE052
) be starting point,
Figure 761832DEST_PATH_IMAGE049
Cube ,
With (
Figure 689337DEST_PATH_IMAGE054
) be starting point,
Figure 468392DEST_PATH_IMAGE049
Cube
Figure 2013101559865100002DEST_PATH_IMAGE055
E)
Figure 114137DEST_PATH_IMAGE056
4 subregions are arranged, are respectively:
With (
Figure 363853DEST_PATH_IMAGE004
) be starting point,
Figure 2013101559865100002DEST_PATH_IMAGE057
Cube
Figure 896597DEST_PATH_IMAGE058
,
With (
Figure 534251DEST_PATH_IMAGE033
) be starting point, Cube
Figure 2013101559865100002DEST_PATH_IMAGE059
,
With (
Figure 471431DEST_PATH_IMAGE052
) be starting point,
Figure 311212DEST_PATH_IMAGE057
Cube
Figure 588609DEST_PATH_IMAGE060
,
With (
Figure 2013101559865100002DEST_PATH_IMAGE061
) be starting point,
Figure 959679DEST_PATH_IMAGE057
Cube
Figure 816776DEST_PATH_IMAGE062
F)
Figure 2013101559865100002DEST_PATH_IMAGE063
8 subregions are arranged, are respectively:
With (
Figure 307800DEST_PATH_IMAGE004
) be starting point,
Figure 35061DEST_PATH_IMAGE064
Cube
Figure 2013101559865100002DEST_PATH_IMAGE065
,
With (
Figure 80377DEST_PATH_IMAGE033
) be starting point,
Figure 537903DEST_PATH_IMAGE064
Cube
Figure 86696DEST_PATH_IMAGE066
,
With (
Figure 722208DEST_PATH_IMAGE039
) be starting point,
Figure 317137DEST_PATH_IMAGE064
Cube
Figure 2013101559865100002DEST_PATH_IMAGE067
,
With ( ) be starting point,
Figure 529124DEST_PATH_IMAGE064
Cube
Figure 866696DEST_PATH_IMAGE068
,
With ( ) be starting point,
Figure 154774DEST_PATH_IMAGE064
Cube
Figure 2013101559865100002DEST_PATH_IMAGE069
,
With (
Figure 759718DEST_PATH_IMAGE061
) be starting point, Cube
Figure 962346DEST_PATH_IMAGE070
,
With (
Figure 316098DEST_PATH_IMAGE054
) be starting point,
Figure 287465DEST_PATH_IMAGE064
Cube ,
With (
Figure 170102DEST_PATH_IMAGE072
) be starting point,
Figure 961340DEST_PATH_IMAGE064
Cube
Figure 2013101559865100002DEST_PATH_IMAGE073
G)
Figure 40155DEST_PATH_IMAGE074
2 subregions are arranged, are respectively:
With (
Figure 351181DEST_PATH_IMAGE004
) be starting point,
Figure 2013101559865100002DEST_PATH_IMAGE075
Cube be ,
With (
Figure 135784DEST_PATH_IMAGE052
) be starting point, Cube be
Figure 2013101559865100002DEST_PATH_IMAGE077
Above 7 kind of 3 dimension Haar wave filter,
Figure 446472DEST_PATH_IMAGE030
,
Figure 185758DEST_PATH_IMAGE036
With
Figure 623692DEST_PATH_IMAGE041
Static 3 dimension Haar wave filters,
Figure 857359DEST_PATH_IMAGE048
,
Figure 329928DEST_PATH_IMAGE056
,
Figure 240116DEST_PATH_IMAGE063
With
Figure 165346DEST_PATH_IMAGE074
Dynamic 3 dimension Haar wave filters.
The length of 3 dimension Haar wave filters in the said method, described step 32) and the appearance of space-time range conversion body with, its wide and high in the wide and high range scale of space-time range conversion body with any yardstick generation in optional position.
In the said method, described step 33) in for every kind 3 dimension Haar wave filter
Figure 2013101559865100002DEST_PATH_IMAGE079
, use
Figure 2013101559865100002DEST_PATH_IMAGE081
The expression subregion
Figure 2013101559865100002DEST_PATH_IMAGE083
In pixel value and, use
Figure 2013101559865100002DEST_PATH_IMAGE085
Represent different
Figure 78070DEST_PATH_IMAGE081
And between difference, then
Figure 405146DEST_PATH_IMAGE085
Computing method respectively as follows:
a)
Figure 751814DEST_PATH_IMAGE086
b)
Figure 2013101559865100002DEST_PATH_IMAGE087
c)
Figure 968601DEST_PATH_IMAGE088
d)
Figure 2013101559865100002DEST_PATH_IMAGE089
e)
Figure 731020DEST_PATH_IMAGE090
f)
Figure 2013101559865100002DEST_PATH_IMAGE091
g)
Figure 974920DEST_PATH_IMAGE092
Wherein,
Figure DEST_PATH_IMAGE093
,
Figure 243221DEST_PATH_IMAGE094
Expression static three-dimensional Haar feature,
Figure DEST_PATH_IMAGE095
,
Figure 205361DEST_PATH_IMAGE096
With
Figure DEST_PATH_IMAGE097
Expression dynamic 3 D Haar feature.
The present invention compared with prior art, its remarkable advantage: the feature extracting method that in the Space Time territory, merges human appearance information and movable information that the present invention proposes, owing to having utilized motion feature to carry out the identification of human body as supplementary, improve the discriminating power of human body detector, effectively solved the problem of low signal-to-noise ratio.
Description of drawings
Fig. 1 is based on the process flow diagram of the method for space-time model.
Fig. 2 is that the space-time body of human body represents.
Fig. 3 is the definition of 7 types 3 dimension Haar wave filters.
Fig. 4 utilize the integration body calculate fast in the space-time body area pixel and schematic diagram.
Embodiment
The present invention is expressed as space-time body with human body and utilizes 3 dimension Haar wave filters to extract its Static and dynamic 3 dimension Haar features, and the integrated operation flow process is shown in Fig. 1.Below in conjunction with accompanying drawing embodiment is described in further detail.
Step 1:Input comprises the video of somatic data, therefrom selects space-time body
Figure 584521DEST_PATH_IMAGE098
Select one group of adjacent frame from video, sequence number is got between 5 to 10 frames usually.Select the rectangular area image of same scale in this same position of organizing every two field picture, the image sequence that is comprised of all rectangular area images is called space-time body.
The method for expressing of space-time body as shown in Figure 2, any point on it is used Expression, wherein,
Figure 682927DEST_PATH_IMAGE100
Be respectively the transverse and longitudinal coordinate of single-frame images,
Figure 309080DEST_PATH_IMAGE003
Be the time shaft coordinate of image sequence, and use
Figure DEST_PATH_IMAGE101
The gray-scale value that represents this point.
Step 2:Carry out gradient calculation and range conversion at space-time body.Concrete steps are as follows:
Step 21:Respectively along
Figure 509249DEST_PATH_IMAGE001
,
Figure 613471DEST_PATH_IMAGE002
With
Figure 831963DEST_PATH_IMAGE003
Direction calculating single order partial derivative obtains corresponding value and is designated as ,
Figure DEST_PATH_IMAGE103
With
Figure 47973DEST_PATH_IMAGE104
Step 22:The compute gradient space-time body.Computing method are as follows:
Figure 283782DEST_PATH_IMAGE005
(1)
Wherein:
Figure 294463DEST_PATH_IMAGE006
Any one pixel of-space-time body;
Figure DEST_PATH_IMAGE105
-spatial gradient;
Figure 75469DEST_PATH_IMAGE106
-time gradient;
The weight parameter of-spatial gradient and time gradient.
Step 23:To the gradient space-time body
Figure 233918DEST_PATH_IMAGE108
(2) formula of employing is carried out thresholding, and the profile that obtains space-time body represents:
(2)
Wherein:
Figure 623759DEST_PATH_IMAGE006
Any one pixel of-gradient space-time body;
Figure 762616DEST_PATH_IMAGE018
The threshold value of-gradient space-time body.
Step 24:Adopt the method for range conversion to give the distance value of this pixel to its nearest point to each pixel.Following formula is adopted in the range conversion of space-time body:
Figure DEST_PATH_IMAGE109
(3)
Wherein:
Figure 408361DEST_PATH_IMAGE006
-profile space and time body
Figure 658077DEST_PATH_IMAGE020
Any one pixel;
Figure 456400DEST_PATH_IMAGE021
-profile space and time body
Figure 766158DEST_PATH_IMAGE020
Any one contour pixel;
Figure 899199DEST_PATH_IMAGE022
-
Figure 952606DEST_PATH_IMAGE006
Arrive
Figure 608365DEST_PATH_IMAGE021
Distance metric, can be Euclidean distance or city block distance;
Figure 89025DEST_PATH_IMAGE023
-get minimum value;
Figure 381466DEST_PATH_IMAGE018
The threshold value of-gradient space-time body.
Step 3:The static nature of design expression space-time range conversion body and 3 dimension Haar wave filters of behavioral characteristics.
3 dimension Haar wave filters splice the cube that forms by different cube subregions in the space, the yardstick of establishing 7 kind of 3 dimension Haar wave filter all is
Figure 300880DEST_PATH_IMAGE024
, wherein,
Figure 995167DEST_PATH_IMAGE025
Represent respectively cubical wide, high and length, the
Figure 459777DEST_PATH_IMAGE026
Planting 3 dimension Haar wave filters uses
Figure 505094DEST_PATH_IMAGE027
Expression,
Figure 511413DEST_PATH_IMAGE028
Individual cube subregion is used
Figure 412504DEST_PATH_IMAGE029
Expression forms the cube subregion of various 3 dimension Haar wave filters as shown in Figure 3:
A) as shown in figure 31, 2 subregions are arranged, are respectively:
With (
Figure 612858DEST_PATH_IMAGE004
) be starting point,
Figure 344054DEST_PATH_IMAGE031
Cube ,
With (
Figure 904796DEST_PATH_IMAGE033
) be starting point,
Figure 907387DEST_PATH_IMAGE034
Cube
Figure 493090DEST_PATH_IMAGE035
B) shown in figure 32,
Figure 657355DEST_PATH_IMAGE036
2 subregions are arranged, are respectively:
With ( ) be starting point,
Figure 249803DEST_PATH_IMAGE037
Cube ,
With ( ) be starting point,
Figure 567149DEST_PATH_IMAGE037
Cube
Figure 911542DEST_PATH_IMAGE040
C) as shown in figure 33, 4 subregions are arranged, are respectively:
With (
Figure 977904DEST_PATH_IMAGE004
) be starting point,
Figure 7171DEST_PATH_IMAGE042
Cube
Figure 889677DEST_PATH_IMAGE043
,
With (
Figure 304477DEST_PATH_IMAGE033
) be starting point,
Figure 981446DEST_PATH_IMAGE042
Cube
Figure 232430DEST_PATH_IMAGE044
,
With ( ) be starting point,
Figure 187934DEST_PATH_IMAGE042
Cube ,
With (
Figure 54312DEST_PATH_IMAGE046
) be starting point,
Figure 340937DEST_PATH_IMAGE042
Cube
Figure 668013DEST_PATH_IMAGE047
D) as shown in figure 34, 4 subregions are arranged, are respectively:
With (
Figure 912360DEST_PATH_IMAGE004
) be starting point,
Figure 2676DEST_PATH_IMAGE049
Cube
Figure 184259DEST_PATH_IMAGE050
,
With (
Figure 452560DEST_PATH_IMAGE039
) be starting point,
Figure 352383DEST_PATH_IMAGE049
Cube
Figure 980811DEST_PATH_IMAGE051
,
With (
Figure 829949DEST_PATH_IMAGE052
) be starting point,
Figure 456102DEST_PATH_IMAGE049
Cube
Figure 639959DEST_PATH_IMAGE053
,
With ( ) be starting point, Cube
E) as shown in figure 35,
Figure 444263DEST_PATH_IMAGE056
4 subregions are arranged, are respectively:
With (
Figure 352176DEST_PATH_IMAGE004
) be starting point,
Figure 175906DEST_PATH_IMAGE057
Cube
Figure 143862DEST_PATH_IMAGE058
,
With (
Figure 239994DEST_PATH_IMAGE033
) be starting point,
Figure 13915DEST_PATH_IMAGE057
Cube
Figure 144682DEST_PATH_IMAGE059
,
With (
Figure 96589DEST_PATH_IMAGE052
) be starting point,
Figure 414438DEST_PATH_IMAGE057
Cube
Figure 992050DEST_PATH_IMAGE060
,
With (
Figure 977323DEST_PATH_IMAGE061
) be starting point,
Figure 100131DEST_PATH_IMAGE057
Cube
Figure 905276DEST_PATH_IMAGE062
F) as shown in figure 36,
Figure 21000DEST_PATH_IMAGE063
8 subregions are arranged, are respectively:
With (
Figure 860780DEST_PATH_IMAGE004
) be starting point,
Figure 157419DEST_PATH_IMAGE064
Cube
Figure 715439DEST_PATH_IMAGE065
,
With (
Figure 369274DEST_PATH_IMAGE033
) be starting point,
Figure 329140DEST_PATH_IMAGE064
Cube
Figure 793750DEST_PATH_IMAGE066
,
With ( ) be starting point,
Figure 968697DEST_PATH_IMAGE064
Cube
Figure 845386DEST_PATH_IMAGE067
,
With (
Figure 667848DEST_PATH_IMAGE046
) be starting point,
Figure 747931DEST_PATH_IMAGE064
Cube
Figure 946831DEST_PATH_IMAGE068
,
With ( ) be starting point,
Figure 671391DEST_PATH_IMAGE064
Cube ,
With (
Figure 241360DEST_PATH_IMAGE061
) be starting point,
Figure 764746DEST_PATH_IMAGE064
Cube
Figure 991328DEST_PATH_IMAGE070
,
With (
Figure 232953DEST_PATH_IMAGE054
) be starting point,
Figure 583776DEST_PATH_IMAGE064
Cube
Figure 961667DEST_PATH_IMAGE071
,
With (
Figure 624730DEST_PATH_IMAGE072
) be starting point,
Figure 88072DEST_PATH_IMAGE064
Cube
Figure 432466DEST_PATH_IMAGE073
G) as shown in figure 37,
Figure 477913DEST_PATH_IMAGE074
2 subregions are arranged, are respectively:
With (
Figure 249560DEST_PATH_IMAGE004
) be starting point,
Figure 262516DEST_PATH_IMAGE075
Cube be
Figure 145021DEST_PATH_IMAGE076
,
With (
Figure 310554DEST_PATH_IMAGE052
) be starting point, Cube be
Figure 753354DEST_PATH_IMAGE077
Above 7 kind of 3 dimension Haar wave filter,
Figure 439550DEST_PATH_IMAGE030
,
Figure 646541DEST_PATH_IMAGE036
With
Figure 573039DEST_PATH_IMAGE041
Static 3 dimension Haar wave filters,
Figure 498270DEST_PATH_IMAGE048
,
Figure 784895DEST_PATH_IMAGE056
, With
Figure 130743DEST_PATH_IMAGE074
Dynamic 3 dimension Haar wave filters.
Step 4:Adopt the integration body method to calculate 3 dimension Haar features of space-time range conversion body.
Step 41:Calculate the integration body of space-time range conversion body, account form is as follows:
(4)
Wherein,
Figure DEST_PATH_IMAGE111
Be illustrated in space-time range conversion body
Figure 437845DEST_PATH_IMAGE112
The integration body at place,
Figure DEST_PATH_IMAGE113
Expression space-time range conversion body exists
Figure 432477DEST_PATH_IMAGE114
The pixel value at place.
Step 42:Generate 3 dimension Haar wave filters at space-time range conversion body.
The length of 3 among the present invention dimension Haar wave filter and the appearance of space-time range conversion body with, its wide and high in the wide and high range scale of space-time range conversion body with any yardstick generation in optional position.For example, exist On the space-time body, with (
Figure 950046DEST_PATH_IMAGE004
) be starting point, generate
Figure 584290DEST_PATH_IMAGE024
3 dimension Haar wave filters
Figure 150401DEST_PATH_IMAGE027
, then Parameters must meet the following conditions:
Figure 891272DEST_PATH_IMAGE116
,
Figure DEST_PATH_IMAGE117
Step 43:Calculate 3 dimension Haar wave filters
Figure 75128DEST_PATH_IMAGE027
Each subregion
Figure 257979DEST_PATH_IMAGE029
Pixel value and
Figure 414154DEST_PATH_IMAGE118
Arbitrarily
Figure 211209DEST_PATH_IMAGE118
Adopt subregion
Figure DEST_PATH_IMAGE119
The integration body on 8 summits carried out for 7 steps and add reducing and obtain.As shown in Figure 4, cube A- B-C-D-E-F-G-HCalculating with the following formula of employing of interior pixel value:
Figure 882361DEST_PATH_IMAGE120
(5)
Wherein,
Figure 600394DEST_PATH_IMAGE098
The expression cube A- B-C-D-E-F-G-H.
Step 44:For every kind
Figure 611076DEST_PATH_IMAGE027
, calculate in accordance with the following methods each
Figure 641348DEST_PATH_IMAGE118
Between and difference :
a)
Figure 550530DEST_PATH_IMAGE086
b)
Figure 324451DEST_PATH_IMAGE087
c)
Figure 189639DEST_PATH_IMAGE088
d)
Figure 141545DEST_PATH_IMAGE089
e)
f)
Figure 302585DEST_PATH_IMAGE091
g)
Figure 22279DEST_PATH_IMAGE092
Wherein,
Figure 145087DEST_PATH_IMAGE093
,
Figure 215812DEST_PATH_IMAGE094
Expression static three-dimensional Haar feature,
Figure 331535DEST_PATH_IMAGE095
,
Figure 171315DEST_PATH_IMAGE096
With
Figure 467954DEST_PATH_IMAGE097
Expression dynamic 3 D Haar feature.
Step 5:Adopt respectively the operation of 7 kind of 3 dimension Haar wave filter repeating step 42,43,44, until generate 3 dimension Haar features of predetermined number, 7 kind of 3 dimension Haar feature is concatenated into a proper vector.

Claims (7)

1. characteristics of human body's extracting method based on video is characterized in that step is as follows:
1) one group of video that comprises somatic data of input is therefrom selected a space-time body;
2) carry out gradient calculation and range conversion at space-time body, obtain space-time range conversion body;
3) utilize 3 dimension Haar wave filters and space-time range conversion body to carry out convolution and extract static nature and behavioral characteristics.
2. the characteristics of human body's extracting method based on video according to claim 1, it is characterized in that: space-time body refers to select from video adjacent some frames as one group in the described step 1), and select the rectangular area image of same scale in this same position of organizing every two field picture, the cube image sequence that is formed by all rectangular area images; The single-frame images coordinate of space-time body by
Figure 2013101559865100001DEST_PATH_IMAGE001
Axle
Figure 288126DEST_PATH_IMAGE002
Axle represents, the frame sequence number is by time shaft
Figure 2013101559865100001DEST_PATH_IMAGE003
Expression.
3. the characteristics of human body's extracting method based on video according to claim 1 is characterized in that described step 2) comprise following concrete steps:
21) compute gradient space-time body:
Figure 880781DEST_PATH_IMAGE004
, (1)
Wherein,
Figure 2013101559865100001DEST_PATH_IMAGE005
Any one pixel on the expression space-time body,
Figure 978181DEST_PATH_IMAGE006
The expression space-time body,
Figure 2013101559865100001DEST_PATH_IMAGE007
,
Figure 734916DEST_PATH_IMAGE008
With
Figure 2013101559865100001DEST_PATH_IMAGE009
The expression respectively along
Figure 28494DEST_PATH_IMAGE010
,
Figure 2013101559865100001DEST_PATH_IMAGE011
With The single order partial derivative of direction,
Figure 2013101559865100001DEST_PATH_IMAGE013
The representation space gradient,
Figure 531906DEST_PATH_IMAGE014
The expression time gradient,
Figure 2013101559865100001DEST_PATH_IMAGE015
It is the balance parameters of regulating the weight between spatial gradient and the time gradient;
22) adopt formula (2) to carry out thresholding to the gradient space-time body and obtain the profile space and time body:
Figure 494046DEST_PATH_IMAGE016
(2)
Wherein,
Figure 2013101559865100001DEST_PATH_IMAGE017
The threshold value of expression gradient space-time body,
Figure 669943DEST_PATH_IMAGE005
Any one pixel on the expression gradient space-time body;
23) employing formula (3) is given the distance value of this pixel to its nearest point to each pixel:
Figure 706032DEST_PATH_IMAGE018
(3)
Wherein, Expression profile space and time body
Figure 2013101559865100001DEST_PATH_IMAGE019
On any one pixel,
Figure 532354DEST_PATH_IMAGE020
Expression profile space and time body
Figure 964473DEST_PATH_IMAGE019
On any one contour pixel,
Figure 120647DEST_PATH_IMAGE017
The threshold value of expression gradient space-time body, Expression
Figure 730751DEST_PATH_IMAGE005
Arrive
Figure 401904DEST_PATH_IMAGE020
Distance metric, can be Euclidean distance or city block distance,
Figure 309817DEST_PATH_IMAGE022
Minimum value is got in expression.
4. the characteristics of human body's extracting method based on video according to claim 1 is characterized in that described step 3) comprises following concrete steps:
31) 3 dimension Haar wave filters of design expression static nature or behavioral characteristics;
32) on space-time range conversion body, generate 3 dimension Haar wave filters;
33) according to the method for setting calculate 3 dimension Haar wave filters each subregion pixel value and difference, obtain 3 dimension Haar features;
34) adopt respectively 7 kind of 3 dimension Haar wave filter, repeating step 32), 33), until generate 3 dimension Haar features of predetermined number, and all 3 dimension Haar features are concatenated into a proper vector.
5. the characteristics of human body's extracting method based on video according to claim 4, it is characterized in that: 3 dimension Haar wave filters splice the cube that forms by different cube subregions described step 31) in the space, and the yardstick of establishing 7 kind of 3 dimension Haar wave filter all is
Figure 2013101559865100001DEST_PATH_IMAGE023
, wherein,
Figure 130618DEST_PATH_IMAGE024
Represent respectively cubical wide, high and length, the
Figure 2013101559865100001DEST_PATH_IMAGE025
Planting 3 dimension Haar wave filters uses
Figure 160891DEST_PATH_IMAGE026
Expression,
Figure 70072DEST_PATH_IMAGE026
Individual cube subregion is used
Figure 843993DEST_PATH_IMAGE028
Expression forms various 3 dimension Haar wave filters
Figure 787810DEST_PATH_IMAGE026
The cube subregion
Figure 926667DEST_PATH_IMAGE028
As follows respectively:
A) 2 subregions are arranged, are respectively:
With (
Figure 306833DEST_PATH_IMAGE030
) be starting point,
Figure 2013101559865100001DEST_PATH_IMAGE031
Cube ,
With (
Figure 2013101559865100001DEST_PATH_IMAGE033
) be starting point,
Figure 682768DEST_PATH_IMAGE034
Cube
Figure 2013101559865100001DEST_PATH_IMAGE035
B)
Figure 808505DEST_PATH_IMAGE036
2 subregions are arranged, are respectively:
With (
Figure 675967DEST_PATH_IMAGE030
) be starting point, Cube
Figure 542423DEST_PATH_IMAGE038
,
With (
Figure 2013101559865100001DEST_PATH_IMAGE039
) be starting point,
Figure 444520DEST_PATH_IMAGE037
Cube
C)
Figure 2013101559865100001DEST_PATH_IMAGE041
4 subregions are arranged, are respectively:
With (
Figure 296249DEST_PATH_IMAGE030
) be starting point,
Figure 950085DEST_PATH_IMAGE042
Cube
Figure 2013101559865100001DEST_PATH_IMAGE043
,
With ( ) be starting point, Cube ,
With (
Figure 611824DEST_PATH_IMAGE039
) be starting point,
Figure 236316DEST_PATH_IMAGE042
Cube
Figure 2013101559865100001DEST_PATH_IMAGE045
,
With (
Figure 121096DEST_PATH_IMAGE046
) be starting point, Cube
Figure 2013101559865100001DEST_PATH_IMAGE047
D)
Figure 400078DEST_PATH_IMAGE048
4 subregions are arranged, are respectively:
With (
Figure 131274DEST_PATH_IMAGE030
) be starting point, Cube
Figure 937687DEST_PATH_IMAGE050
,
With (
Figure 692016DEST_PATH_IMAGE039
) be starting point,
Figure 756924DEST_PATH_IMAGE049
Cube
Figure 2013101559865100001DEST_PATH_IMAGE051
,
With ( ) be starting point, Cube
Figure 2013101559865100001DEST_PATH_IMAGE053
,
With (
Figure 561566DEST_PATH_IMAGE054
) be starting point,
Figure 929967DEST_PATH_IMAGE049
Cube
Figure 2013101559865100001DEST_PATH_IMAGE055
E)
Figure 370176DEST_PATH_IMAGE056
4 subregions are arranged, are respectively:
With (
Figure 970921DEST_PATH_IMAGE030
) be starting point,
Figure DEST_PATH_IMAGE057
Cube ,
With ( ) be starting point,
Figure 886422DEST_PATH_IMAGE057
Cube
Figure DEST_PATH_IMAGE059
,
With (
Figure 471118DEST_PATH_IMAGE052
) be starting point,
Figure 484073DEST_PATH_IMAGE057
Cube
Figure 366579DEST_PATH_IMAGE060
,
With (
Figure DEST_PATH_IMAGE061
) be starting point,
Figure 532112DEST_PATH_IMAGE057
Cube
Figure 536977DEST_PATH_IMAGE062
F)
Figure DEST_PATH_IMAGE063
8 subregions are arranged, are respectively:
With (
Figure 785031DEST_PATH_IMAGE030
) be starting point,
Figure 471227DEST_PATH_IMAGE064
Cube ,
With (
Figure 740535DEST_PATH_IMAGE033
) be starting point,
Figure 667034DEST_PATH_IMAGE064
Cube
Figure 592264DEST_PATH_IMAGE066
,
With (
Figure 878889DEST_PATH_IMAGE039
) be starting point,
Figure 205965DEST_PATH_IMAGE064
Cube
Figure DEST_PATH_IMAGE067
,
With (
Figure 37786DEST_PATH_IMAGE046
) be starting point,
Figure 450313DEST_PATH_IMAGE064
Cube
Figure 540629DEST_PATH_IMAGE068
,
With (
Figure 722211DEST_PATH_IMAGE052
) be starting point,
Figure 990513DEST_PATH_IMAGE064
Cube
Figure DEST_PATH_IMAGE069
,
With (
Figure 687073DEST_PATH_IMAGE061
) be starting point, Cube
Figure 370831DEST_PATH_IMAGE070
,
With (
Figure 59301DEST_PATH_IMAGE054
) be starting point,
Figure 993891DEST_PATH_IMAGE064
Cube ,
With (
Figure 426009DEST_PATH_IMAGE072
) be starting point,
Figure 582184DEST_PATH_IMAGE064
Cube
Figure DEST_PATH_IMAGE073
G)
Figure 192288DEST_PATH_IMAGE074
2 subregions are arranged, are respectively:
With (
Figure 801124DEST_PATH_IMAGE030
) be starting point,
Figure DEST_PATH_IMAGE075
Cube be
Figure 456840DEST_PATH_IMAGE076
,
With (
Figure 529838DEST_PATH_IMAGE052
) be starting point, Cube be
Figure DEST_PATH_IMAGE077
Above 7 kind of 3 dimension Haar wave filter,
Figure 469292DEST_PATH_IMAGE029
, With
Figure 859133DEST_PATH_IMAGE041
Static 3 dimension Haar wave filters, ,
Figure 456785DEST_PATH_IMAGE056
,
Figure 972080DEST_PATH_IMAGE063
With
Figure 754091DEST_PATH_IMAGE074
Dynamic 3 dimension Haar wave filters.
6. the characteristics of human body's extracting method based on video according to claim 4, it is characterized in that: the length of 3 described step 32) dimension Haar wave filter and the appearance of space-time range conversion body with, its wide and high in the wide and high range scale of space-time range conversion body with any yardstick generation in optional position.
7. the characteristics of human body's extracting method based on video according to claim 4 is characterized in that: tie up the Haar wave filter for every kind 3 described step 33) , use The expression subregion In pixel value and, use
Figure DEST_PATH_IMAGE079
Represent different
Figure 707387DEST_PATH_IMAGE078
And between difference,
Figure 188047DEST_PATH_IMAGE079
Computing method respectively as follows:
a)
Figure 808384DEST_PATH_IMAGE080
b)
Figure DEST_PATH_IMAGE081
c)
Figure 212952DEST_PATH_IMAGE082
d)
Figure DEST_PATH_IMAGE083
e)
f)
Figure DEST_PATH_IMAGE085
g)
Figure 699745DEST_PATH_IMAGE086
Wherein,
Figure DEST_PATH_IMAGE087
,
Figure 541799DEST_PATH_IMAGE088
Expression static three-dimensional Haar feature,
Figure DEST_PATH_IMAGE089
,
Figure 809445DEST_PATH_IMAGE090
With
Figure DEST_PATH_IMAGE091
Expression dynamic 3 D Haar feature.
CN2013101559865A 2013-04-28 2013-04-28 Video-based human body feature extraction method Pending CN103366172A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2013101559865A CN103366172A (en) 2013-04-28 2013-04-28 Video-based human body feature extraction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2013101559865A CN103366172A (en) 2013-04-28 2013-04-28 Video-based human body feature extraction method

Publications (1)

Publication Number Publication Date
CN103366172A true CN103366172A (en) 2013-10-23

Family

ID=49367474

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2013101559865A Pending CN103366172A (en) 2013-04-28 2013-04-28 Video-based human body feature extraction method

Country Status (1)

Country Link
CN (1) CN103366172A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105631463A (en) * 2014-11-28 2016-06-01 无锡慧眼电子科技有限公司 Time-space movement profile feature-based pedestrian detection method

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007041759A (en) * 2005-08-02 2007-02-15 Hitachi Eng Co Ltd Personal authentication device and method

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007041759A (en) * 2005-08-02 2007-02-15 Hitachi Eng Co Ltd Personal authentication device and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘亚洲: ""基于时空分析和多粒度特征表示的人体检测方法研究"", 《中国博士学位论文全文数据库 信息科技辑》, no. 5, 15 May 2011 (2011-05-15) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105631463A (en) * 2014-11-28 2016-06-01 无锡慧眼电子科技有限公司 Time-space movement profile feature-based pedestrian detection method

Similar Documents

Publication Publication Date Title
CN104933414B (en) A kind of living body faces detection method based on WLD-TOP
CN104050449B (en) A kind of face identification method and device
CN106548153B (en) Video abnormality detection method based on graph structure under multi-scale transform
CN103337072B (en) A kind of room objects analytic method based on texture and geometric attribute conjunctive model
CN105843386A (en) Virtual fitting system in shopping mall
CN107273835A (en) Act of violence intelligent detecting method based on video analysis
CN103248906B (en) Method and system for acquiring depth map of binocular stereo video sequence
CN102495998B (en) Static object detection method based on visual selective attention computation module
CN105243376A (en) Living body detection method and device
CN103049758A (en) Method for realizing remote authentication by fusing gait flow images (GFI) and head and shoulder procrustes mean shapes (HS-PMS)
CN104794737A (en) Depth-information-aided particle filter tracking method
CN103971122B (en) Three-dimensional face based on depth image describes method
CN102947863A (en) Moving-object detection device
US8953852B2 (en) Method for face recognition
CN108596193A (en) A kind of method and system for building the deep learning network structure for ear recognition
CN104065954B (en) A kind of disparity range method for quick of high definition three-dimensional video-frequency
Garg et al. Look no deeper: Recognizing places from opposing viewpoints under varying scene appearance using single-view depth estimation
CN103268482B (en) A kind of gesture of low complex degree is extracted and gesture degree of depth acquisition methods
CN105512610B (en) Human motion recognition method in a kind of video based on point-of-interest location information
CN103093480A (en) Particle filtering video image tracking method based on dual model
CN102708589B (en) Three-dimensional target multi-viewpoint view modeling method on basis of feature clustering
CN102592150A (en) Gait identification method of bidirectional two-dimensional principal component analysis based on fuzzy decision theory
CN105549746A (en) Action identification method based on acceleration sensing chip
CN104376312A (en) Face recognition method based on word bag compressed sensing feature extraction
CN102013101A (en) Blind detection method of permuted and tampered images subjected to fuzzy postprocessing

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20131023