CN102930334A - Video recognition counter for body silhouette - Google Patents


Info

Publication number
CN102930334A
Authority
CN
China
Prior art keywords
image
human body
angle
point
theta
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012103833408A
Other languages
Chinese (zh)
Other versions
CN102930334B (en)
Inventor
Inventor not disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Kaisen Century Technology Development Co.,Ltd.
Original Assignee
BEIJING KAISEN SHIJI TECHNOLOGY DEVELOPMENT Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BEIJING KAISEN SHIJI TECHNOLOGY DEVELOPMENT Co Ltd
Priority to CN2012103833408A
Publication of CN102930334A
Application granted
Publication of CN102930334B
Legal status: Expired - Fee Related

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention relates to a people counter that recognizes objects in video images at high speed on the basis of their silhouette features. To address the prior-art problem that human bodies are recognized with low accuracy, or not at all, because of external factors such as scene illumination, camera angle, human dimensions and occlusion, the contour features of the image are extracted with a Sobel operator; the background is eliminated by background subtraction, the object's contour features are extracted, and the contour is regenerated from contour-angle information; a standard contour model image of the object is established, and the standard contour model is matched against the generated contour in three steps: virtually setting the image center, calculating the matching rate, and calculating the matching position; the recognition result is output after matching. With the disclosed counter, the number of people can be recognized from saved human contour shapes alone; the recognition rate is hardly affected by interference factors such as clothing color, posture, background, shadows and hats; background information is eliminated effectively; and high recognition accuracy and high operation speed are obtained.

Description

Video recognition counter for body silhouette
Technical field
The invention belongs to the technical field of two-dimensional image recognition and to the field of intelligent video recognition, and specifically relates to a video human-silhouette recognition counter based on a method of recognizing human bodies from human contour features.
Background technology
Recognizing objects in video or images is one of the most valuable problems in the field of computer vision, and many good methods have appeared in industry and academia: some match the head, trunk and limbs separately and then combine the results, some are based on periodic features of human motion, and so on. Some classical and related methods are briefly described below.
S. Belongie and J. Malik proposed the Shape Context method, which recognizes objects in images from a set of sample points on the object's outer contour. The algorithm first computes the centroid of the target and then computes a log-polar histogram centered on the centroid. This basic method has two shortcomings: computing the centroid from its definition is a complicated and time-consuming process, and the method can only judge whether two targets are similar but cannot obtain the matching point set. Efficient recognition is therefore difficult to achieve, especially when the amount of computation is large.
The Active Shape Model and Active Appearance Model algorithms proposed by T.F. Cootes and C.J. Taylor are among the most effective methods for tasks such as face detection and recognition. Both are statistical methods, but the Active Shape Model recognizes objects by their outer-contour features, while the Active Appearance Model uses texture features. The two methods are widely applied in face recognition and medical image recognition, but they are only suitable for still-image recognition, not for real-time recognition in video image processing.
In addition, the Active Contour (snake) model algorithm is suitable for tracking moving targets in video and is very valuable in medical image recognition, but its performance demands on the computer are high, making it unsuitable for embedded systems.
The Mean Shift and CamShift algorithms are often used for moving-object tracking in video images. The core of both algorithms is the color of the tracked target. Their difficulty is that the initial position of the target is hard to determine, so tracking and recognition perform poorly when color information is lacking; tracking also tends to fail when the target moves irregularly, when targets are too close together, when other objects interfere, or when the target color is similar to the background color.
Summary of the invention
The feature quantities used for human body recognition directly determine the accuracy of recognition and the speed of computation. Aiming at the prior-art problem that human body recognition accuracy is low, or recognition fails, because of external factors such as scene illumination, camera angle, human dimensions and occlusion, the present invention provides a whole-body recognition method for complex scenes that effectively improves the recognition accuracy of objects in video and images, reduces system resource consumption, and accelerates recognition.
The present invention uses the contour information of the human body rather than color information, so black-and-white images can be recognized accurately; it does not use human body area information, which effectively reduces the amount of computation; and it does not consider human contours of the same type separately, which reduces the number of models. Only the distinct contour shapes of moving human bodies need to be saved: however their internal state changes, human bodies can be recognized and counted accurately.
The application extracts the contour features of the input video image with a Sobel operator; eliminates the background by background subtraction, extracts the object's contour features, and regenerates the contour from the contour-angle information; establishes a standard contour model image of the object; matches the standard contour model against the generated contour in three steps, namely virtually setting the image center, calculating the matching rate, and calculating the matching position; and outputs the recognition result after matching.
In practice, when counting human bodies moving through the detection area, only the contour shapes of human bodies need to be saved in order to count people. The recognition rate is hardly affected by interference factors such as clothing color, posture, background, shadows and hats, and targets (such as people, vehicles and various objects) and their quantities can be recognized at high speed from contour features.
The people counter developed by the present invention can be used in fields such as prison security, personnel control in hazardous areas, overload detection in passenger carriages, early warning in crowded places, and passenger-flow analysis in shopping malls, and has significant economic and social value. The invention applies not only to human recognition and counting but also broadly to the field of object recognition.
Description of drawings
Fig. 1: camera shooting angle;
Fig. 2: input video image;
Fig. 3: contour feature image;
Fig. 4: extracted target contour image;
Fig. 5: regenerated target contour image;
Fig. 6: elements of the human body standard contour model image;
Fig. 7: point positions of the human body standard contour model image;
Fig. 8: human body standard contour images at different positions;
Fig. 9: human body standard contour model image near the video image center point;
Fig. 10: human body standard contour model image far from the video image center point;
Fig. 11: feature point vector of the human body standard contour model;
Fig. 12: virtual center point positions;
Fig. 13: contour matching result;
Fig. 14: recognition result output interface.
Embodiment
A specific embodiment of the present invention is described below with reference to Figs. 1-14.
Step 1: image input
The invention uses a camera to input video images. The color of the video picture is not limited: both black-and-white and color images are acceptable. The lens is not limited: mainstream CCD or CMOS lenses are both acceptable. The shooting angle of the camera is not limited and all angles can be recognized, although the overhead view gives the highest recognition accuracy. The camera's field of view, mounting height, image deformation and so on are not limited; it is only necessary to establish corresponding human body standard contour models for different cameras.
In this example the camera captures video from an overhead view, as shown in Fig. 1. The input video image is shown in Fig. 2.
Step 2: image edge detection
For the input video image we use the Sobel operator for edge detection, to extract the contour features of the image. The invention uses a variable M × N matrix rather than the common 3 × 3 matrix. The matrix size can be adjusted according to the required recognition accuracy: the larger M and N are, the better the edge-detection effect. A 5 × 5 matrix is used in this example.
The input video image first undergoes edge detection to extract its contour features. Edge detection uses the Sobel operator, of which there are two: one detects horizontal edges and one detects vertical edges. The common Sobel operator consists of two 3 × 3 matrices, one horizontal and one vertical; convolving them with the image in the plane yields approximate horizontal and vertical brightness differences respectively.
Suppose the vertical-direction and horizontal-direction operators for the input two-dimensional image are v(x, y) and h(x, y) respectively, and the matrix size is M × N. Edge detection with the Sobel operator then proceeds as follows, where C(x, y) is the contour feature image produced by edge detection:
C_x(x,y) = \sum_{p=-M/2}^{M/2} \sum_{q=-N/2}^{N/2} I(x+p, y+q) \times h(p,q)    (1)

C_y(x,y) = \sum_{p=-M/2}^{M/2} \sum_{q=-N/2}^{N/2} I(x+p, y+q) \times v(p,q)    (2)

C(x,y) = \sqrt{C_x(x,y)^2 + C_y(x,y)^2}    (3)

\theta(x,y) = \tan^{-1}(C_x(x,y) / C_y(x,y))    (4)
where C_x(x,y) and C_y(x,y) are the images after horizontal and vertical edge detection respectively, and \theta(x,y) is the gradient direction at pixel (x,y).
In this step we use a variable M × N matrix rather than the common 3 × 3 matrix, so that the matrix size can be adjusted according to the accuracy requirement: the larger M and N, the better the edge-detection effect. In practice we generally use a 5 × 5 matrix, shown below, where C represents the center of the matrix.
[Image in original: the 5 × 5 horizontal-direction and vertical-direction operator matrices]
Through image edge detection, the contour features of the input video image are extracted, generating the contour feature image C(x, y), as shown in Fig. 3.
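By way of a minimal sketch, the Python code below computes C(x, y) and θ(x, y) per equations (1)-(4). It assumes NumPy and SciPy; the function name contour_features is illustrative, and the common 3 × 3 Sobel kernels stand in for the patent's 5 × 5 coefficients, which appear only as an image in the source.

```python
import numpy as np
from scipy.ndimage import correlate

# Common 3x3 Sobel kernels; the patent uses a variable M x N kernel
# (5 x 5 in its example), whose coefficients are shown only as an image.
h = np.array([[-1, 0, 1],
              [-2, 0, 2],
              [-1, 0, 1]], dtype=float)   # horizontal-direction operator h(p, q)
v = h.T                                   # vertical-direction operator v(p, q)

def contour_features(image):
    """Contour strength C(x, y) and angle theta(x, y), equations (1)-(4)."""
    img = image.astype(float)
    cx = correlate(img, h)        # eq. (1): C_x = sum of I(x+p, y+q) * h(p, q)
    cy = correlate(img, v)        # eq. (2): C_y = sum of I(x+p, y+q) * v(p, q)
    c = np.hypot(cx, cy)          # eq. (3): C = sqrt(C_x^2 + C_y^2)
    theta = np.arctan2(cx, cy)    # eq. (4): theta = tan^-1(C_x / C_y)
    return c, theta
```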
Step 3: background modeling
Background modeling is the basis for separating target and background in the image. The background model is built by examining whether points in the image appear continuously. Two feature quantities, intensity and angle, are considered in this process; compared with considering intensity alone, this gives higher reliability.
Although edge detection generates the contour feature image, it does not separate the target (people) in the image from the noise (non-people parts). After edge detection the contour feature image contains much noise, which we call background. This noise not only reduces the recognition rate but also increases the amount of computation.
We determine the background by comparing consecutive images. Many consecutive contour feature images are compared; if a point (x, y) appears continuously in many of them, it is background, otherwise it is target. The concrete implementation is as follows.
Suppose the contour feature image at time t is C_t(x,y) and its contour angle is \theta_t(x,y). The previous contour feature image C_{t-1}(x,y), against which it is compared, is called the background contour feature image; the background intensity is B_t(x,y) and the background angle is B_\theta^t(x,y). The background model is then built as follows:

B_{t+1}(x,y) = \begin{cases} C_t(x,y), & C_{t-1}(x,y) > N_{tr} \\ B_t(x,y), & C_{t-1}(x,y) \le N_{tr} \end{cases}    (5)

B_\theta^{t+1}(x,y) = \begin{cases} \theta_t(x,y), & C_{t-1}(x,y) > N_{tr} \\ B_\theta^t(x,y), & C_{t-1}(x,y) \le N_{tr} \end{cases}    (6)

B_0(x,y) = C_0(x,y),  B_\theta^0(x,y) = \theta_0(x,y)    (7)
where N_{tr} is the foreground threshold used to distinguish background from foreground. It is set empirically, with a range of 100 to 1000. A setting of 100 refers to 100 frames, meaning: if a location point (x, y) keeps appearing in images updated at a rate of 10 frames per second, and does so for 10 seconds, the point is regarded as background. With this, the intensity and angle of the background model are determined.
In this step we determine the background model by real-time updating: if the background of the input video changes in practice, our background model changes with it. This method is therefore essentially unaffected by factors such as light, color and interfering objects.
In addition, our background model has two feature quantities, intensity and angle. Compared with using intensity contrast alone, the model built this way has higher reliability.
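A minimal sketch of the update rules (5)-(7), assuming NumPy arrays of equal shape (the helper name update_background is illustrative):

```python
import numpy as np

def update_background(c_prev, c, theta, b, b_theta, n_tr):
    """One update of the two-channel background model, equations (5)-(6).

    c_prev     -- previous contour image C_{t-1}
    c, theta   -- current contour image C_t and contour angle theta_t
    b, b_theta -- background intensity B_t and background angle B_theta^t
    n_tr       -- foreground threshold N_tr
    """
    persistent = c_prev > n_tr                            # point keeps appearing
    b_next = np.where(persistent, c, b)                   # eq. (5)
    b_theta_next = np.where(persistent, theta, b_theta)   # eq. (6)
    return b_next, b_theta_next

# Initialization, eq. (7): the first frame seeds both channels.
# b, b_theta = c0.copy(), theta0.copy()
```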
Step 4: extracting the target contour image
The target contour image is extracted by background subtraction: after the contour feature image is compared with the background model in real time, the remaining points form the target contour image, separating foreground from background. In this process the two feature quantities, intensity and angle, are still considered in order to improve reliability.
The purpose of background modeling is to distinguish the target (people) in the image from the noise. Through step 3 we have established the background model; comparing the contour feature image with the background model extracts the target contour in the image.
If a point satisfies

|\theta_t(x,y) - B_\theta^t(x,y)| \le \pi/12  and  |C_t(x,y) - B_t(x,y)| \le L_{tr}    (8)

then C_t(x,y) = 0.

Here \pi/12 is the threshold for assessing the contour angle difference, and L_{tr} is the threshold for assessing the contour intensity difference, a quantity decided according to intensity: when the contrast of the image is relatively small it is set to a small value, and when the contrast is large it is set to a large value; the range of L_{tr} is 0-12. Points with C_t(x,y) = 0 are background pixels, i.e. noise; the remaining points are target.
Through background modeling and target contour extraction, the noise in the image is eliminated and an image with the background removed is generated, namely the extracted target contour image C_T(x,y), as shown in Fig. 4.
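The background-subtraction test of equation (8) could look as follows in the same sketch style (NumPy assumed; the thresholds are passed in by the caller):

```python
import numpy as np

def extract_target_contour(c, theta, b, b_theta, l_tr):
    """Background subtraction per equation (8): zero out points whose angle
    and intensity both stay close to the background model."""
    is_background = (np.abs(theta - b_theta) <= np.pi / 12) & \
                    (np.abs(c - b) <= l_tr)
    c_target = c.copy()
    c_target[is_background] = 0   # background/noise points; the rest is target
    return c_target
```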
Step 5: regenerating the target contour image
The extracted target contour image still has two feature quantities, intensity and angle. The angle feature, which reflects the appearance of the contour, is the essential feature; the intensity feature, which reflects contour strength, is only auxiliary to forming the contour image. To reduce the amount of computation, we eliminate the intensity feature and rearrange the pixels, making the target contour image clearer.
As can be seen from Fig. 4, the image after target contour extraction still has two feature quantities. To eliminate the intensity feature from the contour, we rearrange the pixels to regenerate the target contour image.
First, a suitable intensity threshold C_{tr} is given, and pixels with C_T(x,y) < C_{tr} are not considered. C_{tr} is the threshold for assessing contour intensity, decided according to intensity: when the contrast of the image is relatively small it is set to a small value, and when the contrast is large it is set to a large value; its range is 0-12.
Second, the point positions of each contour angle are arranged to form the contour image. The arrangement Q(\theta) of the points of angle \theta is

Q(\theta) = \{P_i\}_{i=1,\dots,N_\theta}    (9)

where P_i = (x_i, y_i) is the i-th point of angle \theta and N_\theta is the number of points of angle \theta.
The contour image defined by formula (9) no longer has contour intensity information; it preserves only the point positions of each angle, which reflect the contour shape. Through this computation the target contour image is regenerated, and the angle information of every pixel is known. Compared with the extracted target contour image, the regenerated one is clearer, as shown in Fig. 5.
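A sketch of the angle-binned rearrangement of equation (9), assuming the arrays from the earlier sketches; quantizing θ into a fixed number of bins anticipates the feature angles used in step 6:

```python
import numpy as np
from collections import defaultdict

def regenerate_contour(c_target, theta, c_tr, n_bins=16):
    """Arrange surviving points by quantized contour angle, equation (9).

    Returns Q: angle bin -> list of (x, y) points, dropping intensity and
    ignoring pixels with C_T(x, y) < C_tr.
    """
    q = defaultdict(list)
    ys, xs = np.nonzero(c_target >= c_tr)
    for x, y in zip(xs, ys):
        # theta from arctan2 lies in [-pi, pi]; map it to one of n_bins bins
        bin_idx = int((theta[y, x] + np.pi) / (2 * np.pi) * n_bins) % n_bins
        q[bin_idx].append((x, y))
    return q
```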
Step 6: establishing the human body standard model
To determine which objects in the input image are people, the various contour images of the recognition target (the human body) must be established in advance; these images are called human body standard contour model images. We make them manually, building human body standard models one by one for video recordings with different shooting heights, angles and picture deformations.
The human body standard contour model images are made as follows:
A human body contour feature image that the naked eye can clearly identify during edge detection is cut out and placed at the center of a blank image without background. The images should cover the center of the image, the human head and body, different positions of the human body, and different human sizes, as shown in Fig. 6.
We extract the shape information of the image, connecting neighboring pixels with single pixels to obtain the appearance of the head and body; the center can be set at discretion. In this way the human body standard contour model image is obtained.
With the image center as the reference point c = (0, 0), the position of a point is p(x, y) = (x, y). The model image is then an arrangement of point positions, constructed as follows:

M(\theta) = \{P_i\}_{i=1,\dots,M_\theta}    (10)

where P_i is the position of the i-th of the M_\theta points whose angle is \theta.
In practical applications, not all continuous angles are used; feature quantities at certain intervals are used instead. Suitable feature quantities reduce the influence of noise in matching, and the interval depends on the required recognition accuracy.
Suppose the angles of the feature quantities taken are \theta_q (1 \le q \le S), where S is the number of feature quantities; its range is 0-180, and a value of 12-18 is generally most suitable. The k-th model image is then a two-dimensional arrangement of point positions over the different angles:

M_k = \{M_k(\theta_q)\}_{q=1,\dots,S}    (11)

The points P_i in formula (9) are point positions in the image coordinate system, while the points p_i in formula (11) are point positions with the center point of the human body standard contour model image as the reference, as shown in Fig. 7.
Although they are equally human bodies, contours at different positions in the input video image have different appearances. For example, when the camera looks vertically down at the ground, contours nearer to the video image center point are smaller and contours farther from it are larger, as shown in Fig. 8.
In the model-matching stage, only model images near the point position in the input image are matched, and distant ones are not considered; this both guarantees the recognition rate and reduces the amount of computation. Therefore, when the standard model images are created, the model image point-position information of formula (11) is saved together with the center-position information.
Through the computation of this step we can establish the human body standard contour model images, shown in Figs. 9 and 10. It can be seen that a human body standard model image has only the angle feature quantity, and that we know the center point of each human body standard model and the angle of each feature point.
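Under the same assumptions, a standard model per equations (10)-(11) might simply be stored as center-relative point positions per feature angle (the helper name is illustrative):

```python
def build_standard_model(points_by_angle, center):
    """Store model point positions relative to the model center C,
    per equations (10)-(11): angle bin -> center-relative (x, y) points."""
    cx, cy = center
    return {theta_q: [(x - cx, y - cy) for (x, y) in pts]
            for theta_q, pts in points_by_angle.items()}
```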
Step 7: contour matching
The generated target contour image is compared with the human body standard model images: if they match, a person is identified and counted; if not, no person is identified. Contour matching requires three steps: virtually setting the target contour image center, calculating the matching rate, and calculating the matching position.
We have regenerated the target contour image (Fig. 5) from the video image and established the human body standard contour model images (Figs. 9 and 10). To detect whether these targets are human bodies, the human body standard contour model images must be matched against the target contour image.
1. Virtually setting the target contour image center
The target contour image has no center point, while the standard human body contour model image has a center point C. To match the two, the center of the target contour image must be found in advance.
The horizontal and vertical edges of the human body standard contour model image and of the target contour image are taken as the x and y axes of their respective coordinate systems. The N points of angle \theta in the target contour image are compared with the M points of angle \theta in the human body standard contour model image.
Suppose the points of angle \theta_q in the target contour image are P_i^Q = (x_i^Q, y_i^Q), the points of angle \theta_q in the human body standard contour model image are P_j^M = (x_j^M, y_j^M), and the vector from point P_j^M to the model image center C is l, as shown in Fig. 11.
A coordinate system is set up centered on point P_i^Q of the target contour image, and P_1, P_2, P_3, P_4 are the four symmetric points taken with l as the vector; these four points are virtual centers of the target contour image. Because point P_i^Q carries an angle, one of the four virtual centers generated for each point may be the actual center, as shown in Fig. 12.
It can be seen that 4 × N × M virtual centers can be generated at a single angle; over all feature angles, the total number of virtual centers is determined accordingly by s, the number of feature points of the human body standard contour model. Four virtual centers are examined because the position of the target contour image center is unknown. If the target contour image and a human body standard model image match, one of the four virtual centers must be the actual center; the position in the target contour image where virtual centers are most concentrated is therefore its actual center.
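A sketch of the virtual-center generation for one feature angle; the exact symmetric placement of the vector l is an assumption, since the source shows it only in Fig. 12:

```python
def virtual_centers(target_pts, model_pts, model_center):
    """Step 7.1 sketch: for each pairing of a target point and a model point
    of the same angle, place the model-point-to-center vector l around the
    target point in four symmetric ways."""
    cx, cy = model_center
    centers = []
    for (xq, yq) in target_pts:        # N points of this angle in the target
        for (xm, ym) in model_pts:     # M points of this angle in the model
            lx, ly = cx - xm, cy - ym  # vector l from model point to C
            centers += [(xq + lx, yq + ly), (xq - lx, yq - ly),
                        (xq + lx, yq - ly), (xq - lx, yq + ly)]
    return centers                     # 4 * N * M candidate centers
```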
2. Calculating the matching rate
When establishing the human body standard contour model images, we built n of them, each composed of a different number of points. Suppose the point count of the k-th human body standard contour model image is n_k and the average point count per model image is \bar{n}, where k is the sequence number of the model image under examination. For the k-th model, the total number of points within the 3 × 3 pixel range around the corresponding virtual center point of the target contour image is s_k(x,y). The matching rate of image k at position (x,y) is then evaluated as follows:
R_k(x,y) = \alpha \times s_k(x,y) + (1 - \alpha) \times s_k(x,y) \times \bar{n} / n_k    (12)

where \alpha is a weight parameter that decides which is more important, the absolute match count s_k(x,y) or the relative match count s_k(x,y) \times \bar{n} / n_k. The relative match count is the ratio of the absolute match count s_k(x,y) to the model image point count n_k, multiplied by the average point count \bar{n}. The smaller \alpha is, the larger the weight of the relative match count; the larger \alpha is, the larger the weight of the absolute match count. The range of \alpha is 0.1 to 0.3.
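Equation (12) translates directly into the sketch below (names are illustrative; n_bar is the average model point count \bar{n}):

```python
def matching_rate(s_k, n_k, n_bar, alpha=0.2):
    """Matching rate of model k at one position, equation (12).

    s_k   -- absolute match count s_k(x, y) around the virtual center
    n_k   -- point count of model image k
    n_bar -- average point count over all model images
    alpha -- weight between absolute and relative match counts (0.1-0.3)
    """
    return alpha * s_k + (1 - alpha) * s_k * n_bar / n_k
```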
3. Deciding the matching position
From the k matching-rate matrices, the optimum matching rate R(x,y) and the optimum matching human body standard contour model image T(x,y) are determined with the following formulas:

R(x,y) = \max_k \{R_k(x,y)\}    (13)

T(x,y) = \arg\max_k \{R_k(x,y)\}    (14)
When there are several targets in a single image, several model images may be matched. Therefore, after the maximum of R(x,y) is found, its position is taken as the first matching position; then, according to the size of the model image, the second matching position is found outside a certain area, and all matching positions are found in this way. Matching stops when R(x,y) falls below the threshold.
The threshold is determined with the following formula:

R_k = \beta \times (\alpha \times n_k + (1 - \alpha))    (15)

where a \beta value of 0.3 to 0.35 is most suitable.
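A greedy sketch of the matched-position decision, equations (13)-(15); the exclusion window excl stands in for "outside a certain area according to the size of the model image":

```python
import numpy as np

def decide_matches(rate_maps, n_k, alpha=0.2, beta=0.32, excl=20):
    """Greedy matched-position decision, equations (13)-(15).

    rate_maps -- array of shape (k, H, W) holding R_k(x, y) per model
    n_k       -- point count of each model image, indexable by k
    excl      -- half-size of the exclusion window around each match
    """
    r = rate_maps.max(axis=0).astype(float)   # eq. (13): best rate per pixel
    t = rate_maps.argmax(axis=0)              # eq. (14): best model per pixel
    matches = []
    while True:
        y, x = np.unravel_index(np.argmax(r), r.shape)
        k = int(t[y, x])
        threshold = beta * (alpha * n_k[k] + (1 - alpha))   # eq. (15)
        if r[y, x] < threshold:
            break                             # stop once below the threshold
        matches.append((x, y, k))
        # suppress the neighborhood so the next maximum is a new target
        r[max(0, y - excl):y + excl, max(0, x - excl):x + excl] = -np.inf
    return matches
```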
In Fig. 13, white contours represent matched human body standard contour model images, grey contours represent model images abandoned because of low matching rates, and the square region is the counting detection zone.
Step 8: result output
Through the computation of steps 1 to 7 we have found the human contours in the image; the number of contours is the number of people in the image. Fig. 14 shows the recognition result output interface.

Claims (9)

1. A people counter based on contour recognition, comprising a video input device, a processing module and a memory, characterized in that the counter counts in the following way:
Step 1: image input
the video input device inputs the video;
Step 2: image edge detection
the input two-dimensional image is I(x, y); the Sobel operator is adopted, with vertical-direction and horizontal-direction operators v(x, y) and h(x, y) respectively and a matrix size of M × N; the contour feature image C(x, y) is calculated as follows:
C_x(x,y) = \sum_{p=-M/2}^{M/2} \sum_{q=-N/2}^{N/2} I(x+p, y+q) \times h(p,q)    (1)

C_y(x,y) = \sum_{p=-M/2}^{M/2} \sum_{q=-N/2}^{N/2} I(x+p, y+q) \times v(p,q)    (2)

C(x,y) = \sqrt{C_x(x,y)^2 + C_y(x,y)^2}    (3)

\theta(x,y) = \tan^{-1}(C_x(x,y) / C_y(x,y))    (4)

where C_x(x,y) and C_y(x,y) are the images after horizontal and vertical edge detection respectively, and \theta(x,y) is the contour angle;
Step 3: background modeling
the contour feature image at time t is C_t(x,y), the contour angle is \theta_t(x,y), the background contour is P_t(x,y), the background intensity model is B_t(x,y), and the background angle model is B_\theta^t(x,y); the background model is updated as follows:

B_{t+1}(x,y) = \begin{cases} C_t(x,y), & C_{t-1}(x,y) > N_{tr} \\ B_t(x,y), & C_{t-1}(x,y) \le N_{tr} \end{cases}    (5)

B_\theta^{t+1}(x,y) = \begin{cases} \theta_t(x,y), & C_{t-1}(x,y) > N_{tr} \\ B_\theta^t(x,y), & C_{t-1}(x,y) \le N_{tr} \end{cases}    (6)

B_0(x,y) = C_0(x,y),  B_\theta^0(x,y) = \theta_0(x,y)    (7)

where N_{tr} is the foreground threshold;
Step 4: extracting the target contour image
if a point satisfies

|\theta_t(x,y) - B_\theta^t(x,y)| \le \pi/12  and  |C_t(x,y) - B_t(x,y)| \le L_{tr}

then C_t(x,y) = 0,

where \pi/12 is the threshold for assessing the contour angle difference and L_{tr} is the threshold for assessing the contour intensity difference; the contour feature image C_t(x,y) after this step is designated the contour feature image C_T(x,y);
Step 5: generating the target contour image
first, a suitable intensity threshold C_{tr} is given and pixels with C_T(x,y) < C_{tr} are not considered; second, the point positions of each contour angle are arranged to form the contour image, the arrangement Q(\theta) of the points of angle \theta being:

Q(\theta) = \{P_i\}_{i=1,\dots,N_\theta}    (7)

where P_i = (x_i, y_i) is the i-th point of angle \theta and N_\theta is the number of points of angle \theta;
Step 6: establishing the human body standard model
first, the contour feature image obtained in step 2 is placed in a blank image without background, the shape information of the image is extracted, and neighboring pixels are connected with single pixels to obtain the appearance of the head and body;
with the image center as c = (0, 0), the position of a point is p(x, y) = (x, y); the model image is then an arrangement of point positions, constructed as follows:

M(\theta) = \{P_i\}_{i=1,\dots,M_\theta}    (8)

where P_i is the position of the i-th of the M_\theta points whose angle is \theta;
supposing the angles of the feature quantities taken are \theta_q (1 \le q \le S), where S is the number of feature quantities, the k-th model image is a two-dimensional arrangement of point positions over the different angles:

M_k = \{M_k(\theta_q)\}_{q=1,\dots,S}    (9)
Step 7: contour matching
1. virtually setting the target contour image center
the vertical and horizontal edges of the picture to be detected, in which the human body standard model and the target contour feature image lie, are taken as the x and y axes of the coordinate system respectively; the N points of angle \theta in the target contour feature image are compared with the M points of angle \theta in the human body standard model image; supposing the points of angle \theta_q in the target contour feature image are P_i^Q = (x_i^Q, y_i^Q), the points of angle \theta_q in the human body standard model image are P_j^M = (x_j^M, y_j^M), and the vector from point P_j^M to the human body standard model image center C is l, a coordinate system is set up centered on point P_i^Q of the target contour feature image, and P_1, P_2, P_3, P_4 are the four symmetric points taken with l as the vector; these four points are the virtual centers of the target contour feature image;
2. calculating the matching rate
supposing the point count of the k-th human body standard model image is n_k and the average point count per model image is \bar{n}, where k is the sequence number of the model image under examination, for the k-th model the total number of points within a certain range around the corresponding virtual center point of the target contour feature image is s_k(x,y), and the matching rate of image k at position (x,y) is evaluated as follows:

R_k(x,y) = \alpha \times s_k(x,y) + (1 - \alpha) \times s_k(x,y) \times \bar{n} / n_k    (10)
where \alpha is a weight parameter with a range of 0.1-0.3;
3. deciding the matching position
from the k matching rates, the optimum matching rate R(x,y) and the optimum matching human body standard model image T(x,y) are determined with the following formulas:

R(x,y) = \max_k \{R_k(x,y)\}    (11)

T(x,y) = \arg\max_k \{R_k(x,y)\}    (12)

when there are several targets in a single image, several human body standard model images may be matched; therefore, after the maximum of R(x,y) is found, its position is taken as the first matching position, then, according to the size of the model image, the second matching position is found outside a certain area, and all matching positions are found in this way; matching stops when R(x,y) is less than the threshold, which is determined with the following formula:

R_k = \beta \times (\alpha \times n_k + (1 - \alpha))    (13)
where \beta is assigned a value of 0.3-0.35;
Step 8: result output
the detected number of people and the recognized image are output.
2. The people counter as claimed in claim 1, characterized in that the video input device is a camera.
3. The people counter as claimed in claim 1, characterized in that M and N are both 5 in formulas (1) and (2), and the corresponding vertical-direction and horizontal-direction operators are:

[Image in original: the 5 × 5 vertical-direction and horizontal-direction operator matrices]

where C represents the center of the matrix.
4. The people counter as claimed in claim 1, characterized in that the value of N_{tr} in step 3 is 100-1000.
5. The people counter as claimed in claim 1, characterized in that, according to the viewing angle of the video input device, the collected picture is divided into different regions, human body standard models are established region by region, and the target human body model is matched in the corresponding region.
6. The people counter as claimed in claim 1, characterized in that the certain range around the center point described in step 7 is a 3 × 3 pixel range.
7. The people counter as claimed in claim 1, characterized in that L_{tr} in step 4 is directly proportional to the contrast of the image, with a preferred value of 0-12.
8. The people counter as claimed in claim 1, characterized in that the human body standard models established in step 6 are stored in the memory.
9. The people counter as claimed in claim 1, characterized in that the processing module is responsible for the calculation of each of the above steps.
CN2012103833408A 2012-10-10 2012-10-10 Video recognition counter for body silhouette Expired - Fee Related CN102930334B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012103833408A CN102930334B (en) 2012-10-10 2012-10-10 Video recognition counter for body silhouette


Publications (2)

Publication Number Publication Date
CN102930334A (en) 2013-02-13
CN102930334B CN102930334B (en) 2013-08-14

Family

ID=47645128

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012103833408A Expired - Fee Related CN102930334B (en) 2012-10-10 2012-10-10 Video recognition counter for body silhouette

Country Status (1)

Country Link
CN (1) CN102930334B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101636031A (en) * 2008-07-23 2010-01-27 优志旺电机株式会社 Ultraviolet irradiation device and lighting control method thereof
CN101587541A (en) * 2009-06-18 2009-11-25 上海交通大学 Character recognition method based on human body contour outline
CN101872422A (en) * 2010-02-10 2010-10-27 杭州海康威视软件有限公司 People flow rate statistical method and system capable of precisely identifying targets
CN101908150A (en) * 2010-06-25 2010-12-08 北京交通大学 Human body detection method
CN102054306A (en) * 2011-01-31 2011-05-11 潘海朗 Method and system for detecting pedestrian flow by adopting deformable two-dimensional curves

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103605967A (en) * 2013-11-26 2014-02-26 东华大学 Subway fare evasion prevention system and working method thereof based on image recognition
CN103679212A (en) * 2013-12-06 2014-03-26 无锡清华信息科学与技术国家实验室物联网技术中心 Method for detecting and counting personnel based on video image
CN105550743A (en) * 2015-12-10 2016-05-04 世纪美映影院技术服务(北京)有限公司 Method for counting number of persons in building
CN105631609A (en) * 2016-02-04 2016-06-01 王爱玲 Power distribution cabinet
CN107704874A (en) * 2017-09-29 2018-02-16 上海与德通讯技术有限公司 Intelligent robot and its recognition methods and computer-readable recording medium
CN109345558A (en) * 2018-10-29 2019-02-15 网易(杭州)网络有限公司 Image processing method, device, medium and electronic equipment
CN111428546A (en) * 2019-04-11 2020-07-17 杭州海康威视数字技术股份有限公司 Method and device for marking human body in image, electronic equipment and storage medium
CN111428546B (en) * 2019-04-11 2023-10-13 杭州海康威视数字技术股份有限公司 Method and device for marking human body in image, electronic equipment and storage medium
CN110701741A (en) * 2019-10-10 2020-01-17 珠海格力电器股份有限公司 Air conditioning unit regulating and controlling method and air conditioning unit
CN111369394B (en) * 2020-02-27 2021-05-14 浙江力石科技股份有限公司 Scenic spot passenger flow volume statistical evaluation system and method based on big data
CN111369394A (en) * 2020-02-27 2020-07-03 吴秋琴 Scenic spot passenger flow volume statistical evaluation system and method based on big data
CN112860059A (en) * 2021-01-08 2021-05-28 广州朗国电子科技有限公司 Image identification method and device based on eyeball tracking and storage medium
CN113837052A (en) * 2021-09-18 2021-12-24 泰州市雷信农机电制造有限公司 Current-limiting trigger system based on block chain
CN113947546A (en) * 2021-10-18 2022-01-18 江阴市人人达科技有限公司 Image picture multi-layer filtering processing system

Also Published As

Publication number Publication date
CN102930334B (en) 2013-08-14

Similar Documents

Publication Publication Date Title
CN102930334B (en) Video recognition counter for body silhouette
US10423856B2 (en) Vector engine and methodologies using digital neuromorphic (NM) data
US10387741B2 (en) Digital neuromorphic (NM) sensor array, detector, engine and methodologies
CN106127148B (en) A kind of escalator passenger's anomaly detection method based on machine vision
CN103824070B (en) A kind of rapid pedestrian detection method based on computer vision
CN103559703B (en) Crane barrier based on binocular vision is monitored and method for early warning and system
CN103310444B (en) A kind of method of the monitoring people counting based on overhead camera head
WO2018023916A1 (en) Shadow removing method for color image and application
CN103279737B (en) A kind of behavioral value method of fighting based on space-time interest points
CN107909604A (en) Dynamic object movement locus recognition methods based on binocular vision
CN102622584B (en) Method for detecting mask faces in video monitor
CN103164858A (en) Adhered crowd segmenting and tracking methods based on superpixel and graph model
CN104063702A (en) Three-dimensional gait recognition based on shielding recovery and partial similarity matching
CN104061907A (en) Viewing-angle greatly-variable gait recognition method based on gait three-dimensional contour matching synthesis
CN107392885A (en) A kind of method for detecting infrared puniness target of view-based access control model contrast mechanism
CN104715238A (en) Pedestrian detection method based on multi-feature fusion
CN102156880A (en) Method for detecting abnormal crowd behavior based on improved social force model
CN104036488A (en) Binocular vision-based human body posture and action research method
CN105354856A (en) Human matching and positioning method and system based on MSER and ORB
CN104463869A (en) Video flame image composite recognition method
CN110189375A (en) A kind of images steganalysis method based on monocular vision measurement
CN110021029A (en) A kind of real-time dynamic registration method and storage medium suitable for RGBD-SLAM
Tian et al. Human Detection using HOG Features of Head and Shoulder Based on Depth Map.
CN106295657A (en) A kind of method extracting human height's feature during video data structure
CN101873506B (en) Image processing method for providing depth information and image processing system thereof

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CP03 Change of name, title or address
CP03 Change of name, title or address

Address after: Room 301, No.389, Shengzhou Road, Qinhuai District, Nanjing City, Jiangsu Province

Patentee after: Nanjing Kaisen Century Technology Development Co.,Ltd.

Address before: 100085, room 115, building 5, building 1, East Road, Haidian District, Beijing

Patentee before: BEIJING KEYSEEN TECHNOLOGY DEVELOPMENT Co.,Ltd.

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130814