CN102930334B - Video recognition counter for body silhouette - Google Patents

Video recognition counter for body silhouette

Info

Publication number
CN102930334B
CN102930334B (application CN2012103833408A / CN201210383340A)
Authority
CN
China
Prior art keywords
image
human body
point
angle
people
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2012103833408A
Other languages
Chinese (zh)
Other versions
CN102930334A (en)
Inventor
Inventor not disclosed (不公告发明人)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Kaisen Century Technology Development Co.,Ltd.
Original Assignee
BEIJING KAISEN SHIJI TECHNOLOGY DEVELOPMENT Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BEIJING KAISEN SHIJI TECHNOLOGY DEVELOPMENT Co Ltd filed Critical BEIJING KAISEN SHIJI TECHNOLOGY DEVELOPMENT Co Ltd
Priority to CN2012103833408A priority Critical patent/CN102930334B/en
Publication of CN102930334A publication Critical patent/CN102930334A/en
Application granted granted Critical
Publication of CN102930334B publication Critical patent/CN102930334B/en

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention relates to a people counter. Objects in a video image are recognized at high speed on the basis of their contour (silhouette) features. To address the prior-art problem that human bodies are recognized with low accuracy, or cannot be recognized at all, because of external factors such as scene illumination, camera angle, body size and occlusion, the contour features of the image are extracted with a Sobel operator; the background is eliminated by background subtraction, the object contour features are extracted, and the contour is regenerated from contour-angle information; a standard contour model image of the object is established, and matching between the standard contour model and the generated contour is realized in three steps: virtually locating the image center, calculating the matching rate, and calculating the matched position; after matching, the recognition result is output. With the disclosed counter the number of people can be recognized from the stored human contour shapes alone; the recognition rate is hardly affected by interference factors such as clothing color, posture, background, shadows and hats; background information is effectively eliminated; and high recognition accuracy and fast operation are obtained.

Description

Video-based human contour recognition and counting machine
Technical field
The invention belongs to the field of two-dimensional image recognition technology and to the field of intelligent video recognition technology, and specifically relates to a video-based human contour recognition counter that identifies human bodies from human contour features.
Background technology
Recognizing objects in video or images is one of the most valuable problems in computer vision, and many good methods have appeared in both industry and academia: some match the head, trunk and limbs separately and then combine the results; others are based on periodic features of human motion; and so on. Several classical and related methods are briefly discussed below.
S. Belongie and J. Malik proposed the Shape Context method, which recognizes objects in images from a set of sample points on the object's outer contour. The Shape Context algorithm first computes the centroid of the target and then computes a log-polar histogram centered on that centroid. This method has two shortcomings: computing the centroid is complicated and very time-consuming, and the method can only judge whether two targets are similar without producing the matched point set. When the amount of computation is large, efficient recognition is difficult to achieve.
The Active Shape Model and Active Appearance Model algorithms proposed by T. F. Cootes and C. J. Taylor are among the effective methods for face detection and recognition. Both are statistical methods, but the Active Shape Model recognizes objects by their outer contour features, while the Active Appearance Model uses texture features. These two methods are widely applied in face recognition and medical image recognition, but they are only suitable for still-image recognition, not for real-time recognition in video image processing.
In addition, the Active Contour (snake) model algorithm is suitable for tracking moving targets in video and is very valuable in medical image recognition, but it demands high computer performance and is not suitable for embedded systems.
The Mean Shift and CamShift algorithms are widely used for tracking moving objects in video images. The core of both algorithms is the color of the tracked target. Their difficulty is that the initial position of the target is hard to determine, so tracking and recognition perform poorly when color information is lacking. Tracking also often fails when the target moves irregularly, when targets are too close together, when other objects interfere, or when the target color is similar to the background color.
Summary of the invention
The feature quantities used for human body recognition directly determine recognition accuracy and computation speed. Addressing the prior-art problem that human bodies are recognized with low accuracy, or cannot be recognized at all, because of external factors such as scene illumination, camera angle, body size and occlusion, the present invention provides a method for recognizing whole human bodies in complex scenes that effectively improves the recognition accuracy of objects in video and images, reduces system resource consumption, and speeds up recognition.
The present invention uses the contour information of the human body rather than color information, so black-and-white images can be recognized accurately; it does not use human body area information, which effectively reduces the amount of computation; and it does not distinguish contours of the same type, which reduces the number of models. Only the distinct contour shapes of moving human bodies need to be stored: no matter how a body's internal state changes, it can be accurately recognized and counted.
From the input video image, the application extracts the contour features of the image with a Sobel operator; eliminates the background by background subtraction, extracts the object contour features, and regenerates the contour from the contour-angle information; establishes a standard object contour model image; realizes matching between the standard contour model and the generated contour in three steps (virtually locating the image center, calculating the matching rate, and calculating the matched position); and outputs the recognition result after matching.
In practice, when counting human bodies moving through the detection area, only the contour shapes of human bodies need to be stored to recognize the number of people. The recognition rate is hardly affected by interference factors such as clothing color, posture, background, shadows and hats, and targets (such as people, vehicles and various objects) and their number can be recognized at high speed from contour features alone.
The people counter developed according to the present invention can be used in fields such as prison security, personnel control in hazardous areas, overload detection in passenger vehicles, early warning in crowded places, and passenger-flow analysis in shopping malls, and has remarkable economic and social value. The present invention is applicable not only to human body recognition and counting but also to the broader field of object recognition.
Description of drawings
Fig. 1: camera shooting angle; Fig. 2: input video image; Fig. 3: contour feature image; Fig. 4: extracted target contour image; Fig. 5: regenerated target contour image; Fig. 6: elements of the standard human contour model image; Fig. 7: point positions of the standard human contour model image; Fig. 8: standard human contour images at different positions; Fig. 9: standard human contour model image near the video image center point; Fig. 10: standard human contour model image far from the video image center point; Fig. 11: feature-point vector of the standard human contour model; Fig. 12: virtual center point positions; Fig. 13: contour matching result; Fig. 14: recognition result output interface.
Embodiment
A specific embodiment of the present invention is described below with reference to Figs. 1-14:
Step 1: image input
The present invention uses a camera to input the video image. The color of the video is not restricted: both black-and-white and color images are acceptable. The lens is not restricted: mainstream CCD or CMOS sensors are both acceptable. The shooting angle of the camera is not restricted and all angles can be recognized, although a top-down (overhead) view gives the highest recognition accuracy. The field of view, mounting height and image deformation of the camera are not restricted; it is only necessary to establish corresponding standard human contour models for different cameras.
In this example the camera captures video from an overhead viewing angle, as shown in Fig. 1. The input video image is shown in Fig. 2.
Step 2: Image Edge-Detection
For the input video image we use the Sobel operator for edge detection to extract the contour features of the image. The present invention uses a variable M × N matrix rather than the usual 3 × 3 matrix. The matrix size can be adjusted according to the required recognition accuracy: the larger M and N are, the better the edge-detection result. In this example a 5 × 5 matrix is used.
The input video image first undergoes edge detection to extract the contour features of the image. Edge detection uses the Sobel operator, of which there are two: one detects horizontal edges and one detects vertical edges. The usual Sobel operator consists of two 3 × 3 matrices, one horizontal and one vertical; convolving them with the image in the plane yields approximations of the horizontal and vertical brightness differences respectively.
Suppose the vertical- and horizontal-direction operators for the input two-dimensional image I(x, y) are v(x, y) and h(x, y) respectively, and the matrix size is M × N. Edge detection with the Sobel operator then proceeds as follows, where C(x, y) is the contour feature image produced by edge detection:
$C_x(x,y) = \sum_{p=-M/2}^{M/2} \sum_{q=-N/2}^{N/2} I(x+p,\,y+q)\, h(p,q)$  (1)

$C_y(x,y) = \sum_{p=-M/2}^{M/2} \sum_{q=-N/2}^{N/2} I(x+p,\,y+q)\, v(p,q)$  (2)

$C(x,y) = \sqrt{C_x(x,y)^2 + C_y(x,y)^2}$  (3)

$\theta(x,y) = \tan^{-1}(C_x(x,y)/C_y(x,y))$  (4)
Here C_x(x, y) and C_y(x, y) are the horizontally and vertically edge-detected images respectively, and θ(x, y) is the gradient direction of pixel (x, y).
In this step we use a variable M × N matrix rather than the usual fixed 3 × 3 matrix, so that the matrix size can be adjusted according to the accuracy requirement; the larger M and N are, the better the edge detection. In actual operation we generally use a 5 × 5 matrix, with C denoting the center of the matrix.
Through edge detection, the contour features of the input video image are extracted and the contour feature image C(x, y) is generated, as shown in Fig. 3.
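For illustration, a minimal sketch of this step in Python with NumPy and SciPy follows. The 5 × 5 kernel values below are an assumption (the patent's exact matrices appear only as a figure attached to claim 3); they extend the usual 3 × 3 Sobel pair by an outer product of smoothing and differencing vectors.

```python
import numpy as np
from scipy.ndimage import convolve

def sobel_contour_features(image, h, v):
    """Edge detection per equations (1)-(4): gradient magnitude and angle."""
    cx = convolve(image.astype(float), h)  # horizontal differences, eq. (1)
    cy = convolve(image.astype(float), v)  # vertical differences, eq. (2)
    c = np.hypot(cx, cy)                   # contour feature image C(x, y), eq. (3)
    theta = np.arctan2(cx, cy)             # contour angle theta(x, y), eq. (4)
    return c, theta

# Hypothetical 5 x 5 kernels: smoothing vector (outer product) x differencing vector.
H5 = np.outer([1, 2, 3, 2, 1], [-1, -2, 0, 2, 1])  # horizontal-difference operator
V5 = H5.T                                          # vertical-difference operator
```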
Step 3: background modeling
Background modeling is the basis for separating target and background in the image. The background model is realized by examining whether points in the image appear continuously. Two feature quantities, intensity and angle, are examined in this process, which gives higher reliability than considering intensity alone.
Although edge detection generates the contour feature image, the targets in the image (people) are not yet separated from the noise (the non-human parts). After edge detection the contour feature image contains much noise, which we call background. This noise not only reduces the recognition rate but also increases the amount of computation.
We determine the background by comparing consecutive images. Many consecutive contour feature images are compared: if a point (x, y) appears continuously in a number of pictures, it is background; otherwise it is target. The specific implementation is as follows:
Suppose the contour feature image at time t is C_t(x, y) and the contour angle is θ_t(x, y). The previous contour feature image compared against it, C_{t-1}(x, y), is called the background contour feature image. The background intensity is B_t(x, y) and the background angle is B_θ^t(x, y).
The background model is then established as follows.
$B_{t+1}(x,y) = \begin{cases} C_t(x,y), & C_{t-1}(x,y) > N_{tr} \\ B_t(x,y), & C_{t-1}(x,y) \le N_{tr} \end{cases}$  (5)

$B_{\theta}^{t+1}(x,y) = \begin{cases} \theta_t(x,y), & C_{t-1}(x,y) > N_{tr} \\ B_{\theta}^{t}(x,y), & C_{t-1}(x,y) \le N_{tr} \end{cases}$  (6)

$B_0(x,y) = C_0(x,y), \quad B_{\theta}^{0}(x,y) = \theta_0(x,y)$  (7)
Here N_tr is the foreground threshold, used to distinguish background from foreground. It is set empirically, with a range of 100 to 1000. If it is set to 100, this means 100 frames: if the point (x, y) keeps appearing in images updated at 10 frames per second for 10 seconds, it is regarded as background. With this we can determine the intensity and angle of the background model.
In this step the background model is determined by real-time updating: if the background of the input video changes in practice, our background model changes with it. The method is therefore essentially unaffected by factors such as lighting, color and interfering objects.
In addition, the background model we determine has two feature quantities, intensity and angle. Compared with using intensity contrast alone, the model established by this method has higher reliability.
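A minimal sketch of this update rule follows, with one labeled assumption: the condition in equations (5)-(6) is read as a per-pixel count of consecutive frames in which a contour point appears, which matches the description of N_tr as a frame count (100 frames at 10 fps = 10 seconds).

```python
import numpy as np

class EdgeBackgroundModel:
    """Background model over contour intensity and angle, per eqs. (5)-(7).

    Assumption: the update condition counts consecutive frames in which a
    contour point appears at (x, y); the patent text is ambiguous on this.
    """
    def __init__(self, c0, theta0, n_tr=100):
        self.b = c0.astype(float).copy()        # background intensity, eq. (7)
        self.b_theta = theta0.copy()            # background angle, eq. (7)
        self.count = np.zeros(c0.shape, dtype=int)
        self.n_tr = n_tr

    def update(self, c_t, theta_t):
        present = c_t > 0                       # contour point present this frame
        self.count = np.where(present, self.count + 1, 0)
        stable = self.count > self.n_tr         # appeared continuously long enough
        self.b = np.where(stable, c_t, self.b)                  # eq. (5)
        self.b_theta = np.where(stable, theta_t, self.b_theta)  # eq. (6)
```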
Step 4: extracting the target contour image
The target contour image is extracted by background subtraction: the contour feature image is compared with the background model in real time, and the remaining points compose the target contour image, separating foreground from background. In this process the two feature quantities, intensity and angle, are again both considered, to improve reliability.
The purpose of background modeling is to distinguish the targets (people) in the image from noise. In step 3 we established the background model; comparing the contour feature image against it extracts the target contours in the image.
If

$|\theta_t(x,y) - B_{\theta}^{t}(x,y)| \le \pi/12$ and $|C_t(x,y) - B_t(x,y)| \le L_{tr}$  (8)

are satisfied, then C_t(x, y) = 0.

Here π/12 is the threshold for evaluating the contour angle difference, and L_tr is the threshold for evaluating the contour intensity difference, a quantity determined by intensity: when the contrast of the image is relatively small it is set to a small value, and when the contrast is large it is set to a large value; the range of L_tr is 0-12. Points with C_t(x, y) = 0 are treated as background pixels, i.e. noise; the remaining points belong to targets.
After background modeling and target contour extraction, the noise in the image is eliminated, and the background-removed image, i.e. the extracted target contour image C_T(x, y), is generated, as shown in Fig. 4.
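A sketch of the subtraction test of equation (8), reusing the background model object above; the angle threshold π/12 and the intensity threshold L_tr are as given in the text (angle wrap-around is ignored for brevity):

```python
import numpy as np

def extract_target_contour(c_t, theta_t, model, l_tr=6.0, angle_tr=np.pi / 12):
    """Zero out points that match the background model, per eq. (8)."""
    angle_close = np.abs(theta_t - model.b_theta) <= angle_tr
    intensity_close = np.abs(c_t - model.b) <= l_tr
    background = angle_close & intensity_close   # both features agree with background
    return np.where(background, 0.0, c_t)        # survivors form C_T(x, y)
```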
Step 5: regenerating the target contour image
The extracted target contour image still carries two feature quantities, intensity and angle. The angle feature, which reflects the contour shape, is the essential feature quantity; the intensity feature is only auxiliary. To reduce the amount of computation we eliminate the intensity feature and rearrange the pixels, making the target contour image clearer.
As can be seen from Fig. 4, the image after target contour extraction still has both feature quantities. To eliminate the intensity feature, we rearrange the pixels and regenerate the target contour image.
First, an appropriate intensity threshold C_tr is applied, and pixels with C_T(x, y) < C_tr are not considered. C_tr is a threshold on the contour intensity, determined by the image contrast: when the contrast of the image is relatively small it is set to a small value, and when the contrast is large it is set to a large value. Its range is 0-12.
Second, the point positions of each contour angle are arranged to form the contour image. The point arrangement Q(θ) for angle θ is

$Q(\theta) = \{P_i\}_{i=1,\ldots,N_\theta}$  (9)

where P_i = (x_i, y_i) is the i-th point of angle θ and N_θ is the number of points with angle θ.
The contour image defined by formula (9) no longer carries contour intensity information; it preserves only the point positions of each angle, which reflect the contour shape. Through the above computation the target contour image is regenerated, and the angle of every pixel is known. Compared with the extracted target contour image, the regenerated one is clearer, as shown in Fig. 5.
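As an illustration, a sketch of the regeneration step, grouping surviving pixels by quantized contour angle per formula (9); the bin count stands in for the feature-quantity number S introduced in step 6, and the function name is illustrative:

```python
import numpy as np
from collections import defaultdict

def regenerate_contour(c_target, theta, c_tr=3.0, n_bins=15):
    """Build Q(theta): point positions grouped by quantized angle, eq. (9)."""
    q = defaultdict(list)
    ys, xs = np.nonzero(c_target >= c_tr)   # drop weak points below C_tr
    for x, y in zip(xs, ys):
        # quantize the angle into n_bins feature angles over [0, pi)
        b = int((theta[y, x] % np.pi) / np.pi * n_bins) % n_bins
        q[b].append((x, y))
    return q
```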
Step 6: establishing the standard human model
To determine which objects in the input image are people, the various contour images of the recognition target, the human body, must be established in advance; these images are called standard human contour model images. We make them manually, building standard human models one by one from video recorded at different shooting heights, camera angles and picture deformations.
The standard human contour model images are made as follows:
Human contour feature images that the naked eye can clearly identify during edge detection are cropped and placed at the center of a blank image without background. The images should cover the image center, the head and body, different positions of the body in the picture, and different body sizes, as shown in Fig. 6.
We then extract the shape information of the image, connecting neighboring pixels with single-pixel-wide lines to obtain the outline of the head and body; the center can be chosen freely. In this way a standard human contour model image is obtained.
Taking the image center as the reference point c = (0, 0), the position of a point is p(x, y) = (x, y). The model image is then an arrangement of point positions, constructed as follows:

$M(\theta) = \{P_i\}_{i=1,\ldots,M_\theta}$  (10)

where P_i denotes the position of the i-th of the M_θ points whose angle is θ.
In practical applications, not all continuous angles are used; instead, feature quantities are taken at certain intervals. A suitable choice of feature quantities reduces the influence of noise during matching, and the interval depends on the required recognition accuracy.
Suppose the feature angles taken are θ_q (1 ≤ q ≤ S), where S is the number of feature quantities; its range is 0-180, and a value of 12-18 is generally most suitable. The k-th model image is then a two-dimensional arrangement of point positions over the different angles:
$M_k = \{M_k(\theta_q)\}_{q=1,\ldots,S}$  (11)
The points P_i in formula (9) are point positions in the image coordinate system, whereas the points P_i in formula (11) are point positions taken relative to the center point of the standard human contour model image, as shown in Fig. 7.
Although the same human body is being recognized, its contour appearance differs with its position in the input video image. For example, when the camera looks vertically down at the ground, contours nearer the video image center point are smaller, and contours farther from the center are larger, as shown in Fig. 8.
In the model-matching stage, only model images near the input image point position are matched and distant ones are not considered, which preserves the recognition rate while reducing computation. Therefore, when the standard model images are established, the point position information of formula (11) and the center position information are stored together as the model image.
Through the computation of this step we can establish the standard human contour model images, shown in Figs. 9 and 10. As can be seen, a standard human model image carries only the angle feature quantity, and we know the center point of each standard human model and the angle of each feature point.
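One possible way to store such a model is sketched below, under the same angle quantization; the class and field names are assumptions for illustration, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class ContourModel:
    """Standard human contour model: center-relative points per feature angle."""
    center: tuple            # stored center position of the model image
    points: dict = field(default_factory=dict)
    # points[q] holds the center-relative (x, y) points of feature angle theta_q

    @property
    def n_points(self):
        """Total point count n_k of this model, used by the matching rate."""
        return sum(len(p) for p in self.points.values())
```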
Step 7: contour matching
The generated target contour image is compared with the standard human model images: if they match, the target is judged to be a person and counted; if they do not match, it is not judged to be a person. Contour matching requires three steps: virtually locating the target contour image center, calculating the matching rate, and determining the matched position.
From the video image we have regenerated the target contour image (Fig. 5) and established the standard human contour model images (Figs. 9 and 10). To detect whether these targets are human bodies, the standard human contour model images must be matched against the target contour image.
1. Virtually locating the target contour image center
The target contour image has no center point, while a standard human contour model image has a center point C. To match the two, the center of the target contour image must be found in advance.
The horizontal and vertical edges of the standard human contour model image and of the target contour image are taken as the x axis and y axis of a coordinate system, respectively. The N points of angle θ in the target contour image are compared with the M points of angle θ in the standard human contour model image.
Suppose the points of angle θ_q in the target contour image are P_i^Q = (x_i^Q, y_i^Q), and the points of angle θ_q in the standard human contour model image are P_j^M = (x_j^M, y_j^M); the vector from the point P_j^M to the model image center C is l, as shown in Fig. 11.
A coordinate system is set up centered on the point P_i^Q of the target contour image, and P_1, P_2, P_3, P_4 are the four points placed symmetrically according to the vector l. These four points are virtual centers of the target contour image. Because the point P_i^Q carries a direction, one of the four virtual centers of each point may be the actual center, as shown in Fig. 12.
As can be seen, 4 × N × M virtual centers can be generated for a single angle. Over all feature angles,

$4 \times \sum_{q=1}^{S} N_{\theta_q} \times M_{\theta_q}$

virtual centers can be generated, where S is the number of feature quantities of the standard human contour model. Four virtual centers are examined because the position of the target contour image center is unknown. If the target contour image matches the standard human model image, one of the four virtual centers must be the actual center; the position in the target contour image where virtual centers cluster most densely is therefore the actual center of the target contour image.
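A sketch of virtual-center voting under this reading; taking the four symmetric candidates as the sign combinations of the vector l is one plausible interpretation of Fig. 12, not a detail confirmed by the text:

```python
from collections import Counter

def virtual_centers(target_pts, model):
    """Accumulate the four symmetric virtual centers per point pair (step 7.1)."""
    votes = Counter()
    for q, model_points in model.points.items():
        for (mx, my) in model_points:
            lx, ly = -mx, -my                  # points are center-relative, so l = C - P = -P
            for (tx, ty) in target_pts.get(q, []):   # compare same feature angle only
                for sx in (1, -1):             # four symmetric candidates around P_i^Q
                    for sy in (1, -1):
                        votes[(round(tx + sx * lx), round(ty + sy * ly))] += 1
    return votes  # the densest cluster approximates the actual center

# Hypothetical usage: center, _ = max(votes.items(), key=lambda kv: kv[1])
```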
2. Calculating the matching rate
When establishing the standard human contour model images, we built n such images, each composed of a different number of points. Suppose the number of points of the k-th standard human contour model image is n_k; the average number of points per model image is then $\bar{n} = \frac{1}{n}\sum_{k=1}^{n} n_k$.
Here k is the index of the standard human contour model image under consideration. For the k-th model, let s_k(x, y) be the total number of points within the 3 × 3 pixel range around the corresponding virtual center point of the target contour image; the matching rate of image k at position (x, y) is then evaluated as follows:
$R_k(x,y) = \alpha \times s_k(x,y) + (1-\alpha) \times s_k(x,y) \times \bar{n}/n_k$  (12)
Here α is a weight parameter that determines which of the absolute match count s_k(x, y) and the relative match count is more important. The relative match count is the ratio of the absolute match count s_k(x, y) to the model point count n_k, multiplied by the average point count n̄. The smaller α is, the greater the weight of the relative count; the larger α is, the greater the weight of the absolute count. The range of α is 0.1 to 0.3.
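Equation (12) as a one-line sketch, with α, n̄ (the average model point count) and n_k as defined above:

```python
def matching_rate(s_k, n_k, n_bar, alpha=0.2):
    """Matching rate R_k per eq. (12): blends absolute and relative match counts."""
    return alpha * s_k + (1 - alpha) * s_k * n_bar / n_k
```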
3. Determining the matched position
From the k matching-rate matrices, the best matching rate R(x, y) and the best-matching standard human contour model image T(x, y) are determined by the following formulas:

$R(x,y) = \max_k \{R_k(x,y)\}$  (13)

$T(x,y) = \arg\max_k \{R_k(x,y)\}$  (14)
When several objects are present in a single image, several model images may be matched. Therefore the maximum of R(x, y) is found first and its position is taken as the first matched position; then, according to the size of the model image, the second matched position is found outside a certain zone, and so on until all matched positions are found. When R(x, y) falls below a threshold, matching stops.
The threshold is determined by the following formula:

$R_k = \beta \times (\alpha \times n_k + (1-\alpha))$  (15)
Here a β value of 0.3 to 0.35 is most suitable.
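A sketch of this greedy selection follows; the square suppression region and its size are assumptions, since the text only specifies "a certain zone" that depends on the model image size:

```python
import numpy as np

def matched_positions(r, t, model_sizes, alpha=0.2, beta=0.32):
    """Greedy pick of matched positions per eqs. (13)-(15)."""
    r = r.copy()                      # R(x, y): best matching rate per position
    hits = []
    while True:
        y, x = np.unravel_index(np.argmax(r), r.shape)
        k = int(t[y, x])              # T(x, y): index of best-matching model
        n_k = model_sizes[k]
        if r[y, x] < beta * (alpha * n_k + (1 - alpha)):  # stop threshold, eq. (15)
            break
        hits.append((x, y, k))
        half = max(1, int(np.sqrt(n_k)))  # suppression half-size (assumed heuristic)
        r[max(0, y - half):y + half + 1, max(0, x - half):x + half + 1] = 0.0
    return hits
```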
In Fig. 13 the white contours represent the matched standard human contour model images, the grey contours represent standard model images discarded because of low matching rate, and the square region is the counting detection zone.
Step 8: result output
Through the computation of steps 1 to 7 we have found the human contours in the image, and the number of contours is the number of people in the image. Fig. 14 shows the recognition result output interface.

Claims (10)

1. A recognition method for a people counter based on contour recognition, the people counter comprising a video input device, a processing module and a memory, characterized in that the counter counts in the following way:
Step 1: image input
a video is input with the video input device;
Step 2: image edge detection
the input two-dimensional image is I(x, y); the Sobel operator is used, the vertical- and horizontal-direction operators are v(x, y) and h(x, y) respectively, and the matrix size is M × N; the contour feature image C(x, y) is then calculated as follows:

$C_x(x,y) = \sum_{p=-M/2}^{M/2} \sum_{q=-N/2}^{N/2} I(x+p,\,y+q)\, h(p,q)$  (1)

$C_y(x,y) = \sum_{p=-M/2}^{M/2} \sum_{q=-N/2}^{N/2} I(x+p,\,y+q)\, v(p,q)$  (2)

$C(x,y) = \sqrt{C_x(x,y)^2 + C_y(x,y)^2}$  (3)

$\theta(x,y) = \tan^{-1}(C_x(x,y)/C_y(x,y))$  (4)

wherein C_x(x, y) and C_y(x, y) are the horizontally and vertically edge-detected images respectively, and θ(x, y) is the contour angle;
Step 3: background modeling
the contour feature image at time t is C_t(x, y), the contour angle is θ_t(x, y), the contour background is P_t(x, y), the background intensity model is B_t(x, y), and the background angle model is B_θ^t(x, y); the background model is then updated as follows:

$B_{t+1}(x,y) = \begin{cases} C_t(x,y), & C_{t-1}(x,y) > N_{tr} \\ B_t(x,y), & C_{t-1}(x,y) \le N_{tr} \end{cases}$

$B_{\theta}^{t+1}(x,y) = \begin{cases} \theta_t(x,y), & C_{t-1}(x,y) > N_{tr} \\ B_{\theta}^{t}(x,y), & C_{t-1}(x,y) \le N_{tr} \end{cases}$

$B_0(x,y) = C_0(x,y), \quad B_{\theta}^{0}(x,y) = \theta_0(x,y)$

wherein N_tr is the foreground threshold;
Step 4: extracting the target contour image
if

$|\theta_t(x,y) - B_{\theta}^{t}(x,y)| \le \pi/12$ and $|C_t(x,y) - B_t(x,y)| \le L_{tr}$

are satisfied, then C_t(x, y) = 0,
wherein π/12 is the threshold for evaluating the contour angle difference and L_tr is the threshold for evaluating the contour intensity difference; the contour feature image C_t(x, y) remaining after this step is denoted C_T(x, y);
Step 5: generating the target contour image
first, an appropriate intensity threshold C_tr is applied, and pixels with C_T(x, y) ≤ C_tr are not considered; second, the point positions of each contour angle are arranged to form the contour image, the point arrangement Q(θ) for angle θ being:

$Q(\theta) = \{P_i\}_{i=1,\ldots,N_\theta}$

wherein P_i = (x_i, y_i) is the i-th point of angle θ and N_θ is the number of points with angle θ;
Step 6: establishing the standard human model
first, the contour feature image obtained in step 2 is placed in a blank image without background, the shape information of the image is extracted, and neighboring pixels are connected with single-pixel-wide lines to obtain the outline of the head and body;
taking the image center as c = (0, 0), the position of a point is p(x, y) = (x, y); the model image is then an arrangement of point positions, constructed as follows:

$M(\theta) = \{P_i\}_{i=1,\ldots,M_\theta}$

wherein P_i denotes the position of the i-th of the M_θ points whose angle is θ;
the feature angles taken are θ_q (1 ≤ q ≤ S), where S is the number of feature quantities; the k-th model image is then a two-dimensional arrangement of point positions over the different angles:

$M_k = \{M_k(\theta_q)\}_{q=1,\ldots,S}$
Step 7: contour matching
1. virtually locating the target contour image center
the vertical and horizontal edges of the standard human model image and of the target contour feature image of the picture to be detected are taken as the x axis and y axis of a coordinate system, respectively; the N points of angle θ in the target contour feature image are compared with the M points of angle θ in the standard human model image; the points of angle θ_q in the target contour feature image are P_i^Q, the points of angle θ_q in the standard human model image are P_j^M, and the vector from the point P_j^M to the standard human model image center C is l; a coordinate system is set up centered on the point P_i^Q of the target contour feature image, and P_1, P_2, P_3, P_4 are the four points placed symmetrically according to the vector l; these four points are the virtual centers of the target contour feature image;
2. calculating the matching rate
the number of points of the k-th standard human model image is n_k, and the average number of points per standard human model image is n̄, wherein k is the index of the standard human model image under consideration; for the k-th model, the total number of points within a certain range around the corresponding virtual center point of the target contour feature image is s_k(x, y), and the matching rate of the image k at position (x, y) is then evaluated as follows:

$R_k(x,y) = \alpha \times s_k(x,y) + (1-\alpha) \times s_k(x,y) \times \bar{n}/n_k$

wherein α is a weight parameter with a value range of 0.1-0.3;
3. determining the matched position
from the k matching rates, the best matching rate R(x, y) and the best-matching standard human model image T(x, y) are determined by the following formulas:

$R(x,y) = \max_k \{R_k(x,y)\}$  (11)

$T(x,y) = \arg\max_k \{R_k(x,y)\}$  (12)

when several objects are present on a single image, several standard human model images may be matched; therefore the maximum of R(x, y) is found first and its position is taken as the first matched position; then, according to the size of the model image, the second matched position is found outside a certain zone, and so on until all matched positions are found; when R(x, y) is less than a threshold, matching stops, the threshold being determined by the following formula:

$R_k = \beta \times (\alpha \times n_k + (1-\alpha))$  (13)

wherein β is assigned a value of 0.3-0.35;
Step 8: result output
the detected number of people and the recognized image are output.
2. The recognition method of the people counter as claimed in claim 1, characterized in that the video input device is a camera.
3. The recognition method of the people counter as claimed in claim 1, characterized in that M and N are taken as 5 in formulas (1) and (2), and the corresponding vertical- and horizontal-direction operators are the 5 × 5 matrices shown in the original figure, wherein C denotes the center of the matrix.
4. The recognition method of the people counter as claimed in claim 1, characterized in that the value of N_tr in step 3 is 100-1000.
5. The recognition method of the people counter as claimed in claim 1, characterized in that, according to the viewing angle of the video input device, the captured picture is divided into different regions, standard human models are established region by region, and the target human model is matched in the corresponding region.
6. The recognition method of the people counter as claimed in claim 1, characterized in that the certain range around the central point in step 7 is a 3 × 3 pixel range.
7. The recognition method of the people counter as claimed in claim 1, characterized in that L_tr in step 4 is directly proportional to the contrast of the image.
8. The recognition method of the people counter as claimed in claim 1, characterized in that the standard human models established in step 6 are stored in the memory.
9. The recognition method of the people counter as claimed in claim 1, characterized in that the processing module is responsible for the computation of each of the above steps.
10. The recognition method of the people counter as claimed in claim 7, characterized in that the value of L_tr is 0-12.
CN2012103833408A 2012-10-10 2012-10-10 Video recognition counter for body silhouette Expired - Fee Related CN102930334B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012103833408A CN102930334B (en) 2012-10-10 2012-10-10 Video recognition counter for body silhouette

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2012103833408A CN102930334B (en) 2012-10-10 2012-10-10 Video recognition counter for body silhouette

Publications (2)

Publication Number Publication Date
CN102930334A CN102930334A (en) 2013-02-13
CN102930334B true CN102930334B (en) 2013-08-14

Family

ID=47645128

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012103833408A Expired - Fee Related CN102930334B (en) 2012-10-10 2012-10-10 Video recognition counter for body silhouette

Country Status (1)

Country Link
CN (1) CN102930334B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103605967A * 2013-11-26 2014-02-26 Donghua University Subway fare evasion prevention system and working method thereof based on image recognition
CN103679212A (en) * 2013-12-06 2014-03-26 无锡清华信息科学与技术国家实验室物联网技术中心 Method for detecting and counting personnel based on video image
CN105550743A (en) * 2015-12-10 2016-05-04 世纪美映影院技术服务(北京)有限公司 Method for counting number of persons in building
CN105631609A (en) * 2016-02-04 2016-06-01 王爱玲 Power distribution cabinet
CN107704874A (en) * 2017-09-29 2018-02-16 上海与德通讯技术有限公司 Intelligent robot and its recognition methods and computer-readable recording medium
CN109345558B (en) * 2018-10-29 2021-04-13 杭州易现先进科技有限公司 Image processing method, image processing apparatus, image processing medium, and electronic device
CN111428546B (en) * 2019-04-11 2023-10-13 杭州海康威视数字技术股份有限公司 Method and device for marking human body in image, electronic equipment and storage medium
CN110701741A (en) * 2019-10-10 2020-01-17 珠海格力电器股份有限公司 Air conditioning unit regulating and controlling method and air conditioning unit
CN112785462A (en) * 2020-02-27 2021-05-11 吴秋琴 Scenic spot passenger flow volume statistics evaluation system based on big data
CN112860059A (en) * 2021-01-08 2021-05-28 广州朗国电子科技有限公司 Image identification method and device based on eyeball tracking and storage medium
CN113837052A (en) * 2021-09-18 2021-12-24 泰州市雷信农机电制造有限公司 Current-limiting trigger system based on block chain
CN113947546A (en) * 2021-10-18 2022-01-18 江阴市人人达科技有限公司 Image picture multi-layer filtering processing system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5071289B2 (en) * 2008-07-23 2012-11-14 ウシオ電機株式会社 Ultraviolet irradiation device and method for controlling lighting of ultraviolet irradiation device
CN101587541B (en) * 2009-06-18 2011-02-02 上海交通大学 Character recognition method based on human body contour outline
CN101872422B (en) * 2010-02-10 2012-11-21 杭州海康威视数字技术股份有限公司 People flow rate statistical method and system capable of precisely identifying targets
CN101908150B (en) * 2010-06-25 2012-05-30 北京交通大学 Human body detection method
CN102054306B (en) * 2011-01-31 2012-02-08 潘海朗 Method and system for detecting pedestrian flow by adopting deformable two-dimensional curves

Also Published As

Publication number Publication date
CN102930334A (en) 2013-02-13

Similar Documents

Publication Publication Date Title
CN102930334B (en) Video recognition counter for body silhouette
US10423856B2 (en) Vector engine and methodologies using digital neuromorphic (NM) data
US10387741B2 (en) Digital neuromorphic (NM) sensor array, detector, engine and methodologies
CN104063702B (en) Three-dimensional gait recognition based on shielding recovery and partial similarity matching
CN106127148B (en) A kind of escalator passenger's anomaly detection method based on machine vision
CN103559703B (en) Crane barrier based on binocular vision is monitored and method for early warning and system
Santosh et al. Tracking multiple moving objects using gaussian mixture model
CN101443817B (en) Method and device for determining correspondence, preferably for the three-dimensional reconstruction of a scene
WO2018023916A1 (en) Shadow removing method for color image and application
CN104298996B (en) A kind of underwater active visual tracking method applied to bionic machine fish
CN103310444B (en) A kind of method of the monitoring people counting based on overhead camera head
CN104715238A (en) Pedestrian detection method based on multi-feature fusion
CN107392885A (en) A kind of method for detecting infrared puniness target of view-based access control model contrast mechanism
CN110232389A (en) A kind of stereoscopic vision air navigation aid based on green crop feature extraction invariance
CN102340620B (en) Mahalanobis-distance-based video image background detection method
CN110189375A (en) A kind of images steganalysis method based on monocular vision measurement
CN105354856A (en) Human matching and positioning method and system based on MSER and ORB
CN104463869A (en) Video flame image composite recognition method
CN106127812A (en) A kind of passenger flow statistical method of non-gate area, passenger station based on video monitoring
Tian et al. Human Detection using HOG Features of Head and Shoulder Based on Depth Map.
CN110021029A (en) A kind of real-time dynamic registration method and storage medium suitable for RGBD-SLAM
CN109886195A (en) Skin identification method based on depth camera near-infrared single color gradation figure
CN109443319A (en) Barrier range-measurement system and its distance measuring method based on monocular vision
CN102510437A (en) Method for detecting background of video image based on distribution of red, green and blue (RGB) components
CN111582076A (en) Picture freezing detection method based on pixel motion intelligent perception

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CP03 Change of name, title or address
CP03 Change of name, title or address

Address after: Room 301, No.389, Shengzhou Road, Qinhuai District, Nanjing City, Jiangsu Province

Patentee after: Nanjing Kaisen Century Technology Development Co.,Ltd.

Address before: 100085, room 115, building 5, building 1, East Road, Haidian District, Beijing

Patentee before: BEIJING KEYSEEN TECHNOLOGY DEVELOPMENT Co.,Ltd.

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130814