CN105139425A - People counting method and device

People counting method and device

Info

Publication number
CN105139425A (application CN201510540599.2A)
Authority
CN
China
Prior art keywords
frame, shoulder feature, head shoulder, image, foreground object
Prior art date: 2015-08-28
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510540599.2A
Other languages
Chinese (zh)
Other versions
CN105139425B (en)
Inventor
毛泉涌
祝中科
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Uniview Technologies Co Ltd
Original Assignee
Zhejiang Uniview Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2015-08-28
Filing date: 2015-08-28
Publication date: 2015-12-09
Application filed by Zhejiang Uniview Technologies Co Ltd
Priority to CN201510540599.2A
Publication of CN105139425A
Application granted
Publication of CN105139425B
Current legal status: Active


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30196 - Human being; Person
    • G06T 2207/30242 - Counting objects in image

Abstract

The application provides a people counting method and device. The method comprises: performing target segmentation on the detection area of the current frame image to extract a moving foreground object; detecting head-shoulder feature frames in the moving foreground object; determining whether a head-shoulder feature frame in the moving foreground object meets the people counting trigger conditions; and counting people according to the head-shoulder feature frame when it meets those conditions. The method and device shorten feature detection time, reduce the feature false detection rate, and improve feature detection, thereby increasing people counting efficiency and accuracy.

Description

People counting method and device
Technical field
The present application relates to the field of image processing, and in particular to a people counting method and device.
Background technology
Real-time people counting systems are deployed in many public places (for example, shopping malls, supermarkets, and parks) so that managers can grasp passenger flow in real time and take crowd-control measures to prevent hazards such as trampling caused by overcrowding.
Current people counting methods are mainly video-based: cameras installed at the entrances and exits of public places capture video, the captured images are analyzed, and the number of people in the place is counted. For example, pedestrian detection may combine background modeling with feature-library matching, or several head classifiers may be used for head detection. However, such methods generally suffer from low counting accuracy and low processing efficiency.
Summary of the invention
In view of this, the present application provides a people counting method and device.
Specifically, the application is implemented through the following technical solutions.
The application provides a people counting method, the method comprising:
extracting a moving foreground object by performing target segmentation on the detection area of the current frame image;
detecting head-shoulder feature frames in the moving foreground object;
judging whether a head-shoulder feature frame in the moving foreground object meets the people counting trigger conditions; and
counting people according to the head-shoulder feature frame when it meets the people counting trigger conditions.
The application also provides a people counting device, the device comprising:
an extraction unit, configured to extract a moving foreground object by performing target segmentation on the detection area of the current frame image;
a detection unit, configured to detect head-shoulder feature frames in the moving foreground object;
a judging unit, configured to judge whether a head-shoulder feature frame in the moving foreground object meets the people counting trigger conditions; and
a counting unit, configured to count people according to the head-shoulder feature frame when it meets the people counting trigger conditions.
As can be seen from the above, the application shortens feature detection time, reduces the feature false detection rate, and improves feature detection, thereby increasing people counting efficiency and accuracy.
Brief description of the drawings
Fig. 1 is a schematic diagram of an application scenario according to an exemplary embodiment of the application;
Fig. 2 is a flowchart of a people counting method according to an exemplary embodiment of the application;
Fig. 3 is a flowchart of moving foreground object extraction according to an exemplary embodiment of the application;
Fig. 4 is a schematic diagram of the underlying hardware structure of a device hosting a people counting apparatus according to an exemplary embodiment of the application;
Fig. 5 is a schematic structural diagram of a people counting device according to an exemplary embodiment of the application.
Detailed description of the embodiments
Exemplary embodiments are described in detail here, with examples shown in the accompanying drawings. Where the following description refers to the drawings, the same numeral in different drawings denotes the same or a similar element unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the application; rather, they are merely examples of apparatuses and methods consistent with some aspects of the application as detailed in the appended claims.
The terminology used in this application is for the purpose of describing particular embodiments only and is not intended to limit the application. The singular forms "a", "the", and "said" used in this application and the appended claims are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, and so on may be used in this application to describe various information, the information should not be limited by these terms; the terms only distinguish information of the same type from one another. For example, without departing from the scope of the application, first information may also be called second information, and similarly second information may be called first information. Depending on the context, the word "if" as used here may be interpreted as "when", "while", or "in response to determining".
As described in the background, existing video-based counting deployments capture video at the entrances and exits of public places and analyze the images to count people. Two typical prior approaches are as follows.
Prior art 1 uses a background modeling algorithm to extract the foreground, applies a trained feature library to detect pedestrians in the extracted foreground regions, and then tracks and counts the detected pedestrians. This method, however, misses detections when people hold umbrellas or heavily occlude one another; and when no foreground segmentation is performed, pedestrian detection is time-consuming, so counting efficiency is low.
Prior art 2 uses multiple head classifiers for head detection and achieves a high head recall rate. It is, however, equally time-consuming: when applied to real-time detection on a camera it cannot guarantee that every frame is processed, so frames may be dropped and counting accuracy falls.
To address these problems, the embodiments of the present application propose a people counting method that combines the detected head-shoulder features with the extracted moving foreground object to reduce the miss rate. Meanwhile, target segmentation during moving foreground object extraction reduces the computation required by the subsequent head-shoulder feature detection, improving counting efficiency.
Fig. 1 shows a preferred application scenario of the application: the camera is installed vertically or nearly vertically, with a depression angle α in the range of 65 to 90 degrees. In this scenario occlusion between people is limited, so the counting method of the embodiments achieves both higher accuracy and higher speed.
Fig. 2 is a flowchart of an embodiment of the people counting method of the application, which describes the counting process.
Step 201: extract a moving foreground object by performing target segmentation on the detection area of the current frame image.
A camera's monitored field of view is large, but the whole frame need not be processed when extracting the moving foreground object. The key monitoring region is usually the central area of the image. For example, when a camera monitors a subway gate, the only region that matters during image processing is the image area occupied by the gate, however wide the camera's field of view; once the camera's installation position and angle are fixed, the position of this effective region in the image is fixed as well. The embodiments therefore extract the moving foreground object only from the detection area of the current frame image (equivalent to the aforementioned effective region), narrowing the image processing range and improving the extraction efficiency of the moving foreground object.
Fig. 3 is the moving foreground object extraction flowchart of the application, described in detail as follows.
Step 2011: obtain the foreground image of the detection area of the current frame image. For example, a Gaussian mixture model can be used for foreground extraction: the monitored scene is modeled with multiple Gaussians and the Gaussian background model is updated in real time, which improves foreground extraction accuracy.
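For illustration only (this code is not part of the patent disclosure), the following Python/OpenCV sketch shows one plausible realization of step 2011 using OpenCV's Gaussian-mixture background subtractor; the ROI coordinates and model parameters are assumptions, not values fixed by the patent.

```python
# Minimal sketch of step 2011: Gaussian-mixture foreground extraction
# restricted to the detection area. All parameters are illustrative.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=False)

def extract_foreground(frame, roi=(100, 50, 400, 300)):
    """Return the foreground mask of the detection area roi = (x, y, w, h)."""
    x, y, w, h = roi
    detection_area = frame[y:y + h, x:x + w]
    # apply() also updates the multi-Gaussian background model in real time.
    return subtractor.apply(detection_area)
```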
Step 2012: obtain a foreground target frame by post-processing the foreground image. Post-processing may include median filtering, dilation, and region connectivity; the foreground target frame obtained afterwards gives the approximate extent of the moving foreground object within the detection area and further narrows the detection range.
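Continuing the illustration (again an assumption-laden sketch, not the patent's own implementation), step 2012's post-processing could look as follows; the kernel sizes and the minimum-area filter are illustrative choices.

```python
# Sketch of step 2012: median filter, dilation, and region connectivity,
# yielding foreground target frames as bounding boxes.
import cv2
import numpy as np

def foreground_target_boxes(mask, min_area=200):
    mask = cv2.medianBlur(mask, 5)
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8), iterations=2)
    num, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    # Each stats row holds x, y, w, h, area; row 0 is the background.
    return [tuple(stats[i, :4]) for i in range(1, num) if stats[i, 4] >= min_area]
```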
Step 2013: compute the frame difference map between the detection areas of the current frame image and the previous frame image.
Step 2014: obtain the frame-difference edge texture map of the detection area, for example by processing the frame difference map with the Sobel operator.
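A hedged sketch of steps 2013 and 2014, assuming grayscale crops of the detection area from the current and previous frames; the Sobel kernel size is an illustrative choice.

```python
# Sketch of steps 2013-2014: frame differencing, then Sobel edge texture.
import cv2

def edge_texture_map(curr_gray, prev_gray):
    diff = cv2.absdiff(curr_gray, prev_gray)          # step 2013: frame difference
    gx = cv2.Sobel(diff, cv2.CV_32F, 1, 0, ksize=3)   # step 2014: Sobel gradients
    gy = cv2.Sobel(diff, cv2.CV_32F, 0, 1, ksize=3)
    return cv2.convertScaleAbs(cv2.magnitude(gx, gy))
```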
Step 2015: in the obtained edge texture map, perform horizontal projection and vertical projection on the image region corresponding to the foreground target frame.
Step 2016: obtain the horizontal projection histogram and the vertical projection histogram generated by the projection.
Step 2017: perform target segmentation, according to the horizontal and vertical projection histograms, on the edge texture image within the foreground target frame, obtaining the moving foreground object.
First, compute the processing priority of the horizontal projection histogram and of the vertical projection histogram separately. The two priorities are computed by the same method, specifically:
u_x = \sum_{i=0}^{n} \omega_i x_i    Formula (1)

where u_x is the processing priority; x_i is the number of rows (or columns) whose projection value equals i; \omega_i is a weighting coefficient; and n is a preset projection threshold.
The projection threshold n can be chosen as a small value based on experimental data (for example, n = 5); a projection value at or below this threshold indicates that the corresponding image region is background. The smaller the projection value, the more likely the region is background, so smaller projection values are given larger weighting coefficients \omega_i. The processing priority thus reflects the amount of background in a projection histogram: the higher the priority, the larger the background area covered by that histogram, and vice versa.
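To make steps 2015-2016 and Formula (1) concrete, here is a sketch under stated assumptions: the binarization threshold and the weighting scheme (weights growing as the projection value shrinks) are illustrative readings of the text, not values the patent prescribes.

```python
# Sketch of the projections and of Formula (1)'s processing priority.
import numpy as np

def projections(edge_crop, thresh=40):
    binary = edge_crop > thresh
    return binary.sum(axis=1), binary.sum(axis=0)   # horizontal, vertical

def processing_priority(proj, n=5):
    """u_x = sum_{i=0}^{n} w_i * x_i, x_i = #rows/cols with projection value i."""
    u = 0.0
    for i in range(n + 1):
        x_i = int(np.count_nonzero(proj == i))
        w_i = n + 1 - i        # assumed: smaller projection value, larger weight
        u += w_i * x_i
    return u
```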
After the processing priorities of the horizontal and vertical projection histograms have been computed, the histogram with the higher priority is selected first to perform target segmentation on the edge texture image within the current foreground target frame; the histogram with the lower priority then segments the edge texture image remaining after the higher-priority segmentation. For example, if the horizontal projection histogram has the higher priority, segmentation is performed first according to the horizontal histogram and then according to the vertical one.
The target segmentation method is the same for either projection direction: compute the segmentation accumulated value of the currently selected projection histogram according to the following target segmentation algorithm:
T_y = \sum_{j=m_1}^{m_2} f(\omega_j)\, y_j    Formula (2)

f(\omega_j) = \begin{cases} \omega_j, & y_j \le n \\ -\omega_j, & y_j > n \end{cases}    Formula (3)

where \omega_j is a weighting coefficient and a positive integer; n is the projection threshold; y_j is the projection value of the j-th row or column; f(\omega_j) is the signed weighting coefficient; m_1 and m_2 are row (or column) indices with m_2 > m_1; and T_y is the segmentation accumulated value. As before, the projection threshold n takes a small value, so that image regions whose projection value is at most n are treated as background.
Traverse the rows of the projection histogram (if the currently selected histogram is the horizontal one) or its columns (if it is the vertical one), selecting different m_1 and m_2 and computing the segmentation accumulated value T_y.
When the segmentation accumulated value T_y is greater than or equal to the preset segmentation threshold T, m_1 and m_2 are confirmed as a pair of target segmentation lines, and the image region between m_1 and m_2 is background; when T_y is less than T, they are not segmentation lines. After all segmentation lines of the current projection histogram are confirmed, the segmented-out background can be discarded, reducing subsequent computation. This is also why the embodiments segment in descending order of processing priority: the direction containing more background is processed first to remove most of the background, and the direction containing less background then removes what remains, improving the overall efficiency of target segmentation.
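The following sketch illustrates Formulas (2)-(3) and the segmentation-line search. The patent traverses candidate (m_1, m_2) pairs; scanning maximal runs of background-like rows or columns is an assumed simplification, and the weight w and threshold T are illustrative.

```python
# Sketch of Formulas (2)-(3): find (m1, m2) spans whose accumulated
# value T_y reaches the segmentation threshold T (background spans).
def split_lines(proj, n=5, T=20.0, w=1.0):
    def f(y):                        # Formula (3): signed weighting coefficient
        return w if y <= n else -w
    spans, m1, t_y = [], None, 0.0
    for j, y in enumerate(proj):
        if y <= n:                   # background-like: open or extend a span
            if m1 is None:
                m1, t_y = j, 0.0
            t_y += f(y) * y          # Formula (2): T_y = sum of f(w_j) * y_j
        else:                        # foreground-like value closes the span
            if m1 is not None and t_y >= T:
                spans.append((m1, j - 1))
            m1 = None
    if m1 is not None and t_y >= T:
        spans.append((m1, len(proj) - 1))
    return spans
```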
After target segmentation is complete, several image blocks each containing a moving target are obtained; below, such an image block is referred to simply as a moving foreground object, and it is the input of the subsequent head-shoulder feature detection. Target segmentation thus narrows the detection range of that subsequent step, reducing the feature false detection rate and improving detection efficiency.
Step 202: detect the head-shoulder feature frames in the moving foreground object.
An existing feature detection algorithm (for example, AdaBoost) is used to detect head-shoulder features in the moving foreground object. When training the classifier, head-shoulder images with a consistent angle, similar sizes, and similar features can be selected as positive samples to speed up feature detection.
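As an illustration of step 202 (not the patent's own implementation), a trained cascade could be invoked as below; OpenCV ships no head-shoulder cascade, so the model file name is a hypothetical placeholder for a classifier trained as the patent describes.

```python
# Sketch of step 202: head-shoulder detection with a cascade classifier.
import cv2

cascade = cv2.CascadeClassifier("head_shoulder_cascade.xml")  # hypothetical model

def detect_head_shoulders(foreground_crop):
    gray = cv2.cvtColor(foreground_crop, cv2.COLOR_BGR2GRAY)
    # Each returned (x, y, w, h) is one candidate head-shoulder feature frame.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
```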
Head-shoulder detection on the moving foreground object may yield multiple head-shoulder feature frames, each representing one person. Because feature detection has errors, several feature frames may be detected for the same person, and without special handling the error of the subsequent count would grow.
The embodiments therefore exploit the fact that feature frames of the same target overlap one another, and fuse multiple overlapping head-shoulder feature frames into a single one for subsequent feature tracking and matching.
The fusion process is as follows. From the currently existing head-shoulder feature frames, select two that have not yet undergone fusion processing with each other, and obtain the area of each frame and the intersection area of the two. Judge from these areas whether the two frames meet the fusion conditions: if they do, take their minimum enclosing rectangle as a new head-shoulder feature frame; if they do not, keep them as independent head-shoulder feature frames.
Then judge whether every pair of currently existing head-shoulder feature frames has undergone fusion processing. If so, stop: the remaining head-shoulder feature frames are independent, and each can be taken to correspond to one person. If not, return and continue the fusion processing.
Whether two head-shoulder feature frames meet the fusion conditions is judged as follows.
First, judge whether their intersection area Area_Over is greater than the standard head-shoulder feature frame area Area_S multiplied by a preset area percentage threshold θ (for example, θ = 50%), where Area_S is the square of the standard head-shoulder width Width preset for the current application scene.
If Area_Over is less than or equal to Area_S × θ, the two frames barely intersect or are completely disjoint, so they do not meet the fusion conditions.
If Area_Over is greater than Area_S × θ, the intersection preliminarily meets the fusion conditions, and further confirmation is needed.
After the intersection preliminarily meets the fusion conditions, compute the intersection area percentage of the two head-shoulder feature frames:
p = \omega_a \cdot \frac{Area\_Over}{Area\_A} + \omega_b \cdot \frac{Area\_Over}{Area\_B}    Formula (4)

\omega_a + \omega_b = 1, \quad \omega_a > 0, \quad \omega_b > 0    Formula (5)

where \omega_a and \omega_b are the weighting coefficients of head-shoulder feature frames A and B; Area_Over is the intersection area of frames A and B; Area_A and Area_B are the areas of frames A and B; and p is the intersection area percentage of the two frames.
When the intersection area percentage p of the two head-shoulder feature frames is greater than the preset area percentage threshold θ, the two frames meet the fusion conditions; when p is less than or equal to θ, they do not.
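A sketch of the whole fusion test (the preliminary Area_S check plus Formulas (4)-(5)); the standard width, θ, and the equal weights ω_a = ω_b = 0.5 are illustrative assumptions.

```python
# Sketch of the fusion conditions; boxes are (x, y, w, h).
def intersect_area(a, b):
    w = min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0])
    h = min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

def try_fuse(a, b, std_width=40, theta=0.5, wa=0.5, wb=0.5):
    over = intersect_area(a, b)
    if over <= theta * std_width ** 2:       # preliminary test against Area_S
        return None
    p = wa * over / (a[2] * a[3]) + wb * over / (b[2] * b[3])  # Formula (4)
    if p <= theta:
        return None                          # not fusable: keep frames separate
    x1, y1 = min(a[0], b[0]), min(a[1], b[1])    # minimum enclosing rectangle
    x2 = max(a[0] + a[2], b[0] + b[2])
    y2 = max(a[1] + a[3], b[1] + b[3])
    return (x1, y1, x2 - x1, y2 - y1)
```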
Step 203: judge whether a head-shoulder feature frame in the moving foreground object meets the people counting trigger conditions.
After the head-shoulder feature frame detection of step 202 is complete, target matching and trajectory tracking are performed on the detected head-shoulder feature frames.
The matching process for head-shoulder feature frames is as follows. Obtain the area and position of the head-shoulder feature frames in the current frame image and the previous frame image. Suppose the width of head-shoulder feature frame A in the current frame is w_a, so its area is w_a^2; the width of head-shoulder feature frame B in the previous frame is w_b, so its area is w_b^2. The coordinates of a head-shoulder feature frame are usually taken as those of its center point; suppose frame A is at (x_a, y_a) and frame B at (x_b, y_b).
Whether the head-shoulder feature frames in the current frame image and the previous frame image match is determined from the obtained areas and positions, specifically according to the following formulas:
dist(a, b) = \sqrt{(x_a - x_b)^2 + (y_a - y_b)^2}    Formula (6)

diff\_area(a, b) = |w_a^2 - w_b^2|    Formula (7)

Thr\_Direction = \begin{cases} \theta_1, & direction = 0 \\ \theta_2, & direction = 1 \end{cases}    Formula (8)

Thr = \begin{cases} \omega_1 \cdot dist(a, b) + \omega_2 \cdot diff\_area(a, b), & dist(a, b) < Thr\_Direction \\ \omega_3 \cdot dist(a, b) \cdot \eta + \omega_4 \cdot diff\_area(a, b), & dist(a, b) > Thr\_Direction \end{cases}    Formula (9)

\omega_1 + \omega_2 = 1, \quad \omega_3 \eta + \omega_4 = 1, \quad \eta > 1    Formula (10)

where dist(a, b) is the distance between head-shoulder feature frames A and B; diff_area(a, b) is their area mean square deviation; direction denotes the moving direction of frame A relative to frame B, with 0 and 1 denoting two opposite moving directions; \theta_1 and \theta_2 are preset distance thresholds for the two moving directions; Thr_Direction is the distance threshold selected according to the moving direction; \omega_1 and \omega_3 are weight coefficients of the distance; \omega_2 and \omega_4 are weight coefficients of the area mean square deviation; \eta is an importance coefficient (\eta > 1) indicating that the distance term matters more; and Thr is the matching evaluation value.
Formula (8) provides two distance thresholds (θ_1 and θ_2) tied to the moving direction. Because of the camera's installation angle, a person moving the same physical distance toward the camera or away from it produces different displacements in the picture; setting different distance thresholds for the two moving directions therefore improves the precision of head-shoulder feature matching.
The matching evaluation value Thr computed by the above formulas reflects the matching degree of head-shoulder feature frames in two adjacent frame images. Obtain the preset matching evaluation threshold δ and judge whether Thr is less than δ. If Thr < δ, the head-shoulder feature frames in the current and previous frame images match; otherwise they do not match, and the head-shoulder feature frame in the current frame image is a new head-shoulder feature frame.
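To show how Formulas (6)-(10) combine into the matching decision, here is a sketch; the direction encoding and every weight and threshold are illustrative assumptions chosen to satisfy Formula (10).

```python
# Sketch of the matching evaluation; a frame is (center_x, center_y, width).
import math

def match_score(a, b, theta1=30.0, theta2=50.0, w1=0.7, w2=0.3,
                eta=1.5, w4=0.25):
    (xa, ya, wa), (xb, yb, wb) = a, b
    dist = math.hypot(xa - xb, ya - yb)          # Formula (6)
    diff_area = abs(wa ** 2 - wb ** 2)           # Formula (7)
    direction = 0 if ya >= yb else 1             # assumed direction encoding
    thr_dir = theta1 if direction == 0 else theta2   # Formula (8)
    if dist < thr_dir:                           # Formula (9), first branch
        return w1 * dist + w2 * diff_area
    w3 = (1.0 - w4) / eta                        # keeps w3*eta + w4 = 1 (Formula 10)
    return w3 * dist * eta + w4 * diff_area

def is_match(a, b, delta=60.0):                  # delta: matching threshold
    return match_score(a, b) < delta
```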
As this matching process shows, the matching principle of the embodiments is: the closer in distance and the more similar in area two head-shoulder feature frames in adjacent frame images are, the higher their matching degree.
After a match is determined, the matched head-shoulder feature frame is tracked. Specifically: record its position in the current frame image (the current position for short); record the position where it first appeared in the image detection area (the start position); and accumulate its number of appearances.
With this tracking information, judge whether the head-shoulder feature frame meets the people counting trigger conditions. Specifically: judge whether the frame has moved away from the counting trigger line along its direction of motion, where the direction of motion is the direction from the start position to the current position and the counting trigger line is a preset line in the detection area; and judge whether the frame's inter-frame displacement, i.e. its displacement between the current frame image and the previous frame image, is greater than or equal to a preset inter-frame displacement threshold.
When the head-shoulder feature frame has moved away from the counting trigger line along its direction of motion and its inter-frame displacement is greater than or equal to the preset inter-frame displacement threshold, it meets the people counting trigger conditions; otherwise it does not.
Some further notes on the two trigger conditions follow.
Condition 1: the head-shoulder feature frame moves away from the counting trigger line along its direction of motion.
This condition covers at least two scenarios. Scenario 1: the start position of the head-shoulder feature frame is above the counting trigger line and its current position is below it; the frame is moving top-to-bottom, has crossed the trigger line, and is moving away from it, so it can be counted. Scenario 2: the start position is below the trigger line, the frame moves about in the region below the line and finally leaves through the lower edge of the detection area; once the frame moves below its start position, its direction of motion can be determined from the start and current positions as top-to-bottom, away from the trigger line, so it can likewise be counted.
Condition 2: the inter-frame displacement of the head-shoulder feature frame is greater than or equal to the preset inter-frame displacement threshold.
Head-shoulder detection produces false positives; for example, a static background object may be mistaken for a person's head and shoulders, so the detected head-shoulder feature frames need further screening. The embodiments exploit the fact that background objects barely move: an inter-frame displacement threshold is preset, and a head-shoulder feature frame whose displacement across two adjacent frames is at or above this threshold is considered a genuine, credible head-shoulder feature frame.
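The two trigger conditions can be checked as in the sketch below, which assumes a horizontal counting trigger line at y = line_y and predominantly vertical motion; the thresholds are illustrative.

```python
# Sketch of the counting trigger conditions for one head-shoulder frame.
def meets_trigger(start_y, curr_y, prev_y, line_y=200, disp_thresh=3.0):
    moving_down = curr_y > start_y               # direction: start -> current
    # Condition 1: moving away from the trigger line along that direction
    # (covers both scenarios described above).
    away_from_line = (moving_down and curr_y > line_y) or \
                     (not moving_down and curr_y < line_y)
    # Condition 2: inter-frame displacement at or above the threshold.
    return away_from_line and abs(curr_y - prev_y) >= disp_thresh
```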
Step 204: when the head-shoulder feature frame meets the people counting trigger conditions, count people according to the head-shoulder feature frame.
When step 203 confirms that a head-shoulder feature frame meets the trigger conditions, add one to the count for the frame's moving direction (the direction from its start position to its current position), and mark the frame as a counted head-shoulder feature frame to avoid double counting.
In addition, when a moving foreground object leaves the detection area, the embodiments judge whether none of the head-shoulder feature frames in that object took part in people counting. If none did, people are counted according to the moving foreground object itself.
The foregoing describes counting based on head-shoulder features, but that approach misses some people: for example, when a person in the moving foreground object is occluded by an object, no head-shoulder feature can be detected and the person cannot be counted.
For such cases the embodiments add, on top of the head-shoulder-based counting, an auxiliary counting method based on the moving foreground object. Before the moving foreground object leaves the current detection area, the counting status of its head-shoulder feature frames is checked; when none of them took part in people counting, people are counted according to the moving foreground object, reducing misses.
Specifically, after the moving foreground object is extracted in step 201, target matching and trajectory tracking are performed on it.
The matching process for moving foreground objects is as follows: obtain the areas of the moving foreground objects in the current and previous frame images; when the overlap area of the objects in the two frames is greater than a preset overlap area threshold, the objects are confirmed to match.
After a successful match, the moving foreground object is tracked. Specifically: record its position in the current frame image (the current position); record the position where it first appeared in the image detection area (the start position); and accumulate its number of appearances.
When it is confirmed that none of the head-shoulder feature frames in the moving foreground object took part in people counting, the matching and tracking results of the moving foreground object are used for counting.
First, an appearance count threshold is set for the moving foreground object, based on the appearance counts of the head-shoulder feature frames that have already taken part in counting. Specifically, take the mean appearance count of the last N head-shoulder feature frames that took part in people counting, and multiply this mean by a preset adjustment coefficient to obtain the appearance count threshold of the moving foreground object.
Because the mean is always computed from the most recently counted head-shoulder feature frames, the resulting threshold is not a fixed value but one that adapts in real time to the operating environment, improving counting accuracy. Moreover, as a person passes through the detection area, the head-shoulder feature frame appears fewer times than the comparatively stable moving foreground object; the application therefore applies an adjustment coefficient greater than 1 to the mean so that the appearance count threshold of the moving foreground object is set more reasonably.
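A sketch of this adaptive threshold; N = 10 and the adjustment coefficient 1.5 are illustrative, as is the fallback when no head-shoulder frame has been counted yet.

```python
# Sketch of the appearance-count threshold for the auxiliary counting path.
from collections import deque

recent_counts = deque(maxlen=10)   # appearance counts of the last N counted frames

def appearance_threshold(coeff=1.5):        # coefficient > 1, per the text
    if not recent_counts:
        return float("inf")                 # assumed fallback: no statistics yet
    return coeff * sum(recent_counts) / len(recent_counts)
```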
Once the appearance count threshold of the moving foreground object is set, judge whether the appearance count of the current moving foreground object is greater than it. If so, add one to the count for the object's direction of motion (the direction from its start position to its current position).
As can be seen from the above, the application combines the detected head-shoulder features with the extracted moving foreground object for people counting, which lowers the miss rate and strengthens scene adaptability; meanwhile, target segmentation and similar measures cut the computation of the counting process, so the method can run on video monitoring devices with relatively weak processing power, such as cameras, improving the real-time performance and efficiency of people counting.
Corresponding to the embodiments of the people counting method above, the application also provides embodiments of a people counting device.
The embodiments of the people counting device can be applied to an image processing device. The device embodiments may be implemented in software, in hardware, or in a combination of the two. Taking a software implementation as an example, as a logical device it is formed by the processor of its host equipment reading the corresponding computer program instructions from memory and running them. At the hardware level, Fig. 4 shows a hardware structure diagram of the equipment hosting the people counting device of the application; besides the processor, interfaces, and memory shown in Fig. 4, the host equipment may also include other hardware according to its actual functions, which is not detailed here.
Fig. 5 is a schematic structural diagram of the people counting device in an embodiment of the application. The device comprises an extraction unit 501, a detection unit 502, a judging unit 503, and a counting unit 504, wherein:
the extraction unit 501 is configured to extract a moving foreground object by performing target segmentation on the detection area of the current frame image;
the detection unit 502 is configured to detect head-shoulder feature frames in the moving foreground object;
the judging unit 503 is configured to judge whether a head-shoulder feature frame in the moving foreground object meets the people counting trigger conditions;
the counting unit 504 is configured to count people according to the head-shoulder feature frame when it meets the people counting trigger conditions.
Further, the extraction unit 501 comprises:
a foreground image acquisition module, configured to obtain the foreground image of the detection area of the current frame image;
a foreground target frame acquisition module, configured to obtain a foreground target frame by post-processing the foreground image;
a frame difference map computing module, configured to compute the frame difference map between the detection areas of the current frame image and the previous frame image;
a frame-difference texture map acquisition module, configured to obtain the frame-difference edge texture map of the detection area;
an image projection module, configured to perform horizontal and vertical projection, in the edge texture map, on the image region corresponding to the foreground target frame;
a histogram acquisition module, configured to obtain the horizontal and vertical projection histograms generated by the projection; and
a target segmentation module, configured to perform target segmentation on the edge texture image within the foreground target frame according to the horizontal and vertical projection histograms to obtain the moving foreground object.
Further, the target segmentation module comprises:
a priority computing submodule, configured to compute the processing priorities of the horizontal projection histogram and the vertical projection histogram respectively; and
a target segmentation submodule, configured to segment the edge texture image within the current foreground target frame using the projection histogram with the higher processing priority, and then segment the edge texture image remaining after that segmentation using the projection histogram with the lower priority, obtaining the moving foreground objects.
Further, the priority computing submodule is specifically configured to compute the processing priority of the horizontal projection histogram and the processing priority of the vertical projection histogram by the same method, namely:

u_x = \sum_{i=0}^{n} \omega_i x_i

where u_x is the processing priority; x_i is the number of rows (or columns) whose projection value equals i; \omega_i is a weighting coefficient; and n is a preset projection threshold.
Further, the target segmentation submodule is specifically configured to perform target segmentation with the selected higher-priority projection histogram and with the selected lower-priority projection histogram by the same method, namely:

compute the segmentation accumulated value of the selected projection histogram according to the target segmentation algorithm

T_y = \sum_{j=m_1}^{m_2} f(\omega_j)\, y_j

f(\omega_j) = \begin{cases} \omega_j, & y_j \le n \\ -\omega_j, & y_j > n \end{cases}

where \omega_j is a weighting coefficient and a positive integer; n is the projection threshold; y_j is the projection value of the j-th row or column; f(\omega_j) is the signed weighting coefficient; m_1 and m_2 are row (or column) indices with m_2 > m_1; and T_y is the segmentation accumulated value;

and confirm m_1 and m_2 as target segmentation lines when the segmentation accumulated value T_y is greater than or equal to the preset segmentation threshold T.
Further, the device also comprises:
a tracking unit, configured to perform target matching and trajectory tracking on the head-shoulder feature frames after the detection unit 502 detects them in the moving foreground object;
a recording unit, configured to record, according to the target matching and trajectory tracking results, the current position of a head-shoulder feature frame in the current frame image, and the start position where the head-shoulder feature frame first appeared in the image detection area;
the judging unit 503 being specifically configured to: judge whether the head-shoulder feature frame has moved away from the counting trigger line along its direction of motion, the direction of motion being from the start position to the current position; judge whether the inter-frame displacement of the head-shoulder feature frame, i.e. its displacement between the current frame image and the previous frame image, is greater than or equal to the preset inter-frame displacement threshold; determine that the head-shoulder feature frame meets the people counting trigger conditions when both conditions hold; and otherwise determine that it does not.
For the implementation of the functions and roles of each unit in the above device, see the implementation of the corresponding steps in the above method; details are not repeated here.
Since the device embodiments essentially correspond to the method embodiments, the relevant parts of the method embodiment description apply. The device embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the objectives of the application's solution, which those of ordinary skill in the art can understand and implement without creative effort.
The above are only preferred embodiments of the application and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the application shall fall within its scope of protection.

Claims (15)

1. A people counting method, characterized in that the method comprises:
extracting a moving foreground object by performing target segmentation on the detection area of the current frame image;
detecting head-shoulder feature frames in the moving foreground object;
judging whether a head-shoulder feature frame in the moving foreground object meets the people counting trigger conditions; and
counting people according to the head-shoulder feature frame when it meets the people counting trigger conditions.
2. The method of claim 1, characterized in that extracting a moving foreground object by performing target segmentation on the detection area of the current frame image comprises:
obtaining the foreground image of the detection area of the current frame image;
obtaining a foreground target frame by post-processing the foreground image;
computing the frame difference map between the detection areas of the current frame image and the previous frame image;
obtaining the frame-difference edge texture map of the detection area;
performing horizontal and vertical projection, in the edge texture map, on the image region corresponding to the foreground target frame;
obtaining the horizontal projection histogram and vertical projection histogram generated by the projection; and
performing target segmentation on the edge texture image within the foreground target frame according to the horizontal and vertical projection histograms to obtain the moving foreground object.
3. The method of claim 2, characterized in that performing target segmentation on the edge texture image within the foreground target frame according to the horizontal and vertical projection histograms to obtain the moving foreground object comprises:
computing the processing priorities of the horizontal projection histogram and the vertical projection histogram respectively; and
segmenting the edge texture image within the current foreground target frame using the projection histogram with the higher processing priority, then segmenting the edge texture image remaining after that segmentation using the projection histogram with the lower priority, to obtain the moving foreground objects.
4. The method of claim 3, characterized in that the processing priority of the horizontal projection histogram and the processing priority of the vertical projection histogram are computed by the same method, namely:

u_x = \sum_{i=0}^{n} \omega_i x_i

where u_x is the processing priority; x_i is the number of rows (or columns) whose projection value equals i; \omega_i is a weighting coefficient; and n is a preset projection threshold.
5. The method of claim 3, characterized in that segmentation with the higher-priority projection histogram and segmentation with the lower-priority projection histogram use the same method, namely:
computing the segmentation accumulated value of the selected projection histogram according to the target segmentation algorithm

T_y = \sum_{j=m_1}^{m_2} f(\omega_j)\, y_j

f(\omega_j) = \begin{cases} \omega_j, & y_j \le n \\ -\omega_j, & y_j > n \end{cases}

where \omega_j is a weighting coefficient and a positive integer; n is the projection threshold; y_j is the projection value of the j-th row or column; f(\omega_j) is the signed weighting coefficient; m_1 and m_2 are row (or column) indices with m_2 > m_1; and T_y is the segmentation accumulated value; and
confirming m_1 and m_2 as target segmentation lines when the segmentation accumulated value T_y is greater than or equal to the preset segmentation threshold T.
6. The method of claim 1, characterized in that after detecting the head-shoulder feature frames in the moving foreground object, the method further comprises:
performing target matching and trajectory tracking on the head-shoulder feature frames; and
recording, according to the target matching and trajectory tracking results, the current position of a head-shoulder feature frame in the current frame image and the start position where it first appeared in the image detection area;
and in that judging whether a head-shoulder feature frame in the moving foreground object meets the people counting trigger conditions comprises:
judging whether the head-shoulder feature frame has moved away from the counting trigger line along its direction of motion, the direction of motion being from the start position to the current position;
judging whether the inter-frame displacement of the head-shoulder feature frame, i.e. its displacement between the current frame image and the previous frame image, is greater than or equal to a preset inter-frame displacement threshold; and
determining that the head-shoulder feature frame meets the people counting trigger conditions when it has moved away from the counting trigger line along its direction of motion and its inter-frame displacement is greater than or equal to the preset inter-frame displacement threshold, and otherwise determining that it does not.
7. The method of claim 1, characterized in that the method further comprises:
when the moving foreground object leaves the detection area, judging whether none of the head-shoulder feature frames in the moving foreground object took part in people counting; and
counting people according to the moving foreground object when none of its head-shoulder feature frames took part in people counting.
8. The method of claim 7, characterized in that after extracting the moving foreground object by performing target segmentation on the detection area of the current frame image, the method further comprises:
performing target matching and trajectory tracking on the moving foreground object; and
recording, according to the target matching and trajectory tracking results, the current position of the moving foreground object in the current frame image, the start position where it first appeared in the image detection area, and its number of appearances;
and in that counting people according to the moving foreground object comprises:
setting an appearance count threshold for the moving foreground object; and
adding one to the count for the moving foreground object's direction of motion when its appearance count is greater than the appearance count threshold, the direction of motion being from the start position to the current position.
9. The method of claim 8, characterized in that setting the appearance count threshold of the moving foreground object comprises:
computing the mean appearance count of the last N head-shoulder feature frames that took part in people counting; and
multiplying the mean appearance count by a preset adjustment coefficient to obtain the appearance count threshold of the moving foreground object.
10. A people counting device, characterized in that the device comprises:
an extraction unit, configured to extract a moving foreground object by performing target segmentation on the detection area of the current frame image;
a detection unit, configured to detect head-shoulder feature frames in the moving foreground object;
a judging unit, configured to judge whether a head-shoulder feature frame in the moving foreground object meets the people counting trigger conditions; and
a counting unit, configured to count people according to the head-shoulder feature frame when it meets the people counting trigger conditions.
11. The device of claim 10, characterized in that the extraction unit comprises:
a foreground image acquisition module, configured to obtain the foreground image of the detection area of the current frame image;
a foreground target frame acquisition module, configured to obtain a foreground target frame by post-processing the foreground image;
a frame difference map computing module, configured to compute the frame difference map between the detection areas of the current frame image and the previous frame image;
a frame-difference texture map acquisition module, configured to obtain the frame-difference edge texture map of the detection area;
an image projection module, configured to perform horizontal and vertical projection, in the edge texture map, on the image region corresponding to the foreground target frame;
a histogram acquisition module, configured to obtain the horizontal and vertical projection histograms generated by the projection; and
a target segmentation module, configured to perform target segmentation on the edge texture image within the foreground target frame according to the horizontal and vertical projection histograms to obtain the moving foreground object.
12. The device of claim 11, characterized in that the target segmentation module comprises:
a priority computing submodule, configured to compute the processing priorities of the horizontal projection histogram and the vertical projection histogram respectively; and
a target segmentation submodule, configured to segment the edge texture image within the current foreground target frame using the projection histogram with the higher processing priority, and then segment the edge texture image remaining after that segmentation using the projection histogram with the lower priority, to obtain the moving foreground objects.
13. The device of claim 12, characterized in that the priority computing submodule is specifically configured to compute the processing priority of the horizontal projection histogram and the processing priority of the vertical projection histogram by the same method, namely:

u_x = \sum_{i=0}^{n} \omega_i x_i

where u_x is the processing priority; x_i is the number of rows (or columns) whose projection value equals i; \omega_i is a weighting coefficient; and n is a preset projection threshold.
14. The device according to claim 12, characterized in that the target segmentation submodule is specifically configured to:
perform target segmentation with the selected higher-priority projection histogram and with the selected lower-priority projection histogram by the same method, namely:
calculate the segmentation accumulated value of the selected projection histogram according to a target segmentation algorithm, where the target segmentation algorithm is:
T_y = \sum_{j=m_1}^{m_2} f(\omega_j) y_j

f(\omega_j) = \begin{cases} \omega_j, & y_j \le n \\ -\omega_j, & y_j > n \end{cases}

where:
ω_j is the weighting coefficient, a positive integer;
n is the projection threshold;
y_j is the projection value of the j-th row or column;
f(ω_j) is the weighting coefficient with positive or negative sign;
m_1 and m_2 are row or column indices, with m_2 > m_1; and
T_y is the segmentation accumulated value;
when the segmentation accumulated value T_y becomes greater than or equal to a preset segmentation threshold T and then falls below the segmentation threshold T, m_1 and m_2 are confirmed as the target segmentation lines.
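The accumulation scan can be sketched as below. The emit policy (open a candidate segment when the signed sum turns positive, confirm it once the sum has reached the threshold T and then fallen back below T) is one plausible reading of claim 14, not the authoritative algorithm.

```python
def segmentation_lines(projection, weights, n, T):
    """Scan a projection histogram and return (m1, m2) pairs confirmed as
    target segmentation lines."""
    segments = []
    t_y, m1, reached = 0.0, None, False
    for j, y_j in enumerate(projection):
        f = weights[j] if y_j <= n else -weights[j]  # signed weighting coefficient
        t_y += f * y_j                               # segmentation accumulated value
        if m1 is None and t_y > 0:
            m1 = j                                   # candidate start line
        if t_y >= T:
            reached = True                           # accumulated value reached threshold T
        elif reached:                                # fell back below T: confirm m1..m2
            segments.append((m1, j))
            t_y, m1, reached = 0.0, None, False
    return segments
```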
15. The device according to claim 10, characterized in that the device further comprises:
a tracking unit, configured to perform object matching and trajectory tracking on the head-shoulder feature frame after the detection unit detects the head-shoulder feature frame in the moving foreground object; and
a recording unit, configured to record, according to the object matching and trajectory tracking results, the current position of the head-shoulder feature frame in the current frame image, and to record the initial position at which the head-shoulder feature frame first appears in the image detection area;
wherein the judging unit is specifically configured to: judge whether the head-shoulder feature frame moves away from the counting trigger line along its motion direction, the motion direction being the direction from the initial position to the current position of the head-shoulder feature frame; judge whether the inter-frame displacement of the head-shoulder feature frame is greater than or equal to a preset inter-frame displacement threshold, the inter-frame displacement being the displacement of the head-shoulder feature frame between the current frame image and the previous frame image; determine that the head-shoulder feature frame meets the people counting trigger condition when it moves away from the counting trigger line along the motion direction and its inter-frame displacement is greater than or equal to the preset inter-frame displacement threshold; and otherwise determine that the head-shoulder feature frame does not meet the people counting trigger condition.
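To make claim 15's trigger conditions concrete, here is a hedged sketch assuming a horizontal counting trigger line at y = line_y and positions given as (x, y) pixel coordinates; the helper name and the horizontal-line assumption are illustrative, not from the patent.

```python
import math

def meets_trigger(start_pos, prev_pos, curr_pos, line_y, min_disp):
    """start_pos: where the head-shoulder feature frame first appeared;
    prev_pos / curr_pos: its positions in the previous and current frames;
    min_disp: preset inter-frame displacement threshold."""
    # Condition 1: along the motion direction (initial -> current position),
    # the frame has passed the trigger line and keeps moving away from it.
    start_side = start_pos[1] - line_y
    prev_side = prev_pos[1] - line_y
    curr_side = curr_pos[1] - line_y
    away_from_line = start_side * curr_side < 0 and abs(curr_side) > abs(prev_side)
    # Condition 2: inter-frame displacement between previous and current frame.
    disp = math.hypot(curr_pos[0] - prev_pos[0], curr_pos[1] - prev_pos[1])
    return away_from_line and disp >= min_disp
```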
CN201510540599.2A 2015-08-28 2015-08-28 People counting method and device Active CN105139425B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510540599.2A CN105139425B (en) 2015-08-28 2015-08-28 People counting method and device

Publications (2)

Publication Number Publication Date
CN105139425A (en) 2015-12-09
CN105139425B CN105139425B (en) 2018-12-07

Family

ID=54724757

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510540599.2A Active CN105139425B (en) People counting method and device

Country Status (1)

Country Link
CN (1) CN105139425B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7940957B2 (en) * 2006-06-09 2011-05-10 Sony Computer Entertainment Inc. Object tracker for visually tracking object motion
CN102214309A (en) * 2011-06-15 2011-10-12 北京工业大学 Special human body recognition method based on head and shoulder model
CN104751491A (en) * 2015-04-10 2015-07-01 中国科学院宁波材料技术与工程研究所 Method and device for tracking crowds and counting pedestrian flow

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
吴玉堂 (Wu Yutang): "Research on Vision-Based Pedestrian Flow Statistics Methods" (基于视觉的行人流量统计方法研究), China Master's Theses Full-text Database, Information Science and Technology *
张庆利 (Zhang Qingli): "Research on Automatic Video Object Segmentation and Its Cellular Neural Network Implementation" (视频对象自动分割技术及其细胞神经网络实现方法的研究), China Doctoral Dissertations Full-text Database, Information Science and Technology *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105631418A (en) * 2015-12-24 2016-06-01 浙江宇视科技有限公司 People counting method and device
CN105844234A (en) * 2016-03-21 2016-08-10 商汤集团有限公司 People counting method and device based on head shoulder detection
CN106067008A (en) * 2016-06-15 2016-11-02 汤美 Student's statistical method of network courses and system
CN106504261A (en) * 2016-10-31 2017-03-15 北京奇艺世纪科技有限公司 A kind of image partition method and device
CN106504261B (en) * 2016-10-31 2019-08-06 北京奇艺世纪科技有限公司 A kind of image partition method and device
CN106530328A (en) * 2016-11-04 2017-03-22 深圳维周机器人科技有限公司 Method for detecting and smoothly following moving object based on video images
CN106530328B (en) * 2016-11-04 2019-09-20 深圳维周机器人科技有限公司 A method of it is followed based on video image to moving object detection and smoothly
CN108256404A (en) * 2016-12-29 2018-07-06 北京旷视科技有限公司 Pedestrian detection method and device
CN108256404B (en) * 2016-12-29 2021-12-10 北京旷视科技有限公司 Pedestrian detection method and device
CN107093186A (en) * 2017-03-10 2017-08-25 北京环境特性研究所 The strenuous exercise's detection method matched based on edge projection
CN107330386A (en) * 2017-06-21 2017-11-07 厦门中控智慧信息技术有限公司 A kind of people flow rate statistical method and terminal device
CN108197579A (en) * 2018-01-09 2018-06-22 杭州智诺科技股份有限公司 The detection method of number in protective cabin
CN108280952A (en) * 2018-01-25 2018-07-13 盛视科技股份有限公司 A kind of passenger's trailing monitoring method based on foreground object segmentation
CN110490030A (en) * 2018-05-15 2019-11-22 保定市天河电子技术有限公司 A kind of channel demographic method and system based on radar
CN108921072A (en) * 2018-06-25 2018-11-30 苏州欧普照明有限公司 A kind of the people flow rate statistical method, apparatus and system of view-based access control model sensor
CN108921072B (en) * 2018-06-25 2021-10-15 苏州欧普照明有限公司 People flow statistical method, device and system based on visual sensor
WO2020001302A1 (en) * 2018-06-25 2020-01-02 苏州欧普照明有限公司 People traffic statistical method, apparatus, and system based on vision sensor
CN108989677A (en) * 2018-07-27 2018-12-11 上海与德科技有限公司 A kind of automatic photographing method, device, server and storage medium
CN109101929A (en) * 2018-08-16 2018-12-28 新智数字科技有限公司 A kind of pedestrian counting method and device
CN111353342A (en) * 2018-12-21 2020-06-30 浙江宇视科技有限公司 Shoulder recognition model training method and device, and people counting method and device
CN111353342B (en) * 2018-12-21 2023-09-19 浙江宇视科技有限公司 Shoulder recognition model training method and device, and people counting method and device
CN111461086A (en) * 2020-03-18 2020-07-28 深圳北斗应用技术研究院有限公司 People counting method and system based on head detection
CN111723664A (en) * 2020-05-19 2020-09-29 烟台市广智微芯智能科技有限责任公司 Pedestrian counting method and system for open type area
CN112333431A (en) * 2020-10-30 2021-02-05 深圳市商汤科技有限公司 Scene monitoring method and device, electronic equipment and storage medium
CN112434566A (en) * 2020-11-04 2021-03-02 深圳云天励飞技术股份有限公司 Passenger flow statistical method and device, electronic equipment and storage medium
CN113469982A (en) * 2021-07-12 2021-10-01 浙江大华技术股份有限公司 Method and device for accurate passenger flow statistics and electronic equipment

Also Published As

Publication number Publication date
CN105139425B (en) 2018-12-07

Similar Documents

Publication Publication Date Title
CN105139425A (en) People counting method and device
US10452931B2 (en) Processing method for distinguishing a three dimensional object from a two dimensional object using a vehicular system
CN106980829B (en) Abnormal behaviour automatic testing method of fighting based on video analysis
CN103164706B (en) Object counting method and device based on video signal analysis
Chan et al. Privacy preserving crowd monitoring: Counting people without people models or tracking
Gupte et al. Detection and classification of vehicles
CN102542289B (en) Pedestrian volume statistical method based on plurality of Gaussian counting models
CN102819764B (en) Method for counting pedestrian flow from multiple views under complex scene of traffic junction
EP2128818A1 (en) Method of moving target tracking and number accounting
CN106682586A (en) Method for real-time lane line detection based on vision under complex lighting conditions
JP2019505866A (en) Passerby head identification method and system
CN104123544A (en) Video analysis based abnormal behavior detection method and system
CN103235933A (en) Vehicle abnormal behavior detection method based on Hidden Markov Model
CN102243765A (en) Multi-camera-based multi-objective positioning tracking method and system
CN103632427B (en) A kind of gate cracking protection method and gate control system
CN107886055A (en) A kind of retrograde detection method judged for direction of vehicle movement
Rodríguez et al. An adaptive, real-time, traffic monitoring system
CN104392239A (en) License plate identification method and system
CN102915433A (en) Character combination-based license plate positioning and identifying method
CN104966062A (en) Video monitoring method and device
CN110189425A (en) Multilane free-flow vehicle detection method and system based on binocular vision
CN108830204A (en) The method for detecting abnormality in the monitor video of target
CN111260696A (en) Method for edge-end-oriented pedestrian tracking and accurate people counting
CN104281851A (en) Extraction method and device of car logo information
CN110147748A (en) A kind of mobile robot obstacle recognition method based on road-edge detection

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant