CN108986064A - People flow statistics method, device and system - Google Patents

People flow statistics method, device and system

Info

Publication number
CN108986064A
Authority
CN
China
Prior art keywords
people
target
tracking
frame image
confidence level
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710399814.0A
Other languages
Chinese (zh)
Other versions
CN108986064B (en)
Inventor
宋涛
谢迪
浦世亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201710399814.0A
Publication of CN108986064A
Application granted
Publication of CN108986064B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/53 Recognition of crowd images, e.g. recognition of crowd congestion
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20076 Probabilistic image processing
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30242 Counting objects in image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present invention provide a people flow statistics method, device and system. The people flow statistics method includes: acquiring sequential frame images captured by an image acquisition device; inputting the sequential frame images into a trained fully convolutional neural network to generate a head confidence distribution map for each frame image in the sequential frame images; for the head confidence distribution map of each frame image, determining at least one head detection target in that frame image using a preset target determination method; obtaining the feature matching result and motion smoothness of each head detection target in each frame image, performing target association between the current frame and the previous frame of the current frame accordingly, obtaining tracking targets, and assigning a tracking identifier to each tracking target; and counting the number of all tracking identifiers to obtain the people flow statistics result. The present invention can improve the accuracy and operating efficiency of people flow statistics.

Description

People flow statistics method, device and system
Technical field
The present invention relates to the technical field of machine vision, and in particular to a people flow statistics method, device and system.
Background art
With the continuous progress of society, video surveillance systems are used more and more widely. The flow of people entering and leaving places such as supermarkets, shopping malls, gymnasiums, airports and railway stations is of great significance to the operators or managers of those places; by counting the flow of people, the operation of public activity areas can be monitored and organized effectively in real time. In traditional video surveillance, people flow statistics are mainly obtained by monitoring personnel checking the video manually. This approach is reliable when the monitoring period is short and the flow of people is sparse, but because of the biological limitations of the human eye, statistical accuracy drops sharply when the monitoring period is long or the flow of people is dense, and manual counting also consumes a large amount of labor cost.
In view of the above problems, a related people flow statistics method performs head detection on the current image using multiple classifiers connected in parallel to determine each head in the current image, tracks each detected head to form a head target motion trajectory, and counts the flow of people along the direction of the head target trajectory.
Because the parallel multi-classifier detection process requires setting a detection order for the various classifiers and performing head detection on the current image with each classifier in turn, the choice of classifiers directly affects the accuracy of the people flow statistics. Moreover, the training samples of the parallel multi-classifiers must be collected and labeled separately according to the classifier categories and the specific scenes, with positive and negative samples of multiple categories and calibrated head target boxes, which makes head target recognition excessively complex and reduces the operating efficiency of the people flow statistics.
Summary of the invention
The purpose of the embodiments of the present invention is to provide a people flow statistics method, device and system, so as to improve the accuracy and operating efficiency of people flow statistics. The specific technical solutions are as follows:
In a first aspect, an embodiment of the present invention provides a people flow statistics method, the method comprising:
acquiring sequential frame images captured by an image acquisition device;
inputting the sequential frame images into a trained fully convolutional neural network to generate a head confidence distribution map for each frame image in the sequential frame images;
for the head confidence distribution map of each frame image, determining at least one head detection target in that frame image using a preset target determination method;
obtaining the feature matching result and motion smoothness of each head detection target in each frame image, performing target association between the current frame and the previous frame of the current frame accordingly, obtaining tracking targets, and assigning a tracking identifier to each tracking target;
counting the number of all tracking identifiers to obtain the people flow statistics result.
In a second aspect, an embodiment of the present invention provides a people flow statistics device, the device comprising:
a first acquisition module, configured to acquire sequential frame images captured by an image acquisition device;
a convolution module, configured to input the sequential frame images into a trained fully convolutional neural network and generate a head confidence distribution map for each frame image in the sequential frame images;
a head detection target determination module, configured to determine, for the head confidence distribution map of each frame image, at least one head detection target in that frame image using a preset target determination method;
a tracking identifier assignment module, configured to obtain the feature matching result and motion smoothness of each head detection target in each frame image, perform target association between the current frame and the previous frame of the current frame accordingly, obtain tracking targets, and assign a tracking identifier to each tracking target;
a statistics module, configured to count the number of all tracking identifiers to obtain the people flow statistics result.
In a third aspect, an embodiment of the present invention provides a people flow statistics system, the system comprising:
an image acquisition device, configured to capture sequential frame images;
a processor, configured to acquire the sequential frame images captured by the image acquisition device; input the sequential frame images into a trained fully convolutional neural network to generate a head confidence distribution map for each frame image in the sequential frame images; for the head confidence distribution map of each frame image, determine at least one head detection target in that frame image using a preset target determination method; obtain the feature matching result and motion smoothness of each head detection target in each frame image, perform target association between the current frame and the previous frame of the current frame accordingly, obtain tracking targets, and assign a tracking identifier to each tracking target; and count the number of all tracking identifiers to obtain the people flow statistics result.
With the people flow statistics method, device and system provided by the embodiments of the present invention, the captured sequential video frames are input into a trained fully convolutional neural network to generate the head confidence distribution map corresponding to each frame image; the head detection targets in each frame image are determined from the head confidence distribution map; tracking identifiers are assigned to the tracking targets obtained by associating the current frame with the previous frame; and finally the number of all tracking identifiers is counted to obtain the people flow statistics result. The trained fully convolutional neural network can extract the essential features of the human head, which improves the accuracy of the people flow statistics, and head detection targets can be determined with only a single fully convolutional neural network by generating head confidence distribution maps, which reduces the complexity of head target recognition and thus improves the operating efficiency of the people flow statistics. Moreover, compared with methods based on feature point tracking, the embodiments of the present invention only need to record tracking identifiers and can track head targets stably, which improves counting precision; compared with methods based on human body segmentation and tracking, the embodiments of the present invention not only record tracking identifiers but the recorded identifiers are also unaffected by occlusion, so the precision is higher.
Brief description of the drawings
In order to describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a people flow statistics method according to an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of a fully convolutional neural network according to an embodiment of the present invention;
Fig. 3 is a head confidence distribution map according to an embodiment of the present invention;
Fig. 4 is another schematic flowchart of a people flow statistics method according to an embodiment of the present invention;
Fig. 5 is a head confidence distribution ground truth map according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of another fully convolutional neural network according to an embodiment of the present invention;
Fig. 7 is a head detection target diagram according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of a tracking region according to an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of a people flow statistics device according to an embodiment of the present invention;
Fig. 10 is another schematic structural diagram of a people flow statistics device according to an embodiment of the present invention;
Fig. 11 is a schematic structural diagram of a people flow statistics system according to an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
In order to improve the accuracy and operating efficiency of people flow statistics, the embodiments of the present invention provide a people flow statistics method, device and system.
A people flow statistics method provided by an embodiment of the present invention is introduced first.
It should be noted that the execution subject of the people flow statistics method provided by the embodiments of the present invention may be a processor equipped with a core processing chip, for example a processor whose core processing chip is a DSP (Digital Signal Processor), an ARM (Advanced RISC Machines) reduced instruction set microprocessor, or an FPGA (Field-Programmable Gate Array); the execution subject may also be an image acquisition device that includes such a processor. The people flow statistics method provided by the embodiments of the present invention may be implemented as software, a hardware circuit and/or a logic circuit provided in the execution subject.
As shown in Fig. 1, a people flow statistics method provided by an embodiment of the present invention may include the following steps:
S101: acquiring sequential frame images captured by an image acquisition device.
The image acquisition device may be a video camera with a video capture function, or a camera with a continuous shooting function; of course, the image acquisition device is not limited to these. When the image acquisition device is a video camera, what it captures is a video over a certain period of time, and the video is composed of multiple sequential frame images. When the image acquisition device is a camera, the camera can shoot continuously, each shot produces one image, and the series of images obtained in shooting order can be used as the sequential frame images. If the image acquisition device only captures a single image, the flow of people in the image could also be counted by head recognition, but a single image may suffer from occlusion, blur and similar problems, so that the counted flow of people does not match the actual situation and has a certain error. Therefore, in this embodiment, people flow statistics are performed on acquired sequential frame images to improve accuracy.
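As a minimal illustrative sketch only (the patent does not prescribe any particular capture API), sequential frames could be pulled from a camera or video file with OpenCV roughly as follows; the device index and the optional frame cap are assumptions:

```python
import cv2  # OpenCV, assumed available for frame capture

def read_sequential_frames(source=0, max_frames=None):
    """Yield consecutive frames from a camera index or a video file path."""
    cap = cv2.VideoCapture(source)
    count = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break  # end of stream or read failure
        yield frame
        count += 1
        if max_frames is not None and count >= max_frames:
            break
    cap.release()
```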
S102: inputting the sequential frame images into the trained fully convolutional neural network to generate a head confidence distribution map for each frame image in the sequential frame images.
A fully convolutional neural network has the ability to automatically extract the essential features of the human head, and its network parameters can be obtained through a process of sample training. Therefore, using a trained fully convolutional neural network guarantees fast recognition of new samples containing head targets of many kinds, for example dark hair, light hair, or wearing or not wearing a hat, so that real head targets are obtained to a greater extent and the accuracy of the people flow statistics is improved. As shown in Fig. 2, in this embodiment of the present invention the fully convolutional neural network is composed of multiple convolutional layers and multiple down-sampling layers arranged alternately. The acquired sequential frame images are input into the fully convolutional neural network, which performs feature extraction on the head features of each frame image, and the head confidence distribution map of each frame image is obtained, as shown in Fig. 3, where the bright spots indicate head confidence. The head confidence distribution map can be understood as a map of the probability that a detected target is a human head target. The parameter in the head confidence distribution map may be the specific probability that the target in each identified region is a human head target, where an identified region is a region related to the position and size of a target and its area is generally greater than or equal to the actual size of the target. The magnitude of the probability can also be represented by pixel values: the larger the pixel values in a region, the larger the probability that the target in that region is a human head target. Of course, the specific parameters of the head confidence distribution map in the embodiments of the present invention are not limited to these.
S103: for the head confidence distribution map of each frame image, determining at least one head detection target in that frame image using a preset target determination method.
Since the head confidence distribution map of each frame image obtained by the fully convolutional neural network contains, for each identified region, the probability that the target in that region is a human head target, the targets may include non-head targets. Therefore, for the head confidence distribution map of each frame image, a preset target determination method is used to determine the accurate head detection targets of that frame image from the distribution map. The preset target determination method may be to set a threshold and, when a probability in the head confidence distribution map is greater than the threshold, determine that the region corresponding to that probability is a head detection target; or, according to the pixel values of the pixels, determine that a region is a head detection target when every pixel value in the region is greater than a preset pixel value; or determine that a region is a head detection target when the confidence of every pixel is greater than a preset confidence threshold, or when the average confidence of the pixels is greater than a preset confidence threshold. Of course, the specific way of determining head detection targets is not limited to these; thresholding can be used for ease of implementation.
Optionally, the step of determining, for the head confidence distribution map of each frame image, at least one head detection target in that frame image using the preset target determination method may include:
In a first step, for the head confidence distribution map of each frame image, determining the position of the center point of at least one detection target using a non-maximum suppression method.
In the head confidence distribution map of each frame image, the points of maximum confidence characterize the positions of the center points of the detection targets, and the spatially clustered non-zero points characterize the regions occupied by detection targets on the distribution map. By applying non-maximum suppression to the head confidence distribution map, elements that are not maxima are suppressed and the maximum within each region is found, so that the position of each detection target's center point can be obtained. The formation of a region is related to the confidence of each pixel; because two targets may be too close to each other, or background objects may interfere, the region may deviate from the actually detected target, but the point of maximum confidence still characterizes the detection target's center point, and the head is an approximately circular target. Therefore, after the center point position is determined, a detection target can be determined within a certain neighborhood of the center point, so determining the center point position improves the accuracy of head detection.
In a second step, obtaining the confidence of all pixels in a neighborhood of the center point of each detection target.
Since a detection target can be determined within a neighborhood of the detection target's center point, the size of the neighborhood can be determined from a statistical analysis of head radii; that is, it may be the average of actual head radii, or a value obeying a preset distribution. The larger the confidence of all pixels in the neighborhood of a detection target's center point, the larger the probability that the detection target is a head detection target; therefore, in this embodiment, the confidence of all pixels in the neighborhood needs to be obtained.
In a third step, determining that a detection target whose pixels all have confidence greater than a preset confidence threshold is a head detection target of that frame image.
Since the larger the confidence of all pixels in the neighborhood of a detection target's center point, the larger the probability that the detection target is a head detection target, in this embodiment a preset confidence threshold is set in advance; when the confidence of all the pixels is greater than the preset confidence threshold, the detection target can be determined to be a head detection target of that frame image. The preset confidence threshold can be set according to experience, requirements or repeated test results. For example, the preset confidence threshold may be set to 85%, and if the confidence of all pixels in the neighborhood of a detection target's center point is greater than 85%, the detection target can be determined to be a head detection target; the preset confidence threshold may also be set to 91% or another value, which is not limited here.
Compared with other ways of determining head detection targets, this method requires the confidence of all pixels to be greater than the preset confidence threshold, which further guarantees the accuracy of the head detection targets.
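The following sketch illustrates one possible realization of this detection step on a single confidence map using NumPy and SciPy; the neighborhood radius, the non-maximum suppression window and the 0.85 threshold are illustrative assumptions, not values fixed by the patent:

```python
import numpy as np
from scipy.ndimage import maximum_filter  # used here for simple non-maximum suppression

def detect_heads(conf_map, radius=3, conf_thresh=0.85):
    """Return (row, col) center points whose whole neighborhood exceeds the confidence threshold."""
    # Non-maximum suppression: keep only pixels that equal the local maximum of their window.
    local_max = conf_map == maximum_filter(conf_map, size=2 * radius + 1)
    candidates = np.argwhere(local_max & (conf_map > conf_thresh))
    heads = []
    for r, c in candidates:
        r0, r1 = max(r - radius, 0), min(r + radius + 1, conf_map.shape[0])
        c0, c1 = max(c - radius, 0), min(c + radius + 1, conf_map.shape[1])
        neighborhood = conf_map[r0:r1, c0:c1]
        # Keep the center only if every pixel in its neighborhood clears the threshold.
        if neighborhood.min() > conf_thresh:
            heads.append((int(r), int(c)))
    return heads
```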
S104: obtaining the feature matching result and motion smoothness of each head detection target in each frame image, performing target association between the current frame and the previous frame of the current frame accordingly, obtaining tracking targets, and assigning a tracking identifier to each tracking target.
After the head detection targets in each frame image are determined, feature matching of the head detection targets is first performed for each frame image; that is, the features of each head detection target in each frame image are obtained, and head detection targets with the same features in different frame images can then be determined by feature matching, so that head detection targets with the same features can be tracked. At the same time, smoothness analysis of the head detection targets is performed for each frame image; that is, the motion smoothness of each head detection target in each frame image is obtained through the smoothness analysis, where motion smoothness refers to the motion trend of a head detection target with the same features across the sequential frame images. If the motion trend of a head detection target has a large jump, the head detection target may be a false detection. Then, for each head detection target, target tracking is realized by target association between the current frame and the previous frame according to the feature matching result and motion smoothness obtained for each head detection target in each frame image; the head detection targets that are tracking targets can thus be determined, and a tracking identifier is assigned to each of these tracking targets so that the tracking target can be tracked according to its tracking identifier. All head targets may be determined to be tracking targets, or only the head detection targets with high motion smoothness may be determined to be tracking targets; a preliminary screening by motion smoothness can guarantee the accuracy of target tracking and improve the efficiency of the people flow statistics. For each head detection target, the steps of target association between the current frame and the previous frame may or may not be performed synchronously, which has no significant influence on the target tracking result and is not specifically limited here.
Optionally, the step of obtaining the feature matching result and motion smoothness of each head detection target in each frame image, performing target association between the current frame and the previous frame of the current frame accordingly, obtaining tracking targets and assigning tracking identifiers to the tracking targets may include:
In a first step, performing feature matching and smoothness analysis on each head detection target of each frame image in the sequential frame images of the video, to obtain the feature matching result and motion smoothness of the head detection target.
After the head detection targets in each frame image are determined, feature matching can be performed on the head detection targets in each frame image; that is, the features of the head detection targets are determined, and head detection targets with the same features are matched across frame images. Smoothness analysis is then performed on the head detection targets in each frame image; that is, the motion smoothness of the head detection targets is determined. Motion smoothness refers to the motion trend of a head detection target with the same features across the sequential frame images; if the motion trend of a head detection target has a large jump, the head detection target may be a false detection. Therefore, in order to track the different head detection targets in the sequential frame images separately and improve the tracking accuracy, in this embodiment the feature matching result and motion smoothness of the head detection targets need to be obtained through feature matching and smoothness analysis.
In a second step, performing target association between the current frame and the previous frame of the current frame according to the feature matching result and motion smoothness, and determining that a head detection target is a tracking target when its feature matching degree is higher than a preset matching degree threshold and its motion smoothness is higher than a preset smoothness threshold.
Illustratively, after feature matching and smoothness analysis are performed on the head detection targets, it can be determined, based on the obtained feature matching results and motion smoothness, that head detection targets with a high feature matching degree and high motion smoothness between preceding and following frame images are the same head identity; a head detection target with these characteristics can then be determined to be a tracking target. Therefore, a preset matching degree threshold and a preset smoothness threshold can be set in advance according to experience, requirements or repeated test results, and when the feature matching degree is higher than the preset matching degree threshold and the motion smoothness is higher than the preset smoothness threshold, the head detection target is determined to be a tracking target, thereby improving the head tracking accuracy. For example, the preset matching degree threshold may be set to 88% or 83%. The preset smoothness threshold relates to the inner product of the velocity vector and the motion direction vector of the head detection target, which characterizes the consistency of the head detection target's motion; the larger the preset smoothness threshold, the stricter the requirement on the motion consistency of the head detection target.
In a third step, assigning a tracking identifier to the tracking target.
After a tracking target is determined, a tracking identifier can be assigned to the tracking target, so that different tracking targets can be tracked accurately and separately. It can be understood that the above steps need to be performed for each head detection target to determine the different tracking targets in the sequential frame images, so as to track the different tracking targets.
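A minimal association sketch in the spirit of this step is given below; the cosine appearance similarity, the smoothness score based on direction consistency, and both thresholds are assumptions introduced for illustration rather than quantities fixed by the patent:

```python
import itertools
import numpy as np

_next_id = itertools.count(1)  # source of fresh tracking identifiers

def associate(prev_tracks, detections, match_thresh=0.85, smooth_thresh=0.5):
    """prev_tracks: dicts with 'id', 'center', 'velocity', 'feature'; detections: dicts with 'center', 'feature'."""
    updated, used = [], set()
    for det in detections:
        best, best_sim = None, -1.0
        for trk in prev_tracks:
            if trk["id"] in used:
                continue
            f1, f2 = np.asarray(trk["feature"], float), np.asarray(det["feature"], float)
            sim = float(f1 @ f2 / (np.linalg.norm(f1) * np.linalg.norm(f2) + 1e-9))  # assumed cosine similarity
            if sim > best_sim:
                best, best_sim = trk, sim
        if best is not None and best_sim > match_thresh:
            disp = np.asarray(det["center"], float) - np.asarray(best["center"], float)
            vel = np.asarray(best["velocity"], float)
            if np.linalg.norm(vel) < 1e-9 or np.linalg.norm(disp) < 1e-9:
                smooth = 1.0  # no motion history or no displacement yet: accept the association
            else:
                # Motion smoothness: alignment of the previous velocity with the new displacement.
                smooth = float(vel @ disp / (np.linalg.norm(vel) * np.linalg.norm(disp)))
            if smooth > smooth_thresh:
                used.add(best["id"])
                updated.append({"id": best["id"], "center": det["center"],
                                "velocity": disp, "feature": det["feature"]})
                continue
        # Unmatched detection: start a new tracking target and assign a fresh tracking identifier.
        updated.append({"id": next(_next_id), "center": det["center"],
                        "velocity": np.zeros(2), "feature": det["feature"]})
    return updated
```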
S105: counting the number of all tracking identifiers to obtain the people flow statistics result.
Since different tracking identifiers represent different tracking targets and each tracking target is one accurately detected head, counting the number of all tracking identifiers determines the flow of people in the currently acquired sequential frame images.
Optionally, the step of counting the number of all tracking identifiers to obtain the people flow statistics result may include:
In a first step, determining the direction of the crowd's motion trajectory according to the sequential frame images.
In the sequential frame images, the position of each tracking target may vary from frame to frame, and this variation causes the tracking target to generate a motion trajectory across the frame images. In a fixed scene, the trajectory directions of different tracking targets are largely consistent; for example, in a street scene, the targets generally move along the direction of the street. Therefore, the direction of the crowd's motion trajectory can be determined by analyzing the positions of the tracking targets in the sequential frame images.
In a second step, determining, in the sequential frame images, a detection line perpendicular to the direction of the crowd's motion trajectory.
In a fixed scene, the targets are usually in motion. In order to reduce the tracking error introduced when two targets cross each other during tracking, a detection line can be set in the sequential frame images; the direction of the detection line is generally perpendicular to the direction of the crowd's motion trajectory, and the detection line can be used as the detection condition for the people flow statistics.
In a third step, when any tracking target crosses the detection line, recording the tracking identifier corresponding to that tracking target.
In a fourth step, counting the number of tracking identifiers corresponding to all tracking targets that cross the detection line, to obtain the people flow statistics result.
When a tracking target crosses the detection line, the tracking identifier corresponding to that tracking target can be recorded; the total number of tracking targets that cross the detection line indicates the flow of people in the sequential frame images. Therefore, the statistics result of the flow of people, that is, the number of tracking targets that cross the detection line, can be determined by counting the number of tracking identifiers corresponding to the tracking targets that cross the detection line.
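A sketch of the line-crossing count is shown below, assuming a vertical detection line at a fixed x coordinate and tracks that expose their identifier together with their centers in the previous and current frames; these data shapes are assumptions for illustration:

```python
def count_crossings(tracks_prev_curr, line_x, counted_ids=None):
    """tracks_prev_curr: iterable of (track_id, prev_center, curr_center) with centers as (x, y).
    Records each tracking identifier the first time its target crosses the detection line x = line_x."""
    if counted_ids is None:
        counted_ids = set()
    for track_id, (px, _), (cx, _) in tracks_prev_curr:
        crossed = (px - line_x) * (cx - line_x) < 0  # sign change means the segment crossed the line
        if crossed and track_id not in counted_ids:
            counted_ids.add(track_id)
    return len(counted_ids), counted_ids
```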
Optionally, before the step of obtaining the feature matching result and motion smoothness of each head detection target in each frame image, performing target association between the current frame and the previous frame of the current frame, obtaining tracking targets and assigning tracking identifiers to the tracking targets, the method may further include:
delimiting at least one tracking region in the sequential frame images according to a preset people flow statistics condition.
The preset people flow statistics condition may be an application requirement of the people flow statistics, for example the need to count the flow of people entering and leaving a checkpoint passage, the flow of people in a waiting hall, or the flow of people in a ticket hall. Illustratively, the tracking region marked off according to the preset people flow statistics condition may then be the area within 5 meters before and after the checkpoint, within 2 meters in front of the waiting hall entrance and the ticket gate, or within 10 meters in front of each ticket window in the ticket hall. In scenes with a large and complex flow of people, after a detection line is set, some targets may never cross the detection line with the people flow statistics method of this embodiment, making the error of the statistics result too large. Therefore, in order to reduce the error of the people flow statistics result, one or more tracking regions can be delimited for different preset people flow statistics conditions, and the flow of people is counted within the tracking regions, so as to guarantee that every tracking target in the sequential frame images crosses its corresponding detection line. Tracking regions may also be delimited in the sequential frame images according to engineering experience; a specific tracking region with a simple background can be chosen for detection, tracking and counting, which excludes the interference of complex backgrounds and further improves the accuracy of the people flow statistics. Moreover, dividing the image into tracking regions reduces the computation of matching and tracking and improves the real-time performance of the method.
Optionally, the step of obtaining the feature matching result and motion smoothness of each head detection target in each frame image, performing target association between the current frame and the previous frame of the current frame, obtaining tracking targets and assigning tracking identifiers to the tracking targets may include:
in each tracking region, obtaining the feature matching result and motion smoothness of each head detection target in each frame image, performing target association between the current frame and the previous frame of the current frame accordingly, obtaining tracking targets, and assigning a tracking identifier to each tracking target.
After the tracking regions are delimited in the sequential frame images, feature matching can be performed, using the target association between the current frame and the previous frame, on each head detection target in each tracking region of each frame image; that is, the features of each head detection target are determined, and head detection targets with the same features are matched across frame images. Smoothness analysis can also be performed on each head detection target in each tracking region of each frame image; that is, the motion smoothness of each head detection target is determined. Motion smoothness refers to the motion trend of a head detection target with the same features across the sequential frame images; if the motion trend of a head detection target has a large jump, the head detection target may be a false detection. Then, through feature matching and smoothness analysis, target association between the current frame and the previous frame is performed on the head detection targets in each frame image, the head detection targets with high smoothness can be determined to be tracking targets, which serve as the head targets, and tracking identifiers are assigned to these tracking targets so that the tracking targets can be tracked within the tracking regions according to the tracking identifiers.
Optionally, the step of counting the number of all tracking identifiers to obtain the people flow statistics result may further include:
in each tracking region, determining a detection line perpendicular to the direction of the crowd's motion trajectory;
then counting, in that tracking region, the number of tracking identifiers corresponding to all tracking targets that cross the detection line, to obtain the people flow statistics result.
In order to reduce the tracking error introduced when two targets cross each other during tracking, a detection line can be set in each tracking region of the sequential frame images, so that one or more detection lines are obtained; the number of detection lines is the same as the number of delimited tracking regions, and the direction of each detection line is generally perpendicular to the direction of the crowd's motion trajectory in the tracking region where that detection line lies. The detection lines can be used as the detection condition for the people flow statistics. When a tracking target in any tracking region crosses the detection line, the tracking identifier corresponding to that tracking target can be recorded; the total number of tracking targets crossing the detection line indicates the flow of people in that tracking region. Therefore, the statistics result of the flow of people in the tracking region can be determined by counting the number of tracking identifiers corresponding to the tracking targets that cross the detection line, and the total flow of people in the sequential frame images is determined by aggregating the statistics results of the flow of people in all the tracking regions.
With this embodiment, the captured sequential video frames are input into the trained fully convolutional neural network to generate the head confidence distribution map corresponding to each frame image; the head detection targets in each frame image are determined from the head confidence distribution map; tracking identifiers are assigned to the associated tracking targets using the target association between the current frame and the previous frame; and finally the number of all tracking identifiers is counted to obtain the people flow statistics result. The trained fully convolutional neural network can extract the essential features of the human head, which improves the accuracy of the people flow statistics, and head detection targets can be determined with only a single fully convolutional neural network by generating head confidence distribution maps, which reduces the complexity of head target recognition and thus improves the operating efficiency of the people flow statistics. Moreover, compared with methods based on feature point tracking, this embodiment only needs to record tracking identifiers and can track head targets stably, which improves counting precision; compared with methods based on human body segmentation and tracking, this embodiment not only records tracking identifiers but the recorded identifiers are also unaffected by occlusion, so the precision is higher.
Based on the embodiment shown in Fig. 1, as shown in Fig. 4, an embodiment of the present invention provides another people flow statistics method, which may further include the following steps before S102:
S401: obtaining preset training set sample images and the center position of each head target in the preset training set sample images.
In this embodiment, the fully convolutional neural network needs to be constructed before it is run, because the network parameters of the fully convolutional neural network are obtained through training. The training process can be understood as a process of learning head targets of various preset kinds, for example learning the features of dark hair, learning the features of light hair, learning the features of wearing or not wearing a hat, and so on; since head features are of many kinds, they are not enumerated here, but they all fall within the scope of the training samples of this embodiment. The preset training set sample images need to be constructed for the features of the various heads, with each image corresponding to different head features. Moreover, since the head usually obeys a circular Gaussian distribution, the center position of each head target needs to be obtained; this center position can be calibrated.
S402: generating the head confidence distribution ground truth map of the preset training set sample images according to a preset distribution law and the center position of each head target in the preset training set sample images.
The preset distribution law is the probability distribution obeyed by the confidence of a head target; under normal circumstances the confidence of a head target obeys a circular Gaussian distribution, although this embodiment is not limited to this. As shown in Fig. 5, by the operation of the preset training set sample image shown in the left figure with a Gaussian kernel, the head confidence distribution ground truth map shown in the right figure is obtained; it can be seen from the head confidence distribution ground truth map that each bright spot corresponds to one head target in the preset training set sample image. Assuming that the center position of each head target in the calibrated image is P_h and the confidence of the head target obeys a circular Gaussian distribution N_h, the head confidence distribution ground truth map is obtained according to formulas (1) and (2).
Here p denotes the position coordinates of any pixel on the head confidence distribution ground truth map; D(p) denotes the head confidence at position p on the head confidence distribution ground truth map; σ_h denotes the variance of the circular Gaussian distribution N_h; h denotes the human head; P_h denotes the center position of each head target; and N_h denotes the circular Gaussian distribution obeyed by the confidence of the head target. Formula (2) expresses that the center of a calibrated head target has the highest confidence 1.0, and the confidence decays to 0 toward the edge.
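The exact expressions of formulas (1) and (2) appear as figures in the original patent and are not reproduced here; the sketch below therefore assumes the common isotropic Gaussian form D(p) = max_h exp(-||p - P_h||^2 / (2 σ_h^2)), which matches the stated properties (confidence 1.0 at each calibrated center, decaying toward 0 at the edge):

```python
import numpy as np

def gaussian_ground_truth(shape, head_centers, sigma=4.0):
    """Build a head confidence ground truth map of the given (H, W) shape.
    head_centers: list of (row, col) calibrated head centers; sigma: assumed Gaussian spread."""
    H, W = shape
    rows, cols = np.mgrid[0:H, 0:W]
    gt = np.zeros((H, W), dtype=np.float32)
    for (r, c) in head_centers:
        dist2 = (rows - r) ** 2 + (cols - c) ** 2
        # Each head contributes a circular Gaussian peaking at 1.0 on its calibrated center.
        gt = np.maximum(gt, np.exp(-dist2 / (2.0 * sigma ** 2)))
    return gt
```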
S403: inputting the preset training set sample images into the initial fully convolutional neural network to obtain the head confidence distribution maps of the preset training set sample images.
The network parameters of the initial fully convolutional neural network are preset values. The head confidence distribution maps of the preset training set sample images can be obtained through the initial fully convolutional neural network; these head confidence distribution maps are compared with the head confidence distribution ground truth maps described above, and the network parameters are updated through continuous training and learning, so that the head confidence distribution maps approach the head confidence distribution ground truth maps. When they are close enough, the fully convolutional neural network is determined to be the trained fully convolutional neural network used for people flow statistics.
Optionally, the fully convolutional neural network may include: convolutional layers, down-sampling layers and a deconvolution layer.
A fully convolutional neural network often includes at least one convolutional layer and at least one down-sampling layer; the deconvolution layer is an optional layer. In order to make the resolution of the obtained feature map the same as the resolution of the input preset training set sample image, thereby reducing the image compression ratio conversion step and facilitating the head confidence operations, a deconvolution layer can be set after the last convolutional layer.
Optionally, the step of inputting the preset training set sample images into the initial fully convolutional neural network to obtain the head confidence distribution maps of the preset training set sample images may include:
In a first step, inputting the preset training set sample images into the initial fully convolutional neural network, and extracting the features of the preset training set sample images through the network structure of alternating convolutional layers and down-sampling layers.
In a second step, up-sampling the features through the deconvolution layer until their resolution is the same as the resolution of the preset training set sample image, to obtain the up-sampled result.
The preset training set sample images are input into the initial fully convolutional neural network; as shown in Fig. 6, features from low level to high level are extracted layer by layer using a series of convolutional layers and down-sampling layers arranged alternately. A deconvolution layer is then connected, which up-samples the features to the size of the input preset training set sample image.
In a third step, performing an operation on the result using a 1 × 1 convolutional layer, to obtain a head confidence distribution map of the same resolution as the preset training set sample image.
In order to guarantee that the resolution of the head confidence distribution map is the same as that of the input preset training set sample image, the up-sampled result can finally be processed by a convolutional layer. The convolution kernel size of this convolutional layer may be 1 × 1, 3 × 3, 5 × 5 or another size, but in order to accurately extract the feature of a single pixel, the convolution kernel of this layer can be chosen to be 1 × 1. The head confidence distribution map is then obtained through the operation of this convolutional layer, and each pixel on the resulting head confidence distribution map characterizes the head confidence at the corresponding image position.
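A compact sketch of such a network in PyTorch is given below; the number of stages, channel widths and kernel sizes are assumptions chosen for illustration, while the overall shape (alternating convolution and down-sampling, a deconvolution layer restoring the input resolution, and a final 1 × 1 convolution producing the confidence map) follows the description above:

```python
import torch
import torch.nn as nn

class HeadConfidenceFCN(nn.Module):
    """Fully convolutional network mapping an image to a single-channel head confidence map."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                               # down-sampling layer, 1/2 resolution
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                               # down-sampling layer, 1/4 resolution
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        # Deconvolution layer up-sampling the features back to the input resolution.
        self.upsample = nn.ConvTranspose2d(128, 32, kernel_size=4, stride=4)
        # 1 x 1 convolution producing the per-pixel head confidence.
        self.head = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, x):
        x = self.features(x)
        x = self.upsample(x)
        return torch.sigmoid(self.head(x))  # confidences in [0, 1]
```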
S404: calculating the mean error between the head confidence distribution maps of the preset training set sample images and the head confidence distribution ground truth maps of the preset training set sample images.
S405: when the mean error is greater than a preset error threshold, updating the network parameters according to the mean error and a preset gradient operation strategy, to obtain an updated fully convolutional neural network; calculating the mean error between the head confidence distribution maps of the preset training set sample images obtained by the updated fully convolutional neural network and the head confidence distribution ground truth maps of the preset training set sample images; and repeating until the mean error is less than or equal to the preset error threshold, at which point the corresponding fully convolutional neural network is determined to be the trained fully convolutional neural network.
The fully convolutional neural network is trained with the classic back-propagation algorithm, and the preset gradient operation strategy may be ordinary gradient descent or stochastic gradient descent. Gradient descent searches along the negative gradient direction; the closer to the target value, the smaller the step and the slower the progress. Since stochastic gradient descent uses only one sample per iteration, its per-iteration speed is much higher than that of full gradient descent. Therefore, in order to improve operating efficiency, this embodiment can use stochastic gradient descent to update the network parameters. During training, the mean error between the head confidence distribution map output by the fully convolutional neural network for the preset training set sample images and the head confidence distribution ground truth map is calculated as in formula (3), the network parameters of the fully convolutional neural network are updated with this mean error, and the iteration proceeds as described above until the mean error no longer decreases. The network parameters of the fully convolutional neural network include the convolution kernel parameters and bias parameters of the convolutional layers.
Here L_D(θ) denotes the mean error between the head confidence distribution map output by the network and the head confidence distribution ground truth map; D denotes the head confidence distribution ground truth map obtained by formula (1); θ denotes the network parameters of the fully convolutional neural network; N denotes the number of preset training set sample images; F_d(X_i; θ) denotes the head confidence distribution map output by a forward computation of the fully convolutional neural network obtained by training; X_i denotes the input image numbered i that is input to the network; i denotes the image number; and D_i denotes the head confidence distribution ground truth map corresponding to X_i.
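A training-loop sketch consistent with this description is shown below; the exact form of formula (3) appears as a figure in the original patent, so the mean squared error between the predicted map and the ground truth map is assumed here, together with illustrative values for the learning rate and the error threshold:

```python
import torch
import torch.nn.functional as F

def train_fcn(model, images, gt_maps, lr=0.01, error_threshold=1e-3, max_iters=10000):
    """images, gt_maps: tensors of shape (N, 3, H, W) and (N, 1, H, W)."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)   # stochastic gradient descent
    for _ in range(max_iters):
        i = torch.randint(len(images), (1,)).item()          # one sample per iteration
        pred = model(images[i:i + 1])
        loss = F.mse_loss(pred, gt_maps[i:i + 1])            # assumed mean-error form of formula (3)
        if loss.item() <= error_threshold:
            break
        optimizer.zero_grad()
        loss.backward()                                      # classic back-propagation
        optimizer.step()
    return model
```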
With this embodiment, the captured sequential video frames are input into the trained fully convolutional neural network to generate the head confidence distribution map corresponding to each frame image; the head detection targets in each frame image are determined from the head confidence distribution map; tracking identifiers are assigned to the associated tracking targets using the target association between the current frame and the previous frame; and finally the number of all tracking identifiers is counted to obtain the people flow statistics result. The trained fully convolutional neural network can extract the essential features of the human head, which improves the accuracy of the people flow statistics, and head detection targets can be determined with only a single fully convolutional neural network by generating head confidence distribution maps, which reduces the complexity of head target recognition and thus improves the operating efficiency of the people flow statistics. Moreover, compared with methods based on feature point tracking, this embodiment only needs to record tracking identifiers and can track head targets stably, which improves counting precision; compared with methods based on human body segmentation and tracking, this embodiment not only records tracking identifiers but the recorded identifiers are also unaffected by occlusion, so the precision is higher. In the training process of the fully convolutional neural network, preset training set sample images are provided for head targets with different features; through training and iteration on the preset training set sample images, the resulting fully convolutional neural network has strong generalization ability, avoids complicated classifier cascades, and has a simpler structure.
Below with reference to specific application example, it is provided for the embodiments of the invention people flow rate statistical method and is introduced.
For under the scene at crossing, video image is acquired by video camera, the frame image in video image is inputted Trained obtained full convolutional neural networks, obtain the number of people confidence level of the frame image;For the number of people confidence level of the frame image Distribution map determines the position of the central point of each detection target using non-maxima suppression, and in the central point of detection target Neighborhood in the confidence level of pixel be greater than default confidence threshold value, determine that the number of people detects target, number of people inspection as shown in Figure 7 It surveys in object delineation shown in black surround.
Then the tracing area being made of lines A, lines B and lines C as shown in Figure 8 delimited the frame image, led to Cross Fig. 8 can be seen that in tracing area be provided with a detection line D, detection line D is vertical with the direction of motion of the stream of people, with When track target passes through the detection line, the tracking mark of record tracking target.In the tracing area, currently there are 8 people to pass through Detection line, then counting flow of the people is 8 people.
Compared with the related art, in this solution the captured consecutive video frames are input into the trained fully convolutional neural network to generate a head confidence distribution map for each frame; the head detection targets in each frame are determined from the head confidence distribution map; tracking identifiers are assigned to the associated tracking targets by associating targets between the current frame and its previous frame; and finally the number of all tracking identifiers is counted to obtain the people flow statistics result. Because the trained fully convolutional neural network can extract the essential features of the human head, the accuracy of the people flow statistics is improved; and because a single fully convolutional neural network can determine the head detection targets by generating the head confidence distribution map, the complexity of head target recognition is reduced, thereby improving the operating efficiency of the people flow statistics. Moreover, compared with methods based on feature-point tracking, this solution only needs to record tracking identifiers, so head targets can be tracked stably and the counting precision is improved; compared with methods based on human body segmentation and tracking, this solution only records tracking identifiers, and the recorded identifiers are not affected by occlusion, so the precision is higher.
Corresponding to the above embodiments, an embodiment of the present invention provides a people flow statistical device. As shown in Figure 9, the people flow statistical device includes:
a first acquisition module 910, configured to obtain the consecutive frame images captured by an image capture device;
a convolution module 920, configured to input the consecutive frame images into the trained fully convolutional neural network to generate a head confidence distribution map for each frame of the consecutive frame images;
a head detection target determination module 930, configured to determine, for the head confidence distribution map of each frame, at least one head detection target in the frame using a preset target determination method;
a tracking identifier assignment module 940, configured to obtain a feature matching result and a motion smoothness of any head detection target in each frame, associate targets between the current frame and its previous frame accordingly to obtain a tracking target, and assign a tracking identifier to the tracking target; and
a statistics module 950, configured to count the number of all tracking identifiers to obtain the people flow statistics result.
With this embodiment, the captured consecutive video frames are input into the trained fully convolutional neural network to generate a head confidence distribution map for each frame; the head detection targets in each frame are determined from the head confidence distribution map; tracking identifiers are assigned to the associated tracking targets by associating targets between the current frame and its previous frame; and finally the number of all tracking identifiers is counted to obtain the people flow statistics result. Because the trained fully convolutional neural network can extract the essential features of the human head, the accuracy of the people flow statistics is improved; and because a single fully convolutional neural network can determine the head detection targets by generating the head confidence distribution map, the complexity of head target recognition is reduced, thereby improving the operating efficiency of the people flow statistics. Moreover, compared with methods based on feature-point tracking, this embodiment only needs to record tracking identifiers, so head targets can be tracked stably and the counting precision is improved; compared with methods based on human body segmentation and tracking, this embodiment only records tracking identifiers, and the recorded identifiers are not affected by occlusion, so the precision is higher.
Optionally, the head detection target determination module 930 may specifically be configured to:
determine, for the head confidence distribution map of each frame, the position of the center point of at least one detection target using a non-maximum suppression method;
obtain the confidence of all pixels in the neighborhood of the center point of each detection target; and
determine a detection target whose neighborhood pixels all have a confidence greater than the preset confidence threshold as a head detection target of the frame.
Optionally, the tracking identifier assignment module 940 may specifically be configured to:
perform feature matching and smoothness analysis on any head detection target of each frame in the consecutive video frames to obtain the feature matching result and motion smoothness of the head detection target;
associate targets between the current frame and its previous frame according to the feature matching result and the motion smoothness, and determine a head detection target as a tracking target when its feature matching degree is higher than a preset matching threshold and its motion smoothness is higher than a preset smoothness threshold, as sketched below; and
assign a tracking identifier to the tracking target.
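The following sketch illustrates one possible form of the association step referenced above: a detection inherits a tracking identifier only when both a feature-matching score and a motion-smoothness score exceed preset thresholds. The cosine-similarity descriptor comparison, the smoothness formula and the threshold values are assumptions; the patent only requires that both quantities exceed their presets.

```python
import numpy as np

def associate(prev_tracks, detections, match_thresh=0.7, smooth_thresh=0.5):
    """prev_tracks: list of dicts with keys 'id', 'center', 'velocity', 'feature'
    (center and velocity as 2-element arrays). detections: list of dicts with
    keys 'center' and 'feature'. Each detection receives a tracking id in place."""
    next_id = 1 + max((t['id'] for t in prev_tracks), default=0)
    for det in detections:
        best_track, best_score = None, 0.0
        for trk in prev_tracks:
            # Feature matching: cosine similarity of the appearance descriptors.
            f1, f2 = trk['feature'], det['feature']
            match = float(np.dot(f1, f2) /
                          (np.linalg.norm(f1) * np.linalg.norm(f2) + 1e-9))
            # Motion smoothness: closeness to the position predicted by the
            # previous motion; higher means a smoother, more plausible move.
            predicted = trk['center'] + trk['velocity']
            smooth = 1.0 / (1.0 + float(np.linalg.norm(det['center'] - predicted)))
            # Both scores must exceed their preset thresholds to associate.
            if match > match_thresh and smooth > smooth_thresh and match > best_score:
                best_track, best_score = trk, match
        if best_track is not None:
            det['id'] = best_track['id']   # associated: inherit the tracking identifier
        else:
            det['id'] = next_id            # new target: assign a fresh identifier
            next_id += 1
    return detections
```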
Optionally, the statistics module 950 may specifically be configured to:
determine the direction of the crowd's motion trajectory according to the consecutive frame images;
determine, in the consecutive frame images, a detection line perpendicular to the direction of the crowd's motion trajectory;
record the tracking identifier corresponding to any tracking target when that tracking target passes the detection line; and
count the number of tracking identifiers corresponding to all tracking targets that have passed the detection line to obtain the people flow statistics result.
Optionally, the device may further include:
a delimiting module, configured to delimit at least one tracing region in the consecutive frame images according to a preset people flow statistics condition;
the tracking identifier assignment module 940 may specifically further be configured to:
in any tracing region, obtain the feature matching result and motion smoothness of any head detection target in each frame, associate targets between the current frame and its previous frame accordingly to obtain tracking targets, and assign a tracking identifier to each tracking target respectively;
the statistics module 950 may specifically further be configured to:
in any tracing region, determine a detection line perpendicular to the direction of the crowd's motion trajectory; and
count, in that tracing region, the number of tracking identifiers corresponding to all tracking targets that have passed the detection line to obtain the people flow statistics result (a per-region counting sketch follows).
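A short per-region counting sketch, reusing the count_crossings helper sketched earlier, is given below; the dictionary layout of regions and detection lines is an assumption made purely for illustration.

```python
def count_per_region(region_tracks, region_lines):
    """region_tracks: {region_name: tracks dict as used by count_crossings};
    region_lines: {region_name: detection-line row}. Returns one people flow
    count per delimited tracing region."""
    return {name: count_crossings(tracks, region_lines[name])
            for name, tracks in region_tracks.items()}
```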
It should be noted that the people flow statistical device of the embodiment of the present invention is a device that applies the above people flow statistical method; therefore all embodiments of the above people flow statistical method are applicable to the device and can achieve the same or similar beneficial effects.
Further, on the basis of including the first acquisition module 910, the convolution module 920, the head detection target determination module 930, the tracking identifier assignment module 940 and the statistics module 950, as shown in Figure 10, the people flow statistical device provided by the embodiment of the present invention may further include:
a second acquisition module 1010, configured to obtain preset training-set sample images and the center position of each head target in the preset training-set sample images;
a generation module 1020, configured to generate a ground-truth head confidence distribution map of the preset training-set sample images according to a preset distribution law and the center position of each head target in the preset training-set sample images;
an extraction module 1030, configured to input the preset training-set sample images into an initial fully convolutional neural network to obtain head confidence distribution maps of the preset training-set sample images, wherein the network parameters of the initial fully convolutional neural network are preset values;
a computing module 1040, configured to compute the average error between the head confidence distribution maps of the preset training-set sample images and the ground-truth head confidence distribution maps of the preset training-set sample images; and
a loop module 1050, configured to, when the average error is greater than a preset error threshold, update the network parameters according to the average error and a preset gradient operation strategy to obtain an updated fully convolutional neural network; compute the average error between the head confidence distribution maps of the preset training-set sample images obtained by the updated fully convolutional neural network and the ground-truth head confidence distribution maps of the preset training-set sample images; and, when the average error is less than or equal to the preset error threshold, determine the corresponding fully convolutional neural network as the trained fully convolutional neural network. A minimal sketch of these training steps follows.
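The following sketch ties the training modules above together under explicit assumptions: the preset distribution law of the generation module is taken to be a 2-D Gaussian placed at each head center, the preset gradient operation strategy is plain stochastic gradient descent, the model is assumed to output a single-channel confidence map, and the error threshold, learning rate and sigma are placeholder values not given in this passage.

```python
import numpy as np
import torch

def ground_truth_map(shape, head_centers, sigma=4.0):
    """Ground-truth head confidence distribution map: one Gaussian bump per
    annotated head center; the strongest response wins at each pixel."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    gt = np.zeros(shape, dtype=np.float32)
    for cy, cx in head_centers:
        bump = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
        gt = np.maximum(gt, bump)
    return gt

def train(model, images, gt_maps, error_threshold=1e-3, lr=1e-4, max_iters=10000):
    """Update the network parameters until the average map error falls to or
    below the preset error threshold (with a safety cap on iterations)."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(max_iters):
        pred = model(images)[:, 0]                         # (N, H, W) confidence maps
        error = ((pred - gt_maps) ** 2).flatten(1).sum(dim=1).mean()
        if error.item() <= error_threshold:                # training has converged
            break
        optimizer.zero_grad()
        error.backward()                                   # preset gradient strategy: SGD
        optimizer.step()
    return model
```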
With this embodiment, the captured consecutive video frames are input into the trained fully convolutional neural network to generate a head confidence distribution map for each frame; the head detection targets in each frame are determined from the head confidence distribution map; tracking identifiers are assigned to the associated tracking targets by associating targets between the current frame and its previous frame; and finally the number of all tracking identifiers is counted to obtain the people flow statistics result. Because the trained fully convolutional neural network can extract the essential features of the human head, the accuracy of the people flow statistics is improved; and because a single fully convolutional neural network can determine the head detection targets by generating the head confidence distribution map, the complexity of head target recognition is reduced, thereby improving the operating efficiency of the people flow statistics. Moreover, compared with methods based on feature-point tracking, this embodiment only needs to record tracking identifiers, so head targets can be tracked stably and the counting precision is improved; compared with methods based on human body segmentation and tracking, this embodiment only records tracking identifiers, and the recorded identifiers are not affected by occlusion, so the precision is higher. During the training of the fully convolutional neural network, preset training-set sample images are provided for head targets with different characteristics; through iterative training on the preset training-set sample images, the resulting fully convolutional neural network has strong generalization ability, a complicated cascade of classifiers is avoided, and the structure is simpler.
Optionally, the fully convolutional neural network further includes a convolutional layer, a downsampling layer and a deconvolution layer;
the extraction module 1030 may specifically be configured to:
input the preset training-set sample images into the initial fully convolutional neural network, and extract the features of the preset training-set sample images through a network structure in which convolutional layers and downsampling layers are arranged alternately;
upsample the features through the deconvolution layer to the same resolution as the preset training-set sample images to obtain the upsampled result; and
apply a 1 × 1 convolutional layer to the result to obtain a head confidence distribution map with the same resolution as the preset training-set sample images.
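A minimal PyTorch sketch of the structure just described, with convolutional and downsampling layers arranged alternately, a deconvolution (transposed convolution) layer restoring the input resolution, and a final 1 × 1 convolution producing a single-channel head confidence map, is shown below. Channel counts, kernel sizes and the two-stage downsampling are illustrative assumptions, not values from the patent.

```python
import torch
import torch.nn as nn

class HeadConfidenceFCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(                     # convolution / downsampling
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                               # 1/2 resolution
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                               # 1/4 resolution
        )
        # Deconvolution (transposed convolution) restores the input resolution.
        self.upsample = nn.ConvTranspose2d(64, 32, kernel_size=4, stride=4)
        self.head = nn.Conv2d(32, 1, kernel_size=1)        # final 1 x 1 convolution

    def forward(self, x):
        features = self.features(x)
        upsampled = self.upsample(features)
        return torch.sigmoid(self.head(upsampled))         # per-pixel head confidence

# Example: a 1x3x256x256 input yields a 1x1x256x256 confidence distribution map.
# confidence = HeadConfidenceFCN()(torch.randn(1, 3, 256, 256))
```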
It should be noted that the people flow statistical device of the embodiment of the present invention is a device that applies the above people flow statistical method; therefore all embodiments of the above people flow statistical method are applicable to the device and can achieve the same or similar beneficial effects.
Corresponding to the above embodiments, an embodiment of the present invention provides a people flow statistical system. As shown in Figure 11, the people flow statistical system may include:
an image capture device 1110, configured to capture consecutive frame images; and
a processor 1120, configured to obtain the consecutive frame images captured by the image capture device 1110; input the consecutive frame images into the trained fully convolutional neural network to generate a head confidence distribution map for each frame of the consecutive frame images; determine, for the head confidence distribution map of each frame, at least one head detection target in the frame using a preset target determination method; obtain the feature matching result and motion smoothness of any head detection target in each frame, associate targets between the current frame and its previous frame accordingly to obtain tracking targets, and assign tracking identifiers to the tracking targets; and count the number of all tracking identifiers to obtain the people flow statistics result. An end-to-end sketch of this per-frame processing is given below.
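Purely as an illustration, the per-frame loop of the processor might be organized as in the sketch below, which reuses the HeadConfidenceFCN, detect_heads and count_crossings sketches given earlier and takes a hypothetical associate_fn callable for the target-association step; none of these names come from the patent.

```python
import torch

def people_flow_count(frames, model, line_row, associate_fn):
    """frames: iterable of H x W x 3 float tensors from the image capture device;
    model: a network such as HeadConfidenceFCN; associate_fn(prev, curr, tracks)
    is a hypothetical callable implementing the feature-matching and motion-
    smoothness association and returning the updated {id: trajectory} dict."""
    tracks = {}                                   # tracking id -> list of head centers
    prev_centers = []
    for frame in frames:
        with torch.no_grad():
            conf = model(frame.permute(2, 0, 1).unsqueeze(0))[0, 0].numpy()
        centers = detect_heads(conf)              # preset target determination step
        tracks = associate_fn(prev_centers, centers, tracks)
        prev_centers = centers
    return count_crossings(tracks, line_row)      # the people flow statistics result
```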
With this embodiment, the captured consecutive video frames are input into the trained fully convolutional neural network to generate a head confidence distribution map for each frame; the head detection targets in each frame are determined from the head confidence distribution map; tracking identifiers are assigned to the associated tracking targets by associating targets between the current frame and its previous frame; and finally the number of all tracking identifiers is counted to obtain the people flow statistics result. Because the trained fully convolutional neural network can extract the essential features of the human head, the accuracy of the people flow statistics is improved; and because a single fully convolutional neural network can determine the head detection targets by generating the head confidence distribution map, the complexity of head target recognition is reduced, thereby improving the operating efficiency of the people flow statistics. Moreover, compared with methods based on feature-point tracking, this embodiment only needs to record tracking identifiers, so head targets can be tracked stably and the counting precision is improved; compared with methods based on human body segmentation and tracking, this embodiment only records tracking identifiers, and the recorded identifiers are not affected by occlusion, so the precision is higher.
Optionally, the processor 1120 may specifically further be configured to:
obtain preset training-set sample images and the center position of each head target in the preset training-set sample images;
generate a ground-truth head confidence distribution map of the preset training-set sample images according to a preset distribution law and the center position of each head target in the preset training-set sample images;
input the preset training-set sample images into an initial fully convolutional neural network to obtain head confidence distribution maps of the preset training-set sample images, wherein the network parameters of the initial fully convolutional neural network are preset values;
compute the average error between the head confidence distribution maps of the preset training-set sample images and the ground-truth head confidence distribution maps of the preset training-set sample images; and
when the average error is greater than a preset error threshold, update the network parameters according to the average error and a preset gradient operation strategy to obtain an updated fully convolutional neural network; compute the average error between the head confidence distribution maps of the preset training-set sample images obtained by the updated fully convolutional neural network and the ground-truth head confidence distribution maps of the preset training-set sample images; and, when the average error is less than or equal to the preset error threshold, determine the corresponding fully convolutional neural network as the trained fully convolutional neural network.
Optionally, the fully convolutional neural network includes a convolutional layer, a downsampling layer and a deconvolution layer;
the processor 1120 inputting the preset training-set sample images into the initial fully convolutional neural network to obtain the head confidence distribution maps of the preset training-set sample images may specifically be:
inputting the preset training-set sample images into the initial fully convolutional neural network, and extracting the features of the preset training-set sample images through a network structure in which convolutional layers and downsampling layers are arranged alternately;
upsampling the features through the deconvolution layer to the same resolution as the preset training-set sample images to obtain the upsampled result; and
applying a 1 × 1 convolutional layer to the result to obtain a head confidence distribution map with the same resolution as the preset training-set sample images.
Optionally, the processor 1120 determining, for the head confidence distribution map of each frame, at least one head detection target of the frame using a preset target determination method may specifically be:
determining, for the head confidence distribution map of each frame, the position of the center point of at least one detection target using a non-maximum suppression method;
obtaining the confidence of all pixels in the neighborhood of the center point of each detection target; and
determining a detection target whose neighborhood pixels all have a confidence greater than the preset confidence threshold as a head detection target of the frame.
Optionally, the processor 1120 obtaining the feature matching result and motion smoothness of any head detection target in each frame, associating targets between the current frame and its previous frame accordingly to obtain tracking targets, and assigning tracking identifiers to the tracking targets may specifically be:
performing feature matching and smoothness analysis on any head detection target of each frame in the consecutive video frames to obtain the feature matching result and motion smoothness of the head detection target;
associating targets between the current frame and its previous frame according to the feature matching result and the motion smoothness, and determining a head detection target as a tracking target when its feature matching degree is higher than the preset matching threshold and its motion smoothness is higher than the preset smoothness threshold; and
assigning a tracking identifier to the tracking target.
Optionally, the processor 1120 counting the number of all tracking identifiers to obtain the people flow statistics result includes:
determining the direction of the crowd's motion trajectory according to the consecutive frame images;
determining, in the consecutive frame images, a detection line perpendicular to the direction of the crowd's motion trajectory;
recording the tracking identifier corresponding to any tracking target when that tracking target passes the detection line; and
counting the number of tracking identifiers corresponding to all tracking targets that have passed the detection line to obtain the people flow statistics result.
Optionally, the processor 1120 may specifically further be configured to:
delimit at least one tracing region in the consecutive frame images according to a preset people flow statistics condition;
the processor 1120 obtaining the feature matching result and motion smoothness of any head detection target in each frame, associating targets between the current frame and its previous frame accordingly to obtain tracking targets, and assigning tracking identifiers to the tracking targets may specifically be:
in any tracing region, obtaining the feature matching result and motion smoothness of any head detection target in each frame, associating targets between the current frame and its previous frame accordingly to obtain tracking targets, and assigning a tracking identifier to each tracking target respectively;
the processor 1120 counting the number of all tracking identifiers to obtain the people flow statistics result may specifically be:
in any tracing region, determining a detection line perpendicular to the direction of the crowd's motion trajectory; and
counting, in that tracing region, the number of tracking identifiers corresponding to all tracking targets that have passed the detection line to obtain the people flow statistics result.
It should be noted that the people flow statistical system of the embodiment of the present invention is a system that applies the above people flow statistical method; therefore all embodiments of the above people flow statistical method are applicable to the system and can achieve the same or similar beneficial effects.
It should be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article or device that includes the element.
The embodiments in this specification are described in a related manner; the same or similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments.
The above are merely preferred embodiments of the present invention and are not intended to limit the protection scope of the present invention. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (15)

1. A people flow statistical method, characterized in that the method comprises:
obtaining consecutive frame images captured by an image capture device;
inputting the consecutive frame images into a trained fully convolutional neural network to generate a head confidence distribution map of each frame in the consecutive frame images;
determining, for the head confidence distribution map of each frame, at least one head detection target in the frame using a preset target determination method;
obtaining a feature matching result and a motion smoothness of any head detection target in each frame, associating targets between a current frame and a previous frame of the current frame accordingly to obtain a tracking target, and assigning a tracking identifier to the tracking target; and
counting the number of all tracking identifiers to obtain a people flow statistics result.
2. The people flow statistical method according to claim 1, characterized in that before inputting the consecutive frame images into the trained fully convolutional neural network to generate the head confidence distribution map of each frame in the consecutive frame images, the method further comprises:
obtaining preset training-set sample images and a center position of each head target in the preset training-set sample images;
generating a ground-truth head confidence distribution map of the preset training-set sample images according to a preset distribution law and the center position of each head target in the preset training-set sample images;
inputting the preset training-set sample images into an initial fully convolutional neural network to obtain head confidence distribution maps of the preset training-set sample images, wherein network parameters of the initial fully convolutional neural network are preset values;
computing an average error between the head confidence distribution maps of the preset training-set sample images and the ground-truth head confidence distribution maps of the preset training-set sample images; and
when the average error is greater than a preset error threshold, updating the network parameters according to the average error and a preset gradient operation strategy to obtain an updated fully convolutional neural network; computing the average error between the head confidence distribution maps of the preset training-set sample images obtained by the updated fully convolutional neural network and the ground-truth head confidence distribution maps of the preset training-set sample images; and, when the average error is less than or equal to the preset error threshold, determining the corresponding fully convolutional neural network as the trained fully convolutional neural network.
3. The people flow statistical method according to claim 2, characterized in that the fully convolutional neural network comprises: a convolutional layer, a downsampling layer and a deconvolution layer;
inputting the preset training-set sample images into the initial fully convolutional neural network to obtain the head confidence distribution maps of the preset training-set sample images comprises:
inputting the preset training-set sample images into the initial fully convolutional neural network, and extracting features of the preset training-set sample images through a network structure in which convolutional layers and downsampling layers are arranged alternately;
upsampling the features through the deconvolution layer to the same resolution as the preset training-set sample images to obtain an upsampled result; and
applying a 1 × 1 convolutional layer to the result to obtain a head confidence distribution map with the same resolution as the preset training-set sample images.
4. The people flow statistical method according to claim 1, characterized in that determining, for the head confidence distribution map of each frame, at least one head detection target of the frame using the preset target determination method comprises:
determining, for the head confidence distribution map of each frame, a position of a center point of at least one detection target using a non-maximum suppression method;
obtaining a confidence of all pixels in a neighborhood of the center point of each detection target; and
determining a detection target whose neighborhood pixels all have a confidence greater than a preset confidence threshold as a head detection target of the frame.
5. The people flow statistical method according to claim 1, characterized in that obtaining the feature matching result and the motion smoothness of any head detection target in each frame, associating targets between the current frame and the previous frame of the current frame accordingly to obtain the tracking target, and assigning the tracking identifier to the tracking target comprises:
performing feature matching and smoothness analysis on any head detection target of each frame in the consecutive frame images to obtain the feature matching result and the motion smoothness of the head detection target;
associating targets between the current frame and the previous frame of the current frame according to the feature matching result and the motion smoothness, and determining the head detection target as a tracking target when a feature matching degree is higher than a preset matching threshold and the motion smoothness is higher than a preset smoothness threshold; and
assigning a tracking identifier to the tracking target.
6. The people flow statistical method according to claim 1, characterized in that counting the number of all tracking identifiers to obtain the people flow statistics result comprises:
determining a direction of the crowd's motion trajectory according to the consecutive frame images;
determining, in the consecutive frame images, a detection line perpendicular to the direction of the crowd's motion trajectory;
recording the tracking identifier corresponding to any tracking target when the tracking target passes the detection line; and
counting the number of tracking identifiers corresponding to all tracking targets that have passed the detection line to obtain the people flow statistics result.
7. The people flow statistical method according to claim 6, characterized in that before obtaining the feature matching result and the motion smoothness of any head detection target in each frame, associating targets between the current frame and the previous frame of the current frame accordingly to obtain the tracking target, and assigning the tracking identifier to the tracking target, the method further comprises:
delimiting at least one tracing region in the consecutive frame images according to a preset people flow statistics condition;
obtaining the feature matching result and the motion smoothness of any head detection target in each frame, associating targets between the current frame and the previous frame of the current frame accordingly to obtain the tracking target, and assigning the tracking identifier to the tracking target comprises:
in any tracing region, obtaining the feature matching result and the motion smoothness of any head detection target in each frame, associating targets between the current frame and the previous frame of the current frame accordingly to obtain tracking targets, and assigning a tracking identifier to each tracking target respectively;
counting the number of all tracking identifiers to obtain the people flow statistics result comprises:
in any tracing region, determining a detection line perpendicular to the direction of the crowd's motion trajectory; and
counting, in the tracing region, the number of tracking identifiers corresponding to all tracking targets that have passed the detection line to obtain the people flow statistics result.
8. A people flow statistical device, characterized in that the device comprises:
a first acquisition module, configured to obtain consecutive frame images captured by an image capture device;
a convolution module, configured to input the consecutive frame images into a trained fully convolutional neural network to generate a head confidence distribution map of each frame in the consecutive frame images;
a head detection target determination module, configured to determine, for the head confidence distribution map of each frame, at least one head detection target in the frame using a preset target determination method;
a tracking identifier assignment module, configured to obtain a feature matching result and a motion smoothness of any head detection target in each frame, associate targets between a current frame and a previous frame of the current frame accordingly to obtain a tracking target, and assign a tracking identifier to the tracking target; and
a statistics module, configured to count the number of all tracking identifiers to obtain a people flow statistics result.
9. The people flow statistical device according to claim 8, characterized in that the device further comprises:
a second acquisition module, configured to obtain preset training-set sample images and a center position of each head target in the preset training-set sample images;
a generation module, configured to generate a ground-truth head confidence distribution map of the preset training-set sample images according to a preset distribution law and the center position of each head target in the preset training-set sample images;
an extraction module, configured to input the preset training-set sample images into an initial fully convolutional neural network to obtain head confidence distribution maps of the preset training-set sample images, wherein network parameters of the initial fully convolutional neural network are preset values;
a computing module, configured to compute an average error between the head confidence distribution maps of the preset training-set sample images and the ground-truth head confidence distribution maps of the preset training-set sample images; and
a loop module, configured to, when the average error is greater than a preset error threshold, update the network parameters according to the average error and a preset gradient operation strategy to obtain an updated fully convolutional neural network; compute the average error between the head confidence distribution maps of the preset training-set sample images obtained by the updated fully convolutional neural network and the ground-truth head confidence distribution maps of the preset training-set sample images; and, when the average error is less than or equal to the preset error threshold, determine the corresponding fully convolutional neural network as the trained fully convolutional neural network.
10. The people flow statistical device according to claim 9, characterized in that the fully convolutional neural network further comprises: a convolutional layer, a downsampling layer and a deconvolution layer;
the extraction module is specifically configured to:
input the preset training-set sample images into the initial fully convolutional neural network, and extract features of the preset training-set sample images through a network structure in which convolutional layers and downsampling layers are arranged alternately;
upsample the features through the deconvolution layer to the same resolution as the preset training-set sample images to obtain an upsampled result; and
apply a 1 × 1 convolutional layer to the result to obtain a head confidence distribution map with the same resolution as the preset training-set sample images.
11. The people flow statistical device according to claim 8, characterized in that the head detection target determination module is specifically configured to:
determine, for the head confidence distribution map of each frame, a position of a center point of at least one detection target using a non-maximum suppression method;
obtain a confidence of all pixels in a neighborhood of the center point of each detection target; and
determine a detection target whose neighborhood pixels all have a confidence greater than a preset confidence threshold as a head detection target of the frame.
12. The people flow statistical device according to claim 8, characterized in that the tracking identifier assignment module is specifically configured to:
perform feature matching and smoothness analysis on any head detection target of each frame in the consecutive frame images to obtain the feature matching result and the motion smoothness of the head detection target;
associate targets between the current frame and the previous frame of the current frame according to the feature matching result and the motion smoothness, and determine the head detection target as a tracking target when a feature matching degree is higher than a preset matching threshold and the motion smoothness is higher than a preset smoothness threshold; and
assign a tracking identifier to the tracking target.
13. The people flow statistical device according to claim 8, characterized in that the statistics module is specifically configured to:
determine a direction of the crowd's motion trajectory according to the consecutive frame images;
determine, in the consecutive frame images, a detection line perpendicular to the direction of the crowd's motion trajectory;
record the tracking identifier corresponding to any tracking target when the tracking target passes the detection line; and
count the number of tracking identifiers corresponding to all tracking targets that have passed the detection line to obtain the people flow statistics result.
14. The people flow statistical device according to claim 13, characterized in that the device further comprises:
a delimiting module, configured to delimit at least one tracing region in the consecutive frame images according to a preset people flow statistics condition;
the tracking identifier assignment module is specifically further configured to:
in any tracing region, obtain the feature matching result and the motion smoothness of any head detection target in each frame, associate targets between the current frame and the previous frame of the current frame accordingly to obtain tracking targets, and assign a tracking identifier to each tracking target respectively;
the statistics module is specifically further configured to:
in any tracing region, determine a detection line perpendicular to the direction of the crowd's motion trajectory; and
count, in the tracing region, the number of tracking identifiers corresponding to all tracking targets that have passed the detection line to obtain the people flow statistics result.
15. A people flow statistical system, characterized in that the system comprises:
an image capture device, configured to capture consecutive frame images; and
a processor, configured to obtain the consecutive frame images captured by the image capture device; input the consecutive frame images into a trained fully convolutional neural network to generate a head confidence distribution map of each frame in the consecutive frame images; determine, for the head confidence distribution map of each frame, at least one head detection target in the frame using a preset target determination method; obtain a feature matching result and a motion smoothness of any head detection target in each frame, associate targets between a current frame and a previous frame of the current frame accordingly to obtain a tracking target, and assign a tracking identifier to the tracking target; and count the number of all tracking identifiers to obtain a people flow statistics result.
CN201710399814.0A 2017-05-31 2017-05-31 People flow statistical method, equipment and system Active CN108986064B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710399814.0A CN108986064B (en) 2017-05-31 2017-05-31 People flow statistical method, equipment and system


Publications (2)

Publication Number Publication Date
CN108986064A true CN108986064A (en) 2018-12-11
CN108986064B CN108986064B (en) 2022-05-06

Family

ID=64502214

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710399814.0A Active CN108986064B (en) 2017-05-31 2017-05-31 People flow statistical method, equipment and system

Country Status (1)

Country Link
CN (1) CN108986064B (en)



Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101877058A (en) * 2010-02-10 2010-11-03 杭州海康威视软件有限公司 People flow rate statistical method and system
JP2011188444A (en) * 2010-03-11 2011-09-22 Kddi Corp Head tracking device and control program
CN102799935A (en) * 2012-06-21 2012-11-28 武汉烽火众智数字技术有限责任公司 Human flow counting method based on video analysis technology
US20160335490A1 (en) * 2015-05-12 2016-11-17 Ricoh Company, Ltd. Method and apparatus for detecting persons, and non-transitory computer-readable recording medium
CN105184258A (en) * 2015-09-09 2015-12-23 苏州科达科技股份有限公司 Target tracking method and system and staff behavior analyzing method and system
CN105447458A (en) * 2015-11-17 2016-03-30 深圳市商汤科技有限公司 Large scale crowd video analysis system and method thereof
CN105512720A (en) * 2015-12-15 2016-04-20 广州通达汽车电气股份有限公司 Public transport vehicle passenger flow statistical method and system
CN105512640A (en) * 2015-12-30 2016-04-20 重庆邮电大学 Method for acquiring people flow on the basis of video sequence
CN106022237A (en) * 2016-05-13 2016-10-12 电子科技大学 Pedestrian detection method based on end-to-end convolutional neural network
CN106127812A (en) * 2016-06-28 2016-11-16 中山大学 A kind of passenger flow statistical method of non-gate area, passenger station based on video monitoring
CN106372570A (en) * 2016-08-19 2017-02-01 云赛智联股份有限公司 Visitor flowrate statistic method
CN106326937A (en) * 2016-08-31 2017-01-11 郑州金惠计算机系统工程有限公司 Convolutional neural network based crowd density distribution estimation method
CN106709432A (en) * 2016-12-06 2017-05-24 成都通甲优博科技有限责任公司 Binocular stereoscopic vision based head detecting and counting method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
HAO LIU 等: "Cross-Scene Crowd Counting via FCN and Gaussian Model", 《2016 INTERNATIONAL CONFERENCE ON VIRTUAL REALITY AND VISUALIZATION》 *
JIANYONG WANG 等: "Counting Crowd with Fully Convolutional Networks", 《2017 2ND INTERNATIONAL CONFERENCE ON MULTIMEDIA AND IMAGE PROCESSING》 *
张坤石 主编: "《潜艇光电装备技术》", 31 December 2012, 哈尔滨:哈尔滨工程大学出版社 *
张雅俊 等: "基于卷积神经网络的人流量统计", 《重庆邮电大学学报(自然科学版)》 *

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109740721B (en) * 2018-12-19 2021-06-29 中国农业大学 Wheat ear counting method and device
CN109740721A (en) * 2018-12-19 2019-05-10 中国农业大学 Wheat head method of counting and device
CN111489284A (en) * 2019-01-29 2020-08-04 北京搜狗科技发展有限公司 Image processing method and device for image processing
CN111489284B (en) * 2019-01-29 2024-02-06 北京搜狗科技发展有限公司 Image processing method and device for image processing
CN111553180A (en) * 2019-02-12 2020-08-18 阿里巴巴集团控股有限公司 Clothing counting method, clothing counting device and electronic equipment
WO2020164401A1 (en) * 2019-02-12 2020-08-20 阿里巴巴集团控股有限公司 Method for counting items of clothing, counting method and apparatus, and electronic device
CN111553180B (en) * 2019-02-12 2023-08-29 阿里巴巴集团控股有限公司 Garment counting method, garment counting method and device and electronic equipment
CN109903561A (en) * 2019-03-14 2019-06-18 智慧足迹数据科技有限公司 Flow of the people calculation method, device and electronic equipment between section
CN111815671A (en) * 2019-04-10 2020-10-23 曜科智能科技(上海)有限公司 Target quantity statistical method, system, computer device and storage medium
CN111815671B (en) * 2019-04-10 2023-09-15 曜科智能科技(上海)有限公司 Target quantity counting method, system, computer device and storage medium
CN112149457A (en) * 2019-06-27 2020-12-29 西安光启未来技术研究院 People flow statistical method, device, server and computer readable storage medium
CN110490099B (en) * 2019-07-31 2022-10-21 武汉大学 Subway public place pedestrian flow analysis method based on machine vision
CN110490099A (en) * 2019-07-31 2019-11-22 武汉大学 A kind of subway common location stream of people's analysis method based on machine vision
CN110795998A (en) * 2019-09-19 2020-02-14 深圳云天励飞技术有限公司 People flow detection method and device, electronic equipment and readable storage medium
CN110738137A (en) * 2019-09-26 2020-01-31 中移物联网有限公司 people flow rate statistical method and device
CN110765940A (en) * 2019-10-22 2020-02-07 杭州姿感科技有限公司 Target object statistical method and device
CN110992305A (en) * 2019-10-31 2020-04-10 中山大学 Package counting method and system based on deep learning and multi-target tracking technology
CN111160410A (en) * 2019-12-11 2020-05-15 北京京东乾石科技有限公司 Object detection method and device
CN111160410B (en) * 2019-12-11 2023-08-08 北京京东乾石科技有限公司 Object detection method and device
CN111145214A (en) * 2019-12-17 2020-05-12 深圳云天励飞技术有限公司 Target tracking method, device, terminal equipment and medium
CN113051975A (en) * 2019-12-27 2021-06-29 深圳云天励飞技术有限公司 People flow statistical method and related product
CN113051975B (en) * 2019-12-27 2024-04-02 深圳云天励飞技术有限公司 People flow statistics method and related products
CN111160243A (en) * 2019-12-27 2020-05-15 深圳云天励飞技术有限公司 Passenger flow volume statistical method and related product
CN111291646A (en) * 2020-01-20 2020-06-16 北京市商汤科技开发有限公司 People flow statistical method, device, equipment and storage medium
CN111680569B (en) * 2020-05-13 2024-04-19 北京中广上洋科技股份有限公司 Attendance rate detection method, device, equipment and storage medium based on image analysis
CN111680569A (en) * 2020-05-13 2020-09-18 北京中广上洋科技股份有限公司 Attendance rate detection method, device, equipment and storage medium based on image analysis
CN112232210A (en) * 2020-10-16 2021-01-15 京东方科技集团股份有限公司 Personnel flow analysis method and system, electronic device and readable storage medium
WO2022078134A1 (en) * 2020-10-16 2022-04-21 京东方科技集团股份有限公司 People traffic analysis method and system, electronic device, and readable storage medium
CN112614154A (en) * 2020-12-08 2021-04-06 深圳市优必选科技股份有限公司 Target tracking track obtaining method and device and computer equipment
CN112614154B (en) * 2020-12-08 2024-01-19 深圳市优必选科技股份有限公司 Target tracking track acquisition method and device and computer equipment
CN112561971A (en) * 2020-12-16 2021-03-26 珠海格力电器股份有限公司 People flow statistical method, device, equipment and storage medium
CN113034544A (en) * 2021-03-19 2021-06-25 奥比中光科技集团股份有限公司 People flow analysis method and device based on depth camera
WO2022193516A1 (en) * 2021-03-19 2022-09-22 奥比中光科技集团股份有限公司 Depth camera-based pedestrian flow analysis method and apparatus
CN113592785A (en) * 2021-07-09 2021-11-02 浙江大华技术股份有限公司 Target flow statistical method and device
CN113326830A (en) * 2021-08-04 2021-08-31 北京文安智能技术股份有限公司 Passenger flow statistical model training method and passenger flow statistical method based on overlook images
CN113762169A (en) * 2021-09-09 2021-12-07 北京市商汤科技开发有限公司 People flow statistical method and device, electronic equipment and storage medium
CN115330756B (en) * 2022-10-11 2023-02-28 天津恒宇医疗科技有限公司 Light and shadow feature-based guide wire identification method and system in OCT image
CN115330756A (en) * 2022-10-11 2022-11-11 天津恒宇医疗科技有限公司 Light and shadow feature-based guide wire identification method and system in OCT image
CN116311084A (en) * 2023-05-22 2023-06-23 青岛海信网络科技股份有限公司 Crowd gathering detection method and video monitoring equipment
CN116311084B (en) * 2023-05-22 2024-02-23 青岛海信网络科技股份有限公司 Crowd gathering detection method and video monitoring equipment
CN116895047A (en) * 2023-07-24 2023-10-17 北京全景优图科技有限公司 Rapid people flow monitoring method and system
CN116895047B (en) * 2023-07-24 2024-01-30 北京全景优图科技有限公司 Rapid people flow monitoring method and system

Also Published As

Publication number Publication date
CN108986064B (en) 2022-05-06

Similar Documents

Publication Publication Date Title
CN108986064A (en) A kind of people flow rate statistical method, equipment and system
JP6549797B2 (en) Method and system for identifying head of passerby
CN104166841B (en) The quick detection recognition methods of pedestrian or vehicle is specified in a kind of video surveillance network
CN102542289B (en) Pedestrian volume statistical method based on plurality of Gaussian counting models
CN109117797A (en) A kind of face snapshot recognition method based on face quality evaluation
CN105069472B (en) A kind of vehicle checking method adaptive based on convolutional neural networks
US20130243343A1 (en) Method and device for people group detection
CN100440246C (en) Positioning method for human face characteristic point
CN108256459A (en) Library algorithm is built in detector gate recognition of face and face based on multiple-camera fusion automatically
CN107644418B (en) Optic disk detection method and system based on convolutional neural networks
CN109583324A (en) A kind of pointer meters reading automatic identifying method based on the more box detectors of single-point
CN106133750A (en) For determining the 3D rendering analyzer of direction of visual lines
CN107229930A (en) A kind of pointer instrument numerical value intelligent identification Method and device
CN102521581B (en) Parallel face recognition method with biological characteristics and local image characteristics
CN103577875B (en) A kind of area of computer aided CAD demographic method based on FAST
JP2008538623A (en) Method and system for detecting and classifying events during motor activity
CN105208325B (en) The land resources monitoring and early warning method captured and compare analysis is pinpointed based on image
CN102521565A (en) Garment identification method and system for low-resolution video
CN107730515A (en) Panoramic picture conspicuousness detection method with eye movement model is increased based on region
CN106611160A (en) CNN (Convolutional Neural Network) based image hair identification method and device
CN106709438A (en) Method for collecting statistics of number of people based on video conference
CN109697441A (en) A kind of object detection method, device and computer equipment
CN101726498B (en) Intelligent detector and method of copper strip surface quality on basis of vision bionics
CN101456501A (en) Method and apparatus for controlling elevator button
CN112464843A (en) Accurate passenger flow statistical system, method and device based on human face human shape

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant