CN105590094A - Method and device for determining number of human bodies - Google Patents
- Publication number
- CN105590094A CN201510920993.9A CN201510920993A
- Authority
- CN
- China
- Prior art keywords
- convolutional neural
- image
- human body
- neural networks
- trained
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a method and a device for determining the number of human bodies. The method comprises: scaling an image collected by an image collection module according to the input dimension of a trained convolutional neural network, and controlling the resolution of the scaled image to be equal to the input dimension of the convolutional neural network; inputting the scaled image into the trained convolutional neural network, wherein the weight parameters of each convolutional layer and each fully connected layer of the convolutional neural network represent trained human-contour features; extracting the human-contour features in the scaled image through the trained convolutional neural network; and performing regression calculation on the obtained human-contour features in the trained convolutional neural network, to obtain the number of human bodies in the scaled image. According to the technical solution of the invention, the number of human bodies in the image can be recognized accurately based on the human-contour weight parameters learned by the convolutional neural network, so that precise early warning is provided for the management of people flow.
Description
Technical field
The disclosure relates to the field of image recognition technology, and in particular to a method and device for determining the number of human bodies.
Background technology
In large public venues (for example, subway stations and shopping malls), monitoring the flow of people can provide managers with early-warning information for evacuating personnel, so as to guarantee the safety of the venue. Related techniques count the flow of people based on face detection; when there are many faces in an image, the speed of face detection decreases, and because people may occlude one another in public venues, the detection rate of the people flow is not high.
Summary of the invention
To overcome the problems existing in the related art, embodiments of the present disclosure provide a method and device for determining the number of human bodies, so as to improve the accuracy of crowd-flow estimation.
According to a first aspect of embodiments of the present disclosure, a method for determining the number of human bodies is provided, comprising:
scaling an image collected by an image collection module according to the input dimension of a trained convolutional neural network, and controlling the resolution of the scaled image to be identical to the input dimension of the convolutional neural network;
inputting the scaled image into the trained convolutional neural network, wherein the weight parameters in each convolutional layer and each fully connected layer of the trained convolutional neural network represent trained human-contour features;
extracting human-contour features in the scaled image through the trained convolutional neural network;
performing regression calculation on the obtained human-contour features through the trained convolutional neural network, to obtain the number of human bodies in the scaled image.
In one embodiment, the method may further comprise:
inputting a set number of image samples containing human bodies, together with the number of human bodies contained in each image sample, into an untrained convolutional neural network, to train the convolutional layers and fully connected layers of the untrained convolutional neural network;
stopping training the convolutional neural network when it is determined that the weight parameters of the connections between nodes in the convolutional layers and fully connected layers meet a preset condition, thereby obtaining the trained convolutional neural network.
In one embodiment, the method may further comprise:
determining whether the error between the number of human bodies determined in each image sample and the actual number of human bodies in that image sample is less than a preset threshold;
when the error between the number of human bodies in each image sample and the actual number of human bodies in that image sample is less than the preset threshold, determining that the weight parameters of the connections between nodes in the convolutional layers and fully connected layers meet the preset condition.
In one embodiment, the method may further comprise:
determining the actual number of human bodies in the image;
calculating the error between the actual number of human bodies and the number of human bodies on the image obtained by the trained convolutional neural network;
determining, according to the error, whether the weight parameters of the trained convolutional neural network need to be updated.
In one embodiment, the method may further comprise:
when the weight parameters of the trained convolutional neural network need to be updated, obtaining the updated weight parameters of the convolutional neural network from a server providing the trained convolutional neural network.
In one embodiment, the method may further comprise:
determining a collection period at which the image collection module collects images within a set time period;
controlling the image collection module to obtain the images from a camera device at the collection period;
counting, within the set time period, the total number of human bodies on the images obtained through the trained convolutional neural network;
determining the crowd density per unit time according to the total number of human bodies and the time span of the set time period.
According to a second aspect of embodiments of the present disclosure, a device for determining the number of human bodies is provided, comprising:
a preprocessing module configured to scale an image collected by an image collection module according to the input dimension of a trained convolutional neural network, and to control the resolution of the scaled image to be identical to the input dimension of the convolutional neural network;
an input module configured to input the image scaled by the preprocessing module into the trained convolutional neural network, wherein the weight parameters in each convolutional layer and each fully connected layer of the trained convolutional neural network represent trained human-contour features;
a feature extraction module configured to extract, through the trained convolutional neural network, human-contour features in the scaled image input by the input module;
a regression calculation module configured to perform regression calculation, through the trained convolutional neural network, on the human-contour features obtained by the feature extraction module, to obtain the number of human bodies in the scaled image.
In one embodiment, the device may further comprise:
a first training module configured to input a set number of image samples containing human bodies, together with the number of human bodies contained in each image sample, into an untrained convolutional neural network, to train the convolutional layers and fully connected layers of the untrained convolutional neural network;
a first control module configured to stop training the convolutional neural network when it is determined that the weight parameters of the connections between nodes in the convolutional layers and fully connected layers trained by the first training module meet a preset condition, thereby obtaining the trained convolutional neural network.
In one embodiment, the device may further comprise:
a first determination module configured to determine whether the error between the number of human bodies in each image sample and the actual number of human bodies in that image sample is less than a preset threshold;
a second determination module configured to determine that the weight parameters of the connections between nodes in the convolutional layers and fully connected layers meet the preset condition when the first determination module determines that the error between the number of human bodies in each image sample and the actual number of human bodies in that image sample is less than the preset threshold.
In one embodiment, the device may further comprise:
a third determination module configured to determine the actual number of human bodies in the image;
an error calculation module configured to calculate the error between the actual number of human bodies determined by the third determination module and the number of human bodies on the image calculated by the regression calculation module through the trained convolutional neural network;
a fourth determination module configured to determine, according to the error calculated by the error calculation module, whether the weight parameters of the trained convolutional neural network need to be updated.
In one embodiment, the device may further comprise:
an acquisition module configured to, when the fourth determination module determines that the weight parameters of the trained convolutional neural network need to be updated, obtain the updated weight parameters of the convolutional neural network from a server providing the trained convolutional neural network.
In one embodiment, the device may further comprise:
a fifth determination module configured to determine a collection period at which the image collection module collects images within a set time period;
a second control module configured to control the image collection module to obtain the images from a camera device at the collection period determined by the fifth determination module;
a statistics module configured to count, within the set time period, the total number of human bodies, obtained through the trained convolutional neural network, on the images collected by the image collection module under the control of the second control module;
a sixth determination module configured to determine the crowd density per unit time according to the total number of human bodies counted by the statistics module and the time span of the set time period.
According to a third aspect of embodiments of the present disclosure, a device for determining the number of human bodies is provided, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
scale an image collected by an image collection module according to the input dimension of a trained convolutional neural network, and control the resolution of the scaled image to be identical to the input dimension of the convolutional neural network;
input the scaled image into the convolutional neural network, wherein the weight parameters in each convolutional layer and each fully connected layer of the convolutional neural network represent trained human-contour features;
extract human-contour features in the scaled image through the trained convolutional neural network;
perform regression calculation on the obtained human-contour features through the trained convolutional neural network, to obtain the number of human bodies in the scaled image.
The technical solutions provided by embodiments of the present disclosure may have the following beneficial effects. The scaled image is input into the trained convolutional neural network, and the number of human bodies on the image is obtained through the trained convolutional neural network. Because the weight parameters in each convolutional layer and fully connected layer of the trained convolutional neural network represent trained human-contour features, even if people occlude one another in the image collected by the image collection module, the trained convolutional neural network can still learn the human-contour features in the scaled image, and after regression calculation on those features the number of human bodies in the scaled image can be identified accurately, thereby providing accurate early warning for the management of people flow.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings herein are incorporated in and constitute a part of this description, illustrate embodiments consistent with the present invention, and together with the description serve to explain the principles of the present invention.
Fig. 1A is a flowchart of a method for determining the number of human bodies according to an exemplary embodiment.
Fig. 1B is a scene diagram of a method for determining the number of human bodies according to an exemplary embodiment.
Fig. 2A is a flowchart of training a convolutional neural network according to exemplary embodiment one.
Fig. 2B is a schematic diagram of a convolutional neural network according to exemplary embodiment one.
Fig. 2C is a schematic diagram of training the convolutional neural network according to exemplary embodiment one.
Fig. 3 is a flowchart of a method for determining the number of human bodies according to exemplary embodiment two.
Fig. 4 is a flowchart of a method for determining the number of human bodies according to exemplary embodiment three.
Fig. 5 is a block diagram of a device for determining the number of human bodies according to an exemplary embodiment.
Fig. 6 is a block diagram of another device for determining the number of human bodies according to an exemplary embodiment.
Fig. 7 is a block diagram of yet another device for determining the number of human bodies according to an exemplary embodiment.
Fig. 8 is a block diagram of a device applicable to determining the number of human bodies according to an exemplary embodiment.
Detailed description of the invention
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, unless otherwise indicated, the same numbers in different drawings represent the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention. On the contrary, they are merely examples of apparatuses and methods consistent with some aspects of the present invention, as detailed in the appended claims.
Fig. 1A is a flowchart of a method for determining the number of human bodies according to an exemplary embodiment, and Fig. 1B is a scene diagram of the method according to an exemplary embodiment. The method for determining the number of human bodies can be applied to a camera device or to a monitoring device connected with a camera device. As shown in Fig. 1A, the method comprises the following steps S101-S104:
In step S101, the image collected by the image collection module is scaled according to the input dimension of the trained convolutional neural network, so that the resolution of the scaled image is identical to the input dimension of the convolutional neural network.
In one embodiment, because the resolutions of the video images collected by camera devices differ, while the input dimension of the convolutional neural network is fixed after training, the video image captured by the camera device needs to be normalized by the image collection module, controlling the resolution of the scaled image to be identical to the input dimension of the trained convolutional neural network, so that the processed image can be input into the trained convolutional neural network. For example, for a convolutional neural network whose input dimension is 227*227, if the video image of the camera device is 800*600, the resolution of the image collected by the image collection module is also 800*600, and this 800*600 image needs to be normalized to an image of 227*227.
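As an illustration of this normalization step, below is a minimal sketch of the scaling, assuming OpenCV is available; the function name and the [0, 1] pixel normalization are illustrative assumptions, not part of the disclosed device.

```python
import cv2
import numpy as np

def preprocess(frame: np.ndarray, input_dim: int = 227) -> np.ndarray:
    """Scale a captured frame (e.g. 800x600) to the network's fixed
    input dimension (e.g. 227x227) and normalize pixel values."""
    scaled = cv2.resize(frame, (input_dim, input_dim),
                        interpolation=cv2.INTER_LINEAR)
    # Map pixel values to [0, 1]; mean subtraction could also be applied here.
    return scaled.astype(np.float32) / 255.0
```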
In step S102, the scaled image is input into the trained convolutional neural network, wherein the weight parameters in each convolutional layer and fully connected layer of the trained convolutional neural network represent trained human-contour features.
In one embodiment, the training process of the convolutional neural network is described in the embodiment shown in Fig. 2A below and is not detailed here.
In step S103, human-contour features in the scaled image are extracted through the trained convolutional neural network.
In one embodiment, the convolutional neural network may be provided with different numbers of convolutional layers and fully connected layers according to actual needs; feature extraction is performed on the scaled image through the convolutional layers and fully connected layers, thereby obtaining the human-contour features in the scaled image.
In step S104, regression calculation is performed on the obtained human-contour features through the trained convolutional neural network, to obtain the number of human bodies in the scaled image.
In one embodiment, regression calculation may be performed on the human-contour features output by the fully connected layer of the trained convolutional neural network, to obtain a one-dimensional value, and this value can represent the calculated number of human bodies in the scaled image.
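The regression calculation can be viewed as a single linear mapping from the fully connected features to one scalar. The following is a minimal sketch assuming PyTorch; the 4096-dimensional feature size is an assumption for illustration, not a value disclosed in the patent.

```python
import torch
import torch.nn as nn

# Hypothetical feature vector from the last fully connected layer.
features = torch.randn(1, 4096)        # batch of 1, 4096-dim features
regression_head = nn.Linear(4096, 1)   # maps the features to one scalar
count = regression_head(features)      # one-dimensional value: the estimated count
print(count.item())
```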
As an exemplary scenario, as shown in Fig. 1B, when the camera device 10 is installed at a public venue with a large people flow, such as a subway entrance or a market entrance, the camera device 10 can continuously collect video images of the scene, and the device 11 identifies the number of human bodies by performing the method for determining the number of human bodies provided by the present disclosure, so that the number of people in the scene can be determined. Specifically, the image collection module 12 can collect, at a preset sampling period, the video images collected by the camera device 10; the preprocessing module 13 scales the collected image according to the input dimension of the trained convolutional neural network, so that the resolution of the scaled image is identical to the input dimension of the convolutional neural network; the preprocessing module 13 inputs the scaled image into the trained convolutional neural network 14; the human-contour features on the scaled image are extracted through the trained convolutional neural network 14; and the result output module 15 performs regression calculation on the human-contour features of the fully connected layer of the convolutional neural network 14, to obtain and output the number of human bodies contained in the scaled image. If the user of the camera device 10 determines by observation that the number of human bodies obtained by the trained convolutional neural network 14 differs greatly from the actual number of human bodies on the image, the user can, through the communication interface 16, obtain the convolutional neural network with updated weight parameters from the server providing the trained convolutional neural network service, and update the weight parameters in the trained convolutional neural network 14, thereby guaranteeing the accuracy of the number of human bodies obtained by the trained convolutional neural network.
In this embodiment, the scaled image is input into the trained convolutional neural network, and the number of human bodies on the image is obtained through the trained convolutional neural network. Because the weight parameters in each convolutional layer and fully connected layer of the trained convolutional neural network represent trained human-contour features, even if people occlude one another in the image collected by the image collection module, the human-contour features in the scaled image can still be learned by the trained convolutional neural network; after regression calculation on the human-contour features in the scaled image, the number of human bodies in the scaled image can be identified accurately, thereby providing accurate early warning for the management of people flow.
In one embodiment, the method may further comprise:
inputting image samples containing human bodies, together with the number of human bodies contained in each image sample, into the untrained convolutional neural network, to train the convolutional layers and fully connected layers of the untrained convolutional neural network;
stopping training the convolutional neural network when it is determined that the weight parameters of the connections between nodes in the convolutional layers and fully connected layers meet the preset condition, thereby obtaining the trained convolutional neural network.
In one embodiment, the method may further comprise:
determining whether the error between the number of human bodies in each image sample and the actual number of human bodies in that image sample is less than a preset threshold;
when the error between the number of human bodies in each image sample and the actual number of human bodies in that image sample is less than the preset threshold, determining that the weight parameters of the connections between nodes in the convolutional layers and fully connected layers meet the preset condition.
In one embodiment, the method may further comprise:
determining the actual number of human bodies in the image;
calculating the error between the actual number of human bodies and the number of human bodies on the image obtained by the convolutional neural network;
determining, according to the error, whether to update the weight parameters of the trained convolutional neural network.
In one embodiment, the method may further comprise:
when the weight parameters of the trained convolutional neural network need to be updated, obtaining the updated weight parameters of the convolutional neural network from the server providing the convolutional neural network.
In one embodiment, the method may further comprise:
determining the collection period at which the image collection module collects images within a set time period;
controlling the image collection module to obtain images from the camera device at the collection period;
counting, within the set time period, the total number of human bodies on the images obtained through the trained convolutional neural network, to obtain the crowd density within the set time period.
For details of how the number of human bodies is determined, please refer to the subsequent embodiments.
So far, with the above method provided by embodiments of the present disclosure, the human-contour features in an image can be extracted through the trained convolutional neural network, and the number of human bodies in the image can be identified through those human-contour features, so that accurate early warning can be provided for the management of people flow.
The technical solutions provided by embodiments of the present disclosure are described below with specific embodiments.
Fig. 2A is a flowchart of training a convolutional neural network according to exemplary embodiment one, Fig. 2B is a schematic diagram of the convolutional neural network according to exemplary embodiment one, and Fig. 2C is a schematic diagram of training the convolutional neural network according to exemplary embodiment one. This embodiment uses the above method provided by embodiments of the present disclosure and takes as an example how the convolutional neural network is trained with labeled human-body samples. As shown in Fig. 2A, the training comprises the following steps:
In step S201, a set number of image samples containing human bodies, together with the number of human bodies contained in each image sample, are input into the untrained convolutional neural network, to train the convolutional layers and fully connected layers of the untrained convolutional neural network.
In step S202, it is determined whether the error between the number of human bodies in each image sample and the actual number of human bodies in that image sample is less than a preset threshold. When the error is less than the preset threshold, step S203 is executed; when the error is greater than or equal to the preset threshold, step S201 is executed.
In step S203, when the error between the number of human bodies in each image sample and the actual number of human bodies in that image sample is less than the preset threshold, it is determined that the weight parameters of the connections between nodes in the convolutional layers and fully connected layers meet the preset condition.
In step S204, when it is determined that the weight parameters of the connections between nodes in the convolutional layers and fully connected layers meet the preset condition, training of the convolutional neural network is stopped, and the trained convolutional neural network is obtained.
Before the untrained convolutional neural network is trained, a massive number of image samples of human bodies needs to be prepared (for example, 20000 image samples containing different numbers of human bodies; the "massive number" described in the present disclosure can be defined by a large set quantity, for example a set quantity on the order of tens of thousands). These image samples are labeled: for example, the label of an image sample containing 0 human bodies is 0, the label of an image sample containing 1 human body is 1, ..., and the label of an image sample containing N human bodies is N (N is a positive integer). For instance, 20000 image samples containing different numbers of human bodies can be prepared, covering from 0 to 100 human bodies: 30 image samples contain 0 human bodies, 50 image samples contain 1 human body, and so on. The numbers of image samples containing different numbers of human bodies may be identical or different; the present disclosure does not limit this.
The structure of the convolutional neural network to be trained can refer to the schematic of Fig. 2B. As shown in Fig. 2B, the untrained convolutional neural network comprises an input layer, a first convolutional layer, a second convolutional layer, a third convolutional layer, a fourth convolutional layer, a fifth convolutional layer, a first fully connected layer, a second fully connected layer, and an output layer. The above 20000 image samples are input into this convolutional neural network as training samples, and according to the results output by the convolutional neural network, the weight parameters of the connections between nodes on each convolutional layer are adjusted continuously. In this continuous adjustment process, after the convolutional neural network is trained on the input training samples, the output results are compared with the results calibrated by the user, and the accuracy gradually improves. Meanwhile, the user can set a preset condition in advance; for example, if in the continuous adjustment process the accuracy of the results output by the convolutional neural network, compared with the results calibrated by the user, reaches a preset accuracy threshold, the weight parameters of the connections between the nodes of each convolutional layer are the optimal weight parameters, and at this point it can be considered that the weight parameters of the connections between nodes in the convolutional layers and fully connected layers of the convolutional neural network meet the preset condition.
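With a 227*227 input, the structure of Fig. 2B (five convolutional layers followed by two fully connected layers and an output layer) resembles an AlexNet-style network whose classifier is replaced by a single regression output. The sketch below, assuming PyTorch, shows one plausible instantiation; the channel counts and kernel sizes are illustrative assumptions, not values disclosed in the patent.

```python
import torch
import torch.nn as nn

class CrowdCountNet(nn.Module):
    """Five conv layers + two fully connected layers + one regression output,
    mirroring the layout of Fig. 2B (channel sizes are illustrative)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4), nn.ReLU(), nn.MaxPool2d(3, 2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(3, 2),
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
            nn.Linear(4096, 4096), nn.ReLU(),
            nn.Linear(4096, 1),  # output layer: the regressed human-body count
        )

    def forward(self, x):
        return self.fc(self.features(x))

net = CrowdCountNet()
print(net(torch.randn(1, 3, 227, 227)).shape)  # torch.Size([1, 1])
```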
Regarding the accuracy of the number of human bodies output by the result output module 15, the number of training iterations of the convolutional neural network can also be controlled through the error calculation module 20 shown in Fig. 2C, in the manner of the above steps S202 and S203. In one embodiment, the error calculation module 20 can calculate, through a loss layer using the L2 norm, the error between the actual number of human bodies in an image sample and the number of human bodies obtained by the untrained convolutional neural network 14. If the error value calculated by the error calculation module 20 is large and reaches the preset threshold, the convolutional neural network 14 continues to be trained on the image samples. For example, if the output result obtained by the convolutional neural network 14 is 1, indicating that the image sample contains one human body, while the image sample actually contains 50 human bodies, the error this time is (50-1)*(50-1), which is greater than the preset threshold, so the convolutional neural network 14 needs to be trained further and the weight parameters of each layer in the convolutional neural network 14 are updated once more; the iteration continues in this way. For example, after 1,000,000 rounds of iteration, when the output result obtained by the convolutional neural network 14 is 49, the error is (50-49)*(50-49); at this point, training of the convolutional neural network 14 can be stopped, the weight parameters obtained by training are saved, and the trained convolutional neural network is obtained.
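A training loop along these lines, using the squared (L2) error between predicted and labeled counts and stopping once the error falls below a threshold, might look like the following sketch. It assumes PyTorch and the hypothetical CrowdCountNet defined above; the learning rate, threshold, and data loader are illustrative assumptions.

```python
import torch
import torch.nn as nn

def train(net: nn.Module, loader, threshold: float = 1.0, max_iters: int = 1_000_000):
    """Train with L2 (squared-error) loss on the labeled counts, stopping
    when the error drops below the preset threshold (cf. steps S202-S204)."""
    optimizer = torch.optim.SGD(net.parameters(), lr=1e-4)
    criterion = nn.MSELoss()
    it = 0
    for epoch in range(10_000):            # upper bound; stopping is error-driven
        for images, counts in loader:       # counts: actual human-body labels
            pred = net(images).squeeze(1)   # predicted counts, shape (batch,)
            loss = criterion(pred, counts.float())  # e.g. (50 - 1)**2 early on
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            it += 1
            if loss.item() < threshold or it >= max_iters:
                return net                  # preset condition met: stop training
    return net
```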
In this embodiment, the convolutional neural network is trained with massive image samples. Because the image samples contain massive human-contour features, the weight parameters of the trained convolutional neural network can express human-contour features accurately. When an image input into the trained convolutional neural network contains human contours, the features conducive to human-contour identification in the image can be identified, avoiding the face-recognition errors that arise when overlapping human bodies occlude faces.
Fig. 3 is a flowchart of a method for determining the number of human bodies according to exemplary embodiment two. This embodiment uses the above method provided by embodiments of the present disclosure and, in conjunction with Fig. 1B, takes as an example how the weight parameters in the convolutional neural network are updated according to the number of human bodies. As shown in Fig. 3, the method comprises the following steps:
In step S301, the image collected by the image collection module is scaled according to the input dimension of the trained convolutional neural network, so that the resolution of the scaled image is identical to the input dimension of the convolutional neural network.
In step S302, the scaled image is input into the trained convolutional neural network; the weight parameters in each convolutional layer and fully connected layer of the trained convolutional neural network represent trained human-contour features.
In step S303, human-contour features in the scaled image are extracted through the trained convolutional neural network.
In step S304, regression calculation is performed on the obtained human-contour features through the trained convolutional neural network, to obtain the number of human bodies in the scaled image.
For the related description of steps S301 to S304, refer to the description of the embodiment shown in Fig. 1A above, which is not detailed here.
In step S305, it is monitored whether a button for controlling weight-parameter updating is triggered; if the button is triggered, step S306 is executed; if the button is not triggered, this number of human bodies is output as the final number of human bodies.
In one embodiment, the button for controlling weight-parameter updating may be either a physical button or a virtual button on the device performing the method described in the present disclosure. This button can be triggered by the user based on observation; for example, when the user finds that the number of human bodies output by the device differs greatly from the number of human bodies shown in the image frame, the user triggers this button, to determine that the weight parameters of the convolutional neural network need to be updated.
In step S306, when the weight parameters of the trained convolutional neural network need to be updated, the updated weight parameters of the convolutional neural network are obtained from the server providing the trained convolutional neural network.
In one embodiment, when the server providing the convolutional neural network receives requests for weight-parameter updates sent by a large number of users, the server can update the weight parameters of the convolutional neural network, to ensure that users can obtain an accurate number of human bodies through the convolutional neural network in use. In one embodiment, the weight parameters can be obtained from the server through the communication interface 16 shown in Fig. 1B.
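One way such an update could be realized is to download a new parameter file from the server and load it into the network. The following is a hedged sketch assuming PyTorch and the requests library; the endpoint URL and file format are illustrative assumptions, not part of the disclosed protocol.

```python
import io
import requests
import torch

def update_weights(net: torch.nn.Module, url: str) -> None:
    """Fetch updated weight parameters from the server and load them
    into the trained convolutional neural network (cf. step S306)."""
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    state_dict = torch.load(io.BytesIO(resp.content), map_location="cpu")
    net.load_state_dict(state_dict)

# Hypothetical endpoint; the real server address is not part of the disclosure.
# update_weights(net, "https://example.com/models/crowd_count/latest.pt")
```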
On the basis of the beneficial technical effects of the above embodiments, this embodiment updates the weight parameters of the convolutional neural network, which can ensure that an accurate number of human bodies is obtained through the convolutional neural network in use.
Fig. 4 is a flowchart of a method for determining the number of human bodies according to exemplary embodiment three. This embodiment uses the above method provided by embodiments of the present disclosure and, in conjunction with Fig. 1B, takes as an example how the crowd density is obtained. As shown in Fig. 4, the method comprises the following steps:
In step S401, the collection period at which the image collection module collects images within a set time period is determined.
In step S402, the image collection module is controlled to obtain images from the camera device at the collection period.
In one embodiment, the collection period can be set according to the user's actual needs; for example, the image collection module 12 collects a video image of the camera device 10 every minute. In one embodiment, the set time period can also be set according to the user's actual needs, for example the peak periods of 8:00 to 10:00 in the morning and 17:00 to 19:00 in the evening; in this case, the image collection module 12 can collect a video image of the camera device 10 every minute from 8:00 to 10:00 in the morning, and collect a video image of the camera device every minute from 17:00 to 19:00 in the evening.
In step S403, within the set time period, the total number of human bodies on the images obtained through the trained convolutional neural network is counted.
In step S404, the crowd density per unit time is determined according to the total number of human bodies and the time span of the set time period.
In step S405, if the crowd density reaches a warning threshold, warning information is sent.
In an exemplary scenario, for example between 8:00 and 10:00 in the morning of November 27, 2015, the image collection module 12 collects a video image of the camera device 10 every minute, obtaining 120 human-body images in total. The number of human bodies contained in each image is obtained by the device 11 performing the above embodiments; the numbers of human bodies on these 120 images are summed up and then divided by the time span of the set time period, 120 minutes, so that the crowd density per unit time (per minute) can be obtained. If the crowd density per minute has reached 100 people, while the people flow of the venue where the camera device 10 is located must be controlled within 70 people, warning information can be sent at this point, to prompt relevant staff to restrict the people flow of this venue and avoid casualty accidents caused by unnecessary crowding.
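The density-and-warning calculation of steps S404 and S405 reduces to a sum and a division; below is a minimal sketch using the figures from the example above (the per-image counts and the 70-person limit are illustrative).

```python
def crowd_density_per_minute(counts: list[int], span_minutes: int) -> float:
    """Sum the per-image human-body counts over the set time period and
    divide by its time span, giving the crowd density per minute (step S404)."""
    return sum(counts) / span_minutes

# Example following the scenario above: 120 images over 120 minutes.
counts = [100] * 120                  # illustrative per-image counts
density = crowd_density_per_minute(counts, 120)
WARNING_THRESHOLD = 70                # venue limit from the example
if density >= WARNING_THRESHOLD:      # step S405: density reaches the threshold
    print(f"Warning: density {density:.0f} people/minute exceeds {WARNING_THRESHOLD}")
```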
On the basis of the beneficial technical effects of the above embodiments, this embodiment counts, within the set time period, the total number of human bodies on the images obtained through the trained convolutional neural network, determines the crowd density per unit time according to the total number of human bodies and the time span of the set time period, and sends warning information when the crowd density reaches the warning threshold, so that relevant staff can restrict the people flow of the venue and avoid casualties caused by unnecessary crowding.
Fig. 5 is a block diagram of a device for determining the number of human bodies according to an exemplary embodiment. As shown in Fig. 5, the device for determining the number of human bodies comprises:
a preprocessing module 51 configured to scale the image collected by the image collection module according to the input dimension of the trained convolutional neural network, so that the resolution of the scaled image is identical to the input dimension of the convolutional neural network;
an input module 52 configured to input the image scaled by the preprocessing module 51 into the trained convolutional neural network, wherein the weight parameters in each convolutional layer and fully connected layer of the trained convolutional neural network represent trained human-contour features;
a feature extraction module 53 configured to extract, through the trained convolutional neural network, human-contour features in the scaled image input by the input module 52;
a regression calculation module 54 configured to perform regression calculation, through the trained convolutional neural network, on the human-contour features obtained by the feature extraction module 53, to obtain the number of human bodies in the scaled image.
In one embodiment, the preprocessing module 51 may be identical to the preprocessing module 13 shown in Fig. 1B above, and the regression calculation module 54 may comprise the convolutional neural network 14 and the result output module 15 shown in Fig. 2C above.
Fig. 6 is a block diagram of another device for determining the number of human bodies according to an exemplary embodiment. As shown in Fig. 6, on the basis of the embodiment shown in Fig. 5 above, in one embodiment the device may further comprise:
a first training module 55 configured to input a set number of image samples containing human bodies, together with the number of human bodies contained in each image sample, into an untrained convolutional neural network, to train the convolutional layers and fully connected layers of the untrained convolutional neural network;
a first control module 56 configured to stop training the convolutional neural network when it is determined that the weight parameters of the connections between nodes in the convolutional layers and fully connected layers trained by the first training module 55 meet the preset condition, thereby obtaining the trained convolutional neural network.
In one embodiment, the device may further comprise:
a first determination module 57 configured to determine whether the error between the number of human bodies in each image sample and the actual number of human bodies in that image sample is less than a preset threshold;
a second determination module 58 configured to determine that the weight parameters of the connections between nodes in the convolutional layers and fully connected layers meet the preset condition when the first determination module 57 determines that the error between the number of human bodies in each image sample and the actual number of human bodies in that image sample is less than the preset threshold; the first control module 56 can then stop training the convolutional neural network, thereby obtaining the trained convolutional neural network.
Fig. 7 is a block diagram of yet another device for determining the number of human bodies according to an exemplary embodiment. As shown in Fig. 7, on the basis of the embodiment shown in Fig. 5 or Fig. 6 above, in one embodiment the device may further comprise:
a third determination module 59 configured to determine the actual number of human bodies in the image;
an error calculation module 60 configured to calculate the error between the actual number of human bodies determined by the third determination module 59 and the number of human bodies on the image obtained by the regression calculation module 54 through the trained convolutional neural network;
a fourth determination module 61 configured to determine, according to the error calculated by the error calculation module 60, whether the weight parameters of the trained convolutional neural network need to be updated.
In one embodiment, the error calculation module 60 may be identical to the error calculation module 20 in Fig. 2C above.
In one embodiment, the device may further comprise:
an acquisition module 62 configured to, when the fourth determination module 61 determines that the weight parameters of the trained convolutional neural network need to be updated, obtain the updated weight parameters of the convolutional neural network from the server providing the trained convolutional neural network, so that the feature extraction module 53 can extract the human-contour features in the scaled image according to the updated convolutional neural network.
In one embodiment, the device may further comprise:
a fifth determination module 63 configured to determine the collection period at which the image collection module collects images within the set time period;
a second control module 64 configured to control the image collection module to obtain images from the camera device at the collection period determined by the fifth determination module 63;
a statistics module 65 configured to count, within the set time period, the total number of human bodies, obtained through the regression calculation module 54, on the images collected by the image collection module under the control of the second control module 64;
a sixth determination module 66 configured to determine the crowd density per unit time according to the total number of human bodies counted by the statistics module 65 and the time span of the set time period.
With respect to the devices in the above embodiments, the specific manner in which each module performs operations has been described in detail in the embodiments related to the method, and will not be elaborated here.
Fig. 8 is a block diagram of a device applicable to determining the number of human bodies according to an exemplary embodiment. For example, the device 800 can be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
Referring to Fig. 8, the device 800 can comprise one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls the overall operations of the device 800, such as operations associated with display, telephone calls, data communication, camera operations, and recording operations. The processing component 802 can comprise one or more processors 820 to execute instructions, so as to complete all or part of the steps of the above method. In addition, the processing component 802 can comprise one or more modules to facilitate interaction between the processing component 802 and other components; for example, the processing component 802 can comprise a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support the operation of the device 800. Examples of such data comprise instructions for any application program or method operated on the device 800, contact data, phonebook data, messages, pictures, videos, and the like. The memory 804 can be realized by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
The power component 806 provides power for the various components of the device 800. The power component 806 can comprise a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 800.
The multimedia component 808 comprises a screen providing an output interface between the device 800 and the user. In some embodiments, the screen can comprise a liquid crystal display (LCD) and a touch panel (TP). If the screen comprises a touch panel, the screen can be implemented as a touch screen to receive input signals from the user. The touch panel comprises one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor can not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation. In some embodiments, the multimedia component 808 comprises a front camera and/or a rear camera. When the device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 comprises a microphone (MIC); when the device 800 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode, the microphone is configured to receive external audio signals. The received audio signals can be further stored in the memory 804 or sent via the communication component 816. In some embodiments, the audio component 810 also comprises a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which can be a keyboard, a click wheel, buttons, and the like. These buttons can include but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 814 comprises one or more sensors for providing the device 800 with status assessment of various aspects. For example, the sensor component 814 can detect the on/off state of the device 800 and the relative positioning of components, for example the display and keypad of the device 800; the sensor component 814 can also detect a change in position of the device 800 or of one component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in temperature of the device 800. The sensor component 814 can comprise a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 can also comprise an optical sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 can also comprise an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the device 800 and other devices. The device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 also comprises a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module can be realized based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 800 can be realized by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions is also provided, for example the memory 804 comprising instructions, where the above instructions can be executed by the processor 820 of the device 800 to complete the above method. For example, the non-transitory computer-readable storage medium can be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Those skilled in the art, after considering the description and practicing the disclosure herein, will easily conceive of other embodiments of the present disclosure. The present application is intended to cover any variations, uses, or adaptations of the present disclosure that follow the general principles of the present disclosure and include common general knowledge or conventional technical means in the art not disclosed herein. The description and embodiments are to be considered as exemplary only, with the true scope and spirit of the present disclosure being indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structure described above and illustrated in the accompanying drawings, and that various modifications and changes can be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.
Claims (13)
1. A method for determining the number of human bodies, characterized in that the method comprises:
scaling an image collected by an image collection module according to the input dimension of a trained convolutional neural network, and controlling the resolution of the scaled image to be identical to the input dimension of the convolutional neural network;
inputting the scaled image into the trained convolutional neural network, wherein the weight parameters in each convolutional layer and each fully connected layer of the trained convolutional neural network represent trained human-contour features;
extracting human-contour features in the scaled image through the trained convolutional neural network;
performing regression calculation on the obtained human-contour features through the trained convolutional neural network, to obtain the number of human bodies in the scaled image.
2. The method according to claim 1, characterized in that the method further comprises:
inputting image samples of a set quantity that contain human bodies, together with the number of human bodies contained in each image sample, into an untrained convolutional neural network, so as to train the convolutional layers and fully connected layers of the untrained convolutional neural network;
when it is determined that the weight parameters of the connections between the nodes in the convolutional layers and fully connected layers satisfy a preset condition, stopping training the convolutional neural network, thereby obtaining the trained convolutional neural network.
3. The method according to claim 2, characterized in that the method further comprises:
determining whether the error between the determined number of human bodies in each image sample and the actual number of human bodies in that image sample is less than a preset threshold;
when the error between the number of human bodies in each image sample and the actual number of human bodies in that image sample is less than the preset threshold, determining that the weight parameters of the connections between the nodes in the convolutional layers and fully connected layers satisfy the preset condition.
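Claims 2 and 3 describe supervised training that halts once the count error on every image sample drops below a preset threshold. A minimal training sketch, reusing the illustrative `CrowdCounter` above (the data, threshold, and optimizer settings are made-up assumptions):

```python
# Training sketch for claims 2-3: stop when every sample's count error is
# below a preset threshold. Data and hyperparameters are illustrative.
import torch
import torch.nn.functional as F

model = CrowdCounter(input_size=224)           # the illustrative network above
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
samples = torch.rand(8, 3, 224, 224)           # image samples containing human bodies
labels = torch.randint(0, 20, (8, 1)).float()  # actual body count per sample
threshold = 0.5                                # preset error threshold

for epoch in range(1000):
    optimizer.zero_grad()
    pred = model(samples)
    loss = F.mse_loss(pred, labels)
    loss.backward()
    optimizer.step()
    # Preset condition (claim 3): the error between predicted and actual
    # counts is below the threshold for every sample; then stop training.
    if (pred.detach() - labels).abs().max() < threshold:
        break
```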
4. The method according to claim 1, characterized in that the method further comprises:
determining the actual number of human bodies in the image;
calculating the error between the actual number of human bodies and the number of human bodies in the image obtained by the trained convolutional neural network;
determining, according to the error, whether the weight parameters of the trained convolutional neural network need to be updated.
5. The method according to claim 4, characterized in that the method further comprises:
when the weight parameters of the trained convolutional neural network need to be updated, obtaining updated weight parameters of the convolutional neural network from the server that provides the trained convolutional neural network.
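Claims 4 and 5 add a runtime check: compare the network's output against the actual count for an image and, when the error is too large, pull refreshed weight parameters from the server. The sketch below is an outline under assumed names: `fetch_updated_weights` and `max_error` are hypothetical stand-ins for the server interaction, not anything specified by the patent.

```python
# Sketch of the claim 4-5 update path; the helper and threshold below are
# hypothetical placeholders.
import torch

def fetch_updated_weights(model):
    # Hypothetical server call: in a real deployment this would download a
    # refreshed state dict from the server providing the trained network.
    # Returning the current weights keeps the sketch runnable standalone.
    return model.state_dict()

def maybe_update(model, image, actual_count, max_error=2.0):
    with torch.no_grad():
        predicted = model(image).item()
    error = abs(actual_count - predicted)
    if error > max_error:
        # The deployed weights look stale: obtain updated weight parameters
        # and load them into the local model.
        model.load_state_dict(fetch_updated_weights(model))
    return error
```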
6. The method according to claim 1, characterized in that the method further comprises:
determining the collection period at which the image capture module collects images within a set time period;
controlling the image capture module to obtain the images from a camera at the collection period;
counting the total number of human bodies on the images obtained by the trained convolutional neural network within the set time period;
determining the crowd density per unit time according to the total number of human bodies and the time length corresponding to the set time period.
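Claim 6 reduces to simple bookkeeping: sample frames at a fixed collection period over a set time period, sum the per-frame counts produced by the network, and divide by the window length. A minimal sketch with assumed numbers:

```python
# Claim-6 sketch: crowd density per unit time over a set time period.
# The period, window, and per-frame counts are illustrative assumptions.
collection_period_s = 10           # capture one frame every 10 seconds
time_window_s = 600                # set time period: 10 minutes
frame_counts = [12, 15, 9, 14, 11] # counts from the trained network per frame
total_bodies = sum(frame_counts)
density_per_minute = total_bodies / (time_window_s / 60)
print(f"crowd density: {density_per_minute:.1f} bodies per minute")
```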
7. A device for determining the number of human bodies, characterized in that the device comprises:
a preprocessing module, configured to scale an image collected by an image capture module according to the input dimension of a trained convolutional neural network, so that the resolution of the scaled image equals the input dimension of the convolutional neural network;
an input module, configured to input the image scaled by the preprocessing module into the trained convolutional neural network, wherein the weight parameters in each convolutional layer and each fully connected layer of the trained convolutional neural network represent trained human contour features;
a feature extraction module, configured to extract, by means of the trained convolutional neural network, the human contour features in the scaled image input by the input module;
a regression calculation module, configured to perform, by means of the trained convolutional neural network, a regression calculation on the human contour features extracted by the feature extraction module, so as to obtain the number of human bodies in the scaled image.
8. The device according to claim 7, characterized in that the device further comprises:
a first training module, configured to input image samples of a set quantity that contain human bodies, together with the number of human bodies contained in each image sample, into an untrained convolutional neural network, so as to train the convolutional layers and fully connected layers of the untrained convolutional neural network;
a first control module, configured to stop training the convolutional neural network when it is determined that the weight parameters, obtained by the training of the first training module, of the connections between the nodes in the convolutional layers and fully connected layers satisfy a preset condition, thereby obtaining the trained convolutional neural network.
9. The device according to claim 8, characterized in that the device further comprises:
a first determination module, configured to determine whether the error between the number of human bodies in each image sample and the actual number of human bodies in that image sample is less than a preset threshold;
a second determination module, configured to determine, when the first determination module determines that the error between the number of human bodies in each image sample and the actual number of human bodies in that image sample is less than the preset threshold, that the weight parameters of the connections between the nodes in the convolutional layers and fully connected layers satisfy the preset condition.
10. The device according to claim 7, characterized in that the device further comprises:
a third determination module, configured to determine the actual number of human bodies in the image;
an error calculation module, configured to calculate the error between the actual number of human bodies determined by the third determination module and the number of human bodies in the image calculated by the regression calculation module through the trained convolutional neural network;
a fourth determination module, configured to determine, according to the error calculated by the error calculation module, whether the weight parameters of the trained convolutional neural network need to be updated.
11. The device according to claim 10, characterized in that the device further comprises:
an acquisition module, configured to obtain updated weight parameters of the convolutional neural network from the server that provides the trained convolutional neural network, when the fourth determination module determines that the weight parameters of the trained convolutional neural network need to be updated.
12. The device according to claim 7, characterized in that the device further comprises:
a fifth determination module, configured to determine the collection period at which the image capture module collects images within a set time period;
a second control module, configured to control the image capture module to obtain the images from a camera at the collection period determined by the fifth determination module;
a statistics module, configured to count the total number of human bodies, within the set time period, on the images collected by the image capture module under the control of the second control module and processed by the trained convolutional neural network;
a sixth determination module, configured to determine the crowd density per unit time according to the total number of human bodies counted by the statistics module and the time length corresponding to the set time period.
13. A device for determining the number of human bodies, characterized in that the device comprises:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
scale an image collected by an image capture module according to the input dimension of a trained convolutional neural network, so that the resolution of the scaled image equals the input dimension of the convolutional neural network;
input the scaled image into the convolutional neural network, wherein the weight parameters in each convolutional layer and each fully connected layer of the convolutional neural network represent trained human contour features;
extract the human contour features in the scaled image by means of the trained convolutional neural network;
perform a regression calculation on the obtained human contour features by means of the trained convolutional neural network, to obtain the number of human bodies in the scaled image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510920993.9A CN105590094B (en) | 2015-12-11 | 2015-12-11 | Method and device for determining the number of human bodies |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105590094A (en) | 2016-05-18 |
CN105590094B CN105590094B (en) | 2019-03-01 |
Family
ID=55929664
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510920993.9A Active CN105590094B (en) | 2015-12-11 | 2015-12-11 | Determine the method and device of human body quantity |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105590094B (en) |
- 2015-12-11: application CN201510920993.9A filed (CN); granted as CN105590094B, status Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8345984B2 (en) * | 2010-01-28 | 2013-01-01 | Nec Laboratories America, Inc. | 3D convolutional neural networks for automatic human action recognition |
CN103778414A (en) * | 2014-01-17 | 2014-05-07 | 杭州电子科技大学 | Real-time face recognition method based on deep neural network |
CN104077613A (en) * | 2014-07-16 | 2014-10-01 | 电子科技大学 | Crowd density estimation method based on cascaded multilevel convolution neural network |
CN104166861A (en) * | 2014-08-11 | 2014-11-26 | 叶茂 | Pedestrian detection method |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107566781B (en) * | 2016-06-30 | 2019-06-21 | 北京旷视科技有限公司 | Video monitoring method and video monitoring equipment |
CN107566781A (en) * | 2016-06-30 | 2018-01-09 | 北京旷视科技有限公司 | Video frequency monitoring method and video monitoring equipment |
WO2018059408A1 (en) * | 2016-09-29 | 2018-04-05 | 北京市商汤科技开发有限公司 | Cross-line counting method, and neural network training method and apparatus, and electronic device |
CN106980880A (en) * | 2017-03-06 | 2017-07-25 | 北京小米移动软件有限公司 | The method and device of images match |
CN106991641A (en) * | 2017-03-10 | 2017-07-28 | 北京小米移动软件有限公司 | It is implanted into the method and device of picture |
CN107316434A (en) * | 2017-07-28 | 2017-11-03 | 惠州市伊涅科技有限公司 | Environment passes through quantity monitoring method |
WO2019047655A1 (en) * | 2017-09-05 | 2019-03-14 | 百度在线网络技术(北京)有限公司 | Method and apparatus for use in determining driving behavior of driverless vehicle |
CN107392189A (en) * | 2017-09-05 | 2017-11-24 | 百度在线网络技术(北京)有限公司 | For the method and apparatus for the driving behavior for determining unmanned vehicle |
US11704771B2 (en) * | 2017-12-01 | 2023-07-18 | Huawei Technologies Co., Ltd. | Training super-resolution convolutional neural network model using a high-definition training image, a low-definition training image, and a mask image |
US20200334789A1 (en) * | 2017-12-01 | 2020-10-22 | Huawei Technologies Co., Ltd. | Image Processing Method and Device |
CN110322037A (en) * | 2018-03-28 | 2019-10-11 | 普天信息技术有限公司 | Method for predicting and device based on inference pattern |
CN109883005A (en) * | 2019-03-26 | 2019-06-14 | 广州远正智能科技股份有限公司 | Terminal tail end air conditioner device intelligence control system, method, medium and equipment |
CN110096959A (en) * | 2019-03-28 | 2019-08-06 | 上海拍拍贷金融信息服务有限公司 | Flow of the people calculation method, device and computer storage medium |
CN110298254A (en) * | 2019-05-30 | 2019-10-01 | 罗普特科技集团股份有限公司 | A kind of analysis method and system for personnel's abnormal behaviour |
CN111199215A (en) * | 2020-01-06 | 2020-05-26 | 郑红 | People counting method and device based on face recognition |
CN112235727A (en) * | 2020-09-02 | 2021-01-15 | 武汉烽火众智数字技术有限责任公司 | Personnel flow monitoring and analyzing method and system based on MAC data |
CN112132234A (en) * | 2020-10-28 | 2020-12-25 | 重庆斯铂电气自动化设备有限公司 | Oil level monitoring system and method based on image recognition |
CN113763344A (en) * | 2021-08-31 | 2021-12-07 | 中建一局集团第三建筑有限公司 | Operation platform safety detection method and device, electronic equipment and medium |
Also Published As
Publication number | Publication date |
---|---|
CN105590094B (en) | 2019-03-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105590094A (en) | Method and device for determining number of human bodies | |
EP3012766B1 (en) | Method and apparatus for processing terminal anomaly | |
CN106791893A (en) | Net cast method and device | |
CN105205479A (en) | Human face value evaluation method, device and terminal device | |
EP3086275A1 (en) | Numerical value transfer method, terminal, cloud server, computer program and recording medium | |
CN110381443A (en) | Near-field communication card Activiation method and device | |
CN103944804B (en) | Contact recommending method and device | |
CN112115894B (en) | Training method and device of hand key point detection model and electronic equipment | |
CN106803777A (en) | Method for information display and device | |
CN106170004A (en) | Process the method and device of identifying code | |
EP3312702B1 (en) | Method and device for identifying gesture | |
CN109359056A (en) | A kind of applied program testing method and device | |
CN106778531A (en) | Face detection method and device | |
CN107527024A (en) | Face face value appraisal procedure and device | |
CN107766820A (en) | Image classification method and device | |
CN105354560A (en) | Fingerprint identification method and device | |
CN107832746A (en) | Expression recognition method and device | |
CN105868709A (en) | Method and apparatus for closing fingerprint identifying function | |
CN107341509A (en) | The training method and device of convolutional neural networks | |
CN106980880A (en) | The method and device of images match | |
CN108476379A (en) | Information recording method and information record carrier | |
CN112948704A (en) | Model training method and device for information recommendation, electronic equipment and medium | |
CN106572003A (en) | User information recommendation method and device | |
CN112529871A (en) | Method and device for evaluating image and computer storage medium | |
CN107105311A (en) | Live broadcasting method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||