CN111191506A - Personnel flow statistical method and device, computer equipment and storage medium - Google Patents
- Publication number
- CN111191506A (application CN201911173178.5A)
- Authority
- CN
- China
- Prior art keywords
- face image
- image
- face
- target
- personnel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
Abstract
The application relates to a personnel flow statistical method and apparatus, a computer device, and a storage medium. The method comprises the following steps: acquiring a first face image shot in a statistical area within a statistical time period; performing similarity matching between the first face image and second face images in a face library; determining a second face image whose similarity meets a condition as the target face image; identifying the target person in the target face image, and counting person pass counts according to the person identifier corresponding to the target person; and determining the personnel flow of the statistical area within the statistical time period according to the pass counts corresponding to different person identifiers. By adopting the method, the requirement of counting park entries by non-repeating individuals can be met.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for statistical analysis of a flow of people, a computer device, and a storage medium.
Background
As safety awareness improves, more and more parks need to collect statistics on the personnel flow within the park over a certain period of time, so that a security deployment strategy can be determined from the personnel flow, providing a safer environment and higher-quality service for park users.
A traditional personnel flow statistics approach captures images of people entering the park with a camera installed at the park entrance, then performs person detection on the captured images and derives the personnel flow from the number of people detected. However, this approach can only count the total number of people entering the park within a period of time; it is difficult to meet the requirement of counting entries by non-repeating individuals.
Disclosure of Invention
Based on this, it is necessary to provide a personnel flow statistical method and apparatus, a computer-readable storage medium, and a computer device to solve the technical problem that it is difficult to meet the requirement of counting park entries by non-repeating individuals.
A people flow statistics method, the method comprising:
acquiring a first face image shot in a statistical time period in a statistical area;
carrying out similarity matching on the first face image and a second face image in a face library;
determining a second face image with the similarity meeting the condition as a target face image;
identifying a target person in the target face image, and counting the passing times of the persons according to a person identifier corresponding to the target person;
and determining the personnel flow of the statistical region in the statistical time period according to the personnel passing times corresponding to different personnel identifications.
In one embodiment, the acquiring the first face image of the statistical region taken within the statistical period includes:
acquiring more than one frame of live images shot of the same person in a statistical area within a statistical time period;
calculating a quality score for the live image;
and taking the live image with the highest quality score as the first face image.
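The quality-score selection above can be sketched as follows. This is an illustrative outline only: the variance-based `quality_score` is a hypothetical stand-in for whatever face-quality-assessment model an implementation actually uses.

```python
# Hypothetical sketch of choosing the highest-quality live frame as the
# "first face image". quality_score is a placeholder metric, not the
# patent's actual assessment method.
def quality_score(frame):
    """Score a frame; here, higher pixel variance stands in for sharpness."""
    mean = sum(frame) / len(frame)
    return sum((p - mean) ** 2 for p in frame) / len(frame)

def select_first_face_image(live_frames):
    """Return the live frame with the highest quality score."""
    return max(live_frames, key=quality_score)
```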
In one embodiment, the acquiring the first face image of the statistical region taken within the statistical period includes:
acquiring more than one frame of live images shot of the same person in a statistical area within a statistical time period;
acquiring facial feature information of each frame of live image;
calculating average facial feature information of the more than one frame of live images according to the facial feature information of each frame of live image, and determining the average facial feature information as the facial feature information of the first facial image;
the similarity matching of the first face image and the second face image in the face library comprises the following steps:
and performing similarity matching on the facial feature information of the first face image and the facial feature information in a face library.
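The feature-averaging step above amounts to an element-wise mean over per-frame feature vectors; a minimal sketch under illustrative names:

```python
# Illustrative sketch: average the facial feature vectors extracted from
# each live frame to obtain the feature vector of the first face image.
def average_facial_features(feature_vectors):
    """Element-wise mean of per-frame facial feature vectors."""
    n = len(feature_vectors)
    dim = len(feature_vectors[0])
    return [sum(v[i] for v in feature_vectors) / n for i in range(dim)]
```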
In one embodiment, the similarity matching between the first face image and the second face image in the face library includes:
acquiring facial feature information of a first face image;
clustering the facial feature information of the first face image with facial feature information in a face database to obtain a cluster of the same type as the first face image;
and performing similarity matching on the facial feature information of the first face image and the facial feature information in the cluster.
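The clustering embodiment restricts similarity matching to one cluster of the face database. Assuming cluster centroids have already been computed (for example with a standard method such as k-means), assigning a query feature vector to its cluster can be sketched as:

```python
# Illustrative sketch: find which cluster the first face image belongs to,
# so that similarity matching only runs within that cluster.
def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nearest_cluster(query, centroids):
    """Index of the centroid closest to the query feature vector."""
    return min(range(len(centroids)),
               key=lambda i: squared_distance(query, centroids[i]))
```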
In one embodiment, the determining, as the target face image, a second face image with a similarity meeting the condition includes:
screening out similar images with similarity greater than a threshold value from the second face image;
and when there are two or more frames of similar images, determining the second face image with the greatest similarity as the target face image.
In one embodiment, the method further comprises:
when no similar image with the similarity larger than a threshold exists in the second face image, adding the first face image to the face library, and identifying a target person in the first face image;
creating a corresponding personnel identifier according to the facial feature information corresponding to the target personnel;
and counting the passing times of the personnel according to the created personnel identification.
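The new-person path above (add the face to the library, create an identifier, count one pass) can be sketched as below. Using a hash of the facial features as the identifier follows the hash-based identifier idea mentioned later in the description, but the exact scheme here is an assumption.

```python
import hashlib

# Illustrative sketch: register a person whose face matched nothing in
# the library. Identifier derivation via sha256 is an assumption.
def register_new_person(face_library, pass_counts, features):
    """Add unseen face features to the library, mint an identifier, count 1 pass."""
    person_id = hashlib.sha256(repr(features).encode()).hexdigest()[:12]
    face_library[person_id] = features
    pass_counts[person_id] = 1
    return person_id
```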
In one embodiment, the determining, as the target face image, a second face image with the similarity meeting the filtering condition includes:
screening out similar images with similarity greater than a threshold value from the second face image;
constructing a corresponding target image confirmation task based on the first face image and the similar image;
sending the target image confirmation task to a terminal; the target image confirmation task is used for determining whether the similar images have target face images according to the selection operation of a terminal user;
and receiving a confirmation result which is returned by the terminal and is based on the target image confirmation task, and screening out the target face image from the similar image according to the confirmation result.
A people flow statistics apparatus, the apparatus comprising:
the first face image determining module is used for acquiring a first face image shot in a statistical time period in a statistical area;
the target face image determining module is used for carrying out similarity matching on the first face image and a second face image in a face library; determining a second face image with the similarity meeting the condition as a target face image;
the personnel flow counting module is used for identifying target personnel in the target face image and counting the personnel passing times according to the personnel identification corresponding to the target personnel; and determining the personnel flow of the statistical region in the statistical time period according to the personnel passing times corresponding to different personnel identifications.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
acquiring a first face image shot in a statistical time period in a statistical area;
carrying out similarity matching on the first face image and a second face image in a face library;
determining a second face image with the similarity meeting the condition as a target face image;
identifying a target person in the target face image, and counting the passing times of the persons according to a person identifier corresponding to the target person;
and determining the personnel flow of the statistical region in the statistical time period according to the personnel passing times corresponding to different personnel identifications.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a first face image shot in a statistical time period in a statistical area;
carrying out similarity matching on the first face image and a second face image in a face library;
determining a second face image with the similarity meeting the condition as a target face image;
identifying a target person in the target face image, and counting the passing times of the persons according to a person identifier corresponding to the target person;
and determining the personnel flow of the statistical region in the statistical time period according to the personnel passing times corresponding to different personnel identifications.
According to the personnel flow statistical method and apparatus, the computer device, and the storage medium, the target face image is determined according to the similarity matching result between the first face image and the second face images. When the target person's image already exists in the face library, the pass count of the person identifier corresponding to the target face image is directly incremented by 1, thereby realizing statistics of pass counts corresponding to different person identifiers.
Drawings
FIG. 1 is a diagram of an application environment of the personnel flow statistical method in one embodiment;
FIG. 2 is a flow chart illustrating the personnel flow statistical method in one embodiment;
FIG. 3 is a diagram illustrating a display interface of a terminal in one embodiment;
FIG. 4 is a flow chart illustrating the personnel flow statistical method in another embodiment;
FIG. 5 is a block diagram of a personnel flow statistical apparatus in one embodiment;
FIG. 6 is a block diagram of a personnel flow statistical apparatus in another embodiment;
FIG. 7 is a diagram illustrating an internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The personnel flow statistical method provided by the application can be applied to the application environment shown in FIG. 1. Referring to FIG. 1, the method is applied to a personnel flow statistical system. The system includes an image capturing device 110 and a server 120, connected via a network. The image capturing device 110 sends the captured first face image to the server 120, and the server 120 processes the first face image with the personnel flow statistical method to obtain the personnel flow of the statistical area within the statistical time period. The image capturing device 110 may be a camera, a video camera, a still camera, or another device with an image capturing function, such as a mobile phone or a tablet computer. The server 120 may be implemented as a stand-alone server or as a server cluster composed of multiple servers. It is easily understood that the personnel flow statistical method can be performed not only on the server 120 but also on a terminal.
In one embodiment, as shown in fig. 2, a people flow statistical method is provided, which is described by taking the method as an example applied to the server in fig. 1, and includes the following steps:
s202, acquiring a first face image shot in a statistical time period in the statistical area.
The statistical area is an area where the flow of the entering and exiting people needs to be counted, and may be specifically a campus entrance area. The statistical time interval refers to a time interval in which the personnel flow in the statistical area needs to be counted. The first face image is a face image which is acquired based on image acquisition equipment erected near the statistical area and corresponds to a target person. The target person refers to a nonrepeating natural person who enters the statistical area within the statistical time period.
Specifically, the image acquisition device acquires live scenes in real time at a preset frequency during the current statistical sub-period and sends all acquired live images to the server for personnel flow statistics. The length of the statistical time period can be set freely as required, for example one day, and can be divided into several statistical sub-periods. If a sub-period is too long, a target person may enter the statistical area several times within it; if a sub-period is too short, the image acquisition device may capture the same passage of the same target person across two adjacent sub-periods, so that the person's passes are counted repeatedly. The sub-period length therefore needs to be chosen so that, as far as possible, the same target person passes through the statistical area only once per sub-period; for example, the statistical time period can be divided every 5 minutes starting from 8:00 am, yielding a series of statistical sub-periods. At the end of each sub-period, the server refreshes the pass counts based on the live images collected during that sub-period.
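Dividing the statistical time period into fixed-length sub-periods, as described, can be sketched as follows (times expressed in minutes since midnight; purely illustrative):

```python
# Illustrative sketch: split a statistical period into fixed-length
# statistical sub-periods, e.g. 5-minute slots starting at 8:00 am.
def split_into_sub_periods(start_min, end_min, sub_len_min):
    """Split [start_min, end_min) into (start, end) sub-period pairs."""
    bounds = list(range(start_min, end_min, sub_len_min)) + [end_min]
    return list(zip(bounds[:-1], bounds[1:]))
```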
The server acquires all live images collected in the statistical sub-period and sent by the image acquisition device, extracts person feature information of the target persons in the live images, and classifies the live images according to the person feature information to obtain live images of different categories. Live images of the same category are image frames captured of the same target person, i.e. they contain the face of the same target person. The person feature information can reflect one or more of the person's facial characteristics, clothing color, hair style, accessories, and the like. For example, suppose A is a person entering the park wearing a distinctive accessory; the server screens out, from all received live images, those showing the same accessory according to the person feature information, and treats the images screened out based on that accessory information as the live images collected for A. As another example, the server classifies multiple face images according to the hair style characteristics in the live images, obtaining different live images for different target persons.
The server traverses each category of live images in turn and screens out, from the live images of the same category, the forward live images in which the face is oriented toward the camera. The server then calculates the proportion of each forward live image occupied by the face and takes the forward live image with the largest face area ratio as the first face image of that category. For example, if the server divides the live images into categories A and B, it obtains the forward live image with the largest face area ratio in each category, taking the one obtained from category A as the first face image corresponding to target person A, and the one obtained from category B as the first face image corresponding to target person B.
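The largest-face-area selection among forward frames can be sketched as follows, assuming each frame record carries a face bounding box and frame size (the field names are illustrative assumptions):

```python
# Illustrative sketch: among forward-facing frames, pick the frame whose
# face bounding box occupies the largest fraction of the image.
def face_area_ratio(face_box, frame_size):
    """Fraction of the frame occupied by the face bounding box."""
    (x1, y1, x2, y2), (w, h) = face_box, frame_size
    return ((x2 - x1) * (y2 - y1)) / (w * h)

def select_largest_face_frame(forward_frames):
    """Pick the forward frame with the largest face area ratio."""
    return max(forward_frames,
               key=lambda f: face_area_ratio(f["face_box"], f["frame_size"]))
```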
In one embodiment, after acquiring an image frame, an image acquisition device may detect whether a human face exists in the image frame, and if the human face exists, acquire the image frame as a live image and send the live image to a server.
In one embodiment, the server preferentially classifies all live images based on readily discernable person characteristic information, resulting in different live images for different target persons. For example, the server preferentially classifies all live images based on the person clothes color and the person hair style characteristics.
The field images are classified based on the personnel features which are easy to distinguish, so that the classification efficiency can be improved, and resources consumed by the server in image classification can be saved.
In one embodiment, when the acquired live image has a plurality of faces, the server determines the area occupied by each face in the live image, and takes the face with the largest area ratio as the target person. The server extracts the personnel characteristic information of the target personnel and classifies the live images according to the personnel characteristic information.
In one embodiment, the server acquires live images taken of the same target person and determines the time of acquisition of each live image and the location coordinates of the face in the corresponding live image. The server screens out image frames with adjacent acquisition time and position coordinate difference smaller than a first threshold value from a plurality of field images shot aiming at the same target person, and selects a forward field image from the screened image frames, so that a first human face image is determined based on the face area ratio in the forward field image.
Judging whether different face images exhibit positional continuity from both the temporal dimension (acquisition time) and the spatial dimension (position coordinates) improves the confidence that live images judged to belong to the same target person actually do, and thus improves the accuracy of the pass-count statistics.
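The spatio-temporal continuity check can be sketched as a filter over time-ordered frames; `max_shift` and the `pos` field are illustrative assumptions, not names from the patent:

```python
# Illustrative sketch: keep frames whose face position moves less than
# max_shift between consecutive captures (frames sorted by acquisition
# time), as a proxy for "same target person moving continuously".
def filter_continuous_frames(frames, max_shift):
    kept = [frames[0]]
    for prev, cur in zip(frames, frames[1:]):
        dx = cur["pos"][0] - prev["pos"][0]
        dy = cur["pos"][1] - prev["pos"][1]
        if (dx * dx + dy * dy) ** 0.5 < max_shift:
            kept.append(cur)
    return kept
```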
And S204, carrying out similarity matching on the first face image and the second face image in the face library.
The face library stores the facial feature information of persons who have appeared in the statistical area and been counted during the statistical time period.
Specifically, after the server obtains the first face image of the target person, it inputs the first face image into a preset feature recognition model. The feature recognition model determines the facial feature points in the first face image from the input image information, recognizes the specific coordinates of those feature points, and then determines the corresponding facial feature information from the coordinates. For example, the model may determine the eye width from the coordinates of the eye corners and use the eye width as facial feature information of the first face. Facial feature information is data reflecting a person's facial characteristics, and can reflect one or more kinds of feature information such as the nose, the mouth, and the distances between facial organs.
After the server determines the facial feature information of the first face, the server respectively carries out similarity matching on the facial feature information of the first face and the facial feature information in the face library, and therefore the similarity between the first face image and the face image in the face library is determined.
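One common choice for the similarity measure (the description does not fix a particular one) is cosine similarity between feature vectors; a sketch of matching a first-face feature vector against the library, under illustrative names:

```python
import math

# Illustrative sketch: rank face-library entries by cosine similarity to
# the first face image's feature vector. Cosine similarity is one common
# metric; the patent does not prescribe a specific measure.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_against_library(query, face_library):
    """Return (person_id, similarity) pairs sorted by descending similarity."""
    scores = [(pid, cosine_similarity(query, feat))
              for pid, feat in face_library.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)
```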
In one embodiment, the server judges whether the face library has facial feature information, when the face library does not have the facial feature information, the server stores the facial feature information of the first face image in the face library, and sets the passing frequency of the target person corresponding to the first face image to be 1.
In one embodiment, the feature recognition model may be trained based on a neural network. A developer of the personnel flow statistical system collects a large number of face images in advance, manually labels face feature points on the collected face images, and then sends the face images labeled with the face feature points to a server, so that the server performs model training according to the face images labeled with the face feature points, and finally obtains a feature recognition model.
The feature recognition model is trained through the neural network, so that the reliability of facial feature information can be effectively improved, and the accuracy of similarity matching is improved.
In one embodiment, the server intercepts a face region in the first face image and inputs the face region into the feature recognition model, so that the feature recognition model determines facial feature information based on the face region.
By cropping to the face region, the image information fed to the feature recognition model is purer, reducing the risk that interference from the background region degrades the accuracy of facial feature recognition.
S206, determining a second face image with the similarity meeting the condition as a target face image.
Specifically, the server acquires a preset series of similarity thresholds with different numerical values, arranges the similarity thresholds in an ascending order, and sequentially stores the similarity thresholds arranged in the ascending order in a threshold list, wherein the similarity threshold with the minimum numerical value is stored in the first bit of the threshold list, and the similarity threshold with the maximum numerical value is stored in the last bit of the threshold list. For convenience of description, the similarity threshold located at the first bit of the threshold list is referred to as a first threshold, the similarity threshold located at the second bit is referred to as a second threshold, and the similarity threshold located at the nth bit is referred to as an nth threshold.
After the server performs similarity matching between the first face image and the second face images in the face library, it extracts the first threshold from the threshold list and screens out, from the second face images, the similar images whose similarity is greater than the first threshold. The server then counts the screened similar images. When more than one remains, the server obtains the second threshold and again screens out the similar images whose similarity exceeds it; when the count is still greater than 1, the server obtains the third threshold and screens again, and so on, until the number of screened similar images is less than or equal to 1.
When exactly one similar image survives the current round of screening, the surviving image and the first face image are considered frames captured of the same person, and the server takes that image as the target face image. When no similar image survives the current round, the server takes, among the similar images that survived the previous round, the frame with the highest similarity as the target face image.
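The ascending-threshold screening procedure described above can be sketched as below, under illustrative names; the fall-back to the best candidate of the previous round follows the two preceding paragraphs:

```python
# Illustrative sketch of the ascending-threshold cascade. similarities
# maps second-face-image ids to similarity scores; thresholds is an
# ascending list. Returns the target image id, or None when nothing
# passes the first threshold (the new-person path applies instead).
def screen_target_image(similarities, thresholds):
    candidates = {k: v for k, v in similarities.items() if v > thresholds[0]}
    if not candidates:
        return None
    for t in thresholds[1:]:
        passed = {k: v for k, v in candidates.items() if v > t}
        if not passed:
            break  # no survivor this round: fall back to previous round's best
        candidates = passed
        if len(candidates) == 1:
            break
    return max(candidates, key=candidates.get)
```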
In one embodiment, a developer of the personnel flow statistical system may collect multiple face images of the same person at different times in advance and transmit them to the server. The server selects one of the face images as a reference image and calculates the similarity between the reference image and each of the remaining face images. It sums the calculated similarities, computes an average similarity from the sum and the number of face images transmitted to the server, and uses the average similarity as the first threshold; the second, third, and subsequent thresholds are then obtained by adding a fixed increment until 100% is reached. For example, if the first threshold is 90% and the fixed increment is 1%, the second threshold is 91% and the third threshold is 92%.
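Building the threshold list from a first threshold and a fixed increment, following the 90%/1% example, can be sketched as:

```python
# Illustrative sketch: ascending similarity thresholds from the first
# threshold up to (but not including) 100%, spaced by a fixed increment.
def build_threshold_list(first_threshold, increment):
    count = round((1.0 - first_threshold) / increment)
    return [round(first_threshold + i * increment, 4) for i in range(count)]
```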
In one embodiment, when there are a plurality of target persons, the server superimposes a plurality of average similarity degrees corresponding to the plurality of target persons, and divides the sum of the superimposed average similarity degrees by the number of the target persons, thereby obtaining the first threshold.
The first threshold value is obtained based on the average similarity, so that the confidence degree of the judgment result which can be judged as the target face image can be improved, and the accuracy of the statistics of the passing times of the people is improved.
And S208, identifying the target person in the target face image, and counting the passing times of the persons according to the person identification corresponding to the target person.
The person identifier is information capable of uniquely identifying one face feature, such as a serial number, a hash value obtained by performing hash operation on the face feature, and the like. The number of passes refers to the number of times that a natural person enters a statistical area within a statistical period of time, such as the number of times that the natural person enters a campus through a campus entrance within the statistical period of time.
Specifically, the same person may appear in the statistical area multiple times in different statistical sub-periods, so the same person identifier may be counted multiple times. When the server counts a target person who appears in the statistical area for the first time within the statistical time period, it creates a person identifier for the target person, sets the pass count corresponding to the newly created identifier to 1, and stores the identifier and pass count correspondingly in the statistical table. The statistical table is a storage space that stores person identifiers and their corresponding pass counts.
Once the server has determined the target face image, it can conclude that the corresponding target person is not appearing in the statistical area for the first time. The server then determines the person identifier of the target person according to a preset identifier generation rule and checks whether the same identifier already exists in the statistical table; if it does, the server retrieves that identifier's pass count and increments it by 1.
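The pass-count update against the statistical table can be sketched as a dictionary increment (illustrative names):

```python
# Illustrative sketch: statistical_table maps person identifiers to pass
# counts. A known identifier is incremented; an unseen one starts at 1.
def count_pass(statistical_table, person_id):
    statistical_table[person_id] = statistical_table.get(person_id, 0) + 1
    return statistical_table[person_id]
```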
And S210, determining the personnel flow of the statistical area in the statistical time period according to the personnel passing times corresponding to different personnel identifications.
Specifically, the server obtains the end time of the statistical time period and the end time of the current statistical sub-period, and determines whether the two are equal. When they are equal, the server can conclude that the image collector has by that moment sent the server all live images acquired within the statistical time period; the server then checks whether the statistics of the numbers of passes have been refreshed based on the first face images reported in the current statistical sub-period. If a plurality of statistical sub-periods with predetermined start and end times are defined in advance, the statistical sub-period to which the time segment of the live images currently being sent by the image collector belongs is called the current statistical sub-period.
When the statistics of the passing times of the personnel is completed based on the first face image reported in the current statistics sub-period, the server acquires the statistics table, and determines the personnel flow of the statistics area in the statistics period according to the corresponding relation between the personnel identification and the passing times stored in the statistics table.
In one embodiment, the server may superimpose the number of passes corresponding to each person identifier to obtain the number of persons entering the statistical area within the statistical time period. Wherein a natural person in the statistical region is counted as one person from entry to exit.
In one embodiment, the server acquires the person identifiers whose numbers of passes within the statistical time period are greater than a threshold, and determines the persons corresponding to those identifiers as resident persons. The server obtains the number of distinct person identifiers in the statistical table, divides the number of resident persons by that number to obtain the proportion of resident persons, and determines a corresponding security policy according to this proportion.
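The two computations above — superimposing pass counts into a total flow, and deriving the resident-person proportion — can be sketched as follows (hypothetical names; the statistical table is again modelled as a dictionary mapping person identifiers to pass counts):

```python
def people_flow(stat_table: dict) -> int:
    # Superimpose the pass counts of every identifier.
    return sum(stat_table.values())

def resident_ratio(stat_table: dict, threshold: int) -> float:
    # Residents are identifiers whose pass count exceeds the threshold;
    # the ratio divides their number by the number of distinct identifiers.
    if not stat_table:
        return 0.0
    residents = sum(1 for n in stat_table.values() if n > threshold)
    return residents / len(stat_table)
```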
In one embodiment, when the server completes the people flow statistics of the statistical area for the current time period, the server backs up the statistical table to the cloud for subsequent processing.
In the above people flow statistical method, the collected first face image is matched for similarity against the second face images in the face library. Through similarity matching, a target face image that satisfies the condition and is most similar to the first face image can be screened out of the face library, so that the target face image and the first face image can be regarded, with a certain degree of confidence, as image frames acquired of the same person. By identifying the person identifier of the target face image, the number of passes can be counted under that identifier, so that when the statistics are complete, the people flow of the statistical area within the statistical time period can be determined from the numbers of passes corresponding to the different person identifiers. Because the target face image is determined from the similarity matching result between the first face image and the second face images, when an image of the target person already exists in the face library, counting the numbers of passes for the different person identifiers simply requires adding 1 to the number of passes of the person identifier corresponding to the target face image.
In one embodiment, acquiring a first face image of a statistical region taken over a statistical period comprises: acquiring more than one frame of live images shot by a statistical area for the same person in a statistical time period; calculating a quality score of the live image; and taking the live image with the highest quality score as the first face image.
The quality score is an average of one or more indexes of the face in the live image: sharpness, face orientation, degree of face occlusion, whether the eyes are closed, and whether the mouth is open. Sharpness refers to the pixel values of the face region in the live image; the larger the values, the clearer the face. Face orientation refers to the deflection angle of the face relative to the capture direction of the image capture device; the smaller the angle, the closer the face in the image is to a frontal view.
Specifically, the image capturing device may capture live images of more than one frame for the same person at a preset capturing frequency, and transmit the captured live images to the server. When the server obtains more than one live image shot for the same person in the statistic sub-period, the server carries out quality scoring on each live image according to a preset face quality assessment strategy and takes the live image with the highest quality score as a first face image of a target person.
In another embodiment, a developer of the staff flow rate statistical system may previously perform weight setting on a plurality of indexes based on the importance degree of each face quality assessment item, so that the server may assess the quality of the live image based on the set weight.
In another embodiment, when the server obtains more than one frame of live images shot of the same person within the statistical sub-period, the server checks, for each frame, whether the sharpness exceeds a sharpness threshold, whether the proportion of the occluded face area exceeds an area threshold, and whether the deflection angle of the face relative to the capture direction of the image capture device exceeds a deflection-angle threshold. When the sharpness is below the sharpness threshold, or the occluded proportion exceeds the area threshold, or the deflection angle exceeds the deflection-angle threshold, the server deletes the corresponding live image and performs quality scoring only on the remaining images. By deleting unqualified live images in advance, the number of images to be scored is reduced, which saves the server resources consumed by quality scoring and improves scoring efficiency.
In the embodiment, the live images with the highest quality scores are used as the first face images by performing quality scoring on the live images, so that the influence of the low-quality live images on the personnel flow statistical result can be reduced.
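The frame-selection step above can be sketched as follows. This is a minimal illustration under stated assumptions: each live image is represented as a dictionary of per-index scores already normalised to [0, 1], and the index names (`sharpness`, `frontalness`, etc.) and function names are hypothetical. The optional `weights` argument reflects the weighted-index embodiment described earlier.

```python
QUALITY_KEYS = ["sharpness", "frontalness", "occlusion_free", "eyes_open", "mouth_closed"]

def quality_score(frame: dict, weights=None) -> float:
    # Average (or, per the weighted embodiment, weighted average) of the
    # per-index scores, each assumed normalised to [0, 1].
    if weights is None:
        weights = {k: 1.0 for k in QUALITY_KEYS}
    total = sum(weights[k] for k in QUALITY_KEYS)
    return sum(frame[k] * weights[k] for k in QUALITY_KEYS) / total

def best_frame(frames: list) -> dict:
    # The live image with the highest quality score becomes the first face image.
    return max(frames, key=quality_score)
```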
In one embodiment, acquiring a first face image of a statistical region taken over a statistical period comprises: acquiring more than one frame of live images shot by a statistical area for the same person in a statistical time period; acquiring facial feature information of each frame of live image; calculating average facial feature information of more than one frame of live images according to the facial feature information of each frame of live image, and determining the average facial feature information as the facial feature information of the first facial image; the similarity matching of the first face image and the second face image in the face library comprises the following steps: and performing similarity matching on the facial feature information of the first face image and the facial feature information in the face library.
Specifically, when the server receives the live images acquired within a statistical sub-period and uploaded by the image capture device, the server classifies the live images according to the person feature information they contain, obtaining the multiple frames shot of the same person within that sub-period. The server inputs the live images into a preset feature recognition model to obtain the facial feature information of each frame. The server then sums the facial feature information at corresponding facial feature points and divides by the number of live images to obtain the average facial feature information. For example, when the facial feature information is the width between the upper and lower lips, the server sums the lip widths across the live images and divides by the number of images to obtain the target person's average lip width.
The server takes the average facial feature information as the facial feature information of the first face image, and compares it for similarity against the facial feature information of the second face images to obtain a comparison result.
In this embodiment, the average facial feature information is used as the facial feature information of the first face image, so that the facial features of the first face image are more accurate, and the influence of the facial feature information with deviation on the matching result can be reduced.
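The averaging step can be sketched as an element-wise mean over per-frame feature vectors. A minimal illustration with a hypothetical function name; feature vectors are modelled as plain lists of numbers (a single-element vector corresponds to the lip-width example in the text).

```python
def average_features(feature_vectors: list) -> list:
    # Element-wise mean across the per-frame feature vectors; the result is
    # used as the facial feature information of the first face image.
    n = len(feature_vectors)
    dim = len(feature_vectors[0])
    return [sum(vec[i] for vec in feature_vectors) / n for i in range(dim)]
```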
In one embodiment, similarity matching the first face image with a second face image in a face library comprises: acquiring facial feature information of a first face image; clustering the facial feature information of the first face image with the facial feature information in the face database to obtain a cluster of the same type as the first face image; and performing similarity matching on the facial feature information of the first face image and the facial feature information in the cluster.
The cluster is a set of similar data, for example, the cluster may be facial feature information of similar faces.
Specifically, the server inputs the first face image into the feature recognition model to obtain its facial feature information, and then clusters this information together with the facial feature information in the face library. The clustering algorithm may be K-means, Chinese Whispers, or another algorithm; no limitation is imposed here. Taking Chinese Whispers as an example: the server treats the facial feature information of each face as a node, initially assigns each node its own category, and computes the similarity between different nodes. When the similarity between two nodes exceeds a preset threshold, the server connects them with an edge whose weight is that similarity. The server then picks a node i at random, selects the incident edge with the largest weight to obtain the connected node j, and assigns node i to the category of j. The server traverses all nodes in this way until the category assignments of all nodes converge.
The server determines a cluster where the first face image is located, judges the number of facial feature information stored in the cluster, and when the cluster has facial feature information of at least two persons, the server performs similarity calculation on the facial feature information of the first face image and facial feature information of the cluster except the facial feature information of the first face image.
In this embodiment, the face images similar to the first face image are first screened from the face library, and then the similarity calculation is performed on the first face image and the screened similar images, so that the number of images for similarity calculation can be reduced, and the efficiency of similarity comparison is improved.
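The Chinese Whispers procedure named in the text can be sketched as follows. This is a minimal, generic sketch rather than any particular library's implementation: the similarity function, threshold, and iteration count are caller-supplied assumptions, and the voting rule (adopt the neighbouring label with the greatest total edge weight) is the standard form of the algorithm.

```python
import random

def chinese_whispers(features, similarity, threshold, iterations=10, seed=0):
    # Build a graph: connect nodes whose pairwise similarity exceeds the
    # threshold, with the similarity as the edge weight.
    n = len(features)
    edges = {i: {} for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            w = similarity(features[i], features[j])
            if w > threshold:
                edges[i][j] = w
                edges[j][i] = w
    # Each node starts in its own category; on each pass, a node adopts the
    # neighbouring category with the greatest total edge weight.
    labels = list(range(n))
    rng = random.Random(seed)
    order = list(range(n))
    for _ in range(iterations):
        rng.shuffle(order)
        for i in order:
            if not edges[i]:
                continue  # isolated node keeps its own category
            votes = {}
            for j, w in edges[i].items():
                votes[labels[j]] = votes.get(labels[j], 0.0) + w
            labels[i] = max(votes, key=votes.get)
    return labels
```

With one-dimensional "features" and a similarity of `1 / (1 + |a - b|)`, the points `[0.0, 0.1, 5.0, 5.1]` split into two clusters, so only the near half of the "library" would need an exact similarity comparison.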
In one embodiment, determining a second face image with similarity satisfying the condition as the target face image comprises: screening out similar images with the similarity larger than a threshold value from the second face image; and when the image has two or more frames of similar images, determining the second face image with the maximum similarity as the target face image.
Specifically, after matching the similarity between the first face image and the second face images in the face library, the server screens out from the second face images the similar images whose similarity exceeds the threshold and counts them. When the count is greater than 1, the server determines the second face image with the greatest similarity as the target face image; when the count equals 1, the server takes the single screened image as the target face image; when the count is 0, the facial feature information of the person in the first face image can be considered absent from the face library, and the server adds the first face image and its facial feature information to the library. The threshold can be set freely as required, for example 90%. If it is set too high, the probability of wrongly rejecting the true target face image increases; if it is set too low, the screened face image and the first face image may not be image frames acquired of the same person, so the threshold needs to be set reasonably.
In this embodiment, when there are two or more frames of similar images, determining the second face image with the greatest similarity as the target face image increases the confidence that the target face image and the first face image are face images of the same person, improving the accuracy of the people flow statistics.
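The screening-and-selection rule can be sketched as follows. A minimal illustration with hypothetical names: candidates are modelled as `(image_id, similarity)` pairs, and `None` stands for the no-match case handled in the next embodiment.

```python
def pick_target(candidates, threshold):
    # Keep only similar images whose similarity exceeds the threshold.
    similar = [c for c in candidates if c[1] > threshold]
    if not similar:
        return None  # first appearance: caller adds the face to the library
    # Two or more candidates: the one with the greatest similarity wins;
    # exactly one candidate: max() returns it unchanged.
    return max(similar, key=lambda c: c[1])[0]
```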
In one embodiment, the people flow statistical method further includes: when the second face image does not have a similar image with the similarity larger than the threshold value, adding the first face image to a face library, and identifying a target person in the first face image; creating a corresponding personnel identifier according to the facial feature information corresponding to the target personnel; and counting the passing times of the personnel according to the created personnel identification.
Specifically, when no similar image with the similarity larger than the threshold exists in the face library, the server adds the first face image and the facial feature information of the first face image to the face library, and generates a corresponding person identifier according to the facial feature information of the first face image. For example, the hash operation is performed on the facial feature information to obtain the person identifier of the target person corresponding to the first face image. Then, the server sets the passing times of the target person corresponding to the first face image to be 1, and stores the newly created person identification and the passing times in the statistical table correspondingly.
In another embodiment, after the server acquires the facial feature information of the first face, the server may search the internet for the identity information associated with the facial feature information, such as occupation, native place, and the like, and store the identity information, the person identification, and the number of passes, so that the subsequent management personnel in the statistical area can select the corresponding management policy according to the identity information of the target person and the number of passes.
In this embodiment, when the facial feature information of the first face image does not exist in the face library, the facial feature information of the first face appearing for the first time is added to the face library, so that the passing times can be correspondingly increased on the basis of the current passing times when the face image of the target person in the first face image is received again in the statistical time period.
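The first-appearance path can be sketched in one step. A minimal illustration, not the patent's implementation: `enroll` is a hypothetical name, the library and statistical table are plain dictionaries, and the hash-derived identifier is one of the options named in the text.

```python
import hashlib

def enroll(face_library: dict, stat_table: dict, feature: bytes) -> str:
    # No similar image found: add the feature to the face library, generate a
    # person identifier from the feature, and initialise the pass count at 1.
    pid = hashlib.sha256(feature).hexdigest()[:16]
    face_library[pid] = feature
    stat_table[pid] = 1
    return pid
```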
In one embodiment, determining a second face image with similarity meeting the filtering condition as the target face image comprises: screening out similar images with the similarity larger than a threshold value from the second face image; constructing a corresponding target image confirmation task based on the first face image and the similar image; sending the target image confirmation task to a terminal; the target image confirmation task is used for determining whether the similar images have target face images according to the selection operation of the terminal user; and receiving a confirmation result based on a target image confirmation task returned by the terminal, and screening out a target face image from the similar images according to the confirmation result.
Specifically, when the server has compared the facial feature information of the first face image with that of the second face images for similarity and screened out the similar images whose similarity exceeds the threshold, the server generates a target image confirmation task from the screened similar images and the first face image and sends it to the terminal. The terminal extracts the similar images and the first face image from the task and displays the extracted image frames on a display interface as shown in fig. 3. Fig. 3 is a schematic diagram of a terminal display interface in one embodiment.
A terminal user, such as a manager of the statistical area, may select on the display interface the face image most similar to the first face image, so that the terminal can set a specific tag on the corresponding face image according to the user's selection operation, for example a "target image" tag on the face image the user selected. The terminal then packs the face image and the tag and returns them to the server.
The server receives the face image returned by the terminal, screens out an image frame with a specific label from the returned face image, and determines the image frame with the specific label as a target face image.
In another embodiment, the terminal may correspondingly display the similarity of the similar images on the display interface, so that the terminal user may be assisted in determining the target face image.
In another embodiment, the people flow statistical method further includes: when the confirmation result shows that the similar image does not have the target image, adding the first face image to a face library, and identifying the target person in the first face image; creating a corresponding personnel identifier according to the facial feature information corresponding to the target personnel; and counting the passing times of the personnel according to the created personnel identification.
Specifically, when the face images returned by the terminal contain no image frame carrying the specific tag, the similar images are considered to contain no target face image. The server then adds the first face image and its facial feature information to the face library, generates a corresponding person identifier from that facial feature information, sets the number of passes of the target person corresponding to the first face image to 1, and stores the newly created person identifier and number of passes in the statistical table.
When the facial feature information of the first face image does not exist in the face library, the facial feature information of the first face appearing for the first time is added to the face library, so that the passing times can be correspondingly increased on the basis of the current passing times when the face image of the target person in the first face image is received again in the counting time period.
In this embodiment, because the similarity algorithm has only limited accuracy, when similar faces exist in the face library the confidence in the target face image can be improved by determining it through manual labelling, so that the numbers of passes of the corresponding persons are counted only once the selected image frame is confirmed to be the true target face image.
It should be understood that, although the steps in the flowchart of fig. 2 are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly ordered and may be performed in other orders. Moreover, at least some of the steps in fig. 2 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
For ease of understanding, fig. 4 provides a schematic flowchart of another people flow statistical method. As shown in fig. 4, after the server acquires a first face image, it compares the first face image for similarity against the second face images in the face library. When no similar image with similarity greater than the threshold exists in the face library, the server adds the first face image to the face library and determines it as the target face image. When such similar images do exist, the server counts them: if there is exactly one, the server determines that single similar image as the target face image; otherwise the server determines the similar image with the greatest similarity as the target face image. The server then identifies the target person in the target face image and counts the number of passes under that person's identifier. The server checks whether the end time of the statistical time period has been reached: if so, it determines the people flow from the numbers of passes corresponding to the different person identifiers; if not, it continues to receive first face images.
In one embodiment, as shown in fig. 5, there is provided a people flow statistics apparatus 500, comprising: a first facial image determination module 502, a target facial image determination module 504, and a flow statistics module 506, wherein:
a first face image determining module 502, configured to obtain a first face image captured in a statistical time period in a statistical region.
A target face image determining module 504, configured to perform similarity matching between the first face image and a second face image in a face library; and determining a second face image with the similarity meeting the condition as a target face image.
The personnel flow counting module 506 is used for identifying target personnel in the target face image and counting the personnel passing times according to the personnel identification corresponding to the target personnel; and determining the personnel flow of the statistical region in the statistical time period according to the personnel passing times corresponding to different personnel identifications.
In one embodiment, as shown in fig. 6, the first face image determination module 502 further comprises a quality scoring module 5021 for obtaining more than one live image of a statistical region taken for the same person within a statistical period; calculating a quality score of the live image; and taking the live image with the highest quality score as the first face image.
In one embodiment, the first facial image determining module 502 further includes an average feature information determining module 5022, configured to obtain more than one live image of a statistical region taken for the same person within a statistical time period; acquiring facial feature information of each frame of live image; calculating average facial feature information of more than one frame of live images according to the facial feature information of each frame of live image, and determining the average facial feature information as the facial feature information of the first facial image; the similarity matching of the first face image and the second face image in the face library comprises the following steps: and performing similarity matching on the facial feature information of the first face image and the facial feature information in the face library.
In one embodiment, the target face image determination module 504 is further configured to obtain facial feature information of the first face image; clustering the facial feature information of the first face image with the facial feature information in the face database to obtain a cluster of the same type as the first face image; and performing similarity matching on the facial feature information of the first face image and the facial feature information in the cluster.
In one embodiment, the target face image determining module 504 further includes a maximum similarity obtaining module 5041, configured to screen out a similar image with a similarity greater than a threshold from the second face image; and when the image has two or more frames of similar images, determining the second face image with the maximum similarity as the target face image.
In one embodiment, the maximum similarity obtaining module 5041 is further configured to, when there is no similar image with similarity greater than the threshold in the second face image, add the first face image to the face library, and identify the target person in the first face image; creating a corresponding personnel identifier according to the facial feature information corresponding to the target personnel; and counting the passing times of the personnel according to the created personnel identification.
In one embodiment, the target facial image determination module 504 further includes a task construction module 5042, configured to screen out a similar image with similarity greater than a threshold from the second facial image; constructing a corresponding target image confirmation task based on the first face image and the similar image; sending the target image confirmation task to a terminal; the target image confirmation task is used for determining whether the similar images have target face images according to the selection operation of the terminal user; and receiving a confirmation result based on the target image confirmation task returned by the terminal, and screening the target face image from the similar image according to the confirmation result.
In one embodiment, the task construction module 5042 is further configured to, when the confirmation result indicates that the target image is not present in the similar images, add the first face image to the face library and identify the target person in the first face image; create a corresponding person identifier according to the facial feature information corresponding to the target person; and count the person's number of passes according to the created person identifier.
For specific limitations of the people flow statistics apparatus, reference may be made to the limitations of the people flow statistical method above, which are not repeated here. All or some of the modules in the people flow statistics apparatus may be implemented by software, hardware, or a combination of the two. The modules may be embedded in hardware form in, or independent of, a processor in the computer device, or may be stored in software form in a memory of the computer device, so that the processor can invoke them and perform the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 7. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing the personnel passing times data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a people flow statistics method.
Those skilled in the art will appreciate that the architecture shown in fig. 7 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply; a particular computing device may include more or fewer components than those shown, may combine certain components, or may have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
acquiring a first face image shot in a statistical time period in a statistical area;
carrying out similarity matching on the first face image and a second face image in a face library;
determining a second face image with the similarity meeting the condition as a target face image;
identifying target personnel in the target face image, and counting the personnel passing times according to the personnel identification corresponding to the target personnel;
and determining the personnel flow of the statistical region in the statistical time period according to the personnel passing times corresponding to different personnel identifications.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring more than one frame of live images shot by a statistical area for the same person in a statistical time period;
calculating a quality score of the live image;
and taking the live image with the highest quality score as the first face image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring more than one frame of live images shot by a statistical area for the same person in a statistical time period;
acquiring facial feature information of each frame of live image;
calculating average facial feature information of more than one frame of live images according to the facial feature information of each frame of live image, and determining the average facial feature information as the facial feature information of the first facial image;
the similarity matching of the first face image and the second face image in the face library comprises the following steps:
and performing similarity matching on the facial feature information of the first face image and the facial feature information in the face library.
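The averaging step can be sketched as an element-wise mean over the per-frame feature vectors (the name `average_face_features` is hypothetical; the patent does not name the feature extractor):

```python
def average_face_features(per_frame_features):
    """Element-wise mean of per-frame facial feature vectors; the result
    serves as the facial feature information of the first face image."""
    n = len(per_frame_features)
    dim = len(per_frame_features[0])
    return [sum(f[i] for f in per_frame_features) / n for i in range(dim)]
```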
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring facial feature information of a first face image;
clustering the facial feature information of the first face image with the facial feature information in the face library to obtain a cluster of the same type as the first face image;
and performing similarity matching on the facial feature information of the first face image and the facial feature information in the cluster.
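The text does not pin the clustering step to any algorithm; one plausible reading, sketched here with hypothetical nearest-centroid helpers, is to assign the first face image to its same-type cluster and then match only within that cluster, shrinking the comparison set:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def assign_cluster(feature, centroids):
    """Return the index of the nearest cluster centroid."""
    return min(range(len(centroids)), key=lambda i: euclidean(feature, centroids[i]))

def match_within_cluster(feature, clusters, centroids, threshold):
    """clusters[i]: list of (person_id, library_feature) belonging to cluster i.
    Only the same-type cluster is searched instead of the whole face library."""
    idx = assign_cluster(feature, centroids)
    best = None
    for pid, lib_feat in clusters[idx]:
        d = euclidean(feature, lib_feat)
        if best is None or d < best[1]:
            best = (pid, d)
    if best is not None and best[1] < threshold:
        return best[0]
    return None  # no library face in the cluster is close enough
```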
In one embodiment, the processor, when executing the computer program, further performs the steps of:
screening out similar images with the similarity larger than a threshold value from the second face image;
and when two or more frames of similar images exist, determining the second face image with the maximum similarity as the target face image.
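The screen-then-maximum rule can be sketched directly (`select_target_face` is a hypothetical name):

```python
def select_target_face(similarities, threshold):
    """similarities: dict second-image-id -> similarity to the first face image.
    Screen out candidates whose similarity exceeds the threshold; when two or
    more remain, keep the one with the maximum similarity."""
    candidates = {k: v for k, v in similarities.items() if v > threshold}
    if not candidates:
        return None  # no similar image: caller falls back to enrollment
    return max(candidates, key=candidates.get)
```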
In one embodiment, the processor, when executing the computer program, further performs the steps of:
when the second face image does not have a similar image with the similarity larger than the threshold value, adding the first face image to a face library, and identifying a target person in the first face image;
creating a corresponding personnel identifier according to the facial feature information corresponding to the target personnel;
and counting the passing times of the personnel according to the created personnel identification.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
screening out similar images with the similarity larger than a threshold value from the second face image;
constructing a corresponding target image confirmation task based on the first face image and the similar image;
sending the target image confirmation task to a terminal; the target image confirmation task is used for determining whether the similar images have target face images according to the selection operation of the terminal user;
and receiving a confirmation result based on the target image confirmation task returned by the terminal, and screening the target face image from the similar image according to the confirmation result.
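The confirmation-task round trip with the terminal might be modeled as a small data structure; all names here (`TargetImageConfirmationTask`, `apply_confirmation`) are hypothetical, and the actual transport to the terminal is out of scope:

```python
from dataclasses import dataclass

@dataclass
class TargetImageConfirmationTask:
    """Built from the first face image and its above-threshold similar images;
    sent to a terminal so the end user's selection operation determines
    whether the similar images contain the target face image."""
    first_image_id: str
    similar_image_ids: list
    confirmed_id: str = None

    def apply_confirmation(self, selected_id):
        # the terminal user's selection screens out the target face image
        if selected_id not in self.similar_image_ids:
            raise ValueError("selection is not among the similar images")
        self.confirmed_id = selected_id
        return self.confirmed_id
```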
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a first face image shot in a statistical time period in a statistical area;
carrying out similarity matching on the first face image and a second face image in a face library;
determining a second face image with the similarity meeting the condition as a target face image;
identifying target personnel in the target face image, and counting the personnel passing times according to the personnel identification corresponding to the target personnel;
and determining the personnel flow of the statistical region in the statistical time period according to the personnel passing times corresponding to different personnel identifications.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring more than one frame of live images shot of the same person in the statistical area during the statistical time period;
calculating a quality score of the live image;
and taking the live image with the highest quality score as the first face image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring more than one frame of live images shot of the same person in the statistical area during the statistical time period;
acquiring facial feature information of each frame of live image;
calculating average facial feature information of the more than one frame of live images according to the facial feature information of each frame of live image, and determining the average facial feature information as the facial feature information of the first face image;
the similarity matching of the first face image and the second face image in the face library comprises the following steps:
and performing similarity matching on the facial feature information of the first face image and the facial feature information in the face library.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring facial feature information of a first face image;
clustering the facial feature information of the first face image with the facial feature information in the face library to obtain a cluster of the same type as the first face image;
and performing similarity matching on the facial feature information of the first face image and the facial feature information in the cluster.
In one embodiment, the computer program when executed by the processor further performs the steps of:
screening out similar images with the similarity larger than a threshold value from the second face image;
and when two or more frames of similar images exist, determining the second face image with the maximum similarity as the target face image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
when a similar image with the similarity larger than a threshold value does not exist in the second face image, adding the first face image to a face library, and identifying a target person in the first face image;
creating a corresponding personnel identifier according to the facial feature information corresponding to the target personnel;
and counting the passing times of the personnel according to the created personnel identification.
In one embodiment, the computer program when executed by the processor further performs the steps of:
screening out similar images with the similarity larger than a threshold value from the second face image;
constructing a corresponding target image confirmation task based on the first face image and the similar image;
sending the target image confirmation task to a terminal; the target image confirmation task is used for determining whether the similar images have target face images according to the selection operation of the terminal user;
and receiving a confirmation result based on the target image confirmation task returned by the terminal, and screening the target face image from the similar image according to the confirmation result.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (10)
1. A people flow statistics method, the method comprising:
acquiring a first face image shot in a statistical time period in a statistical area;
carrying out similarity matching on the first face image and a second face image in a face library;
determining a second face image with the similarity meeting the condition as a target face image;
identifying a target person in the target face image, and counting the passing times of the persons according to a person identifier corresponding to the target person;
and determining the personnel flow of the statistical region in the statistical time period according to the personnel passing times corresponding to different personnel identifications.
2. The method of claim 1, wherein obtaining the first face image of the statistical region taken over a statistical period comprises:
acquiring more than one frame of live images shot of the same person in the statistical area during the statistical time period;
calculating a quality score for the live image;
and taking the live image with the highest quality score as the first face image.
3. The method of claim 1, wherein obtaining the first face image of the statistical region taken over a statistical period comprises:
acquiring more than one frame of live images shot of the same person in the statistical area during the statistical time period;
acquiring facial feature information of each frame of live image;
calculating average facial feature information of the more than one frame of live images according to the facial feature information of each frame of live image, and determining the average facial feature information as the facial feature information of the first face image;
the similarity matching of the first face image and the second face image in the face library comprises the following steps:
and performing similarity matching on the facial feature information of the first face image and the facial feature information in a face library.
4. The method of claim 1, wherein the similarity matching the first facial image with a second facial image in a face library comprises:
acquiring facial feature information of a first face image;
clustering the facial feature information of the first face image with facial feature information in the face library to obtain a cluster of the same type as the first face image;
and performing similarity matching on the facial feature information of the first face image and the facial feature information in the cluster.
5. The method according to claim 1, wherein the determining a second face image with a similarity satisfying a condition as the target face image comprises:
screening out similar images with similarity greater than a threshold value from the second face image;
and when two or more frames of similar images exist, determining the second face image with the maximum similarity as the target face image.
6. The method of claim 5, further comprising:
when no similar image with the similarity larger than a threshold exists in the second face image, adding the first face image to the face library, and identifying a target person in the first face image;
creating a corresponding personnel identifier according to the facial feature information corresponding to the target personnel;
and counting the passing times of the personnel according to the created personnel identification.
7. The method according to claim 1, wherein the determining a second face image with the similarity meeting the screening condition as the target face image comprises:
screening out similar images with similarity greater than a threshold value from the second face image;
constructing a corresponding target image confirmation task based on the first face image and the similar image;
sending the target image confirmation task to a terminal; the target image confirmation task is used for determining whether the similar images have target face images according to the selection operation of a terminal user;
and receiving a confirmation result which is returned by the terminal and is based on the target image confirmation task, and screening out the target face image from the similar image according to the confirmation result.
8. A people flow statistics device, characterized in that the device comprises:
the first face image determining module is used for acquiring a first face image shot in a statistical time period in a statistical area;
the target face image determining module is used for carrying out similarity matching on the first face image and a second face image in a face library; determining a second face image with the similarity meeting the condition as a target face image;
the personnel flow counting module is used for identifying target personnel in the target face image and counting the personnel passing times according to the personnel identification corresponding to the target personnel; and determining the personnel flow of the statistical region in the statistical time period according to the personnel passing times corresponding to different personnel identifications.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 7 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911173178.5A CN111191506A (en) | 2019-11-26 | 2019-11-26 | Personnel flow statistical method and device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111191506A true CN111191506A (en) | 2020-05-22 |
Family
ID=70710958
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911173178.5A Pending CN111191506A (en) | 2019-11-26 | 2019-11-26 | Personnel flow statistical method and device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111191506A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106570465A (en) * | 2016-10-31 | 2017-04-19 | 深圳云天励飞技术有限公司 | Visitor flow rate statistical method and device based on image recognition |
CN110263703A (en) * | 2019-06-18 | 2019-09-20 | 腾讯科技(深圳)有限公司 | Personnel's flow statistical method, device and computer equipment |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112597854A (en) * | 2020-12-15 | 2021-04-02 | 重庆电子工程职业学院 | Non-matching type face recognition system and method |
CN112597880A (en) * | 2020-12-21 | 2021-04-02 | 杭州海康威视系统技术有限公司 | Passenger flow batch identification method and device, computer equipment and readable storage medium |
CN112597880B (en) * | 2020-12-21 | 2024-03-08 | 杭州海康威视系统技术有限公司 | Passenger flow batch identification method and device, computer equipment and readable storage medium |
CN112835954A (en) * | 2021-01-26 | 2021-05-25 | 浙江大华技术股份有限公司 | Method, device and equipment for determining target service object |
CN112835954B (en) * | 2021-01-26 | 2023-05-23 | 浙江大华技术股份有限公司 | Method, device and equipment for determining target service object |
CN113887541A (en) * | 2021-12-06 | 2022-01-04 | 北京惠朗时代科技有限公司 | Multi-region employee number detection method applied to company management |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111191506A (en) | Personnel flow statistical method and device, computer equipment and storage medium | |
CN110334569B (en) | Passenger flow volume in-out identification method, device, equipment and storage medium | |
CN109644255B (en) | Method and apparatus for annotating a video stream comprising a set of frames | |
CN105631430A (en) | Matching method and apparatus for face image | |
CN111160275B (en) | Pedestrian re-recognition model training method, device, computer equipment and storage medium | |
US20210382933A1 (en) | Method and device for archive application, and storage medium | |
CN111626123A (en) | Video data processing method and device, computer equipment and storage medium | |
CN114491148B (en) | Target person searching method and device, computer equipment and storage medium | |
CN110750670B (en) | Stranger monitoring method, device and system and storage medium | |
CN111161206A (en) | Image capturing method, monitoring camera and monitoring system | |
CN110427265A (en) | Method, apparatus, computer equipment and the storage medium of recognition of face | |
CN110175553B (en) | Method and device for establishing feature library based on gait recognition and face recognition | |
CN111429476B (en) | Method and device for determining action track of target person | |
CN110691202A (en) | Video editing method, device and computer storage medium | |
CN109961031A (en) | Face fusion identifies identification, target person information display method, early warning supervision method and system | |
JP2021520015A (en) | Image processing methods, devices, terminal equipment, servers and systems | |
CN111241928A (en) | Face recognition base optimization method, system, equipment and readable storage medium | |
CN110334568B (en) | Track generation and monitoring method, device, equipment and storage medium | |
CN110968719B (en) | Face clustering method and device | |
WO2022134916A1 (en) | Identity feature generation method and device, and storage medium | |
CN118038341A (en) | Multi-target tracking method, device, computer equipment and storage medium | |
CN111950507B (en) | Data processing and model training method, device, equipment and medium | |
CN109359689A (en) | A kind of data identification method and device | |
CN113469135A (en) | Method and device for determining object identity information, storage medium and electronic device | |
CN111310602A (en) | System and method for analyzing attention of exhibit based on emotion recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||

Application publication date: 20200522