WO2020173314A1 - Personnel statistical method and device and electronic device - Google Patents
- Publication number
- WO2020173314A1 (PCT/CN2020/075285)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- person
- face model
- face
- similarity
- model
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
Definitions
- This application relates to the field of machine vision technology, and in particular to a method, a device, and an electronic device for counting people.
Background Art
- In a monitoring scene, a user may need to count the people who appear in order to extract useful information. For example, the user may want to count the daily flow of people in a shopping mall over a week, as a reference for mall management.
- To this end, a monitoring device may be set up to shoot the monitoring scene and obtain a monitoring picture. Face recognition is performed on the monitoring picture, and each time a face image is recognized, the count of people present is increased by one. For example, if 1,000 face images are recognized from the monitoring pictures between the mall's opening and closing in one day, it can be determined that 1,000 people appeared in the mall that day.
- The purpose of the embodiments of the present application is to provide a method for counting people, so as to reduce the inaccuracy of statistical results caused by counting the same person repeatedly.
- the specific technical solutions are as follows:
- In a first aspect, a method for counting people includes: when a face image is recognized from a monitoring picture, matching the face image against the face models saved in a first person database, where the first person database stores person identifiers in correspondence with face models; and, if the face image matches a face model in the first person database, updating the appearance record of the person indicated by a first person identifier, where the first person identifier is the person identifier corresponding, in the first person database, to the face model that matches the face image.
- In a second aspect, a person counting device includes: a face matching module, configured to match, when a face image is recognized from a monitoring picture, the face image against the face models saved in a first person database, where the first person database stores person identifiers in correspondence with face models; and a record updating module, configured to update, if the face image matches a face model in the first person database, the appearance record of the person indicated by a first person identifier, where the first person identifier is the person identifier corresponding, in the first person database, to the face model that matches the face image.
- In a third aspect, an electronic device is provided, including a processor and a memory, where the processor is configured to implement the steps of the person counting method described in any one of the first aspect when executing the program stored in the memory.
- In a fourth aspect, a computer-readable storage medium is provided, in which a computer program is stored; when executed by a processor, the computer program implements the steps of the person counting method described in any one of the first aspect.
- The person counting method, device, and electronic device provided by the embodiments of the present application can match recognized face images to distinguish the persons to which they correspond, and then count the appearance records of each person separately. This effectively reduces the possibility of counting a recurring person as several different persons, and improves the accuracy of the statistical results.
- Implementing any product or method of the present application does not necessarily achieve all of the advantages described above at the same time.
- FIG. 1 is a schematic flowchart of a method for counting people provided by an embodiment of this application;
- FIG. 2 is another schematic flowchart of the method for counting people provided by an embodiment of this application;
- FIG. 3 is another schematic flowchart of the method for counting people provided by an embodiment of this application;
- FIG. 4 is another schematic flowchart of the method for counting people provided by an embodiment of this application;
- FIG. 5 is a schematic structural diagram of the people counting device provided by an embodiment of this application;
- FIG. 6 is a schematic structural diagram of the electronic device provided by an embodiment of this application.
- Fig. 1 is a schematic flowchart of a method for counting people provided in an embodiment of the application.
- a method for counting people provided in an embodiment of the present application may include:
- the personnel counting method provided in the embodiments of the present application can be applied to electronic devices.
- The electronic device may be a monitoring device with image acquisition and analysis functions, such as a camera. In this case, the monitoring device has an image acquisition function, a face recognition function, and a people counting function. Alternatively, the electronic device may be a device serving as a back-end server of the monitoring device, such as a desktop computer, a notebook computer, a smart phone, a hard disk video recorder, or an intelligent analysis device. In this case, the monitoring device has an image acquisition function and a face recognition function, and the device serving as the back-end server has the people counting function.
- the application scenario of the personnel counting method provided in the embodiments of the present application may be any scenario where there is a demand for personnel counting, for example: shopping malls, conference venues, check-in venues, etc., where there is a demand for people flow statistics.
- the first person database stores a person identifier and a face model correspondingly.
- The face model in the first person database mentioned in this application may be the image itself that contains the face, a feature value extracted from the image that contains the face, or content usable for face matching obtained by processing the image that contains the face with any other image processing method, and so on.
- The feature value, which may also be called feature data, may be image data of facial feature points extracted from an image. The facial feature points may include the eyes, nose, mouth, pupil distance, and so on.
- A monitoring picture may include multiple face images. In this case, each face image may be matched against the face models saved in the first person database; that is, the case where the monitoring picture includes multiple face images is handled in the same way as the case where it includes only one. For convenience of discussion, the following description takes a monitoring picture that includes one face image as an example.
- The face image may be modeled to obtain the face model of the face image. Corresponding to the foregoing description of the face model, the face model of the face image may be: the face image itself, the feature value of the face image, or content usable for face matching obtained by processing the face image with any other image processing method.
- Matching the face image against the face models saved in the first person database may include: calculating the similarity between the face model of the face image and each face model saved in the first person database.
- If the similarity between the face model of the face image and a face model saved in the first person database is higher than a preset similarity threshold, it can be determined that the face image matches that face model; if the similarity is not higher than the preset similarity threshold, it can be determined that the face image does not match that face model.
- If only one face model is saved in the first person database, the face image not matching the face models saved in the first person database means that it does not match that single face model. If multiple face models are saved in the first person database, it means that the face image matches none of them.
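The threshold-based matching just described can be sketched as follows. This is an illustrative sketch only: the function names, the data shapes, and the threshold value are not from the patent, and the similarity function is assumed to be supplied by the caller.

```python
# Hypothetical preset similarity threshold (illustrative value).
SIMILARITY_THRESHOLD = 0.90

def match_face(face_model, person_db, similarity):
    """Return the person ID whose stored face model best matches
    face_model, or None if no stored model exceeds the threshold.

    person_db: {person_id: stored_face_model}
    similarity: callable returning a value in [0, 1]
    """
    best_id, best_sim = None, SIMILARITY_THRESHOLD
    for person_id, stored_model in person_db.items():
        sim = similarity(face_model, stored_model)
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    return best_id
```

A match returns the corresponding person identifier (the "first person identifier"), and `None` corresponds to the no-match case handled later in the text.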
- The first person database may correspond to a predetermined auxiliary database
- each record in the predetermined auxiliary database corresponds to a face model in the first person database
- each record includes at least one auxiliary face model matching the face model corresponding to the record
- the face image is matched with the face model saved in the first person library.
- The person identifiers may take various forms; for example, a serial number may be used as the person identifier, or a string of digits and letters. It is understandable that, because a person identifier is used to indicate a person, different persons have different person identifiers.
- the first person ID is the person ID corresponding to the face model that matches the face image in the first person database.
- In different application scenarios, the content recorded in the appearance record may be different. For example, the appearance record may include the number of a person's appearances; in this case, updating the appearance record may mean increasing the recorded number of appearances, such as the number of times a person has signed in. The appearance record may also include appearance information. The appearance information may include facial feature information obtained from the face image, and may also include one or more of: the timestamp of the monitoring picture, the device identifier of the monitoring device that took the monitoring picture, the monitoring picture itself, and the person information preset for the first person identifier.
- The facial features obtained from the face image can differ according to the application scenario; for example, they can include whether the person wears glasses, whether the person wears a mask, and the age and gender of the person estimated from the face image.
- the time stamp of the monitoring screen can be used to indicate the time when the person appears in the monitoring scene this time.
- The device identifier of the monitoring device that took the monitoring picture can be used to indicate the location where the person appeared this time. It is understandable that in some application scenarios the monitoring scene may be larger than the field of view of a single monitoring device, making it difficult or impossible for one device to shoot the entire scene; multiple monitoring devices can therefore be used to shoot the scene at the same time, reducing or avoiding blind spots. For example, one or more monitoring devices can be installed in each partition on each floor, and a correspondence established between each device identifier and the area that device monitors. Based on the device identifier of the monitoring device whose picture contains the recognized face image, and this correspondence, the area monitored by that device can be determined and used as the location where the person appeared this time.
- The person information preset for the first person identifier may include different content according to the actual application scenario, and may differ between person identifiers. For example, it may be identity information that the user pre-entered for the first person identifier, such as name, ID number, hometown, contact information, and home address.
- In some application scenarios, the user may already have the identity information of certain persons; for example, taking a shopping mall as the monitoring scene, the user may want to know how often regular members come to the mall. The user can then save the face models and person identifiers of these persons in the first person database in correspondence, and enter the identity information of these persons as the person information preset for their person identifiers.
- The more information the appearance information includes, the more useful information the user can extract from the appearance records obtained by the statistics. For example, if the appearance information includes whether a person wears a mask, the user can determine, from the appearance records, the proportion of mask-wearing persons who appeared in the monitoring scene within a day.
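The mask-proportion example can be illustrated as below; the record shape (a dictionary with an optional boolean "mask" field) is an assumption, not something the patent specifies.

```python
def mask_proportion(appearance_records):
    """Fraction of appearance records whose (assumed) 'mask' field is true.

    Records without the field are counted as not wearing a mask.
    """
    if not appearance_records:
        return 0.0
    masked = sum(1 for rec in appearance_records if rec.get("mask"))
    return masked / len(appearance_records)
```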
- In this way, the recognized face images can be matched to distinguish the persons to which they correspond, and the appearance records of each person can be counted separately, effectively reducing the possibility of counting a recurring person as several different persons and improving the accuracy of the statistical results.
- In an optional implementation, the step of matching the face image against the face models saved in the first person database may include steps A1-A2:
- Step A1: calculate the similarity between the face model of the face image and each face model in the first person database, and calculate the similarity between the face model of the face image and each auxiliary face model in the predetermined auxiliary database; where each record in the predetermined auxiliary database corresponds to a face model in the first person database, and each record includes at least one auxiliary face model that matches the face model corresponding to that record.
- The first person database serves as the base database for face recognition, and the predetermined auxiliary database serves as its auxiliary database. The auxiliary face models included in each record of the predetermined auxiliary database have the same data form as the face models in the first person database.
- the number of auxiliary face models included in each record in the predetermined auxiliary library can be set according to actual conditions, for example: 5, 6, 10, and so on.
- Any face model in the first person database is determined from a person's face image, which may be an image containing relatively comprehensive face information, for example a passport photo, or a collected image with a high face quality score.
- Each auxiliary face model included in each record of the predetermined auxiliary database is also determined from a face image; the face image from which an auxiliary face model is derived may be an image similar to the face image from which the corresponding face model in the first person database is derived. In other words, the person to whom a face model in the first person database belongs and the person to whom an auxiliary face model in the corresponding record belongs can be identified as the same person, differing only in posture, degree of occlusion, and so on.
- The predetermined auxiliary database may be a fully pre-built person database, or its face models may be partly pre-built and gradually completed during the person counting process. In the latter case, the pre-built part consists of related models of the persons whose face models were built in advance in the first person database.
- For different forms of face model, different similarity algorithms may be used. When the face model is a feature value, the Euclidean distance between the vector value of the face model of the face image and that of each face model in the first person database can be calculated, as well as the Euclidean distance between the vector value of the face model of the face image and that of each auxiliary face model in the predetermined auxiliary database.
- When the face model is an image, any image similarity algorithm may be used to calculate the similarity between the face model of the face image and each face model in the first person database, and the similarity between the face model of the face image and each auxiliary face model in the predetermined auxiliary database.
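The feature-value case above can be sketched as follows. Converting the Euclidean distance into a similarity score is an assumption on our part (the patent only specifies computing the distance), and the reciprocal form used here is just one common choice.

```python
import math

def euclidean_distance(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def feature_similarity(a, b):
    """One common (assumed) distance-to-similarity mapping:
    identical vectors give 1.0, and similarity falls toward 0
    as the distance grows."""
    return 1.0 / (1.0 + euclidean_distance(a, b))
```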
- Step A2: based on the calculated similarities, determine whether the face image matches a face model in the first person database.
- In an optional implementation, determining whether the face image matches a face model in the first person database may include: finding, among the calculated similarities, a similarity higher than a specified similarity threshold;
- If the found similarity is between a face model in the first person database and the face model of the face image, determining that the face image matches a face model in the first person database, and using that face model in the first person database as the face model matching the face model of the face image;
- If the found similarity is between an auxiliary face model in the predetermined auxiliary database and the face model of the face image, determining that the face image matches a face model in the first person database, and using the face model corresponding to the record to which that auxiliary face model belongs as the face model matching the face model of the face image.
- the specified similarity threshold can be set according to actual conditions, and is not limited here.
- the step of determining whether the face image matches a face model in the first person database based on the calculated similarity may include:
- The to-be-used similarity of any face model in the first person database is a value determined from the first similarity and the second similarity corresponding to that face model. The first similarity corresponding to a face model is the similarity between that face model and the face model of the face image; the second similarity corresponding to a face model is the similarity between an auxiliary face model in that face model's corresponding record and the face model of the face image.
- The face model that matches the face image is the face model whose to-be-used similarity is the greatest and meets a predetermined similarity condition.
- the predetermined similarity condition may be greater than a predetermined similarity threshold, and the predetermined similarity threshold may be set according to the situation, for example: 90%, 92%, 93%, 95%, and so on.
- determining the to-be-used similarity of each face model in the first person database may include:
- the first similarity and the second similarity corresponding to the face model are weighted and averaged to obtain the to-be-used similarity of the face model.
- In this optional implementation, the to-be-used similarity of each face model in the first person database is obtained by a weighted average. Because the face models in the first person database are base-database data with more comprehensive face information, their similarities are more reliable for the matching judgment; therefore, the weight corresponding to a face model in the first person database may be greater than the weight corresponding to an auxiliary face model in the predetermined auxiliary database.
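A minimal sketch of the weighted average described above; the weight values are illustrative, chosen only to satisfy the stated constraint that the base-database weight exceeds the auxiliary-database weight.

```python
def to_be_used_similarity(first_sim, second_sim, w_base=0.6, w_aux=0.4):
    """Weighted average of the first similarity (base database) and the
    second similarity (auxiliary database); w_base > w_aux reflects the
    greater reliability of the base-database similarity."""
    return w_base * first_sim + w_aux * second_sim
```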
- In another optional implementation, determining the to-be-used similarity of each face model in the first person database may include steps B1-B2:
- Step B1: select, from the first person database and from the predetermined auxiliary database respectively, the models whose similarity with the face model of the face image meets a predetermined similarity condition, to obtain the hit data;
- Step B2: for each face model corresponding to the hit data: when the face model itself is included in the hit data, if its corresponding record belongs to the first records, select the maximum of the first similarity and the third similarities corresponding to the face model as its to-be-used similarity; otherwise, use the first similarity corresponding to the face model as its to-be-used similarity. When the face model is not included in the hit data, select the maximum of the third similarities corresponding to the face model as its to-be-used similarity. Here, a third similarity corresponding to a face model is the similarity between the face model of the face image and an auxiliary face model that is in the face model's corresponding record and belongs to the hit data.
- The face models corresponding to the hit data include: the face models included in the hit data, and the face models not included in the hit data whose corresponding records belong to the first records. A first record is a record that includes an auxiliary face model belonging to the hit data.
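The selection of to-be-used similarities in steps B1-B2 can be sketched as follows. The data structures (dictionaries and sets keyed by model ID) are assumptions made for illustration.

```python
def to_be_used_similarities(first_sims, third_sims, hit_models, first_record_models):
    """Step B2 sketch.

    first_sims:  {model_id: first similarity of the base-library model itself}
    third_sims:  {model_id: [third similarities of its auxiliary models in the hit data]}
    hit_models:  base-library model IDs that are themselves in the hit data
    first_record_models: model IDs whose record contributed auxiliary hits
                         (i.e. whose record is a "first record")
    """
    result = {}
    for model_id in hit_models | first_record_models:
        in_hit = model_id in hit_models
        is_first_record = model_id in first_record_models
        if in_hit and is_first_record:
            # Both the model and its auxiliaries hit: take the maximum.
            result[model_id] = max(first_sims[model_id], max(third_sims[model_id]))
        elif in_hit:
            # Only the model itself hit: use its first similarity.
            result[model_id] = first_sims[model_id]
        else:
            # Only auxiliaries hit: use the best third similarity.
            result[model_id] = max(third_sims[model_id])
    return result
```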
- There are multiple specific ways to implement step B1, that is, to select from the first person database and the predetermined auxiliary database the models whose similarity with the face model of the face image meets the predetermined similarity condition and thereby obtain the hit data.
- In one implementation, for each model in the two databases, it is determined whether the similarity between the model and the face model of the face image is greater than a predetermined threshold; if so, the model is determined to be a model whose similarity with the face model of the face image meets the predetermined similarity condition.
- the predetermined threshold can be set according to actual conditions, for example: 85%, 87%, 90%, 92%, 95%, and so on.
- The face information included in the image from which each face model in the first person database is derived is more comprehensive than that included in the images from which the auxiliary face models are derived; that is, the similarities corresponding to the face models in the first person database are more reliable for the matching judgment. Therefore, to further improve matching accuracy, different predetermined thresholds can be set for the two databases based on their characteristics.
- In this case, selecting from the first person database and the predetermined auxiliary database the models whose similarity with the face model of the face image meets the predetermined similarity condition may include: determining a first predetermined threshold set in advance for the first person database, and a second predetermined threshold set in advance for the predetermined auxiliary database, where the first predetermined threshold is less than the second predetermined threshold;
- For each face model in the first person database, determining whether the similarity between that face model and the face model of the face image is greater than the first predetermined threshold; and for each auxiliary face model in the predetermined auxiliary database, determining whether the similarity between that auxiliary face model and the face model of the face image is greater than the second predetermined threshold; if so, determining that the model is a model whose similarity with the face model of the face image meets the predetermined similarity condition.
- the first predetermined threshold may be lower than the second predetermined threshold.
- the first predetermined threshold and the second predetermined threshold may be set according to actual conditions. For example, the first predetermined threshold may be 87%, and the second predetermined threshold may be 90%; or, the first predetermined threshold may be 90%, and the second predetermined threshold may be 94%, and so on.
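The dual-threshold selection of hit data might look like the sketch below; the threshold values are taken from the examples in the text, while the container shapes are assumptions.

```python
T_BASE = 0.87  # first predetermined threshold (base database), example value
T_AUX = 0.90   # second predetermined threshold (auxiliary database), example value

def select_hits(base_sims, aux_sims):
    """base_sims / aux_sims map model IDs to their similarity with the
    query face model; return the IDs that exceed their own database's
    threshold (the base threshold being lower than the auxiliary one)."""
    hits = {mid for mid, s in base_sims.items() if s > T_BASE}
    hits |= {mid for mid, s in aux_sims.items() if s > T_AUX}
    return hits
```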
- The predetermined person attribute of the face image affects how credible the similarities corresponding to the face model of the face image are for the matching judgment, and different attribute values have different degrees of influence. Therefore, to further improve matching accuracy, the following correspondences can be set in advance: for the first person database, a first correspondence between each attribute value of the predetermined person attribute and a predetermined threshold; and for the predetermined auxiliary database, a second correspondence between each attribute value of the predetermined person attribute and a predetermined threshold.
- For example, the predetermined person attribute may include: whether glasses are worn, whether a hat is worn, identity by age group, and so on.
- the predetermined threshold corresponding to the attribute value with a large degree of influence may be greater than the predetermined threshold corresponding to the attribute value with a small degree of influence.
- For example, suppose the predetermined person attribute is whether glasses are worn, so its attribute values are wearing glasses and not wearing glasses; wearing glasses has a large influence on the credibility, and not wearing glasses a small one. The first correspondence may then be: wearing glasses corresponds to a predetermined threshold of 91%, and not wearing glasses to 89%; and the second correspondence may be: wearing glasses corresponds to a predetermined threshold of 93%, and not wearing glasses to 92%.
- As another example, suppose the predetermined person attribute is identity by age group, with attribute values elderly, child, and youth, where the influence on the credibility decreases from child to youth to elderly. The first correspondence may then be: child corresponds to a predetermined threshold of 93%, youth to 90%, and elderly to 88%; and the second correspondence may be: child corresponds to a predetermined threshold of 95%, youth to 93%, and elderly to 90%.
- In this case, determining the first predetermined threshold set in advance for the first person database and the second predetermined threshold set in advance for the predetermined auxiliary database may include: determining the attribute value of the predetermined person attribute of the face image as the target attribute value; searching, in the first correspondence set in advance for the first person database between each attribute value of the predetermined person attribute and a predetermined threshold, for the predetermined threshold corresponding to the target attribute value, to serve as the first predetermined threshold set for the first person database; and likewise searching the second correspondence for the predetermined threshold corresponding to the target attribute value, to serve as the second predetermined threshold set for the predetermined auxiliary database.
- Optionally, a neural network model pre-trained to recognize the attribute values of the predetermined person attribute may be used to recognize the attribute value of the predetermined person attribute of the face image.
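The attribute-value-to-threshold lookup can be illustrated with the glasses example and threshold values given in the text; the table keys are illustrative, and the neural-network attribute classifier is out of scope here.

```python
# First and second correspondences from the glasses example in the text.
FIRST_CORRESPONDENCE = {"glasses": 0.91, "no_glasses": 0.89}   # base database
SECOND_CORRESPONDENCE = {"glasses": 0.93, "no_glasses": 0.92}  # auxiliary database

def thresholds_for(target_attribute_value):
    """Look up, for the recognized attribute value, the first and second
    predetermined thresholds from the two correspondences."""
    return (FIRST_CORRESPONDENCE[target_attribute_value],
            SECOND_CORRESPONDENCE[target_attribute_value])
```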
- In an optional implementation, the method provided in the embodiments of the present application may further include:
- the second record is a record in the predetermined auxiliary library that corresponds to a face model that matches the face model of the face image.
- the method for determining the image quality score of the face image can be any method that can score the image quality, which is not limited in the embodiment of the present application.
- the predetermined scoring threshold can be set according to actual conditions. For example, if the image quality score is a percentile system, then the predetermined scoring threshold can be 92 points, 95 points, 96 points, and so on.
- the face model saved in the first person database may be a pre-input face model of a person who may appear in the monitoring scene.
- the face model saved in the first person database may be the face model of students, faculty and staff of the school.
- When the monitoring scene is a shopping mall, a park, or the like, the flow of people is large and the composition of the personnel is complex, so it is difficult for the user to predict who may appear in the monitoring scene.
- FIG. 2 is a schematic diagram of another flow chart of the method for counting people provided in an embodiment of this application.
- the personnel counting method provided by the embodiment of the present application may include:
- This step is the same as S101, and can refer to the related description in the foregoing S101, which is not repeated here.
- the second person ID is different from the person ID saved in the first person database.
- For example, if the person identifier is a serial number and the person identifiers saved in the first person database are 1-66, the second person identifier may be 67.
- Because the second person identifier is different from every person identifier saved in the first person database, the person indicated by the second person identifier is different from the person indicated by any saved person identifier. It is understandable that, if the face image does not match any face model in the first person database, the person corresponding to the face image can be considered not to be any person indicated by the saved person identifiers; the second person identifier is therefore needed to indicate this person.
- In this way, in application scenarios where it is difficult for the user to predict who may appear in the monitoring scene, a person identifier can be automatically assigned to a person whose face model was not entered in advance, and that person's face model can be saved, so that the appearance records of persons whose face models were not entered in advance can also be counted effectively.
- It should be noted that the first person database may initially contain no face models at all; by saving the second person identifier, person identifiers and face models are gradually added to the first person database.
- In an optional implementation, before saving the second person identifier and the face model of the face image in the first person database in correspondence, the method may further include:
- determining that the face model of the face image is stranger data to be added, and then executing the step of saving the second person identifier and the face model of the face image in the first person database in correspondence;
- The problem of the same stranger's data being added to the first person database multiple times refers to the following: before one piece of stranger data has been written into the first person database, another image containing that stranger's face is analyzed in the person counting process, recorded again as a new stranger, and also written into the first person database. Here, if no matching face model is found in the first person database for the face model of the face image, that face model can be used as stranger data.
- the predetermined cache stores the face models that were determined to be stranger data to be added within the last N seconds.
- the similarity between the face model of the face image and each face model in the predetermined cache is calculated, and it is then judged whether the predetermined cache contains a face model whose similarity is greater than the third predetermined threshold, that is, whether the person to whom the face image belongs is a stranger that has already been identified within the last N seconds; different processing is performed according to the judgment result.
- N can be set according to the writing speed of stranger data in the actual situation.
- for example, N may be 4, 5, or 6.
- the specific value of the third predetermined threshold can be set according to the actual situation.
- for a specific implementation of calculating the similarity between the target data and each third face data item in the predetermined cache, reference may be made to the relevant description of the foregoing embodiments, which is not repeated here.
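The N-second stranger cache described above can be sketched as follows. This is a minimal illustration: the list-based cache, the `cosine_similarity` helper, and the concrete values of `N` and `THIRD_THRESHOLD` are assumptions for the example, not details fixed by this application.

```python
import time

N = 5                  # cache window in seconds (the text suggests values such as 4, 5, 6)
THIRD_THRESHOLD = 0.8  # third predetermined threshold (illustrative value)

_recent_strangers = []  # list of (timestamp, face_model) pairs

def cosine_similarity(a, b):
    """Toy similarity between two face-model vectors (illustrative)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def is_recent_stranger(face_model, now=None):
    """Return True if this face matches a stranger determined in the last N seconds."""
    now = time.time() if now is None else now
    # Drop cache entries older than N seconds.
    _recent_strangers[:] = [(t, m) for t, m in _recent_strangers if now - t <= N]
    for _, cached in _recent_strangers:
        if cosine_similarity(face_model, cached) > THIRD_THRESHOLD:
            return True  # same stranger seen again within N seconds: do not re-add
    # New stranger: remember it so an immediate re-detection is deduplicated.
    _recent_strangers.append((now, face_model))
    return False
```

In this sketch, a second detection of the same face within N seconds returns `True` and is not written again, which is exactly the duplicate-write problem the cache is meant to avoid.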
- a specific implementation of identifying whether the image quality of the face image meets the predetermined high-quality condition may include: determining whether the image quality score of the face image exceeds a predetermined score threshold, and if so, determining that the image quality of the face image meets the predetermined high-quality condition.
- for determining the image quality score of the face image, please refer to the relevant description of the foregoing embodiments, which is not repeated here.
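The quality check just described reduces to a single comparison; a minimal sketch, where the threshold value is purely illustrative (the application leaves it to the actual situation):

```python
PREDETERMINED_SCORE_THRESHOLD = 60  # illustrative value, not fixed by this application

def meets_high_quality_condition(image_quality_score: float) -> bool:
    """A face image meets the predetermined high-quality condition when its
    image quality score exceeds the predetermined score threshold."""
    return image_quality_score > PREDETERMINED_SCORE_THRESHOLD
```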
- the user may only need to count the appearance records of some people in the monitoring scene.
- taking a shopping mall as the monitoring scene as an example,
- the user may be interested in the appearance records of customers in the shopping mall, but the people appearing in the shopping mall also include store employees, and the user may not be interested in the appearance records of store employees.
- an embodiment of the present application provides a method for counting people, which can be referred to FIG. 3, which shows another schematic flow chart of the method for counting people provided by an embodiment of the application.
- the personnel counting method provided by the embodiment of the present application may include:
- the second person database stores face models of persons who do not need to participate in statistics. According to different application scenarios, the people who do not need to participate in statistics can be different.
- the face model in the second person database may be input in advance by the user.
- if the face image matches a face model in the second person database, the person corresponding to the face image can be considered a person who does not need to participate in the statistics, so that person's appearance record is not counted.
- if the face image does not match any face model saved in the second person database, the person corresponding to the face image can be considered a person who needs to participate in the statistics; therefore, the face image can be further matched against the face models saved in the first person database for counting.
- This step is the same as S102, and you can refer to the foregoing description of S102, which will not be repeated here.
- for example, the user may need to count the appearance records of customers in the shopping mall, and can pre-collect the face models of mall staff and save them in the second person database. If a staff member's face image is recognized from the monitoring screen, it matches a face model in the second person database, so no further statistics are performed. When a customer's face image is recognized from the monitoring screen, it does not match any face model in the second person database, so further statistics are performed to obtain the customer's appearance record.
- the first person database can be one person database or multiple person databases. Each first person database may pre-store face models input by the user, or it may pre-store no user-input models and instead gradually accumulate saved face models during the person-counting process, as in the embodiment shown in FIG. 2.
- the surveillance scene is a shopping mall
- the user has installed surveillance equipment in multiple areas of the shopping mall in advance for shooting surveillance pictures.
- the user needs to count the presence records of customers in the mall to better manage the mall.
- two first person databases and one second person database can be set up in advance.
- the two first person databases are a stranger database and a key personnel database.
- the stranger database does not save face models in advance
- the key personnel database stores in advance, in correspondence, the face models, person identifiers, and person information (such as ID number, address, contact information, etc.) of important customers (in some embodiments, it may also include suspicious persons who need to be monitored);
- the second person database is a staff database, in which the face models of the shopping mall's staff are pre-stored.
- the personnel counting method provided by the embodiment of the present application may include:
- S401: When a face image is recognized from a monitoring picture, the face image is matched with the face models saved in the staff database. Since the staff database is the second person database, reference may be made to the relevant description in S301, which is not repeated here.
- S402: If the face image does not match the face models saved in the staff database, match the face image with the face models saved in the key personnel database. If the face image matches a face model in the key personnel database, perform S403; if it does not, perform S404. Since the key personnel database is a first person database, reference may be made to the relevant description in S101, which is not repeated here.
- S405: Correspondingly save the second person identifier and the face model of the face image in the stranger database. This step is the same as S203; reference may be made to the foregoing description of S203, which is not repeated here.
- since no face models are pre-stored in the stranger database, the first time a face image is matched it does not match any face model in the stranger database, so the face model of the face image is correspondingly saved in the stranger database. That is, after a face image first fails to match the face models stored in the stranger database, its face model and person identifier are correspondingly stored in the stranger database.
- using the staff database avoids counting staff in whom the user is not interested.
- using the stranger database, customers whose face models are difficult to obtain in advance can still be counted.
- using the key personnel database, customers in whom the user is more interested can be distinguished from ordinary customers.
- the appearance records obtained by statistics may be as shown in the following table:
- the table can indicate the following:

| Person identifier | Number of appearances | Areas appeared in | Person type | Wears glasses |
|---|---|---|---|---|
| 1 | 5 | Areas 1 and 2 | Stranger | Yes |
| 2 | 5 | Areas 3, 4, and 5 | Key person | No |
- the method is XXX-XXXXX. It is understandable that this table is only one representation form of the appearance records obtained by statistics. In other optional embodiments, the appearance records may include more table items according to actual needs, and the appearance records may also be expressed in forms other than a table; this embodiment does not limit this.
- users can conduct information mining based on actual needs.
- for example, the user may sort by number of appearances to determine the persons who appear most often, treating key persons among them as key development targets and strangers among them as potential development targets.
- the personnel counting method provided in the embodiments of the present application can be applied to any scenario with personnel counting requirements.
- the following uses a sign-in scenario as an example to describe the personnel counting method provided in the embodiment of the present application.
- the personnel counting method provided in the embodiments of the present application may include the following steps C1-C2:
- Step C1 When a face image is recognized from the monitoring screen, the face image is matched with a face model saved in a first person database, and the first person database correspondingly saves a person identification and a face model;
- Step C2: If the face image matches a face model in the first person database, update the check-in status of the first person identifier to the checked-in state, and update the number of check-ins corresponding to the person indicated by the first person identifier;
- the number of check-ins is the number of times the check-in status of the first person identifier has been updated to the checked-in state;
- the first person identifier is the person identifier corresponding to the face model in the first person database that matches the face image.
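Once a match is found, steps C1-C2 reduce to two data updates; a minimal sketch, with plain dictionaries standing in for the first person database's status and count fields (the storage layout is an assumption):

```python
def record_check_in(first_person_id, check_in_status, check_in_counts):
    """Step C2 as data updates: mark the matched person as checked in and
    bump their check-in count, i.e. the number of times the status has
    been updated to the checked-in state."""
    check_in_status[first_person_id] = 'checked_in'
    check_in_counts[first_person_id] = check_in_counts.get(first_person_id, 0) + 1
```

Note that re-checking in an already checked-in person still increments the count, matching the definition of the number of check-ins above.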
- the electronic device to which the method provided in the embodiment of the present application is applied can be communicatively connected with monitoring devices set up at different check-in locations, and the monitoring device has an image collection function.
- in step C1, recognizing the face image from the monitoring picture can be understood as collecting the face image from the scene, where the scene is the sign-in scene.
- then the electronic device can match the face image with the face models stored in the first person database, where the first person database correspondingly stores person identifiers and face models.
- the person identifier in the first person database may include any information such as name, ID number, contact information, and face picture, or a combination of several pieces of such information.
- the check-in status of the first person identifier can be updated to the checked-in state, and then the number of check-ins corresponding to the person indicated by the first person identifier is updated. It should be noted that the number of check-ins reflects the person's appearance record in the check-in scene.
- the check-in status before being updated to checked-in can be either the unchecked state or the checked-in state; both are reasonable.
- by matching the recognized face images, the person counting method provided by the embodiments of the present application can distinguish the persons corresponding to the face images, and can then separately count each person's appearance record at the check-in site, which effectively reduces the possibility of counting a recurring person as different persons and improves the accuracy of the statistical results.
- a personnel counting method provided in the embodiments of the present application may include the following steps D1-D4:
- Step D1 When a face image is recognized from the monitoring screen, the face image is matched with a face model saved in a first person database, and the first person database correspondingly saves a person identification and a face model, And correspondingly save personnel identification and group identification;
- the people belonging to the same group ID can be classified as a group.
- when a sign-in status update occurs, the number of signed-in persons in the group to which the updated person belongs can be determined, and the group's sign-in status can be determined based on that number.
- there may be multiple first person databases, each with a different identifier, and each first person database may include one or more group identifiers.
- Step D2: If the face image matches a face model in the first person database, update the check-in status of the first person identifier to the checked-in state, and update the number of check-ins corresponding to the person indicated by the first person identifier;
- the number of check-ins is the number of times the check-in status of the first person identifier has been updated to the checked-in state;
- the first person identifier is the person identifier corresponding to the face model in the first person database that matches the face image;
- Step D3: After updating the sign-in status of the first person identifier to the signed-in state, obtain the sign-in status of all person identifiers corresponding to the target group identifier in the first person database, where the target group identifier is the group identifier corresponding to the first person identifier.
- Step D4 Determine the sign-in status of the group with the target group ID based on the obtained sign-in status of all the personnel IDs.
- determining the sign-in status of the group with the target group identifier based on the obtained sign-in status of all the person identifiers may include:
- counting the person identifiers whose sign-in status is signed-in to obtain a first number, and determining the sign-in status of the group with the target group identifier accordingly: if the first number is less than the preset number of people corresponding to the target group, the target group's sign-in status is not-signed-in; if the first number is not less than the preset number of people corresponding to the target group, the target group's sign-in status is signed-in.
- if the first number is less than the preset number of people corresponding to the target group, a prompt message indicating that the target group's sign-in status is not-signed-in is output; if the first number is not less than the preset number of people corresponding to the target group, a prompt message indicating that the target group's sign-in status is signed-in is output.
- the preset number of people can be set according to actual needs.
- for example, suppose the person identifier corresponding to the found face model is "Wang Wu": the sign-in status of "Wang Wu" is updated to signed-in, and the number of sign-ins corresponding to "Wang Wu" is updated. If the number of persons with signed-in status in group 1, to which Wang Wu belongs, is then 3, and the preset number of people corresponding to group 1 is 3, it can be determined, and output, that the sign-in status of group 1 is signed-in.
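The group determination of steps D3-D4 can be sketched as a single comparison between the counted "first number" and the group's preset head count; the container shapes below are assumptions for illustration:

```python
def group_sign_in_status(group_members, check_in_status, preset_count):
    """Steps D3-D4: count the members of the target group whose status is
    signed-in (the 'first number') and compare it with the group's preset
    number of people to decide the group's sign-in status."""
    first_number = sum(
        1 for pid in group_members if check_in_status.get(pid) == 'checked_in'
    )
    return 'signed_in' if first_number >= preset_count else 'not_signed_in'
```

With the Wang Wu example above, a group of three members that are all checked in against a preset count of 3 yields `'signed_in'`.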
- all the person identifiers included in the target group and their sign-in statuses can be output at the same time, so that the user can know which persons in the target group have signed in and which have not.
- the face images of the persons who have signed in can also be presented, or the numbers of signed-in and not-signed-in persons in the target group can be counted and presented, so that the user can learn the detailed sign-in situation of each person in the target group; this application does not limit this.
- the person counting method may further include: recording the collection time of the face image as the sign-in time of the first person identifier; and, when an externally input retrieval instruction specifying a time and a group identifier is received, obtaining the person identifiers that correspond to the received group identifier and whose sign-in status is signed-in;
- if the obtained second number is less than the preset number of people corresponding to the group with the received group identifier, it is determined that the sign-in status of the group with the received group identifier is not-signed-in;
- if the obtained second number is not less than the preset number of people corresponding to the group with the received group identifier, it is determined that the sign-in status of the group with the received group identifier is signed-in.
- by matching the recognized face images, the person counting method provided by the embodiments of the present application can distinguish the persons corresponding to the face images, and can then separately count each person's appearance record at the check-in site, which effectively reduces the possibility of counting a recurring person as different persons and improves the accuracy of the statistical results. Moreover, it can meet the need to count the sign-in status of a sign-in group, realizing the association between persons and groups. In addition, it can meet the need to look up group sign-in situations at different times.
- FIG. 5 shows a schematic structural diagram of a people counting device provided by an embodiment of the application, which may include:
- the face matching module 501 is configured to, when a face image is recognized from the monitoring picture, match the face image with the face models saved in a first person database, where the first person database correspondingly saves person identifiers and face models;
- the record update module 502 is configured to, if the face image matches a face model in the first person database, update the appearance record of the person indicated by the first person identifier, where the first person identifier is the person identifier corresponding to the face model in the first person database that matches the face image.
- the face matching module 501 may include:
- the similarity calculation sub-module is used to calculate the similarity between the face model of the face image and each face model in the first person database, and to calculate the similarity between the face model of the face image and each auxiliary face model in a predetermined auxiliary database; each record in the predetermined auxiliary database corresponds to a face model in the first person database, and each record includes at least one auxiliary face model matching that record's face model;
- the matching analysis sub-module is configured to determine, based on the calculated similarities, whether the face image matches a face model in the first person database.
- the matching analysis submodule may include:
- the calculation unit is configured to determine the to-be-used similarity of each face model in the first person database based on the calculated similarity; wherein, the to-be-used similarity of any face model in the first person database is The value determined based on the first similarity and the second similarity corresponding to the face model; the first similarity corresponding to the face model is the similarity between the face model and the face model of the face image, The second similarity corresponding to the face model is the similarity between the auxiliary face model in the corresponding record of the face model and the face model of the face image;
- an analysis unit configured to determine, if there is a face model whose to-be-used similarity is the greatest and meets a predetermined similarity condition, that the face image matches a face model in the first person database;
- the face model that matches the face image is the face model that has the greatest similarity to be used and meets a predetermined similarity condition.
- the calculation unit may include:
- a screening subunit, which is used to screen, from the first person database and the predetermined auxiliary database respectively, the models whose similarity to the face model of the face image meets the predetermined similarity condition, to obtain hit data;
- the determining subunit is used, for each face model among the face models corresponding to the hit data, as follows: when the face model is included in the hit data, if the record corresponding to the face model belongs to the first record, the maximum of the first similarity and the third similarity corresponding to the face model is selected as the face model's to-be-used similarity; otherwise, the first similarity corresponding to the face model is taken as its to-be-used similarity; when the face model is not included in the hit data, the maximum of the third similarities corresponding to the face model is selected as its to-be-used similarity;
- the third degree of similarity corresponding to the face model is: the degree of similarity between the auxiliary face model belonging to the hit data in the corresponding record of the face model and the face model of the face image;
- the face model corresponding to the hit data includes: a face model included in the hit data, and a face model that is not included in the hit data but the corresponding record belongs to the first record;
- the first record is a record in which the included auxiliary face model belongs to the hit data.
- the calculation unit may include:
- a first calculation subunit, used to select, for each face model in the first person database, the maximum of the first similarity and the second similarity corresponding to the face model as the face model's to-be-used similarity;
- a second calculation subunit, used to take, for each face model in the first person database, a weighted average of the first similarity and the second similarity corresponding to the face model, to obtain the face model's to-be-used similarity.
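The two strategies of the first and second calculation subunits (maximum versus weighted average of a face model's first and second similarities) can be sketched as one function; the parameterization, including the `weights` argument, is an assumption for illustration:

```python
def to_be_used_similarity(first_sim, second_sims, mode='max', weights=None):
    """Compute a face model's to-be-used similarity from its first
    similarity and its auxiliary (second) similarities, using either the
    maximum (first calculation subunit) or a weighted average (second
    calculation subunit). `weights` pairs one weight with first_sim and
    one with each entry of second_sims."""
    values = [first_sim] + list(second_sims)
    if mode == 'max':
        return max(values)
    if mode == 'weighted_average':
        if weights is None:
            weights = [1.0] * len(values)  # default to an unweighted mean
        total = sum(weights)
        return sum(v * w for v, w in zip(values, weights)) / total
    raise ValueError(mode)
```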
- the screening subunit is specifically configured to:
- the screening subunit determining a first predetermined threshold set in advance for the first person database, and a second predetermined threshold set in advance for the predetermined auxiliary database, may include:
- looking up the predetermined threshold corresponding to the target attribute value as the second predetermined threshold set for the predetermined auxiliary database.
- the face matching module 501 is further configured to: if the face image does not match any face model in the first person database, correspondingly save a second person identifier and the face model of the face image in the first person database, where the second person identifier is different from the person identifiers saved in the first person database;
- the record update module 502 is also used to update the appearance record of the person indicated by the second person identifier.
- the face matching module 501 is further configured to calculate the face model of the face image and the second person identification in the first person database.
- the face matching module 501 is further configured to, before matching the face image with the face models saved in the first person database, match the face image with the face models saved in a second person database, where the second person database saves the face models of persons who do not need to participate in the statistics;
- the record update module 502 is specifically configured to add new appearance information to the appearance record of the person indicated by the first person identifier, where the appearance information includes face feature information obtained based on the face image.
- the appearance information further includes one or more of: the time stamp of the monitoring picture, the device identifier of the monitoring device that captured the monitoring picture, the monitoring picture itself, and the person information set for the first person identifier.
- the record update module 502 is specifically configured to:
- updating the check-in status of the first person identifier to the checked-in state, and updating the number of check-ins corresponding to the person indicated by the first person identifier, where the number of check-ins is the number of times the check-in status of the first person identifier has been updated to the checked-in state.
- the first personnel database also correspondingly stores personnel identifications and group identifications
- the device also includes:
- the obtaining module is configured to obtain the check-in status of all the person IDs corresponding to the target group ID in the first person database after the record update module updates the check-in status of the first person ID to the checked-in status; wherein,
- the target group identifier is the group identifier corresponding to the first person identifier;
- the determining module is configured to determine the sign-in status of the group with the target group identifier based on the obtained sign-in status of all the personnel identities.
- the determining module is specifically configured to:
- the device further includes:
- a recording module configured to, after the record update module updates the sign-in status of the first person identification to the signed-in state, use the collection time of the face image as the sign-in time of the first person identification and record;
- the obtained second number is less than the preset number of people corresponding to the group with the received group ID, determining that the sign-in status of the group with the received group ID is not signed in;
- the acquired second number is not less than the preset number of people corresponding to the group with the received group ID, it is determined that the sign-in status of the group with the received group ID is signed in.
- An embodiment of the present application also provides an electronic device, as shown in FIG. 6, including:
- the memory 601 is used to store computer programs
- the processor 602 is configured to execute the program stored in the memory 601 to implement the following steps: when a face image is recognized from the monitoring picture, the face image is stored in the first person database The stored face model is matched, and the first person database stores a person ID and a face model correspondingly; if the face image matches the face model in the first person database, the first person ID is updated For the appearance record of the indicated person, the first person identifier is a person identifier corresponding to a face model that matches the face image in the first person database.
- the method further includes:
- the second person identifier and the face model of the face image are correspondingly saved in the first person database, where the second person identifier is different from the person identifiers saved in the first person database;
- the method before the matching the face image with the face model saved in the first person database, the method further includes:
- the updating the appearance record of the person indicated by the first person identifier includes:
- New appearance information is added to the appearance record of the person indicated by the first person identifier, and the appearance information includes face feature information obtained based on the face image.
- the appearance information further includes one or more of: the time stamp of the monitoring picture, the device identifier of the monitoring device that captured the monitoring picture, the monitoring picture itself, and the person information set for the first person identifier.
- the memory mentioned in the above electronic device may include random access memory (Random Access Memory, RAM), and may also include non-volatile memory (Non-Volatile Memory, NVM), such as at least one disk memory.
- the memory may also be at least one storage device located far away from the foregoing processor.
- the above-mentioned processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (Network Processor, NP), etc.; it may also be a digital signal processor (DSP), a dedicated integrated Circuit (Application Specific Integrated Circuit, ASIC), Field-Programmable Gate Array (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
- a computer-readable storage medium stores instructions which, when run on a computer, cause the computer to execute any one of the person counting methods in the foregoing embodiments.
- a computer program product containing instructions is also provided, which when running on a computer, causes the computer to execute any of the personnel counting methods in the foregoing embodiments.
- the computer program product includes one or more computer instructions.
- the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
- the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center.
- the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or data center integrated with one or more available media.
- the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid-state drive (SSD)).
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910146911.8A CN111626079A (en) | 2019-02-27 | 2019-02-27 | Personnel counting method and device and electronic equipment |
CN201910146911.8 | 2019-02-27 | ||
CN201910180045.4 | 2019-03-11 | ||
CN201910180045.4A CN111696220A (en) | 2019-03-11 | 2019-03-11 | Sign-in method and device |
CN201911267295.8A CN112949362B (en) | 2019-12-11 | 2019-12-11 | Personnel information labeling method and device and electronic equipment |
CN201911267295.8 | 2019-12-11 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020173314A1 true WO2020173314A1 (en) | 2020-09-03 |
Family
ID=72239075
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/075285 WO2020173314A1 (en) | 2019-02-27 | 2020-02-14 | Personnel statistical method and device and electronic device |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2020173314A1 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112232186A (en) * | 2020-10-14 | 2021-01-15 | 盈合(深圳)机器人与自动化科技有限公司 | Epidemic prevention monitoring method and system |
CN112559583A (en) * | 2020-11-30 | 2021-03-26 | 杭州海康威视数字技术股份有限公司 | Method and device for identifying pedestrians |
CN112560772A (en) * | 2020-12-25 | 2021-03-26 | 北京百度网讯科技有限公司 | Face recognition method, device, equipment and storage medium |
CN112651366A (en) * | 2020-12-30 | 2021-04-13 | 深圳云天励飞技术股份有限公司 | Method and device for processing number of people in passenger flow, electronic equipment and storage medium |
CN112784784A (en) * | 2021-01-29 | 2021-05-11 | 新疆爱华盈通信息技术有限公司 | Personnel information statistical method and system based on face recognition |
CN112560772B (en) * | 2020-12-25 | 2024-05-14 | 北京百度网讯科技有限公司 | Face recognition method, device, equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103093213A (en) * | 2013-01-28 | 2013-05-08 | 广东欧珀移动通信有限公司 | Video file classification method and terminal |
CN104298956A (en) * | 2013-07-19 | 2015-01-21 | 因为科技无锡有限公司 | Face identification method |
CN105279814A (en) * | 2014-07-24 | 2016-01-27 | 中兴通讯股份有限公司 | Driving recording treatment method and driving recording treatment system |
CN107527012A (en) * | 2017-07-14 | 2017-12-29 | 深圳云天励飞技术有限公司 | Make a dash across the red light monitoring method, device and monitoring processing equipment |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112232186A (en) * | 2020-10-14 | 2021-01-15 | 盈合(深圳)机器人与自动化科技有限公司 | Epidemic prevention monitoring method and system |
CN112232186B (en) * | 2020-10-14 | 2024-02-27 | 盈合(深圳)机器人与自动化科技有限公司 | Epidemic prevention monitoring method and system |
CN112559583A (en) * | 2020-11-30 | 2021-03-26 | 杭州海康威视数字技术股份有限公司 | Method and device for identifying pedestrians |
CN112559583B (en) * | 2020-11-30 | 2023-09-01 | 杭州海康威视数字技术股份有限公司 | Method and device for identifying pedestrians |
CN112560772A (en) * | 2020-12-25 | 2021-03-26 | 北京百度网讯科技有限公司 | Face recognition method, device, equipment and storage medium |
CN112560772B (en) * | 2020-12-25 | 2024-05-14 | 北京百度网讯科技有限公司 | Face recognition method, device, equipment and storage medium |
CN112651366A (en) * | 2020-12-30 | 2021-04-13 | 深圳云天励飞技术股份有限公司 | Method and device for processing number of people in passenger flow, electronic equipment and storage medium |
CN112784784A (en) * | 2021-01-29 | 2021-05-11 | 新疆爱华盈通信息技术有限公司 | Personnel information statistical method and system based on face recognition |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020173314A1 (en) | Personnel statistical method and device and electronic device | |
US9704185B2 (en) | Product recommendation using sentiment and semantic analysis | |
US20160321256A1 (en) | System and method for generating a facial representation | |
IL258817A (en) | Methods and apparatus for false positive minimization in facial recognition applications | |
CN108038176B (en) | Method and device for establishing passerby library, electronic equipment and medium | |
JP6109970B2 (en) | Proposal for tagging images on online social networks | |
AU2017254967A1 (en) | Presence granularity with augmented reality | |
WO2021068635A1 (en) | Information processing method and apparatus, and electronic device | |
CN106850346A (en) | Change and assist in identifying method, device and the electronic equipment of blacklist for monitor node | |
JP2012533803A (en) | Estimating and displaying social interests in time-based media | |
WO2018205845A1 (en) | Data processing method, server, and computer storage medium | |
CN109783685A (en) | A kind of querying method and device | |
US20230268073A1 (en) | Inquiry information processing method and apparatus, and medium | |
US11288673B1 (en) | Online fraud detection using machine learning models | |
US20220345435A1 (en) | Automated image processing and insight presentation | |
US20180027092A1 (en) | Selecting assets | |
CN109064217B (en) | User level-based core body strategy determination method and device and electronic equipment | |
US10805255B2 (en) | Network information identification method and apparatus | |
WO2022033068A1 (en) | Image management method and apparatus, and terminal device and system | |
WO2023019927A1 (en) | Facial recognition method and apparatus, storage medium, and electronic device | |
JPWO2015016262A1 (en) | Information processing apparatus, authentication system, authentication method, and program | |
WO2021104513A1 (en) | Object display method and apparatus, electronic device and storage medium | |
CN109167939A (en) | It is a kind of to match literary method, apparatus and computer storage medium automatically | |
US10193990B2 (en) | System and method for creating user profiles based on multimedia content | |
CN108256542A (en) | A kind of feature of communication identifier determines method, apparatus and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20762314; Country of ref document: EP; Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20762314; Country of ref document: EP; Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 05/04/2023) |
|