CN112579593A - Population database sorting method and device - Google Patents

Population database sorting method and device

Info

Publication number
CN112579593A
CN112579593A (application CN201910942816.9A)
Authority
CN
China
Prior art keywords
person
camera
population
probability
ith
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910942816.9A
Other languages
Chinese (zh)
Inventor
陈锦聪
钱苏敏
罗幼泉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201910942816.9A priority Critical patent/CN112579593A/en
Priority to PCT/CN2020/097668 priority patent/WO2021063037A1/en
Publication of CN112579593A publication Critical patent/CN112579593A/en
Pending legal-status Critical Current

Classifications

    • G06F 16/22: Information retrieval of structured data; Indexing; Data structures therefor; Storage structures
    • G06F 16/2282: Tablespace storage structures; Management thereof
    • G06F 16/29: Geographical information databases
    • G06F 17/18: Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • G06F 18/00: Pattern recognition
    • G06V 40/172: Human faces; Classification, e.g. identification

Abstract

The application provides a population database partitioning method and device. In this technical scheme, a set of persons who may appear at a first camera is determined according to the probability of each person appearing at the first camera, and the population sub-library corresponding to the region is determined from the person sets of all the first cameras. Because the person occurrence probabilities are continuously updated, the population sub-library always holds the latest actual active population of the region, which greatly reduces invalid person data compared with a static sub-library. During real-name labeling with 1:N face comparison, the hit rate of the first comparison is improved and secondary comparisons are greatly reduced, so that comparison consumes fewer resources and less time.

Description

Population database sorting method and device
Technical Field
The present application relates to the field of population management, and more particularly, to a population database partitioning method and apparatus.
Background
City-level intelligent portrait systems label dynamic faces with real names in real time against a real-name library list, thereby attaching person identity information to each dynamic face. During real-time labeling of a dynamic face, the dynamic face features are compared 1:N against a static face feature library. A dynamic face is a face captured in an unconstrained scene without the subject's awareness, for example the face of a passer-by captured by a face snapshot camera; a static face is a face captured in a specific constrained scene with the subject's awareness, for example the face in an identity-document photo.
As the size N of the static face feature library (or population database) grows, the accuracy of the 1:N comparison decreases while the resources and time consumed by the comparison increase. A city-level static face feature library is usually on the scale of tens of millions. To reduce the size N of the static face feature library, the traditional approach is to partition it into sub-libraries according to the administrative division to which each static person belongs, i.e., a static partitioning manner.
In the static partitioning manner of a population database, the permanent (registered) population is classified according to the administrative division of the registered permanent residence, and the floating population is classified according to the administrative division of the registered temporary residence. The granularity of the classification can be defined according to the level of the administrative divisions; administrative districts in China are divided into four levels: province, city, county (district), and town/street.
However, because of population mobility, a person whose permanent residence is registered in a district may not actually live or work there, and a floating person who does live in the district may be missing from the district's population sub-library because the floating-population database is updated with a lag. In other words, the static partitioning manner very likely leaves people who actually live and work in a district out of its population sub-library while including people who do not. As a result, the first 1:N comparison against such a population sub-library has a low hit rate, and even with certain comparison steps and logic, for example a first comparison in the sub-library of the current-level jurisdiction and a second comparison in the sub-library of the superior jurisdiction, problems such as high comparison resource consumption and long comparison time remain.
Disclosure of Invention
The application provides a population database partitioning method and device that can partition a population database more accurately, thereby improving the hit rate of the first 1:N comparison against the population sub-libraries.
In a first aspect, the present application provides a population database partitioning method, including: determining N first cameras in a first region, where N is a positive integer; determining N person sets according to the N first cameras, where the probability that a person in the ith person set of the N person sets appears at the ith camera is greater than zero, the ith person set of the N person sets corresponds to the ith camera, and i takes each value in [1, N]; determining a first population sub-library of the first region according to the N person sets; and storing the first population sub-library, where the first population sub-library is used to determine the person identity information corresponding to first face data collected by the N first cameras.
For example, people whose probability of appearing at the ith camera is greater than a certain threshold are included in the person set corresponding to the ith camera. The threshold may be zero or another specific value.
Optionally, in embodiments of the present application, different time periods may correspond to different population sub-libraries. For example, a day may be divided into 1440/T time periods, T being the duration of each period in minutes, and the 1440/T time periods may correspond to 1440/T population sub-libraries.
In this technical scheme, the set of persons who may appear at a first camera is determined according to the probability of each person appearing at the first camera, and the population sub-library corresponding to the region is determined from the person sets of all the first cameras. Because the person occurrence probabilities are continuously updated, the population sub-library always holds the latest actual active population of the region, which greatly reduces invalid person data compared with a static sub-library. During real-name labeling with 1:N face comparison, the hit rate of the first comparison is improved and secondary comparisons are greatly reduced, so that comparison consumes fewer resources and less time.
In addition, only persons whose occurrence probability is greater than zero are contained in the first population sub-library, that is, the first population sub-library contains only persons who may appear at the first cameras, so the amount of data in the first population sub-library is reduced and the consumption of comparison resources is reduced.
In one possible implementation, before determining the first population sub-library of the first region, the method further comprises: acquiring second face data captured by M second cameras, where a second camera is the camera at which a person in the ith person set was located before migrating between regions; and determining, according to the second face data and a person migration probability, the probability that a person in the ith person set appears at the ith camera, where the person migration probability is the probability of the person migrating from the second camera to the ith first camera.
In the above technical solution, the person occurrence probabilities are updated according to real-time face data from the cameras, that is, the person ID list corresponding to each camera can be updated dynamically. Therefore, by determining the population sub-library of each region according to the region where the cameras are located, the sub-library can always hold the freshest actual active population of that region, and invalid person data are greatly reduced compared with a full static library. During real-name labeling with 1:N face comparison, the hit rate of the first comparison is improved and secondary comparisons are greatly reduced, so that comparison consumes fewer resources and less time.
In a possible implementation manner, determining, according to the second face data and the person migration probability, the probability that a person in the ith person set appears at the ith camera includes: comparing the similarity between the second face data and the face data in a second population sub-library to obtain a confidence of the second face data, where the confidence indicates the probability that the second face data and the face data in the second population sub-library belong to the same person, and the second population sub-library is the population sub-library corresponding to the second camera; and obtaining the probability that a person in the ith person set appears at the ith camera according to the confidence and the person migration probability.
Optionally, the person occurrence probability may be obtained by multiplying the confidence corresponding to a piece of face data by the person migration probability corresponding to that person, as illustrated below.
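The following minimal sketch illustrates that multiplication; the function name and the numeric values are assumptions chosen for illustration, not part of the claimed method.

```python
# Minimal sketch: person occurrence probability as the product of the
# face-match confidence at the previous (second) camera and the migration
# probability from that camera to the i-th first camera.
# All names and values here are illustrative assumptions.

def occurrence_probability(match_confidence: float, migration_probability: float) -> float:
    """Probability that the person appears at the i-th first camera."""
    return match_confidence * migration_probability

# Example: a 0.92 match in the second camera's sub-library and a 0.30
# probability of migrating from the second camera to the first camera.
print(occurrence_probability(0.92, 0.30))  # approximately 0.276
```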
In one possible implementation, the method further includes: determining the probability of a person migrating from the second camera to the ith first camera according to the historical spatiotemporal trajectory data, within a preset time period, of each person in the permanent population library and/or the floating population library of the first region.
In one possible implementation, the method further includes: determining an initial probability that a person in the ith person set appears at the ith first camera based on the permanent population library and/or the floating population library of the first region.
In a second aspect, the present application provides a population database partitioning device, the device comprising: a processing unit configured to determine N first cameras within a first region, N being a positive integer; the processing unit further configured to determine N person sets according to the N first cameras, where the probability that a person in the ith person set of the N person sets appears at the ith camera is greater than zero, the ith person set of the N person sets corresponds to the ith camera, and i takes each value in [1, N]; the processing unit further configured to determine a first population sub-library of the first region according to the N person sets; and a storage unit configured to store the first population sub-library, where the first population sub-library is used to determine the person identity information corresponding to first face data collected by the N first cameras.
For example, people whose probability of appearing at the ith camera is greater than a certain threshold are included in the person set corresponding to the ith camera. The threshold may be zero or another specific value.
Optionally, in embodiments of the present application, different time periods may correspond to different population sub-libraries. For example, a day may be divided into 1440/T time periods, T being the duration of each period in minutes, and the 1440/T time periods may correspond to 1440/T population sub-libraries.
In this technical scheme, the set of persons who may appear at a first camera is determined according to the probability of each person appearing at the first camera, and the population sub-library corresponding to the region is determined from the person sets of all the first cameras. Because the person occurrence probabilities are continuously updated, the population sub-library always holds the latest actual active population of the region, which greatly reduces invalid person data compared with a static sub-library. During real-name labeling with 1:N face comparison, the hit rate of the first comparison is improved and secondary comparisons are greatly reduced, so that comparison consumes fewer resources and less time.
In addition, only persons whose occurrence probability is greater than zero are contained in the first population sub-library, that is, the first population sub-library contains only persons who may appear at the first cameras, so the amount of data in the first population sub-library is reduced and the consumption of comparison resources is reduced.
In one possible implementation, the apparatus further includes: an obtaining unit configured to obtain second face data captured by M second cameras before the first population sub-library of the first region is determined, where a second camera is the camera at which a person in the ith person set was located before migrating between regions; the processing unit further configured to determine, according to the second face data and a person migration probability, the probability that a person in the ith person set appears at the ith camera, where the person migration probability is the probability of the person migrating from the second camera to the ith first camera.
In the above technical solution, the person occurrence probabilities are updated according to real-time face data from the cameras, that is, the person ID list corresponding to each camera can be updated dynamically. Therefore, by determining the population sub-library of each region according to the region where the cameras are located, the sub-library can always hold the freshest actual active population of that region, and invalid person data are greatly reduced compared with a full static library. During real-name labeling with 1:N face comparison, the hit rate of the first comparison is improved and secondary comparisons are greatly reduced, so that comparison consumes fewer resources and less time.
In a possible implementation manner, the processing unit is specifically configured to: compare the similarity between the second face data and the face data in a second population sub-library to obtain a confidence of the second face data, where the confidence indicates the probability that the second face data and the face data in the second population sub-library belong to the same person, and the second population sub-library is the population sub-library corresponding to the second camera; and obtain the probability that a person in the ith person set appears at the ith camera according to the confidence and the person migration probability.
Optionally, the person occurrence probability may be obtained by multiplying the confidence corresponding to a piece of face data by the person migration probability corresponding to that person.
In one possible implementation, the processing unit is further configured to: determine the probability of a person migrating from the second camera to the ith first camera according to the historical spatiotemporal trajectory data, within a preset time period, of each person in the permanent population library and/or the floating population library of the first region.
In one possible implementation, the processing unit is further configured to: determine an initial probability that a person in the ith person set appears at the ith first camera based on the permanent population library and/or the floating population library of the first region.
In a third aspect, the present application provides a chip, where the chip is connected to a memory, and is configured to read and execute a software program stored in the memory, so as to implement the method described in the first aspect or any implementation manner of the first aspect.
In a fourth aspect, the present application provides a population database device, comprising a memory for storing a program and a processor for executing the program stored in the memory; when the program stored in the memory is executed, the processor is configured to perform the method of the first aspect or any possible implementation of the first aspect.
In one possible implementation, the population database device further comprises a transceiver.
In one possible implementation, the population database device is a chip that can be applied to a network device.
In one possible implementation, the population database device is a server, a cloud host, or a container.
In a fifth aspect, the present application provides a computer program product comprising computer instructions that, when executed, cause a method of the foregoing first aspect or any possible implementation manner of the first aspect to be performed.
In a sixth aspect, the present application provides a computer-readable storage medium storing computer instructions that, when executed, cause a method of the foregoing first aspect or any possible implementation manner of the first aspect to be performed.
Drawings
FIG. 1 is a schematic illustration of static partitioning of a population database.
FIG. 2 compares the population contained in a statically partitioned population sub-library with the actual active population.
FIG. 3 is a schematic flow chart of a population database partitioning method provided in an embodiment of the present application.
FIG. 4 is a schematic flow chart of a method for accurate population database partitioning according to an embodiment of the present application.
FIG. 5 is a schematic diagram of the process flow for determining the initial person occurrence probabilities according to an embodiment of the present application.
FIG. 6 is an example of initial person occurrence probabilities according to an embodiment of the present application.
FIG. 7 is a schematic diagram of the process flow for determining the spatiotemporal person migration probability matrix according to an embodiment of the present application.
FIG. 8 is an example of a person migration probability distribution matrix according to an embodiment of the present application.
FIG. 9 is a schematic diagram of the process flow for updating person occurrence probabilities according to an embodiment of the present application.
FIG. 10 is an example of updating person occurrence probabilities according to an embodiment of the present application.
FIG. 11 is a schematic illustration of hierarchical population sub-libraries according to an embodiment of the present application.
FIG. 12 is a schematic flow chart of person comparison based on population sub-libraries in an embodiment of the present application.
FIG. 13 is a schematic flow chart of retrieval comparison according to an embodiment of the present application.
FIG. 14 shows an implementation of a system according to an embodiment of the present application.
FIG. 15 is a schematic structural diagram of a population database partitioning device according to an embodiment of the present application.
FIG. 16 is a schematic structural diagram of a population database partitioning device according to another embodiment of the present application.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
The technical scheme of the embodiments of the application can be applied to any scene that requires partitioning a population database, for example a city-level intelligent portrait scenario, a public-security population management scenario, an intelligent traffic scenario, and the like. The technical scheme of the application is described below taking the city-level intelligent portrait scene as an example.
City-level intelligent portrait systems label dynamic faces with real names in real time against a real-name library list, thereby attaching person identity information to each dynamic face. During real-time labeling of a dynamic face, the dynamic face features are compared 1:N against a static face feature library.
Here, a 1:N face comparison means comparing one face against a set of N comparison objects and retrieving, by query, the faces with high similarity to the designated face. A dynamic face is a face captured in an unconstrained scene without the subject's awareness, for example the face of a passer-by captured by a face snapshot camera. A static face is a face captured in a specific constrained scene with the subject's awareness, for example the face in an identity-document photo. Face features are the feature vectors generated by mapping the pixels of a face close-up image. A face close-up image is a picture of the face region, meeting the pixel requirements of face recognition, cropped from a face scene picture; it is commonly called the small face picture. A face scene picture is a snapshot containing at least one face and body elements, commonly called the large face picture.
As the size N of the static face feature library (or population database) grows, the accuracy of the 1:N comparison decreases while the resources and time consumed by the comparison increase. A city-level population database is generally on the scale of tens of millions. To reduce the size N of the population database, the traditional approach is to partition it according to the administrative districts to which the static persons belong, i.e., a static partitioning manner.
FIG. 1 is a schematic illustration of static partitioning of a population database. As shown in FIG. 1, in the static partitioning manner, the permanent population is classified according to the administrative division of the registered permanent residence, and the floating population is classified according to the administrative division of the registered temporary residence. The granularity of the classification can be defined according to the level of the administrative divisions; administrative districts in China are divided into four levels: province, city, county (district), and town/street.
However, because of population mobility, a person whose permanent residence is registered in a district may not actually live or work there, and a floating person who does live in the district may be missing from the district's population sub-library because the floating-population database is updated with a lag. That is, as shown in FIG. 2, the static partitioning manner very likely leaves people who actually live and work in a district out of its population sub-library while including people who do not, so the sub-library lacks freshness. As a result, the first 1:N comparison against such a sub-library has a low hit rate, and even with certain comparison steps and logic, for example a first comparison in the sub-library of the current-level jurisdiction and a second comparison in the sub-library of the superior jurisdiction, problems such as high comparison resource consumption and long comparison time remain.
In view of the above problems, embodiments of the present application provide a population database partitioning method and apparatus that can partition a population database more accurately, thereby improving the hit rate of the first 1:N comparison.
FIG. 3 is a schematic flow chart of a population database partitioning method provided in an embodiment of the present application. The method shown in FIG. 3 may be executed by a server, a cloud host, a container, or the like, and may also be executed by a chip or module included in the server, cloud host, container, or the like. The method illustrated in FIG. 3 includes at least some of the following.
In 310, N first cameras within the first region are determined, N being a positive integer.
The first region may be a region of any size; the embodiment of the present application is not particularly limited. For example, it may be a province, city, county, district, street, or other administratively divided area. It may also be, for example, an area containing a preset number of cameras.
The camera may also be other apparatuses or devices having a photographing function, and the embodiments of the present application are not particularly limited.
The N first cameras within the first area are all cameras deployed or disposed within the first area.
In a possible implementation manner, at least one camera list may be saved for each region; when the N first cameras in the first region are determined, the camera list corresponding to the first region may be found according to an identifier (ID) of the first region. The camera list may be built from the geographic location of each camera and its ID, where the geographic location of a camera may be its longitude and latitude.
In another possible implementation manner, the region where each first camera is located may be determined directly from the geographic location of the first camera and its ID, so as to determine the N first cameras in the first region. The geographic location of a camera may be its longitude and latitude. Both implementations are sketched below.
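A rough sketch under assumed data layouts; the region-to-camera table, the camera record fields, and the bounding-box test are placeholders standing in for the real region boundaries.

```python
from dataclasses import dataclass

@dataclass
class Camera:
    camera_id: str
    lon: float
    lat: float

# Implementation 1 (assumed layout): a saved camera list per region ID.
region_camera_lists = {
    "region_001": ["cam_01", "cam_02", "cam_03"],
}

def cameras_by_region_id(region_id: str) -> list[str]:
    return region_camera_lists.get(region_id, [])

# Implementation 2 (assumed geometry): derive the region directly from each
# camera's longitude/latitude, here with a simple bounding box as a stand-in
# for the real region boundary.
def cameras_in_bbox(cameras: list[Camera], lon_min: float, lon_max: float,
                    lat_min: float, lat_max: float) -> list[str]:
    return [c.camera_id for c in cameras
            if lon_min <= c.lon <= lon_max and lat_min <= c.lat <= lat_max]
```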
At 320, N person sets are determined according to the N first cameras, where the probability that a person in the ith person set of the N person sets appears at the ith camera (hereinafter, the person occurrence probability) is greater than zero, the ith person set of the N person sets corresponds to the ith camera, and i takes each value in [1, N].
At 330, a first population sub-library of the first region is determined based on the N person sets.
In 340, the first population sub-library is stored.
In a possible implementation manner, the N person sets corresponding to the N first cameras are determined according to the probability of each person appearing at the corresponding first camera, and the population sub-library of the first region is then determined according to the N person sets. The probability of a person in a person set appearing at the corresponding first camera is greater than 0. For example, if cameras 1-3 are located in the first region, 300 people may appear at camera 1, 60 at camera 2, and 15 at camera 3; the population sub-library of the first region may then consist of some or all of those 375 people.
In another possible implementation, the population sub-library of the present embodiments may be sliced in time. Specifically, the N person sets corresponding to the N first cameras in a first time period are determined according to the probability of each person appearing at the corresponding first camera in that period, and the population sub-library of the first region for the first time period is determined from those sets; likewise, the N person sets for a second time period are determined according to the appearance probabilities in the second time period, and the population sub-library of the first region for the second time period is determined from them. That is, the first region may correspond to different population sub-libraries in different time periods. A time period in the embodiments of the present application may be a time slice divided according to a preset duration. For example, with a time-slice length of T minutes, the 1440 minutes of a day may be divided into 1440/T time slices, where T divides 1440 evenly. This example divides time slices on a daily cycle in units of minutes; it is understood that time slices may also be divided on cycles of other granularities and/or in units of other granularities. The sketch below illustrates this slicing.
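A minimal sketch of the time-slice mapping described above, assuming a slice length T in minutes that divides 1440 evenly (T = 30 here is only an example value).

```python
# A day of 1440 minutes is cut into 1440 / T slices, and each slice can have
# its own population sub-library. T and the sub-library contents are assumed.

T = 30  # slice length in minutes; must divide 1440 evenly

def slice_index(hour: int, minute: int, slice_minutes: int = T) -> int:
    """Return which time slice [m*T, (m+1)*T) a time of day falls into."""
    return (hour * 60 + minute) // slice_minutes

num_slices = 1440 // T                                  # 48 slices for T = 30
sub_libraries = {m: set() for m in range(num_slices)}   # slice -> person IDs

print(slice_index(0, 10), slice_index(23, 59))  # 0 and 47
```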
The person occurrence probability described above may also be determined prior to execution 320.
In a possible implementation manner, when the M second cameras capture face data of people in the person set corresponding to the ith first camera, an update of the occurrence probabilities of the people in the ith person set is triggered. A second camera is the camera at which a person in the person set corresponding to the first camera was located before migrating between regions. Specifically, when the M second cameras acquire face data, the probability that a person in the ith person set appears at the ith camera is determined according to the face data and the person migration probability, where the person migration probability is the probability of the person migrating from the second camera to the ith first camera.
More specifically, as an example, the face data are compared in a second population sub-library to obtain a confidence of the face data, and the probability that a person in the ith person set appears at the ith camera is obtained from this confidence and the person migration probability. The confidence indicates the probability that the second face data and the face data in the second population sub-library belong to the same person; for example, the confidence may be a similarity. The second population sub-library is the population sub-library corresponding to the second camera.
Optionally, the above person migration probability may apply to every person of the first region.
Optionally, the person migration probability may be defined per person. For example, for person A, the probability of migrating from the second camera to the first camera is zero, meaning that person A does not migrate from the second camera to the first camera; for person B, the probability of migrating from the second camera to the first camera is non-zero, meaning that person B may migrate from the second camera to the first camera.
It should also be understood that when face data captured by the first camera are compared in the first population sub-library, the probability that people in the person set corresponding to a third camera migrate to that third camera may likewise be updated. The updating method is the same as that for the first camera and is not repeated here.
A first population sub-library of the first region is then determined according to the N person sets. In one possible implementation, people with a non-zero probability of appearing at the first camera may be included in the first population sub-library, i.e., people who appeared at the second camera and have a non-zero probability of migrating from the second camera to the first camera may be included in the first population sub-library.
In another possible implementation manner, a real-time accurate population sub-library can be formed according to the person occurrence probabilities, the region corresponding to the geographic location of each front-end camera, and the retrieval logic of person archiving. Specifically, secondary sub-libraries are divided by district/county and street; primary sub-libraries are divided by city; and the full library includes the entire permanent and floating population, generally in units of a province or city.
Furthermore, each level of library can be further divided into one or more levels of precise sub-libraries according to precision. Taking two levels of precise sub-libraries as an example, the first precise sub-library includes the people whose occurrence probability is non-zero, and the second precise sub-library includes both the people who have never appeared in the region (i.e., whose occurrence probability is 0) and the people whose occurrence probability is non-zero. In this way, when comparing a person, the comparison can be done first in the first precise sub-library, which further improves the hit rate of the first 1:N comparison and reduces secondary comparisons.
Before 320 is executed, the person migration probability may also be determined.
Optionally, the probability of a person migrating from the second camera to the first camera may be determined according to the historical spatiotemporal trajectory data, within a preset time period, of each person in the N person sets.
Optionally, the probability of a person migrating from the second camera to the first camera may be determined according to the historical spatiotemporal trajectory data, within a preset time period, of all or some of the persons in the permanent population library and/or the floating population library of the first region.
Optionally, the preset time period may be any possible length of time, e.g., 1 week, 1 month, 3 months, 1 year, etc. The historical spatiotemporal trajectory data may be each person's daily activity trajectory data and may include the capture time, the capturing camera, and the like.
It will be appreciated that at system initialization there are no real-time data for the people, so the initial population sub-library may be generated from administrative divisions and real-time population distribution data. Specifically, the initial probability that a person in the ith person set appears at the ith first camera is determined from the permanent population library and/or the floating population library of the first region.
The technical solutions of the embodiments of the present application are described in more detail below with reference to specific examples.
FIG. 4 is a schematic flow chart of a method for accurate population database partitioning that senses the spatiotemporal regularity of person migration, according to an embodiment of the present application.
As shown in FIG. 4, at 410, after the system is initialized and started, the data of the region's real-name permanent population database and floating population database are imported, and the initial person occurrence probabilities and the initial population sub-libraries are generated based on the administrative regions of the permanent and floating population and the geographic positions of the cameras.
Specifically, as shown in FIG. 5, the input information of the system includes:
(1) the camera ID and the geographic location of each camera, e.g., longitude and latitude;
(2) person information divided according to the administrative regions of province, city, and county, including permanent population information and floating population information.
The processing flow for the initial person occurrence probabilities includes the following steps.
In 4101, when the system first comes online, since there are no real-time data of people, an initial population sub-library is generated from administrative divisions and real-time population distribution data, or from the permanent population database and the floating population database. The initial population sub-library records information such as each person's ID, face data, spatial trajectory data, and residence.
For example, the residence registered by person A is district B, so person A belongs to the district-B population sub-library.
In 4102, a person ID list is built for each camera based on the initial population sub-library generated in 4101, the camera IDs, and the geographic locations of the cameras. The person ID list contains the IDs of the persons who may appear at that camera; it may correspond to the person set above.
For example, if the residence registered by person A is district B, person A enters the person ID list of every camera whose geographic position is in district B.
Besides the residence, the correspondence between person IDs and cameras can also be established using information such as place of birth, place of work, and place of consumption, as sketched below.
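A rough sketch of step 4102 under assumed record layouts; the field names and the district-matching rule are hypothetical, chosen only to illustrate associating person IDs with cameras by residence or workplace.

```python
# Build a person-ID list for each camera from the initial population
# sub-library and the cameras' locations. All fields are assumed.

initial_population = [
    {"person_id": "ID1", "residence_district": "B", "workplace_district": "C"},
    {"person_id": "ID2", "residence_district": "B", "workplace_district": "B"},
]
cameras = [
    {"camera_id": "cam_1", "district": "B"},
    {"camera_id": "cam_2", "district": "C"},
]

camera_person_lists: dict[str, list[str]] = {c["camera_id"]: [] for c in cameras}
for cam in cameras:
    for person in initial_population:
        # A person may appear at a camera whose district matches the person's
        # residence (or, optionally, workplace and other associated places).
        if cam["district"] in (person["residence_district"], person["workplace_district"]):
            camera_person_lists[cam["camera_id"]].append(person["person_id"])

print(camera_person_lists)  # {'cam_1': ['ID1', 'ID2'], 'cam_2': ['ID1']}
```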
At 4103, upon initial start-up of the system, an initial occurrence probability is assigned to each person ID in the initial population sub-library, including the initial occurrence probability within each time slice. Optionally, the initial occurrence probabilities of permanent and floating persons may differ, e.g., p% for permanent persons and q% for floating persons. p and q may be any possible values and are not particularly limited in the embodiments of the present application, for example p = 5, q = 10; p = 15, q = 10; p = 25, q = 40; p = 50, q = 20; p = 5.5, q = 1.3; and so on. Of course, p and q may also have the same value.
The probability values in the table shown in FIG. 5 indicate the probability of a person appearing at the corresponding camera; for example, in the time slice (0:00, T) the probability of the person with person ID1 appearing at camera 1 is p% and the probability of the person with person ID3 appearing at camera 1 is q%.
A time slice may be a period of time; for example, the 1440 minutes of a day may be divided into 1440/T periods of duration T, each of which may be regarded as one time slice. The value of T is not limited.
This way of setting the initial occurrence probability indicates that the permanent and floating persons in the initial population sub-library are more likely to appear in the corresponding region (or the region where the camera is located) than other persons. FIG. 6 is an example of initial person occurrence probabilities according to an embodiment of the present application. As shown in FIG. 6, the initial person occurrence probability data are periodic in time slices, and the data for each time slice include the person ID, the corresponding time slice, the camera ID, and the probability of the person appearing at that camera. For example, in the time slice (0:00, T), the probability of the person with person ID1 appearing at camera 1 is 5%, and the probability of the person with person ID3 appearing at camera 1 is 10%. A minimal sketch of building such a table appears below.
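A minimal sketch of step 4103 under assumed inputs; p, q, the slice length, and the camera list contents are placeholder values, not values prescribed by the embodiment.

```python
# Assign an initial occurrence probability to every person ID in each
# camera's list, for every time slice. All values are assumptions.

p, q = 5.0, 10.0          # initial probabilities in percent (assumed)
num_slices = 1440 // 30   # assumed slice length T = 30 minutes

def initial_probability(person_type: str) -> float:
    return p if person_type == "permanent" else q

# camera_person_lists and the person types are assumed inputs, e.g. from 4102.
camera_person_lists = {"cam_1": [("ID1", "permanent"), ("ID3", "floating")]}

initial_table = []  # rows of (time slice, camera ID, person ID, probability %)
for m in range(num_slices):
    for cam_id, persons in camera_person_lists.items():
        for person_id, person_type in persons:
            initial_table.append((m, cam_id, person_id, initial_probability(person_type)))

print(initial_table[:2])
# [(0, 'cam_1', 'ID1', 5.0), (0, 'cam_1', 'ID3', 10.0)]
```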
It should be appreciated that this initial occurrence probability does not change over time unless a real-time face snapshot match or a real-time checkpoint detection of the person is subsequently triggered.
At 420, a person migration probability distribution matrix for each time slice is determined from the persons' historical spatiotemporal trajectory data. Optionally, the historical spatiotemporal trajectory data may cover a period of time, e.g., the trajectory data of the past week, the past month, the past 3 months, the past year, etc.
Specifically, as shown in FIG. 7, the input information of the system includes the historical spatiotemporal trajectory data of the persons. These may be the "one person, one profile" historical spatiotemporal trajectory data: a person profile is established for each person and may include that person's historical spatiotemporal trajectory data and other personal information. For example, the activity trajectory data within the past month are obtained from each person's profile.
In 4201, information such as the person ID, the capture time, and the capturing camera ID is extracted.
In 4202, the captures are associated by person ID to generate each person's daily activity trajectory.
For example, the list of cameras that a person passes: (camera 1, camera 2, ..., camera k).
For another example, using (person ID, time, camera ID) records, the list of cameras the person passed at each time can be described as:
{PersonID_1,Date_1,weekday_1,(Time_i,Camera_k),(Time_j,Camera_m),…};
{PersonID_1,Date_2,weekday_2,(Time_n,Camera_l),(Time_s,Camera_r),…};
{PersonID_x,Date_y,weekday_y,(Time_t,Camera_c),(Time_e,Camera_f),…}。
In 4203, the "one step" track points formed by each pair of adjacent cameras in each activity trajectory are extracted. "One step" describes a person moving step by step from the range of one camera into the range of the next camera: entering each new camera's range is equivalent to the person "taking one step", and in the person's movement trajectory the two cameras are adjacent.
For example, the "one step" track points of the trajectory (camera 1, camera 2, camera 3) are (camera 1, camera 2) and (camera 2, camera 3). That is, the person starts in the shooting range of camera 1, next appears in the shooting range of camera 2, and then appears in the shooting range of camera 3.
In 4204, the person migration probability of each "one step" track point is calculated.
Optionally, the person migration probability of each track point may be obtained statistically from the activity trajectories of all persons included in the permanent population library and the floating population library of a given region.
In 4205, a person migration probability distribution matrix for one day is generated in units of time slices.
FIG. 8 shows an example of a person migration probability distribution matrix according to an embodiment of the present application. Specifically, FIG. 8 shows the distribution matrix of person migration probabilities for each time slice; taking the time slice (1440-T, 1440) as an example, the probability of a person migrating from camera 3 to camera 1 is 1.2%, and the probability of migrating from camera 3 to camera N is 3%.
Optionally, the "one step" person migration probability distribution matrix of the embodiments of the present application is static. A sketch of building such a matrix from the trajectory records appears below.
At 430, based on the face data and the checkpoint data captured by the front-end cameras, the initial person occurrence probabilities generated at 410, and the person migration probability distribution matrix generated at 420, the probability of each person migrating to each camera is calculated, i.e., the person occurrence probabilities are updated. The checkpoint data include travel data, trip data, and the like.
Specifically, as shown in FIG. 9, the input information of the system includes:
(1) the initial occurrence probabilities of permanent and floating persons generated in 410;
(2) the person migration probability distribution matrix generated in 420;
(3) face data captured in real time (e.g., person ID, capture time, capturing camera ID, match confidence, etc.), and/or checkpoint data captured in real time (e.g., person ID, time of occurrence, location of occurrence, etc.).
In 4301, within the time slice [mT, mT+T], the snapshot records of persons enter a buffer queue and are processed by traversal in order.
At 4302, it is determined whether the person's confidence is above a threshold. If the confidence of a person is higher than or equal to the threshold, 4303 is executed; if it is lower than the threshold, the process returns to 4301 and the traversal continues.
The confidence may be the similarity obtained from the person's 1:N comparison, i.e., the probability that the person is in the population sub-library corresponding to the current camera.
In 4303, the probabilities of the person migrating from the current camera to the peripheral cameras are obtained from the person migration probability distribution matrix for the [mT, mT+T] time slice.
At 4304, it is determined whether the probability of the person migrating from the area of the current camera to a peripheral camera is greater than 0. If the probability is greater than 0, 4305 is executed; if the probability equals 0, the process returns to 4304 and the traversal continues.
In 4305, the person occurrence probabilities at the peripheral cameras are updated.
In 4307, the new time-sequenced person sub-library list of each camera region is loaded, i.e., the person set corresponding to each camera is updated.
FIG. 10 shows an example of updating person occurrence probabilities according to an embodiment of the present application. As shown in FIG. 10, the person IDs in the occurrence probability list include the real-name archived active persons and the persons who moved from the previous hop in the previous time slice into the current time slice (e.g., as determined from the person migration probability distribution matrix).
When a person appeared at the previous hop in the previous time slice, i.e., at the camera one hop before each camera in the camera list shown in FIG. 10, the probability of that person appearing at the camera in the list is updated. For example, because person ID5 through person ID8 appeared at the camera one hop before camera 1, their probabilities of appearing at camera 1 are updated to 30%, 25%, 28%, and 26%. As another example, person ID2 and person ID4 appeared one hop before camera 1, and their probabilities of appearing at camera 1 are updated to 15% and 6%. The updated occurrence probability values can be obtained from the confidences obtained by comparing person ID5 through person ID8 in the population sub-library corresponding to the previous-hop camera and the probability of migrating from that previous-hop camera to camera 1.
When a person is confirmed to be present at a certain camera in the time slice (mT-T, mT) (i.e., the time slice before the current one) and cannot appear at another camera within a period of duration T, i.e., cannot appear at that other camera in the time slice (mT, mT+T), the probability of that person appearing at the other camera is set to 0. For example, person ID1 and person ID3 in FIG. 10 are confirmed to be present at camera 3 and cannot appear at camera 1 within time T, so their probabilities of appearing at camera 1 are set to 0.
Here, person ID1 and person ID2 belong to the permanent population among the real-name archived persons, so they have an initial occurrence probability of 10%; person ID3 and person ID4 belong to the floating population among the real-name archived persons, so they have an initial occurrence probability of 15%.
In FIG. 10, the person ID list (i.e., the person set) of camera 1 in the time slice (mT, mT+T) includes person ID2, person ID4, person ID5, person ID6, person ID7, and person ID8.
It can be understood that the updating of the person occurrence probabilities in the embodiments of the present application is triggered by face data or checkpoint data captured in real time.
It should also be understood that the person ID list corresponding to each camera should contain the person IDs, and the facial feature information, of those who may be captured by that camera in each time slice. At the same time, incorporating the person information of the entire population database into the ID list must be avoided, as that would degrade the traversal comparison into a comparison against the whole population database. A sketch of such a triggered update appears below.
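A minimal sketch of the triggered update in 4301 to 4307 under assumed data shapes; the threshold, the dictionary layouts, and the example values are hypothetical, and the zeroing of unreachable cameras described above is only noted in a comment.

```python
# When a snapshot of a person arrives in the current time slice, update the
# occurrence probabilities at the peripheral cameras as
# confidence x migration probability. All names and values are assumptions.

CONFIDENCE_THRESHOLD = 0.8  # assumed

def process_snapshot(person_id, current_cam, confidence, time_slice,
                     migration, occurrence):
    """migration[time_slice][(from_cam, to_cam)] -> migration probability;
    occurrence[(time_slice, cam, person_id)] -> occurrence probability."""
    if confidence < CONFIDENCE_THRESHOLD:
        return  # 4302: low-confidence records are skipped
    for (cam_from, cam_to), p_move in migration.get(time_slice, {}).items():
        if cam_from != current_cam:
            continue
        if p_move > 0:                                   # 4304 / 4305
            occurrence[(time_slice + 1, cam_to, person_id)] = confidence * p_move
    # Zeroing of cameras the person cannot reach within T (described above)
    # is omitted here for brevity.

occurrence = {}
migration = {15: {("cam_3", "cam_1"): 0.012, ("cam_3", "cam_N"): 0.03}}
process_snapshot("ID5", "cam_3", 0.92, 15, migration, occurrence)
print(occurrence)
# {(16, 'cam_1', 'ID5'): ~0.011, (16, 'cam_N', 'ID5'): ~0.0276}
```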
At 440, according to the person occurrence probabilities updated at 430, the corresponding real-time accurate population sub-libraries, divided by time slice, are formed for the regions corresponding to the geographic locations of the cameras.
Further, a real-time accurate population sub-library can be formed from the person occurrence probabilities updated in 430, the region corresponding to the geographic position of each front-end camera, and the retrieval logic of person archiving. Specifically, the person IDs that may appear in each camera's current time slice are added to the corresponding sub-library. Optionally, a face base library of the current time-slice sub-library may be formed from the IDs of the persons who may appear at each camera in the current time slice, and a precise sub-library feature library may be formed from the feature values extracted from the sub-library's face pictures, roughly as in the sketch below.
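A minimal sketch of forming the per-time-slice precise sub-library in 440; the occurrence-probability layout and the feature extractor are stand-ins, not the real face algorithm.

```python
# Collect the person IDs whose occurrence probability at any camera of a
# region is non-zero in the current time slice, and build the sub-library's
# face feature base from them. All structures are assumptions.

def build_slice_sublibrary(occurrence, region_cameras, time_slice, face_images):
    """occurrence[(time_slice, cam, person_id)] -> probability."""
    person_ids = {
        pid for (m, cam, pid), prob in occurrence.items()
        if m == time_slice and cam in region_cameras and prob > 0
    }
    # Feature base of the precise sub-library for this time slice.
    return {pid: extract_features(face_images[pid]) for pid in person_ids}

def extract_features(face_image):
    # Placeholder for the face feature extraction algorithm.
    return [0.0]

occurrence = {(16, "cam_1", "ID5"): 0.3, (16, "cam_1", "ID1"): 0.0}
print(build_slice_sublibrary(occurrence, {"cam_1"}, 16, {"ID5": object()}))
# {'ID5': [0.0]}
```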
FIG. 11 is a schematic illustration of hierarchical population sub-libraries according to an embodiment of the present application. The cameras are divided according to their geographic positions and the administrative regions. For example, secondary sub-libraries are divided by district/county and street; primary sub-libraries are divided by city; and the full library includes the entire permanent and floating population, generally in units of a province or city.
Optionally, each level of library can be further divided into one or more levels of precise sub-libraries according to precision. Taking two levels of precise sub-libraries as an example, the first precise sub-library includes the people whose occurrence probability is non-zero, and the second precise sub-library includes both the people who have never appeared in the region (i.e., whose occurrence probability is 0) and the people whose occurrence probability is non-zero. In this way, when comparing a person, the comparison can be done first in the first precise sub-library, which further improves the hit rate of the first 1:N comparison and reduces secondary comparisons.
It should be appreciated that the updating of the sub-library list for each time slice needs to be completed in a relatively short time, for example less than 0.1T.
Optionally, when face data or checkpoint data in the current time slice enter the cache, a 1:N comparison is performed after 440.
FIG. 12 is a schematic flow chart of person comparison based on population sub-libraries in an embodiment of the present application. After a front-end camera captures face data, the corresponding population sub-library is associated according to the geographic position of the capturing camera and a 1:N comparison is performed to obtain the person whose face is most similar (top-1) to the captured person, whose real-name label information is then obtained, i.e., the real-name label information of the captured person. In this flow, the population sub-library is the accurate population sub-library obtained in 440.
Further, FIG. 13 is a schematic flow chart of retrieval comparison according to an embodiment of the present application. In the current time slice, after the front-end camera captures face data, the data are first compared in the secondary sub-library corresponding to the area where the camera is located; if there is no hit, they are further compared in the primary sub-library corresponding to that area, and if there is still no hit, they are compared in the full library. The retrieval logic shown in FIG. 13 adopts accurate hierarchical sub-library comparison, which improves comparison accuracy and efficiency and improves the in-library hit rate. The cascade is sketched below.
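A rough sketch of the cascade in FIG. 13 under assumed interfaces; the matcher, the similarity function, and the threshold are placeholders rather than the system's actual comparison engine.

```python
# Compare in the secondary (district/street) sub-library first, then in the
# primary (city) sub-library, then in the full library.

def similarity(a, b):
    # Placeholder score; the real system uses the face algorithm's similarity.
    return 1.0 if a == b else 0.0

def best_match(face_feature, library, threshold=0.8):
    """Return (person_id, similarity) of the best hit above threshold, else None."""
    best = None
    for person_id, feature in library.items():
        sim = similarity(face_feature, feature)
        if best is None or sim > best[1]:
            best = (person_id, sim)
    return best if best and best[1] >= threshold else None

def cascade_compare(face_feature, secondary_lib, primary_lib, full_lib):
    for library in (secondary_lib, primary_lib, full_lib):
        hit = best_match(face_feature, library)
        if hit is not None:
            return hit
    return None

lib_secondary = {"ID5": (0.1, 0.2)}
print(cascade_compare((0.1, 0.2), lib_secondary, {}, {}))  # ('ID5', 1.0)
```

The two precision levels described above could be inserted as an additional stage before the secondary sub-library in the same loop.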
FIG. 14 shows an implementation of a system according to an embodiment of the present application, taking the automatic archiving of real-time face snapshot data by a video surveillance intelligent analysis system as an example. As shown in FIG. 14, a real-time regional dynamic accurate person sub-library is generated based on the real-name archived active population database (which may correspond to the historical spatiotemporal migration data described above) and person floating-record data, combined with the real-name permanent population database and floating population database of the administrative region. After being imported into the video surveillance intelligent analysis system, a face feature value library of each population sub-library is formed through face image algorithm processing. The face snapshot data of the front-end intelligent cameras (for example, a small face picture and the corresponding large scene picture) are uploaded to the video surveillance intelligent analysis system, which calls the face algorithm to extract features, maps the snapshot to the real-time sub-library of the corresponding region based on the specific position of the front-end intelligent camera, performs the 1:N face comparison, and pushes the comparison result to the person archive for filing, as sketched below.
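A minimal end-to-end sketch of the FIG. 14 flow under assumed interfaces; every callable passed in is a stand-in for a component of the described system, not an API it actually exposes.

```python
# Snapshot handling: extract features, map the camera and timestamp to the
# real-time regional sub-library, compare 1:N, and push a hit to the archive.

def handle_snapshot(face_image, camera_id, timestamp,
                    extract_features, sublibrary_for, compare_1_to_n, push_to_archive):
    feature = extract_features(face_image)
    sublibrary = sublibrary_for(camera_id, timestamp)   # region + time slice
    hit = compare_1_to_n(feature, sublibrary)           # e.g. cascade_compare above
    if hit is not None:
        push_to_archive(camera_id, timestamp, hit)
    return hit

result = handle_snapshot(
    face_image=None, camera_id="cam_1", timestamp=1569888000,
    extract_features=lambda img: (0.1, 0.2),
    sublibrary_for=lambda cam, ts: {"ID5": (0.1, 0.2)},
    compare_1_to_n=lambda f, lib: ("ID5", 1.0) if f in lib.values() else None,
    push_to_archive=lambda cam, ts, hit: None,
)
print(result)  # ('ID5', 1.0)
```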
In the above technical solution, the person ID list corresponding to each camera is dynamically updated according to real-time face data from the cameras, and the population sub-library corresponding to each area is further determined according to the area where the cameras are located, so that the population sub-library always retains the most recent actually active population of the area.
The population database sorting method provided by the embodiments of the present application is described in detail above with reference to fig. 1 to 14, and the device embodiments of the present application are described in detail below with reference to fig. 15 and 16. It should be understood that the apparatus in the embodiments of the present application may perform the various methods in the embodiments of the present application; for the specific working processes of the following products, reference may be made to the corresponding processes in the foregoing method embodiments.
Fig. 15 is a schematic structural diagram of a population database sorting device according to an embodiment of the present application. The apparatus 1500 shown in fig. 15 is an example only, and the apparatus of the embodiments of the present application may further include other modules or units. As shown in fig. 15, the apparatus 1500 includes a processing unit 1520 and a storage unit 1530.
A processing unit 1520 configured to determine N first cameras within the first region, N being a positive integer.
The processing unit 1520 is further configured to determine N person sets according to the N first cameras, where the probability that a person in the ith person set of the N person sets appears at the ith camera is greater than zero, the ith person set of the N person sets corresponds to the ith camera, and the value of i is each value in [1, N].
The processing unit 1520 is further configured to determine a first population sub-library of the first region according to the N person sets.
The storage unit 1530 is configured to store the first population sub-library, where the first population sub-library is used to determine the person identity information corresponding to the first face data acquired by the N first cameras.
For example, the person set corresponding to the ith camera includes persons whose probability of appearing at the ith camera is greater than a certain threshold. The threshold may be zero or another specific value.
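As a minimal sketch (not part of the patent), the following Python snippet shows how the N per-camera person sets could be filtered by such a threshold and merged into the population sub-library of the region; the function names, the default threshold, and the dictionary layout are illustrative assumptions.

```python
# Illustrative construction of the first population sub-library from N person sets.

def person_set_for_camera(occurrence_prob, threshold=0.0):
    """occurrence_prob: dict person_id -> probability of appearing at this camera."""
    return {pid for pid, p in occurrence_prob.items() if p > threshold}

def first_population_sublibrary(per_camera_probs, threshold=0.0):
    """per_camera_probs: list of N dicts, one per first camera in the region."""
    sublibrary = set()
    for probs in per_camera_probs:
        sublibrary |= person_set_for_camera(probs, threshold)
    return sublibrary

# Example: two cameras in one region; "p3" has zero probability and is excluded.
cams = [{"p1": 0.8, "p3": 0.0}, {"p1": 0.2, "p2": 0.6}]
print(first_population_sublibrary(cams))  # {'p1', 'p2'} (set order may vary)
```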
Optionally, in the embodiments of the present application, different time periods may correspond to different population sub-libraries. For example, a day may be divided into 1440/T time periods, where T is the duration of each time period in minutes, and the 1440/T time periods may correspond to 1440/T population sub-libraries.
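For illustration, and assuming T is expressed in minutes and divides 1440 evenly (the patent does not fix the unit), a capture time could be mapped to its time-slice index as follows.

```python
# Illustrative mapping from a capture time to one of the 1440/T daily time slices.
from datetime import datetime

def time_slice_index(capture_time: datetime, T: int) -> int:
    minutes_since_midnight = capture_time.hour * 60 + capture_time.minute
    return minutes_since_midnight // T  # index in [0, 1440 // T - 1]

# Example: with T = 30 there are 48 slices; a capture at 08:45 falls in slice 17.
print(time_slice_index(datetime(2020, 6, 23, 8, 45), 30))
```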
In the above technical solution, the set of persons who may appear at the first camera is determined according to the person occurrence probability at the first camera, and the population sub-library corresponding to the area is determined according to the person set of each first camera. Because the person occurrence probability can be continuously updated, the population sub-library always retains the most recent actually active population in the area, which greatly reduces invalid person data compared with a static library. When the face 1:N comparison real-name labeling process is carried out, the hit rate of the one-pass comparison can be improved and secondary comparison greatly reduced, achieving low comparison resource consumption and short time consumption.
In addition, the first population sub-library contains only persons whose occurrence probability is greater than zero, that is, only persons who may appear at the first cameras, so the data volume in the first population sub-library can be reduced and the consumption of comparison resources lowered.
Optionally, the apparatus further comprises: an obtaining unit 1510, configured to obtain, before the first population sub-library of the first region is determined, second face data captured by M second cameras, where the second cameras are the cameras at which persons in the ith person set were located before region migration. The processing unit 1520 is further configured to determine, according to the second face data and a person migration probability, the probability that a person in the ith person set appears at the ith camera, where the person migration probability is the probability that the person migrates from the second camera to the ith first camera.
In the above technical solution, the person occurrence probability is updated according to real-time face data from the cameras, that is, the person ID list corresponding to each camera can be dynamically updated. Therefore, the population sub-library corresponding to each area is determined according to the area where the cameras are located, so that the population sub-library always retains the most recent actually active population of the area and, compared with a full static library, greatly reduces invalid person data. When the face 1:N comparison real-name labeling process is carried out, the hit rate of the one-pass comparison can be improved and secondary comparison greatly reduced, achieving low resource consumption and short time consumption.
Optionally, the processing unit 1520 is specifically configured to: comparing the similarity between the second face data and the face data in a second population sub-library to obtain a confidence coefficient of the second face data, wherein the confidence coefficient is used for indicating the confidence probability that the second face data and the face data in the second population sub-library belong to the same person, and the second population sub-library is a population sub-library corresponding to the second camera; and obtaining the probability that the personnel in the ith personal personnel set appear in the ith camera according to the confidence coefficient and the personnel migration probability.
Optionally, the person occurrence probability may be obtained by multiplying the confidence corresponding to a given piece of face data by the migration probability corresponding to the person.
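For illustration only, this optional multiplication could be written as the short sketch below; the function name and the sample values are assumptions, not figures from the patent.

```python
# Illustrative combination of comparison confidence and migration probability.

def occurrence_probability(confidence: float, migration_prob: float) -> float:
    """confidence: probability that the second face data belongs to this person;
    migration_prob: probability that the person migrates from the second camera
    to the i-th first camera."""
    return confidence * migration_prob

# Example: a 0.95-confidence capture at camera B and a 0.4 migration probability
# from B to camera A give an occurrence probability of about 0.38 at A.
print(occurrence_probability(0.95, 0.4))
```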
Optionally, the processing unit 1520 is further configured to: and determining the probability of the person migrating from the second camera to the ith first camera according to the historical spatiotemporal trajectory data of each person in the standing population library and/or the floating population library of the first area within a preset time period.
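One possible way (not specified by the patent) to derive this migration probability is to normalize transition counts extracted from the historical spatio-temporal trajectory data within the preset time period; the per-person camera-sequence layout assumed below is illustrative.

```python
# Illustrative estimation of camera-to-camera migration probabilities
# from historical trajectories restricted to a preset time period.
from collections import Counter

def migration_probabilities(trajectories):
    """trajectories: list of camera-ID sequences, one per person."""
    transition_counts = Counter()
    outgoing_totals = Counter()
    for track in trajectories:
        for src, dst in zip(track, track[1:]):
            transition_counts[(src, dst)] += 1
            outgoing_totals[src] += 1
    # P(migrate src -> dst) = transitions(src -> dst) / all transitions leaving src.
    return {
        (src, dst): count / outgoing_totals[src]
        for (src, dst), count in transition_counts.items()
    }

# Example: three recorded tracks; P(migrate B -> A) = 2/3.
tracks = [["B", "A", "C"], ["B", "A"], ["B", "C"]]
print(migration_probabilities(tracks)[("B", "A")])
```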
Optionally, the processing unit 1520 is further configured to: determining an initial probability that a person in the ith person set appears at the ith first camera based on a standing population pool and/or a floating population pool of the first region.
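As an illustration of one possible initialization (the patent does not give concrete values), the initial occurrence probability could be seeded from the standing and floating population libraries of the region; the 0.5 / 0.2 / 0.0 priors below are arbitrary assumptions.

```python
# Illustrative initial occurrence probability from the two population libraries.

def initial_occurrence_probability(person_id, standing_population, floating_population):
    if person_id in standing_population:
        return 0.5   # permanent residents of the region get a higher prior
    if person_id in floating_population:
        return 0.2   # registered floating population gets a lower prior
    return 0.0       # persons not registered in the region start at zero

print(initial_occurrence_probability("p1", {"p1", "p2"}, {"p3"}))  # 0.5
```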
The obtaining unit 1510 may be implemented by a transceiver. The processing unit 1520 may be implemented by a processor. The storage unit 1530 may be implemented by a memory. For specific functions and advantages of the obtaining unit 1510, the processing unit 1520 and the storage unit 1530, reference may be made to the related description of the method embodiment, and details are not described herein again.
Fig. 16 is a schematic structural diagram of a population database sorting device according to another embodiment of the present application. The apparatus 1600 shown in fig. 16 (the apparatus 1600 may specifically be a computer device) includes a memory 1601, a processor 1602, a communication interface 1603, and a bus 1604. The memory 1601, the processor 1602, and the communication interface 1603 are communicatively connected to each other via the bus 1604.
A processor 1602, configured to determine N first cameras within a first area, where N is a positive integer; determine N person sets according to the N first cameras, where the probability that a person in the ith person set of the N person sets appears at the ith camera is greater than zero, the ith person set of the N person sets corresponds to the ith camera, and the value of i is each value in [1, N]; and determine a first population sub-library of the first region according to the N person sets.
The memory 1601 is configured to store the first population sub-library, where the first population sub-library is used to determine the person identity information corresponding to the first face data acquired by the N first cameras.
The memory 1601 may be a read-only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM). The memory 1601 may store a program, and when the program stored in the memory 1601 is executed by the processor 1602, the processor 1602 is configured to perform the steps of the population database sorting method of the embodiments of the present application, for example, the steps of the method embodiment shown in fig. 3.
The processor 1602 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or instructions in the form of software. The processor described in the embodiments of the present application may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software modules may be located in a Random Access Memory (RAM), a flash memory, a read-only memory (ROM), a programmable ROM, an electrically erasable programmable memory, a register, or other storage media that are well known in the art. The storage medium is located in a memory, and a processor reads instructions in the memory and combines hardware thereof to complete the steps of the method.
The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a RAM, a flash memory, a ROM, a PROM, an EPROM, a register, or another storage medium well known in the art. The storage medium is located in the memory 1601, and the processor 1602 reads the information in the memory 1601 and, in conjunction with its hardware, performs the functions that need to be performed by the units included in the population database sorting device, or performs the methods according to the method embodiments of the present application.
Communication interface 1603 enables communication between apparatus 1600 and other devices or communication networks using transceiver means such as, but not limited to, a transceiver. For example, face data captured dynamically by a front-end camera may be acquired through the communication interface 1603.
The bus 1604 may include a pathway to transfer information between various components of the device 1600 (e.g., memory 1601, processor 1602, communication interface 1603).
It is noted that the processor in the apparatus 1600 in fig. 16 may correspond to the processing unit 1520 in the apparatus 1500 in fig. 15, the communication interface 1603 may correspond to the obtaining unit 1510, and the memory 1601 may correspond to the storing unit 1530.
It should be understood that the population database sorting device shown in the embodiments of the present application may be a server, for example, a server in the cloud, or may be a chip configured in a server in the cloud. The device may also be an electronic device, or a chip disposed in the electronic device.
It should be noted that although the apparatus 1600 described above shows only a memory, a processor, and a communication interface, in specific implementations, those skilled in the art will appreciate that the apparatus 1600 may also include other components necessary for proper operation. Also, those skilled in the art will appreciate that the apparatus 1600 may include hardware components for performing other additional functions, according to particular needs. Furthermore, those skilled in the art will appreciate that the apparatus 1600 may also include only those components necessary to implement the embodiments of the present application, and need not include all of the components shown in fig. 16.
The specific operation and beneficial effects of the apparatus 1600 can be referred to the related descriptions in the above method embodiments, and are not described herein again.
It should be appreciated that various aspects or features of the application may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques. The term "article of manufacture" as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. For example, computer-readable media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips, etc.), optical disks (e.g., Compact Disk (CD), Digital Versatile Disk (DVD), etc.), smart cards, and flash memory devices (e.g., erasable programmable read-only memory (EPROM), card, stick, or key drive, etc.). In addition, various storage media described herein can represent one or more devices and/or other machine-readable media for storing information. The term "machine-readable medium" can include, without being limited to, wireless channels and various other media capable of storing, containing, and/or carrying instruction(s) and/or data.
It should also be understood that in the embodiments of the present application, "at least one" and "one or more" mean one, two, or more. The term "and/or" in this application describes only an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: only A exists, both A and B exist, or only B exists. In addition, the character "/" herein generally indicates that the former and latter associated objects are in an "or" relationship.
In the embodiments of the present application, the sequence numbers of the processes do not mean the execution sequence, and the execution sequence of the processes should be determined by the functions and the inherent logic of the processes, and should not constitute any limitation to the implementation process of the embodiments of the present application.
In the above embodiments, all or part of the implementation may be realized by software, hardware, firmware, or any combination thereof. When implemented in software, the implementation may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present application are generated in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via a wired connection (e.g., coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless connection (e.g., infrared, radio, or microwave). The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a digital video disk (DVD)), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. A population database sorting method, comprising:
determining N first cameras in a first area, wherein N is a positive integer;
determining N person sets according to the N first cameras, wherein the probability that a person in the ith person set of the N person sets appears at the ith camera is greater than zero, the ith person set of the N person sets corresponds to the ith camera, and the value of i is each value in [1, N];
determining a first population sublibrary of the first region according to the N person sets;
and storing the first population sub-library, wherein the first population sub-library is used for determining the person identity information corresponding to the first face data acquired by the N first cameras.
2. The method of claim 1, wherein before the first population sub-library of the first region is determined, the method further comprises:
acquiring second face data captured by M second cameras, wherein the second cameras are the cameras at which persons in the ith person set were located before region migration;
and determining the probability that the person in the ith person set appears at the ith camera according to the second face data and a person migration probability, wherein the person migration probability is the probability that the person migrates from the second camera to the ith first camera.
3. The method of claim 2, wherein determining the probability that the person in the ith person set appears at the ith camera according to the second face data and the person migration probability comprises:
comparing the similarity between the second face data and the face data in a second population sub-library to obtain a confidence coefficient of the second face data, wherein the confidence coefficient is used for indicating the confidence probability that the second face data and the face data in the second population sub-library belong to the same person, and the second population sub-library is a population sub-library corresponding to the second camera;
and obtaining the probability that the person in the ith person set appears at the ith camera according to the confidence and the person migration probability.
4. A method according to claim 2 or 3, characterized in that the method further comprises:
and determining the probability of the person migrating from the second camera to the ith first camera according to the historical spatiotemporal trajectory data of each person in the standing population library and/or the floating population library of the first area within a preset time period.
5. The method according to any one of claims 1 to 4, further comprising:
determining an initial probability that a person in the ith person set appears at the ith first camera based on a standing population pool and/or a floating population pool of the first region.
6. A population database sorting apparatus, comprising:
a processing unit for determining N first cameras within a first area, N being a positive integer;
the processing unit is further configured to determine N person sets according to the N first cameras, where a probability that a person in an ith person set in the N person sets appears in the ith camera is greater than zero, the ith person set in the N person sets corresponds to the ith camera, and a value of i is each value in [1, N ];
the processing unit is further used for determining a first population sub-library of the first area according to the N person sets;
and the storage unit is used for storing the first population sub-library, wherein the first population sub-library is used for determining the person identity information corresponding to the first face data acquired by the N first cameras.
7. The apparatus of claim 6, further comprising:
an obtaining unit, configured to obtain second face data captured by M second cameras before the first population sub-library of the first region is determined, wherein the second cameras are the cameras at which persons in the ith person set were located before region migration;
the processing unit is further configured to determine, according to the second face data and the person migration probability, a probability that a person in the ith person set appears in the ith camera, where the person migration probability is a probability that the person migrates from the second camera to the ith first camera.
8. The apparatus according to claim 7, wherein the processing unit is specifically configured to:
comparing the similarity between the second face data and the face data in a second population sub-library to obtain a confidence coefficient of the second face data, wherein the confidence coefficient is used for indicating the confidence probability that the second face data and the face data in the second population sub-library belong to the same person, and the second population sub-library is a population sub-library corresponding to the second camera;
and obtaining the probability that the person in the ith person set appears at the ith camera according to the confidence and the person migration probability.
9. The apparatus of claim 7 or 8, wherein the processing unit is further configured to:
and determining the probability of the person migrating from the second camera to the ith first camera according to the historical spatiotemporal trajectory data of each person in the standing population library and/or the floating population library of the first area within a preset time period.
10. The apparatus according to any one of claims 6 to 9, wherein the processing unit is further configured to:
determining an initial probability that a person in the ith person set appears at the ith first camera based on a standing population pool and/or a floating population pool of the first region.
11. A computer-readable storage medium having stored thereon a computer program or instructions which, when executed by a population database sorting apparatus, implement the method of any one of claims 1 to 5.
CN201910942816.9A 2019-09-30 2019-09-30 Population database sorting method and device Pending CN112579593A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910942816.9A CN112579593A (en) 2019-09-30 2019-09-30 Population database sorting method and device
PCT/CN2020/097668 WO2021063037A1 (en) 2019-09-30 2020-06-23 Person database partitioning method, and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910942816.9A CN112579593A (en) 2019-09-30 2019-09-30 Population database sorting method and device

Publications (1)

Publication Number Publication Date
CN112579593A true CN112579593A (en) 2021-03-30

Family

ID=75116850

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910942816.9A Pending CN112579593A (en) 2019-09-30 2019-09-30 Population database sorting method and device

Country Status (2)

Country Link
CN (1) CN112579593A (en)
WO (1) WO2021063037A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113807223B (en) * 2021-09-07 2024-04-09 南京中兴力维软件有限公司 Face cluster subclass merging method, device and equipment
CN115470250B (en) * 2022-11-15 2023-03-24 中关村科学城城市大脑股份有限公司 Personnel information processing method and device, electronic equipment and computer readable medium
CN116434313B (en) * 2023-04-28 2023-11-14 北京声迅电子股份有限公司 Face recognition method based on multiple face recognition modules
CN117313981A (en) * 2023-07-11 2023-12-29 厦门身份宝网络科技有限公司 Dynamic urban village population analysis method, device, equipment and readable storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
BR112015001949B1 (en) * 2012-07-31 2023-04-25 Nec Corporation IMAGE PROCESSING SYSTEM, IMAGE PROCESSING METHOD, AND STORAGE MEDIA
CN103353940B (en) * 2013-05-15 2016-12-07 吴玉平 A kind of recognition methods dynamically adjusting comparison sequence based on probability of occurrence and system
US20150324822A1 (en) * 2014-05-06 2015-11-12 Mastercard International Incorporated Predicting transient population based on payment card usage
CN109543566B (en) * 2018-11-05 2021-06-15 深圳市商汤科技有限公司 Information processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
WO2021063037A1 (en) 2021-04-08

Similar Documents

Publication Publication Date Title
CN112579593A (en) Population database sorting method and device
WO2021057797A1 (en) Positioning method and apparatus, terminal and storage medium
CN102959551B (en) Image-processing device
TWI740537B (en) Information processing method, device and storage medium thereof
WO2021063011A1 (en) Method and device for behavioral analysis, electronic apparatus, storage medium, and computer program
CN109740004B (en) Filing method and device
US20210357624A1 (en) Information processing method and device, and storage medium
CN101425133A (en) Human image retrieval system
CN109783685A (en) A kind of querying method and device
CN110969215A (en) Clustering method and device, storage medium and electronic device
CN111291682A (en) Method and device for determining target object, storage medium and electronic device
CN109902681B (en) User group relation determining method, device, equipment and storage medium
US11734343B1 (en) Hyperzoom attribute analytics on the edge
US11594043B1 (en) People and vehicle analytics on the edge
CN114078277A (en) One-person-one-file face clustering method and device, computer equipment and storage medium
CN101908057A (en) Information processing apparatus and information processing method
CN109784220B (en) Method and device for determining passerby track
CN111367956B (en) Data statistics method and device
CN113962326A (en) Clustering method, device, equipment and computer storage medium
KR102375145B1 (en) Integrated video data analysis and management system
CN114863364B (en) Security detection method and system based on intelligent video monitoring
WO2023124134A1 (en) File processing method and apparatus, electronic device, computer storage medium and program
CN111061916B (en) Video sharing system based on multi-target library image recognition
CN111145514A (en) Multi-dimensional early warning strategy method
CN112906725A (en) Method, device and server for counting people stream characteristics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination