CN111553288A - Passenger flow information processing method and device - Google Patents

Info

Publication number
CN111553288A
Authority
CN
China
Prior art keywords
face
visitor
images
feature
identity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010358806.3A
Other languages
Chinese (zh)
Inventor
李明耀
严石伟
蒋楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010358806.3A priority Critical patent/CN111553288A/en
Publication of CN111553288A publication Critical patent/CN111553288A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Library & Information Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention provides a passenger flow information processing method and apparatus, an electronic device, and a computer-readable storage medium. The method includes: acquiring a first face track corresponding to each first time interval; extracting the first face feature corresponding to each first face track; matching each first face feature against the face features bound to temporary identities in a first search library, to obtain the temporary identity of each visitor appearing in each first time interval; extracting the second face features corresponding to the second face tracks of visitors holding temporary identities; and matching each second face feature against the face features bound to permanent identities in a second search library, to obtain the permanent identity of each visitor appearing each day. By linking a visitor's same-day temporary identity with the visitor's permanent identity, the method and system support cross-day visitor identity queries and facilitate precise marketing and personalized recommendation for stores.

Description

Passenger flow information processing method and device
Technical Field
The present invention relates to the field of artificial intelligence, and in particular, to a method and an apparatus for processing passenger flow information, an electronic device, and a computer-readable storage medium.
Background
Artificial intelligence is the theory, method, technology, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. Artificial intelligence has developed rapidly and is now widely applied across industries.
Taking smart retail as an example, artificial intelligence technology has been applied to the promotion and deployment of smart retail store scenarios, empowering offline stores (e.g., with visitor reminders and advertisement pushing) and creating a new form of retail.
However, the smart retail solutions provided by the related art usually record only the customer's permanent identity, i.e., they provide permanent identity information alone, so their functionality is relatively limited.
Disclosure of Invention
Embodiments of the present invention provide a passenger flow information processing method and apparatus, an electronic device, and a computer-readable storage medium, which can simultaneously provide a visitor's same-day temporary identity and permanent identity and, by linking the two, support cross-day visitor identity queries.
The technical scheme of the embodiment of the invention is realized as follows:
the embodiment of the invention provides a passenger flow information processing method, which comprises the following steps:
acquiring a first face track corresponding to each first time interval;
extracting first face features corresponding to each first face track;
matching each first face feature with a face feature bound with a temporary identity in a first search library to obtain the temporary identity of a visitor appearing in each first time interval;
extracting second face features corresponding to second face tracks of visitors with temporary identities;
and matching each second face feature with the face feature bound with the permanent identity in the second search library to obtain the permanent identity of the visitor appearing every day.
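The five steps above amount to two rounds of 1:N matching against different search libraries. The following is a minimal, hypothetical sketch of that matching (the library contents, two-dimensional feature vectors, and the 0.6 similarity threshold are illustrative assumptions, not values from the patent):

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def best_match(feature, library, threshold=0.6):
    """Return the identity in `library` (id -> feature vector) whose stored
    feature is most similar to `feature`, or None if the best cosine
    similarity falls below `threshold`."""
    if not library:
        return None
    identity, stored = max(library.items(), key=lambda kv: cosine(feature, kv[1]))
    return identity if cosine(feature, stored) >= threshold else None

# Hypothetical search libraries: identities bound to face feature vectors.
first_library = {"tmp_07": [1.0, 0.0]}           # same-day temporary identities
second_library = {"perm_1234": [0.98, 0.199]}    # permanent identities

track_feature = [0.97, 0.24]   # feature extracted from one face track
temporary = best_match(track_feature, first_library)
permanent = best_match(track_feature, second_library)
print(temporary, permanent)    # the two identities can now be linked
```

A visitor whose feature matches in both libraries thus carries both a temporary and a permanent identity, which is what enables the cross-day query described above.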
An embodiment of the present invention provides a passenger flow information processing apparatus, including:
the acquisition module is used for acquiring a first face track corresponding to each first time interval;
the face feature extraction module is used for extracting a first face feature corresponding to each first face track;
the retrieval module is used for matching each first face feature with a face feature bound with a temporary identity in a first retrieval base to obtain the temporary identity of the visitor appearing in each first time interval;
the face feature extraction module is further used for extracting second face features corresponding to second face tracks of the visitors with the temporary identities;
the retrieval module is further used for matching each second face feature with the face feature bound with the permanent identity in the second retrieval library to obtain the permanent identity of the visitor appearing every day.
In the above scheme, the obtaining module is further configured to obtain a third face track corresponding to each second time interval; wherein the duration of the second time interval is less than the duration of the first time interval; the face feature extraction module is further configured to extract a third face feature corresponding to each third face track; the retrieval module is further configured to match each third face feature with a face feature stored in a third retrieval library to obtain a passenger flow volume corresponding to each second time interval; the device further comprises a deleting module for deleting the third face images with the quality scores smaller than the quality score threshold value in the plurality of third face images.
In the above scheme, the face feature extraction module is further configured to perform feature extraction on a plurality of third face images remaining after the deletion processing, so as to obtain a third face feature of each third face image; the device also comprises a sorting module which is used for sorting the third face features of the plurality of third face images according to the snapshot time sequence, sequentially determining the similarity between the first third face feature and the subsequent third face feature in the sorting and reserving all the third face features with the similarity larger than the similarity threshold.
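The time-ordered similarity filtering described above might be sketched as follows (a hypothetical illustration; the 0.8 threshold and two-dimensional features are assumptions for brevity):

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def filter_by_first(features_with_time, threshold=0.8):
    """Sort (snapshot_time, feature) pairs by snapshot time, then keep the
    earliest feature plus every later feature whose similarity to it
    exceeds `threshold`; later dissimilar captures are discarded."""
    ordered = [f for _, f in sorted(features_with_time, key=lambda p: p[0])]
    first = ordered[0]
    return [first] + [f for f in ordered[1:] if cosine(first, f) > threshold]

captures = [(2, [0.0, 1.0]),     # dissimilar to the earliest capture
            (1, [1.0, 0.0]),     # earliest capture
            (3, [0.95, 0.05])]   # similar to the earliest capture
kept = filter_by_first(captures)
print(len(kept))
```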
In the above scheme, the apparatus further includes a clustering module configured to cluster the plurality of third face tracks; the sorting module is further configured to sort, by quality score, the plurality of third face images corresponding to the third face tracks clustered as belonging to the same visitor, and to select the top-K third face images by quality score, where K is a positive integer; and to determine the average of the K third face features corresponding to the K third face images as the third face feature of the visitor.
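The top-K selection and feature averaging for a clustered visitor could look roughly like this (all values are illustrative; real face features would be high-dimensional vectors):

```python
def visitor_feature(images, k=3):
    """`images` is a list of (quality_score, feature_vector) pairs for one
    clustered visitor; average the features of the top-K images by quality
    score to get a single representative feature for the visitor."""
    top_k = sorted(images, key=lambda p: p[0], reverse=True)[:k]
    dims = len(top_k[0][1])
    return [sum(f[d] for _, f in top_k) / len(top_k) for d in range(dims)]

images = [(0.9, [1.0, 0.0]),
          (0.8, [0.8, 0.2]),
          (0.7, [0.9, 0.1]),
          (0.2, [0.0, 1.0])]   # low-quality capture, excluded from the top 3
print(visitor_feature(images, k=3))
```

Averaging over only the best-quality captures is one plausible reading of the scheme; it keeps a blurry or occluded snapshot from dragging down the representative feature.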
In the above scheme, the retrieval module is further configured to match a third face feature of each visitor with a face feature stored in a third retrieval library; when the matched maximum similarity is smaller than a similarity threshold value, determining that the visitor is a new visitor, and adding corresponding passenger flow information for the new visitor; when the matched maximum similarity is larger than a similarity threshold value, determining that the visitor is a present visitor, and comparing a first quality score of a face image of the visitor captured in the second time interval with a second quality score of the face image corresponding to the face feature matched in the third search library; when the first quality score is larger than the second quality score, replacing the face image of the visitor stored in the third search library with the face image of the visitor captured in the second time interval.
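The add-new-visitor / replace-better-image logic above can be sketched as follows (identifiers, image names, and the 0.6 similarity threshold are hypothetical):

```python
def register_or_update(library, best_id, best_sim, capture, sim_threshold=0.6):
    """`library` maps visitor id -> (quality_score, face_image); `capture`
    is the (quality_score, face_image) pair taken in this second time
    interval. Below the similarity threshold a new visitor is added;
    otherwise the returning visitor keeps whichever image scores higher."""
    if best_sim < sim_threshold:                  # new visitor
        new_id = "visitor_%d" % (len(library) + 1)
        library[new_id] = capture
        return new_id
    if capture[0] > library[best_id][0]:          # returning visitor with a
        library[best_id] = capture                # better-quality capture
    return best_id

lib = {"visitor_1": (0.5, "img_a.jpg")}
register_or_update(lib, "visitor_1", 0.9, (0.8, "img_b.jpg"))    # replace image
new_id = register_or_update(lib, None, 0.1, (0.7, "img_c.jpg"))  # new visitor
print(lib, new_id)
```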
In the above scheme, the clustering module is further configured to perform clustering processing on the plurality of first face tracks; determining quality scores of a plurality of first face images corresponding to first face tracks clustered as belonging to the same visitor; the sorting module is further configured to, when the number of first face images of which the quality scores are greater than the quality score threshold is greater than a number threshold in the plurality of first face images, perform quality score sorting on the plurality of first face images, and select L first face images ranked in the front; wherein L is a positive integer; determining an average value of L first facial features corresponding to the L first facial images as a first facial feature of the visitor; the deleting module is further configured to delete the first face trajectory of the visitor when the number of the first face images of which the quality scores are larger than the quality score threshold is smaller than a number threshold.
In the above scheme, the retrieval module is further configured to match the first face feature of each visitor against the face features bound to temporary identities in the first search library; when the maximum matched similarity is smaller than a similarity threshold, determine that the visitor is a new visitor, add a corresponding temporary identity for the new visitor, and store the temporary identity and the M face images of the new visitor whose quality scores are greater than the quality score threshold into the first search library, where M is a positive integer; and when the maximum matched similarity is larger than the similarity threshold, determine that the visitor has appeared before, sort, by quality score, the face images of the visitor captured in the first time interval together with the face images corresponding to the matched face features in the first search library, determine the top-N face images as the face images corresponding to the visitor, and store the N face images in the first search library, where N is a positive integer.
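Retaining only the best few face images per identity, as described above, might be implemented along these lines (a sketch; the value of N and the image names are assumptions):

```python
import heapq

def keep_top_n(stored, captured, n=5):
    """Merge an identity's stored (quality_score, image) pairs with newly
    captured ones and keep only the N highest-scoring images, sorted by
    descending quality score."""
    return heapq.nlargest(n, stored + captured, key=lambda p: p[0])

stored = [(0.9, "a.jpg"), (0.4, "b.jpg")]
captured = [(0.7, "c.jpg"), (0.3, "d.jpg")]
print(keep_top_n(stored, captured, n=3))
```

Bounding each identity to N images keeps the search library from growing without limit as the same visitor is captured across many intervals.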
In the above scheme, the sorting module is further configured to perform quality score sorting on a plurality of second face images corresponding to second face tracks belonging to the same visitor, and select T second face images with the top sorting order; wherein T is a positive integer; and determining the average value of the T second face features corresponding to the T second face images as the second face feature of the visitor.
In the above scheme, the retrieval module is further configured to match the second face feature of each visitor against the face features bound to permanent identities in the second search library; when the maximum matched similarity is smaller than a similarity threshold, determine that the visitor is a new visitor, add a corresponding permanent identity for the new visitor, and store the permanent identity and the S face images of the new visitor whose quality scores are greater than the quality score threshold into the second search library, where S is a positive integer; and when the maximum matched similarity is larger than the similarity threshold, determine that the visitor has appeared before, sort, by quality score, the face images of the visitor captured each day together with the face images corresponding to the matched face features in the second search library, determine the top-X face images as the face images corresponding to the visitor, and store the X face images in the second search library, where X is a positive integer.
An embodiment of the present invention provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the passenger flow information processing method provided by the embodiment of the invention when the executable instruction stored in the memory is executed.
The embodiment of the invention provides a computer-readable storage medium, which stores executable instructions and is used for causing a processor to execute the method for processing the passenger flow information.
The embodiment of the invention has the following beneficial effects:
extracting the face features of the visitors appearing in different time intervals through a face recognition technology, and matching the extracted face features of the visitors with the face features bound with temporary identities in a first retrieval library (namely a daily file retrieval library) so as to determine the daily temporary identity of each visitor; subsequently, for each visitor with the temporary identity of the current day, the face features corresponding to each visitor are matched with the face features bound with the permanent identities in the second search library (namely, the permanent file search library), so that the permanent identity of each visitor can be further determined, the temporary identity and the permanent identity of the visitor of the current day are communicated, and the inquiry of the identity of the visitor across days is realized.
Drawings
FIG. 1 is an alternative architecture diagram of a passenger flow application system provided by an embodiment of the invention;
FIG. 2 is an alternative structural diagram of a server according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of an alternative passenger flow information processing method according to an embodiment of the present invention;
fig. 4 is a schematic flow chart of another alternative passenger flow information processing method according to an embodiment of the present invention;
FIG. 5 is a schematic flow chart of another alternative passenger flow information processing method according to the embodiment of the present invention;
FIG. 6 is a schematic diagram showing a customer profile (left) and a customer flow in a specific area of a mall (right) according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of an alternative system framework for implementing a passenger flow information processing method according to an embodiment of the present invention;
FIG. 8 is a schematic flow chart of an alternative method for calculating a passenger flow in a specific area of a mall according to an embodiment of the present invention;
FIG. 9 is a schematic flow chart of an alternative method for determining the current temporary identity of a customer according to an embodiment of the present invention;
fig. 10 is a schematic flow chart of an alternative method for determining a permanent identity of a customer according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present invention, and all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the description that follows, the terms "first \ second \ third" are used merely to distinguish similar objects and do not denote a particular order. It should be understood that, where permitted, "first \ second \ third" may be interchanged in a specific order or sequence, so that the embodiments of the invention described herein can be practiced in orders other than those illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein is for the purpose of describing embodiments of the invention only and is not intended to be limiting of the invention.
Before further detailed description of the embodiments of the present invention, terms and expressions mentioned in the embodiments of the present invention are explained, and the terms and expressions mentioned in the embodiments of the present invention are applied to the following explanations.
1) 1:N face search: finding, in a large-scale face database, the one or more faces with the highest similarity to the face to be searched; search performance is related to the database size N.
2) Computer Vision (CV): the science of how to make machines "see"; more specifically, using cameras and computers in place of human eyes to recognize, track, and measure targets, and further performing image processing so that images become more suitable for human observation or instrument detection.
3) Artificial Intelligence (AI): a technical science that studies and develops the theories, methods, techniques, and application systems for simulating, extending, and expanding human intelligence.
4) Pedestrian Re-identification (ReID): a technique that uses computer vision to determine whether a specific pedestrian is present in an image or video sequence. It is widely regarded as a sub-problem of image retrieval: given a monitored pedestrian image, the same pedestrian is retrieved across devices. It overcomes the visual limitations of a fixed camera, can be combined with pedestrian detection/tracking techniques, and is widely applied in fields such as intelligent video surveillance and intelligent security.
5) Bullet camera: often called a "gun camera" because of its shape. Its mounting position is fixed and it can face only a single monitored spot, so its monitoring direction is limited, but its capture quality is high; it is generally used to capture faces and human bodies.
6) Dome camera: often called a "ball camera" because of its shape. Its monitoring range is much larger than that of a fixed bullet camera, and it can generally rotate 360 degrees, so it can monitor a large area; however, its capture quality is lower, and it is generally used to capture human bodies.
7) Passenger flow volume (person-times): counted in captured heads; for example, the number of customer heads captured in a specific area of a mall.
8) Passenger flow headcount: the number of people after deduplication by customer identity; the basic unit is the number of distinct customer identities (natural persons) captured, for example, in a specific area of a mall.
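Term 1) above, 1:N face search, can be illustrated with a brute-force ranking whose cost grows linearly with the library size N (a minimal sketch with tiny two-dimensional features; real systems typically use high-dimensional features and approximate nearest-neighbor indexes):

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def search_1_to_n(query, gallery, top_k=1):
    """Rank all N gallery features by similarity to the query and return
    the top-K (index, similarity) pairs; every query touches all N
    entries, which is why performance depends on the database size N."""
    sims = [(i, cosine(query, g)) for i, g in enumerate(gallery)]
    sims.sort(key=lambda p: p[1], reverse=True)
    return sims[:top_k]

gallery = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7], [0.99, 0.14]]  # N = 4 faces
query = [1.0, 0.1]
print(search_1_to_n(query, gallery, top_k=2))
```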
At present, artificial intelligence technology has extensive application in the field of intelligent retail, and a large part of the artificial intelligence technology is applied to popularization and landing of scenes of intelligent retail stores, enables offline stores and creates new retail.
However, in developing the embodiments of the present invention, it was found that the customer identity file in the mall smart retail solutions provided by the related art is generally established as follows: first, all face snapshot data are read from a database; then features are extracted from the read face snapshot data to obtain the corresponding face features; and finally, retrieval is performed in a permanent identity archive based on the obtained face features, thereby completing the addition or update of the customer's permanent identity. That is, the solution provided by the related art provides only the customer's permanent identity information, so its functionality is relatively limited.
In view of the foregoing problems, embodiments of the present invention provide a method, an apparatus, an electronic device, and a computer-readable storage medium for processing passenger flow information, which can provide a temporary identity of a visitor on the same day and a permanent identity of the visitor at the same time, and support cross-day visitor identity query by communicating the temporary identity and the permanent identity of the visitor on the same day.
An exemplary application of the passenger flow information processing device provided by the embodiment of the present invention is described below, and the passenger flow information processing device provided by the embodiment of the present invention may be a server or a server cluster.
It should be noted that the server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, and big data and artificial intelligence platforms; this is not limited herein.
Next, an exemplary application when the passenger flow information processing apparatus is implemented as a server will be described. Referring to fig. 1, fig. 1 is an alternative architecture diagram of a passenger flow application system 100 according to an embodiment of the present invention. The passenger flow application system 100 includes: the server 200, the network 300, the terminal 400, and the database 500 will be described separately.
The database 500 is used for storing a plurality of face track data obtained by performing face recognition processing on a video frame image obtained by performing real-time snapshot on a face appearing in a target area to be analyzed through a camera, and information such as snapshot time and snapshot place. In some embodiments, the target area to be analyzed may be a specific area of a mall, such as an atrium of the mall, an entrance, an exit, and other critical locations.
The server 200 reads the stored plurality of face trajectory data from the database 500 at set time intervals (for example, the face trajectory data may be read from the database 500 once every half hour), and determines the current-day temporary identity of the visitor appearing in different time intervals and the permanent identity of each visitor based on the read plurality of face trajectory data (a process of determining the current-day temporary identity and the permanent identity of the visitor will be described in detail later), and then the server 200 transmits the determined current-day temporary identity possessed by the visitor appearing in different time intervals and the permanent identity information of the visitor to the terminal 400 through the network 300.
The network 300 is used as a medium for communication between the server 200 and the terminal 400, and the network 300 may be a wide area network or a local area network, or a combination of both.
The terminal 400 runs with a client 410, and displays temporary identities of visitors appearing in different time intervals and permanent identity information of each visitor issued by the server 200 in a graphical interface of the client 410.
The passenger flow application system provided by the embodiment of the invention can be widely applied to passenger flow analysis of various scenes, for example, in the field of intelligent retail, temporary identities of all visitors appearing in specific areas of a shopping mall in different time intervals and permanent identity information of all visitors are analyzed, so that the shopping mall can be helped to adjust a business recruitment policy, perform shopping diversion, make a scientific rent strategy and the like; in the field of intelligent security, the temporary identity of each visitor and the permanent identity information of each visitor in the public area of the community in different time intervals are analyzed, so that the property can be helped to determine whether suspicious visitors exist in the community, corresponding preventive measures are executed, and the safety of the community is protected.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a server 200 according to an embodiment of the present invention, where the server 200 shown in fig. 2 includes: at least one processor 210, memory 240, at least one network interface 220. The various components in server 200 are coupled together by a bus system 230. It is understood that the bus system 230 is used to enable connected communication between these components. The bus system 230 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 230 in fig. 2.
The processor 210 may be an integrated circuit chip having signal processing capabilities.
The memory 240 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 240 optionally includes one or more storage devices physically located remote from processor 210.
The memory 240 includes either volatile memory or nonvolatile memory, and may include both volatile and nonvolatile memory.
In some embodiments, memory 240 is capable of storing data, examples of which include programs, modules, and data structures, or subsets or supersets thereof, to support various operations, as exemplified below.
An operating system 241, including system programs for handling various basic system services and for performing hardware related tasks;
a network communication module 242 for reaching other computing devices via one or more (wired or wireless) network interfaces 220;
in some embodiments, the passenger flow information processing apparatus provided by the embodiment of the present invention may be implemented in software, and fig. 2 shows the passenger flow information processing apparatus 243 stored in the memory 240, which may be software in the form of programs and plug-ins, and includes the following software modules: an acquisition module 2431, a face feature extraction module 2432 and a retrieval module 2433, which are logical and thus can be arbitrarily combined or further split depending on the functions implemented. The functions of the respective modules will be explained below.
The following describes a passenger flow information processing method provided by an embodiment of the present invention, with reference to an exemplary application of the passenger flow information processing apparatus provided by an embodiment of the present invention, when the passenger flow information processing apparatus is implemented as a server.
Referring to fig. 3, fig. 3 is an alternative flow chart of a passenger flow information processing method according to an embodiment of the present invention, which will be described with reference to the steps shown in fig. 3.
In step S101, the server acquires a first face trajectory corresponding to each first time interval.
In some embodiments, the server may first divide each day according to a first time granularity to obtain a plurality of first time intervals, and then obtain a first face track corresponding to each first time interval.
For example, taking a smart community as an example, a server may first divide a day into 24 first time intervals with 1 hour as the first time granularity, and then respectively obtain the first face track corresponding to each first time interval, that is, respectively obtain the first face tracks corresponding to 0:00-1:00 (where the first face tracks corresponding to 0:00-1:00 refer to the plurality of face tracks obtained by decoding and performing face recognition processing on video captured at 0:00-1:00 by cameras deployed in the community); the first face tracks corresponding to 1:00-2:00; to 2:00-3:00; and so on.
For example, taking a smart retail store scene as an example, the server first divides the business hours of the store according to the first time granularity to obtain a plurality of first time intervals, and then obtains the first face track data corresponding to each first time interval. For example, assuming that the business hours of a mall are 8:00-23:00, the server divides the business hours with 1 hour as the first time granularity to obtain 15 first time intervals, each lasting 1 hour; then, for the 15 first time intervals obtained after division, the server respectively obtains the first face track corresponding to each interval, that is, the first face tracks corresponding to 8:00-9:00 (where the first face tracks corresponding to 8:00-9:00 refer to the plurality of face tracks obtained after decoding and face recognition processing of video shot at 8:00-9:00 by cameras deployed in a specific area of the mall); the first face tracks corresponding to 9:00-10:00; to 10:00-11:00; and so on.
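Dividing business hours into first time intervals, as in the mall example above, can be sketched as follows (the date is arbitrary; the 8:00-23:00 hours and 1-hour granularity come from the example itself):

```python
from datetime import datetime, timedelta

def divide_intervals(start, end, granularity=timedelta(hours=1)):
    """Split business hours [start, end) into first time intervals of the
    given granularity; the last interval is clipped to `end`."""
    intervals, t = [], start
    while t < end:
        intervals.append((t, min(t + granularity, end)))
        t += granularity
    return intervals

open_time = datetime(2020, 4, 29, 8, 0)    # mall opens at 8:00
close_time = datetime(2020, 4, 29, 23, 0)  # mall closes at 23:00
intervals = divide_intervals(open_time, close_time)
print(len(intervals))   # 15 one-hour first time intervals
```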
According to the embodiment of the invention, each day is divided into a plurality of first time intervals according to the first time granularity, and the first face track corresponding to each first time interval is obtained, so that the temporary identity information of the visitors appearing in each first time interval can subsequently be determined based on the corresponding first face tracks.
In step S102, the server extracts a first face feature corresponding to each first face trajectory.
In some embodiments, before extracting the first face features corresponding to each first face track, the server may also perform quality score filtering on each first face track. The specific process of quality score filtering is as follows: the server first determines a quality score for each of the face snapshot images corresponding to each first face track, and then filters out the face snapshot images whose quality scores are below a quality score threshold. This reduces the number of face images participating in retrieval without affecting subsequent recognition accuracy, thereby improving processing performance.
For example, assuming that the first face track 1 corresponds to 10 face snapshot images, the server first determines the quality scores of these 10 images and then filters out those whose quality scores fall below the quality score threshold. The quality score of a face snapshot image can be determined by a multi-index evaluation method, whose basic idea is as follows: the quality of the face image is evaluated against several indices, such as the contrast, brightness, and sharpness of the face image and the face position in the image, to obtain an evaluation coefficient for each index; the coefficients are then combined by a weighted sum according to the weight of each index, yielding the final quality score of each face image.
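The weighted multi-index evaluation can be sketched as follows (a minimal Python sketch; the concrete index names, weights, and threshold are illustrative assumptions — the embodiment does not prescribe them):

```python
# Hypothetical per-index evaluation coefficients, each normalized to [0, 1].
# In practice these would be computed from the image (contrast, brightness,
# sharpness, face position); here they are supplied directly for illustration.
INDEX_WEIGHTS = {
    "contrast": 0.2,
    "brightness": 0.2,
    "sharpness": 0.4,
    "face_position": 0.2,
}

def quality_score(coefficients: dict) -> float:
    """Weighted sum of the per-index evaluation coefficients."""
    return sum(INDEX_WEIGHTS[name] * value for name, value in coefficients.items())

def filter_by_quality(images: list, threshold: float) -> list:
    """Keep only the snapshot images whose quality score reaches the threshold."""
    return [img for img in images if quality_score(img["coefficients"]) >= threshold]

snapshots = [
    {"id": 1, "coefficients": {"contrast": 0.9, "brightness": 0.8, "sharpness": 0.9, "face_position": 0.7}},
    {"id": 2, "coefficients": {"contrast": 0.3, "brightness": 0.4, "sharpness": 0.2, "face_position": 0.5}},
]
kept = filter_by_quality(snapshots, threshold=0.6)
print([img["id"] for img in kept])  # [1]
```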
In some embodiments, after the quality score filtering, the server may perform feature extraction on each first face trajectory after the quality score filtering to obtain corresponding first face features, and then the server may further perform purification processing on each first face trajectory to further reduce the number of face snapshot images participating in subsequent retrieval.
The main purpose of the purification processing is to ensure that each face track corresponds to a single natural person. The specific process is as follows: for each first face track after the filtering processing, the plurality of first face features corresponding to the track are sorted according to the snapshot time, the similarity between the first feature in the sorted order and each subsequent feature is computed in turn, and all first face features whose similarity is greater than the similarity threshold are retained.
For example, taking the first face track 1, suppose that 7 face images remain after the quality score filtering. The 7 corresponding first face features are sorted by snapshot time and denoted T1, T2, ..., T7, and the similarities between T1 and each of T2, T3, ..., T7 are computed in turn. Assuming these similarities are 98%, 95%, ..., 75%, and 66%, and the similarity threshold is set to 70%, only T1 to T6 are retained for the first face track 1. In this way, on one hand, each face track is guaranteed to correspond to a single natural person, improving the algorithm's effectiveness; on the other hand, the number of face snapshot images per first face track is further reduced, lowering the complexity and resource consumption of subsequent computation and improving processing performance.
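The purification step can be sketched as follows (a minimal Python sketch; the toy similarity lookup stands in for a real feature-similarity computation such as cosine similarity, and mirrors the T1..T7 example above):

```python
def purify_track(features, similarity, threshold):
    """Keep the first (earliest) feature plus every later feature whose
    similarity to it exceeds the threshold. `features` must already be
    sorted by snapshot time."""
    if not features:
        return []
    anchor = features[0]
    return [anchor] + [f for f in features[1:] if similarity(anchor, f) > threshold]

# Illustrative similarities, mirroring the example: T1 vs T2..T7.
sims = {"T2": 0.98, "T3": 0.95, "T4": 0.90, "T5": 0.80, "T6": 0.75, "T7": 0.66}
similarity = lambda a, b: sims[b]  # toy lookup instead of a real similarity metric
kept = purify_track(["T1", "T2", "T3", "T4", "T5", "T6", "T7"], similarity, 0.70)
print(kept)  # ['T1', 'T2', 'T3', 'T4', 'T5', 'T6']
```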
Here, with the above embodiment, after performing quality score filtering, feature extraction, and purification on each first face track, the server may further perform clustering on the resulting first face tracks, and then determine the quality scores of the plurality of first face images corresponding to the first face tracks clustered as belonging to the same visitor. When the number of first face images whose quality score exceeds the quality score threshold is greater than a number threshold, the plurality of first face images are sorted by quality score and the top L first face images are selected, where L is a positive integer; the average of the L first face features corresponding to the L first face images is then determined as the first face feature of the visitor.
For example, in the smart retail store scenario, assume the current first time interval is 8:00-9:00. After acquiring the plurality of first face tracks corresponding to 8:00-9:00, the server first performs quality score filtering, feature extraction, and purification on each track, and then calls a face clustering microservice to cluster the processed tracks. First face tracks grouped into the same cluster are, in practice, face tracks of the same customer, so the clustering reduces the data processing scale and improves performance.
In addition, since the purpose of the embodiment of the present invention is to determine the temporary identity of the visitors appearing in each first time interval, after the plurality of first face tracks have been clustered into different clusters, the validity of each cluster must also be verified, that is, whether the number of face snapshot images in the cluster whose quality score is greater than the quality score threshold exceeds the number threshold.
For example, still in the smart retail store scenario, after clustering the quality-filtered, feature-extracted, and purified first face tracks corresponding to 8:00-9:00 into different clusters, the server further determines the validity of each cluster. Taking cluster A as an example, assume cluster A contains the first face tracks 1-8. The server first determines the quality scores of the face snapshot images corresponding to these tracks and checks whether the number of images whose quality score exceeds the quality score threshold exceeds the number threshold. If it does, cluster A is determined to be a valid cluster; if it does not (i.e., cluster A contains many low-quality snapshots), cluster A is deleted. By judging the validity of the clusters formed after clustering, invalid clusters are filtered out directly, avoiding subsequent computation and further reducing the computation scale.
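The cluster validity check and the top-L feature averaging can be sketched together as follows (a minimal NumPy sketch; the record layout and threshold values are illustrative assumptions):

```python
import numpy as np

def visitor_feature(cluster, quality_threshold, count_threshold, top_l):
    """Return the averaged face feature of a cluster's top-L images,
    or None if the cluster is invalid and should be deleted."""
    good = [img for img in cluster if img["quality"] > quality_threshold]
    if len(good) <= count_threshold:
        return None  # invalid cluster: too many low-quality snapshots
    good.sort(key=lambda img: img["quality"], reverse=True)
    top = good[:top_l]
    return np.mean([img["feature"] for img in top], axis=0)

cluster_a = [
    {"quality": 0.9, "feature": np.array([1.0, 0.0])},
    {"quality": 0.8, "feature": np.array([0.0, 1.0])},
    {"quality": 0.4, "feature": np.array([5.0, 5.0])},  # filtered out by quality
]
feat = visitor_feature(cluster_a, quality_threshold=0.6, count_threshold=1, top_l=2)
print(feat)  # [0.5 0.5]
```

The same averaging pattern applies to the top-K third face features (step S206) and the top-T second face features (step S104) described later.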
In step S103, the server matches each first face feature with a face feature to which a temporary identity has been bound in the first search library, so as to obtain a temporary identity of a visitor appearing in each first time interval.
In some embodiments, matching each first face feature with the face features bound to temporary identities in the first search library to obtain the temporary identity of the visitors appearing in each first time interval may be implemented as follows: the first face feature of each visitor is matched against the face features bound to temporary identities in the first search library. When the maximum matched similarity is smaller than the similarity threshold, the visitor is determined to be a new visitor; a corresponding temporary identity is added for the new visitor, and the temporary identity together with the M face images of the new visitor whose quality scores exceed the quality score threshold are stored in the first search library, where M is a positive integer. When the maximum matched similarity is greater than the similarity threshold, the visitor is determined to be a visitor who has appeared before, and the temporary identity bound to the matched face feature is determined as the temporary identity of the visitor.
In the smart retail store scenario, for example, taking the valid cluster A (corresponding to customer 1), the first face feature corresponding to the valid cluster A is matched against the face features of all bound temporary identities in the first search library, and it is judged whether the maximum returned similarity exceeds the similarity threshold. When it does, customer 1 corresponding to the valid cluster A is determined to be a customer who has appeared in the specific area of the store before, and the temporary identity corresponding to the matched face feature is determined as customer 1's temporary identity. When it does not, customer 1 is determined to be a new customer; corresponding temporary identity information is added for the new customer, and the newly added temporary identity information and the face images of the new customer are stored in the first search library.
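The matching logic can be sketched as follows (a minimal NumPy sketch using cosine similarity; the library layout, threshold value, and identity naming scheme are assumptions of this illustration):

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def assign_temporary_identity(feature, library, threshold, next_id):
    """Match `feature` against a {identity: feature} library. Return the
    identity of the best match above the threshold, or enroll a new one."""
    best_id, best_sim = None, -1.0
    for identity, lib_feature in library.items():
        sim = cosine(feature, lib_feature)
        if sim > best_sim:
            best_id, best_sim = identity, sim
    if best_sim > threshold:
        return best_id            # visitor who has appeared before
    library[next_id] = feature    # new visitor: bind a fresh temporary identity
    return next_id

library = {"tmp-001": np.array([1.0, 0.0])}
same = assign_temporary_identity(np.array([0.99, 0.05]), library, 0.9, "tmp-002")
new = assign_temporary_identity(np.array([0.0, 1.0]), library, 0.9, "tmp-002")
print(same, new)  # tmp-001 tmp-002
```

The same match-or-enroll pattern underlies steps S105 (permanent identities, second search library) and S108 (passenger flow, third search library).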
In some embodiments, the server further needs to ensure that the face images of a customer stored in the first search library have the highest quality scores. Still taking the above embodiment as an example, when it is determined that the maximum returned similarity exceeds the similarity threshold, the server further compares the quality scores of the face images of customer 1 captured between 10:00 and 11:00 with the quality scores of the face images corresponding to the matched face feature in the first search library (i.e., the face images of customer 1 already stored there). When the quality score of a face image of customer 1 captured between 10:00 and 11:00 is greater than that of the stored face image, the stored face image is replaced with the one captured between 10:00 and 11:00.
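This keep-the-best-image rule can be sketched as follows (a minimal Python sketch; the record layout is an assumption of this illustration):

```python
def maybe_replace_library_image(library: dict, visitor_id: str, new_image: dict):
    """Keep only the highest-quality face image per visitor in the library:
    replace the stored image only when the newly captured one scores higher."""
    stored = library.get(visitor_id)
    if stored is None or new_image["quality"] > stored["quality"]:
        library[visitor_id] = new_image

library = {"cust-1": {"quality": 0.7, "path": "img_a.jpg"}}
maybe_replace_library_image(library, "cust-1", {"quality": 0.9, "path": "img_b.jpg"})
print(library["cust-1"]["path"])  # img_b.jpg (higher quality, replaced)
maybe_replace_library_image(library, "cust-1", {"quality": 0.5, "path": "img_c.jpg"})
print(library["cust-1"]["path"])  # img_b.jpg (lower quality, kept)
```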
It should be noted that the face feature data bound to temporary identities in the first search library changes dynamically. For example, when the temporary identities of the customers appearing in a specific area of the mall between 10:00 and 11:00 need to be determined, the face feature data stored in the first search library is that of all customers who appeared in the specific area of the mall before 10:00; when the temporary identities of the customers appearing between 13:00 and 14:00 need to be determined, it is that of all customers who appeared there before 13:00.
According to the embodiment of the invention, the first face track corresponding to each first time interval is obtained, the first face features corresponding to each first face track are extracted, and then each first face feature is matched with the face features bound with the temporary identity in the first retrieval library, so that the temporary identity information of the visitor appearing in each first time interval can be determined.
In some embodiments, referring to fig. 4, fig. 4 is another optional flowchart of the passenger flow information processing method according to the embodiment of the present invention, and step S106 and step S107 may be further included before step S101 shown in fig. 3.
In step S106, the server obtains a third face track corresponding to each second time interval; wherein the duration of the second time interval is less than the duration of the first time interval.
In some embodiments, the passenger flow information processing method provided in the embodiments of the present invention may further calculate the passenger flow volume of the target area to be analyzed in different time periods. For example, the server first divides each day according to a second time granularity (since passenger flow counting is a task with high real-time requirements, the second time granularity should be short, e.g., 5 or 10 minutes) to obtain a plurality of second time intervals, and then obtains the third face track corresponding to each second time interval.
For example, in the smart retail store scenario, the server first divides the store's business hours using 10 minutes as the second time granularity to obtain a plurality of second time intervals, and then obtains the third face track data corresponding to each interval. Assuming the current second time interval is 9:00-9:10, the third face tracks corresponding to 9:00-9:10 are the face tracks obtained by decoding and performing face recognition on the video captured between 9:00 and 9:10 by the cameras deployed in a specific area of the store.
In step S107, the server extracts a third face feature corresponding to each third face track.
In some embodiments, step S107 shown in fig. 4 may be implemented through steps S201 to S206 shown in fig. 5, which will be explained below in conjunction with fig. 5.
In step S201, the server performs quality score filtering on each third face track.
Here, the specific implementation of the quality score filtering on each third face track is similar to the quality score filtering on each first face track in step S102 and can be implemented with reference to the description there; it is not repeated in the embodiment of the present invention.

In step S202, the server performs feature extraction on each third face track after the filtering processing.
Here, feature extraction of a face image refers to the process of processing the face image according to a certain algorithm, extracting the corresponding feature information, and converting that information into a feature vector. For example, geometric features of the face, such as the shapes of the eyes, nose, and mouth and the geometric relationships between them, may be extracted and converted into a corresponding feature vector, and the resulting vector is determined as the face feature of the face image.
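The geometric-feature idea can be sketched as follows (a toy NumPy sketch; the landmark positions and the choice of distance ratios are assumptions for illustration — a production system would use a learned embedding or a landmark model rather than hand-picked distances):

```python
import numpy as np

def geometric_feature(landmarks: dict) -> np.ndarray:
    """Convert inter-landmark distances into a scale-invariant feature vector,
    normalized by the inter-eye distance."""
    le, re = np.array(landmarks["left_eye"]), np.array(landmarks["right_eye"])
    nose = np.array(landmarks["nose"])
    lm, rm = np.array(landmarks["mouth_left"]), np.array(landmarks["mouth_right"])
    eye_dist = np.linalg.norm(le - re)  # normalization baseline
    return np.array([
        np.linalg.norm(nose - (le + re) / 2),  # eye midpoint to nose tip
        np.linalg.norm(lm - rm),               # mouth width
        np.linalg.norm(nose - (lm + rm) / 2),  # nose tip to mouth midpoint
    ]) / eye_dist

# Hypothetical 2-D landmarks in pixel coordinates.
landmarks = {"left_eye": (30, 40), "right_eye": (70, 40),
             "nose": (50, 60), "mouth_left": (38, 78), "mouth_right": (62, 78)}
print(geometric_feature(landmarks))  # [0.5  0.6  0.45]
```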
In step S203, the server performs a refining process on each third face track.
Here, a specific implementation process of performing the purification processing on each third face track is similar to the implementation process of performing the purification processing on each first face track in step S102, and may be implemented with reference to the description of step S102, and no further description is given here in the embodiment of the present invention.
In step S204, the server performs clustering processing on the plurality of refined third face tracks.
Here, with the above embodiment, after performing quality score filtering, feature extraction, and purification on each third face track, the server clusters the resulting third face tracks. Third face tracks grouped into the same cluster are, in practice, face tracks of the same customer, so the clustering reduces the data processing scale and improves performance.
In step S205, the server sorts the plurality of third face images corresponding to the third face tracks clustered as belonging to the same visitor according to the quality scores, and selects K third face images with the top quality scores; wherein K is a positive integer.
For example, in the smart retail store scenario, taking cluster A as an example, assume cluster A contains the third face tracks 1 to 4. The server first determines the quality scores of the face images corresponding to these tracks, then sorts all the face images by quality score and selects the K face images with the highest quality scores (for example, the top 20; the value of K may be determined comprehensively from the quality scores and the number of the face images contained in each cluster).
In step S206, the server determines an average value of K third face features corresponding to the K third face images as a third face feature of the visitor.
Here, taking cluster A as an example, after obtaining the K face images with the highest quality scores in cluster A, the server determines the average of the K corresponding face features as the third face feature of cluster A. In this way, the third face features corresponding to different clusters, i.e., to different customers, can be obtained.
In step S108, the server matches each third face feature with a face feature stored in a third search library to obtain a passenger flow volume corresponding to each second time interval.
In some embodiments, the server matches each third face feature with the face features stored in the third search library to obtain the passenger flow volume corresponding to each second time interval, which may be implemented as follows: the third face feature of each visitor is matched against the face features stored in the third search library; when the maximum matched similarity is smaller than the similarity threshold, the visitor is determined to be a new visitor, and corresponding passenger flow information is added for the new visitor; when the maximum matched similarity is greater than the similarity threshold, the visitor is determined to be a visitor who has appeared before.
Taking the above embodiment as an example, for cluster A (corresponding to customer 1), the third face feature corresponding to cluster A is matched against the face features stored in the third search library, and it is judged whether the maximum returned similarity exceeds the similarity threshold. When it does, customer 1 corresponding to cluster A is determined to be a customer who has appeared in the specific area of the store before; when it does not, customer 1 is determined to be a new customer, corresponding passenger flow information is added for the new customer, and the newly added passenger flow information is updated into the third search library.
In some embodiments, the server also needs to ensure that the face images of a customer stored in the third search library have the highest quality scores. The specific implementation is similar to ensuring the highest-quality face images in the first search library in step S103 and can be implemented with reference to the description there; it is not repeated here.
It should be noted that, because passenger flow counting is a task with high real-time requirements, the face feature data stored in the third search library must be highly timely. For example, when the passenger flow of a specific area of the mall between 10:00 and 10:10 needs to be determined, the face feature data stored in the third search library may be that of the preceding 15 minutes, i.e., the face features of all customers who appeared in the specific area between 9:45 and 10:00; when the passenger flow between 13:00 and 13:10 needs to be determined, it may be the face features of all customers who appeared there between 12:45 and 13:00.
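The timeliness constraint on the third search library can be sketched as a sliding-window eviction (a minimal Python sketch; the 15-minute window and the record layout are assumptions taken from the example above):

```python
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=15)  # retain only recent features, per the example

def evict_stale(library: list, now: datetime) -> list:
    """Drop face feature records captured outside the sliding window."""
    return [rec for rec in library if now - rec["captured_at"] <= WINDOW]

now = datetime(2020, 4, 28, 10, 0)
library = [
    {"id": "cust-1", "captured_at": datetime(2020, 4, 28, 9, 50)},  # inside window
    {"id": "cust-2", "captured_at": datetime(2020, 4, 28, 9, 30)},  # stale
]
fresh = evict_stale(library, now)
print([rec["id"] for rec in fresh])  # ['cust-1']
```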
According to the embodiment of the invention, the third face track corresponding to each second time interval is obtained, the third face feature corresponding to each third face track is extracted, and then each third face feature is matched with the face feature stored in the third search library, so that the passenger flow corresponding to each second time interval can be determined.
In step S104, the server extracts a second face feature corresponding to a second face track of each visitor with the temporary identity.
In some embodiments, the server extracting the second face feature corresponding to the second face track of each visitor with a temporary identity may be implemented as follows: the plurality of second face images corresponding to the second face tracks belonging to the same visitor are sorted by quality score, and the top T second face images are selected, where T is a positive integer; the average of the T second face features corresponding to the T second face images is then determined as the second face feature of the visitor.
In the smart retail store scenario, after the store closes each day, for example at 23:30, the server clusters all the customers with temporary identities who appeared in the specific area of the store that day, grouping customers with the same temporary identity into the same large cluster, and performs the following operation for each large cluster: select the T face images with the highest quality scores in the cluster, and determine the average of the T corresponding face features as the second face feature of that cluster.
In step S105, the server matches each second facial feature with the facial feature bound with the permanent identity in the second search library to obtain the permanent identity of the visitor who appears every day.
In some embodiments, the server matches each second face feature with the face features bound to permanent identities in the second search library, and the permanent identity of the visitors appearing each day can be obtained as follows: the second face feature of each visitor is matched against the face features bound to permanent identities in the second search library. When the maximum matched similarity is smaller than the similarity threshold, the visitor is determined to be a new visitor, a corresponding permanent identity is added for the new visitor, and the permanent identity together with the S face images of the new visitor whose quality scores are greater than the quality score threshold are stored in the second search library, where S is a positive integer. When the maximum matched similarity is greater than the similarity threshold, the visitor is determined to be a visitor who has appeared before, and the permanent identity bound to the matched face feature is determined as the permanent identity of the visitor.
In the smart retail store scenario, for example, taking the large cluster A (corresponding to customer 1), the second face feature corresponding to the large cluster A is matched against the face features of all bound permanent identities in the second search library, and it is judged whether the maximum returned similarity exceeds the similarity threshold. When it does, customer 1 corresponding to the large cluster A is determined to be a customer who has appeared in the specific area of the store before, and the permanent identity bound to the matched face feature is determined as customer 1's permanent identity. When it does not, customer 1 is determined to be a new customer (i.e., one appearing in the specific area of the store for the first time today); corresponding permanent identity information is added for the new customer, and the newly added permanent identity information and the face images of the new customer are stored in the second search library.
In some embodiments, the server also needs to ensure that the face images of a customer stored in the second search library have the highest quality scores. The specific implementation is similar to ensuring the highest-quality face images in the first search library in step S103 and can be implemented with reference to the description there; it is not repeated here.
It should be noted that the face feature data bound to permanent identities in the second search library also changes dynamically. For example, when the permanent identities of the customers present in a specific area of the mall on 2020.04.28 need to be determined, the face feature data stored in the second search library is that of all customers who appeared in the specific area before 2020.04.28; when the permanent identities of the customers present on 2020.04.11 need to be determined, it is that of all customers who appeared there before 2020.04.11.
According to the embodiment of the invention, the second face feature corresponding to the second face track of each visitor with a temporary identity is extracted, and each second face feature is then matched against the face features bound to permanent identities in the second search library, so that the permanent identities of the visitors appearing in the target area to be analyzed each day can be determined; the temporary and permanent identities of a visitor are thereby linked, supporting cross-day queries of visitor identity.
Continuing with the exemplary structure of the passenger flow information processing device 243 provided by the embodiment of the present invention implemented as software modules, in some embodiments, as shown in fig. 2, the software modules stored in the passenger flow information processing device 243 of the memory 240 may include: an acquisition module 2431, a face feature extraction module 2432 and a retrieval module 2433.
An obtaining module 2431, configured to obtain a first face track corresponding to each first time interval; the face feature extraction module 2432 is configured to extract a first face feature corresponding to each first face track; the retrieval module 2433 is configured to match each first face feature with a face feature to which a temporary identity is bound in the first retrieval library, so as to obtain a temporary identity of a visitor appearing in each first time interval; the face feature extraction module 2432 is further configured to extract a second face feature corresponding to a second face track of each visitor with the temporary identity; the retrieval module 2433 is further configured to match each second facial feature with the facial feature bound with the permanent identity in the second retrieval library, so as to obtain the permanent identity of the visitor who appears every day.
In some embodiments, the obtaining module 2431 is further configured to obtain a third face track corresponding to each second time interval; wherein the duration of the second time interval is less than the duration of the first time interval; the face feature extraction module 2432 is further configured to extract a third face feature corresponding to each third face track; the retrieval module 2433 is further configured to match each third face feature with a face feature stored in a third retrieval library, so as to obtain a passenger flow volume corresponding to each second time interval; the passenger flow information processing device 243 further includes a deleting module 2434 configured to delete the third face image with the quality score smaller than the quality score threshold from the plurality of third face images.
In some embodiments, the face feature extraction module 2432 is further configured to perform feature extraction on the plurality of third face images remaining after the deletion processing, so as to obtain a third face feature of each third face image; the passenger flow information processing device 243 further includes a sorting module 2435, configured to sort the third facial features of the multiple third facial images according to the snapshot time sequence, sequentially determine a similarity between a first third facial feature and a subsequent third facial feature in the sorting, and retain all the third facial features whose similarities are greater than the similarity threshold.
In some embodiments, the passenger flow information processing device 243 further includes a clustering module 2436 for clustering the plurality of third face tracks; the sorting module 2435 is further configured to sort the plurality of third face images corresponding to the third face tracks clustered as belonging to the same visitor according to the quality scores, and select K third face images with the top quality scores; wherein K is a positive integer; and determining the average value of the K third face features corresponding to the K third face images as the third face features of the visitor.
In some embodiments, the retrieval module 2433 is further configured to match the third face feature of each visitor with the face features stored in the third search library; when the maximum matched similarity is smaller than the similarity threshold, determine that the visitor is a new visitor and add corresponding passenger flow information for the new visitor; when the maximum matched similarity is greater than the similarity threshold, determine that the visitor is a visitor who has appeared before, and compare the first quality score of the visitor's face image captured in the second time interval with the second quality score of the face image corresponding to the matched face feature in the third search library; and when the first quality score is greater than the second quality score, replace the visitor's face image stored in the third search library with the face image captured in the second time interval.
In some embodiments, the clustering module 2436 is further configured to perform clustering on the plurality of first face tracks and determine the quality scores of the plurality of first face images corresponding to the first face tracks clustered as belonging to the same visitor; the sorting module 2435 is further configured to, when the number of first face images whose quality scores are greater than the quality score threshold is greater than the number threshold, sort the plurality of first face images by quality score, select the top L first face images, where L is a positive integer, and determine the average of the L first face features corresponding to the L first face images as the first face feature of the visitor; the deleting module 2434 is further configured to delete the first face track of the visitor when the number of first face images whose quality scores are greater than the quality score threshold is less than the number threshold.
In some embodiments, the retrieval module 2433 is further configured to match the first face feature of each visitor with the face features bound to temporary identities in the first retrieval library; when the matched maximum similarity is smaller than a similarity threshold, determine that the visitor is a new visitor, add a corresponding temporary identity for the new visitor, and store the new visitor's temporary identity and the M face images whose quality scores are larger than the quality score threshold into the first retrieval library, where M is a positive integer; when the matched maximum similarity is larger than the similarity threshold, determine that the visitor is a visitor who has appeared before, sort by quality score the visitor's face images captured in the first time interval together with the face images corresponding to the matched face feature in the first retrieval library, determine the top-N face images as the face images corresponding to the visitor, and store the N face images into the first retrieval library, where N is a positive integer.
In some embodiments, the sorting module 2435 is further configured to sort by quality score the plurality of second face images corresponding to the second face tracks belonging to the same visitor and select the top-T second face images, where T is a positive integer; and determine the average value of the T second face features corresponding to the T second face images as the visitor's second face feature.
In some embodiments, the retrieval module 2433 is further configured to match the second face feature of each visitor with the face features bound to permanent identities in the second retrieval library; when the matched maximum similarity is smaller than a similarity threshold, determine that the visitor is a new visitor, add a corresponding permanent identity for the new visitor, and store the new visitor's permanent identity and the S face images whose quality scores are larger than the quality score threshold into the second retrieval library, where S is a positive integer; when the matched maximum similarity is larger than the similarity threshold, determine that the visitor is a visitor who has appeared before, sort by quality score the visitor's face images captured each day together with the face images corresponding to the matched face feature in the second retrieval library, determine the top-X face images as the face images corresponding to the visitor, and store the X face images into the second retrieval library, where X is a positive integer.
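The temporary-identity and permanent-identity retrieval steps in the embodiments above share the same shape: match a feature against an archive, create a new identity on a miss, and on a hit merge the stored and newly captured images and keep only the top-N by quality. A minimal sketch under assumed thresholds (names and values are illustrative):

```python
import numpy as np

SIM_THRESHOLD = 0.6   # assumed; the text leaves thresholds configurable
TOP_N = 3

def archive_update(feature, captured, archive, next_id):
    """Match `feature` against an identity archive and update it.

    `captured` is this visitor's list of (quality, image) pairs; `archive`
    maps identity id -> {'feature': vec, 'images': [(quality, image), ...]}.
    A new identity is created when no stored feature is similar enough;
    otherwise the stored images are merged with the new captures and only
    the top-N by quality are kept.  Returns the matched or new identity id.
    """
    best_id, best_sim = None, -1.0
    for pid, entry in archive.items():
        sim = float(np.dot(feature, entry['feature']) /
                    (np.linalg.norm(feature) * np.linalg.norm(entry['feature'])))
        if sim > best_sim:
            best_id, best_sim = pid, sim
    if best_sim < SIM_THRESHOLD:                 # new visitor: create an identity
        archive[next_id] = {'feature': feature,
                            'images': sorted(captured, reverse=True)[:TOP_N]}
        return next_id
    merged = sorted(archive[best_id]['images'] + captured, reverse=True)
    archive[best_id]['images'] = merged[:TOP_N]  # keep top-N by quality
    return best_id
```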
It should be noted that the description of the apparatus in the embodiment of the present invention is similar to that of the method embodiments, with similar beneficial effects, and is therefore not repeated. Technical details not exhaustively described for the passenger flow information processing apparatus provided by the embodiment of the present invention can be understood from the description of any of figures 3-5 and 7-10.
The following describes the application of embodiments of the present invention in the field of intelligent retail.
Artificial intelligence technology is now widely applied in the field of intelligent retail, much of it devoted to promoting and deploying intelligent retail store scenarios, empowering offline stores, and creating new retail. The process of establishing customer identity archives in the mall intelligent retail solutions provided by the related art is generally implemented as follows: first, all face snapshot data is read from a database; then features are extracted from the read face snapshot data to obtain corresponding face features; and finally retrieval is performed in a permanent identity archive based on the obtained face features, thereby completing the addition or update of the customer's permanent identity. It can be seen that the solutions provided by the related art have the following disadvantages:
1) Limited functionality: the face archiving scheme provided by the related art only provides permanent customer identity information.
2) Poor performance: the face archiving scheme provided by the related art directly takes a large number of face snapshot images as input, so the computational load of face retrieval is enormous, posing a serious challenge to computing resources.
In view of the above problems, an embodiment of the present invention provides a passenger flow information processing method, designed for the deployment of intelligent retail store solutions based on computer vision technology, which provides malls and stores with Face-ID-based passenger flow data, current-day temporary customer identities, and permanent customer identities. The passenger flow information processing method provided by the embodiment of the present invention comprises a face-recognition-based passenger flow processing module, a current-day temporary identity processing module, and a permanent identity processing module, and is applicable to today's most mainstream intelligent retail mall scenarios. It provides malls and stores with more refined and digitized passenger flow information and customer identity archives, so that they can make more reasonable operating decisions according to changes in passenger flow and in new and returning customers, helping mall and store operators improve their operating revenue.
Compared with the intelligent retail scheme of the market provided by the related technology, the passenger flow information processing method provided by the embodiment of the invention has the following advantages:
1. The embodiment of the present invention provides a complete, deployable, precise customer archive scheme for intelligent retail stores based on face recognition technology, enabling malls and stores to create customer archives at their entrances, thereby mastering accurate passenger flow data and information on new and returning customers to understand customer needs, and providing scientific reference data for mall operators' operations and stores' precision marketing.
2. The embodiment of the present invention not only provides Face-ID-based permanent customer identities, but also retains current-day temporary customer identities, and provides deduplicated counts of people in a specific area of the mall through clustering, retrieval, and other processing of face images captured within a short time, thereby offering rich functions such as passenger flow, current-day temporary customer identity, and permanent customer identity.
3. Flexible functionality: a passenger flow processing module, a current-day temporary identity processing module, and a permanent identity processing module are designed, with different modules corresponding to different functions, thereby realizing flexible customization of the passenger flow, current-day temporary customer identity, and permanent customer identity functions for a specific area.
4. High performance: the modules have clearly defined functions and interact with one another through the database, realizing peak-clipping decoupling of request data to balance data resources while also completing data persistence and function decoupling. In addition, the computational complexity of face retrieval is reduced by introducing face clustering, and the permanent identity processing logic can be configured to run during overnight off-hours, further improving resource utilization.
The passenger flow information processing method provided by the embodiment of the present invention can be applied to various commercial scenarios of intelligent retail such as shopping malls, department stores, and shopping centers, establishing a precise customer archive system that provides malls and stores with rich information such as accurate passenger flow data, current-day temporary customer identities, and permanent customer identities.
For example, referring to fig. 6, fig. 6 is a schematic diagram showing a customer profile (left side) and passenger flow (right side) in a specific area of a shopping mall provided by the embodiment of the present invention. As shown in fig. 6, the Face ID provided by face recognition technology can supply malls and stores with customer identity profile information and attributes such as age and gender (61), which helps adjust merchant recruitment policies and supports stores' precision marketing and personalized recommendation. Meanwhile, statistics on store passenger counts and passenger flow information in different time periods (62) can be completed based on the Face ID, which helps malls with shopping diversion, site selection, recruitment policy optimization, and scientific rent strategy formulation, and helps stores grasp current sales trends and adjust marketing strategies in time.
The following is a detailed description of a system framework for implementing the passenger flow information processing method according to the embodiment of the present invention.
Referring to fig. 7, fig. 7 is a schematic diagram of an alternative system framework for implementing a passenger flow information processing method according to an embodiment of the present invention. As shown in fig. 7, the system framework integrates a (specific area) passenger flow processing module, a temporary identity processing module, and a permanent identity processing module. First, according to task information (task_info), face track data is read from the database cluster, including face track IDs, the face snapshot images corresponding to each face track, the snapshot time of each face snapshot image, and the like. The passenger flow processing module is mainly used for determining the passenger flow in a specific area of a shopping mall; since passenger flow is a real-time task, the frequency of retrieving face track data from the database cluster can be configured to a short interval (for example, the face track data obtained after face detection and tracking is read from the database cluster every 5 minutes) to ensure the real-time performance of the passenger flow information. The temporary identity processing module is mainly used for determining customers' current-day temporary identities and has no strict real-time requirement, so it can be configured to read face track data from the database cluster at a longer interval, for example every 1 hour, according to the usage of machine resources. The permanent identity processing module is mainly used for determining customers' permanent identities; unlike the other two modules, when it reads face track data from the database cluster, the data also includes the customer's temporary identity ID corresponding to each face track. Meanwhile, this module can be regarded as an offline processing task and can be configured to retrieve data at midnight in order to fully utilize idle resources at night, further improving resource utilization.
In the passenger flow processing module, the count of passengers in a specific area of the commercial premises over a short time can be computed by calling computer vision micro-services such as face feature extraction, attribute extraction, clustering, and retrieval, and the result is updated into the database and the retrieval library. In the temporary identity processing module, the current-day temporary identities of customers appearing throughout the mall over a longer time can be computed by calling the same computer vision micro-services, so as to update the temporary identities into the database and the retrieval library; finally, the computation of the current-day temporary identities of all customers who appeared in the mall that day is completed at, for example, 11 o'clock in the evening, obtaining each customer's current-day temporary identity information. In the permanent identity processing module, the merging of customers' current-day temporary identities into permanent identities can be completed by calling computer vision services such as clustering and retrieval. Finally, the calculated passenger flow in the specific area of the commercial premises, the current-day temporary customer identities, and the permanent customer identity information for different time periods are reported, facilitating subsequent analysis and display.
The system framework for implementing the passenger flow information processing method provided by the embodiment of the present invention has the following advantages: the division of labor among the functional modules is clear, and data interaction and persistence are carried out through the database to realize decoupling; computer vision computation can make balanced use of computing resources such as the Central Processing Unit (CPU), Graphics Processing Unit (GPU), and Video Processing Unit (VPU); and import and export of intermediate processing results is supported.
The following specifically describes the passenger flow processing module, the temporary identity processing module, and the permanent identity processing module included in the system framework provided in the embodiment of the present invention.
(1) Passenger flow processing module
Referring to fig. 8, fig. 8 is a schematic flow chart of an alternative method for calculating a passenger flow volume in a specific area of a shopping mall according to an embodiment of the present invention, which will be described with reference to the steps shown in fig. 8.
In step S801, the server retrieves face trajectory data configured within a short time, for example, within the last 5 minutes, from the database cluster.
The face track data retrieved from the database cluster by the server comprises face track IDs, face snapshot images corresponding to the face tracks, snapshot time and snapshot areas of the face snapshot images, quality scores of the face images and other information. In addition, since the passenger flow processing module is a real-time task, it needs to be configured to read the face trajectory data from the database cluster for a short time, for example, every 5 minutes.
It should be noted that, before step S801 is executed, the video obtained by a camera capturing in real time the faces of customers appearing in a specific area of the mall is decoded to obtain video frame images containing face images; face detection and tracking are performed on the decoded video frame images to obtain a plurality of face track data; then the quality score of the face image corresponding to each face track is determined; and finally the face track data and the quality scores of the face images corresponding to each face track are stored in the database cluster.
In step S802, the server filters a plurality of face images corresponding to each face track according to the quality scores.
In step S803, the server performs feature extraction and attribute extraction on the face image corresponding to the face trajectory.
Here, after the filtering processing of step S802 is executed, the server performs feature extraction on the face image corresponding to each face track after the filtering processing to obtain corresponding face features; and extracting attributes of the face image corresponding to each face track to obtain corresponding attribute information including the age, the gender and the like of the customer.
In step S804, the server performs a refining process on each face track.
In step S805, the server performs clustering processing on the acquired plurality of face tracks.
In step S806, the server selects K face images with the highest quality scores for each cluster, and determines an average value of face features corresponding to the K face images as an average feature corresponding to the cluster.
Here, after performing step S805 to obtain a plurality of clusters of different types, the server performs the following operations for each cluster: first, the plurality of face images corresponding to the face tracks included in the cluster are sorted by quality score, and the K face images with the highest quality scores are selected (for example, the 10 highest-scoring face images); then, the average value of the face features corresponding to the K face images is calculated and determined as the average feature corresponding to the cluster (i.e., the customer), thereby further reducing the scale of face image processing.
In step S807, the server performs a passenger database search.
After the merging of the face tracks into different clusters and the determination of the average (face) feature corresponding to each cluster are completed through steps S805 and S806, the server starts to perform a passenger flow library search (i.e., the average face feature corresponding to each cluster is matched with the face features pre-stored in the passenger flow library). Since passenger flow is a real-time demanding task, the scope of the passenger flow repository is required to be the most recent time window, e.g., the most recent 15 minutes of the repository data.
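Because the passenger flow library is scoped to a recent time window (for example the last 15 minutes), stale entries must be dropped before each search. A small sketch of such pruning, with an assumed window length and a hypothetical `timestamp` field:

```python
import time

WINDOW_SECONDS = 15 * 60   # assumed 15-minute window, per the example in the text

def prune_flow_library(library, now=None):
    """Drop passenger-flow entries that fall outside the recent time window,
    so retrieval only matches against visitors seen in the last 15 minutes.
    `library` is a list of dicts with a 'timestamp' key (epoch seconds)."""
    now = time.time() if now is None else now
    library[:] = [e for e in library if now - e['timestamp'] <= WINDOW_SECONDS]
    return library
```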
In step S808, the server determines whether or not the returned maximum search score exceeds a threshold value, and if so, executes step S809, and if less than the threshold value, executes step S812.
In step S809, the server determines whether the quality score of the face image in the cluster is greater than the quality score of the face image stored in the passenger flow library, and if so, executes step S810; when not greater than this, step S811 is performed.
Here, when it is determined that the returned maximum matching score is greater than the threshold, it is indicated that the customer corresponding to the cluster is a customer who has appeared in a specific area of the mall (that is, an existing number of people in the area), so it is only necessary to ensure that the quality score of the face image included in the cluster corresponding to the customer stored in the passenger flow library is the highest, and therefore, the server performs the following determination steps: and judging whether the highest quality score of the face images in the cluster is larger than the quality score of the face images of the customer stored in the customer flow library.
In step S810, the server updates the relational database passenger flow identity time.
When the highest quality score of the face image of the customer in the cluster is determined to be larger than the quality score of the face image of the customer stored in the passenger flow library, the face image of the customer in the cluster is used for replacing the face image of the customer stored in the passenger flow library, and therefore the quality score of the face image of the customer stored in the passenger flow library is guaranteed to be the highest.
In step S811, the server ends the passenger flow processing procedure.
When the highest quality score of the face image of the customer in the cluster is determined to be smaller than the quality score of the face image of the customer stored in the passenger flow library, the face image of the customer stored in the passenger flow library is the face image with the highest quality score, so that updating is not needed, and the server finishes the passenger flow processing process.
In step S812, the server creates a new passenger flow ID and time.
Here, when it is determined that the returned maximum matching score is smaller than the threshold, it indicates that the customer corresponding to the cluster is a new customer who has not appeared in the specific area of the mall before. Therefore, a passenger flow ID (flow_person_id) is added for the new customer; at the same time, the passenger-count information is added to the corresponding face track information in the database, and a record of the passenger flow ID and time is added to the passenger flow retrieval library, where the passenger flow ID corresponds to information such as the new customer's face image and face track data.
(2) Temporary identity processing module
Referring to fig. 9, fig. 9 is an alternative flowchart of a method for determining a temporary identity of a customer on the same day according to an embodiment of the present invention, which will be described with reference to the steps shown in fig. 9.
In step S901, the server retrieves face trajectory data from the database at a configured longer interval, for example, covering the last 30 minutes.
The server firstly retrieves face track data from a database cluster according to a set time interval, wherein the face track data comprises track IDs corresponding to face tracks, face snapshot images corresponding to the face tracks, snapshot time, snapshot areas, quality scores and other information of the face snapshot images. Because the temporary identity processing module is mainly used for determining the temporary identity information of the customer and is a task with low real-time requirement, the temporary identity processing module can be configured to read the face track data from the database cluster once every 30 minutes for example according to the use condition of the machine.
In step S902, the server performs filtering processing on a plurality of face images corresponding to the face trajectory according to the quality scores.
In step S903, the server performs feature extraction and attribute extraction on a face image corresponding to the face trajectory.
In step S904, the server performs a refining process on the face trajectory.
In step S905, the server performs clustering processing on a plurality of face tracks.
In step S906, the server determines whether the number of face images meeting the conditions in each cluster exceeds a number threshold, and if so, executes step S907; if not, go to step S914.
Here, unlike the passenger flow processing module, which concerns the number of passengers in a specific area of a shopping mall, the temporary identity processing module concerns customers' current-day temporary identity information, so the validity of each cluster needs to be judged first: if the quality scores of a large number of face snapshot images in a certain cluster are all lower than the quality score threshold (that is, a large number of low-quality face snapshot images exist), the cluster is determined to be an invalid cluster and is directly filtered out, avoiding subsequent computation and further reducing the computation scale.
In step S907, for each valid cluster, the server selects the K face images with the highest quality scores included in the valid cluster, and determines the average value of the face features corresponding to the K face images as the average face feature corresponding to the cluster.
In step S908, the server performs daily archive retrieval.
In step S909, the server determines whether the returned maximum retrieval score exceeds a threshold, and executes step S910 when the returned maximum retrieval score exceeds a score threshold; when the returned maximum retrieval score does not exceed the score threshold, step S916 is performed.
Here, the server performs the following operations for each valid cluster: matching the average face feature corresponding to the valid cluster with all face features stored in the day archive and judging the returned maximum matching score. When the maximum matching score is larger than the threshold, it indicates that the customer has appeared before; when the maximum matching score is smaller than the threshold, it indicates that the customer is a new customer who has not appeared before. The scope of the day archive is all retrieval library data accumulated so far; for each cluster stored in the day archive, the top-K face images by quality score are taken, and the average value of the face features corresponding to those K face images is determined as the face feature corresponding to that cluster in the day archive.
In step S910, the server determines whether the highest quality score of the facial image corresponding to each valid cluster is greater than the quality score of the facial image of the customer stored in the daily archive, if so, step S911 is executed, and if not, step S914 is executed.
Here, when it is determined that the returned maximum matching score is greater than the threshold, it indicates that the customer corresponding to the valid cluster is a customer who has appeared so far in the specific area of the mall on the current day, and the server also needs to ensure that the quality score of the facial image included in the cluster corresponding to the customer in the day archive is the highest, so it is necessary to further compare the quality score of the facial image corresponding to the valid cluster with the quality score of the facial image of the customer stored in the day archive.
For example, a specific method for ensuring that the quality scores of the face images included in the cluster corresponding to a certain customer in the day archive are the highest may be: for a certain valid cluster, determine the face image with the maximum quality score among those corresponding to the face tracks included in the valid cluster; using that highest-quality face image as a reference, calculate the similarity between each remaining face image in the valid cluster and the reference, and filter by setting a threshold FT3, ensuring that the valid cluster strictly corresponds to a single customer; then merge the retrieved in-library cluster with all face snapshot images of the valid cluster and sort them by quality score to form a new cluster, retaining only the face snapshot images whose quality scores are higher than a threshold QT3; and update the track information of the corresponding current-day temporary identity day_person_id in the database.
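The reference-based filtering just described can be sketched as follows; the FT3 and QT3 values are illustrative, as the text does not fix them:

```python
import numpy as np

FT3 = 0.5   # assumed similarity-filter threshold; named FT3 in the text
QT3 = 0.6   # assumed quality threshold for images kept in the cluster

def filter_cluster(cluster):
    """Keep only faces close enough to the cluster's best-quality face, so
    the cluster strictly corresponds to one customer, then drop low-quality
    images and order the rest by quality score.
    `cluster` is a list of (quality, feature) pairs."""
    ref = max(cluster, key=lambda qf: qf[0])[1]          # highest-quality face
    ref = ref / np.linalg.norm(ref)
    kept = []
    for q, f in cluster:
        sim = float(np.dot(ref, f / np.linalg.norm(f)))
        if sim >= FT3 and q > QT3:
            kept.append((q, f))
    kept.sort(key=lambda qf: qf[0], reverse=True)         # order by quality
    return kept
```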
In step S911, the server takes the face image with the highest quality score in each valid cluster as a reference.
In step S912, the server calculates the similarity between the face image with the highest quality score and other face images in each valid cluster, and retains the face image with the similarity greater than the similarity threshold by setting a threshold.
In step S913, the server determines whether or not the face image passes the filtering, and if so, executes step S915, and if not, executes step S914.
In step S914, the server ends the process of determining the temporary identity of the customer on the current day.
In step S915, the server sorts the facial image corresponding to each valid cluster and the facial image corresponding to the customer stored in the day archive according to the quality score, and after obtaining the sorting result, executes step S919.
In step S916, the server determines the face image with the largest quality score in the valid cluster, and adds a corresponding temporary identity ID for the new customer.
Here, the server determines the face image with the largest quality score corresponding to the face track included in the effective cluster, and takes the face image with the largest quality score as a reference.
In step S917, the server calculates the similarity between all face snap images in the valid cluster and the face image with the largest quality score, and performs filtering processing by setting a threshold.
In step S918, the server sorts the filtered face images according to the quality scores.
In step S919, the server records the snapshot position of the last face snapshot image satisfying the quality score as X.
In step S920, the server replaces the index photos corresponding to the customer's temporary identity ID (i.e. the face snapshot images corresponding to the customer) with the first max[X, Ks] images, where Ks is the length of the original index photo list.
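A small sketch of the index-photo update in steps S918-S920, under the assumption that the sorted images arrive as (quality, image) pairs in descending order of quality:

```python
def update_index_photos(sorted_images, quality_threshold, original_len):
    """After quality-score sorting, keep the first max(X, Ks) images as the
    customer's index photos, where X is the position of the last image whose
    quality still meets the threshold and Ks is the original index length.
    `sorted_images` is a list of (quality, image) pairs, quality descending."""
    x = 0
    for i, (q, _) in enumerate(sorted_images):
        if q >= quality_threshold:
            x = i + 1          # 1-based position of the last qualifying image
    return sorted_images[:max(x, original_len)]
```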
In step S921, the server updates or newly creates the temporary ID of the customer on the day and the time for a new customer who has not appeared before.
Here, the server creates a corresponding current-day temporary identity ID, namely day_person_id, for the new customer, adds the current-day temporary identity information to the corresponding track information in the database, and adds a record of the current-day temporary identity ID and time to the day archive retrieval library, where the current-day temporary identity ID record includes information such as the new visitor's face image and face track.
(3) Permanent identity processing module
Referring to fig. 10, fig. 10 is a schematic flow chart of an alternative method for determining a permanent identity of a customer according to an embodiment of the present invention, which will be described with reference to the steps shown in fig. 10.
In step S1001, the server retrieves the current-day temporary identity information of the customer processed by the day archive from the database cluster.
Here, the server first retrieves from the database cluster temporary identity data of all customers present in a specific area of the mall on the current day, including a plurality of face tracks corresponding to each temporary identity day _ person _ id. Since the permanent identity processing module is an off-line task, it can be deployed in the middle of the night to take full advantage of the idle resources.
In step S1002, the server merges all face tracks of the same day _ person _ id.
Here, since the same customer may appear in a specific area of the shopping mall many times at different moments on the same day, the temporary identity processing module may have established a temporary identity ID for each appearance, and these temporary identity IDs all correspond to the same customer; therefore, the server first needs to merge all face track data corresponding to the same temporary identity ID.
In step S1003, the server takes all face tracks corresponding to each same day _ person _ id as one large cluster.
In step S1004, for each large cluster, the server selects the K face images with the highest quality scores under the cluster, and determines the average value of the K face features corresponding to the K face images as the average face feature corresponding to the cluster.
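Steps S1002 to S1004 can be sketched as follows, with an assumed K (the embodiment leaves K configurable) and a flat list of per-track faces as input:

```python
import numpy as np

K = 2   # assumed top-K; the text leaves K configurable

def merge_day_identities(tracks):
    """Merge all face tracks that share the same day_person_id into one large
    cluster and compute each cluster's average feature over its top-K
    quality faces.  `tracks` is a list of (day_person_id, quality, feature)."""
    clusters = {}
    for pid, q, f in tracks:
        clusters.setdefault(pid, []).append((q, f))     # one cluster per identity
    features = {}
    for pid, faces in clusters.items():
        faces.sort(key=lambda qf: qf[0], reverse=True)
        features[pid] = np.mean([f for _, f in faces[:K]], axis=0)
    return features
```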
In step S1005, the server performs a permanent archive search.
Here, after merging the plurality of face tracks corresponding to each temporary identity into one large cluster, the server performs the permanent archive retrieval. The specific retrieval process is as follows: the server matches the mean face feature of each large cluster against the face features bound to permanent identities in the permanent archive. The face features bound to permanent identities in the permanent archive are determined as follows: for each cluster in the permanent archive, the K face images with the highest quality scores in that cluster are selected, and the average of the K corresponding face features is taken as the face feature of the cluster, thereby obtaining the face features bound to permanent identities in the permanent archive.
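The retrieval in this step, together with the threshold check of step S1006, can be sketched as a nearest-neighbor search over the archive's per-identity mean features. Cosine similarity and the dict-based archive layout are assumptions for illustration; the patent only speaks of a retrieval score and a threshold.

```python
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve_permanent_id(mean_feature, archive, threshold):
    """Match a cluster's mean feature against the permanent archive.

    `archive` maps person_id -> mean face feature of that identity's
    cluster. Returns (person_id, score) of the best match when the
    maximum score exceeds the threshold (existing customer branch),
    or (None, score) otherwise (new customer branch, step S1013).
    """
    best_id, best_score = None, -1.0
    for person_id, feat in archive.items():
        score = cosine(mean_feature, feat)
        if score > best_score:
            best_id, best_score = person_id, score
    if best_score > threshold:
        return best_id, best_score
    return None, best_score
```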
In step S1006, the server determines whether the returned maximum retrieval score exceeds a threshold, and if so, executes step S1007; if not, step S1013 is executed.
In step S1007, the server determines whether the highest quality score of the face image corresponding to each large cluster is greater than the quality score of the face image of the customer stored in the permanent archive, and if so, executes step S1008; if not, go to step S1011.
Here, when the server determines that the returned maximum retrieval score is greater than the threshold, the customer corresponding to the large cluster is a customer who has appeared in the specific area of the mall before. In this case, the server only needs to ensure that the quality scores of the face images included in the customer's cluster in the permanent archive remain the highest. The specific implementation is similar to that of step S910, which ensures the same property for the day archive, and can be carried out with reference to the description of step S910; it is not repeated here.
In step S1008, the server determines the face image with the highest quality score in each large cluster and records it as the reference.
Here, the server finds, among the face tracks included in each large cluster, the face image with the highest quality score and uses it as the reference image.
In step S1009, for each large cluster, the server calculates the similarity between every face snapshot image in the cluster and the face image with the highest quality score in that cluster, and filters the snapshots against a set threshold.
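The filtering of steps S1008 and S1009 can be sketched as follows. The (quality_score, feature) pair layout and the pluggable similarity function are illustrative assumptions; the patent fixes neither.

```python
def filter_by_reference(faces, similarity, threshold):
    """Keep only snapshots sufficiently similar to the best-quality face.

    `faces` is a list of (quality_score, feature) pairs. The face with
    the highest quality score serves as the reference (step S1008);
    every snapshot whose similarity to the reference falls below
    `threshold` is filtered out (step S1009).
    """
    reference = max(faces, key=lambda f: f[0])[1]
    return [f for f in faces if similarity(f[1], reference) >= threshold]
```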
In step S1010, the server determines whether or not the face image passes the filtering, and if so, executes step S1012, and if not, executes step S1011.
In step S1011, the server ends the process of determining the permanent identification information of the customer.
In step S1012, the server sorts the face images of each large cluster together with the face images of the customer stored in the permanent archive by quality score, and after obtaining the sorting result, executes step S1016.
In step S1013, the server determines the face image with the highest quality score in each large cluster and adds a corresponding permanent identity ID for the new customer.
Here, the server determines the face image with the largest quality score corresponding to the face track included in each large cluster, and takes the face image with the largest quality score as a reference.
In step S1014, for each large cluster, the server calculates the similarity between every face snapshot image in the cluster and the face image with the highest quality score in that cluster, and filters the snapshots against a set threshold.
In step S1015, the server sorts the filtered face images according to the quality scores.
In step S1016, the server records as X the position, in the sorted list, of the last face snapshot image that satisfies the quality score requirement.
In step S1017, the server replaces the index photos corresponding to the customer's permanent identity ID (i.e., the face snapshot images stored for the customer) with the top max[X, Ks] images, where Ks is the length of the original index-photo list.
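The index-photo replacement of steps S1016 and S1017 can be sketched as follows. The list-based storage of index photos is an assumption for illustration; the patent does not specify how index photos are persisted.

```python
def update_index_photos(sorted_snaps, x, ks):
    """Replace an identity's index photos with the top max(X, Ks) snapshots.

    `sorted_snaps` is the quality-sorted snapshot list produced in step
    S1012/S1015, `x` is the position of the last snapshot meeting the
    quality requirement (step S1016), and `ks` is the length of the
    original index-photo list. The new index keeps the top max(x, ks)
    entries, so it never shrinks below its original length.
    """
    keep = max(x, ks)
    return sorted_snaps[:keep]
```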
In step S1018, the server updates or newly creates a permanent ID and time corresponding to a new customer who has not appeared before.
Here, the server creates a corresponding permanent identity ID, namely person_id, for the new customer, adds the permanent identity information to the corresponding track information in the database, and adds a permanent identity ID record and its time to the permanent archive, where the permanent identity ID record includes information such as the face image and face track of the new visitor.
In other embodiments, since the embodiment of the present invention completes customer identity profiling based on face recognition technology, relatively expensive bullet cameras need to be deployed in the mall, which is unfriendly to malls that are cost-sensitive or lack suitable bullet-camera mounting conditions and that have no cross-day requirements on customer identities. When cross-day archives are not needed, same-day customer identity profiling can be completed based on person re-identification (ReID) technology, which requires only inexpensive and easily installed dome cameras and is therefore friendlier to such malls.
In addition, although the foregoing embodiment is described by taking the smart retail field as an example, the passenger flow information processing method provided by the embodiment of the invention can also be extended to other industry applications, for example, the method can be applied to the fields of smart security, smart community, smart food and drink, and the like.
Embodiments of the present invention provide a computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to perform the method provided by the embodiments of the present invention, for example, the passenger flow information processing method shown in figs. 3-5.
In some embodiments, the storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, e.g., in one or more scripts stored in a Hypertext Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
In summary, the embodiment of the invention has the following beneficial effects:
1. Wide applicability: the invention can run on current mainstream hardware platforms, including personal computers (PCs), servers, and the like; the provided functions are rich and flexible and fit current mainstream mall scenarios.
2. High efficiency: the functional modules involved in the invention are independent of and decoupled from one another, and interact and persist data through the database, ensuring high availability and peak-load shifting for the system. In addition, introducing face clustering and quality-score filtering continuously reduces the number of face snapshot images participating in retrieval computation, thereby reducing resource consumption. Merging customers' same-day temporary identities with their permanent identities during configured idle hours improves resource utilization.
3. Strong practicability: the invention provides a complete scheme for building accurate customer identification for smart retail based on face recognition technology and for determining the passenger flow corresponding to different times within a specific area of a mall.
The above description is only an example of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present invention are included in the protection scope of the present invention.

Claims (10)

1. A passenger flow information processing method, characterized by comprising:
acquiring a first face track corresponding to each first time interval;
extracting first face features corresponding to each first face track;
matching each first face feature with a face feature bound with a temporary identity in a first search library to obtain the temporary identity of a visitor appearing in each first time interval;
extracting second face features corresponding to second face tracks of visitors with temporary identities;
and matching each second face feature with the face feature bound with the permanent identity in the second search library to obtain the permanent identity of the visitor appearing every day.
2. The method of claim 1, further comprising:
acquiring a third face track corresponding to each second time interval; wherein the duration of the second time interval is less than the duration of the first time interval;
extracting a third face feature corresponding to each third face track, and matching each third face feature with a face feature stored in a third search library to obtain passenger flow corresponding to each second time interval;
before extracting the third face features corresponding to each third face track, the method further includes:
determining the quality scores of a plurality of third face images corresponding to the third face tracks, and deleting the third face images with the quality scores smaller than the quality score threshold value in the plurality of third face images.
3. The method of claim 2, further comprising:
performing feature extraction on the plurality of third face images which are left after the deletion processing to obtain a third face feature of each third face image;
and sequencing the third face features of the plurality of third face images according to the snapshot time sequence, sequentially determining the similarity between the first third face feature and the subsequent third face features in the sequencing, and reserving all the third face features with the similarity larger than the similarity threshold.
4. The method of claim 3, further comprising:
clustering a plurality of third face tracks;
sorting the plurality of third face images corresponding to the third face tracks clustered as belonging to the same visitor by quality score, and selecting the top-K third face images; wherein K is a positive integer;
and determining the average value of the K face features corresponding to the K third face images as the third face feature of the visitor.
5. The method according to claim 4, wherein the matching each of the third facial features with facial features stored in a third search library to obtain the passenger flow volume corresponding to each of the second time intervals comprises:
matching the third face features of each visitor with the face features stored in a third search library;
when the matched maximum similarity is smaller than a similarity threshold value, determining that the visitor is a new visitor, and adding corresponding passenger flow information for the new visitor;
when the matched maximum similarity is larger than a similarity threshold value, determining that the visitor is a present visitor, and comparing a first quality score of a face image of the visitor captured in the second time interval with a second quality score of the face image corresponding to the face feature matched in the third search library;
when the first quality score is larger than the second quality score, replacing the face image of the visitor stored in the third search library with the face image of the visitor captured in the second time interval.
6. The method of claim 1, wherein prior to extracting first facial features corresponding to each of the first facial tracks, the method further comprises:
determining the quality scores of a plurality of first face images corresponding to each first face track, and deleting the first face images of which the quality scores are smaller than a quality score threshold value in the plurality of first face images;
the method further comprises the following steps:
clustering the first face tracks;
determining quality scores of a plurality of first face images corresponding to first face tracks clustered as belonging to the same visitor;
when the number of first face images with quality scores larger than a quality score threshold value in the plurality of first face images is larger than a number threshold value, performing quality score sorting on the plurality of first face images, and selecting L first face images with front sorting; wherein L is a positive integer;
determining an average value of the L face features corresponding to the L first face images as the first face feature of the visitor;
and when the number of the first face images with the quality scores larger than the quality score threshold value in the plurality of first face images is smaller than the number threshold value, deleting the first face track of the visitor.
7. The method of claim 6, wherein the matching each first face feature with the face feature bound with the temporary identity in the first search library to obtain the temporary identity of the visitor appearing in each first time interval comprises:
matching the first face features of each visitor with the face features bound with the temporary identities in a first search library;
when the matched maximum similarity is smaller than a similarity threshold value, determining that the visitor is a new visitor, and adding a corresponding temporary identity for the new visitor;
storing the temporary identity and the M face images of the new visitor whose quality scores are greater than a quality score threshold into the first search library; wherein M is a positive integer;
when the matched maximum similarity is larger than a similarity threshold value, determining that the visitor is a visitor who has appeared, and performing quality score sequencing on the face image of the visitor captured in the first time interval and the face image corresponding to the face feature matched in the first search library;
determining N front-ranked face images as face images corresponding to the visitors, and storing the N face images in the first retrieval library; wherein N is a positive integer.
8. The method of claim 1, wherein extracting second face features corresponding to a second face track of each visitor with a temporary identity comprises:
performing quality score sorting on a plurality of second face images corresponding to second face tracks belonging to the same visitor, and selecting T second face images which are sorted in the front; wherein T is a positive integer;
and determining the average value of the T second face features corresponding to the T second face images as the second face feature of the visitor.
9. The method of claim 8, wherein matching each of the second facial features with facial features of a bound permanent identity in a second repository to obtain a permanent identity of a visitor present each day comprises:
matching the second face features of each visitor with the face features of the bound permanent identity in a second search library;
when the matched maximum similarity is smaller than a similarity threshold value, determining that the visitor is a new visitor, and adding a corresponding permanent identity for the new visitor;
storing the permanent identity and the S face images of the new visitor whose quality scores are greater than a quality score threshold into the second search library; wherein S is a positive integer;
when the maximum matching similarity is larger than a similarity threshold value, determining that the visitor is a visitor who has appeared, and performing quality score sequencing on the face image of the visitor captured every day and the face image corresponding to the face feature matched in the second search library;
determining the top-X face images as face images corresponding to the visitor, and storing the X face images in the second search library; wherein X is a positive integer.
10. A passenger flow information processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring a first face track corresponding to each first time interval;
the face feature extraction module is used for extracting a first face feature corresponding to each first face track;
the retrieval module is used for matching each first face feature with a face feature bound with a temporary identity in a first retrieval base to obtain the temporary identity of the visitor appearing in each first time interval;
the face feature extraction module is further used for extracting second face features corresponding to second face tracks of the visitors with the temporary identities;
the retrieval module is further used for matching each second face feature with the face feature bound with the permanent identity in the second retrieval library to obtain the permanent identity of the visitor appearing every day.
CN202010358806.3A 2020-04-29 2020-04-29 Passenger flow information processing method and device Pending CN111553288A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010358806.3A CN111553288A (en) 2020-04-29 2020-04-29 Passenger flow information processing method and device

Publications (1)

Publication Number Publication Date
CN111553288A true CN111553288A (en) 2020-08-18

Family

ID=72007888

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010358806.3A Pending CN111553288A (en) 2020-04-29 2020-04-29 Passenger flow information processing method and device

Country Status (1)

Country Link
CN (1) CN111553288A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112101215A (en) * 2020-09-15 2020-12-18 Oppo广东移动通信有限公司 Face input method, terminal equipment and computer readable storage medium
CN112329635A (en) * 2020-11-06 2021-02-05 北京文安智能技术股份有限公司 Method and device for counting store passenger flow
CN114565963A (en) * 2022-03-03 2022-05-31 成都佳华物链云科技有限公司 Customer flow statistical method and device, electronic equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200818