CN109145707B - Image processing method and device, electronic equipment and storage medium - Google Patents

Image processing method and device, electronic equipment and storage medium

Info

Publication number
CN109145707B
CN109145707B
Authority
CN
China
Prior art keywords
target object
target
database
identity
reference image
Prior art date
Legal status
Active
Application number
CN201810639810.XA
Other languages
Chinese (zh)
Other versions
CN109145707A (en)
Inventor
王磊
Current Assignee
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN201810639810.XA priority Critical patent/CN109145707B/en
Publication of CN109145707A publication Critical patent/CN109145707A/en
Application granted granted Critical
Publication of CN109145707B publication Critical patent/CN109145707B/en


Classifications

    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00 Individual registration on entry or exit
    • G07C9/30 Individual registration on entry or exit not involving the use of a pass
    • G07C9/32 Individual registration on entry or exit not involving the use of a pass in combination with an identity check
    • G07C9/37 Individual registration on entry or exit not involving the use of a pass in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00 Individual registration on entry or exit
    • G07C9/30 Individual registration on entry or exit not involving the use of a pass
    • G07C9/38 Individual registration on entry or exit not involving the use of a pass with central registration
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C2209/00 Indexing scheme relating to groups G07C9/00 - G07C9/38
    • G07C2209/02 Access control comprising means for the enrolment of users
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C2209/00 Indexing scheme relating to groups G07C9/00 - G07C9/38
    • G07C2209/08 With time considerations, e.g. temporary activation, valid time window or time limitations

Abstract

The present disclosure relates to an image processing method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: acquiring feature information of a target object in a first image; determining, according to the feature information, whether reference image information matching the target object exists in multiple databases; and in a case that no reference image information matching the target object exists in the multiple databases, determining the identity category of the target object as a first identity category corresponding to a first database among the multiple databases. Embodiments of the disclosure can determine the identity category of the target object and improve the efficiency and accuracy of image processing.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
In places that require monitoring (such as shopping malls and supermarkets), images of visiting persons (such as customers or staff) can be captured by devices such as cameras, and the images can be analyzed and processed by processing equipment. During this analysis, additional means are needed to process the images in order to obtain relevant information about the visitors.
Disclosure of Invention
The present disclosure proposes an image processing technical solution.
According to an aspect of the present disclosure, there is provided an image processing method including:
acquiring characteristic information of a target object in a first image;
determining whether reference image information matched with the target object exists in a plurality of databases or not according to the characteristic information of the target object, wherein different databases in the plurality of databases correspond to different identity categories, and each database comprises one or more pieces of reference image information;
and under the condition that no reference image information matched with the target object exists in the plurality of databases, determining the identity class of the target object as a first identity class corresponding to a first database, wherein the plurality of databases comprise the first database.
In some possible implementations, the method further includes:
in the case where there is reference image information matching the target object in the plurality of databases,
determining the identity category of the target object as a second identity category corresponding to a second database, wherein the second database is a database to which the matched reference image information belongs and/or
Determining an identity of the target object based on the matched reference image information.
In some possible implementations, the method further includes:
and adding the image and/or the characteristic information of the target object to the first database under the condition that the reference image information matched with the target object does not exist in the plurality of databases.
In some possible implementations, the method further includes:
determining a first population statistics result within a target time period according to the identity category of the target object,
wherein the corresponding time of the first image is within the target time period, the first group counting result comprises at least one of the number of visitors and the number of visits within the target time period for at least one target place, and one or more shooting devices are arranged in each target place.
In some possible implementations, determining a first population statistics result within a target time period according to an identity category of the target object includes:
and determining a first group counting result in the target time period according to at least one of history matching data of the reference image information matched with the target object and corresponding time of the first image and the identity category of the target object under the condition that the reference image information matched with the target object exists in the plurality of databases.
In some possible implementations, the historical matching data of the matched reference image information includes a most recent time of occurrence of a person corresponding to the reference image information;
wherein, in the case that reference image information matched with the target object exists in the plurality of databases, determining a first population statistics result in the target time period according to at least one of history matching data of the reference image information matched with the target object and corresponding time of the first image and an identity category of the target object includes:
under the condition that the interval between the latest appearance time of the person corresponding to the reference image information and the corresponding time of the first image exceeds a preset interval, determining a first group counting result in the target time period according to the identity type of the target object; and/or
And under the condition that the latest appearance time of the person corresponding to the reference image information is within the target time period and the interval between the latest appearance time and the corresponding time of the first image does not exceed a preset interval, prohibiting the current visit of the target object from being counted into the number of visits within the target time period.
In some possible implementations, determining a first population statistics result within a target time period according to an identity category of the target object includes:
under the condition that the identity category of the target object is a staff identity category, prohibiting the current visit of the target object from being counted into the first group counting result within the target time period; and/or
And under the condition that the identity type of the target object is not the identity type of the staff, counting the current visit of the target object into a first group counting result in the target time period.
In some possible implementations, the method further includes:
and sending the first group counting result in the target time period to a terminal.
In some possible implementations, the method further includes:
and sending at least one of the identity type, the identity and the visiting information of the target object to a terminal.
In some possible implementations, the plurality of databases include a member customer database, a staff member database, an exception customer database, and a general customer database,
determining whether reference image information matched with the target object exists in a plurality of databases according to the characteristic information of the target object, wherein the determining comprises the following steps:
and according to the characteristic information of the target object, sequentially searching whether reference image information matched with the target object exists in the member customer database, the staff database, the abnormal customer database and the common customer database.
In some possible implementations, obtaining feature information of the target object in the first image includes:
receiving a first image sent by a front server;
and performing feature extraction processing on the first image to obtain feature information of a target object in the first image.
In some possible implementations, obtaining feature information of the target object in the first image includes:
determining a first human body region image of the target object from the first image;
determining a second face area image of the target object in the second image according to the first human body area image of the target object;
determining feature information extracted from the second face region image as feature information of the target object.
In some possible implementations, the first identity category includes a general customer identity category, and the second identity category includes a special personnel identity category, where the special personnel identity category includes one or any combination of a member customer identity category, a staff identity category, and an abnormal customer identity category.
In some possible implementations, the feature information of the target object includes facial feature information of the target object, or facial feature information and human feature information of the target object.
In some possible implementations, the method further includes:
determining attributes of the target object based on the characteristic information of the target object, wherein the attributes comprise at least one of age, age range and gender;
determining a second population statistic within a target time period based on the attributes of the target object.
According to another aspect of the present disclosure, there is provided an image processing apparatus including:
the characteristic information acquisition module is used for acquiring the characteristic information of the target object in the first image;
the image information matching module is used for determining whether reference image information matched with the target object exists in a plurality of databases according to the characteristic information of the target object, wherein different databases in the plurality of databases correspond to different identity categories, and each database comprises one or more pieces of reference image information;
a first category determining module, configured to determine an identity category of the target object as a first identity category corresponding to a first database if no reference image information matching the target object exists in the multiple databases, where the multiple databases include the first database.
In some possible implementations, the apparatus further includes:
a second category determination module for, in the case that reference image information matching the target object exists in the plurality of databases,
determining the identity category of the target object as a second identity category corresponding to a second database, wherein the second database is a database to which the matched reference image information belongs and/or
Determining an identity of the target object based on the matched reference image information.
In some possible implementations, the apparatus further includes:
and the adding module is used for adding the image and/or the characteristic information of the target object into the first database under the condition that the reference image information matched with the target object does not exist in the plurality of databases.
In some possible implementations, the apparatus further includes:
a first result determination module for determining a first population statistics result within a target time period according to the identity category of the target object,
wherein the corresponding time of the first image is within the target time period, the first group counting result comprises at least one of the number of visitors and the number of visits within the target time period for at least one target place, and one or more shooting devices are arranged in each target place.
In some possible implementations, the first result determination module is further configured to:
and determining a first group counting result in the target time period according to at least one of history matching data of the reference image information matched with the target object and corresponding time of the first image and the identity category of the target object under the condition that the reference image information matched with the target object exists in the plurality of databases.
In some possible implementations, the historical matching data of the matched reference image information includes a most recent time of occurrence of a person corresponding to the reference image information;
wherein the first result determination module is further configured to:
under the condition that the interval between the latest appearance time of the person corresponding to the reference image information and the corresponding time of the first image exceeds a preset interval, determining a first group counting result in the target time period according to the identity type of the target object; and/or
And under the condition that the latest appearance time of the person corresponding to the reference image information is within the target time period and the interval between the latest appearance time and the corresponding time of the first image does not exceed a preset interval, prohibiting the current visit of the target object from being counted into the number of visits within the target time period.
In some possible implementations, the first result determination module is further configured to:
under the condition that the identity category of the target object is a staff identity category, prohibiting the current visit of the target object from being counted into the first group counting result within the target time period; and/or
And under the condition that the identity type of the target object is not the identity type of the staff, counting the current visit of the target object into a first group counting result in the target time period.
In some possible implementations, the apparatus further includes:
and the first sending module is used for sending the first group counting result in the target time period to the terminal.
In some possible implementations, the apparatus further includes:
and the second sending module is used for sending at least one of the identity type, the identity and the visiting information of the target object to the terminal.
In some possible implementations, the plurality of databases include a member customer database, a staff member database, an exception customer database, and a general customer database,
wherein the image information matching module is further configured to:
and according to the characteristic information of the target object, sequentially searching whether reference image information matched with the target object exists in the member customer database, the staff database, the abnormal customer database and the common customer database.
In some possible implementations, the feature information obtaining module is further configured to:
receiving a first image sent by a front server;
and performing feature extraction processing on the first image to obtain feature information of a target object in the first image.
In some possible implementations, the feature information obtaining module is further configured to:
determining a first human body region image of the target object from the first image;
determining a second face area image of the target object in the second image according to the first human body area image of the target object;
determining feature information extracted from the second face region image as feature information of the target object.
In some possible implementations, the first identity category includes a general customer identity category, and the second identity category includes a special personnel identity category, where the special personnel identity category includes one or any combination of a member customer identity category, a staff identity category, and an abnormal customer identity category.
In some possible implementations, the feature information of the target object includes facial feature information of the target object, or facial feature information and human feature information of the target object.
In some possible implementations, the apparatus further includes:
the attribute determining module is used for determining the attribute of the target object based on the characteristic information of the target object, wherein the attribute comprises at least one of age, age range and gender;
a second result determination module to determine a second group statistical result within a target time period based on the attribute of the target object.
According to another aspect of the present disclosure, there is provided an electronic device including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to: the above-described image processing method is performed.
According to another aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described image processing method.
In the embodiment of the disclosure, by acquiring the feature information of the target object in the first image to be processed, whether reference image information matched with the target object exists in the plurality of databases or not can be determined, and the identity category of the target object is determined when the reference image information does not exist, so that the efficiency and the accuracy of image processing are improved.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
Fig. 2 illustrates a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
Fig. 3 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure. The image processing method can be applied to an electronic device, such as a server (e.g., a cloud server). The method comprises the following steps:
in step S101, feature information of a target object in a first image is acquired;
in step S102, determining whether reference image information matching the target object exists in a plurality of databases according to the feature information of the target object, wherein different databases in the plurality of databases correspond to different identity categories, and each database includes one or more pieces of reference image information;
in step S103, in a case that no reference image information matching the target object exists in the plurality of databases, the identity category of the target object is determined as a first identity category corresponding to a first database, where the plurality of databases include the first database.
According to the image processing method, the characteristic information of the target object in the first image to be processed is obtained, whether the reference image information matched with the target object exists in the multiple databases or not is determined, and the identity type of the target object is determined when the reference image information does not exist, so that the efficiency and the accuracy of image processing are improved.
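To make the three steps concrete, the following is a minimal Python sketch of the flow of steps S101-S103. The database layout, the cosine-similarity matching with a fixed threshold, and the placeholder feature extractor are illustrative assumptions and are not prescribed by the disclosure.

```python
import numpy as np

# Assumed layout: each database maps an identity category to a list of
# reference feature vectors (the "reference image information").
DATABASES = {
    "member": [],      # databases corresponding to special identity categories
    "staff": [],
    "abnormal": [],
    "general": [],     # the "first database" for the general customer identity category
}
MATCH_THRESHOLD = 0.7  # assumed similarity threshold


def extract_features(image: np.ndarray) -> np.ndarray:
    """Step S101 placeholder: a face-embedding network would be used in practice."""
    vec = image.astype(np.float32).ravel()[:128]
    vec = np.pad(vec, (0, 128 - vec.size))
    return vec / (np.linalg.norm(vec) + 1e-9)


def is_match(feature: np.ndarray, reference: np.ndarray) -> bool:
    """Cosine similarity against one piece of reference image information."""
    denom = np.linalg.norm(feature) * np.linalg.norm(reference) + 1e-9
    return float(np.dot(feature, reference)) / denom >= MATCH_THRESHOLD


def classify_target(first_image: np.ndarray) -> str:
    feature = extract_features(first_image)                # step S101
    for category, references in DATABASES.items():         # step S102
        if any(is_match(feature, ref) for ref in references):
            return category                                # matching reference image information found
    DATABASES["general"].append(feature)                   # step S103: no match in any database,
    return "general"                                       # so assign the first identity category
```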
In this disclosure, a plurality of databases may be provided, each database storing one or more pieces of reference image information, where the reference image information may include a reference image and/or feature information of the reference image, or further include other information, and optionally, the reference image information may be associated with information such as personal identity information, historical visit information, historical purchase records, purchase preferences, and the like, which is not limited in this disclosure.
In the embodiment of the present disclosure, the different databases may correspond to different identity categories, for example, one or any combination of a black list, a white list, a member, and a general customer, or other different identity categories, which is not limited in the embodiment of the present disclosure.
In some possible implementations, the first database may be, for example, a general customer database of a plurality of databases, the first identity category being a general customer identity category. That is, in this case, the identity class of the target object may be determined as a general customer identity class, but this is not limited by the embodiment of the present disclosure.
For example, the target location to be monitored may include any location such as a store, a shopping mall, or a supermarket, and the target object may be a visitor (e.g., a customer or a staff member) who visits the target location. One or more target sites to be included in the statistics (e.g., one store or a plurality of stores of a supermarket) may be provided, and each target site may include one or more monitoring areas (e.g., an entrance area, an exit area, or a fresh food area of a supermarket). Shooting devices (e.g., cameras) may be arranged in the monitoring areas of the target site, with one or more devices in each monitoring area, to capture video of the monitoring area. The shooting devices can be deployed according to the actual environment of the site; at least at key entrances, they should cover the passenger flow and visiting persons as comprehensively as possible.
In some possible implementations, a front-end server may be provided in the target site. The front-end server may be connected to one or more cameras to receive video from the cameras. The front server can perform operations such as stream pulling, frame selecting, duplicate removal and the like on the video streams of the various shooting devices, so as to obtain a first image with a target object.
In some possible implementation manners, the front-end server may decode the video stream to obtain a decoded video frame; and tracking and selecting frames of the video frames through a tracking filter to obtain target video frames with target objects. Then, the front-end server can perform de-duplication processing on the target video frames from the same shooting device and the target video frames from different shooting devices, remove repeated target objects in the target video frames, and obtain a de-duplicated first image. For example, for a plurality of target objects identified as the same object within a certain period of time, only one of the plurality of target objects may be retained, with duplicate target objects removed. This period of time may be, for example, 5 minutes. The present disclosure does not limit the specific processing modes of the processes of pulling, selecting frames, removing duplicates, etc.
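As a rough illustration of the frame-selection and de-duplication step, the sketch below keeps at most one capture per tracked object within a configurable period. The Detection structure and the 300-second window are assumptions used for illustration; the disclosure does not fix a particular data format.

```python
from dataclasses import dataclass, field
from typing import Any, Dict


@dataclass
class Detection:
    track_id: int     # identifier assigned by the tracking filter
    timestamp: float  # capture time in seconds
    frame: Any        # cropped image of the target object


@dataclass
class Deduplicator:
    """Keeps at most one detection per tracked object within `window` seconds."""
    window: float = 300.0  # the 5-minute period mentioned above
    last_kept: Dict[int, float] = field(default_factory=dict)

    def accept(self, det: Detection) -> bool:
        prev = self.last_kept.get(det.track_id)
        if prev is not None and det.timestamp - prev < self.window:
            return False  # repeated target object within the period, drop it
        self.last_kept[det.track_id] = det.timestamp
        return True       # forward this capture as a de-duplicated first image
```

A Deduplicator instance could be kept per shooting device or shared across devices, depending on how the front-end server merges the video streams.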
Optionally, the electronic device may acquire the first image from another device, and perform feature extraction processing on the first image to obtain feature information of the target object.
In some possible implementations, step S101 includes:
receiving a first image sent by front-end equipment;
and performing feature extraction processing on the first image to obtain feature information of a target object in the first image.
The front-end equipment can be a front-end server or a shooting device. For example, a first image from the front-end server may be received by a cloud server. In some embodiments, the cloud server may provide docking capabilities for third-party platforms; it may include a face-based membership system that can be used for member identification, management, operation, precise marketing, and the like; it may perform regional (or global) passenger flow statistics, for example using the number of people in each region for hot-spot region arrangement and optimization analysis of customer movement routes; and, based on the face recognition capability and specific scenes, it may connect scene-specific services.
In some possible implementation manners, when the cloud server receives the first image from the front server, feature extraction processing may be performed on the first image to obtain feature information of the target object in the first image. Optionally, the electronic device may also obtain feature information of the target object from other devices, for example, after the front-end server obtains the first image collected by the camera, the front-end server performs feature extraction processing on the first image to obtain feature information of the target object, and sends the feature information of the target object to the electronic device (for example, a cloud server), and the like, which is not limited in this disclosure.
In the embodiment of the present disclosure, the target object may be a person, and the feature information of the target object includes facial feature information of the target object, or facial feature information and human body feature information of the target object. That is, only the target object in the first image may be subjected to facial feature extraction processing to obtain facial feature information; the target object in the first image may also be subjected to facial feature extraction processing and human body feature extraction processing to obtain facial feature information and human body feature information. In this way, the accuracy of feature extraction can be improved.
In some possible implementations, the feature information of the target object may also be extracted from other images. For example, step S101 includes:
determining a first human body region image of the target object from the first image;
determining a second face area image of the target object in the second image according to the first human body area image of the target object;
determining feature information extracted from the second face region image as feature information of the target object.
For example, the feature information of the target object includes facial feature information of the target object. In addition, the same target object in different images can be determined by using human body features, and the identity category of the target object detected in the first image can be determined based on feature information of the target object extracted from other images. For example, in the case where the first facial region image of the target object in the first image does not satisfy the preset facial image requirement, the identity category of the target object detected in the first image is determined based on the feature information of the target object extracted in the second image.
For example, when processing the first image, if a first face area image of the target object detected in the first image satisfies a preset face area image requirement (e.g., the definition satisfies a preset definition requirement, the exposure/brightness satisfies a preset requirement, the face pose is a front face, etc.), the feature extracted by the first face area image may be taken as the face feature information of the target object. The present disclosure does not limit the specific form of the preset face region image requirement.
In some possible implementations, if a face region image of the target object is not detected in the first image (e.g., a face frame is not detected) or the detected first face region image does not satisfy a preset face region image requirement (e.g., the first face region image is blurry, does not satisfy a preset image sharpness requirement; or the first face region image is a side image or a back image, etc.), a first face region image of the target object may be detected from the first image. The first human body region image includes an image of a region where a human body frame of the target object is located. That is, in the case of the back, the side, or the occlusion, the face frame may not be detected or the face may be blurred. In this case, human detection may be performed on the image, and the obtained human frame may be used to assist in determining the identity category of the target object.
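A possible realization of this fallback is sketched below. The detector and matcher callables (detect_face, detect_body, match_body_to_face, extract_face_feature, quality_ok) are assumed to be supplied by the caller; their names and signatures are illustrative only.

```python
from typing import Callable, Optional

import numpy as np


def target_face_feature(
    first_image: np.ndarray,
    second_image: np.ndarray,
    detect_face: Callable[[np.ndarray], Optional[np.ndarray]],
    detect_body: Callable[[np.ndarray], Optional[np.ndarray]],
    match_body_to_face: Callable[[np.ndarray, np.ndarray], Optional[np.ndarray]],
    extract_face_feature: Callable[[np.ndarray], np.ndarray],
    quality_ok: Callable[[np.ndarray], bool],
) -> Optional[np.ndarray]:
    """Return facial feature information for the target object, falling back to a
    second image when the face in the first image is missing or of low quality."""
    first_face = detect_face(first_image)
    if first_face is not None and quality_ok(first_face):
        return extract_face_feature(first_face)    # the first face region image is usable
    first_body = detect_body(first_image)           # the first human body region image
    if first_body is None:
        return None
    # Locate the same person in the second image via human-body matching and take
    # that person's face region (the "second face region image").
    second_face = match_body_to_face(first_body, second_image)
    if second_face is None or not quality_ok(second_face):
        return None
    return extract_face_feature(second_face)
```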
In some possible implementations, a plurality of databases may be provided in the cloud server, each database including one or more pieces of reference image information, and the reference image information includes a reference image and/or feature information of the reference image. The corresponding identity categories in different databases are different. For example, the plurality of databases include a member customer database, a staff member database, an abnormal customer database, a general customer database, and the like, and each database includes facial images corresponding to different identity categories. The identity categories corresponding to the reference image information in the databases include member customers, staff (white list), abnormal customers (black list), common customers and the like.
In some possible implementations, in step S102, the databases may be sequentially searched according to the feature information of the target object according to a preset order, and whether reference image information matching the target object exists in the databases is searched. In a specific example, the preset sequence may be a black list, a white list, a member and a general customer, but the embodiment of the disclosure is not limited thereto.
In the embodiment of the present disclosure, the reference image information included in the database may be fixed and unchangeable, for example, added by a worker during initial configuration, or the reference image information included in the database may be dynamically updated. For example, reference image information in one or more databases may be added, deleted, or changed as staff or members change. For another example, the image information of the new customer may be added to the general customer database, which is not limited by the embodiment of the present disclosure. As another example, the general customer database may be restored to a default or initial state at intervals, and so on, which is not limited by the embodiments of the present disclosure.
In some possible implementations, the method further includes:
and adding the image and/or the characteristic information of the target object to the first database under the condition that the reference image information matched with the target object does not exist in the plurality of databases.
For example, when there is no reference image information in all databases that matches the feature information of the target object, the image and/or feature information of the target object may be added to the first database. That is, the image and/or feature information of the target object is added to a general customer database (store dynamic customer base) as the newly added reference image information.
In some possible implementations, in the case that reference image information matching the target object exists in a second database of the plurality of databases, the method further includes:
determining the identity category of the target object as a second identity category corresponding to a second database, and/or
Determining an identity of the target object based on the matched reference image information.
For example, when reference image information matching the target object exists in a second database of the plurality of databases, a second identity category corresponding to the second database may be determined as the identity category of the target object. For example, the second database may be, for example, any one of a member customer database, a staff database, an abnormal customer database, and a general customer database. Correspondingly, the second identity category may be a member customer identity category, a staff identity category, an abnormal customer identity category, or a general customer identity category.
In some possible implementations, the first identity category includes a general customer identity category, and the second identity category includes a special personnel identity category, where the special personnel identity category includes one or any combination of a member customer identity category, a staff identity category, and an abnormal customer identity category. Correspondingly, the first database may be a common customer database among a plurality of databases, and the second database may be a special personnel database. Wherein optionally the search priority of the first database is after the second database. That is, in some optional embodiments, the database corresponding to the special identity category (member customer database, staff database, abnormal customer database, etc.) is searched first, and then the database corresponding to the general identity category (general customer database) is searched.
In some possible implementations, step S102 includes:
and according to the characteristic information of the target object, sequentially searching whether reference image information matched with the target object exists in the member customer database, the staff database, the abnormal customer database and the common customer database.
For example, when the plurality of databases include a member customer database, a staff member database, an abnormal customer database, and a general customer database, the databases may be searched sequentially, and the process may be as follows:
in some possible implementations, a search may be first conducted in the member customer database; if the matching result (reference image information) is searched in the member customer database, it indicates that the identity category of the present visitor (target object) is the member customer identity category. At this time, the visit record may be associated with the matched member, the visit information of the member at the visit, such as the visit time, the collected camera information, and the like, may be recorded, and the visit record of the member is incremented, and the search of the visit event is ended. Alternatively, the visit event of the member may be pushed to the client APP, and if the matching result is not searched in the member customer database, the search is continued in the next face database (staff database).
In some possible implementations, a search may be conducted in a staff database; if the matching result (reference image information) is searched in the staff member database, the identity category of the visiting person (target object) is represented as a staff identity category (store clerk identity category). At this point, the search for this visit event may end. If no matching result is searched in the staff database, the search is continued in the next face database (abnormal customer database).
In some possible implementations, a search may be conducted in the anomalous customer database; if the matching result (reference image information) is searched in the abnormal customer database, the identity category of the person (target object) who visits this time is represented as an abnormal customer identity category (blacklist identity category). At this point, the search for this visit event may end. Optionally, visit information of the current visit of the blacklist person, such as the visit time, the collected camera information, and the like, may be recorded, and the number of visits of the blacklist person is increased by one. Optionally, alarm information of visiting blacklist personnel can be pushed to the client APP. If no matching result is searched in the abnormal customer database, the search is continued in the next face database (the general customer database).
In some possible implementations, the search may be conducted in a general customer database (store dynamic customer base); if the matching result is searched in the common customer database, the identity category of the visiting person (target object) is represented as the common customer identity category. At this point, the search for this visit event may end. Optionally, the visiting information of the general customer at the visit, such as the visiting time, the collected camera information, and the like, may also be recorded, and the visiting record of the general customer is incremented. If the common customer database does not search for a matching result, the image and/or characteristic information of the customer is added to the common customer database of the store, which indicates that the store has come a new customer, and the search is finished.
By the method, various databases (the face photos in different libraries) can be compared in sequence, and the comparison efficiency and accuracy are improved.
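One way to code this sequential search is sketched below. The in-memory dictionaries standing in for the four face databases, the callback hooks for APP pushes, and the identifier format are assumptions made for illustration.

```python
import time
from typing import Callable, Dict, Optional, Tuple

# Illustrative in-memory stand-ins for the four face databases; each maps a
# person id to its reference feature (the reference image information).
member_db: Dict[str, object] = {}
staff_db: Dict[str, object] = {}
abnormal_db: Dict[str, object] = {}
general_db: Dict[str, object] = {}

SEARCH_ORDER = [
    ("member", member_db),
    ("staff", staff_db),
    ("abnormal", abnormal_db),
    ("general", general_db),
]


def search_databases(
    feature: object,
    is_match: Callable[[object, object], bool],
    on_member: Optional[Callable[[str, float], None]] = None,
    on_abnormal: Optional[Callable[[str, float], None]] = None,
) -> Tuple[str, str]:
    """Search the databases in the fixed order and return (identity category, person id)."""
    now = time.time()
    for category, db in SEARCH_ORDER:
        for person_id, reference in db.items():
            if is_match(feature, reference):
                if category == "member" and on_member:
                    on_member(person_id, now)    # e.g. push the member visit event to the client APP
                if category == "abnormal" and on_abnormal:
                    on_abnormal(person_id, now)  # e.g. push a blacklist alarm to the client APP
                return category, person_id
    # No match in any database: a new ordinary customer; register the feature.
    new_id = f"customer_{len(general_db)}"
    general_db[new_id] = feature
    return "general", new_id
```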
In some possible implementations, the method further includes:
and sending at least one of the identity type, the identity and the visiting information of the target object to a terminal.
For example, the terminal includes a personal computer PC, a smart phone, a wearable device, a tablet computer, or the like. The cloud server can push various information to the terminal through the network. When the cloud server determines the identity type of the target object, at least one of the identity type, the identity and the visiting information of the target object can be pushed to the terminal. The visiting information comprises one or any combination of visiting time, visiting place, commodity browsing information, commodity purchasing information and the like. For example, when the cloud server determines that the target object is a member client, the visiting information of the member client can be pushed to a terminal installed with the client APP, so that the user can check the visiting condition of the member client, and corresponding measures can be taken. For example, the history of visiting customers and information such as shopping preferences may be viewed, and products and promotional activities may be pushed to customers, etc. By the method, convenience and intuitiveness of the user can be improved.
In some possible implementations, the method further includes:
determining a first population statistics result within a target time period according to the identity category and/or the identity of the target object,
wherein the corresponding time of the first image is within the target time period, for example, the acquisition time of the first image is within the target time period, or the time when the electronic device acquires the first image or the feature information of the first image is within the target time period, and so on. The first group counting result comprises at least one of the number of visitors and the number of visits within the target time period for at least one target place, and one or more shooting devices are arranged in each target place.
For example, group statistics may be performed, according to a user's settings, on at least one target place (e.g., one store, multiple stores, and/or all stores of a supermarket), on the shooting area of at least one camera (e.g., the shooting area of one camera or the shooting areas of multiple cameras), on a monitoring area with one or more cameras (e.g., the entrance area or the fresh food area of a store), and/or on multiple monitoring areas, so as to form group statistical results (overall passenger flow statistics and regional passenger flow statistics). The group statistical result comprises the number of visitors, the number of visits, the age distribution, the gender distribution, and the like of visiting persons within the preset target time period. The target time period may be a predetermined time period, such as a quarter, a month, a week, a day, or an hour. The present disclosure does not limit the specific values of the target time period.
In some possible implementations, the first population statistics within the target time period may be determined according to the identity category of the target object. The first group counting result comprises at least one of the number of visitors and the number of visits within the target time period for at least one target place. For example, the first population statistics include the number of visitors on the day, and the like.
In some possible implementations, the step of determining the first population statistics result within the target time period according to the identity category and/or the identity of the target object includes:
and under the condition that the reference image information matched with the target object exists in the plurality of databases, determining a first group statistical result in the target time period according to at least one of historical matching data of the reference image information matched with the target object and corresponding time of the first image and the identity category and/or the identity of the target object.
For example, the history matching data of the reference image information matched with the target object includes the appearance time (e.g., the latest appearance time) of the person corresponding to the reference image information.
In some possible implementations, if reference image information matching the target object exists in multiple databases, historical matching data for the matching reference image information may be viewed. Determining a first population statistics result according to the history matching data, the corresponding time of the first image and the identity category and/or the identity of the target object.
In some possible implementations, in a case that there is reference image information matching the target object in the plurality of databases, determining a first population statistics result in the target time period according to at least one of historical matching data of the reference image information matching the target object and a corresponding time of the first image and an identity category of the target object, includes:
under the condition that the interval between the latest appearance time of the person corresponding to the reference image information and the corresponding time of the first image exceeds a preset interval, determining a first group counting result in the target time period according to the identity type of the target object; and/or
And under the condition that the latest appearance time of the person corresponding to the reference image information is within the target time period and the interval between the latest appearance time and the corresponding time of the first image does not exceed a preset interval, prohibiting the current visit of the target object from being counted into the number of visits within the target time period.
For example, the preset interval may be a statistical time interval preset by a user, and the preset interval may be, for example, 5 minutes. If the interval between the latest appearance time of the person corresponding to the reference image information and the corresponding time of the first image exceeds a preset interval, it may be determined whether to update the first group counting result in a case where the identity category of the target object belongs to the identity category subject to statistics.
In some possible implementation manners, for the number of visiting persons in the target time period, if the latest appearance time of the corresponding person of the reference image information is not within the target time period, the person is the first visiting person in the target time period, at this time, the number of visiting persons in the target time period can be updated, and the number of visiting persons is increased by 1; if the latest appearance time of the corresponding person is in the target time period, the person is not the first visit in the target time period, at the moment, the number of the visitors in the target time period can not be updated, and the number of the visitors is not changed.
In some possible implementation manners, for the number of visits within the target time period, if the latest appearance time of the person corresponding to the reference image information exceeds the preset interval or is not within the target time period, this is the person's first visit within the preset interval; at this time, the number of visits within the target time period may be updated, and the number of visits is increased by 1. If the latest appearance time of the corresponding person is within the preset interval, this is not the first visit within the preset interval; at this time, the number of visits within the target time period may not be updated, and the number of visits is not changed (that is, the current visit of the target object is prohibited from being counted into the number of visits within the target time period).
In some possible implementations, the number of visitors on the day may be the number of target objects matched with the reference image information of the multiple databases on the day plus the number of persons newly added to the general customer database. The number of visits on the day may be the sum of all visits recorded during the day, where the same person captured within 5 minutes (the preset interval) is counted as one visit. The number of visitors in the current week, the current month, or another custom period may be obtained by accumulating the daily numbers of visitors; the number of visits in the current week, the current month, or another custom period may be obtained by accumulating all visit records of each day in that period.
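The counting rules above can be summarized in the following sketch; the 5-minute preset interval comes from the description, while the data structure and field names are assumptions made for illustration.

```python
from dataclasses import dataclass, field
from typing import Dict

PRESET_INTERVAL = 5 * 60  # seconds; the preset interval mentioned above


@dataclass
class FirstGroupCounter:
    """Illustrative first-group-statistics counter for one target time period."""
    period_start: float
    period_end: float
    visitor_count: int = 0  # number of distinct visitors within the period
    visit_count: int = 0    # number of visits within the period
    last_seen: Dict[str, float] = field(default_factory=dict)  # person id -> latest appearance time

    def record(self, person_id: str, category: str, capture_time: float) -> None:
        if not (self.period_start <= capture_time < self.period_end):
            return                   # the first image does not fall in the target time period
        if category == "staff":
            return                   # staff visits are excluded from the statistics
        prev = self.last_seen.get(person_id)
        if prev is None or prev < self.period_start:
            self.visitor_count += 1  # first appearance within the target time period
        if prev is None or capture_time - prev > PRESET_INTERVAL:
            self.visit_count += 1    # a repeat capture within the preset interval is not counted
        self.last_seen[person_id] = capture_time
```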
In some possible implementations, determining a first population statistics result within a target time period according to an identity category and/or an identity of the target object further includes:
under the condition that the identity category of the target object is a staff identity category, prohibiting the current visit of the target object from being counted into the first group counting result within the target time period; and/or
And under the condition that the identity type of the target object is not the identity type of the staff, counting the current visit of the target object into a first group counting result in the target time period.
For example, if the identity category of the target object is the staff identity category, it may be determined that the identity category of the target object does not belong to the identity categories included in the statistics; at this time, the number of visitors and the number of visits within the target time period may not be updated, and neither is changed. That is, the current visit of the target object is prohibited from being counted into the first population statistical result within the target time period.
On the contrary, under the condition that the identity category of the target object is not the identity category of the staff, the identity category of the target object can be determined to belong to the identity category which is included in the statistics, and at this time, whether the number of visitors and the number of visitors in the target time period are updated or not can be determined according to the steps. That is, the present visit of the target object is counted as the first population statistical result in the target time period.
In some possible implementations, the method further includes:
and sending the first group counting result in the target time period to a terminal.
In some possible implementation manners, the cloud server may push the first group statistical result in the target time period to the terminal APP, so that the user can view the first group statistical result of each store or each area of each store, and convenience and intuitiveness in use of the user are improved.
In some possible implementations, the method further includes:
determining attributes of the target object based on the characteristic information of the target object, wherein the attributes comprise at least one of age, age range and gender;
determining a second population statistic within a target time period based on the attributes of the target object.
For example, according to the feature information of the target object, the cloud server may further analyze attributes of the target object, where the attributes of the target object include at least one of an age, an age range, and a gender of the target object. That is, age information and gender information of the customer may be analyzed. For example, from the feature information of a member named Zhang Yi, it may be determined that Zhang Yi is 20-25 years old, female, and so on.
In some possible implementations, the second population statistics for the target time period may be determined according to attributes of the target object. The second group statistical result comprises the age distribution, the gender distribution and the like of the visitors in the target time period. And the cloud server can send the second group statistical result to the terminal, so that the user can check the second group statistical result of each store or each area of each store, and the use convenience and intuition of the user are improved.
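As a small illustration of the second group statistics, the sketch below aggregates age-range and gender distributions from per-visitor attribute records; the dictionary field names are assumptions made for illustration.

```python
from collections import Counter
from typing import Dict, Iterable


def second_group_statistics(visitor_attributes: Iterable[Dict[str, str]]) -> Dict[str, Dict[str, int]]:
    """Aggregate the second population statistics (age-range and gender distributions)
    from per-visitor records such as {"age_range": "20-25", "gender": "female"}."""
    records = list(visitor_attributes)  # materialize so the iterable can be scanned twice
    age_distribution = Counter(r.get("age_range", "unknown") for r in records)
    gender_distribution = Counter(r.get("gender", "unknown") for r in records)
    return {"age_range": dict(age_distribution), "gender": dict(gender_distribution)}
```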
According to the image processing method disclosed by the embodiment of the disclosure, whether reference image information matched with the target object exists in the multiple databases or not can be determined by acquiring the characteristic information of the target object in the first image, and the identity type of the target object is determined according to the condition of the reference image information, so that the efficiency and the accuracy of image processing are improved.
According to the image processing method of the embodiments of the present disclosure, statistical data, visit records of members, whitelisted persons, and blacklisted persons, and the like can be obtained and displayed for the user, helping make shopping malls and supermarkets more scientific and intelligent. The method provides industries such as shopping malls, supermarkets, high-end coffee shops, and 4S dealerships with detailed analysis capabilities covering consumer group distribution, activity trajectories, visit records, consumption behaviors, and preferences, enabling precise marketing, intelligent loss prevention, and intelligent operation. It also provides big-data decision support for merchants' fine-grained operational analysis, better leverages the strengths of the retail industry, reduces costs, and improves efficiency.
It can be understood that the above-mentioned method embodiments of the present disclosure can be combined with each other to form combined embodiments without departing from the principle and logic; limited by space, details are not repeated in the present disclosure.
In addition, the present disclosure also provides an image processing apparatus, an electronic device, a computer-readable storage medium, and a program, any of which can be used to implement any of the image processing methods provided by the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding descriptions in the method section, which are not repeated here for brevity.
Fig. 2 illustrates a block diagram of an image processing apparatus according to an embodiment of the present disclosure. As illustrated in fig. 2, the apparatus includes:
a feature information obtaining module 21, configured to obtain feature information of a target object in a first image;
an image information matching module 22, configured to determine, according to the feature information of the target object, whether reference image information matched with the target object exists in multiple databases, where different databases in the multiple databases correspond to different identity categories, and each database includes one or more pieces of reference image information;
a first category determining module 23, configured to determine an identity category of the target object as a first identity category corresponding to a first database if no reference image information matching the target object exists in the multiple databases, where the multiple databases include the first database.
In some possible implementations, the apparatus further includes:
a second category determination module for, in the case that reference image information matching the target object exists in the plurality of databases,
determining the identity category of the target object as a second identity category corresponding to a second database, wherein the second database is a database to which the matched reference image information belongs; and/or
determining an identity of the target object based on the matched reference image information.
In some possible implementations, the apparatus further includes:
and the adding module is used for adding the image and/or the characteristic information of the target object into the first database under the condition that the reference image information matched with the target object does not exist in the plurality of databases.
In some possible implementations, the apparatus further includes:
a first result determination module for determining a first population statistics result within a target time period according to the identity category of the target object,
wherein the corresponding time of the first image is within the target time period, the first population statistics result comprises at least one of the number of visitors and the number of visits of at least one target place in the target time period, and one or more shooting devices are arranged in each target place.
In some possible implementations, the first result determination module is further configured to:
and determining a first group counting result in the target time period according to at least one of history matching data of the reference image information matched with the target object and corresponding time of the first image and the identity category of the target object under the condition that the reference image information matched with the target object exists in the plurality of databases.
In some possible implementations, the historical matching data of the matched reference image information includes a most recent time of occurrence of a person corresponding to the reference image information;
wherein the first result determination module is further configured to:
under the condition that the interval between the latest appearance time of the person corresponding to the reference image information and the corresponding time of the first image exceeds a preset interval, determining a first group counting result in the target time period according to the identity type of the target object; and/or
And under the condition that the latest appearance time of the figure corresponding to the reference image information is within the target time period and the interval between the latest appearance time and the corresponding time of the first image does not exceed a preset interval, forbidding counting the current visit of the target object into the number of visitors within the target time period.
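A minimal sketch of the interval check handled by this module is given below; the use of epoch-second timestamps and the 30-minute preset interval are assumptions made for the example, not values fixed by the disclosure.

```python
# Illustrative sketch: decide whether a new appearance of a matched person should
# be counted again, based on the latest appearance time, the corresponding time of
# the first image, and a preset interval.
from typing import Optional

PRESET_INTERVAL_SECONDS = 30 * 60  # assumed deduplication window

def should_count_visit(last_seen_time: Optional[float], image_time: float,
                       period_start: float, period_end: float) -> bool:
    """Return True if this appearance should be counted in the target time period."""
    if last_seen_time is None:
        return True  # no history for this person: count the visit
    within_period = period_start <= last_seen_time <= period_end
    within_interval = (image_time - last_seen_time) <= PRESET_INTERVAL_SECONDS
    # A recent appearance inside the same period is treated as the same visit.
    return not (within_period and within_interval)
```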
In some possible implementations, the first result determination module is further configured to:
under the condition that the identity category of the target object is the staff identity category, prohibiting counting the current visit of the target object into the first population statistics result in the target time period; and/or
under the condition that the identity category of the target object is not the staff identity category, counting the current visit of the target object into the first population statistics result in the target time period.
In some possible implementations, the apparatus further includes:
and the first sending module is used for sending the first group counting result in the target time period to the terminal.
In some possible implementations, the apparatus further includes:
and the second sending module is used for sending at least one of the identity type, the identity and the visiting information of the target object to the terminal.
In some possible implementations, the plurality of databases include a member customer database, a staff database, an abnormal customer database, and a general customer database,
wherein the image information matching module is further configured to:
and sequentially searching, according to the feature information of the target object, whether reference image information matching the target object exists in the member customer database, the staff database, the abnormal customer database, and the general customer database.
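The sequential search could be sketched as below; the cosine-similarity measure, the threshold value, and the data layout are assumptions made for illustration, not details fixed by the disclosure.

```python
# Illustrative sketch: search the databases in the stated order and fall back to
# the general customer identity category when no reference matches.
SIMILARITY_THRESHOLD = 0.8  # assumed matching threshold

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def match_identity(feature, databases):
    """databases: ordered list of (identity_category, [reference_feature, ...])."""
    for category, references in databases:
        for ref in references:
            if cosine_similarity(feature, ref) >= SIMILARITY_THRESHOLD:
                return category, ref  # matched reference image information found
    return "general customer", None   # no match: first identity category

# Search order mirrors the description: member, staff, abnormal, then general customers.
databases = [
    ("member customer", [[0.9, 0.1, 0.0]]),
    ("staff", [[0.0, 1.0, 0.0]]),
    ("abnormal customer", []),
    ("general customer", []),
]
print(match_identity([0.88, 0.12, 0.01], databases))  # ('member customer', [0.9, 0.1, 0.0])
```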
In some possible implementations, the feature information obtaining module is further configured to:
receiving a first image sent by a front-end server;
and performing feature extraction processing on the first image to obtain feature information of a target object in the first image.
In some possible implementations, the feature information obtaining module is further configured to:
determining a first human body region image of the target object from the first image;
determining a second face area image of the target object in the second image according to the first human body area image of the target object;
determining feature information extracted from the second face region image as feature information of the target object.
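A schematic pipeline for this module might look like the following; detect_body_region, detect_face_region, and extract_face_features are hypothetical placeholders introduced for the example, not functions defined by the disclosure.

```python
# Illustrative sketch: derive the target object's feature information by locating
# its body region in the first image, locating the corresponding face region in a
# second image, and extracting features from that face region. The three helper
# functions are hypothetical placeholders, not APIs defined by the disclosure.
from typing import Optional, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height)

def detect_body_region(first_image) -> Optional[Box]:
    ...  # placeholder: person detector applied to the first image

def detect_face_region(second_image, body_box: Box) -> Optional[Box]:
    ...  # placeholder: face detector constrained by the tracked body region

def extract_face_features(second_image, face_box: Box):
    ...  # placeholder: face-embedding network applied to the face region

def target_object_features(first_image, second_image):
    body_box = detect_body_region(first_image)
    if body_box is None:
        return None
    face_box = detect_face_region(second_image, body_box)
    if face_box is None:
        return None
    return extract_face_features(second_image, face_box)
```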
In some possible implementations, the first identity category includes a general customer identity category, and the second identity category includes a special personnel identity category, where the special personnel identity category includes one or any combination of a member customer identity category, a staff identity category, and an abnormal customer identity category.
In some possible implementations, the feature information of the target object includes facial feature information of the target object, or facial feature information and human feature information of the target object.
In some possible implementations, the apparatus further includes:
the attribute determining module is used for determining the attribute of the target object based on the characteristic information of the target object, wherein the attribute comprises at least one of age, age range and gender;
a second result determination module to determine a second group statistical result within a target time period based on the attribute of the target object.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
The electronic device may be provided as a server or as a device in another form.
Fig. 3 shows a block diagram of an electronic device 1900 according to an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 3, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, thereby implementing various aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the techniques in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (35)

1. An image processing method is applied to a cloud server, and the method comprises the following steps:
acquiring characteristic information of a target object in a first image, wherein the first image is sent by a front-end server;
determining whether reference image information matched with the target object exists in a plurality of databases or not according to the characteristic information of the target object, wherein different databases in the plurality of databases correspond to different identity categories, and each database comprises one or more pieces of reference image information;
under the condition that no reference image information matched with the target object exists in the plurality of databases, determining the identity category of the target object as a first identity category corresponding to a first database, wherein the plurality of databases comprise the first database, and the first identity category is a common customer identity category;
determining the identity category of the target object as a second identity category corresponding to a second database under the condition that reference image information matched with the target object exists in the plurality of databases, wherein the second database is a database to which the matched reference image information belongs, the second identity category comprises a special personnel identity category, and the special personnel identity category comprises one or any combination of a member customer identity category, a worker identity category and an abnormal customer identity category;
the second database includes a smaller number of reference image information than the first database, the first database having a search priority after the second database.
2. The method of claim 1, further comprising:
in the event that there is reference image information in the plurality of databases that matches the target object, determining an identity of the target object based on the matching reference image information.
3. The method of claim 1, further comprising:
and adding the image and/or the characteristic information of the target object to the first database under the condition that the reference image information matched with the target object does not exist in the plurality of databases.
4. The method of claim 1, further comprising:
determining a first population statistics result within a target time period according to the identity category of the target object,
wherein the corresponding time of the first image is within the target time period, the first population statistics result comprises at least one of the number of visitors and the number of visits of at least one target place in the target time period, and one or more shooting devices are arranged in each target place.
5. The method of claim 4, wherein determining a first population statistics result for a target time period based on the identity class of the target object comprises:
and determining a first group counting result in the target time period according to at least one of history matching data of the reference image information matched with the target object and corresponding time of the first image and the identity category of the target object under the condition that the reference image information matched with the target object exists in the plurality of databases.
6. The method of claim 5, wherein the historical match data for the matched reference image information includes a most recent time of occurrence of the person for the reference image information;
wherein, in the case that reference image information matched with the target object exists in the plurality of databases, determining a first population statistics result in the target time period according to at least one of history matching data of the reference image information matched with the target object and corresponding time of the first image and an identity category of the target object includes:
under the condition that the interval between the latest appearance time of the person corresponding to the reference image information and the corresponding time of the first image exceeds a preset interval, determining a first group counting result in the target time period according to the identity type of the target object; and/or
And under the condition that the latest appearance time of the figure corresponding to the reference image information is within the target time period and the interval between the latest appearance time and the corresponding time of the first image does not exceed a preset interval, forbidding counting the current visit of the target object into the number of visitors within the target time period.
7. The method of claim 4, wherein determining a first population statistics result for a target time period based on the identity class of the target object comprises:
under the condition that the identity category of the target object is the staff identity category, prohibiting counting the current visit of the target object into the first population statistics result in the target time period; and/or
under the condition that the identity category of the target object is not the staff identity category, counting the current visit of the target object into the first population statistics result in the target time period.
8. The method of claim 4, further comprising:
and sending the first group counting result in the target time period to a terminal.
9. The method of claim 1, further comprising:
and sending at least one of the identity type, the identity and the visiting information of the target object to a terminal.
10. The method of claim 1, wherein the plurality of databases include a member customer database, a staff member database, an abnormal customer database, and a general customer database,
determining whether reference image information matched with the target object exists in a plurality of databases according to the characteristic information of the target object, wherein the determining comprises the following steps:
and according to the characteristic information of the target object, sequentially searching whether reference image information matched with the target object exists in the member customer database, the staff database, the abnormal customer database, and the general customer database.
11. The method of claim 1, wherein obtaining feature information of the target object in the first image comprises:
receiving a first image sent by a front-end server;
and performing feature extraction processing on the first image to obtain feature information of a target object in the first image.
12. The method of claim 1, wherein obtaining feature information of the target object in the first image comprises:
determining a first human body region image of the target object from the first image;
determining a second face area image of the target object in the second image according to the first human body area image of the target object;
determining feature information extracted from the second face region image as feature information of the target object.
13. The method according to claim 1, wherein the feature information of the target object includes facial feature information of the target object, or facial feature information and human body feature information of the target object.
14. The method according to any one of claims 1-13, further comprising:
determining attributes of the target object based on the characteristic information of the target object, wherein the attributes comprise at least one of age, age range and gender;
determining a second population statistic within a target time period based on the attributes of the target object.
15. An image processing apparatus applied to a cloud server, the apparatus comprising:
the system comprises a characteristic information acquisition module, a characteristic information acquisition module and a characteristic information acquisition module, wherein the characteristic information acquisition module is used for acquiring characteristic information of a target object in a first image, and the first image is sent by a front-end server;
the image information matching module is used for determining whether reference image information matched with the target object exists in a plurality of databases according to the characteristic information of the target object, wherein different databases in the plurality of databases correspond to different identity categories, and each database comprises one or more pieces of reference image information;
a first category determining module, configured to determine an identity category of the target object as a first identity category corresponding to a first database when no reference image information matching the target object exists in the multiple databases, where the multiple databases include the first database, the first identity category is a general customer identity category,
the second category determining module is used for determining the identity category of the target object as a second identity category corresponding to a second database under the condition that reference image information matched with the target object exists in the plurality of databases, wherein the second database is a database to which the matched reference image information belongs, the second identity category comprises a special personnel identity category, and the special personnel identity category comprises one or any combination of a member customer identity category, a worker identity category and an abnormal customer identity category;
wherein the second database comprises a smaller amount of reference image information than the first database, and the first database is searched after the second database.
16. The apparatus of claim 15, wherein the second category determining module is further configured to:
in the event that there is reference image information in the plurality of databases that matches the target object, determining an identity of the target object based on the matching reference image information.
17. The apparatus of claim 15, further comprising:
and the adding module is used for adding the image and/or the characteristic information of the target object into the first database under the condition that the reference image information matched with the target object does not exist in the plurality of databases.
18. The apparatus of claim 15, further comprising:
a first result determination module for determining a first population statistics result within a target time period according to the identity category of the target object,
wherein the corresponding time of the first image is within the target time period, the first population statistics result comprises at least one of the number of visitors and the number of visits of at least one target place in the target time period, and one or more shooting devices are arranged in each target place.
19. The apparatus of claim 18, wherein the first result determination module is further configured to:
and determining a first group counting result in the target time period according to at least one of history matching data of the reference image information matched with the target object and corresponding time of the first image and the identity category of the target object under the condition that the reference image information matched with the target object exists in the plurality of databases.
20. The apparatus of claim 19, wherein the historical match data for the matched reference image information includes a most recent time of occurrence of the person for the reference image information;
wherein the first result determination module is further configured to:
under the condition that the interval between the latest appearance time of the person corresponding to the reference image information and the corresponding time of the first image exceeds a preset interval, determining a first group counting result in the target time period according to the identity type of the target object; and/or
And under the condition that the latest appearance time of the figure corresponding to the reference image information is within the target time period and the interval between the latest appearance time and the corresponding time of the first image does not exceed a preset interval, forbidding counting the current visit of the target object into the number of visitors within the target time period.
21. The apparatus of claim 18, wherein the first result determination module is further configured to:
under the condition that the identity category of the target object is the staff identity category, prohibiting counting the current visit of the target object into the first population statistics result in the target time period; and/or
under the condition that the identity category of the target object is not the staff identity category, counting the current visit of the target object into the first population statistics result in the target time period.
22. The apparatus of claim 18, further comprising:
and the first sending module is used for sending the first group counting result in the target time period to the terminal.
23. The apparatus of claim 15, further comprising:
and the second sending module is used for sending at least one of the identity type, the identity and the visiting information of the target object to the terminal.
24. The apparatus of claim 15, wherein the plurality of databases include a member customer database, a staff member database, an abnormal customer database, and a general customer database,
wherein the image information matching module is further configured to:
and according to the characteristic information of the target object, sequentially searching whether reference image information matched with the target object exists in the member customer database, the staff database, the abnormal customer database, and the general customer database.
25. The apparatus of claim 15, wherein the feature information obtaining module is further configured to:
receiving a first image sent by a front-end server;
and performing feature extraction processing on the first image to obtain feature information of a target object in the first image.
26. The apparatus of claim 15, wherein the feature information obtaining module is further configured to:
determining a first human body region image of the target object from the first image;
determining a second face area image of the target object in the second image according to the first human body area image of the target object;
determining feature information extracted from the second face region image as feature information of the target object.
27. The apparatus according to claim 15, wherein the feature information of the target object includes facial feature information of the target object, or facial feature information and human body feature information of the target object.
28. The apparatus according to any one of claims 15-27, further comprising:
the attribute determining module is used for determining the attribute of the target object based on the characteristic information of the target object, wherein the attribute comprises at least one of age, age range and gender;
a second result determination module to determine a second group statistical result within a target time period based on the attribute of the target object.
29. A passenger flow analysis system, the system comprising:
at least one camera, arranged at a target place and used for acquiring a video stream of a monitored area;
a server, connected to the at least one camera and used for receiving the video stream from the at least one camera and performing target identification on the video stream according to a plurality of databases to obtain a target identification result of a target object in a first image of the video stream, wherein the target identification result comprises at least one of the identity category, the identity, and the visiting information of the target object;
when reference image information matched with the target object exists in the plurality of databases, determining the identity category of the target object as a second identity category corresponding to a second database, wherein different databases in the plurality of databases correspond to different identity categories, each database comprises one or more pieces of reference image information, the second database is a database to which the matched reference image information belongs, the second identity category comprises a special personnel identity category, and the special personnel identity category comprises one or any combination of a member customer identity category, a worker identity category and an abnormal customer identity category;
under the condition that no reference image information matched with the target object exists in the plurality of databases, determining the identity category of the target object as a first identity category corresponding to a first database, wherein the first identity category is a common customer identity category;
wherein the second database comprises a smaller amount of reference image information than the first database, and the first database is searched after the second database;
and the terminal equipment is connected to the server and used for receiving the target identification result sent by the server and displaying the target identification result.
30. The system of claim 29, wherein the servers comprise a front-end server and a cloud server,
the front-end server is arranged at the target place, is connected to the at least one camera, and is used for receiving the video stream sent by the at least one camera and carrying out target detection on at least one frame of video image in the video stream to obtain detection data of at least one target object in the at least one frame of video image;
the cloud server is connected to the front-end server and used for receiving detection data of the front-end server and carrying out target identification on the at least one target object based on the detection data to obtain a target identification result of the at least one target object.
31. The system of claim 30,
the server is further used for obtaining a group statistical result in a target time period based on the target identification result, and sending the group statistical result to the terminal equipment, wherein the group statistical result comprises a first group statistical result in the target time period;
the terminal equipment is also used for displaying the group statistical result.
32. The system of claim 31, wherein the terminal device is connected to the cloud server for receiving and displaying the group statistics sent by the cloud server within the target time period,
the terminal equipment is further used for receiving and displaying the target identification result sent by the cloud server.
33. The system according to any one of claims 29 to 32, wherein the terminal device comprises a personal computer, a smartphone, a wearable device or a tablet computer.
34. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to: performing the method of any one of claims 1 to 14.
35. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 14.
CN201810639810.XA 2018-06-20 2018-06-20 Image processing method and device, electronic equipment and storage medium Active CN109145707B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810639810.XA CN109145707B (en) 2018-06-20 2018-06-20 Image processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109145707A CN109145707A (en) 2019-01-04
CN109145707B true CN109145707B (en) 2021-09-14

Family

ID=64802180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810639810.XA Active CN109145707B (en) 2018-06-20 2018-06-20 Image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109145707B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110134810A (en) * 2019-05-14 2019-08-16 深圳市商汤科技有限公司 Retrieve the method and device of image
CN112036919A (en) * 2019-06-03 2020-12-04 北京市商汤科技开发有限公司 Data processing method, device and storage medium
CN110942036B (en) * 2019-11-29 2023-04-18 深圳市商汤科技有限公司 Person identification method and device, electronic equipment and storage medium
CN110955792A (en) * 2019-12-13 2020-04-03 云粒智慧科技有限公司 Searching method and device based on picture, electronic equipment and storage medium
CN111401171B (en) * 2020-03-06 2023-09-22 咪咕文化科技有限公司 Face image recognition method and device, electronic equipment and storage medium
CN113838239B (en) * 2020-06-24 2023-09-29 阿里巴巴集团控股有限公司 Data processing method and device, verification system and electronic equipment
CN111782881B (en) * 2020-06-30 2023-06-16 北京市商汤科技开发有限公司 Data processing method, device, equipment and storage medium
CN112084355A (en) * 2020-09-14 2020-12-15 重庆农村商业银行股份有限公司 Face sub-library updating method, device, equipment and storage medium
CN113128437A (en) * 2021-04-27 2021-07-16 北京市商汤科技开发有限公司 Identity recognition method and device, electronic equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2652947A1 (en) * 2010-12-14 2013-10-23 Scenetap, LLC Apparatus and method to monitor customer demographics in a venue or similar facility
CN105488478A (en) * 2015-12-02 2016-04-13 深圳市商汤科技有限公司 Face recognition system and method
CN105809178A (en) * 2014-12-31 2016-07-27 中国科学院深圳先进技术研究院 Population analyzing method based on human face attribute and device
CN106657606A (en) * 2016-11-16 2017-05-10 努比亚技术有限公司 Photograph processing method, device and terminal
CN106997566A (en) * 2016-01-25 2017-08-01 新谊整合科技股份有限公司 The method and system provided personalized service according to identity
CN107292240A (en) * 2017-05-24 2017-10-24 深圳市深网视界科技有限公司 It is a kind of that people's method and system are looked for based on face and human bioequivalence
CN107679613A (en) * 2017-09-30 2018-02-09 同观科技(深圳)有限公司 A kind of statistical method of personal information, device, terminal device and storage medium
CN107886079A (en) * 2017-11-22 2018-04-06 北京旷视科技有限公司 Object identifying method, apparatus and system

Also Published As

Publication number Publication date
CN109145707A (en) 2019-01-04

Similar Documents

Publication Publication Date Title
CN109145707B (en) Image processing method and device, electronic equipment and storage medium
CN109145127B (en) Image processing method and device, electronic equipment and storage medium
CN108027827B (en) Coordinated communication and/or storage based on image analysis
US11481789B2 (en) Information processing apparatus, system, control method for information processing apparatus, and non-transitory computer-readable storage medium
CN107291732B (en) Information pushing method and device
JP4778532B2 (en) Customer information collection management system
CN110799972A (en) Dynamic human face image storage method and device, electronic equipment, medium and program
US9576371B2 (en) Busyness defection and notification method and system
CN103300815B (en) Eyeball focus determination method, device and system
JP5780348B1 (en) Information presentation program and information processing apparatus
JP7058755B2 (en) Data processing methods, equipment and storage media
CN110121108B (en) Video value evaluation method and device
CN105659279B (en) Information processing apparatus, information processing method, and computer program
CN110837512A (en) Visitor information management method and device, electronic equipment and storage medium
CN111178966A (en) Latent customer behavior analysis method and system based on face recognition
EP3859664A1 (en) Authentication device, authentication method, and recording medium
US20140278745A1 (en) Systems and methods for providing retail process analytics information based on physiological indicator data
US20160189170A1 (en) Recognizing Customers Requiring Assistance
CN111612657A (en) Client type identification method and device, electronic equipment and storage medium
JP2023507043A (en) DATA PROCESSING METHOD, DEVICE, DEVICE, STORAGE MEDIUM AND COMPUTER PROGRAM
US20200073877A1 (en) Video cookies
CN111246110B (en) Image output method and device, storage medium and electronic device
CN110443187B (en) Recording method and device of characteristic information
KR102278749B1 (en) Marketing Analysis Service Platform and method for Offline Store
CN110636363A (en) Multimedia information playing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant