CN109145127B - Image processing method and device, electronic equipment and storage medium - Google Patents


Publication number
CN109145127B
Authority
CN
China
Prior art keywords
object image
information
image
cache
cache information
Prior art date
Legal status
Active
Application number
CN201810639814.8A
Other languages
Chinese (zh)
Other versions
CN109145127A (en)
Inventor
Wang Lei (王磊)
Current Assignee
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd
Priority to CN201810639814.8A
Publication of CN109145127A
Application granted
Publication of CN109145127B

Landscapes

  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to an image processing method and apparatus, an electronic device, and a storage medium. The method includes: performing target object detection on a video stream collected from a first geographic area to obtain a plurality of first object images; performing de-duplication processing on the plurality of first object images to obtain at least one second object image from the plurality of first object images; and sending the at least one second object image to a server. Embodiments of the disclosure can remove repeated objects in the geographic area, reducing both communication overhead and the load on the server.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
In venues that require monitoring (such as shopping malls and supermarkets), images of persons visiting the venue (such as customers or staff) can be collected by devices such as cameras, and the images can be analyzed and processed in different ways by processing devices. During such analysis, further processing of the images is needed to obtain relevant information about the visitors.
Disclosure of Invention
The present disclosure proposes an image processing technical solution.
According to an aspect of the present disclosure, there is provided an image processing method including:
performing target object detection on a collected video stream of a first geographic area to obtain a plurality of first object images;
performing de-duplication processing on the plurality of first object images to obtain at least one second object image from the plurality of first object images; and
sending the at least one second object image to a server.
In some possible implementations, the performing a deduplication process on the plurality of first object images to obtain at least one second object image in the plurality of first object images includes:
acquiring feature information of a third object image in the plurality of first object images;
determining whether cache information matched with the feature information of the third object image exists in a first cache information queue corresponding to the first geographic area, wherein the first cache information queue comprises at least one cache information;
and determining a third object image as the second object image when the first cache information queue does not have cache information matched with the feature information of the third object image.
In some possible implementations, in a case that cache information matching the feature information of the third object image exists in the first cache information queue, the third object image is not included in the at least one second object image.
In some possible implementations, the method further includes:
and storing the characteristic information of the third object image and/or the third object image into the first cache information queue under the condition that cache information matched with the characteristic information of the third object image does not exist in the first cache information queue.
In some possible implementations, determining whether there is cache information in the first cache information queue of the first geographic area that matches the feature information of the third object image includes:
according to the feature information of the third object image and the feature information corresponding to at least one piece of cache information in the first cache information queue, obtaining the similarity between the third object image and the at least one piece of cache information in the first cache information queue;
and determining whether the first cache information queue has cache information matched with the feature information of the third object image or not based on the similarity between the third object image and at least one cache information in the first cache information queue.
In some possible implementations, the cache information includes cache characteristic information;
obtaining a similarity between the third object image and at least one cache information in the first cache information queue according to the feature information of the third object image and the feature information corresponding to the at least one cache information, including:
determining the distance between the feature information of the third object image and each cache feature information in at least one cache feature information in the first cache information queue;
and determining the similarity between the third object image and the at least one cache information according to the distance corresponding to the at least one cache characteristic information.
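As an illustration, the distance-based similarity described in these steps can be sketched as follows. The sketch assumes L2-normalized feature vectors and Euclidean distance, and the linear mapping from distance to similarity is one common choice rather than a requirement of the disclosure:

```python
import math

def similarity_from_distance(query_feat, cached_feats):
    """Convert Euclidean distances between the feature information of a
    query image and each piece of cached feature information into
    similarity scores.

    Assumes L2-normalized vectors, so the distance lies in [0, 2]; the
    mapping below (smaller distance -> higher similarity) is illustrative.
    """
    sims = []
    for cached in cached_feats:
        dist = math.sqrt(sum((q - c) ** 2 for q, c in zip(query_feat, cached)))
        sims.append(1.0 - dist / 2.0)
    return sims
```

A distance of 0 (identical features) maps to similarity 1.0, while orthogonal unit vectors map to about 0.29 under this choice.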
In some possible implementations, the first geographic area is one of a plurality of geographic areas included in the target site.
In some possible implementations, the method is applied to a front-end device disposed at a target site, the front-end device being connected to one or more cameras disposed within the first geographic area.
According to another aspect of the present disclosure, there is provided an image processing method including:
receiving at least one second object image detected in a first geographic area and sent by front-end equipment, wherein the at least one second object image is obtained after the front-end equipment performs de-duplication processing on a plurality of detected first object images;
and performing identification processing on the at least one second object image to obtain an identification result of the first geographical area.
In some possible implementations, the obtaining the identification result of the first geographic area by performing the identification process on the at least one second object image includes:
performing de-duplication processing on the at least one second object image, and determining at least one fourth object image in the at least one second object image;
and performing identification processing on the at least one fourth object image to obtain an identification result of the first geographic area.
In some possible implementations, the performing the de-duplication process on the at least one second object image and determining at least one fourth object image in the at least one second object image includes:
acquiring feature information of a fifth object image in the at least one second object image;
searching whether reference image information matched with the feature information of the fifth object image exists in a database;
determining the fifth object image as a fourth object image in a case where reference image information matching the feature information of the fifth object image does not exist in the database.
In some possible implementations, the performing the de-duplication process on the at least one second object image to determine at least one fourth object image in the at least one second object image further includes:
searching whether reference image information exists in a second cache information queue of the first geographic area or not under the condition that the reference image information matched with the feature information of the fifth object image exists in the database;
and when the reference image information does not exist in the second cache information queue, determining the fifth object image as a fourth object image.
In some possible implementations, when the reference image information exists in the second cache information queue, the fifth object image is not included in the at least one fourth object image.
In some possible implementations, the method further includes:
and storing the reference image information into the second cache information queue under the condition that the reference image information does not exist in the second cache information queue.
In some possible implementations, the method further includes:
and under the condition that the reference image information matched with the feature information of the fifth object image does not exist in the database, storing the feature information of the fifth object image and/or the fifth object image into the database and the second cache information queue respectively.
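The server-side de-duplication flow above (database lookup, then second cache information queue check, then storage) can be sketched as follows; all function and variable names are illustrative, and the match predicate is left abstract:

```python
def server_deduplicate(second_images, database, second_queue, match_fn):
    """Sketch of server-side de-duplication (hypothetical helper names).

    second_images: list of (image_id, feature) pairs received from the
    front-end device.
    database: dict mapping reference_id -> reference feature information.
    second_queue: set of reference_ids already seen in this area/period.
    match_fn(feat, ref_feat) -> bool decides whether two features match.

    Returns the fourth-object image ids (those surviving de-duplication)
    and updates database/second_queue in place.
    """
    fourth_images = []
    for image_id, feat in second_images:
        ref_id = next((rid for rid, ref in database.items()
                       if match_fn(feat, ref)), None)
        if ref_id is None:
            # No matching reference image information: new object;
            # store in both the database and the second cache queue.
            database[image_id] = feat
            second_queue.add(image_id)
            fourth_images.append(image_id)
        elif ref_id not in second_queue:
            # Known object, but first sighting here: keep and cache.
            second_queue.add(ref_id)
            fourth_images.append(image_id)
        # else: duplicate within the area; drop the image.
    return fourth_images
```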
In some possible implementations, the database includes a member customer database, a staff database, an abnormal customer database, and a general customer database,
searching whether reference image information matched with the feature information of the fifth object image exists in a database or not, wherein the searching comprises the following steps:
and sequentially searching whether reference image information matched with the characteristic information of the fifth object image exists in the member customer database, the staff database, the abnormal customer database and the common customer database.
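A minimal sketch of this fixed-order search, with hypothetical database names and an abstract match predicate:

```python
def find_reference(feature, databases, match_fn):
    """Search the four sub-databases in the order stated in the text.

    Returns (database_name, reference_id) for the first match, or
    (None, None) if no database contains matching reference image
    information. The dict-of-dicts layout is an assumption.
    """
    order = ["member_customer", "staff", "abnormal_customer", "general_customer"]
    for name in order:
        for ref_id, ref_feat in databases.get(name, {}).items():
            if match_fn(feature, ref_feat):
                return name, ref_id
    return None, None
```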
In some possible implementations, the recognition result includes an identity class of the target object,
wherein, the identifying the at least one fourth object image to obtain the identification result of the first geographic area includes:
determining the identity class of the target object in the fourth object image as a first identity class corresponding to the matched reference image information if the matched reference image information exists in the fourth object image;
and determining the identity class of the target object in the fourth object image as a second identity class under the condition that the fourth object image does not have the matched reference image information.
In some possible implementations, the second identity category includes a general customer identity category, and the first identity category includes a non-general customer identity category, which includes one or any combination of a member customer identity category, a staff identity category, and an abnormal customer identity category.
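The mapping from a matched database to an identity class described above can be sketched as follows (the class labels and database keys are illustrative):

```python
def classify_identity(matched_db_name):
    """Assign an identity class from the database that produced a match.

    matched_db_name is None when no reference image information matched,
    in which case the object falls back to the second (general customer)
    identity class. The mapping is a sketch, not mandated by the text.
    """
    first_classes = {
        "member_customer": "member customer",
        "staff": "staff",
        "abnormal_customer": "abnormal customer",
    }
    if matched_db_name in first_classes:
        return first_classes[matched_db_name]  # first identity class
    return "general customer"                   # second identity class
```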
In some possible implementations, the method further includes:
determining a first population statistics result for the first geographic area within a target time period according to the identity category of the target object,
wherein the corresponding time of the image of the target object is within the target time period, and the first population statistics result includes at least one of the number of visits and the number of visitors in at least one geographic area within the target time period.
In some possible implementations, determining a first population statistics result within a target time period according to an identity category of the target object includes:
in a case that the target object has matched reference image information, determining the first population statistics result within the target time period according to the identity category of the target object and at least one of historical matching data of the matched reference image information and the corresponding time of the image of the target object.
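As a sketch, counting visits and distinct visitors within a target time period from de-duplicated sightings might look like this; the record layout is an assumption of the example:

```python
from datetime import datetime, timedelta

def count_visits(records, start, end):
    """Count visits and distinct visitors in the half-open window
    [start, end).

    records is a list of (visitor_id, timestamp) sightings remaining
    after de-duplication; this tuple layout is illustrative.
    Returns (visit_count, distinct_visitor_count).
    """
    in_window = [(vid, t) for vid, t in records if start <= t < end]
    visits = len(in_window)
    visitors = len({vid for vid, _ in in_window})
    return visits, visitors
```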
In some possible implementations, the method further includes:
and sending the first population statistics result within the target time period to a terminal.
In some possible implementations, the method further includes:
and sending at least one of the identity type, the identity and the visiting information of the target object to a terminal.
In some possible implementations, the method is applied in a server that is communicatively connected to one or more front-end devices.
According to another aspect of the present disclosure, there is provided an image processing apparatus including:
the detection module is used for detecting the target object of the collected video stream of the first geographic area to obtain a plurality of first object images;
the de-duplication module is used for performing de-duplication processing on the plurality of first object images to obtain at least one second object image in the plurality of first object images;
and the first sending module is used for sending the at least one second object image to the server.
In some possible implementations, the deduplication module is further configured to:
acquiring feature information of a third object image in the plurality of first object images;
determining whether cache information matched with the feature information of the third object image exists in a first cache information queue corresponding to the first geographic area, wherein the first cache information queue comprises at least one cache information;
and determining a third object image as the second object image when the first cache information queue does not have cache information matched with the feature information of the third object image.
In some possible implementations, in a case that cache information matching the feature information of the third object image exists in the first cache information queue, the third object image is not included in the at least one second object image.
In some possible implementations, the apparatus further includes:
the first storage module is configured to store the feature information of the third object image and/or the third object image into the first cache information queue when cache information matching the feature information of the third object image does not exist in the first cache information queue.
In some possible implementations, the deduplication module is further configured to:
according to the feature information of the third object image and the feature information corresponding to at least one piece of cache information in the first cache information queue, obtaining the similarity between the third object image and the at least one piece of cache information in the first cache information queue;
and determining whether the first cache information queue has cache information matched with the feature information of the third object image or not based on the similarity between the third object image and at least one cache information in the first cache information queue.
In some possible implementations, the cache information includes cache characteristic information;
the de-emphasis module is further configured to:
determining the distance between the feature information of the third object image and each cache feature information in at least one cache feature information in the first cache information queue;
and determining the similarity between the third object image and the at least one cache information according to the distance corresponding to the at least one cache characteristic information.
In some possible implementations, the first geographic area is one of a plurality of geographic areas included in the target site.
In some possible implementations, the apparatus is applied to a front-end device disposed at a target site, the front-end device being connected to one or more cameras disposed within the first geographic area.
According to another aspect of the present disclosure, there is provided an image processing apparatus including:
the device comprises a receiving module, a processing module and a processing module, wherein the receiving module is used for receiving at least one second object image detected in a first geographic area sent by front-end equipment, and the at least one second object image is obtained after the front-end equipment performs de-duplication processing on a plurality of detected first object images;
and the identification module is used for identifying the at least one second object image to obtain an identification result of the first geographic area.
In some possible implementations, the identification module is further configured to:
performing de-duplication processing on the at least one second object image, and determining at least one fourth object image in the at least one second object image;
and performing identification processing on the at least one fourth object image to obtain an identification result of the first geographic area.
In some possible implementations, the identification module is further configured to:
acquiring feature information of a fifth object image in the at least one second object image;
searching whether reference image information matched with the feature information of the fifth object image exists in a database;
determining the fifth object image as a fourth object image in a case where reference image information matching the feature information of the fifth object image does not exist in the database.
In some possible implementations, the identification module is further configured to:
searching whether reference image information exists in a second cache information queue of the first geographic area or not under the condition that the reference image information matched with the feature information of the fifth object image exists in the database;
and when the reference image information does not exist in the second cache information queue, determining the fifth object image as a fourth object image.
In some possible implementations, when the reference image information exists in the second cache information queue, the fifth object image is not included in the at least one fourth object image.
In some possible implementations, the apparatus further includes:
and the second storage module is used for storing the reference image information into the second cache information queue under the condition that the reference image information does not exist in the second cache information queue.
In some possible implementations, the apparatus further includes:
and a third storage module, configured to store the feature information of the fifth object image and/or the fifth object image into the database and the second cache information queue, respectively, when there is no reference image information matching the feature information of the fifth object image in the database.
In some possible implementations, the database includes a member customer database, a staff database, an abnormal customer database, and a general customer database,
wherein the identification module is further configured to:
and sequentially searching whether reference image information matched with the characteristic information of the fifth object image exists in the member customer database, the staff database, the abnormal customer database and the common customer database.
In some possible implementations, the recognition result includes an identity class of the target object,
wherein the identification module is further configured to:
determining the identity class of the target object in the fourth object image as a first identity class corresponding to the matched reference image information if the matched reference image information exists in the fourth object image;
and determining the identity class of the target object in the fourth object image as a second identity class under the condition that the fourth object image does not have the matched reference image information.
In some possible implementations, the second identity category includes a general customer identity category, and the first identity category includes a non-general customer identity category, which includes one or any combination of a member customer identity category, a staff identity category, and an abnormal customer identity category.
In some possible implementations, the apparatus further includes:
a first statistics module, configured to determine a first population statistics result of the first geographic area within a target time period according to the identity category of the target object,
wherein the corresponding time of the image of the target object is within the target time period, and the first population statistics result includes at least one of the number of visits and the number of visitors in at least one geographic area within the target time period.
In some possible implementations, the first statistics module is further configured to:
in a case that the target object has matched reference image information, determine the first population statistics result within the target time period according to the identity category of the target object and at least one of historical matching data of the matched reference image information and the corresponding time of the image of the target object.
In some possible implementations, the apparatus further includes:
and the second sending module is used for sending the first population statistics result within the target time period to the terminal.
In some possible implementations, the apparatus further includes:
and the third sending module is used for sending at least one of the identity type, the identity and the visiting information of the target object to the terminal.
In some possible implementations, the apparatus is implemented in a server communicatively coupled to one or more front-end devices.
According to another aspect of the present disclosure, there is provided an electronic device including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to: the above-described image processing method is performed.
According to another aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described image processing method.
In the embodiment of the disclosure, a plurality of first object images are obtained by detecting a target object in a video stream acquired in a geographic area; carrying out de-duplication processing on the plurality of first object images to obtain at least one second object image in the plurality of first object images; and at least one second object image is sent to the server, thereby removing repeated objects in the geographic area and reducing communication overhead and server load.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
Fig. 2 shows a flow chart of an image processing method according to an embodiment of the present disclosure.
Fig. 3 illustrates a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
Fig. 4 illustrates a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
Fig. 5 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure. The image processing method can be applied to a front-end device such as a front-end server or a camera provided at a target site. The method comprises the following steps:
in step S11, performing target object detection on the collected video stream of the first geographic area to obtain a plurality of first object images;
in step S12, performing a deduplication process on the plurality of first object images to obtain at least one second object image of the plurality of first object images;
in step S13, the at least one second object image is transmitted to the server.
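Steps S11 to S13 can be sketched as a pipeline; the detector, de-duplication step, and sender are hypothetical callables standing in for the components described above:

```python
def process_area_stream(frames, detect_fn, dedupe_fn, send_fn):
    """End-to-end sketch of steps S11-S13.

    detect_fn(frame) -> list of first object images found in the frame
    dedupe_fn(images) -> list of second object images after de-duplication
    send_fn(images) -> delivers the second object images to the server
    All three callables are illustrative stand-ins.
    """
    first_images = []
    for frame in frames:                     # S11: target object detection
        first_images.extend(detect_fn(frame))
    second_images = dedupe_fn(first_images)  # S12: de-duplication
    send_fn(second_images)                   # S13: send to the server
    return second_images
```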
According to the image processing method disclosed by the embodiment of the disclosure, a plurality of first object images can be obtained by carrying out target object detection on the acquired video stream; carrying out de-duplication processing on the plurality of first object images to obtain at least one second object image in the plurality of first object images; and sending at least one second object image to the server, thereby removing duplicate objects in the first geographic area, reducing communication overhead and server load.
For example, the target site to be monitored may be any venue such as a store, a shopping mall, or a supermarket, and the object to be identified may be a visitor to the target site (e.g., a customer or a staff member). There may be one or more target sites under statistics (for example, one or more stores in a mall). Each target site may include one or more geographic areas preset by a user (for example, the first-floor and second-floor areas of a mall, or an entrance area, exit area, dining area, mother-and-baby area, and electrical appliance area within a mall), and a target site may also contain both preset geographic areas and areas that are not preset. Shooting devices (e.g., cameras) may be arranged in the various areas of the target site, with one or more cameras per area, to capture video of each area. The shooting devices can be deployed according to the actual environment of the site, at least at key entrances, so as to cover passenger flow and visitors as comprehensively as possible.
In some possible implementations, the front-end device may be a front-end server disposed in the target site. The front-end device may be connected to one or more cameras of the first geographic area to receive video streams or video sequences from the cameras. The front-end device can perform operations such as decoding and frame selection on the video streams of the respective cameras, thereby obtaining an object image containing a target object (e.g., a person). Optionally, the first geographic area is one of a plurality of geographic areas included in the target site.
In some possible implementations, the front-end server may perform decoding processing on video streams captured by one or more cameras in the first geographic area to obtain decoded video frames; and tracking and frame selection processing is carried out on the video frames through a tracking filter, and the video frames including the first object to be analyzed are selected. Then, a plurality of first object images may be segmented from the video frame. The first object image may be any one of a face image, a human body image, and a video frame including the target object.
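The frame-selection step can be illustrated as follows: for each track produced by the tracking filter, keep the single best frame by some quality measure. The quality function and the threshold are illustrative assumptions, not parameters fixed by the disclosure:

```python
def select_frames(tracked_frames, quality_fn, min_quality=0.5):
    """Frame-selection sketch: for each track, keep the highest-quality
    frame, discarding tracks whose best frame is below a threshold.

    tracked_frames maps track_id -> list of candidate frames; quality_fn
    scores a frame (e.g., by sharpness or face pose, both hypothetical).
    """
    selected = {}
    for track_id, frames in tracked_frames.items():
        best = max(frames, key=quality_fn)
        if quality_fn(best) >= min_quality:
            selected[track_id] = best
    return selected
```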
In some possible implementations, step S12 includes:
acquiring feature information of a third object image in the plurality of first object images;
determining whether cache information matched with the feature information of the third object image exists in a first cache information queue corresponding to the first geographic area, wherein the first cache information queue comprises at least one cache information;
and determining a third object image as the second object image when the first cache information queue does not have cache information matched with the feature information of the third object image.
For example, intra-region de-duplication (removal of duplicate object images) may be performed on the plurality of first object images in the first geographic area. The front-end device may be provided with a first cache information queue corresponding to the first geographic area, in which one or more pieces of cache information are cached. The cache information may include an image of a target object (e.g., a face image and/or a body image) and/or feature information of a target object (e.g., facial feature information and/or body feature information of the target object).
The first cache information queue may include image information of target objects detected in the first geographic area within a certain time period, which may be set as required (for example, 5 minutes). That is, multiple appearances of the same object within the preset time period may be counted as a single visit, so that repeated detections of the same object within the period are removed in subsequent processing.
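A time-windowed cache queue of the kind described here (entries older than the preset period, e.g. 5 minutes, are dropped so that a returning object counts as a new visit) can be sketched as follows; the class layout and the abstract match predicate are assumptions of the example:

```python
import collections

class CacheQueue:
    """Sketch of a front-end first cache information queue with expiry.

    Entries expire after `window` seconds, so repeated sightings of the
    same object inside the window are treated as one visit.
    """
    def __init__(self, window=300.0):
        self.window = window
        self.entries = collections.deque()  # (timestamp, feature) pairs

    def _expire(self, now):
        # Drop cached entries older than the time window.
        while self.entries and now - self.entries[0][0] > self.window:
            self.entries.popleft()

    def seen(self, feature, now, match_fn):
        """Return True if a matching entry is cached (duplicate visit);
        otherwise cache the feature and return False, marking the image
        as a new second object image."""
        self._expire(now)
        if any(match_fn(feature, f) for _, f in self.entries):
            return True
        self.entries.append((now, feature))
        return False
```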
For a third object image among the plurality of first object images, feature extraction processing may be performed on the third object image to obtain its feature information. Optionally, the third object image may include a face region image, in which case the feature information includes facial feature information of the third object image; or the third object image may include a human body region image, in which case the feature information may include both facial feature information and human body feature information of the third object image. That is, only facial feature extraction may be performed on the third object image to obtain facial feature information, or both facial feature extraction and human body feature extraction may be performed to obtain facial feature information and human body feature information. The present disclosure does not limit the specific manner of feature extraction.
In some possible implementations, the first cache information queue of the front-end device includes at least one piece of cache information, and the cache information includes historical detection data of the first geographic area, for example, object image information detected within a certain period of time, such as an object image and/or feature information of the object image. An independent cache information queue may be established for each of a plurality of geographic areas of the target location; alternatively, one or more cache information queues may be established for the plurality of geographic areas, so that some or all of the geographic areas share the same cache information queue. In the latter case, cache information belonging to different geographic areas in a shared queue may optionally be distinguished by identifiers of the geographic areas, which is not limited in this embodiment of the disclosure.
In some possible implementations, after determining the feature information of the third object image in the first geographic area, a search may be performed in the first cache information queue corresponding to the first geographic area to determine whether cache information matching the feature information of the third object image exists.
In some possible implementations, the step of determining whether there is cache information in the first cache information queue of the first geographic area that matches the feature information of the third object image includes:
obtaining, according to the feature information of the third object image and the feature information corresponding to at least one piece of cache information in the first cache information queue, the similarity between the third object image and the at least one piece of cache information; and
determining, based on the similarity between the third object image and the at least one piece of cache information in the first cache information queue, whether cache information matching the feature information of the third object image exists in the first cache information queue.
For example, the feature information of the third object image may be compared with the feature information corresponding to the at least one piece of cache information in the first cache information queue to obtain the similarity between the feature information of the third object image and the feature information corresponding to each piece of cache information.
In some optional examples, the cache information includes cached feature information, in which case the feature information corresponding to the cache information may be the cached feature information it includes. Alternatively, the cache information does not include cached feature information but includes a cached image, in which case the feature information corresponding to the cache information may be feature information extracted from the cached image, which is not limited in this embodiment of the disclosure.
In some possible implementations, the step of obtaining, according to the feature information of the third object image and the feature information corresponding to the at least one piece of cache information in the first cache information queue, the similarity between the third object image and the at least one piece of cache information includes:
determining the distance between the feature information of the third object image and each piece of cached feature information among at least one piece of cached feature information in the first cache information queue; and
determining the similarity between the third object image and the at least one piece of cache information according to the distance corresponding to the at least one piece of cached feature information.
In some possible implementations, the feature information of the third object image may be facial feature information, and the feature information corresponding to the cache information is cached facial feature information. In this case, the similarity between the third object image and the cache information to which the cached facial feature information belongs may be determined, for example, by calculating a distance between the facial feature information of the third object image and the cached facial feature information, where the distance may include, but is not limited to, a cosine distance, a Euclidean distance, a Mahalanobis distance, and the like. The smaller the distance between the facial features, the greater the similarity between the third object image and the cache information. Whether the two correspond to the same person may be determined by setting a similarity threshold (for example, 0.86): when the similarity between the third object image and a certain piece of cache information in the first cache information queue exceeds the set threshold, it is determined that the feature information of the third object image matches that cache information. The set threshold can be adjusted according to actual conditions, and its value is not limited by the embodiment of the disclosure.
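The distance-and-threshold matching described above can be sketched as follows. This is an illustrative Python example using cosine similarity and the example threshold of 0.86; the function names are assumptions, and a real system would compare high-dimensional embeddings rather than 2-D vectors:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (1 - cosine distance)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def find_match(feature, cached_features, threshold=0.86):
    """Return the index of the first cached feature whose similarity
    exceeds the threshold, or None when no cache entry matches."""
    for i, cached in enumerate(cached_features):
        if cosine_similarity(feature, cached) > threshold:
            return i
    return None

cache = [[1.0, 0.0], [0.6, 0.8]]
m1 = find_match([0.99, 0.05], cache)  # near-duplicate of the first entry
m2 = find_match([0.0, -1.0], cache)   # matches nothing: a new visitor
```

A Euclidean or Mahalanobis distance could be substituted by inverting the comparison (smaller distance means greater similarity), as the text notes.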
In some possible implementations, when the similarity between the third object image and each piece of cache information in the first cache information queue does not exceed the set threshold, it is determined that no cache information matching the feature information of the third object image exists in the first cache information queue of the first geographic area. In this case, the third object image may be considered not to be a duplicate image, and may be determined as a second object image without de-duplication processing and sent to the server in the subsequent flow.
In some possible implementations, the method further includes:
and storing the characteristic information of the third object image and/or the third object image into the first cache information queue under the condition that cache information matched with the characteristic information of the third object image does not exist in the first cache information queue.
In some possible implementations, if there is no cache information in the first cache information queue of the first geographic area that matches the feature information of the third object image, the feature information of the third object image and/or the third object image may be stored in the first cache information queue so as to process a subsequent object image.
In some possible implementations, in a case that cache information matching the feature information of a third object image exists in the first cache information queue, the third object image is not included in the at least one second object image. That is, when the similarity between the third object image and at least one piece of cache information in the first cache information queue exceeds the set threshold, cache information matching the feature information of the third object image may be considered to exist in the first cache information queue of the first geographic area. In this case, the third object image may be regarded as a duplicate image, and de-duplication processing needs to be performed on it: the third object image is not included in the at least one second object image, that is, it is removed from the plurality of first object images, so that the object images remaining after filtering are the second object images.
In some possible implementations, when the similarity between the third object image and a certain piece of cache information in the first cache information queue is greater than the preset threshold, the two (the third object image and that piece of cache information) may correspond to the same person. To reduce the pressure of subsequent processing, only one object image is retained for the same person: the newly received object image may be deleted directly, or its quality may be compared with that of the prestored object image (the cache information in the first cache information queue), and when the quality of the newly received object image is better, it replaces the prestored object image in the image queue. When an image is identified as a duplicate, the number of occurrences of the corresponding object image may be accumulated and recorded to provide information for subsequent statistical processing. When the object image is judged not to be a duplicate, it is added to the image queue so that other newly received object images can be accurately identified in subsequent similarity matching.
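The keep-one-image policy above can be sketched as follows. This is an illustrative Python sketch under assumed data shapes (dicts with `feature`, `quality`, and `count` keys) and an injected `is_match` comparison callback; none of these names come from the disclosure:

```python
def dedup_update(new_img, cache, is_match):
    """On a match: bump the hit count and keep the higher-quality image.
    Otherwise: append the new image as a fresh cache entry.

    new_img: dict with 'feature' and 'quality'.
    cache:   list of dicts with 'feature', 'quality', 'count'.
    is_match(a, b): compares two feature values (illustrative).
    Returns True when the image is new (to be uploaded), False otherwise.
    """
    for entry in cache:
        if is_match(new_img["feature"], entry["feature"]):
            entry["count"] += 1          # record the repeated occurrence
            if new_img["quality"] > entry["quality"]:
                entry["feature"] = new_img["feature"]
                entry["quality"] = new_img["quality"]
            return False                 # duplicate: not forwarded to server
    cache.append(dict(new_img, count=1))
    return True                          # new object: will be uploaded

cache = [{"feature": "A", "quality": 0.5, "count": 1}]
same = lambda a, b: a == b
sent1 = dedup_update({"feature": "A", "quality": 0.9}, cache, same)  # duplicate
sent2 = dedup_update({"feature": "B", "quality": 0.4}, cache, same)  # new person
```

Returning a flag lets the caller decide whether to place the image among the second object images sent to the server.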
In some possible implementations, after determining at least one second object image of the plurality of first object images, the at least one second object image may be uploaded to a server for further analysis processing of the at least one second object image by the server.
In some possible implementations, the step of the front-end device performing target object detection on the video stream acquired in the first geographic area to obtain a plurality of first object images further includes:
the front-end equipment performs target detection on the video stream acquired from the first geographic area to obtain a plurality of object images;
and performing filtering operation on the obtained object images to obtain a plurality of first object images with image quality reaching a first preset condition.
In some possible implementations, the display quality of an object image may be evaluated through the face angle, the face width and height, and the face blurriness, so as to obtain object images whose display quality reaches the standard; this embodiment does not limit which specific indices are used to evaluate the display quality.
Performing a filtering operation on the obtained plurality of object images, including:
and filtering the obtained object images based on the face attributes corresponding to the object images.
The face attribute is used for representing the display quality of the face in the object image.
In some possible implementations, the face attributes include, but are not limited to, one or more of: the face angle, the face width and height values, and the face blurriness. More specifically, the face angles may include, but are not limited to: the yaw angle, representing the turning angle of the face in the horizontal direction; the pitch angle, representing the rotation angle of the face in the vertical direction; and the roll angle, representing the in-plane tilt of the face.
In some possible implementations, the filtering the obtained multiple object images based on the face attributes respectively includes:
acquiring face attributes corresponding to faces in the object image, and judging the face attributes;
matching each object image in at least one object image with object images prestored in an image queue, wherein the matching comprises the following steps:
in response to the face angle being within a first preset range, the face width and height values being greater than a second preset threshold, and/or the face blurriness being less than a third preset threshold: matching each object image in the at least one object image with the object images prestored in the image queue.
Further comprising:
and deleting the object image in response to the face angle not being within the first preset range, the face width and height value being less than or equal to the second preset threshold, and/or the face blurriness being greater than or equal to the third preset threshold.
In some possible implementations, the first preset range may be set to ±20° (specific values may be set according to the situation), i.e., the yaw, pitch and roll angles of the face are all within ±20° (the three angles may be given the same range or different ranges). The face width and height specifically include the face width and the face height, which are generally returned by the detection module and can be filtered by a setting; for example, with the threshold set to 50 pixels, an object image whose width or height is less than 50 pixels can be considered as failing the condition (the width and height thresholds may be set to different values or the same value). The face blurriness is typically returned by the SDK alignment module, and different values can be set; for example, with the threshold set to 0.7, an object image with a blurriness greater than 0.7 is considered to be of poor quality. The values ±20°, 50 pixels and 0.7 are all adjustable.
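The quality gate with the example thresholds above can be sketched as follows. This is an illustrative Python sketch; the function and parameter names are assumptions, and all three thresholds are the adjustable example values from the text:

```python
def passes_quality_filter(yaw, pitch, roll, width, height, blur,
                          angle_range=20.0, min_size=50, max_blur=0.7):
    """Quality gate: all three face angles within +/-20 degrees,
    width and height at least 50 pixels, blurriness below 0.7."""
    angles_ok = all(abs(a) <= angle_range for a in (yaw, pitch, roll))
    size_ok = width >= min_size and height >= min_size
    sharp_ok = blur < max_blur
    return angles_ok and size_ok and sharp_ok

ok      = passes_quality_filter(5, -10, 3, 80, 96, 0.2)  # frontal, sharp
profile = passes_quality_filter(45, 0, 0, 80, 96, 0.2)   # yaw too large
tiny    = passes_quality_filter(0, 0, 0, 30, 30, 0.2)    # under 50 pixels
```

Only images passing this gate proceed to the matching step against the image queue; the rest are deleted.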
And/or, the obtained multiple object images are filtered based on the face angles in the object images, where the face angle is used to represent the deflection of the face in the object image. The deflection angle is measured relative to a standard frontal face, i.e., a face whose angles in the horizontal, vertical and in-plane directions are all 0; taking such a face as the origin, the deflection angle of a detected face can be calculated.
And/or performing a filtering operation on a plurality of frames of object images obtained from the video stream. The aim of selecting frames from the video stream based on the object images can be achieved by performing filtering on a plurality of frames of object images in the video stream, and the object images in the video frames obtained by selecting the frames all meet the first preset condition.
In one or more optional embodiments, the face trajectory further comprises a timestamp of the corresponding object image, the timestamp corresponding to the time at which the object image starts to undergo the filtering operation.
Filtering the object images in the face trajectory based on the distance from the three-dimensional angle vector to the origin includes:
obtaining, based on the distance from the three-dimensional vector to the origin, the object image with the minimum corresponding distance among all object images in the face trajectory within a first set time length, and storing that object image.
By filtering the object images within each set time length, the best-quality object image in the face trajectory within that time length is obtained, which yields better-quality object images and speeds up processing. A new face trajectory can further be established on the basis of the best-quality object images obtained from multiple set time lengths, and the best-quality object image among all object images within those time lengths is then obtained based on the new face trajectory and quality filtering.
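The per-window best-frame selection above can be sketched as follows. This is an illustrative Python sketch assuming the three-dimensional vector is (yaw, pitch, roll), so the frame closest to the origin is the most frontal face; the data shape and names are assumptions:

```python
def best_frame(track, window_start, window_len):
    """Among frames whose timestamp falls in
    [window_start, window_start + window_len), return the one whose
    (yaw, pitch, roll) vector is closest to the origin, i.e. the most
    frontal face. `track` is a list of (timestamp, (yaw, pitch, roll))."""
    in_window = [f for f in track
                 if window_start <= f[0] < window_start + window_len]
    if not in_window:
        return None
    dist = lambda f: sum(a * a for a in f[1]) ** 0.5  # Euclidean norm
    return min(in_window, key=dist)

track = [(0, (30.0, 5.0, 2.0)),   # strongly turned face
         (3, (2.0, 1.0, 0.0)),    # nearly frontal
         (8, (10.0, 0.0, 0.0))]   # outside the 5-second window
best = best_frame(track, window_start=0, window_len=5)
```

Repeating this per window and collecting the winners yields the reduced trajectory described above, on which the final best frame can be chosen.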
Fig. 2 shows a flow chart of an image processing method according to an embodiment of the present disclosure. The image processing method can be applied to a server, such as a cloud server. As shown in fig. 2, the method includes:
in step S21, at least one second object image detected in the first geographic area and sent by the front-end device is received, where the at least one second object image is obtained by the front-end device performing de-duplication processing on the detected multiple first object images;
in step S22, an identification process is performed on the at least one second object image to obtain an identification result of the first geographic area.
According to the image processing method, at least one second object image detected in the first geographic area and sent by the front-end device is received, and identification processing is performed on the at least one second object image to obtain an identification result of the first geographic area, so that object images in the first geographic area are identified and the accuracy of image processing is improved.
For example, the server may comprise a cloud server, which may provide docking capabilities for third-party platforms. It may include a face-based membership system, which can be used for member identification, management, operation, precise marketing, and the like. It can perform regional (or global) passenger flow statistics; specifically, the number of people in each region can be used for hot-spot region arrangement, optimization analysis of customer movement paths, and the like. Based on the face recognition capability and the specific scene, scene-specific services can be connected.
In some possible implementations, the server may be communicatively connected to one or more front-end devices, which may belong to one or more target sites, or respectively to one or more first geographic regions of the target sites. The server may receive at least one second object image detected in the first geographic area, which is sent by the front-end device, where the at least one second object image is obtained by the front-end device performing de-duplication processing on the detected multiple first object images. In this way, according to the first geographic area preset by the user, the second object images in the first geographic area can be acquired.
In some possible implementations, the server may perform an identification process on at least one second object image in step S22 to obtain an identification result of the first geographic area. Wherein, step S22 includes:
performing de-duplication processing on the at least one second object image, and determining at least one fourth object image in the at least one second object image;
and performing identification processing on the at least one fourth object image to obtain an identification result of the first geographic area.
For example, the server may perform the deduplication processing again on the at least one second object image, and determine at least one fourth object image after the deduplication processing is performed on the at least one second object image.
Wherein, the step of performing de-duplication processing on the at least one second object image and determining at least one fourth object image in the at least one second object image comprises:
acquiring feature information of a fifth object image in the at least one second object image;
searching whether reference image information matched with the feature information of the fifth object image exists in a database;
determining the fifth object image as a fourth object image in a case where reference image information matching the feature information of the fifth object image does not exist in the database.
In this disclosure, a database may be disposed in the server, and one or more pieces of reference image information are stored in the database, where the reference image information may include the reference image and/or feature information of the reference image, or further include other information, and optionally, the reference image information may be associated with information such as personal identity information, historical visiting information, historical purchasing records, purchasing preferences, and the like, which is not limited in this disclosure.
In some possible implementations, for a fifth object image in the at least one second object image, feature extraction processing may be performed on the fifth object image to obtain feature information of the fifth object image, where the feature information includes facial feature information of the fifth object image, or facial feature information and human body feature information of the fifth object image. That is, only the face of the object in the fifth object image may be subjected to facial feature extraction processing to obtain facial feature information; or performing facial feature extraction processing and human body feature extraction processing on the face and the human body of the object in the fifth object image to obtain facial feature information and human body feature information. The present disclosure does not limit the specific manner of feature extraction. In this way, the accuracy of feature extraction can be improved.
In some possible implementations, after determining the feature information of the fifth object image, a search may be performed in a database to find whether there is reference image information matching the feature information of the fifth object image. For example, the similarity between the feature information corresponding to the reference image information in the database and the feature information of the fifth object image may be respectively determined, and the reference image information having the similarity greater than or equal to the similarity threshold may be taken as the reference image information matching the feature information of the fifth object image.
In some possible implementations, if there is no reference image information matching the feature information of the fifth object image in the database, it may be considered that the object of the fifth object image is not an existing object in the database, and is not a duplicate object, and subsequent processing needs to be performed on the fifth object image. In this case, the fifth object image may be determined as a fourth object image among the at least one second object image.
In some possible implementations, the performing the de-duplication process on the at least one second object image, and the determining at least one fourth object image of the at least one second object image further includes:
searching whether reference image information exists in a second cache information queue of the first geographic area or not under the condition that the reference image information matched with the feature information of the fifth object image exists in the database;
and when the reference image information does not exist in the second cache information queue, determining the fifth object image as a fourth object image.
In some possible implementations, a second cache information queue may be disposed in the server, and one or more pieces of reference image information may be cached in the second cache information queue, where the reference image information may include a reference object image (e.g., a face image) and/or feature information of a reference object (e.g., facial feature information). The reference image information in the second cache information queue may be stored for a preset time period, for example, 5 minutes. That is, multiple appearances of the same object within the preset time period may be regarded as a single visit, so that repeated detections of the same object within the preset time period are removed in the subsequent processing.
In some possible implementations, if reference image information matching the feature information of the fifth object image exists in the database, the object of the fifth object image may be considered to be an object already in the database. In this case, it may be searched whether that reference image information exists in the second cache information queue of the first geographic area. If the reference image information does not exist in the second cache information queue, the object of the fifth object image is not considered to be a duplicate object, and subsequent processing needs to be performed on the fifth object image; the fifth object image may then be determined as a fourth object image among the at least one second object image.
In some possible implementations, when the reference image information exists in the second cache information queue, the fifth object image is not included in the at least one fourth object image. That is, if the reference image information exists in the second cache information queue, the object of the fifth object image may be considered to be a duplicate object, and the fifth object image needs to be subjected to de-duplication processing, so that it is not included in the at least one fourth object image.
In some possible implementations, the method further includes:
and storing the reference image information into the second cache information queue in the case that the reference image information does not exist in the second cache information queue. For example, if the matched reference image information does not exist in the second cache information queue, the object in the fifth object image may be considered to be an object already in the database, but not a duplicate object. In this case, the reference image information matched with the feature information of the fifth object image may be stored in the second cache information queue, so that it can be accurately identified when similarity matching is subsequently performed on other newly received object images.
In some possible implementations, the method further includes:
and under the condition that the reference image information matched with the feature information of the fifth object image does not exist in the database, storing the feature information of the fifth object image and/or the fifth object image into the database and the second cache information queue respectively.
For example, if no reference image information matching the feature information of the fifth object image exists in the database, the object of the fifth object image may be considered to be neither an existing object in the database nor a duplicate object. In this case, the feature information of the fifth object image and/or the fifth object image may be stored in the database and the second cache information queue, respectively, so that subsequent similarity matching against other newly received object images can be performed accurately.
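The server-side two-stage check described in the preceding paragraphs can be sketched as follows. This is an illustrative Python sketch with assumed names and an injected `matches` comparison callback; a real system would store embeddings and images rather than strings:

```python
def server_dedup(feature, database, second_cache, matches):
    """Two-stage server-side de-duplication sketch:
    1) no match in the database           -> new object, keep it;
    2) match in the database, but its reference info is absent from
       the second cache queue             -> known object, first visit
       in this time window, keep it and cache the reference;
    3) match present in both              -> duplicate, drop it.
    matches(feature, ref) is an illustrative comparison callback.
    """
    ref = next((r for r in database if matches(feature, r)), None)
    if ref is None:
        database.append(feature)   # also persist the new object
        return "new"               # becomes a fourth object image
    if ref not in second_cache:
        second_cache.append(ref)
        return "revisit"           # known object, still forwarded
    return "duplicate"             # removed by de-duplication

db = ["alice", "bob"]
cacheq = ["bob"]
eq = lambda f, r: f == r
r1 = server_dedup("carol", db, cacheq, eq)  # not in database
r2 = server_dedup("alice", db, cacheq, eq)  # in db, not yet cached
r3 = server_dedup("alice", db, cacheq, eq)  # now cached: duplicate
```

Cases "new" and "revisit" both yield a fourth object image for identification; only "duplicate" is filtered out.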
In some possible implementations, multiple databases may be provided. Different databases may correspond to different identity categories, such as one or any combination of a black list, a white list, a member, a general customer, or other different identity categories, which is not limited in this disclosure.
In some possible implementations, the plurality of databases include a member customer database (member database), a staff member database (white list database), an exception customer database (black list database), and a general customer database.
Wherein the step of searching whether reference image information matched with the feature information of the fifth object image exists in a database comprises the steps of:
and sequentially searching whether reference image information matched with the characteristic information of the fifth object image exists in the member customer database, the staff database, the abnormal customer database and the common customer database.
For example, when the plurality of databases include a member customer database, a staff member database, an abnormal customer database, and a general customer database, the databases may be sequentially searched to determine whether reference image information matching the feature information of the fifth object image exists in each database.
In some possible implementations, a search may be first conducted in the member customer database; if the matching result (reference image information) is searched in the member customer database, it indicates that the identity category of the present visitor (target object) is the member customer identity category. At this time, the visit record may be associated with the matched member, the visit information of the member at the visit, such as the visit time, the collected camera information, and the like, may be recorded, and the visit record of the member is incremented, and the search of the visit event is ended. Alternatively, the visit event of the member may be pushed to the client APP, and if the matching result is not searched in the member customer database, the search is continued in the next face database (staff database).
In some possible implementations, a search may be conducted in a staff database; if the matching result (reference image information) is searched in the staff member database, the identity category of the visiting person (target object) is represented as a staff identity category (store clerk identity category). At this point, the search for this visit event may end. If no matching result is searched in the staff database, the search is continued in the next face database (abnormal customer database).
In some possible implementations, a search may be conducted in the anomalous customer database; if the matching result (reference image information) is searched in the abnormal customer database, the identity category of the person (target object) who visits this time is represented as an abnormal customer identity category (blacklist identity category). At this point, the search for this visit event may end. Optionally, visit information of the current visit of the blacklist person, such as the visit time, the collected camera information, and the like, may be recorded, and the number of visits of the blacklist person is increased by one. Optionally, alarm information of visiting blacklist personnel can be pushed to the client APP. If no matching result is searched in the abnormal customer database, the search is continued in the next face database (the general customer database).
In some possible implementations, the search may be conducted in the general customer database (the store's dynamic customer base); if a matching result is found in the general customer database, the identity category of the visiting person (target object) is the general customer identity category. At this point, the search for this visit event may end. Optionally, the visit information of the general customer for this visit, such as the visit time and the collected camera information, may also be recorded, and the visit record of the general customer is incremented. If no matching result is found in the general customer database, it indicates that a new customer has come to the store; the image and/or feature information of the customer is added to the store's general customer database, and the search ends.
By this method, the face photos in the various databases can be compared in sequence, improving comparison efficiency and accuracy.
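The sequential search over the four databases can be sketched as follows. This is an illustrative Python sketch with an assumed data layout (a dict mapping category to a list of stored features) and an injected comparison callback; none of the names come from the disclosure:

```python
def classify_visitor(feature, databases, matches):
    """Search the databases in the fixed order from the text
    (member -> staff -> blacklist -> general). Return the matching
    category, or register a new general customer when nothing matches.
    """
    order = ["member", "staff", "blacklist", "general"]
    for category in order:
        if any(matches(feature, ref) for ref in databases[category]):
            return category
    # No match anywhere: the store has a new customer.
    databases["general"].append(feature)
    return "new_general"

dbs = {"member": ["m1"], "staff": ["s1"], "blacklist": [], "general": []}
eq = lambda a, b: a == b
c1 = classify_visitor("m1", dbs, eq)  # found in the member database
c2 = classify_visitor("x9", dbs, eq)  # unseen: added as new customer
c3 = classify_visitor("x9", dbs, eq)  # second visit: now a general customer
```

Per-category side effects (incrementing the member's visit record, pushing blacklist alarms to the client APP) would hang off the returned category.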
In some possible implementations, the step of performing recognition processing on the at least one fourth object image to obtain the recognition result of the first geographic area includes:
determining the identity class of the target object in the fourth object image as a first identity class corresponding to the reference image information if the fourth object image has matching reference image information;
and determining the identity class of the target object in the fourth object image as a second identity class under the condition that the fourth object image does not have the matched reference image information.
For example, the server may perform the identification process on at least one fourth object image after performing the deduplication process on the at least one second object image to obtain at least one fourth object image after the deduplication process is performed on the at least one second object image.
In some possible implementations, if there is matching reference image information in the fourth object image, that is, there is reference image information in the database that matches the feature information of the fourth object image, the identity class of the target object in the fourth object image may be determined as the first identity class corresponding to the matching reference image information. For example, when the first identity category corresponding to the matched reference image information is the member customer identity category, the identity category of the target object in the fourth object image may be determined as the member customer identity category.
In some possible implementations, if the fourth object image does not have matching reference image information, that is, the database contains no reference image information matching the feature information of the fourth object image, the target object in the fourth object image is a newly added object. An identity category may be assigned to the target object, and the identity category of the target object in the fourth object image is determined as the second identity category. For example, the identity category of the target object in the fourth object image may be determined as the common customer identity category, but this is not limited by the embodiments of the present disclosure.
In some possible implementations, the second identity category includes a general customer identity category, and the first identity category includes a non-general customer identity category, the non-general customer identity category including one or any combination of a member customer identity category, a staff identity category, an abnormal customer identity category.
In some possible implementations, the method further includes:
determining a first population statistics result for the first geographic area over a target time period according to the identity category of the target object,
wherein the corresponding time of the image of the target object is within the target time period, and the first population statistics result includes at least one of the number of visitors and the number of visits in the at least one geographic area within the target time period.
The corresponding time of the image of the target object is within the target time period; for example, the acquisition time of the image of the target object is within the target time period, or the time when the electronic device acquires the image of the target object or its feature information is within the target time period, and so on. The first population statistics result includes at least one of the number of visitors and the number of visits within the target time period in the at least one geographic area, and one or more shooting devices are arranged in each target place.
For example, according to a user's settings, group statistics may be performed on at least one target place (e.g., one store, multiple stores, and/or all stores of a supermarket chain), a shooting area of at least one camera (e.g., the shooting area of one camera or the shooting areas of multiple cameras), a geographic area covered by one or more cameras (e.g., the entrance area or the fresh-produce area of a store), and/or multiple geographic areas, so as to form group statistical results (total passenger flow statistics and regional passenger flow statistics). The group statistical results include the number of visitors, the number of visits, the age distribution, the gender distribution, and the like of the visiting persons within a preset target time period. The target time period may be a predetermined time period, such as a quarter, a month, a week, a day, or an hour. The present disclosure does not limit the specific value of the target time period.
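A rough sketch of how such regional statistics might be aggregated (an illustration only; field names such as `region` and `person_id` are assumptions, not from the disclosure):

```python
from collections import defaultdict

def group_statistics(records, period_start, period_end):
    """Count distinct visitors per geographic area within a target time period.

    Each record is a dict with 'region', 'person_id', and 'time' keys.
    """
    people_by_region = defaultdict(set)
    for r in records:
        if period_start <= r["time"] <= period_end:
            people_by_region[r["region"]].add(r["person_id"])
    # Number of visitors per region = distinct persons seen in the period
    return {region: len(people) for region, people in people_by_region.items()}
```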
In some possible implementations, the first population statistics result within the target time period may be determined according to the identity category of the target object. The first population statistics result includes at least one of the number of visitors and the number of visits of at least one target place within the target time period. For example, the first population statistics result includes the number of visitors on the current day, and the like.
In some possible implementations, the step of determining a first population statistics result within a target time period according to the identity category of the target object includes:
determining, in a case that the target object has matching reference image information, a first population statistics result within the target time period according to the identity category of the target object and at least one of the historical matching data of the matching reference image information and the corresponding time of the image of the target object.
For example, the history matching data of the reference image information matched with the target object includes the appearance time (e.g., the latest appearance time) of the person corresponding to the reference image information.
In some possible implementations, if reference image information matching the target object exists in the multiple databases, the historical matching data of the matching reference image information may be examined, and the first population statistics result is determined according to the historical matching data, the corresponding time of the image of the target object, and the identity category and/or identity of the target object.
In a case that reference image information matching the target object exists in the multiple databases, determining the first population statistics result within the target time period according to the identity category of the target object and at least one of the historical matching data of the matching reference image information and the corresponding time of the image of the target object includes:
determining, in a case that the interval between the latest appearance time of the person corresponding to the reference image information and the corresponding time of the image of the target object exceeds a preset interval, the first population statistics result within the target time period according to the identity category of the target object; and/or
prohibiting, in a case that the latest appearance time of the person corresponding to the reference image information is within the target time period and the interval between the latest appearance time and the corresponding time of the image of the target object does not exceed the preset interval, the current visit of the target object from being counted into the number of visits within the target time period.
For example, the preset interval may be a statistical time interval preset by the user, such as 5 minutes. If the interval between the latest appearance time of the person corresponding to the reference image information and the corresponding time of the image of the target object exceeds the preset interval, whether to update the first population statistics result may be determined in a case that the identity category of the target object belongs to the identity categories included in the statistics.
In some possible implementations, for the number of visitors within the target time period: if the latest appearance time of the person corresponding to the reference image information is not within the target time period, this is the person's first visit within the target time period, so the number of visitors within the target time period may be updated by incrementing it by 1. If the latest appearance time of the person is within the target time period, this is not the person's first visit within the target time period, so the number of visitors may be left unchanged.
In some possible implementations, for the number of visits within the target time period: if the latest appearance time of the person corresponding to the reference image information precedes the corresponding time of the image by more than the preset interval, or is not within the target time period, this is the person's first visit within the preset interval, so the number of visits within the target time period may be updated by incrementing it by 1. If the latest appearance time of the person is within the preset interval, this is not the person's first visit within the preset interval, so the number of visits may be left unchanged (that is, the current visit of the target object is prohibited from being counted into the number of visits within the target time period).
In some possible implementations, the number of visitors on the current day may be the number of target objects matched on that day against the reference image information of the multiple databases, plus the number of persons newly added to the common customer database. The number of visits on the current day may be the sum of all visits recorded that day, where multiple captures of the same person within 5 minutes (the preset interval) are counted as one visit. The number of visitors within a custom time span such as the current week or the current month may be accumulated directly over each day of that week or month; likewise, the number of visits within such a time span may be the accumulation of all visit records of each day of the week or month.
In some possible implementations, in a case that the identity category of the target object is the staff identity category, the current visit of the target object is prohibited from being counted into the first population statistics result within the target time period; and/or
in a case that the identity category of the target object is not the staff identity category, the current visit of the target object is counted into the first population statistics result within the target time period.
For example, if the identity category of the target object is the staff identity category, it may be determined that the identity category of the target object does not belong to the identity categories included in the statistics. In this case, the number of visitors and the number of visits within the target time period may be left unchanged; that is, the current visit of the target object is prohibited from being counted into the first population statistics result within the target time period.
Conversely, in a case that the identity category of the target object is not the staff identity category, it may be determined that the identity category of the target object belongs to the identity categories included in the statistics. Whether the number of visitors and the number of visits within the target time period are updated may then be determined according to the steps above; that is, the current visit of the target object is counted into the first population statistics result within the target time period.
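The counting rules above (visitor count, visit count with the preset interval, and the staff exclusion) can be sketched as follows; this is an illustration under assumed names, not the disclosed implementation:

```python
from datetime import datetime, timedelta

PRESET_INTERVAL = timedelta(minutes=5)  # example value used in the text

def update_counts(stats, identity_category, visit_time, last_seen,
                  period_start, period_end):
    """Apply the counting rules for one captured visit.

    `stats` holds 'visitors' (distinct persons in the period) and 'visits'
    (re-captures within the preset interval count as the same visit).
    `last_seen` is the person's latest appearance time, or None if new.
    """
    if identity_category == "staff":
        return stats  # staff visits are excluded from the statistics
    # Number of visitors: first appearance of this person in the target period
    if last_seen is None or not (period_start <= last_seen <= period_end):
        stats["visitors"] += 1
    # Number of visits: not captured within the preset interval
    if last_seen is None or visit_time - last_seen > PRESET_INTERVAL:
        stats["visits"] += 1
    return stats
```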
In some possible implementations, the method further includes:
and sending at least one of the identity type, the identity and the visiting information of the target object to a terminal.
For example, the terminal includes a personal computer (PC), a smartphone, a wearable device, a tablet computer, or the like. The cloud server can push various kinds of information to the terminal over the network. When the cloud server determines the identity category of the target object, at least one of the identity category, the identity, and the visiting information of the target object can be pushed to the terminal. The visiting information includes one or any combination of the visiting time, the visiting place, commodity browsing information, commodity purchasing information, and the like. For example, when the cloud server determines that the target object is a member customer, the visiting information of the member customer can be pushed to a terminal on which the client APP is installed, so that the user can check the visiting situation of the member customer and take corresponding measures, for example, viewing the customer's visit history and shopping preferences, and pushing products and promotional activities to the customer. In this way, convenience and intuitiveness for the user can be improved.
In some possible implementations, the method further includes:
and sending the first group counting result in the target time period to a terminal.
In some possible implementations, the cloud server may push the first population statistics result within the target time period to the terminal APP, so that the user can view the first population statistics result of each store or each area of each store, improving convenience and intuitiveness of use.
According to the image processing method, one or more first geographic areas for counting the passenger flow of a target place can be preset; in-area deduplication is performed on the front-end server, and the deduplicated in-area results are uploaded to the cloud server, where search comparison and inter-area deduplication are then carried out against the four face libraries of the cloud server. Through this layer-by-layer deduplication strategy, more accurate statistics of the number of visitors and the number of visits can be achieved, reducing errors caused by capturing the same person repeatedly in the same first geographic area, and providing customers with customized, more accurate total and regional passenger flow statistics.
According to the image processing method of the embodiments of the present disclosure, statistical data, member/blacklist personnel visit records, and the like can be obtained and displayed to the user, making retail operation scientific and intelligent. Detailed analysis capabilities such as consumer group distribution, activity tracks, visit records, and consumption behaviors or preferences are provided for industries such as shopping malls, supermarkets, high-end coffee shops, and 4S dealerships, enabling precise marketing, intelligent loss prevention, and intelligent operation, providing big-data decision support for merchants' refined operation analysis, better leveraging the advantages of the retail industry, reducing cost, and increasing efficiency.
It can be understood that the above-mentioned method embodiments of the present disclosure can be combined with each other to form combined embodiments without departing from the principle and logic; for brevity, the details are not described again in the present disclosure.
In addition, the present disclosure also provides an image processing apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any one of the image processing methods provided by the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding descriptions in the method section, which are not repeated here for brevity.
Fig. 3 illustrates a block diagram of an image processing apparatus according to an embodiment of the present disclosure, which includes, as illustrated in fig. 3:
the detection module 31 is configured to perform target object detection on the acquired video stream of the first geographic area to obtain a plurality of first object images;
a de-duplication module 32, configured to perform de-duplication processing on the plurality of first object images to obtain at least one second object image in the plurality of first object images;
a first sending module 33, configured to send the at least one second object image to the server.
In some possible implementations, the deduplication module is further configured to:
acquiring feature information of a third object image in the plurality of first object images;
determining whether cache information matched with the feature information of the third object image exists in a first cache information queue corresponding to the first geographic area, wherein the first cache information queue comprises at least one cache information;
and determining a third object image as the second object image when the first cache information queue does not have cache information matched with the feature information of the third object image.
In some possible implementations, in a case that cache information matching the feature information of the third object image exists in the first cache information queue, the third object image is not included in the at least one second object image.
In some possible implementations, the apparatus further includes:
the first storage module is configured to store the feature information of the third object image and/or the third object image into the first cache information queue when cache information matching the feature information of the third object image does not exist in the first cache information queue.
In some possible implementations, the deduplication module is further configured to:
according to the feature information of the third object image and the feature information corresponding to at least one piece of cache information in the first cache information queue, obtaining the similarity between the third object image and the at least one piece of cache information in the first cache information queue;
and determining whether the first cache information queue has cache information matched with the feature information of the third object image or not based on the similarity between the third object image and at least one cache information in the first cache information queue.
In some possible implementations, the cache information includes cache characteristic information;
the de-emphasis module is further configured to:
determining the distance between the feature information of the third object image and each cache feature information in at least one cache feature information in the first cache information queue;
and determining the similarity between the third object image and the at least one cache information according to the distance corresponding to the at least one cache characteristic information.
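A minimal sketch of the distance-based matching performed by the deduplication module (the Euclidean metric, the match threshold, and the bounded queue length are assumptions for illustration, not details from the disclosure):

```python
from math import sqrt

def l2_distance(a, b):
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_new_object(feature, cache_queue, threshold=0.6, max_len=100):
    """Return True if no cached feature is within `threshold` of `feature`.

    A smaller distance means higher similarity; a distance below the
    threshold means matching cache information already exists.
    """
    for cached in cache_queue:
        if l2_distance(feature, cached) < threshold:
            return False  # duplicate: matching cache information exists
    cache_queue.append(feature)  # store the new feature (first storage module)
    if len(cache_queue) > max_len:
        cache_queue.pop(0)       # evict the oldest entry to bound the queue
    return True
```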
In some possible implementations, the first geographic area is one of a plurality of geographic areas included in the target site.
In some possible implementations, the apparatus is applied to a head end device disposed at a target site, the head end device being connected to one or more cameras disposed within the first geographic area.
Fig. 4 illustrates a block diagram of an image processing apparatus according to an embodiment of the present disclosure, which includes, as illustrated in fig. 4:
a receiving module 41, configured to receive at least one second object image detected in a first geographic area and sent by a front-end device, where the at least one second object image is obtained by the front-end device after performing deduplication processing on a plurality of detected first object images;
and the identification module 42 is configured to perform identification processing on the at least one second object image to obtain an identification result of the first geographic area.
In some possible implementations, the identification module is further configured to:
performing de-duplication processing on the at least one second object image, and determining at least one fourth object image in the at least one second object image;
and performing identification processing on the at least one fourth object image to obtain an identification result of the first geographic area.
In some possible implementations, the identification module is further configured to:
acquiring feature information of a fifth object image in the at least one second object image;
searching whether reference image information matched with the feature information of the fifth object image exists in a database;
determining the fifth object image as a fourth object image in a case where reference image information matching the feature information of the fifth object image does not exist in the database.
In some possible implementations, the identification module is further configured to:
searching whether reference image information exists in a second cache information queue of the first geographic area or not under the condition that the reference image information matched with the feature information of the fifth object image exists in the database;
and when the reference image information does not exist in the second cache information queue, determining the fifth object image as a fourth object image.
In some possible implementations, when the reference image information exists in the second cache information queue, the fifth object image is not included in the at least one fourth object image.
In some possible implementations, the apparatus further includes:
and the second storage module is used for storing the reference image information into the second cache information queue under the condition that the reference image information does not exist in the second cache information queue.
In some possible implementations, the apparatus further includes:
and a third storage module, configured to store the feature information of the fifth object image and/or the fifth object image into the database and the second cache information queue, respectively, when there is no reference image information matching the feature information of the fifth object image in the database.
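The interplay of the database search and the second cache information queue (including the second and third storage modules above) might be sketched as follows; the matcher, the threshold, and the id scheme are assumptions for illustration:

```python
from math import sqrt

def l2(a, b):
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_reference(feature, database, threshold=0.6):
    """Return the id of the closest reference below `threshold`, else None."""
    best_id, best_d = None, threshold
    for ref_id, ref_feat in database.items():
        d = l2(feature, ref_feat)
        if d < best_d:
            best_id, best_d = ref_id, d
    return best_id

def keep_as_fourth_image(feature, database, cache_queue):
    """Decide whether a fifth object image is kept as a fourth object image."""
    ref_id = match_reference(feature, database)
    if ref_id is None:
        ref_id = f"obj-{len(database)}"  # hypothetical id for the new object
        database[ref_id] = feature       # store in the database and the queue
        cache_queue.add(ref_id)
        return True
    if ref_id not in cache_queue:
        cache_queue.add(ref_id)          # first sighting in this region: keep it
        return True
    return False  # reference already in the second cache information queue
```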
In some possible implementations, the database includes a member customer database, a staff database, an abnormal customer database, and a common customer database,
wherein the identification module is further configured to:
and sequentially searching whether reference image information matched with the characteristic information of the fifth object image exists in the member customer database, the staff database, the abnormal customer database and the common customer database.
In some possible implementations, the recognition result includes an identity class of the target object,
wherein the identification module is further configured to:
determining the identity class of the target object in the fourth object image as a first identity class corresponding to the matched reference image information if the matched reference image information exists in the fourth object image;
and determining the identity class of the target object in the fourth object image as a second identity class under the condition that the fourth object image does not have the matched reference image information.
In some possible implementations, the second identity category includes a general customer identity category, and the first identity category includes a non-general customer identity category, which includes one or any combination of a member customer identity category, a staff identity category, an abnormal customer identity category.
In some possible implementations, the apparatus further includes:
a first statistics module for determining a first population statistics result of the first geographic area within a target time period according to the identity category of the target object,
wherein the corresponding time of the image of the target object is within the target time period, and the first population statistics result includes at least one of the number of visitors and the number of visits in the at least one geographic area within the target time period.
In some possible implementations, the first statistics module is further configured to:
determine, in a case that the target object has matching reference image information, a first population statistics result within the target time period according to the identity category of the target object and at least one of the historical matching data of the matching reference image information and the corresponding time of the image of the target object.
In some possible implementations, the apparatus further includes:
and the second sending module is used for sending the first group counting result in the target time period to the terminal.
In some possible implementations, the apparatus further includes:
and the third sending module is used for sending at least one of the identity type, the identity and the visiting information of the target object to the terminal.
In some possible implementations, the apparatus is implemented in a server communicatively coupled to one or more head end devices.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured as the above method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 5 illustrates a block diagram of an electronic device 1900 in accordance with an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 5, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (58)

1. An image processing method, characterized in that the method comprises:
performing target object detection on a collected video stream of a first geographic area to obtain a plurality of first object images;
performing deduplication processing on the plurality of first object images to obtain at least one second object image in the plurality of first object images, wherein feature information of the at least one second object image is not matched with cache information in a first cache information queue corresponding to the first geographic area, and the cache information includes historical detection data of the first geographic area;
and sending the at least one second object image to a server.
2. The method of claim 1, wherein the performing de-duplication processing on the plurality of first object images to obtain at least one second object image of the plurality of first object images comprises:
acquiring feature information of a third object image in the plurality of first object images;
determining whether cache information matched with the feature information of the third object image exists in a first cache information queue corresponding to the first geographic area, wherein the first cache information queue comprises at least one cache information;
and determining the third object image as a second object image in a case where no cache information matching the feature information of the third object image exists in the first cache information queue.
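The deduplication flow of claims 1-2 — keep only detections whose features match nothing in the area's first cache information queue, then cache the newcomers (claims 4-5) — can be sketched as follows. All names, the equality-based match test, and the bounded queue length are illustrative assumptions, not specified by the patent:

```python
from collections import deque

def deduplicate(first_object_images, cache_queue, matches):
    """Keep only images whose features match nothing in the cache queue.

    `first_object_images` is a list of (image, feature) pairs; `matches`
    is a predicate deciding whether two features describe the same object.
    """
    second_object_images = []
    for image, feature in first_object_images:
        if any(matches(feature, cached) for cached in cache_queue):
            continue                      # duplicate of a recent detection
        second_object_images.append(image)
        cache_queue.append(feature)       # claims 4-5: cache the new feature
    return second_object_images

# Toy run: integer "features", equality as the match test.
queue = deque(maxlen=100)                 # bounded history per geographic area
out = deduplicate([("img_a", 1), ("img_b", 1), ("img_c", 2)],
                  queue, lambda a, b: a == b)
```

With the toy input above, `img_b` is dropped as a repeat of `img_a`, so only two images would be sent to the server.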
3. The method according to claim 2, wherein in a case where cache information matching the feature information of a third object image exists in the first cache information queue, the third object image is not included in the at least one second object image.
4. The method of claim 2, further comprising:
and storing the characteristic information of the third object image and/or the third object image into the first cache information queue under the condition that cache information matched with the characteristic information of the third object image does not exist in the first cache information queue.
5. The method of claim 3, further comprising:
and storing the characteristic information of the third object image and/or the third object image into the first cache information queue under the condition that cache information matched with the characteristic information of the third object image does not exist in the first cache information queue.
6. The method of claim 2, wherein the determining whether cache information matching the feature information of the third object image exists in the first cache information queue corresponding to the first geographic area comprises:
obtaining a similarity between the third object image and at least one piece of cache information in the first cache information queue according to the feature information of the third object image and the feature information corresponding to the at least one piece of cache information;
and determining, based on the similarity between the third object image and the at least one piece of cache information in the first cache information queue, whether cache information matching the feature information of the third object image exists in the first cache information queue.
7. The method of claim 6, wherein the cache information comprises cached feature information;
the obtaining a similarity between the third object image and the at least one piece of cache information in the first cache information queue according to the feature information of the third object image and the feature information corresponding to the at least one piece of cache information comprises:
determining a distance between the feature information of the third object image and each piece of cached feature information in at least one piece of cached feature information in the first cache information queue;
and determining the similarity between the third object image and the at least one piece of cache information according to the distance corresponding to the at least one piece of cached feature information.
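One way to realize the distance-to-similarity step of claims 6-7 is sketched below. The Euclidean distance, the 1/(1+d) similarity mapping, and the 0.5 matching threshold are hypothetical choices; the claims do not fix a particular metric or formula:

```python
import math

def euclidean(u, v):
    """Distance between two feature vectors of equal length."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def best_similarity(feature, cached_features):
    """Map each distance to a similarity in (0, 1] and keep the best one."""
    distances = [euclidean(feature, c) for c in cached_features]
    return max(1.0 / (1.0 + d) for d in distances)

def has_match(feature, cached_features, threshold=0.5):
    """Claim 6 style decision: matched cache information exists iff the
    best similarity clears the threshold."""
    return best_similarity(feature, cached_features) >= threshold
```

An identical feature yields distance 0 and similarity 1.0, so re-detections of the same object are flagged as matches, while distant features fall below the threshold.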
8. The method of claim 1, wherein the first geographic area is one of a plurality of geographic areas included in a target site.
9. The method according to any one of claims 1-8, wherein the method is applied to a front-end device located at a target site, the front-end device being connected to one or more cameras located within the first geographic area.
10. An image processing method, characterized in that the method comprises:
receiving at least one second object image detected in a first geographic area and sent by a front-end device, wherein the at least one second object image is obtained by the front-end device after de-duplication processing is performed on a plurality of detected first object images, feature information of the at least one second object image is not matched with cache information in a first cache information queue corresponding to the first geographic area in the front-end device, and the cache information includes historical detection data of the first geographic area;
and performing identification processing on the at least one second object image to obtain an identification result of the first geographical area.
11. The method of claim 10, wherein the performing identification processing on the at least one second object image to obtain the identification result of the first geographic area comprises:
performing de-duplication processing on the at least one second object image, and determining at least one fourth object image in the at least one second object image;
and performing identification processing on the at least one fourth object image to obtain an identification result of the first geographic area.
12. The method of claim 11, wherein the performing de-duplication processing on the at least one second object image to determine at least one fourth object image of the at least one second object image comprises:
acquiring feature information of a fifth object image in the at least one second object image;
searching whether reference image information matched with the feature information of the fifth object image exists in a database;
determining the fifth object image as a fourth object image in a case where reference image information matching the feature information of the fifth object image does not exist in the database.
13. The method of claim 12, wherein de-duplicating the at least one second object image to determine at least one fourth object image of the at least one second object image, further comprises:
searching whether the reference image information exists in a second cache information queue of the first geographic area in a case where reference image information matching the feature information of the fifth object image exists in the database;
and determining the fifth object image as a fourth object image in a case where the reference image information does not exist in the second cache information queue.
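The server-side, two-stage deduplication of claims 12-16 can be sketched as follows: an image survives if its features match no reference in the database (it is then registered in both the database and the area's second cache information queue, per claim 16), or if its matched reference information is not yet in that queue (claims 13 and 15). The container types and the `find_ref` helper are illustrative assumptions:

```python
def server_dedup(images, database, second_cache_queue, find_ref):
    """Two-stage deduplication on the server side (claims 12-16).

    `find_ref(feature, database)` returns matching reference-image
    information, or None when the database holds no match.
    """
    fourth_object_images = []
    for image, feature in images:
        ref = find_ref(feature, database)
        if ref is None:
            # Unknown object: keep it and register it (claim 16).
            fourth_object_images.append(image)
            database.append(feature)
            second_cache_queue.add(feature)
        elif ref not in second_cache_queue:
            # Known object, first sighting in this area (claims 13, 15).
            fourth_object_images.append(image)
            second_cache_queue.add(ref)
        # else: already seen in this area -- drop it (claim 14).
    return fourth_object_images

# Toy run: features are ints, the database is a list of known features.
db = [10]
q = set()
kept = server_dedup([("a", 10), ("b", 20), ("c", 10), ("d", 20)],
                    db, q, lambda f, refs: f if f in refs else None)
```

In the toy run, "a" is known but new to this area, "b" is globally new, and "c"/"d" are repeats within the area, so only the first two survive.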
14. The method according to claim 13, wherein the fifth object image is not included in the at least one fourth object image in a case where the reference image information exists in the second cache information queue.
15. The method of claim 13, further comprising:
and storing the reference image information into the second cache information queue under the condition that the reference image information does not exist in the second cache information queue.
16. The method of claim 13, further comprising:
and under the condition that the reference image information matched with the feature information of the fifth object image does not exist in the database, storing the feature information of the fifth object image and/or the fifth object image into the database and the second cache information queue respectively.
17. The method of claim 12, wherein the database includes a member customer database, a staff member database, an abnormal customer database, and a general customer database,
searching whether reference image information matched with the feature information of the fifth object image exists in a database or not, wherein the searching comprises the following steps:
and sequentially searching whether reference image information matching the feature information of the fifth object image exists in the member customer database, the staff database, the abnormal customer database, and the general customer database.
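The fixed search order of claim 17 can be sketched as an ordered scan over the four databases, returning the first match. The dictionary layout, the match predicate, and all entry names are illustrative, not from the patent:

```python
def find_reference(feature, databases, matches):
    """Search the databases in their listed order (claim 17) and return
    (category, reference_info) for the first match, or None."""
    for category, entries in databases.items():
        for ref_info, ref_feature in entries:
            if matches(feature, ref_feature):
                return category, ref_info
    return None

# Ordered as in claim 17; dict order gives the search priority (Python 3.7+).
search_order = {
    "member_customer": [("alice", 1)],
    "staff": [("bob", 2)],
    "abnormal_customer": [],
    "general_customer": [("carol", 1)],  # shadowed by the member entry
}
hit = find_reference(1, search_order, lambda a, b: a == b)
```

Because the member customer database is scanned first, feature `1` resolves to the member entry even though the general customer database also contains it.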
18. The method of claim 12, wherein the recognition result comprises an identity category of the target object,
wherein the performing identification processing on the at least one fourth object image to obtain the identification result of the first geographic area comprises:
determining the identity category of the target object in the fourth object image as a first identity category corresponding to matched reference image information in a case where the matched reference image information exists for the fourth object image;
and determining the identity category of the target object in the fourth object image as a second identity category in a case where no matched reference image information exists for the fourth object image.
19. The method of claim 18, wherein the second identity category comprises a general customer identity category, wherein the first identity category comprises a non-general customer identity category, and wherein the non-general customer identity category comprises one or any combination of a member customer identity category, a staff identity category, and an abnormal customer identity category.
20. The method of claim 18, further comprising:
determining a first group statistics result of the first geographic area within a target time period according to the identity category of the target object,
wherein a corresponding time of the image of the target object is located within the target time period, and the first group statistics result comprises at least one of the number of visitors and the number of visits in at least one geographic area within the target time period.
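The per-period statistics of claims 20-21 — counts restricted to detections whose corresponding time falls in the target time period, broken down by identity category — can be sketched as below. The record layout, the half-open interval, and the visits/unique-visitors split are illustrative assumptions:

```python
def group_statistics(records, start, end):
    """Per-category visit counts and unique-visitor counts for
    detections whose timestamp lies in [start, end)."""
    visits = {}
    visitors = {}
    for person_id, category, ts in records:
        if not (start <= ts < end):
            continue                      # outside the target time period
        visits[category] = visits.get(category, 0) + 1
        visitors.setdefault(category, set()).add(person_id)
    return {cat: {"visits": visits[cat],
                  "unique_visitors": len(visitors[cat])}
            for cat in visits}

# Toy run: timestamps are plain ints for brevity.
records = [("p1", "member", 1), ("p1", "member", 2),
           ("p2", "general", 3), ("p3", "general", 10)]
stats = group_statistics(records, 0, 5)
```

Here `p1` is detected twice inside the period, giving the member category two visits but a single unique visitor, while `p3` falls outside the period and is ignored.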
21. The method of claim 20, wherein the determining a first group statistics result within a target time period according to the identity category of the target object comprises:
and in a case where matched reference image information exists for the target object, determining the first group statistics result within the target time period according to the identity category of the target object and at least one of historical matching data of the matched reference image information and the corresponding time of the image of the target object.
22. The method of claim 20, further comprising:
and sending the first group statistics result within the target time period to a terminal.
23. The method of claim 18, further comprising:
and sending at least one of the identity category, the identity, and the visiting information of the target object to a terminal.
24. The method according to any one of claims 10-23, wherein the method is applied in a server, and the server is communicatively connected to one or more front-end devices.
25. An image processing apparatus, characterized in that the apparatus comprises:
the detection module is used for detecting the target object of the collected video stream of the first geographic area to obtain a plurality of first object images;
the deduplication module is used for performing deduplication processing on the plurality of first object images to obtain at least one second object image in the plurality of first object images, the feature information of the at least one second object image is not matched with the cache information in the first cache information queue corresponding to the first geographic area, and the cache information comprises historical detection data of the first geographic area;
and the first sending module is used for sending the at least one second object image to the server.
26. The apparatus of claim 25, wherein the de-duplication module is further configured to:
acquiring feature information of a third object image in the plurality of first object images;
determining whether cache information matched with the feature information of the third object image exists in a first cache information queue corresponding to the first geographic area, wherein the first cache information queue comprises at least one cache information;
and determining a third object image as the second object image when the first cache information queue does not have cache information matched with the feature information of the third object image.
27. The apparatus according to claim 26, wherein in a case where cache information matching the feature information of a third object image exists in the first cache information queue, the third object image is not included in the at least one second object image.
28. The apparatus of claim 26, further comprising:
the first storage module is configured to store the feature information of the third object image and/or the third object image into the first cache information queue when cache information matching the feature information of the third object image does not exist in the first cache information queue.
29. The apparatus of claim 27, further comprising:
the first storage module is configured to store the feature information of the third object image and/or the third object image into the first cache information queue when cache information matching the feature information of the third object image does not exist in the first cache information queue.
30. The apparatus of claim 26, wherein the de-duplication module is further configured to:
according to the feature information of the third object image and the feature information corresponding to at least one piece of cache information in the first cache information queue, obtaining the similarity between the third object image and the at least one piece of cache information in the first cache information queue;
and determining, based on the similarity between the third object image and the at least one piece of cache information in the first cache information queue, whether cache information matching the feature information of the third object image exists in the first cache information queue.
31. The apparatus of claim 30, wherein the cache information comprises cached feature information;
the deduplication module is further configured to:
determining a distance between the feature information of the third object image and each piece of cached feature information in at least one piece of cached feature information in the first cache information queue;
and determining the similarity between the third object image and the at least one piece of cache information according to the distance corresponding to the at least one piece of cached feature information.
32. The apparatus of claim 25, wherein the first geographic area is one of a plurality of geographic areas included in a target site.
33. The apparatus of any one of claims 25-32, wherein the apparatus is applied to a front-end device located at a target site, the front-end device being connected to one or more cameras located within the first geographic area.
34. An image processing apparatus, characterized in that the apparatus comprises:
a receiving module, configured to receive at least one second object image detected in a first geographic area and sent by a front-end device, where the at least one second object image is obtained by the front-end device after performing deduplication processing on a plurality of detected first object images, feature information of the at least one second object image is not matched with cache information in a first cache information queue corresponding to the first geographic area in the front-end device, and the cache information includes historical detection data of the first geographic area;
and the identification module is used for identifying the at least one second object image to obtain an identification result of the first geographic area.
35. The apparatus of claim 34, wherein the identification module is further configured to:
performing de-duplication processing on the at least one second object image, and determining at least one fourth object image in the at least one second object image;
and performing identification processing on the at least one fourth object image to obtain an identification result of the first geographic area.
36. The apparatus of claim 35, wherein the identification module is further configured to:
acquiring feature information of a fifth object image in the at least one second object image;
searching whether reference image information matched with the feature information of the fifth object image exists in a database;
determining the fifth object image as a fourth object image in a case where reference image information matching the feature information of the fifth object image does not exist in the database.
37. The apparatus of claim 36, wherein the identification module is further configured to:
searching whether the reference image information exists in a second cache information queue of the first geographic area in a case where reference image information matching the feature information of the fifth object image exists in the database;
and determining the fifth object image as a fourth object image in a case where the reference image information does not exist in the second cache information queue.
38. The apparatus according to claim 37, wherein the fifth object image is not included in the at least one fourth object image in a case where the reference image information exists in the second cache information queue.
39. The apparatus of claim 37, further comprising:
and the second storage module is used for storing the reference image information into the second cache information queue under the condition that the reference image information does not exist in the second cache information queue.
40. The apparatus of claim 37, further comprising:
and a third storage module, configured to store the feature information of the fifth object image and/or the fifth object image into the database and the second cache information queue, respectively, when there is no reference image information matching the feature information of the fifth object image in the database.
41. The apparatus of claim 36, wherein the database comprises a member customer database, a staff member database, an abnormal customer database, and a general customer database,
wherein the identification module is further configured to:
and sequentially searching whether reference image information matching the feature information of the fifth object image exists in the member customer database, the staff database, the abnormal customer database, and the general customer database.
42. The apparatus of claim 36, wherein the recognition result comprises an identity category of a target object,
wherein the identification module is further configured to:
determining the identity category of the target object in the fourth object image as a first identity category corresponding to matched reference image information in a case where the matched reference image information exists for the fourth object image;
and determining the identity category of the target object in the fourth object image as a second identity category in a case where no matched reference image information exists for the fourth object image.
43. The apparatus of claim 42, wherein the second identity category comprises a general customer identity category, wherein the first identity category comprises a non-general customer identity category, and wherein the non-general customer identity category comprises one or any combination of a member customer identity category, a staff identity category, and an abnormal customer identity category.
44. The apparatus of claim 42, further comprising:
a first statistics module for determining a first group statistics result of the first geographic area within a target time period according to the identity category of the target object,
wherein a corresponding time of the image of the target object is located within the target time period, and the first group statistics result comprises at least one of the number of visitors and the number of visits in at least one geographic area within the target time period.
45. The apparatus of claim 44, wherein the first statistics module is further configured to:
and in a case where matched reference image information exists for the target object, determining the first group statistics result within the target time period according to the identity category of the target object and at least one of historical matching data of the matched reference image information and the corresponding time of the image of the target object.
46. The apparatus of claim 44, further comprising:
and the second sending module is used for sending the first group statistics result within the target time period to the terminal.
47. The apparatus of claim 42, further comprising:
and the third sending module is used for sending at least one of the identity category, the identity, and the visiting information of the target object to the terminal.
48. The apparatus according to any one of claims 34-47, wherein the apparatus is applied in a server, and the server is communicatively connected to one or more front-end devices.
49. A passenger flow analysis system, the system comprising:
a camera arranged at a target place and used for acquiring a video stream of a first geographic area;
a server connected to the camera and used for receiving the video stream from the at least one camera, performing target detection on at least one frame of video image in the video stream to obtain detection data of at least one target object in the at least one frame of video image, and performing target identification on the at least one target object based on the detection data to obtain an identification result of the first geographic area, wherein the identification result comprises an identity category of a target object, the detection data comprises at least one second object image obtained by performing deduplication processing on the at least one frame of video image in the video stream, feature information of the at least one second object image is not matched with cache information in a first cache information queue corresponding to the first geographic area, and the cache information comprises historical detection data of the first geographic area;
and the terminal equipment is connected to the server and used for receiving the identification result sent by the server and displaying the identification result.
50. The system of claim 49, wherein the server comprises a front-end server and a cloud server,
the front-end server is arranged at the target place, is connected to the at least one camera, and is used for receiving the video stream sent by the at least one camera and carrying out target detection on at least one frame of video image in the video stream to obtain detection data of at least one target object in the at least one frame of video image;
the cloud server is connected to the front-end server and used for receiving detection data of the front-end server and carrying out target identification on the at least one target object based on the detection data to obtain an identification result of the first geographic area.
51. The system of claim 49,
the server is further used for obtaining a group statistical result in a target time period based on the identification result, and sending the group statistical result to the terminal equipment, wherein the group statistical result comprises a first group statistical result in the target time period;
the terminal equipment is also used for displaying the group statistical result.
52. The system of claim 50,
the server is further used for obtaining a group statistical result in a target time period based on the identification result, and sending the group statistical result to the terminal equipment, wherein the group statistical result comprises a first group statistical result in the target time period;
the terminal equipment is also used for displaying the group statistical result.
53. The system of claim 52, wherein the terminal device is connected to the cloud server for receiving and displaying the group statistics sent by the cloud server within the target time period,
the terminal equipment is further used for receiving and displaying the target identification result sent by the cloud server.
54. The system according to any one of claims 49 to 53, wherein the terminal device comprises a personal computer, a smartphone, a wearable device or a tablet computer.
55. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of any one of claims 1 to 9.
56. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 9.
57. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of any one of claims 10 to 24.
58. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 10 to 24.
CN201810639814.8A 2018-06-20 2018-06-20 Image processing method and device, electronic equipment and storage medium Active CN109145127B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810639814.8A CN109145127B (en) 2018-06-20 2018-06-20 Image processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810639814.8A CN109145127B (en) 2018-06-20 2018-06-20 Image processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109145127A CN109145127A (en) 2019-01-04
CN109145127B true CN109145127B (en) 2021-04-27

Family

ID=64802147

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810639814.8A Active CN109145127B (en) 2018-06-20 2018-06-20 Image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109145127B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112149457A (en) * 2019-06-27 2020-12-29 西安光启未来技术研究院 People flow statistical method, device, server and computer readable storage medium
CN110942036B (en) * 2019-11-29 2023-04-18 深圳市商汤科技有限公司 Person identification method and device, electronic equipment and storage medium
CN113128293A (en) * 2019-12-31 2021-07-16 杭州海康威视数字技术股份有限公司 Image processing method and device, electronic equipment and storage medium
CN111400533B (en) * 2020-03-02 2023-10-17 北京三快在线科技有限公司 Image screening method, device, electronic equipment and storage medium
CN113780042A (en) * 2020-11-09 2021-12-10 北京沃东天骏信息技术有限公司 Picture set operation method, picture set labeling method and device

Citations (2)

Publication number Priority date Publication date Assignee Title
CN104112209A (en) * 2013-04-16 2014-10-22 苏州和积信息科技有限公司 Audience statistical method of display terminal, and audience statistical system of display terminal
CN106897698A (en) * 2017-02-24 2017-06-27 常州常工电子科技股份有限公司 Classroom number detection method and system based on machine vision Yu binocular coordination technique

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
JP3917506B2 (en) * 2002-11-28 2007-05-23 株式会社日立製作所 Video signal recording and transmitting apparatus, monitoring system, and monitoring apparatus
US8675059B2 (en) * 2010-07-29 2014-03-18 Careview Communications, Inc. System and method for using a video monitoring system to prevent and manage decubitus ulcers in patients
US9432631B2 (en) * 2011-04-04 2016-08-30 Polaris Wireless, Inc. Surveillance system
CN105979232B (en) * 2016-07-12 2019-05-07 湖北誉恒科技有限公司 Video monitoring system for closed school
CN105959653A (en) * 2016-07-12 2016-09-21 湖北誉恒科技有限公司 Video monitoring system for pedestrian crosswalk
CN107393017A (en) * 2017-08-11 2017-11-24 北京铂石空间科技有限公司 Image processing method, device, electronic equipment and storage medium
CN108965826B (en) * 2018-08-21 2021-01-12 北京旷视科技有限公司 Monitoring method, monitoring device, processing equipment and storage medium

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN104112209A (en) * 2013-04-16 2014-10-22 苏州和积信息科技有限公司 Audience statistical method of display terminal, and audience statistical system of display terminal
CN106897698A (en) * 2017-02-24 2017-06-27 常州常工电子科技股份有限公司 Classroom number detection method and system based on machine vision Yu binocular coordination technique

Also Published As

Publication number Publication date
CN109145127A (en) 2019-01-04

Similar Documents

Publication Publication Date Title
CN109145127B (en) Image processing method and device, electronic equipment and storage medium
CN109145707B (en) Image processing method and device, electronic equipment and storage medium
CN106776619B (en) Method and device for determining attribute information of target object
US10701321B2 (en) System and method for distributed video analysis
US9536153B2 (en) Methods and systems for goods received gesture recognition
CN110799972A (en) Dynamic human face image storage method and device, electronic equipment, medium and program
US10271017B2 (en) System and method for generating an activity summary of a person
CN108038937B (en) Method and device for showing welcome information, terminal equipment and storage medium
US9576371B2 (en) Busyness defection and notification method and system
CN108229456B (en) Target tracking method and device, electronic equipment and computer storage medium
CN111383039B (en) Information pushing method, device and information display system
JP6185186B2 (en) Method and system for providing code scan result information
US20180357492A1 (en) Visual monitoring of queues using auxillary devices
CN107871111B (en) Behavior analysis method and system
CN111160243A (en) Passenger flow volume statistical method and related product
CN105659279B (en) Information processing apparatus, information processing method, and computer program
CN110121108B (en) Video value evaluation method and device
KR102260123B1 (en) Apparatus for Sensing Event on Region of Interest and Driving Method Thereof
US20160189170A1 (en) Recognizing Customers Requiring Assistance
KR20170006356A (en) Method for customer analysis based on two-dimension video and apparatus for the same
KR101848367B1 (en) metadata-based video surveillance method using suspective video classification based on motion vector and DCT coefficients
JP7452622B2 (en) Presentation control device, system, method and program
CN112837369A (en) Data processing method and device, electronic equipment and computer readable storage medium
US20110317010A1 (en) System and method for tracking a person in a pre-defined area
CN110956644A (en) Motion trail determination method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant