CN110825893A - Target searching method, device, system and storage medium - Google Patents

Target searching method, device, system and storage medium

Info

Publication number
CN110825893A
CN110825893A (application CN201910881015.6A / CN201910881015A)
Authority
CN
China
Prior art keywords
snapshot
image
target
library
bottom library
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910881015.6A
Other languages
Chinese (zh)
Inventor
林江煌
曹继邦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kuangshi Technology Co Ltd
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Kuangshi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kuangshi Technology Co Ltd filed Critical Beijing Kuangshi Technology Co Ltd
Priority to CN201910881015.6A priority Critical patent/CN110825893A/en
Publication of CN110825893A publication Critical patent/CN110825893A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50: Information retrieval of still image data
    • G06F16/51: Indexing; Data structures therefor; Storage structures
    • G06F16/55: Clustering; Classification
    • G06F16/58: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866: Retrieval using information manually generated, e.g. tags, keywords, comments, manually generated location and time information

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Library & Information Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a target searching method, device, system and storage medium. The method comprises: acquiring, from a snapshot image storage device, snapshot image information within a preset snapshot time and snapshot location range as a snapshot image information set; acquiring a base library with a specific base library tag; performing collision comparison or image clustering on the snapshot image information set and the base library to obtain one or more candidate target image sets; and selecting a target image set from the candidate target image sets, the person corresponding to the target image set being the target. With the method, device, system and storage medium, snapshot images can be pulled directly from storage without being downloaded locally and re-uploaded, which reduces operation complexity and shortens operation time; the base library to be retrieved can be determined by user selection or automatically according to the target type; and after the target image set is obtained, the target can be quickly placed under surveillance control, further improving target searching efficiency.

Description

Target searching method, device, system and storage medium
Technical Field
The invention relates to the technical field of image processing, and in particular to a target searching method, device, system and storage medium.
Background
With the wide application of image processing technology in the security field, it has become increasingly common for public security departments to perform security control based on image processing technology in order to find criminal suspects. For example, when a robbery occurs at a certain time and place, the public security department wants to determine the image or identity of the suspect from surveillance footage of the crime scene, based on the time and location of the crime, so that action can be taken as soon as possible and case-solving efficiency improved. In general, persons with prior criminal records are more likely to commit a crime, so suspects can be identified from a large number of snapshot images by comparing them with a corresponding base library (for example, for a theft case, the snapshot images can be compared with a base library of persons with prior theft records). In this process, the snapshot images within the relevant time window and camera range are first located according to the time and place of the case, as the persons possibly related to the case, and these snapshot images are exported, packaged and compressed; multi-person retrieval or base library collision is then carried out. Multi-person retrieval means searching the base library for the TOP N (for example, TOP 100) base library images most similar to the persons in the snapshot images and returning them to the user. During multi-person retrieval, the user must upload the packaged and compressed snapshot images and select the base library data to be retrieved; the system performs an n:m comparison between the snapshot images and the base library data, and the retrieved base library persons are treated as the range of suspects. Base library collision means finding the persons common to two libraries. During a base library collision, the user must create a new base library, batch-import the packaged and compressed snapshot images into it, and select the base library data to be searched; the system performs an n:m comparison between the new base library and the selected base library data, and the persons present in both are treated as the range of suspects. At present, both multi-person retrieval and base library collision require the user to export, package, compress and upload snapshot images, and also to select base library data, create new base libraries and perform other operations. The retrieval process is therefore cumbersome, the analysis workflow is complex, the usage threshold is high, the business logic is difficult to understand, and the rate of interrupted or failed operations is high. In addition, exporting and uploading snapshot images and creating new base libraries all involve long waiting times, are time-consuming, and are prone to upload interruptions and failures; moreover, building a library that occupies static-library resources may require approval processes and the like, which further prolongs the time.
Therefore, target searching methods in the prior art suffer from cumbersome operation, inconvenient use, long waiting times, frequent interruptions, a high search failure rate, and other problems.
Disclosure of Invention
The present invention has been made in view of the above problems. The invention provides a target searching method, a target searching device, a target searching system and a computer storage medium.
According to a first aspect of the present invention, there is provided a target finding method, including:
in response to a new search task request from a user, establishing a communication connection with a snapshot image storage device, and acquiring, from the snapshot image storage device, snapshot image information within a preset snapshot time and snapshot location range as a snapshot image information set; the new search task request includes the preset snapshot time and snapshot location;
acquiring a base library with a specific base library tag, the base library including a plurality of pieces of base library image information; the specific base library tag is determined according to a base library tag selected by the user, or according to the type of target to be searched for by the user, the new search task request including the base library tag or the target type;
performing collision comparison or image clustering on the snapshot image information set and the base library to obtain one or more candidate target image sets; each candidate target image set includes a pair of snapshot image information and base library image information matched in the collision comparison, or includes the snapshot image information and base library image information grouped into one set by the image clustering;
selecting a target image set from the candidate target image sets, the person corresponding to the target image set being the target;
wherein the snapshot image information includes a snapshot image and/or a feature value corresponding to the snapshot image, and the base library image information includes a base library image and/or a feature value corresponding to the base library image.
According to a second aspect of the present invention, there is provided a target finding apparatus, comprising:
a snapshot image information acquisition module, configured to respond to a new search task request from a user, establish a communication connection with a snapshot image storage device, and acquire, from the snapshot image storage device, snapshot image information within a preset snapshot time and snapshot location range as a snapshot image information set; the new search task request includes the preset snapshot time and snapshot location;
a base library acquisition module, configured to acquire a base library with a specific base library tag, the base library including a plurality of pieces of base library image information; the specific base library tag is determined according to a base library tag selected by the user, or according to the type of target to be searched for by the user, the new search task request including the base library tag or the target type;
a collision comparison or image clustering module, wherein the collision comparison module is configured to perform collision comparison on the snapshot image information set and the base library to obtain one or more candidate target image sets, each candidate target image set including a pair of snapshot image information and base library image information matched in the collision comparison; and the image clustering module is configured to perform image clustering on the snapshot image information set and the base library to obtain one or more candidate target image sets, each candidate target image set including the snapshot image information and base library image information grouped into one set by the image clustering;
a target snapshot image information selection module, configured to select a target image set from the candidate target image sets, the person corresponding to the target image set being the target;
wherein the snapshot image information includes a snapshot image and/or a feature value corresponding to the snapshot image, and the base library image information includes a base library image and/or a feature value corresponding to the base library image.
According to a third aspect of the present invention, there is provided a target searching system comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the steps of the method of the first aspect are implemented when the computer program is executed by the processor.
According to a fourth aspect of the present invention, there is provided a computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a computer, implements the steps of the method of the first aspect.
According to the target searching method, device, system and computer storage medium, snapshot images can be pulled directly from the snapshot image storage device without the user downloading them locally and re-uploading them, which reduces operation complexity and shortens operation time; the base library to be retrieved can be selected by the user or determined automatically according to the target type, which further reduces operation complexity; after the candidate target image sets are obtained, the target image set can be screened out automatically according to a preset rule or selected with user intervention; in addition, the target can be quickly placed under surveillance control after the target image set is obtained, further improving target searching efficiency.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent by describing in more detail embodiments of the present invention with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings, like reference numbers generally represent like parts or steps.
FIG. 1 is a schematic block diagram of an example electronic device for implementing a target finding method and apparatus in accordance with embodiments of the present invention;
FIG. 2 is a schematic flow chart diagram of a target finding method according to an embodiment of the present invention;
FIG. 3 is a schematic block diagram of a target lookup apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic block diagram of a target finding system according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of embodiments of the invention and not all embodiments of the invention, with the understanding that the invention is not limited to the example embodiments described herein. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the invention described herein without inventive step, shall fall within the scope of protection of the invention.
First, an example electronic device 100 for implementing the object finding method and apparatus of the present invention is described with reference to fig. 1.
As shown in FIG. 1, the electronic device 100 includes one or more processors 101, one or more storage devices 102, an input device 103, and an output device 104, which are interconnected via a bus system 105 and/or other forms of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in fig. 1 are exemplary only, and not limiting, and the electronic device may have other components and structures as desired.
The processor 101 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities or instruction execution capabilities, and may control other components in the electronic device 100 to perform desired functions.
The storage 102 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. On which one or more computer program instructions may be stored that may be executed by the processor 101 to implement the client functionality (implemented by the processor) and/or other desired functionality in embodiments of the invention described below. Various applications and various data, such as various data used or generated by the applications, may also be stored in the computer-readable storage medium.
The input device 103 may be a device used by a user to input instructions, and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 104 may output various information (e.g., images or sounds) to an external (e.g., user), and may include one or more of a display, a speaker, and the like.
Exemplary electronic devices for implementing the object finding method and apparatus according to embodiments of the present invention may be implemented as, for example, smart phones, tablets, computer devices, and the like.
In the following, a target searching method 200 according to an embodiment of the present invention will be described with reference to fig. 2. The method is performed by a target searching device, which may be a personal terminal, a server or a cloud. As shown in fig. 2, the target searching method 200 includes:
Step S210: in response to a new search task request from a user, establishing a communication connection with a snapshot image storage device, and acquiring, from the snapshot image storage device, snapshot image information within a preset snapshot time and snapshot location range as a snapshot image information set; the new search task request includes the preset snapshot time and snapshot location.
After an event occurs, the user wants to find the target associated with the event as quickly as possible from the snapshot images captured near the time and place of the event. When the number of snapshot images is large, it is difficult and inefficient to determine manually which person is the target associated with the event. Experience shows that the target associated with an event is usually a person who has already attracted attention, and such persons are usually already present in an existing image base library. Therefore, the target associated with the event can be determined quickly by comparing the snapshot images with the image base library. For example, when a theft case occurs, the snapshot images captured near the location of the case can be compared with the base library of persons with prior theft records, and a person who is matched has a higher probability of being the target associated with the event.
The preset snapshot time and snapshot location may be a period before and after the time of the event and a certain range around the location of the event; for example, the time period from 10 min before to 10 min after the event may be taken as the preset snapshot time, and the area within 1 km of the event location as the preset snapshot location.
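As an illustration of this filtering step only, the following is a minimal sketch, not part of the patent, of how snapshot records might be selected by a time window and a distance radius; the record fields (timestamp, lat, lon, feature) and the haversine helper are assumptions made for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt

@dataclass
class SnapshotRecord:
    image_id: str
    timestamp: datetime
    lat: float
    lon: float
    feature: list  # feature vector, possibly precomputed by the capture camera

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def select_snapshots(records, event_time, event_lat, event_lon,
                     window=timedelta(minutes=10), radius_km=1.0):
    """Keep snapshots taken within +/- `window` of the event and within `radius_km` of it."""
    return [
        r for r in records
        if abs(r.timestamp - event_time) <= window
        and haversine_km(r.lat, r.lon, event_lat, event_lon) <= radius_km
    ]
```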
The snapshot image storage device may be a terminal, a server or a cloud device. The snapshot image information includes a snapshot image and/or a feature value corresponding to the snapshot image. When the snapshot image information includes feature values corresponding to the snapshot images (i.e. feature values extracted from the snapshot images), these feature values may be extracted by the front-end device, i.e. the capture camera, and stored in the snapshot image storage device. When the snapshot image information acquired by the target searching device from the snapshot image storage device contains only snapshot images and no feature values, the target searching device needs to extract the snapshot image feature values after acquiring the snapshot images, so as to facilitate subsequent collision comparison or image clustering.
In the prior art, the user has to retrieve the snapshot image information within the preset snapshot time and location range from the snapshot image storage device, package and compress the snapshot images, and upload them to the target searching device; in this embodiment, by contrast, the target searching device pulls the snapshot image information directly from the snapshot image storage device, so these manual operations are no longer required.
Step S220: acquiring a base library with a specific base library tag, the base library including a plurality of pieces of base library image information; the specific base library tag is determined according to a base library tag selected by the user, or according to the type of target to be searched for by the user, the new search task request including the base library tag or the target type.
To improve the accuracy of target searching, the image base library compared with the snapshot images is a base library with a specific base library tag. For example, when a theft occurs, the snapshot images should be compared with the base library of persons with prior theft records rather than with, say, a visitor library, so that a matched person has a higher probability of being related to the event. The specific base library tag may be selected by the user, or the target searching device may determine it according to the event type or target type selected by the user. For example, if the user selects the event type "theft case", the target searching device may determine from the event type that the specific base library tag is "prior theft record". In this way, the target searching device can automatically determine the base library to be retrieved according to the target type, which further reduces operation complexity.
There is at least one base library with the specific tag, and there may be several. For example, if the user selects the event type "theft case", then, since theft and robbery are related in nature, the target searching device may determine from the event type that the specific base library tags are "prior theft record" and "prior robbery record", and compare the snapshot images with both base libraries. Each base library contains a plurality of pieces of base library image information, and each piece of base library image information includes a base library image and/or a feature value corresponding to the base library image.
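The mapping from event or target type to base library tags described above could be realised with a simple lookup table; the sketch below is illustrative only, and the tag names and the fallback behaviour are assumptions rather than anything prescribed by the patent.

```python
# Hypothetical mapping from event/target type to the base library tags to retrieve.
EVENT_TYPE_TO_TAGS = {
    "theft": ["prior_theft_record", "prior_robbery_record"],
    "robbery": ["prior_robbery_record", "prior_theft_record"],
}

def resolve_base_library_tags(request):
    """Prefer tags chosen explicitly by the user; otherwise derive them from the target type."""
    if request.get("base_library_tags"):           # user selected the tags directly
        return list(request["base_library_tags"])
    event_type = request.get("target_type")
    return EVENT_TYPE_TO_TAGS.get(event_type, [])  # empty list if the type is unknown
```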
Step S230: performing collision comparison or image clustering on the snapshot image information set and the base library to obtain one or more candidate target image sets; each candidate target image set includes a pair of snapshot image information and base library image information matched in the collision comparison, or includes the snapshot image information and base library image information grouped into one set by the image clustering.
The snapshot images can be compared with the base library either by collision comparison or by image clustering.
In the collision comparison approach, the plurality of snapshot images in the snapshot image information set are compared m:n against the plurality of base library images in the base library. For each snapshot image, the base library image whose similarity to it exceeds a threshold and is the highest is taken as the base library image matched by that snapshot image, yielding a number of pairs of a snapshot image and its matched base library image, each of which is a candidate target image set. Each such candidate target image set contains exactly one snapshot image and one base library image. If one snapshot image matches two base library images at the same time, it forms two candidate target image sets, one with each base library image. For example, if snapshot image A matches base library image 1p with specific base library tag 1 and also matches base library image 1r with the same tag 1, the candidate target image sets containing snapshot image A are (A,1p) and (A,1r). As another example, if snapshot image A matches base library image 1p with specific base library tag 1 and also matches base library image 2w with specific base library tag 2, the candidate target image sets containing snapshot image A are (A,1p) and (A,2w).
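As a non-authoritative sketch of the collision comparison just described (m:n matching with a similarity threshold, one snapshot/base-library-image pair per candidate set), one might write something like the following. Cosine similarity, the data shapes, and the choice to keep the best match per base library tag are assumptions for the example, not part of the patent.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def collision_compare(snapshots, base_images, threshold=0.8):
    """m:n comparison; each returned candidate set pairs one snapshot with one matched base image.

    `snapshots` is a sequence of (image_id, feature_vector) tuples;
    `base_images` is a sequence of (image_id, feature_vector, library_tag) tuples.
    A snapshot may appear in several candidate sets if it matches base images
    from base libraries with different tags.
    """
    candidate_sets = []
    for snap_id, snap_feat in snapshots:
        best_per_tag = {}  # tag -> (base_id, similarity) of the best match above the threshold
        for base_id, base_feat, tag in base_images:
            sim = cosine_similarity(np.asarray(snap_feat), np.asarray(base_feat))
            if sim >= threshold and sim > best_per_tag.get(tag, (None, -1.0))[1]:
                best_per_tag[tag] = (base_id, sim)
        for tag, (base_id, sim) in best_per_tag.items():
            candidate_sets.append({"snapshot": snap_id, "base_image": base_id,
                                   "library_tag": tag, "similarity": sim})
    return candidate_sets
```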
In the image clustering approach, the snapshot images in the snapshot image information set and the base library images in the base library are clustered together (the process is the same as "one person, one file" clustering), and if a cluster contains both images from the snapshot images and images from the base library, that cluster is taken as a candidate target image set. If one snapshot image matches two base library images at the same time, the snapshot image and the two base library images together form a single candidate target image set. For example, if snapshot image A is clustered into one set with base library image 1p having specific base library tag 1 and base library image 1r having specific base library tag 1, the candidate target image set containing snapshot image A is (A,1p,1r). As another example, if snapshot image A is clustered into one set with base library image 1p having tag 1 and base library image 2w having tag 2, the candidate target image set containing snapshot image A is (A,1p,2w). As a further example, if snapshot images A and B, base library image 1p with tag 1 and base library image 2w with tag 2 are clustered into one set, the candidate target image set containing snapshot image A is (A,B,1p,2w). Compared with collision comparison, the candidate target image sets obtained by image clustering gather all the snapshot images and base library images belonging to the same person into one set. On the one hand, when duplicate images of the same person exist among the snapshot images or base library images, the comparison result for that person is not displayed repeatedly, which improves the usefulness of the results; on the other hand, image clustering shows more intuitively the case where one snapshot image matches base library images from several base libraries with different specific base library tags. When a person's snapshot image is clustered into one set with base library images from several base libraries with different specific base library tags, that person has a high probability of being the target. For example, when a theft case occurs, if snapshot image A, image 1p in the prior-theft-record base library and image 2w in the prior-robbery-record base library are clustered into the same set, the person corresponding to snapshot image A has both a prior theft record and a prior robbery record and therefore has a high probability of being the target. In this case the candidate target image set obtained by image clustering is (A,1p,2w), from which the specific base library tags 1 and 2 of the matched base library images can be read directly, whereas the candidate target image sets obtained by collision comparison are (A,1p) and (A,2w), and the specific base library tags of the base library images matched by snapshot image A can only be obtained with additional processing.
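As an illustrative sketch of this clustering approach (again an assumption rather than the patent's prescribed algorithm), a simple greedy feature-similarity clustering over the union of snapshot and base library images could look like this; a cluster is kept as a candidate target image set only when it mixes both sources.

```python
import numpy as np

def cluster_candidates(snapshots, base_images, threshold=0.8):
    """Greedy clustering of snapshot and base library images by feature similarity.

    Items are (image_id, feature_vector, source) with source in {"snapshot", "base"}.
    A cluster becomes a candidate target image set only if it contains at least one
    snapshot image and at least one base library image.
    """
    items = [(i, np.asarray(f, dtype=float), s) for i, f, s in list(snapshots) + list(base_images)]
    clusters = []  # each cluster: {"ids": [...], "sources": set(), "centroid": vector}
    for image_id, feat, source in items:
        best, best_sim = None, -1.0
        for c in clusters:
            sim = float(np.dot(feat, c["centroid"]) /
                        (np.linalg.norm(feat) * np.linalg.norm(c["centroid"])))
            if sim > best_sim:
                best, best_sim = c, sim
        if best is not None and best_sim >= threshold:
            best["ids"].append(image_id)
            best["sources"].add(source)
            best["centroid"] = (best["centroid"] * (len(best["ids"]) - 1) + feat) / len(best["ids"])
        else:
            clusters.append({"ids": [image_id], "sources": {source}, "centroid": feat})
    return [c["ids"] for c in clusters if {"snapshot", "base"} <= c["sources"]]
```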
Step S240: selecting a target image set from the candidate target image sets.
When there are multiple candidate target image sets, subsequently searching for and placing under surveillance the person corresponding to every candidate target image set would place a heavy burden on the system. In that case, the target searching device can determine the target image set from the multiple candidate target image sets automatically according to a preset rule, or determine it with user intervention; the person corresponding to the target image set is the target.
After the candidate target image sets are obtained, the target searching device can automatically screen out the target image set from them according to a preset rule, which further reduces operation complexity. Alternatively, the target image set can be screened out from the candidate target image sets with user intervention, so that the accuracy of target searching is improved by the user's manual screening.
Thus, in the embodiments of the invention, snapshot images can be pulled directly from the snapshot image storage device without the user downloading them locally and re-uploading them, which reduces operation complexity and shortens operation time; the base library to be retrieved can be selected by the user or determined automatically according to the target type, which further reduces operation complexity; and after the candidate target image sets are obtained, the target image set can be screened out automatically according to a preset rule or selected with user intervention, which further reduces operation complexity or improves the accuracy of target searching through the user's manual screening.
In one embodiment, step S240 includes: sorting the candidate target image sets according to the similarity between the snapshot image and the base library image in each candidate target image set, and taking the first N candidate target image sets as target image sets.
In this embodiment, the target searching device automatically determines the target from the persons corresponding to the multiple candidate target image sets according to a preset rule, the preset rule being to take as target image sets the N candidate target image sets with the highest similarity between the snapshot image and the base library image. It can be understood that the higher the similarity between the snapshot image and the base library image in a candidate target image set, the higher the probability that they belong to the same person, so the candidate target image sets whose persons are more likely to be in the base library are taken as target image sets. For example, if in two candidate target image sets the similarities between the snapshot image and the base library image are 99% and 80% respectively (with, say, anything above 80% counted as a match), the candidate target image set with 99% similarity is chosen as the target image set, because the person in its snapshot image is more likely to be in the base library. It should be noted that this preset rule is better suited to candidate target image sets obtained by collision comparison, because the set information of candidate target image sets obtained by collision comparison usually includes the similarity between the snapshot image and the base library image.
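A minimal sketch of this similarity-based ranking, assuming each candidate set carries the similarity score produced by the collision comparison (the field names are illustrative):

```python
def top_n_by_similarity(candidate_sets, n=10):
    """Return the N candidate target image sets with the highest snapshot/base-image similarity."""
    ranked = sorted(candidate_sets, key=lambda c: c["similarity"], reverse=True)
    return ranked[:n]
```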
In one embodiment there are multiple specific base library tags and each specific base library tag corresponds to a weight, and step S240 includes: sorting the candidate target image sets according to the weight of the base library identifier of each candidate target image set, and taking the first M candidate target image sets as target image sets; the base library identifier of a candidate target image set comprises the specific base library tags of the base libraries in which the base library image information of that candidate target image set is located, and the weight of the base library identifier is determined according to the weights of the specific base library tags it contains.
In this embodiment, the target searching device automatically determines the target image set from the multiple candidate target image sets according to a preset rule, the preset rule being to sort the candidate target image sets according to the weight of the base library identifier of each candidate target image set and to take the first M candidate target image sets as target image sets.
Each candidate target image set has at least one base library identifier, which identifies the base library or libraries whose base library images the snapshot image in that candidate target image set has matched. For example, if the candidate target image set is (A,1p), its base library identifier is 1; if the candidate target image set is (A,1p,2w), its base library identifiers are 1 and 2. Each specific base library tag corresponds to a weight, and the weight may differ between target searching tasks, or between target types, case types and the corresponding target searching tasks. For example, when a theft case occurs and the user wants to compare the snapshot images with both the prior-theft-record base library and the prior-robbery-record base library, a person whose snapshot image matches the prior-theft-record base library is more likely to be the target than one who matches the prior-robbery-record base library, so the weight of the prior-theft-record base library may be set to 5 and that of the prior-robbery-record base library to 3. When a robbery case occurs and the user still wants to compare the snapshot images with the same two base libraries, a person whose snapshot image matches the prior-robbery-record base library is more likely to be the target, so the weight of the prior-robbery-record base library may be set to 5 and that of the prior-theft-record base library to 3. Therefore, when determining the target image set from multiple candidate target image sets, the candidate target image sets whose base library identifiers have higher weights may be taken as target image sets first.
In a specific embodiment, the weight of a base library identifier is the sum of the weights of the specific base library tags it contains, or the weight of a base library identifier is the maximum of the weights of the specific base library tags it contains.
Continuing the previous example, it can be understood that a person whose snapshot image matches both the prior-theft-record base library and the prior-robbery-record base library is more likely to be the target, and that, for a theft case, a person in the prior-theft-record base library is more likely to be the target than a person in the prior-robbery-record base library; the weight of a base library identifier may therefore be set to the sum of the weights of the specific base library tags it contains, or to the maximum of those weights. For example, for candidate target image sets (A,1p,2w) and (B,1k), the base library identifiers of (A,1p,2w) are 1 and 2 with corresponding weights 3 and 5, and the base library identifier of (B,1k) is 1 with corresponding weight 3. When the weight of a base library identifier is the sum of the weights of its specific base library tags, the base library identifier weights of (A,1p,2w) and (B,1k) are 8 and 3 respectively; when it is the maximum of those weights, they are 5 and 3 respectively.
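The following short sketch illustrates the two weighting schemes (sum and maximum) applied to base library identifiers; the tag-to-weight mapping is an assumption taken from the example above:

```python
def identifier_weight(tags, tag_weights, mode="sum"):
    """Weight of a base library identifier: sum or maximum of the weights of its tags."""
    weights = [tag_weights[t] for t in tags]
    return sum(weights) if mode == "sum" else max(weights)

# Example corresponding to candidate sets (A,1p,2w) and (B,1k):
tag_weights = {1: 3, 2: 5}
print(identifier_weight([1, 2], tag_weights, "sum"))  # 8
print(identifier_weight([1], tag_weights, "sum"))     # 3
print(identifier_weight([1, 2], tag_weights, "max"))  # 5
```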
In one embodiment, step S240 includes: if the interval between the snapshot time/snapshot location of the snapshot image information contained in a first candidate target image set and the snapshot time/snapshot location of the snapshot image information contained in a second candidate target image set is smaller than a preset time interval threshold/location interval threshold, taking the first candidate target image set and the second candidate target image set together as the target image set; the snapshot image information of the first candidate target image set and that of the second candidate target image set correspond to different persons.
For some gang or group events, if the intervals between the snapshot times/snapshot locations of several snapshot images are smaller than the preset time interval threshold/location interval threshold and all of those snapshot images match images in an existing base library of persons of interest, the persons corresponding to those snapshot images have a high probability of being targets. For example, in a theft case, if the same camera captures persons A, B and C in snapshot images 1, 2 and 3 within 5 minutes, and snapshot images 1, 2 and 3 all match base library images in the prior-theft-record base library, then persons A, B and C may form a theft gang, and A, B and C can be taken as targets together.
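A sketch of this grouping rule, assuming each candidate set records the snapshot time, snapshot location and person identity (the field names and the distance callback are illustrative assumptions):

```python
from datetime import timedelta

def merge_nearby_candidates(candidate_sets, distance_km,
                            time_threshold=timedelta(minutes=5), dist_threshold_km=0.2):
    """Group candidate sets whose snapshots are close in time and place.

    Each candidate set is a dict with "person", "snap_time", "lat", "lon";
    `distance_km(a, b)` returns the distance in kilometres between two candidate sets.
    Groups containing more than one distinct person are returned together as target image sets.
    """
    groups = []
    for cand in candidate_sets:
        for group in groups:
            ref = group[0]
            if (abs(cand["snap_time"] - ref["snap_time"]) < time_threshold
                    and distance_km(cand, ref) < dist_threshold_km):
                group.append(cand)
                break
        else:
            groups.append([cand])
    return [g for g in groups if len({c["person"] for c in g}) > 1]
```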
In one embodiment, the method 200 further includes step S250: displaying the candidate target image sets to the user in sorted order.
The sorting may be by the similarity between the snapshot image and the base library image in each candidate target image set;
or the sorting may be by the weight of the base library identifier of each candidate target image set, wherein each specific base library tag corresponds to a weight, the base library identifier of a candidate target image set comprises the specific base library tags of the base libraries in which its base library image information is located, and the weight of the base library identifier is determined according to the weights of the specific base library tags it contains;
or the sorting may be by the snapshot time/snapshot location of the snapshot images in each candidate target image set.
In this embodiment, the candidate target image sets are displayed to the user in sorted order so that the user can select the target from the persons corresponding to the candidate target image sets. The sets are ordered from high to low according to the probability that the corresponding person is the target, as described above for step S240, which is not repeated here.
In one embodiment, step S240 includes: selecting the target image set from the candidate target image sets according to a selection instruction from the user.
In this embodiment, the target image set may be selected from the candidate target image sets solely according to the user's selection instruction, i.e. the user selects the target image set from the candidate target image sets; alternatively, the target image set may be selected from the candidate target image sets according to both the user's selection instruction and the automatic screening strategy of the target searching device.
The user may select among the candidate target image sets according to prior information. For example, if the user knows that a person captured in a snapshot image was simply going about normal activities, that person can be excluded from the target range. As another example, if the user knows personal characteristics of the target such as gender, age and height, the candidate target image sets can be selected according to those characteristics.
In one embodiment, the method 200 further includes step S260: quickly searching for the target according to the snapshot image information or base library image information of the target.
After the target has been selected, it is often desirable to search for further snapshot images of the target over a larger time and location range so as to determine the target's trajectory, or to deploy surveillance of the target over a larger time and location range so as to apprehend the target. When searching, whichever of the target's snapshot image information (i.e. the snapshot image information in the target image set) or base library image information (i.e. the base library image information in the target image set) is of better quality can be used, so as to improve image comparison accuracy.
In one embodiment, step S260 includes:
calculating the time difference between a first time and the snapshot time of the target's snapshot image;
calculating the target's range of movement from the time difference and the target's movement speed;
determining a search area centred on the snapshot location of the target's snapshot image, with the range of movement as the radius;
and searching for the target in the snapshot images of the cameras within the search area according to the target's snapshot image information or base library image information.
The first time may be the current time or any time after the capture time of the target's snapshot image.
For example, if the time difference between the first time and the snapshot time of the snapshot image in the target image set is 1 h and the target in that snapshot image is walking, the target's movement speed can be estimated at 5 km/h, so the target is searched for in the snapshot images of cameras located 4.5-5.5 km from the snapshot location of the snapshot image in the target image set, using the snapshot image information or base library image information in the target image set. That is, a 1:n comparison is performed against the snapshot images of the cameras within 4.5-5.5 km of that snapshot location, so as to search for the target.
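A minimal sketch of this search-area calculation, assuming the movement speed is estimated externally (for example from the walking/vehicle state) and that the area is expressed as an annulus around the last known snapshot location; the 10% margin matches the 4.5-5.5 km example above and is otherwise an assumption:

```python
def search_area(last_snapshot_time, last_lat, last_lon, query_time, speed_kmh=5.0, margin=0.1):
    """Return (centre, inner_radius_km, outer_radius_km) for the camera search area.

    The radius is the distance the target could have covered between the snapshot time
    and the query time at the estimated movement speed; the margin widens it into an annulus.
    """
    hours = (query_time - last_snapshot_time).total_seconds() / 3600.0
    radius = speed_kmh * hours
    return (last_lat, last_lon), radius * (1 - margin), radius * (1 + margin)

# Example: 1 h after the snapshot, walking at 5 km/h -> search cameras 4.5-5.5 km away.
```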
In one example, at a first time t1 min after the capture time of the target's snapshot image, the target is searched for in the snapshot images of the cameras within the search area according to the target's snapshot image information or base library image information, and a snapshot image P1 captured at location a1 is found. The time interval between a second time and the capture time of P1 is then taken as the time difference, and the target's range of movement is calculated from this time difference and the target's movement speed; a search area centred on the snapshot location of P1, with that range of movement as the radius, is determined; and the target is searched for in the snapshot images of the cameras within this new search area according to the target's snapshot image information or base library image information. In this way, several small-range searches can replace a single large-range search, thereby improving search efficiency.
In this embodiment, after the target is determined, the target can be quickly searched for or quickly placed under surveillance control, which further improves target searching efficiency.
Fig. 3 shows a schematic block diagram of a target finding apparatus 300 according to an embodiment of the present invention. As shown in fig. 3, the target finding apparatus 300 according to the embodiment of the present invention includes:
a snapshot image information acquisition module 310, configured to respond to a new search task request from a user, establish a communication connection with a snapshot image storage device, and acquire, from the snapshot image storage device, snapshot image information within a preset snapshot time and snapshot location range as a snapshot image information set; the new search task request includes the preset snapshot time and snapshot location;
a base library acquisition module 320, configured to acquire a base library with a specific base library tag, the base library including a plurality of pieces of base library image information; the specific base library tag is determined according to a base library tag selected by the user, or according to the type of target to be searched for by the user, the new search task request including the base library tag or the target type;
a collision comparison or image clustering module 330, wherein the collision comparison module is configured to perform collision comparison on the snapshot image information set and the base library to obtain one or more candidate target image sets, each candidate target image set including a pair of snapshot image information and base library image information matched in the collision comparison; and the image clustering module is configured to perform image clustering on the snapshot image information set and the base library to obtain one or more candidate target image sets, each candidate target image set including the snapshot image information and base library image information grouped into one set by the image clustering;
a target snapshot image information selection module 340, configured to select a target image set from the candidate target image sets, the person corresponding to the target image set being the target;
wherein the snapshot image information includes a snapshot image and/or a feature value corresponding to the snapshot image, and the base library image information includes a base library image and/or a feature value corresponding to the base library image.
The modules may respectively perform the steps/functions of the target searching method described above in connection with fig. 2. Only the main functions of the components of the target searching apparatus 300 are described here; the details already described above are omitted.
FIG. 4 shows a schematic block diagram of a target finding system 400 according to an embodiment of the invention. The target lookup system 400 includes a storage device 410, and a processor 420.
The storage means 410 stores program code for implementing the respective steps in the object finding method according to an embodiment of the present invention.
The processor 420 is configured to run the program code stored in the storage device 410 to perform the corresponding steps of the target searching method according to the embodiments of the present invention, and to implement the snapshot image information acquisition module 310, the base library acquisition module 320, the collision comparison or image clustering module 330, and the target snapshot image information selection module 340 of the target searching apparatus according to the embodiments of the present invention.
Furthermore, according to an embodiment of the present invention, a storage medium is also provided, on which program instructions are stored, which when executed by a computer or a processor are used for executing corresponding steps of the object finding method according to an embodiment of the present invention, and are used for implementing corresponding modules in the object finding device according to an embodiment of the present invention. The storage medium may include, for example, a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read only memory (CD-ROM), a USB memory, or any combination of the above storage media. The computer readable storage medium can be any combination of one or more computer readable storage media, e.g., one containing computer readable program code for randomly generating sequences of action instructions and another containing computer readable program code for performing object finding.
In one embodiment, the computer program instructions may implement the functional modules of the object finding apparatus according to the embodiment of the present invention when executed by a computer and/or may perform the object finding method according to the embodiment of the present invention.
The modules in the object finding system according to the embodiment of the present invention may be implemented by a processor of the object finding electronic device according to the embodiment of the present invention running computer program instructions stored in a memory, or may be implemented when computer instructions stored in a computer readable storage medium of the computer program product according to the embodiment of the present invention are run by a computer.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the foregoing illustrative embodiments are merely exemplary and are not intended to limit the scope of the invention thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another device, or some features may be omitted, or not executed.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the method of the present invention should not be construed to reflect the intent: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where such features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. It will be appreciated by those skilled in the art that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some of the modules in an item analysis apparatus according to embodiments of the present invention. The present invention may also be embodied as apparatus programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.
The above description is only for the specific embodiment of the present invention or the description thereof, and the protection scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and the changes or substitutions should be covered within the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (12)

1. A method of object finding, comprising:
responding to a new search task request of a user, establishing communication connection with a snapshot image storage device, and acquiring snapshot image information of a preset snapshot time and a snapshot place range from the snapshot image storage device as a snapshot image information set; the newly-built search task request comprises the preset snapshot time and snapshot place;
acquiring a base library with a specific base library label, wherein the base library comprises a plurality of base library image information; the specific bottom library label is determined according to a bottom library label selected by a user, or the specific bottom library label is determined according to the type of a target to be searched by the user, and the newly-built searching task request comprises the bottom library label or the type of the target;
performing collision comparison or image clustering on the snapshot image information set and the bottom library to obtain one or more candidate target image sets; each candidate target image set comprises a group of snapshot image information and bottom library image information matched in the collision comparison, or comprises the snapshot image information and the bottom library image information which are classified into one set through image clustering;
selecting a target image set from the candidate target image set, wherein a person corresponding to the target image set is a target;
the snapshot image information comprises the snapshot image and/or characteristic values corresponding to the snapshot image, and the bottom library image information comprises the bottom library image and/or characteristic values corresponding to the bottom library image.
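By way of illustration only, the following Python sketch shows one possible realization of the flow recited in claim 1: pulling a snapshot image information set, pulling a bottom library by label, and collision-comparing feature values to form candidate target image sets. The data layout, the cosine-similarity measure, and the 0.8 threshold are assumptions made for this sketch, not details taken from the patent.

```python
# Illustrative sketch only; names, data layout, and the similarity
# threshold are assumptions, not the patented implementation.
import numpy as np

def cosine_similarity(a, b):
    # Similarity between two feature-value vectors.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def collision_compare(snapshot_set, bottom_library, threshold=0.8):
    """Pair each piece of snapshot image information with matching bottom
    library image information; each matching pair forms one candidate
    target image set."""
    candidates = []
    for snap in snapshot_set:          # e.g. {"feature": [...], "time": ..., "place": (x, y)}
        for entry in bottom_library:   # e.g. {"feature": [...], "labels": [...]}
            sim = cosine_similarity(snap["feature"], entry["feature"])
            if sim >= threshold:
                candidates.append({"snapshot": snap, "bottom": entry, "similarity": sim})
    return candidates
```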
2. The method of claim 1, wherein the selecting a set of target images from the set of candidate target images comprises:
and sorting the candidate target image sets according to the similarity between the snapshot image and the bottom library image in each candidate target image set, and taking the first N candidate target image sets as target image sets.
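A minimal sketch of the sorting step of claim 2, reusing the candidate structure assumed in the sketch after claim 1; the field name and the choice of descending order are assumptions.

```python
def top_n_by_similarity(candidates, n):
    # Sort candidate target image sets by the similarity between the snapshot
    # image and the bottom library image, descending, and keep the first N.
    return sorted(candidates, key=lambda c: c["similarity"], reverse=True)[:n]
```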
3. The method of claim 1, wherein there are a plurality of specific bottom library labels, each specific bottom library label corresponds to a weight, and the selecting the target image set from the candidate target image sets comprises:
sorting the candidate target image sets according to the weight of the bottom library identifier of each candidate target image set, and taking the first M candidate target image sets as target image sets; the bottom library identifier of a candidate target image set comprises the specific bottom library labels of the bottom library in which the bottom library image information in the candidate target image set is located, and the weight of the bottom library identifier is determined according to the weights of the specific bottom library labels contained in the bottom library identifier.
4. The method according to claim 3, wherein the weight of the bottom library identifier is the sum of the weights of the specific bottom library labels it contains, or the weight of the bottom library identifier is the maximum weight among the specific bottom library labels it contains.
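Claims 3 and 4 can be read as a weighted ranking over bottom library labels. The sketch below assumes each bottom library entry carries a list of labels and that a label-to-weight mapping is supplied, with "sum" and "max" as the two aggregation modes named in claim 4; all names are illustrative.

```python
def identifier_weight(candidate, label_weights, mode="sum"):
    # The bottom library identifier of a candidate set is taken here to be the
    # labels of the library its bottom library image came from; its weight is
    # the sum or the maximum of those label weights (claim 4).
    weights = [label_weights[label] for label in candidate["bottom"]["labels"]]
    return sum(weights) if mode == "sum" else max(weights)

def top_m_by_identifier_weight(candidates, label_weights, m, mode="sum"):
    # Sort candidate target image sets by identifier weight and keep the first M.
    return sorted(candidates,
                  key=lambda c: identifier_weight(c, label_weights, mode),
                  reverse=True)[:m]
```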
5. The method of claim 1, wherein the selecting a set of target images from the set of candidate target images comprises:
if the interval between the snapshot time/snapshot place of the snapshot image information contained in the first candidate target image set and the snapshot time/snapshot place of the snapshot image information contained in the second candidate target image set is smaller than a preset time interval threshold value/place interval threshold value, taking the first candidate target image set and the second candidate target image set together as a target image set; the snapshot image information of the first candidate target image set and the snapshot image information of the second candidate target image set correspond to different people.
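The grouping rule of claim 5 (combining two candidate sets for different people whose snapshots are close in time or place) could look like the sketch below; the planar distance function, the tuple layout of "place", and the threshold names are assumptions for illustration.

```python
import math

def place_distance(p1, p2):
    # Planar distance between two (x, y) snapshot places; a real deployment
    # would more likely use geographic coordinates.
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def merge_if_close(first, second, time_threshold, place_threshold):
    """Return the two candidate target image sets as one combined target image
    set when their snapshot times or snapshot places are within the preset
    thresholds; otherwise return None."""
    dt = abs(first["snapshot"]["time"] - second["snapshot"]["time"])
    dx = place_distance(first["snapshot"]["place"], second["snapshot"]["place"])
    if dt < time_threshold or dx < place_threshold:
        return [first, second]
    return None
```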
6. The method of claim 1, wherein the method further comprises:
displaying the candidate target image sets to the user in a sorted order;
the sorting is to sort the candidate target image sets according to the similarity between the snapshot image and the bottom library image in each candidate target image set; or,
the sorting is to sort the candidate target image sets according to the weight of the bottom library identifier of each candidate target image set, wherein each specific bottom library label corresponds to a weight, the bottom library identifier of a candidate target image set comprises the specific bottom library labels of the bottom library in which the bottom library image information in the candidate target image set is located, and the weight of the bottom library identifier is determined according to the weights of the specific bottom library labels contained in the bottom library identifier; or,
the sorting is to sort according to the snapshot time/snapshot place of the snapshot images in each candidate target image set.
7. The method of any of claims 1 to 6, wherein the selecting a set of target images from the set of candidate target images comprises:
and selecting a target image set from the candidate target image sets according to a selection instruction of a user.
8. The method according to any one of claims 1 to 7, wherein the method further comprises performing a fast search for the target according to the snapshot image information or the bottom library image information of the target.
9. The method as claimed in claim 8, wherein the fast searching for the target according to the snapshot image information or the bottom library image information of the target comprises:
calculating a time difference between a first time and the snapshot time of the snapshot image of the target;
calculating the action range of the target according to the time difference and the action rate of the target;
determining a search area which takes the snapshot place of the snapshot image of the target as the center of a circle and the action range as the radius;
and searching the target in the snapshot image of the camera in the search area according to the snapshot image information or the bottom library image information of the target.
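The fast search of claim 9 amounts to bounding the target's possible movement: action range = time difference × action rate, then restricting the search to cameras inside a circle of that radius around the last snapshot place. The sketch below assumes planar camera coordinates and a simple camera list; all names are illustrative assumptions.

```python
import math

def action_range(first_time, snapshot_time, action_rate):
    # Action range of the target: elapsed time since its snapshot x action rate.
    return abs(first_time - snapshot_time) * action_rate

def cameras_in_search_area(cameras, snapshot_place, radius):
    # Keep cameras whose location lies inside the circle centred on the
    # snapshot place of the target's snapshot image, with the action range
    # as radius; only their snapshot images are then searched.
    cx, cy = snapshot_place
    return [cam for cam in cameras
            if math.hypot(cam["x"] - cx, cam["y"] - cy) <= radius]
```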
10. An object finding apparatus, characterized in that the apparatus comprises:
the snapshot image information acquisition module is used for responding to a new search task request of a user, establishing a communication connection with the snapshot image storage device, and acquiring snapshot image information within a preset snapshot time and snapshot place range from the snapshot image storage device to serve as a snapshot image information set; the new search task request comprises the preset snapshot time and snapshot place;
the bottom library acquisition module is used for acquiring a bottom library with a specific bottom library label, wherein the bottom library comprises a plurality of pieces of bottom library image information; the specific bottom library label is determined according to a bottom library label selected by the user, or the specific bottom library label is determined according to the type of a target to be searched by the user, and the new search task request comprises the bottom library label or the type of the target;
a collision comparison or image clustering module, wherein the collision comparison module is used for performing collision comparison on the snapshot image information set and the bottom library to obtain one or more candidate target image sets, each candidate target image set comprising a group of snapshot image information and bottom library image information matched in the collision comparison; and the image clustering module is used for performing image clustering on the snapshot image information set and the bottom library to obtain one or more candidate target image sets, each candidate target image set comprising the snapshot image information and the bottom library image information which are classified into one set through image clustering;
the target snapshot image information selection module is used for selecting a target image set from the candidate target image sets, wherein the person corresponding to the target image set is the target;
the snapshot image information comprises the snapshot image and/or characteristic values corresponding to the snapshot image, and the bottom library image information comprises the bottom library image and/or characteristic values corresponding to the bottom library image.
11. An object finding system comprising a memory, a processor and a computer program stored on the memory and running on the processor, characterized in that the steps of the method of any of claims 1 to 9 are implemented when the computer program is executed by the processor.
12. A computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a computer, implements the steps of the method of any of claims 1 to 9.
CN201910881015.6A 2019-09-18 2019-09-18 Target searching method, device, system and storage medium Pending CN110825893A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910881015.6A CN110825893A (en) 2019-09-18 2019-09-18 Target searching method, device, system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910881015.6A CN110825893A (en) 2019-09-18 2019-09-18 Target searching method, device, system and storage medium

Publications (1)

Publication Number Publication Date
CN110825893A true CN110825893A (en) 2020-02-21

Family

ID=69548021

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910881015.6A Pending CN110825893A (en) 2019-09-18 2019-09-18 Target searching method, device, system and storage medium

Country Status (1)

Country Link
CN (1) CN110825893A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102929903A (en) * 2012-07-04 2013-02-13 北京中盾安全技术开发公司 Rapid video retrieval method based on layered structuralized description of video information
CN104573111A (en) * 2015-02-03 2015-04-29 中国人民解放军国防科学技术大学 Method for structured storage and pre-retrieval of pedestrian data in surveillance videos
CN106339428A (en) * 2016-08-16 2017-01-18 东方网力科技股份有限公司 Identity identification method and device for suspects based on large video data
CN107067504A (en) * 2016-12-25 2017-08-18 北京中海投资管理有限公司 A kind of recognition of face safety-protection system and a suspect's detection and method for early warning
CN106709047A (en) * 2017-01-04 2017-05-24 浙江宇视科技有限公司 Object lookup method and device
CN109117714A (en) * 2018-06-27 2019-01-01 北京旷视科技有限公司 A kind of colleague's personal identification method, apparatus, system and computer storage medium
CN109190498A (en) * 2018-08-09 2019-01-11 安徽四创电子股份有限公司 A method of the case intelligence string based on recognition of face is simultaneously
CN109325548A (en) * 2018-10-23 2019-02-12 北京旷视科技有限公司 Image processing method, device, electronic equipment and storage medium
CN109766470A (en) * 2019-01-15 2019-05-17 北京旷视科技有限公司 Image search method, device and processing equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MAO, XIAODONG et al.: "A Brief Analysis of the Application of Face Clustering Technology in the Public Security Industry", China Security & Protection Technology and Application *
CHENG, DAJIANG et al.: "Construction and Application of Face Recognition Technology in Actual Police Work in Qinhuangdao", Police Technology *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111651627A (en) * 2020-05-27 2020-09-11 深圳市商汤科技有限公司 Data processing method and device, electronic equipment and storage medium
CN112579803A (en) * 2020-11-16 2021-03-30 北京迈格威科技有限公司 Image data cleaning method and device, electronic equipment and storage medium
CN112579803B (en) * 2020-11-16 2024-04-02 北京迈格威科技有限公司 Image data cleaning method and device, electronic equipment and storage medium
CN113268482A (en) * 2021-04-29 2021-08-17 北京旷视科技有限公司 Data association method and device and electronic equipment
CN113268482B (en) * 2021-04-29 2023-12-08 北京旷视科技有限公司 Data association method and device and electronic equipment

Similar Documents

Publication Publication Date Title
CN109117714B (en) Method, device and system for identifying fellow persons and computer storage medium
CN111046235B (en) Method, system, equipment and medium for searching acoustic image archive based on face recognition
US20210382933A1 (en) Method and device for archive application, and storage medium
CN110825893A (en) Target searching method, device, system and storage medium
CN109753848B (en) Method, device and system for executing face recognition processing
CN111581423B (en) Target retrieval method and device
CN106709047B (en) Object searching method and device
JP2022518469A (en) Information processing methods and devices, storage media
CN108563651B (en) Multi-video target searching method, device and equipment
KR101777238B1 (en) Method and system for image trend detection and curation of image
CN106484580B (en) A kind of internal-memory detection method, apparatus and system
CN109784220B (en) Method and device for determining passerby track
WO2012075219A2 (en) Relationship detection within biometric match results candidates
CN111954175B (en) Method for judging visiting of interest point and related device
CN112818149A (en) Face clustering method and device based on space-time trajectory data and storage medium
CN111222373A (en) Personnel behavior analysis method and device and electronic equipment
CN114139015A (en) Video storage method, device, equipment and medium based on key event identification
CN107679186B (en) Method and device for searching entity based on entity library
CN110263830B (en) Image processing method, device and system and storage medium
KR20190124436A (en) Method for searching building based on image and apparatus for the same
CN115062200A (en) User behavior mining method and system based on artificial intelligence
CN110196924B (en) Method and device for constructing characteristic information base and method and device for tracking target object
CN110688952B (en) Video analysis method and device
CN108255888B (en) Data processing method and system
JP2019083532A (en) Image processing system, image processing method, and image processing program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200221