CN111291682A - Method and device for determining target object, storage medium and electronic device

Info

Publication number: CN111291682A
Application number: CN202010082713.2A
Authority: CN (China)
Inventors: 阮学武, 高立勋
Original/current assignee: Zhejiang Dahua Technology Co Ltd
Priority/filing date: 2020-02-07
Publication date: 2020-06-16
Legal status: Pending
Other languages: Chinese (zh)

Classifications

    • G06V40/161: Human faces - Detection; Localisation; Normalisation
    • G06F18/23: Pattern recognition - Clustering techniques
    • G06V40/168: Human faces - Feature extraction; Face representation
    • G06V40/172: Human faces - Classification, e.g. identification


Abstract

The invention provides a method, a device, a storage medium and an electronic device for determining a target object, wherein the method comprises the following steps: acquiring image information respectively acquired by at least two camera devices at respective corresponding acquisition moments, wherein the image information comprises an image acquisition place, image acquisition time and an image of an object conforming to a target type; clustering the images of the objects included in the acquired image information to obtain a clustering result of each object; and determining the target object meeting the preset position condition and/or the preset time condition based on the obtained clustering result of each object and the image acquisition place and the image acquisition time included in the acquired image information. The invention solves the problem in the related art that target objects cannot be screened effectively, achieves screening of target objects according to defined screening rules, and improves working efficiency.

Description

Method and device for determining target object, storage medium and electronic device
Technical Field
The present invention relates to the field of communications, and in particular, to a method, an apparatus, a storage medium, and an electronic apparatus for determining a target object.
Background
In the related art, when a target object needs to be screened, it must be screened from a data source according to previously entered characteristic information of the object. A data source therefore needs to be established in advance before screening of the target object can be carried out.
In addition, when target objects are screened in the related art, only objects whose characteristics are consistent with the entered information can be screened from the data source; this single screening mode means target objects cannot be screened effectively.
In view of the above problems in the related art, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides a method, a device, a storage medium and an electronic device for determining a target object, so as to at least solve the problem in the related art that the target object cannot be effectively screened out.
According to an embodiment of the present invention, there is provided a method of determining a target object, including: acquiring image information respectively acquired by at least two camera devices at respective corresponding acquisition moments, wherein the image information comprises an image acquisition place, image acquisition time and an image of an object conforming to a target type; clustering the images of the objects included in the acquired image information to obtain a clustering result of each object; and determining the target object meeting the preset position condition and/or the preset time condition based on the obtained clustering result of each object and the image acquisition position and the image acquisition time included in the acquired image information.
According to another embodiment of the present invention, there is provided an apparatus for determining a target object, including: the acquisition module is used for acquiring image information respectively acquired by at least two camera devices at respective corresponding acquisition moments, wherein the image information comprises an image acquisition place, image acquisition time and an image of an object conforming to a target type; the processing module is used for clustering the images of the objects included in the acquired image information to obtain a clustering result of each object; and the determining module is used for determining the target object meeting the preset position condition and/or the preset time condition based on the obtained clustering result of each object and the image acquisition position and the image acquisition time included in the acquired image information.
According to a further embodiment of the present invention, there is also provided a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
According to the invention, the image information acquired by at least two camera devices is acquired, the images of the objects included in the image information are clustered, the target object meeting the conditions is determined according to the clustering result and the image information, and the effect of automatically screening the target object is achieved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a block diagram of a hardware structure of a mobile terminal of a method of determining a target object according to an embodiment of the present invention;
FIG. 2 is a flow diagram of a method of determining a target object according to an embodiment of the invention;
FIG. 3 is a flow diagram of a method of determining a target object in accordance with a specific embodiment of the present invention;
FIG. 4 is a flow diagram of generating a globally unique person ID in accordance with a specific embodiment of the present invention;
FIG. 5 is a flow diagram of a spatio-temporal multi-segment analysis in accordance with an embodiment of the present invention;
fig. 6 is a block diagram of a structure of an apparatus for determining a target object according to an embodiment of the present invention.
Detailed Description
The invention will be described in detail hereinafter with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The method provided by the embodiment of the application can be executed on a mobile terminal, a computer terminal, or a similar computing device. Taking running on a mobile terminal as an example, fig. 1 is a block diagram of the hardware structure of a mobile terminal for the method of determining a target object according to an embodiment of the present invention. As shown in fig. 1, the mobile terminal 10 may include one or more processors 102 (only one is shown in fig. 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)) and a memory 104 for storing data, and optionally may also include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and does not limit the structure of the mobile terminal. For example, the mobile terminal 10 may include more or fewer components than shown in fig. 1, or have a different configuration from that shown in fig. 1.
The memory 104 may be used for storing computer programs, for example, software programs and modules of application software, such as computer programs corresponding to the method for determining a target object in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer programs stored in the memory 104, so as to implement the method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some instances, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal 10. In one example, the transmission device 106 includes a Network adapter (NIC), which can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In the present embodiment, a method for determining a target object is provided, and fig. 2 is a flowchart of the method for determining a target object according to the embodiment of the present invention, as shown in fig. 2, the flowchart includes the following steps:
step S202, acquiring image information respectively acquired by at least two camera devices at respective corresponding acquisition moments, wherein the image information comprises an image acquisition place, image acquisition time and an image of an object conforming to a target type;
step S204, clustering the images of the objects included in the acquired image information to obtain clustering results of the objects;
step S206, determining a target object satisfying a predetermined location condition and/or a predetermined time condition based on the obtained clustering result of each object and the image acquisition location and the image acquisition time included in the acquired image information.
In the above-described embodiments, the image pickup apparatus may be a camera mounted on a traffic post, a camera deployed in a building, a mobile terminal with an image capture function, or the like. The image collected by the camera device may be a picture or a video. When the collected image is a video, the video may include real-time video and recorded video; the video is processed to obtain image information (the image of the object conforming to the target type may be obtained by decoding the video into complete single-frame data, detecting the objects conforming to the target type in each frame, and cropping those objects out, or by providing the coordinates of the object region, such as a face, within the frame). The images of the objects in the image information are then clustered, and the target object is determined according to the clustering result and a predetermined condition. The target object may be a person, a vehicle, an animal, or the like. For example, when a public security officer needs to determine a suspicious person, image information may be obtained by analyzing real-time video (video captured in real time by camera devices operated by the public security) during the period of a given incident and then clustering the images of persons included in that image information, or by analyzing and summarizing recorded video during the period of the incident (video provided by individuals or organizations near the incident and captured by camera devices not deployed by the public security) to sort out the detected person information.
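By way of illustration only, the decoding-and-matting step described above could be sketched as follows; OpenCV's bundled Haar cascade face detector, the sampling interval, and the record fields are illustrative assumptions and not part of the claimed method.

```python
# Minimal sketch (assumption: OpenCV is available; paths, sampling step and record fields are illustrative).
import cv2

def extract_face_records(video_path, camera_location, frame_step=25):
    """Decode a video into single frames, detect faces, and build image-information records."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    capture = cv2.VideoCapture(video_path)
    records = []
    frame_index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if frame_index % frame_step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
                records.append({
                    "location": camera_location,                  # image acquisition place
                    "time": capture.get(cv2.CAP_PROP_POS_MSEC),   # acquisition time within the video (ms)
                    "face": frame[y:y + h, x:x + w].copy(),       # cropped face image ("matting")
                })
        frame_index += 1
    capture.release()
    return records
```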
Optionally, the execution subject of the above steps may be a background processor or another device with similar processing capabilities, or a machine that integrates at least an image acquisition device and a data processing device, where the image acquisition device may include an image acquisition module such as a camera, and the data processing device may include a terminal such as a computer, but is not limited thereto.
According to the invention, the image information acquired by at least two camera devices is acquired, the images of the objects included in the image information are clustered, the target object meeting the conditions is determined according to the clustering result and the image information, and the effect of automatically screening the target object is achieved.
In an optional embodiment, performing clustering processing on the images of the objects included in the acquired image information to obtain a clustering result of each object includes: performing feature extraction on the images of the objects included in the acquired image information; and determining objects whose feature similarity exceeds a predetermined threshold by comparing the extracted features, and classifying the objects whose similarity exceeds the predetermined threshold into one class. In this embodiment, features of the objects are extracted from the image information using an algorithm, the extracted features are subjected to cluster analysis, and two objects are classified into one class when their similarity exceeds the predetermined threshold. When the object is a person, the extracted features may include height, whether glasses are worn, face contour, facial features, and the like; when the object is a vehicle, the extracted features may include vehicle type, vehicle color, license plate, and the like. The predetermined similarity threshold may be set to 85% (this value is only an optional embodiment; the predetermined threshold may also be determined according to the type of the target object, for example, 80%, 90%, 95%, and the like).
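The feature comparison and clustering described above could, for example, take the following form; the cosine similarity measure, the union-find grouping and the 0.85 threshold are illustrative choices only, and the feature vectors are assumed to be given by a separate extractor.

```python
# Minimal sketch: group objects whose pairwise cosine similarity exceeds a threshold (illustrative).
import numpy as np

def cluster_by_similarity(features, threshold=0.85):
    """features: list of 1-D numpy arrays. Returns one cluster label per feature."""
    n = len(features)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    normed = [f / (np.linalg.norm(f) + 1e-12) for f in features]
    for i in range(n):
        for j in range(i + 1, n):
            if float(np.dot(normed[i], normed[j])) > threshold:
                union(i, j)  # similarity exceeds the threshold: treat as the same object
    return [find(i) for i in range(n)]
```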
In an optional embodiment, after obtaining the clustering result of each object, the method further includes: assigning a globally unique identification (ID) to each object based on the clustering result, wherein objects under the same cluster share the same ID. In this embodiment, every object record detected in the images acquired by the image capturing apparatuses, that is, every object appearing in the images, has its object information updated, and records belonging to the same object are identified by the same object ID. For example, when the object is a person, the person information is analyzed and collated, a person ID is established through face recognition and feature extraction, and records whose feature-value comparison result is above the predetermined threshold are treated as the same target person.
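Mapping each resulting cluster to one identifier gives the "same cluster, same ID" behaviour; a minimal sketch is shown below, where the use of UUIDs is an illustrative assumption (the embodiment only requires the ID to be globally unique).

```python
# Minimal sketch: one globally unique ID per cluster (UUIDs are an illustrative choice).
import uuid

def assign_person_ids(cluster_labels):
    """cluster_labels: one label per detected object; returns one person ID per object."""
    label_to_id = {}
    ids = []
    for label in cluster_labels:
        if label not in label_to_id:
            label_to_id[label] = uuid.uuid4().hex  # new globally unique person ID
        ids.append(label_to_id[label])
    return ids
```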
In an optional embodiment, determining the target object satisfying the predetermined location condition and/or the predetermined time condition based on the obtained clustering result of each object and the image acquisition place and image acquisition time included in the acquired image information includes at least one of the following: determining a target object whose number of appearances in a target area exceeds a predetermined threshold, based on the obtained clustering result of each object and the image acquisition place and image acquisition time included in the acquired image information; and determining a target object appearing in the target area within a target time period, based on the obtained clustering result of each object and the image acquisition place and image acquisition time included in the acquired image information. In this embodiment, objects are screened according to target time periods and target areas set according to the investigation requirements, and an object is determined to be a target object when the number of times it appears in the target area within the target time period exceeds the predetermined threshold. For example, when public security personnel screen suspicious persons for a single case, the persons appearing during the case period are counted, along with their movement tracks and whether their appearance patterns are abnormal (for example, a person who lies low in the daytime and moves about at night, who frequently appeared at the scene before the case, or who appeared at the scene for the first time at the moment of the case). The predetermined threshold for the number of appearances may be 4 appearances within the target time period (this value is only an optional embodiment; different predetermined thresholds may also be set for different target time periods and target areas, for example, 5 or 6).
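The two screening conditions can be expressed as a filter over the clustered records, as in the following sketch; the record fields, the target-area parameter and the threshold of 4 appearances are illustrative assumptions.

```python
# Minimal sketch: screen person IDs by appearance count in a target area within a time window.
from collections import Counter

def screen_targets(records, target_area, start, end, min_count=4):
    """records: dicts with 'person_id', 'location', 'time'. Returns IDs with at least min_count appearances."""
    counts = Counter(
        r["person_id"]
        for r in records
        if r["location"] == target_area and start <= r["time"] <= end
    )
    return [pid for pid, c in counts.items() if c >= min_count]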
In an optional embodiment, after clustering the images of the objects included in the acquired image information to obtain the clustering result of each object, the method further includes at least one of: determining the time and place of the first appearance of an object under a first cluster based on the image acquisition place and image acquisition time included in the image information corresponding to the object under the first cluster; and depicting trajectory information of the object under the first cluster based on the image acquisition place and image acquisition time included in the image information corresponding to the object under the first cluster. In this embodiment, the objects in the same cluster correspond to image information acquired by different image capturing devices; the action track and activity time of the object are counted according to the image acquisition place and time included in the image information, and the trajectory information of the object is drawn and recorded in a database. For example, the appearances of a target person may be presented in table form, or a movement line may be drawn on a GIS (Geographic Information System) map according to the target person's pattern of activity. After the trajectory information of an object is depicted, the statistical results can be further analyzed. For example, when public security personnel screen suspicious persons across multiple cases, spatio-temporal screening is carried out to determine whether the same person appeared at the scene and at the time of each of the different cases, and the probability that the same perpetrator committed a string of cases (a serial case) is considered.
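The first-appearance and trajectory statistics for one cluster follow from sorting that cluster's records by acquisition time, for example (the record fields are illustrative assumptions):

```python
# Minimal sketch: first appearance and trajectory of one cluster (record fields are illustrative).
def first_appearance_and_trajectory(cluster_records):
    """cluster_records: dicts with 'time' and 'location' for one person/cluster."""
    ordered = sorted(cluster_records, key=lambda r: r["time"])
    first = (ordered[0]["time"], ordered[0]["location"])        # time and place of first appearance
    trajectory = [(r["time"], r["location"]) for r in ordered]  # points to draw on a GIS map
    return first, trajectory
```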
How to determine the target object is described below with reference to specific embodiments, with the target object being a person:
fig. 3 is a flowchart of a method for determining a target object according to an embodiment of the present invention, and as shown in fig. 3, the flowchart of the method for determining a target object according to an embodiment of the present invention includes the following steps:
step S302, a camera collects image videos, decodes the videos into complete single-frame data, detects faces in pictures, and conducts cutout or provides coordinates of small faces in the pictures for face feature extraction. Wherein this step may be performed by the face detection module.
And step S304, extracting a characteristic value from the face picture by algorithm processing. Wherein this step can be performed by the face feature extraction module.
And S306, comparing and clustering the extracted characteristic values, comparing whether the faces in different images belong to the same person, setting a threshold value in advance, and when the similarity is greater than the threshold value, considering that the faces in different images belong to the same person. Wherein the step can be performed by the feature comparison module.
And step S308, generating a globally unique personnel ID according to the comparison result. The snapshot face records detected in the video are updated, that is, the personnel information of all persons appearing in the video is updated, and records belonging to the same person are identified by the same personnel ID. Wherein, this step can be executed by the module that updates the person to whom each face belongs.
And step S310, screening target personnel in time and space according to the investigation requirements, and collecting statistics such as the movement routes and activity times of the target personnel. Wherein this step may be performed by a service processing module.
In step S312, suspicious target information satisfying the condition is output.
Fig. 4 is a flowchart of generating a globally unique human ID according to an embodiment of the present invention, and as shown in fig. 4, the flowchart of generating a globally unique human ID in the embodiment of the present invention includes the following steps:
in step S402, different image capturing apparatuses capture face images.
And step S404, extracting characteristic values in the face image.
And step S406, performing feature clustering analysis according to the human face features.
In step S408, it is determined whether the face similarity of the cluster reaches a predetermined threshold (corresponding to the predetermined threshold), and if the determination result is yes, step S410 is performed, and if the determination result is no, step S416 is performed.
And step S410, judging whether the personnel ID exists or not, if so, executing step S412, and if not, executing step S414.
Step S412, selecting and storing one of the personnel IDs.
In step S414, a globally unique person ID is generated.
In step S416, it is determined whether or not the person ID already exists, and if the determination result is yes, step S418 is performed, and if the determination result is no, step S414 is performed.
Step S418, selecting and storing one of the person IDs.
And step S420, ending the process of generating the globally unique personnel ID.
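Read as code, the branch structure of steps S402 to S420 can be sketched as follows; this is a simplified reading in which every stored cluster already carries an ID (so the "no" branch of step S410 collapses into step S414), and the cosine similarity, the 0.85 threshold and the in-memory store are illustrative assumptions.

```python
# Minimal sketch of the ID-assignment branches in steps S402-S420
# (cosine similarity, the 0.85 threshold and the in-memory store are illustrative assumptions).
import uuid
import numpy as np

def resolve_person_id(feature, known_persons, threshold=0.85, existing_id=None):
    """known_persons: dict person_id -> representative feature vector. Returns a person ID."""
    feature = feature / (np.linalg.norm(feature) + 1e-12)
    best_id, best_sim = None, -1.0
    for pid, rep in known_persons.items():
        sim = float(np.dot(feature, rep / (np.linalg.norm(rep) + 1e-12)))
        if sim > best_sim:
            best_id, best_sim = pid, sim
    if best_sim >= threshold:       # S408/S410/S412: similar enough, reuse an existing ID
        return best_id
    if existing_id is not None:     # S416/S418: no match, but this record already has an ID
        return existing_id
    new_id = uuid.uuid4().hex       # S414: generate a globally unique person ID
    known_persons[new_id] = feature
    return new_id
```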
Fig. 5 is a flowchart of a spatio-temporal multi-segment analysis according to an embodiment of the present invention, and as shown in fig. 5, the flowchart of the spatio-temporal multi-segment analysis in the embodiment of the present invention includes the following steps:
and step S502, clustering the face information captured by different camera devices in different time and different areas.
Step S504, generating snapshot record information of the globally unique personnel ID.
And step S506, filtering the suspicious target information meeting the conditions by combining with the time space.
And step S508, screening out the persons with the occurrence frequency exceeding a specified threshold value.
And step S510, screening out the persons who appear in abnormal working and resting time at night in the daytime.
And step S512, determining the time and place information of the first appearance of each person.
In step S514, trajectory information of each person is drawn based on the map.
It should be noted that step S506 may include step S508 and step S510.
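The filters of steps S506 to S512 could be combined over the globally unique snapshot records roughly as follows; the record fields, the count threshold and the definition of "night hours" are illustrative assumptions.

```python
# Minimal sketch of the filters in steps S506-S512 (fields, thresholds and hours are illustrative).
from collections import Counter, defaultdict

def spatio_temporal_screen(records, min_count=4, night_hours=range(21, 24)):
    """records: dicts with 'person_id', 'location', 'time' (datetime). Returns screening results."""
    by_person = defaultdict(list)
    for r in records:
        by_person[r["person_id"]].append(r)

    counts = Counter(r["person_id"] for r in records)
    frequent = [pid for pid, c in counts.items() if c >= min_count]            # S508

    nocturnal = [pid for pid, recs in by_person.items()                         # S510
                 if all(r["time"].hour in night_hours or r["time"].hour < 6 for r in recs)]

    first_seen = {pid: min(recs, key=lambda r: r["time"])                       # S512
                  for pid, recs in by_person.items()}
    return frequent, nocturnal, first_seen
```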
For example, when a public security officer analyzes a case, example simulation data obtained by spatio-temporal multi-segment analysis is shown in table 1.
TABLE 1

Snapshot number | Snapshot time   | Camera location (area) | Characteristic value | Person ID
1               | 2019/9/1 7:00   | A                      | 000011000111100      | 001
2               | 2019/9/1 8:00   | A                      | 000011011101100      | 004
3               | 2019/9/1 9:00   | A                      | 000011100001100      | 001
4               | 2019/9/1 15:00  | A                      | 000011000111001      | 001
5               | 2019/9/1 21:00  | A                      | 000011000110010      | 002
6               | 2019/9/2 7:00   | B                      | 100011000111100      | 001
7               | 2019/9/2 8:00   | B                      | 100011011101100      | 003
8               | 2019/9/2 9:00   | B                      | 100011100001100      | 002
9               | 2019/9/2 15:00  | B                      | 100011000111001      | 001
10              | 2019/9/2 21:00  | B                      | 100011000110010      | 002
11              | 2019/9/3 7:00   | C                      | 010011000111100      | 002
12              | 2019/9/3 8:00   | C                      | 010011011101100      | 002
13              | 2019/9/3 9:00   | C                      | 010011100001100      | 001
14              | 2019/9/3 15:00  | C                      | 010011000111001      | 004
15              | 2019/9/3 21:00  | C                      | 010011000110010      | 005
16              | 2019/9/4 7:00   | D                      | 110011000111100      | 005
17              | 2019/9/4 8:00   | D                      | 110011011101100      | 001
18              | 2019/9/4 9:00   | D                      | 110011100001100      | 001
19              | 2019/9/4 15:00  | D                      | 110011000111001      | 003
20              | 2019/9/4 21:00  | D                      | 110011000110010      | 005
As can be seen from the above table, face information is captured in area A, the virtual person ID to which each face belongs is confirmed through feature comparison, and person 001 can be identified as a high-frequency person through warehousing and statistical analysis; face information is captured in area D, and person 005 can be preliminarily judged as possibly lying low in the daytime and coming out at night; in case analysis, all case areas are integrated, and the first appearance place and time of every person can be listed in combination with the time range.
As can also be seen from the table, over two consecutive days in areas A and B, the persons who appeared more than a certain number of times are 001 and 002; over four consecutive days across the four areas A, B, C and D, only person 001 appears in every area. When multiple cases are analyzed, such statistics, combined with the actual circumstances of the cases, can assist the user in screening out part of the target persons.
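As a worked check against Table 1, the following sketch (which assumes a simplified row layout of day, hour, area and person ID) reproduces the observations above: person 001 dominates in area A and is the only person seen in all four areas, while person 005 is captured only at 21:00 or 7:00.

```python
# Worked check against Table 1 (row layout: day, hour, area, person_id; simplified for the example).
from collections import Counter, defaultdict

ROWS = [
    (1, 7, "A", "001"), (1, 8, "A", "004"), (1, 9, "A", "001"), (1, 15, "A", "001"), (1, 21, "A", "002"),
    (2, 7, "B", "001"), (2, 8, "B", "003"), (2, 9, "B", "002"), (2, 15, "B", "001"), (2, 21, "B", "002"),
    (3, 7, "C", "002"), (3, 8, "C", "002"), (3, 9, "C", "001"), (3, 15, "C", "004"), (3, 21, "C", "005"),
    (4, 7, "D", "005"), (4, 8, "D", "001"), (4, 9, "D", "001"), (4, 15, "D", "003"), (4, 21, "D", "005"),
]

# Frequency per person in area A (001 dominates).
area_a_counts = Counter(pid for _, _, area, pid in ROWS if area == "A")

# Persons seen in every one of the four areas over the four days (only 001).
areas_per_person = defaultdict(set)
for _, _, area, pid in ROWS:
    areas_per_person[pid].add(area)
in_all_areas = [pid for pid, areas in areas_per_person.items() if len(areas) == 4]

# Hours at which person 005 was captured (only 21:00 and 7:00, suggesting night activity).
hours_005 = sorted({hour for _, hour, _, pid in ROWS if pid == "005"})

print(area_a_counts)   # Counter({'001': 3, '004': 1, '002': 1})
print(in_all_areas)    # ['001']
print(hours_005)       # [7, 21]
```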
In the foregoing embodiment, objects are detected by analyzing real-time video, recorded video, and the like; cluster analysis is performed on the extracted feature values of the objects; and the target object is determined based on the generated globally unique person ID in combination with multiple segments of spatio-temporal point information and decision rules, so as to efficiently analyze massive video data.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, a device for determining a target object is further provided, and the device is used to implement the foregoing embodiments and preferred embodiments, which have already been described and are not described again. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 6 is a block diagram of a structure of an apparatus for determining a target object according to an embodiment of the present invention, as shown in fig. 6, the apparatus including:
the acquiring module 62 is configured to acquire image information respectively acquired by at least two image capturing apparatuses at respective corresponding acquiring times, where the image information includes an image acquiring location, image acquiring time, and an image of an object conforming to a target type; a processing module 64, configured to perform clustering processing on the images of the objects included in the acquired image information to obtain a clustering result of each object; and a determining module 66, configured to determine, based on the obtained clustering result of each object and the image acquisition location and the image acquisition time included in the acquired image information, a target object that meets a predetermined location condition and/or a predetermined time condition.
The obtaining module 62 corresponds to the face detecting module, the processing module 64 corresponds to the face feature extracting module and the feature comparing module, and the determining module 66 corresponds to the business processing module.
In an optional embodiment, the processing module 64 may perform clustering processing on the images of the objects included in the acquired image information to obtain a clustering result of each object by: performing feature extraction on an image of an object included in the acquired image information; and determining the objects with the similarity exceeding a preset threshold value by comparing the similarity of the features, and classifying the objects with the similarity exceeding the preset threshold value into one class.
In an alternative embodiment, the apparatus may be configured to, after obtaining a clustering result of each object, assign a globally unique identification ID to each object based on the clustering result, where IDs of objects under the same cluster are the same.
In an alternative embodiment, the determining module 66 is configured to perform at least one of the following operations: determining the target object with the occurrence frequency exceeding a preset threshold in a target area based on the obtained clustering result of each object and the image acquisition place and the image acquisition time included in the acquired image information; and determining the target object appearing in the target area in the target time period based on the obtained clustering result of each object and the image acquisition place and the image acquisition time included in the acquired image information.
In an optional embodiment, the apparatus is further configured to perform at least one of the following: after clustering processing is carried out on the images of the objects included in the acquired image information to obtain a clustering result of each object, time and place information of the first appearance of the objects under a first cluster is determined based on an image acquisition place and image acquisition time included in the image information corresponding to the objects under the first cluster; after clustering processing is carried out on the images of the objects included in the acquired image information to obtain a clustering result of each object, track information of the objects under a first cluster is described based on an image acquisition place and image acquisition time included in the image information corresponding to the objects under the first cluster.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Embodiments of the present invention also provide a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above-mentioned method embodiments when executed.
Alternatively, in the present embodiment, the above-mentioned computer-readable storage medium may be configured to store a computer program for executing the steps of:
s1, acquiring image information respectively acquired by at least two camera devices at respective corresponding acquisition moments, wherein the image information comprises an image acquisition place, image acquisition time and an image of an object conforming to a target type;
s2, clustering the images of the objects included in the acquired image information to obtain clustering results of the objects;
and S3, determining the target objects meeting the preset position condition and/or the preset time condition based on the obtained clustering result of each object and the image acquisition positions and the image acquisition time included in the acquired image information.
Optionally, in this embodiment, the computer-readable storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, acquiring image information respectively acquired by at least two camera devices at respective corresponding acquisition moments, wherein the image information comprises an image acquisition place, image acquisition time and an image of an object conforming to a target type;
s2, clustering the images of the objects included in the acquired image information to obtain clustering results of the objects;
and S3, determining the target objects meeting the preset position condition and/or the preset time condition based on the obtained clustering result of each object and the image acquisition positions and the image acquisition time included in the acquired image information.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed across a network of multiple computing devices, and alternatively, they may be implemented by program code executable by a computing device, such that they may be stored in a storage device and executed by a computing device, and in some cases, the steps shown or described may be performed in an order different than that described herein, or they may be separately fabricated into individual integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method of determining a target object, comprising:
acquiring image information respectively acquired by at least two camera devices at respective corresponding acquisition moments, wherein the image information comprises an image acquisition place, image acquisition time and an image of an object conforming to a target type;
clustering the images of the objects included in the acquired image information to obtain a clustering result of each object;
and determining the target object meeting the preset position condition and/or the preset time condition based on the obtained clustering result of each object and the image acquisition position and the image acquisition time included in the acquired image information.
2. The method according to claim 1, wherein clustering the images of the objects included in the acquired image information to obtain a clustering result of each object comprises:
performing feature extraction on an image of an object included in the acquired image information;
and determining the objects with the similarity exceeding a preset threshold value by comparing the similarity of the features, and classifying the objects with the similarity exceeding the preset threshold value into one class.
3. The method according to claim 1 or 2, wherein after obtaining the clustering result of each object, the method further comprises:
and distributing global unique Identification (ID) for each object based on the clustering result, wherein the IDs of the objects under the same cluster are the same.
4. The method according to claim 1, wherein determining the target object satisfying the predetermined location condition and/or the predetermined time condition based on the obtained clustering result of each object and the image acquisition location and the image acquisition time included in the acquired image information comprises at least one of:
determining the target object with the occurrence frequency exceeding a preset threshold in a target area based on the obtained clustering result of each object and the image acquisition place and the image acquisition time included in the acquired image information;
and determining the target object appearing in the target area in the target time period based on the obtained clustering result of each object and the image acquisition place and the image acquisition time included in the acquired image information.
5. The method according to claim 1, wherein after clustering the images of the objects included in the acquired image information to obtain a clustering result of each object, the method further comprises at least one of:
determining time and place information of the first appearance of the object under the first cluster based on the image acquisition place and the image acquisition time included in the image information corresponding to the object under the first cluster;
and depicting track information of the object under the first cluster based on an image acquisition place and image acquisition time included in the image information corresponding to the object under the first cluster.
6. An apparatus for determining a target object, comprising:
the acquisition module is used for acquiring image information respectively acquired by at least two camera devices at respective corresponding acquisition moments, wherein the image information comprises an image acquisition place, image acquisition time and an image of an object conforming to a target type;
the processing module is used for clustering the images of the objects included in the acquired image information to obtain a clustering result of each object;
and the determining module is used for determining the target object meeting the preset position condition and/or the preset time condition based on the obtained clustering result of each object and the image acquisition position and the image acquisition time included in the acquired image information.
7. The apparatus of claim 6, wherein the determining module is configured to perform at least one of:
determining the target object with the occurrence frequency exceeding a preset threshold in a target area based on the obtained clustering result of each object and the image acquisition place and the image acquisition time included in the acquired image information;
and determining the target object appearing in the target area in the target time period based on the obtained clustering result of each object and the image acquisition place and the image acquisition time included in the acquired image information.
8. The apparatus of claim 6, wherein the apparatus is further configured to perform at least one of:
after clustering processing is carried out on the images of the objects included in the acquired image information to obtain a clustering result of each object, time and place information of the first appearance of the objects under a first cluster is determined based on an image acquisition place and image acquisition time included in the image information corresponding to the objects under the first cluster;
after clustering processing is carried out on the images of the objects included in the acquired image information to obtain a clustering result of each object, track information of the objects under a first cluster is described based on an image acquisition place and image acquisition time included in the image information corresponding to the objects under the first cluster.
9. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method of any of claims 1 to 5 when executed.
10. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 5.

