CN111563479B - Peer deduplication method, group analysis method and apparatus, and electronic device - Google Patents

Peer deduplication method, group analysis method and apparatus, and electronic device

Info

Publication number
CN111563479B
Authority
CN
China
Prior art keywords
person
time
snapshot
image
track
Prior art date
Legal status
Active
Application number
CN202010455227.0A
Other languages
Chinese (zh)
Other versions
CN111563479A (en)
Inventor
李晓通
李蔚琳
梁栋
葛飞剑
付豪
Current Assignee
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd
Priority to CN202010455227.0A
Publication of CN111563479A
Application granted
Publication of CN111563479B
Legal status: Active
Anticipated expiration


Classifications

    • G06V 40/103: Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G06F 18/23: Pattern recognition; clustering techniques
    • G06V 40/166: Human faces; detection, localisation or normalisation using acquisition arrangements
    • G06V 40/168: Human faces; feature extraction or face representation
    • G06V 40/172: Human faces; classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the invention provide a peer deduplication method, a group analysis method, an apparatus, and an electronic device. The method comprises the following steps: acquiring an image including a face of a first person; acquiring, from the image, an image class including the face of the first person; determining a trajectory of the first person according to the snapshot time and snapshot camera of each image in the image class, the trajectory comprising at least one trajectory point; determining peers of the first person according to the trajectory; and, when a second person appears in at least one image snapped at a first trajectory point, determining that the second person and the first person co-occurred once at the first trajectory point, where the second person is any one of the peers and the first trajectory point is any one of the at least one trajectory point. Embodiments of the invention can improve the accuracy of co-occurrence counts.

Description

Peer deduplication method, group analysis method and apparatus, and electronic device
Technical Field
The invention relates to the field of computer technology, and in particular to a peer deduplication method, a group analysis method and apparatus, and an electronic device.
Background
With the continuous development of electronic technology, surveillance cameras are increasingly widely used. As face recognition technology matures, the trajectory of a person can be obtained by applying face recognition to the images or videos snapped by surveillance cameras; the person's peers can then be determined from that trajectory, along with the number of times each peer co-occurred with the person. At present, the co-occurrence count of a peer is determined as the number of times that peer appears in the person's trajectory. Because a peer may be snapped many times in quick succession, this method inflates the co-occurrence count and thus reduces its accuracy.
Disclosure of Invention
Embodiments of the invention provide a peer deduplication method, a group analysis method, and related apparatus, which are used to improve the accuracy of co-occurrence counts.
A first aspect provides a peer deduplication method, comprising: acquiring an image including a face of a first person; acquiring, from the image, an image class including the face of the first person; determining a trajectory of the first person according to the snapshot time and snapshot camera of each image in the image class, the trajectory comprising at least one trajectory point; determining peers of the first person according to the trajectory; and, when a second person appears in at least one image snapped at a first trajectory point, determining that the second person and the first person co-occurred once at the first trajectory point, where the second person is any one of the peers and the first trajectory point is any one of the at least one trajectory point.
In this embodiment, at any one trajectory point of the first person's trajectory, a peer is counted as co-occurring with the first person exactly once no matter how many times the peer appears in the images snapped there. This avoids inflating the co-occurrence count when multiple snapshots of the same peer are taken within a short period, and therefore improves the accuracy of the co-occurrence count.
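Purely as an illustration (not part of the claims), the sketch below contrasts the naive occurrence count from the background section with the per-trajectory-point deduplicated count of the first aspect; the data layout, a mapping from trajectory points to the identities detected there, is an assumption.

```python
from collections import Counter

# Hypothetical detections: the identities of the people appearing in the
# images snapped at each trajectory point of the first person's trajectory.
detections = {
    "point_1": ["second_person", "second_person", "second_person", "other"],
    "point_2": ["second_person"],
    "point_3": ["other"],
}

# Naive count (background section): every appearance counts, so the three
# snapshots taken in quick succession at point_1 inflate the result.
naive = Counter(p for people in detections.values() for p in people)

# Deduplicated count (first aspect): at most one co-occurrence per point.
dedup = Counter(p for people in detections.values() for p in set(people))

print(naive["second_person"], dedup["second_person"])  # prints: 4 2
```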
As a possible implementation, acquiring the image class including the face of the first person from the image comprises: extracting a face feature of the face from the image; determining a label of the face feature; and obtaining the image class corresponding to the label according to a correspondence between labels and image classes, thereby obtaining the image class including the face of the first person.
In this embodiment, images including the same person's face are clustered into one image class in advance, each face feature is assigned a label, and a correspondence between each image class and the label of its face feature is established. The image class can then be looked up quickly through the face feature, which improves peer deduplication efficiency.
As a possible implementation, determining the trajectory of the first person according to the snapshot time and snapshot camera of each image in the image class comprises: acquiring the snapshot time and snapshot camera of each image in the image class; classifying the images by snapshot camera to obtain M classes of images, where M is the number of snapshot cameras; when a current class among the M classes includes one image, determining a first snapshot camera together with a first time period as one trajectory point of the first person, where the first snapshot camera is the camera corresponding to the current class, the first time period is the period from a threshold time before a first snapshot time to a threshold time after the first snapshot time, and the first snapshot time is the snapshot time of that image; and determining the trajectory of the first person from the trajectory points of the first person.
In this embodiment, when determining the trajectory of the target person, images in the image class snapped by different cameras can be processed without considering the time intervals between their snapshot times.
As a possible implementation, when the current class includes two images, a time interval between a second snapshot time and a third snapshot time is calculated to obtain a first time interval, where the second snapshot time is the snapshot time of one of the two images and the third snapshot time is the snapshot time of the other; when the first time interval is greater than a threshold, the first snapshot camera together with a second time period is determined as one trajectory point of the first person, and the first snapshot camera together with a third time period as another trajectory point, where the second time period is the period from a threshold time before the second snapshot time to a threshold time after the second snapshot time, and the third time period is the period from a threshold time before the third snapshot time to a threshold time after the third snapshot time; when the first time interval is not greater than the threshold, the first snapshot camera together with a fourth time period is determined as one trajectory point of the first person, where the fourth time period is the period from a threshold time before the second snapshot time to a threshold time after the third snapshot time, the second snapshot time being earlier than the third snapshot time.
In this embodiment, when the interval between the snapshot times of two images from the same camera is large, two trajectory points are determined from the two snapshot times. This avoids treating images snapped far apart in time by the same camera as images of a single trajectory point, which would merge co-occurrences that should be counted separately and make the co-occurrence count too low. When the interval is small, a single trajectory point is determined from the two snapshot times. This prevents trajectory points at the same position from overlapping in time, i.e. it prevents multiple snapshots of the target person taken by the same camera (that is, at the same position) within a short period from being treated as images of multiple trajectory points, which would cause the same peer to be counted repeatedly and make the co-occurrence count too high. Both cases therefore further improve the accuracy of the co-occurrence count.
As a possible implementation, when the current class includes N images, the snapshot times of the N images are sorted chronologically to obtain a sorted list, where N is an integer greater than or equal to 3; the time interval between adjacent snapshot times in the sorted list is calculated; the N snapshot times are divided into K groups of snapshot times according to these intervals, where K is an integer less than or equal to N and the interval between any two of the K groups is greater than the threshold; and K trajectory points of the first person are determined from the K groups of snapshot times.
In this embodiment, when every interval between adjacent snapshot times of images from the same camera is small, a single trajectory point is determined from those snapshot times. As above, this prevents trajectory points at the same position from overlapping in time, so snapshots taken by one camera within a short period are not treated as images of multiple trajectory points, the same peer is not counted repeatedly at different trajectory points, and the accuracy of the co-occurrence count is further improved.
As a possible implementation, determining the peers of the first person according to the trajectory comprises: determining every person other than the first person who appears in the images snapped at any trajectory point of the trajectory as a peer of the first person.
As a possible implementation, the method further comprises: determining the co-occurrence count of each peer with the first person.
A second aspect provides a group analysis method, comprising: determining the co-occurrence count of each peer of a first person with the first person, the count being determined according to the method provided above; sorting the peers in descending order of co-occurrence count to obtain a first sorted list, or in ascending order of co-occurrence count to obtain a second sorted list; and determining the top L persons in the first sorted list, or the bottom L persons in the second sorted list, as a group of the first person, where L is an integer greater than 1.
In this embodiment, when counting how many times a peer co-occurred with the target person, the peer is counted exactly once at each trajectory point of the target person's trajectory no matter how many times the peer appears there. This avoids inflating the co-occurrence count when the peer is snapped many times within a short period, improves the accuracy of the co-occurrence count, and thereby improves the accuracy of the group analysis.
A third aspect provides a peer deduplication apparatus, comprising: a first acquiring unit configured to acquire an image including a face of a first person; a second acquiring unit configured to acquire, from the image, an image class including the face of the first person; a first determining unit configured to determine a trajectory of the first person according to the snapshot time and snapshot camera of each image in the image class, the trajectory comprising at least one trajectory point; a second determining unit configured to determine peers of the first person according to the trajectory; and a third determining unit configured to determine, when a second person appears in at least one image snapped at a first trajectory point, that the second person and the first person co-occurred once at the first trajectory point, where the second person is any one of the peers and the first trajectory point is any one of the at least one trajectory point.
As a possible implementation, the second acquiring unit is specifically configured to: extract a face feature of the face from the image; determine a label of the face feature; and obtain the image class corresponding to the label according to the correspondence between labels and image classes, thereby obtaining the image class including the face of the first person.
As a possible implementation, the first determining unit is specifically configured to: acquire the snapshot time and snapshot camera of each image in the image class; classify the images by snapshot camera to obtain M classes of images, where M is the number of snapshot cameras; when a current class among the M classes includes one image, determine a first snapshot camera together with a first time period as one trajectory point of the first person, where the first snapshot camera is the camera corresponding to the current class, the first time period is the period from a threshold time before a first snapshot time to a threshold time after the first snapshot time, and the first snapshot time is the snapshot time of that image; and determine the trajectory of the first person from the trajectory points of the first person.
As a possible implementation, the first determining unit is further configured to: when the current class includes two images, calculate the time interval between a second snapshot time and a third snapshot time to obtain a first time interval, where the second snapshot time is the snapshot time of one of the two images and the third snapshot time is the snapshot time of the other; when the first time interval is greater than a threshold, determine the first snapshot camera together with a second time period as one trajectory point of the first person and the first snapshot camera together with a third time period as another trajectory point, where the second time period is the period from a threshold time before the second snapshot time to a threshold time after the second snapshot time, and the third time period is the period from a threshold time before the third snapshot time to a threshold time after the third snapshot time; and, when the first time interval is not greater than the threshold, determine the first snapshot camera together with a fourth time period as one trajectory point of the first person, where the fourth time period is the period from a threshold time before the second snapshot time to a threshold time after the third snapshot time, the second snapshot time being earlier than the third snapshot time.
As a possible implementation, the first determining unit is further configured to: when the current class includes N images, sort the snapshot times of the N images chronologically to obtain a sorted list, where N is an integer greater than or equal to 3; calculate the time interval between adjacent snapshot times in the sorted list; divide the N snapshot times into K groups of snapshot times according to these intervals, where K is an integer less than or equal to N and the interval between any two of the K groups is greater than the threshold; and determine K trajectory points of the first person from the K groups of snapshot times.
As a possible implementation, the second determining unit is specifically configured to determine every person other than the first person who appears in the images snapped at any trajectory point of the trajectory as a peer of the first person.
As a possible implementation, the apparatus further comprises: a fourth determining unit configured to determine the co-occurrence count of each peer with the first person.
A fourth aspect provides a group analysis apparatus, comprising: a first determining unit configured to determine the co-occurrence count of each peer of a first person with the first person, the count being determined according to the method provided above; a sorting unit configured to sort the peers in descending order of co-occurrence count to obtain a first sorted list, or in ascending order of co-occurrence count to obtain a second sorted list; and a second determining unit configured to determine the top L persons in the first sorted list, or the bottom L persons in the second sorted list, as a group of the first person, where L is an integer greater than 1.
A fifth aspect provides an electronic device comprising a processor and a memory, the memory storing a computer program and the processor being configured to invoke the computer program to perform the peer deduplication method provided by the first aspect or any possible implementation of the first aspect.
A sixth aspect provides an electronic device comprising a processor and a memory, the memory storing a computer program and the processor being configured to invoke the computer program to perform the group analysis method provided by the second aspect.
A seventh aspect provides a computer-readable storage medium storing a computer program comprising program code which, when executed by a processor, causes the processor to perform the peer deduplication method of the first aspect or any possible implementation of the first aspect.
An eighth aspect provides a computer-readable storage medium storing a computer program comprising program code which, when executed by a processor, causes the processor to perform the group analysis method provided by the second aspect.
A ninth aspect provides an application program which, when run, performs the peer deduplication method provided by the first aspect or any possible implementation of the first aspect.
A tenth aspect provides an application program which, when run, performs the group analysis method provided by the second aspect.
Drawings
FIG. 1 is a schematic diagram of a network architecture according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of a peer deduplication method according to an embodiment of the present invention;
FIG. 3 is a schematic flowchart of a trajectory determination method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of trajectory point determination according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of dividing six snapshot times into three groups of snapshot times by time interval according to an embodiment of the present invention;
FIG. 6 is a schematic flowchart of a group analysis method according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a peer deduplication apparatus according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a group analysis apparatus according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
Embodiments of the invention provide a peer deduplication method, a group analysis method and apparatus, and an electronic device, which improve the accuracy of co-occurrence counts. Each is described in detail below.
To better understand the peer deduplication method, group analysis method, apparatus, and electronic device provided by the embodiments of the invention, the network architecture they use is described first. Referring to FIG. 1, FIG. 1 is a schematic diagram of a network architecture according to an embodiment of the invention. As shown in FIG. 1, the network architecture may include a camera 101, a parsing server 102, and a processing device 103. The camera 101 captures images in real time or periodically. The camera 101 may be a camera in a surveillance device; the surveillance device may send the images collected by the camera 101 to the parsing server 102 actively, in real time or periodically, or after receiving a request for the images from the parsing server 102. The camera 101 may also be a standalone camera; if it has no communication capability, the collected images may be uploaded to the parsing server 102 through the camera's memory card. The camera 101 may also be a camera in a mobile phone or a similar device.
The parsing server 102 is configured to assign labels to the face features in a static library (a database of registered face features), extract face features from the images collected by the camera 101, cluster the extracted face features with a clustering algorithm according to their distribution in a multidimensional feature space to obtain a plurality of image classes, compare the face feature corresponding to each of the image classes with the face features in the static library, and establish a correspondence between each image class whose comparison result meets the confidence criterion and the label of the matching face feature in the static library. A label may be, for example, the name or serial number of the person to whom the face feature belongs.
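For illustration only: the embodiment does not name a specific clustering algorithm, so the sketch below groups face features into image classes with a simple greedy threshold rule; the function name, the Euclidean metric, and the 0.6 distance threshold are all assumptions.

```python
import numpy as np

def cluster_features(features: np.ndarray, max_distance: float = 0.6) -> list[list[int]]:
    """Greedily group face features into image classes: assign each feature
    to the nearest existing class if its centroid is within max_distance,
    otherwise start a new class. Returns the image indices of each class."""
    classes: list[list[int]] = []
    centroids: list[np.ndarray] = []
    for i, f in enumerate(features):
        if centroids:
            dists = [float(np.linalg.norm(f - c)) for c in centroids]
            j = int(np.argmin(dists))
            if dists[j] <= max_distance:
                classes[j].append(i)
                centroids[j] = features[classes[j]].mean(axis=0)
                continue
        classes.append([i])
        centroids.append(f.copy())
    return classes
```

Each resulting image class would then be compared against the static library and, where the comparison meets the confidence criterion, linked to the label of the matching face feature, as described above.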
The processing device 103 is configured to obtain an image including a person's face, obtain from the parsing server 102 an image class including that face according to the image, and determine, from the image class, the person's peers and the co-occurrence count of each peer with the person.
The processing device 103 is a device with image processing and computing capability, such as a server deployed on a private cloud or an edge device.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of a peer deduplication method according to an embodiment of the present invention, based on the network architecture shown in FIG. 1. The method is described from the perspective of the processing device 103. As shown in FIG. 2, the peer deduplication method may include the following steps.
201. An image is acquired that includes a face of a first person.
When the peers of a first person and the co-occurrence count of each peer with the first person need to be determined, an image including the face of the first person may be acquired according to an instruction input by a user or generated internally. The image may be a whole-body or half-body image that includes the face of the first person, and it may be obtained locally, obtained from another device, or uploaded by the user.
202. An image class including the face of the first person is acquired from the image including the face of the first person.
After the image including the face of the first person is acquired, the image class including the face of the first person may be acquired according to that image. The processing device may obtain the stored face features, the labels of the face features, and the image classes from the parsing server in advance and store them locally for subsequent use. The face feature of the first person's face can then be extracted from the image to obtain a first face feature, the label of the first face feature determined, and the image class corresponding to that label obtained according to the correspondence between labels and image classes, yielding the image class including the face of the first person. When determining the label of the first face feature, each of the stored face features obtained from the parsing server may be treated as one cluster, the first face feature assigned to a cluster by a clustering algorithm, and the label of the face feature of the cluster to which the first face feature belongs taken as the label of the first face feature.
Alternatively, an information acquisition request for the image class including the face of the first person may be sent to the parsing server, the request carrying the image. After receiving the request, the parsing server may determine the image class including the face of the first person according to the method described above and send it to the processing device, and the processing device may receive the image class from the parsing server.
Alternatively, the face feature of the first person's face may be extracted from the image to obtain the first face feature, and an information acquisition request carrying the first face feature may be sent to the parsing server. After receiving the request, the parsing server may determine the label of the first face feature, obtain the image class corresponding to that label according to the correspondence between labels and image classes, and send the resulting image class including the face of the first person to the processing device, and the processing device may receive it.
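As a minimal sketch of the first variant above (everything here, including the metric and threshold, is an assumption): each stored labeled feature is treated as its own cluster, and the first face feature is assigned the label of the nearest one.

```python
import numpy as np

def assign_label(first_feature: np.ndarray,
                 stored_features: np.ndarray,  # shape (n, d), one row per label
                 labels: list[str],
                 max_distance: float = 0.6):
    """Treat each stored labeled feature as its own cluster and assign the
    first face feature the label of the nearest one; reject the match if
    even the nearest stored feature is too far away."""
    dists = np.linalg.norm(stored_features - first_feature, axis=1)
    nearest = int(np.argmin(dists))
    return labels[nearest] if dists[nearest] <= max_distance else None

# The image class is then retrieved through the stored correspondence, e.g.:
# image_class = label_to_image_class[assign_label(f, stored, names)]
```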
203. The trajectory of the first person is determined according to the snapshot time and the snapshot camera of each image in the image class.
After the image class including the face of the first person is acquired, the trajectory of the first person may be determined according to the snapshot time and snapshot camera of each image in the image class. The trajectory comprises at least one trajectory point, i.e. one or more trajectory points. Each trajectory point consists of a time period and a camera; since a camera corresponds to a piece of position information, each trajectory point amounts to a time period at a position.
204. The peers of the first person are determined from the trajectory of the first person.
After the trajectory of the first person is determined, the peers of the first person can be determined from it: persons other than the first person who appear in the images snapped at any trajectory point of the trajectory are determined to be peers of the first person. The images snapped at each trajectory point of the trajectory may be acquired first, and the persons other than the first person in the acquired images then determined to be peers. The acquired images include the images in the image class of the face of the first person. The images snapped at one trajectory point are all images snapped by the camera of that trajectory point within the time period of that trajectory point.
205. When the second person appears in at least one image snapped at the first trajectory point, it is determined that the second person and the first person co-occurred once at the first trajectory point.
After the peers of the first person are determined according to the trajectory, a peer who appears multiple times at a single trajectory point can be deduplicated: when the second person appears in at least one image snapped at the first trajectory point, the second person and the first person are determined to have co-occurred once at the first trajectory point. That is, no matter how many times a person appears at a trajectory point, the co-occurrence at that point counts as one. The second person is any one of the peers, and the first trajectory point is any one of the at least one trajectory point.
After or concurrently with step 205, the co-occurrence count of each peer with the first person may be determined, i.e. the number of trajectory points of the first person's trajectory at which each peer appears may be counted. For example, for the second person, the number of trajectory points at which the second person appears is counted to obtain the co-occurrence count of the second person with the first person. The co-occurrence count of the second person with the first person is greater than or equal to 0 and less than or equal to the number of trajectory points in the first person's trajectory.
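The following sketch is one possible reading of steps 204 and 205 (the names and the dict-of-sets layout are assumptions): for each trajectory point, the identities detected in its images are collected, and each peer is credited at most one co-occurrence per point.

```python
from collections import Counter

def peers_and_counts(first_person: str,
                     people_per_point: dict[str, set[str]]):
    """people_per_point maps each trajectory point of the first person's
    trajectory to the set of identities detected in the images snapped
    there. Because each value is a set, a peer appearing many times at one
    point is credited exactly one co-occurrence for that point, so every
    count is bounded by the number of trajectory points."""
    counts: Counter = Counter()
    for people in people_per_point.values():
        for person in people - {first_person}:
            counts[person] += 1  # once per trajectory point (step 205)
    return set(counts), counts   # peers (step 204) and their counts
```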
In the peer deduplication method depicted in FIG. 2, at any trajectory point of the target person's trajectory, a peer is counted as co-occurring with the target person once no matter how many times the peer appears there. This avoids inflating the co-occurrence count when multiple snapshots of the same peer are taken within a short period, and thus improves the accuracy of the co-occurrence count.
Referring to FIG. 3, FIG. 3 is a flowchart of a trajectory determination method according to an embodiment of the invention. As shown in FIG. 3, step 203 may specifically include the following steps.
301. The snapshot time and snapshot camera of each image in the image class are acquired.
302. The images in the image class are classified by snapshot camera to obtain M classes of images.
After the snapshot time and snapshot camera of each image in the image class are acquired, the images can be classified by snapshot camera, i.e. images snapped by the same camera are grouped into one class, yielding M classes of images, where M is the number of snapshot cameras.
303. The number of images included in the current class of images is determined.
After the images in the image class are classified by snapshot camera into M classes, any one of the M classes may be taken as the current class, and the number of images it includes is then determined.
304. When the current class includes one image, the first snapshot camera and the first time period are determined to be one trajectory point of the first person.
The first snapshot camera is the camera corresponding to the current class; the first time period is the period from a threshold time before the first snapshot time to a threshold time after the first snapshot time; and the first snapshot time is the snapshot time of that image. The first time period may also be the period from a first threshold time before the first snapshot time to a second threshold time after it, where the first threshold may be greater than or less than the second threshold.
305. When the current class includes two images, the time interval between the second snapshot time and the third snapshot time is calculated to obtain the first time interval.
The second snapshot time is the snapshot time of one of the two images, and the third snapshot time is the snapshot time of the other of the two images.
306. It is determined whether the first time interval is greater than (or equal to) the threshold. If the first time interval is greater than (or equal to) the threshold, step 307 is performed; if it is not greater than (i.e. less than) the threshold, step 308 is performed.
307. The first snapshot camera and the second time period are determined to be one trajectory point of the first person, and the first snapshot camera and the third time period to be another trajectory point of the first person.
The second time period is the period from a threshold time before the second snapshot time to a threshold time after the second snapshot time, and the third time period is the period from a threshold time before the third snapshot time to a threshold time after the third snapshot time.
308. The first snapshot camera and the fourth time period are determined to be one trajectory point of the first person.
The fourth time period is the period from a threshold time before the second snapshot time to a threshold time after the third snapshot time, the second snapshot time being earlier than the third snapshot time.
Referring to FIG. 4, FIG. 4 is a schematic diagram of trajectory point determination according to an embodiment of the invention. As shown in FIG. 4, the threshold is X seconds: the periods covering X seconds before and after the snapshot times of two images snapped by the same camera are merged to obtain the time period of one trajectory point.
309. When the current class includes N images, the snapshot times of the N images are sorted chronologically to obtain a sorted list, the time interval between adjacent snapshot times in the sorted list is calculated, the N snapshot times are divided into K groups of snapshot times according to the intervals, and K trajectory points of the first person are determined from the K groups of snapshot times.
N is an integer greater than or equal to 3. When the current class includes three or more images, the snapshot times of the N images may first be sorted chronologically to obtain a sorted list, and the time interval between each pair of adjacent snapshot times in the list calculated. The N snapshot times are then divided into K groups by cutting the sorted list at every position where the interval is greater than the threshold; K is an integer less than or equal to N, and the interval between any two of the K groups is greater than the threshold. Referring to FIG. 5, FIG. 5 is a schematic diagram of dividing six snapshot times into three groups by time interval according to an embodiment of the invention. As shown in FIG. 5, the interval between the second and third snapshot times in the sorted list is greater than 30 seconds (the threshold time), as is the interval between the fourth and fifth snapshot times; cutting between the second and third and between the fourth and fifth snapshot times therefore yields three groups. K trajectory points of the first person are then determined from the K groups, one trajectory point per group. Each of the K trajectory points consists of the current snapshot camera and a time period determined from the corresponding group of snapshot times: the period from a threshold time before the earliest snapshot time in the group to a threshold time after the latest snapshot time in the group.
Alternatively, when the current class includes N images, the snapshot times may be sorted chronologically, the intervals between adjacent snapshot times calculated, and the intervals then examined in order. If the first interval is greater than the threshold, the current camera together with the period from a threshold time before the first snapshot time to a threshold time after the first snapshot time is determined as one trajectory point of the first person. If the first interval is not greater than the threshold, the second interval is examined; if the second interval is greater than the threshold, the current camera together with the period from a threshold time before the first snapshot time to a threshold time after the second snapshot time is determined as one trajectory point. If the second interval is also not greater than the threshold and the third interval is, the current camera together with the period from a threshold time before the first snapshot time to a threshold time after the third snapshot time is determined as one trajectory point; and so on, until all calculated intervals have been examined and the corresponding processing performed. Here the first through Nth snapshot times refer to their order in the sorted list, and likewise the first through (N-1)th time intervals follow that order.
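As an illustrative sketch only, the following function implements the cut-at-gaps grouping described above for any N >= 1, subsuming steps 304, 307/308, and 309; the function name, the seconds-based time representation, and the 30-second default threshold are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrackPoint:
    camera_id: str
    start: float  # period start, in seconds
    end: float    # period end, in seconds

def track_points(camera_id: str, snap_times: list[float],
                 threshold: float = 30.0) -> list[TrackPoint]:
    """Sort one camera's snapshot times, cut wherever the gap between
    adjacent times exceeds the threshold, and turn each resulting group
    into a trajectory point covering [earliest - threshold, latest + threshold]."""
    times = sorted(snap_times)
    groups, current = [], [times[0]]
    for t in times[1:]:
        if t - current[-1] > threshold:
            groups.append(current)
            current = [t]
        else:
            current.append(t)
    groups.append(current)
    return [TrackPoint(camera_id, g[0] - threshold, g[-1] + threshold)
            for g in groups]

# Example matching FIG. 5: six snapshot times whose second-to-third and
# fourth-to-fifth gaps exceed the 30-second threshold yield three points.
points = track_points("camera_1", [0, 10, 50, 55, 100, 105])
assert len(points) == 3
```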
After step 304, 307, 308, or 309 is performed, it may be determined whether all M classes of images have been processed. If not, any unprocessed class among the M classes is taken as the current class and steps 303 to 310 are performed again; if all M classes have been processed, the processing ends.
Referring to FIG. 6, FIG. 6 is a schematic flowchart of a group analysis method according to an embodiment of the present invention, based on the network architecture shown in FIG. 1. The method is described from the perspective of the processing device 103. As shown in FIG. 6, the group analysis method may include the following steps.
601. The co-occurrence count of each peer of the first person with the first person is determined.
For a detailed description of determining the co-occurrence count of each peer of the first person with the first person, refer to the description above; it is not repeated here.
602. The peers of the first person are sorted in descending order of co-occurrence count to obtain a first sorted list, or in ascending order of co-occurrence count to obtain a second sorted list.
After the co-occurrence count of each peer of the first person with the first person is determined, the peers may be sorted in descending order of co-occurrence count to obtain the first sorted list. Alternatively, the peers may be sorted in ascending order of co-occurrence count to obtain the second sorted list.
603. The top L persons in the first sorted list, or the bottom L persons in the second sorted list, are determined to be a group of the first person.
After the peers of the first person are sorted in descending order of co-occurrence count to obtain the first sorted list, the top L persons in that list may be determined as a group of the first person. After the peers are sorted in ascending order to obtain the second sorted list, the bottom L persons in that list may be determined as a group of the first person. In this way a group, such as a criminal gang, can be identified. L is an integer greater than 1, and its specific value can be adjusted as needed.
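A minimal sketch of steps 602 and 603 (the function name is an assumption, and ties are broken arbitrarily, since the embodiment does not specify tie handling):

```python
def group_of(co_counts: dict[str, int], L: int) -> list[str]:
    """Sort the peers in descending order of co-occurrence count (the first
    sorted list) and take the top L as the first person's group; taking the
    bottom L of the ascending list would select the same people."""
    ranked = sorted(co_counts, key=co_counts.get, reverse=True)
    return ranked[:L]

# Example: group_of({"a": 7, "b": 3, "c": 5}, L=2) returns ["a", "c"].
```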
In the group analysis method depicted in FIG. 6, when counting how many times a peer co-occurred with the target person, the peer is counted once at each trajectory point of the target person's trajectory no matter how many times the peer appears there. This avoids inflating the co-occurrence count when the peer is snapped many times within a short period, improves the accuracy of the co-occurrence count, and thereby improves the accuracy of the group analysis.
Referring to FIG. 7, FIG. 7 is a schematic structural diagram of a peer deduplication apparatus according to an embodiment of the present invention, based on the network architecture shown in FIG. 1. The peer deduplication apparatus may be disposed in the processing device 103, or may be the processing device 103 itself. As shown in FIG. 7, the peer deduplication apparatus may include:
a first acquiring unit 701 configured to acquire an image including a face of a first person;
a second acquiring unit 702 configured to acquire, from the image, an image class including the face of the first person;
a first determining unit 703 configured to determine a trajectory of the first person according to the snapshot time and snapshot camera of each image in the image class, the trajectory comprising at least one trajectory point;
a second determining unit 704 configured to determine peers of the first person according to the trajectory;
a third determining unit 705 configured to determine, when a second person appears in at least one image snapped at a first trajectory point, that the second person and the first person co-occurred once at the first trajectory point, where the second person is any one of the peers and the first trajectory point is any one of the at least one trajectory point.
In some embodiments, the second acquiring unit 702 is specifically configured to:
extract a face feature of the first person's face from the image;
determine a label of the face feature; and
obtain the image class corresponding to the label according to the correspondence between labels and image classes, thereby obtaining the image class including the face of the first person.
In some embodiments, the first determining unit 703 is specifically configured to:
acquire the snapshot time and snapshot camera of each image in the image class;
classify the images by snapshot camera to obtain M classes of images, where M is the number of snapshot cameras;
when a current class among the M classes includes one image, determine a first snapshot camera together with a first time period as one trajectory point of the first person, where the first snapshot camera is the camera corresponding to the current class, the first time period is the period from a threshold time before a first snapshot time to a threshold time after the first snapshot time, and the first snapshot time is the snapshot time of that image; and
determine the trajectory of the first person from the trajectory points of the first person.
In some embodiments, the first determining unit 703 is further configured to:
when the current class includes two images, calculate the time interval between a second snapshot time and a third snapshot time to obtain a first time interval, where the second snapshot time is the snapshot time of one of the two images and the third snapshot time is the snapshot time of the other;
when the first time interval is greater than the threshold, determine the first snapshot camera together with a second time period as one trajectory point of the first person and the first snapshot camera together with a third time period as another trajectory point, where the second time period is the period from a threshold time before the second snapshot time to a threshold time after the second snapshot time, and the third time period is the period from a threshold time before the third snapshot time to a threshold time after the third snapshot time; and
when the first time interval is not greater than the threshold, determine the first snapshot camera together with a fourth time period as one trajectory point of the first person, where the fourth time period is the period from a threshold time before the second snapshot time to a threshold time after the third snapshot time, the second snapshot time being earlier than the third snapshot time.
In some embodiments, the first determining unit 703 is further configured to:
when the current class includes N images, sort the snapshot times of the N images chronologically to obtain a sorted list, where N is an integer greater than or equal to 3;
calculate the time interval between adjacent snapshot times in the sorted list;
divide the N snapshot times into K groups of snapshot times according to the calculated intervals, where K is an integer less than or equal to N and the interval between any two of the K groups is greater than the threshold; and
determine K trajectory points of the first person from the K groups of snapshot times.
In some embodiments, the second determining unit 704 is specifically configured to determine every person other than the first person who appears in the images snapped at any trajectory point of the first person's trajectory as a peer of the first person.
In some embodiments, the peer deduplication apparatus may further include:
a fourth determining unit 706 configured to determine the co-occurrence count of each peer with the first person.
This embodiment corresponds to the method embodiments described above, and the foregoing and other operations and/or functions of the units implement the corresponding flows of the methods in FIG. 2 and FIG. 3; for brevity, they are not repeated here.
Referring to FIG. 8, FIG. 8 is a schematic structural diagram of a group analysis apparatus according to an embodiment of the present invention, based on the network architecture shown in FIG. 1. The group analysis apparatus may be disposed in the processing device 103, or may be the processing device 103 itself. As shown in FIG. 8, the group analysis apparatus may include:
a first determining unit 801 configured to determine the co-occurrence count of each peer of a first person with the first person, the count being determined according to the method above;
a sorting unit 802 configured to sort the peers of the first person in descending order of co-occurrence count to obtain a first sorted list, or in ascending order of co-occurrence count to obtain a second sorted list;
a second determining unit 803 configured to determine the top L persons in the first sorted list, or the bottom L persons in the second sorted list, as a group of the first person, where L is an integer greater than 1.
This embodiment corresponds to the method embodiments described above, and the foregoing and other operations and/or functions of the units implement the corresponding flows of the methods in FIGS. 2 to 6; for brevity, they are not repeated here.
Referring to FIG. 9, FIG. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, based on the network architecture shown in FIG. 1. As shown in FIG. 9, the electronic device may include at least one processor 901 (e.g., a CPU), a memory 902, a transceiver 903, and at least one bus 904, the bus 904 being used to enable connection and communication between these components. Wherein:
in one case, the memory 902 stores a set of computer programs, and the processor 901 is configured to call the computer programs stored in the memory 902 to perform the following operations:
acquiring an image including a face of a first person;
acquiring, from the image, an image class including the face of the first person;
determining a trajectory of the first person according to the snapshot time and snapshot camera of each image in the image class, the trajectory comprising at least one trajectory point;
determining peers of the first person according to the trajectory; and
when a second person appears in at least one image snapped at a first trajectory point, determining that the second person and the first person co-occurred once at the first trajectory point, where the second person is any one of the peers and the first trajectory point is any one of the at least one trajectory point.
In some embodiments, the processor 901 obtaining an image class including a face of a first person from an image including the face of the first person includes:
extracting face features of a face of a first person from an image including the face of the first person;
determining a label of the face feature;
and obtaining the image class corresponding to the label according to the correspondence between labels and image classes, thereby obtaining the image class comprising the face of the first person.
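For illustration only, the following is a minimal Python sketch of this lookup; the nearest-centroid matching and the names find_label, label_centroids and label_to_images are assumptions of this description rather than part of the disclosed method:

from typing import Dict, List, Optional
import numpy as np

def find_label(feature: np.ndarray,
               label_centroids: Dict[str, np.ndarray],
               min_sim: float = 0.6) -> Optional[str]:
    # Assumed matching rule: pick the label whose stored centroid is most
    # similar (cosine similarity) to the extracted face feature.
    best_label, best_sim = None, min_sim
    for label, centroid in label_centroids.items():
        sim = float(np.dot(feature, centroid) /
                    (np.linalg.norm(feature) * np.linalg.norm(centroid) + 1e-12))
        if sim > best_sim:
            best_label, best_sim = label, sim
    return best_label

def image_class_for(feature: np.ndarray,
                    label_centroids: Dict[str, np.ndarray],
                    label_to_images: Dict[str, List[str]]) -> List[str]:
    # Look up the image class via the stored label -> image-class correspondence.
    label = find_label(feature, label_centroids)
    return label_to_images.get(label, [])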
In some embodiments, the processor 901 determining the trajectory of the first person from the snapshot time and the snapshot camera of each image in the class of images comprises:
acquiring the snapshot time and the snapshot camera of each image in the image class;
classifying the images in the image class by snapshot camera to obtain M classes of images, wherein M is the number of the snapshot cameras;
under the condition that a current class image in the M classes of images comprises one image, determining a first snapshot camera and a first time period as one track point of the first person, wherein the first snapshot camera is the snapshot camera corresponding to the current class image, the first time period is the time period from a threshold time before a first snapshot time to a threshold time after the first snapshot time, and the first snapshot time is the snapshot time of the one image;
and determining the track of the first person according to the track points of the first person.
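For illustration only, a minimal Python sketch of the per-camera classification and the single-image track point above, assuming each snapshot record is a (camera_id, snapshot_time) pair; all names here are hypothetical:

from collections import defaultdict
from typing import Dict, List, Tuple

def split_by_camera(records: List[Tuple[str, float]]) -> Dict[str, List[float]]:
    # Group one person's snapshot records by snapshot camera, yielding
    # M per-camera classes (M = number of distinct cameras).
    by_camera: Dict[str, List[float]] = defaultdict(list)
    for camera_id, snapshot_time in records:
        by_camera[camera_id].append(snapshot_time)
    return dict(by_camera)

def single_image_track_point(camera_id: str, t: float, threshold: float):
    # One image on this camera: the track point is the camera plus the window
    # from a threshold time before t to a threshold time after t.
    return (camera_id, t - threshold, t + threshold)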
In some embodiments, the processor 901 determining the trajectory of the first person from the snapshot time and the snapshot camera of each image in the class of images further comprises:
under the condition that the current class image comprises two images, calculating a time interval between a second snapshot time and a third snapshot time to obtain a first time interval, wherein the second snapshot time is the snapshot time of one of the two images, and the third snapshot time is the snapshot time of the other of the two images;
under the condition that the first time interval is greater than the threshold, determining that the first snapshot camera and a second time period are one track point of the first person and that the first snapshot camera and a third time period are another track point of the first person, wherein the second time period is the time period from a threshold time before the second snapshot time to a threshold time after the second snapshot time, and the third time period is the time period from a threshold time before the third snapshot time to a threshold time after the third snapshot time;
under the condition that the first time interval is not greater than the threshold, determining that the first snapshot camera and a fourth time period are one track point of the first person, wherein the fourth time period is the time period from a threshold time before the second snapshot time to a threshold time after the third snapshot time, and the second snapshot time is earlier than the third snapshot time.
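For illustration only, a minimal Python sketch of this two-image case; it assumes, as the text suggests, that a single threshold value serves both as the gap test between the two snapshot times and as the padding of each time window:

from typing import List, Tuple

TrackPoint = Tuple[str, float, float]  # (camera_id, window_start, window_end)

def two_image_track_points(camera_id: str, t1: float, t2: float,
                           threshold: float) -> List[TrackPoint]:
    early, late = sorted((t1, t2))
    if late - early > threshold:
        # Snapshots far apart: two separate track points on this camera.
        return [(camera_id, early - threshold, early + threshold),
                (camera_id, late - threshold, late + threshold)]
    # Snapshots close together: one track point spanning both.
    return [(camera_id, early - threshold, late + threshold)]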
In some embodiments, the processor 901 determining the trajectory of the first person from the snapshot time and the snapshot camera of each image in the class of images further comprises:
under the condition that the current class image comprises N images, sorting the snapshot times of the N images in chronological order to obtain a sorted table, wherein N is an integer greater than or equal to 3;
calculating the time interval between every two adjacent snapshot times in the sorted table;
dividing the N snapshot times into K groups of snapshot times according to the calculated time intervals, wherein K is an integer less than or equal to N, and the time interval between any two groups of snapshot times in the K groups is greater than the threshold;
and determining K track points of the first person according to the K groups of snapshot times.
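For illustration only, the N-image case reduces to splitting the sorted snapshot times at every gap larger than the threshold; a minimal Python sketch with hypothetical names:

from typing import List

def group_snapshots(times: List[float], threshold: float) -> List[List[float]]:
    # Sort the N snapshot times and start a new group whenever the gap
    # between adjacent times exceeds the threshold, giving K groups (K <= N).
    if not times:
        return []
    ordered = sorted(times)
    groups = [[ordered[0]]]
    for prev, cur in zip(ordered, ordered[1:]):
        if cur - prev > threshold:
            groups.append([cur])
        else:
            groups[-1].append(cur)
    return groups

# e.g. group_snapshots([10, 12, 100, 103, 300], threshold=30)
# -> [[10, 12], [100, 103], [300]], i.e. three track points on this camera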
In some embodiments, the processor 901 determining the peers of the first person from the trajectory of the first person comprises:
determining persons other than the first person in the images captured at all the track points in the track of the first person as peers of the first person.
In some embodiments, the processor 901 is further configured to invoke a computer program stored in the memory 902 to:
the number of co-workers with the first person is determined for each of the co-workers of the first person.
In some embodiments, transceiver 903 is used to transmit and receive information.
In another case, a set of computer programs is stored in the memory 902, and the processor 901 is configured to call the computer programs stored in the memory 902 to perform the following operations:
determining the number of co-occurrence times between a first person and each of the peers of the first person, wherein the number of co-occurrence times is determined according to the method described above;
sorting the peers of the first person in descending order of co-occurrence times to obtain a first ranking table, or sorting the peers of the first person in ascending order of co-occurrence times to obtain a second ranking table;
determining the top L persons in the first ranking table, or the bottom L persons in the second ranking table, as partners of the first person, L being an integer greater than 1.
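For illustration only, a minimal Python sketch of this ranking step, keeping the top L peers by co-occurrence count; names are hypothetical:

from typing import Dict, List

def partners_of(co_counts: Dict[str, int], L: int) -> List[str]:
    # Rank peers by co-occurrence count, high to low, and keep the top L
    # as the candidate partners of the first person (L > 1).
    ranked = sorted(co_counts.items(), key=lambda kv: kv[1], reverse=True)
    return [person for person, _ in ranked[:L]]

# e.g. partners_of({"p2": 9, "p3": 4, "p4": 7}, L=2) -> ["p2", "p4"]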
In some embodiments, transceiver 903 is used to transmit and receive information.
The electronic device may also be used to perform the various methods in the foregoing method embodiments, which are not repeated here.
In some embodiments, a computer readable storage medium is provided for storing an application for performing the peer deduplication method of FIG. 2 or the group analysis method of FIG. 6 at runtime.
In some embodiments, an application is provided for executing the peer deduplication method of FIG. 2 or the group analysis method of FIG. 6 at runtime.
Those of ordinary skill in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the relevant hardware, where the program may be stored in a computer-readable memory, and the memory may include: a flash disk, a ROM, a RAM, a magnetic disk, an optical disk, and the like.
The above embodiments describe the principles and implementations of the present invention in detail through specific examples; the description of these embodiments is intended only to help in understanding the method and core idea of the present invention. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (12)

1. A peer deduplication method, comprising:
acquiring an image including a face of a first person;
acquiring an image class comprising the face of the first person according to the image;
determining the track of the first person according to the snapshot time and the snapshot camera of each image in the image class, wherein the track of the first person comprises at least one track point; the determining the track of the first person according to the snapshot time and the snapshot camera of each image in the image class comprises: acquiring the snapshot time and the snapshot camera of each image in the image class; classifying the images in the image class by snapshot camera to obtain M classes of images, wherein M is the number of the snapshot cameras; in the case that a current class image comprises two images, calculating a time interval between a second snapshot time and a third snapshot time to obtain a first time interval, wherein the second snapshot time is the snapshot time of one of the two images, and the third snapshot time is the snapshot time of the other of the two images; in the case that the first time interval is greater than a threshold, determining that a first snapshot camera and a second time period are one track point of the first person and that the first snapshot camera and a third time period are another track point of the first person, wherein the second time period is the time period from a threshold time before the second snapshot time to a threshold time after the second snapshot time, and the third time period is the time period from a threshold time before the third snapshot time to a threshold time after the third snapshot time; in the case that the first time interval is not greater than the threshold, determining that the first snapshot camera and a fourth time period are one track point of the first person, wherein the fourth time period is the time period from a threshold time before the second snapshot time to a threshold time after the third snapshot time, and the second snapshot time is earlier than the third snapshot time; and determining the track of the first person according to the track points of the first person;
determining peers of the first person according to the track;
and in the case that a second person appears in at least one image captured at a first track point, determining that the second person and the first person co-occur once at the first track point, wherein the second person is any one of the peers, and the first track point is any track point of the at least one track point.
2. The method of claim 1, wherein the obtaining an image class including a face of the first person from the image comprises:
extracting face features of the face from the image;
determining a label of the face feature;
and obtaining the image class corresponding to the label according to the correspondence between labels and image classes, thereby obtaining the image class comprising the face of the first person.
3. The method of claim 1, wherein the determining the track of the first person according to the snapshot time and the snapshot camera of each image in the image class further comprises:
in the case that the current class image comprises N images, sorting the snapshot times of the N images in chronological order to obtain a sorted table, wherein N is an integer greater than or equal to 3;
calculating the time interval between every two adjacent snapshot times in the sorted table;
dividing the N snapshot times into K groups of snapshot times according to the calculated time intervals, wherein K is an integer less than or equal to N, and the time interval between any two groups of snapshot times in the K groups is greater than the threshold;
and determining K track points of the first person according to the K groups of snapshot times.
4. The method according to claim 1 or 2, wherein said determining the peer of the first person from the trajectory comprises:
and determining persons other than the first person in the images captured at all track points in the track as peers of the first person.
5. The method according to claim 1 or 2, characterized in that the method further comprises:
and determining the number of co-occurrence times between the first person and each of the peers.
6. A method of group analysis, comprising:
determining the number of co-occurrence times between a first person and each of the peers of the first person, the number of co-occurrence times being determined according to the method of claim 5;
sorting the peers in descending order of co-occurrence times to obtain a first ranking table, or sorting the peers in ascending order of co-occurrence times to obtain a second ranking table;
and determining the top L persons in the first ranking table, or the bottom L persons in the second ranking table, as partners of the first person, wherein L is an integer greater than 1.
7. A peer deduplication apparatus, comprising:
a first acquisition unit configured to acquire an image including a face of a first person;
a second obtaining unit, configured to obtain, according to the image, an image class including a face of the first person;
a first determining unit, configured to determine the track of the first person according to the snapshot time and the snapshot camera of each image in the image class, wherein the track of the first person comprises at least one track point; the determining the track of the first person according to the snapshot time and the snapshot camera of each image in the image class comprises: acquiring the snapshot time and the snapshot camera of each image in the image class; classifying the images in the image class by snapshot camera to obtain M classes of images, wherein M is the number of the snapshot cameras; in the case that a current class image comprises two images, calculating a time interval between a second snapshot time and a third snapshot time to obtain a first time interval, wherein the second snapshot time is the snapshot time of one of the two images, and the third snapshot time is the snapshot time of the other of the two images; in the case that the first time interval is greater than a threshold, determining that a first snapshot camera and a second time period are one track point of the first person and that the first snapshot camera and a third time period are another track point of the first person, wherein the second time period is the time period from a threshold time before the second snapshot time to a threshold time after the second snapshot time, and the third time period is the time period from a threshold time before the third snapshot time to a threshold time after the third snapshot time; in the case that the first time interval is not greater than the threshold, determining that the first snapshot camera and a fourth time period are one track point of the first person, wherein the fourth time period is the time period from a threshold time before the second snapshot time to a threshold time after the third snapshot time, and the second snapshot time is earlier than the third snapshot time; and determining the track of the first person according to the track points of the first person;
a second determining unit, configured to determine peers of the first person according to the track;
a third determining unit, configured to determine, in the case that a second person appears in at least one image captured at a first track point, that the second person and the first person co-occur once at the first track point, wherein the second person is any one of the peers, and the first track point is any track point of the at least one track point.
8. A partner analysis apparatus, comprising:
a first determining unit, configured to determine the number of co-occurrence times between a first person and each of the peers of the first person, the number of co-occurrence times being determined according to the method of claim 5;
a sorting unit, configured to sort the peers in descending order of co-occurrence times to obtain a first ranking table, or sort the peers in ascending order of co-occurrence times to obtain a second ranking table;
a second determining unit, configured to determine the top L persons in the first ranking table, or the bottom L persons in the second ranking table, as partners of the first person, L being an integer greater than 1.
9. An electronic device comprising a processor and a memory, the memory being configured to store a computer program, the processor being configured to invoke the computer program to perform the peer deduplication method of any one of claims 1-5.
10. An electronic device comprising a processor and a memory, the memory for storing a computer program, the processor for invoking the computer program to perform a method of group analysis as claimed in claim 6.
11. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed by a processor, implements the peer deduplication method of any one of claims 1-5.
12. A computer readable storage medium storing a computer program which when executed by a processor implements a method of group analysis as claimed in claim 6.
CN202010455227.0A 2020-05-26 2020-05-26 Concurrent person weight removing method, partner analyzing method and device and electronic equipment Active CN111563479B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010455227.0A CN111563479B (en) 2020-05-26 2020-05-26 Concurrent person weight removing method, partner analyzing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111563479A CN111563479A (en) 2020-08-21
CN111563479B (en) 2023-11-03

Family ID=72073773

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010455227.0A Active CN111563479B (en) 2020-05-26 2020-05-26 Concurrent person weight removing method, partner analyzing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111563479B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114511816A (en) * 2020-11-16 2022-05-17 杭州海康威视系统技术有限公司 Data processing method and device, electronic equipment and machine-readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107016322A (en) * 2016-01-28 2017-08-04 浙江宇视科技有限公司 A kind of method and device of trailing personnel analysis
CN108229335A (en) * 2017-12-12 2018-06-29 深圳市商汤科技有限公司 It is associated with face identification method and device, electronic equipment, storage medium, program
CN109117714A (en) * 2018-06-27 2019-01-01 北京旷视科技有限公司 A kind of colleague's personal identification method, apparatus, system and computer storage medium
CN110276272A (en) * 2019-05-30 2019-09-24 罗普特科技集团股份有限公司 Confirm method, apparatus, the storage medium of same administrative staff's relationship of label personnel

Also Published As

Publication number Publication date
CN111563479A (en) 2020-08-21

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant