CN111385527A - Method for judging peer and related products

Method for judging peer and related products

Info

Publication number
CN111385527A
Authority: CN (China)
Prior art keywords: target, video, occurrence, video image, determining
Prior art date
Legal status: Granted
Application number: CN201811632531.7A
Other languages: Chinese (zh)
Other versions: CN111385527B (en)
Inventors: 谢友平, 陈信, 李兰, 黄凯斌, 刘威, �隆平
Current Assignee: Chengdu Yuntian Lifei Technology Co ltd
Original Assignee: Chengdu Yuntian Lifei Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Yuntian Lifei Technology Co ltd
Priority to CN201811632531.7A
Publication of CN111385527A
Application granted
Publication of CN111385527B
Legal status: Active
Anticipated expiration

Classifications

    • H04N 7/181: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a plurality of remote sources
    • G06V 40/10: Recognition of human or animal bodies in image or video data, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/172: Recognition of human faces: classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the present application discloses a peer judgment method and related products, wherein the method comprises the following steps: acquiring a target video image set for a specified time period; determining the number of simultaneous occurrences of a first target and a second target in the target video image set; determining a peer score between the first target and the second target according to the number of simultaneous occurrences; and confirming that the first target and the second target are peers when the peer score is greater than a first preset threshold. The method determines a peer score between two users from the number of times they appear simultaneously in video images, and thereby determines whether the users are traveling together.

Description

Method for judging peer and related products
Technical Field
The present application relates to the technical field of data processing, and in particular to a peer judgment method and related products.
Background
With the rapid development of the national economy and the accelerating pace of urbanization, a growing migrant population is moving into cities; this population drives development but also poses great challenges to city management. At present, video surveillance technology provides technical support for urban safety management, but surveillance footage is generally only reviewed manually, or reviewed after an incident has already occurred, which is far from sufficient for safety management. Therefore, it is desirable to provide a method for obtaining users' daily activities from video and analyzing the relationships between users, so that safety risks can be addressed in advance and the occurrence of safety problems reduced.
Disclosure of Invention
The embodiment of the present application provides a peer judgment method and related products, in which the peer score between two users is determined from the number of times the users appear simultaneously in video images, and it is then determined whether the users are traveling together.
In a first aspect, an embodiment of the present application provides a peer determination method, where the method includes:
acquiring a target video image set of a specified time period;
determining the number of simultaneous occurrences of a first target and a second target in the target video image set;
determining a peer score between the first target and the second target according to the number of simultaneous occurrences of the first target and the second target;
and when the peer score is greater than a first preset threshold, confirming that the first target and the second target are peers.
Optionally, the acquiring the target video image set includes:
acquiring a video set shot by a plurality of cameras in a specified area in the specified time period to obtain a plurality of video sets, wherein the video sets shot by the plurality of cameras comprise corresponding camera identifications and time identifications;
performing video analysis on each video set in the plurality of video sets to obtain a plurality of video images, wherein each image in the plurality of video images comprises the camera identification and the time point identification;
and performing target identification on each video image in the plurality of video images, and determining that the video images including the target in the plurality of video images form the target video image set.
Optionally, the determining the number of times the first target and the second target appear simultaneously in the target video image set includes:
acquiring, as a first occurrence number, the number of times the first target and the second target are simultaneously identified in a single video image of the target video image set;
acquiring, as a second occurrence number, the number of times the first target and the second target are identified in video images of the target video image set that correspond to the same camera identifier and whose time point identifiers differ by less than a first preset time threshold;
acquiring, as a third occurrence number, the number of times the first target and the second target are identified in target video images of the target video image set that correspond to the same time point identifier and whose camera identifiers correspond to cameras separated by less than a first preset distance threshold;
wherein the first occurrence number, the second occurrence number and the third occurrence number together constitute the number of times the first target and the second target appear simultaneously in the target video images.
Optionally, the determining the peer score between the first target and the second target according to the number of times of the first target and the second target occurring simultaneously includes:
setting a first weight for a first occurrence number in the simultaneous occurrence numbers of the first target and the second target;
setting a second weight for a second occurrence number in the simultaneous occurrence numbers of the first target and the second target;
setting a third weight for a third occurrence number in the simultaneous occurrence numbers of the first target and the second target;
multiplying each of the first occurrence number, the second occurrence number and the third occurrence number by its corresponding weight and summing the products to obtain the weighted sum of the simultaneous occurrence numbers of the first target and the second target;
and taking the weighted sum as the peer score between the first target and the second target.
Optionally, after determining that the peer score is greater than a first preset threshold, the method further includes: determining that the intimacy between the first target and the second target is greater than a second preset threshold, specifically comprising:
acquiring the target video image in which the first target and the second target are identified at the same time as an intimacy target video image;
calculating the image-plane distance between the first target and the second target in the intimacy target video image, and determining the first intimacy between the first target and the second target according to the image-plane distance;
calculating a second intimacy of the first target and the second target according to behaviors of the first target and the second target in the intimacy target video image, wherein the behaviors include hugging, holding hands, talking face to face, and the like;
and summing the first intimacy degree and the second intimacy degree to obtain the intimacy degree of the first target and the second target, and determining that the intimacy degree is larger than a second preset threshold value.
Optionally, the first target and the second target are persons, the performing target identification on each of the plurality of video images, and determining that the video images including the targets in the plurality of video images form the target video image set specifically includes:
performing face recognition on each video image in the plurality of video images, and determining that the video images including faces form a first target video image set;
extracting the characteristics of the persons corresponding to the faces in the first target video image set, wherein the characteristics comprise the body parameters and the dressing parameters of the persons;
acquiring other video images except the first target video image set in the plurality of video images as a video image set to be detected;
identifying a person in a first image to be detected in the video image set to be detected, and extracting body parameters and dressing parameters of the person;
matching the body parameters extracted from the first target video image set with the body parameters extracted from the first image to be detected to obtain a first matching degree;
matching the dressing parameters extracted from the first target video image set with the dressing parameters extracted from the first image to be detected to obtain a second matching degree;
setting a fourth weight value for the first matching degree, and setting a fifth weight value for the second matching degree;
multiplying the first matching degree and the second matching degree by their corresponding weights and summing the products to obtain a person matching degree;
determining whether the person matching degree is greater than a third preset threshold, and if so, determining that the first image to be detected belongs to a second target video image set;
the first set of target video images and the second set of target video images constitute the set of target video images.
In a second aspect, the present application provides a peer determination device, comprising:
the acquisition unit is used for acquiring a target video image set;
the determining unit is used for determining the number of times of the first target and the second target appearing simultaneously in the target video image set;
the scoring unit is used for determining the peer score between the first target and the second target according to the number of simultaneous occurrences of the first target and the second target;
and the judging unit is used for confirming that the first target and the second target are peers when the peer score is greater than a first preset threshold.
In a third aspect, embodiments of the present application provide an electronic device, including a processor, a memory, a communication interface, and one or more programs, stored in the memory and configured to be executed by the processor, the programs including instructions for performing the steps of any of the methods of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform the steps of any of the methods of the first aspect.
In a fifth aspect, the present application provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps as described in any one of the methods of the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
It can be seen that, in the embodiment of the present application, a target video image set for a specified time period is obtained first; the number of simultaneous occurrences of a first target and a second target in the target video image set is determined; the peer score between the first target and the second target is then determined according to that number; and finally, when the peer score is greater than a first preset threshold, it is confirmed that the first target and the second target are peers. In this process, the relationship between targets is judged initially by counting their simultaneous occurrences, preliminarily screening out related targets; the peer score then determines whether the targets are actually traveling together, further filtering on target relevance and improving the accuracy of peer judgment.
Drawings
Reference will now be made in brief to the accompanying drawings, to which embodiments of the present application relate.
Fig. 1A is a peer determination method provided in the present application;
fig. 1B is a schematic diagram of determining a first occurrence number according to an embodiment of the present application;
fig. 1C is a schematic diagram of determining a second occurrence number according to an embodiment of the present application;
fig. 1D is a schematic diagram of determining a third occurrence number according to an embodiment of the present application;
fig. 2 is another peer determination method provided in the embodiment of the present application;
fig. 3 is another peer determination method provided in the embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a peer determination device disclosed in an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The following describes embodiments of the present application in detail.
Referring to fig. 1A, fig. 1A is a peer determination method provided in an embodiment of the present application. As shown in fig. 1A, the method includes the following steps:
101. a set of target video images for a specified time period is acquired.
A plurality of surveillance videos can be captured by surveillance cameras and stored by a host; when needed, the videos are retrieved and analyzed to obtain hidden information that cannot be observed by the human eye. A common way to analyze a surveillance video is to parse it into video images and then perform operations such as segmentation, recognition, or clustering on those images to obtain a target video image set.
Optionally, the acquiring the target video image set includes: acquiring the video sets shot by a plurality of cameras in a specified area during a specified time period to obtain a plurality of video sets, wherein the videos shot by the plurality of cameras include corresponding camera identifiers and time identifiers; performing video analysis on each of the plurality of video sets to obtain a plurality of video images, wherein each of the plurality of video images includes a camera identifier and a time point identifier; and performing target recognition on each of the plurality of video images, and determining that the video images that include a target form the target video image set.
Specifically, the specified time period may be, for example, 6:00-12:00 on a given morning, or 00:00-23:59 on a given Monday; the specified area may be a residential community, a shopping mall, or the like. A plurality of video sets is obtained from the surveillance videos captured by all cameras in the specified area during the specified time period. Each video includes a camera identifier and a time identifier. The camera identifier may be the model or name of the camera, or a number assigned to the camera within the area, which makes it convenient to associate video content with a camera. The time identifier may be the current time at which the video was recorded; for example, a time identifier of 2018/03/02, 23:50:09 indicates that the corresponding frame was recorded at 23:50:09 on March 2, 2018. Alternatively, it may be the camera's continuous recording duration; for example, a time identifier of 22:09:10 indicates that the frame was recorded 22 hours, 9 minutes and 10 seconds into a continuous recording. The continuous recording duration may or may not be reset on a cycle T. For example, if recording is segmented into 24-hour days, then 2018/03/02, 22:09:10 indicates 22 hours, 9 minutes and 10 seconds of continuous recording on March 2, 2018; if recording is not segmented, it indicates that the camera had been recording without interruption for 22 hours, 9 minutes and 10 seconds, possibly starting on March 2, 2018, or possibly starting the previous day.
Each video is parsed into a plurality of video images; a video image carries the same camera identifier as the video it comes from, and its time point identifier corresponds to the time identifier of the video at that frame. Target recognition is then performed on the video images; a target may be a person, an animal, or another object.
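For illustration, the following is a minimal sketch of this step in Python. It is not taken from the patent: the TaggedImage record, the use of OpenCV for video parsing, and the detect_targets() placeholder are assumptions; only the tagging of each video image with a camera identifier and a time point identifier follows the text.

```python
# A minimal sketch of building the target video image set (step 101).
# Assumptions not specified by the patent: OpenCV parses the video, and
# detect_targets() stands in for any recognizer that returns the
# identifiers of the targets found in a frame.
from dataclasses import dataclass
from datetime import datetime, timedelta

import cv2
import numpy as np


@dataclass
class TaggedImage:
    camera_id: str        # camera identifier of the source video
    time_point: datetime  # time point identifier of this frame
    frame: np.ndarray     # the video image itself
    targets: set          # identifiers of targets recognized in the frame


def detect_targets(frame: np.ndarray) -> set:
    """Placeholder: run face/person recognition, return target ids."""
    raise NotImplementedError


def build_target_image_set(video_path: str, camera_id: str,
                           start_time: datetime) -> list:
    """Parse one video into frames and keep those containing a target."""
    images = []
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if FPS is missing
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        targets = detect_targets(frame)
        if targets:  # keep only images in which a target is recognized
            time_point = start_time + timedelta(seconds=index / fps)
            images.append(TaggedImage(camera_id, time_point, frame, targets))
        index += 1
    cap.release()
    return images
```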
102. Determining the number of simultaneous occurrences of a first object and a second object in the set of target video images.
Whether the first target and the second target are traveling together can be determined from the number of times they appear simultaneously. Simultaneous occurrence includes appearing at the same time and place, appearing at the same time at nearby places, and appearing at the same place within a short time difference, all of which can be obtained from video image analysis.
Optionally, determining the number of times the first target and the second target appear simultaneously in the target video image set includes: acquiring, as a first occurrence number, the number of times the first target and the second target are simultaneously identified in a single video image of the target video image set; acquiring, as a second occurrence number, the number of times the first target and the second target are identified in video images that correspond to the same camera identifier and whose time point identifiers differ by less than a first preset time threshold; acquiring, as a third occurrence number, the number of times the first target and the second target are identified in target video images that correspond to the same time point identifier and whose camera identifiers correspond to cameras separated by less than a first preset distance threshold; the first occurrence number, the second occurrence number and the third occurrence number together constitute the number of times the first target and the second target appear simultaneously in the target video images.
If the first target and the second target are identified in the same target video image, they appeared at the same time and at the same place, and can therefore be considered to have appeared simultaneously. Referring to fig. 1B, fig. 1B is a schematic diagram of determining the first occurrence number according to an embodiment of the present application. As shown in fig. 1B, the first target and the second target are both persons: the first target is girl 1 and the second target is boy 1. Both targets are recognized in images (a) and (b) of fig. 1B, so the first occurrence number of the first target and the second target is 2. The total first occurrence number R1 can be determined by this method.
If the first target and the second target appear in two target video images with the same camera identifier, and the time difference between the two images is smaller than the first preset time threshold, the two targets appeared at the same place within a very short interval and can be considered to have appeared simultaneously. Referring to fig. 1C, fig. 1C is a schematic diagram of determining the second occurrence number according to an embodiment of the present application. As shown in fig. 1C, the camera identifiers of the target video images shown in (c) and (d) of fig. 1C are both "camera 1", and their time point identifiers give the time of day at which each frame was captured, so the difference between the two is t1 = 07:05:00 - 07:02:30 = 2 min 30 s. Assuming the first preset time threshold T1 is 3 minutes, then t1 < T1, and the second occurrence number of the first target and the second target is determined to be 1. The total second occurrence number R2 is obtained by this method.
If the first target and the second target appear in two target video images with the same time point identifier, and the distance between the cameras corresponding to the camera identifiers of the two images is smaller than the first preset distance threshold, the two targets appeared at the same time at places very close to each other and can be considered to have appeared simultaneously. Referring to fig. 1D, fig. 1D is a schematic diagram of determining the third occurrence number according to an embodiment of the present application. In fig. 1D, the time point identifiers of (e) and (f) are the same, and the camera identifiers of the two target video images are "camera 1" and "camera 2", which are adjacent cameras whose distance is smaller than the first preset distance threshold, so the third occurrence number of the first target and the second target is determined to be 1. The total third occurrence number R3 is obtained by this method.
In addition, "the same time point identifier" may mean that the time point identifiers are exactly identical, or that their difference is smaller than a second preset time threshold, which may be a very small value, for example 15 seconds or 30 seconds.
The first occurrence number R1, the second occurrence number R2 and the third occurrence number R3 are collectively taken as the number of times the first target and the second target appear simultaneously in the target video images.
Therefore, in the embodiment of the present application, combining the first, second and third occurrence numbers to determine the number of simultaneous occurrences of the first target and the second target covers the possible situations more comprehensively: the two targets may appear at the same time and place, at the same time but some distance apart, or at the same place but with a small time difference. This improves the accuracy of determining the number of simultaneous occurrences.
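As a sketch of how the three occurrence numbers might be computed from the tagged images of the previous sketch (a hypothetical illustration, not the patent's implementation): the threshold values and the camera_distance() lookup are assumptions.

```python
# A minimal sketch of computing R1, R2 and R3 (step 102) from TaggedImage
# records. T1, T2 and D1 are illustrative values for the first preset time
# threshold, the second preset time threshold and the first preset
# distance threshold.
from datetime import timedelta
from itertools import combinations

T1 = timedelta(minutes=3)   # first preset time threshold
T2 = timedelta(seconds=15)  # second preset time threshold ("same" time point)
D1 = 100.0                  # first preset distance threshold, in meters


def camera_distance(cam_a: str, cam_b: str) -> float:
    """Placeholder: camera-to-camera distance from a deployment map."""
    raise NotImplementedError


def co_occurrence_counts(images: list, a: str, b: str):
    """Return (R1, R2, R3) for targets a and b."""
    r1 = r2 = r3 = 0
    # R1: both targets recognized in one and the same video image.
    for img in images:
        if a in img.targets and b in img.targets:
            r1 += 1
    # R2 / R3: the two targets recognized in two different video images.
    for x, y in combinations(images, 2):
        seen = ((a in x.targets and b in y.targets) or
                (b in x.targets and a in y.targets))
        if not seen:
            continue
        dt = abs(x.time_point - y.time_point)
        if x.camera_id == y.camera_id and dt < T1:
            r2 += 1  # same camera, time points differ by less than T1
        elif dt < T2 and camera_distance(x.camera_id, y.camera_id) < D1:
            r3 += 1  # "same" time point, cameras closer than D1
    return r1, r2, r3
```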
103. Determining the peer score between the first target and the second target according to the number of simultaneous occurrences of the first target and the second target.
According to the number of simultaneous occurrences of the first target and the second target, a peer score can be assigned to their travel pattern. The number of simultaneous occurrences may be used directly as the peer score, or the peer score may be obtained by processing that number.
Optionally, determining the peer score between the first target and the second target according to the number of simultaneous occurrences includes: setting a first weight for the first occurrence number; setting a second weight for the second occurrence number; setting a third weight for the third occurrence number; multiplying each occurrence number by its corresponding weight and summing the products to obtain the weighted sum of the simultaneous occurrence numbers; and taking the weighted sum as the peer score between the first target and the second target.
Specifically, the first, second and third occurrence numbers together constitute the number of simultaneous occurrences of the first target and the second target, but they represent different situations: the first occurrence number counts the two targets appearing at the same time and the same place; the second counts them appearing at the same place with a time difference; and the third counts them appearing at the same time with a distance between them. Clearly, the simultaneous occurrence corresponding to the first occurrence number is the most reliable, while those corresponding to the second and third occurrence numbers are inferred. Therefore, a first weight α is set for the first occurrence number, a second weight β for the second occurrence number, and a third weight γ for the third occurrence number, with the first weight greater than the second and third weights. Since the situation corresponding to the second occurrence number is more reliable than that corresponding to the third, the second weight may be greater than or equal to the third weight, i.e., α > β ≥ γ. The peer score is then S = αR1 + βR2 + γR3.
Therefore, in the embodiment of the present application, setting different weights for the first, second and third occurrence numbers allows the peer score to reflect the reliability of each occurrence number, which improves the accuracy of the peer score and, in turn, the reliability of determining from it that the first target and the second target are peers.
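A minimal sketch of the weighted peer score under these assumptions follows; the weight and threshold values are illustrative, the patent only requiring α > β ≥ γ.

```python
# A minimal sketch of the weighted peer score S (step 103). The weights
# and the threshold are illustrative assumptions satisfying
# alpha > beta >= gamma.
ALPHA, BETA, GAMMA = 1.0, 0.6, 0.5  # first, second and third weights
FIRST_PRESET_THRESHOLD = 90.0       # e.g. a standard value of 90 points


def peer_score(r1: int, r2: int, r3: int) -> float:
    """S = alpha*R1 + beta*R2 + gamma*R3."""
    return ALPHA * r1 + BETA * r2 + GAMMA * r3


# Usage with example counts:
r1, r2, r3 = 120, 30, 18
if peer_score(r1, r2, r3) > FIRST_PRESET_THRESHOLD:
    print("first and second target preliminarily judged to be peers")
```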
Optionally, the first target and the second target are persons, and the performing target recognition on each of the plurality of video images and determining that the video images including a target form the target video image set specifically includes: performing face recognition on each of the plurality of video images, and determining that the video images including faces form a first target video image set; extracting features of the persons corresponding to the faces in the first target video image set, the features including the persons' body parameters and dressing parameters; acquiring the video images other than the first target video image set as a video image set to be detected; recognizing a person in a first image to be detected in the video image set to be detected, and extracting the person's body parameters and dressing parameters; matching the body parameters extracted from the first target video image set with the body parameters extracted from the first image to be detected to obtain a first matching degree; matching the dressing parameters extracted from the first target video image set with the dressing parameters extracted from the first image to be detected to obtain a second matching degree; setting a fourth weight for the first matching degree and a fifth weight for the second matching degree; multiplying the two matching degrees by their corresponding weights and summing the products to obtain the person matching degree; determining whether the person matching degree is greater than a third preset threshold, and if so, determining that the first image to be detected belongs to a second target video image set; the first target video image set and the second target video image set constitute the target video image set.
Specifically, if the first target and the second target are persons, acquiring the target video image set means acquiring the video images that include the first target person or the second target person. First, face recognition is performed on the video images to obtain those that include the face of the first target person or the second target person, forming the first target video image set. The remaining video images may still include the first or second target person without a visible face, so they need to be filtered further. A person's body parameters do not change, or change very little, over a period such as a week or a month, and a person's dressing generally does not change within the same day. Therefore, using the face-recognized first target video image set as a reference, video images that include the first or second target person without a visible face are identified. The extracted body parameters may include height, build, or body-proportion ratios; height and build can be determined from surrounding reference objects or from the scale between the video image and real objects, while body proportions can be read directly from the image. The dressing parameters include hairstyle, clothing, accessories, backpack, or hand-held items, where the items may be objects or animals. The body parameters extracted from the first image to be detected are matched one by one against the body parameters extracted from each image in the first target video image set, yielding a set of matching degrees whose maximum is taken as the first matching degree; likewise, the second matching degree is the maximum obtained by matching the dressing parameters of the first image to be detected against those of each image in the first target video image set. A fourth weight is set for the first matching degree and a fifth weight for the second matching degree; because body parameters are more stable, the fourth weight may be set larger than the fifth. The two matching degrees are then weighted and summed to obtain the person matching degree between the first image to be detected and the first target video image set. If the person matching degree is greater than the third preset threshold, the person in the first image to be detected is the same as the person in the first target video image set, and the first image to be detected is assigned to the second target video image set; the first and second target video image sets together constitute the target video image set.
Optionally, the second target video image set may be stored in correspondence with the first target video image set: when the person matching degree between the first image to be detected and the first target video image set is determined to exceed the third preset threshold, the image in the first target video image set whose dressing and body parameters best match the first image to be detected is identified as the maximum-matching image, and the two images are stored in correspondence, so that once the target person of the maximum-matching image is identified, the target person of the first image to be detected is determined as well.
In the embodiment of the present application, setting the first and second targets to be persons makes the acquisition of the target video image set more targeted. Face recognition on the video images determines the first target video image set, which contains faces corresponding to the target persons; the body and dressing parameters extracted from that set are then matched against those extracted from the images to be detected, and the matching result determines the second target video image set, in which the target persons appear without visible faces. In this process, the target video images are determined by face recognition, while a method is also provided for the case where the face cannot be recognized, which improves the comprehensiveness and completeness of the acquired target video images and, in turn, the accuracy of peer judgment.
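A minimal sketch of this matching step follows; the similarity functions stand in for whatever body- and dressing-parameter comparison an implementation uses, and the weights and threshold are illustrative.

```python
# A minimal sketch of assigning a face-less image to the second target
# video image set. body_similarity() and dressing_similarity() are
# placeholders; W4, W5 and the threshold are illustrative assumptions.
W4, W5 = 0.6, 0.4            # fourth and fifth weights (body more stable)
THIRD_PRESET_THRESHOLD = 0.8


def body_similarity(p: dict, q: dict) -> float:
    """Placeholder: compare height, build and body-proportion parameters."""
    raise NotImplementedError


def dressing_similarity(p: dict, q: dict) -> float:
    """Placeholder: compare hairstyle, clothing, accessories, carried items."""
    raise NotImplementedError


def person_matches(candidate: dict, references: list) -> bool:
    """True if the candidate image shows the same person as the references."""
    # First matching degree: best body-parameter match over the reference set.
    m1 = max(body_similarity(candidate, ref) for ref in references)
    # Second matching degree: best dressing-parameter match.
    m2 = max(dressing_similarity(candidate, ref) for ref in references)
    person_matching_degree = W4 * m1 + W5 * m2
    return person_matching_degree > THIRD_PRESET_THRESHOLD
```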
104. When the peer score is greater than the first preset threshold, confirming that the first target and the second target are peers.
After the peer score S is obtained, it is determined whether it is greater than the first preset threshold. The first preset threshold may be a standard value, such as 90 or 100 points, or a variable value learned from historical data: for example, video image analysis may be performed on targets known to be in a peer relationship to obtain a number of known peer scores, and the first preset threshold may be set to their average, or to their minimum.
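For example, a minimal sketch of learning the threshold from historical data (the scores shown are hypothetical):

```python
# A minimal sketch of deriving the first preset threshold from target
# pairs already known to travel together. The scores are hypothetical.
known_peer_scores = [95.0, 102.5, 88.0, 110.0]

threshold_avg = sum(known_peer_scores) / len(known_peer_scores)  # average
threshold_min = min(known_peer_scores)                           # minimum
```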
Optionally, after determining that the peer score is greater than the first preset threshold, the method further includes determining that the intimacy between the first target and the second target is greater than a second preset threshold, specifically comprising: acquiring the target video images in which the first target and the second target are identified at the same time as intimacy target video images; calculating the image-plane distance between the first target and the second target in the intimacy target video images, and determining a first intimacy between the first target and the second target according to the image-plane distance; calculating a second intimacy of the first target and the second target according to their behaviors in the intimacy target video images, the behaviors including hugging, holding hands, talking face to face, and the like; and summing the first intimacy and the second intimacy to obtain the intimacy of the first target and the second target, and determining that the intimacy is greater than the second preset threshold.
Specifically, once the peer score between the first target and the second target is determined to exceed the first preset threshold, the two targets are known to appear frequently at the same place at the same time; however, they might merely be strangers with similar activity ranges, so whether they are actually traveling together is further determined through an intimacy analysis. The target video images in which both targets are identified, as in the above embodiment, are acquired as intimacy target video images. The image-plane distance between the first target and the second target in each intimacy target video image is then calculated; it may be measured directly as the distance between the two targets on the image, ignoring scale and relative position, or determined from real-world proportions using a reference object. The image-plane distance is set to be inversely proportional to the first intimacy: the smaller the distance, the larger the first intimacy value. In addition, the behaviors of the two targets are obtained, and a second intimacy is determined from them. For example, if the set of intimate behaviors includes hugging, holding hands, talking face to face, direct contact between persons, and interactions between persons and objects, then each intimate behavior observed between the first and second targets increases the second intimacy by one unit value. Finally, the first and second intimacies are summed to obtain the intimacy of the first target and the second target; if it is greater than the second preset threshold, the first target and the second target are determined to be traveling together.
Therefore, in the embodiment of the present application, on the premise that the first and second targets often appear at the same time, it is further determined whether their intimacy exceeds the second preset threshold before concluding that they are peers. This removes the interference of mere strangers sharing an activity range, further improving the accuracy and effectiveness of peer judgment.
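A minimal sketch of this intimacy check follows; the distance and behavior recognizers are placeholders, and the inverse-proportion mapping, the unit value and the threshold are illustrative assumptions.

```python
# A minimal sketch of the intimacy determination. image_plane_distance()
# and behaviors() are placeholders; the 1/(1+d) mapping, UNIT and the
# second preset threshold are illustrative assumptions.
AFFINITY_BEHAVIORS = {"hugging", "holding hands", "talking face to face"}
SECOND_PRESET_THRESHOLD = 2.0
UNIT = 1.0  # second intimacy grows by one unit per intimate behavior


def image_plane_distance(img, a: str, b: str) -> float:
    """Placeholder: distance between the two targets on the image plane."""
    raise NotImplementedError


def behaviors(img, a: str, b: str) -> set:
    """Placeholder: behaviors recognized between the two targets."""
    raise NotImplementedError


def confirmed_peers(affinity_images: list, a: str, b: str) -> bool:
    """Sum first and second intimacy over the intimacy target images."""
    intimacy = 0.0
    for img in affinity_images:
        d = image_plane_distance(img, a, b)
        first = 1.0 / (1.0 + d)  # smaller distance -> larger first intimacy
        second = UNIT * len(AFFINITY_BEHAVIORS & behaviors(img, a, b))
        intimacy += first + second
    return intimacy > SECOND_PRESET_THRESHOLD
```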
In the embodiment of the present application, a target video image set for a specified time period is obtained first; the number of simultaneous occurrences of a first target and a second target in the target video image set is determined; the peer score between the first target and the second target is then determined according to that number; and finally, when the peer score is greater than the first preset threshold, it is confirmed that the first target and the second target are peers. In this process, the relationship between targets is judged initially by counting their simultaneous occurrences, preliminarily screening out related targets; the peer score then determines whether the targets are actually traveling together, further filtering on target relevance and improving the accuracy of peer judgment.
Referring to fig. 2, fig. 2 is a diagram illustrating another peer determination method according to an embodiment of the present application, as shown in fig. 2, the method includes the following steps:
201. acquiring a video set shot by a plurality of cameras in a specified area in the specified time period to obtain a plurality of video sets, wherein the video sets shot by the plurality of cameras comprise corresponding camera identifications and time identifications;
202. performing video analysis on each video set in the plurality of video sets to obtain a plurality of video images, wherein each image in the plurality of video images comprises the camera identification and the time point identification;
203. performing target identification on each video image in the plurality of video images, and determining that the video images including targets in the plurality of video images form the target video image set;
204. acquiring, as a first occurrence number, the number of times the first target and the second target are simultaneously identified in a single video image of the target video image set;
205. acquiring, as a second occurrence number, the number of times the first target and the second target are identified in video images of the target video image set that correspond to the same camera identifier and whose time point identifiers differ by less than a first preset time threshold;
206. acquiring, as a third occurrence number, the number of times the first target and the second target are identified in target video images of the target video image set that correspond to the same time point identifier and whose camera identifiers correspond to cameras separated by less than a first preset distance threshold;
207. taking the first occurrence number, the second occurrence number and the third occurrence number together as the number of times the first target and the second target appear simultaneously in the target video images;
208. setting a first weight for the first occurrence number, a second weight for the second occurrence number, and a third weight for the third occurrence number;
209. multiplying each of the first, second and third occurrence numbers by its corresponding weight and summing the products to obtain the weighted sum of the simultaneous occurrence numbers of the first target and the second target;
210. taking the weighted sum as the peer score between the first target and the second target;
211. when the peer score is greater than a first preset threshold, confirming that the first target and the second target are peers.
For details of steps 201 to 211, reference may be made to the corresponding description of the peer determination method in steps 101 to 104, which is not repeated here.
As can be seen, in the embodiment of the present application, a target video image set for a specified time period is obtained from the video sets captured by the cameras; the first, second and third occurrence numbers are then acquired as the number of simultaneous occurrences of the first target and the second target in the target video image set; the three occurrence numbers are weighted and summed to determine the peer score between the first target and the second target; and finally, when the peer score is greater than the first preset threshold, it is confirmed that the first target and the second target are peers. Determining the number of simultaneous occurrences from the three occurrence numbers covers the possible simultaneous-occurrence situations more comprehensively, improving the accuracy of that number; setting different weights for the three occurrence numbers lets the peer score reflect the reliability of each, improving the accuracy of the peer score and the reliability of the resulting peer determination.
Referring to fig. 3, fig. 3 is a diagram illustrating another peer determination method according to an embodiment of the present application, and as shown in fig. 3, the method includes the following steps:
301. acquiring a target video image set of a specified time period;
302. determining the number of simultaneous occurrences of a first target and a second target in the target video image set;
303. determining a peer score between the first target and the second target according to the number of simultaneous occurrences of the first target and the second target;
304. when the peer score is larger than a first preset threshold value, acquiring the target video image which simultaneously identifies the first target and the second target as an intimacy target video image;
305. calculating the image-plane distance between the first target and the second target in the intimacy target video image, and determining the first intimacy between the first target and the second target according to the image-plane distance;
306. calculating second intimacy degree of the first target and the second target according to behaviors of the first target and the second target in the intimacy degree target video image;
307. and summing the first intimacy degree and the second intimacy degree to obtain the intimacy degree of the first target and the second target, and determining that the intimacy degree is larger than a second preset threshold value.
308. Confirming that the first target and the second target are peers.
For details of steps 301 to 308, reference may be made to the corresponding description of the peer determination method in steps 101 to 104, which is not repeated here.
In the embodiment of the present application, a target video image set for a specified time period is obtained first; the number of simultaneous occurrences of a first target and a second target in the target video image set is determined; the peer score between the first target and the second target is then determined according to that number; and finally, when the peer score is greater than the first preset threshold, it is further determined whether the intimacy between the first target and the second target is greater than the second preset threshold, which reduces the interference of mere strangers sharing an activity range and further improves the accuracy of peer judgment.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure, as shown in fig. 4, the electronic device includes a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the following steps:
acquiring a target video image set of a specified time period;
determining the number of simultaneous occurrences of a first target and a second target in the target video image set;
determining a peer score between the first target and the second target according to the number of simultaneous occurrences of the first target and the second target;
and when the peer score is greater than a first preset threshold, confirming that the first target and the second target are peers.
As can be seen, the electronic device first obtains a target video image set for a specified time period; determines the number of simultaneous occurrences of a first target and a second target in the target video image set; then determines the peer score between the first target and the second target according to that number; and finally, when the peer score is greater than a first preset threshold, confirms that the first target and the second target are peers. In this process, the relationship between targets is judged initially by counting their simultaneous occurrences, preliminarily screening out related targets; the peer score then determines whether the targets are actually traveling together, further filtering on target relevance and improving the accuracy of peer judgment.
In one possible example, the acquiring the target video image set includes:
acquiring a video set shot by a plurality of cameras in a specified area in the specified time period to obtain a plurality of video sets, wherein the video sets shot by the plurality of cameras comprise corresponding camera identifications and time identifications;
performing video analysis on each video set in the plurality of video sets to obtain a plurality of video images, wherein each image in the plurality of video images comprises the camera identification and the time point identification;
and performing target identification on each video image in the plurality of video images, and determining that the video images including the target in the plurality of video images form the target video image set.
In one possible example, the determining the number of times the first target and the second target in the target video image set occur simultaneously includes:
acquiring, as a first occurrence number, the number of times the first target and the second target are simultaneously identified in a single video image of the target video image set;
acquiring, as a second occurrence number, the number of times the first target and the second target are identified in video images of the target video image set that correspond to the same camera identifier and whose time point identifiers differ by less than a first preset time threshold;
acquiring, as a third occurrence number, the number of times the first target and the second target are identified in target video images of the target video image set that correspond to the same time point identifier and whose camera identifiers correspond to cameras separated by less than a first preset distance threshold;
wherein the first occurrence number, the second occurrence number and the third occurrence number together constitute the number of times the first target and the second target appear simultaneously in the target video images.
In one possible example, the determining the peer score between the first target and the second target according to the number of times the first target and the second target occur simultaneously comprises:
setting a first weight for a first occurrence number in the simultaneous occurrence numbers of the first target and the second target;
setting a second weight for a second occurrence number in the simultaneous occurrence numbers of the first target and the second target;
setting a third weight for a third occurrence number in the simultaneous occurrence numbers of the first target and the second target;
multiplying each of the first occurrence number, the second occurrence number and the third occurrence number by its corresponding weight and summing the products to obtain the weighted sum of the simultaneous occurrence numbers of the first target and the second target;
and taking the weighted sum as a peer score between the first target and the second target.
In one possible example, after determining that the peer score is greater than the first preset threshold, the method further comprises determining that the affinity between the first target and the second target is greater than a second preset threshold, which specifically comprises:
acquiring the target video images in which the first target and the second target are recognized simultaneously as affinity target video images;
calculating the image-plane distance between the first target and the second target in the affinity target video images, and determining a first affinity between the two targets according to that distance;
calculating a second affinity of the first target and the second target according to their behaviors in the affinity target video images, the behaviors including hugging, holding hands, talking face to face, and the like;
and summing the first affinity and the second affinity to obtain the affinity between the first target and the second target, and determining that this affinity is greater than the second preset threshold.
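The affinity check could look like the following sketch, again reusing the `VideoImage` record from the first sketch. The `pixel_distance` and `behavior_score` helpers are assumptions, and the inverse-distance form of the first affinity and the max-aggregation of the second are one plausible choice that the patent does not prescribe.

```python
from typing import Callable, List

def affinity(
    image_set: List[VideoImage],
    a: str,
    b: str,
    pixel_distance: Callable[[VideoImage, str, str], float],  # assumed helper
    behavior_score: Callable[[VideoImage, str, str], float],  # assumed helper
) -> float:
    """Sum a distance-based first affinity and a behavior-based
    second affinity over the affinity target video images."""
    pair_imgs = [img for img in image_set
                 if a in img.targets and b in img.targets]
    if not pair_imgs:
        return 0.0
    # first affinity: derived from the mean image-plane distance
    mean_d = sum(pixel_distance(img, a, b) for img in pair_imgs) / len(pair_imgs)
    first = 1.0 / (1.0 + mean_d)    # closer together -> higher affinity
    # second affinity: scored from behaviors such as hugging,
    # holding hands, or talking face to face
    second = max(behavior_score(img, a, b) for img in pair_imgs)
    return first + second

# the check passes when affinity(...) > second_preset_threshold
```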
Referring to fig. 5, fig. 5 is a schematic structural diagram of a peer determination device according to an embodiment of the present application, and as shown in fig. 5, the peer determination device 500 includes:
an obtaining unit 501, configured to obtain a target video image set;
a determining unit 502, configured to determine a number of times that a first target and a second target in the target video image set occur simultaneously;
a scoring unit 503, configured to determine a peer score between the first target and the second target according to the number of simultaneous occurrences of the first target and the second target;
a judging unit 504, configured to confirm that the first target and the second target are peers when the peer score is greater than a first preset threshold.
As can be seen, the peer determination apparatus carries out the same process as the method above: it obtains a target video image set for a specified time period, determines the number of times the first target and the second target appear simultaneously in the set, determines the peer score from that number, and confirms that the two targets are peers when the score is greater than the first preset threshold, with the same two-stage screening improving the accuracy of the peer determination.
The obtaining unit 501 may be configured to implement the method described in step 101, the determining unit 502 the method described in step 102, the scoring unit 503 the method described in step 103, the judging unit 504 the method described in step 104, and so on.
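Tying the pieces together, the four units correspond to the four sketches above; an end-to-end run could look like this, with `videos`, `detect_targets` and `camera_distance` left as the assumed inputs named earlier.

```python
# assumed inputs: videos, detect_targets, camera_distance (see above)
image_set = build_target_image_set(videos, detect_targets)        # unit 501
first, second, third = count_cooccurrences(                       # unit 502
    image_set, "target_A", "target_B", camera_distance)
score = peer_score(first, second, third)                          # unit 503
if score > 15.0:                                                  # unit 504
    print("target_A and target_B are judged to be peers")
```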
In one possible example, the obtaining unit 501 is specifically configured to:
acquiring the video sets shot by a plurality of cameras in a specified area during a specified time period, wherein each video set carries the identification of the camera that shot it and corresponding time identifications;
performing video analysis on each of the video sets to obtain a plurality of video images, wherein each video image carries the camera identification and a time point identification;
and performing target recognition on each of the video images, the video images in which a target is recognized forming the target video image set.
In one possible example, the determining unit 502 is specifically configured to:
acquiring, as a first occurrence number, the number of times the first target and the second target are recognized simultaneously in a single video image of the target video image set;
acquiring, as a second occurrence number, the number of times the first target and the second target are recognized in different video images that carry the same camera identification and whose time point identifications differ by less than a first preset time threshold;
acquiring, as a third occurrence number, the number of times the first target and the second target are recognized in different video images that carry the same time point identification and whose camera identifications correspond to cameras spaced less than a first preset distance threshold apart;
the first occurrence number, the second occurrence number and the third occurrence number together constitute the number of times the first target and the second target appear simultaneously in the target video image set.
In one possible example, the scoring unit 503 is specifically configured to:
setting a first weight for the first occurrence number;
setting a second weight for the second occurrence number;
setting a third weight for the third occurrence number;
multiplying each occurrence number by its corresponding weight and summing the products to obtain the weighted sum of the simultaneous occurrence numbers of the first target and the second target;
and taking the weighted sum as a peer score between the first target and the second target.
In a possible example, the peer determination apparatus further includes an affinity determining unit 505, configured to determine that the affinity between the first target and the second target is greater than a second preset threshold, and specifically to:
acquire the target video images in which the first target and the second target are recognized simultaneously as affinity target video images;
calculate the image-plane distance between the first target and the second target in the affinity target video images, and determine a first affinity between the two targets according to that distance;
calculate a second affinity of the first target and the second target according to their behaviors in the affinity target video images, the behaviors including hugging, holding hands, talking face to face, and the like;
and sum the first affinity and the second affinity to obtain the affinity between the first target and the second target, and determine that this affinity is greater than the second preset threshold.
It can be understood that the functions of each program module of the peer determination device in this embodiment may be implemented according to the method in the foregoing method embodiment; for the specific implementation process, reference may be made to the related description of the foregoing method embodiment, which is not repeated here.
The present application further provides a computer storage medium, where the computer storage medium may store a program, and the program, when executed, performs some or all of the steps of any one of the peer determination methods described in the above method embodiments.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, an apparatus (device), or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein. A computer program stored or distributed on a suitable medium, supplied together with or as part of other hardware, may also take other distributed forms, such as via the Internet or other wired or wireless telecommunication systems.
Although the present application has been described in conjunction with specific features and embodiments thereof, it will be evident that various modifications and combinations can be made thereto without departing from the spirit and scope of the application. Accordingly, the specification and figures are merely exemplary of the present application as defined in the appended claims and are intended to cover any and all modifications, variations, combinations, or equivalents within the scope of the present application. It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A peer determination method, the method comprising:
acquiring a target video image set of a specified time period;
determining the number of simultaneous occurrences of a first target and a second target in the target video image set;
determining a peer score between the first target and the second target according to the number of simultaneous occurrences of the first target and the second target;
and when the peer score is greater than a first preset threshold, confirming that the first target and the second target are peers.
2. The method of claim 1, wherein the acquiring a target video image set comprises:
acquiring the video sets shot by a plurality of cameras in a specified area during the specified time period, wherein each video set carries the identification of the camera that shot it and corresponding time identifications;
performing video analysis on each of the video sets to obtain a plurality of video images, wherein each video image carries the camera identification and a time point identification;
and performing target recognition on each of the video images, the video images in which a target is recognized forming the target video image set.
3. The method of claim 2, wherein determining the number of times a first target and a second target in the target video image set occur simultaneously comprises:
acquiring, as a first occurrence number, the number of times the first target and the second target are recognized simultaneously in a single video image of the target video image set;
acquiring, as a second occurrence number, the number of times the first target and the second target are recognized in different video images that carry the same camera identification and whose time point identifications differ by less than a first preset time threshold;
acquiring, as a third occurrence number, the number of times the first target and the second target are recognized in different video images that carry the same time point identification and whose camera identifications correspond to cameras spaced less than a first preset distance threshold apart;
the first occurrence number, the second occurrence number and the third occurrence number together constituting the number of times the first target and the second target appear simultaneously in the target video image set.
4. The method of claim 3, wherein determining a peer score between the first target and the second target according to the number of times the first target and the second target occur simultaneously comprises:
setting a first weight for the first occurrence number;
setting a second weight for the second occurrence number;
setting a third weight for the third occurrence number;
multiplying each occurrence number by its corresponding weight and summing the products to obtain the weighted sum of the simultaneous occurrence numbers of the first target and the second target;
and taking the weighted sum as a peer score between the first target and the second target.
5. The method of any one of claims 1-4, wherein after determining that the peer score is greater than a first preset threshold, the method further comprises determining that the affinity between the first target and the second target is greater than a second preset threshold, which specifically comprises:
acquiring the target video images in which the first target and the second target are recognized simultaneously as affinity target video images;
calculating the image-plane distance between the first target and the second target in the affinity target video images, and determining a first affinity between the two targets according to that distance;
calculating a second affinity of the first target and the second target according to their behaviors in the affinity target video images, the behaviors including hugging, holding hands, talking face to face, and the like;
and summing the first affinity and the second affinity to obtain the affinity between the first target and the second target, and determining that this affinity is greater than the second preset threshold.
6. A peer determination apparatus, comprising:
the acquisition unit is used for acquiring a target video image set;
the determining unit is used for determining the number of times a first target and a second target appear simultaneously in the target video image set;
the scoring unit is used for determining a peer score between the first target and the second target according to the number of simultaneous occurrences of the first target and the second target;
and the judging unit is used for confirming that the first target and the second target are peers when the peer score is greater than a first preset threshold.
7. The peer determination device according to claim 6, wherein the acquisition unit is specifically configured to:
acquiring the video sets shot by a plurality of cameras in a specified area during a specified time period, wherein each video set carries the identification of the camera that shot it and corresponding time identifications;
performing video analysis on each of the video sets to obtain a plurality of video images, wherein each video image carries the camera identification and a time point identification;
and performing target recognition on each of the video images, the video images in which a target is recognized forming the target video image set.
8. The apparatus according to claim 7, wherein the determining unit is specifically configured to:
acquiring, as a first occurrence number, the number of times the first target and the second target are recognized simultaneously in a single video image of the target video image set;
acquiring, as a second occurrence number, the number of times the first target and the second target are recognized in different video images that carry the same camera identification and whose time point identifications differ by less than a first preset time threshold;
acquiring, as a third occurrence number, the number of times the first target and the second target are recognized in different video images that carry the same time point identification and whose camera identifications correspond to cameras spaced less than a first preset distance threshold apart;
the first occurrence number, the second occurrence number and the third occurrence number together constitute the number of times the first target and the second target appear simultaneously in the target video image set.
9. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-5.
10. A computer-readable storage medium, characterized in that it stores a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1-5.
CN201811632531.7A 2018-12-28 2018-12-28 Method for judging peer and related products Active CN111385527B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811632531.7A CN111385527B (en) 2018-12-28 2018-12-28 Method for judging peer and related products

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811632531.7A CN111385527B (en) 2018-12-28 2018-12-28 Method for judging peer and related products

Publications (2)

Publication Number Publication Date
CN111385527A true CN111385527A (en) 2020-07-07
CN111385527B CN111385527B (en) 2021-09-14

Family

ID=71220499

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811632531.7A Active CN111385527B (en) 2018-12-28 2018-12-28 Method for judging peer and related products

Country Status (1)

Country Link
CN (1) CN111385527B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102043816A (en) * 2009-10-12 2011-05-04 腾讯科技(深圳)有限公司 Method and apparatus for presenting character relation
CN102855269A (en) * 2011-06-13 2013-01-02 索尼公司 Content extracting device, content extracting method and program
CN103365876A (en) * 2012-03-29 2013-10-23 北京百度网讯科技有限公司 Method and device for generating network operation auxiliary information based on relation maps
US20160295833A1 (en) * 2015-04-09 2016-10-13 Jonathan O. Baize Herd Control Method and System
CN105183758A (en) * 2015-07-22 2015-12-23 深圳市万姓宗祠网络科技股份有限公司 Content recognition method for continuously recorded video or image
CN105005777A (en) * 2015-07-30 2015-10-28 科大讯飞股份有限公司 Face-based audio and video recommendation method and face-based audio and video recommendation system
CN105979287A (en) * 2016-05-31 2016-09-28 无锡天脉聚源传媒科技有限公司 Method and device used for extracting and counting program key words
CN106504162A (en) * 2016-10-14 2017-03-15 北京锐安科技有限公司 Same pedestrian's association analysis method and device based on station MAC scan datas
EP3399464A1 (en) * 2017-05-03 2018-11-07 Accenture Global Solutions Limited Target object color analysis and tagging
CN108229335A (en) * 2017-12-12 2018-06-29 深圳市商汤科技有限公司 It is associated with face identification method and device, electronic equipment, storage medium, program
CN110569720A (en) * 2019-07-31 2019-12-13 安徽四创电子股份有限公司 audio and video intelligent identification processing method based on audio and video processing system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI Youjiao et al.: "A Survey of Person Re-identification Technology", Acta Automatica Sinica *
GENG Zheng: "A Discussion of Intelligent Video Analysis Technology", China Security & Protection *

Also Published As

Publication number Publication date
CN111385527B (en) 2021-09-14

Similar Documents

Publication Publication Date Title
CN106203458B (en) Crowd video analysis method and system
CN109284733B (en) Shopping guide negative behavior monitoring method based on yolo and multitask convolutional neural network
CN110659397B (en) Behavior detection method and device, electronic equipment and storage medium
CN110717358B (en) Visitor number counting method and device, electronic equipment and storage medium
CN110427859A (en) A kind of method for detecting human face, device, electronic equipment and storage medium
CN109063977B (en) Non-inductive transaction risk monitoring method and device
CN108229262B (en) Pornographic video detection method and device
CN105809178A (en) Population analyzing method based on human face attribute and device
CN110705383A (en) Smoking behavior detection method and device, terminal and readable storage medium
CN111563396A (en) Method and device for online identifying abnormal behavior, electronic equipment and readable storage medium
CN111241883B (en) Method and device for preventing cheating of remote tested personnel
CN110047513B (en) Video monitoring method and device, electronic equipment and storage medium
CN110807117B (en) User relation prediction method and device and computer readable storage medium
CN111382627B (en) Method for judging peer and related products
CN114049378A (en) Queuing analysis method and device
CN113343913A (en) Target determination method, target determination device, storage medium and computer equipment
CN110175553B (en) Method and device for establishing feature library based on gait recognition and face recognition
CN111385527B (en) Method for judging peer and related products
CN114463779A (en) Smoking identification method, device, equipment and storage medium
CN112651366A (en) Method and device for processing number of people in passenger flow, electronic equipment and storage medium
WO2023273132A1 (en) Behavior detection method and apparatus, computer device, storage medium, and program
CN113592427A (en) Method and apparatus for counting man-hours and computer readable storage medium
CN110764676B (en) Information resource display method and device, electronic equipment and storage medium
CN110944290B (en) Companion relationship analysis method and apparatus
CN110751034B (en) Pedestrian behavior recognition method and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant