CN110719398A - Face snapshot object determination method and device - Google Patents

Face snapshot object determination method and device

Info

Publication number
CN110719398A
CN110719398A
Authority
CN
China
Prior art keywords
face
target
determining
quality score
snapshot object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810760917.XA
Other languages
Chinese (zh)
Other versions
CN110719398B (en)
Inventor
刘建伦
莫耀奎
刘常积
王大朋
眭旭敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinan Yushi Intelligent Technology Co.,Ltd.
Original Assignee
Zhejiang Uniview Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Uniview Technologies Co Ltd filed Critical Zhejiang Uniview Technologies Co Ltd
Priority to CN201810760917.XA priority Critical patent/CN110719398B/en
Publication of CN110719398A publication Critical patent/CN110719398A/en
Application granted granted Critical
Publication of CN110719398B publication Critical patent/CN110719398B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention provides a method and a device for determining a face snapshot object, relating to the technical field of face analysis. The method extracts consecutive first and second video segments; acquires a first target and first face feature data of the first target, and a second target and second face feature data of the second target; predicts a first motion trajectory of the first target and determines a second motion trajectory of the second target; and then determines the face snapshot object according to the first motion trajectory, the second motion trajectory, the first face feature data and the second face feature data. Because the face snapshot object finally determined in this way is the best face across the consecutive segments, the duplicate-capture rate during high-efficiency capture is reduced, the accuracy of subsequent face comparison is improved, and storage space for face snapshot pictures is saved.

Description

Face snapshot object determination method and device
Technical Field
The invention relates to the technical field of face analysis, in particular to a method and a device for determining a face snapshot object.
Background
With the continuous optimization of deep learning algorithms and improvements in Graphics Processing Unit (GPU) computing power, the demand for face recognition applications has grown increasingly urgent. Application is widest in the public security industry, in two forms: one is real-time face recognition alerting, in which deployed face snapshot cameras capture faces in real time for blacklist deployment-and-control early warning; the other is post-incident face capture and comparison on recorded video, extracting case-related information to assist investigation. For the latter video analysis mode, the speed requirement generally demands high-efficiency capture.
In the prior art, high-efficiency capture is achieved by video slicing: the video is sliced into a number of small segments of equal duration, and each segment is distributed to a different analysis algorithm unit for parallel analysis, achieving a multiple-speed effect. However, this analysis mode inevitably produces duplicate captures of the same target across two consecutive slices, and the more slices there are, the more severe the duplication. Duplicate pictures of the same face cannot be optimally selected, which increases the performance cost and reduces the accuracy of subsequent face comparison; they also waste storage space and yield low economic benefit.
Disclosure of Invention
In view of the above, the present invention provides a method and an apparatus for determining a face snapshot object to solve the above problem.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
in a first aspect, an embodiment of the present invention provides a method for determining a face snapshot object, where the method for determining a face snapshot object includes:
extracting continuous first video clips and second video clips from a plurality of video clips obtained by dividing a video to be analyzed according to preset time intervals;
acquiring a first target contained in a last frame in the first video clip, first face feature data of the first target, a second target contained in a first frame in the second video clip, and second face feature data of the second target;
predicting a first motion trajectory of the first target within the second video segment;
determining a second motion trajectory of the second target within the second video segment;
and determining a face snapshot object according to the first motion track, the second motion track, the first face feature data and the second face feature data.
In a second aspect, an embodiment of the present invention further provides a face snapshot object determining apparatus, where the face snapshot object determining apparatus includes:
the video clip extraction unit is used for extracting continuous first video clips and second video clips from a plurality of video clips obtained by dividing a video to be analyzed according to preset time intervals in advance;
a face parameter obtaining unit, configured to obtain a first target included in a last frame in the first video segment, first face feature data of the first target, and a second target included in a first frame in the second video segment, and second face feature data of the second target;
a motion trajectory prediction unit for predicting a first motion trajectory of the first target within the second video segment;
a motion trajectory determination unit for determining a second motion trajectory of the second target within the second video segment;
and the face snapshot object determining unit is used for determining a face snapshot object according to the first motion track, the second motion track, the first face characteristic data and the second face characteristic data.
The method and the device for determining a face snapshot object provided by the embodiments of the invention extract consecutive first and second video segments from a plurality of video segments obtained by dividing a video to be analyzed at preset time intervals in advance; acquire a first target contained in the last frame of the first video segment together with its first face feature data, and a second target contained in the first frame of the second video segment together with its second face feature data; predict a first motion trajectory of the first target within the second video segment and determine a second motion trajectory of the second target within the second video segment; and determine the face snapshot object according to the first motion trajectory, the second motion trajectory, the first face feature data and the second face feature data. Because the face snapshot object finally determined in this way is the best face across the consecutive segments, the duplicate-capture rate during high-efficiency capture is reduced, the accuracy of subsequent face comparison is improved, and storage space for face snapshot pictures is saved.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and should therefore not be considered as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 shows a block diagram of a server provided by an embodiment of the present invention.
Fig. 2 shows a flowchart of a face snapshot object determination method according to an embodiment of the present invention.
Fig. 3 shows a detailed flowchart of step S205 in fig. 2.
Fig. 4 is a functional block diagram of a face snapshot object determination apparatus according to an embodiment of the present invention.
Icon: 100-a server; 111-a memory; 112-a processor; 113-a communication unit; 200-face snapshot object determination means; 210-a video segment extraction unit; 220-a face parameter acquisition unit; 230-a motion trajectory prediction unit; 240-a motion trajectory determination unit; 250-a face snapshot object determination unit.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It is noted that relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Fig. 1 is a block diagram of a server 100. The server 100 includes a face-snap object determination apparatus 200, a memory 111, a processor 112, and a communication unit 113.
The memory 111, the processor 112 and the communication unit 113 are electrically connected to one another, directly or indirectly, to realize data transmission or interaction; for example, these components may be electrically connected via one or more communication buses or signal lines. The face snapshot object determination apparatus 200 includes software functional modules that can be stored in the memory 111 in the form of software or firmware, or solidified in the Operating System (OS) of the server 100. The processor 112 is configured to execute the executable modules stored in the memory 111, such as the software functional modules and computer programs included in the face snapshot object determination apparatus 200.
The memory 111 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 111 is used to store programs or data. The communication unit 113 is configured to establish a communication connection between the server 100 and another communication terminal via the network, and to transmit and receive data via the network.
It should be understood that the configuration shown in fig. 1 is merely schematic; the server 100 may include more or fewer components than shown in fig. 1, or have a different configuration from that shown in fig. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
First embodiment
The embodiment of the invention provides a face snapshot object determination method, which is used to reduce the duplicate-capture rate and determine the face snapshot object while capturing faces from a video at high efficiency, thereby improving the accuracy of subsequent face comparison. Fig. 2 is a flowchart of the face snapshot object determination method according to an embodiment of the present invention. The face snapshot object determination method comprises the following steps:
step S201: and extracting continuous first video clips and second video clips from a plurality of video clips obtained by dividing a video to be analyzed according to a preset time interval.
The video to be analyzed is divided into a plurality of video segments at the preset time interval, and each video segment is distributed to a different analysis unit for parallel analysis, meeting the requirement of multiple-speed face snapshot analysis of the video to be analyzed.
Meanwhile, consecutive first and second video segments are extracted from the plurality of video segments, in order to further determine whether the two segments contain the same face target.
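The patent does not prescribe how the slicing itself is implemented. As a minimal illustration only, the following Python sketch (assuming OpenCV is available; the helper names slice_video and consecutive_pairs and the 10-second segment length are hypothetical choices, not part of the patent) splits a video into equal-duration segments and pairs up adjacent segments for the boundary analysis described above.

```python
# Illustrative sketch only: segment length and helper names are assumptions.
import cv2

def slice_video(path, segment_seconds=10):
    """Split a video into consecutive equal-duration segments.

    Returns a list of (start_frame, end_frame) inclusive index pairs, one per
    segment, which can then be dispatched to parallel analysis units.
    """
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    cap.release()
    frames_per_segment = max(1, int(fps * segment_seconds))
    return [(start, min(start + frames_per_segment, total) - 1)
            for start in range(0, total, frames_per_segment)]

def consecutive_pairs(segments):
    """Yield each adjacent (first_segment, second_segment) pair for boundary checks."""
    yield from zip(segments, segments[1:])
```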
Step S202: And acquiring a first target contained in the last frame of the first video segment and first face feature data of the first target, and a second target contained in the first frame of the second video segment and second face feature data of the second target.
Specifically, a preset face detection method can be used to extract a first target from the last frame of the first video segment and a second target from the first frame of the second video segment.
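As a rough sketch of this step, the snippet below grabs the relevant boundary frame and runs a face detector on it. The patent does not name the preset face detection method, so the use of OpenCV's bundled Haar cascade here is purely an assumption for illustration.

```python
# Hypothetical stand-in for the patent's unspecified "preset face detection method".
import cv2

def detect_faces_in_frame(video_path, frame_index):
    """Return face bounding boxes (x, y, w, h) found in a single frame."""
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, frame_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return []
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    return list(cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5))
```

The first target would then be taken from the last frame of the first segment (its end_frame index) and the second target from the first frame of the second segment (its start_frame index).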
It should be noted that the first facial feature data includes a first optimal feature value of the first target in the first video segment, and a first quality score corresponding to the first optimal feature value; the second face feature data comprises a second optimal feature value of the second target in the second video segment and a second quality score corresponding to the second optimal feature value.
Specifically, a first optimal feature value and a first quality score of the first target can be determined through a preset face feature extraction algorithm; a second optimal feature value and a second quality score of the second target can be determined through the same algorithm.
The first optimal feature value is used to represent the first target, and the second optimal feature value is used to represent the second target. The quality score is a parameter that characterizes how good a face picture is, based on parameters such as the angle and sharpness of the target face. The first quality score characterizes the quality of the first optimal feature value; the second quality score characterizes the quality of the second optimal feature value.
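The patent gives no formula for the quality score. The sketch below is one plausible reading in which the score combines face pose and image sharpness; the weights, the Laplacian-variance sharpness measure and the 500.0 normalization scale are all assumptions for illustration.

```python
# Hypothetical quality score: weights and normalization are assumptions.
import cv2

def quality_score(face_img, yaw_deg, pitch_deg, w_sharp=0.6, w_pose=0.4):
    """Score a BGR face crop in [0, 1]; higher means clearer and more frontal."""
    gray = cv2.cvtColor(face_img, cv2.COLOR_BGR2GRAY)
    # Sharpness ("definition"): variance of the Laplacian, squashed to [0, 1].
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    sharp_term = min(sharpness / 500.0, 1.0)
    # Pose (face angle): frontal faces with small yaw/pitch score higher.
    pose_term = max(0.0, 1.0 - (abs(yaw_deg) + abs(pitch_deg)) / 90.0)
    return w_sharp * sharp_term + w_pose * pose_term
```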
In this embodiment, P_n denotes the first target, f_Pn denotes the first optimal feature value, and F_n denotes the first quality score; P'_n denotes the second target, f_P'n denotes the second optimal feature value, and F'_n denotes the second quality score.
Step S203: a first motion trajectory of the first object within the second video segment is predicted.
Specifically, the server 100 may predict a first motion trajectory of the first target in the second video segment by analyzing a motion speed and a motion direction of the first target in the first video segment and then using a preset trajectory prediction algorithm.
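The preset trajectory prediction algorithm is likewise left unspecified. A minimal sketch under a constant-velocity assumption might look as follows; predict_positions and its parameters are hypothetical.

```python
# Constant-velocity extrapolation: one simple stand-in for the unspecified
# "preset trajectory prediction algorithm".
def predict_positions(last_pos, velocity, times):
    """Extrapolate (x, y) positions of the first target at future times.

    last_pos -- (x, y) in the last frame of the first segment
    velocity -- (vx, vy) estimated from the first segment, in units per second
    times    -- time offsets (seconds) into the second segment, e.g. t1, t2, t3
    """
    x0, y0 = last_pos
    vx, vy = velocity
    return [(x0 + vx * t, y0 + vy * t) for t in times]

# e.g. predicted coordinates W_t1, W_t2, W_t3 at t = 0.2 s, 0.4 s, 0.6 s:
# predict_positions((320, 240), (15.0, -4.0), [0.2, 0.4, 0.6])
```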
Step S204: a second motion trajectory of a second object within a second video segment is determined.
Specifically, the server 100 may determine a second motion trajectory of the second object within the second video segment by analyzing the motion speed and the motion direction of the second object in the second video segment.
Step S205: and determining a face snapshot object according to the first motion track, the second motion track, the first face characteristic data and the second face characteristic data.
Please refer to fig. 3, which is a flowchart illustrating the step S205. The step S205 includes:
substep S2051: and determining a first position vector and a second position vector respectively corresponding to the three groups of first targets and the three groups of second targets at the same moment according to the first motion track and the second motion track.
Namely, setting the first motion trajectory as P_nt(point, time), the position coordinates of the first target at several consecutive moments within the second video segment are predicted from the first motion trajectory.
For example: predict the position coordinates of the first target at the three moments t_1, t_2 and t_3, denoted W_t1(point, time), W_t2(point, time) and W_t3(point, time) respectively, so as to obtain the position vectors V_t1, V_t2 and V_t3 at the three moments.
Likewise, setting the second motion trajectory as P'_nt(point, time), acquire the position coordinates of the second target at the same three moments t_1, t_2 and t_3, denoted W'_t1(point, time), W'_t2(point, time) and W'_t3(point, time); the position vectors V'_t1, V'_t2 and V'_t3 at the three moments are obtained in the same way.
substep S2052: judging whether the similarity of each group of the first position vectors and the second position vectors is greater than or equal to a preset first similarity threshold, if so, executing a substep S2053; if not, then sub-step S2057 is performed.
First, the similarity between the first position vector and the second position vector of each group is calculated. Understandably, if the included angles between the two vectors corresponding to the three moments t_1, t_2 and t_3 are θ_1, θ_2 and θ_3 respectively, then the similarity of each group of first and second position vectors is equal to the cosine of its included angle, with the specific formulas:

C_1 = cos θ_1 = (V_t1 · V'_t1) / (|V_t1| |V'_t1|)
C_2 = cos θ_2 = (V_t2 · V'_t2) / (|V_t2| |V'_t2|)
C_3 = cos θ_3 = (V_t3 · V'_t3) / (|V_t3| |V'_t3|)
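In code, the per-group cosine similarity and the C_i ≥ C_0 check of sub-step S2052 can be transcribed directly. This sketch uses NumPy; the small eps tolerance for the C_0 = 1 case is an implementation assumption, since exact floating-point equality to 1 is brittle.

```python
# Cosine similarity of position vectors, as in the formulas above.
import numpy as np

def cosine_similarity(v1, v2):
    """C = cos(theta) between two vectors."""
    v1, v2 = np.asarray(v1, dtype=float), np.asarray(v2, dtype=float)
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))

def trajectories_match(predicted, observed, c0=1.0, eps=1e-6):
    """True if every group satisfies C_i >= C_0 (sub-step S2052).

    predicted, observed -- position vectors V_ti and V'_ti at t1, t2, t3
    c0                  -- first similarity threshold (1 means collinear)
    eps                 -- tolerance, an implementation assumption
    """
    return all(cosine_similarity(p, o) >= c0 - eps
               for p, o in zip(predicted, observed))
```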
By judging whether C_1, C_2 and C_3 are all greater than or equal to a preset first similarity threshold C_0, it can be judged whether the three position coordinates predicted from the first motion trajectory all lie on the second motion trajectory of the second target, and it can thereby be preliminarily judged whether the first target and the second target are the same target.
It should be noted that, in a preferred embodiment, the first similarity threshold C_0 is the cosine of an included angle equal to 0, i.e. C_0 = 1. In other embodiments, the first similarity threshold C_0 may instead be a value approaching 1 arbitrarily closely.
Substep S2053: judging whether the similarity between the first optimal characteristic value and the second optimal characteristic value is greater than or equal to a preset second similarity threshold, if so, executing a substep S2054; if not, then sub-step S2057 is performed.
It will be appreciated that when the included angle between two vectors is 0 degrees, the two vectors are collinear. When all three groups of vectors are collinear, i.e. C_1 ≥ C_0, C_2 ≥ C_0 and C_3 ≥ C_0, the three position coordinates predicted from the first motion trajectory all lie on the second motion trajectory of the second target, and the first target and the second target are likely to be the same face target; it is therefore necessary to further judge, by means of the first optimal feature value and the second optimal feature value, whether they are the same target.
The similarity S_n between the first optimal feature value and the second optimal feature value is then calculated from the two feature values. By comparing the similarity S_n with the preset second similarity threshold N_0, it can be further determined whether the first target and the second target are the same target.
Substep S2054: judging whether the first quality score is greater than or equal to the second quality score, if so, executing a substep S2055; if not, then sub-step S2056 is performed.
When S_n ≥ N_0, the similarity S_n between the first optimal feature value and the second optimal feature value is large, and the first target and the second target can be judged to be the same face target. At this point, in order to reduce the duplicate-capture rate and improve accuracy in the subsequent recognition and analysis process, the better-quality one of the first target and the second target needs to be selected as the face snapshot object. The first quality score is therefore compared with the second quality score.
Substep S2055: the first target is determined as a face capture object.
When the first quality score is greater than or equal to the second quality score, i.e. F_n ≥ F'_n, the quality score of the first optimal feature value is at least as high as that of the second optimal feature value, meaning the first target is clearer and easier to recognize, so the first target is determined as the face snapshot object.
Substep S2056: and determining the second target as a face snapshot object.
When the first quality score is smaller than the second quality score, i.e. F_n < F'_n, the quality score of the first optimal feature value is lower than that of the second optimal feature value, meaning the second target is clearer and easier to recognize, so the second target is determined as the face snapshot object.
Substep S2057: and determining the first target and the second target as the face snapshot objects.
When the similarity of any group of first and second position vectors is smaller than the preset first similarity threshold, the three position coordinates predicted from the first motion trajectory do not all lie on the second motion trajectory of the second target, so the first target and the second target are not considered to be the same face target, and both the first target and the second target are determined as face snapshot objects.
In addition, when the similarity of each group of first and second position vectors is greater than or equal to the preset first similarity threshold but the similarity between the first optimal feature value and the second optimal feature value is smaller than the preset second similarity threshold, it is likewise determined that the first target and the second target are not the same face target, and both are determined as face snapshot objects.
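Putting sub-steps S2052 through S2057 together, an end-to-end sketch of the decision logic might read as follows. It reuses cosine_similarity and trajectories_match from the earlier sketch, assumes the feature similarity S_n is a cosine similarity between the two optimal feature values (the patent does not reproduce its formula), and the threshold n0=0.9 is an arbitrary placeholder rather than a value from the patent.

```python
# Hypothetical end-to-end decision logic for sub-steps S2052-S2057.
def choose_snapshot_objects(pred_vecs, obs_vecs, feat1, feat2,
                            score1, score2, c0=1.0, n0=0.9):
    """Return which targets to keep as face snapshot objects.

    pred_vecs/obs_vecs -- position vectors V_ti and V'_ti at t1, t2, t3
    feat1/feat2        -- first/second optimal feature values
    score1/score2      -- first/second quality scores F_n, F'_n
    """
    # S2052: position-vector similarity check; on failure, keep both (S2057).
    if not trajectories_match(pred_vecs, obs_vecs, c0):
        return ["first", "second"]
    # S2053: feature similarity S_n against N_0; on failure, keep both (S2057).
    if cosine_similarity(feat1, feat2) < n0:
        return ["first", "second"]
    # S2054-S2056: same face target; keep only the better-quality capture.
    return ["first"] if score1 >= score2 else ["second"]
```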
Second embodiment
Referring to fig. 4, fig. 4 is a functional block diagram of a face snapshot object determination apparatus 200 according to a preferred embodiment of the present invention. It should be noted that the basic principle and the generated technical effect of the face snapshot object determination apparatus 200 provided in the present embodiment are the same as those of the above embodiments, and for the sake of brief description, no part of the present embodiment is mentioned, and reference may be made to the corresponding contents in the above embodiments. The face capture object determination apparatus 200 includes: a video segment extracting unit 210, a face parameter acquiring unit 220, a motion trail predicting unit 230, a motion trail determining unit 240, and a face snap-shot object determining unit 250.
The video segment extracting unit 210 is configured to extract a first video segment and a second video segment that are consecutive from a plurality of video segments obtained by dividing a video to be analyzed according to a preset time interval.
It is to be understood that, in a preferred embodiment, the video segment extracting unit 210 is configured to execute step S201.
The face parameter obtaining unit 220 is configured to obtain a first target contained in the last frame of the first video segment and first face feature data of the first target, and a second target contained in the first frame of the second video segment and second face feature data of the second target.
It is to be understood that, in a preferred embodiment, the face parameter obtaining unit 220 is configured to perform step S202.
The motion trajectory prediction unit 230 is configured to predict a first motion trajectory of the first object within the second video segment.
It is to be understood that, in a preferred embodiment, the motion trajectory prediction unit 230 is configured to perform step S203.
The motion trajectory determination unit 240 is configured to determine a second motion trajectory of the second object within the second video segment.
It is to be understood that, in a preferred embodiment, the motion trajectory determination unit 240 may be configured to perform step S204.
The face snapshot object determining unit 250 is configured to determine a face snapshot object according to the first motion trajectory, the second motion trajectory, the first face feature data, and the second face feature data.
Specifically, the face snapshot object determining unit 250 is configured to determine, according to the first motion trajectory and the second motion trajectory, three groups of first and second position vectors, each group corresponding to the first target and the second target at the same moment.
The face snapshot object determining unit 250 is further configured to judge whether the similarity of each group of first and second position vectors is greater than or equal to a preset first similarity threshold, and to determine the face snapshot object according to the first face feature data and the second face feature data when it is; and to determine both the first target and the second target as face snapshot objects when the similarity of any group of first and second position vectors is smaller than the preset first similarity threshold.
The face snapshot object determining unit 250 is further configured to judge whether the similarity between the first optimal feature value and the second optimal feature value is greater than or equal to a preset second similarity threshold, and to determine the face snapshot object according to the first quality score and the second quality score when it is; and to determine both the first target and the second target as face snapshot objects when the similarity between the first optimal feature value and the second optimal feature value is smaller than the preset second similarity threshold.
The face snapshot object determining unit 250 is further configured to judge whether the first quality score is greater than or equal to the second quality score, to determine the first target as the face snapshot object when the first quality score is greater than or equal to the second quality score, and to determine the second target as the face snapshot object when the first quality score is smaller than the second quality score.
It is to be understood that in a preferred embodiment, the face-snap object determination unit 250 is operable to perform step S205, sub-step S2051, sub-step S2052, sub-step S2053, sub-step S2054, sub-step S2055, sub-step S2056 and sub-step S2057.
In summary, the method and the device for determining a face snapshot object provided by the embodiments of the invention extract consecutive first and second video segments from a plurality of video segments obtained by dividing a video to be analyzed at preset time intervals; acquire a first target contained in the last frame of the first video segment together with its first face feature data, and a second target contained in the first frame of the second video segment together with its second face feature data; predict a first motion trajectory of the first target within the second video segment and determine a second motion trajectory of the second target within the second video segment; and determine the face snapshot object according to the first motion trajectory, the second motion trajectory, the first face feature data and the second face feature data. Because the face snapshot object finally determined in this way is the best face across the consecutive segments, the duplicate-capture rate during high-efficiency capture is reduced, the accuracy of subsequent face comparison is improved, and storage space for face snapshot pictures is saved.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A face snapshot object determination method is characterized by comprising the following steps:
extracting continuous first video clips and second video clips from a plurality of video clips obtained by dividing a video to be analyzed according to preset time intervals;
acquiring a first target contained in a last frame in the first video clip, first face feature data of the first target, a second target contained in a first frame in the second video clip, and second face feature data of the second target;
predicting a first motion trajectory of the first target within the second video segment;
determining a second motion trajectory of the second target within the second video segment;
and determining a face snapshot object according to the first motion track, the second motion track, the first face feature data and the second face feature data.
2. The method according to claim 1, wherein the step of determining the face capture object from the first motion trajectory, the second motion trajectory, the first face feature data, and the second face feature data includes:
determining a first position vector and a second position vector corresponding to the first target and the second target in three groups at the same moment according to the first motion track and the second motion track;
and when the similarity of each group of the first position vectors and the second position vectors is greater than or equal to a preset first similarity threshold, determining the face snapshot object according to the first face characteristic data and the second face characteristic data.
3. The method according to claim 2, wherein the first facial feature data includes a first optimal feature value of the first target in the first video segment and a first quality score corresponding to the first optimal feature value, and the second facial feature data includes a second optimal feature value of the second target in the second video segment and a second quality score corresponding to the second optimal feature value;
the step of determining the face snapshot object according to the first face feature data and the second face feature data comprises:
and when the similarity between the first optimal characteristic value and the second optimal characteristic value is greater than or equal to a preset second similarity threshold value, determining the face snapshot object according to the first quality score and the second quality score.
4. The method according to claim 3, wherein the step of determining the face-shot object according to the first quality score and the second quality score comprises:
when the first quality score is larger than or equal to the second quality score, determining the first target as the face snapshot object;
and when the first quality score is smaller than the second quality score, determining the second target as the face snapshot object.
5. The face-snap object determination method according to claim 3, characterized by further comprising:
and when the similarity between the first optimal characteristic value and the second optimal characteristic value is smaller than a preset second similarity threshold, determining the first target and the second target as the face snapshot object.
6. The face-snap object determination method according to claim 2, characterized by further comprising:
and when the similarity between any group of first position vectors and the second position vectors is smaller than a preset first similarity threshold, determining the first target and the second target as the face snapshot objects.
7. A face snapshot object determination apparatus, comprising:
the video clip extraction unit is used for extracting continuous first video clips and second video clips from a plurality of video clips obtained by dividing a video to be analyzed according to preset time intervals in advance;
a face parameter obtaining unit, configured to obtain a first target included in a last frame in the first video segment, first face feature data of the first target, and a second target included in a first frame in the second video segment, and second face feature data of the second target;
a motion trajectory prediction unit for predicting a first motion trajectory of the first target within the second video segment;
a motion trajectory determination unit for determining a second motion trajectory of the second target within the second video segment;
and the face snapshot object determining unit is used for determining a face snapshot object according to the first motion track, the second motion track, the first face characteristic data and the second face characteristic data.
8. The apparatus according to claim 7, wherein the face-capture-object determining unit is configured to determine, according to the first motion trajectory and the second motion trajectory, a first position vector and a second position vector corresponding to the first target and the second target in three groups respectively at a same time;
the face snapshot object determining unit is further configured to determine the face snapshot object according to the first face feature data and the second face feature data when the similarity between the first position vector and the second position vector in each group is greater than or equal to a preset first similarity threshold.
9. The apparatus according to claim 8, wherein the first facial feature data includes a first optimal feature value of the first target in the first video segment and a first quality score corresponding to the first optimal feature value, and the second facial feature data includes a second optimal feature value of the second target in the second video segment and a second quality score corresponding to the second optimal feature value;
the face snapshot object determination unit is used for determining the face snapshot object according to the first quality score and the second quality score when the similarity between the first optimal characteristic value and the second optimal characteristic value is larger than or equal to a preset second similarity threshold value.
10. The face snapshot object determination apparatus according to claim 9, wherein the face snapshot object determination unit is configured to determine the first target as the face snapshot object when the first quality score is greater than or equal to the second quality score;
the face snapshot object determination unit is used for determining the second target as the face snapshot object when the first quality score is smaller than the second quality score.
CN201810760917.XA 2018-07-12 2018-07-12 Face snapshot object determination method and device Active CN110719398B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810760917.XA CN110719398B (en) 2018-07-12 2018-07-12 Face snapshot object determination method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810760917.XA CN110719398B (en) 2018-07-12 2018-07-12 Face snapshot object determination method and device

Publications (2)

Publication Number Publication Date
CN110719398A true CN110719398A (en) 2020-01-21
CN110719398B CN110719398B (en) 2021-07-20

Family

ID=69208297

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810760917.XA Active CN110719398B (en) 2018-07-12 2018-07-12 Face snapshot object determination method and device

Country Status (1)

Country Link
CN (1) CN110719398B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102880864A (en) * 2012-04-28 2013-01-16 王浩 Method for snap-shooting human face from streaming media file
CN102930261A (en) * 2012-12-05 2013-02-13 上海市电力公司 Face snapshot recognition method
CN103914685A (en) * 2014-03-07 2014-07-09 北京邮电大学 Multi-target tracking method based on generalized minimum clique graph and taboo search
CN104539909A (en) * 2015-01-15 2015-04-22 安徽大学 Video monitoring method and video monitoring server
CN105208377A (en) * 2015-09-14 2015-12-30 北京奇虎科技有限公司 Image processing method and device
CN105243373A (en) * 2015-10-27 2016-01-13 北京奇虎科技有限公司 Method for filtering facial images to prevent repeated snapshot, server, intelligent monitoring device and system
CN107346426A (en) * 2017-07-10 2017-11-14 深圳市海清视讯科技有限公司 A kind of face information collection method based on video camera recognition of face
US20180004275A1 (en) * 2016-07-01 2018-01-04 Microsoft Technology Licensing, Llc Face detection circuit
CN107609497A (en) * 2017-08-31 2018-01-19 武汉世纪金桥安全技术有限公司 The real-time video face identification method and system of view-based access control model tracking technique
CN107958220A (en) * 2017-12-06 2018-04-24 杭州魔点科技有限公司 A kind of face database compression processing method and its intelligent apparatus based on recognition of face

Also Published As

Publication number Publication date
CN110719398B (en) 2021-07-20

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220331

Address after: 250101 floor 3b, building A2-5, Hanyu Jingu, high tech Zone, Jinan City, Shandong Province

Patentee after: Jinan Yushi Intelligent Technology Co.,Ltd.

Address before: 310000 1-11 / F, South Block, building 10, No. 88, Jiangling Road, Xixing street, Binjiang District, Hangzhou City, Zhejiang Province

Patentee before: ZHEJIANG UNIVIEW TECHNOLOGIES Co.,Ltd.